[
{
"msg_contents": "Being a new user of Debian 12/gcc 12.2.0, I wrote the following shell\nscript to conditionally add gmake rules with compiler flags to\nsrc/Makefile.custom to suppress warnings for certain files. This allows\nme to compile all supported Postgres releases without warnings.\n\nI actually didn't how simple it was to add per-file compile flags until\nI read:\n\n\thttps://stackoverflow.com/questions/6546162/how-to-add-different-rules-for-specific-files\n\n---------------------------------------------------------------------------\n\n# PG 14+ uses configure.ac\nif [ ! -e configure.in ] || grep -q 'AC_INIT(\\[PostgreSQL\\], \\[13\\.' configure.in\nthen cat >> src/Makefile.custom <<END\n# work around gcc -O1 bug found in PG 13-current, not -O[023], 2023-08-28\n# https://www.postgresql.org/message-id/[email protected]\n# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111240\n# gmake fix: https://stackoverflow.com/questions/6546162/how-to-add-different-rules-for-specific-files\nclauses.o: CFLAGS+=-O2\nEND\nfi\n\nif [ -e configure.in ] && grep -q 'AC_INIT(\\[PostgreSQL\\], \\[11\\.' configure.in\nthen cat >> src/Makefile.custom <<END\n# new warning in Debian 12, gcc (Debian 12.2.0-14) 12.2.0, 2023-08-14\n# Fix for valid macro using stack_base_ptr, warning only in PG 11\npostgres.o: CFLAGS+=-Wdangling-pointer=0\nEND\nfi\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 31 Aug 2023 15:25:09 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Suppressing compiler warning on Debian 12/gcc 12.2.0"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 03:25:09PM -0400, Bruce Momjian wrote:\n> Being a new user of Debian 12/gcc 12.2.0, I wrote the following shell\n> script to conditionally add gmake rules with compiler flags to\n> src/Makefile.custom to suppress warnings for certain files. This allows\n> me to compile all supported Postgres releases without warnings.\n> \n> I actually didn't how simple it was to add per-file compile flags until\n> I read:\n> \n> \thttps://stackoverflow.com/questions/6546162/how-to-add-different-rules-for-specific-files\n\nThis might be simpler for people to modify since it abstracts out the\nversion checking.\n\n---------------------------------------------------------------------------\n\n# gmake per-file options: https://stackoverflow.com/questions/6546162/how-to-add-different-rules-for-specific-files\n\nfor FILE in configure.in configure.ac\ndo if [ -e \"$FILE\" ]\n then VERSION=$(sed -n 's/^AC_INIT(\\[PostgreSQL\\], \\[\\([0-9]\\+\\).*$/\\1/p' \"$FILE\")\n break\n fi\ndone\n\n[ -z \"$VERSION\" ] && echo 'Could not find Postgres version' 1>&2 && exit 1\n\nif [ \"$VERSION\" -eq 11 ]\nthen cat >> src/Makefile.custom <<END\n\n# new warning in Debian 12, gcc (Debian 12.2.0-14) 12.2.0, 2023-08-14\n# Fix for valid macro using stack_base_ptr\npostgres.o: CFLAGS+=-Wdangling-pointer=0\nEND\nfi\n\nif [ \"$VERSION\" -ge 13 ]\nthen cat >> src/Makefile.custom <<END\n\n# work around gcc -O1 bug found in PG 13-current, not -O[023], 2023-08-28\n# https://www.postgresql.org/message-id/[email protected]\n# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111240\nclauses.o: CFLAGS+=-O2\nEND\nfi\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 11:12:37 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suppressing compiler warning on Debian 12/gcc 12.2.0"
}
]
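The two scripts in this thread can be condensed into a small runnable demo. This is a hypothetical standalone sketch, not the author's actual setup: the sample file and its AC_INIT contents are invented for the demo, but the sed extraction and the gmake target-specific CFLAGS override have the same shape as the snippets above.

```shell
#!/bin/sh
# Hypothetical demo of the two ideas in this thread:
# (1) extract the Postgres major version from an AC_INIT line with sed,
# (2) emit a per-file gmake rule (target-specific variable override).
# The sample file and version number are invented for this demo.
demo=$(mktemp)
printf 'AC_INIT([PostgreSQL], [13.12])\n' > "$demo"

VERSION=$(sed -n 's/^AC_INIT(\[PostgreSQL\], \[\([0-9][0-9]*\).*$/\1/p' "$demo")
echo "major version: $VERSION"

# Same shape as the Makefile.custom snippet from the thread.
if [ "$VERSION" -ge 13 ]; then
    cat <<END
clauses.o: CFLAGS+=-O2
END
fi
rm -f "$demo"
```

The `[0-9][0-9]*` spelling is the portable BRE equivalent of the GNU `[0-9]\+` used in the thread.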
[
{
"msg_contents": "Hi,\n\nOften we make changes in the pg_hba.conf and leave a #comment there, \njust in case we forget why the change was done. To avoid having to open \nthe configuration file every time just to check the comments, it would \nbe quite nice to have the option to read these comments in the \npg_hba_file_rules view. Something like adding it in the end of the line \nand wrapping it with characters like \"\", '', {}, [], etc\n\nFor instance, this pg_hba.conf ...\n\n# TYPE DATABASE USER ADDRESS METHOD\nlocal all all trust [foo]\nhost all all 127.0.0.1/32 trust\nhost all all ::1/128 trust [bar]\nlocal replication all trust\nhost replication all 127.0.0.1/32 trust\nhostssl replication all ::1/128 cert map=abc [this will \nfail :)]\n\n... could be displayed like this\n\npostgres=# SELECT type, database, user_name, address, comment, error \nFROM pg_hba_file_rules ;\n type | database | user_name | address | comment | error\n---------+---------------+-----------+-----------+-------------------+-----------------------------------------------------\n local | {all} | {all} | | foo |\n host | {all} | {all} | 127.0.0.1 | |\n host | {all} | {all} | ::1 | bar |\n local | {replication} | {all} | | |\n host | {replication} | {all} | 127.0.0.1 | |\n hostssl | {replication} | {all} | ::1 | this will fail :) | \nhostssl record cannot match because SSL is disabled\n(6 rows)\n\nI wrote a very quick&dirty PoC (attached) but before going any further I \nwould like to ask if there is a better way to read these comments using \nSQL - or if it makes sense at all ;-)\n\nAny feedback is much appreciated. Thanks!\n\nJim",
"msg_date": "Fri, 1 Sep 2023 00:01:37 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Show inline comments from pg_hba lines in the pg_hba_file_rules view"
},
{
"msg_contents": "On Fri, Sep 01, 2023 at 12:01:37AM +0200, Jim Jones wrote:\n> Often we make changes in the pg_hba.conf and leave a #comment there, just in\n> case we forget why the change was done. To avoid having to open the\n> configuration file every time just to check the comments, it would be quite\n> nice to have the option to read these comments in the pg_hba_file_rules\n> view. Something like adding it in the end of the line and wrapping it with\n> characters like \"\", '', {}, [], etc\n> \n> For instance, this pg_hba.conf ...\n> \n> # TYPE DATABASE USER ADDRESS METHOD\n> local all all trust [foo]\n> host all all 127.0.0.1/32 trust\n> host all all ::1/128 trust [bar]\n> local replication all trust\n> host replication all 127.0.0.1/32 trust\n> hostssl replication all ::1/128 cert map=abc [this will fail\n> :)]\n> \n> ... could be displayed like this\n\nhba.c is complex enough these days (inclusion logic, tokenization of\nthe items) that I am not in favor of touching its code paths for\nanything like that. This is not something that can apply only to\npg_hba.conf, but to all configuration files. And this touches in\nadding support for a second type of comment format. This is one of\nthese areas where we may want a smarter version of pg_read_file that\nreturns a SRF for (line_number, line_contents) of a file read? Note\nthat it is possible to add comments at the end of a HBA entry already,\nlike:\nlocal all all trust # My comment, and this is a correct HBA entry.\n--\nMichael",
"msg_date": "Fri, 1 Sep 2023 10:18:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show inline comments from pg_hba lines in the pg_hba_file_rules\n view"
},
{
"msg_contents": "Hi Michael\n\nOn 01.09.23 03:18, Michael Paquier wrote:\n> hba.c is complex enough these days (inclusion logic, tokenization of\n> the items) that I am not in favor of touching its code paths for\n> anything like that. This is not something that can apply only to\n> pg_hba.conf, but to all configuration files.\nIt is indeed possible to extrapolate it to any configuration file, but \nmy point was rather to visualize comments purposefully left by the DBA \nregarding user access (pg_hba and pg_ident).\n> And this touches in\n> adding support for a second type of comment format. This is one of\n> these areas where we may want a smarter version of pg_read_file that\n> returns a SRF for (line_number, line_contents) of a file read? Note\n> that it is possible to add comments at the end of a HBA entry already,\n> like:\n> local all all trust # My comment, and this is a correct HBA entry.\n\nI also considered parsing the inline #comments - actually it was my \nfirst idea - but I thought it would leave no option to make an inline \ncomment without populating pg_hba_file_rules. But I guess in this case \none could always write the comment in the line above :)\n\nWould you be in favor of parsing #comments instead? Given that # is \ncurrently already being parsed (ignored), it shouldn't add too much \ncomplexity to the code.\n\nThanks for the feedback.\n\nJim\n\n\n\n",
"msg_date": "Fri, 1 Sep 2023 11:32:35 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show inline comments from pg_hba lines in the pg_hba_file_rules\n view"
},
{
"msg_contents": "On Fri, Sep 01, 2023 at 11:32:35AM +0200, Jim Jones wrote:\n> Would you be in favor of parsing #comments instead? Given that # is\n> currently already being parsed (ignored), it shouldn't add too much\n> complexity to the code.\n\nI am not sure what you have in mind, but IMO any solution would live\nbetter as long as a solution is:\n- not linked to hba.c, handled in a separate code path.\n- linked to all configuration files where comments are supported, if\nneed be.\n\nPerhaps others have more opinions.\n--\nMichael",
"msg_date": "Fri, 1 Sep 2023 19:44:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Show inline comments from pg_hba lines in the pg_hba_file_rules\n view"
},
{
"msg_contents": "\nOn 01.09.23 12:44, Michael Paquier wrote:\n> I am not sure what you have in mind, but IMO any solution would live\n> better as long as a solution is:\n> - not linked to hba.c, handled in a separate code path.\n> - linked to all configuration files where comments are supported, if\n> need be.\nIf I understood you correctly: You mean an independent feature that i.e. \ngets raw lines and parses the inline #comments.\n\nDoing so we could indeed avoid the trouble of messing around with the \nhba.c logic, and it would be accessible to other config files. Very \ninteresting thought! It sounds like a much more elegant solution.\n\n> Perhaps others have more opinions.\n> --\n> Michael\n\nIf I hear no objections, I'll try to sketch it as you suggested.\n\nThanks again for the feedback\n\nJim\n\n\n\n",
"msg_date": "Fri, 1 Sep 2023 13:16:31 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Show inline comments from pg_hba lines in the pg_hba_file_rules\n view"
}
]
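The effect Michael describes, a set-returning (line_number, line_contents) view of a config file, can be roughly approximated from the shell before touching hba.c. This sketch is my own illustration, not the proposed backend function: the sample file mirrors the examples above, and the awk program simply prints the line number and the trailing "#" comment of each non-comment entry.

```shell
#!/bin/sh
# Rough shell approximation of the "(line_number, line_contents)" idea
# from the thread: list line numbers and trailing "#" comments found on
# otherwise non-comment pg_hba.conf entries. The sample file is invented;
# this is not the proposed backend function.
hba=$(mktemp)
cat > "$hba" <<'EOF'
# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       all                 trust  # foo
host    all       all   127.0.0.1/32  trust
EOF

# Select lines that do not start with "#" but contain one, strip
# everything up to and including the "#", and prefix the line number.
comments=$(awk '/^[^#]/ && /#/ { sub(/^[^#]*#[ \t]*/, ""); print NR ": " $0 }' "$hba")
echo "$comments"
rm -f "$hba"
```

Mapping such output onto pg_hba_file_rules rows would still require matching line numbers against the view, which the view already exposes.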
[
{
"msg_contents": "Hi,\n\nRecently urocryon has been failing with the following errors at [1]:\nchecking for icu-uc icu-i18n... no\nconfigure: error: ICU library not found\nIf you have ICU already installed, see config.log for details on the\nfailure. It is possible the compiler isn't looking in the proper directory.\nUse --without-icu to disable ICU support.\n\nconfigure:8341: checking whether to build with ICU support\nconfigure:8371: result: yes\nconfigure:8378: checking for icu-uc icu-i18n\nconfigure:8440: result: no\nconfigure:8442: error: ICU library not found\nIf you have ICU already installed, see config.log for details on the\nfailure. It is possible the compiler isn't looking in the proper directory.\nUse --without-icu to disable ICU support.\n\nUrocryon has been failing for the last 17 days.\n\nI think ICU libraries need to be installed in urocryon to fix this issue.\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=urocryon&dt=2023-09-01%2001%3A09%3A11&stg=configure\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 1 Sep 2023 11:27:47 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Buildfarm failures on urocryon"
},
{
"msg_contents": "Hi,\n\nOn Fri, Sep 01, 2023 at 11:27:47AM +0530, vignesh C wrote:\n> Hi,\n> \n> Recently urocryon has been failing with the following errors at [1]:\n> checking for icu-uc icu-i18n... no\n> configure: error: ICU library not found\n> If you have ICU already installed, see config.log for details on the\n> failure. It is possible the compiler isn't looking in the proper directory.\n> Use --without-icu to disable ICU support.\n> \n> configure:8341: checking whether to build with ICU support\n> configure:8371: result: yes\n> configure:8378: checking for icu-uc icu-i18n\n> configure:8440: result: no\n> configure:8442: error: ICU library not found\n> If you have ICU already installed, see config.log for details on the\n> failure. It is possible the compiler isn't looking in the proper directory.\n> Use --without-icu to disable ICU support.\n> \n> Urocryon has been failing for the last 17 days.\n> \n> I think ICU libraries need to be installed in urocryon to fix this issue.\n\nOops, that's when I upgraded the build farm client (from v14 to v17). I\nthink it's fixed now...\n\nRegards,\nMark\n\n\n",
"msg_date": "Fri, 1 Sep 2023 07:50:38 -0700",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Buildfarm failures on urocryon"
},
{
"msg_contents": "On Fri, 1 Sept 2023 at 20:20, Mark Wong <[email protected]> wrote:\n>\n> Hi,\n>\n> On Fri, Sep 01, 2023 at 11:27:47AM +0530, vignesh C wrote:\n> > Hi,\n> >\n> > Recently urocryon has been failing with the following errors at [1]:\n> > checking for icu-uc icu-i18n... no\n> > configure: error: ICU library not found\n> > If you have ICU already installed, see config.log for details on the\n> > failure. It is possible the compiler isn't looking in the proper directory.\n> > Use --without-icu to disable ICU support.\n> >\n> > configure:8341: checking whether to build with ICU support\n> > configure:8371: result: yes\n> > configure:8378: checking for icu-uc icu-i18n\n> > configure:8440: result: no\n> > configure:8442: error: ICU library not found\n> > If you have ICU already installed, see config.log for details on the\n> > failure. It is possible the compiler isn't looking in the proper directory.\n> > Use --without-icu to disable ICU support.\n> >\n> > Urocryon has been failing for the last 17 days.\n> >\n> > I think ICU libraries need to be installed in urocryon to fix this issue.\n>\n> Oops, that's when I upgraded the build farm client (from v14 to v17). I\n> think it's fixed now...\n\nThanks, this issue is fixed now.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 3 Sep 2023 19:18:46 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Buildfarm failures on urocryon"
}
]
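For reference, configure's "checking for icu-uc icu-i18n" probe is a pkg-config lookup, so the state of a buildfarm host can be checked by hand with the same tool. A small sketch (the libicu-dev package name is the Debian/Ubuntu one; other platforms differ):

```shell
#!/bin/sh
# Run the same probe configure performs: ask pkg-config whether the
# icu-uc and icu-i18n modules are installed. The libicu-dev package
# name is the Debian/Ubuntu one; other platforms differ.
if pkg-config --exists icu-uc icu-i18n 2>/dev/null; then
    icu_status="found $(pkg-config --modversion icu-uc)"
else
    icu_status="missing"
fi
echo "icu: $icu_status"
# When missing, either install the development package (e.g. libicu-dev)
# or configure with --without-icu.
```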
[
{
"msg_contents": "I ran into an Assert failure in ATPrepAddPrimaryKey() with the query\nbelow:\n\nCREATE TABLE t0(c0 boolean);\nCREATE TABLE t1() INHERITS(t0);\n\n# ALTER TABLE t0 ADD CONSTRAINT m EXCLUDE ((1) WITH =);\nserver closed the connection unexpectedly\n\nThe related codes are\n\n foreach(lc, stmt->indexParams)\n {\n IndexElem *elem = lfirst_node(IndexElem, lc);\n Constraint *nnconstr;\n\n Assert(elem->expr == NULL);\n\nIt seems to be introduced by b0e96f3119.\n\nThanks\nRichard\n\nI ran into an Assert failure in ATPrepAddPrimaryKey() with the querybelow:CREATE TABLE t0(c0 boolean);CREATE TABLE t1() INHERITS(t0);# ALTER TABLE t0 ADD CONSTRAINT m EXCLUDE ((1) WITH =);server closed the connection unexpectedlyThe related codes are foreach(lc, stmt->indexParams) { IndexElem *elem = lfirst_node(IndexElem, lc); Constraint *nnconstr; Assert(elem->expr == NULL);It seems to be introduced by b0e96f3119.ThanksRichard",
"msg_date": "Fri, 1 Sep 2023 15:13:42 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Assert failure in ATPrepAddPrimaryKey"
},
{
"msg_contents": "On 2023-Sep-01, Richard Guo wrote:\n\n> I ran into an Assert failure in ATPrepAddPrimaryKey() with the query\n> below:\n> \n> CREATE TABLE t0(c0 boolean);\n> CREATE TABLE t1() INHERITS(t0);\n> \n> # ALTER TABLE t0 ADD CONSTRAINT m EXCLUDE ((1) WITH =);\n> server closed the connection unexpectedly\n\nUgh, right, I failed to make the new function do nothing for this case;\nthis had no coverage. Fix attached, with some additional test cases\nbased on yours.\n\nThanks for reporting.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"XML!\" Exclaimed C++. \"What are you doing here? You're not a programming\nlanguage.\"\n\"Tell that to the people who use me,\" said XML.\nhttps://burningbird.net/the-parable-of-the-languages/",
"msg_date": "Fri, 1 Sep 2023 13:48:00 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure in ATPrepAddPrimaryKey"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 7:48 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2023-Sep-01, Richard Guo wrote:\n>\n> > I ran into an Assert failure in ATPrepAddPrimaryKey() with the query\n> > below:\n> >\n> > CREATE TABLE t0(c0 boolean);\n> > CREATE TABLE t1() INHERITS(t0);\n> >\n> > # ALTER TABLE t0 ADD CONSTRAINT m EXCLUDE ((1) WITH =);\n> > server closed the connection unexpectedly\n>\n> Ugh, right, I failed to make the new function do nothing for this case;\n> this had no coverage. Fix attached, with some additional test cases\n> based on yours.\n\n\nThanks for the fix!\n\nThanks\nRichard\n\nOn Fri, Sep 1, 2023 at 7:48 PM Alvaro Herrera <[email protected]> wrote:On 2023-Sep-01, Richard Guo wrote:\n\n> I ran into an Assert failure in ATPrepAddPrimaryKey() with the query\n> below:\n> \n> CREATE TABLE t0(c0 boolean);\n> CREATE TABLE t1() INHERITS(t0);\n> \n> # ALTER TABLE t0 ADD CONSTRAINT m EXCLUDE ((1) WITH =);\n> server closed the connection unexpectedly\n\nUgh, right, I failed to make the new function do nothing for this case;\nthis had no coverage. Fix attached, with some additional test cases\nbased on yours.Thanks for the fix!ThanksRichard",
"msg_date": "Mon, 4 Sep 2023 11:05:08 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure in ATPrepAddPrimaryKey"
}
]
[
{
"msg_contents": "Hi All,\n\nThis patch moves the pre-processing for tokens in the bki file from\ninitdb to bootstrap. With these changes the bki file will only be\nopened once in bootstrap and parsing will be done by the bootstrap\nparser.\n\nThe flow of bki file processing will be as follows:\n- In initdb gather the values used to replace the tokens in the bki file.\n- Pass these values into postgres bootstrap startup using '-i' option\nas key-value pairs.\n- In bootstrap open the bki file (the bki file name was received as a\nparameter).\n- During the parsing of the bki file, replace the tokens received as\nparameters with their values.\n\nRelated discussion can be found here:\nhttps://www.postgresql.org/message-id/20220216021219.ygzrtb3hd5bn7olz%40alap3.anarazel.de\n\nNote: Currently the patch breaks on windows due to placement of extra\nquotes when passing parameters (Thanks to Thomas Munro for helping me\nfind that). Will follow up with v2 fixing the windows issues on\npassing the parameters and format fixes.\n\nPlease review and provide feedback.\n\n--\nThanks and Regards,\nKrishnakumar (KK).\n[Microsoft]",
"msg_date": "Fri, 1 Sep 2023 01:01:31 -0700",
"msg_from": "Krishnakumar R <[email protected]>",
"msg_from_op": true,
"msg_subject": "Move bki file pre-processing from initdb to bootstrap"
},
{
"msg_contents": "Krishnakumar R <[email protected]> writes:\n> This patch moves the pre-processing for tokens in the bki file from\n> initdb to bootstrap. With these changes the bki file will only be\n> opened once in bootstrap and parsing will be done by the bootstrap\n> parser.\n\nYou haven't provided any iota of evidence why this would be an\nimprovement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Sep 2023 08:37:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move bki file pre-processing from initdb to bootstrap"
},
{
"msg_contents": "On 01.09.23 14:37, Tom Lane wrote:\n> Krishnakumar R <[email protected]> writes:\n>> This patch moves the pre-processing for tokens in the bki file from\n>> initdb to bootstrap. With these changes the bki file will only be\n>> opened once in bootstrap and parsing will be done by the bootstrap\n>> parser.\n> \n> You haven't provided any iota of evidence why this would be an\n> improvement.\n\nI had played with similar ideas in the past, because it would shave some \ntime of initdb, which would accumulate noticeably over a full test run.\n\nBut now with the initdb caching mechanism, I wonder whether this is \nstill needed.\n\n\n\n",
"msg_date": "Fri, 1 Sep 2023 14:59:57 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move bki file pre-processing from initdb to bootstrap"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-01 14:59:57 +0200, Peter Eisentraut wrote:\n> On 01.09.23 14:37, Tom Lane wrote:\n> > Krishnakumar R <[email protected]> writes:\n> > > This patch moves the pre-processing for tokens in the bki file from\n> > > initdb to bootstrap. With these changes the bki file will only be\n> > > opened once in bootstrap and parsing will be done by the bootstrap\n> > > parser.\n> > \n> > You haven't provided any iota of evidence why this would be an\n> > improvement.\n> \n> I had played with similar ideas in the past, because it would shave some\n> time of initdb, which would accumulate noticeably over a full test run.\n> \n> But now with the initdb caching mechanism, I wonder whether this is still\n> needed.\n\nI think it's still relevant - it's not just our own test infrastructure that\nruns a lot of initdbs, it's also lots of projects using postgres.\n\n\nThe main reason I'd like to move this infrastructure to the backend is that I\nreally would like to get rid of single user mode. It adds complications all\nover, it's barely tested, pointlessly hard to use. I wrote a rough prototype\nof that a while back:\nhttps://postgr.es/m/20220220214439.bhc35hhbaub6dush%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 18 Sep 2023 16:13:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move bki file pre-processing from initdb to bootstrap"
},
{
"msg_contents": "On 01.09.23 10:01, Krishnakumar R wrote:\n> This patch moves the pre-processing for tokens in the bki file from\n> initdb to bootstrap. With these changes the bki file will only be\n> opened once in bootstrap and parsing will be done by the bootstrap\n> parser.\n\nI did some rough performance tests on this. I get about a 10% \nimprovement on initdb run time, so this appears to have merit.\n\nI wonder whether we can reduce the number of symbols that we need this \ntreatment for.\n\nFor example, for NAMEDATALEN, SIZEOF_POINTER, ALIGNOF_POINTER, \nFLOAT8PASSBYVAL, these are known at build time, so we could have \ngenbki.pl substitute them at build time.\n\nThe locale-related symbols (ENCODING, LC_COLLATE, etc.), I wonder \nwhether we can eliminate the need for them. Right now, these are only \nused in the bki entry for the template1 database. How about initdb \ncreates template0 first, with hardcoded default encoding, collation, \netc., and then creates template1 from that, using the normal CREATE \nDATABASE command with the appropriate options. Or initdb could just run \nan UPDATE on pg_database to put the right settings in place.\n\nI don't like this part so much, because it adds like 4 more places each \nof these variables is mentioned, which increases the mental and testing \noverhead for dealing with these features (which are an area of active \ndevelopment).\n\nIn general, it would be good if this could be factored a bit more so \neach variable doesn't have to be hardcoded in so many places.\n\n\nSome more detailed comments on the code:\n\n+ boot_yylval.str = pstrdup(yytext);\n+ sprintf(boot_yylval.str, \"%d\", NAMEDATALEN);\n\nThis is weird. 
You are first assigning the text and then overwriting it \nwith the numeric value?\n\nYou can also use boot_yylval.ival for storing numbers.\n\n+ if (bootp_null(ebootp, ebootp->username)) return \nNULLVAL;\n\nAdd proper line breaks in the code.\n\n+bool bootp_null(extra_bootstrap_params *e, char *s)\n\nAdd a comment what this function is supposed to do.\n\nThis function could be static.\n\n+ while ((flag = getopt(argc, argv, \"B:c:d:D:Fi:kr:X:-:\")) != -1)\n\nYou should use an option letter that isn't already in use in one of the \nother modes of \"postgres\". We try to keep those consistent.\n\nNew options should be added to the --help output (usage() in main.c).\n\n+ elog(INFO, \"Open bki file %s\\n\", bki_file);\n+ boot_yyin = fopen(bki_file, \"r\");\n\nWhy is this needed? It already reads the bki file from stdin?\n\n+ printfPQExpBuffer(&cmd, \"\\\"%s\\\" --boot -X %d %s %s %s %s -i \n%s=%s,%s=%s,%s=%s,\"\n+ \"%s=%s,%s=%s,%s=%s,%s=%s,%s=%c\",\n+ backend_exec,\n+ wal_segment_size_mb * (1024 * 1024),\n+ boot_options, extra_options,\n+ data_checksums ? \"-k\" : \"\",\n+ debug ? \"-d 5\" : \"\",\n\nThis appears to undo some of the changes done in cccdbc5d95.\n\n+#define BOOT_LC_COLLATE \"lc_collate\"\n+#define BOOT_LC_CTYPE \"lc_ctype\"\n+#define BOOT_ICU_LOCALE \"icu_locale\"\n\netc. This doesn't look particularly useful. You can just use the \nstrings directly.\n\n\n\n",
"msg_date": "Tue, 19 Sep 2023 12:18:37 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move bki file pre-processing from initdb to bootstrap"
},
{
"msg_contents": "Thank you, Peter, Andres and Tom for your comments and thoughts.\n\nHi Peter,\n\n> For example, for NAMEDATALEN, SIZEOF_POINTER, ALIGNOF_POINTER,\n> FLOAT8PASSBYVAL, these are known at build time, so we could have\n> genbki.pl substitute them at build time.\n\nI have modified the patch to use genbki to generate these ones during\nbuild time.\n\n> The locale-related symbols (ENCODING, LC_COLLATE, etc.), I wonder\n> whether we can eliminate the need for them. Right now, these are only\n> used in the bki entry for the template1 database. How about initdb\n> creates template0 first, with hardcoded default encoding, collation,\n> etc., and then creates template1 from that, using the normal CREATE\n> DATABASE command with the appropriate options. Or initdb could just run\n> an UPDATE on pg_database to put the right settings in place.\n\nUsing a combination of this suggestion and discussions Andres pointed\nto in this thread, updated the patch to add placeholder values first\ninto template1 and then do UPDATEs in initdb itself.\n\n> You should use an option letter that isn't already in use in one of the\n> other modes of \"postgres\". We try to keep those consistent.\n>\n> New options should be added to the --help output (usage() in main.c).\n\nUsed a -b option under bootstrap mode and added help.\n\n> elog(INFO, \"Open bki file %s\\n\", bki_file);\n> + boot_yyin = fopen(bki_file, \"r\");\n>\n> Why is this needed? It already reads the bki file from stdin?\n\nWe no longer open the bki file in initdb and pass to postgres to parse\nfrom stdin, instead we open the bki file directly in bootstrap and\npass the file stream to the parser. Hence the need to switch the yyin.\nHave added a comment in the commit logs to capture this.\n\nThe version comparison has been moved from initdb to bootstrap. This\ncreated some compatibility problems with windows tests. For now I kept\nthe version check to not have \\n added, which worked fine and serves\nthe purpose. 
However hoping to have something better in v3 in addition\nto addressing any other comments.\n\nPlease let me know your thoughts and review comments.\n\n--\nThanks and Regards,\nKrishnakumar (KK).\n[Microsoft]\n\nOn Tue, Sep 19, 2023 at 3:18 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 01.09.23 10:01, Krishnakumar R wrote:\n> > This patch moves the pre-processing for tokens in the bki file from\n> > initdb to bootstrap. With these changes the bki file will only be\n> > opened once in bootstrap and parsing will be done by the bootstrap\n> > parser.\n>\n> I did some rough performance tests on this. I get about a 10%\n> improvement on initdb run time, so this appears to have merit.\n>\n> I wonder whether we can reduce the number of symbols that we need this\n> treatment for.\n>\n> For example, for NAMEDATALEN, SIZEOF_POINTER, ALIGNOF_POINTER,\n> FLOAT8PASSBYVAL, these are known at build time, so we could have\n> genbki.pl substitute them at build time.\n>\n> The locale-related symbols (ENCODING, LC_COLLATE, etc.), I wonder\n> whether we can eliminate the need for them. Right now, these are only\n> used in the bki entry for the template1 database. How about initdb\n> creates template0 first, with hardcoded default encoding, collation,\n> etc., and then creates template1 from that, using the normal CREATE\n> DATABASE command with the appropriate options. 
Or initdb could just run\n> an UPDATE on pg_database to put the right settings in place.\n>\n> I don't like this part so much, because it adds like 4 more places each\n> of these variables is mentioned, which increases the mental and testing\n> overhead for dealing with these features (which are an area of active\n> development).\n>\n> In general, it would be good if this could be factored a bit more so\n> each variable doesn't have to be hardcoded in so many places.\n>\n>\n> Some more detailed comments on the code:\n>\n> + boot_yylval.str = pstrdup(yytext);\n> + sprintf(boot_yylval.str, \"%d\", NAMEDATALEN);\n>\n> This is weird. You are first assigning the text and then overwriting it\n> with the numeric value?\n>\n> You can also use boot_yylval.ival for storing numbers.\n>\n> + if (bootp_null(ebootp, ebootp->username)) return\n> NULLVAL;\n>\n> Add proper line breaks in the code.\n>\n> +bool bootp_null(extra_bootstrap_params *e, char *s)\n>\n> Add a comment what this function is supposed to do.\n>\n> This function could be static.\n>\n> + while ((flag = getopt(argc, argv, \"B:c:d:D:Fi:kr:X:-:\")) != -1)\n>\n> You should use an option letter that isn't already in use in one of the\n> other modes of \"postgres\". We try to keep those consistent.\n>\n> New options should be added to the --help output (usage() in main.c).\n>\n> + elog(INFO, \"Open bki file %s\\n\", bki_file);\n> + boot_yyin = fopen(bki_file, \"r\");\n>\n> Why is this needed? It already reads the bki file from stdin?\n>\n> + printfPQExpBuffer(&cmd, \"\\\"%s\\\" --boot -X %d %s %s %s %s -i\n> %s=%s,%s=%s,%s=%s,\"\n> + \"%s=%s,%s=%s,%s=%s,%s=%s,%s=%c\",\n> + backend_exec,\n> + wal_segment_size_mb * (1024 * 1024),\n> + boot_options, extra_options,\n> + data_checksums ? \"-k\" : \"\",\n> + debug ? 
\"-d 5\" : \"\",\n>\n> This appears to undo some of the changes done in cccdbc5d95.\n>\n> +#define BOOT_LC_COLLATE \"lc_collate\"\n> +#define BOOT_LC_CTYPE \"lc_ctype\"\n> +#define BOOT_ICU_LOCALE \"icu_locale\"\n>\n> etc. This doesn't look particularly useful. You can just use the\n> strings directly.\n>",
"msg_date": "Thu, 5 Oct 2023 17:24:21 -0700",
"msg_from": "Krishnakumar R <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Move bki file pre-processing from initdb to bootstrap"
},
{
"msg_contents": "> The version comparison has been moved from initdb to bootstrap. This\n> created some compatibility problems with windows tests. For now I kept\n> the version check to not have \\n added, which worked fine and serves\n> the purpose. However hoping to have something better in v3 in addition\n> to addressing any other comments.\n\nWith help from Thomas, figured out that on windows fopen uses binary\nmode in the backend which causes issues with EOL. Please find the\nattached patch updated with a fix for this.\n\n--\nThanks and Regards,\nKrishnakumar (KK).\n[Microsoft]",
"msg_date": "Mon, 16 Oct 2023 18:32:31 -0700",
"msg_from": "Krishnakumar R <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Move bki file pre-processing from initdb to bootstrap"
},
{
"msg_contents": "On 06.10.23 02:24, Krishnakumar R wrote:\n>> elog(INFO, \"Open bki file %s\\n\", bki_file);\n>> + boot_yyin = fopen(bki_file, \"r\");\n>>\n>> Why is this needed? It already reads the bki file from stdin?\n> We no longer open the bki file in initdb and pass to postgres to parse\n> from stdin, instead we open the bki file directly in bootstrap and\n> pass the file stream to the parser. Hence the need to switch the yyin.\n> Have added a comment in the commit logs to capture this.\n\nWhy this change? I mean, there is nothing wrong with it, but I don't \nfollow how changing from reading from stdin to reading from a named file \nis related to moving the parameter substitution from initdb to the backend.\n\nOne effect of this is that we would now have two different ways initdb \ninteracts with the backend. In bootstrap mode, it reads from a named \nfile, and the second run (the one that loads the system views etc.) \nreads from stdin. It's already confusing enough, so any further \ndivergence should be adequately explained.\n\n\n\n",
"msg_date": "Fri, 10 Nov 2023 09:38:11 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move bki file pre-processing from initdb to bootstrap"
},
{
"msg_contents": "On 17.10.23 03:32, Krishnakumar R wrote:\n>> The version comparison has been moved from initdb to bootstrap. This\n>> created some compatibility problems with windows tests. For now I kept\n>> the version check to not have \\n added, which worked fine and serves\n>> the purpose. However hoping to have something better in v3 in addition\n>> to addressing any other comments.\n> \n> With help from Thomas, figured out that on windows fopen uses binary\n> mode in the backend which causes issues with EOL. Please find the\n> attached patch updated with a fix for this.\n\nI suggest that this patch set be split up into three incremental parts:\n\n1. Move some build-time settings from initdb to postgres.bki.\n2. The database locale handling.\n3. The bki file handling.\n\nEach of these topics really needs a separate detailed consideration.\n\n\n\n",
"msg_date": "Fri, 10 Nov 2023 09:48:37 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move bki file pre-processing from initdb to bootstrap"
},
{
"msg_contents": "Thank you for review, Peter.\n\nMakes sense on the split part. Was starting to think in same lines, at the\nend of last iteration. Will come back shortly.\n\nOn Fri, Nov 10, 2023 at 12:48 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 17.10.23 03:32, Krishnakumar R wrote:\n> >> The version comparison has been moved from initdb to bootstrap. This\n> >> created some compatibility problems with windows tests. For now I kept\n> >> the version check to not have \\n added, which worked fine and serves\n> >> the purpose. However hoping to have something better in v3 in addition\n> >> to addressing any other comments.\n> >\n> > With help from Thomas, figured out that on windows fopen uses binary\n> > mode in the backend which causes issues with EOL. Please find the\n> > attached patch updated with a fix for this.\n>\n> I suggest that this patch set be split up into three incremental parts:\n>\n> 1. Move some build-time settings from initdb to postgres.bki.\n> 2. The database locale handling.\n> 3. The bki file handling.\n>\n> Each of these topics really needs a separate detailed consideration.\n>\n>",
"msg_date": "Fri, 10 Nov 2023 10:33:18 -0800",
"msg_from": "Krishnakumar R <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Move bki file pre-processing from initdb to bootstrap"
},
{
"msg_contents": "On Sat, 11 Nov 2023 at 00:03, Krishnakumar R <[email protected]> wrote:\n>\n> Thank you for review, Peter.\n>\n> Makes sense on the split part. Was starting to think in same lines, at the end of last iteration. Will come back shortly.\n>\n> On Fri, Nov 10, 2023 at 12:48 AM Peter Eisentraut <[email protected]> wrote:\n>>\n>> On 17.10.23 03:32, Krishnakumar R wrote:\n>> >> The version comparison has been moved from initdb to bootstrap. This\n>> >> created some compatibility problems with windows tests. For now I kept\n>> >> the version check to not have \\n added, which worked fine and serves\n>> >> the purpose. However hoping to have something better in v3 in addition\n>> >> to addressing any other comments.\n>> >\n>> > With help from Thomas, figured out that on windows fopen uses binary\n>> > mode in the backend which causes issues with EOL. Please find the\n>> > attached patch updated with a fix for this.\n>>\n>> I suggest that this patch set be split up into three incremental parts:\n>>\n>> 1. Move some build-time settings from initdb to postgres.bki.\n>> 2. The database locale handling.\n>> 3. The bki file handling.\n>>\n>> Each of these topics really needs a separate detailed consideration.\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 21:54:29 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move bki file pre-processing from initdb to bootstrap"
}
] |
[
{
"msg_contents": "Hi,\n\nI found a small mistake in document in 33.1.3. Additional Test Suites.\n\n> The additional tests that can be invoked this way include:\nThe list doesn't include interface/libpq/test.\n\nI attached patch.\n\nThank you.\n\nBest Regards\nRyo Matsumura\nFujitsu Limited",
"msg_date": "Fri, 1 Sep 2023 08:01:47 +0000",
"msg_from": "\"Ryo Matsumura (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PATCH: document for regression test forgets libpq test"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 08:01:47AM +0000, Ryo Matsumura (Fujitsu) wrote:\n> Hi,\n> \n> I found a small mistake in document in 33.1.3. Additional Test Suites.\n> \n> > The additional tests that can be invoked this way include:\n> The list doesn't include interface/libpq/test.\n> \n> I attached patch.\n\nYes, good point. I modifed the patch, attached, and applied it to all\nsupported versions.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Tue, 5 Sep 2023 13:06:06 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: document for regression test forgets libpq test"
},
{
"msg_contents": "Hi,\n\n> Wednesday, September 6, 2023 2:06 AM Bruce Momjian <[email protected]> wrote:\n> Yes, good point. I modifed the patch, attached, and applied it to all\n> supported versions.\n\nThank you. # I forgot to send mail.\n\nBest Regards\nRyo Matsumura\n\n> -----Original Message-----\n> From: Bruce Momjian <[email protected]>\n> Sent: Wednesday, September 6, 2023 2:06 AM\n> To: Matsumura, Ryo/松村 量 <[email protected]>\n> Cc: [email protected]\n> Subject: Re: PATCH: document for regression test forgets libpq test\n> \n> On Fri, Sep 1, 2023 at 08:01:47AM +0000, Ryo Matsumura (Fujitsu) wrote:\n> > Hi,\n> >\n> > I found a small mistake in document in 33.1.3. Additional Test Suites.\n> >\n> > > The additional tests that can be invoked this way include:\n> > The list doesn't include interface/libpq/test.\n> >\n> > I attached patch.\n> \n> Yes, good point. I modifed the patch, attached, and applied it to all\n> supported versions.\n> \n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n> \n> Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 30 Oct 2023 09:04:00 +0000",
"msg_from": "\"Ryo Matsumura (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PATCH: document for regression test forgets libpq test"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen reading the code, I noticed a typo in the description of WAL record.\n\n/*\n- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.\n+ * Decode XLOG_HEAP2_MULTI_INSERT record into multiple tuplebufs.\n *\n\nAnd attach a small patch to fix it.\n\nBest Regards,\nHou Zhijie",
"msg_date": "Fri, 1 Sep 2023 09:36:25 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix a typo in decode.c"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 5:09 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> When reading the code, I noticed a typo in the description of WAL record.\n>\n> /*\n> - * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.\n> + * Decode XLOG_HEAP2_MULTI_INSERT record into multiple tuplebufs.\n> *\n>\n> And attach a small patch to fix it.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Sep 2023 18:46:12 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in decode.c"
}
] |
[
{
"msg_contents": "Hi,\n\nIf a null locale is reached in these paths.\nelog will dereference a null pointer.\n\nbest regards,\n\nRanier Vilela",
"msg_date": "Fri, 1 Sep 2023 11:38:12 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 11:47 AM Ranier Vilela <[email protected]> wrote:\n> If a null locale is reached in these paths.\n> elog will dereference a null pointer.\n\nTrue. That's sloppy coding.\n\nI don't know enough about this code to be sure whether the error\nmessages that you propose are for the best.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 16:16:49 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "Em sex., 1 de set. de 2023 às 17:17, Robert Haas <[email protected]>\nescreveu:\n\n> On Fri, Sep 1, 2023 at 11:47 AM Ranier Vilela <[email protected]> wrote:\n> > If a null locale is reached in these paths.\n> > elog will dereference a null pointer.\n>\n> True. That's sloppy coding.\n>\n> I don't know enough about this code to be sure whether the error\n> messages that you propose are for the best.\n>\nI tried to keep the same behavior as before.\nNote that if the locale equals COLLPROVIDER_LIBC,\nthe message to the user will be the same.\n\nbest regards,\nRanier Vilela",
"msg_date": "Sat, 2 Sep 2023 09:13:11 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Sat, Sep 02, 2023 at 09:13:11AM -0300, Ranier Vilela wrote:\n> I tried to keep the same behavior as before.\n> Note that if the locale equals COLLPROVIDER_LIBC,\n> the message to the user will be the same.\n\n- /* shouldn't happen */\n- elog(ERROR, \"unsupported collprovider: %c\", locale->provider);\n+ elog(ERROR, \"collprovider '%c' does not support pg_strnxfrm_prefix()\",\n+ locale->provider); \n\nBased on what's written here, these messages could be better because\nfull sentences are not encouraged in error messages, even for\nnon-translated elogs:\nhttps://www.postgresql.org/docs/current/error-style-guide.html\n\nPerhaps something like \"could not use collprovider %c: no support for\n%s\", where the function name is used, would be more consistent.\n--\nMichael",
"msg_date": "Mon, 4 Sep 2023 10:01:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "Em dom., 3 de set. de 2023 às 22:01, Michael Paquier <[email protected]>\nescreveu:\n\n> On Sat, Sep 02, 2023 at 09:13:11AM -0300, Ranier Vilela wrote:\n> > I tried to keep the same behavior as before.\n> > Note that if the locale equals COLLPROVIDER_LIBC,\n> > the message to the user will be the same.\n>\n> - /* shouldn't happen */\n> - elog(ERROR, \"unsupported collprovider: %c\", locale->provider);\n> + elog(ERROR, \"collprovider '%c' does not support pg_strnxfrm_prefix()\",\n> + locale->provider);\n>\n> Based on what's written here, these messages could be better because\n> full sentences are not encouraged in error messages, even for\n> non-translated elogs:\n> https://www.postgresql.org/docs/current/error-style-guide.html\n>\n> Perhaps something like \"could not use collprovider %c: no support for\n> %s\", where the function name is used, would be more consistent.\n>\nSure.\nI have no objection to the wording of the message.\nIf there is consensus, I can send another patch.\n\nbest regards,\nRanier Vilela",
"msg_date": "Mon, 4 Sep 2023 11:27:24 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "Em seg., 4 de set. de 2023 às 11:27, Ranier Vilela <[email protected]>\nescreveu:\n\n> Em dom., 3 de set. de 2023 às 22:01, Michael Paquier <[email protected]>\n> escreveu:\n>\n>> On Sat, Sep 02, 2023 at 09:13:11AM -0300, Ranier Vilela wrote:\n>> > I tried to keep the same behavior as before.\n>> > Note that if the locale equals COLLPROVIDER_LIBC,\n>> > the message to the user will be the same.\n>>\n>> - /* shouldn't happen */\n>> - elog(ERROR, \"unsupported collprovider: %c\", locale->provider);\n>> + elog(ERROR, \"collprovider '%c' does not support\n>> pg_strnxfrm_prefix()\",\n>> + locale->provider);\n>>\n>> Based on what's written here, these messages could be better because\n>> full sentences are not encouraged in error messages, even for\n>> non-translated elogs:\n>> https://www.postgresql.org/docs/current/error-style-guide.html\n>>\n>> Perhaps something like \"could not use collprovider %c: no support for\n>> %s\", where the function name is used, would be more consistent.\n>>\n> Sure.\n> I have no objection to the wording of the message.\n> If there is consensus, I can send another patch.\n>\nI think no one objected.\n\nv1 attached.\n\nbest regards,\nRanier Vilela",
"msg_date": "Wed, 6 Sep 2023 07:57:03 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Wed, Sep 06, 2023 at 07:57:03AM -0300, Ranier Vilela wrote:\n> I think no one objected.\n\nLooking closer, there is much more inconsistency in this file\ndepending on the routine called. How about something like the v2\nattached instead to provide more context in the error message about\nthe function called? Let's say, when the provider is known, we could\nuse:\n+ elog(ERROR, \"unsupported collprovider (%s): %c\",\n+ \"pg_strncoll\", locale->provider);\n\nAnd when the provider is not known, we could use:\n+ elog(ERROR, \"unsupported collprovider (%s)\", \"pg_myfunc\");\n\n@Jeff (added now in CC), the refactoring done in d87d548c seems to be\nat the origin of this confusion, because, before this commit, we never\ngenerated this specific error for all these APIs where the locale is\nundefined. What is your take here?\n--\nMichael",
"msg_date": "Fri, 8 Sep 2023 15:24:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Fri, 2023-09-08 at 15:24 +0900, Michael Paquier wrote:\n> Looking closer, there is much more inconsistency in this file\n> depending on the routine called. How about something like the v2\n> attached instead to provide more context in the error message about\n> the function called? Let's say, when the provider is known, we could\n> use:\n> + elog(ERROR, \"unsupported collprovider (%s): %c\",\n> + \"pg_strncoll\", locale->provider);\n\n+1, thank you.\n\n> And when the provider is not known, we could use:\n> + elog(ERROR, \"unsupported collprovider (%s)\", \"pg_myfunc\");\n\nIt's not that the provider is \"not known\" -- if locale is NULL, then\nthe provider is known to be COLLPROVIDER_LIBC. So perhaps we can\ninstead do something like:\n\n char provider = locale ? locale->provider : COLLPROVIDER_LIBC;\n\nand then always follow the first error format?\n\n[ Aside: it might be nice to refactor so that we used a pointer to a\nspecial static struct rather than NULL, which would cut down on these\nkinds of special cases. I had considered doing that before and didn't\nget around to it. ]\n\n> @Jeff (added now in CC), the refactoring done in d87d548c seems to be\n> at the origin of this confusion, because, before this commit, we\n> never\n> generated this specific error for all these APIs where the locale is\n> undefined. What is your take here?\n\nAgreed.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 08 Sep 2023 14:24:00 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer\n (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "Em sex., 8 de set. de 2023 às 03:24, Michael Paquier <[email protected]>\nescreveu:\n\n> On Wed, Sep 06, 2023 at 07:57:03AM -0300, Ranier Vilela wrote:\n> > I think no one objected.\n>\n> Looking closer, there is much more inconsistency in this file\n> depending on the routine called. How about something like the v2\n> attached instead to provide more context in the error message about\n> the function called?\n\n+1\nBut as Jeff mentioned, when the locale is NULL,\nthe provider is known to be COLLPROVIDER_LIBC.\n\nI think we can use this to provide an error message,\nwhen the locale is NULL.\n\nWhat do you think about v3 attached?\n\nbest regards,\nRanier Vilela",
"msg_date": "Sun, 10 Sep 2023 18:28:08 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Sun, Sep 10, 2023 at 06:28:08PM -0300, Ranier Vilela wrote:\n> +1\n> But as Jeff mentioned, when the locale is NULL,\n> the provider is known to be COLLPROVIDER_LIBC.\n> \n> I think we can use this to provide an error message,\n> when the locale is NULL.\n> \n> What do you think about v3 attached?\n\nThis does not apply for me on HEAD, and it seems to me that the patch\nhas some parts that apply on top of v2 (or v1?) while others would\napply to HEAD.\n\nAnyway, what you are suggesting to change compared to v2 is that:\n\n+\t/*\n+\t * if locale is NULL, then\n+\t * the provider is known to be COLLPROVIDER_LIBC\n+\t */\n \tif (!locale)\n-\t\telog(ERROR, \"unsupported collprovider\");\n+\t\telog(ERROR, \"collprovider '%c' does not support (%s)\", \n+\t\t\tCOLLPROVIDER_LIBC, \"pg_strxfrm_prefix\");\n\nI'm OK with enforcing COLLPROVIDER_LIBC in this path, but I also value\nconsistency across all the error messages of this file. After\nsleeping on it, and as that's a set of elogs, \"unsupported\ncollprovider\" is fine for me across the board as these should not be\nuser-visible.\n\nThis should be made consistent down to 16, I guess, but only after\n16.0 is tagged as we are in release week.\n--\nMichael",
"msg_date": "Mon, 11 Sep 2023 07:24:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Mon, 2023-09-11 at 07:24 +0900, Michael Paquier wrote:\n> I'm OK with enforcing COLLPROVIDER_LIBC in this path, but I also\n> value\n> consistency across all the error messages of this file. After\n> sleeping on it, and as that's a set of elogs, \"unsupported\n> collprovider\" is fine for me across the board as these should not be\n> user-visible.\n\nThat's fine with me.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 11 Sep 2023 12:15:49 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer\n (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 12:15:49PM -0700, Jeff Davis wrote:\n> That's fine with me.\n\nOkay. Then, please find attached a v4 for HEAD and REL_16_STABLE.\n--\nMichael",
"msg_date": "Tue, 12 Sep 2023 09:03:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "Em seg., 11 de set. de 2023 às 21:03, Michael Paquier <[email protected]>\nescreveu:\n\n> On Mon, Sep 11, 2023 at 12:15:49PM -0700, Jeff Davis wrote:\n> > That's fine with me.\n>\n> Okay. Then, please find attached a v4 for HEAD and REL_16_STABLE.\n>\nLGTM.\n\nbest regards,\nRanier Vilela",
"msg_date": "Mon, 11 Sep 2023 21:06:53 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Tue, 2023-09-12 at 09:03 +0900, Michael Paquier wrote:\n> On Mon, Sep 11, 2023 at 12:15:49PM -0700, Jeff Davis wrote:\n> > That's fine with me.\n> \n> Okay. Then, please find attached a v4 for HEAD and REL_16_STABLE.\n\nOne question: would it make sense to use __func__?\n\nOther than that, looks good. Thank you.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 12 Sep 2023 13:51:25 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer\n (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "Em ter., 12 de set. de 2023 às 17:51, Jeff Davis <[email protected]>\nescreveu:\n\n> On Tue, 2023-09-12 at 09:03 +0900, Michael Paquier wrote:\n> > On Mon, Sep 11, 2023 at 12:15:49PM -0700, Jeff Davis wrote:\n> > > That's fine with me.\n> >\n> > Okay. Then, please find attached a v4 for HEAD and REL_16_STABLE.\n>\n> One question: would it make sense to use __func__?\n>\nAccording to the msvc documentation, __func__ requires C++11.\nhttps://learn.microsoft.com/en-us/cpp/cpp/func?view=msvc-170\n\nI think that is not a good idea.\n\nbest regards,\nRanier Vilela",
"msg_date": "Tue, 12 Sep 2023 21:40:04 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "At Tue, 12 Sep 2023 09:03:27 +0900, Michael Paquier <[email protected]> wrote in \n> On Mon, Sep 11, 2023 at 12:15:49PM -0700, Jeff Davis wrote:\n> > That's fine with me.\n> \n> Okay. Then, please find attached a v4 for HEAD and REL_16_STABLE.\n\nFor example, they result in the following message:\n\nERROR: unsupported collprovider (pg_strcoll): i\n\nEven if it is an elog message, I believe we can make it clearer. The\npg_strcoll seems like a collation privier at first glance. Not sure\nabout others, though, I would spell it as the follows instead:\n\nERROR: unsupported collprovider in pg_strcoll: i\nERROR: unsupported collprovider in pg_strcoll(): i\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 13 Sep 2023 09:59:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer\n (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 09:40:04PM -0300, Ranier Vilela wrote:\n> I think that is not a good idea.\n\nHm? We already use __func__ across the tree even on Windows and\nnobody has complained about that. Using a macro for the elog()\ngenerated would be slightly more elegant, actually.\n--\nMichael",
"msg_date": "Wed, 13 Sep 2023 10:16:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 09:59:22AM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 12 Sep 2023 09:03:27 +0900, Michael Paquier <[email protected]> wrote in \n> > On Mon, Sep 11, 2023 at 12:15:49PM -0700, Jeff Davis wrote:\n> > > That's fine with me.\n> > \n> > Okay. Then, please find attached a v4 for HEAD and REL_16_STABLE.\n> \n> For example, they result in the following message:\n> \n> ERROR: unsupported collprovider (pg_strcoll): i\n> \n> Even if it is an elog message, I believe we can make it clearer. The\n> pg_strcoll seems like a collation privier at first glance. Not sure\n> about others, though, I would spell it as the follows instead:\n> \n> ERROR: unsupported collprovider in pg_strcoll: i\n> ERROR: unsupported collprovider in pg_strcoll(): i\n\nHmm. I see your point, one could be confused that the function name\nis the provider with this wording. How about that instead:\n ERROR: unsupported collprovider for %s: %c\n\nI've hidden that in a macro that uses __func__ as Jeff has suggested.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Wed, 13 Sep 2023 11:48:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Wed, 2023-09-13 at 11:48 +0900, Michael Paquier wrote:\n> Hmm. I see your point, one could be confused that the function name\n> is the provider with this wording. How about that instead:\n> ERROR: unsupported collprovider for %s: %c\n> \n> I've hidden that in a macro that uses __func__ as Jeff has suggested.\n> \n> What do you think?\n\nLooks good to me, thank you.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 08:14:11 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer\n (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 08:14:11AM -0700, Jeff Davis wrote:\n> Looks good to me, thank you.\n\nApplied, then. Thanks.\n--\nMichael",
"msg_date": "Thu, 14 Sep 2023 10:32:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
},
{
"msg_contents": "Em qua., 13 de set. de 2023 às 22:32, Michael Paquier <[email protected]>\nescreveu:\n\n> On Wed, Sep 13, 2023 at 08:14:11AM -0700, Jeff Davis wrote:\n> > Looks good to me, thank you.\n>\n> Applied, then. Thanks.\n>\nThank you Michael, for the commit.\n\nbest regards,\nRanier Vilela",
"msg_date": "Thu, 14 Sep 2023 08:08:42 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid a possible null pointer (src/backend/utils/adt/pg_locale.c)"
}
] |
[
{
"msg_contents": "Hackers,\n\nI noticed that there was a mismatch between the const qualifiers for \nexcludeDirContents in src/backend/backup/backup/basebackup.c and \nsrc/bin/pg_rewind/file_map.c and that led me to use ^static const.*\\*.*= \nto do a quick search for similar cases.\n\nI think at the least we should make excludeDirContents match, but the \nrest of the changes seem like a good idea as well.\n\nRegards,\n-David",
"msg_date": "Fri, 1 Sep 2023 11:39:24 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add const qualifiers"
},
{
"msg_contents": "On 9/1/23 11:39, David Steele wrote:\n> Hackers,\n> \n> I noticed that there was a mismatch between the const qualifiers for \n> excludeDirContents in src/backend/backup/backup/basebackup.c and \n> src/bin/pg_rewind/file_map.c and that led me to use ^static const.*\\*.*= \n> to do a quick search for similar cases.\n> \n> I think at the least we should make excludeDirContents match, but the \n> rest of the changes seem like a good idea as well.\n\nAdded to 2023-11 CF.\n\nRegards,\n-David\n\n\n",
"msg_date": "Sat, 9 Sep 2023 16:03:37 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add const qualifiers"
},
{
"msg_contents": "On 09.09.23 21:03, David Steele wrote:\n> On 9/1/23 11:39, David Steele wrote:\n>> Hackers,\n>>\n>> I noticed that there was a mismatch between the const qualifiers for \n>> excludeDirContents in src/backend/backup/backup/basebackup.c and \n>> src/bin/pg_rewind/file_map.c and that led me to use ^static \n>> const.*\\*.*= to do a quick search for similar cases.\n>>\n>> I think at the least we should make excludeDirContents match, but the \n>> rest of the changes seem like a good idea as well.\n> \n> Added to 2023-11 CF.\n\ncommitted\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 11:34:34 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add const qualifiers"
},
{
"msg_contents": "On 9/26/23 06:34, Peter Eisentraut wrote:\n> On 09.09.23 21:03, David Steele wrote:\n>> On 9/1/23 11:39, David Steele wrote:\n>>> Hackers,\n>>>\n>>> I noticed that there was a mismatch between the const qualifiers for \n>>> excludeDirContents in src/backend/backup/backup/basebackup.c and \n>>> src/bin/pg_rewind/file_map.c and that led me to use ^static \n>>> const.*\\*.*= to do a quick search for similar cases.\n>>>\n>>> I think at the least we should make excludeDirContents match, but the \n>>> rest of the changes seem like a good idea as well.\n>>\n>> Added to 2023-11 CF.\n> \n> committed\n\nThank you, Peter!\n\n\n",
"msg_date": "Tue, 26 Sep 2023 16:19:12 -0400",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add const qualifiers"
}
] |
[
{
"msg_contents": "The first two patches in the series are re-proposals that had previously \nbeen approved[0] by Andres, but fell through the cracks.\n\nThe only patch that _could_ be controversial is probably the last one, \nbut from my understanding it would match up with the autotools build.\n\nOne thing that I did notice while testing this patch is that Muon \ndoesn't build postgres without coercing the build a bit. I had to \ndisable nls and plpython. The nls issue could be fixed with a bump to \nMeson 0.59, which introduces import(required:). nls isn't supported in \nMuon unfortunately at the moment. The plpython issue is that it doesn't \nunderstand pymod.find_installation(required:), which is a bug in Muon.\n\nMuon development has slowed quite a bit this year. Postgres is probably \nthe largest project which tries its best to support Muon. It seems like \nif we want to keep supporting Muon, we should get a buildfarm machine to \nuse it instead of Meson to catch regressions. OR we should contemplate \nremoving support for it.\n\nAlternatively someone (me?) could step up and provide some patches to \nMuon to make the postgres experience better. But I wonder if any \nPostgres user even uses Muon to build it.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Fri, 01 Sep 2023 11:31:07 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Casual Meson fixups"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-01 11:31:07 -0500, Tristan Partin wrote:\n> Muon development has slowed quite a bit this year. Postgres is probably the\n> largest project which tries its best to support Muon. It seems like if we\n> want to keep supporting Muon, we should get a buildfarm machine to use it\n> instead of Meson to catch regressions. OR we should contemplate removing\n> support for it.\n\nI found it to be quite useful to find bugs in the meson.build files...\n\n\n> Subject: [PATCH v1 2/7] Add Meson override for libpq\n> \n> Meson has the ability to do transparent overrides when projects are used\n> as subprojects. For instance, say I am building a Postgres extension. I\n> can define Postgres to be a subproject of my extension given the\n> following wrap file:\n> \n> [wrap-git]\n> url = https://git.postgresql.org/git/postgresql.git\n> revision = master\n> depth = 1\n> \n> [provide]\n> dependency_names = libpq\n> \n> Then in my extension (root project), I can have the following line\n> snippet:\n> \n> libpq = dependency('libpq')\n> \n> This will tell Meson to transparently compile libpq prior to it\n> compiling my extension (because I depend on libpq) if libpq isn't found\n> on the host system.\n> ---\n> src/interfaces/libpq/meson.build | 2 ++\n> 1 file changed, 2 insertions(+)\n\nThis example doesn't seem convincing, because if you build a postgres\nextension, you need postgres' headers - which makes it extremely likely that\nlibpq is available :)\n\n\n> From 5455426c9944ff8c8694db46929eaa37e03d907f Mon Sep 17 00:00:00 2001\n> From: Tristan Partin <[email protected]>\n> Date: Fri, 1 Sep 2023 11:07:40 -0500\n> Subject: [PATCH v1 7/7] Disable building contrib targets by default\n> \n> This matches the autotools build.\n\nWhy should we match it here? IIRC this would actually break running meson\ninstall, because it doesn't grok targets that are installed but not built by\ndefault :/.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 22 Oct 2023 17:59:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Casual Meson fixups"
},
{
"msg_contents": "\nOn 2023-10-22 Su 20:59, Andres Freund wrote:\n> Hi,\n>\n> On 2023-09-01 11:31:07 -0500, Tristan Partin wrote:\n>> Muon development has slowed quite a bit this year. Postgres is probably the\n>> largest project which tries its best to support Muon. It seems like if we\n>> want to keep supporting Muon, we should get a buildfarm machine to use it\n>> instead of Meson to catch regressions. OR we should contemplate removing\n>> support for it.\n> I found it to be quite useful to find bugs in the meson.build files...\n\n\nI agree with Tristan that if we are going to use it then we should have \na buildfarm animal that does too.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 23 Oct 2023 10:39:56 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Casual Meson fixups"
}
] |
[
{
"msg_contents": "\n\n",
"msg_date": "Sat, 2 Sep 2023 00:41:14 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is there a complete doc to describe pg's traction implementation in\n detail?"
}
] |
[
{
"msg_contents": "Hi,\n\nWhilst looking at PostgreSQL's bootstrapping process, I noticed that\npostgres.bki contains quite a few occurrances of the pattern \"open\n$catname; close $catname\".\nI suppose this pattern isn't too expensive, but according to my\nlimited research a combined open+close cycle doens't do anything\nmeaningful, so it does waste some CPU cycles in the process.\n\nThe attached patch 1 removes the occurances of those combined\nopen/close statements in postgresql.bki. Locally it passes\ncheck-world, so I assume that opening and closing a table is indeed\nnot required for initializing a data-less catalog during\nbootstrapping.\n\nA potential addition to the patch would to stop manually closing\nrelations: initdb and check-world succeed without manual 'close'\noperations because the 'open' command auto-closes the previous open\nrelation (in boot_openrel). Testing also suggests that the last opened\nrelation apparently doesn't need closing - check-world succeeds\nwithout issues (incl. with TAP enabled). That is therefore implemented\nin attached patch 2 - it removes the 'close' syntax in its entirety.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)",
"msg_date": "Fri, 1 Sep 2023 19:26:56 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On 2023-Sep-01, Matthias van de Meent wrote:\n\n> A potential addition to the patch would to stop manually closing\n> relations: initdb and check-world succeed without manual 'close'\n> operations because the 'open' command auto-closes the previous open\n> relation (in boot_openrel). Testing also suggests that the last opened\n> relation apparently doesn't need closing - check-world succeeds\n> without issues (incl. with TAP enabled). That is therefore implemented\n> in attached patch 2 - it removes the 'close' syntax in its entirety.\n\nHmm, what happens with the last relation in the bootstrap process? Is\ncloserel() called via some other path for that one?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nEssentially, you're proposing Kevlar shoes as a solution for the problem\nthat you want to walk around carrying a loaded gun aimed at your foot.\n(Tom Lane)\n\n\n",
"msg_date": "Fri, 1 Sep 2023 19:43:30 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On Fri, 1 Sept 2023 at 19:43, Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Sep-01, Matthias van de Meent wrote:\n>\n> > A potential addition to the patch would to stop manually closing\n> > relations: initdb and check-world succeed without manual 'close'\n> > operations because the 'open' command auto-closes the previous open\n> > relation (in boot_openrel). Testing also suggests that the last opened\n> > relation apparently doesn't need closing - check-world succeeds\n> > without issues (incl. with TAP enabled). That is therefore implemented\n> > in attached patch 2 - it removes the 'close' syntax in its entirety.\n>\n> Hmm, what happens with the last relation in the bootstrap process? Is\n> closerel() called via some other path for that one?\n\nThere is a final cleanup() call that closes the last open boot_reldesc\nrelation (if any) at the end of BootstrapModeMain, after boot_yyparse\nhas completed and its changes have been committed.\n\n- Matthias\n\n\n",
"msg_date": "Fri, 1 Sep 2023 19:50:27 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2023-Sep-01, Matthias van de Meent wrote:\n>> A potential addition to the patch would to stop manually closing\n>> relations: initdb and check-world succeed without manual 'close'\n>> operations because the 'open' command auto-closes the previous open\n>> relation (in boot_openrel). Testing also suggests that the last opened\n>> relation apparently doesn't need closing - check-world succeeds\n>> without issues (incl. with TAP enabled). That is therefore implemented\n>> in attached patch 2 - it removes the 'close' syntax in its entirety.\n\n> Hmm, what happens with the last relation in the bootstrap process? Is\n> closerel() called via some other path for that one?\n\nTaking a quick census of existing closerel() callers: there is\ncleanup() in bootstrap.c, but it's called uncomfortably late\nand outside any transaction, so I misdoubt that it works\nproperly if asked to actually shoulder any responsibility.\n(A little code reshuffling could fix that.)\nThere are also a couple of low-level elog warnings in CREATE\nthat would likely get triggered, though I suppose we could just\nremove those elogs.\n\nI guess my reaction to this patch is \"why bother?\". It seems\nunlikely to yield any measurable benefit, though of course\nthat guess could be wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Sep 2023 13:52:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On Fri, 1 Sept 2023 at 19:52, Tom Lane <[email protected]> wrote:\n>\n> Alvaro Herrera <[email protected]> writes:\n> > On 2023-Sep-01, Matthias van de Meent wrote:\n> >> A potential addition to the patch would to stop manually closing\n> >> relations: initdb and check-world succeed without manual 'close'\n> >> operations because the 'open' command auto-closes the previous open\n> >> relation (in boot_openrel). Testing also suggests that the last opened\n> >> relation apparently doesn't need closing - check-world succeeds\n> >> without issues (incl. with TAP enabled). That is therefore implemented\n> >> in attached patch 2 - it removes the 'close' syntax in its entirety.\n>\n> > Hmm, what happens with the last relation in the bootstrap process? Is\n> > closerel() called via some other path for that one?\n>\n> Taking a quick census of existing closerel() callers: there is\n> cleanup() in bootstrap.c, but it's called uncomfortably late\n> and outside any transaction, so I misdoubt that it works\n> properly if asked to actually shoulder any responsibility.\n> (A little code reshuffling could fix that.)\n> There are also a couple of low-level elog warnings in CREATE\n> that would likely get triggered, though I suppose we could just\n> remove those elogs.\n\nYes, that should be easy to fix.\n\n> I guess my reaction to this patch is \"why bother?\". It seems\n> unlikely to yield any measurable benefit, though of course\n> that guess could be wrong.\n\nThere is a small but measurable decrease in size of the generated bki\n(2kb with both patches, on an initial 945kB), and there is some\nrelated code that can be eliminated. If that's not worth bothering,\nthen I can drop the patch. 
Otherwise, I can update the patch to do the\ncleanup that was within the transaction boundaries at the end of\nboot_yyparse.\n\nIf decreasing the size of postgres.bki is not worth the effort, I'll\ndrop any effort on doing so, but considering that it is about 1MB of\nour uncompressed distributables, I'd say decreases in size are worth\nthe effort, most of the time.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 12 Sep 2023 17:51:30 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On Tue, 12 Sept 2023 at 17:51, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Fri, 1 Sept 2023 at 19:52, Tom Lane <[email protected]> wrote:\n> >\n> > Alvaro Herrera <[email protected]> writes:\n> > > On 2023-Sep-01, Matthias van de Meent wrote:\n> > >> A potential addition to the patch would to stop manually closing\n> > >> relations: initdb and check-world succeed without manual 'close'\n> > >> operations because the 'open' command auto-closes the previous open\n> > >> relation (in boot_openrel). Testing also suggests that the last opened\n> > >> relation apparently doesn't need closing - check-world succeeds\n> > >> without issues (incl. with TAP enabled). That is therefore implemented\n> > >> in attached patch 2 - it removes the 'close' syntax in its entirety.\n> >\n> > > Hmm, what happens with the last relation in the bootstrap process? Is\n> > > closerel() called via some other path for that one?\n> >\n> > Taking a quick census of existing closerel() callers: there is\n> > cleanup() in bootstrap.c, but it's called uncomfortably late\n> > and outside any transaction, so I misdoubt that it works\n> > properly if asked to actually shoulder any responsibility.\n> > (A little code reshuffling could fix that.)\n> > There are also a couple of low-level elog warnings in CREATE\n> > that would likely get triggered, though I suppose we could just\n> > remove those elogs.\n>\n> Yes, that should be easy to fix.\n>\n> > I guess my reaction to this patch is \"why bother?\". It seems\n> > unlikely to yield any measurable benefit, though of course\n> > that guess could be wrong.\n>\n> There is a small but measurable decrease in size of the generated bki\n> (2kb with both patches, on an initial 945kB), and there is some\n> related code that can be eliminated. If that's not worth bothering,\n> then I can drop the patch. 
Otherwise, I can update the patch to do the\n> cleanup that was within the transaction boundaries at the end of\n> boot_yyparse.\n>\n> If decreasing the size of postgres.bki is not worth the effort, I'll\n> drop any effort on doing so, but considering that it is about 1MB of\n> our uncompressed distributables, I'd say decreases in size are worth\n> the effort, most of the time.\n\nWith the attached patch I've see a significant decrease in the size of\npostgres.bki of about 25%, and a likely related decrease in wall clock\ntime spent in the bootstrap transaction: with timestamp logs inserted\naround the boot_yyparse() transaction the measured time went from\naround 49 ms on master to around 45 ms patched. In the grand scheme of\ninitdb that might not be a lot of time (initdb takes about 73ms\nlocally with syncing disabled) but it is a nice gain in performance.\n\nComparison:\n\nmaster @ 9c13b681\n $ du -b pg_install/share/postgres.bki\n945220\n $ initdb --no-instructions --auth=md5 --pwfile pwfile -N -D ~/test-dbinit/\n[...]\n2023-09-16 02:22:57.339 CEST [10422] LOG: Finished bootstrapping:\nto_start: 10 ms, transaction: 49 ms, finishing: 1 ms, total: 59 ms\n[...]\n\npatched\n $ du -b pg_install/share/postgres.bki\n702574\n $ initdb --no-instructions --auth=md5 --pwfile pwfile -N -D ~/test-dbinit/\n[...]\n2023-09-16 02:25:57.664 CEST [15645] LOG: Finished bootstrapping:\nto_start: 10 ms, transaction: 45 ms, finishing: 1 ms, total: 54 ms\n[...]\n\nVarious methods of reducing the size of postgres.bki were applied, as\ndetailed in the patch's commit message. I believe the current output\nis still quite human readable.\n\nThere are other potential avenues for further reducing the bki size,\ne.g. 
through using smaller generated OIDs (reducing the number of\ncharacters used per OID), applying RLE on sequential NULLs (there are\n3k+ occurances of /( __){2,10}/ in the generated bki file remaining),\nand other tricks, but several of those are likely to be detrimental to\nthe readability and manual verifiability of the bki.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Mon, 18 Sep 2023 16:50:13 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On 18/09/2023 17:50, Matthias van de Meent wrote:\n> (initdb takes about 73ms locally with syncing disabled)\n\nThat's impressive. It takes about 600 ms on my laptop. Of which about \n140 ms goes into processing the BKI file. And that's with \"initdb \n-no-sync\" option.\n\n> Various methods of reducing the size of postgres.bki were applied, as\n> detailed in the patch's commit message. I believe the current output\n> is still quite human readable.\n\nOverall this does not seem very worthwhile to me.\n\nOne thing caught my eye though: We currently have an \"open\" command \nafter every \"create\". Except for bootstrap relations; creating a \nbootstrap relation opens it implicitly. That seems like a weird \ninconsistency. If we make \"create\" to always open the relation, we can \nboth make it more consistent and save a few lines. That's maybe worth \ndoing, per the attached. It removes the \"open\" command altogether, as \nit's not needed anymore.\n\nLooking at \"perf\" profile of initdb, I also noticed that a small but \nmeasurable amount of time is spent in the \"isatty(0)\" call in do_end(). \nDoes anyone care about doing bootstrap mode interactively? We could \nprobably remove that.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Tue, 19 Sep 2023 21:05:41 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-19 21:05:41 +0300, Heikki Linnakangas wrote:\n> On 18/09/2023 17:50, Matthias van de Meent wrote:\n> > (initdb takes about 73ms locally with syncing disabled)\n>\n> That's impressive. It takes about 600 ms on my laptop. Of which about 140 ms\n> goes into processing the BKI file. And that's with \"initdb -no-sync\" option.\n\nI think there must be a digit missing in Matthias' numbers.\n\n\n> > Various methods of reducing the size of postgres.bki were applied, as\n> > detailed in the patch's commit message. I believe the current output\n> > is still quite human readable.\n>\n> Overall this does not seem very worthwhile to me.\n\nBecause the wins are too small?\n\nFWIW, Making postgres.bki smaller and improving bootstrapping time does seem\nworthwhile to me. But it doesn't seem quite right to handle the batching in\nthe file format, it should be on the server side, no?\n\nWe really should stop emitting WAL during initdb...\n\n\n> Looking at \"perf\" profile of initdb, I also noticed that a small but\n> measurable amount of time is spent in the \"isatty(0)\" call in do_end(). Does\n> anyone care about doing bootstrap mode interactively? We could probably\n> remove that.\n\nHeh, yea, that's pretty pointless.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 21 Sep 2023 15:25:21 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On Tue, 19 Sept 2023 at 20:05, Heikki Linnakangas <[email protected]> wrote:\n>\n> On 18/09/2023 17:50, Matthias van de Meent wrote:\n> > (initdb takes about 73ms locally with syncing disabled)\n>\n> That's impressive. It takes about 600 ms on my laptop. Of which about\n> 140 ms goes into processing the BKI file. And that's with \"initdb\n> -no-sync\" option.\n\nHmm, yes, I misinterpreted my own benchmark setup, the actual value\nwould be somewhere around 365ms: I thought I was doing 50*50 runs in\none timed run, but really I was doing only 50 runs. TO add insult to\ninjury, I divided the total time taken by 250 instead of either 50 or\n2500... Thanks for correcting me on that.\n\n> > Various methods of reducing the size of postgres.bki were applied, as\n> > detailed in the patch's commit message. I believe the current output\n> > is still quite human readable.\n>\n> Overall this does not seem very worthwhile to me.\n\nReducing the size of redistributables sounds worthwhile to me, but if\nnone of these changes are worth the effort, then alright, nothing\ngained, only time lost.\n\n> Looking at \"perf\" profile of initdb, I also noticed that a small but\n> measurable amount of time is spent in the \"isatty(0)\" call in do_end().\n> Does anyone care about doing bootstrap mode interactively? We could\n> probably remove that.\n\nYeah, that sounds like a good idea.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Fri, 22 Sep 2023 17:26:47 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On Fri, 22 Sept 2023 at 00:25, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-09-19 21:05:41 +0300, Heikki Linnakangas wrote:\n> > On 18/09/2023 17:50, Matthias van de Meent wrote:\n> > > (initdb takes about 73ms locally with syncing disabled)\n> >\n> > That's impressive. It takes about 600 ms on my laptop. Of which about 140 ms\n> > goes into processing the BKI file. And that's with \"initdb -no-sync\" option.\n>\n> I think there must be a digit missing in Matthias' numbers.\n\nYes, kind of. The run was on 50 iterations, not the assumed 250.\nAlso note that the improved measurements were recorded inside the\nboostrap-mode PostgreSQL instance, not inside the initdb that was\nprocessing the postgres.bki file. So it might well be that I didn't\nimprove the total timing by much.\n\n> > > Various methods of reducing the size of postgres.bki were applied, as\n> > > detailed in the patch's commit message. I believe the current output\n> > > is still quite human readable.\n> >\n> > Overall this does not seem very worthwhile to me.\n>\n> Because the wins are too small?\n>\n> FWIW, Making postgres.bki smaller and improving bootstrapping time does seem\n> worthwhile to me. But it doesn't seem quite right to handle the batching in\n> the file format, it should be on the server side, no?\n\nThe main reason I did batching in the file format is to reduce the\nstorage overhead of the current one \"INSERT\" per row. Batching\nimproved that by replacing the token with a different construct, but\nit's not neccessarily the only solution. The actual parser still\ninserts the tuples one by one in the relation, as I didn't spend time\non making a simple_heap_insert analog for bulk insertions.\n\n> We really should stop emitting WAL during initdb...\n\nI think it's quite elegant that we're able to bootstrap the relation\ndata of a new PostgreSQL cluster from the WAL generated in another\ncluster, even if it is indeed a bit wasteful. 
I do see your point\nthough - the WAL shouldn't be needed if we're already fsyncing the\nfiles to disk.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 22 Sep 2023 17:50:08 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On 19.09.23 20:05, Heikki Linnakangas wrote:\n> One thing caught my eye though: We currently have an \"open\" command \n> after every \"create\". Except for bootstrap relations; creating a \n> bootstrap relation opens it implicitly. That seems like a weird \n> inconsistency. If we make \"create\" to always open the relation, we can \n> both make it more consistent and save a few lines. That's maybe worth \n> doing, per the attached. It removes the \"open\" command altogether, as \n> it's not needed anymore.\n\nThis seems like a good improvement to me.\n\nIt would restrict the bootstrap language so that you can only manipulate \na table right after creating it, but I don't see why that wouldn't be \nsufficient.\n\n\n\n",
"msg_date": "Wed, 8 Nov 2023 08:16:55 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On 08.11.23 08:16, Peter Eisentraut wrote:\n> On 19.09.23 20:05, Heikki Linnakangas wrote:\n>> One thing caught my eye though: We currently have an \"open\" command \n>> after every \"create\". Except for bootstrap relations; creating a \n>> bootstrap relation opens it implicitly. That seems like a weird \n>> inconsistency. If we make \"create\" to always open the relation, we can \n>> both make it more consistent and save a few lines. That's maybe worth \n>> doing, per the attached. It removes the \"open\" command altogether, as \n>> it's not needed anymore.\n> \n> This seems like a good improvement to me.\n> \n> It would restrict the bootstrap language so that you can only manipulate \n> a table right after creating it, but I don't see why that wouldn't be \n> sufficient.\n\nThen again, this sort of achieves the opposite of what Matthias was \naiming for: You are now forcing some relations to be opened even though \nwe will end up closing it right away.\n\n(In any case, documentation in bki.sgml would need to be updated for \nthis patch.)\n\n\n\n",
"msg_date": "Wed, 8 Nov 2023 08:20:30 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On Wed, 8 Nov 2023 at 12:50, Peter Eisentraut <[email protected]> wrote:\n>\n> On 08.11.23 08:16, Peter Eisentraut wrote:\n> > On 19.09.23 20:05, Heikki Linnakangas wrote:\n> >> One thing caught my eye though: We currently have an \"open\" command\n> >> after every \"create\". Except for bootstrap relations; creating a\n> >> bootstrap relation opens it implicitly. That seems like a weird\n> >> inconsistency. If we make \"create\" to always open the relation, we can\n> >> both make it more consistent and save a few lines. That's maybe worth\n> >> doing, per the attached. It removes the \"open\" command altogether, as\n> >> it's not needed anymore.\n> >\n> > This seems like a good improvement to me.\n> >\n> > It would restrict the bootstrap language so that you can only manipulate\n> > a table right after creating it, but I don't see why that wouldn't be\n> > sufficient.\n>\n> Then again, this sort of achieves the opposite of what Matthias was\n> aiming for: You are now forcing some relations to be opened even though\n> we will end up closing it right away.\n>\n> (In any case, documentation in bki.sgml would need to be updated for\n> this patch.)\n\nI have changed the status of the patch to WOA, feel free to update the\nstatus once Peter's documentation comments are addressed.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 22:48:38 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "\n\n> On 22 Sep 2023, at 18:50, Matthias van de Meent <[email protected]> wrote:\n\nHi Matthias!\n\nThis is kind reminder that this thread is waiting for your response.\nCF entry [0] is in \"Waiting on Author\", I'll move it to July CF.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/47/4544/\n\n",
"msg_date": "Mon, 8 Apr 2024 11:11:07 +0300",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On Mon, Apr 08, 2024 at 11:11:07AM +0300, Andrey M. Borodin wrote:\n> This is kind reminder that this thread is waiting for your response.\n> CF entry [0] is in \"Waiting on Author\", I'll move it to July CF.\n\nHmm, is that productive? This patch has been waiting on author since\nthe 1st of February, and it was already moved from the CF 2024-01 to\n2024-03. It would make more sense to me to mark it as RwF, then\nresubmit if there is still interest in working on this topic rather\nthan move it again.\n\nMy personal inner rule is there is enough ground for a patch to be\nmarked as RwF if it has been waiting on author since the middle of a\ncommit fest, which would be the 15th of March for CF 2024-03. This\nlets two weeks to authors to react.\n--\nMichael",
"msg_date": "Tue, 9 Apr 2024 13:03:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
},
{
"msg_contents": "On Tue, Apr 9, 2024 at 12:03 AM Michael Paquier <[email protected]> wrote:\n> Hmm, is that productive? This patch has been waiting on author since\n> the 1st of February, and it was already moved from the CF 2024-01 to\n> 2024-03. It would make more sense to me to mark it as RwF, then\n> resubmit if there is still interest in working on this topic rather\n> than move it again.\n\nDone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2024 12:37:50 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GenBKI emits useless open;close for catalogs without rows"
}
] |
[
{
"msg_contents": "Greetings,\n\nIf you are on the speaker list can you send an email to [email protected]\nindicating whether you are available to travel for meetups?\n\nThis serves the obvious purpose but also provides your email address to us.\n\nThanks,\n\nDave Cramer\n\nGreetings,If you are on the speaker list can you send an email to [email protected] indicating whether you are available to travel for meetups?This serves the obvious purpose but also provides your email address to us.Thanks,Dave Cramer",
"msg_date": "Fri, 1 Sep 2023 15:00:14 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speaker Bureau"
},
{
"msg_contents": "Added my name to the list.\n\nI am available to travel for meetups.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Sat, Sep 2, 2023 at 12:30 AM Dave Cramer <[email protected]> wrote:\n>\n> Greetings,\n>\n> If you are on the speaker list can you send an email to [email protected] indicating whether you are available to travel for meetups?\n>\n> This serves the obvious purpose but also provides your email address to us.\n>\n> Thanks,\n>\n> Dave Cramer\n\n\n",
"msg_date": "Mon, 4 Sep 2023 18:27:33 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speaker Bureau"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that VACUUM FULL actually does freeze the tuples in the\nrewritten table (heap_freeze_tuple()) but then it doesn't mark them\nall visible or all frozen in the visibility map. I don't understand\nwhy. It seems like it would save us future work.\n\nHere is an example:\n\ncreate extension pg_visibility;\ndrop table if exists foo;\ncreate table foo(a int) with (autovacuum_enabled=false);\ninsert into foo select i%3 from generate_series(1,300)i;\nupdate foo set a = 5 where a = 2;\nselect * from pg_visibility_map_summary('foo');\nvacuum (verbose) foo;\nselect * from pg_visibility_map_summary('foo');\nvacuum (full, verbose) foo;\nselect * from pg_visibility_map_summary('foo');\n\nI don't see why the visibility map shouldn't be updated so that all of\nthe pages show all visible and all frozen for this relation after the\nvacuum full.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 1 Sep 2023 15:34:33 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why doesn't Vacuum FULL update the VM"
},
{
"msg_contents": "On 9/1/23 21:34, Melanie Plageman wrote:\n> Hi,\n> \n> I noticed that VACUUM FULL actually does freeze the tuples in the\n> rewritten table (heap_freeze_tuple()) but then it doesn't mark them\n> all visible or all frozen in the visibility map. I don't understand\n> why. It seems like it would save us future work.\n\nI have often wondered this as well, but obviously I haven't done \nanything about it.\n\n> I don't see why the visibility map shouldn't be updated so that all of\n> the pages show all visible and all frozen for this relation after the\n> vacuum full.\n\nIt cannot just blindly mark everything all visible and all frozen \nbecause it will copy over dead tuples that concurrent transactions are \nstill allowed to see.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 1 Sep 2023 23:48:22 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why doesn't Vacuum FULL update the VM"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 12:34 PM Melanie Plageman\n<[email protected]> wrote:\n> I don't see why the visibility map shouldn't be updated so that all of\n> the pages show all visible and all frozen for this relation after the\n> vacuum full.\n\nThere was a similar issue with COPY FREEZE. It was fixed relatively\nrecently -- see commit 7db0cd21.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 1 Sep 2023 17:38:13 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why doesn't Vacuum FULL update the VM"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 8:38 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Fri, Sep 1, 2023 at 12:34 PM Melanie Plageman\n> <[email protected]> wrote:\n> > I don't see why the visibility map shouldn't be updated so that all of\n> > the pages show all visible and all frozen for this relation after the\n> > vacuum full.\n>\n> There was a similar issue with COPY FREEZE. It was fixed relatively\n> recently -- see commit 7db0cd21.\n\nThanks for digging that up for me!\n\nMy first thought after looking a bit at the vacuum full/cluster code\nis that we could add an all_visible flag to the RewriteState and set\nit to false in heapam_relation_copy_for_cluster() in roughly the same\ncases as heap_page_is_all_visible(), then, if rwstate->all_visible is\ntrue in raw_heap_insert(), when we need to advance to the next block,\nwe set the page all visible and update the VM. Either way, we reset\nall_visible to true since we are advancing to the next block.\n\nI wrote a rough outline of that idea in the attached patches. It\ndoesn't emit WAL for the VM update or handle toast tables or anything\n(it is just a rough sketch), but I just wondered if this was in the\nright direction.\n\n- Melanie",
"msg_date": "Sun, 3 Sep 2023 16:48:15 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why doesn't Vacuum FULL update the VM"
}
] |
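The VACUUM FULL thread above turns on the visibility map's two status bits per heap page. As a hedged illustration of the bookkeeping being proposed (the constant values mirror PostgreSQL's visibilitymap.h, but the helper functions and `mark_rewritten_pages` are a toy model, not server code), the idea of setting both bits as the rewrite advances past each page could be sketched as:

```python
# Toy model of PostgreSQL's visibility map: 2 status bits per heap page,
# so 4 heap pages are packed into each visibility-map byte. The constant
# values mirror src/include/access/visibilitymap.h, but the functions are
# illustrative only, not the server's implementation.

VISIBILITYMAP_ALL_VISIBLE = 0x01
VISIBILITYMAP_ALL_FROZEN = 0x02
BITS_PER_HEAPBLOCK = 2
HEAPBLOCKS_PER_BYTE = 8 // BITS_PER_HEAPBLOCK  # = 4

def vm_set(vm: bytearray, blkno: int, flags: int) -> None:
    """Set status bits for heap block blkno."""
    byte, slot = divmod(blkno, HEAPBLOCKS_PER_BYTE)
    vm[byte] |= (flags & 0x03) << (slot * BITS_PER_HEAPBLOCK)

def vm_get(vm: bytearray, blkno: int) -> int:
    """Read status bits for heap block blkno."""
    byte, slot = divmod(blkno, HEAPBLOCKS_PER_BYTE)
    return (vm[byte] >> (slot * BITS_PER_HEAPBLOCK)) & 0x03

def mark_rewritten_pages(vm: bytearray, page_all_visible: list) -> None:
    # The idea under discussion: as the rewrite advances past each page,
    # set both bits -- but only for pages onto which no dead tuples that
    # concurrent transactions may still see were copied (Vik's caveat).
    for blkno, all_visible in enumerate(page_all_visible):
        if all_visible:
            vm_set(vm, blkno,
                   VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN)
```

With this layout, clearing or setting a page's bits touches a single byte, which is why keeping the map current during the rewrite would be cheap relative to a later vacuum pass.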
[
{
"msg_contents": "During pg_upgrade, we start the server for the old cluster which can\nallow the checkpointer to remove the WAL files. It has been noticed\nthat we do generate certain types of WAL records (e.g\nXLOG_RUNNING_XACTS, XLOG_CHECKPOINT_ONLINE, and XLOG_FPI_FOR_HINT)\neven during pg_upgrade for old cluster, so additional WAL records\ncould let checkpointer decide that certain WAL segments can be removed\n(e.g. say wal size crosses max_slot_wal_keep_size_mb) and invalidate\nthe slots. Currently, I can't see any problem with this but for future\nwork where we want to migrate logical slots during an upgrade[1], we\nneed to decide what to do for such cases. The initial idea we had was\nthat if the old cluster has some invalid slots, we won't allow an\nupgrade unless the user removes such slots or uses some option like\n--exclude-slots. It is quite possible that slots got invalidated\nduring pg_upgrade due to no user activity. Now, even though the\npossibility of the same is less I think it is worth considering what\nshould be the behavior.\n\nThe other possibilities apart from not allowing an upgrade in such a\ncase could be (a) Before starting the old cluster, we fetch the slots\ndirectly from the disk using some tool like [2] and make the decisions\nbased on that state; (b) During the upgrade, we don't allow WAL to be\nremoved if it can invalidate slots; (c) Copy/Migrate the invalid slots\nas well but for that, we need to expose an API to invalidate the\nslots; (d) somehow distinguish the slots that are invalidated during\nan upgrade and then simply copy such slots because anyway we ensure\nthat all the WAL required by slot is sent before shutdown.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB58664C81887B3AF2EB6B16E3F5939%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/flat/CALj2ACW0rV5gWK8A3m6_X62qH%2BVfaq5hznC%3Di0R5Wojt5%2Byhyw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 2 Sep 2023 10:08:51 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Impact of checkpointer during pg_upgrade"
},
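The scenario Amit describes hinges on the checkpointer invalidating slots once the WAL they pin exceeds max_slot_wal_keep_size. A minimal sketch of that decision follows; all names, structures, and numbers here are illustrative simplifications, not taken from the server:

```python
# Hedged sketch (not PostgreSQL source): a simplified model of the
# checkpointer invalidating slots whose restart_lsn has fallen behind
# the retention horizon implied by max_slot_wal_keep_size.

def keep_horizon(current_lsn: int, max_slot_wal_keep_size: int) -> int:
    """Oldest LSN the checkpointer is still willing to retain for slots."""
    return max(0, current_lsn - max_slot_wal_keep_size)

def invalidate_overdue_slots(slots: dict, current_lsn: int,
                             max_slot_wal_keep_size: int) -> list:
    """Clear restart_lsn (modelling invalidation) for every slot older
    than the keep horizon; return the invalidated slot names."""
    horizon = keep_horizon(current_lsn, max_slot_wal_keep_size)
    invalidated = []
    for name, slot in slots.items():
        lsn = slot["restart_lsn"]
        if lsn is not None and lsn < horizon:
            slot["restart_lsn"] = None  # models invalidation
            invalidated.append(name)
    return invalidated
```

The point of the thread is visible in this model: even background WAL generated while pg_upgrade has the old cluster running can push `current_lsn` forward far enough to invalidate an otherwise idle slot.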
{
"msg_contents": "On Sat, Sep 2, 2023 at 10:09 AM Amit Kapila <[email protected]> wrote:\n>\n> During pg_upgrade, we start the server for the old cluster which can\n> allow the checkpointer to remove the WAL files. It has been noticed\n> that we do generate certain types of WAL records (e.g\n> XLOG_RUNNING_XACTS, XLOG_CHECKPOINT_ONLINE, and XLOG_FPI_FOR_HINT)\n> even during pg_upgrade for old cluster, so additional WAL records\n> could let checkpointer decide that certain WAL segments can be removed\n> (e.g. say wal size crosses max_slot_wal_keep_size_mb) and invalidate\n> the slots. Currently, I can't see any problem with this but for future\n> work where we want to migrate logical slots during an upgrade[1], we\n> need to decide what to do for such cases. The initial idea we had was\n> that if the old cluster has some invalid slots, we won't allow an\n> upgrade unless the user removes such slots or uses some option like\n> --exclude-slots. It is quite possible that slots got invalidated\n> during pg_upgrade due to no user activity. Now, even though the\n> possibility of the same is less I think it is worth considering what\n> should be the behavior.\n\nRight\n\n> The other possibilities apart from not allowing an upgrade in such a\n> case could be (a) Before starting the old cluster, we fetch the slots\n> directly from the disk using some tool like [2] and make the decisions\n> based on that state;\n\nOkay, so IIUC along with dumping the slot data we also need to dump\nthe latest checkpoint LSN because during upgrade we do check that the\nconfirmed flush lsn for all the slots should be the same as the latest\ncheckpoint. Yeah but I think we could work this out.\n\n (b) During the upgrade, we don't allow WAL to be\n> removed if it can invalidate slots; (c) Copy/Migrate the invalid slots\n> as well but for that, we need to expose an API to invalidate the\n> slots;\n\n (d) somehow distinguish the slots that are invalidated during\n> an upgrade and then simply copy such slots because anyway we ensure\n> that all the WAL required by slot is sent before shutdown.\n\nYeah this could also be an option, although we need to think the\nmechanism of distinguishing those slots looks clean and fit well with\nother architecture.\n\nAlternatively can't we just ignore all the invalidated slots and do\nnot migrate them at all. Because such scenarios are very rare that\nsome of the segments are getting dropped just during the upgrade time\nand that too from the old cluster so in such cases not migrating the\nslots which are invalidated should be fine no?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 2 Sep 2023 18:12:01 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Sat, Sep 2, 2023 at 6:12 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Sat, Sep 2, 2023 at 10:09 AM Amit Kapila <[email protected]> wrote:\n>\n> > The other possibilities apart from not allowing an upgrade in such a\n> > case could be (a) Before starting the old cluster, we fetch the slots\n> > directly from the disk using some tool like [2] and make the decisions\n> > based on that state;\n>\n> Okay, so IIUC along with dumping the slot data we also need to dump\n> the latest checkpoint LSN because during upgrade we do check that the\n> confirmed flush lsn for all the slots should be the same as the latest\n> checkpoint. Yeah but I think we could work this out.\n>\n\nWe already have the latest checkpoint LSN information from\npg_controldata. I think we can use that as the patch proposed in the\nthread [1] is doing now. Do you have something else in mind?\n\n> (b) During the upgrade, we don't allow WAL to be\n> > removed if it can invalidate slots; (c) Copy/Migrate the invalid slots\n> > as well but for that, we need to expose an API to invalidate the\n> > slots;\n>\n> (d) somehow distinguish the slots that are invalidated during\n> > an upgrade and then simply copy such slots because anyway we ensure\n> > that all the WAL required by slot is sent before shutdown.\n>\n> Yeah this could also be an option, although we need to think the\n> mechanism of distinguishing those slots looks clean and fit well with\n> other architecture.\n>\n\nIf we want to do this we probably need to maintain a flag in the slot\nindicating that it was invalidated during an upgrade and then use the\nsame flag in the upgrade to check the validity of slots. I think such\na flag needs to be maintained at the same level as\nReplicationSlotInvalidationCause to avoid any inconsistency among\nthose.\n\n> Alternatively can't we just ignore all the invalidated slots and do\n> not migrate them at all. Because such scenarios are very rare that\n> some of the segments are getting dropped just during the upgrade time\n> and that too from the old cluster so in such cases not migrating the\n> slots which are invalidated should be fine no?\n>\n\nI also think that such a scenario would be very rare but are you\nsuggesting to ignore all invalidated slots or just the slots that got\ninvalidated during an upgrade? BTW, if we simply ignore invalidated\nslots then users won't be able to drop corresponding subscriptions\nafter an upgrade. They need to first use the Alter Subscription\ncommand to disassociate the slot (by using the command ALTER\nSUBSCRIPTION ... SET (slot_name = NONE)) and then drop the\nsubscription similar to what we suggest in other cases as described in\nthe Replication Slot Management section in docs [2]. Also, if users\nreally want to continue that subscription by syncing corresponding\ntables then they can recreate the slots manually and then continue\nwith replication. So, if we want to do this then we will just rely on\nthe current state (at the time we query for them in the old cluster)\nof slots, and even if they later got invalidated during the upgrade,\nwe will just ignore such invalidations as anyway the required WAL is\nalready copied.\n\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB58664C81887B3AF2EB6B16E3F5939%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/docs/devel/logical-replication-subscription.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 08:40:53 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Mon, Sep 4, 2023 at 8:41 AM Amit Kapila <[email protected]> wrote:\n>\n> On Sat, Sep 2, 2023 at 6:12 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Sat, Sep 2, 2023 at 10:09 AM Amit Kapila <[email protected]> wrote:\n> >\n> > > The other possibilities apart from not allowing an upgrade in such a\n> > > case could be (a) Before starting the old cluster, we fetch the slots\n> > > directly from the disk using some tool like [2] and make the decisions\n> > > based on that state;\n> >\n> > Okay, so IIUC along with dumping the slot data we also need to dump\n> > the latest checkpoint LSN because during upgrade we do check that the\n> > confirmed flush lsn for all the slots should be the same as the latest\n> > checkpoint. Yeah but I think we could work this out.\n> >\n> We already have the latest checkpoint LSN information from\n> pg_controldata. I think we can use that as the patch proposed in the\n> thread [1] is doing now. Do you have something else in mind?\n\nI think I did not understood the complete proposal. And what I meant\nis that if we dump the slot before we start the cluster thats fine.\nBut then later after starting the old cluster if we run some query\nlike we do in check_old_cluster_for_valid_slots() then thats not\nright, because there is gap between the status of the slots what we\ndumped before starting the cluster and what we are checking after the\ncluster, so there is not point of that check right?\n\n> > (b) During the upgrade, we don't allow WAL to be\n> > > removed if it can invalidate slots; (c) Copy/Migrate the invalid slots\n> > > as well but for that, we need to expose an API to invalidate the\n> > > slots;\n> >\n> > (d) somehow distinguish the slots that are invalidated during\n> > > an upgrade and then simply copy such slots because anyway we ensure\n> > > that all the WAL required by slot is sent before shutdown.\n> >\n> > Yeah this could also be an option, although we need to think the\n> > mechanism of distinguishing those slots looks clean and fit well with\n> > other architecture.\n> >\n>\n> If we want to do this we probably need to maintain a flag in the slot\n> indicating that it was invalidated during an upgrade and then use the\n> same flag in the upgrade to check the validity of slots. I think such\n> a flag needs to be maintained at the same level as\n> ReplicationSlotInvalidationCause to avoid any inconsistency among\n> those.\n\nI think we can do better, like we can just read the latest\ncheckpoint's LSN before starting the old cluster. And now while\nchecking the slot can't we check if the the slot is invalidated then\ntheir confirmed_flush_lsn >= the latest_checkpoint_lsn we preserved\nbefore starting the cluster because if so then those slot might have\ngot invalidated during the upgrade no?\n\n>\n> > Alternatively can't we just ignore all the invalidated slots and do\n> > not migrate them at all. Because such scenarios are very rare that\n> > some of the segments are getting dropped just during the upgrade time\n> > and that too from the old cluster so in such cases not migrating the\n> > slots which are invalidated should be fine no?\n> >\n>\n> I also think that such a scenario would be very rare but are you\n> suggesting to ignore all invalidated slots or just the slots that got\n> invalidated during an upgrade? BTW, if we simply ignore invalidated\n> slots then users won't be able to drop corresponding subscriptions\n> after an upgrade. They need to first use the Alter Subscription\n> command to disassociate the slot (by using the command ALTER\n> SUBSCRIPTION ... SET (slot_name = NONE)) and then drop the\n> subscription similar to what we suggest in other cases as described in\n> the Replication Slot Management section in docs [2].\n\nYeah I think thats not the best thing to do.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Sep 2023 10:33:20 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Mon, Sep 4, 2023 at 10:33 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Mon, Sep 4, 2023 at 8:41 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Sat, Sep 2, 2023 at 6:12 PM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > On Sat, Sep 2, 2023 at 10:09 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > > The other possibilities apart from not allowing an upgrade in such a\n> > > > case could be (a) Before starting the old cluster, we fetch the slots\n> > > > directly from the disk using some tool like [2] and make the decisions\n> > > > based on that state;\n> > >\n> > > Okay, so IIUC along with dumping the slot data we also need to dump\n> > > the latest checkpoint LSN because during upgrade we do check that the\n> > > confirmed flush lsn for all the slots should be the same as the latest\n> > > checkpoint. Yeah but I think we could work this out.\n> > >\n> > We already have the latest checkpoint LSN information from\n> > pg_controldata. I think we can use that as the patch proposed in the\n> > thread [1] is doing now. Do you have something else in mind?\n>\n> I think I did not understood the complete proposal. And what I meant\n> is that if we dump the slot before we start the cluster thats fine.\n> But then later after starting the old cluster if we run some query\n> like we do in check_old_cluster_for_valid_slots() then thats not\n> right, because there is gap between the status of the slots what we\n> dumped before starting the cluster and what we are checking after the\n> cluster, so there is not point of that check right?\n>\n\nThat's right but if we do read slots from disk, we preserve those in\nthe memory and use that information instead of querying it again in\ncheck_old_cluster_for_valid_slots().\n\n> > > (b) During the upgrade, we don't allow WAL to be\n> > > > removed if it can invalidate slots; (c) Copy/Migrate the invalid slots\n> > > > as well but for that, we need to expose an API to invalidate the\n> > > > slots;\n> > >\n> > > (d) somehow distinguish the slots that are invalidated during\n> > > > an upgrade and then simply copy such slots because anyway we ensure\n> > > > that all the WAL required by slot is sent before shutdown.\n> > >\n> > > Yeah this could also be an option, although we need to think the\n> > > mechanism of distinguishing those slots looks clean and fit well with\n> > > other architecture.\n> > >\n> >\n> > If we want to do this we probably need to maintain a flag in the slot\n> > indicating that it was invalidated during an upgrade and then use the\n> > same flag in the upgrade to check the validity of slots. I think such\n> > a flag needs to be maintained at the same level as\n> > ReplicationSlotInvalidationCause to avoid any inconsistency among\n> > those.\n>\n> I think we can do better, like we can just read the latest\n> checkpoint's LSN before starting the old cluster. And now while\n> checking the slot can't we check if the the slot is invalidated then\n> their confirmed_flush_lsn >= the latest_checkpoint_lsn we preserved\n> before starting the cluster because if so then those slot might have\n> got invalidated during the upgrade no?\n>\n\nIsn't that possible only if we update confirmend_flush LSN while\ninvalidating? Otherwise, how the check you are proposing can succeed?\n\n> >\n> > > Alternatively can't we just ignore all the invalidated slots and do\n> > > not migrate them at all. Because such scenarios are very rare that\n> > > some of the segments are getting dropped just during the upgrade time\n> > > and that too from the old cluster so in such cases not migrating the\n> > > slots which are invalidated should be fine no?\n> > >\n> >\n> > I also think that such a scenario would be very rare but are you\n> > suggesting to ignore all invalidated slots or just the slots that got\n> > invalidated during an upgrade? BTW, if we simply ignore invalidated\n> > slots then users won't be able to drop corresponding subscriptions\n> > after an upgrade. They need to first use the Alter Subscription\n> > command to disassociate the slot (by using the command ALTER\n> > SUBSCRIPTION ... SET (slot_name = NONE)) and then drop the\n> > subscription similar to what we suggest in other cases as described in\n> > the Replication Slot Management section in docs [2].\n>\n> Yeah I think thats not the best thing to do.\n>\n\nRight, but OTOH, there is an argument to just document it instead of\nhaving additional complexities in the code as such cases should be\nrare.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 11:18:24 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Mon, Sep 4, 2023 at 11:18 AM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Sep 4, 2023 at 10:33 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Mon, Sep 4, 2023 at 8:41 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Sat, Sep 2, 2023 at 6:12 PM Dilip Kumar <[email protected]> wrote:\n> > > >\n> > > > On Sat, Sep 2, 2023 at 10:09 AM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > > The other possibilities apart from not allowing an upgrade in such a\n> > > > > case could be (a) Before starting the old cluster, we fetch the slots\n> > > > > directly from the disk using some tool like [2] and make the decisions\n> > > > > based on that state;\n> > > >\n> > > > Okay, so IIUC along with dumping the slot data we also need to dump\n> > > > the latest checkpoint LSN because during upgrade we do check that the\n> > > > confirmed flush lsn for all the slots should be the same as the latest\n> > > > checkpoint. Yeah but I think we could work this out.\n> > > >\n> > > We already have the latest checkpoint LSN information from\n> > > pg_controldata. I think we can use that as the patch proposed in the\n> > > thread [1] is doing now. Do you have something else in mind?\n> >\n> > I think I did not understood the complete proposal. And what I meant\n> > is that if we dump the slot before we start the cluster thats fine.\n> > But then later after starting the old cluster if we run some query\n> > like we do in check_old_cluster_for_valid_slots() then thats not\n> > right, because there is gap between the status of the slots what we\n> > dumped before starting the cluster and what we are checking after the\n> > cluster, so there is not point of that check right?\n> >\n>\n> That's right but if we do read slots from disk, we preserve those in\n> the memory and use that information instead of querying it again in\n> check_old_cluster_for_valid_slots().\n>\n> > > > (b) During the upgrade, we don't allow WAL to be\n> > > > > removed if it can invalidate slots; (c) Copy/Migrate the invalid slots\n> > > > > as well but for that, we need to expose an API to invalidate the\n> > > > > slots;\n> > > >\n> > > > (d) somehow distinguish the slots that are invalidated during\n> > > > > an upgrade and then simply copy such slots because anyway we ensure\n> > > > > that all the WAL required by slot is sent before shutdown.\n> > > >\n> > > > Yeah this could also be an option, although we need to think the\n> > > > mechanism of distinguishing those slots looks clean and fit well with\n> > > > other architecture.\n> > > >\n> > >\n> > > If we want to do this we probably need to maintain a flag in the slot\n> > > indicating that it was invalidated during an upgrade and then use the\n> > > same flag in the upgrade to check the validity of slots. I think such\n> > > a flag needs to be maintained at the same level as\n> > > ReplicationSlotInvalidationCause to avoid any inconsistency among\n> > > those.\n> >\n> > I think we can do better, like we can just read the latest\n> > checkpoint's LSN before starting the old cluster. And now while\n> > checking the slot can't we check if the the slot is invalidated then\n> > their confirmed_flush_lsn >= the latest_checkpoint_lsn we preserved\n> > before starting the cluster because if so then those slot might have\n> > got invalidated during the upgrade no?\n> >\n>\n> Isn't that possible only if we update confirmend_flush LSN while\n> invalidating? Otherwise, how the check you are proposing can succeed?\n\nI am not suggesting to compare the confirmend_flush_lsn to the latest\ncheckpoint LSN instead I am suggesting that before starting the\ncluster we get the location of the latest checkpoint LSN that should\nbe the shutdown checkpoint LSN. So now also in [1] we check that\nconfirmed flush lsn should be equal to the latest checkpoint lsn. So\nthe only problem is that after we restart the cluster during the\nupgrade we might invalidate some of the slots which are perfectly fine\nto migrate and we want to identify those slots. So if we know the the\nLSN of the shutdown checkpoint before the cluster started then we can\nperform a additional checks on all the invalidated slots that their\nconfirmed lsn >= shutdown checkpoint lsn we preserved before\nrestarting the cluster (not the latest checkpoint lsn) then those\nslots got invalidated only after we started the cluster for upgrade?\nIs there any loophole in this theory? This theory is based on the\nassumption that the confirmed flush lsn are not moving forward for the\nalready invalidated slots that means the slot which got invalidated\nbefore we shutdown for upgrade will have confirm flush lsn value <\nshutdown checkpoint and the slots which got invalidated during the\nupgrade will have confirm flush lsn at least equal to the shutdown\ncheckpoint.\n\n[1] https://www.postgresql.org/message-id/TYAPR01MB5866F7D8ED15BA1E8E4A2AB0F5E4A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Sep 2023 13:41:48 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Mon, Sep 4, 2023 at 1:41 PM Dilip Kumar <[email protected]> wrote:\n>\n> > > I think we can do better, like we can just read the latest\n> > > checkpoint's LSN before starting the old cluster. And now while\n> > > checking the slot can't we check if the the slot is invalidated then\n> > > their confirmed_flush_lsn >= the latest_checkpoint_lsn we preserved\n> > > before starting the cluster because if so then those slot might have\n> > > got invalidated during the upgrade no?\n> > >\n> >\n> > Isn't that possible only if we update confirmend_flush LSN while\n> > invalidating? Otherwise, how the check you are proposing can succeed?\n>\n> I am not suggesting to compare the confirmend_flush_lsn to the latest\n> checkpoint LSN instead I am suggesting that before starting the\n> cluster we get the location of the latest checkpoint LSN that should\n> be the shutdown checkpoint LSN. So now also in [1] we check that\n> confirmed flush lsn should be equal to the latest checkpoint lsn. So\n> the only problem is that after we restart the cluster during the\n> upgrade we might invalidate some of the slots which are perfectly fine\n> to migrate and we want to identify those slots. So if we know the the\n> LSN of the shutdown checkpoint before the cluster started then we can\n> perform a additional checks on all the invalidated slots that their\n> confirmed lsn >= shutdown checkpoint lsn we preserved before\n> restarting the cluster (not the latest checkpoint lsn) then those\n> slots got invalidated only after we started the cluster for upgrade?\n> Is there any loophole in this theory? This theory is based on the\n> assumption that the confirmed flush lsn are not moving forward for the\n> already invalidated slots that means the slot which got invalidated\n> before we shutdown for upgrade will have confirm flush lsn value <\n> shutdown checkpoint and the slots which got invalidated during the\n> upgrade will have confirm flush lsn at least equal to the shutdown\n> checkpoint.\n\nSaid that there is a possibility that some of the slots which got\ninvalidated even on the previous checkpoint might get the same LSN as\nthe slot which got invalidated later if there is no activity between\nthese two checkpoints. So if we go with this approach then there is\nsome risk of migrating some of the slots which were already\ninvalidated even before the shutdown checkpoint.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Sep 2023 16:18:51 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
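Dilip's heuristic in the two messages above can be sketched as a small classification routine. This is an illustrative model only (the function name and tuple layout are assumptions, not PostgreSQL code), and it carries the equal-LSN ambiguity he points out:

```python
# Hedged sketch of the proposed check: capture the shutdown checkpoint's
# LSN before starting the old cluster, then treat an invalidated slot as
# migratable only if its confirmed_flush_lsn caught up to that LSN --
# i.e. it can only have been invalidated after the final shutdown, when
# all of its WAL had already been sent.

def classify_slots(slots, shutdown_ckpt_lsn):
    """slots: iterable of (name, invalidated, confirmed_flush_lsn).
    Returns (migratable, rejected) lists of slot names."""
    migratable, rejected = [], []
    for name, invalidated, confirmed_flush_lsn in slots:
        if not invalidated:
            migratable.append(name)
        elif confirmed_flush_lsn >= shutdown_ckpt_lsn:
            # Presumed invalidated only during the upgrade itself.
            # Caveat from the thread: if there was no activity between
            # the last two checkpoints, a slot invalidated *before* the
            # shutdown can also pass this test, so the check is not
            # airtight.
            migratable.append(name)
        else:
            rejected.append(name)
    return migratable, rejected
```

The caveat in the comment is exactly why the thread later leans toward treating all invalidated slots uniformly rather than relying on this LSN comparison.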
{
"msg_contents": "On Mon, Sep 4, 2023 at 4:19 PM Dilip Kumar <[email protected]> wrote:\n>\n> Said that there is a possibility that some of the slots which got\n> invalidated even on the previous checkpoint might get the same LSN as\n> the slot which got invalidated later if there is no activity between\n> these two checkpoints. So if we go with this approach then there is\n> some risk of migrating some of the slots which were already\n> invalidated even before the shutdown checkpoint.\n>\n\nI think even during the shutdown checkpoint, after writing shutdown\ncheckpoint WAL, we can invalidate some slots that in theory are safe\nto migrate/copy because all the WAL for those slots would also have\nbeen sent. So, those would be similar to what we invalidate during the\nupgrade, no? If so, I think it is better to have the same behavior for\ninvalidated slots irrespective of the time it gets invalidated. We can\neither give an error for such slots during the upgrade (which means\ndisallow the upgrade) or simply ignore such slots during the upgrade.\nI would prefer ERROR but if we want to ignore such slots, we can\nprobably inform the user in some way about ignored slots, so that she\ncan later drop corresponding subscritions or recreate such slots and\ndo the required sync-up to continue the replication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 5 Sep 2023 09:38:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 9:38 AM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Sep 4, 2023 at 4:19 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > Said that there is a possibility that some of the slots which got\n> > invalidated even on the previous checkpoint might get the same LSN as\n> > the slot which got invalidated later if there is no activity between\n> > these two checkpoints. So if we go with this approach then there is\n> > some risk of migrating some of the slots which were already\n> > invalidated even before the shutdown checkpoint.\n> >\n>\n> I think even during the shutdown checkpoint, after writing shutdown\n> checkpoint WAL, we can invalidate some slots that in theory are safe\n> to migrate/copy because all the WAL for those slots would also have\n> been sent. So, those would be similar to what we invalidate during the\n> upgrade, no?\n\nThats correct\n\n If so, I think it is better to have the same behavior for\n> invalidated slots irrespective of the time it gets invalidated. We can\n> either give an error for such slots during the upgrade (which means\n> disallow the upgrade) or simply ignore such slots during the upgrade.\n> I would prefer ERROR but if we want to ignore such slots, we can\n> probably inform the user in some way about ignored slots, so that she\n> can later drop corresponding subscritions or recreate such slots and\n> do the required sync-up to continue the replication.\n\nEarlier I was thinking that ERRORing out is better so that the user\ncan take necessary action for the invalidated slots and then retry\nupgrade. But thinking again I could not find what are the advantages\nof this because if we error out then also users need to restart the\nold cluster again and have to drop the corresponding subscriptions\nOTOH if we allow the upgrade by ignoring the slots then also the user\nhas to take similar actions on the new cluster? So what's the\nadvantage of erroring out over upgrading? 
I see a clear advantage of\nupgrading is that the user wants to upgrade and that's successful\nwithout reattempting. If we say that when we error out there\nis some option for the user to salvage those invalidated slots and he can\nsomehow migrate those slots as well by retrying the upgrade, then it makes\nsense to error out and let the user take some action on the old cluster, but\nif all he has to do is to drop the subscription or recreate the slot\nin both the cases then letting the upgrade pass is a better option at\nleast IMHO.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 10:09:19 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 10:09 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Tue, Sep 5, 2023 at 9:38 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Sep 4, 2023 at 4:19 PM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > Said that there is a possibility that some of the slots which got\n> > > invalidated even on the previous checkpoint might get the same LSN as\n> > > the slot which got invalidated later if there is no activity between\n> > > these two checkpoints. So if we go with this approach then there is\n> > > some risk of migrating some of the slots which were already\n> > > invalidated even before the shutdown checkpoint.\n> > >\n> >\n> > I think even during the shutdown checkpoint, after writing shutdown\n> > checkpoint WAL, we can invalidate some slots that in theory are safe\n> > to migrate/copy because all the WAL for those slots would also have\n> > been sent. So, those would be similar to what we invalidate during the\n> > upgrade, no?\n>\n> Thats correct\n>\n> If so, I think it is better to have the same behavior for\n> > invalidated slots irrespective of the time it gets invalidated. We can\n> > either give an error for such slots during the upgrade (which means\n> > disallow the upgrade) or simply ignore such slots during the upgrade.\n> > I would prefer ERROR but if we want to ignore such slots, we can\n> > probably inform the user in some way about ignored slots, so that she\n> > can later drop corresponding subscritions or recreate such slots and\n> > do the required sync-up to continue the replication.\n>\n> Earlier I was thinking that ERRORing out is better so that the user\n> can take necessary action for the invalidated slots and then retry\n> upgrade. 
But thinking again I could not find what are the advantages\n> of this because if we error out then also users need to restart the\n> old cluster again and have to drop the corresponding subscriptions\n> OTOH if we allow the upgrade by ignoring the slots then also the user\n> has to take similar actions on the new cluster? So what's the\n> advantage of erroring out over upgrading?\n>\n\nThe advantage is that we avoid inconvenience caused to users because\nDrop Subscription will be unsuccessful as the corresponding slots are\nnot present. So users first need to disassociate slots for the\nsubscription and then drop the subscription. Also, I am not sure\nleaving behind some slots doesn't have any other impact, otherwise,\nwhy don't we drop such slots from time to time after they are marked\ninvalidated during normal operation? If users really want to leave\nbehind such invalidated slots after upgrade, we can even think of\nproviding some option like \"exclude_invalid_logical_slots\".\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 5 Sep 2023 10:55:06 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 10:55 AM Amit Kapila <[email protected]> wrote:\n>\n> > Earlier I was thinking that ERRORing out is better so that the user\n> > can take necessary action for the invalidated slots and then retry\n> > upgrade. But thinking again I could not find what are the advantages\n> > of this because if we error out then also users need to restart the\n> > old cluster again and have to drop the corresponding subscriptions\n> > OTOH if we allow the upgrade by ignoring the slots then also the user\n> > has to take similar actions on the new cluster? So what's the\n> > advantage of erroring out over upgrading?\n> >\n>\n> The advantage is that we avoid inconvenience caused to users because\n> Drop Subscription will be unsuccessful as the corresponding slots are\n> not present. So users first need to disassociate slots for the\n> subscription and then drop the subscription.\n\nYeah that's a valid argument for erroring out.\n\n Also, I am not sure\n> leaving behind some slots doesn't have any other impact, otherwise,\n> why don't we drop such slots from time to time after they are marked\n> invalidated during normal operation?\n\nOkay, I am also not sure of that.\n\n If users really want to leave\n> behind such invalidated slots after upgrade, we can even think of\n> providing some option like \"exclude_invalid_logical_slots\".\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 12:30:54 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "(Just dumping what I have in mind while reading the thread.)\n\nOn Sat, Sep 02, 2023 at 10:08:51AM +0530, Amit Kapila wrote:\n> During pg_upgrade, we start the server for the old cluster which can\n> allow the checkpointer to remove the WAL files. It has been noticed\n> that we do generate certain types of WAL records (e.g\n> XLOG_RUNNING_XACTS, XLOG_CHECKPOINT_ONLINE, and XLOG_FPI_FOR_HINT)\n> even during pg_upgrade for old cluster, so additional WAL records\n> could let checkpointer decide that certain WAL segments can be removed\n> (e.g. say wal size crosses max_slot_wal_keep_size_mb) and invalidate\n> the slots. Currently, I can't see any problem with this but for future\n> work where we want to migrate logical slots during an upgrade[1], we\n> need to decide what to do for such cases. The initial idea we had was\n> that if the old cluster has some invalid slots, we won't allow an\n> upgrade unless the user removes such slots or uses some option like\n> --exclude-slots. It is quite possible that slots got invalidated\n> during pg_upgrade due to no user activity. 
Now, even though the\n> possibility of the same is less I think it is worth considering what\n> should be the behavior.\n> \n> The other possibilities apart from not allowing an upgrade in such a\n> case could be (a) Before starting the old cluster, we fetch the slots\n> directly from the disk using some tool like [2] and make the decisions\n> based on that state; (b) During the upgrade, we don't allow WAL to be\n> removed if it can invalidate slots; (c) Copy/Migrate the invalid slots\n> as well but for that, we need to expose an API to invalidate the\n> slots; (d) somehow distinguish the slots that are invalidated during\n> an upgrade and then simply copy such slots because anyway we ensure\n> that all the WAL required by slot is sent before shutdown.\n\nChecking for any invalid slots at the beginning of the upgrade and\ncomplain sounds like a good thing to do, because these are not\nexpected to begin with, no? That's similar to a pre-check requirement\nthat the slots should have fed the subscribers with all the data\navailable up to the shutdown checkpoint when the publisher was stopped\nbefore running pg_upgrade. So (a) is a good idea to prevent an\nupgrade based on a state we don't expect from the start, as something\nin check.c, I assume.\n\nOn top of that, (b) sounds like a good idea to me anyway to be more\ndefensive. But couldn't we do that just like we do for autovacuum and\nforce the GUCs that could remove the slot's WAL to not do their work\ninstead? An upgrade is a special state of the cluster, and I'm not\nmuch into painting more checks based on IsBinaryUpgrade to prevent WAL\nsegments to be removed while we can have a full control of what we\nwant with just the GUCs that force the hand of the slots. 
That just\nseems simpler to me, and the WAL recycling logic is already complex\nparticularly with all the GUCs popping lately to force some conditions\nto do the WAL recycling and/or removal.\n \nDuring the upgrade, if some segments are removed and some of the slots\nwe expect to still be valid once the upgrade is done are marked as\ninvalid and become unusable, I think that we should just copy these\nslots, but also update their state data so as they can still be used\nwith what we expect, as these could be wanted by the subscribers.\nThat's what you mean with (d), I assume. Do you think that it would\nbe possible to force the slot's data on the publishers so as they use\na local LSN based on the new WAL we are resetting at? At least that\nseems more friendly to me as this limits the set of manipulations to\ndo on the slots for the end-user. The protection from (b) ought to be\nenough, in itself. (c) overlaps with (a), especially if we want to be\nable to read or write some of the slot's data offline.\n--\nMichael",
"msg_date": "Thu, 7 Sep 2023 16:25:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 12:55 PM Michael Paquier <[email protected]> wrote:\n>\n> (Just dumping what I have in mind while reading the thread.)\n>\n> On Sat, Sep 02, 2023 at 10:08:51AM +0530, Amit Kapila wrote:\n> > During pg_upgrade, we start the server for the old cluster which can\n> > allow the checkpointer to remove the WAL files. It has been noticed\n> > that we do generate certain types of WAL records (e.g\n> > XLOG_RUNNING_XACTS, XLOG_CHECKPOINT_ONLINE, and XLOG_FPI_FOR_HINT)\n> > even during pg_upgrade for old cluster, so additional WAL records\n> > could let checkpointer decide that certain WAL segments can be removed\n> > (e.g. say wal size crosses max_slot_wal_keep_size_mb) and invalidate\n> > the slots. Currently, I can't see any problem with this but for future\n> > work where we want to migrate logical slots during an upgrade[1], we\n> > need to decide what to do for such cases. The initial idea we had was\n> > that if the old cluster has some invalid slots, we won't allow an\n> > upgrade unless the user removes such slots or uses some option like\n> > --exclude-slots. It is quite possible that slots got invalidated\n> > during pg_upgrade due to no user activity. 
Now, even though the\n> > possibility of the same is less I think it is worth considering what\n> > should be the behavior.\n> >\n> > The other possibilities apart from not allowing an upgrade in such a\n> > case could be (a) Before starting the old cluster, we fetch the slots\n> > directly from the disk using some tool like [2] and make the decisions\n> > based on that state; (b) During the upgrade, we don't allow WAL to be\n> > removed if it can invalidate slots; (c) Copy/Migrate the invalid slots\n> > as well but for that, we need to expose an API to invalidate the\n> > slots; (d) somehow distinguish the slots that are invalidated during\n> > an upgrade and then simply copy such slots because anyway we ensure\n> > that all the WAL required by slot is sent before shutdown.\n>\n> Checking for any invalid slots at the beginning of the upgrade and\n> complain sounds like a good thing to do, because these are not\n> expected to begin with, no? That's similar to a pre-check requirement\n> that the slots should have fed the subscribers with all the data\n> available up to the shutdown checkpoint when the publisher was stopped\n> before running pg_upgrade. So (a) is a good idea to prevent an\n> upgrade based on a state we don't expect from the start, as something\n> in check.c, I assume.\n>\n> On top of that, (b) sounds like a good idea to me anyway to be more\n> defensive. But couldn't we do that just like we do for autovacuum and\n> force the GUCs that could remove the slot's WAL to not do their work\n> instead?\n>\n\nI think if we just make max_slot_wal_keep_size to -1 that should be\nsufficient to not let any slots get invalidated during upgrade. Do you\nhave anything else in mind?\n\n An upgrade is a special state of the cluster, and I'm not\n> much into painting more checks based on IsBinaryUpgrade to prevent WAL\n> segments to be removed while we can have a full control of what we\n> want with just the GUCs that force the hand of the slots. 
That just\n> seems simpler to me, and the WAL recycling logic is already complex\n> particularly with all the GUCs popping lately to force some conditions\n> to do the WAL recycling and/or removal.\n>\n> During the upgrade, if some segments are removed and some of the slots\n> we expect to still be valid once the upgrade is done are marked as\n> invalid and become unusable, I think that we should just copy these\n> slots, but also update their state data so as they can still be used\n> with what we expect, as these could be wanted by the subscribers.\n> That's what you mean with (d), I assume. Do you think that it would\n> be possible to force the slot's data on the publishers so as they use\n> a local LSN based on the new WAL we are resetting at?\n>\n\nYes.\n\n>\n> At least that\n> seems more friendly to me as this limits the set of manipulations to\n> do on the slots for the end-user. The protection from (b) ought to be\n> enough, in itself. (c) overlaps with (a), especially if we want to be\n> able to read or write some of the slot's data offline.\n>\n\nIf we do (b) either via GUCs or IsBinaryUpgrade check we don't need to\ndo any of (a), (b), or (d). I feel that would be a minimal and\nsufficient fix to prevent any side impact of checkpointer on slots\nduring an upgrade.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Sep 2023 15:33:52 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 3:34 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Sep 7, 2023 at 12:55 PM Michael Paquier <[email protected]> wrote:\n> >\n> > (Just dumping what I have in mind while reading the thread.)\n> >\n> > On Sat, Sep 02, 2023 at 10:08:51AM +0530, Amit Kapila wrote:\n> > > During pg_upgrade, we start the server for the old cluster which can\n> > > allow the checkpointer to remove the WAL files. It has been noticed\n> > > that we do generate certain types of WAL records (e.g\n> > > XLOG_RUNNING_XACTS, XLOG_CHECKPOINT_ONLINE, and XLOG_FPI_FOR_HINT)\n> > > even during pg_upgrade for old cluster, so additional WAL records\n> > > could let checkpointer decide that certain WAL segments can be removed\n> > > (e.g. say wal size crosses max_slot_wal_keep_size_mb) and invalidate\n> > > the slots. Currently, I can't see any problem with this but for future\n> > > work where we want to migrate logical slots during an upgrade[1], we\n> > > need to decide what to do for such cases. The initial idea we had was\n> > > that if the old cluster has some invalid slots, we won't allow an\n> > > upgrade unless the user removes such slots or uses some option like\n> > > --exclude-slots. It is quite possible that slots got invalidated\n> > > during pg_upgrade due to no user activity. 
Now, even though the\n> > > possibility of the same is less I think it is worth considering what\n> > > should be the behavior.\n> > >\n> > > The other possibilities apart from not allowing an upgrade in such a\n> > > case could be (a) Before starting the old cluster, we fetch the slots\n> > > directly from the disk using some tool like [2] and make the decisions\n> > > based on that state; (b) During the upgrade, we don't allow WAL to be\n> > > removed if it can invalidate slots; (c) Copy/Migrate the invalid slots\n> > > as well but for that, we need to expose an API to invalidate the\n> > > slots; (d) somehow distinguish the slots that are invalidated during\n> > > an upgrade and then simply copy such slots because anyway we ensure\n> > > that all the WAL required by slot is sent before shutdown.\n> >\n> > Checking for any invalid slots at the beginning of the upgrade and\n> > complain sounds like a good thing to do, because these are not\n> > expected to begin with, no? That's similar to a pre-check requirement\n> > that the slots should have fed the subscribers with all the data\n> > available up to the shutdown checkpoint when the publisher was stopped\n> > before running pg_upgrade. So (a) is a good idea to prevent an\n> > upgrade based on a state we don't expect from the start, as something\n> > in check.c, I assume.\n> >\n> > On top of that, (b) sounds like a good idea to me anyway to be more\n> > defensive. But couldn't we do that just like we do for autovacuum and\n> > force the GUCs that could remove the slot's WAL to not do their work\n> > instead?\n> >\n>\n> I think if we just make max_slot_wal_keep_size to -1 that should be\n> sufficient to not let any slots get invalidated during upgrade. Do you\n> have anything else in mind?\n\nThis seems like a good solution to the problem.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Sep 2023 15:42:29 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Thu, Sep 07, 2023 at 03:33:52PM +0530, Amit Kapila wrote:\n> I think if we just make max_slot_wal_keep_size to -1 that should be\n> sufficient to not let any slots get invalidated during upgrade. Do you\n> have anything else in mind?\n\nForcing wal_keep_size while on it may be a good thing.\n\n> If we do (b) either via GUCs or IsBinaryUpgrade check we don't need to\n> do any of (a), (b), or (d). I feel that would be a minimal and\n> sufficient fix to prevent any side impact of checkpointer on slots\n> during an upgrade.\n\nI could get into the addition of a post-upgrade check to make sure\nthat nothing got invalidated while the upgrade was running, FWIW.\n--\nMichael",
"msg_date": "Fri, 8 Sep 2023 09:07:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 5:37 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Sep 07, 2023 at 03:33:52PM +0530, Amit Kapila wrote:\n> > I think if we just make max_slot_wal_keep_size to -1 that should be\n> > sufficient to not let any slots get invalidated during upgrade. Do you\n> > have anything else in mind?\n>\n> Forcing wal_keep_size while on it may be a good thing.\n>\n\nI had thought about it but couldn't come up with a reason to force\nwal_keep_size for this purpose.\n\n> > If we do (b) either via GUCs or IsBinaryUpgrade check we don't need to\n> > do any of (a), (b), or (d). I feel that would be a minimal and\n> > sufficient fix to prevent any side impact of checkpointer on slots\n> > during an upgrade.\n>\n> I could get into the addition of a post-upgrade check to make sure\n> that nothing got invalidated while the upgrade was running, FWIW.\n>\n\nThis validation tries to ensure that we don't have any bugs/issues in\nour patch. It may be a candidate for assert but I feel even if we\nencounter any bug it is better to fix the bug.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 8 Sep 2023 08:18:14 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Fri, Sep 08, 2023 at 08:18:14AM +0530, Amit Kapila wrote:\n> This validation tries to ensure that we don't have any bugs/issues in\n> our patch. It may be a candidate for assert but I feel even if we\n> encounter any bug it is better to fix the bug.\n\nMy guess is that an elog-like error is more adapted so as we are able\nto detect problems in more cases, but perhaps an assert may be enough\nfor the buildfarm. If there is anything in the backend that causes\nslots to become invalidated, I agree that any issue causing that\nshould be fixed, but isn't the point different here? Having a check\nat the end of an upgrade is a mean to improve the detection rate of\nbugs where slots get invalidated, so it is actually helpful to have\none anyway? I am not sure what is your strategy here, do you mean to\nkeep a check at the end of pg_upgrade only in the patch to validate\nit? Or do you mean to add something in pg_upgrade as part of the\nfeature? I mean that doing the latter is benefitial for the sake of\nany patch committed and as a long-term method to rely on.\n--\nMichael",
"msg_date": "Fri, 8 Sep 2023 11:58:14 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Friday, September 8, 2023 10:58 AM Michael Paquier <[email protected]> wrote:\n> \n> On Fri, Sep 08, 2023 at 08:18:14AM +0530, Amit Kapila wrote:\n> > This validation tries to ensure that we don't have any bugs/issues in\n> > our patch. It may be a candidate for assert but I feel even if we\n> > encounter any bug it is better to fix the bug.\n> \n> My guess is that an elog-like error is more adapted so as we are able to detect\n> problems in more cases, but perhaps an assert may be enough for the\n> buildfarm. If there is anything in the backend that causes slots to become\n> invalidated, I agree that any issue causing that should be fixed, but isn't the\n> point different here? Having a check at the end of an upgrade is a mean to\n> improve the detection rate of bugs where slots get invalidated, so it is actually\n> helpful to have one anyway? I am not sure what is your strategy here, do you\n> mean to keep a check at the end of pg_upgrade only in the patch to validate it?\n> Or do you mean to add something in pg_upgrade as part of the feature? I\n> mean that doing the latter is benefitial for the sake of any patch committed and\n> as a long-term method to rely on.\n\nI feel adding a check at pg_upgrade may not 100% detect the slot invalidation\nif we check by querying the old cluster to get the slot info, because the\ninvalidation can happen before the first time we fetch the info or after the\nlast time we fetch the info(e.g. shutdown checkpoint could also invalidate\nslots)\n\nPersonally, I think if we really want to add a check, it might be better to put\nit at server side, Like: reporting an ERROR at server side when invalidating\nthe slot(InvalidatePossiblyObsoleteSlot) if in upgrade mode.\n\nHaving said that I feel it's fine if we don't add this check as setting\nmax_slot_wal_keep_size to -1 looks sufficient.\n\n\nBest Regards,\nHou zj\n\n\n",
"msg_date": "Fri, 8 Sep 2023 03:30:23 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Fri, Sep 08, 2023 at 03:30:23AM +0000, Zhijie Hou (Fujitsu) wrote:\n> I feel adding a check at pg_upgrade may not 100% detect the slot invalidation\n> if we check by querying the old cluster to get the slot info, because the\n> invalidation can happen before the first time we fetch the info or after the\n> last time we fetch the info(e.g. shutdown checkpoint could also invalidate\n> slots)\n> \n> Personally, I think if we really want to add a check, it might be better to put\n> it at server side, Like: reporting an ERROR at server side when invalidating\n> the slot(InvalidatePossiblyObsoleteSlot) if in upgrade mode.\n\nYeah, that may be enough to paint one isBinaryUpgrade in the\ninvalidation path of the backend, with an elog(ERROR) as that would be\nan unexpected state.\n\n> Having said that I feel it's fine if we don't add this check as setting\n> max_slot_wal_keep_size to -1 looks sufficient.\n\nI would do both, FWIW, to stay on the safe side. And both are\nnon-invasive.\n--\nMichael",
"msg_date": "Fri, 8 Sep 2023 12:42:38 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 9:00 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Friday, September 8, 2023 10:58 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Fri, Sep 08, 2023 at 08:18:14AM +0530, Amit Kapila wrote:\n> > > This validation tries to ensure that we don't have any bugs/issues in\n> > > our patch. It may be a candidate for assert but I feel even if we\n> > > encounter any bug it is better to fix the bug.\n> >\n> > My guess is that an elog-like error is more adapted so as we are able to detect\n> > problems in more cases, but perhaps an assert may be enough for the\n> > buildfarm. If there is anything in the backend that causes slots to become\n> > invalidated, I agree that any issue causing that should be fixed, but isn't the\n> > point different here? Having a check at the end of an upgrade is a mean to\n> > improve the detection rate of bugs where slots get invalidated, so it is actually\n> > helpful to have one anyway? I am not sure what is your strategy here, do you\n> > mean to keep a check at the end of pg_upgrade only in the patch to validate it?\n> > Or do you mean to add something in pg_upgrade as part of the feature?\n> >\n\nWe can do whatever the consensus is but I feel such an end-check to\nsome extent is only helpful for the testing of a patch before the\ncommit but not otherwise.\n\n> > I\n> > mean that doing the latter is benefitial for the sake of any patch committed and\n> > as a long-term method to rely on.\n>\n\nWhat is your worry here? Are you worried that unknowingly in the\nfuture we could add some other way to invalidate slots during upgrades\nthat we won't be able to detect?\n\n> I feel adding a check at pg_upgrade may not 100% detect the slot invalidation\n> if we check by querying the old cluster to get the slot info, because the\n> invalidation can happen before the first time we fetch the info or after the\n> last time we fetch the info(e.g. 
shutdown checkpoint could also invalidate\n> slots)\n>\n> Personally, I think if we really want to add a check, it might be better to put\n> it at server side, Like: reporting an ERROR at server side when invalidating\n> the slot(InvalidatePossiblyObsoleteSlot) if in upgrade mode.\n>\n\nI don't know whether we really need to be extra careful in this case,\nthe same could be said about other consistency checks on the old\ncluster.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 8 Sep 2023 09:14:59 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Fri, Sep 08, 2023 at 09:14:59AM +0530, Amit Kapila wrote:\n> On Fri, Sep 8, 2023 at 9:00 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n>>> I\n>>> mean that doing the latter is benefitial for the sake of any patch committed and\n>>> as a long-term method to rely on.\n> \n> What is your worry here? Are you worried that unknowingly in the\n> future we could add some other way to invalidate slots during upgrades\n> that we won't be able to detect?\n\nExactly. A safety belt would not hurt, especially if the belt added\nis simple. The idea of a backend side elog(ERROR) with\nisBinaryUpgrade is tempting in the invalidation slot path.\n--\nMichael",
"msg_date": "Fri, 8 Sep 2023 13:40:44 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 10:10 AM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Sep 08, 2023 at 09:14:59AM +0530, Amit Kapila wrote:\n> > On Fri, Sep 8, 2023 at 9:00 AM Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> >>> I\n> >>> mean that doing the latter is benefitial for the sake of any patch committed and\n> >>> as a long-term method to rely on.\n> >\n> > What is your worry here? Are you worried that unknowingly in the\n> > future we could add some other way to invalidate slots during upgrades\n> > that we won't be able to detect?\n>\n> Exactly. A safety belt would not hurt, especially if the belt added\n> is simple. The idea of a backend side elog(ERROR) with\n> isBinaryUpgrade is tempting in the invalidation slot path.\n>\n\nI agree with doing something simple. So, to conclude, we agree on two\nthings in this thread (a) Use max_slot_wal_keep_size to -1 to start\npostmaster for the old cluster during the upgrade; (b) Have an\nelog(ERROR) to avoid invalidating slots during the upgrade.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 8 Sep 2023 11:59:41 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Fri, Sep 08, 2023 at 11:59:41AM +0530, Amit Kapila wrote:\n> I agree with doing something simple. So, to conclude, we agree on two\n> things in this thread (a) Use max_slot_wal_keep_size to -1 to start\n> postmaster for the old cluster during the upgrade; (b) Have an\n> elog(ERROR) to avoid invalidating slots during the upgrade.\n\n+1.\n--\nMichael",
"msg_date": "Fri, 8 Sep 2023 15:38:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 11:59 AM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Sep 8, 2023 at 10:10 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Fri, Sep 08, 2023 at 09:14:59AM +0530, Amit Kapila wrote:\n> > > On Fri, Sep 8, 2023 at 9:00 AM Zhijie Hou (Fujitsu)\n> > > <[email protected]> wrote:\n> > >>> I\n> > >>> mean that doing the latter is benefitial for the sake of any patch committed and\n> > >>> as a long-term method to rely on.\n> > >\n> > > What is your worry here? Are you worried that unknowingly in the\n> > > future we could add some other way to invalidate slots during upgrades\n> > > that we won't be able to detect?\n> >\n> > Exactly. A safety belt would not hurt, especially if the belt added\n> > is simple. The idea of a backend side elog(ERROR) with\n> > isBinaryUpgrade is tempting in the invalidation slot path.\n> >\n>\n> I agree with doing something simple. So, to conclude, we agree on two\n> things in this thread (a) Use max_slot_wal_keep_size to -1 to start\n> postmaster for the old cluster during the upgrade; (b) Have an\n> elog(ERROR) to avoid invalidating slots during the upgrade.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Sep 2023 09:19:59 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of checkpointer during pg_upgrade"
}
] |
[
{
"msg_contents": "Hi.\nIn src/backend/executor/nodeAgg.c\n817: advance_aggregates(AggState *aggstate)\n\nDo we need to add \"(void)\" before ExecEvalExprSwitchContext?\n\n\n",
"msg_date": "Sun, 3 Sep 2023 11:16:41 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "add (void) cast inside advance_aggregates for function\n ExecEvalExprSwitchContext"
},
{
"msg_contents": "> On 3 Sep 2023, at 05:16, jian he <[email protected]> wrote:\n\n> In src/backend/executor/nodeAgg.c\n> 817: advance_aggregates(AggState *aggstate)\n> \n> Do we need to add \"(void)\" before ExecEvalExprSwitchContext?\n\nI don't think we need to, but we could since we are in fact discarding the\nreturn value. Did you get a compiler warning on unchecked return, and if so\nwith which flags?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 7 Sep 2023 14:14:14 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add (void) cast inside advance_aggregates for function\n ExecEvalExprSwitchContext"
}
] |
[
{
"msg_contents": "Hi,\n\nSomehow these tests have recently become unstable and have failed a few times:\n\nhttps://github.com/postgres/postgres/commits/REL_15_STABLE\n\nThe failures are like:\n\n[22:32:26.722] # Failed test 'pgbench simple update stdout\n/(?^:builtin: simple update)/'\n[22:32:26.722] # at t/001_pgbench_with_server.pl line 119.\n[22:32:26.722] # 'pgbench (15.4)\n[22:32:26.722] # '\n[22:32:26.722] # doesn't match '(?^:builtin: simple update)'\n\n\n",
"msg_date": "Mon, 4 Sep 2023 15:18:40 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "REL_15_STABLE: pgbench tests randomly failing on CI, Windows only"
},
{
"msg_contents": "On Mon, Sep 04, 2023 at 03:18:40PM +1200, Thomas Munro wrote:\n> Somehow these tests have recently become unstable and have failed a few times:\n> \n> https://github.com/postgres/postgres/commits/REL_15_STABLE\n> \n> The failures are like:\n> \n> [22:32:26.722] # Failed test 'pgbench simple update stdout\n> /(?^:builtin: simple update)/'\n> [22:32:26.722] # at t/001_pgbench_with_server.pl line 119.\n> [22:32:26.722] # 'pgbench (15.4)\n> [22:32:26.722] # '\n> [22:32:26.722] # doesn't match '(?^:builtin: simple update)'\n\nFun. That's a test of \"pgbench -C\". The test harness isn't reporting\npgbench's stderr, so I hacked things to get that and the actual file\ndescriptor values being assigned. The failure case gets \"pgbench: error: too\nmany client connections for select()\" in stderr, from this pgbench.c function:\n\nstatic void\nadd_socket_to_set(socket_set *sa, int fd, int idx)\n{\n\tif (fd < 0 || fd >= FD_SETSIZE)\n\t{\n\t\t/*\n\t\t * Doing a hard exit here is a bit grotty, but it doesn't seem worth\n\t\t * complicating the API to make it less grotty.\n\t\t */\n\t\tpg_fatal(\"too many client connections for select()\");\n\t}\n\tFD_SET(fd, &sa->fds);\n\tif (fd > sa->maxfd)\n\t\tsa->maxfd = fd;\n}\n\nThe \"fd >= FD_SETSIZE\" check is irrelevant on Windows. See comments in the\nattached patch; in brief, Windows assigns FDs and uses FD_SETSIZE differently.\nThe first associated failure was commit dea12a1 (2023-08-03); as a doc commit,\nit's an innocent victim. Bisect blamed 8488bab \"ci: Use windows VMs instead\nof windows containers\" (2023-02), long before the failures began. I'll guess\nsome 2023-08 Windows update or reconfiguration altered file descriptor\nassignment, hence the onset of failures. In my tests of v16, the highest file\ndescriptor was 948. I could make v16 fail by changing --client=5 to\n--client=90 in the test. With the attached patch and --client=90, v16 peaked\nat file descriptor 2040.\n\nThanks,\nnm\n\nP.S. 
Later, we should change test code so the pgbench stderr can't grow an\nextra line without that line appearing in test logs.",
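A minimal POSIX-flavoured sketch (hypothetical names, not the pgbench code) of the range check being discussed: on Unix-like systems `fd_set` is a bitmask indexed by descriptor number, so the value must be checked before `FD_SET()`, whereas on Windows, as described above, `fd_set` is an array of socket handles and `FD_SETSIZE` bounds the *count* of sockets instead:

```c
#include <sys/select.h>   /* fd_set, FD_SET, FD_SETSIZE */

/* Guard before adding a descriptor to a select() set.  On POSIX systems,
 * calling FD_SET() with an fd at or above FD_SETSIZE indexes past the end
 * of the bitmask and corrupts memory, so the descriptor value itself must
 * be range-checked first.  Returns 0 on success, -1 if the fd is out of
 * range (the caller would then report "too many client connections"). */
static int
add_socket_to_set_checked(fd_set *set, int fd)
{
    if (fd < 0 || fd >= FD_SETSIZE)
        return -1;
    FD_SET(fd, set);
    return 0;
}
```

This is the Unix half of the story only; the Windows-specific behaviour of `fd_set` is what makes the check in the patch platform-dependent.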
"msg_date": "Sun, 8 Oct 2023 19:25:29 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REL_15_STABLE: pgbench tests randomly failing on CI, Windows only"
},
{
"msg_contents": "On Mon, Oct 9, 2023 at 3:25 PM Noah Misch <[email protected]> wrote:\n> The \"fd >= FD_SETSIZE\" check is irrelevant on Windows. See comments in the\n> attached patch; in brief, Windows assigns FDs and uses FD_SETSIZE differently.\n> The first associated failure was commit dea12a1 (2023-08-03); as a doc commit,\n> it's an innocent victim. Bisect blamed 8488bab \"ci: Use windows VMs instead\n> of windows containers\" (2023-02), long before the failures began. I'll guess\n> some 2023-08 Windows update or reconfiguration altered file descriptor\n> assignment, hence the onset of failures. In my tests of v16, the highest file\n> descriptor was 948. I could make v16 fail by changing --client=5 to\n> --client=90 in the test. With the attached patch and --client=90, v16 peaked\n> at file descriptor 2040.\n\nOhhh. Thanks for figuring that out. I'd never noticed that quirk. I\ndidn't/can't test it but the patch looks reasonable after reading the\nreferenced docs. Maybe instead of just \"out of range\" I'd say \"out of\nrange for select()\" since otherwise it might seem a little baffling --\nwhat range?\n\nRandom water cooler speculation about future ideas: I wonder\nwhether/when we can eventually ditch select() and use WSAPoll() for\nthis on Windows, which is supposed to be like poll(). There's a\ncomment explaining that we prefer select() because it has a higher\nresolution sleep than poll() (us vs ms), so we wouldn't want to use\npoll() on Unixen, but we know that Windows doesn't even use high\nresolution timers for any user space APIs anyway so that's just not a\nconcern on that platform. The usual reason nobody ever uses WSAPoll()\nis because the Curl guys told[1] everyone that it's terrible in a way\nthat would quite specifically break our usage. 
But I wonder, because\nthe documentation now says \"As of Windows 10 version 2004, when a TCP\nsocket fails to connect, (POLLHUP | POLLERR | POLLWRNORM) is\nindicated\", it *sounds* like it might have been fixed ~3 years ago?\nOne idea would be to hide it inside WaitEventSet, and let WaitEventSet\nbe used in front end code (we couldn't use the\nWaitForMultipleObjects() version because it's hard-limited to 64\nevents, but in front end code we don't need latches and other stuff,\nso we could have a sockets-only version with WSAPoll()). The idea of\nusing WaitEventSet for pgbench on Unix was already mentioned a couple\nof times by others for general scalability reasons -- that way we\ncould finish up using a better kernel interface on all supported\nplatforms. We'd probably want to add high resolution time-out support\n(I already posted a patch for that somewhere), or we'd be back to 1ms\ntimeouts.\n\n[1] https://daniel.haxx.se/blog/2012/10/10/wsapoll-is-broken/\n\n\n",
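For illustration, a hedged sketch of the `poll()`-style interface being considered (hypothetical helper name). Unlike `select()`, `poll()` takes an array of descriptor/event pairs, so there is no `FD_SETSIZE` ceiling on the descriptor values being watched; `WSAPoll()` is documented to mirror this API. The millisecond timeout parameter is the resolution trade-off mentioned above:

```c
#include <poll.h>
#include <unistd.h>

/* Wait until fd is readable, or until timeout_ms elapses.
 * Returns >0 if ready, 0 on timeout, <0 on error. */
static int
wait_readable(int fd, int timeout_ms)
{
    struct pollfd pfd;

    pfd.fd = fd;            /* any valid fd value, no FD_SETSIZE limit */
    pfd.events = POLLIN;
    pfd.revents = 0;

    return poll(&pfd, 1, timeout_ms);
}
```

A larger client count would use one `struct pollfd` per connection in a single array, which is essentially what a WaitEventSet-style abstraction would manage.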
"msg_date": "Mon, 9 Oct 2023 16:22:52 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: REL_15_STABLE: pgbench tests randomly failing on CI, Windows only"
},
{
"msg_contents": "On Mon, Oct 09, 2023 at 04:22:52PM +1300, Thomas Munro wrote:\n> On Mon, Oct 9, 2023 at 3:25 PM Noah Misch <[email protected]> wrote:\n> > The \"fd >= FD_SETSIZE\" check is irrelevant on Windows. See comments in the\n> > attached patch; in brief, Windows assigns FDs and uses FD_SETSIZE differently.\n> > The first associated failure was commit dea12a1 (2023-08-03); as a doc commit,\n> > it's an innocent victim. Bisect blamed 8488bab \"ci: Use windows VMs instead\n> > of windows containers\" (2023-02), long before the failures began. I'll guess\n> > some 2023-08 Windows update or reconfiguration altered file descriptor\n> > assignment, hence the onset of failures. In my tests of v16, the highest file\n> > descriptor was 948. I could make v16 fail by changing --client=5 to\n> > --client=90 in the test. With the attached patch and --client=90, v16 peaked\n> > at file descriptor 2040.\n> \n> Ohhh. Thanks for figuring that out. I'd never noticed that quirk. I\n> didn't/can't test it but the patch looks reasonable after reading the\n> referenced docs.\n\nFor what it's worth, I did all that testing through CI, using hacks like\nhttps://cirrus-ci.com/task/5352974499708928 to get the relevant information.\nI didn't bother trying non-CI, since buildfarm animals aren't failing.\n\n> Maybe instead of just \"out of range\" I'd say \"out of\n> range for select()\" since otherwise it might seem a little baffling --\n> what range?\n\nI was trying to follow this from the style guide:\n\n Avoid mentioning called function names, either; instead say what the code was trying to do:\n BAD: open() failed: %m\n BETTER: could not open file %s: %m\n\nI didn't think of any phrasing that clearly explained things without the\nreader consulting the code. 
I considered these:\n\n \"socket file descriptor out of range: %d\" [what range?]\n \"socket file descriptor out of range for select(): %d\" [style guide violation]\n \"socket file descriptor out of range for checking whether ready for reading: %d\" [?]\n \"socket file descriptor out of range for synchronous I/O multiplexing: %d\" [term from POSIX]\n\n> Random water cooler speculation about future ideas: I wonder\n> whether/when we can eventually ditch select() and use WSAPoll() for\n> this on Windows, which is supposed to be like poll(). There's a\n> comment explaining that we prefer select() because it has a higher\n> resolution sleep than poll() (us vs ms), so we wouldn't want to use\n> poll() on Unixen, but we know that Windows doesn't even use high\n> resolution timers for any user space APIs anyway so that's just not a\n> concern on that platform. The usual reason nobody ever uses WSAPoll()\n> is because the Curl guys told[1] everyone that it's terrible in a way\n> that would quite specifically break our usage. But I wonder, because\n> the documentation now says \"As of Windows 10 version 2004, when a TCP\n> socket fails to connect, (POLLHUP | POLLERR | POLLWRNORM) is\n> indicated\", it *sounds* like it might have been fixed ~3 years ago?\n> One idea would be to hide it inside WaitEventSet, and let WaitEventSet\n> be used in front end code (we couldn't use the\n> WaitForMultipleObjects() version because it's hard-limited to 64\n> events, but in front end code we don't need latches and other stuff,\n> so we could have a sockets-only version with WSAPoll()). The idea of\n> using WaitEventSet for pgbench on Unix was already mentioned a couple\n> of times by others for general scalability reasons -- that way we\n> could finish up using a better kernel interface on all supported\n> platforms. 
We'd probably want to add high resolution time-out support\n> (I already posted a patch for that somewhere), or we'd be back to 1ms\n> timeouts.\n> \n> [1] https://daniel.haxx.se/blog/2012/10/10/wsapoll-is-broken/\n\nInteresting. I have no current knowledge to add there.\n\n\n",
"msg_date": "Sun, 8 Oct 2023 21:08:46 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REL_15_STABLE: pgbench tests randomly failing on CI, Windows only"
},
{
"msg_contents": "On Sun, Oct 8, 2023 at 9:10 PM Noah Misch <[email protected]> wrote:\n\n>\n> I didn't think of any phrasing that clearly explained things without the\n> reader consulting the code. I considered these:\n>\n> \"socket file descriptor out of range: %d\" [what range?]\n>\n>\nQuick drive-by...but it seems that < 0 is a distinctly different problem\nthan exceeding FD_SETSIZE. I'm unsure what would cause the former but the\nerror for the later seems like:\n\nerror: \"Requested socket file descriptor %d exceeds connection limit of\n%d\", fd, FD_SETSIZE-1\nhint: Reduce the requested number of concurrent connections\n\nIn short, the concept of range doesn't seem applicable here. There is an\nerror state at the max, and some invalid system state condition where the\nposition within a set is somehow negative. These should be separated -\nwith the < 0 check happening first.\n\nAnd apparently this condition isn't applicable when dealing with jobs in\nconnect_slot? Also, I see that for connections we immediately issue FD_SET\nso this check on the boundary of the file descriptor makes sense. But the\nremaining code in connect_slot doesn't involve FD_SET so the test for the\nfile descriptor falling within its maximum seems to be coming out of\nnowhere. Likely this is all good, and the lack of symmetry is just due to\nthe natural progressive development of the code, but it stands out to the\nuninitiated looking at this patch.\n\nDavid J.\n\nOn Sun, Oct 8, 2023 at 9:10 PM Noah Misch <[email protected]> wrote:\nI didn't think of any phrasing that clearly explained things without the\nreader consulting the code. I considered these:\n\n \"socket file descriptor out of range: %d\" [what range?]Quick drive-by...but it seems that < 0 is a distinctly different problem than exceeding FD_SETSIZE. 
I'm unsure what would cause the former but the error for the later seems like:error: \"Requested socket file descriptor %d exceeds connection limit of %d\", fd, FD_SETSIZE-1hint: Reduce the requested number of concurrent connectionsIn short, the concept of range doesn't seem applicable here. There is an error state at the max, and some invalid system state condition where the position within a set is somehow negative. These should be separated - with the < 0 check happening first.And apparently this condition isn't applicable when dealing with jobs in connect_slot? Also, I see that for connections we immediately issue FD_SET so this check on the boundary of the file descriptor makes sense. But the remaining code in connect_slot doesn't involve FD_SET so the test for the file descriptor falling within its maximum seems to be coming out of nowhere. Likely this is all good, and the lack of symmetry is just due to the natural progressive development of the code, but it stands out to the uninitiated looking at this patch.David J.",
"msg_date": "Sun, 8 Oct 2023 22:00:03 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REL_15_STABLE: pgbench tests randomly failing on CI, Windows only"
},
{
"msg_contents": "On Sun, Oct 08, 2023 at 10:00:03PM -0700, David G. Johnston wrote:\n> On Sun, Oct 8, 2023 at 9:10 PM Noah Misch <[email protected]> wrote:\n> > I didn't think of any phrasing that clearly explained things without the\n> > reader consulting the code. I considered these:\n\nI'm leaning toward:\n\n \"socket file descriptor out of range for select(): %d\" [style guide violation]\n\nA true style guide purist might bury it in a detail message:\n\n ERROR: socket file descriptor out of range: %d\n DETAIL: select() accepts file descriptors from 0 to 1023, inclusive, in this build.\n HINT: Try fewer concurrent database clients.\n\n> > \"socket file descriptor out of range: %d\" [what range?]\n> >\n> Quick drive-by...but it seems that < 0 is a distinctly different problem\n> than exceeding FD_SETSIZE. I'm unsure what would cause the former but the\n> error for the later seems like:\n> \n> error: \"Requested socket file descriptor %d exceeds connection limit of\n> %d\", fd, FD_SETSIZE-1\n> hint: Reduce the requested number of concurrent connections\n> \n> In short, the concept of range doesn't seem applicable here. There is an\n> error state at the max, and some invalid system state condition where the\n> position within a set is somehow negative. These should be separated -\n> with the < 0 check happening first.\n\nI view it as: the range of select()-able file descriptors is [0,FD_SETSIZE).\nNegative is out of range.\n\n> And apparently this condition isn't applicable when dealing with jobs in\n> connect_slot?\n\nFor both pgbench.c and parallel_slot.c, there are sufficient negative-FD\nchecks elsewhere in the file. Ideally, either both files would have redundant\nchecks, or neither file would. I didn't feel the need to mess with that part\nof the status quo.\n\n> Also, I see that for connections we immediately issue FD_SET\n> so this check on the boundary of the file descriptor makes sense. 
But the\n> remaining code in connect_slot doesn't involve FD_SET so the test for the\n> file descriptor falling within its maximum seems to be coming out of\n> nowhere. Likely this is all good, and the lack of symmetry is just due to\n> the natural progressive development of the code, but it stands out to the\n> uninitiated looking at this patch.\n\nTrue. The approach in parallel_slot.c is to check the FD number each time it\nopens a socket, after which its loop with FD_SET() doesn't recheck. That's a\nbit more efficient than the pgbench.c way, because each file may call FD_SET()\nmany times per socket. Again, I didn't mess with that part of the status quo.\n\n\n",
"msg_date": "Tue, 10 Oct 2023 20:40:26 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REL_15_STABLE: pgbench tests randomly failing on CI, Windows only"
}
] |
[
{
"msg_contents": "Hi, hackers,\n\nLooking at the planner behaviour with the memory consumption patch [1], I figured out that arrays increase memory consumption by the optimizer significantly. See init.sql in attachment.\nThe point here is that the planner does small memory allocations for each element during estimation. As a result, it looks like the planner consumes about 250 bytes for each integer element.\n\nIt is maybe not a problem most of the time. However, in the case of partitions, memory consumption multiplies by each partition. Such a corner case looks weird, but the fix is simple. So, why not?\n\nThe diff in the attachment is proof of concept showing how to reduce wasting of memory. Having benchmarked a bit, I didn't find any overhead.\n\n[1] Report planning memory in EXPLAIN ANALYZE\nhttps://www.postgresql.org/message-id/flat/CAExHW5sZA=5LJ_ZPpRO-w09ck8z9p7eaYAqq3Ks9GDfhrxeWBw@mail.gmail.com\n\n--\nRegards,\nAndrey Lepikhov",
"msg_date": "Mon, 04 Sep 2023 12:25:44 +0700",
"msg_from": "\"Lepikhov Andrei\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "On Mon, Sep 4, 2023 at 11:58 AM Lepikhov Andrei\n<[email protected]> wrote:\n>\n> Hi, hackers,\n>\n> Looking at the planner behaviour with the memory consumption patch [1], I figured out that arrays increase memory consumption by the optimizer significantly. See init.sql in attachment.\n> The point here is that the planner does small memory allocations for each element during estimation. As a result, it looks like the planner consumes about 250 bytes for each integer element.\n>\n> It is maybe not a problem most of the time. However, in the case of partitions, memory consumption multiplies by each partition. Such a corner case looks weird, but the fix is simple. So, why not?\n>\n> The diff in the attachment is proof of concept showing how to reduce wasting of memory. Having benchmarked a bit, I didn't find any overhead.\n\n+ Const *c = makeConst(nominal_element_type,\n+ -1,\n+ nominal_element_collation,\n+ elmlen,\n+ elem_values[i],\n+ elem_nulls[i],\n+ elmbyval);\n+\n+ args = list_make2(leftop, c);\n if (is_join_clause)\n s2 = DatumGetFloat8(FunctionCall5Coll(&oprselproc,\n clause->inputcollid,\n@@ -1984,7 +1985,8 @@ scalararraysel(PlannerInfo *root,\n ObjectIdGetDatum(operator),\n PointerGetDatum(args),\n Int32GetDatum(varRelid)));\n-\n+ list_free(args);\n+ pfree(c);\n\nMaybe you can just use list_free_deep, instead of storing the constant\nin a separate variable.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Sep 2023 14:07:03 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "\n\nOn Mon, Sep 4, 2023, at 3:37 PM, Dilip Kumar wrote:\n> On Mon, Sep 4, 2023 at 11:58 AM Lepikhov Andrei\n> <[email protected]> wrote:\n>>\n>> Hi, hackers,\n>>\n>> Looking at the planner behaviour with the memory consumption patch [1], I figured out that arrays increase memory consumption by the optimizer significantly. See init.sql in attachment.\n>> The point here is that the planner does small memory allocations for each element during estimation. As a result, it looks like the planner consumes about 250 bytes for each integer element.\n>>\n>> It is maybe not a problem most of the time. However, in the case of partitions, memory consumption multiplies by each partition. Such a corner case looks weird, but the fix is simple. So, why not?\n>>\n>> The diff in the attachment is proof of concept showing how to reduce wasting of memory. Having benchmarked a bit, I didn't find any overhead.\n>\n> + Const *c = makeConst(nominal_element_type,\n> + -1,\n> + nominal_element_collation,\n> + elmlen,\n> + elem_values[i],\n> + elem_nulls[i],\n> + elmbyval);\n> +\n> + args = list_make2(leftop, c);\n> if (is_join_clause)\n> s2 = DatumGetFloat8(FunctionCall5Coll(&oprselproc,\n> clause->inputcollid,\n> @@ -1984,7 +1985,8 @@ scalararraysel(PlannerInfo *root,\n> ObjectIdGetDatum(operator),\n> PointerGetDatum(args),\n> Int32GetDatum(varRelid)));\n> -\n> + list_free(args);\n> + pfree(c);\n>\n> Maybe you can just use list_free_deep, instead of storing the constant\n> in a separate variable.\nAs I see, the first element in the array is leftop, which is used on other iterations. So, we can't use list_free_deep here.\n\n-- \nRegards,\nAndrei Lepikhov\n\n\n",
"msg_date": "Mon, 04 Sep 2023 17:19:05 +0700",
"msg_from": "\"Lepikhov Andrei\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "On Mon, Sep 4, 2023 at 3:49 PM Lepikhov Andrei\n<[email protected]> wrote:\n>\n> > + Const *c = makeConst(nominal_element_type,\n> > + -1,\n> > + nominal_element_collation,\n> > + elmlen,\n> > + elem_values[i],\n> > + elem_nulls[i],\n> > + elmbyval);\n> > +\n> > + args = list_make2(leftop, c);\n> > if (is_join_clause)\n> > s2 = DatumGetFloat8(FunctionCall5Coll(&oprselproc,\n> > clause->inputcollid,\n> > @@ -1984,7 +1985,8 @@ scalararraysel(PlannerInfo *root,\n> > ObjectIdGetDatum(operator),\n> > PointerGetDatum(args),\n> > Int32GetDatum(varRelid)));\n> > -\n> > + list_free(args);\n> > + pfree(c);\n> >\n> > Maybe you can just use list_free_deep, instead of storing the constant\n> > in a separate variable.\n> As I see, the first element in the array is leftop, which is used on other iterations. So, we can't use list_free_deep here.\n\nYeah you are right, thanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Sep 2023 16:29:01 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "Hi Lepikhov,\n\nThanks for using my patch and I am glad that you found it useful.\n\nOn Mon, Sep 4, 2023 at 10:56 AM Lepikhov Andrei\n<[email protected]> wrote:\n>\n> Hi, hackers,\n>\n> Looking at the planner behaviour with the memory consumption patch [1], I figured out that arrays increase memory consumption by the optimizer significantly. See init.sql in attachment.\n> The point here is that the planner does small memory allocations for each element during estimation. As a result, it looks like the planner consumes about 250 bytes for each integer element.\n\nI guess the numbers you mentioned in init.sql are total memory used by\nthe planner (as reported by the patch in the thread) when planning\nthat query and not memory consumed by Const nodes themselves. Am I\nright? I think the measurements need to be explained better and also\nthe realistic scenario you are trying to oprimize.\n\nI guess, the reason you think that partitioning will increase the\nmemory consumed is because each partition will have the clause\ntranslated for it. Selectivity estimation for each partition will\ncreate those many Const nodes and hence consume memory. Am I right?\nCan you please measure the memory consumed with and without your\npatch.\n\n>\n> It is maybe not a problem most of the time. However, in the case of partitions, memory consumption multiplies by each partition. Such a corner case looks weird, but the fix is simple. So, why not?\n\nWith vectorized operations becoming a norm these days, it's possible\nto have thousands of element in array of an ANY or IN clause. Also\nwill be common to have thousands of partitions. But I think what we\nneed to do here is to write a selectivity estimation function which\ntakes an const array and return selectivity without requiring to\ncreate a Const node for each element.\n\n>\n> The diff in the attachment is proof of concept showing how to reduce wasting of memory. 
Having benchmarked a bit, I didn't find any overhead.\n>\n\nYou might want to include your benchmarking results as well.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
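A simplified sketch of the estimator shape suggested here: computing the combined selectivity directly from a raw element array, with no per-element Const node allocation. `eq_selectivity()` is a hypothetical stand-in for a statistics-based per-operator estimator:

```c
/* Hypothetical per-operator estimator: a real implementation would
 * consult MCV lists and histograms; here we just assume a uniform
 * distribution over ndistinct values. */
static double
eq_selectivity(long value, long ndistinct)
{
    (void) value;               /* unused in this uniform-distribution toy */
    return 1.0 / (double) ndistinct;
}

/* Estimate selectivity of "col = ANY(elems)" straight from the raw
 * element values.  Combines per-element selectivities with the usual
 * OR rule s1 = s1 + s2 - s1*s2, never wrapping elements in nodes. */
static double
scalararraysel_direct(const long *elems, int nelems, long ndistinct)
{
    double  s1 = 0.0;

    for (int i = 0; i < nelems; i++)
    {
        double  s2 = eq_selectivity(elems[i], ndistinct);

        s1 = s1 + s2 - s1 * s2;
    }
    return s1;
}
```

The point of the shape, as suggested above, is that the loop touches only stack-local doubles, so memory use stays flat regardless of array length.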
"msg_date": "Wed, 6 Sep 2023 18:39:35 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "On Wed, Sep 6, 2023, at 8:09 PM, Ashutosh Bapat wrote:\n> Hi Lepikhov,\n>\n> Thanks for using my patch and I am glad that you found it useful.\n>\n> On Mon, Sep 4, 2023 at 10:56 AM Lepikhov Andrei\n> <[email protected]> wrote:\n>>\n>> Hi, hackers,\n>>\n>> Looking at the planner behaviour with the memory consumption patch [1], I figured out that arrays increase memory consumption by the optimizer significantly. See init.sql in attachment.\n>> The point here is that the planner does small memory allocations for each element during estimation. As a result, it looks like the planner consumes about 250 bytes for each integer element.\n>\n> I guess the numbers you mentioned in init.sql are total memory used by\n> the planner (as reported by the patch in the thread) when planning\n> that query and not memory consumed by Const nodes themselves. Am I\n> right? I think the measurements need to be explained better and also\n> the realistic scenario you are trying to oprimize.\n\nYes, it is the total memory consumed by the planner - I used the numbers generated by your patch [1]. I had been increasing the number of elements in the array to exclude the memory consumed by the planner for other purposes. As you can see, the array with 1 element consumes 12kB of memory, 1E4 elements - 2.6 MB. All of that memory increment is related to the only enlargement of this array. (2600-12)/10 = 260 bytes. So, I make a conclusion: each 4-byte element produces a consumption of 260 bytes of memory.\nThis scenario I obtained from the user complaint - they had strict restrictions on memory usage and were stuck in this unusual memory usage case.\n\n> I guess, the reason you think that partitioning will increase the\n> memory consumed is because each partition will have the clause\n> translated for it. Selectivity estimation for each partition will\n> create those many Const nodes and hence consume memory. 
Am I right?\n\nYes.\n\n> Can you please measure the memory consumed with and without your\n> patch.\n\nDone. See test case and results in 'init_parts.sql' in attachment. Short summary below. I varied a number of elements from 1 to 10000 and partitions from 1 to 100. As you can see, partitioning adds a lot of memory consumption by itself. But we see an effect from patch also.\n\nmaster:\nelems\t1\t\t1E1\t\t1E2\t\t1E3\t\t1E4\t\nparts\n1\t\t28kB\t50kB\t0.3MB\t2.5MB\t25MB\n10\t\t45kB\t143kB\t0.6MB\t4.8MB\t47MB\n100\t\t208kB\t125kB\t3.3MB\t27MB\t274MB\n\npatched:\nelems\t1\t\t1E1\t\t1E2\t\t1E3\t\t1E4\nparts\n1\t\t28kB\t48kB\t0.25MB\t2.2MB\t22.8MB\n10\t\t44kB\t100kB\t313kB\t2.4MB\t23.7MB\n100\t\t208kB\t101kB\t0.9MB\t3.7MB\t32.4MB\n\nJust for comparison, without partitioning:\nelems\t1\t\t1E1\t\t1E2\t\t1E3\t\t1E4\t\nmaster:\t12kB\t14kB\t37kB\t266kB\t2.5MB\npatched:\t12kB\t11.5kB\t13kB\t24kB\t141kB\n\n>> It is maybe not a problem most of the time. However, in the case of partitions, memory consumption multiplies by each partition. Such a corner case looks weird, but the fix is simple. So, why not?\n>\n> With vectorized operations becoming a norm these days, it's possible\n> to have thousands of element in array of an ANY or IN clause. Also\n> will be common to have thousands of partitions. But I think what we\n> need to do here is to write a selectivity estimation function which\n> takes an const array and return selectivity without requiring to\n> create a Const node for each element.\n\nMaybe you're right. Could you show any examples of vectorized usage of postgres to understand your idea more clearly?\nHere I propose only quick simple solution. I don't think it would change the way of development.\n\n>> The diff in the attachment is proof of concept showing how to reduce wasting of memory. Having benchmarked a bit, I didn't find any overhead.\n>>\n>\n> You might want to include your benchmarking results as well.\n\nHere is nothing interesting. 
pgbench TPS and planning time for the cases above do not change.\n\n[1] Report planning memory in EXPLAIN ANALYZE\n\n-- \nRegards,\nAndrei Lepikhov",
"msg_date": "Fri, 08 Sep 2023 12:11:31 +0700",
"msg_from": "\"Lepikhov Andrei\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "\n\nOn 9/8/23 07:11, Lepikhov Andrei wrote:\n> \n> \n> On Wed, Sep 6, 2023, at 8:09 PM, Ashutosh Bapat wrote:\n>> Hi Lepikhov,\n>>\n>> Thanks for using my patch and I am glad that you found it useful.\n>>\n>> On Mon, Sep 4, 2023 at 10:56 AM Lepikhov Andrei\n>> <[email protected]> wrote:\n>>>\n>>> Hi, hackers,\n>>>\n>>> Looking at the planner behaviour with the memory consumption patch [1], I figured out that arrays increase memory consumption by the optimizer significantly. See init.sql in attachment.\n>>> The point here is that the planner does small memory allocations for each element during estimation. As a result, it looks like the planner consumes about 250 bytes for each integer element.\n>>\n>> I guess the numbers you mentioned in init.sql are total memory used by\n>> the planner (as reported by the patch in the thread) when planning\n>> that query and not memory consumed by Const nodes themselves. Am I\n>> right? I think the measurements need to be explained better and also\n>> the realistic scenario you are trying to oprimize.\n> \n> Yes, it is the total memory consumed by the planner - I used the numbers generated by your patch [1]. I had been increasing the number of elements in the array to exclude the memory consumed by the planner for other purposes. As you can see, the array with 1 element consumes 12kB of memory, 1E4 elements - 2.6 MB. All of that memory increment is related to the only enlargement of this array. (2600-12)/10 = 260 bytes. So, I make a conclusion: each 4-byte element produces a consumption of 260 bytes of memory.\n> This scenario I obtained from the user complaint - they had strict restrictions on memory usage and were stuck in this unusual memory usage case.\n> \n>> I guess, the reason you think that partitioning will increase the\n>> memory consumed is because each partition will have the clause\n>> translated for it. Selectivity estimation for each partition will\n>> create those many Const nodes and hence consume memory. Am I right?\n> \n> Yes.\n> \n>> Can you please measure the memory consumed with and without your\n>> patch.\n> \n> Done. See test case and results in 'init_parts.sql' in attachment. Short summary below. I varied a number of elements from 1 to 10000 and partitions from 1 to 100. As you can see, partitioning adds a lot of memory consumption by itself. But we see an effect from patch also.\n> \n> master:\n> elems\t1\t\t1E1\t\t1E2\t\t1E3\t\t1E4\t\n> parts\n> 1\t\t28kB\t50kB\t0.3MB\t2.5MB\t25MB\n> 10\t\t45kB\t143kB\t0.6MB\t4.8MB\t47MB\n> 100\t\t208kB\t125kB\t3.3MB\t27MB\t274MB\n> \n> patched:\n> elems\t1\t\t1E1\t\t1E2\t\t1E3\t\t1E4\n> parts\n> 1\t\t28kB\t48kB\t0.25MB\t2.2MB\t22.8MB\n> 10\t\t44kB\t100kB\t313kB\t2.4MB\t23.7MB\n> 100\t\t208kB\t101kB\t0.9MB\t3.7MB\t32.4MB\n> \n> Just for comparison, without partitioning:\n> elems\t1\t\t1E1\t\t1E2\t\t1E3\t\t1E4\t\n> master:\t12kB\t14kB\t37kB\t266kB\t2.5MB\n> patched:\t12kB\t11.5kB\t13kB\t24kB\t141kB\n> \n\nThese improvements look pretty nice, considering how simple the patch\nseems to be. I can't even imagine how much memory we'd need with even\nmore partitions (say, 1000) if 100 partitions means 274MB.\n\nBTW when releasing memory in scalararraysel, wouldn't it be good to also\nfree the elem_values/elem_nulls? I haven't tried and maybe it's not that\nsignificant amount.\n\n\nConsidering there are now multiple patches improving memory usage during\nplanning with partitions, perhaps it's time to take a step back and\nthink about how we manage (or rather not manage) memory during query\nplanning, and see if we could improve that instead of an infinite\nsequence of ad hoc patches?\n\nOur traditional attitude is to not manage memory, and rely on the memory\ncontext to not be very long-lived. And that used to be fine, but\npartitioning clearly changed the equation, increasing the amount of\nallocated memory etc.\n\nI don't think we want to stop relying on memory contexts for planning in\ngeneral - memory contexts are obviously very convenient etc. But maybe\nwe could identify \"stages\" in the planning and release the memory more\naggressively in those?\n\nFor example, I don't think we expect selectivity functions to allocate\nlong-lived objects, right? So maybe we could run them in a dedicated\nmemory context, and reset it aggressively (after each call).\n\nOfc, I'm not suggesting this patch should be responsible for doing this.\n\n\n>>> It is maybe not a problem most of the time. However, in the case of partitions, memory consumption multiplies by each partition. Such a corner case looks weird, but the fix is simple. So, why not?\n>>\n>> With vectorized operations becoming a norm these days, it's possible\n>> to have thousands of element in array of an ANY or IN clause. Also\n>> will be common to have thousands of partitions. But I think what we\n>> need to do here is to write a selectivity estimation function which\n>> takes an const array and return selectivity without requiring to\n>> create a Const node for each element.\n> \n> Maybe you're right. Could you show any examples of vectorized usage of postgres to understand your idea more clearly?\n> Here I propose only quick simple solution. I don't think it would change the way of development.\n> \n\nI'm a big fan of SIMD and vectorization, but I don't think there's a\nchance to achieve that without major reworks to how we evaluate\nexpressions. It's pretty fundamentally incompatible with how we handle\nwith user-defined functions, FunctionCall etc.\n\n>>> The diff in the attachment is proof of concept showing how to reduce wasting of memory. Having benchmarked a bit, I didn't find any overhead.\n>>>\n>>\n>> You might want to include your benchmarking results as well.\n> \n> Here is nothing interesting. pgbench TPS and planning time for the cases above doesn't change planning time.\n> \n\nYeah, I don't think we'd expect regressions from this patch. It pretty\nmuch just pfree-s a list + Const node.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 19 Feb 2024 14:47:37 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> Considering there are now multiple patches improving memory usage during\n> planning with partitions, perhaps it's time to take a step back and\n> think about how we manage (or rather not manage) memory during query\n> planning, and see if we could improve that instead of an infinite\n> sequence of ad hoc patches?\n\n+1, I've been getting an itchy feeling about that too. I don't have\nany concrete proposals ATM, but I quite like your idea here:\n\n> For example, I don't think we expect selectivity functions to allocate\n> long-lived objects, right? So maybe we could run them in a dedicated\n> memory context, and reset it aggressively (after each call).\n\nThat could eliminate a whole lot of potential leaks. I'm not sure\nthough how much it moves the needle in terms of overall planner memory\nconsumption. I've always supposed that the big problem was data\nstructures associated with rejected Paths, but I might be wrong.\nIs there some simple way we could get a handle on where the most\nmemory goes while planning?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Feb 2024 10:45:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "On 2/19/24 16:45, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> Considering there are now multiple patches improving memory usage during\n>> planning with partitions, perhaps it's time to take a step back and\n>> think about how we manage (or rather not manage) memory during query\n>> planning, and see if we could improve that instead of an infinite\n>> sequence of ad hoc patches?\n> \n> +1, I've been getting an itchy feeling about that too. I don't have\n> any concrete proposals ATM, but I quite like your idea here:\n> \n>> For example, I don't think we expect selectivity functions to allocate\n>> long-lived objects, right? So maybe we could run them in a dedicated\n>> memory context, and reset it aggressively (after each call).\n> \n> That could eliminate a whole lot of potential leaks. I'm not sure \n> though how much it moves the needle in terms of overall planner\n> memory consumption.\n\nI'm not sure about that either, maybe not much - for example it would\nnot help with the two other memory usage patches (which are related to\nSpecialJoinInfo and RestrictInfo, outside selectivity functions).\n\nIt was an ad hoc thought, inspired by the issue at hand. Maybe it would\nbe possible to find similar \"boundaries\" in other parts of the planner.\n\nI keep thinking about how compilers/optimizers typically have separate\noptimizations passes, maybe that's something we might leverage ...\n\n> I've always supposed that the big problem was data structures\n> associated with rejected Paths, but I might be wrong. Is there some\n> simple way we could get a handle on where the most memory goes while\n> planning?\n> \n\nI suspect this might have changed thanks to partitioning - it's not a\ncoincidence most of the recent memory usage improvements address cases\nwith many partitions.\n\nAs for how to analyze the memory usage - maybe there's a simpler way,\nbut what I did recently was adding simple instrumentation into memory\ncontexts, recording pointer/size/backtrace for each request, and then a\nscript that aggregated that into a \"currently allocated\" report with\ninformation about \"source\" of the allocation.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 19 Feb 2024 18:37:49 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 2/19/24 16:45, Tom Lane wrote:\n>> Tomas Vondra <[email protected]> writes:\n>>> For example, I don't think we expect selectivity functions to allocate\n>>> long-lived objects, right? So maybe we could run them in a dedicated\n>>> memory context, and reset it aggressively (after each call).\n\n>> That could eliminate a whole lot of potential leaks. I'm not sure \n>> though how much it moves the needle in terms of overall planner\n>> memory consumption.\n\n> I'm not sure about that either, maybe not much - for example it would\n> not help with the two other memory usage patches (which are related to\n> SpecialJoinInfo and RestrictInfo, outside selectivity functions).\n\n> It was an ad hoc thought, inspired by the issue at hand. Maybe it would\n> be possible to find similar \"boundaries\" in other parts of the planner.\n\nHere's a quick and probably-incomplete implementation of that idea.\nI've not tried to study its effects on memory consumption, just made\nsure it passes check-world.\n\nThe main hazard here is that something invoked inside clause\nselectivity might try to cache a data structure for later use.\nHowever, there are already places that do that kind of thing,\nand they already explicitly switch into the planner_cxt, because\notherwise they fail under GEQO. (If we do find places that need\nfixing for this, they were probably busted under GEQO already.)\nPerhaps it's worth updating the comments at those places, but\nI didn't bother in this first cut.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 19 Feb 2024 16:51:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "On 19/2/2024 20:47, Tomas Vondra wrote:\n> On 9/8/23 07:11, Lepikhov Andrei wrote:\n>> Just for comparison, without partitioning:\n>> elems\t1\t\t1E1\t\t1E2\t\t1E3\t\t1E4\t\n>> master:\t12kB\t14kB\t37kB\t266kB\t2.5MB\n>> patched:\t12kB\t11.5kB\t13kB\t24kB\t141kB\n>>\n> \n> These improvements look pretty nice, considering how simple the patch\n> seems to be. I can't even imagine how much memory we'd need with even\n> more partitions (say, 1000) if 100 partitions means 274MB.\n> \n> BTW when releasing memory in scalararraysel, wouldn't it be good to also\n> free the elem_values/elem_nulls? I haven't tried and maybe it's not that\n> significant amount.\nAgree. Added into the next version of the patch.\nMoreover, I see a slight planning speedup. Looking into the reason for \nthat, I discovered that it is because sometimes the planner utilizes the \nsame memory piece for the next array element. It finds this piece more \nquickly than before that optimization.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional",
"msg_date": "Tue, 20 Feb 2024 11:17:31 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "On 20/2/2024 04:51, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 2/19/24 16:45, Tom Lane wrote:\n>>> Tomas Vondra <[email protected]> writes:\n>>>> For example, I don't think we expect selectivity functions to allocate\n>>>> long-lived objects, right? So maybe we could run them in a dedicated\n>>>> memory context, and reset it aggressively (after each call).\n> Here's a quick and probably-incomplete implementation of that idea.\n> I've not tried to study its effects on memory consumption, just made\n> sure it passes check-world.\nThanks for the sketch. The trick with the planner_tmp_cxt_depth \nespecially looks interesting.\nI think we should design small memory contexts - one per scalable \ndirection of memory utilization, like selectivity or partitioning \n(appending ?).\nMy coding experience shows that short-lived GEQO memory context forces \npeople to learn on Postgres internals more precisely :).\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 11:41:17 +0700",
"msg_from": "Andrei Lepikhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "Hi!\n\nOn 20.02.2024 07:41, Andrei Lepikhov wrote:\n> On 20/2/2024 04:51, Tom Lane wrote:\n>> Tomas Vondra <[email protected]> writes:\n>>> On 2/19/24 16:45, Tom Lane wrote:\n>>>> Tomas Vondra <[email protected]> writes:\n>>>>> For example, I don't think we expect selectivity functions to \n>>>>> allocate\n>>>>> long-lived objects, right? So maybe we could run them in a dedicated\n>>>>> memory context, and reset it aggressively (after each call).\n>> Here's a quick and probably-incomplete implementation of that idea.\n>> I've not tried to study its effects on memory consumption, just made\n>> sure it passes check-world.\n> Thanks for the sketch. The trick with the planner_tmp_cxt_depth \n> especially looks interesting.\n> I think we should design small memory contexts - one per scalable \n> direction of memory utilization, like selectivity or partitioning \n> (appending ?).\n> My coding experience shows that short-lived GEQO memory context forces \n> people to learn on Postgres internals more precisely :).\n>\nI think there was a problem in your patch when you freed up the memory \nof a variable in the eqsel_internal function, because we have a case \nwhere the variable was deleted by reference in the \neval_const_expressions_mutator function (it is only for T_SubPlan and \nT_AlternativeSubPlan type of nodes.\n\nThis query just causes an error in your case:\n\ncreate table a (id bigint, c1 bigint, primary key(id));\ncreate table b (a_id bigint, b_id bigint, b2 bigint, primary key(a_id, \nb_id));\nexplain select id\n from a, b\n where id = a_id\n and b2 = (select min(b2)\n from b\n where id = a_id);\ndrop table a;\ndrop table b;\n\nWe can return a copy of the variable or not release the memory of this \nvariable.\n\nI attached two patch: the first one is removing your memory cleanup and \nanother one returns the copy of variable.\n\nThe author of the corrections is not only me, but also Daniil Anisimov.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 22 Feb 2024 21:50:03 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "I wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 2/19/24 16:45, Tom Lane wrote:\n>>> Tomas Vondra <[email protected]> writes:\n>>>> For example, I don't think we expect selectivity functions to allocate\n>>>> long-lived objects, right? So maybe we could run them in a dedicated\n>>>> memory context, and reset it aggressively (after each call).\n\n>>> That could eliminate a whole lot of potential leaks. I'm not sure \n>>> though how much it moves the needle in terms of overall planner\n>>> memory consumption.\n\n>> It was an ad hoc thought, inspired by the issue at hand. Maybe it would\n>> be possible to find similar \"boundaries\" in other parts of the planner.\n\n> Here's a quick and probably-incomplete implementation of that idea.\n> I've not tried to study its effects on memory consumption, just made\n> sure it passes check-world.\n\nI spent a bit more time on this patch. One thing I was concerned\nabout was whether it causes any noticeable slowdown, and it seems that\nit does: testing with \"pgbench -S\" I observe perhaps 1% slowdown.\nHowever, we don't necessarily need to reset the temp context after\nevery single usage. I experimented with resetting it every tenth\ntime, and that got me from 1% slower than HEAD to 1% faster. Of\ncourse \"every tenth time\" is very ad hoc. I wondered if we could\nmake it somehow conditional on how much memory had been consumed\nin the temp context, but there doesn't seem to be any cheap way\nto get that. Applying something like MemoryContextMemConsumed\nwould surely be a loser. I'm not sure if it'd be worth extending\nthe mcxt.c API to provide something like \"MemoryContextResetIfBig\",\nwith some internal rule that would be cheap to apply like \"reset\nif we have any non-keeper blocks\".\n\nI also looked into whether it really does reduce overall memory\nconsumption noticeably, by collecting stats about planner memory\nconsumption during the core regression tests. The answer is that\nit barely helps. I see the average used space across all planner\ninvocations drop from 23344 bytes to 23220, and the worst-case\nnumbers hardly move at all. So that's a little discouraging.\nBut of course the regression tests prefer not to deal in very\nlarge/expensive test cases, so maybe it's not surprising that\nI don't see much win in this test.\n\nAnyway, 0001 attached is a cleaned-up patch with the every-tenth-\ntime rule, and 0002 (not meant for commit) is the quick and\ndirty instrumentation patch I used for collecting usage stats.\n\nEven though this seems of only edge-case value, I'd much prefer\nto do this than the sort of ad-hoc patching initially proposed\nin this thread.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 24 Feb 2024 18:07:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "On 2/25/24 00:07, Tom Lane wrote:\n> I wrote:\n>> Tomas Vondra <[email protected]> writes:\n>>> On 2/19/24 16:45, Tom Lane wrote:\n>>>> Tomas Vondra <[email protected]> writes:\n>>>>> For example, I don't think we expect selectivity functions to allocate\n>>>>> long-lived objects, right? So maybe we could run them in a dedicated\n>>>>> memory context, and reset it aggressively (after each call).\n> \n>>>> That could eliminate a whole lot of potential leaks. I'm not sure \n>>>> though how much it moves the needle in terms of overall planner\n>>>> memory consumption.\n> \n>>> It was an ad hoc thought, inspired by the issue at hand. Maybe it would\n>>> be possible to find similar \"boundaries\" in other parts of the planner.\n> \n>> Here's a quick and probably-incomplete implementation of that idea.\n>> I've not tried to study its effects on memory consumption, just made\n>> sure it passes check-world.\n> \n> I spent a bit more time on this patch. One thing I was concerned\n> about was whether it causes any noticeable slowdown, and it seems that\n> it does: testing with \"pgbench -S\" I observe perhaps 1% slowdown.\n> However, we don't necessarily need to reset the temp context after\n> every single usage. I experimented with resetting it every tenth\n> time, and that got me from 1% slower than HEAD to 1% faster.\n\nIsn't 1% well within the usual noise and/or the differences that can be\ncaused simply by slightly different alignment of the binary? I'd treat\nthis as \"same performance\" ...\n\n> Of course \"every tenth time\" is very ad hoc. I wondered if we could\n> make it somehow conditional on how much memory had been consumed\n> in the temp context, but there doesn't seem to be any cheap way\n> to get that. Applying something like MemoryContextMemConsumed\n> would surely be a loser. I'm not sure if it'd be worth extending\n> the mcxt.c API to provide something like \"MemoryContextResetIfBig\",\n> with some internal rule that would be cheap to apply like \"reset\n> if we have any non-keeper blocks\".\n\nWouldn't it be sufficient to look simply at MemoryContextMemAllocated?\nThat's certainly way cheaper than MemoryContextStatsInternal, especially\nif the context tree is shallow (which I think we certainly expect here).\n\nI think MemoryContextResetIfBig is an interesting idea - I think a good\nway to define \"big\" would be \"has multiple blocks\", because that's the\nonly case where we can actually reclaim some memory.\n\n> \n> I also looked into whether it really does reduce overall memory\n> consumption noticeably, by collecting stats about planner memory\n> consumption during the core regression tests. The answer is that\n> it barely helps. I see the average used space across all planner\n> invocations drop from 23344 bytes to 23220, and the worst-case\n> numbers hardly move at all. So that's a little discouraging.\n> But of course the regression tests prefer not to deal in very\n> large/expensive test cases, so maybe it's not surprising that\n> I don't see much win in this test.\n> \n\nI'm not really surprised by this - I think you're right most of our\nselectivity functions either doesn't do memory-expensive stuff, or we\ndon't have such corner cases in our regression tests. Or at least not to\nthe extent to move the overall average, so we'd need to look at\nindividual cases allocating quite a bit of memory.\n\nBut I think that's fine - I see this as a safety measure, not something\nthat'd improve the \"good\" cases.\n\n> Anyway, 0001 attached is a cleaned-up patch with the every-tenth-\n> time rule, and 0002 (not meant for commit) is the quick and\n> dirty instrumentation patch I used for collecting usage stats.\n> \n> Even though this seems of only edge-case value, I'd much prefer\n> to do this than the sort of ad-hoc patching initially proposed\n> in this thread.\n> \n\n+1 to that, it seems like a more principled approach.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 25 Feb 2024 14:52:47 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 2/25/24 00:07, Tom Lane wrote:\n>> ... I'm not sure if it'd be worth extending\n>> the mcxt.c API to provide something like \"MemoryContextResetIfBig\",\n>> with some internal rule that would be cheap to apply like \"reset\n>> if we have any non-keeper blocks\".\n\n> I think MemoryContextResetIfBig is an interesting idea - I think a good\n> way to define \"big\" would be \"has multiple blocks\", because that's the\n> only case where we can actually reclaim some memory.\n\nYeah. Also: once we had such an idea, it'd be very tempting to apply\nit to other frequently-reset contexts like the executor's per-tuple\nevaluation contexts. I'm not quite prepared to argue that\nMemoryContextReset should just act that way all the time ... but\nit's sure interesting to think about.\n\nAnother question is whether this wouldn't hurt debugging, in that\ndangling-pointer bugs would become harder to catch. We'd certainly\nwant to turn off the optimization in USE_VALGRIND builds, and maybe\nwe just shouldn't do it at all if USE_ASSERT_CHECKING.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Feb 2024 11:29:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "\n\nOn 2/25/24 17:29, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 2/25/24 00:07, Tom Lane wrote:\n>>> ... I'm not sure if it'd be worth extending\n>>> the mcxt.c API to provide something like \"MemoryContextResetIfBig\",\n>>> with some internal rule that would be cheap to apply like \"reset\n>>> if we have any non-keeper blocks\".\n> \n>> I think MemoryContextResetIfBig is an interesting idea - I think a good\n>> way to define \"big\" would be \"has multiple blocks\", because that's the\n>> only case where we can actually reclaim some memory.\n> \n> Yeah. Also: once we had such an idea, it'd be very tempting to apply\n> it to other frequently-reset contexts like the executor's per-tuple\n> evaluation contexts. I'm not quite prepared to argue that\n> MemoryContextReset should just act that way all the time ... but\n> it's sure interesting to think about.\n> \n\nDo the context resets consume enough time to make this measurable? I may\nbe wrong, but I'd guess it's not measurable. In which case, what would\nbe the benefit?\n\n> Another question is whether this wouldn't hurt debugging, in that\n> dangling-pointer bugs would become harder to catch. We'd certainly\n> want to turn off the optimization in USE_VALGRIND builds, and maybe\n> we just shouldn't do it at all if USE_ASSERT_CHECKING.\n> \n> \t\t\tregards, tom lane\n\n+1 to disable this optimization in assert-enabled builds. I guess we'd\ninvent a new constant to disable it, and tie it to USE_ASSERT_CHECKING\n(similar to CLOBBER_FREED_MEMORY, for example).\n\nThinking about CLOBBER_FREED_MEMORY, could it be useful to still clobber\nthe memory, even if we don't actually reset the context?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 26 Feb 2024 10:01:31 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 2/25/24 17:29, Tom Lane wrote:\n>> Yeah. Also: once we had such an idea, it'd be very tempting to apply\n>> it to other frequently-reset contexts like the executor's per-tuple\n>> evaluation contexts. I'm not quite prepared to argue that\n>> MemoryContextReset should just act that way all the time ... but\n>> it's sure interesting to think about.\n\n> Do the context resets consume enough time to make this measurable?\n\nI think they do. We previously invented the \"isReset\" mechanism to\neliminate work in the case of exactly zero allocations since the\nlast reset, and that made a very measurable difference at the time,\neven though you'd think the amount of work saved would be negligible.\nThis idea seems like it might be able to supersede that one and win\nin a larger fraction of cases.\n\n> +1 to disable this optimization in assert-enabled builds. I guess we'd\n> invent a new constant to disable it, and tie it to USE_ASSERT_CHECKING\n> (similar to CLOBBER_FREED_MEMORY, for example).\n\n> Thinking about CLOBBER_FREED_MEMORY, could it be useful to still clobber\n> the memory, even if we don't actually reset the context?\n\nI think in any case where we were trying to support debugging, we'd\njust disable the optimization, so that reset always resets.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Feb 2024 10:20:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize planner memory consumption for huge arrays"
}
]
[
{
"msg_contents": "Hi,\n\nThis patch proposes the column \"comment\" to the pg_hba_file_rules view. \nIt basically parses the inline comment (if any) of a valid pg_hba.conf \nentry and displays it in the new column.\n\nFor such pg_hba entries ...\n\nhost db jim 127.0.0.1/32 md5 # foo\nhost db jim 127.0.0.1/32 md5 #bar\nhost db jim 127.0.0.1/32 md5 # #foo#\n\n... it returns the following pg_hba_file_rules records:\n\npostgres=# SELECT type, database, user_name, address, comment\n FROM pg_hba_file_rules\n WHERE user_name[1]='jim';\n\n type | database | user_name | address | comment\n------+----------+-----------+-----------+---------\n host | {db} | {jim} | 127.0.0.1 | foo\n host | {db} | {jim} | 127.0.0.1 | bar\n host | {db} | {jim} | 127.0.0.1 | #foo#\n(3 rows)\n\n\nThis feature can come in quite handy when we need to read important \ncomments from the hba entries without having access to the pg_hba.conf \nfile directly.\n\nThe patch slightly changes the test 004_file_inclusion.pl to accommodate \nthe new column and the hba comments.\n\nDiscussion: \nhttps://www.postgresql.org/message-id/flat/3fec6550-93b0-b542-b203-b0054aaee83b%40uni-muenster.de\n\nBest regards,\nJim",
"msg_date": "Mon, 4 Sep 2023 12:54:15 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "This is a very useful feature. I applied the patch to the master branch, \nand both make check and make check-world passed without any issues.\n\nJust one comment here, based on the example below,\n\n> host db jim 127.0.0.1/32 md5 # #foo#\n>\n> ... it returns the following pg_hba_file_rules records:\n>\n> postgres=# SELECT type, database, user_name, address, comment\n> FROM pg_hba_file_rules\n> WHERE user_name[1]='jim';\n>\n> type | database | user_name | address | comment\n> ------+----------+-----------+-----------+---------\n> host | {db} | {jim} | 127.0.0.1 | #foo#\n\nSince \"only the first #\" and \"any leading spaces\" are removed, IMO, it \ncan be more accurate to say,\n\nText after the first <literal>#</literal> comment character in the end \nof a valid <literal>pg_hba.conf</literal> entry, if any\n\n\nBest regards,\n\nDavid\n\n\n\n\n",
"msg_date": "Fri, 8 Sep 2023 16:52:03 -0700",
"msg_from": "David Zhang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "Hi David\n\nOn 09.09.23 01:52, David Zhang wrote:\n> This is a very useful feature. I applied the patch to the master \n> branch, and both make check and make check-world passed without any \n> issues.\n>\nThanks for reviewing this patch!\n\n>\n> Since \"only the first #\" and \"any leading spaces\" are removed, IMO, it \n> can be more accurate to say,\n>\n> Text after the first <literal>#</literal> comment character in the end \n> of a valid <literal>pg_hba.conf</literal> entry, if any\n>\nI agree.\n\nv2 attached includes your suggestion. Thanks!\n\nJim",
"msg_date": "Sat, 9 Sep 2023 22:36:10 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "On Mon, Sep 04, 2023 at 12:54:15PM +0200, Jim Jones wrote:\n> The patch slightly changes the test 004_file_inclusion.pl to accommodate the\n> new column and the hba comments.\n> \n> Discussion: https://www.postgresql.org/message-id/flat/3fec6550-93b0-b542-b203-b0054aaee83b%40uni-muenster.de\n\nWell, it looks like what I wrote a couple of days ago was perhaps\nconfusing:\nhttps://www.postgresql.org/message-id/ZPHAiNp%2ByKMsa/vc%40paquier.xyz\nhttps://www.postgresql.org/message-id/[email protected]\n\nThis patch touches hbafuncs.c and the system view pg_hba_file_rules,\nbut I don't think this stuff should touch any of these code paths.\nThat's what I meant in my second message: the SQL portion should be\nusable for all types of configuration files, even pg_ident.conf and\npostgresql.conf, and not only pg_hba.conf. A new SQL function\nreturning a SRF made of the comments extracted and the line numbers \ncan be joined with all the system views of the configuration files,\nlike sourcefile and sourceline in pg_settings, etc.\n--\nMichael",
"msg_date": "Mon, 11 Sep 2023 07:33:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "Hi\n\nOn 11.09.23 00:33, Michael Paquier wrote:\n> Well, it looks like what I wrote a couple of days ago was perhaps\n> confusing:\n> https://www.postgresql.org/message-id/ZPHAiNp%2ByKMsa/vc%40paquier.xyz\n> https://www.postgresql.org/message-id/[email protected]\n>\n> This patch touches hbafuncs.c and the system view pg_hba_file_rules,\n> but I don't think this stuff should touch any of these code paths.\n> That's what I meant in my second message: the SQL portion should be\n> usable for all types of configuration files, even pg_ident.conf and\n> postgresql.conf, and not only pg_hba.conf. A new SQL function\n> returning a SRF made of the comments extracted and the line numbers\n> can be joined with all the system views of the configuration files,\n> like sourcefile and sourceline in pg_settings, etc.\n> --\n> Michael\n\nThanks for the feedback.\n\nI indeed misunderstood what you meant in the other thread, as you \nexplicitly only mentioned hba.c.\n\nThe change to hbafunc.c was mostly a function call and a new column to \nthe view:\n\n\ncomment = GetInlineComment(hba->rawline);\nif(comment)\n values[index++] = CStringGetTextDatum(comment);\nelse\n nulls[index++] = true;\n\n\nJust to make sure I got what you have in mind: you suggest to read the \npg_hba.conf a second time via a new (generic) function like \npg_read_file() that returns line numbers and their contents (+comments), \nand the results of this new function would be joined pg_hba_file_rules \nin SQL. Is that correct?\n\nThanks\n\n\n\n",
"msg_date": "Thu, 14 Sep 2023 13:33:04 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 01:33:04PM +0200, Jim Jones wrote:\n> Just to make sure I got what you have in mind: you suggest to read the\n> pg_hba.conf a second time via a new (generic) function like pg_read_file()\n> that returns line numbers and their contents (+comments), and the results of\n> this new function would be joined pg_hba_file_rules in SQL. Is that correct?\n\nYes, my suggestion was to define a new set-returning function that\ntakes in input a file path and that returns as one row one comment and\nits line number from the configuration file.\n--\nMichael",
"msg_date": "Fri, 15 Sep 2023 08:28:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "On 15.09.23 01:28, Michael Paquier wrote:\n> Yes, my suggestion was to define a new set-returning function that\n> takes in input a file path and that returns as one row one comment and\n> its line number from the configuration file.\n> --\n> Michael\n\nThanks!\n\nIf reading the file again is an option, perhaps a simple SQL function \nwould suffice?\n\nSomething along these lines ..\n\nCREATE OR REPLACE FUNCTION pg_read_conf_comments(text)\nRETURNS TABLE (line_number int, comment text) AS $$\n SELECT lnum,\n trim(substring(line,\n nullif(strpos(line,'#'),0)+1,\n length(line)-strpos(line,'#')\n )) AS comment\n FROM unnest(string_to_array(pg_read_file($1),E'\\n'))\n WITH ORDINALITY hba(line,lnum)\n WHERE trim(line) !~~ '#%' AND trim(line) <> '';\n$$\nSTRICT LANGUAGE SQL ;\n\n\n.. then we could join it with pg_hba_file_rules (or any other conf file)\n\n\nSELECT type, database, user_name, address, c.comment\nFROM pg_hba_file_rules h, pg_read_conf_comments(h.file_name) c\nWHERE user_name[1]='jim' AND h.line_number = c.line_number ;\n\n type | database | user_name | address | comment\n------+----------+-----------+-----------+---------\n host | {db} | {jim} | 127.0.0.1 | foo\n host | {db} | {jim} | 127.0.0.1 | bar\n host | {db} | {jim} | 127.0.0.1 | #foo#\n(3 rows)\n\n\nIs it more or less what you had in mind?\n\n\n",
"msg_date": "Fri, 15 Sep 2023 09:37:23 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 09:37:23AM +0200, Jim Jones wrote:\n> SELECT type, database, user_name, address, c.comment\n> FROM pg_hba_file_rules h, pg_read_conf_comments(h.file_name) c\n> WHERE user_name[1]='jim' AND h.line_number = c.line_number ;\n> \n> type | database | user_name | address | comment\n> ------+----------+-----------+-----------+---------\n> host | {db} | {jim} | 127.0.0.1 | foo\n> host | {db} | {jim} | 127.0.0.1 | bar\n> host | {db} | {jim} | 127.0.0.1 | #foo#\n> (3 rows)\n> \n> \n> Is it more or less what you had in mind?\n\nThat was the idea. I forgot about strpos(), but if you do that, do we\nactually need a function in core to achieve that? There are a few\nfancy cases with the SQL function you have sent, actually.. strpos()\nwould grep the first '#' character, ignoring quoted areas.\n--\nMichael",
"msg_date": "Sat, 16 Sep 2023 13:18:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "Hi Michael\n\nOn 16.09.23 06:18, Michael Paquier wrote:\n> That was the idea. I forgot about strpos(), but if you do that, do we\n> actually need a function in core to achieve that? \nI guess it depends who you ask :) I personally think it would be a good \naddition to the view, as it would provide a more comprehensive look into \nthe hba file. Yes, the fact that it could possibly be written in SQL \nsounds silly, but it's IMHO still relevant to have it by default.\n> There are a few fancy cases with the SQL function you have sent, \n> actually.. strpos() would grep the first '#' character, ignoring \n> quoted areas.\n\nYes, you're totally right. I didn't take into account any token \nsurrounded by double quotes containing #.\n\nv3 attached addresses this issue.\n\n From the following hba:\n\n host db jim 192.168.10.1/32 md5 # foo\n host db jim 192.168.10.2/32 md5 #bar\n host db jim 192.168.10.3/32 md5 # #foo#\n host \"a#db\" \"a#user\" 192.168.10.4/32 md5 # fancy #hba entry\n\nWe can get these records from the view:\n\n SELECT type, database, user_name, address, comment\n FROM pg_hba_file_rules\n WHERE address ~~ '192.168.10.%';\n\n type | database | user_name | address | comment\n------+----------+-----------+--------------+------------------\n host | {db} | {jim} | 192.168.10.1 | foo\n host | {db} | {jim} | 192.168.10.2 | bar\n host | {db} | {jim} | 192.168.10.3 | #foo#\n host | {a#db} | {a#user} | 192.168.10.4 | fancy #hba entry\n\n\nI am still struggling to find a way to enable this function in separated \npath without having to read the conf file multiple times, or writing too \nmuch redundant code. How many other conf files do you think would profit \nfrom this feature?\n\nJim",
"msg_date": "Wed, 20 Sep 2023 00:29:27 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "On 04.09.23 11:54, Jim Jones wrote:\n> This patch proposes the column \"comment\" to the pg_hba_file_rules view. \n> It basically parses the inline comment (if any) of a valid pg_hba.conf \n> entry and displays it in the new column.\n> \n> For such pg_hba entries ...\n> \n> host db jim 127.0.0.1/32 md5 # foo\n> host db jim 127.0.0.1/32 md5 #bar\n> host db jim 127.0.0.1/32 md5 # #foo#\n\nI'm skeptical about this.\n\nFirst, there are multiple commenting styles. The end-of-line style is \nless common in my experience, because pg_hba.conf lines tend to belong. \nAnother style is\n\n# foo\nhost db jim 127.0.0.1/32 md5\n# bar\nhost db jim 127.0.0.1/32 md5\n\nor even as a block\n\n# foo and bar\nhost db jim 127.0.0.1/32 md5\nhost db jim 127.0.0.1/32 md5\n\nAnother potential problem is that maybe people don't want their comments \nleaked out of the file. Who knows what they have written in there.\n\nI think we should leave file comments be file comments. If we want some \nannotations to be exported to higher-level views, we should make that an \nintentional and explicit separate feature.\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 14:19:54 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "> On 26 Sep 2023, at 15:19, Peter Eisentraut <[email protected]> wrote:\n> \n> On 04.09.23 11:54, Jim Jones wrote:\n>> This patch proposes the column \"comment\" to the pg_hba_file_rules view. It basically parses the inline comment (if any) of a valid pg_hba.conf entry and displays it in the new column.\n>> For such pg_hba entries ...\n>> host db jim 127.0.0.1/32 md5 # foo\n>> host db jim 127.0.0.1/32 md5 #bar\n>> host db jim 127.0.0.1/32 md5 # #foo#\n> \n> I'm skeptical about this.\n> \n> First, there are multiple commenting styles. The end-of-line style is less common in my experience, because pg_hba.conf lines tend to belong. Another style is\n> \n> # foo\n> host db jim 127.0.0.1/32 md5\n> # bar\n> host db jim 127.0.0.1/32 md5\n> \n> or even as a block\n> \n> # foo and bar\n> host db jim 127.0.0.1/32 md5\n> host db jim 127.0.0.1/32 md5\n\nOr even a more complicated one (which I've seen variants of in production)\nwhere only horizontal whitespace separates two subsequent lines of comments:\n\n# Block comment\nhost db jim 127.0.0.1/32 md5 #end of line multi-\n #line comment\n# A new block comment directly following\nhost db jim 127.0.0.1/32 md5\n\n> I think we should leave file comments be file comments. If we want some annotations to be exported to higher-level views, we should make that an intentional and explicit separate feature.\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 15:55:31 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "Also a reluctant -1, as the comment-at-EOL style is very rare in my\nexperience over the years of seeing many a pg_hba file.\n\nAlso a reluctant -1, as the comment-at-EOL style is very rare in my experience over the years of seeing many a pg_hba file.",
"msg_date": "Tue, 26 Sep 2023 11:55:08 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "Hi!\n\nOn 26.09.23 15:19, Peter Eisentraut wrote:\n> On 04.09.23 11:54, Jim Jones wrote:\n>> This patch proposes the column \"comment\" to the pg_hba_file_rules \n>> view. It basically parses the inline comment (if any) of a valid \n>> pg_hba.conf entry and displays it in the new column.\n>>\n>> For such pg_hba entries ...\n>>\n>> host db jim 127.0.0.1/32 md5 # foo\n>> host db jim 127.0.0.1/32 md5 #bar\n>> host db jim 127.0.0.1/32 md5 # #foo#\n>\n> I'm skeptical about this.\n>\n> First, there are multiple commenting styles. The end-of-line style is \n> less common in my experience, because pg_hba.conf lines tend to \n> belong. Another style is\n>\n> # foo\n> host db jim 127.0.0.1/32 md5\n> # bar\n> host db jim 127.0.0.1/32 md5\n>\n> or even as a block\n>\n> # foo and bar\n> host db jim 127.0.0.1/32 md5\n> host db jim 127.0.0.1/32 md5\n>\n> Another potential problem is that maybe people don't want their \n> comments leaked out of the file. Who knows what they have written in \n> there.\n\nI also considered this for a while. That's why I suggested only inline \ncomments. On a second thought, I agree that making only certain types of \ncomments \"accessible\" by the pg_hba_file_rules view can be misleading \nand can possibly leak sensible info if misused.\n\n>\n> I think we should leave file comments be file comments. If we want \n> some annotations to be exported to higher-level views, we should make \n> that an intentional and explicit separate feature.\n\nMy first suggestion [1] was to use a different character (other than \n'#'), but a good point was made, that it would add more complexity to \nthe hba.c, which is already complex enough.\nMy main motivation with this feature is to be able to annotate pg_hba \nentries in a way that it can be read using the pg_hba_file_rule via SQL \n- these annotations might contain information like tags, client \n(application) names or any relevant info regarding the granted access. 
\nThis info would help me to generate some reports that contain client \naccess information. I can sort of achieve something similar using \npg_read_file(),[2] but I thought it would be nice to have it directly \nfrom the view.\n\nDo you think that this feature is in general not a good idea? Or perhaps \na different annotation method would address your concerns?\n\nThank you very much for taking a look into it!\nJim\n\n\n\n1- \nhttps://www.postgresql.org/message-id/flat/[email protected]\n2- \nhttps://www.postgresql.org/message-id/b63625ca-580f-14dc-7e7c-f90cd4d95cf7%40uni-muenster.de\n\n\n",
"msg_date": "Tue, 26 Sep 2023 20:40:52 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "> On 26 Sep 2023, at 20:40, Jim Jones <[email protected]> wrote:\n\n> Do you think that this feature is in general not a good idea?\n\nI wouldn't rule it out as a bad idea per se. As always when dealing with\naccess rules and pg_hba there is a security angle to consider, but I think that\ncould be addressed.\n\n> Or perhaps a different annotation method would address your concerns?\n\nAn annotation syntax specifically for this would address my concern, but the\nargument that pg_hba (and related code) is border-line too complicated as it is\ndoes hold some water. Complexity in code can lead to bugs, but complexity in\nsyntax can lead to misconfigurations or unintentional infosec leaks which is\nusually more problematic.\n\nI would propose to not worry about code and instead just discuss a potential\nnew format for annotations, and only implement parsing and handling once\nsomething has been agreed upon. This should be in a new thread however to\nensure visibility, since it's beyond the subject of this thread.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 27 Sep 2023 10:21:29 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
},
{
"msg_contents": "Hi Daniel\n\nOn 27.09.23 10:21, Daniel Gustafsson wrote:\n> An annotation syntax specifically for this would address my concern, \n> but the\n> argument that pg_hba (and related code) is border-line too complicated as it is\n> does hold some water. Complexity in code can lead to bugs, but complexity in\n> syntax can lead to misconfigurations or unintentional infosec leaks which is\n> usually more problematic.\nYeah, that's why the possibility to use the normal comments for this \nfeature seemed at first so appealing :)\n> I would propose to not worry about code and instead just discuss a potential\n> new format for annotations, and only implement parsing and handling once\n> something has been agreed upon. This should be in a new thread however to\n> ensure visibility, since it's beyond the subject of this thread.\n\nSounds good! I will open a new thread as soon as I get back home, so \nthat we can collect some ideas.\n\nThanks\n\nJim\n\n\n\n",
"msg_date": "Thu, 28 Sep 2023 11:55:58 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add inline comments to the pg_hba_file_rules view"
}
] |
[
{
"msg_contents": "Hi,\n\nI realized that I forgot to add the new extra test to my test scripts.\nSo, I thought maybe we can use shorthand for including all extra\ntests. With that, running a full testsuite is easier without having to\nkeep up with new tests and updates.\n\nI created an 'all' option for PG_TEST_EXTRA to enable all test suites\ndefined under PG_TEST_EXTRA. I created the check_extra_tests_enabled()\nfunction in the Test/Utils.pm file. This function takes the test's\nname as an input and checks if PG_TEST_EXTRA contains 'all' or this\ntest's name.\n\nI thought another advantage could be that this can be used in CI. But\nwhen 'wal_consistency_checking' is enabled, CI times get longer since\nit does resource intensive operations.\n\nAny kind of feedback would be appreciated.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Mon, 4 Sep 2023 17:43:49 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Create shorthand for including all extra tests"
},
{
"msg_contents": "Nazir Bilal Yavuz <[email protected]> writes:\n> I created an 'all' option for PG_TEST_EXTRA to enable all test suites\n> defined under PG_TEST_EXTRA.\n\nI think this is a seriously bad idea. The entire point of not including\ncertain tests in check-world by default is that the omitted tests are\nsecurity hazards, so a developer or buildfarm owner should review each\none before deciding whether to activate it on their machine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Sep 2023 11:01:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create shorthand for including all extra tests"
},
{
"msg_contents": "> On 4 Sep 2023, at 17:01, Tom Lane <[email protected]> wrote:\n> \n> Nazir Bilal Yavuz <[email protected]> writes:\n>> I created an 'all' option for PG_TEST_EXTRA to enable all test suites\n>> defined under PG_TEST_EXTRA.\n> \n> I think this is a seriously bad idea. The entire point of not including\n> certain tests in check-world by default is that the omitted tests are\n> security hazards, so a developer or buildfarm owner should review each\n> one before deciding whether to activate it on their machine.\n\nI dunno, I've certainly managed to not run the tests I hoped to multiple times.\nI think it could be useful for sandboxed testrunners which are destroyed after\neach run. There is for sure a foot-gun angle to it, no question about that.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 4 Sep 2023 20:16:44 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create shorthand for including all extra tests"
},
{
"msg_contents": "On Mon, Sep 04, 2023 at 08:16:44PM +0200, Daniel Gustafsson wrote:\n> > On 4 Sep 2023, at 17:01, Tom Lane <[email protected]> wrote:\n> > Nazir Bilal Yavuz <[email protected]> writes:\n> >> I created an 'all' option for PG_TEST_EXTRA to enable all test suites\n> >> defined under PG_TEST_EXTRA.\n> > \n> > I think this is a seriously bad idea. The entire point of not including\n> > certain tests in check-world by default is that the omitted tests are\n> > security hazards, so a developer or buildfarm owner should review each\n> > one before deciding whether to activate it on their machine.\n> \n> I dunno, I've certainly managed to not run the tests I hoped to multiple times.\n> I think it could be useful for sandboxed testrunners which are destroyed after\n> each run. There is for sure a foot-gun angle to it, no question about that.\n\nOther than PG_TEST_EXTRA=wal_consistency_checking, they have the same hazard:\nthey treat the loopback interface as private, so anyone with access to\nloopback interface ports can hijack the test. I'd be fine with e.g.\nPG_TEST_EXTRA=private-lo activating all of those. We don't gain by inviting\nthe tester to review the tests to rediscover this common factor.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 11:41:12 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create shorthand for including all extra tests"
},
{
"msg_contents": "Noah Misch <[email protected]> writes:\n> On Mon, Sep 04, 2023 at 08:16:44PM +0200, Daniel Gustafsson wrote:\n>> On 4 Sep 2023, at 17:01, Tom Lane <[email protected]> wrote:\n>>> I think this is a seriously bad idea. The entire point of not including\n>>> certain tests in check-world by default is that the omitted tests are\n>>> security hazards, so a developer or buildfarm owner should review each\n>>> one before deciding whether to activate it on their machine.\n\n> Other than PG_TEST_EXTRA=wal_consistency_checking, they have the same hazard:\n> they treat the loopback interface as private, so anyone with access to\n> loopback interface ports can hijack the test. I'd be fine with e.g.\n> PG_TEST_EXTRA=private-lo activating all of those. We don't gain by inviting\n> the tester to review the tests to rediscover this common factor.\n\nYeah, I could live with something like that from the security standpoint.\nNot sure if it helps Nazir's use-case though. Maybe we could invent\ncategories that can be used in place of individual test names?\nFor now,\n\n\tPG_TEST_EXTRA=\"needs-private-lo slow\"\n\nwould cover the territory of \"all\", and I think it'd be very seldom\nthat we'd have to invent new categories here (though maybe I lack\nimagination today).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Sep 2023 16:30:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create shorthand for including all extra tests"
},
{
"msg_contents": "On Mon, Sep 04, 2023 at 04:30:31PM -0400, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n> > On Mon, Sep 04, 2023 at 08:16:44PM +0200, Daniel Gustafsson wrote:\n> >> On 4 Sep 2023, at 17:01, Tom Lane <[email protected]> wrote:\n> >>> I think this is a seriously bad idea. The entire point of not including\n> >>> certain tests in check-world by default is that the omitted tests are\n> >>> security hazards, so a developer or buildfarm owner should review each\n> >>> one before deciding whether to activate it on their machine.\n> \n> > Other than PG_TEST_EXTRA=wal_consistency_checking, they have the same hazard:\n> > they treat the loopback interface as private, so anyone with access to\n> > loopback interface ports can hijack the test. I'd be fine with e.g.\n> > PG_TEST_EXTRA=private-lo activating all of those. We don't gain by inviting\n> > the tester to review the tests to rediscover this common factor.\n> \n> Yeah, I could live with something like that from the security standpoint.\n> Not sure if it helps Nazir's use-case though. Maybe we could invent\n> categories that can be used in place of individual test names?\n> For now,\n> \n> \tPG_TEST_EXTRA=\"needs-private-lo slow\"\n> \n> would cover the territory of \"all\", and I think it'd be very seldom\n> that we'd have to invent new categories here (though maybe I lack\n> imagination today).\n\nI could imagine categories for filesystem bytes and RAM bytes. Also, while\nneeds-private-lo has a bounded definition, \"slow\" doesn't. If today's one\n\"slow\" test increases check-world duration by 1.1x, we may not let a\n100x-increase test use the same keyword.\n\nIf one introduced needs-private-lo, the present spelling of \"all\" would be\n\"needs-private-lo wal_consistency_checking\". Looks okay to me. Doing nothing\nhere wouldn't be ruinous, of course.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 14:09:06 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create shorthand for including all extra tests"
},
{
"msg_contents": "Hi,\n\nThanks for the feedback! I updated the patch, 'needs-private-lo'\noption enables kerberos, ldap, load_balance and ssl extra tests now.\n\n> On Mon, Sep 04, 2023 at 04:30:31PM -0400, Tom Lane wrote:\n> > Yeah, I could live with something like that from the security standpoint.\n> > Not sure if it helps Nazir's use-case though. Maybe we could invent\n> > categories that can be used in place of individual test names?\n> > For now,\n\nYes, that is not ideal for my use-case but still better.\n\nOn Tue, 5 Sept 2023 at 00:09, Noah Misch <[email protected]> wrote:\n>\n> I could imagine categories for filesystem bytes and RAM bytes. Also, while\n> needs-private-lo has a bounded definition, \"slow\" doesn't. If today's one\n> \"slow\" test increases check-world duration by 1.1x, we may not let a\n> 100x-increase test use the same keyword.\n\nI agree. I didn't create a new category as 'slow' but still open to suggestions.\n\nI am not very familiar with perl syntax, I would like to hear your\nopinions on how the implementation of the check_extra_text_enabled()\nfunction could be done better.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Tue, 5 Sep 2023 20:26:20 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Create shorthand for including all extra tests"
},
{
"msg_contents": "> On 4 Sep 2023, at 23:09, Noah Misch <[email protected]> wrote:\n\n> I could imagine categories for filesystem bytes and RAM bytes. Also, while\n> needs-private-lo has a bounded definition, \"slow\" doesn't. If today's one\n> \"slow\" test increases check-world duration by 1.1x, we may not let a\n> 100x-increase test use the same keyword.\n\nAgreed, the names should be descriptive enough to contain a boundary. Any new\ntest which is orders of magnitude slower than an existing test suite most\nlikely will have one/more boundary characteristics not shared with existing\nsuites. The test in [email protected] for\nautovacuum wraparound comes to mind as one that would warrant a new category.\n\n> If one introduced needs-private-lo, the present spelling of \"all\" would be\n> \"needs-private-lo wal_consistency_checking\".\n\nI think it makes sense to invent a new PG_TEST_EXTRA category which (for now)\nonly contains wal_consistency_checking to make it consistent, such that \"all\"\ncan be achieved by a set of categories.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 7 Sep 2023 11:01:24 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create shorthand for including all extra tests"
},
{
"msg_contents": "On 04.09.23 22:30, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n>> On Mon, Sep 04, 2023 at 08:16:44PM +0200, Daniel Gustafsson wrote:\n>>> On 4 Sep 2023, at 17:01, Tom Lane <[email protected]> wrote:\n>>>> I think this is a seriously bad idea. The entire point of not including\n>>>> certain tests in check-world by default is that the omitted tests are\n>>>> security hazards, so a developer or buildfarm owner should review each\n>>>> one before deciding whether to activate it on their machine.\n> \n>> Other than PG_TEST_EXTRA=wal_consistency_checking, they have the same hazard:\n>> they treat the loopback interface as private, so anyone with access to\n>> loopback interface ports can hijack the test. I'd be fine with e.g.\n>> PG_TEST_EXTRA=private-lo activating all of those. We don't gain by inviting\n>> the tester to review the tests to rediscover this common factor.\n> \n> Yeah, I could live with something like that from the security standpoint.\n> Not sure if it helps Nazir's use-case though. Maybe we could invent\n> categories that can be used in place of individual test names?\n> For now,\n> \n> \tPG_TEST_EXTRA=\"needs-private-lo slow\"\n> \n> would cover the territory of \"all\", and I think it'd be very seldom\n> that we'd have to invent new categories here (though maybe I lack\n> imagination today).\n\nAt least the kerberos tests also appear to require a lot of randomness \nfor their setup, and sometimes in VM environments they hang for minutes \nuntil they get that. I suppose that would go under \"slow\".\n\nAlso, at least in my mind, when we added the kerberos and ldap tests, a \npartial reason for excluding them from the default run was \"requires \nadditional unusual software to be installed\". The additional kerberos \nand ldap server software used in those tests is not covered by \nconfigure/meson, so it's a bit more DIY.\n\n\n\n\n",
"msg_date": "Tue, 12 Sep 2023 10:00:48 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create shorthand for including all extra tests"
},
{
"msg_contents": "On 05.09.23 19:26, Nazir Bilal Yavuz wrote:\n> Thanks for the feedback! I updated the patch, 'needs-private-lo'\n> option enables kerberos, ldap, load_balance and ssl extra tests now.\n\nAs was discussed, I don't think \"needs private lo\" is the only condition \nfor these tests. At least kerberos and ldap also need extra software \ninstalled, and load_balance might need editing the system's hosts file. \nSo someone would still need to familiarize themselves with these tests \nindividually before setting a global option like this.\n\nAlso, if we were to create test groupings like this, I think the \nimplementation should be different. The way you have it, there is a \nsort of central registry of all affected tests in \nsrc/test/perl/PostgreSQL/Test/Utils.pm and a mapping of groups to tests. \n I would prefer a more decentralized approach where each test decides \non its own whether to run, with pseudo-code conditionals like\n\nif (!(PG_TEST_EXTRA contains \"ldap\" or PG_TEST_EXTRA contains \n\"needs-private-lo\"))\n skip_all\n\nAnyway, at the moment, I don't see a sensible way to group these things \nbeyond what we have now (effectively, \"ldap\" is already a group, because \nit affects more than one test suite). Right now, we have six possible \nvalues, which is probably just about doable to keep track of manually. \nIf we get a lot more, then we need to look into this again, but maybe \nthen we'll also have more patterns to group things around.\n\n\n\n",
"msg_date": "Wed, 10 Jan 2024 21:48:44 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create shorthand for including all extra tests"
},
{
"msg_contents": "Hi,\n\nOn Wed, 10 Jan 2024 at 23:48, Peter Eisentraut <[email protected]> wrote:\n>\n> On 05.09.23 19:26, Nazir Bilal Yavuz wrote:\n> > Thanks for the feedback! I updated the patch, 'needs-private-lo'\n> > option enables kerberos, ldap, load_balance and ssl extra tests now.\n>\n> As was discussed, I don't think \"needs private lo\" is the only condition\n> for these tests. At least kerberos and ldap also need extra software\n> installed, and load_balance might need editing the system's hosts file.\n> So someone would still need to familiarize themselves with these tests\n> individually before setting a global option like this.\n>\n> Also, if we were to create test groupings like this, I think the\n> implementation should be different. The way you have it, there is a\n> sort of central registry of all affected tests in\n> src/test/perl/PostgreSQL/Test/Utils.pm and a mapping of groups to tests.\n> I would prefer a more decentralized approach where each test decides\n> on its own whether to run, with pseudo-code conditionals like\n>\n> if (!(PG_TEST_EXTRA contains \"ldap\" or PG_TEST_EXTRA contains\n> \"needs-private-lo\"))\n> skip_all\n>\n> Anyway, at the moment, I don't see a sensible way to group these things\n> beyond what we have now (effectively, \"ldap\" is already a group, because\n> it affects more than one test suite). Right now, we have six possible\n> values, which is probably just about doable to keep track of manually.\n> If we get a lot more, then we need to look into this again, but maybe\n> then we'll also have more patterns to group things around.\n\nI see your point. It looks like the best option is to reevaluate this\nif there are more PG_TEST_EXTRA options.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Mon, 15 Jan 2024 11:54:19 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Create shorthand for including all extra tests"
},
{
"msg_contents": "On 15.01.24 09:54, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> On Wed, 10 Jan 2024 at 23:48, Peter Eisentraut <[email protected]> wrote:\n>>\n>> On 05.09.23 19:26, Nazir Bilal Yavuz wrote:\n>>> Thanks for the feedback! I updated the patch, 'needs-private-lo'\n>>> option enables kerberos, ldap, load_balance and ssl extra tests now.\n>>\n>> As was discussed, I don't think \"needs private lo\" is the only condition\n>> for these tests. At least kerberos and ldap also need extra software\n>> installed, and load_balance might need editing the system's hosts file.\n>> So someone would still need to familiarize themselves with these tests\n>> individually before setting a global option like this.\n>>\n>> Also, if we were to create test groupings like this, I think the\n>> implementation should be different. The way you have it, there is a\n>> sort of central registry of all affected tests in\n>> src/test/perl/PostgreSQL/Test/Utils.pm and a mapping of groups to tests.\n>> I would prefer a more decentralized approach where each test decides\n>> on its own whether to run, with pseudo-code conditionals like\n>>\n>> if (!(PG_TEST_EXTRA contains \"ldap\" or PG_TEST_EXTRA contains\n>> \"needs-private-lo\"))\n>> skip_all\n>>\n>> Anyway, at the moment, I don't see a sensible way to group these things\n>> beyond what we have now (effectively, \"ldap\" is already a group, because\n>> it affects more than one test suite). Right now, we have six possible\n>> values, which is probably just about doable to keep track of manually.\n>> If we get a lot more, then we need to look into this again, but maybe\n>> then we'll also have more patterns to group things around.\n> \n> I see your point. It looks like the best option is to reevaluate this\n> if there are more PG_TEST_EXTRA options.\n\nOk, I'm closing this commitfest entry.\n\n\n\n",
"msg_date": "Sat, 20 Jan 2024 09:22:55 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create shorthand for including all extra tests"
}
] |
[
{
"msg_contents": "Hi,\nI used backtrace_functions to debug one of my ideas and found its behavior counter-intuitive and contradictory to it own docs. I think the GUC is supposed to be used to dump backtrace only on elog(ERROR) (should it also be done for higher levels? not sure about this), but, in fact, it does that for any log-level. I have attached a patch that checks log-level before attaching backtrace.\n\nRegards,\nIlya",
"msg_date": "Mon, 4 Sep 2023 21:30:32 +0100",
"msg_from": "Ilya Gladyshev <[email protected]>",
"msg_from_op": true,
"msg_subject": "backtrace_functions emits trace for any elog"
},
{
"msg_contents": "On Mon, Sep 04, 2023 at 09:30:32PM +0100, Ilya Gladyshev wrote:\n> I used backtrace_functions to debug one of my ideas and found its behavior counter-intuitive and contradictory to it own docs. I think the GUC is supposed to be used to dump backtrace only on elog(ERROR) (should it also be done for higher levels? not sure about this), but, in fact, it does that for any log-level. I have attached a patch that checks log-level before attaching backtrace.\n\nThis would make the feature much less useful. Better to change the docs.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 14:13:18 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: backtrace_functions emits trace for any elog"
}
] |
[
{
"msg_contents": "Thank you for your response. It is evident that there is a need\nfor this features in our system.\nFirstly, our customers express their desire to utilize tablespaces\nfor table management, without necessarily being concerned about\nthe directory location of these tablespaces.\nSecondly, currently PG only supports absolute-path tablespaces, but\nin-place tablespaces are very likely to become popular in the future.\nTherefore, it is essential to incorporate support for in-place\ntablespaces in the pg_upgrade feature. I intend to implement\nthis functionality in our system to accommodate our customers'\nrequirements.\nIt would be highly appreciated if the official PG could also\nincorporate support for this feature.\n--\nBest regards,\nRui Zhao\n------------------------------------------------------------------\nFrom:Michael Paquier <[email protected]>\nSent At:2023 Sep. 1 (Fri.) 12:58\nTo:Mark <[email protected]>\nCc:pgsql-hackers <[email protected]>\nSubject:Re: pg_upgrade fails with in-place tablespace[\nOn Sat, Aug 19, 2023 at 08:11:28PM +0800, Rui Zhao wrote:\n> Please refer to the TAP test I have included for a better understanding\n> of my suggestion.\nSure, but it seems to me that my original question is not really\nanswered: what's your use case for being able to support in-place\ntablespaces in pg_upgrade? The original use case being such\ntablespaces is to ease the introduction of tests with primaries and\nstandbys, which is not something that really applies to pg_upgrade,\nno? 
Perhaps there is meaning in having more TAP tests with\ntablespaces and a combination of primary/standbys, still having\nin-place tablespaces does not really make much sense to me because, as\nthese are in the data folder, we don't use them to test the path\nre-creation logic.\nI think that we should just add a routine in check.c that scans\npg_tablespace, reporting all the non-absolute paths found with their\nassociated tablespace names.\n--\nMichael",
"msg_date": "Tue, 05 Sep 2023 10:06:55 +0800",
"msg_from": "\"Rui Zhao\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmU6IHBnX3VwZ3JhZGUgZmFpbHMgd2l0aCBpbi1wbGFjZSB0YWJsZXNwYWNl?="
}
] |
[
{
"msg_contents": "Hi all,\n\n I recently run benchmark[1] on master, but I found performance problem\nas below:\n\nexplain analyze select\n subq_0.c0 as c0,\n subq_0.c1 as c1,\n subq_0.c2 as c2\nfrom\n (select\n ref_0.l_shipmode as c0,\n sample_0.l_orderkey as c1,\n sample_0.l_quantity as c2,\n ref_0.l_orderkey as c3,\n sample_0.l_shipmode as c5,\n ref_0.l_shipinstruct as c6\n from\n public.lineitem as ref_0\n left join public.lineitem as sample_0\n on ((select p_partkey from public.part order by p_partkey limit 1)\n is not NULL)\n where sample_0.l_orderkey is NULL) as subq_0\nwhere subq_0.c5 is NULL\nlimit 1;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=78.00..45267050.75 rows=1 width=27) (actual\ntime=299695.097..299695.099 rows=0 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=78.00..78.00 rows=1 width=8) (actual\ntime=0.651..0.652 rows=1 loops=1)\n -> Sort (cost=78.00..83.00 rows=2000 width=8) (actual\ntime=0.650..0.651 rows=1 loops=1)\n Sort Key: part.p_partkey\n Sort Method: top-N heapsort Memory: 25kB\n -> Seq Scan on part (cost=0.00..68.00 rows=2000 width=8)\n(actual time=0.013..0.428 rows=2000 loops=1)\n -> Nested Loop Left Join (cost=0.00..45266972.75 rows=1 width=27)\n(actual time=299695.096..299695.096 rows=0 loops=1)\n Join Filter: ($0 IS NOT NULL)\n Filter: ((sample_0.l_orderkey IS NULL) AND (sample_0.l_shipmode IS\nNULL))\n Rows Removed by Filter: 3621030625\n -> Seq Scan on lineitem ref_0 (cost=0.00..1969.75 rows=60175\nwidth=11) (actual time=0.026..6.225 rows=60175 loops=1)\n -> Materialize (cost=0.00..2270.62 rows=60175 width=27) (actual\ntime=0.000..2.554 rows=60175 loops=60175)\n -> Seq Scan on lineitem sample_0 (cost=0.00..1969.75\nrows=60175 width=27) (actual time=0.004..8.169 rows=60175 loops=1)\n Planning Time: 0.172 ms\n Execution Time: 299695.501 ms\n(16 rows)\n\nAfter I set enable_material to off, the same query 
run faster, as below:\nset enable_material = off;\nexplain analyze select\n subq_0.c0 as c0,\n subq_0.c1 as c1,\n subq_0.c2 as c2\nfrom\n (select\n ref_0.l_shipmode as c0,\n sample_0.l_orderkey as c1,\n sample_0.l_quantity as c2,\n ref_0.l_orderkey as c3,\n sample_0.l_shipmode as c5,\n ref_0.l_shipinstruct as c6\n from\n public.lineitem as ref_0\n left join public.lineitem as sample_0\n on ((select p_partkey from public.part order by p_partkey limit 1)\n is not NULL)\n where sample_0.l_orderkey is NULL) as subq_0\nwhere subq_0.c5 is NULL\nlimit 1;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1078.00..91026185.57 rows=1 width=27) (actual\ntime=192669.605..192670.425 rows=0 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=78.00..78.00 rows=1 width=8) (actual\ntime=0.662..0.663 rows=1 loops=1)\n -> Sort (cost=78.00..83.00 rows=2000 width=8) (actual\ntime=0.661..0.662 rows=1 loops=1)\n Sort Key: part.p_partkey\n Sort Method: top-N heapsort Memory: 25kB\n -> Seq Scan on part (cost=0.00..68.00 rows=2000 width=8)\n(actual time=0.017..0.430 rows=2000 loops=1)\n -> Gather (cost=1000.00..91026107.57 rows=1 width=27) (actual\ntime=192669.604..192670.422 rows=0 loops=1)\n Workers Planned: 1\n Params Evaluated: $0\n Workers Launched: 1\n -> Nested Loop Left Join (cost=0.00..91025107.47 rows=1\nwidth=27) (actual time=192588.143..192588.144 rows=0 loops=2)\n Join Filter: ($0 IS NOT NULL)\n Filter: ((sample_0.l_orderkey IS NULL) AND\n(sample_0.l_shipmode IS NULL))\n Rows Removed by Filter: 1810515312\n -> Parallel Seq Scan on lineitem ref_0 (cost=0.00..1721.97\nrows=35397 width=11) (actual time=0.007..3.797 rows=30088 loops=2)\n -> Seq Scan on lineitem sample_0 (cost=0.00..1969.75\nrows=60175 width=27) (actual time=0.000..2.637 rows=60175 loops=60175)\n Planning Time: 0.174 ms\n Execution Time: 192670.458 ms\n(19 rows)\n\nI debug the code and find 
consider_parallel_nestloop() doesn't consider\nmaterialized form of the cheapest inner path.\nWhen enable_material = true, we can see Material path won in first plan,\nbut Parallel Seq Scan node doesn't add as outer path, which because\nin try_partial_nestloop_path() , the cost of nestloop wat computed using\nseq scan path not material path.\n\n[1] include test table schema and data, you can repeat above problem.\n\nI try fix this problem in attached patch, and I found pg12.12 also had this\nissue. Please review my patch, thanks!\n\n[1] https://github.com/tenderwg/tpch_test",
"msg_date": "Tue, 5 Sep 2023 16:52:35 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "After using patch, the result as below :\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1078.00..26630101.20 rows=1 width=27) (actual\ntime=160571.005..160571.105 rows=0 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=78.00..78.00 rows=1 width=8) (actual\ntime=1.065..1.066 rows=1 loops=1)\n -> Sort (cost=78.00..83.00 rows=2000 width=8) (actual\ntime=1.064..1.065 rows=1 loops=1)\n Sort Key: part.p_partkey\n Sort Method: top-N heapsort Memory: 25kB\n -> Seq Scan on part (cost=0.00..68.00 rows=2000 width=8)\n(actual time=0.046..0.830 rows=2000 loops=1)\n -> Gather (cost=1000.00..26630023.20 rows=1 width=27) (actual\ntime=160571.003..160571.102 rows=0 loops=1)\n Workers Planned: 1\n Params Evaluated: $0\n Workers Launched: 1\n -> Nested Loop Left Join (cost=0.00..26629023.10 rows=1\nwidth=27) (actual time=160549.257..160549.258 rows=0 loops=2)\n Join Filter: ($0 IS NOT NULL)\n Filter: ((sample_0.l_orderkey IS NULL) AND\n(sample_0.l_shipmode IS NULL))\n Rows Removed by Filter: 1810515312\n -> Parallel Seq Scan on lineitem ref_0 (cost=0.00..1721.97\nrows=35397 width=11) (actual time=0.010..3.393 rows=30088 loops=2)\n -> Materialize (cost=0.00..2270.62 rows=60175 width=27)\n(actual time=0.000..2.839 rows=60175 loops=60175)\n -> Seq Scan on lineitem sample_0 (cost=0.00..1969.75\nrows=60175 width=27) (actual time=0.008..11.381 rows=60175 loops=2)\n Planning Time: 0.174 ms\n Execution Time: 160571.476 ms\n(20 rows)\n\ntender wang <[email protected]> 于2023年9月5日周二 16:52写道:\n\n> Hi all,\n>\n> I recently run benchmark[1] on master, but I found performance problem\n> as below:\n>\n> explain analyze select\n> subq_0.c0 as c0,\n> subq_0.c1 as c1,\n> subq_0.c2 as c2\n> from\n> (select\n> ref_0.l_shipmode as c0,\n> sample_0.l_orderkey as c1,\n> sample_0.l_quantity as c2,\n> ref_0.l_orderkey as c3,\n> sample_0.l_shipmode as c5,\n> 
ref_0.l_shipinstruct as c6\n> from\n> public.lineitem as ref_0\n> left join public.lineitem as sample_0\n> on ((select p_partkey from public.part order by p_partkey limit\n> 1)\n> is not NULL)\n> where sample_0.l_orderkey is NULL) as subq_0\n> where subq_0.c5 is NULL\n> limit 1;\n> QUERY PLAN\n>\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=78.00..45267050.75 rows=1 width=27) (actual\n> time=299695.097..299695.099 rows=0 loops=1)\n> InitPlan 1 (returns $0)\n> -> Limit (cost=78.00..78.00 rows=1 width=8) (actual\n> time=0.651..0.652 rows=1 loops=1)\n> -> Sort (cost=78.00..83.00 rows=2000 width=8) (actual\n> time=0.650..0.651 rows=1 loops=1)\n> Sort Key: part.p_partkey\n> Sort Method: top-N heapsort Memory: 25kB\n> -> Seq Scan on part (cost=0.00..68.00 rows=2000\n> width=8) (actual time=0.013..0.428 rows=2000 loops=1)\n> -> Nested Loop Left Join (cost=0.00..45266972.75 rows=1 width=27)\n> (actual time=299695.096..299695.096 rows=0 loops=1)\n> Join Filter: ($0 IS NOT NULL)\n> Filter: ((sample_0.l_orderkey IS NULL) AND (sample_0.l_shipmode\n> IS NULL))\n> Rows Removed by Filter: 3621030625\n> -> Seq Scan on lineitem ref_0 (cost=0.00..1969.75 rows=60175\n> width=11) (actual time=0.026..6.225 rows=60175 loops=1)\n> -> Materialize (cost=0.00..2270.62 rows=60175 width=27) (actual\n> time=0.000..2.554 rows=60175 loops=60175)\n> -> Seq Scan on lineitem sample_0 (cost=0.00..1969.75\n> rows=60175 width=27) (actual time=0.004..8.169 rows=60175 loops=1)\n> Planning Time: 0.172 ms\n> Execution Time: 299695.501 ms\n> (16 rows)\n>\n> After I set enable_material to off, the same query run faster, as below:\n> set enable_material = off;\n> explain analyze select\n> subq_0.c0 as c0,\n> subq_0.c1 as c1,\n> subq_0.c2 as c2\n> from\n> (select\n> ref_0.l_shipmode as c0,\n> sample_0.l_orderkey as c1,\n> sample_0.l_quantity as c2,\n> ref_0.l_orderkey as c3,\n> sample_0.l_shipmode 
as c5,\n> ref_0.l_shipinstruct as c6\n> from\n> public.lineitem as ref_0\n> left join public.lineitem as sample_0\n> on ((select p_partkey from public.part order by p_partkey limit\n> 1)\n> is not NULL)\n> where sample_0.l_orderkey is NULL) as subq_0\n> where subq_0.c5 is NULL\n> limit 1;\n> QUERY\n> PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=1078.00..91026185.57 rows=1 width=27) (actual\n> time=192669.605..192670.425 rows=0 loops=1)\n> InitPlan 1 (returns $0)\n> -> Limit (cost=78.00..78.00 rows=1 width=8) (actual\n> time=0.662..0.663 rows=1 loops=1)\n> -> Sort (cost=78.00..83.00 rows=2000 width=8) (actual\n> time=0.661..0.662 rows=1 loops=1)\n> Sort Key: part.p_partkey\n> Sort Method: top-N heapsort Memory: 25kB\n> -> Seq Scan on part (cost=0.00..68.00 rows=2000\n> width=8) (actual time=0.017..0.430 rows=2000 loops=1)\n> -> Gather (cost=1000.00..91026107.57 rows=1 width=27) (actual\n> time=192669.604..192670.422 rows=0 loops=1)\n> Workers Planned: 1\n> Params Evaluated: $0\n> Workers Launched: 1\n> -> Nested Loop Left Join (cost=0.00..91025107.47 rows=1\n> width=27) (actual time=192588.143..192588.144 rows=0 loops=2)\n> Join Filter: ($0 IS NOT NULL)\n> Filter: ((sample_0.l_orderkey IS NULL) AND\n> (sample_0.l_shipmode IS NULL))\n> Rows Removed by Filter: 1810515312\n> -> Parallel Seq Scan on lineitem ref_0\n> (cost=0.00..1721.97 rows=35397 width=11) (actual time=0.007..3.797\n> rows=30088 loops=2)\n> -> Seq Scan on lineitem sample_0 (cost=0.00..1969.75\n> rows=60175 width=27) (actual time=0.000..2.637 rows=60175 loops=60175)\n> Planning Time: 0.174 ms\n> Execution Time: 192670.458 ms\n> (19 rows)\n>\n> I debug the code and find consider_parallel_nestloop() doesn't consider\n> materialized form of the cheapest inner path.\n> When enable_material = true, we can see Material path won in first plan,\n> but Parallel Seq Scan node doesn't add as 
outer path, which because\n> in try_partial_nestloop_path() , the cost of nestloop wat computed using\n> seq scan path not material path.\n>\n> [1] include test table schema and data, you can repeat above problem.\n>\n> I try fix this problem in attached patch, and I found pg12.12 also had\n> this issue. Please review my patch, thanks!\n>\n> [1] https://github.com/tenderwg/tpch_test\n>",
"msg_date": "Tue, 5 Sep 2023 18:10:32 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 4:52 PM tender wang <[email protected]> wrote:\n\n> I recently run benchmark[1] on master, but I found performance problem\n> as below:\n> ...\n>\n> I debug the code and find consider_parallel_nestloop() doesn't consider\n> materialized form of the cheapest inner path.\n>\n\nYeah, this seems an omission in commit 45be99f8. I reviewed the patch\nand here are some comments.\n\n* I think we should not consider materializing the cheapest inner path\n if we're doing JOIN_UNIQUE_INNER, because in this case we have to\n unique-ify the inner path.\n\n* I think we can check if it'd be parallel safe before creating the\n material path, thus avoid the creation in unsafe cases.\n\n* I don't think the test case you added works for the code changes.\n Maybe a plan likes below is better:\n\nexplain (costs off)\nselect * from tenk1, tenk2 where tenk1.two = tenk2.two;\n QUERY PLAN\n----------------------------------------------\n Gather\n Workers Planned: 4\n -> Nested Loop\n Join Filter: (tenk1.two = tenk2.two)\n -> Parallel Seq Scan on tenk1\n -> Materialize\n -> Seq Scan on tenk2\n(7 rows)\n\nThanks\nRichard\n\nOn Tue, Sep 5, 2023 at 4:52 PM tender wang <[email protected]> wrote: I recently run benchmark[1] on master, but I found performance problem as below:...I debug the code and find consider_parallel_nestloop() doesn't consider materialized form of the cheapest inner path.Yeah, this seems an omission in commit 45be99f8. I reviewed the patchand here are some comments.* I think we should not consider materializing the cheapest inner path if we're doing JOIN_UNIQUE_INNER, because in this case we have to unique-ify the inner path.* I think we can check if it'd be parallel safe before creating the material path, thus avoid the creation in unsafe cases.* I don't think the test case you added works for the code changes. 
Maybe a plan likes below is better:explain (costs off)select * from tenk1, tenk2 where tenk1.two = tenk2.two; QUERY PLAN---------------------------------------------- Gather Workers Planned: 4 -> Nested Loop Join Filter: (tenk1.two = tenk2.two) -> Parallel Seq Scan on tenk1 -> Materialize -> Seq Scan on tenk2(7 rows)ThanksRichard",
"msg_date": "Tue, 5 Sep 2023 18:50:56 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Richard Guo <[email protected]> 于2023年9月5日周二 18:51写道:\n\n>\n> On Tue, Sep 5, 2023 at 4:52 PM tender wang <[email protected]> wrote:\n>\n>> I recently run benchmark[1] on master, but I found performance problem\n>> as below:\n>> ...\n>>\n>> I debug the code and find consider_parallel_nestloop() doesn't consider\n>> materialized form of the cheapest inner path.\n>>\n>\n> Yeah, this seems an omission in commit 45be99f8. I reviewed the patch\n> and here are some comments.\n>\n> * I think we should not consider materializing the cheapest inner path\n> if we're doing JOIN_UNIQUE_INNER, because in this case we have to\n> unique-ify the inner path.\n>\n\n That's right. The V2 patch has been fixed.\n\n\n> * I think we can check if it'd be parallel safe before creating the\n> material path, thus avoid the creation in unsafe cases.\n>\n\n Agreed.\n\n\n\n> * I don't think the test case you added works for the code changes.\n> Maybe a plan likes below is better:\n>\n\n Agreed.\n\nexplain (costs off)\n> select * from tenk1, tenk2 where tenk1.two = tenk2.two;\n> QUERY PLAN\n> ----------------------------------------------\n> Gather\n> Workers Planned: 4\n> -> Nested Loop\n> Join Filter: (tenk1.two = tenk2.two)\n> -> Parallel Seq Scan on tenk1\n> -> Materialize\n> -> Seq Scan on tenk2\n> (7 rows)\n>\n> Thanks\n> Richard\n>",
"msg_date": "Thu, 7 Sep 2023 17:56:40 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 8:07 AM Richard Guo <[email protected]> wrote:\n> Yeah, this seems an omission in commit 45be99f8.\n\nIt's been a while, but I think I omitted this deliberately because I\ndidn't really understand the value of it and wanted to keep the\nplanning cost down.\n\nThe example query provided here seems rather artificial. Surely few\npeople write a join clause that references neither of the tables being\njoined. Is there a more realistic case where this makes a big\ndifference?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Sep 2023 15:14:50 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 3:15 AM Robert Haas <[email protected]> wrote:\n\n> The example query provided here seems rather artificial. Surely few\n> people write a join clause that references neither of the tables being\n> joined. Is there a more realistic case where this makes a big\n> difference?\n\n\nYes the given example query is not that convincing. I tried a query\nwith plans as below (after some GUC setting) which might be more\nrealistic in real world.\n\nunpatched:\n\nexplain select * from partsupp join lineitem on l_partkey > ps_partkey;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Gather (cost=0.00..5522666.44 rows=160466667 width=301)\n Workers Planned: 4\n -> Nested Loop (cost=0.00..5522666.44 rows=40116667 width=301)\n Join Filter: (lineitem.l_partkey > partsupp.ps_partkey)\n -> Parallel Seq Scan on lineitem (cost=0.00..1518.44 rows=15044\nwidth=144)\n -> Seq Scan on partsupp (cost=0.00..267.00 rows=8000 width=157)\n(6 rows)\n\npatched:\n\nexplain select * from partsupp join lineitem on l_partkey > ps_partkey;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Gather (cost=0.00..1807085.44 rows=160466667 width=301)\n Workers Planned: 4\n -> Nested Loop (cost=0.00..1807085.44 rows=40116667 width=301)\n Join Filter: (lineitem.l_partkey > partsupp.ps_partkey)\n -> Parallel Seq Scan on lineitem (cost=0.00..1518.44 rows=15044\nwidth=144)\n -> Materialize (cost=0.00..307.00 rows=8000 width=157)\n -> Seq Scan on partsupp (cost=0.00..267.00 rows=8000\nwidth=157)\n(7 rows)\n\nThe execution time (ms) are (avg of 3 runs):\n\nunpatched: 71769.21\npatched: 65510.04\n\nSo we can see some (~9%) performance gains in this case.\n\nThanks\nRichard\n\nOn Fri, Sep 8, 2023 at 3:15 AM Robert Haas <[email protected]> wrote:\nThe example query provided here seems rather artificial. 
Surely few\npeople write a join clause that references neither of the tables being\njoined. Is there a more realistic case where this makes a big\ndifference?Yes the given example query is not that convincing. I tried a querywith plans as below (after some GUC setting) which might be morerealistic in real world.unpatched:explain select * from partsupp join lineitem on l_partkey > ps_partkey; QUERY PLAN-------------------------------------------------------------------------------------- Gather (cost=0.00..5522666.44 rows=160466667 width=301) Workers Planned: 4 -> Nested Loop (cost=0.00..5522666.44 rows=40116667 width=301) Join Filter: (lineitem.l_partkey > partsupp.ps_partkey) -> Parallel Seq Scan on lineitem (cost=0.00..1518.44 rows=15044 width=144) -> Seq Scan on partsupp (cost=0.00..267.00 rows=8000 width=157)(6 rows)patched:explain select * from partsupp join lineitem on l_partkey > ps_partkey; QUERY PLAN-------------------------------------------------------------------------------------- Gather (cost=0.00..1807085.44 rows=160466667 width=301) Workers Planned: 4 -> Nested Loop (cost=0.00..1807085.44 rows=40116667 width=301) Join Filter: (lineitem.l_partkey > partsupp.ps_partkey) -> Parallel Seq Scan on lineitem (cost=0.00..1518.44 rows=15044 width=144) -> Materialize (cost=0.00..307.00 rows=8000 width=157) -> Seq Scan on partsupp (cost=0.00..267.00 rows=8000 width=157)(7 rows)The execution time (ms) are (avg of 3 runs):unpatched: 71769.21patched: 65510.04So we can see some (~9%) performance gains in this case.ThanksRichard",
"msg_date": "Fri, 8 Sep 2023 14:06:44 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Hi tom,\n Do you have any comments or suggestions on this issue? Thanks.\n\nRichard Guo <[email protected]> 于2023年9月8日周五 14:06写道:\n\n>\n> On Fri, Sep 8, 2023 at 3:15 AM Robert Haas <[email protected]> wrote:\n>\n>> The example query provided here seems rather artificial. Surely few\n>> people write a join clause that references neither of the tables being\n>> joined. Is there a more realistic case where this makes a big\n>> difference?\n>\n>\n> Yes the given example query is not that convincing. I tried a query\n> with plans as below (after some GUC setting) which might be more\n> realistic in real world.\n>\n> unpatched:\n>\n> explain select * from partsupp join lineitem on l_partkey > ps_partkey;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------\n> Gather (cost=0.00..5522666.44 rows=160466667 width=301)\n> Workers Planned: 4\n> -> Nested Loop (cost=0.00..5522666.44 rows=40116667 width=301)\n> Join Filter: (lineitem.l_partkey > partsupp.ps_partkey)\n> -> Parallel Seq Scan on lineitem (cost=0.00..1518.44 rows=15044\n> width=144)\n> -> Seq Scan on partsupp (cost=0.00..267.00 rows=8000 width=157)\n> (6 rows)\n>\n> patched:\n>\n> explain select * from partsupp join lineitem on l_partkey > ps_partkey;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------\n> Gather (cost=0.00..1807085.44 rows=160466667 width=301)\n> Workers Planned: 4\n> -> Nested Loop (cost=0.00..1807085.44 rows=40116667 width=301)\n> Join Filter: (lineitem.l_partkey > partsupp.ps_partkey)\n> -> Parallel Seq Scan on lineitem (cost=0.00..1518.44 rows=15044\n> width=144)\n> -> Materialize (cost=0.00..307.00 rows=8000 width=157)\n> -> Seq Scan on partsupp (cost=0.00..267.00 rows=8000\n> width=157)\n> (7 rows)\n>\n> The execution time (ms) are (avg of 3 runs):\n>\n> unpatched: 71769.21\n> patched: 65510.04\n>\n> So we can see some (~9%) performance gains in this case.\n>\n> 
Thanks\n> Richard\n>",
"msg_date": "Wed, 27 Sep 2023 21:06:03 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Fri, 8 Sept 2023 at 09:41, Robert Haas <[email protected]> wrote:\n>\n> On Tue, Sep 5, 2023 at 8:07 AM Richard Guo <[email protected]> wrote:\n> > Yeah, this seems an omission in commit 45be99f8.\n>\n> It's been a while, but I think I omitted this deliberately because I\n> didn't really understand the value of it and wanted to keep the\n> planning cost down.\n\nI think the value is potentially not having to repeatedly execute some\nexpensive rescan to the nested loop join once for each outer-side\ntuple.\n\nThe planning cost is something to consider for sure, but it seems\nstrange that we deemed it worthy to consider material paths for the\nnon-parallel input paths but draw the line for the parallel/partial\nones. It seems to me that the additional costs and the possible\nbenefits are the same for both.\n\nDavid\n\n\n",
"msg_date": "Thu, 28 Sep 2023 12:41:03 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Fri, 8 Sept 2023 at 19:14, Richard Guo <[email protected]> wrote:\n> explain select * from partsupp join lineitem on l_partkey > ps_partkey;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------\n> Gather (cost=0.00..1807085.44 rows=160466667 width=301)\n> Workers Planned: 4\n> -> Nested Loop (cost=0.00..1807085.44 rows=40116667 width=301)\n> Join Filter: (lineitem.l_partkey > partsupp.ps_partkey)\n> -> Parallel Seq Scan on lineitem (cost=0.00..1518.44 rows=15044 width=144)\n> -> Materialize (cost=0.00..307.00 rows=8000 width=157)\n> -> Seq Scan on partsupp (cost=0.00..267.00 rows=8000 width=157)\n> (7 rows)\n>\n> The execution time (ms) are (avg of 3 runs):\n>\n> unpatched: 71769.21\n> patched: 65510.04\n\nThis gap would be wider if the partsupp Seq Scan were filtering off\nsome rows and wider still if you added more rows to lineitem.\nHowever, a clauseless seqscan is not the most compelling use case\nbelow a material node. The inner side of the nested loop could be some\nsubquery that takes 6 days to complete. Running the 6 day query ~15044\ntimes seems like something that would be good to avoid.\n\nIt seems worth considering Material paths to me. I think that the\nabove example could be tuned any way you like to make it look better\nor worse.\n\nDavid\n\n\n",
"msg_date": "Thu, 28 Sep 2023 12:49:47 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Hi!\n\nThank you for your work on the subject.\n\n\nI reviewed your patch and found that your commit message does not fully \nexplain your code, in addition, I found several spelling mistakes.\n\nI think it's better to change to:\n\nWith parallel seqscan, we should consider materializing the cheapest \ninner path in\ncase of nested loop if it doesn't contain a unique node or it is unsafe \nto use it in a subquery.\n\n\nBesides, I couldn't understand why we again check that material path is \nsafe?\n\nif (matpath != NULL && matpath->parallel_safe)\n try_partial_nestloop_path(root, joinrel, outerpath, matpath,\n pathkeys, jointype, extra);\n\n-- \nRegards,\nAlena Rybakina\n\n\n\n\n\n\nHi!\nThank you for your work on the subject.\n\n\nI reviewed your patch and found that your commit\n message does not fully explain your code, in addition, I found\n several spelling mistakes.\nI think it's better to change to:\nWith parallel seqscan, we should\n consider materializing the cheapest inner path in \n case of nested loop if it doesn't contain a unique node or it is\n unsafe to use it in a subquery.\n\n\nBesides, I couldn't understand why we again check\n that material path is safe?\nif (matpath != NULL && matpath->parallel_safe)\n try_partial_nestloop_path(root, joinrel, outerpath,\n matpath,\n pathkeys, jointype, extra);\n\n\n-- \nRegards,\nAlena Rybakina",
"msg_date": "Wed, 18 Oct 2023 16:44:08 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "\n\n> On 27 Sep 2023, at 16:06, tender wang <[email protected]> wrote:\n> \n> Do you have any comments or suggestions on this issue? Thanks.\nHi Tender,\n\nthere are some review comments in the thread, that you might be interested in.\nI'll mark this [0] entry \"Waiting on Author\" and move to next CF.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0]https://commitfest.postgresql.org/47/4549/\n\n",
"msg_date": "Mon, 8 Apr 2024 12:40:17 +0300",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Andrey M. Borodin <[email protected]> 于2024年4月8日周一 17:40写道:\n\n>\n>\n> > On 27 Sep 2023, at 16:06, tender wang <[email protected]> wrote:\n> >\n> > Do you have any comments or suggestions on this issue? Thanks.\n> Hi Tender,\n>\n> there are some review comments in the thread, that you might be interested\n> in.\n> I'll mark this [0] entry \"Waiting on Author\" and move to next CF.\n>\n\n Thank you for the reminder. I will update the patch later.\nI also deeply hope to get more advice about this patch.\n(even the advice that not worth continuint to work on this patch).\n\nThanks.\n\nThanks!\n>\n>\n> Best regards, Andrey Borodin.\n>\n> [0]https://commitfest.postgresql.org/47/4549/\n\n\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nAndrey M. Borodin <[email protected]> 于2024年4月8日周一 17:40写道:\n\n> On 27 Sep 2023, at 16:06, tender wang <[email protected]> wrote:\n> \n> Do you have any comments or suggestions on this issue? Thanks.\nHi Tender,\n\nthere are some review comments in the thread, that you might be interested in.\nI'll mark this [0] entry \"Waiting on Author\" and move to next CF. Thank you for the reminder. I will update the patch later.I also deeply hope to get more advice about this patch.(even the advice that not worth continuint to work on this patch).Thanks.\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0]https://commitfest.postgresql.org/47/4549/-- Tender WangOpenPie: https://en.openpie.com/",
"msg_date": "Mon, 8 Apr 2024 18:54:41 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Andrey M. Borodin <[email protected]> 于2024年4月8日周一 17:40写道:\n\n>\n>\n> > On 27 Sep 2023, at 16:06, tender wang <[email protected]> wrote:\n> >\n> > Do you have any comments or suggestions on this issue? Thanks.\n> Hi Tender,\n>\n> there are some review comments in the thread, that you might be interested\n> in.\n> I'll mark this [0] entry \"Waiting on Author\" and move to next CF.\n>\n> Thanks!\n>\n>\n> Best regards, Andrey Borodin.\n>\n> [0]https://commitfest.postgresql.org/47/4549/\n\n\nI have rebased master and fixed a plan diff case.\n-- \nTender Wang\nOpenPie: https://en.openpie.com/",
"msg_date": "Tue, 23 Apr 2024 16:59:42 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Tue, 2024-04-23 at 16:59 +0800, Tender Wang wrote:\n> \n[ cut ]\n> \n> I have rebased master and fixed a plan diff case.\n\nWe (me, Paul Jungwirth, and Yuki Fujii) reviewed this patch\nat PgConf.dev Patch Review Workshop.\nHere are our findings.\n\nPatch tries to allow for using materialization together\nwith parallel subqueries.\nIt applies cleanly on 8fea1bd5411b793697a4c9087c403887e050c4ac\n(current HEAD).\nTests pass locally on macOS and Linux in VM under Windows.\nTests are also green in cfbot (for last 2 weeks; they were\nred previously, probably because of need to rebase).\n\nPlease add more tests. Especially please add some negative tests;\ncurrent patch checks that it is safe to apply materialization. It would\nbe helpful to add tests checking that materialization is not applied\nin both checked cases:\n1. when inner join path is not parallel safe\n2. when matpath is not parallel safe\n\nThis patch tries to apply materialization only when join type\nis not JOIN_UNIQUE_INNER. Comment mentions this, but does not\nexplain why. So please either add comment describing reason for that\nor try enabling materialization in such a case.\n\nBest regards.\n\n-- \nTomasz Rybak, Debian Developer <[email protected]>\nGPG: A565 CE64 F866 A258 4DDC F9C7 ECB7 3E37 E887 AA8C\n\n\n",
"msg_date": "Thu, 30 May 2024 22:21:14 +0200",
"msg_from": "Tomasz Rybak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Tomasz Rybak <[email protected]> 于2024年5月31日周五 04:21写道:\n\n> On Tue, 2024-04-23 at 16:59 +0800, Tender Wang wrote:\n> >\n> [ cut ]\n> >\n> > I have rebased master and fixed a plan diff case.\n>\n> We (me, Paul Jungwirth, and Yuki Fujii) reviewed this patch\n> at PgConf.dev Patch Review Workshop.\n>\n\nThanks for reviewing this patch.\n\n> Here are our findings.\n>\n> Patch tries to allow for using materialization together\n> with parallel subqueries.\n> It applies cleanly on 8fea1bd5411b793697a4c9087c403887e050c4ac\n> (current HEAD).\n> Tests pass locally on macOS and Linux in VM under Windows.\n> Tests are also green in cfbot (for last 2 weeks; they were\n> red previously, probably because of need to rebase).\n>\n> Please add more tests. Especially please add some negative tests;\n> current patch checks that it is safe to apply materialization. It would\n> be helpful to add tests checking that materialization is not applied\n> in both checked cases:\n> 1. when inner join path is not parallel safe\n> 2. when matpath is not parallel safe\n>\n\nI added a test case that inner rel is not parallel safe. Actually, matpath\nwill not create\nif inner rel is not parallel safe. So I didn't add test case for the\nsecond scenario.\n\nThis patch tries to apply materialization only when join type\n> is not JOIN_UNIQUE_INNER. Comment mentions this, but does not\n> explain why. So please either add comment describing reason for that\n> or try enabling materialization in such a case.\n>\n\nYeah, Richard commented the v1 patch about JOIN_UNIQUE_INNER in [1]\n\n* I think we should not consider materializing the cheapest inner path\nif we're doing JOIN_UNIQUE_INNER, because in this case we have to\nunique-ify the inner path.\n\nWe don't consider material inner path if jointype is JOIN_UNIQUE_INNER in\nmatch_unsorted_order().\nSo here is as same logic as match_unsorted_order(). I added comments to\nexplain why.\n\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs49LbQF_Z0iKPRPnTHfsRECT7M-4rF6ft5vpW1ARSpBkPA%40mail.gmail.com\n\n\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/",
"msg_date": "Tue, 4 Jun 2024 18:51:02 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Hi. Tender.\r\n\r\nThank you for modification.\r\n\r\n> From: Tender Wang <[email protected]>\r\n> Sent: Tuesday, June 4, 2024 7:51 PM\r\n> \tPlease add more tests. Especially please add some negative tests;\r\n> \tcurrent patch checks that it is safe to apply materialization. It would\r\n> \tbe helpful to add tests checking that materialization is not applied\r\n> \tin both checked cases:\r\n> \t1. when inner join path is not parallel safe\r\n> \t2. when matpath is not parallel safe\r\n> \r\n> \r\n> \r\n> I added a test case that inner rel is not parallel safe. Actually, \r\n> matpath will not create if inner rel is not parallel safe. So I didn't add test case for the second scenario.\r\nIs there case in which matpath is not parallel safe and inner rel is parallel safe?\r\nIf right, I think that it would be suitable to add a negative test in a such case.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation\r\n\r\n\r\n",
"msg_date": "Wed, 5 Jun 2024 01:26:30 +0000",
"msg_from": "\"[email protected]\"\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "[email protected] <\[email protected]> 于2024年6月5日周三 09:26写道:\n\n> Hi. Tender.\n>\n> Thank you for modification.\n>\n> > From: Tender Wang <[email protected]>\n> > Sent: Tuesday, June 4, 2024 7:51 PM\n> > Please add more tests. Especially please add some negative tests;\n> > current patch checks that it is safe to apply materialization. It\n> would\n> > be helpful to add tests checking that materialization is not\n> applied\n> > in both checked cases:\n> > 1. when inner join path is not parallel safe\n> > 2. when matpath is not parallel safe\n> >\n> >\n> >\n> > I added a test case that inner rel is not parallel safe. Actually,\n> > matpath will not create if inner rel is not parallel safe. So I didn't\n> add test case for the second scenario.\n> Is there case in which matpath is not parallel safe and inner rel is\n> parallel safe?\n> If right, I think that it would be suitable to add a negative test in a\n> such case.\n>\n\nI looked through create_xxx_path(), and I found that almost\npath.parallel_safe is assigned from RelOptiInfo.consider_parallel.\nSome pathes take subpath->parallel_safe into account(e.g. Material path).\nIn most cases, Material is parallel_safe if rel is parallel\nsafe. Now I haven't come up a query plan that material is un parallel-safe\nbut rel is parallel-safe.\n\n\n>\n> Sincerely yours,\n> Yuuki Fujii\n>\n> --\n> Yuuki Fujii\n> Information Technology R&D Center Mitsubishi Electric Corporation\n>\n>\n>\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\[email protected] <[email protected]> 于2024年6月5日周三 09:26写道:Hi. Tender.\n\nThank you for modification.\n\n> From: Tender Wang <[email protected]>\n> Sent: Tuesday, June 4, 2024 7:51 PM\n> Please add more tests. Especially please add some negative tests;\n> current patch checks that it is safe to apply materialization. It would\n> be helpful to add tests checking that materialization is not applied\n> in both checked cases:\n> 1. 
when inner join path is not parallel safe\n> 2. when matpath is not parallel safe\n> \n> \n> \n> I added a test case that inner rel is not parallel safe. Actually, \n> matpath will not create if inner rel is not parallel safe. So I didn't add test case for the second scenario.\nIs there case in which matpath is not parallel safe and inner rel is parallel safe?\nIf right, I think that it would be suitable to add a negative test in a such case.I looked through create_xxx_path(), and I found that almost path.parallel_safe is assigned from RelOptiInfo.consider_parallel.Some pathes take subpath->parallel_safe into account(e.g. Material path). In most cases, Material is parallel_safe if rel is parallelsafe. Now I haven't come up a query plan that material is un parallel-safe but rel is parallel-safe. \n\nSincerely yours,\nYuuki Fujii\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n\n-- Tender WangOpenPie: https://en.openpie.com/",
"msg_date": "Tue, 11 Jun 2024 16:11:26 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Hi. Tender.\r\n\r\n> From: Tender Wang <[email protected]>\r\n> Sent: Tuesday, June 11, 2024 5:11 PM\r\n> \r\n> \t> From: Tender Wang <[email protected] <mailto:[email protected]> >\r\n> \t> Sent: Tuesday, June 4, 2024 7:51 PM\r\n> \t> Please add more tests. Especially please add some negative tests;\r\n> \t> current patch checks that it is safe to apply materialization. It would\r\n> \t> be helpful to add tests checking that materialization is not applied\r\n> \t> in both checked cases:\r\n> \t> 1. when inner join path is not parallel safe\r\n> \t> 2. when matpath is not parallel safe\r\n> \t>\r\n> \t>\r\n> \t>\r\n> \t> I added a test case that inner rel is not parallel safe. Actually,\r\n> \t> matpath will not create if inner rel is not parallel safe. So I didn't add test case for the second scenario.\r\n> \tIs there case in which matpath is not parallel safe and inner rel is parallel safe?\r\n> \tIf right, I think that it would be suitable to add a negative test in a such case.\r\n> \r\n> \r\n> \r\n> I looked through create_xxx_path(), and I found that almost path.parallel_safe is assigned from\r\n> RelOptiInfo.consider_parallel.\r\n> Some pathes take subpath->parallel_safe into account(e.g. Material path). In most cases, Material is parallel_safe if rel is\r\n> parallel safe. Now I haven't come up a query plan that material is un parallel-safe but rel is parallel-safe.\r\nThank you for looking into the source code. I understand the situation now.\r\n\r\nSincerely yours,\r\nYuki Fujii\r\n\r\n--\r\nYuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation\r\n",
"msg_date": "Tue, 11 Jun 2024 10:40:14 +0000",
"msg_from": "\"[email protected]\"\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Hi Robert,\n\n Since this patch had been reviewed at PgConf.dev Patch Review\nWorkshop. And I have updated\nthe patch according to the review advice. Now there are no others to\ncomment this patch.\nThe status of this patch on commitfest have stayed \"need review\" for a long\ntime.\nI want to know if it is ready to move to the next status \"Ready for\ncommiter\".\n\nThanks.\n\n-- \nTender Wang\n\nHi Robert, Since this patch had been reviewed at PgConf.dev Patch Review Workshop. And I have updatedthe patch according to the review advice. Now there are no others to comment this patch. The status of this patch on commitfest have stayed \"need review\" for a long time. I want to know if it is ready to move to the next status \"Ready for commiter\".Thanks.-- Tender Wang",
"msg_date": "Fri, 14 Jun 2024 11:02:43 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Tue, Jun 4, 2024 at 6:51 PM Tender Wang <[email protected]> wrote:\n> Yeah, Richard commented the v1 patch about JOIN_UNIQUE_INNER in [1]\n>\n> * I think we should not consider materializing the cheapest inner path\n> if we're doing JOIN_UNIQUE_INNER, because in this case we have to\n> unique-ify the inner path.\n>\n> We don't consider material inner path if jointype is JOIN_UNIQUE_INNER in match_unsorted_order().\n> So here is as same logic as match_unsorted_order(). I added comments to explain why.\n\nI looked through the v4 patch and found an issue. For the plan diff:\n\n+ -> Nested Loop\n+ -> Parallel Seq Scan on prt1_p1 t1_1\n+ -> Materialize\n+ -> Sample Scan on prt1_p1 t2_1\n+ Sampling: system (t1_1.a) REPEATABLE (t1_1.b)\n+ Filter: (t1_1.a = a)\n\nThis does not seem correct to me. The inner path is parameterized by\nthe outer rel, in which case it does not make sense to add a Materialize\nnode on top of it.\n\nI updated the patch to include a check in consider_parallel_nestloop\nensuring that inner_cheapest_total is not parameterized by outerrel\nbefore materializing it. I also tweaked the comments, test cases and\ncommit message.\n\nThanks\nRichard",
"msg_date": "Tue, 18 Jun 2024 17:24:05 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Richard Guo <[email protected]> 于2024年6月18日周二 17:24写道:\n\n> On Tue, Jun 4, 2024 at 6:51 PM Tender Wang <[email protected]> wrote:\n> > Yeah, Richard commented the v1 patch about JOIN_UNIQUE_INNER in [1]\n> >\n> > * I think we should not consider materializing the cheapest inner path\n> > if we're doing JOIN_UNIQUE_INNER, because in this case we have to\n> > unique-ify the inner path.\n> >\n> > We don't consider material inner path if jointype is JOIN_UNIQUE_INNER\n> in match_unsorted_order().\n> > So here is as same logic as match_unsorted_order(). I added comments to\n> explain why.\n>\n> I looked through the v4 patch and found an issue. For the plan diff:\n>\n> + -> Nested Loop\n> + -> Parallel Seq Scan on prt1_p1 t1_1\n> + -> Materialize\n> + -> Sample Scan on prt1_p1 t2_1\n> + Sampling: system (t1_1.a) REPEATABLE (t1_1.b)\n> + Filter: (t1_1.a = a)\n>\n> This does not seem correct to me. The inner path is parameterized by\n> the outer rel, in which case it does not make sense to add a Materialize\n> node on top of it.\n>\n\nYeah, you're right.\n\n>\n> I updated the patch to include a check in consider_parallel_nestloop\n> ensuring that inner_cheapest_total is not parameterized by outerrel\n> before materializing it. I also tweaked the comments, test cases and\n> commit message.\n>\n\nThanks for the work. Now it looks better.\nI have changed the status from \"need review\" to \"ready for commiters\" on\nthe commitfest.\n\n-- \nTender Wang\n\nRichard Guo <[email protected]> 于2024年6月18日周二 17:24写道:On Tue, Jun 4, 2024 at 6:51 PM Tender Wang <[email protected]> wrote:\n> Yeah, Richard commented the v1 patch about JOIN_UNIQUE_INNER in [1]\n>\n> * I think we should not consider materializing the cheapest inner path\n> if we're doing JOIN_UNIQUE_INNER, because in this case we have to\n> unique-ify the inner path.\n>\n> We don't consider material inner path if jointype is JOIN_UNIQUE_INNER in match_unsorted_order().\n> So here is as same logic as match_unsorted_order(). I added comments to explain why.\n\nI looked through the v4 patch and found an issue. For the plan diff:\n\n+ -> Nested Loop\n+ -> Parallel Seq Scan on prt1_p1 t1_1\n+ -> Materialize\n+ -> Sample Scan on prt1_p1 t2_1\n+ Sampling: system (t1_1.a) REPEATABLE (t1_1.b)\n+ Filter: (t1_1.a = a)\n\nThis does not seem correct to me. The inner path is parameterized by\nthe outer rel, in which case it does not make sense to add a Materialize\nnode on top of it.Yeah, you're right. \n\nI updated the patch to include a check in consider_parallel_nestloop\nensuring that inner_cheapest_total is not parameterized by outerrel\nbefore materializing it. I also tweaked the comments, test cases and\ncommit message.Thanks for the work. Now it looks better.I have changed the status from \"need review\" to \"ready for commiters\" on the commitfest.-- Tender Wang",
"msg_date": "Wed, 19 Jun 2024 10:55:25 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Wed, Jun 19, 2024 at 10:55 AM Tender Wang <[email protected]> wrote:\n> Richard Guo <[email protected]> 于2024年6月18日周二 17:24写道:\n>> I updated the patch to include a check in consider_parallel_nestloop\n>> ensuring that inner_cheapest_total is not parameterized by outerrel\n>> before materializing it. I also tweaked the comments, test cases and\n>> commit message.\n>\n> Thanks for the work. Now it looks better.\n> I have changed the status from \"need review\" to \"ready for commiters\" on the commitfest.\n\nHere is a new rebase.\n\nI'm planning to push it next week, barring any objections.\n\nThanks\nRichard",
"msg_date": "Sat, 6 Jul 2024 17:32:41 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Sat, Jul 6, 2024 at 5:32 PM Richard Guo <[email protected]> wrote:\n> Here is a new rebase.\n>\n> I'm planning to push it next week, barring any objections.\n\nPushed.\n\nThanks\nRichard\n\n\n",
"msg_date": "Fri, 12 Jul 2024 10:29:48 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Richard Guo <[email protected]> 于2024年7月12日周五 10:30写道:\n\n> On Sat, Jul 6, 2024 at 5:32 PM Richard Guo <[email protected]> wrote:\n> > Here is a new rebase.\n> >\n> > I'm planning to push it next week, barring any objections.\n>\n> Pushed.\n>\n> Thanks\n> Richard\n>\n\nThanks for pushing.\n\n-- \nTender Wang\n\nRichard Guo <[email protected]> 于2024年7月12日周五 10:30写道:On Sat, Jul 6, 2024 at 5:32 PM Richard Guo <[email protected]> wrote:\n> Here is a new rebase.\n>\n> I'm planning to push it next week, barring any objections.\n\nPushed.\n\nThanks\nRichard\nThanks for pushing.-- Tender Wang",
"msg_date": "Fri, 12 Jul 2024 10:43:58 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Hello Richard,\n\n12.07.2024 05:29, Richard Guo wrote:\n> On Sat, Jul 6, 2024 at 5:32 PM Richard Guo <[email protected]> wrote:\n>> Here is a new rebase.\n>>\n>> I'm planning to push it next week, barring any objections.\n> Pushed.\n\nPlease look at a recent buildfarm failure [1], which shows some\ninstability of that test addition:\n -- the joinrel is not parallel-safe due to the OFFSET clause in the subquery\n explain (costs off)\n select * from tenk1 t1, (select * from tenk2 t2 offset 0) t2 where t1.two > t2.two;\n- QUERY PLAN\n--------------------------------------------\n+ QUERY PLAN\n+-------------------------------------------------\n Nested Loop\n Join Filter: (t1.two > t2.two)\n- -> Gather\n- Workers Planned: 4\n- -> Parallel Seq Scan on tenk1 t1\n+ -> Seq Scan on tenk2 t2\n -> Materialize\n- -> Seq Scan on tenk2 t2\n+ -> Gather\n+ Workers Planned: 4\n+ -> Parallel Seq Scan on tenk1 t1\n (7 rows)\n\nI've managed to reproduce this plan change when running\nmultiple 027_stream_regress.pl instances simultaneously, with\nparallel_schedule reduced to:\ntest: test_setup\ntest: create_misc\ntest: create_index\ntest: sanity_check\ntest: select_parallel\n\nI've added the following to the test and got two verbose plans for\ncomparison (see the attachment).\n -- the joinrel is not parallel-safe due to the OFFSET clause in the subquery\n explain (costs off)\n select * from tenk1 t1, (select * from tenk2 t2 offset 0) t2 where t1.two > t2.two;\n+\\o plan.txt\n+explain (verbose)\n+ select * from tenk1 t1, (select * from tenk2 t2 offset 0) t2 where t1.two > t2.two;\n+\\o\n alter table tenk2 reset (parallel_workers);\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-07-17%2017%3A12%3A53\n\nBest regards,\nAlexander",
"msg_date": "Thu, 18 Jul 2024 11:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 4:00 PM Alexander Lakhin <[email protected]> wrote:\n> Please look at a recent buildfarm failure [1], which shows some\n> instability of that test addition:\n> -- the joinrel is not parallel-safe due to the OFFSET clause in the subquery\n> explain (costs off)\n> select * from tenk1 t1, (select * from tenk2 t2 offset 0) t2 where t1.two > t2.two;\n> - QUERY PLAN\n> --------------------------------------------\n> + QUERY PLAN\n> +-------------------------------------------------\n> Nested Loop\n> Join Filter: (t1.two > t2.two)\n> - -> Gather\n> - Workers Planned: 4\n> - -> Parallel Seq Scan on tenk1 t1\n> + -> Seq Scan on tenk2 t2\n> -> Materialize\n> - -> Seq Scan on tenk2 t2\n> + -> Gather\n> + Workers Planned: 4\n> + -> Parallel Seq Scan on tenk1 t1\n> (7 rows)\n\nThank you for the report and investigation. Will have a look.\n\nThanks\nRichard\n\n\n",
"msg_date": "Thu, 18 Jul 2024 16:11:50 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 4:11 PM Richard Guo <[email protected]> wrote:\n> On Thu, Jul 18, 2024 at 4:00 PM Alexander Lakhin <[email protected]> wrote:\n> > Please look at a recent buildfarm failure [1], which shows some\n> > instability of that test addition:\n> > -- the joinrel is not parallel-safe due to the OFFSET clause in the subquery\n> > explain (costs off)\n> > select * from tenk1 t1, (select * from tenk2 t2 offset 0) t2 where t1.two > t2.two;\n> > - QUERY PLAN\n> > --------------------------------------------\n> > + QUERY PLAN\n> > +-------------------------------------------------\n> > Nested Loop\n> > Join Filter: (t1.two > t2.two)\n> > - -> Gather\n> > - Workers Planned: 4\n> > - -> Parallel Seq Scan on tenk1 t1\n> > + -> Seq Scan on tenk2 t2\n> > -> Materialize\n> > - -> Seq Scan on tenk2 t2\n> > + -> Gather\n> > + Workers Planned: 4\n> > + -> Parallel Seq Scan on tenk1 t1\n> > (7 rows)\n>\n> Thank you for the report and investigation. Will have a look.\n\nThe problemed plan is a non-parallel nestloop join. It's just chance\nwhich join order the planner will pick, and slight variations in\nunderlying statistics could result in a different displayed plan.\n From the two verbose plans, we can see slight variations in the\nstatistics for the parallel seqscan of tenk1.\n\n-> Parallel Seq Scan on public.tenk1 t1 (cost=0.00..370.00 rows=2500\nwidth=244)\n\nVS.\n\n-> Parallel Seq Scan on public.tenk1 t1 (cost=0.00..369.99 rows=2499\nwidth=244)\n\nI have no idea why the underlying statistics changed, but it seems\nthat this slight change is sufficent to result in a different plan.\n\nAccording to the discussion in [1], I think what we wanted to test\nwith this query is that parallel nestloop join is not generated if the\ninner path is not parallel-safe. Therefore, I modified this test case\nto use a lateral join, rendering the inner path not parallel-safe\nwhile also enforcing the join order. Please see attached.\n\n[1] https://postgr.es/m/[email protected]\n\nThanks\nRichard",
"msg_date": "Thu, 18 Jul 2024 22:30:22 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "Hello Richard,\n\n18.07.2024 17:30, Richard Guo wrote:\n> The problemed plan is a non-parallel nestloop join. It's just chance\n> which join order the planner will pick, and slight variations in\n> underlying statistics could result in a different displayed plan.\n> From the two verbose plans, we can see slight variations in the\n> statistics for the parallel seqscan of tenk1.\n>\n> -> Parallel Seq Scan on public.tenk1 t1 (cost=0.00..370.00 rows=2500\n> width=244)\n>\n> VS.\n>\n> -> Parallel Seq Scan on public.tenk1 t1 (cost=0.00..369.99 rows=2499\n> width=244)\n>\n> I have no idea why the underlying statistics changed, but it seems\n> that this slight change is sufficent to result in a different plan.\n\nI think it could be caused by the same reason as [1] and I really can\neasily (without multiple instances/loops. just with `make check`) reproduce\nthe failure with cranky-ConditionalLockBufferForCleanup.patch (but\ntargeted for \"VACUUM ANALYZE tenk1;\").\n\n> According to the discussion in [1], I think what we wanted to test\n> with this query is that parallel nestloop join is not generated if the\n> inner path is not parallel-safe. Therefore, I modified this test case\n> to use a lateral join, rendering the inner path not parallel-safe\n> while also enforcing the join order. Please see attached.\n\nThe modified test survives my testing procedure. Thank you for the patch!\n\n[1] https://www.postgresql.org/message-id/flat/66eb9a6e-fc67-a230-c5b1-2a741e8b88c6%40gmail.com\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 19 Jul 2024 07:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 12:00 PM Alexander Lakhin <[email protected]> wrote:\n> 18.07.2024 17:30, Richard Guo wrote:\n> > I have no idea why the underlying statistics changed, but it seems\n> > that this slight change is sufficent to result in a different plan.\n>\n> I think it could be caused by the same reason as [1] and I really can\n> easily (without multiple instances/loops. just with `make check`) reproduce\n> the failure with cranky-ConditionalLockBufferForCleanup.patch (but\n> targeted for \"VACUUM ANALYZE tenk1;\").\n\nYeah. Anyway I think we need to make the test more tolerant of slight\nvariations in the statistics.\n\n> > According to the discussion in [1], I think what we wanted to test\n> > with this query is that parallel nestloop join is not generated if the\n> > inner path is not parallel-safe. Therefore, I modified this test case\n> > to use a lateral join, rendering the inner path not parallel-safe\n> > while also enforcing the join order. Please see attached.\n>\n> The modified test survives my testing procedure. Thank you for the patch!\n\nThanks for testing this patch. I've pushed it.\n\nThanks\nRichard\n\n\n",
"msg_date": "Mon, 22 Jul 2024 10:43:50 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should consider materializing the cheapest inner path in\n consider_parallel_nestloop()"
}
] |
[
{
"msg_contents": "Hi,\n\nThere are multiple 'always:' keywords under the CompilerWarnings task.\nInstead of that, we can use one 'always:' and move the instructions\nunder this. So, I removed unnecessary ones and rearranged indents\naccording to that change.\n\nAny kind of feedback would be appreciated.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Tue, 5 Sep 2023 13:25:29 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove unnecessary 'always:' from CompilerWarnings task"
},
{
"msg_contents": "On 05.09.23 12:25, Nazir Bilal Yavuz wrote:\n> There are multiple 'always:' keywords under the CompilerWarnings task.\n> Instead of that, we can use one 'always:' and move the instructions\n> under this. So, I removed unnecessary ones and rearranged indents\n> according to that change.\n\nI'm not sure this change is beneficial. The way the code is currently \narranged, it's a bit easier to move or change individual blocks, and \nit's also easier to read the file, because the \"always:\" is next to each \n\"script\" and doesn't scroll off the screen.\n\n\n\n",
"msg_date": "Wed, 8 Nov 2023 08:31:00 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove unnecessary 'always:' from CompilerWarnings task"
},
{
"msg_contents": "Hi,\n\nThanks for the review.\n\nOn Wed, 8 Nov 2023 at 10:31, Peter Eisentraut <[email protected]> wrote:\n>\n> On 05.09.23 12:25, Nazir Bilal Yavuz wrote:\n> > There are multiple 'always:' keywords under the CompilerWarnings task.\n> > Instead of that, we can use one 'always:' and move the instructions\n> > under this. So, I removed unnecessary ones and rearranged indents\n> > according to that change.\n>\n> I'm not sure this change is beneficial. The way the code is currently\n> arranged, it's a bit easier to move or change individual blocks, and\n> it's also easier to read the file, because the \"always:\" is next to each\n> \"script\" and doesn't scroll off the screen.\n\nThat makes sense. I am planning to withdraw this soon if there are no\nother objections.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 9 Nov 2023 10:53:13 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove unnecessary 'always:' from CompilerWarnings task"
}
] |
[
{
"msg_contents": "Hello\nI encountered a very lucky logical decoding error on the publisher:\n\n2023-09-05 09:58:38.955 UTC 28316 melkij@postgres from [local] [vxid:3/0 txid:0] [START_REPLICATION] LOG: starting logical decoding for slot \"pubsub\"\n2023-09-05 09:58:38.955 UTC 28316 melkij@postgres from [local] [vxid:3/0 txid:0] [START_REPLICATION] DETAIL: Streaming transactions committing after 0/16AD5F8, reading WAL from 0/16AD5F8.\n2023-09-05 09:58:38.955 UTC 28316 melkij@postgres from [local] [vxid:3/0 txid:0] [START_REPLICATION] STATEMENT: START_REPLICATION SLOT \"pubsub\" LOGICAL 0/16AD5F8 (proto_version '4', origin 'any', publication_names '\"testpub\"')\n2023-09-05 09:58:38.956 UTC 28316 melkij@postgres from [local] [vxid:3/0 txid:0] [START_REPLICATION] LOG: logical decoding found consistent point at 0/16AD5F8\n2023-09-05 09:58:38.956 UTC 28316 melkij@postgres from [local] [vxid:3/0 txid:0] [START_REPLICATION] DETAIL: There are no running transactions.\n2023-09-05 09:58:38.956 UTC 28316 melkij@postgres from [local] [vxid:3/0 txid:0] [START_REPLICATION] STATEMENT: START_REPLICATION SLOT \"pubsub\" LOGICAL 0/16AD5F8 (proto_version '4', origin 'any', publication_names '\"testpub\"')\n2023-09-05 09:58:39.187 UTC 28316 melkij@postgres from [local] [vxid:3/0 txid:0] [START_REPLICATION] ERROR: could not create file \"pg_replslot/pubsub/state.tmp\": File exists\n\nAs I found out, the disk with the database ran out of space, but it was so lucky that postgresql did not go into crash recovery. Doubly lucky that logical walsender was able to create state.tmp, but could not write the contents and got \"ERROR: could not write to file \"pg_replslot/pubsub/state.tmp\": No space left on device\". The empty state.tmp remained on disk. When the problem with free disk space was solved, the publication remained inoperative. 
To fix it, one need to restart the database (RestoreSlotFromDisk always deletes state.tmp) or delete state.tmp manually.\n\nMaybe in SaveSlotToPath (src/backend/replication/slot.c) it's also worth deleting state.tmp if it already exists? All operations are performed under LWLock and there should be no parallel access.\n\nPS: I reproduced the error on HEAD by adding pg_usleep to SaveSlotToPath before writing to file. At this time, I filled up the virtual disk.\n\nregards, Sergei\n\n\n",
"msg_date": "Tue, 05 Sep 2023 13:38:46 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "whether to unlink the existing state.tmp file in SaveSlotToPath"
}
] |
[
{
"msg_contents": "\n\n",
"msg_date": "Tue, 5 Sep 2023 22:29:58 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to add a new pg oid?"
},
{
"msg_contents": "\n> On 5 Sep 2023, at 22:29, jacktby jacktby <[email protected]> wrote:\n> \nI’m trying to add a new data type for my pg. How to do that? Can you give me more details or an example?\n\n",
"msg_date": "Tue, 5 Sep 2023 23:09:35 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to add a new pg oid?"
},
{
"msg_contents": "On Tue, 5 Sept 2023 at 18:13, jacktby jacktby <[email protected]> wrote:\n>\n> I’m trying to add a new data type for my pg. How to do that? Can you give me more details or an example?\n\nYou could get started by looking at the documentation on custom SQL\ntypes with https://www.postgresql.org/docs/current/sql-createtype.html,\nor look at the comments in pg_type.dat and the comments on TypInfo in\nbootstrap.c on how the built-in types are created and managed.\n\nLastly, you could look at pg_class and the genbki documentation if you\nwant to add new catalog types.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 5 Sep 2023 18:46:16 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to add a new pg oid?"
},
{
"msg_contents": "OIDs don't exist independently of the data they are associated with. Give\nmore context if you want a better answer. Or just go look at the source\ncode commits for when the last time something needing an OID got added to\nthe core catalog.\n\nDavid J.",
"msg_date": "Tue, 5 Sep 2023 10:47:45 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to add a new pg oid?"
},
{
"msg_contents": "On Tue, Sep 5, 2023, 11:17 jacktby jacktby <[email protected]> wrote:\n\n>\n> > On 5 Sep 2023, at 22:29, jacktby jacktby <[email protected]> wrote:\n> >\n> I’m trying to add a new data type for my pg. How to do that? Can you give\n> me more details or an example\n>\n\nUse create type and let the system deal with it. Otherwise, no, I don't\nhave that knowledge.\n\nDavid J.",
"msg_date": "Tue, 5 Sep 2023 11:32:57 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to add a new pg oid?"
},
{
"msg_contents": "> On 6 Sep 2023, at 01:47, David G. Johnston <[email protected]> wrote:\n> \n> OIDs don't exist independently of the data they are associated with. Give more context if you want a better answer. Or just go look at the source code commits for when the last time something needing an OID got added to the core catalog.\n> \n> David J.\n> \n\n{ oid => '111', array_type_oid => '6099', descr => 'similarity columns',\n typname => 'similarity_columns', typlen => '-1', typlen => '-1', typbyval => 'f', typcategory => 'U',\n typinput => 'byteain', typoutput => 'byteaout', typreceive => 'bytearecv',\n typsend => 'byteasend', typalign => 'i', typstorage => 'x' },\n\nI add above into pg_type.dat. And then I add execute “make install” and restart pg. And Then do below:\npostgres=# SELECT typname from pg_type where typname like '%similarity%';\n typname \n---------\n(0 rows)\n\nI can’t get the type I added. What else I need to do?",
"msg_date": "Wed, 6 Sep 2023 18:19:14 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to add a new pg oid?"
},
{
"msg_contents": "> On 6 Sep 2023, at 18:19, jacktby jacktby <[email protected]> wrote:\n> \n> \n> \n>> On 6 Sep 2023, at 01:47, David G. Johnston <[email protected]> wrote:\n>> \n>> OIDs don't exist independently of the data they are associated with. Give more context if you want a better answer. Or just go look at the source code commits for when the last time something needing an OID got added to the core catalog.\n>> \n>> David J.\n>> \n> \n> { oid => '111', array_type_oid => '6099', descr => 'similarity columns',\n> typname => 'similarity_columns', typlen => '-1', typlen => '-1', typbyval => 'f', typcategory => 'U',\n> typinput => 'byteain', typoutput => 'byteaout', typreceive => 'bytearecv',\n> typsend => 'byteasend', typalign => 'i', typstorage => 'x' },\n> \n> I add above into pg_type.dat. And then I add execute “make install” and restart pg. And Then do below:\n> postgres=# SELECT typname from pg_type where typname like '%similarity%';\n> typname \n> ---------\n> (0 rows)\n> \n> I can’t get the type I added. What else I need to do?\nI add below in bootstrap.c:\nstatic const struct typinfo TypInfo[] = {\n\t{\"similarity_columns\", SimilarityColumns, 0, -1, false, TYPALIGN_INT, TYPSTORAGE_EXTENDED, InvalidOid,\n\t F_BYTEAIN, F_BYTEAOUT},\n….\n}\nAnd then “make install” and restart pg.but still:\npostgres=# SELECT typname from pg_type where typname like '%similarity%';\n typname \n---------\n(0 rows)\n\nPlease give me help.",
"msg_date": "Wed, 6 Sep 2023 18:50:54 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to add a new pg oid?"
},
{
"msg_contents": "> On 6 Sep 2023, at 18:50, jacktby jacktby <[email protected]> wrote:\n> \n> \n> \n>> On 6 Sep 2023, at 18:19, jacktby jacktby <[email protected]> wrote:\n>> \n>> \n>> \n>>> On 6 Sep 2023, at 01:47, David G. Johnston <[email protected]> wrote:\n>>> \n>>> OIDs don't exist independently of the data they are associated with. Give more context if you want a better answer. Or just go look at the source code commits for when the last time something needing an OID got added to the core catalog.\n>>> \n>>> David J.\n>>> \n>> \n>> { oid => '111', array_type_oid => '6099', descr => 'similarity columns',\n>> typname => 'similarity_columns', typlen => '-1', typlen => '-1', typbyval => 'f', typcategory => 'U',\n>> typinput => 'byteain', typoutput => 'byteaout', typreceive => 'bytearecv',\n>> typsend => 'byteasend', typalign => 'i', typstorage => 'x' },\n>> \n>> I add above into pg_type.dat. And then I add execute “make install” and restart pg. And Then do below:\n>> postgres=# SELECT typname from pg_type where typname like '%similarity%';\n>> typname \n>> ---------\n>> (0 rows)\n>> \n>> I can’t get the type I added. What else I need to do?\n> I add below in bootstrap.c:\n> static const struct typinfo TypInfo[] = {\n> \t{\"similarity_columns\", SimilarityColumns, 0, -1, false, TYPALIGN_INT, TYPSTORAGE_EXTENDED, InvalidOid,\n> \t F_BYTEAIN, F_BYTEAOUT},\n> ….\n> }\n> And then “make install” and restart pg.but still:\n> postgres=# SELECT typname from pg_type where typname like '%similarity%';\n> typname \n> ---------\n> (0 rows)\n> \n> Please give me help.\nAfter initdb , I get it. Thanks",
"msg_date": "Wed, 6 Sep 2023 19:46:42 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to add a new pg oid?"
}
] |
[
{
"msg_contents": "Here's a fix to move the privilege check on constraint dropping from\nATExecDropConstraint to dropconstraint_internal. The former doesn't\nrecurse anymore, so there's no point in doing that or in fact even\nhaving the 'recursing' argument anymore.\n\nThis fixes the following test case\n\nCREATE ROLE alice;\nCREATE ROLE bob;\n\nGRANT ALL ON SCHEMA PUBLIC to alice, bob;\nGRANT alice TO bob;\n\nSET ROLE alice;\nCREATE TABLE parent (a int NOT NULL);\n\nSET ROLE bob;\nCREATE TABLE child () INHERITS (parent);\n\nAt this point, bob owns the child table, to which alice has no access.\nBut alice can do this:\nALTER TABLE parent ALTER a DROP NOT NULL;\nwhich is undesirable, because it removes the NOT NULL constraint from\ntable child, which is owned by bob.\n\n\nAlternatively, we could say that Alice is allowed to drop the constraint\non her table, and that we should react by marking the constraint on\nBob's child table as 'islocal' instead of removing it. Now, I'm pretty\nsure we don't really care one bit about this case, and the reason is\nthis: we seem to have no tests for mixed-ownership table hierarchies.\nIf we did care, we would have some, and this bug would not have occurred\nin the first place. Besides, nobody likes legacy inheritance anyway.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La persona que no quería pecar / estaba obligada a sentarse\n en duras y empinadas sillas / desprovistas, por cierto\n de blandos atenuantes\" (Patricio Vogel)\n\n\n",
"msg_date": "Tue, 5 Sep 2023 19:44:44 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "missing privilege check after not-null constraint rework"
},
{
"msg_contents": "On 2023-Sep-05, Alvaro Herrera wrote:\n\n> Here's a fix to move the privilege check on constraint dropping from\n> ATExecDropConstraint to dropconstraint_internal.\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No renuncies a nada. No te aferres a nada.\"",
"msg_date": "Tue, 5 Sep 2023 19:45:27 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: missing privilege check after not-null constraint rework"
},
{
"msg_contents": "On 2023-Sep-05, Alvaro Herrera wrote:\n\n> On 2023-Sep-05, Alvaro Herrera wrote:\n> \n> > Here's a fix to move the privilege check on constraint dropping from\n> > ATExecDropConstraint to dropconstraint_internal.\n\nI have pushed this. It's just a fixup for an embarrasing bug in\nb0e96f311985.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S. Lewis)\n\n\n",
"msg_date": "Thu, 7 Sep 2023 13:02:45 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: missing privilege check after not-null constraint rework"
}
] |
[
{
"msg_contents": "I was reading through the page and noticed this portion which didn't \nsound quite right. I am hoping that I captured the original intent \ncorrectly. Please let me know if something should be changed and/or \nreflowed, since I am not sure what best practices are when editing the \ndocs. I did notice that this same wording issue has existed since \n428b1d6.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Tue, 05 Sep 2023 15:38:38 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix some wording in WAL docs"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile browsing the test cases, found that the incorrect filename was there\nin the test case comment.\nThe below commit added the custom hash opclass in insert.sql,\n\n--------------------------------------------------------------\ncommit fafec4cce814b9b15991b62520dc5e5e84655a8a\nAuthor: Alvaro Herrera <[email protected]>\nDate: Fri Apr 13 12:27:22 2018 -0300\n\n    Use custom hash opclass for hash partition pruning\n--------------------------------------------------------------\n\nand later below commit moved those to test_setup.sql\n\n--------------------------------------------------------------\ncommit cc50080a828dd4791b43539f5a0f976e535d147c\nAuthor: Tom Lane <[email protected]>\nDate: Tue Feb 8 15:30:38 2022 -0500\n\n    Rearrange core regression tests to reduce cross-script dependencies.\n--------------------------------------------------------------\n\nbut we haven't changed the filename in other test cases.\nDid the same in the attached patch.\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com",
"msg_date": "Wed, 6 Sep 2023 10:48:32 +0530",
"msg_from": "Suraj Kharage <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Regression] Incorrect filename in test case comment"
},
{
"msg_contents": "On Wed, Sep 06, 2023 at 10:48:32AM +0530, Suraj Kharage wrote:\n> While browsing the test cases, found that the incorrect filename was there\n> in the test case comment.\n> The below commit added the custom hash opclass in insert.sql,\n\n--- part_part_test_int4_ops and part_test_text_ops in insert.sql.\n+-- part_part_test_int4_ops and part_test_text_ops in test_setup.sql.\n\nGood catch, but part_part_test_int4_ops should be renamed to\npart_test_int4_ops, removing the first \"part_\", no?\n--\nMichael",
"msg_date": "Wed, 6 Sep 2023 17:19:14 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Regression] Incorrect filename in test case comment"
},
{
"msg_contents": "> On 6 Sep 2023, at 07:18, Suraj Kharage <[email protected]> wrote:\n\n> we haven't changed the filename in other test cases.\n> Did the same in the attached patch.\n\nPushed (along with a small typo fix), thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 6 Sep 2023 10:21:07 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Regression] Incorrect filename in test case comment"
},
{
"msg_contents": "> On 6 Sep 2023, at 10:19, Michael Paquier <[email protected]> wrote:\n> \n> On Wed, Sep 06, 2023 at 10:48:32AM +0530, Suraj Kharage wrote:\n>> While browsing the test cases, found that the incorrect filename was there\n>> in the test case comment.\n>> The below commit added the custom hash opclass in insert.sql,\n> \n> --- part_part_test_int4_ops and part_test_text_ops in insert.sql.\n> +-- part_part_test_int4_ops and part_test_text_ops in test_setup.sql.\n> \n> Good catch, but part_part_test_int4_ops should be renamed to\n> part_test_int4_ops, removing the first \"part_\", no?\n\nAh, seems we came to same conclusion when looking simultaneously, I just pushed\nthe fix with the typo fix.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 6 Sep 2023 10:22:10 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Regression] Incorrect filename in test case comment"
},
{
"msg_contents": "Thanks Daniel and Michael.\n\nOn Wed, Sep 6, 2023 at 1:52 PM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 6 Sep 2023, at 10:19, Michael Paquier <[email protected]> wrote:\n> >\n> > On Wed, Sep 06, 2023 at 10:48:32AM +0530, Suraj Kharage wrote:\n> >> While browsing the test cases, found that the incorrect filename was\n> there\n> >> in the test case comment.\n> >> The below commit added the custom hash opclass in insert.sql,\n> >\n> > --- part_part_test_int4_ops and part_test_text_ops in insert.sql.\n> > +-- part_part_test_int4_ops and part_test_text_ops in test_setup.sql.\n> >\n> > Good catch, but part_part_test_int4_ops should be renamed to\n> > part_test_int4_ops, removing the first \"part_\", no?\n>\n> Ah, seems we came to same conclusion when looking simultaneously, I just\n> pushed\n> the fix with the typo fix.\n>\n> --\n> Daniel Gustafsson\n>\n>\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com",
"msg_date": "Wed, 6 Sep 2023 15:30:21 +0530",
"msg_from": "Suraj Kharage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Regression] Incorrect filename in test case comment"
}
] |
[
{
"msg_contents": "Hi,\n\nIn PG-16, I see that we have made a lot of changes in the area roles\nand privileges. I have a question related to this and here is my\nquestion:\n\nLet's say there is a roleA who creates roleB and then roleB creates\nanother role, say roleC. By design, A can administer B and B can\nadminister C. But, can A administer C although it has not created C?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Wed, 6 Sep 2023 14:57:11 +0530",
"msg_from": "Ashutosh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can a role have indirect ADMIN OPTION on another role?"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 11:14 AM Ashutosh Sharma <[email protected]> wrote:\n> In PG-16, I see that we have made a lot of changes in the area roles\n> and privileges. I have a question related to this and here is my\n> question:\n>\n> Let's say there is a roleA who creates roleB and then roleB creates\n> another role, say roleC. By design, A can administer B and B can\n> administer C. But, can A administer C although it has not created C?\n\nUltimately, yes, because A can get access to all of B's privileges,\nwhich include administering C. However, A might or might not have B's\nprivileges by default, depending on the value of createrole_self_grant\nin effect at the time when B was created. So, depending on the\nsituation, A might (or might not) need to do something like GRANT\nroleB to roleA or SET ROLE roleB in order to be able to actually\nexecute the administration commands in question.\n\nIMHO, it really couldn't reasonably work in any other way. Consider\nthat A's right to administer B includes the right to change B's\npassword. If the superuser wants users A and B that can't interfere\nwith each other, the superuser should create both of those accounts\nthemselves instead of letting one create the other.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Sep 2023 11:33:23 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can a role have indirect ADMIN OPTION on another role?"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 9:03 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Sep 6, 2023 at 11:14 AM Ashutosh Sharma <[email protected]> wrote:\n> > In PG-16, I see that we have made a lot of changes in the area roles\n> > and privileges. I have a question related to this and here is my\n> > question:\n> >\n> > Let's say there is a roleA who creates roleB and then roleB creates\n> > another role, say roleC. By design, A can administer B and B can\n> > administer C. But, can A administer C although it has not created C?\n>\n> Ultimately, yes, because A can get access to all of B's privileges,\n> which include administering C. However, A might or might not have B's\n> privileges by default, depending on the value of createrole_self_grant\n> in effect at the time when B was created. So, depending on the\n> situation, A might (or might not) need to do something like GRANT\n> roleB to roleA or SET ROLE roleB in order to be able to actually\n> execute the administration commands in question.\n>\n> IMHO, it really couldn't reasonably work in any other way. Consider\n> that A's right to administer B includes the right to change B's\n> password. If the superuser wants users A and B that can't interfere\n> with each other, the superuser should create both of those accounts\n> themselves instead of letting one create the other.\n>\n\nThank you for the clarification. This is very helpful.\n\nActually I have one more question. With this new design, assuming that\ncreaterole_self_grant is set to 'set, inherit' in postgresql.conf and\nif roleA creates roleB. So, in this case, roleA will inherit\npermissions of roleB which means roleA will have access to objects\nowned by roleB. But what if roleB doesn't want to give roleA access to\nthe certain objects it owns. As an example let's say that roleB\ncreates a table 't' and by default (with this setting) roleA will have\naccess to this table, but for some reason roleB does not want roleA to\nhave access to it. 
So what's the option for roleB? I've tried running\n\"revoke select on table t from roleA\" but that doesn't seem to be\nworking. the only option that works is roleA himself set inherit\noption on roleB to false - \"grant roleB to roleA with inherit false;\"\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Wed, 6 Sep 2023 23:03:23 +0530",
"msg_from": "Ashutosh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can a role have indirect ADMIN OPTION on another role?"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 1:33 PM Ashutosh Sharma <[email protected]> wrote:\n> Actually I have one more question. With this new design, assuming that\n> createrole_self_grant is set to 'set, inherit' in postgresql.conf and\n> if roleA creates roleB. So, in this case, roleA will inherit\n> permissions of roleB which means roleA will have access to objects\n> owned by roleB. But what if roleB doesn't want to give roleA access to\n> the certain objects it owns. As an example let's say that roleB\n> creates a table 't' and by default (with this setting) roleA will have\n> access to this table, but for some reason roleB does not want roleA to\n> have access to it. So what's the option for roleB? I've tried running\n> \"revoke select on table t from roleA\" but that doesn't seem to be\n> working. the only option that works is roleA himself set inherit\n> option on roleB to false - \"grant roleB to roleA with inherit false;\"\n\nIt doesn't matter what roleB wants. roleA is strictly more powerful\nthan roleB and can do whatever they want to roleB or roleB's objects\nregardless of how roleB feels about it.\n\nIn the same way, the superuser is strictly more powerful than either\nroleA or roleB and can override any security control that either one\nof them put in place.\n\nNeither roleB nor roleA has any right to hide their data from the\nsuperuser, and roleB has no right to hide data from roleA. It's a\nhierarchy. If you're on top, you're in charge, and that's it.\n\nHere again, it can't really meaningfully work in any other way.\nSuppose you were to add a feature to allow roleB to hide data from\nroleA. Given that roleA has the ability to change roleB's password,\nhow could that possibly work? 
When you give one user the ability to\nadminister another user, that includes the right to change that user's\npassword, change whether they can log in, drop the role, give the\nprivileges of that role to themselves or other users, and a whole\nbunch of other super-powerful stuff. You can't really give someone\nthat level of power over another account and, at the same time, expect\nthe account being administered to be able to keep the more powerful\naccount from doing stuff. It just can't possibly work. If you want\nroleB to be able to resist roleA, you have to give roleA less power.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Sep 2023 14:50:41 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can a role have indirect ADMIN OPTION on another role?"
},
{
    "msg_contents": "On Wed, Sep 6, 2023 at 1:55 PM Ashutosh Sharma <[email protected]>\nwrote:\n\n> But what if roleB doesn't want to give roleA access to\n> the certain objects it owns.\n\n\nNot doable - roleA can always pretend they are roleB one way or another\nsince roleA made roleB.\n\nDavid J.\n",
"msg_date": "Wed, 6 Sep 2023 15:13:33 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can a role have indirect ADMIN OPTION on another role?"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 12:20 AM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Sep 6, 2023 at 1:33 PM Ashutosh Sharma <[email protected]> wrote:\n> > Actually I have one more question. With this new design, assuming that\n> > createrole_self_grant is set to 'set, inherit' in postgresql.conf and\n> > if roleA creates roleB. So, in this case, roleA will inherit\n> > permissions of roleB which means roleA will have access to objects\n> > owned by roleB. But what if roleB doesn't want to give roleA access to\n> > the certain objects it owns. As an example let's say that roleB\n> > creates a table 't' and by default (with this setting) roleA will have\n> > access to this table, but for some reason roleB does not want roleA to\n> > have access to it. So what's the option for roleB? I've tried running\n> > \"revoke select on table t from roleA\" but that doesn't seem to be\n> > working. the only option that works is roleA himself set inherit\n> > option on roleB to false - \"grant roleB to roleA with inherit false;\"\n>\n> It doesn't matter what roleB wants. roleA is strictly more powerful\n> than roleB and can do whatever they want to roleB or roleB's objects\n> regardless of how roleB feels about it.\n>\n> In the same way, the superuser is strictly more powerful than either\n> roleA or roleB and can override any security control that either one\n> of them put in place.\n>\n> Neither roleB nor roleA has any right to hide their data from the\n> superuser, and roleB has no right to hide data from roleA. It's a\n> hierarchy. If you're on top, you're in charge, and that's it.\n>\n> Here again, it can't really meaningfully work in any other way.\n> Suppose you were to add a feature to allow roleB to hide data from\n> roleA. Given that roleA has the ability to change roleB's password,\n> how could that possibly work? 
When you give one user the ability to\n> administer another user, that includes the right to change that user's\n> password, change whether they can log in, drop the role, give the\n> privileges of that role to themselves or other users, and a whole\n> bunch of other super-powerful stuff. You can't really give someone\n> that level of power over another account and, at the same time, expect\n> the account being administered to be able to keep the more powerful\n> account from doing stuff. It just can't possibly work. If you want\n> roleB to be able to resist roleA, you have to give roleA less power.\n>\n\nI agree with you. thank you once again.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Thu, 7 Sep 2023 12:59:09 +0530",
"msg_from": "Ashutosh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can a role have indirect ADMIN OPTION on another role?"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 3:43 AM David G. Johnston\n<[email protected]> wrote:\n>\n> On Wed, Sep 6, 2023 at 1:55 PM Ashutosh Sharma <[email protected]> wrote:\n>>\n>> But what if roleB doesn't want to give roleA access to\n>> the certain objects it owns.\n>\n>\n> Not doable - roleA can always pretend they are roleB one way or another since roleA made roleB.\n>\n\nOkay. It makes sense. thanks.!\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Thu, 7 Sep 2023 13:02:26 +0530",
"msg_from": "Ashutosh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can a role have indirect ADMIN OPTION on another role?"
}
] |
[
{
"msg_contents": "Hi:\n\nThis thread is a refactor of thread [1] for easier communication.\n\nCurrently add_paths_to_append_rel overlooked the startup cost for creating\nappend path, so it may have lost some optimization chances. After a\nglance,\nthe following 4 identifiers can be impacted.\n\nIdentifier 1:\n\nSELECT .. FROM v1\nUNION ALL\nSELECT .. FROM v2\nLIMIT 3;\n\nIdentifier 2:\n\nSELECT * FROM p .. LIMIT 3; p is a partitioned table.\n\nIdentifier 3:\nSELECT * FROM p JOIN q using (partkey) LIMIT 3;\n\nIf we did the partition-wise-join, then we lost the chances for a better\nplan.\n\nIdentifier 4: -- EXISTS implies LIMIT 1;\nSELECT * FROM foo\nWHERE EXISTS\n(SELECT 1 FROM append_rel_v_not_pullup_able WHERE xxx);\n\nHowever, after I completed my patch and wanted to build some real\nqueries to prove my idea, I just find it is hard to build the case for\nIdentifier 2/3/4. But the Improvement for Identifier 1 is easy and\nmy real user case in work is Identifier 1 as well.\n\nSo a patch is attached for this case, it will use fractional costs\nrather than total costs if needed. The following items needs more\nattention during development.\n\n- We shouldn't do the optimization if there are still more tables to join,\n the reason is similar to has_multiple_baserels(root) in\n set_subquery_pathlist. But the trouble here is we may inherit multiple\n levels to build an appendrel, so I have to keep the 'top_relids' all the\n time and compare it with PlannerInfo.all_baserels. If they are the same,\n then it is the case we want to optimize.\n\n- add_partial_path doesn't consider the startup costs by design, I didn't\n rethink too much about this, but the idea of \"choose a path which\n let each worker produces the top-N tuples as fast as possible\" looks\n reasonable, and even though add_partial_path doesn't consider startup\n cost, it is still possible that childrel keeps more than 1 partial paths\n due to any reasons except startup_cost, for example PathKey. 
then we\n still have chances to choose the cheapest fractional path among\n them. The same strategy also applies to mixed partial and non-partial\n append paths.\n\n- Due to the complexity of add_paths_to_append_rel, 3 arguments have\n to be added to get_cheapest_fractional_path...\n\n Path *\nget_cheapest_fractional_path(RelOptInfo *rel, double tuple_fraction,\nbool allow_parameterized, bool look_partial,\nbool must_parallel_safe)\n\n\nCases can be improved.\n\nHere is the simplest test case, but it will not be hard to provide more\ncases for Identifier 1.\n\n (select * from tenk1 order by hundred)\n UNION ALL\n (select * from tenk1 order by hundred)\n limit 3;\n\nmaster: 8.096ms.\npatched: 0.204ms.\n\nThe below user case should be more reasonable for real business.\n\nwith a as (select * from t1 join t2..),\nb as (select * from t1 join t3 ..)\nselect * from a union all select * from b\nlimit 3;\n\nThe patch would also have impacts on identifier 2/3/4, even though I can't\nmake a demo sql which can get benefits from this patch, I also added\nsome test cases for code coverage purposes.\n\nAny feedback is welcome!\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAKU4AWqEnzhUTxopVhENC3vs6NnYV32+e6GSBtp1rAv0ZNX=mQ@mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 6 Sep 2023 20:39:56 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "make add_paths_to_append_rel aware of startup cost"
},
{
    "msg_contents": "> - We shouldn't do the optimization if there are still more tables to join,\n> the reason is similar to has_multiple_baserels(root) in\n> set_subquery_pathlist.\n>\n\nAfter some internal discussion, we have 2 different choices here. Let's\ncall the current choice method-a, and the new choice method-b.\nMethod-b will just ignore the \"no more tables to join\" limitation\nand build the append path with both the cheapest startup cost and the cheapest\ntotal cost, which is pretty much like the method we use to join a plain relation with\nanother relation. The uneasy part is that it is the cheapest startup cost\nrather than the cheapest fractional cost.\n\nMethod-a is pretty much the same as what set_subquery_pathlist is doing, which has\na limitation of \"no more tables to join\" and does not have the \"cheapest startup\ncost\" part.\n\nIdeally we could apply both strategies if we don't consider the effort. If\nthere are no more tables to join, we use method-a; otherwise we use\nmethod-b. With this thinking, we can even apply the same strategy to plain\nrelations as well.\n\nHowever, I am not sure if the \"cheapest startup cost\" is a real problem.\nIf it is not, we can apply method-b directly and modify\nset_subquery_pathlist to do the same for consistency.\n\n\n-- \nBest Regards\nAndy Fan\n",
"msg_date": "Wed, 13 Sep 2023 20:20:57 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "On Thu, 7 Sept 2023 at 04:37, Andy Fan <[email protected]> wrote:\n> Currently add_paths_to_append_rel overlooked the startup cost for creating\n> append path, so it may have lost some optimization chances. After a glance,\n> the following 4 identifiers can be impacted.\n\n> - We shouldn't do the optimization if there are still more tables to join,\n> the reason is similar to has_multiple_baserels(root) in\n> set_subquery_pathlist. But the trouble here is we may inherit multiple\n> levels to build an appendrel, so I have to keep the 'top_relids' all the\n> time and compare it with PlannerInfo.all_baserels. If they are the same,\n> then it is the case we want to optimize.\n\nI think you've likely gone to the trouble of trying to determine if\nthere are any joins pending because you're considering using a cheap\nstartup path *instead* of the cheapest total path and you don't want\nto do that when some join will cause all the rows to be read thus\nmaking the plan more expensive if a cheap startup path was picked.\n\nInstead of doing that, why don't you just create a completely new\nAppendPath containing all the cheapest_startup_paths and add that to\nthe append rel. You can optimise this and only do it when\nrel->consider_startup is true.\n\nDoes the attached do anything less than what your patch does?\n\nDavid",
"msg_date": "Fri, 15 Sep 2023 19:15:24 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
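The proposal above — keep the Append built from each child's cheapest-total path, and additionally build one from each child's cheapest-startup path when `rel->consider_startup` is true — can be sketched outside the planner like this. This is an illustrative Python model, not PostgreSQL code; the per-child path lists, the numbers, and the simplified plain-Append costing (startup cost of the first child, sum of child total costs) are assumptions for demonstration only.

```python
# Toy model: each child rel has a list of (startup_cost, total_cost)
# candidate paths. Build one Append from cheapest-total choices and,
# when startup cost matters, a second one from cheapest-startup choices.

def build_append(children, key):
    """Pick one path per child by `key`; a plain (unordered) Append is
    modelled as: startup cost of the first child, sum of child totals."""
    subpaths = [min(paths, key=key) for paths in children]
    startup = subpaths[0][0]          # first child must start up first
    total = sum(p[1] for p in subpaths)
    return subpaths, startup, total

# Hypothetical children, e.g. a fast-start index scan vs a cheaper-total
# sequential scan for each partition.
children = [
    [(50.0, 60.0), (1.0, 500.0)],
    [(40.0, 55.0), (2.0, 300.0)],
]

cheap_total_append = build_append(children, key=lambda p: p[1])
cheap_startup_append = build_append(children, key=lambda p: p[0])

print(cheap_total_append)    # startup 50.0, total 115.0
print(cheap_startup_append)  # startup 1.0,  total 800.0
```

Under a small LIMIT the second Append can win despite its much larger total cost; which one survives is then left to the usual add_path() competition.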
{
"msg_contents": "Hi David,\n Thanks for taking a look at this!\n\nOn Fri, Sep 15, 2023 at 3:15 PM David Rowley <[email protected]> wrote:\n\n> On Thu, 7 Sept 2023 at 04:37, Andy Fan <[email protected]> wrote:\n> > Currently add_paths_to_append_rel overlooked the startup cost for\n> creating\n> > append path, so it may have lost some optimization chances. After a\n> glance,\n> > the following 4 identifiers can be impacted.\n>\n> > - We shouldn't do the optimization if there are still more tables to\n> join,\n> > the reason is similar to has_multiple_baserels(root) in\n> > set_subquery_pathlist. But the trouble here is we may inherit multiple\n> > levels to build an appendrel, so I have to keep the 'top_relids' all\n> the\n> > time and compare it with PlannerInfo.all_baserels. If they are the\n> same,\n> > then it is the case we want to optimize.\n>\n> I think you've likely gone to the trouble of trying to determine if\n> there are any joins pending because you're considering using a cheap\n> startup path *instead* of the cheapest total path and you don't want\n> to do that when some join will cause all the rows to be read thus\n> making the plan more expensive if a cheap startup path was picked.\n>\n\nYes, that's true. However it is not something we can't resolve, one\nof the solutions is just like what I did in the patch. but currently the\nmain stuff which confuses me is if it is the right thing to give up the\noptimization if it has more tables to join (just like set_subquery_pathlist\ndid).\n\n\n> Instead of doing that, why don't you just create a completely new\n> AppendPath containing all the cheapest_startup_paths and add that to\n> the append rel. You can optimise this and only do it when\n> rel->consider_startup is true.\n>\n> Does the attached do anything less than what your patch does?\n>\n\nWe can work like this, but there is one difference from what\nmy current patch does. It is cheapest_startup_path vs cheapest\nfraction path. 
For example if we have the following 3 paths with\nall of the estimated rows is 100 and the tuple_fraction is 10.\n\nPath 1: startup_cost = 60, total_cost = 80 -- cheapest total cost.\nPath 2: startup_cost = 10, total_cost = 1000 -- cheapest startup cost\nPath 3: startup_cost = 20, total_cost = 90 -- cheapest fractional cost\n\nSo with the patch you propose, Path 1 & Path 3 are chosen to build\nappend path. but with my current patch, Only path 3 is kept. IIUC,\npath 3 should be the best one in this case.\n\nWe might also argue why Path 3 is kept in the first place (the children\nlevel), I think pathkey might be one option. and even path 3 is\ndiscarded somehow, I think only if it is the best one, we should\nkeep it ideally.\n\nAnother tiny factor of this is your propose isn't consistent with\nwhat set_subquery_pathlist which uses cheapest fractional cost\nand my proposal isn't consistent plain rel which uses cheapest\nstartup cost. We can't say which one is better, though.\n\nIf my above analysis is correct, I think the best way to handle this\nis if there is no more tables to join, we use cheapest fraction cost\nfor all the kinds of relations, including plain relation, append rel,\nsubquery and so on. If we have more tables to join, we use\ncheapest startup cost. 
On the implementation side, I want to use\nRelOptInfo.tuple_fraction instead of RelOptInfo.consider_startup.\ntuple_fraction = -1 means startup cost should not be considered.\ntuple_fraction = 0 means cheapest startup cost should be used.\ntuple_fraction > 0 means cheapest fractional cost should be used.\n\nI still haven't paid enough attention to consider_param_startup in\nRelOptInfo; I feel the above strategy will not generate\ntoo much overhead for the planner while it can provide\na better plan sometimes.\n\n-- \nBest Regards\nAndy Fan\n",
"msg_date": "Sun, 17 Sep 2023 21:42:05 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
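The Path 1/2/3 example above can be checked numerically. The sketch below is illustrative Python, not planner code; it scores each path the way PostgreSQL's compare_fractional_path_costs() does — startup_cost + fraction * (total_cost - startup_cost) — with the mail's numbers of 100 estimated rows and a tuple_fraction of 10 rows.

```python
# (startup_cost, total_cost) for the three candidate paths from the
# mail, all with 100 estimated rows.
paths = {
    "path1": (60.0, 80.0),    # cheapest total cost
    "path2": (10.0, 1000.0),  # cheapest startup cost
    "path3": (20.0, 90.0),    # cheapest fractional cost
}

def fractional_cost(startup, total, fraction):
    # Cost to produce the first `fraction` of a path's rows, using the
    # same convention as compare_fractional_path_costs().
    return startup + fraction * (total - startup)

fraction = 10 / 100.0  # want 10 of the estimated 100 rows

by_total = min(paths, key=lambda p: paths[p][1])
by_startup = min(paths, key=lambda p: paths[p][0])
by_fraction = min(paths, key=lambda p: fractional_cost(*paths[p], fraction))

print(by_total, by_startup, by_fraction)  # path1 path2 path3
```

Keeping cheapest-startup alongside cheapest-total retains path1 and path2, but for this LIMIT it is path3 (fractional cost 27, vs 62 for path1 and 109 for path2) that one actually wants — which is the gap being discussed.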
{
"msg_contents": "On Mon, 18 Sept 2023 at 01:42, Andy Fan <[email protected]> wrote:\n> On Fri, Sep 15, 2023 at 3:15 PM David Rowley <[email protected]> wrote:\n>> Instead of doing that, why don't you just create a completely new\n>> AppendPath containing all the cheapest_startup_paths and add that to\n>> the append rel. You can optimise this and only do it when\n>> rel->consider_startup is true.\n>>\n>> Does the attached do anything less than what your patch does?\n>\n>\n> We can work like this, but there is one difference from what\n> my current patch does. It is cheapest_startup_path vs cheapest\n> fraction path. For example if we have the following 3 paths with\n> all of the estimated rows is 100 and the tuple_fraction is 10.\n\nYeah, it's true that the version I wrote didn't consider the\nfractional part, but I didn't really see it as worse than what you\ndid. It looks like you're assuming that every append child will have\nthe same number of tuples read from it, but it seems to me that it\nwould only be valid to use the fractional part for the first child.\nThe path chosen for subsequent child paths would, if done correctly,\nneed to account for the estimated rows from the previous child paths.\nIt's not valid here to copy the code in generate_orderedappend_paths()\nas MergeAppend won't necessarily exhaust the first child subpath first\nlike Append will.\n\n> Path 1: startup_cost = 60, total_cost = 80 -- cheapest total cost.\n> Path 2: startup_cost = 10, total_cost = 1000 -- cheapest startup cost\n> Path 3: startup_cost = 20, total_cost = 90 -- cheapest fractional cost\n>\n> So with the patch you propose, Path 1 & Path 3 are chosen to build\n> append path. but with my current patch, Only path 3 is kept. 
IIUC,\n> path 3 should be the best one in this case.\n\nI assume you mean mine would build AppendPaths for 1+2, not 1+3.\n\nYou mentioned:\n\n> I just find it is hard to build the case for Identifier 2/3/4.\n\nI wonder if this is because generate_orderedappend_paths() handles\nstartup paths and most cases will that need a cheap startup plan will\nrequire some sort of pathkeys.\n\nThe example you mentioned of:\n\n(select * from tenk1 order by hundred)\n UNION ALL\n (select * from tenk1 order by hundred)\n limit 3;\n\nI don't find this to be a compellingly real-world case. The planner\nis under no obligation to output rows from the 1st branch of the UNION\nALL before the 2nd one. If the user cared about that then they'd have\ninstead added a top-level ORDER BY, in which case the planner seems\nhappy to use the index scan:\n\nregression=# explain (costs off) (select * from tenk1) UNION ALL\n(select * from tenk1) order by hundred limit 3;\n QUERY PLAN\n-------------------------------------------------------------\n Limit\n -> Merge Append\n Sort Key: tenk1.hundred\n -> Index Scan using tenk1_hundred on tenk1\n -> Index Scan using tenk1_hundred on tenk1 tenk1_1\n\nIt would be good if you could provide a bit more detail on the cases\nyou want to improve here. For example, if your #4 case, you have\n\"WHERE xxx\". I don't know if \"xxx\" is just a base qual or if there's a\ncorrelation to the outer query in there.\n\nAnother concern I have with your patch is that it seems to be under\nthe impression that there being further joins to evaluate at this\nquery level is the only reason that we would have to pull more than\nthe tuple fraction number of rows from the query. What gives you the\nconfidence that's the only reason we may want to pull more than the\ntuple fraction of tuples from the append child?\n\nDavid\n\n\n",
"msg_date": "Mon, 18 Sep 2023 15:58:32 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
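The point above — that a plain Append exhausts its children in order, so only the first arm is read fractionally and later arms should be discounted by rows already produced — can be sketched as follows. This is an illustrative Python helper with made-up numbers, not planner code.

```python
def per_arm_fractions(child_rows, limit):
    """For a plain Append under LIMIT `limit`, estimate what fraction
    of each child's rows is read. Children run strictly in order, so
    each child only sees the rows still needed after earlier children."""
    fractions = []
    remaining = limit
    for rows in child_rows:
        fractions.append(min(remaining, rows) / rows if remaining > 0 else 0.0)
        remaining = max(0, remaining - rows)
    return fractions

# Three 100-row arms, LIMIT 10: only the first arm is read at all.
print(per_arm_fractions([100, 100, 100], 10))  # [0.1, 0.0, 0.0]
# A small first arm pushes most of the work onto the second arm.
print(per_arm_fractions([4, 100, 100], 10))    # [1.0, 0.06, 0.0]
```

Passing the top-level tuple_fraction unmodified to every arm — as the generate_union_paths() comment quoted later in the thread says UNION ALL does — is exactly the simplification being debated here.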
{
"msg_contents": "On Mon, Sep 18, 2023 at 11:58 AM David Rowley <[email protected]> wrote:\n\n> On Mon, 18 Sept 2023 at 01:42, Andy Fan <[email protected]> wrote:\n> > On Fri, Sep 15, 2023 at 3:15 PM David Rowley <[email protected]>\n> wrote:\n> >> Instead of doing that, why don't you just create a completely new\n> >> AppendPath containing all the cheapest_startup_paths and add that to\n> >> the append rel. You can optimise this and only do it when\n> >> rel->consider_startup is true.\n> >>\n> >> Does the attached do anything less than what your patch does?\n> >\n> >\n> > We can work like this, but there is one difference from what\n> > my current patch does. It is cheapest_startup_path vs cheapest\n> > fraction path. For example if we have the following 3 paths with\n> > all of the estimated rows is 100 and the tuple_fraction is 10.\n>\n> Yeah, it's true that the version I wrote didn't consider the\n> fractional part, but I didn't really see it as worse than what you\n> did. It looks like you're assuming that every append child will have\n\nthe same number of tuples read from it, but it seems to me that it\n> would only be valid to use the fractional part for the first child.\n\nThe path chosen for subsequent child paths would, if done correctly,\n> need to account for the estimated rows from the previous child paths.\n>\n\nActually this is consistent with what generate_union_paths does now.\n\n /*\n* If plain UNION, tell children to fetch all tuples.\n*\n* Note: in UNION ALL, we pass the top-level tuple_fraction unmodified to\n* each arm of the UNION ALL. One could make a case for reducing the\n* tuple fraction for later arms (discounting by the expected size of the\n* earlier arms' results) but it seems not worth the trouble. 
The normal\n* case where tuple_fraction isn't already zero is a LIMIT at top level,\n* and passing it down as-is is usually enough to get the desired result\n* of preferring fast-start plans.\n*/\nif (!op->all)\nroot->tuple_fraction = 0.0;\n\nUNION ALL is pretty like append rel.\n\n\nIt's not valid here to copy the code in generate_orderedappend_paths()\n> as MergeAppend won't necessarily exhaust the first child subpath first\n> like Append will.\n>\n\nNot sure which code you are referring to, but the code I refer to\nmuch is generate_union_paths and set_subquery_pathlist.\n\n\n> > Path 1: startup_cost = 60, total_cost = 80 -- cheapest total cost.\n> > Path 2: startup_cost = 10, total_cost = 1000 -- cheapest startup cost\n> > Path 3: startup_cost = 20, total_cost = 90 -- cheapest fractional cost\n> >\n> > So with the patch you propose, Path 1 & Path 3 are chosen to build\n> > append path. but with my current patch, Only path 3 is kept. IIUC,\n> > path 3 should be the best one in this case.\n>\n> I assume you mean mine would build AppendPaths for 1+2, not 1+3.\n\n\nYes, it should be 1+2.\n\n>\n>\nYou mentioned:\n>\n> > I just find it is hard to build the case for Identifier 2/3/4.\n>\n> I wonder if this is because generate_orderedappend_paths() handles\n> startup paths and most cases will that need a cheap startup plan will\n> require some sort of pathkeys.\n>\n\nProbably yes.\n\n\n> The example you mentioned of:\n>\n> (select * from tenk1 order by hundred)\n> UNION ALL\n> (select * from tenk1 order by hundred)\n> limit 3;\n>\n> I don't find this to be a compellingly real-world case. The planner\n> is under no obligation to output rows from the 1st branch of the UNION\n> ALL before the 2nd one. 
If the user cared about that then they'd have\n> instead added a top-level ORDER BY, in which case the planner seems\n> happy to use the index scan:\n>\n\nSorry about the test case, here is the one with more compelling\nreal-world.\n\nwith s1 as (select * from tenk1 join tenk2 using (hundred)),\ns2 as (select * from tenk1 join tenk2 using (hundred))\nselect * from s1\nunion all\nselect * from s2\nlimit 3;\n\nIt would be good if you could provide a bit more detail on the cases\n> you want to improve here. For example, if your #4 case, you have\n> \"WHERE xxx\". I don't know if \"xxx\" is just a base qual or if there's a\n> correlation to the outer query in there.\n>\n\nfor the #4, the quickest test case is\n\nselect * from tenk1 where exists\n(\nwith s1 as (select * from tenk1 join tenk2 using (hundred)),\ns2 as (select * from tenk1 join tenk2 using (hundred))\nselect * from s1\nunion all\nselect * from s2\nwhere random() > 0.4);\n\nrandom() is used to make it can't be pull-up. and exists implies\nLIMIT 1;\n\n\nAnother concern I have with your patch is that it seems to be under\n> the impression that there being further joins to evaluate at this\n> query level is the only reason that we would have to pull more than\n> the tuple fraction number of rows from the query. What gives you the\n> confidence that's the only reason we may want to pull more than the\n> tuple fraction of tuples from the append child?\n>\n\nI think you are talking about something like ORDER BY, GROUP BY\nclause, I do overlook it. but if we admit cheapest fractional cost\nis a right factor to consider, this issue is not unresolvable since parse\nis at hand.\n\nAt last, you ignore the part of set_subquery_pathlist. I always use it\nto prove the value of the cheapest fractional cost. 
Am I missing something?\n\n /*\n * We can safely pass the outer tuple_fraction down to the subquery if the\n * outer level has no joining, aggregation, or sorting to do. Otherwise\n * we'd better tell the subquery to plan for full retrieval. (XXX This\n * could probably be made more intelligent ...)\n */\n if (parse->hasAggs ||\n parse->groupClause ||\n parse->groupingSets ||\n root->hasHavingQual ||\n parse->distinctClause ||\n parse->sortClause ||\n has_multiple_baserels(root))\n tuple_fraction = 0.0; /* default case */\n else\n tuple_fraction = root->tuple_fraction;\n\nWhat do you think about this in my last reply? \"If my above\nanalysis is correct, I think the best way to handle this is if there\nis no more tables to join, we use cheapest fraction cost for all\nthe kinds of relations, including plain relation, append rel,\nsubquery and so on, If we have more tables to join, we use\ncheapest startup cost.\". This is what is in my mind now.\n\n-- \nBest Regards\nAndy Fan\n",
"msg_date": "Mon, 18 Sep 2023 14:38:09 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "Hi,\n\n\n> What do you think about this in my last reply? \"If my above\n> analysis is correct, I think the best way to handle this is if there\n> is no more tables to join, we use cheapest fraction cost for all\n> the kinds of relations, including plain relation, append rel,\n> subquery and so on, If we have more tables to join, we use\n> cheapest startup cost.\". This is what is in my mind now.\n>\n>\nHere is an updated version to show what's in my mind.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Mon, 18 Sep 2023 18:55:46 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "On Mon, 18 Sept 2023 at 22:55, Andy Fan <[email protected]> wrote:\n> Here is an updated version to show what's in my mind.\n\nMy current thoughts on this are that the fractional cost part adds\nquite a bit more complexity than the minimalistic approach of just\nalso considering the cheapest startup path.\n\nThere's also quite a bit I don't like about the extra code you've added:\n\n1. RelOptInfo.tuple_fraction is not given a default value in locations\nwhere we do makeNode(RelOptInfo);\n\n2. This is very poorly documented and badly named. Also seems to have\na typo \"stopper\"\n\n+ /* Like order by, group by, distinct and so. */\n+ bool has_stoper_op;\n\nWith that, nobody has a hope of knowing if some new operation should\nset that value to true or false.\n\nI think it needs to define the meaning, which I think (very roughly)\nis \"does the query require any additional upper-planner operations\nwhich could require having to read more tuples from the final join\nrelation than the number of tuples which are read from the final upper\nrel.\"\n\n3. get_fractional_path_cost() goes to the trouble of setting\ntotal_rows then does not use it.\n\n4. I don't see why it's ok to take the total_rows from the first Path\nin the list in get_cheapest_fractional_path_ext(). What if another\nPath has some other value?\n\nBut overall, I'm more inclined to just go with the more simple \"add a\ncheap unordered startup append path if considering cheap startup\nplans\" version. I see your latest patch does both. So, I'd suggest two\npatches as I do see the merit in keeping this simple and cheap. 
If we\ncan get the first part in and you still find cases where you're not\ngetting the most appropriate startup plan based on the tuple fraction,\nthen we can reconsider what extra complexity we should endure in the\ncode based on the example query where we've demonstrated the planner\nis not choosing the best startup path appropriate to the given tuple\nfraction.\n\nDavid\n\n\n",
"msg_date": "Wed, 27 Sep 2023 21:03:43 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "Hi David,\n\nBut overall, I'm more inclined to just go with the more simple \"add a\n> cheap unordered startup append path if considering cheap startup\n> plans\" version. I see your latest patch does both. So, I'd suggest two\n> patches as I do see the merit in keeping this simple and cheap. If we\n> can get the first part in and you still find cases where you're not\n> getting the most appropriate startup plan based on the tuple fraction,\n> then we can reconsider what extra complexity we should endure in the\n> code based on the example query where we've demonstrated the planner\n> is not choosing the best startup path appropriate to the given tuple\n> fraction.\n>\n\nI think this is a fair point, I agree that your first part is good enough\nto be\ncommitted first. Actually I tried a lot to make a test case which can\nprove\nthe value of cheapest fractional cost but no gain so far:(\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sun, 1 Oct 2023 16:26:13 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "On Sun, 1 Oct 2023 at 21:26, Andy Fan <[email protected]> wrote:\n>> But overall, I'm more inclined to just go with the more simple \"add a\n>> cheap unordered startup append path if considering cheap startup\n>> plans\" version. I see your latest patch does both. So, I'd suggest two\n>> patches as I do see the merit in keeping this simple and cheap. If we\n>> can get the first part in and you still find cases where you're not\n>> getting the most appropriate startup plan based on the tuple fraction,\n>> then we can reconsider what extra complexity we should endure in the\n>> code based on the example query where we've demonstrated the planner\n>> is not choosing the best startup path appropriate to the given tuple\n>> fraction.\n>\n> I think this is a fair point, I agree that your first part is good enough to be\n> committed first. Actually I tried a lot to make a test case which can prove\n> the value of cheapest fractional cost but no gain so far:(\n\nI've attached a patch with the same code as the previous patch but\nthis time including a regression test.\n\nI see no reason to not commit this so if anyone feels differently\nplease let me know.\n\nDavid",
"msg_date": "Wed, 4 Oct 2023 13:41:24 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "On Wed, Oct 4, 2023 at 8:41 AM David Rowley <[email protected]> wrote:\n\n> On Sun, 1 Oct 2023 at 21:26, Andy Fan <[email protected]> wrote:\n> >> But overall, I'm more inclined to just go with the more simple \"add a\n> >> cheap unordered startup append path if considering cheap startup\n> >> plans\" version. I see your latest patch does both. So, I'd suggest two\n> >> patches as I do see the merit in keeping this simple and cheap. If we\n> >> can get the first part in and you still find cases where you're not\n> >> getting the most appropriate startup plan based on the tuple fraction,\n> >> then we can reconsider what extra complexity we should endure in the\n> >> code based on the example query where we've demonstrated the planner\n> >> is not choosing the best startup path appropriate to the given tuple\n> >> fraction.\n> >\n> > I think this is a fair point, I agree that your first part is good\n> enough to be\n> > committed first. Actually I tried a lot to make a test case which can\n> prove\n> > the value of cheapest fractional cost but no gain so far:(\n>\n> I've attached a patch with the same code as the previous patch but\n> this time including a regression test.\n>\n> I see no reason to not commit this so if anyone feels differently\n> please let me know.\n>\n> David\n>\n\nPatch LGTM.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Thu, 5 Oct 2023 09:11:35 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "On Thu, 5 Oct 2023 at 14:11, Andy Fan <[email protected]> wrote:\n> Patch LGTM.\n\nThanks. Pushed.\n\nDavid\n\n\n",
"msg_date": "Thu, 5 Oct 2023 21:04:36 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 9:07 PM David Rowley <[email protected]> wrote:\n> Thanks. Pushed.\n\nFYI somehow this plan from a8a968a8212e flipped in this run:\n\n=== dumping /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/regression.diffs\n===\ndiff -U3 /home/bf/bf-build/mylodon/HEAD/pgsql/src/test/regress/expected/union.out\n/home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/union.out\n--- /home/bf/bf-build/mylodon/HEAD/pgsql/src/test/regress/expected/union.out\n2024-01-15 00:31:13.947555940 +0000\n+++ /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/union.out\n2024-02-14 00:06:17.075584839 +0000\n@@ -1447,9 +1447,9 @@\n -> Append\n -> Nested Loop\n Join Filter: (t1.tenthous = t2.tenthous)\n- -> Seq Scan on tenk1 t1\n+ -> Seq Scan on tenk2 t2\n -> Materialize\n- -> Seq Scan on tenk2 t2\n+ -> Seq Scan on tenk1 t1\n -> Result\n (8 rows)\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-02-14%2000%3A01%3A03\n\n\n",
"msg_date": "Wed, 14 Feb 2024 13:21:27 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "\nThomas Munro <[email protected]> writes:\n\n> On Thu, Oct 5, 2023 at 9:07 PM David Rowley <[email protected]> wrote:\n>> Thanks. Pushed.\n>\n> FYI somehow this plan from a8a968a8212e flipped in this run:\n>\n> === dumping /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/regression.diffs\n> ===\n> diff -U3 /home/bf/bf-build/mylodon/HEAD/pgsql/src/test/regress/expected/union.out\n> /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/union.out\n> --- /home/bf/bf-build/mylodon/HEAD/pgsql/src/test/regress/expected/union.out\n> 2024-01-15 00:31:13.947555940 +0000\n> +++ /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/union.out\n> 2024-02-14 00:06:17.075584839 +0000\n> @@ -1447,9 +1447,9 @@\n> -> Append\n> -> Nested Loop\n> Join Filter: (t1.tenthous = t2.tenthous)\n> - -> Seq Scan on tenk1 t1\n> + -> Seq Scan on tenk2 t2\n> -> Materialize\n> - -> Seq Scan on tenk2 t2\n> + -> Seq Scan on tenk1 t1\n> -> Result\n> (8 rows)\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-02-14%2000%3A01%3A03\n\nThanks for this information! I will take a look at this.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Thu, 15 Feb 2024 11:16:41 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "Andy Fan <[email protected]> writes:\n\n> Thomas Munro <[email protected]> writes:\n>\n>> On Thu, Oct 5, 2023 at 9:07 PM David Rowley <[email protected]> wrote:\n>>> Thanks. Pushed.\n>>\n>> FYI somehow this plan from a8a968a8212e flipped in this run:\n>>\n>> === dumping /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/regression.diffs\n>> ===\n>> diff -U3 /home/bf/bf-build/mylodon/HEAD/pgsql/src/test/regress/expected/union.out\n>> /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/union.out\n>> --- /home/bf/bf-build/mylodon/HEAD/pgsql/src/test/regress/expected/union.out\n>> 2024-01-15 00:31:13.947555940 +0000\n>> +++ /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/union.out\n>> 2024-02-14 00:06:17.075584839 +0000\n>> @@ -1447,9 +1447,9 @@\n>> -> Append\n>> -> Nested Loop\n>> Join Filter: (t1.tenthous = t2.tenthous)\n>> - -> Seq Scan on tenk1 t1\n>> + -> Seq Scan on tenk2 t2\n>> -> Materialize\n>> - -> Seq Scan on tenk2 t2\n>> + -> Seq Scan on tenk1 t1\n>> -> Result\n>> (8 rows)\n>>\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-02-14%2000%3A01%3A03\n>\n> Thanks for this information! I will take a look at this.\n\nI found the both plans have the same cost, I can't get the accurate\ncause of this after some hours research, but it is pretty similar with\n7516056c584e3, so I uses a similar strategy to stable it. is it\nacceptable? \n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Thu, 15 Feb 2024 16:38:03 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "On Thu, 15 Feb 2024 at 21:42, Andy Fan <[email protected]> wrote:\n> I found the both plans have the same cost, I can't get the accurate\n> cause of this after some hours research, but it is pretty similar with\n> 7516056c584e3, so I uses a similar strategy to stable it. is it\n> acceptable?\n\nIt's pretty hard to say. I can only guess why this test would be\nflapping like this. I see it's happened before on mylodon, so probably\nnot a cosmic ray. It's not like add_path() chooses a random path when\nthe costs are the same, so I wondered if something similar is going on\nhere that was going on that led to f03a9ca4. In particular, see [1].\n\nOn master, I set a breakpoint in try_nestloop_path() to break on\n\"outerrel->relid==1 && innerrel->relid==2\". I see the total Nested\nLoop cost comes out the same with the join order reversed.\n\nWhich is:\n\n -> Nested Loop (cost=0.00..1500915.00 rows=10000 width=4)\n\nDoing the same with your patch applied, I get:\n\n-> Nested Loop (cost=0.00..600925.00 rows=4000 width=4)\n\nand forcing the join order to swap with the debugger, I see:\n\n-> Nested Loop (cost=0.00..600940.00 rows=4000 width=4)\n\nSo there's a difference now, but it's quite small. If it was a problem\nlike we had on [1], then since tenk1 and tenk2 have 345 pages (on my\nmachine), if relpages is down 1 or 2 pages, we'll likely get more of a\ncosting difference than 600925 vs 600940.\n\nIf I use:\n\nexplain\nselect t1.unique1 from tenk1 t1\ninner join tenk2 t2 on t1.tenthous = t2.tenthous and t2.thousand = 0\nunion all\n(values(1)) limit 1;\n\nI get:\n\n-> Nested Loop (cost=0.00..2415.03 rows=10 width=4)\n\nand with the join order reversed, I get:\n\n -> Nested Loop (cost=0.00..2440.00 rows=10 width=4)\n\nI'd be more happy using this one as percentage-wise, the cost\ndifference is much larger. I don't quite have the will to go through\nproving what the actual problem is here. 
I think [1] already proved\nthe relpages problem can (or could) happen.\n\nI checked that the t2.thounsand = 0 query still tests the cheap\nstartup paths in add_paths_to_append_rel() and it does. If I flip\nstartup_subpaths_valid to false in the debugger, the plan flips to:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------\n Limit (cost=470.12..514.00 rows=1 width=4)\n -> Append (cost=470.12..952.79 rows=11 width=4)\n -> Hash Join (cost=470.12..952.73 rows=10 width=4)\n Hash Cond: (t1.tenthous = t2.tenthous)\n -> Seq Scan on tenk1 t1 (cost=0.00..445.00 rows=10000 width=8)\n -> Hash (cost=470.00..470.00 rows=10 width=4)\n -> Seq Scan on tenk2 t2 (cost=0.00..470.00\nrows=10 width=4)\n Filter: (thousand = 0)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n\nSo, if nobody has any better ideas, I'm just going to push the \" and\nt2.thousand = 0\" adjustment.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/4174.1563239552%40sss.pgh.pa.us\n\n\n",
"msg_date": "Fri, 16 Feb 2024 01:09:37 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "\nDavid Rowley <[email protected]> writes:\n\n> I'd be more happy using this one as percentage-wise, the cost\n> difference is much larger.\n\n+1 for the percentage-wise.\n>\n> I checked that the t2.thounsand = 0 query still tests the cheap\n> startup paths in add_paths_to_append_rel().\n\nI get the same conclusion here. \n>\n> So, if nobody has any better ideas, I'm just going to push the \" and\n> t2.thousand = 0\" adjustment.\n\nLGTM.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Fri, 16 Feb 2024 09:19:05 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
},
{
"msg_contents": "On Fri, 16 Feb 2024 at 01:09, David Rowley <[email protected]> wrote:\n>\n> On Thu, 15 Feb 2024 at 21:42, Andy Fan <[email protected]> wrote:\n> > I found the both plans have the same cost, I can't get the accurate\n> > cause of this after some hours research, but it is pretty similar with\n> > 7516056c584e3, so I uses a similar strategy to stable it. is it\n> > acceptable?\n>\n> It's pretty hard to say. I can only guess why this test would be\n> flapping like this. I see it's happened before on mylodon, so probably\n> not a cosmic ray. It's not like add_path() chooses a random path when\n> the costs are the same, so I wondered if something similar is going on\n> here that was going on that led to f03a9ca4. In particular, see [1].\n\nWhile it's not conclusive proof, the following demonstrates that\nrelpages dropping by just 1 page causes the join order to change.\n\nregression=# explain\nregression-# select t1.unique1 from tenk1 t1\nregression-# inner join tenk2 t2 on t1.tenthous = t2.tenthous\nregression-# union all\nregression-# (values(1)) limit 1;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Limit (cost=0.00..150.08 rows=1 width=4)\n -> Append (cost=0.00..1500965.01 rows=10001 width=4)\n -> Nested Loop (cost=0.00..1500915.00 rows=10000 width=4)\n Join Filter: (t1.tenthous = t2.tenthous)\n -> Seq Scan on tenk1 t1 (cost=0.00..445.00 rows=10000 width=8)\n -> Materialize (cost=0.00..495.00 rows=10000 width=4)\n -> Seq Scan on tenk2 t2 (cost=0.00..445.00\nrows=10000 width=4)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n\nregression=# update pg_class set relpages=relpages - 1 where relname = 'tenk2';\nUPDATE 1\nregression=# explain\nregression-# select t1.unique1 from tenk1 t1\nregression-# inner join tenk2 t2 on t1.tenthous = t2.tenthous\nregression-# union all\nregression-# (values(1)) limit 1;\n QUERY 
PLAN\n--------------------------------------------------------------------------------------\n Limit (cost=0.00..150.52 rows=1 width=4)\n -> Append (cost=0.00..1505315.30 rows=10001 width=4)\n -> Nested Loop (cost=0.00..1505265.29 rows=10000 width=4)\n Join Filter: (t1.tenthous = t2.tenthous)\n -> Seq Scan on tenk2 t2 (cost=0.00..445.29 rows=10029 width=4)\n -> Materialize (cost=0.00..495.00 rows=10000 width=8)\n -> Seq Scan on tenk1 t1 (cost=0.00..445.00\nrows=10000 width=8)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n\nI tried this with the proposed changes to the test and the plan did not change.\n\nI've pushed the change now.\n\nDavid\n\n\n",
"msg_date": "Fri, 16 Feb 2024 15:03:21 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: make add_paths_to_append_rel aware of startup cost"
}
] |
[
{
"msg_contents": "I create a patch that outputs affected rows in EXPLAIN that occur by\nINSERT/UPDATE/DELETE.\nDespite the fact that commands in EXPLAIN ANALYZE query are executed as\nusual, EXPLAIN doesn't show outputting affected rows as in these commands.\nThe patch fixes this problem.\n\nExamples:\nexplain analyze insert into a values (1);\n QUERY PLAN\n\n------------------------------------------------------------------------------------------\n Insert on a (cost=0.00..0.01 rows=0 width=0) (actual time=0.076..0.077\nrows=0 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.002\nrows=1 loops=1)\n Planning Time: 0.025 ms\n Execution Time: 0.412 ms\n(4 rows)\n\nINSERT 0 1\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------\n Update on a (cost=0.00..35.50 rows=0 width=0) (actual time=0.059..0.060\nrows=0 loops=1)\n -> Seq Scan on a (cost=0.00..35.50 rows=2550 width=10) (actual\ntime=0.012..0.013 rows=7 loops=1)\n Planning Time: 0.142 ms\n Execution Time: 0.666 ms\n(4 rows)\n\nUPDATE 7\n\nexplain analyze delete from a where n = 1;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------\n Delete on a (cost=0.00..41.88 rows=0 width=0) (actual time=0.147..0.147\nrows=0 loops=1)\n -> Seq Scan on a (cost=0.00..41.88 rows=13 width=6) (actual\ntime=0.120..0.123 rows=7 loops=1)\n Filter: (n = 1)\n Planning Time: 1.073 ms\n Execution Time: 0.178 ms\n(5 rows)\n\nDELETE 7\n\nEXPLAIN queries without ANALYZE don't affect rows, so the output number is\n0.\n\nexplain update a set n = 2;\n QUERY PLAN\n------------------------------------------------------------\n Update on a (cost=0.00..35.50 rows=0 width=0)\n -> Seq Scan on a (cost=0.00..35.50 rows=2550 width=10)\n(2 rows)\n\nUPDATE 0\n\nMaybe there is no need to add this row when EXPLAIN has no ANALYZE. 
So it\nis a discussion question.\nAlso haven't fixed regress tests yet.\n\nRegards,\nDamir Belyalov\nPostgres Professional",
"msg_date": "Wed, 6 Sep 2023 15:49:36 +0300",
"msg_from": "Damir Belyalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Output affected rows in EXPLAIN"
},
{
"msg_contents": "> On 6 Sep 2023, at 14:49, Damir Belyalov <[email protected]> wrote\n\n> The patch fixes this problem.\n\nGiven that EXPLAIN ANALYZE has worked like for a very long time, which problem\nis it you have identified?\n\nI'm also not convinced that the added \"EXPLAIN\" in the below plan is an\nimprovement in any way.\n\npostgres=# explain (analyze) select * from t;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..35.50 rows=2550 width=4) (actual time=0.064..0.075 rows=5 loops=1)\n Planning Time: 1.639 ms\n Execution Time: 0.215 ms\n(3 rows)\n\nEXPLAIN\npostgres=#\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 6 Sep 2023 15:20:45 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Output affected rows in EXPLAIN"
},
{
"msg_contents": "Damir Belyalov <[email protected]> writes:\n> I create a patch that outputs affected rows in EXPLAIN that occur by\n> INSERT/UPDATE/DELETE.\n> Despite the fact that commands in EXPLAIN ANALYZE query are executed as\n> usual, EXPLAIN doesn't show outputting affected rows as in these commands.\n> The patch fixes this problem.\n\nThis creates a bug, not fixes one. It's intentional that \"insert into a\"\nis shown as returning zero rows, because that's what it did. If you'd\nwritten \"insert ... returning\", you'd have gotten a different result:\n\n=# explain analyze insert into a values (1);\n QUERY PLAN \n------------------------------------------------------------------------------------------\n Insert on a (cost=0.00..0.01 rows=0 width=0) (actual time=0.015..0.016 rows=0 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)\n Planning Time: 0.015 ms\n Execution Time: 0.027 ms\n(4 rows)\n\n=# explain analyze insert into a values (1) returning *;\n QUERY PLAN \n------------------------------------------------------------------------------------------\n Insert on a (cost=0.00..0.01 rows=1 width=4) (actual time=0.026..0.028 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.003..0.003 rows=1 loops=1)\n Planning Time: 0.031 ms\n Execution Time: 0.051 ms\n(4 rows)\n\nNow admittedly, if you want to know the number of rows that went to disk,\nyou have to infer that from the number of rows emitted by the\nModifyTable's child plan. But that's a matter for documentation\n(and I'm pretty sure it's documented someplace).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Sep 2023 10:00:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Output affected rows in EXPLAIN"
},
{
"msg_contents": "> This creates a bug, not fixes one. It's intentional that \"insert into a\"\n> is shown as returning zero rows, because that's what it did. If you'd\n> written \"insert ... returning\", you'd have gotten a different result:\n>\nMaybe I didn't understand you correctly, but I didn't touch the number of\naffected rows in EXPLAIN output.\nIt's just a simple patch that adds 1 row after using commands: EXPLAIN\nINSERT, EXPLAIN UPDATE, EXPLAIN DELETE.\nIt was done because the commands INSERT/UPDATE/DELETE return one row after\nexecution: \"UPDATE 7\" or \"INSERT 0 4\".\nEXPLAIN (ANALYZE) INSERT/UPDATE/DELETE does the same thing as these\ncommands, but doesn't output this row. So I added it.\n\n\nPatch is fixed. There is no row \"EXPLAIN\" in queries like:\npostgres=# explain (analyze) select * from t;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..35.50 rows=2550 width=4) (actual\ntime=0.064..0.075 rows=5 loops=1)\n Planning Time: 1.639 ms\n Execution Time: 0.215 ms\n(3 rows)\n\nEXPLAIN\n\n\nWhat is about queries EXPLAIN INSERT/UPDATE/DELETE without ANALYZE?\nNow it is outputting a row with 0 affected (inserted) rows at the end:\n\"INSERT 0 0\", \"UPDATE 0\". Example:\nexplain update a set n = 2;\n QUERY PLAN\n------------------------------------------------------------\n Update on a (cost=0.00..35.50 rows=0 width=0)\n -> Seq Scan on a (cost=0.00..35.50 rows=2550 width=10)\n(2 rows)\n\nUPDATE 0\n\nRegards,\nDamir Belyalov\nPostgres Professional",
"msg_date": "Thu, 7 Sep 2023 17:57:12 +0300",
"msg_from": "Damir Belyalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Output affected rows in EXPLAIN"
},
{
"msg_contents": "Hi hackers,\n\nIndeed, I think it is a little confusing that when executing\nEXPLAIN(ANALYZE), even though an update is actually occurring,\nthe commandtag of the update result is not returned.\n\nHowever, the manual also describes the information that will be\naffected when EXPLAIN (ANALYZE) is executed as important information.\nhttps://www.postgresql.org/docs/current/sql-explain.html\n\nAlso, in most cases, users who use EXPLAIN(ANALYZE) only want\nan execution plan of a statement.\nIf command tags are not required, this can be controlled using\nthe QUIET variable, but command tags other than EXPLAIN will also\nbe omitted, increasing the scope of the effect.\nWe can check the number of updated rows from execute plan,\nI think there is no need to return the command tag\nwhen EXPLAIN(ANALYZE) is executed by default.\n\n## patch and QUIET=off(default)\n\npostgres=# explain (analyze) insert into a values (1);\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Insert on a (cost=0.00..0.01 rows=0 width=0) (actual time=0.227..0.228 \nrows=0 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual \ntime=0.013..0.015 rows=1 loops=1)\n Planning Time: 0.152 ms\n Execution Time: 0.480 ms\n(4 rows)\n\nINSERT 0 1\n\n## patch and QUIET=on(psql work quietly)\n\n'INSERT 0 1' is omitted both 'explain(analyze) and 'INSERT'.\n\npostgres=# \\set QUIET on\npostgres=# explain (analyze) insert into a values (1);\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Insert on a (cost=0.00..0.01 rows=0 width=0) (actual time=0.058..0.059 \nrows=0 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual \ntime=0.004..0.005 rows=1 loops=1)\n Planning Time: 0.059 ms\n Execution Time: 0.117 ms\n(4 rows)\n\npostgres=# insert into a values (1);\npostgres=#\n\nBest Regards,\nKeisuke Kuroda\nNTT COMWARE\n\nOn 2023-09-07 23:57, Damir Belyalov wrote:\n>> This 
creates a bug, not fixes one. It's intentional that \"insert\n>> into a\"\n>> is shown as returning zero rows, because that's what it did. If\n>> you'd\n>> written \"insert ... returning\", you'd have gotten a different\n>> result:\n> \n> Maybe I didn't understand you correctly, but I didn't touch the number\n> of affected rows in EXPLAIN output.\n> It's just a simple patch that adds 1 row after using commands: EXPLAIN\n> INSERT, EXPLAIN UPDATE, EXPLAIN DELETE.\n> It was done because the commands INSERT/UPDATE/DELETE return one row\n> after execution: \"UPDATE 7\" or \"INSERT 0 4\".\n> EXPLAIN (ANALYZE) INSERT/UPDATE/DELETE does the same thing as these\n> commands, but doesn't output this row. So I added it.\n> \n> Patch is fixed. There is no row \"EXPLAIN\" in queries like:\n> \n> postgres=# explain (analyze) select * from t;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------\n> Seq Scan on t (cost=0.00..35.50 rows=2550 width=4) (actual\n> time=0.064..0.075 rows=5 loops=1)\n> Planning Time: 1.639 ms\n> Execution Time: 0.215 ms\n> (3 rows)\n> \n> EXPLAIN\n> \n> What is about queries EXPLAIN INSERT/UPDATE/DELETE without ANALYZE?\n> Now it is outputting a row with 0 affected (inserted) rows at the end:\n> \"INSERT 0 0\", \"UPDATE 0\". Example:\n> explain update a set n = 2;\n> QUERY PLAN\n> ------------------------------------------------------------\n> Update on a (cost=0.00..35.50 rows=0 width=0)\n> -> Seq Scan on a (cost=0.00..35.50 rows=2550 width=10)\n> (2 rows)\n> \n> UPDATE 0\n> \n> Regards,\n> Damir Belyalov\n> Postgres Professional\n\n\n\n",
"msg_date": "Wed, 15 Nov 2023 16:15:34 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Output affected rows in EXPLAIN"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 2:17 PM <[email protected]> wrote:\n>\n> We can check the number of updated rows from execute plan,\n> I think there is no need to return the command tag\n> when EXPLAIN(ANALYZE) is executed by default.\n\nGiven that several people have voiced an opinion against this patch,\nI'm marking it rejected.\n\n\n",
"msg_date": "Wed, 22 Nov 2023 17:07:47 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Output affected rows in EXPLAIN"
}
] |
[
{
"msg_contents": "We have:\n\n\"This improves security and now requires subscription owners to be\neither superusers or to have SET ROLE permissions on all tables in the\nreplication set. The previous behavior of performing all operations as\nthe subscription owner can be enabled with the subscription\nrun_as_owner option.\"\n\nHow does one have SET ROLE permissions on a table? I think that's\nsupposed to be:\n\n\"subscription owners be either superusers or to have SET ROLE\npermissions on all roles owning tables in the replication set.\"\n\nOr something like that? Or can someone suggest a better wording?\n\n//Magnus\n\n\n",
"msg_date": "Wed, 6 Sep 2023 21:29:25 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": true,
"msg_subject": "Release notes wording about logical replication as table owner"
},
{
        "msg_contents": "On Wed, Sep 6, 2023 at 09:29:25PM +0200, Magnus Hagander wrote:\n> We have:\n> \n> \"This improves security and now requires subscription owners to be\n> either superusers or to have SET ROLE permissions on all tables in the\n> replication set. The previous behavior of performing all operations as\n> the subscription owner can be enabled with the subscription\n> run_as_owner option.\"\n> \n> How does one have SET ROLE permissions on a table? I think that's\n> supposed to be:\n> \n> \"subscription owners be either superusers or to have SET ROLE\n> permissions on all roles owning tables in the replication set.\"\n> \n> Or something like that? Or can someone suggest a better wording?\n\nYou are exactly correct. Patch attached and applied.\n\n-- \n  Bruce Momjian  <[email protected]>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  Only you can decide what is important to you.",
"msg_date": "Wed, 6 Sep 2023 15:36:28 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Release notes wording about logical replication as table owner"
}
] |
[
{
        "msg_contents": "I'm trying to write a custom scan. It's pretty confusing. I've read the\ndocumentation at\nhttps://www.postgresql.org/docs/current/custom-scan.html, and I've scanned\nthe\ncode in Citus Columnar and in Timescale, both of which are quite complex.\n\nIs anyone aware of code with a simple example of a custom scan? Or maybe a\ntutorial?\n\n-- \nChris Cleveland\n312-339-2677 mobile",
"msg_date": "Wed, 6 Sep 2023 14:32:21 -0500",
"msg_from": "Chris Cleveland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simple CustomScan example"
},
{
"msg_contents": "Hi,\n\nOn 9/6/23 9:32 PM, Chris Cleveland wrote:\n> I'm trying to write a custom scan. It's pretty confusing. I've read the documentation at\n> https://www.postgresql.org/docs/current/custom-scan.html <https://www.postgresql.org/docs/current/custom-scan.html>, and I've scanned the\n> code in Citus Columnar and in Timescale, both of which are quite complex.\n> \n> Is anyone aware of code with a simple example of a custom scan? \n\nYou may find the one in this repo: https://github.com/bdrouvot/pg_directpaths simple enough.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Sep 2023 18:29:27 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple CustomScan example"
},
{
"msg_contents": "On Thu, Sep 07, 2023 at 06:29:27PM +0200, Drouvot, Bertrand wrote:\n> You may find the one in this repo:\n> https://github.com/bdrouvot/pg_directpaths simple enough.\n\nI'll repeat a remark I have made exactly yesterday on a different\nthread: having a test module for custom scans in src/test/modules/ to\ntest these APIs and have a usable template would be welcome :D\n--\nMichael",
"msg_date": "Fri, 8 Sep 2023 10:08:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple CustomScan example"
}
] |
[
{
"msg_contents": "Allow using syncfs() in frontend utilities.\n\nThis commit allows specifying a --sync-method in several frontend\nutilities that must synchronize many files to disk (initdb,\npg_basebackup, pg_checksums, pg_dump, pg_rewind, and pg_upgrade).\nOn Linux, users can specify \"syncfs\" to synchronize the relevant\nfile systems instead of calling fsync() for every single file. In\nmany cases, using syncfs() is much faster.\n\nAs with recovery_init_sync_method, this new option comes with some\ncaveats. The descriptions of these caveats have been moved to a\nnew appendix section in the documentation.\n\nCo-authored-by: Justin Pryzby\nReviewed-by: Michael Paquier, Thomas Munro, Robert Haas, Justin Pryzby\nDiscussion: https://postgr.es/m/20210930004340.GM831%40telsasoft.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/8c16ad3b43299695f203f9157a2b27c22b9ed634\n\nModified Files\n--------------\ndoc/src/sgml/config.sgml | 12 +++---------\ndoc/src/sgml/filelist.sgml | 1 +\ndoc/src/sgml/postgres.sgml | 1 +\ndoc/src/sgml/ref/initdb.sgml | 22 +++++++++++++++++++++\ndoc/src/sgml/ref/pg_basebackup.sgml | 25 ++++++++++++++++++++++++\ndoc/src/sgml/ref/pg_checksums.sgml | 22 +++++++++++++++++++++\ndoc/src/sgml/ref/pg_dump.sgml | 21 ++++++++++++++++++++\ndoc/src/sgml/ref/pg_rewind.sgml | 22 +++++++++++++++++++++\ndoc/src/sgml/ref/pgupgrade.sgml | 23 ++++++++++++++++++++++\ndoc/src/sgml/syncfs.sgml | 36 +++++++++++++++++++++++++++++++++++\nsrc/bin/initdb/initdb.c | 6 ++++++\nsrc/bin/initdb/t/001_initdb.pl | 12 ++++++++++++\nsrc/bin/pg_basebackup/pg_basebackup.c | 7 +++++++\nsrc/bin/pg_checksums/pg_checksums.c | 6 ++++++\nsrc/bin/pg_dump/pg_dump.c | 7 +++++++\nsrc/bin/pg_rewind/pg_rewind.c | 8 ++++++++\nsrc/bin/pg_upgrade/option.c | 13 +++++++++++++\nsrc/bin/pg_upgrade/pg_upgrade.c | 6 ++++--\nsrc/bin/pg_upgrade/pg_upgrade.h | 1 +\nsrc/fe_utils/option_utils.c | 27 ++++++++++++++++++++++++++\nsrc/include/fe_utils/option_utils.h | 4 
++++\n21 files changed, 271 insertions(+), 11 deletions(-)",
"msg_date": "Wed, 06 Sep 2023 23:28:00 +0000",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: Allow using syncfs() in frontend utilities."
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 7:28 PM Nathan Bossart <[email protected]> wrote:\n> Allow using syncfs() in frontend utilities.\n>\n> This commit allows specifying a --sync-method in several frontend\n> utilities that must synchronize many files to disk (initdb,\n> pg_basebackup, pg_checksums, pg_dump, pg_rewind, and pg_upgrade).\n> On Linux, users can specify \"syncfs\" to synchronize the relevant\n> file systems instead of calling fsync() for every single file. In\n> many cases, using syncfs() is much faster.\n>\n> As with recovery_init_sync_method, this new option comes with some\n> caveats. The descriptions of these caveats have been moved to a\n> new appendix section in the documentation.\n\nHi,\n\nI'd like to complain about this commit's addition of a new appendix. I\ndo understand the temptation to document caveats like this centrally\ninstead of in multiple places, but as I've been complaining about over\nin the \"documentation structure\" thread, our top-level documentation\nindex is too big, and I feel strongly that we need to de-clutter it\nrather than cluttering it further. This added a new chapter which is\njust 5 sentences long. I understand that this was done because the\nsame issue applies to a bunch of different utilities and we didn't\nwant to duplicate this text in all of those places, but I feel like\nthis approach just doesn't scale. If we did this in every place where\nwe have this much text that we want to avoid duplicating, we'd soon\nhave hundreds of appendixes.\n\nWhat I would suggest we do instead is pick one of the places where\nthis comes up and document it there, perhaps the\nrecovery_init_sync_method GUC. And then make the documentation for the\nother say something like, you know those issues we documented for\nrecovery_init_sync_method? Well they also apply to this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 12:52:15 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow using syncfs() in frontend utilities."
},
{
"msg_contents": "On 22.03.24 17:52, Robert Haas wrote:\n> On Wed, Sep 6, 2023 at 7:28 PM Nathan Bossart <[email protected]> wrote:\n>> Allow using syncfs() in frontend utilities.\n>>\n>> This commit allows specifying a --sync-method in several frontend\n>> utilities that must synchronize many files to disk (initdb,\n>> pg_basebackup, pg_checksums, pg_dump, pg_rewind, and pg_upgrade).\n>> On Linux, users can specify \"syncfs\" to synchronize the relevant\n>> file systems instead of calling fsync() for every single file. In\n>> many cases, using syncfs() is much faster.\n>>\n>> As with recovery_init_sync_method, this new option comes with some\n>> caveats. The descriptions of these caveats have been moved to a\n>> new appendix section in the documentation.\n> \n> Hi,\n> \n> I'd like to complain about this commit's addition of a new appendix.\n\nI already complained about that at \n<https://www.postgresql.org/message-id/[email protected]> \nand some follow-up was announced but didn't happen. It was on my list \nto look into cleaning up during beta.\n\n\n\n",
"msg_date": "Tue, 26 Mar 2024 11:18:57 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow using syncfs() in frontend utilities."
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 12:52:15PM -0400, Robert Haas wrote:\n> I'd like to complain about this commit's addition of a new appendix. I\n> do understand the temptation to document caveats like this centrally\n> instead of in multiple places, but as I've been complaining about over\n> in the \"documentation structure\" thread, our top-level documentation\n> index is too big, and I feel strongly that we need to de-clutter it\n> rather than cluttering it further. This added a new chapter which is\n> just 5 sentences long. I understand that this was done because the\n> same issue applies to a bunch of different utilities and we didn't\n> want to duplicate this text in all of those places, but I feel like\n> this approach just doesn't scale. If we did this in every place where\n> we have this much text that we want to avoid duplicating, we'd soon\n> have hundreds of appendixes.\n\nSorry I missed this. I explored a couple of options last year but the\ndiscussion trailed off [0].\n\n> What I would suggest we do instead is pick one of the places where\n> this comes up and document it there, perhaps the\n> recovery_init_sync_method GUC. And then make the documentation for the\n> other say something like, you know those issues we documented for\n> recovery_init_sync_method? Well they also apply to this.\n\nWFM. I'll put together a patch.\n\n[0] https://postgr.es/m/20231009204823.GA659480%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 26 Mar 2024 09:52:10 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow using syncfs() in frontend utilities."
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 11:18:57AM +0100, Peter Eisentraut wrote:\n> On 22.03.24 17:52, Robert Haas wrote:\n>> I'd like to complain about this commit's addition of a new appendix.\n> \n> I already complained about that at <https://www.postgresql.org/message-id/[email protected]>\n> and some follow-up was announced but didn't happen. It was on my list to\n> look into cleaning up during beta.\n\nSorry about this, I lost track of it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 26 Mar 2024 10:11:31 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow using syncfs() in frontend utilities."
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 10:11:31AM -0500, Nathan Bossart wrote:\n> On Tue, Mar 26, 2024 at 11:18:57AM +0100, Peter Eisentraut wrote:\n>> On 22.03.24 17:52, Robert Haas wrote:\n>>> I'd like to complain about this commit's addition of a new appendix.\n>> \n>> I already complained about that at <https://www.postgresql.org/message-id/[email protected]>\n>> and some follow-up was announced but didn't happen. It was on my list to\n>> look into cleaning up during beta.\n> \n> Sorry about this, I lost track of it.\n\nHere's a first attempt at a patch based on Robert's suggestion from\nupthread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 26 Mar 2024 11:34:49 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow using syncfs() in frontend utilities."
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 12:34 PM Nathan Bossart\n<[email protected]> wrote:\n> Here's a first attempt at a patch based on Robert's suggestion from\n> upthread.\n\nWFM.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Mar 2024 10:41:42 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow using syncfs() in frontend utilities."
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 10:41:42AM -0400, Robert Haas wrote:\n> On Tue, Mar 26, 2024 at 12:34 PM Nathan Bossart\n> <[email protected]> wrote:\n>> Here's a first attempt at a patch based on Robert's suggestion from\n>> upthread.\n> \n> WFM.\n\nCommitted. Again, I apologize this took so long.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 27 Mar 2024 10:25:16 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow using syncfs() in frontend utilities."
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 11:25 AM Nathan Bossart\n<[email protected]> wrote:\n> Committed. Again, I apologize this took so long.\n\nNo worries from my side; I only noticed recently. I guess Peter's been\nwaiting a while, though. Thanks for committing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Mar 2024 11:27:46 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow using syncfs() in frontend utilities."
}
] |
[
{
"msg_contents": "This topic is extracted from [1].\n\nAs mentioned there, in psql, running \\? displays the following lines.\n\n > \\gdesc describe result of query, without executing it\n > \\gexec execute query, then execute each value in its result\n > \\gset [PREFIX] execute query and store result in psql variables\n > \\gx [(OPTIONS)] [FILE] as \\g, but forces expanded output mode\n > \\q quit psql\n > \\watch [[i=]SEC] [c=N] [m=MIN]\n!> execute query every SEC seconds, up to N times\n!> stop if less than MIN rows are returned\n\nThe last two lines definitely have some extra indentation.\nI've attached a patch that fixes this.\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 07 Sep 2023 14:29:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "psql help message contains excessive indentations"
},
{
"msg_contents": "On Thu, 07 Sep 2023 14:29:56 +0900 (JST)\nKyotaro Horiguchi <[email protected]> wrote:\n\n> This topic is extracted from [1].\n> \n> As mentioned there, in psql, running \\? displays the following lines.\n> \n> > \\gdesc describe result of query, without executing it\n> > \\gexec execute query, then execute each value in its result\n> > \\gset [PREFIX] execute query and store result in psql variables\n> > \\gx [(OPTIONS)] [FILE] as \\g, but forces expanded output mode\n> > \\q quit psql\n> > \\watch [[i=]SEC] [c=N] [m=MIN]\n> !> execute query every SEC seconds, up to N times\n> !> stop if less than MIN rows are returned\n> \n> The last two lines definitely have some extra indentation.\n\nAgreed.\n\n> I've attached a patch that fixes this.\n\nI wonder this better to fix this in similar way to other places where the\ndescription has multiple lines., like \"\\g [(OPTIONS)] [FILE]\".\n\nRegards,\nYugo Nagata\n\n> \n> [1] https://www.postgresql.org/message-id/[email protected]\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Thu, 7 Sep 2023 15:02:49 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql help message contains excessive indentations"
},
{
"msg_contents": "At Thu, 7 Sep 2023 15:02:49 +0900, Yugo NAGATA <[email protected]> wrote in \n> On Thu, 07 Sep 2023 14:29:56 +0900 (JST)\n> Kyotaro Horiguchi <[email protected]> wrote:\n> > > \\q quit psql\n> > > \\watch [[i=]SEC] [c=N] [m=MIN]\n> > !> execute query every SEC seconds, up to N times\n> > !> stop if less than MIN rows are returned\n> > \n> > The last two lines definitely have some extra indentation.\n> \n> Agreed.\n> \n> > I've attached a patch that fixes this.\n> \n> I wonder this better to fix this in similar way to other places where the\n> description has multiple lines., like \"\\g [(OPTIONS)] [FILE]\".\n\nIt looks correct to me:\n\n> \\errverbose show most recent error message at maximum verbosity\n> \\g [(OPTIONS)] [FILE] execute query (and send result to file or |pipe);\n> \\g with no arguments is equivalent to a semicolon\n> \\gdesc describe result of query, without executing it\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 07 Sep 2023 15:36:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql help message contains excessive indentations"
},
{
"msg_contents": "On Thu, 07 Sep 2023 15:36:10 +0900 (JST)\nKyotaro Horiguchi <[email protected]> wrote:\n\n> At Thu, 7 Sep 2023 15:02:49 +0900, Yugo NAGATA <[email protected]> wrote in \n> > On Thu, 07 Sep 2023 14:29:56 +0900 (JST)\n> > Kyotaro Horiguchi <[email protected]> wrote:\n> > > > \\q quit psql\n> > > > \\watch [[i=]SEC] [c=N] [m=MIN]\n> > > !> execute query every SEC seconds, up to N times\n> > > !> stop if less than MIN rows are returned\n> > > \n> > > The last two lines definitely have some extra indentation.\n> > \n> > Agreed.\n> > \n> > > I've attached a patch that fixes this.\n> > \n> > I wonder this better to fix this in similar way to other places where the\n> > description has multiple lines., like \"\\g [(OPTIONS)] [FILE]\".\n> \n> It looks correct to me:\n\nYes. So, I mean how about fixing \\watch description as the attached patch.\n\n> \n> > \\errverbose show most recent error message at maximum verbosity\n> > \\g [(OPTIONS)] [FILE] execute query (and send result to file or |pipe);\n> > \\g with no arguments is equivalent to a semicolon\n> > \\gdesc describe result of query, without executing it\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Thu, 7 Sep 2023 16:08:10 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql help message contains excessive indentations"
},
{
"msg_contents": "At Thu, 7 Sep 2023 16:08:10 +0900, Yugo NAGATA <[email protected]> wrote in \n> On Thu, 07 Sep 2023 15:36:10 +0900 (JST)\n> Kyotaro Horiguchi <[email protected]> wrote:\n> \n> > At Thu, 7 Sep 2023 15:02:49 +0900, Yugo NAGATA <[email protected]> wrote in \n> > > I wonder this better to fix this in similar way to other places where the\n> > > description has multiple lines., like \"\\g [(OPTIONS)] [FILE]\".\n> > \n> > It looks correct to me:\n> \n> Yes. So, I mean how about fixing \\watch description as the attached patch.\n\nAh. I see. I thought you meant that line needed the same change.\n\n> > \n> > > \\errverbose show most recent error message at maximum verbosity\n> > > \\g [(OPTIONS)] [FILE] execute query (and send result to file or |pipe);\n> > > \\g with no arguments is equivalent to a semicolon\n> > > \\gdesc describe result of query, without executing it\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 07 Sep 2023 17:03:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql help message contains excessive indentations"
},
{
"msg_contents": "On 2023-Sep-07, Yugo NAGATA wrote:\n\n> Yes. So, I mean how about fixing \\watch description as the attached patch.\n\n> diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c\n> index 38c165a627..12280c0e54 100644\n> --- a/src/bin/psql/help.c\n> +++ b/src/bin/psql/help.c\n> @@ -200,9 +200,9 @@ slashUsage(unsigned short int pager)\n> \tHELP0(\" \\\\gset [PREFIX] execute query and store result in psql variables\\n\");\n> \tHELP0(\" \\\\gx [(OPTIONS)] [FILE] as \\\\g, but forces expanded output mode\\n\");\n> \tHELP0(\" \\\\q quit psql\\n\");\n> -\tHELP0(\" \\\\watch [[i=]SEC] [c=N] [m=MIN]\\n\");\n> -\tHELP0(\" execute query every SEC seconds, up to N times\\n\");\n> -\tHELP0(\" stop if less than MIN rows are returned\\n\");\n> +\tHELP0(\" \\\\watch [[i=]SEC] [c=N] [m=MIN]\\n\"\n> +\t\t \" execute query every SEC seconds, up to N times\\n\"\n> +\t\t \" stop if less than MIN rows are returned\\n\");\n\nYeah, this looks right to me -- the whole help entry as a single\ntranslatable unit, instead of three separately translatable lines.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que la mentira sí existe y tu estás mintiendo\" (G. Lama)\n\n\n",
"msg_date": "Thu, 7 Sep 2023 13:06:35 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql help message contains excessive indentations"
},
{
"msg_contents": "On Thu, 7 Sep 2023 13:06:35 +0200\nAlvaro Herrera <[email protected]> wrote:\n\n> On 2023-Sep-07, Yugo NAGATA wrote:\n> \n> > Yes. So, I mean how about fixing \\watch description as the attached patch.\n> \n> > diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c\n> > index 38c165a627..12280c0e54 100644\n> > --- a/src/bin/psql/help.c\n> > +++ b/src/bin/psql/help.c\n> > @@ -200,9 +200,9 @@ slashUsage(unsigned short int pager)\n> > \tHELP0(\" \\\\gset [PREFIX] execute query and store result in psql variables\\n\");\n> > \tHELP0(\" \\\\gx [(OPTIONS)] [FILE] as \\\\g, but forces expanded output mode\\n\");\n> > \tHELP0(\" \\\\q quit psql\\n\");\n> > -\tHELP0(\" \\\\watch [[i=]SEC] [c=N] [m=MIN]\\n\");\n> > -\tHELP0(\" execute query every SEC seconds, up to N times\\n\");\n> > -\tHELP0(\" stop if less than MIN rows are returned\\n\");\n> > +\tHELP0(\" \\\\watch [[i=]SEC] [c=N] [m=MIN]\\n\"\n> > +\t\t \" execute query every SEC seconds, up to N times\\n\"\n> > +\t\t \" stop if less than MIN rows are returned\\n\");\n> \n> Yeah, this looks right to me -- the whole help entry as a single\n> translatable unit, instead of three separately translatable lines.\n\nThank you for your explanation. I understood it. I thought of just\nimitating other places, and I didn't know each is a single translatable\nunit.\n\nRegards,\nYugo Nagata\n\n> -- \n> Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n> \"Estoy de acuerdo contigo en que la verdad absoluta no existe...\n> El problema es que la mentira sí existe y tu estás mintiendo\" (G. Lama)\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Thu, 7 Sep 2023 21:56:51 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql help message contains excessive indentations"
},
{
"msg_contents": "At Thu, 7 Sep 2023 13:06:35 +0200, Alvaro Herrera <[email protected]> wrote in \n> On 2023-Sep-07, Yugo NAGATA wrote:\n> > +\tHELP0(\" \\\\watch [[i=]SEC] [c=N] [m=MIN]\\n\"\n> > +\t\t \" execute query every SEC seconds, up to N times\\n\"\n> > +\t\t \" stop if less than MIN rows are returned\\n\");\n> \n> Yeah, this looks right to me -- the whole help entry as a single\n> translatable unit, instead of three separately translatable lines.\n\n+1\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 08 Sep 2023 09:55:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: psql help message contains excessive indentations"
},
{
"msg_contents": "On 2023-Sep-07, Yugo NAGATA wrote:\n\n> Thank you for your explanation. I understood it. I thought of just\n> imitating other places, and I didn't know each is a single translatable\n> unit.\n\nThanks for reviewing, and Kyotaro for reporting. Pushed now.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que la mentira sí existe y tu estás mintiendo\" (G. Lama)\n\n\n",
"msg_date": "Mon, 18 Sep 2023 16:39:36 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: psql help message contains excessive indentations"
}
] |
[
{
"msg_contents": "Hi there,\n\nI tested pg_rewind behavior and found a suspicious one.\n\nConsider a scenario like this,\n\nServer A: primary\nServer B :replica of A\nServer C :replica of B\n\nand somehow A down ,so B gets promoted.\n\nServer A: down\nServer B :new primary\nServer C :replica of B\n\nIn this case, pg_rewind can be used to reconstruct the cascade; the source\nis C and the target is A.\nHowever, we get error as belows by running pg_rewind.\n\n```\npg_rewind: fetched file \"global/pg_control\", length 8192\npg_rewind: source and target cluster are on the same timeline\npg_rewind: no rewind required\n```\nThough A's timeline is 1 and C's is 2 ideally, it says they're on the same\ntimeline.\n\nThis is because `pg_rewind` currently uses minRecoveryPointTLI and latest\ncheckpoint's TimelineID to compare the TLI between source and target[1].\nBoth C's minRecoveryPointTLI and Latest checkpoint's TimelineID are not\nmodified until checkpointing. (even though B's are modified).\nAnd then, if you run pg_rewind immediately, pg_rewind won't work because C\nand A appear to be on the same timeline. 
So we have to CHECKPOINT on C\nbefore running pg_rewind;\n\nBTW, immediate pg_rewind with cascade standby seems to be already concerned\nin another discussion[2], but unfortunately missed.\n\nAnyway, I don't think this behavior is kind.\nTo fix this, should we use another variable to compare TLI?\nOr, modify the cascade standby's minRecoveryPointTLI somehow?\n\nMasaki Kuwamura\n\n[1]\nhttps://www.postgresql.org/message-id/flat/9f568c97-87fe-a716-bd39-65299b8a60f4%40iki.fi\n[2]\nhttps://www.postgresql.org/message-id/flat/aeb5f31a-8de2-40a8-64af-ab659a309d6b%40iki.fi",
"msg_date": "Thu, 7 Sep 2023 15:33:45 +0900",
"msg_from": "Kuwamura Masaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_rewind with cascade standby doesn't work well"
},
{
"msg_contents": "> Consider a scenario like this,\n>\n> Server A: primary\n> Server B :replica of A\n> Server C :replica of B\n>\n> and somehow A down ,so B gets promoted.\n> Server A: down\n> Server B :new primary\n> Server C :replica of B\n>\n> In this case, pg_rewind can be used to reconstruct the cascade; the\nsource is C and the target is A.\n> However, we get error as belows by running pg_rewind.\n>\n> ```\n> pg_rewind: fetched file \"global/pg_control\", length 8192\n> pg_rewind: source and target cluster are on the same timeline\n> pg_rewind: no rewind required\n> ```\n\nTo fix the above mentioned behavior of pg_rewind, I suggest to change the\ncascade standby's (i.e. server C's) minRecoveryPointTLI when it receives\nthe new timeline information from the new primary (i.e. server B).\n\nWhen server B is promoted, it creates an end-of-recovery record by calling\nCreateEndOfRecoveryRecord(). (in xlog.c)\nAnd also updates B's minRecoveryPoint and minRecoveryPointTLI.\n```\n/*\n * Update the control file so that crash recovery can follow the\ntimeline\n * changes to this point.\n */\n LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n ControlFile->minRecoveryPoint = recptr;\n ControlFile->minRecoveryPointTLI = xlrec.ThisTimeLineID;\n UpdateControlFile();\n LWLockRelease(ControlFileLock);\n```\nSince C is a replica of B, the end-of-recovery record is replicated from B\nto C, so the record is replayed in C by xlog_redo().\nThe attached patch updates minRecoveryPoint and minRecoveryPointTLI at this\npoint by mimicking CreateEndOfRecoveryRecord().\nWith this patch, you can run pg_rewind with cascade standby immediately.\n(without waiting for checkpointing)\n\nThoughts?\n\nMasaki Kuwamura",
"msg_date": "Mon, 11 Sep 2023 17:49:46 +0900",
"msg_from": "Kuwamura Masaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind with cascade standby doesn't work well"
},
{
"msg_contents": "Hi,\n\n> The attached patch updates minRecoveryPoint and minRecoveryPointTLI at this point by mimicking CreateEndOfRecoveryRecord().\n> With this patch, you can run pg_rewind with cascade standby immediately. (without waiting for checkpointing)\n\nMany thanks for submitting the patch. I added it to the nearest open\ncommitfest [1].\n\nIMO a test is needed that makes sure no one is going to break this in\nthe future.\n\n[1]: https://commitfest.postgresql.org/45/4559/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 11 Sep 2023 19:04:30 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind with cascade standby doesn't work well"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 07:04:30PM +0300, Aleksander Alekseev wrote:\n> Many thanks for submitting the patch. I added it to the nearest open\n> commitfest [1].\n> \n> IMO a test is needed that makes sure no one is going to break this in\n> the future.\n\nYou definitely need more complex test scenarios for that. If you can\ncome up with new ways to make the TAP tests of pg_rewind mode modular\nin handling more complicated node setups, that would be a nice\naddition, for example.\n\n> [1]: https://commitfest.postgresql.org/45/4559/\n\n@@ -7951,6 +7951,15 @@ xlog_redo(XLogReaderState *record)\n ereport(PANIC,\n (errmsg(\"unexpected timeline ID %u (should be %u) in end-of-recovery record\",\n xlrec.ThisTimeLineID, replayTLI)));\n+ /*\n+ * Update the control file so that crash recovery can follow the timeline\n+ * changes to this point.\n+ */\n+ LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n+ ControlFile->minRecoveryPoint = lsn;\n+ ControlFile->minRecoveryPointTLI = xlrec.ThisTimeLineID;\n\nThis patch is at least incorrect in its handling of crash recovery,\nbecause these two should *never* be set in this case as we want to\nreplay up to the end of WAL. For example, see xlog.c or the top of\nxlogrecovery.c about the assumptions behind these variables:\n /* crash recovery should always recover to the end of WAL */\n ControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n ControlFile->minRecoveryPointTLI = 0;\n \nIf an end-of-recovery record is replayed during crash recovery, these\nassumptions are plain broken.\n\nOne thing that we could consider is to be more aggressive with\nrestartpoints when replaying this record for a standby, see a few\nlines above the lines added by your patch, for example. And we could\npotentially emulate a post-promotion restart point to get a refresh of\nthe control file as it should, with the correct code paths involved in\nthe updates of minRecoveryPoint when the checkpointer does the job.\n--\nMichael",
"msg_date": "Tue, 12 Sep 2023 15:10:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind with cascade standby doesn't work well"
},
{
"msg_contents": ">> IMO a test is needed that makes sure no one is going to break this in\n>> the future.\n>\n> You definitely need more complex test scenarios for that. If you can\n> come up with new ways to make the TAP tests of pg_rewind mode modular\n> in handling more complicated node setups, that would be a nice\n> addition, for example.\n\nI'm sorry for lacking tests. For now, I started off with a simple test\nthat cause the problem I mentioned. The updated WIP patch 0001 includes\nthe new test for pg_rewind.\nAnd also, I'm afraid that I'm not sure what kind of tests I have to make\nfor fix this behavior. Would you mind giving me some advice?\n\n> This patch is at least incorrect in its handling of crash recovery,\n> because these two should *never* be set in this case as we want to\n> replay up to the end of WAL. For example, see xlog.c or the top of\n> xlogrecovery.c about the assumptions behind these variables:\n> /* crash recovery should always recover to the end of WAL */\n> ControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n> ControlFile->minRecoveryPointTLI = 0;\n>\n> If an end-of-recovery record is replayed during crash recovery, these\n> assumptions are plain broken.\n\nThat make sense! I really appreciate your knowledgeable review.\n\n> One thing that we could consider is to be more aggressive with\n> restartpoints when replaying this record for a standby, see a few\n> lines above the lines added by your patch, for example. And we could\n> potentially emulate a post-promotion restart point to get a refresh of\n> the control file as it should, with the correct code paths involved in\n> the updates of minRecoveryPoint when the checkpointer does the job.\n\nI'm not confident but you meant we could make restartpoint\n(i.e., call `RequestCheckpoint()`) instead of my old patch?\nThe patch 0001 also contains my understanding.\n\nI also found a bug (maybe). 
If we call `CreateRestartPoint()` during\ncrash-recovery, the assertion fails at ComputeXidHorizon() in procarray.c.\nIt's inherently orthogonal to the problem I already reported. So you can\nreproduce this at HEAD with this procedure.\n\n1. Start primary and standby server\n2. Modify checkpoint_timeout to 1h on standby\n3. Insert 10^10 records and concurrently run CHECKPOINT every second on\nprimary\n4. Do an immediate stop on both standby and primary at the end of the insert\n5. Modify checkpoint_timeout to 30 on standby\n6. Remove standby.signal on standby\n7. Restart standby (it will start crash-recovery)\n8. Assertion failure is raised shortly\n\nI think this is because `TruncateSUBTRANS();` in `CreateRestartPoint()` is\ncalled but `StartupSUBTRANS()` isn't called yet. In `StartupXLOG()`, we\ncall\n`StartupSUBTRANS()` if `(ArchiveRecoveryRequested && EnableHotStandby)`.\nHowever, in `CreateRestartPoint()`, we call `TruncateSUBTRANS()` if\n`(EnableHotStandby)`. I guess the difference causes this bug. The latter\npossibly be called even crash-recovery while former isn't.\nThe attached patch 0002 fixes it. I think we could discuss about this bug\nin\nanother thread if needed.\n\nBest regards!\n\nMasaki Kuwamura",
"msg_date": "Wed, 20 Sep 2023 11:46:45 +0900",
"msg_from": "Kuwamura Masaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind with cascade standby doesn't work well"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 11:46:45AM +0900, Kuwamura Masaki wrote:\n> I also found a bug (maybe). If we call `CreateRestartPoint()` during\n> crash-recovery, the assertion fails at ComputeXidHorizon() in procarray.c.\n> It's inherently orthogonal to the problem I already reported. So you can\n> reproduce this at HEAD with this procedure.\n> \n> 1. Start primary and standby server\n> 2. Modify checkpoint_timeout to 1h on standby\n> 3. Insert 10^10 records and concurrently run CHECKPOINT every second on\n> primary\n> 4. Do an immediate stop on both standby and primary at the end of the insert\n> 5. Modify checkpoint_timeout to 30 on standby\n> 6. Remove standby.signal on standby\n> 7. Restart standby (it will start crash-recovery)\n> 8. Assertion failure is raised shortly\n> \n> I think this is because `TruncateSUBTRANS();` in `CreateRestartPoint()` is\n> called but `StartupSUBTRANS()` isn't called yet. In `StartupXLOG()`, we\n> call\n> `StartupSUBTRANS()` if `(ArchiveRecoveryRequested && EnableHotStandby)`.\n> However, in `CreateRestartPoint()`, we call `TruncateSUBTRANS()` if\n> `(EnableHotStandby)`. I guess the difference causes this bug. The latter\n> possibly be called even crash-recovery while former isn't.\n> The attached patch 0002 fixes it. I think we could discuss about this bug\n> in\n> another thread if needed.\n\nThis is a known issue. I guess that the same as this thread and this\nCF entry:\nhttps://commitfest.postgresql.org/44/4244/\nhttps://www.postgresql.org/message-id/flat/[email protected]\n--\nMichael",
"msg_date": "Wed, 20 Sep 2023 12:04:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind with cascade standby doesn't work well"
},
{
"msg_contents": "\n\nOn 2023/09/20 12:04, Michael Paquier wrote:\n> This is a known issue. I guess that the same as this thread and this\n> CF entry:\n> https://commitfest.postgresql.org/44/4244/\n> https://www.postgresql.org/message-id/flat/[email protected]\n\nI think this is a separate issue, and we should still use Kuwamura-san's patch\neven after the one you posted on the thread gets accepted. BTW, I was able to\nreproduce the assertion failure Kuwamura-san reported, even after applying\nyour latest patch from the thread.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 20 Sep 2023 17:27:22 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind with cascade standby doesn't work well"
},
{
"msg_contents": "Hi,\n\n> >> IMO a test is needed that makes sure no one is going to break this in\n> >> the future.\n> >\n> > You definitely need more complex test scenarios for that. If you can\n> > come up with new ways to make the TAP tests of pg_rewind mode modular\n> > in handling more complicated node setups, that would be a nice\n> > addition, for example.\n>\n> I'm sorry for lacking tests. For now, I started off with a simple test\n> that cause the problem I mentioned. The updated WIP patch 0001 includes\n> the new test for pg_rewind.\n\nMany thanks for a quick update.\n\n> And also, I'm afraid that I'm not sure what kind of tests I have to make\n> for fix this behavior. Would you mind giving me some advice?\n\nPersonally I would prefer not to increase the scope of work. Your TAP\ntest added in 0001 seems to be adequate.\n\n> BTW, I was able to\n> reproduce the assertion failure Kuwamura-san reported, even after applying\n> your latest patch from the thread.\n\nDo you mean that the test fails or it doesn't but there are other\nsteps to reproduce the issue?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 26 Sep 2023 18:44:50 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind with cascade standby doesn't work well"
},
{
"msg_contents": "On Tue, Sep 26, 2023 at 06:44:50PM +0300, Aleksander Alekseev wrote:\n>> And also, I'm afraid that I'm not sure what kind of tests I have to make\n>> for fix this behavior. Would you mind giving me some advice?\n> \n> Personally I would prefer not to increase the scope of work. Your TAP\n> test added in 0001 seems to be adequate.\n\nYeah, agreed. I'm OK with what you are proposing, basically (the\ntest could be made a bit cheaper actually).\n\n /*\n- * For Hot Standby, we could treat this like a Shutdown Checkpoint,\n- * but this case is rarer and harder to test, so the benefit doesn't\n- * outweigh the potential extra cost of maintenance.\n+ * For Hot Standby, we could treat this like an end-of-recovery checkpoint\n */\n+ RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_WAIT);\n\nI don't understand what you want to change here. Archive recovery and\ncrash recovery are two different things, still this code would be\ntriggered even if there is no standby.signal, aka the node is not a\nstandby. Why isn't this stuff conditional?\n\n>> BTW, I was able to\n>> reproduce the assertion failure Kuwamura-san reported, even after applying\n>> your latest patch from the thread.\n> \n> Do you mean that the test fails or it doesn't but there are other\n> steps to reproduce the issue?\n\nI get it as Fujii-san testing the patch from [1], still failing the\ntest from [2]:\n[1]: https://www.postgresql.org/message-id/[email protected]\n[2]: https://www.postgresql.org/message-id/CAMyC8qryE7mKyvPvGHCt5GpANAmp8sS_tRbraqXcPBx14viy6g@mail.gmail.com\n\nI would be surprised, actually, because the patch from [1] would cause\nstep 7 of the test to fail: the patch causes standby.signal or\nrecovery.signal to be required. Anyway, this specific issue, if any,\nhad better be discussed on the other thread. I need to address a few\ncomments there as well and was planning to get back to it. 
It is\npossible that I've missed something on the other thread with the\nrestrictions I was proposing in the latest version of the patch.\n\nFor this thread, let's focus on the pg_rewind case and how we want to\ntreat these records to improve the cascading case.\n--\nMichael",
"msg_date": "Wed, 27 Sep 2023 08:33:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind with cascade standby doesn't work well"
},
{
"msg_contents": "Thanks for your review!\n\n2023年9月27日(水) 8:33 Michael Paquier <[email protected]>:\n\n> On Tue, Sep 26, 2023 at 06:44:50PM +0300, Aleksander Alekseev wrote:\n> >> And also, I'm afraid that I'm not sure what kind of tests I have to make\n> >> for fix this behavior. Would you mind giving me some advice?\n> >\n> > Personally I would prefer not to increase the scope of work. Your TAP\n> > test added in 0001 seems to be adequate.\n>\n> Yeah, agreed. I'm OK with what you are proposing, basically (the\n> test could be made a bit cheaper actually).\n\n\nI guess you meant that it contains an unnecessary insert and wait.\nI fixed this and some incorrect comments caused by copy & paste.\nPlease see the attached patch.\n\n\n>\n /*\n> - * For Hot Standby, we could treat this like a Shutdown\n> Checkpoint,\n> - * but this case is rarer and harder to test, so the benefit\n> doesn't\n> - * outweigh the potential extra cost of maintenance.\n> + * For Hot Standby, we could treat this like an end-of-recovery\n> checkpoint\n> */\n> + RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_WAIT);\n>\n> I don't understand what you want to change here. Archive recovery and\n> crash recovery are two different things, still this code would be\n> triggered even if there is no standby.signal, aka the node is not a\n> standby. Why isn't this stuff conditional?\n\n\nYou are absolutely right. It should only run in standby mode.\nAlso, according to the document[1], a server can be \"Hot Standby\" even if\nit is\nnot in standby mode (i.e. when it is in archive recovery mode).\nSo I fixed the comment above `RequestCheckpoint()`.\n\n[1]: https://www.postgresql.org/docs/current/hot-standby.html\n\nI hope you will review it again.\n\nRegards,\n\nMasaki Kuwamura",
"msg_date": "Fri, 6 Oct 2023 16:53:20 +0900",
"msg_from": "Kuwamura Masaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind with cascade standby doesn't work well"
},
{
"msg_contents": "> <v3-0001-pg_rewind-Fix-bug-using-cascade-standby-as-source.patch>\n\nHi,\n\nThank you for addressing this issue!\n\nThe patch needs to be rebased as it doesn’t apply on master anymore, but here are some thoughts on the patch in general without testing: \n\n1. Regarding the approach to force a checkpoint on every restartpoint record, I wonder if it has any performance implications, since now the WAL replay will wait for the restartpoint to finish as opposed to it happening in the background. \n2. This change of behaviour should be documented in [1], there’s a paragraph about restartpoints. \n3. It looks like some pg_rewind code accommodating for the \"restartpoint < last common checkpoint\" situation could be cleaned up as well, I found this at pg_rewind.c:669 on efcbb76efe, but maybe there’s more:\n\nif (ControlFile_source.checkPointCopy.redo < chkptredo) …\n\nThere’s also a less invasive option to fix this problem by detecting this situation from pg_rewind and simply calling checkpoint on the standby that I think is worth exploring. \n\nRegards,\nIlya\n\n[1] https://www.postgresql.org/docs/devel/wal-configuration.html\n\n<v3-0001-pg_rewind-Fix-bug-using-cascade-standby-as-source.patch>Hi,Thank you for addressing this issue!The patch needs to be rebased as it doesn’t apply on master anymore, but here are some thoughts on the patch in general without testing: 1. Regarding the approach to force a checkpoint on every restartpoint record, I wonder if it has any performance implications, since now the WAL replay will wait for the restartpoint to finish as opposed to it happening in the background. 2. This change of behaviour should be documented in [1], there’s a paragraph about restartpoints. 3. 
It looks like some pg_rewind code accommodating for the \"restartpoint < last common checkpoint\" situation could be cleaned up as well, I found this at pg_rewind.c:669 on efcbb76efe, but maybe there’s more:if (ControlFile_source.checkPointCopy.redo < chkptredo) …There’s also a less invasive option to fix this problem by detecting this situation from pg_rewind and simply calling checkpoint on the standby that I think is worth exploring. Regards,Ilya[1] https://www.postgresql.org/docs/devel/wal-configuration.html",
"msg_date": "Mon, 22 Jul 2024 22:47:37 +0100",
"msg_from": "Ilya Gladyshev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind with cascade standby doesn't work well"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nWhile investigating the cfbot failure [1], I found a strange behavior of pg_ctl\ncommand. How do you think? Is this a bug to be fixed or in the specification?\n\n# Problem\n\nThe \"pg_ctl start\" command returns 0 (succeeded) even if the cluster has\nalready been started. This occurs on Windows environment, and when the command\nis executed just after postmaster starts.\n\n\n# Analysis\n\nThe primal reason is in wait_for_postmaster_start(). In this function the\npostmaster.pid file is read and checked whether the start command is\nsuccessfully done or not.\n\nCheck (1) requires that the postmaster must be started after the our pg_ctl\ncommand, but 2 seconds delay is accepted. \n\nIn the linux mode, the check (2) is also executed to ensures that the forked\nprocess modified the file, so this time window is not so problematic.\nBut in the windows system, (2) is ignored, *so the pg_ctl command may be\nsucceeded if the postmaster is started within 2 seconds.*\n\n```\n\t\tif ((optlines = readfile(pid_file, &numlines)) != NULL &&\n\t\t\tnumlines >= LOCK_FILE_LINE_PM_STATUS)\n\t\t{\n\t\t\t/* File is complete enough for us, parse it */\n\t\t\tpid_t\t\tpmpid;\n\t\t\ttime_t\t\tpmstart;\n\n\t\t\t/*\n\t\t\t * Make sanity checks. If it's for the wrong PID, or the recorded\n\t\t\t * start time is before pg_ctl started, then either we are looking\n\t\t\t * at the wrong data directory, or this is a pre-existing pidfile\n\t\t\t * that hasn't (yet?) 
been overwritten by our child postmaster.\n\t\t\t * Allow 2 seconds slop for possible cross-process clock skew.\n\t\t\t */\n\t\t\tpmpid = atol(optlines[LOCK_FILE_LINE_PID - 1]);\n\t\t\tpmstart = atol(optlines[LOCK_FILE_LINE_START_TIME - 1]);\n\t\t\tif (pmstart >= start_time - 2 && // (1)\n#ifndef WIN32\n\t\t\t\tpmpid == pm_pid // (2)\n#else\n\t\t\t/* Windows can only reject standalone-backend PIDs */\n\t\t\t\tpmpid > 0\n#endif\n\n```\n\n# Appendix - how do I found?\n\nI found it while investigating the failure. In the test \"pg_upgrade --check\"\nis executed just after old cluster has been started. I checked the output file [2]\nand found that the banner says \"Performing Consistency Checks\", which meant that\nthe parameter live_check was set to false (see output_check_banner()). This\nparameter is set to true when the postmaster has been started at that time and\nthe pg_ctl start fails. That's how I find.\n\n[1]: https://cirrus-ci.com/task/4634769732927488\n[2]: https://api.cirrus-ci.com/v1/artifact/task/4634769732927488/testrun/build/testrun/pg_upgrade/003_logical_replication_slots/data/t_003_logical_replication_slots_new_publisher_data/pgdata/pg_upgrade_output.d/20230905T080645.548/log/pg_upgrade_internal.log\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Thu, 7 Sep 2023 07:07:36 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "On Thu, Sep 07, 2023 at 07:07:36AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> # Problem\n> \n> The \"pg_ctl start\" command returns 0 (succeeded) even if the cluster has\n> already been started. This occurs on Windows environment, and when the command\n> is executed just after postmaster starts.\n\nNot failing on `pg_ctl start` if the command is run on a data folder\nthat has already been started previously by a different command with a\npostmaster still alive feels like cheating, because pg_ctl is lying\nabout its result. If pg_ctl wants to start a cluster but is not able\nto do it, either because the postmaster failed at startup or because\nthe cluster has already started, it should report a failure. Now, I\nalso recall that the processes spawned by pg_ctl on Windows make the\nstatus handling rather tricky to reason about..\n--\nMichael",
"msg_date": "Thu, 7 Sep 2023 16:37:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "Dear Michael,\n\nThank you for replying!\n\n> Not failing on `pg_ctl start` if the command is run on a data folder\n> that has already been started previously by a different command with a\n> postmaster still alive feels like cheating, because pg_ctl is lying\n> about its result. If pg_ctl wants to start a cluster but is not able\n> to do it, either because the postmaster failed at startup or because\n> the cluster has already started, it should report a failure.\n\nI have a same feelings as you. Users may use the return code in their batch file\nand they may decide what to do based on the wrong status. Reporting the status\nmore accurately is nice.\n\nMy first idea is that to move the checking part to above, but this may not handle\nthe case the postmaster is still alive (now sure this is real issue). Do we have to\nadd a new indicator which ensures the identity of processes for windows?\nPlease tell me how you feel.\n\n> Now, I\n> also recall that the processes spawned by pg_ctl on Windows make the\n> status handling rather tricky to reason about..\n\nDid you say about the below comment? Currently I have no idea to make\ncodes more proper, sorry.\n\n```\n\t\t * On Windows, we may be checking the postmaster's parent shell, but\n\t\t * that's fine for this purpose.\n```\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Thu, 7 Sep 2023 10:53:41 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "At Thu, 7 Sep 2023 10:53:41 +0000, \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote in \n> My first idea is that to move the checking part to above, but this may not handle\n> the case the postmaster is still alive (now sure this is real issue). Do we have to\n> add a new indicator which ensures the identity of processes for windows?\n> Please tell me how you feel.\n\nIt doesn't seem to work as expected. We still lose the relationship\nbetween the PID file and the launched postmaster.\n\n> > Now, I\n> > also recall that the processes spawned by pg_ctl on Windows make the\n> > status handling rather tricky to reason about..\n> \n> Did you say about the below comment? Currently I have no idea to make\n> codes more proper, sorry.\n> \n> ```\n> \t\t * On Windows, we may be checking the postmaster's parent shell, but\n> \t\t * that's fine for this purpose.\n> ```\n\nDitching cmd.exe seems like a big hassle. So, on the flip side, I\ntried to identify the postmaster PID using the shell's PID, and it\nseem to work. The APIs used are avaiable from XP/2003 onwards.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 08 Sep 2023 14:17:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "At Fri, 08 Sep 2023 14:17:16 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> Ditching cmd.exe seems like a big hassle. So, on the flip side, I\n> tried to identify the postmaster PID using the shell's PID, and it\n> seem to work. The APIs used are avaiable from XP/2003 onwards.\n\nCleaned it up a bit.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 08 Sep 2023 14:22:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "Dear Hoiguchi-san,\n\nThank you for making the patch!\n\n> It doesn't seem to work as expected. We still lose the relationship\n> between the PID file and the launched postmaster.\n\nYes, I did not expect that the relationship can be kept.\nConceptually +1 for your approach.\n\n> > Ditching cmd.exe seems like a big hassle. So, on the flip side, I\n> > tried to identify the postmaster PID using the shell's PID, and it\n> > seem to work. The APIs used are avaiable from XP/2003 onwards.\n\nAccording to 495ed0ef2, Windows 10 seems the minimal requirement for using\nthe postgres. So the approach seems OK.\n\nFollowings are my comment, but I can say only cosmetic ones because I do not have\nwindows machine which can run postgres.\n\n\n1.\nForward declaration seems missing. In the pg_ctl.c, the static function seems to\nbe declared even if there is only one caller (c.f., GetPrivilegesToDelete).\n\n2.\nI think the argument should be pid_t.\n\n3.\nI'm not sure the return type of the function should be pid_t or not. According\nto the document, DWORD corrresponds to the pid_t. In win32_port.h, the pid_t is\ndefiend as int (_MSC_VER seems to be defined when the VisualStduio is used). It\nis harmless, but I perfer to match the interface between caller/callee. IIUC we\ncan add just a cast.\n\n```\n#ifdef _MSC_VER\ntypedef int pid_t;\n#endif\n```\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 8 Sep 2023 08:02:57 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "At Fri, 8 Sep 2023 08:02:57 +0000, \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote in \n> > > Ditching cmd.exe seems like a big hassle. So, on the flip side, I\n> > > tried to identify the postmaster PID using the shell's PID, and it\n> > > seem to work. The APIs used are avaiable from XP/2003 onwards.\n> \n> According to 495ed0ef2, Windows 10 seems the minimal requirement for using\n> the postgres. So the approach seems OK.\n> \n> Followings are my comment, but I can say only cosmetic ones because I do not have\n> windows machine which can run postgres.\n\nThank you for the comment!\n\n> 1.\n> Forward declaration seems missing. In the pg_ctl.c, the static function seems to\n> be declared even if there is only one caller (c.f., GetPrivilegesToDelete).\n\nAgreed. \n\n> 2.\n> I think the argument should be pid_t.\n\nYeah, I didn't linger on that detail earlier. But revisiting it, I\ncoucur it is best suited since it is a local function in\npg_ctl.c. I've now positioned it at the end of a WIN32 section\ndefining other win32-specific functions. Hence, a forward declaration\nbecame necessary:p\n\n> 3.\n> I'm not sure the return type of the function should be pid_t or not. According\n> to the document, DWORD corrresponds to the pid_t. In win32_port.h, the pid_t is\n> defiend as int (_MSC_VER seems to be defined when the VisualStduio is used). It\n> is harmless, but I perfer to match the interface between caller/callee. IIUC we\n> can add just a cast.\n\nFor the reason previously stated, I've adjusted the type for both the\nparameter and the return value to pid_t. start_postmaster() already\nassumed that pid_t is wider than DWORD.\n\nI noticed that PID 0 is valid on Windows. However, it is consistently\nthe PID for the system idle process, so it can't be associated with\ncmd.exe or postgres. I've added a comment noting that premise. Also I\ndid away with an unused variable. 
For the CreateToolhelp32Snapshot\nfunction, I changed the second parameter to 0 from shell_pid, since it\nis not used when using TH32CS_SNAPPROCESS. I changed the comparison\noperator for pid_t from > to !=, ensuring correct behavior even with\nnegative values.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 13 Sep 2023 15:52:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "Dear Horiguchi-san,\n\nI have tested your patch on my CI, but several test could not patch with error:\n\"pg_ctl: launcher shell executed multiple processes\".\n\nI added the thread to next CF entry, so let's see the how cfbot says.\n\n[1]: https://commitfest.postgresql.org/45/4573/\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 19 Sep 2023 02:37:20 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "Dear Horiguchi-san,\n\n> I added the thread to next CF entry, so let's see the how cfbot says.\n\nAt least there are several compiler warnings. E.g.,\n\n* pgwin32_find_postmaster_pid() has \"return;\", but IIUC it should be \"exit(1)\"\n* When DWORD is printed, \"%lx\" should be used.\n* The variable \"flags\" seems not needed.\n\nHere is a patch which suppresses warnings, whereas test would fail...\nYou can use it if acceptable.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Tue, 19 Sep 2023 13:48:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "At Tue, 19 Sep 2023 13:48:55 +0000, \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote in \n> Dear Horiguchi-san,\n> \n> > I added the thread to next CF entry, so let's see the how cfbot says.\n> \n> At least there are several compiler warnings. E.g.,\n> \n> * pgwin32_find_postmaster_pid() has \"return;\", but IIUC it should be \"exit(1)\"\n> * When DWORD is printed, \"%lx\" should be used.\n> * The variable \"flags\" seems not needed.\n\nYeah, I thought that they all have been fixed but.. you are right in\nevery respect.\n\n> Here is a patch which suppresses warnings, whereas test would fail...\n> You can use it if acceptable.\n\nI was able to see the trouble in the CI environment, but not\nlocally. I'll delve deeper into this. Thanks you for bringing it to my\nattention.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 20 Sep 2023 14:18:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "At Wed, 20 Sep 2023 14:18:41 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> I was able to see the trouble in the CI environment, but not\n> locally. I'll delve deeper into this. Thanks you for bringing it to my\n> attention.\n\nI found two instances with multiple child processes.\n\n# child-pid / parent-pid / given-pid : exec name\nprocess parent PID child PID target PID exec file\nshell 1228 6472 1228 cmd.exe\nchild 5184 1228 1228 cmd.exe\nchild 6956 1228 1228 postgres.exe\n> launcher shell executed multiple processes\n\nprocess parent PID child PID target PID exec file\nshell 4296 5880 4296 cmd.exe\nchild 5156 4296 4296 agent.exe\nchild 5640 4296 4296 postgres.exe\n> launcher shell executed multiple processes\n\nIt looks like the environment has autorun setups for cmd.exe. There's\nanother known issue related to auto-launching chcp at\nstartup. Ideally, we would avoid such behavior in the\npostmaster-launcher shell. I think we should add \"/D\" flag to cmd.exe\ncommand line, perhaps in a separate patch.\n\nEven after making that change, I still see something being launched from the launcher cmd.exe...\n\nprocess parent PID child PID target PID exec file\nshell 2784 6668 2784 cmd.exe\nchild 6140 2784 2784 MicrosoftEdgeUpdate.exe\nchild 6260 2784 2784 postgres.exe\n> launcher shell executed multiple processes\n\nI'm not sure what triggers this; perhaps some other kind of hooks? If\nwe cannot avoid this behavior, we'll have to verify the executable\nfile name. It should be fine, given that the file name is constant,\nbut I'm not fully convinced that this is the ideal solution.\n\nAnother issue is.. that I haven't been able to cause the false\npositive of pg_ctl start.. Do you have a concise reproducer of the\nissue?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 22 Sep 2023 16:15:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "Dear Horiguchi-san,\n\nThank you for making a patch! They can pass ci.\nI'm still not sure what should be, but I can respond a part.\n\n> Another issue is.. that I haven't been able to cause the false\n> positive of pg_ctl start.. Do you have a concise reproducer of the\n> issue?\n\nI found a short sleep in pg_ctl/t/001_start_stop.pl. This was introduced in\n6bcce2580 to ensure waiting more than 2 seconds. I've tested on my CI and\nfound that removing the sleep can trigger the failure. Also, I confirmed your patch\nfixes the problem. PSA the small patch for cfbot. 0001 and 0002 were not changed.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Fri, 22 Sep 2023 12:20:56 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "On Fri, 6 Oct 2023 at 11:38, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Horiguchi-san,\n>\n> Thank you for making a patch! They can pass ci.\n> I'm still not sure what should be, but I can respond a part.\n>\n> > Another issue is.. that I haven't been able to cause the false\n> > positive of pg_ctl start.. Do you have a concise reproducer of the\n> > issue?\n>\n> I found a short sleep in pg_ctl/t/001_start_stop.pl. This was introduced in\n> 6bcce2580 to ensure waiting more than 2 seconds. I've tested on my CI and\n> found that removing the sleep can trigger the failure. Also, I confirmed your patch\n> fixes the problem. PSA the small patch for cfbot. 0001 and 0002 were not changed.\n\nI have tested the patches on my windows setup.\nI am trying to start two postgres servers with an interval of 5 secs.\n\nwith HEAD (when same server is started after an interval of 5 secs):\nD:\\project\\pg\\bin>pg_ctl -D ../data -l data2.log start\npg_ctl: another server might be running; trying to start server anyway\nwaiting for server to start.... stopped waiting\npg_ctl: could not start server\nExamine the log output.\n\nwith Patch:(when same server is started after an interval of 5 secs)\nD:\\project\\pg_dev\\bin>pg_ctl -D ../data -l data2.log start\npg_ctl: another server might be running; trying to start server anyway\nwaiting for server to start....pg_ctl: launcher shell died\n\nThe output message after patch is different from the HEAD. I felt that\nwith patch as well we should get the message \"pg_ctl: could not start\nserver\".\nIs this message change intentional?\n\nThanks,\nShlok Kumar Kyal\n\n\n",
"msg_date": "Fri, 6 Oct 2023 12:28:32 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "Thank you for testing this!\n\nAt Fri, 6 Oct 2023 12:28:32 +0530, Shlok Kyal <[email protected]> wrote i> D:\\project\\pg_dev\\bin>pg_ctl -D ../data -l data2.log start\n> pg_ctl: another server might be running; trying to start server anyway\n> waiting for server to start....pg_ctl: launcher shell died\n> \n> The output message after patch is different from the HEAD. I felt that\n> with patch as well we should get the message \"pg_ctl: could not start\n> server\".\n> Is this message change intentional?\n\nPartly no, partly yes. My focus was on verifying the accuracy of\nidentifying the actual postmaster PID on Windows. The current patch\nprovides a detailed description of the events, primarily because I\nlack a comprehensive understanding of both the behavior of Windows\nAPIs and the associated processes. Given that context, the messages\nessentially serve debugging purposes.\n\nI agree with your suggestion. Ultimately, if there's a possibility\nfor this to be committed, the message will be consolidated to \"could\nnot start server\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 10 Oct 2023 10:52:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "Dear Horiguchi-san, Shlok,\n\n> \n> At Fri, 6 Oct 2023 12:28:32 +0530, Shlok Kyal <[email protected]> wrote\n> i> D:\\project\\pg_dev\\bin>pg_ctl -D ../data -l data2.log start\n> > pg_ctl: another server might be running; trying to start server anyway\n> > waiting for server to start....pg_ctl: launcher shell died\n> >\n> > The output message after patch is different from the HEAD. I felt that\n> > with patch as well we should get the message \"pg_ctl: could not start\n> > server\".\n> > Is this message change intentional?\n> \n> Partly no, partly yes. My focus was on verifying the accuracy of\n> identifying the actual postmaster PID on Windows. The current patch\n> provides a detailed description of the events, primarily because I\n> lack a comprehensive understanding of both the behavior of Windows\n> APIs and the associated processes. Given that context, the messages\n> essentially serve debugging purposes.\n> \n> I agree with your suggestion. Ultimately, if there's a possibility\n> for this to be committed, the message will be consolidated to \"could\n> not start server\".\n\nBased on the suggestion, I tried to update the patch.\nA new argument is_valid is added for reporting callee. Also, reporting formats\nare adjusted based on other functions. How do you think?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Mon, 23 Oct 2023 08:57:19 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "At Mon, 23 Oct 2023 08:57:19 +0000, \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote in \n> > I agree with your suggestion. Ultimately, if there's a possibility\n> > for this to be committed, the message will be consolidated to \"could\n> > not start server\".\n> \n> Based on the suggestion, I tried to update the patch.\n> A new argument is_valid is added for reporting callee. Also, reporting formats\n> are adjusted based on other functions. How do you think?\n\nAn equivalent check is already done shortly afterward in the calling\nfunction. Therefore, we can simply remove the code path for \"launcher\nshell died\", and it will work the same way. Please find the attached.\n\nOther error cases will fit to \"shouldn't occur under normal\nconditions\" errors.\n\nThere is a possibility that the launcher shell terminates while\npostmaster is running. Even in such a case, the server continue\nworking without any problems. I contemplated accomodating this case\nbut the effort required seemed disproportionate to the possibility.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 24 Oct 2023 15:00:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "Dear Horiguchi-san,\n\nThanks for updates! I was quite not sure the Windows env, but I can post comments.\n(We need reviews by windows-friendly developers...)\n\n> Other error cases will fit to \"shouldn't occur under normal\n> conditions\" errors.\n\nFormatting of messages for write_stderr() seem different from others. In v3,\nI slightly modified for readability like below. I wanted to let you know just in case\nbecause you did not say anything about these changes...\n\n```\n+\t/* create a process snapshot */\n+\thSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);\n+\tif (hSnapshot == INVALID_HANDLE_VALUE)\n+\t{\n+\t\twrite_stderr(_(\"%s: could not create a snapshot: error code %lu\\n\"),\n+\t\t\t\t\t progname, (unsigned long) GetLastError());\n+\t\texit(1);\n+\t}\n+\n+\t/* start iterating on the snapshot */\n+\tppe.dwSize = sizeof(PROCESSENTRY32);\n+\tif (!Process32First(hSnapshot, &ppe))\n+\t{\n+\t\twrite_stderr(_(\"%s: cound not retrieve information about the process: error code %lu\\n\"),\n+\t\t\t\t\t progname, GetLastError());\n+\t\texit(1);\n+\t}\n+\n```\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 24 Oct 2023 07:37:22 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "At Tue, 24 Oct 2023 07:37:22 +0000, \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote in \n> Dear Horiguchi-san,\n> \n> Thanks for updates! I was quite not sure the Windows env, but I can post comments.\n> (We need reviews by windows-friendly developers...)\n\nIndeed, I haven't managed to successfully build using Meson on\nWindows...\n\n> Formatting of messages for write_stderr() seem different from others. In v3,\n> I slightly modified for readability like below. I wanted to let you know just in case\n> because you did not say anything about these changes...\n\nAh. Sorry, I was lazy about the messages because I didn't regard this\nto be at that stage yet.\n\nIn the attached, fixed the existing two messages, and adjusted one\nmessage to display an error code, all in the consistent format.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 24 Oct 2023 17:25:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 4:28 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n> In the attached, fixed the existing two messages, and adjusted one\n> message to display an error code, all in the consistent format.\n\nHi,\n\nI'm not a Windows expert, but my guess is that 0001 is a very good\nidea. I hope someone who is a Windows expert will comment on that.\n\n0002 seems problematic to me. One potential issue is that it would\nbreak if someone renamed postgres.exe to something else -- although\nthat's probably not really a serious problem. A bigger issue is that\nit seems like it would break if someone used pg_ctl to start several\ninstances in different data directories on the same machine. If I'm\nunderstanding correctly, that currently mostly works, and this would\nbreak it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 14:58:55 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "On Fri, Jan 05, 2024 at 02:58:55PM -0500, Robert Haas wrote:\n> I'm not a Windows expert, but my guess is that 0001 is a very good\n> idea. I hope someone who is a Windows expert will comment on that.\n\nI am +1 on 0001. It is just something we've never anticipated when\nthese wrappers around cmd in pg_ctl were written.\n\n> 0002 seems problematic to me. One potential issue is that it would\n> break if someone renamed postgres.exe to something else -- although\n> that's probably not really a serious problem.\n\nWe do a find_other_exec_or_die() on \"postgres\" with what could be a\ncustom execution path. So we're actually sure that the binary will be\nthere in the start path, no? I don't like much the hardcoded\ndependency to .exe here.\n\n> A bigger issue is that\n> it seems like it would break if someone used pg_ctl to start several\n> instances in different data directories on the same machine. If I'm\n> understanding correctly, that currently mostly works, and this would\n> break it.\n\nNot having the guarantee that a single shell_pid is associated to a\nsingle postgres.exe would be a problem. Now the patch includes this\ncode:\n+\t\tif (ppe.th32ParentProcessID == shell_pid &&\n+\t\t\tstrcmp(\"postgres.exe\", ppe.szExeFile) == 0)\n+\t\t{\n+\t\t\tif (pm_pid != ppe.th32ProcessID && pm_pid != 0)\n+\t\t\t\tmultiple_children = true;\n+\t\t\tpm_pid = ppe.th32ProcessID;\n+\t\t}\n\nWhich is basically giving this guarantee? multiple_children should\nnever happen once the autorun part is removed. Is that right?\n\n+ * The launcher shell might start other cmd.exe instances or programs\n+ * besides postgres.exe. Veryfying the program file name is essential.\n\nWith the autorun part of cmd.exe removed, what's still relevant here?\ns/Veryfying/Verifying/.\n\nPerhaps 0002 should make more efforts in documenting things like\nth32ProcessID and th32ParentProcessID.\n--\nMichael",
"msg_date": "Tue, 9 Jan 2024 09:40:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "On Tue, Jan 09, 2024 at 09:40:23AM +0900, Michael Paquier wrote:\n> On Fri, Jan 05, 2024 at 02:58:55PM -0500, Robert Haas wrote:\n> > I'm not a Windows expert, but my guess is that 0001 is a very good\n> > idea. I hope someone who is a Windows expert will comment on that.\n> \n> I am +1 on 0001. It is just something we've never anticipated when\n> these wrappers around cmd in pg_ctl were written.\n\nI have now applied 0001 for pg_ctl.\n\nWhile reviewing that, I have also noticed spawn_process() in\npg_regress.c that includes direct command invocations with cmd.exe /c.\nCould it make sense to append an extra /d for this case as well?\n--\nMichael",
"msg_date": "Wed, 10 Jan 2024 10:44:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> I have now applied 0001 for pg_ctl.\n\n> While reviewing that, I have also noticed spawn_process() in\n> pg_regress.c that includes direct command invocations with cmd.exe /c.\n> Could it make sense to append an extra /d for this case as well?\n\nNo Windows expert here, but it does seem like the same argument\napplies.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Jan 2024 21:40:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "On Tue, Jan 09, 2024 at 09:40:12PM -0500, Tom Lane wrote:\n> No Windows expert here, but it does seem like the same argument\n> applies.\n\nYeah, I've applied the same restriction for pg_regress to avoid\nsimilar problems as we spawn a postgres process in this case. I've\ntested it and it was not causing issues in my own setup or the CI.\n\nI am wondering if we'd better backpatch all that, TBH.\n--\nMichael",
"msg_date": "Thu, 11 Jan 2024 12:40:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> I am wondering if we'd better backpatch all that, TBH.\n\nSeems like a good idea to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jan 2024 23:02:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "Thanks for restarting this thread.\n\nAt Tue, 9 Jan 2024 09:40:23 +0900, Michael Paquier <[email protected]> wrote in \n> On Fri, Jan 05, 2024 at 02:58:55PM -0500, Robert Haas wrote:\n> > I'm not a Windows expert, but my guess is that 0001 is a very good\n> > idea. I hope someone who is a Windows expert will comment on that.\n> \n> I am +1 on 0001. It is just something we've never anticipated when\n> these wrappers around cmd in pg_ctl were written.\n\nThanks for committing it.\n\n> > 0002 seems problematic to me. One potential issue is that it would\n> > break if someone renamed postgres.exe to something else -- although\n> > that's probably not really a serious problem.\n> \n> We do a find_other_exec_or_die() on \"postgres\" with what could be a\n> custom execution path. So we're actually sure that the binary will be\n> there in the start path, no? I don't like much the hardcoded\n> dependency to .exe here.\n\nThe patch doesn't care of the path for postgres.exe. If you are referring to the code you cited below, it's for another reason. I'll describe that there.\n\n> > A bigger issue is that\n> > it seems like it would break if someone used pg_ctl to start several\n> > instances in different data directories on the same machine. If I'm\n> > understanding correctly, that currently mostly works, and this would\n> > break it.\n> \n> Not having the guarantee that a single shell_pid is associated to a\n> single postgres.exe would be a problem. Now the patch includes this\n> code:\n> +\t\tif (ppe.th32ParentProcessID == shell_pid &&\n> +\t\t\tstrcmp(\"postgres.exe\", ppe.szExeFile) == 0)\n> +\t\t{\n> +\t\t\tif (pm_pid != ppe.th32ProcessID && pm_pid != 0)\n> +\t\t\t\tmultiple_children = true;\n> +\t\t\tpm_pid = ppe.th32ProcessID;\n> +\t\t}\n> \n> Which is basically giving this guarantee? multiple_children should\n> never happen once the autorun part is removed. 
Is that right?\n\nThe patch indeed ensures the relationship between the parent\npg_ctl.exe and postgres.exe. However, for some reason, in my Windows\n11 environment with the /D option specified, I observed that another\ncmd.exe is spawned as the second child process of the parent\ncmd.exe. This is why there is a need to verify the executable file\nname. I have no idea how the second cmd.exe is being spawned.\n\n> + * The launcher shell might start other cmd.exe instances or programs\n> + * besides postgres.exe. Veryfying the program file name is essential.\n> \n> With the autorun part of cmd.exe removed, what's still relevant here?\n\nYes, if the strcmp() is commented out, multiple_children sometimes\nbecomes true..\n\n> s/Veryfying/Verifying/.\n\nOops!\n\n> Perhaps 0002 should make more efforts in documenting things like\n> th32ProcessID and th32ParentProcessID.\n\nIs it correct to understand that you are requesting changes as follows?\n\n--- a/src/bin/pg_ctl/pg_ctl.c\n+++ b/src/bin/pg_ctl/pg_ctl.c\n@@ -1995,11 +1995,14 @@ pgwin32_find_postmaster_pid(pid_t shell_pid)\n \t *\n \t * Check for duplicate processes to ensure reliability.\n \t *\n- \t * The launcher shell might start other cmd.exe instances or programs\n-\t * besides postgres.exe. Verifying the program file name is essential.\n-\t *\n-\t * The launcher shell process isn't checked in this function. It will be\n-\t * checked by the caller.\n+\t * The ppe entry to be examined is identified by th32ParentProcessID, which\n+\t * should correspond to the cmd.exe process that executes the postgres.exe\n+\t * binary. Additionally, th32ProcessID in the same entry should be the PID\n+\t * of the launched postgres.exe. However, even though we have launched the\n+\t * parent cmd.exe with the /D option specified, it is sometimes observed\n+\t * that another cmd.exe is launched for unknown reasons. 
Therefore, it is\n+\t * crucial to verify the program file name to avoid returning the wrong\n+\t * PID.\n \t */\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 11 Jan 2024 17:33:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 3:33 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n> Is it correct to understand that you are requesting changes as follows?\n>\n> --- a/src/bin/pg_ctl/pg_ctl.c\n> +++ b/src/bin/pg_ctl/pg_ctl.c\n> @@ -1995,11 +1995,14 @@ pgwin32_find_postmaster_pid(pid_t shell_pid)\n> *\n> * Check for duplicate processes to ensure reliability.\n> *\n> - * The launcher shell might start other cmd.exe instances or programs\n> - * besides postgres.exe. Verifying the program file name is essential.\n> - *\n> - * The launcher shell process isn't checked in this function. It will be\n> - * checked by the caller.\n> + * The ppe entry to be examined is identified by th32ParentProcessID, which\n> + * should correspond to the cmd.exe process that executes the postgres.exe\n> + * binary. Additionally, th32ProcessID in the same entry should be the PID\n> + * of the launched postgres.exe. However, even though we have launched the\n> + * parent cmd.exe with the /D option specified, it is sometimes observed\n> + * that another cmd.exe is launched for unknown reasons. Therefore, it is\n> + * crucial to verify the program file name to avoid returning the wrong\n> + * PID.\n> */\n\nThis kind of change looks massively helpful to me. I don't know if it\nis exactly right or not, but it would have been a big help to me when\nwriting my previous review, so +1 for some change of this general\ntype.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jan 2024 13:34:46 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "On Mon, Jan 15, 2024 at 01:34:46PM -0500, Robert Haas wrote:\n> This kind of change looks massively helpful to me. I don't know if it\n> is exactly right or not, but it would have been a big help to me when\n> writing my previous review, so +1 for some change of this general\n> type.\n\nDuring a live review of this patch last week, as part of the Advanced\nPatch Workshop of pgconf.dev, it has been mentioned by Tom that we may\nbe able to simplify the check on pmstart if the detection of the\npostmaster PID started by pg_ctl is more stable using the WIN32\ninternals that this patch relies on. I am not sure that this\nsuggestion is right, though, because we should still care about the\nclock skew case as written in the surrounding comments? Even if\nthat's OK, I would assume that this should be an independent patch,\nwritten on top of the proposed v6-0001.\n\nTom, could you comment about that? Perhaps my notes did not catch\nwhat you meant.\n--\nMichael",
"msg_date": "Tue, 4 Jun 2024 08:30:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "At Tue, 4 Jun 2024 08:30:19 +0900, Michael Paquier <[email protected]> wrote in \n> On Mon, Jan 15, 2024 at 01:34:46PM -0500, Robert Haas wrote:\n> > This kind of change looks massively helpful to me. I don't know if it\n> > is exactly right or not, but it would have been a big help to me when\n> > writing my previous review, so +1 for some change of this general\n> > type.\n> \n> During a live review of this patch last week, as part of the Advanced\n> Patch Workshop of pgconf.dev, it has been mentioned by Tom that we may\n> be able to simplify the check on pmstart if the detection of the\n> postmaster PID started by pg_ctl is more stable using the WIN32\n> internals that this patch relies on. I am not sure that this\n> suggestion is right, though, because we should still care about the\n> clock skew case as written in the surrounding comments? Even if\n> that's OK, I would assume that this should be an independent patch,\n> written on top of the proposed v6-0001.\n> \n> Tom, could you comment about that? Perhaps my notes did not catch\n> what you meant.\n\nThank you for the follow-up.\n\nI have been thinking about this since then. At first, I thought it\nreferred to FindFirstChangeNotification() and friends, and inotify on\nLinux. However, I haven't found a way to simplify the specified code\narea using those APIs.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 06 Jun 2024 16:45:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "At Thu, 06 Jun 2024 16:45:00 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> I have been thinking about this since then. At first, I thought it\n> referred to FindFirstChangeNotification() and friends, and inotify on\n> Linux. However, I haven't found a way to simplify the specified code\n> area using those APIs.\n\nBy the way, the need to shift by 2 seconds to tolerate clock skew\nsuggests that the current launcher-postmaster association mechanism is\nsomewhat unreliable. Couldn't we add a command line option to\npostmaster to explicitly pass a unique identifier (say, pid itself) of\nthe launcher? If it is not specified, the number should be the PID of\nthe immediate parent process.\n\nThis change avoids the need for the special treatment for Windows.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 06 Jun 2024 17:15:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "At Thu, 06 Jun 2024 17:15:15 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> At Thu, 06 Jun 2024 16:45:00 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> > I have been thinking about this since then. At first, I thought it\n> > referred to FindFirstChangeNotification() and friends, and inotify on\n> > Linux. However, I haven't found a way to simplify the specified code\n> > area using those APIs.\n> \n> By the way, the need to shift by 2 seconds to tolerate clock skew\n> suggests that the current launcher-postmaster association mechanism is\n> somewhat unreliable. Couldn't we add a command line option to\n> postmaster to explicitly pass a unique identifier (say, pid itself) of\n> the launcher? If it is not specified, the number should be the PID of\n> the immediate parent process.\n\nNo. The combination of pg_ctl's pid and timestamp, to avoid false\nmatching during reboot.\n\n> This change avoids the need for the special treatment for Windows.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 06 Jun 2024 17:21:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "\nOn 2024-06-06 Th 04:15, Kyotaro Horiguchi wrote:\n> At Thu, 06 Jun 2024 16:45:00 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in\n>> I have been thinking about this since then. At first, I thought it\n>> referred to FindFirstChangeNotification() and friends, and inotify on\n>> Linux. However, I haven't found a way to simplify the specified code\n>> area using those APIs.\n> By the way, the need to shift by 2 seconds to tolerate clock skew\n> suggests that the current launcher-postmaster association mechanism is\n> somewhat unreliable. Couldn't we add a command line option to\n> postmaster to explicitly pass a unique identifier (say, pid itself) of\n> the launcher? If it is not specified, the number should be the PID of\n> the immediate parent process.\n>\n> This change avoids the need for the special treatment for Windows.\n>\n\nLooks good generally. I assume iterating over the process table to find \nthe right pid will be pretty quick.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 6 Jun 2024 15:03:41 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "On Thu, Jun 06, 2024 at 05:21:46PM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 06 Jun 2024 17:15:15 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> > By the way, the need to shift by 2 seconds to tolerate clock skew\n> > suggests that the current launcher-postmaster association mechanism is\n> > somewhat unreliable. Couldn't we add a command line option to\n> > postmaster to explicitly pass a unique identifier (say, pid itself) of\n> > the launcher? If it is not specified, the number should be the PID of\n> > the immediate parent process.\n> \n> No. The combination of pg_ctl's pid and timestamp, to avoid false\n> matching during reboot.\n> \n> > This change avoids the need for the special treatment for Windows.\n\nRegarding your \"unique identifier\" idea, pg_ctl could generate an 8-byte\nrandom value for the postmaster to write to postmaster.pid. That would be\nenough for wait_for_postmaster_start() to ignore PIDs and achieve its mission\nwithout OS-specific code.\n\nCommits 9886744 and b83747a added /D to two %comspec% callers. I gather they\narose to make particular cmd.exe invocations have just one child. However,\nhttp://postgr.es/m/[email protected]\nreports multiple children remain possible. v17 is currently in a weird state\nwhere most Windows subprocess invocation routes through pgwin32_system() and\ndoes not add /D, while these two callers add /D. I suspect we should either\n(1) prepend /D in pgwin32_system() and other %comspec% usage or (2) revert\nprepending it in the callers from this thread's commits. While\n\"Software\\Microsoft\\Command Processor\\AutoRun\" is hard to use without breaking\nthings, it's not PostgreSQL's job to second-guess the user in that respect.\nHence, I lean toward (2). What do you think?\n\n\n",
"msg_date": "Sat, 29 Jun 2024 19:12:11 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "On Sat, Jun 29, 2024 at 07:12:11PM -0700, Noah Misch wrote:\n> Commits 9886744 and b83747a added /D to two %comspec% callers. I gather they\n> arose to make particular cmd.exe invocations have just one child. However,\n> http://postgr.es/m/[email protected]\n> reports multiple children remain possible. v17 is currently in a weird state\n> where most Windows subprocess invocation routes through pgwin32_system() and\n> does not add /D, while these two callers add /D. I suspect we should either\n> (1) prepend /D in pgwin32_system() and other %comspec% usage or (2) revert\n> prepending it in the callers from this thread's commits. While\n> \"Software\\Microsoft\\Command Processor\\AutoRun\" is hard to use without breaking\n> things, it's not PostgreSQL's job to second-guess the user in that respect.\n> Hence, I lean toward (2). What do you think?\n\nThanks for the ping.\n\nAs of this stage of the game for v17, I am going to agree with (2) to\nremove this inconsistency rather than experiment with new things. We\ncould always study more in v18 what could be done with the /D switches\nand the other patch, though that will unlikely be something I'll be\nable to look at in the short term.\n--\nMichael",
"msg_date": "Fri, 5 Jul 2024 14:19:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
"msg_contents": "On Fri, Jul 05, 2024 at 02:19:54PM +0900, Michael Paquier wrote:\n> As of this stage of the game for v17, I am going to agree with (2) to\n> remove this inconsistency rather than experiment with new things. We\n> could always study more in v18 what could be done with the /D switches\n> and the other patch, though that will unlikely be something I'll be\n> able to look at in the short term.\n\nAs I am not tempted to play the apprentice sorcerer on a stable\nbranch, just reverted these two things with 74b8e6a69802, and\nbackpatched the change down to 17.\n--\nMichael",
"msg_date": "Mon, 8 Jul 2024 09:58:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
    "msg_contents": "Hi,\n\nI'm reviewing patches in Commitfest 2024-07 from top to bottom:\nhttps://commitfest.postgresql.org/48/\n\nThis is the 2nd patch:\nhttps://commitfest.postgresql.org/48/4573/\n\nFYI: https://commitfest.postgresql.org/48/4681/ is my patch.\n\nIn <[email protected]>\n \"Re: pg_ctl start may return 0 even if the postmaster has been already started on Windows\" on Tue, 4 Jun 2024 08:30:19 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> During a live review of this patch last week, as part of the Advanced\n> Patch Workshop of pgconf.dev, it has been mentioned by Tom that we may\n> be able to simplify the check on pmstart if the detection of the\n> postmaster PID started by pg_ctl is more stable using the WIN32\n> internals that this patch relies on. I am not sure that this\n> suggestion is right, though, because we should still care about the\n> clock skew case as written in the surrounding comments? Even if\n> that's OK, I would assume that this should be an independent patch,\n> written on top of the proposed v6-0001.\n\nI reviewed the latest patch set and got a different\nimpression.\n\nstart_postmaster() on Windows uses cmd.exe for redirection\nbased on the comment in the function:\n\n> /*\n> * As with the Unix case, it's easiest to use the shell (CMD.EXE) to\n> * handle redirection etc. Unfortunately CMD.EXE lacks any equivalent of\n> * \"exec\", so we don't get to find out the postmaster's PID immediately.\n> */\n\nIt seems that we can use redirection by CreateProcess()\nfamily functions without cmd.exe based on the following\ndocumentation:\n\nhttps://learn.microsoft.com/en-us/windows/win32/procthread/creating-a-child-process-with-redirected-input-and-output\n\nHow about changing start_postmaster() for Windows to start\npostgres.exe directly so that it returns the postgres.exe's\nPID, not cmd.exe's PID? If we can do it, we don't need\npgwin32_find_postmaster_pid() in the patch set.\n\n\nThanks,\n-- \nkou\n\n\n",
"msg_date": "Sat, 13 Jul 2024 06:41:14 +0900 (JST)",
"msg_from": "Sutou Kouhei <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been\n already started on Windows"
},
{
    "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: not tested\nDocumentation: not tested\n\nHi,\r\n\r\nI have verified the following: \r\n - The bug exists in PG17. I also checked PG16, but it does not exist there. \r\n - After applying your patch, I can confirm that the bug gets fixed. \r\n - No regression found. I ran \"meson test\".\r\n - I would like to suggest that the #includes should be placed at the appropriate location, keeping the #includes alphabetically sorted, which is what I observed as a standard in the PG code:\r\n Your patch:\r\n #include <versionhelpers.h>\r\n #include <tlhelp32.h>\r\n\r\n It should be like:\r\n #include <tlhelp32.h>\r\n #include <versionhelpers.h>\r\n\r\nRegards...\r\n\r\n\r\nYasir Hussain\r\nBitnine Global Inc.",
"msg_date": "Tue, 16 Jul 2024 11:57:37 +0000",
"msg_from": "Yasir Shah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
    "msg_contents": "On Tue, Jul 16, 2024 at 4:58 PM Yasir Shah <[email protected]>\nwrote:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, failed (meson test, passed)\n> Implements feature: tested, failed (tested, passed)\n> Spec compliant: not tested (tested, passed\n> with suggestion)\n> Documentation: not tested\n>\n\nPlease ignore the above 4 lines in my review. See my comments in blue.\n\n\n> Hi,\n>\n> I have verified following:\n> - Bug exits in PG17. I also checked it in PG16 but it does not exits\n> there.\n> - After applying your patch, I can confirm that bug get fixed.\n> - no regression found. I ran \"meson test\".\n> - I would like to suggest you that #includes should be included at\n> appropriate location keeping the #includes alphabetically sorted, what I\n> observed in the PG code as a standard:\n> Your patch:\n> #include <versionhelpers.h>\n> #include <tlhelp32.h>\n>\n> It should be like:\n> #include <tlhelp32.h>\n> #include <versionhelpers.h>\n>\n> Regards...\n>\n>\n> Yasir Hussain\n> Bitnine Global Inc.",
"msg_date": "Tue, 16 Jul 2024 17:04:31 +0500",
"msg_from": "Yasir <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 8:04 AM Yasir <[email protected]> wrote:\n> Please ignore the above 4 lines in my review. See my comments in blue.\n\nOK, so I think it's unclear what the next steps are for this patch.\n\n1. On June 3rd, Michael Paquier said that Tom Lane proposed that,\nafter doing what the patch currently does, we could simplify some\nother stuff. The details are unclear, and Tom hasn't commented.\n\n2. On June 29th, Noah Misch proposed a platform-independent way of\nsolving the problem.\n\n3. On July 12th, Sutou Kouhei proposed using CreateProcess() to start\nthe postmaster instead of cmd.exe.\n\n4. On July 16th, Yasir Shah said that he tested the patch and found\nthat the problem only exists in v17, not any prior release, which is\ncontrary to my understanding of the situation. He also proposed a\nminor tweak to the patch itself.\n\nSo, as I see it, we have three possible ways forward here. First, we\ncould stick with the current patch, possibly with further work as per\n[1] or adjustments as per [4]. Second, we could abandon the current\napproach and adopt Noah's proposal in [2]. Third, we could possibly\nabandon the current approach and adopt Sutou's proposal in [3]. I say\n\"possibly\" because I can't personally assess whether this approach is\nfeasible.\n\nI have some bias toward thinking that real patches are better than\nimaginary ones, and that we ought to therefore think about committing\nHoriguchi-san's actual patch to fix the actual problem rather than\nworrying much about other hypothetical things that we could do. On the\nother hand, I'm also not volunteering, among other reasons because I\nam not knowledgeable enough about Windows. And, certainly, there is\nsome appeal to a platform-independent approach. But I feel like we're\nnot doing ourselves any favors by letting this patch sit for (checks\nthread) 10 months when according to multiple reviewers, it works.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Jul 2024 14:32:47 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_ctl start may return 0 even if the postmaster has been already\n started on Windows"
}
] |
[
{
    "msg_contents": "Hi,\n\nWith f47ed79cc8, the test suite doesn't run 'wal_consistency_checking'\nby default because it is resource intensive; but the regress docs don't\nstate resource intensiveness as a reason for not running tests by\ndefault. So, I created a patch for updating the docs.\n\nAny kind of feedback would be appreciated.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Thu, 7 Sep 2023 14:09:35 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add resource intensiveness as a reason to not running tests by\n default"
},
{
"msg_contents": "> On 7 Sep 2023, at 13:09, Nazir Bilal Yavuz <[email protected]> wrote:\n\n> With f47ed79cc8, the test suite doesn't run 'wal_consistency_checking'\n> as default because it is resource intensive; but regress docs doesn't\n> state resource intensiveness as a reason for not running tests by\n> default. So, I created a patch for updating the docs.\n\nAgreed, the current wording lacks the mention of skipping tests due to high\nresource usage. Patch looks good from a quick skim, I'll backpatch it down to\n15 which is where PG_TEST_EXTRA was first used in this capacity.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 7 Sep 2023 13:24:16 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add resource intensiveness as a reason to not running tests by\n default"
},
{
"msg_contents": "> On 7 Sep 2023, at 13:24, Daniel Gustafsson <[email protected]> wrote:\n> \n>> On 7 Sep 2023, at 13:09, Nazir Bilal Yavuz <[email protected]> wrote:\n> \n>> With f47ed79cc8, the test suite doesn't run 'wal_consistency_checking'\n>> as default because it is resource intensive; but regress docs doesn't\n>> state resource intensiveness as a reason for not running tests by\n>> default. So, I created a patch for updating the docs.\n> \n> Agreed, the current wording lacks the mention of skipping tests due to high\n> resource usage. Patch looks good from a quick skim, I'll backpatch it down to\n> 15 which is where PG_TEST_EXTRA was first used in this capacity.\n\nPushed and backpatched, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 8 Sep 2023 11:39:30 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add resource intensiveness as a reason to not running tests by\n default"
}
] |
[
{
    "msg_contents": "Hi,\n\nThere is an old work item about building the docs if there are changes\nin the docs, otherwise don't build the docs. I wanted to make an\naddition to that idea; if the changes are only in the docs, don't run\nall tasks except building the docs task; this could help to save more\nCI time. I attached two patches.\n\nI assumed that the docs-related changes are limited to the changes\nin the docs folder, but I am not really sure about that.\n\nv1-0001-Only-built-the-docs-if-there-are-changes-are-in-t.patch:\nThis patch creates another task named 'Building the Docs' and moves\nthe docs build script from the 'CompilerWarnings' task to this task.\nThis building the docs task only runs if there are changes in the docs\n(under the doc/**) or in the CI files ('.cirrus.yml',\n'.cirrus.tasks.yml') and if a specific OS is not requested.\n\nv1-0002-Just-run-the-Build-the-Docs-task-if-the-changes-a.patch:\nThis patch adds that: if the changes are *only* in the docs (under the\ndoc/**), *only* run building the docs task.\n\nIn summary:\n1- If the changes are not in the docs: Don't run build the docs task.\n2- If the changes are in the docs or in the CI files: Run build the docs task.\n3- If the changes are only in the docs: Only run build the docs task.\n4- If 'ci-os-only:' is set (there could be changes in the docs): Don't\nrun build the docs task.\n\nAny kind of feedback would be appreciated.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Thu, 7 Sep 2023 19:06:57 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Build the docs if there are changes in docs and don't run other tasks\n if the changes are only in docs"
},
{
"msg_contents": "> On 7 Sep 2023, at 18:06, Nazir Bilal Yavuz <[email protected]> wrote:\n\n> if the changes are only in the docs, don't run\n> all tasks except building the docs task; this could help to save more\n> CI times.\n\nA related idea for docs in order to save CI time: if the changes are only in\ninternal docs, ie README files, then don't run any tasks at all. Looking at\nsrc/backend/parser/README the last two commits only touched that file, and\nwhile such patches might not be all that common, spending precious CI resources\non them seems silly if we can avoid it.\n\nIt doesn't have to be included in this, just wanted to bring it up as it's\nrelated.\n\n> I attached two patches.\n> \n> I assumed that the docs related changes are limited with the changes\n> in the docs folder but I am not really sure about that.\n\nAlmost, but not entirely. There are a set of scripts which generate content\nfor the docs based on files in src/, like src/backend/catalog/sql_features.txt\nand src/include/parser/kwlist.h. If those source files change, or their\nscripts, it would be helpful to build docs.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 8 Sep 2023 10:05:11 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
    "msg_contents": "Hi,\n\nThanks for the reply!\n\nOn Fri, 8 Sept 2023 at 11:05, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 7 Sep 2023, at 18:06, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> > if the changes are only in the docs, don't run\n> > all tasks except building the docs task; this could help to save more\n> > CI times.\n>\n> A related idea for docs in order to save CI time: if the changes are only in\n> internal docs, ie README files, then don't run any tasks at all. Looking at\n> src/backend/parser/README the last two commits only touched that file, and\n> while such patches might not be all that common, spending precious CI resources\n> on them seems silly if we can avoid it.\n>\n> It doesn't have to be included in this, just wanted to bring it up as it's\n> related.\n\nI liked the idea; I am planning to edit the 0002 patch. CI won't run\nany tasks if the changes are only in the README files.\n\n> Almost, but not entirely. There are a set of scripts which generate content\n> for the docs based on files in src/, like src/backend/catalog/sql_features.txt\n> and src/include/parser/kwlist.h. If those source files change, or their\n> scripts, it would be helpful to build docs.\n\nThanks! Are these the only files that are not in the doc subfolders\nbut affect docs?\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Mon, 11 Sep 2023 14:03:13 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
"msg_contents": "> On 11 Sep 2023, at 13:03, Nazir Bilal Yavuz <[email protected]> wrote:\n\n>> Almost, but not entirely. There are a set of scripts which generate content\n>> for the docs based on files in src/, like src/backend/catalog/sql_features.txt\n>> and src/include/parser/kwlist.h. If those source files change, or their\n>> scripts, it would be helpful to build docs.\n> \n> Thanks! Are these the only files that are not in the doc subfolders\n> but effect docs?\n\nI believe so, the list of scripts and input files can be teased out of the\ndoc/src/sgml/meson.build file.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 11 Sep 2023 14:11:13 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
"msg_contents": "Hi,\n\nI attached the second version of the patch.\n\nOn Mon, 11 Sept 2023 at 15:11, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 11 Sep 2023, at 13:03, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> >> Almost, but not entirely. There are a set of scripts which generate content\n> >> for the docs based on files in src/, like src/backend/catalog/sql_features.txt\n> >> and src/include/parser/kwlist.h. If those source files change, or their\n> >> scripts, it would be helpful to build docs.\n> >\n> > Thanks! Are these the only files that are not in the doc subfolders\n> > but effect docs?\n>\n> I believe so, the list of scripts and input files can be teased out of the\n> doc/src/sgml/meson.build file.\n\nNow the files mentioned in the doc/src/sgml/meson.build file will\ntrigger building the docs task. Also, if the changes are only in the\nREADME files, CI will be skipped completely.\n\nAny kind of feedback would be appreciated.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Mon, 25 Sep 2023 14:56:28 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
"msg_contents": "On 25.09.23 12:56, Nazir Bilal Yavuz wrote:\n> + # Only run if a specific OS is not requested and if there are changes in docs\n> + # or in the CI files.\n> + skip: >\n> + $CIRRUS_CHANGE_MESSAGE =~ '.*\\nci-os-only:.*' ||\n> + !changesInclude('doc/**',\n> + '.cirrus.yml',\n> + '.cirrus.tasks.yml',\n> + 'src/backend/catalog/sql_feature_packages.txt',\n> + 'src/backend/catalog/sql_features.txt',\n> + 'src/backend/utils/errcodes.txt',\n> + 'src/backend/utils/activity/wait_event_names.txt',\n> + 'src/backend/utils/activity/generate-wait_event_types.pl',\n> + 'src/include/parser/kwlist.h')\n\nThis is kind of annoying. Now we need to maintain yet another list of \nthese dependencies and keep it in sync with the build systems.\n\nI think meson can produce a dependency tree from a build. Maybe we \ncould use that somehow and have Cirrus cache it between runs?\n\nAlso note that there are also dependencies in the other direction. For \nexample, the psql help is compiled from XML DocBook sources. So your \nother patch would also need to include similar changesInclude() clauses.\n\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 11:48:17 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
"msg_contents": "Hi,\n\nOn Tue, 26 Sept 2023 at 13:48, Peter Eisentraut <[email protected]> wrote:\n>\n> On 25.09.23 12:56, Nazir Bilal Yavuz wrote:\n> > + # Only run if a specific OS is not requested and if there are changes in docs\n> > + # or in the CI files.\n> > + skip: >\n> > + $CIRRUS_CHANGE_MESSAGE =~ '.*\\nci-os-only:.*' ||\n> > + !changesInclude('doc/**',\n> > + '.cirrus.yml',\n> > + '.cirrus.tasks.yml',\n> > + 'src/backend/catalog/sql_feature_packages.txt',\n> > + 'src/backend/catalog/sql_features.txt',\n> > + 'src/backend/utils/errcodes.txt',\n> > + 'src/backend/utils/activity/wait_event_names.txt',\n> > + 'src/backend/utils/activity/generate-wait_event_types.pl',\n> > + 'src/include/parser/kwlist.h')\n>\n> This is kind of annoying. Now we need to maintain yet another list of\n> these dependencies and keep it in sync with the build systems.\n\nI agree.\n\n>\n> I think meson can produce a dependency tree from a build. Maybe we\n> could use that somehow and have Cirrus cache it between runs?\n\nI will check that.\n\n>\n> Also note that there are also dependencies in the other direction. For\n> example, the psql help is compiled from XML DocBook sources. So your\n> other patch would also need to include similar changesInclude() clauses.\n>\n\nIf there are more cases like this, it may not be worth it. Instead, we can just:\n\n- Build the docs when the doc related files are changed (This still\ncreates a dependency like you said).\n\n- Skip CI completely if the README files are changed.\n\nWhat are your opinions on these?\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 26 Sep 2023 17:51:37 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
"msg_contents": "On 26.09.23 16:51, Nazir Bilal Yavuz wrote:\n>> Also note that there are also dependencies in the other direction. For\n>> example, the psql help is compiled from XML DocBook sources. So your\n>> other patch would also need to include similar changesInclude() clauses.\n> \n> If there are more cases like this, it may not be worth it. Instead, we can just:\n> \n> - Build the docs when the doc related files are changed (This still\n> creates a dependency like you said).\n> \n> - Skip CI completely if the README files are changed.\n> \n> What are your opinions on these?\n\nI don't have a good sense of what you are trying to optimize for. If \nit's the mainline build-on-every-commit type, then I wonder how many \ncommits would really be affected by this. Like, how many commits touch \nonly a README file. If it's for things like the cfbot, then I think the \ntime-triggered builds would be more frequent than new patch versions, so \nI don't know if these kinds of optimizations would affect anything.\n\n\n\n",
"msg_date": "Fri, 6 Oct 2023 10:21:12 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I don't have a good sense of what you are trying to optimize for. If \n> it's the mainline build-on-every-commit type, then I wonder how many \n> commits would really be affected by this. Like, how many commits touch \n> only a README file. If it's for things like the cfbot, then I think the \n> time-triggered builds would be more frequent than new patch versions, so \n> I don't know if these kinds of optimizations would affect anything.\n\nAs a quick cross-check, I searched our commit log to see how many\nREADME-only commits there were so far this year. I found 11 since\nJanuary. (Several were triggered by the latest round of pgindent\ncode and process changes, so maybe this is more than typical.)\n\nNot sure what that tells us about the value of changing the CI\nlogic, but it does seem like it could be worth the one-liner\nchange needed to teach buildfarm animals to ignore READMEs.\n\n-\ttrigger_exclude => qr[^doc/|\\.po$],\n+\ttrigger_exclude => qr[^doc/|/README$|\\.po$],\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Oct 2023 10:07:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
"msg_contents": "Hi,\n\nSorry for the late reply.\n\nOn Fri, 6 Oct 2023 at 17:07, Tom Lane <[email protected]> wrote:\n>\n> As a quick cross-check, I searched our commit log to see how many\n> README-only commits there were so far this year. I found 11 since\n> January. (Several were triggered by the latest round of pgindent\n> code and process changes, so maybe this is more than typical.)\n>\n> Not sure what that tells us about the value of changing the CI\n> logic, but it does seem like it could be worth the one-liner\n> change needed to teach buildfarm animals to ignore READMEs.\n\nI agree that it could be worth implementing this logic on buildfarm animals.\n\nIn case we want to implement the same logic on the CI, I added a new\nversion of the patch; it skips CI completely if the changes are only\nin the README files.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Thu, 14 Dec 2023 16:40:29 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
"msg_contents": "> On 14 Dec 2023, at 14:40, Nazir Bilal Yavuz <[email protected]> wrote:\n> On Fri, 6 Oct 2023 at 17:07, Tom Lane <[email protected]> wrote:\n\n>> Not sure what that tells us about the value of changing the CI\n>> logic, but it does seem like it could be worth the one-liner\n>> change needed to teach buildfarm animals to ignore READMEs.\n> \n> I agree that it could be worth implementing this logic on buildfarm animals.\n> \n> In case we want to implement the same logic on the CI, I added a new\n> version of the patch; it skips CI completely if the changes are only\n> in the README files.\n\nI think it makes sense to avoid wasting CI cycles on commits only changing\nREADME files since we know it will be a no-op. A README documentation patch\ngoing through N revisions will incur at least N full CI runs which are\nresources we can spend on other things. So +1 for doing this both in CI and in\nthe buildfarm.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 17 Jan 2024 15:49:44 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
"msg_contents": "\nOn 2023-10-06 Fr 10:07, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> I don't have a good sense of what you are trying to optimize for. If\n>> it's the mainline build-on-every-commit type, then I wonder how many\n>> commits would really be affected by this. Like, how many commits touch\n>> only a README file. If it's for things like the cfbot, then I think the\n>> time-triggered builds would be more frequent than new patch versions, so\n>> I don't know if these kinds of optimizations would affect anything.\n> As a quick cross-check, I searched our commit log to see how many\n> README-only commits there were so far this year. I found 11 since\n> January. (Several were triggered by the latest round of pgindent\n> code and process changes, so maybe this is more than typical.)\n>\n> Not sure what that tells us about the value of changing the CI\n> logic, but it does seem like it could be worth the one-liner\n> change needed to teach buildfarm animals to ignore READMEs.\n>\n> -\ttrigger_exclude => qr[^doc/|\\.po$],\n> +\ttrigger_exclude => qr[^doc/|/README$|\\.po$],\n>\n> \t\t\t\n\n\n\nI've put that in the sample config file for the next release.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 17 Jan 2024 11:46:24 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
"msg_contents": "On 14.12.23 14:40, Nazir Bilal Yavuz wrote:\n> On Fri, 6 Oct 2023 at 17:07, Tom Lane <[email protected]> wrote:\n>>\n>> As a quick cross-check, I searched our commit log to see how many\n>> README-only commits there were so far this year. I found 11 since\n>> January. (Several were triggered by the latest round of pgindent\n>> code and process changes, so maybe this is more than typical.)\n>>\n>> Not sure what that tells us about the value of changing the CI\n>> logic, but it does seem like it could be worth the one-liner\n>> change needed to teach buildfarm animals to ignore READMEs.\n> \n> I agree that it could be worth implementing this logic on buildfarm animals.\n> \n> In case we want to implement the same logic on the CI, I added a new\n> version of the patch; it skips CI completely if the changes are only\n> in the README files.\n\nI don't see how this could be applicable widely enough to be useful:\n\n- While there are some patches that touch on README files, very few of \nthose end up in a commit fest.\n\n- If someone manually pushes a change to their own CI environment, I \ndon't see why we need to second-guess that.\n\n- Buildfarm runs generally batch several commits together, so it is very \nunlikely that this would be hit.\n\nI think unless some concrete reason for this change can be shown, we \nshould drop it.\n\n\n\n",
"msg_date": "Sun, 12 May 2024 13:53:17 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
},
{
    "msg_contents": "Hi,\n\nOn Sun, 12 May 2024 at 14:53, Peter Eisentraut <[email protected]> wrote:\n>\n> On 14.12.23 14:40, Nazir Bilal Yavuz wrote:\n> > On Fri, 6 Oct 2023 at 17:07, Tom Lane <[email protected]> wrote:\n> >>\n> >> As a quick cross-check, I searched our commit log to see how many\n> >> README-only commits there were so far this year. I found 11 since\n> >> January. (Several were triggered by the latest round of pgindent\n> >> code and process changes, so maybe this is more than typical.)\n> >>\n> >> Not sure what that tells us about the value of changing the CI\n> >> logic, but it does seem like it could be worth the one-liner\n> >> change needed to teach buildfarm animals to ignore READMEs.\n> >\n> > I agree that it could be worth implementing this logic on buildfarm animals.\n> >\n> > In case we want to implement the same logic on the CI, I added a new\n> > version of the patch; it skips CI completely if the changes are only\n> > in the README files.\n>\n> I don't see how this could be applicable widely enough to be useful:\n>\n> - While there are some patches that touch on README files, very few of\n> those end up in a commit fest.\n>\n> - If someone manually pushes a change to their own CI environment, I\n> don't see why we need to second-guess that.\n>\n> - Buildfarm runs generally batch several commits together, so it is very\n> unlikely that this would be hit.\n>\n> I think unless some concrete reason for this change can be shown, we\n> should drop it.\n\nThese points make sense. I thought it was useful given Tom's\n'11 README-only commits since January' analysis (from 6 Oct 2023), but\nthat may not be enough on its own. If there are no objections, I will\nwithdraw this soon.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Wed, 15 May 2024 14:28:06 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Build the docs if there are changes in docs and don't run other\n tasks if the changes are only in docs"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI would like to propose a patch that allows administrators to disable\n`ALTER SYSTEM` via either a runt-time option to pass to the Postgres server\nprocess at startup (e.g. `--disable-alter-system=true`, false by default)\nor a new GUC (or even both), without changing the current default method of\nthe server.\n\nThe main reason is that this would help improve the “security by default”\nposture of Postgres in a Kubernetes/Cloud Native environment - and, in\ngeneral, in any environment on VMs/bare metal behind a configuration\nmanagement system in which changes should only be made in a declarative way\nand versioned like Ansible Tower, to cite one.\n\nBelow you find some background information and the longer story behind this\nproposal.\n\nSticking to the Kubernetes use case, I am primarily speaking on behalf of\nthe CloudNativePG open source operator (cloudnative-pg.io, of which I am\none of the maintainers). However, I am sure that this option could benefit\nany operator for Postgres - an operator is the most common and recommended\nway to run a complex application like a PostgreSQL database management\nsystem inside Kubernetes.\n\nIn this case, the state of a PostgreSQL cluster (for example its number of\nreplicas, configuration, storage, etc.) is defined in a Custom Resource\nDefinition in the form of configuration, typically YAML, and the operator\nworks with Kubernetes to ensure that, at any moment, the requested Postgres\ncluster matches the observed one. This is a very basic example in\nCloudNativePG:\nhttps://cloudnative-pg.io/documentation/current/samples/cluster-example.yaml\n\nAs I was mentioning above, in a Cloud Native environment it is expected\nthat workloads are secure by default. Without going into much detail, many\ndecisions have been made in that direction by operators for Postgres,\nincluding CloudNativePG. 
The goal of this proposal is to provide a way to\nensure that changes to the PostgreSQL configuration in a Kubernetes\ncontrolled Postgres cluster are allowed only through the Kubernetes API.\n\nBasically, if you want to change an option for PostgreSQL, you need to\nchange the desired state in the Kubernetes resource, then Kubernetes will\nconverge (through the operator). In simple words, it’s like empowering the\noperator to impersonate the PostgreSQL superuser.\n\nHowever, given that we cannot force this use case, there could be roles\nwith the login+superuser privileges connecting to the PostgreSQL instance\nand potentially “interfering” with the requested state defined in the\nconfiguration by imperatively running “ALTER SYSTEM” commands.\n\nFor example: CloudNativePG has a fixed value for some GUCs in order to\nmanage a full HA cluster, including SSL, log, some WAL and replication\nsettings. While the operator eventually reconciles those settings, even the\ntemporary change of those settings in a cluster might be harmful. Think for\nexample of a user that, through `ALTER SYSTEM`, tries to change WAL level\nto minimal, or change the setting of the log (we require CSV), potentially\ncreating issues to the underlying instance and cluster (potentially leaving\nit in an unrecoverable state in the case of other more invasive GUCS).\n\nAt the moment, a possible workaround is that `ALTER SYSTEM` can be blocked\nby making the postgresql.auto.conf read only, but the returned message is\nmisleading and that’s certainly bad user experience (which is very\nimportant in a cloud native environment):\n\n```\npostgres=# ALTER SYSTEM SET wal_level TO minimal;\nERROR: could not open file \"postgresql.auto.conf\": Permission denied\n```\n\nFor this reason, I would like to propose the option to be given to the\npostgres process at startup, in order to be as minimally invasive as possible\n(the operator could then start Postgres requesting `ALTER SYSTEM` to be\ndisabled). That’d be my preference at the moment, if possible.\n\nAlternatively, or in addition, the introduction of a GUC to disable `ALTER\nSYSTEM` altogether. This enables tuning this setting through configuration\nat the Kubernetes level, only if the operators require it - without\ndamaging the rest of the users.\n\nBefore I start writing any lines of code and propose a patch, I would like\nfirst to understand if there’s room for it.\n\nThanks for your attention and … looking forward to your feedback!\n\nCiao,\nGabriele\n--\nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com",
"msg_date": "Thu, 7 Sep 2023 21:51:14 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Possibility to disable `ALTER SYSTEM`"
},
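The declarative convergence model Gabriele describes (desired state outside the database, an operator reconciling drift) can be illustrated with a toy sketch. Everything here is illustrative: a real operator acts through the Kubernetes API and signals the server to reload, not on loose local files.

```shell
# Toy sketch of declarative reconciliation: a desired-state file is the
# source of truth; any imperative drift in the live config (e.g. via
# ALTER SYSTEM) is overwritten on the next reconcile pass.
desired=$(mktemp); live=$(mktemp)
printf 'wal_level = replica\n' > "$desired"
printf 'wal_level = minimal\n' > "$live"   # imperative drift
if ! cmp -s "$desired" "$live"
then cp "$desired" "$live"                 # reconcile: desired state wins
     # a real operator would now run something like: pg_ctl reload -D "$PGDATA"
fi
cmp -s "$desired" "$live" && echo reconciled
rm -f "$desired" "$live"
```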
{
"msg_contents": "On 9/7/23 15:51, Gabriele Bartolini wrote:\n> I would like to propose a patch that allows administrators to disable \n> `ALTER SYSTEM` via either a runt-time option to pass to the Postgres \n> server process at startup (e.g. `--disable-alter-system=true`, false by \n> default) or a new GUC (or even both), without changing the current \n> default method of the server.\n\nWithout trying to debate the particulars, a huge +1 for the concept of \nallowing ALTER SYSTEM to be disabled. FWIW I would vote the same for \ncontrol over COPY PROGRAM.\n\nNot coincidentally both concepts were built into set_user: \nhttps://github.com/pgaudit/set_user\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Thu, 7 Sep 2023 15:57:22 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi Joe,\n\nOn Thu, 7 Sept 2023 at 21:57, Joe Conway <[email protected]> wrote:\n\n> Without trying to debate the particulars, a huge +1 for the concept of\n> allowing ALTER SYSTEM to be disabled. FWIW I would vote the same for\n> control over COPY PROGRAM.\n>\n\nOh ... another one of my favourite security friendly features! :)\n\nThat sounds like a good idea to me.\n\nThanks,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com\n\nHi Joe,On Thu, 7 Sept 2023 at 21:57, Joe Conway <[email protected]> wrote:Without trying to debate the particulars, a huge +1 for the concept of \nallowing ALTER SYSTEM to be disabled. FWIW I would vote the same for \ncontrol over COPY PROGRAM.Oh ... another one of my favourite security friendly features! :)That sounds like a good idea to me.Thanks,Gabriele-- Gabriele BartoliniVice President, Cloud Native at EDBenterprisedb.com",
"msg_date": "Thu, 7 Sep 2023 22:03:16 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Gabriele Bartolini <[email protected]> writes:\n> I would like to propose a patch that allows administrators to disable\n> `ALTER SYSTEM` via either a runt-time option to pass to the Postgres server\n> process at startup (e.g. `--disable-alter-system=true`, false by default)\n> or a new GUC (or even both), without changing the current default method of\n> the server.\n\nALTER SYSTEM is already heavily restricted. I don't think we need random\nkluges added to the permissions system. I especially don't believe in\nkluges to the effect of \"superuser doesn't have all permissions anymore\".\n\nIf you nonetheless feel that that's a good idea for your use case,\nyou can implement the restriction with an event trigger or the like.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Sep 2023 16:27:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi Tom,\n\nOn Thu, 7 Sept 2023 at 22:27, Tom Lane <[email protected]> wrote:\n\n> Gabriele Bartolini <[email protected]> writes:\n> > I would like to propose a patch that allows administrators to disable\n> > `ALTER SYSTEM` via either a runt-time option to pass to the Postgres\n> server\n> > process at startup (e.g. `--disable-alter-system=true`, false by default)\n> > or a new GUC (or even both), without changing the current default method\n> of\n> > the server.\n>\n> ALTER SYSTEM is already heavily restricted.\n\n\nCould you please help me better understand what you mean here?\n\n\n> I don't think we need random kluges added to the permissions system.\n\n\nIf you allow me, why do you think disabling ALTER SYSTEM altogether is a\nrandom kluge? Again, I'd like to better understand this position. I've\npersonally been in many conversations on the security side of things for\nPostgres in Kubernetes environments, and this is a frequent concern by\nusers who request that changes to the Postgres system (not a database)\nshould only be done declaratively and prevented from within the system.\n\nThanks,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com\n\nHi Tom,On Thu, 7 Sept 2023 at 22:27, Tom Lane <[email protected]> wrote:Gabriele Bartolini <[email protected]> writes:\n> I would like to propose a patch that allows administrators to disable\n> `ALTER SYSTEM` via either a runt-time option to pass to the Postgres server\n> process at startup (e.g. `--disable-alter-system=true`, false by default)\n> or a new GUC (or even both), without changing the current default method of\n> the server.\n\nALTER SYSTEM is already heavily restricted.Could you please help me better understand what you mean here? I don't think we need random kluges added to the permissions system.If you allow me, why do you think disabling ALTER SYSTEM altogether is a random kluge? Again, I'd like to better understand this position. 
I've personally been in many conversations on the security side of things for Postgres in Kubernetes environments, and this is a frequent concern by users who request that changes to the Postgres system (not a database) should only be done declaratively and prevented from within the system.Thanks,Gabriele-- Gabriele BartoliniVice President, Cloud Native at EDBenterprisedb.com",
"msg_date": "Fri, 8 Sep 2023 13:31:16 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Fri, 8 Sept 2023 at 10:03, Gabriele Bartolini <\[email protected]> wrote:\n\n\n> ALTER SYSTEM is already heavily restricted.\n>\n>\n> Could you please help me better understand what you mean here?\n>\n>\n>> I don't think we need random kluges added to the permissions system.\n>\n>\n> If you allow me, why do you think disabling ALTER SYSTEM altogether is a\n> random kluge? Again, I'd like to better understand this position. I've\n> personally been in many conversations on the security side of things for\n> Postgres in Kubernetes environments, and this is a frequent concern by\n> users who request that changes to the Postgres system (not a database)\n> should only be done declaratively and prevented from within the system.\n>\n\nAlternate idea, not sure how good this is: Use existing OS security\nfeatures (regular permissions, or more modern features such as the\nimmutable attribute) to mark the postgresql.auto.conf file as not being\nwriteable. Then any attempt to ALTER SYSTEM should result in an error.\n\nOn Fri, 8 Sept 2023 at 10:03, Gabriele Bartolini <[email protected]> wrote: \nALTER SYSTEM is already heavily restricted.Could you please help me better understand what you mean here? I don't think we need random kluges added to the permissions system.If you allow me, why do you think disabling ALTER SYSTEM altogether is a random kluge? Again, I'd like to better understand this position. I've personally been in many conversations on the security side of things for Postgres in Kubernetes environments, and this is a frequent concern by users who request that changes to the Postgres system (not a database) should only be done declaratively and prevented from within the system.Alternate idea, not sure how good this is: Use existing OS security features (regular permissions, or more modern features such as the immutable attribute) to mark the postgresql.auto.conf file as not being writeable. 
Then any attempt to ALTER SYSTEM should result in an error.",
"msg_date": "Fri, 8 Sep 2023 10:11:30 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
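The OS-level workaround Isaac suggests can be sketched as follows, simulated on a throwaway file rather than a live cluster. The path is illustrative (a real cluster uses $PGDATA/postgresql.auto.conf), and the `chattr` step assumes a Linux filesystem that supports the immutable attribute.

```shell
# Minimal sketch of the file-permission workaround: strip the write bits so
# ordinary writes -- including the one ALTER SYSTEM performs -- fail with
# "Permission denied" at the filesystem level.
dir=$(mktemp -d)
conf="$dir/postgresql.auto.conf"
touch "$conf"
chmod 0400 "$conf"        # owner read-only
stat -c %a "$conf"        # reports the new mode: 400
# On supporting Linux filesystems, the immutable attribute goes further and
# blocks even root (needs CAP_LINUX_IMMUTABLE):
#   chattr +i "$conf"     # undo with: chattr -i "$conf"
rm -rf "$dir"
```

As discussed in the thread, this blocks the write but yields the misleading "could not open file" error rather than a clear "ALTER SYSTEM is disabled" message.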
{
"msg_contents": "Hi Isaac,\n\nOn Fri, 8 Sept 2023 at 16:11, Isaac Morland <[email protected]> wrote:\n\n> Alternate idea, not sure how good this is: Use existing OS security\n> features (regular permissions, or more modern features such as the\n> immutable attribute) to mark the postgresql.auto.conf file as not being\n> writeable. Then any attempt to ALTER SYSTEM should result in an error.\n>\n\nThat is the point I highlighted in the initial post in the thread. We could\nmake it readonly, but the returned error is misleading and definitely poor\nUX:\n\n```\npostgres=# ALTER SYSTEM SET wal_level TO minimal;\nERROR: could not open file \"postgresql.auto.conf\": Permission denied\n```\n\nIMO we should clearly state that `ALTER SYSTEM` is deliberately disabled in\na system, rather than indirectly hinting it through an inaccessible file.\nNot sure if I am clearly highlighting the fine difference here.\n\nThanks,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com\n\nHi Isaac,On Fri, 8 Sept 2023 at 16:11, Isaac Morland <[email protected]> wrote:Alternate idea, not sure how good this is: Use existing OS security features (regular permissions, or more modern features such as the immutable attribute) to mark the postgresql.auto.conf file as not being writeable. Then any attempt to ALTER SYSTEM should result in an error.\nThat is the point I highlighted in the initial post in the thread. We could make it readonly, but the returned error is misleading and definitely poor UX:```postgres=# ALTER SYSTEM SET wal_level TO minimal;ERROR: could not open file \"postgresql.auto.conf\": Permission denied```IMO we should clearly state that `ALTER SYSTEM` is deliberately disabled in a system, rather than indirectly hinting it through an inaccessible file. Not sure if I am clearly highlighting the fine difference here.Thanks,Gabriele-- Gabriele BartoliniVice President, Cloud Native at EDBenterprisedb.com",
"msg_date": "Fri, 8 Sep 2023 16:17:04 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On 2023-Sep-08, Gabriele Bartolini wrote:\n\n> That is the point I highlighted in the initial post in the thread. We could\n> make it readonly, but the returned error is misleading and definitely poor\n> UX:\n> \n> ```\n> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> ERROR: could not open file \"postgresql.auto.conf\": Permission denied\n> ```\n> \n> IMO we should clearly state that `ALTER SYSTEM` is deliberately disabled in\n> a system, rather than indirectly hinting it through an inaccessible file.\n> Not sure if I am clearly highlighting the fine difference here.\n\nCome on. This is not a \"fine difference\" -- it's the difference between\na crummy hack and a real implementation of an important system\nrestriction.\n\nI don't understand Tom's resistance to this request. I understand the\nuse case and I agree with Gabriele that this is a very simple code\nchange(*) that Postgres could make to help it get run better in a\ndifferent kind of environment than what we're accustomed to.\n\nI've read that containers people consider services in a different light\nthan how we've historically seen them; they say \"cattle, not pets\".\nThis affects the way you think about these services. postgresql.conf\n(all the PG configuration, really) is just a derived file from an\noverall system description that lives outside the database server. You\nno longer feed your PG servers one by one, but rather they behave as a\nherd, and the herder is some container supervisor (whatever it's called).\n\nEnsuring that the configuration state cannot change from within is\nimportant to maintain the integrity of the service. If the user wants\nto change things, the tools to do that are operated from outside; this\nlets things like ancillary services to be kept in sync (say, start a\nreplica here, or a backup system there, or WAL archival/collection is\nhandled in this or that way). 
If users are allowed to change the config\nfrom within they break things, and the supervisor program can't put\nthings together again.\n\n\nI did not like the mention of COPY PROGRAM, though, and in principle I\ndo not support the idea of treating it the same way as ALTER SYSTEM. If\npeople are using that to write into postgresql.conf, then they deserve\nall the hell they get when their containers break. I think trying to\nrestrict this outside of the privilege system is going to be more of a\nwart than ALTER SYSTEM.\n\n\n(*) To be proven. Let's see the patch.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nSyntax error: function hell() needs an argument.\nPlease choose what hell you want to involve.\n\n\n",
"msg_date": "Fri, 8 Sep 2023 16:55:47 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I don't understand Tom's resistance to this request.\n\nIt's false security. If you think you are going to prevent a superuser\nfrom messing with the system's configuration, you are going to need a\nlot more restrictions than this, and we'll be forever getting security\nreports that \"hey, I found another way for a superuser to get filesystem\naccess\". I think the correct answer to this class of problems is \"don't\ngive superuser privileges to clients running inside the container\".\n\n> I did not like the mention of COPY PROGRAM, though, and in principle I\n> do not support the idea of treating it the same way as ALTER SYSTEM.\n\nIt's one of the easiest ways to modify postgresql.conf from SQL. If you\ndon't block that off, the feature is certainly not secure. (But of\ncourse, there are more ways.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Sep 2023 11:31:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi Tom and Alvaro,\n\nOn Fri, 8 Sept 2023 at 17:31, Tom Lane <[email protected]> wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n> > I don't understand Tom's resistance to this request.\n>\n> It's false security. If you think you are going to prevent a superuser\n> from messing with the system's configuration, you are going to need a\n> lot more restrictions than this, and we'll be forever getting security\n> reports that \"hey, I found another way for a superuser to get filesystem\n> access\". I think the correct answer to this class of problems is \"don't\n> give superuser privileges to clients running inside the container\".\n>\n\nOk, this is clearer. That makes sense now, and this probably helps me\nexplain better the goal here. I also omitted in the initial email all the\nsecurity precautions that a Kubernetes should take. This could be another\nstep towards that direction but, you are right, it won't fix it entirely\n(in case of malicious superusers).\n\nIn my opinion, the biggest benefit of this possibility is on the usability\nside, providing a clear and configurable way to disable ALTER SYSTEM in\nthose environments where declarative configuration is a requirement. For\nexample, this should at least \"warn\" human beings that have the permissions\nto connect to a Postgres database (think of SREs managing a DBaaS solution\nor a DBA) and try to change a setting in an instance. Moreover, for those\nwho are managing through declarative configuration not only one instance,\nbut a Postgres cluster that controls standby instances too, the benefit of\nimpeding these modifications could be even higher (think of the hot standby\nsensitive parameters like max_connections that require coordination\ndepending whether you increase or decrease them).\n\nI hope this is clearer. For what it's worth, I have done a basic PoC patch\n(roughly 20 lines of code), which I have attached here just to provide some\nbasis for further analysis and comments. 
The general idea is to disable\nALTER SYSTEM at startup, like this:\n\npg_ctl start -o \"-c enable_alter_system=off\"\n\n\nThe setting can be verified with:\n\npsql -c 'SHOW enable_alter_system'\n enable_alter_system\n---------------------\n off\n(1 row)\n\n\nAnd then:\n\npsql -c 'ALTER SYSTEM SET max_connections TO 10'\nERROR: permission denied to run ALTER SYSTEM\n\n\nThanks for your attention and looking forward to getting feedback and\nadvice.\n\nCheers,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com",
"msg_date": "Fri, 8 Sep 2023 23:31:50 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 5:31 PM Tom Lane <[email protected]> wrote:\n>\n> Alvaro Herrera <[email protected]> writes:\n> > I don't understand Tom's resistance to this request.\n>\n> It's false security. If you think you are going to prevent a superuser\n> from messing with the system's configuration, you are going to need a\n> lot more restrictions than this, and we'll be forever getting security\n> reports that \"hey, I found another way for a superuser to get filesystem\n> access\". I think the correct answer to this class of problems is \"don't\n> give superuser privileges to clients running inside the container\".\n\n+1. And to make that happen, the appropriate thing is to identify\n*why* they are using superuser today, and focus efforts on finding\nways for them to do that without being superuser.\n\n\n> > I did not like the mention of COPY PROGRAM, though, and in principle I\n> > do not support the idea of treating it the same way as ALTER SYSTEM.\n>\n> It's one of the easiest ways to modify postgresql.conf from SQL. If you\n> don't block that off, the feature is certainly not secure. (But of\n> course, there are more ways.)\n\nIt's easier to just create a function in an untrusted language. Same\nprinciple. Once you're superuser, you can break through anything.\n\nWe need a \"allowlist\" of things a user can do, rather than a blocklist\nof \"they can do everything they can possibly think of and a computer\nis capable of doing, except for this one specific thing\". Blocklisting\nindividual permissions of a superuser will never be secure.\n\nNow, it might be that you don't care at all about the *security* side\nof the feature, and only care about the convenience side. But in that\ncase, the original suggestion from Tom of using an even trigger seems\nlike a fine enough solution?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 8 Sep 2023 23:43:08 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
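Magnus's allowlist-versus-blocklist point can be illustrated with a trivial sketch. The action names are purely illustrative; this shows the principle, not any PostgreSQL mechanism.

```shell
# Allowlist principle: enumerate what is permitted and deny everything else,
# instead of trying to blocklist each dangerous capability one by one.
allowed="SELECT INSERT UPDATE"
is_allowed() {
    for a in $allowed; do
        [ "$a" = "$1" ] && return 0
    done
    return 1
}
is_allowed "SELECT"       && echo "SELECT: permitted"
is_allowed "ALTER SYSTEM" || echo "ALTER SYSTEM: denied (not on the list)"
```

With a blocklist, forgetting one capability (COPY PROGRAM, an untrusted PL, ...) leaves a hole; with an allowlist, an unforeseen capability is denied by default.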
{
"msg_contents": "On 7/9/23 21:51, Gabriele Bartolini wrote:\n> Hi everyone,\n>\n> I would like to propose a patch that allows administrators to disable \n> `ALTER SYSTEM` via either a runt-time option to pass to the Postgres \n> server process at startup (e.g. `--disable-alter-system=true`, false \n> by default) or a new GUC (or even both), without changing the current \n> default method of the server.\n>\n> The main reason is that this would help improve the “security by \n> default” posture of Postgres in a Kubernetes/Cloud Native environment \n> - and, in general, in any environment on VMs/bare metal behind a \n> configuration management system in which changes should only be made \n> in a declarative way and versioned like Ansible Tower, to cite one.\n>\n> Below you find some background information and the longer story behind \n> this proposal.\n>\n> Sticking to the Kubernetes use case, I am primarily speaking on behalf \n> of the CloudNativePG open source operator (cloudnative-pg.io \n> <http://cloudnative-pg.io>, of which I am one of the maintainers). \n> However, I am sure that this option could benefit any operator for \n> Postgres - an operator is the most common and recommended way to run a \n> complex application like a PostgreSQL database management system \n> inside Kubernetes.\n>\n> In this case, the state of a PostgreSQL cluster (for example its \n> number of replicas, configuration, storage, etc.) is defined in a \n> Custom Resource Definition in the form of configuration, typically \n> YAML, and the operator works with Kubernetes to ensure that, at any \n> moment, the requested Postgres cluster matches the observed one. This \n> is a very basic example in CloudNativePG: \n> https://cloudnative-pg.io/documentation/current/samples/cluster-example.yaml\n>\n> As I was mentioning above, in a Cloud Native environment it is \n> expected that workloads are secure by default. 
Without going into much \n> detail, many decisions have been made in that direction by operators \n> for Postgres, including CloudNativePG. The goal of this proposal is to \n> provide a way to ensure that changes to the PostgreSQL configuration \n> in a Kubernetes controlled Postgres cluster are allowed only through \n> the Kubernetes API.\n>\n> Basically, if you want to change an option for PostgreSQL, you need to \n> change the desired state in the Kubernetes resource, then Kubernetes \n> will converge (through the operator). In simple words, it’s like \n> empowering the operator to impersonate the PostgreSQL superuser.\n>\n\n Coming from a similar background to Gabriele's, I support this \nproposal.\n\n In StackGres (https://stackgres.io) we also allow users to manage \npostgresql.conf's configuration declaratively. We have a CRD (Custom \nResource Definition) that precisely defines and controls how a \npostgresql.conf configuration looks like (see \nhttps://stackgres.io/doc/latest/reference/crd/sgpgconfig/). This \nconfiguration, once created by the user, is strongly validated by \nStackGres (parameters are valid for the given major version, values are \nwithin the ranges and appropriate types) and then periodically applied \nto the database if there's any drift between that user-declared \n(desired) state and current system status.\n\n Similarly, we also have some parameters which the user is not \nallowed to change \n(https://gitlab.com/ongresinc/stackgres/-/blob/main/stackgres-k8s/src/operator/src/main/resources/postgresql-blocklist.properties). 
\nIf the user is allowed to use ALTER SYSTEM and modify some of these \nparameters, significant breakage can happen in the cluster, as the \noperator may become \"confused\" and manual operation may be required, \nbreaking many of the user's expectations of stability and how the system \nworks and heals automatically.\n\n Sure, as mentioned elsewhere in the thread, a \"malicious\" user can \nstill use other mechanisms such as COPY or untrusted PLs to still \noverwrite the configuration. But both attempts are obviously conscious \nattempts to break the system (and if so, it's all yours to break it). \nBut ALTER SYSTEM may be an *unintended* way to break it, causing a bad \nuser's experience. This may be defined more of a way to avoid users \nshooting themselves in the feet, inadvertently.\n\n There's apparently some opposition to implementing this. But given \nthat there's also interest in having it, I'd like to know what the \nnegative effects of implementing such a startup configuration property \nwould be, so that advantages can be compared with the disadvantages.\n\n All that being said, the behavior to prevent ALTER SYSTEM can also \nbe easily implemented as an extension.
Actually some colleagues wrote \none with a similar scope \n(https://gitlab.com/ongresinc/extensions/noset), and I believe it could \nbe the base for a similar extension focused on preventing ALTER SYSTEM.\n\n\n Regards,\n\n Álvaro\n\n-- \n\nAlvaro Hernandez\n\n\n-----------\nOnGres",
"msg_date": "Sat, 9 Sep 2023 01:24:02 +0200",
"msg_from": "=?UTF-8?B?w4FsdmFybyBIZXJuw6FuZGV6?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi Magnus,\n\nOn Fri, 8 Sept 2023 at 23:43, Magnus Hagander <[email protected]> wrote:\n\n> +1. And to make that happen, the appropriate thing is to identify\n> *why* they are using superuser today, and focus efforts on finding\n> ways for them to do that without being superuser.\n>\n\nAs I am explaining in the other post (containing a very basic proof of\nconcept patch), it is not about restricting superuser. It is primarily a\nusability and configuration matter of a running instance, which helps\ncontrol an entire cluster like in the case of Kubernetes (where, in order\nto provide self-healing and high availability we are forced to go beyond\nthe single instance and think in terms of primary with one or more standbys\nor at least continuous backup in place).\n\nThanks,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com",
"msg_date": "Sat, 9 Sep 2023 08:37:47 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On 2023-Sep-08, Magnus Hagander wrote:\n\n> Now, it might be that you don't care at all about the *security* side\n> of the feature, and only care about the convenience side. But in that\n> case, the original suggestion from Tom of using an event trigger seems\n> like a fine enough solution?\n\nALTER SYSTEM, like all system-wide commands, does not trigger event\ntriggers. These are per-database only.\n\nhttps://www.postgresql.org/docs/16/event-trigger-matrix.html\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"If it is not right, do not do it.\nIf it is not true, do not say it.\" (Marcus Aurelius, Meditations)\n\n\n",
"msg_date": "Sat, 9 Sep 2023 17:14:50 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi,\n\n> I would like to propose a patch that allows administrators to disable `ALTER SYSTEM` via either a runt-time option to pass to the Postgres server process at startup (e.g. `--disable-alter-system=true`, false by default) or a new GUC (or even both), without changing the current default method of the server.\n\nI'm actually going to put a strong +1 to Gabriele's proposal. It's an\nundeniable problem (I'm only seeing arguments regarding other ways the\nsystem would be insecure), and there might be real use cases for users\noutside kubernetes for having this feature and using it (meaning\ndisabling the use of ALTER SYSTEM).\n\nIn Patroni for example, the PostgreSQL service is controlled on all\nnodes by Patroni, and these kinds of changes could end up breaking the\ncluster if there was a failover. For this reason Patroni starts\npostgres with some GUC options as CMD arguments so that values in\npostgresql.conf or postgresql.auto.conf are ignored. The values in the\nDCS are the ones that matter.\n\n```\npostgres 1171221 0.0 1.9 903560 55168 ? S 10:16 0:00\n/usr/pgsql-15/bin/postgres -D /opt/postgres/data\n--config-file=/opt/postgres/data/postgresql.conf\n--listen_addresses=0.0.0.0 --port=5432 --cluster_name=patroni-tpa\n--wal_level=logical --hot_standby=on --max_connections=250\n--max_wal_senders=6 --max_prepared_transactions=0\n--max_locks_per_transaction=64 --track_commit_timestamp=off\n--max_replication_slots=6 --max_worker_processes=16 --wal_log_hints=on\n```\n\n(see more about how Patroni manages this here:\nhttps://patroni.readthedocs.io/en/latest/patroni_configuration.html#postgresql-parameters-controlled-by-patroni\n\nBut let's face it, that's a hack, not something to be proud of, even\nif it does what is intended. 
And this is a product and we shouldn't be\nadvertising hacks to overcome limitations.\n\nI find the opposition to this lacking good reasons, while I find the\nimplementation to be useful in some cases.\n\nKind regards, Martín\n\n\n--\nMartín Marqués\nIt’s not that I have something to hide,\nit’s that I have nothing I want you to see\n\n\n",
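Martín's description of Patroni's command-line trick rests on PostgreSQL's parameter precedence: a setting passed on the server command line outranks anything read from postgresql.conf or postgresql.auto.conf. The toy resolver below only mimics that rule so the effect can be seen end to end; it is a hedged sketch, not server code, and the file contents and option values are made up for illustration.

```shell
#!/bin/sh
# Toy model of parameter precedence (illustrative only, not PostgreSQL code):
# values are read from postgresql.conf, then postgresql.auto.conf (the last
# setting read wins), and a command-line option overrides both -- which is
# why Patroni passes its DCS-managed GUCs as command-line arguments.
set -e
dir=$(mktemp -d)
printf 'max_connections = 100\n' > "$dir/postgresql.conf"
printf 'max_connections = 500\n' > "$dir/postgresql.auto.conf"  # e.g. via ALTER SYSTEM
cmdline='--max_connections=250 --wal_level=logical'             # what Patroni passes

resolve() {  # $1 = parameter name
    # last occurrence across the two configuration files wins
    v=$(awk -F'=' -v p="$1" '$1 ~ "^"p"[ ]*$" { v=$2 } END { gsub(/^ +/, "", v); print v }' \
        "$dir/postgresql.conf" "$dir/postgresql.auto.conf")
    # a matching command-line option, if present, overrides any file value
    for opt in $cmdline; do
        case $opt in --"$1"=*) v=${opt#--"$1"=} ;; esac
    done
    echo "$v"
}

mc=$(resolve max_connections)  # the command line beats both files
wl=$(resolve wal_level)        # set only on the command line
echo "max_connections=$mc wal_level=$wl"
rm -r "$dir"
```

With this toy precedence, the ALTER SYSTEM value (500) is shadowed just as Patroni intends.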
"msg_date": "Mon, 11 Sep 2023 13:55:53 +0200",
"msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Sat, Sep 9, 2023 at 5:14 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Sep-08, Magnus Hagander wrote:\n>\n> > Now, it might be that you don't care at all about the *security* side\n> > of the feature, and only care about the convenience side. But in that\n> > case, the original suggestion from Tom of using an event trigger seems\n> > like a fine enough solution?\n>\n> ALTER SYSTEM, like all system-wide commands, does not trigger event\n> triggers. These are per-database only.\n>\n> https://www.postgresql.org/docs/16/event-trigger-matrix.html\n\nHah, didn't think of that. And yes, that's a very good point. But one\nway to fix that would be to actually make event triggers for system-wide\ncommands, which would then be useful for other things as well...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 11 Sep 2023 15:50:19 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 1:56 PM Martín Marqués <[email protected]> wrote:\n>\n> Hi,\n>\n> > I would like to propose a patch that allows administrators to disable `ALTER SYSTEM` via either a runt-time option to pass to the Postgres server process at startup (e.g. `--disable-alter-system=true`, false by default) or a new GUC (or even both), without changing the current default method of the server.\n>\n> I'm actually going to put a strong +1 to Gabriele's proposal. It's an\n> undeniable problem (I'm only seeing arguments regarding other ways the\n> system would be insecure), and there might be real use cases for users\n> outside kubernetes for having this feature and using it (meaning\n> disabling the use of ALTER SYSTEM).\n\nIf enough people are in favor of it *given the known issues with it*,\nI can drop my objection to a \"meh, but I still don't think it's a good\nidea\".\n\nBut to do that, there would need to be a very in-your-face warning in\nthe documentation about it like \"note that this only disables the\nALTER SYSTEM command. It does not prevent a superuser from changing\nthe configuration remotely using other means\".\n\nFor example, in the very simplest, with the POC patch out there now, I\ncan still run:\npostgres=# CREATE TEMP TABLE x(t text);\nCREATE TABLE\npostgres=# INSERT INTO x VALUES ('work_mem=1TB');\nINSERT 0 1\npostgres=# COPY x TO '/home/mha/postgresql/inst/head/data/postgresql.auto.conf';\nCOPY 1\npostgres=# SELECT pg_reload_conf();\n pg_reload_conf\n----------------\n t\n(1 row)\npostgres=# show work_mem;\n work_mem\n----------\n 1TB\n(1 row)\n\n\nOr anything similar to that.\n\nYes, this is marginally harder than saying ALTER SYSTEM SET\nwork_mem='1TB', but only very very marginally so. 
And from a security\nperspective, there is zero difference.\n\nBut we do also allow \"trust\" authentication which is another major\nfootgun from a security perspective, against which we only defend with\nwarnings, so that in itself is not a reason not to do the same here.\n\n\n> In Patroni for example, the PostgreSQL service is controlled on all\n> nodes by Patroni, and these kinds of changes could end up breaking the\n> cluster if there was a failover. For this reason Patroni starts\n> postgres with some GUC options as CMD arguments so that values in\n> postgresql.conf or postgresql.auto.conf are ignored. The values in the\n> DCS are the ones that matter.\n\nRight. And patroni would need to continue to do that even with this\npatch, because it also needs to override if somebody puts something in\npostgresql.conf, no? Removing the defence against that seems like a\nbad idea...\n\n\n> (see more about how Patroni manages this here:\n> https://patroni.readthedocs.io/en/latest/patroni_configuration.html#postgresql-parameters-controlled-by-patroni\n>\n> But let's face it, that's a hack, not something to be proud of, even\n> if it does what is intended. And this is a product and we shouldn't be\n> advertising hacks to overcome limitations.\n\nIt's in a way a hack. But it's not the fault of ALTER SYSTEM, as you'd\nalso have to manage postgresql.conf. One slightly less hacky part\nmight be to have patroni generate a config file of its own and\ninclude it with the highest priority -- at that point it *would*\nbecome a hack around ALTER SYSTEM, because ALTER SYSTEM has a higher\npriority than any manual user config file. 
But it is not today.\n\nAnother idea to solve the problem could be to instead introduce a\nspecific configuration file (either hardcoded or an\ninclude_final_parameter=<path> parameter, in which case patroni or the\nk8s operator could set that parameter on the command line and that\nparameter only) that is parsed *after* postgresql.auto.conf and\nthereby would override the manual settings. This file would explicitly\nbe documented as intended for this type of tooling, and when you have\na tool - whether patroni or another declarative operator - it owns\nthis file and can overwrite it with whatever it wants. And this would\nalso retain the ability to use ALTER SYSTEM SET for *other*\nparameters, if they want to.\n\nThat's just a very quick idea and there may definitely be holes in it,\nbut I'm not sure those holes are any worse than what's suggested here,\nand I do think it's cleaner.\n\n> I find the opposition to this lacking good reasons, while I find the\n> implementation to be useful in some cases.\n\nStopping ALTER SYSTEM SET without actually preventing the superuser\nfrom doing the same thing as they were doing before would to me be at\nleast as much of a hack as what patroni does today is.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
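The "parsed after postgresql.auto.conf" idea builds on the last-setting-wins rule the server already applies across its configuration sources. Here is a hedged sketch of that ordering; the postgresql.final.conf name and the include_final_parameter mechanism are hypothetical, taken only from the proposal above, and the toy resolver is not server code.

```shell
#!/bin/sh
# Sketch of the proposed ordering (hypothetical feature, illustrative code):
# postgresql.conf, then postgresql.auto.conf (ALTER SYSTEM's file), then a
# tooling-owned "final" file; for a given parameter the last setting wins,
# so the tooling file would override ALTER SYSTEM without disabling it.
set -e
dir=$(mktemp -d)
printf 'work_mem = 4MB\nshared_buffers = 128MB\n' > "$dir/postgresql.conf"
printf 'work_mem = 1TB\n' > "$dir/postgresql.auto.conf"   # written by ALTER SYSTEM
printf 'work_mem = 64MB\n' > "$dir/postgresql.final.conf" # owned by patroni/operator

effective() {  # $1 = parameter name; last occurrence across the files wins
    awk -F'=' -v p="$1" '$1 ~ "^"p"[ ]*$" { v=$2 } END { gsub(/^ +/, "", v); print v }' \
        "$dir/postgresql.conf" "$dir/postgresql.auto.conf" "$dir/postgresql.final.conf"
}

wm=$(effective work_mem)        # the tooling file overrides ALTER SYSTEM's 1TB
sb=$(effective shared_buffers)  # parameters the tooling leaves alone still apply
echo "work_mem=$wm shared_buffers=$sb"
rm -r "$dir"
```

Note how ALTER SYSTEM keeps working for untouched parameters; only the ones the tooling pins are shadowed.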
"msg_date": "Mon, 11 Sep 2023 16:04:40 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi Magnus,\n\nOn Mon, 11 Sept 2023 at 16:04, Magnus Hagander <[email protected]> wrote:\n\n> But to do that, there would need to be a very in-your-face warning in\n> the documentation about it like \"note that this only disables the\n> ALTER SYSTEM command. It does not prevent a superuser from changing\n> the configuration remotely using other means\".\n>\n\nAlthough I did not include any docs in the PoC patch, that's exactly the\nplan. So +1 from me.\n\n\n> Yes, this is marginally harder than saying ALTER SYSTEM SET\n> work_mem='1TB', but only very very marginally so. And from a security\n> perspective, there is zero difference.\n>\n\nAgree, but the primary goal is not security. Indeed, security requires a\nmore holistic approach (and in my initial thread I deliberately did not\nmention all the knobs that the operator provides, with stricter and\nstricter default values, as I thought it was not relevant from a Postgres'\nPoV). However, as I explained in the patch PoC thread, the change is\nintended primarily to warn legitimate administrators in a\nconfiguration-management controlled environment that ALTER SYSTEM has been\ndisabled for that system. These are environments where network access for a\nsuperuser is prohibited, but still possible for local SREs to log in via\nthe container in the pod for incident resolution - very often this happens\nin high-stress conditions and I believe that this gate will help remind\nthem that if they want to change the settings they need to do it through\nthe Kubernetes resources. So primarily: usability.\n\n> Another idea to solve the problem could be to instead introduce a\n> specific configuration file (either hardcoded or an\n> include_final_parameter=<path> parameter, in which case patroni or the\n> k8s operator could set that parameter on the command line and that\n> parameter only) that is parsed *after* postgresql.auto.conf and\n> thereby would override the manual settings. 
This file would explicilty\n> be documented as intended for this type of tooling, and when you have\n> a tool - whether patroni or another declarative operator - it owns\n> this file and can overwrite it with whatever it wants. And this would\n> also retain the ability to use ALTER SYSTEM SET for *other*\n> parameters, if they want to.\n>\n\nBut that is exactly the whole point of this request: disable the last\noperation altogether. This option will easily give any operator (or\ndeployment in a configuration management scenario) the possibility to ship\na system that, out-of-the box, facilitates this one direction update of the\nconfiguration, allowing the observed state to easily reconcile with the\ndesired one. Without breaking any existing deployment.\n\n\n> Stopping ALTER SYSTEM SET without actually preventing the superuser\n> from doing the same thing as they were doing before would to me be at\n> least as much of a hack as what patroni does today is.\n>\n\nAgree, but as I said above, that'd be at that point the role of an\noperator. An operator, at that point, will have the possibility to\nconfigure this knob in conjunction with others. A possibility that Postgres\nis not currently giving.\n\nPostgres itself should be able to give this possibility, as these\nenvironments demand Postgres to address their emerging needs.\n\nThank you,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com",
"msg_date": "Mon, 11 Sep 2023 16:59:02 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Greetings,\n\n* Magnus Hagander ([email protected]) wrote:\n> On Mon, Sep 11, 2023 at 1:56 PM Martín Marqués <[email protected]> wrote:\n> > > I would like to propose a patch that allows administrators to disable `ALTER SYSTEM` via either a runt-time option to pass to the Postgres server process at startup (e.g. `--disable-alter-system=true`, false by default) or a new GUC (or even both), without changing the current default method of the server.\n> >\n> > I'm actually going to put a strong +1 to Gabriele's proposal. It's an\n> > undeniable problem (I'm only seeing arguments regarding other ways the\n> > system would be insecure), and there might be real use cases for users\n> > outside kubernetes for having this feature and using it (meaning\n> > disabling the use of ALTER SYSTEM).\n> \n> If enough people are in favor of it *given the known issues with it*,\n> I can drop my objection to a \"meh, but I still don't think it's a good\n> idea\".\n\nA lot of the objections seem to be on the grounds of returning a\n'permission denied' kind of error and I generally agree with that being\nthe wrong approach.\n\nAs an alternative idea- what if we had something in postgresql.conf\nalong the lines of:\n\ninclude_alter_system = true/false\n\nand use that to determine if the postgresql.auto.conf is included or\nnot..?\n\n> But to do that, there would need to be a very in-your-face warning in\n> the documentation about it like \"note that this only disables the\n> ALTER SYSTEM command. It does not prevent a superuser from changing\n> the configuration remotely using other means\".\n\nWith the above, we could throw a WARNING or maybe just NOTICE when ALTER\nSYSTEM is run that 'include_alter_system is false and therefore these\nchanges won't be included in the running configuration' or similar.\n\nWhat this does cause problems with is that pg_basebackup and other tools\n(eg: pgbackrest) write into postgresql.auto.conf currently and we'd want\nthose to still work. 
That's an opportunity, imv, though, since I don't\nreally think where ALTER SYSTEM writes to and where backup/restore\ntools are writing to should really be the same place anyway. Therefore,\nperhaps we add a 'postgresql.system.conf' or similar and maybe a\ncorresponding option in postgresql.conf to include it or not.\n\n> For example, in the very simplest, with the POC patch out there now, I\n> can still run:\n> postgres=# CREATE TEMP TABLE x(t text);\n> CREATE TABLE\n> postgres=# INSERT INTO x VALUES ('work_mem=1TB');\n> INSERT 0 1\n> postgres=# COPY x TO '/home/mha/postgresql/inst/head/data/postgresql.auto.conf';\n> COPY 1\n> postgres=# SELECT pg_reload_conf();\n> pg_reload_conf\n> ----------------\n> t\n> (1 row)\n> postgres=# show work_mem;\n> work_mem\n> ----------\n> 1TB\n> (1 row)\n> \n> Or anything similar to that.\n\nThis is an issue if you're looking at it as a security thing. This\nisn't an issue if you don't view it that way. Still, I do see some merit in\nmaking it so that you can't actually change the config that's loaded at\nsystem start from inside the data directory or as the PG superuser,\nwhich my proposal above would support - just configure in postgresql.conf\nto not include any of the alter-system or generated config. The actual\npostgresql.conf could be owned by root then too.\n\n> > In Patroni for example, the PostgreSQL service is controlled on all\n> > nodes by Patroni, and these kinds of changes could end up breaking the\n> > cluster if there was a failover. For this reason Patroni starts\n> > postgres with some GUC options as CMD arguments so that values in\n> > postgresql.conf or postgresql.auto.conf are ignored. The values in the\n> > DCS are the ones that matter.\n> \n> Right. And patroni would need to continue to do that even with this\n> patch, because it also needs to override if somebody puts something in\n> postgresql.conf, no? 
Removing the defence against that seems like a\n> bad idea...\n> \n> \n> > (see more about how Patroni manages this here:\n> > https://patroni.readthedocs.io/en/latest/patroni_configuration.html#postgresql-parameters-controlled-by-patroni\n> >\n> > But let's face it, that's a hack, not something to be proud of, even\n> > if it does what is intended. And this is a product and we shouldn't be\n> > advertising hacks to overcome limitations.\n> \n> It's in a way a hack. But it's not the fault of ALTER SYSTEM, as you'd\n> also have to manage postgresql.conf. One slightly less hacky part\n> might be to have patroni generate a config file of its own and\n> include it with the highest priority -- at that point it *would*\n> become a hack around ALTER SYSTEM, because ALTER SYSTEM has a higher\n> priority than any manual user config file. But it is not today.\n\nI suppose we could invent a priority control thing as part of the above\nproposal too.. but I would think just having include_alter_system and\ninclude_auto_config (or whatever we name them) would be enough, with the\nauto-config bit being loaded last and therefore having the 'highest'\npriority.\n\n> Another idea to solve the problem could be to instead introduce a\n> specific configuration file (either hardcoded or an\n> include_final_parameter=<path> parameter, in which case patroni or the\n> k8s operator could set that parameter on the command line and that\n> parameter only) that is parsed *after* postgresql.auto.conf and\n> thereby would override the manual settings. This file would explicilty\n> be documented as intended for this type of tooling, and when you have\n> a tool - whether patroni or another declarative operator - it owns\n> this file and can overwrite it with whatever it wants. 
And this would\n> also retain the ability to use ALTER SYSTEM SET for *other*\n> parameters, if they want to.\n\nYeah, this is along the lines of what I propose above, but with the\naddition of having a way to control if these are loaded or not in the\nfirst place, instead of having to deal with every possible option that\nmight be an issue. \n\nGenerally, I do think having a separate file for tools to write into\nthat's independent of ALTER SYSTEM would just be a good idea. I don't\ncare for the way those are mixed in the same file these days.\n\n> That's just a very quick idea and there may definitely be holes in it,\n> but I'm not sure those holes are any worse than what's suggested here,\n> and I do think it's cleaner.\n\nPerhaps not surprisingly, I tend to agree that something along these lines\nis cleaner.\n\nThanks,\n\nStephen",
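Stephen's include_alter_system idea amounts to deciding at load time whether postgresql.auto.conf is read at all. The sketch below simulates only that decision; include_alter_system is a proposed, hypothetical knob from this thread, and the resolver is a toy, not PostgreSQL code.

```shell
#!/bin/sh
# Toy model of the proposed include_alter_system switch (hypothetical knob,
# illustrative code): when false, postgresql.auto.conf is simply not read,
# so whatever ALTER SYSTEM wrote there has no effect on the running config.
set -e
dir=$(mktemp -d)
printf 'work_mem = 4MB\n' > "$dir/postgresql.conf"
printf 'work_mem = 1TB\n' > "$dir/postgresql.auto.conf"  # written by ALTER SYSTEM

effective() {  # $1 = parameter name, $2 = include_alter_system (true/false)
    files="$dir/postgresql.conf"
    if [ "$2" = true ]; then
        files="$files $dir/postgresql.auto.conf"  # auto.conf is read last, so it wins
    fi
    awk -F'=' -v p="$1" '$1 ~ "^"p"[ ]*$" { v=$2 } END { gsub(/^ +/, "", v); print v }' $files
}

on=$(effective work_mem true)    # auto.conf included: the ALTER SYSTEM value applies
off=$(effective work_mem false)  # auto.conf skipped: postgresql.conf stands
echo "include_alter_system=true -> $on; false -> $off"
rm -r "$dir"
```

This also illustrates why ALTER SYSTEM could still succeed under the proposal: the file is written either way, and only the include decision changes what the server sees.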
"msg_date": "Mon, 11 Sep 2023 11:12:01 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi Stephen,\n\nOn Mon, 11 Sept 2023 at 17:12, Stephen Frost <[email protected]> wrote:\n\n> A lot of the objections seem to be on the grounds of returning a\n> 'permission denied' kind of error and I generally agree with that being\n> the wrong approach.\n>\n> As an alternative idea- what if we had something in postgresql.conf\n> along the lines of:\n>\n> include_alter_system = true/false\n>\n> and use that to determine if the postgresql.auto.conf is included or\n> not..?\n>\n\nThat sounds like a very good idea. I had thought about that when writing\nthe PoC, as a SIGHUP controlled GUC. I had trouble finding an adequate GUC\ncategory for that (ideas?), and thought it was a more intrusive patch\nto trigger the conversation. But I am willing to explore that too (which\nwon't change the goal of the patch in any way).\n\n> With the above, we could throw a WARNING or maybe just NOTICE when ALTER\n> SYSTEM is run that 'include_alter_system is false and therefore these\n> changes won't be included in the running configuration' or similar.\n>\n\nThat's also an option I'd be willing to explore with folks here.\n\n\n> What this does cause problems with is that pg_basebackup and other tools\n> (eg: pgbackrest) write into postgresql.auto.conf currently and we'd want\n> those to still work. That's an opportunity, imv, though, since I don't\n> really think where ALTER SYSTEM writes to and where backup/restore\n> tools are writing to should really be the same place anyway. Therefore,\n> perhaps we add a 'postgresql.system.conf' or similar and maybe a\n> corresponding option in postgresql.conf to include it or not.\n>\n\nTotally. We are for example in the same position with the CloudNativePG\noperator, and it is something we are intending to fix (\nhttps://github.com/cloudnative-pg/cloudnative-pg/issues/2727). I agree with\nyou that it is the wrong place to do it.\n\n> This is an issue if you're looking at it as a security thing. 
This\n> isn't an issue if don't view it that way. Still, I do see some merit in\n> making it so that you can't actually change the config that's loaded at\n> system start from inside the data directory or as the PG superuser,\n> which my proposal above would support- just configure in postgresql.conf\n> to not include any of the alter-system or generated config. The actual\n> postgresql.conf could be owned by root then too.\n>\n\n+1.\n\nThank you,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com",
"msg_date": "Mon, 11 Sep 2023 17:56:01 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, 11 Sept 2023 at 11:11, Magnus Hagander <[email protected]> wrote:\n\n> I'm actually going to put a strong +1 to Gabriele's proposal. It's an\n> > undeniable problem (I'm only seeing arguments regarding other ways the\n> > system would be insecure), and there might be real use cases for users\n> > outside kubernetes for having this feature and using it (meaning\n> > disabling the use of ALTER SYSTEM).\n>\n> If enough people are in favor of it *given the known issues with it*,\n> I can drop my objection to a \"meh, but I still don't think it's a good\n> idea\".\n>\n> But to do that, there would need to be a very in-your-face warning in\n> the documentation about it like \"note that this only disables the\n> ALTER SYSTEM command. It does not prevent a superuser from changing\n> the configuration remotely using other means\".\n>\n> For example, in the very simplest, wth the POC patch out there now, I\n> can still run:\n>\n[…]\n\nMaybe in addition to making \"ALTER SYSTEM\" throw an error, the feature that\ndisables it should also disable reading postgresql.auto.conf? Maybe even\ndelete it and make it an error if it is present on startup (maybe even warn\nif it shows up while the DB is running?).\n\nInteresting corner case: What happens if I do \"ALTER SYSTEM SET\nalter_system_disabled = true\"?\n\nCounterpoint: maybe the idea is to disable ALTER SYSTEM but still use\npostgresql.auto.conf, maintained by an external program, to control the\ninstance's behaviour.\n\nOn Mon, 11 Sept 2023 at 11:11, Magnus Hagander <[email protected]> wrote:\n> I'm actually going to put a strong +1 to Gabriele's proposal. 
It's an\n> undeniable problem (I'm only seeing arguments regarding other ways the\n> system would be insecure), and there might be real use cases for users\n> outside kubernetes for having this feature and using it (meaning\n> disabling the use of ALTER SYSTEM).\n\nIf enough people are in favor of it *given the known issues with it*,\nI can drop my objection to a \"meh, but I still don't think it's a good\nidea\".\n\nBut to do that, there would need to be a very in-your-face warning in\nthe documentation about it like \"note that this only disables the\nALTER SYSTEM command. It does not prevent a superuser from changing\nthe configuration remotely using other means\".\n\nFor example, in the very simplest, wth the POC patch out there now, I\ncan still run:\n[…]Maybe in addition to making \"ALTER SYSTEM\" throw an error, the feature that disables it should also disable reading postgresql.auto.conf? Maybe even delete it and make it an error if it is present on startup (maybe even warn if it shows up while the DB is running?).Interesting corner case: What happens if I do \"ALTER SYSTEM SET alter_system_disabled = true\"?Counterpoint: maybe the idea is to disable ALTER SYSTEM but still use postgresql.auto.conf, maintained by an external program, to control the instance's behaviour.",
"msg_date": "Mon, 11 Sep 2023 12:15:29 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi,\n\n> Maybe in addition to making \"ALTER SYSTEM\" throw an error, the feature that disables it should also disable reading postgresql.auto.conf? Maybe even delete it and make it an error if it is present on startup (maybe even warn if it shows up while the DB is running?).\n\nThe outcome looked for is that the system GUCs that require a restart\nor reload are not modified unless it's through some orchestration or\nsomeone with physical access to the configuration files (yeah, we\nstill have the COPY PROGRAM).\n\nWe shouldn't mix this with not reading postgresql.auto.conf, or even\nworse, deleting it. I don't think it's a good idea to delete the file.\nIgnoring it might be of interest, but completely outside the scope of\nthe intention I'm seeing from the k8s teams.\n\n> Counterpoint: maybe the idea is to disable ALTER SYSTEM but still use postgresql.auto.conf, maintained by an external program, to control the instance's behaviour.\n\nI believe that's the idea, although we have `include` and\n`include_dir` which can be used the same way as `postgresql.auto.conf`\nis automatically included.\n\nKind regards, Martín\n\n-- \nMartín Marqués\nIt’s not that I have something to hide,\nit’s that I have nothing I want you to see\n\n\n",
"msg_date": "Tue, 12 Sep 2023 14:33:56 +0200",
"msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Seems to be some resistance to getting this in core, so why not just use an\nextension? I was able to create a quick POC to do just that. Hook into PG\nand look for AlterSystemStmt, throw a \"Sorry, ALTER SYSTEM is not currently\nallowed\" error. Put into shared_preload_libraries and you're done. As a\nbonus, works on all supported versions, so no need to wait for Postgres 17\n- or Postgres 18/19 given the feature drift this thread is experiencing :)\n\nCheers,\nGreg\n\nSeems to be some resistance to getting this in core, so why not just use an extension? I was able to create a quick POC to do just that. Hook into PG and look for AlterSystemStmt, throw a \"Sorry, ALTER SYSTEM is not currently allowed\" error. Put into shared_preload_libraries and you're done. As a bonus, works on all supported versions, so no need to wait for Postgres 17 - or Postgres 18/19 given the feature drift this thread is experiencing :)Cheers,Greg",
"msg_date": "Wed, 13 Sep 2023 13:10:16 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi Greg,\n\nOn Wed, 13 Sept 2023 at 19:10, Greg Sabino Mullane <[email protected]>\nwrote:\n\n> Seems to be some resistance to getting this in core, so why not just use\n> an extension? I was able to create a quick POC to do just that. Hook into\n> PG and look for AlterSystemStmt, throw a \"Sorry, ALTER SYSTEM is not\n> currently allowed\" error. Put into shared_preload_libraries and you're\n> done. As a bonus, works on all supported versions, so no need to wait for\n> Postgres 17 - or Postgres 18/19 given the feature drift this thread is\n> experiencing :)\n>\n\nAs much as I would like to see your extension, I would still like to\nunderstand why Postgres itself shouldn't solve this basic requirement\ncoming from the configuration management driven/Kubernetes space. It\nshouldn't be a big deal to have such an option, either as a startup one or\na GUC, should it?\n\nThanks,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com\n\nHi Greg,On Wed, 13 Sept 2023 at 19:10, Greg Sabino Mullane <[email protected]> wrote:Seems to be some resistance to getting this in core, so why not just use an extension? I was able to create a quick POC to do just that. Hook into PG and look for AlterSystemStmt, throw a \"Sorry, ALTER SYSTEM is not currently allowed\" error. Put into shared_preload_libraries and you're done. As a bonus, works on all supported versions, so no need to wait for Postgres 17 - or Postgres 18/19 given the feature drift this thread is experiencing :)As much as I would like to see your extension, I would still like to understand why Postgres itself shouldn't solve this basic requirement coming from the configuration management driven/Kubernetes space. It shouldn't be a big deal to have such an option, either as a startup one or a GUC, should it?Thanks,Gabriele-- Gabriele BartoliniVice President, Cloud Native at EDBenterprisedb.com",
"msg_date": "Fri, 15 Sep 2023 11:16:09 +0200",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "> On 11 Sep 2023, at 15:50, Magnus Hagander <[email protected]> wrote:\n> \n> On Sat, Sep 9, 2023 at 5:14 PM Alvaro Herrera <[email protected]> wrote:\n>> \n>> On 2023-Sep-08, Magnus Hagander wrote:\n>> \n>>> Now, it might be that you don't care at all about the *security* side\n>>> of the feature, and only care about the convenience side. But in that\n>>> case, the original suggestion from Tom of using an even trigger seems\n>>> like a fine enough solution?\n>> \n>> ALTER SYSTEM, like all system-wide commands, does not trigger event\n>> triggers. These are per-database only.\n>> \n>> https://www.postgresql.org/docs/16/event-trigger-matrix.html\n> \n> Hah, didn't think of that. And yes, that's a very good point. But one\n> way to fix that would be to actually make event triggers for system\n> wide commands, which would then be useful for other things as well...\n\nWouldn't having system wide EVTs be a generic solution which could be the\ninfrastructure for this requested change as well as others in the same area?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 15 Sep 2023 11:18:51 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi,\n\nI am sending an updated patch, and submitting this to the next commit fest,\nas I still believe this could be very useful.\n\nThanks,\nGabriele\n\nOn Thu, 7 Sept 2023 at 21:51, Gabriele Bartolini <\[email protected]> wrote:\n\n> Hi everyone,\n>\n> I would like to propose a patch that allows administrators to disable\n> `ALTER SYSTEM` via either a runt-time option to pass to the Postgres server\n> process at startup (e.g. `--disable-alter-system=true`, false by default)\n> or a new GUC (or even both), without changing the current default method of\n> the server.\n>\n> The main reason is that this would help improve the “security by default”\n> posture of Postgres in a Kubernetes/Cloud Native environment - and, in\n> general, in any environment on VMs/bare metal behind a configuration\n> management system in which changes should only be made in a declarative way\n> and versioned like Ansible Tower, to cite one.\n>\n> Below you find some background information and the longer story behind\n> this proposal.\n>\n> Sticking to the Kubernetes use case, I am primarily speaking on behalf of\n> the CloudNativePG open source operator (cloudnative-pg.io, of which I am\n> one of the maintainers). However, I am sure that this option could benefit\n> any operator for Postgres - an operator is the most common and recommended\n> way to run a complex application like a PostgreSQL database management\n> system inside Kubernetes.\n>\n> In this case, the state of a PostgreSQL cluster (for example its number of\n> replicas, configuration, storage, etc.) is defined in a Custom Resource\n> Definition in the form of configuration, typically YAML, and the operator\n> works with Kubernetes to ensure that, at any moment, the requested Postgres\n> cluster matches the observed one. 
This is a very basic example in\n> CloudNativePG:\n> https://cloudnative-pg.io/documentation/current/samples/cluster-example.yaml\n>\n> As I was mentioning above, in a Cloud Native environment it is expected\n> that workloads are secure by default. Without going into much detail, many\n> decisions have been made in that direction by operators for Postgres,\n> including CloudNativePG. The goal of this proposal is to provide a way to\n> ensure that changes to the PostgreSQL configuration in a Kubernetes\n> controlled Postgres cluster are allowed only through the Kubernetes API.\n>\n> Basically, if you want to change an option for PostgreSQL, you need to\n> change the desired state in the Kubernetes resource, then Kubernetes will\n> converge (through the operator). In simple words, it’s like empowering the\n> operator to impersonate the PostgreSQL superuser.\n>\n> However, given that we cannot force this use case, there could be roles\n> with the login+superuser privileges connecting to the PostgreSQL instance\n> and potentially “interfering” with the requested state defined in the\n> configuration by imperatively running “ALTER SYSTEM” commands.\n>\n> For example: CloudNativePG has a fixed value for some GUCs in order to\n> manage a full HA cluster, including SSL, log, some WAL and replication\n> settings. While the operator eventually reconciles those settings, even the\n> temporary change of those settings in a cluster might be harmful. 
Think for\n> example of a user that, through `ALTER SYSTEM`, tries to change WAL level\n> to minimal, or change the setting of the log (we require CSV), potentially\n> creating issues to the underlying instance and cluster (potentially leaving\n> it in an unrecoverable state in the case of other more invasive GUCS).\n>\n> At the moment, a possible workaround is that `ALTER SYSTEM` can be blocked\n> by making the postgresql.auto.conf read only, but the returned message is\n> misleading and that’s certainly bad user experience (which is very\n> important in a cloud native environment):\n>\n> ```\n> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> ERROR: could not open file \"postgresql.auto.conf\": Permission denied\n> ```\n>\n> For this reason, I would like to propose the option to be given to the\n> postgres process at startup, in order to be as less invasive as possible\n> (the operator could then start Postgres requesting `ALTER SYSTEM` to be\n> disabled). That’d be my preference at the moment, if possible.\n>\n> Alternatively, or in addition, the introduction of a GUC to disable `ALTER\n> SYSTEM` altogether. This enables tuning this setting through configuration\n> at the Kubernetes level, only if the operators require it - without\n> damaging the rest of the users.\n>\n> Before I start writing any lines of code and propose a patch, I would like\n> first to understand if there’s room for it.\n>\n> Thanks for your attention and … looking forward to your feedback!\n>\n> Ciao,\n> Gabriele\n> --\n> Gabriele Bartolini\n> Vice President, Cloud Native at EDB\n> enterprisedb.com\n>\n\n\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com",
"msg_date": "Tue, 30 Jan 2024 18:05:37 +0100",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 10:39 AM Martín Marqués\n<[email protected]> wrote:\n> The outcome looked for is that the system GUCs that require a restart\n> or reload are not modified unless it's through some orchestration or\n> someone with physical access to the configuration files (yeah, we\n> still have the COPY PROGRAM).\n\nIf I understand this correctly, you're saying it's not a security\nvulnerability if someone finds a way to use COPY PROGRAM or some other\nmechanism to bypass the ALTER SYSTEM restriction, because the point of\nthe constraint isn't to make it impossible for the superuser to modify\nthe configuration in a way that they shouldn't, but rather to make it\ninconvenient for them to do so.\n\nI have to admit that I'm a little afraid that people will mistake this\nfor an actual security feature and file bug reports or CVEs about the\nsuperuser being able to circumvent these restrictions. If we add this,\nwe had better make sure that the documentation is extremely clear\nabout what we are guaranteeing, or more to the point about what we are\nnot guaranteeing.\n\nI understand that there's some frustration on the part of Gabriele and\nothers that this proposal hasn't been enthusiastically adopted, but I\nwould ask for a little bit of forbearance because those are also, by\nand large, not the people who will not have to cope with it when we\nstart getting security researchers threatening to publish our evilness\nin the Register. Such conversations are no fun at all. Explaining that\nwe're not actually evil doesn't tend to work, because the security\nresearchers are just as convinced that they are right as anyone\narguing for this feature is. 
Statements like \"we don't actually intend\nto guarantee X\" tend to fall on deaf ears.\n\nIn fact, I would go so far as to argue that many of our security\nproblems (and non-problems) are widely misunderstood even within our\nown community, and that far from being something anyone should dismiss\nas pedantry, it's actually a critical issue for the project to solve\nand something we really need to address in order to be able to move\nforward. From that point of view, this feature seems bound to make an\nalready-annoying problem worse. I don't necessarily expect the people\nwho are in favor of this feature to accept that as a reason not to do\nthis, but I do hope to be taken seriously when I say there's a real\nissue there. Something can be a serious problem even if it's not YOUR\nproblem, and in this case, that apparently goes both ways.\n\nI also think that using the GUC system to manage itself is a little\nbit suspect. I wonder if it would be better to do this some other way,\ne.g. a sentinel file in the data directory. For example, suppose we\nrefuse ALTER SYSTEM if $PGDATA/disable_alter_system exists, or\nsomething like that. It seems like it would be very easy for an\nexternal management solution (k8s or whatever) to drop that file in\nplace if desired, and then it would be crystal clear that there's no\nway of bypassing the restriction from within the GUC system itself\n(though you could still bypass it via filesystem access).\n\nI agree with those who have said that this shouldn't disable\npostgresql.auto.conf, but only the ability of ALTER SYSTEM to modify\nit. Right now, third-party tools external to the server can count on\nbeing able to add things to postgresql.auto.conf with the reasonable\nexpectations that they'll take effect. I'd rather not break that\nproperty.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Jan 2024 12:48:50 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> I have to admit that I'm a little afraid that people will mistake this\n> for an actual security feature and file bug reports or CVEs about the\n> superuser being able to circumvent these restrictions. If we add this,\n> we had better make sure that the documentation is extremely clear\n> about what we are guaranteeing, or more to the point about what we are\n> not guaranteeing.\n\n> I understand that there's some frustration on the part of Gabriele and\n> others that this proposal hasn't been enthusiastically adopted, but I\n> would ask for a little bit of forbearance because those are also, by\n> and large, not the people who will not have to cope with it when we\n> start getting security researchers threatening to publish our evilness\n> in the Register. Such conversations are no fun at all.\n\nIndeed. I'd go so far as to say that we should reject not only this\nproposal, but any future ones that intend to prevent superusers from\ndoing things that superusers normally could do (and, indeed, are\nnormally expected to do). That sort of thing is not part of our\nsecurity model, never has been, and it's simply naive to believe that\nit won't have a boatload of easily-reachable holes in it. Which we\n*will* get complaints about, if we claim that thus-and-such feature\nprevents it. So why bother? Don't give out superuser to people you\ndon't trust to not do the things you wish they wouldn't.\n\n> I also think that using the GUC system to manage itself is a little\n> bit suspect.\n\nSomething like contrib/sepgsql would be a better mechanism, perhaps.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jan 2024 14:20:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 2:20 PM Tom Lane <[email protected]> wrote:\n> Indeed. I'd go so far as to say that we should reject not only this\n> proposal, but any future ones that intend to prevent superusers from\n> doing things that superusers normally could do (and, indeed, are\n> normally expected to do). That sort of thing is not part of our\n> security model, never has been, and it's simply naive to believe that\n> it won't have a boatload of easily-reachable holes in it. Which we\n> *will* get complaints about, if we claim that thus-and-such feature\n> prevents it. So why bother? Don't give out superuser to people you\n> don't trust to not do the things you wish they wouldn't.\n\nIn my opinion, we need to have the conversation, whereas you seem to\nwant to try to shut it down before it starts. If we take that\napproach, people are going to get (more) frustrated.\n\nAlso in my opinion, there is a fair amount of nuance here. On the one\nhand, I and others put a lot of work into making it possible to not\ngive people superuser and still be able to do a controlled subset of\nthe things that a superuser can do. For example, thanks to Mark\nDilger's work, you can make somebody not a superuser and still allow\nthem to set GUCs that can normally be set only by superusers, and you\ncan choose which GUCs you do and do not want them to be able to set.\nAnd, thanks to my work, you can make someone a CREATEROLE user without\nletting them escalate to superuser, and you can then allow them to\nmanage the users that they create almost exactly as if they were a\nsuperuser, with only the limitations that seem necessary to maintain\nsystem security. 
It is worth asking - and I would like to hear a real,\nnon-flip answer - why someone who wants to do what is proposed here\nisn't using those mechanisms instead of handing out SUPERUSER and then\ncomplaining that it grants too much power.\n\nOn the other hand, I don't see why it isn't legitimate to imagine a\nscenario where there is no security boundary between the Kubernetes\nadministrator and the PostgreSQL DBA, and yet the PostgreSQL DBA\nshould still be pushed in the direction of doing things in a way that\ndoesn't break Kubernetes. It surprises me a little bit that Gabriele\nand others want to build the system that way, though, because you\nmight expect that in a typical install the Kubernetes administrator\nwould want to FORCIBLY PREVENT the PostgreSQL administrator from\nmessing things up instead of doing what is proposed here, which\namounts to suggesting perhaps the PostgreSQL administrator would be\nkind enough not to mess things up. Nonetheless, there's no law against\nsuggestions. When my wife puts the ground beef that I'm supposed to\nuse to cook dinner at the top of the freezer and the stuff I'm\nsupposed to not use at the bottom, nothing prevents me from digging\nout the other ground beef and using it, but I don't, because I can\ntake a hint. And indeed, I benefit from that hint. This seems like it\ncould be construed as a very similar type of hint.\n\nI don't think we should pretend like one of the two paragraphs above\nis valid and the other is hot garbage. That's not solving anything. We\ncan't resolve the tension between those two things in either direction\nby somebody hammering on the side of the argument that they believe to\nbe correct and ignoring the other one.\n\n> Something like contrib/sepgsql would be a better mechanism, perhaps.\n\nThere's nothing wrong with that exactly, but what does it gain us over\nmy proposal of a sentinel file? 
I don't see much value in adding a\nhook and then a module that uses that hook to return false or\nunconditionally ereport.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Jan 2024 16:25:12 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Jan 30, 2024 at 2:20 PM Tom Lane <[email protected]> wrote:\n>> Indeed. I'd go so far as to say that we should reject not only this\n>> proposal, but any future ones that intend to prevent superusers from\n>> doing things that superusers normally could do (and, indeed, are\n>> normally expected to do).\n\n> Also in my opinion, there is a fair amount of nuance here. On the one\n> hand, I and others put a lot of work into making it possible to not\n> give people superuser and still be able to do a controlled subset of\n> the things that a superuser can do.\n\nSure, and that is a line of thought that we should continue to pursue.\nBut we already have enough mechanism to let a non-superuser set only\nthe ALTER SYSTEM stuff she's authorized to. There is no reason to\nthink that a non-superuser could break through that restriction at\nall, let alone easily. So that's an actual security feature, not\nsecurity theater. I don't see how the feature proposed here isn't\nsecurity theater, or at least close enough to that.\n\n>> Something like contrib/sepgsql would be a better mechanism, perhaps.\n\n> There's nothing wrong with that exactly, but what does it gain us over\n> my proposal of a sentinel file?\n\nI was imagining using selinux and/or sepgsql to directly prevent\nwriting postgresql.auto.conf from the Postgres account. Combine that\nwith a non-Postgres-owned postgresql.conf (already supported) and you\nhave something that seems actually bulletproof, rather than a hint.\nAdmittedly, using that approach requires knowing something about a\nnon-Postgres security mechanism.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jan 2024 16:48:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 10:48 PM Tom Lane <[email protected]> wrote:\n>\n> Robert Haas <[email protected]> writes:\n> > There's nothing wrong with that exactly, but what does it gain us over\n> > my proposal of a sentinel file?\n>\n> I was imagining using selinux and/or sepgsql to directly prevent\n> writing postgresql.auto.conf from the Postgres account. Combine that\n> with a non-Postgres-owned postgresql.conf (already supported) and you\n> have something that seems actually bulletproof, rather than a hint.\n> Admittedly, using that approach requires knowing something about a\n> non-Postgres security mechanism.\n\nWouldn't a simple \"chattr +i postgresql.auto.conf\" work?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 30 Jan 2024 22:58:56 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Magnus Hagander <[email protected]> writes:\n> On Tue, Jan 30, 2024 at 10:48 PM Tom Lane <[email protected]> wrote:\n>> I was imagining using selinux and/or sepgsql to directly prevent\n>> writing postgresql.auto.conf from the Postgres account.\n\n> Wouldn't a simple \"chattr +i postgresql.auto.conf\" work?\n\nHmm, I'm not too familiar with that file attribute, but it looks\nlike it'd work (on platforms that support it).\n\nMy larger point here is that trying to enforce restrictions on\nsuperusers *within* Postgres is simply not a good plan, for\nlargely the same reasons that Robert questioned making the\nGUC mechanism police itself. It needs to be done outside,\neither at the filesystem level or via some other kernel-level\nsecurity system.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jan 2024 23:25:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tuesday, January 30, 2024, Tom Lane <[email protected]> wrote:\n>\n>\n> My larger point here is that trying to enforce restrictions on\n> superusers *within* Postgres is simply not a good plan, for\n> largely the same reasons that Robert questioned making the\n> GUC mechanism police itself. It needs to be done outside,\n> either at the filesystem level or via some other kernel-level\n> security system.\n>\n>\nThe idea of adding a file to the data directory appeals to me.\n\noptional_runtime_features.conf\nalter_system=enabled\ncopy_from_program=enabled\ncopy_to_program=disabled\n\nIf anyone tries to use disabled features the system emits an error:\n\nERROR: Cannot send copy output to program, action disabled by host.\n\nMy main usability question is whether restart required is an acceptable\nrestriction.\n\nMaking said file owned by root (or equivalent) and only readable by the\npostgres process user suffices to lock it down. Refusing to start if the\nfile is writable, and at least one feature is disabled can be considered,\nwith a startup option to bypass that check if desired.\n\nDavid J.\n\nOn Tuesday, January 30, 2024, Tom Lane <[email protected]> wrote:\n\nMy larger point here is that trying to enforce restrictions on\nsuperusers *within* Postgres is simply not a good plan, for\nlargely the same reasons that Robert questioned making the\nGUC mechanism police itself. 
It needs to be done outside,\neither at the filesystem level or via some other kernel-level\nsecurity system.\nThe idea of adding a file to the data directory appeals to me.optional_runtime_features.confalter_system=enabledcopy_from_program=enabledcopy_to_program=disabledIf anyone tries to use disabled features the system emits an error:ERROR: Cannot send copy output to program, action disabled by host.My main usability question is whether restart required is an acceptable restriction.Making said file owned by root (or equivalent) and only readable by the postgres process user suffices to lock it down. Refusing to start if the file is writable, and at least one feature is disabled can be considered, with a startup option to bypass that check if desired.David J.",
"msg_date": "Tue, 30 Jan 2024 22:10:01 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Tuesday, January 30, 2024, Tom Lane <[email protected]> wrote:\n>> My larger point here is that trying to enforce restrictions on\n>> superusers *within* Postgres is simply not a good plan, for\n>> largely the same reasons that Robert questioned making the\n>> GUC mechanism police itself. It needs to be done outside,\n>> either at the filesystem level or via some other kernel-level\n>> security system.\n\n> The idea of adding a file to the data directory appeals to me.\n>\n> optional_runtime_features.conf\n> alter_system=enabled\n> copy_from_program=enabled\n> copy_to_program=disabled\n\n... so, exactly what keeps an uncooperative superuser from\noverwriting that file?\n\nYou cannot enforce such restrictions within Postgres.\nIt has to be done by an outside mechanism. If you think\ndifferent, you are mistaken.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jan 2024 00:28:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
    "msg_contents": "On Tuesday, January 30, 2024, Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Tuesday, January 30, 2024, Tom Lane <[email protected]> wrote:\n> >> My larger point here is that trying to enforce restrictions on\n> >> superusers *within* Postgres is simply not a good plan, for\n> >> largely the same reasons that Robert questioned making the\n> >> GUC mechanism police itself. It needs to be done outside,\n> >> either at the filesystem level or via some other kernel-level\n> >> security system.\n>\n> > The idea of adding a file to the data directory appeals to me.\n> >\n> > optional_runtime_features.conf\n> > alter_system=enabled\n> > copy_from_program=enabled\n> > copy_to_program=disabled\n>\n> ... so, exactly what keeps an uncooperative superuser from\n> overwriting that file?\n>\n> You cannot enforce such restrictions within Postgres.\n> It has to be done by an outside mechanism. If you think\n> different, you are mistaken.\n>\n\nIf the only user on the OS that can modify that file is root, how does the\nsuperuser, who is hard coded to not be root, modify it? The root/admin\nuser on the OS and it’s filesystem permissions is the outside mechanism\nbeing suggested here.\n\nIf the complaint is that the in-memory boolean is changeable by the\nsuperuser, or even the logic pertaining to the error branch of the code,\nthen yes this is a lost cause.\n\nBut root prevents superuser from controlling that file and then that file\ncan prevent the superuser from escaping to the operating system and\nleveraging its OS postgres user.\n\nDavid J.",
"msg_date": "Tue, 30 Jan 2024 22:43:57 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On 31.01.24 06:28, Tom Lane wrote:\n>> The idea of adding a file to the data directory appeals to me.\n>>\n>> optional_runtime_features.conf\n>> alter_system=enabled\n>> copy_from_program=enabled\n>> copy_to_program=disabled\n> ... so, exactly what keeps an uncooperative superuser from\n> overwriting that file?\n\nThe point of this feature would be to keep the honest people honest.\n\nThe first thing I did when ALTER SYSTEM came out however many years ago \nwas to install Nagios checks to warn when postgresql.auto.conf exists. \nBecause the thing is an attractive nuisance, especially when you want to \ndo centralized configuration control. Of course you can bypass it using \nCOPY PROGRAM etc., but then you *know* that you are *bypassing* \nsomething. If you just see ALTER SYSTEM, you'll think, \"that is \nobviously the appropriate tool\", and there is no generally accepted way \nto communicate that, in particular environment, it might not be.\n\n\n\n",
"msg_date": "Wed, 31 Jan 2024 08:43:14 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
    "msg_contents": "Hi there,\n\nI very much like the idea of a file in the data directory that also\ncontrols the copy operations.\n\nJust wanted to highlight though that in our operator we have already\napplied the read-only postgresql.auto.conf trick to disable the system (see\nhttps://cloudnative-pg.io/documentation/current/postgresql_conf/#enabling-alter-system).\nHowever, having that file read-only triggered an issue when using pg_rewind\nto resync a former primary, as pg_rewind immediately bails out when a\nread-only file is encountered in the PGDATA (see\nhttps://github.com/cloudnative-pg/cloudnative-pg/issues/3698).\n\nWe might keep this in mind if we go down the path of the separate file.\n\nThanks,\nGabriele\n\nOn Wed, 31 Jan 2024 at 08:43, Peter Eisentraut <[email protected]> wrote:\n\n> On 31.01.24 06:28, Tom Lane wrote:\n> >> The idea of adding a file to the data directory appeals to me.\n> >>\n> >> optional_runtime_features.conf\n> >> alter_system=enabled\n> >> copy_from_program=enabled\n> >> copy_to_program=disabled\n> > ... so, exactly what keeps an uncooperative superuser from\n> > overwriting that file?\n>\n> The point of this feature would be to keep the honest people honest.\n>\n> The first thing I did when ALTER SYSTEM came out however many years ago\n> was to install Nagios checks to warn when postgresql.auto.conf exists.\n> Because the thing is an attractive nuisance, especially when you want to\n> do centralized configuration control. Of course you can bypass it using\n> COPY PROGRAM etc., but then you *know* that you are *bypassing*\n> something. If you just see ALTER SYSTEM, you'll think, \"that is\n> obviously the appropriate tool\", and there is no generally accepted way\n> to communicate that, in particular environment, it might not be.\n>\n>\n\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com",
"msg_date": "Wed, 31 Jan 2024 11:16:34 +0100",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Jan 31, 2024 at 12:28 AM Tom Lane <[email protected]> wrote:\n> You cannot enforce such restrictions within Postgres.\n> It has to be done by an outside mechanism. If you think\n> different, you are mistaken.\n\nIt seems like the biggest reason why we can't enforce such\nrestrictions with Postgres is that you won't hear of anyone committing\nany code which would let us enforce such restrictions in Postgres. I'm\nnot saying that there's no other problem here, but you're just digging\nin your heels. I wrote upthread \"We can't resolve the tension between\nthose two things in either direction by somebody hammering on the side\nof the argument that they believe to be correct and ignoring the other\none\" and you replied to that by quoting what I said about the side of\nthe argument that you believe and hammering on it some more. I really\nwish you wouldn't do stuff like that.\n\nOne thing that I think might be helpful here is to address the\nquestion of exactly how the superuser can get general-purpose\nfilesystem access. They can definitely do it if there are any\nuntrusted PLs installed, but the person who configures the machine can\ncontrol that. They can also do it if extensions like adminpack are\navailable, but the server administrator can control that, too. They\ncan do it through COPY TO/FROM PROGRAM, but we could provide a way to\nrestrict that, and I think an awful lot of people want that. I don't\nknow of any other \"normal\" way of getting filesystem access, but the\nsuperuser can also hack the system catalogs. That means they can\ncreate a function definition that tries to run an arbitrary function\neither in PostgreSQL itself or any .so they can get their hands on --\nbut this is a much less powerful technique since\n5ded4bd21403e143dd3eb66b92d52732fdac1945 removed the version 0 calling\nconvention. You can no longer manufacture calls to random C functions\nthat aren't expecting to be called from SQL. 
The superuser can also\narrange to call a function that *is* intended to be SQL-callable with\nthe wrong argument types. It's not hard to manufacture a crash that\nway, because if you call a function that's expecting a varlena with an\ninteger, you can induce the function to read more memory than intended\nand run right off the stack. I'm not quite sure whether this can be\nparlayed into arbitrary code execution; I think it's possible.\n\nAnd, then, of course, you can use ALTER SYSTEM to set archive_command\nor restore_command or similar to a shell command of your choosing.\n\nWhat else is there? We should actually document the whole list of ways\nthat a superuser can escape the sandbox. Because right now there are\ntons of people, even experienced PG users, who think that superusers\ncan't escape from PG at all, or that it's just about COPY TO/FROM\nPROGRAM. The lack of clarity about what the issues are makes\nintelligent discussion difficult. Our documentation hints at the fact\nthat there's no privilege boundary between the superuser and the OS\nuser, but it's not very clear or very detailed or in any very central\nplace, and it's not surprising that not everyone understands the\nsituation correctly.\n\nAt any rate, unless there are way more ways to get filesystem access\nthan what I've listed here, it's not unreasonable for people to want\nto shut off the most obvious ones, like COPY TO/FROM PROGRAM and ALTER\nSYSTEM. And there's no real reason we can't provide a way to do that.\nIt's just sticking your head in the stand to say \"well, because we\ncan't prevent people from crafting a stack overrun attack to access\nthe filesystem, we shouldn't have a feature that tells them ALTER\nSYSTEM is disabled on this instance.\"\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jan 2024 09:35:12 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Jan 31, 2024 at 5:16 AM Gabriele Bartolini\n<[email protected]> wrote:\n> I very much like the idea of a file in the data directory that also controls the copy operations.\n>\n> Just wanted to highlight though that in our operator we have already applied the read-only postgresql.auto.conf trick to disable the system (see https://cloudnative-pg.io/documentation/current/postgresql_conf/#enabling-alter-system). However, having that file read-only triggered an issue when using pg_rewind to resync a former primary, as pg_rewind immediately bails out when a read-only file is encountered in the PGDATA (see https://github.com/cloudnative-pg/cloudnative-pg/issues/3698).\n>\n> We might keep this in mind if we go down the path of the separate file.\n\nYeah. It would be possible to teach pg_rewind and other utilities to\nhandle unreadable or unwritable files in the data directory, but I'm\nnot sure that's the best path forward here, and it would require some\nconsensus that it's the way we want to go.\n\nAnother option I thought of would be to control these sorts of things\nwith a command-line switch. I doubt whether that does anything really\nfundamental from a security point of view, but it removes the control\nof the toggles from anything in the data directory while still leaving\nit within the server administrator's remit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jan 2024 10:56:13 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 04:25:12PM -0500, Robert Haas wrote:\n> I don't think we should pretend like one of the two paragraphs above\n> is valid and the other is hot garbage. That's not solving anything. We\n> can't resolve the tension between those two things in either direction\n> by somebody hammering on the side of the argument that they believe to\n> be correct and ignoring the other one.\n\nWhat if we generate log messages when certain commands are used, like\nALTER TABLE? We could have GUC which controls which commands are\nlogged.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 1 Feb 2024 07:33:04 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Feb 1, 2024 at 7:33 AM Bruce Momjian <[email protected]> wrote:\n> On Tue, Jan 30, 2024 at 04:25:12PM -0500, Robert Haas wrote:\n> > I don't think we should pretend like one of the two paragraphs above\n> > is valid and the other is hot garbage. That's not solving anything. We\n> > can't resolve the tension between those two things in either direction\n> > by somebody hammering on the side of the argument that they believe to\n> > be correct and ignoring the other one.\n>\n> What if we generate log messages when certain commands are used, like\n> ALTER TABLE? We could have GUC which controls which commands are\n> logged.\n\nWell, as I understand it, that doesn't solve the problem here. The\nproblem some people want to solve here seems to be:\n\nOn my system, the PostgreSQL configuration parameters are being\nmanaged by $EXTERNAL_TOOL. Therefore, they should not be managed by\nPostgreSQL itself. Therefore, if someone uses ALTER SYSTEM, they've\nmade a mistake, so we should give them an ERROR telling them that,\nlike:\n\nERROR: you're supposed to update the configuration via k8s, not ALTER\nSYSTEM, you dummy!\nDETAIL: Stop being an idiot.\n\nThe exact message text might need some wordsmithing. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Feb 2024 10:27:56 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On 31.01.24 11:16, Gabriele Bartolini wrote:\n> I very much like the idea of a file in the data directory that also \n> controls the copy operations.\n> \n> Just wanted to highlight though that in our operator we have already \n> applied the read-only postgresql.auto.conf trick to disable the system \n> (see \n> https://cloudnative-pg.io/documentation/current/postgresql_conf/#enabling-alter-system <https://cloudnative-pg.io/documentation/current/postgresql_conf/#enabling-alter-system>). However, having that file read-only triggered an issue when using pg_rewind to resync a former primary, as pg_rewind immediately bails out when a read-only file is encountered in the PGDATA (see https://github.com/cloudnative-pg/cloudnative-pg/issues/3698 <https://github.com/cloudnative-pg/cloudnative-pg/issues/3698>).\n> \n> We might keep this in mind if we go down the path of the separate file.\n\nHow about ALTER SYSTEM is disabled if the file \npostgresql.auto.conf.disabled exists? This is somewhat similar to making \nthe file read-only, but doesn't risk other tools breaking when they \nencounter such a file. And it's more obvious and self-explaining.\n\n\n\n\n",
"msg_date": "Tue, 6 Feb 2024 15:10:27 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
    "msg_contents": "On Tue, Feb 6, 2024 at 7:10 AM Peter Eisentraut <[email protected]>\nwrote:\n\n>\n> How about ALTER SYSTEM is disabled if the file\n> postgresql.auto.conf.disabled exists? This is somewhat similar to making\n> the file read-only, but doesn't risk other tools breaking when they\n> encounter such a file. And it's more obvious and self-explaining.\n>\n\nA separate configuration file would be self-documenting and able to always\nexist; the same properties as postgres.conf\n\nISTM the main requirement regardless of how the file system API is designed\n- assuming there is a filesystem API - is that the running postgres process\nbe unable to write to the file. It seems immaterial how the OS admin\naccomplishes that goal.\n\nThe command line argument method seems appealing but it seems harder in\nthat case to ensure that the postgres process be disallowed from modifyIng\nwhatever file defines what should be run.\n\nOne concern with a file configuration is that if we require it to be\npresent in the data directory that goes somewhat against the design of\nallowing configuration files to be placed anywhere by changing the\nconfig_file guc.\n\nAny design should factor in the almost immediate need to be extended to\nprevent copy variants that touch the local filesystem or shell directly.\n\nI was pondering a directory in pgdata where you could add *.disabled files\nindicating which features to disable. This is a bit more pluggable than a\nsingle configuration file but the later still seems better to me.\n\nDavid J.",
"msg_date": "Tue, 6 Feb 2024 07:38:06 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, 30 Jan 2024 at 18:49, Robert Haas <[email protected]> wrote:\n> I also think that using the GUC system to manage itself is a little\n> bit suspect. I wonder if it would be better to do this some other way,\n> e.g. a sentinel file in the data directory. For example, suppose we\n> refuse ALTER SYSTEM if $PGDATA/disable_alter_system exists, or\n> something like that.\n\nOn Tue, 6 Feb 2024 at 15:10, Peter Eisentraut <[email protected]> wrote:\n> How about ALTER SYSTEM is disabled if the file\n> postgresql.auto.conf.disabled exists? This is somewhat similar to making\n> the file read-only, but doesn't risk other tools breaking when they\n> encounter such a file. And it's more obvious and self-explaining.\n\nI'm not convinced we need a new file to disable ALTER SYSTEM. I feel\nlike an \"enable_alter_system\" GUC that defaults to ON would work fine\nfor this. If we make that GUC be PGC_POSTMASTER then an operator can\ndisable ALTER SYSTEM either with a command line argument or by\nchanging the main config file. Since this feature is mostly useful\nwhen the config file is managed by an external system, it seems rather\nsimple for that system to configure one extra GUC in the config file.\n\n\n",
"msg_date": "Tue, 6 Feb 2024 16:22:28 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Fri, Sep 8, 2023, at 16:17, Gabriele Bartolini wrote:\n> ```\n> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> ERROR: could not open file \"postgresql.auto.conf\": Permission denied\n> ```\n\n+1 to simply mark postgresql.auto.conf file as not being writeable.\n\nTo improve the UX experience, how about first checking if the file is not writeable, or catch EACCESS, and add a user-friendly hint?\n\n```\npostgres=# ALTER SYSTEM SET wal_level TO minimal;\nERROR: could not open file \"postgresql.auto.conf\": Permission denied\nHINT: The ALTER SYSTEM command is effectively disabled as the configuration file is set to read-only.\n```\n\nOn Fri, Sep 8, 2023, at 23:43, Magnus Hagander wrote:\n> We need a \"allowlist\" of things a user can do, rather than a blocklist\n> of \"they can do everything they can possibly think of and a computer\n> is capable of doing, except for this one specific thing\". Blocklisting\n> individual permissions of a superuser will never be secure.\n\n+1 for preferring an \"allowlist\" approach over a blocklist.\n\nIn a way, I think this is similar to the project's philosophy on Query Hints, which I strongly support as I think it leads to a better PostgreSQL over the long term. It creates a crucial feedback loop between users facing query planner issues and our developer community, providing essential insights for enhancing the Query Planner.\n\nIf users were to simply apply Query Hints as a quick fix instead of reporting underlying problems, we would often lose these valuable opportunities for improvement of the Query Planner.\n\nSimilarly, I think it's crucial to identify functionalities that currently require superuser privileges and cannot yet be explicitly granted to non-superusers.\n\n/Joel\n\n\n",
"msg_date": "Wed, 07 Feb 2024 09:56:58 +0100",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On 06.02.24 16:22, Jelte Fennema-Nio wrote:\n> On Tue, 30 Jan 2024 at 18:49, Robert Haas <[email protected]> wrote:\n>> I also think that using the GUC system to manage itself is a little\n>> bit suspect. I wonder if it would be better to do this some other way,\n>> e.g. a sentinel file in the data directory. For example, suppose we\n>> refuse ALTER SYSTEM if $PGDATA/disable_alter_system exists, or\n>> something like that.\n> \n> On Tue, 6 Feb 2024 at 15:10, Peter Eisentraut <[email protected]> wrote:\n>> How about ALTER SYSTEM is disabled if the file\n>> postgresql.auto.conf.disabled exists? This is somewhat similar to making\n>> the file read-only, but doesn't risk other tools breaking when they\n>> encounter such a file. And it's more obvious and self-explaining.\n> \n> I'm not convinced we need a new file to disable ALTER SYSTEM. I feel\n> like an \"enable_alter_system\" GUC that defaults to ON would work fine\n> for this. If we make that GUC be PGC_POSTMASTER then an operator can\n> disable ALTER SYSTEM either with a command line argument or by\n> changing the main config file. Since this feature is mostly useful\n> when the config file is managed by an external system, it seems rather\n> simple for that system to configure one extra GUC in the config file.\n\nYeah, I'm all for that, but some others didn't like handling this in the \nGUC system, so I'm throwing around other ideas.\n\n\n\n",
"msg_date": "Wed, 7 Feb 2024 11:16:06 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
    "msg_contents": "Hi Jelte,\n\nOn Tue, 6 Feb 2024 at 16:22, Jelte Fennema-Nio <[email protected]> wrote:\n\n> I'm not convinced we need a new file to disable ALTER SYSTEM. I feel\n> like an \"enable_alter_system\" GUC that defaults to ON would work fine\n> for this. If we make that GUC be PGC_POSTMASTER then an operator can\n> disable ALTER SYSTEM either with a command line argument or by\n> changing the main config file. Since this feature is mostly useful\n> when the config file is managed by an external system, it seems rather\n> simple for that system to configure one extra GUC in the config file.\n>\n\nThis is mostly the approach I have taken in the patch, except allowing to\nchange the value in the configuration file. The patch at the moment was\nenforcing just the setting at startup (which is more than enough for a\nKubernetes operator given that Postgres runs in the container). I had done\nsome experiments enabling the change in the configuration file, but wasn't\nsure in which `config_group` to place the 'enable_alter_system` GUC, based\non the src/include/utils/guc_tables.h. Any thoughts/hints?\n\nCheers,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com",
"msg_date": "Wed, 7 Feb 2024 11:35:14 +0100",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
    "msg_contents": "Hi Joel,\n\nOn Wed, 7 Feb 2024 at 10:00, Joel Jacobson <[email protected]> wrote:\n\n> On Fri, Sep 8, 2023, at 16:17, Gabriele Bartolini wrote:\n> > ```\n> > postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> > ERROR: could not open file \"postgresql.auto.conf\": Permission denied\n> > ```\n>\n> +1 to simply mark postgresql.auto.conf file as not being writeable.\n>\n> To improve the UX experience, how about first checking if the file is not\n> writeable, or catch EACCESS, and add a user-friendly hint?\n>\n> ```\n> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> ERROR: could not open file \"postgresql.auto.conf\": Permission denied\n> HINT: The ALTER SYSTEM command is effectively disabled as the\n> configuration file is set to read-only.\n> ```\n>\n\nThis would do - provided we fix the issue with pg_rewind not handling\nread-only files in PGDATA.\n\nCheers,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com",
"msg_date": "Wed, 7 Feb 2024 11:37:20 +0100",
"msg_from": "Gabriele Bartolini <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, 7 Feb 2024 at 11:16, Peter Eisentraut <[email protected]> wrote:\n> On 06.02.24 16:22, Jelte Fennema-Nio wrote:\n> > On Tue, 30 Jan 2024 at 18:49, Robert Haas <[email protected]> wrote:\n> >> I also think that using the GUC system to manage itself is a little\n> >> bit suspect. I wonder if it would be better to do this some other way,\n> >> e.g. a sentinel file in the data directory. For example, suppose we\n> >> refuse ALTER SYSTEM if $PGDATA/disable_alter_system exists, or\n> >> something like that.\n> > I'm not convinced we need a new file to disable ALTER SYSTEM. I feel\n> > like an \"enable_alter_system\" GUC that defaults to ON would work fine\n> > for this. If we make that GUC be PGC_POSTMASTER then an operator can\n> > disable ALTER SYSTEM either with a command line argument or by\n> > changing the main config file. Since this feature is mostly useful\n> > when the config file is managed by an external system, it seems rather\n> > simple for that system to configure one extra GUC in the config file.\n>\n> Yeah, I'm all for that, but some others didn't like handling this in the\n> GUC system, so I'm throwing around other ideas.\n\nOkay, then we're agreeing here. Reading back the email thread the only\nargument against GUCs that I could find was Robert thinking it is a \"a\nlittle bit suspect\" to let the GUC system manage itself. This would\nnot be the first time we're doing that though, the same is true for\n\"config_file\" and \"data_directory\" (which even needed the introduction\nof GUC_DISALLOW_IN_AUTO_FILE).\n\nSo, I personally would like to hear some other options before we start\nentertaining some new ways of configuring Postgres its behaviour (even\nthe read-only postgresql.auto.conf seems quite strange to me).\n\n\n",
"msg_date": "Wed, 7 Feb 2024 14:31:28 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wednesday, February 7, 2024, Joel Jacobson <[email protected]> wrote:\n\n>\n> On Fri, Sep 8, 2023, at 23:43, Magnus Hagander wrote:\n> > We need a \"allowlist\" of things a user can do, rather than a blocklist\n> > of \"they can do everything they can possibly think of and a computer\n> > is capable of doing, except for this one specific thing\". Blocklisting\n> > individual permissions of a superuser will never be secure.\n>\n> +1 for preferring an \"allowlist\" approach over a blocklist.\n>\n\nThe status quo is allow everything so while the theory is nice it seems\nthat requiring it to be allowlist is just going to scare anyone off of\nactually improving matters.\n\nAlso, this isn’t necessarily about blocking the superuser, it is about\neffectively disabling features deemed undesirable at runtime. All features\nenabled by default seems like a valid policy.\n\nWhile the only features likely to be disabled are those involving someone’s\ndefinition of security the security policy is still that superuser can do\neverything the system is capable of doing.\n\nDavid J.",
"msg_date": "Wed, 7 Feb 2024 06:34:06 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, 7 Feb 2024 at 11:35, Gabriele Bartolini\n<[email protected]> wrote:\n> This is mostly the approach I have taken in the patch, except allowing to change the value in the configuration file.\n\n(I had missed the patch in the long thread). I think it would be nice\nto have this be PGC_SIGHUP, and set GUC_DISALLOW_IN_AUTO_FILE. That\nway this behaviour can be changed without shutting down postgres (but\nnot with ALTER SYSTEM, because that seems confusing).\n\n> but wasn't sure in which `config_group` to place the 'enable_alter_system` GUC, based on the src/include/utils/guc_tables.h. Any thoughts/hints?\n\nI agree that none of the existing groups fit particularly well. I see\na few options:\n\n1. Create a new group (maybe something like \"Administration\" or\n\"Enabled Features\")\n2. Use FILE_LOCATIONS, which seems sort of related at least.\n3. Instead of adding an \"enable_alter_system\" GUC we would add an\n\"auto_config_file\" guc (and use the FILE_LOCATIONS group). Then if a\nuser sets \"auto_config_file\" to an empty string, we would disable the\nauto config file and thus ALTER SYSTEM.\n\nI'd prefer 1 or 3 I think. I kinda like option 3 for its consistency\nof being able to configure other config file locations, but I think\nthat would be quite a bit more work, and I'm not sure how useful it is\nto change the location of the auto file.\n\n\n",
"msg_date": "Wed, 7 Feb 2024 14:49:01 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Feb 7, 2024, at 10:49 AM, Jelte Fennema-Nio wrote:\n> On Wed, 7 Feb 2024 at 11:35, Gabriele Bartolini\n> <[email protected]> wrote:\n> > This is mostly the approach I have taken in the patch, except allowing to change the value in the configuration file.\n> \n> (I had missed the patch in the long thread). I think it would be nice\n> to have this be PGC_SIGHUP, and set GUC_DISALLOW_IN_AUTO_FILE. That\n> way this behaviour can be changed without shutting down postgres (but\n> not with ALTER SYSTEM, because that seems confusing).\n\nBased on Gabriele's use case (Kubernetes) and possible others like a cloud\nvendor, I think it should be more restrictive not permissive. I mean,\nPGC_POSTMASTER and *only* allow this GUC to be from command-line. (I don't\ninspect the code but maybe setting GUC_DISALLOW_IN_FILE is not sufficient to\naccomplish this goal.) The main advantages of the GUC system are (a) the\nsetting is dynamically assigned during startup and (b) you can get the current\nsetting via SQL.\n\nAnother idea is to set it per cluster during initdb like data checksums. You\ndon't rely on the GUC system but store this information into pg_control. I\nthink for the referred use cases, you will never have to change it but you can\nhave a mechanism to change it.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 07 Feb 2024 14:07:08 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On 2024-02-07 We 05:37, Gabriele Bartolini wrote:\n> Hi Joel,\n>\n> On Wed, 7 Feb 2024 at 10:00, Joel Jacobson <[email protected]> wrote:\n>\n> On Fri, Sep 8, 2023, at 16:17, Gabriele Bartolini wrote:\n> > ```\n> > postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> > ERROR: could not open file \"postgresql.auto.conf\": Permission\n> denied\n> > ```\n>\n> +1 to simply mark postgresql.auto.conf file as not being writeable.\n>\n> To improve the UX experience, how about first checking if the file\n> is not writeable, or catch EACCESS, and add a user-friendly hint?\n>\n> ```\n> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> ERROR: could not open file \"postgresql.auto.conf\": Permission denied\n> HINT: The ALTER SYSTEM command is effectively disabled as the\n> configuration file is set to read-only.\n> ```\n>\n>\n> This would do - provided we fix the issue with pg_rewind not handling\n> read-only files in PGDATA.\n>\n\nThis seems like the simplest solution. And maybe we should be fixing\npg_rewind regardless of this issue?\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 10 Feb 2024 11:16:46 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Sat, Feb 10, 2024 at 9:47 PM Andrew Dunstan <[email protected]> wrote:\n>> To improve the UX experience, how about first checking if the file is not writeable, or catch EACCESS, and add a user-friendly hint?\n>>\n>> ```\n>> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n>> ERROR: could not open file \"postgresql.auto.conf\": Permission denied\n>> HINT: The ALTER SYSTEM command is effectively disabled as the configuration file is set to read-only.\n>> ```\n>\n> This would do - provided we fix the issue with pg_rewind not handling read-only files in PGDATA.\n>\n> This seems like the simplest solution. And maybe we should be fixing pg_rewind regardless of this issue?\n\nIs it just pg_rewind? What about pg_basebackup, for example? Will it\npreserve permissions on that file in both directory and tar-format\nmode? Will any of the other tools that access the data directory via\nthe filesystem care about this? What about third-party backup tools,\nor other third-party tools that access the data directory? I think in\ngeneral we've taken the approach so far of basically making everything\nin the data directory have the same permissions as each other, and\noverall either everything is only user-accessible, or it's also\ngroup-readable, and there's a fair amount of code in various places\nthat assumes these things are true.\n\nWhat I like about using a sentinel file for this -- I like Peter's\nsuggestion of postgresql.auto.conf.disabled -- is that it keeps that\nproperty that our tools and third-party tools mostly don't need to\ncare about file permissions, because they're all uniform. I think it\nmay be simpler in the long run if we stick with that idea. I suspect\nthat if we deviate from it we'll slowly find bugs here and there and\nhave to add special-case logic in various unanticipated places to\ncompensate.\n\nWe can also make a GUC work, if people prefer that approach. 
If we go\nthat route, the suggestion of making it PGC_SIGHUP and\nGUC_DISALLOW_IN_AUTO_FILE is a good one. When I earlier referred to\nmanaging the GUC system with GUCs as \"suspect,\" what I really meant\nwas that (1) there shouldn't be an easy way to make an end run around\nthe thing that's disabling ALTER SYSTEM and (2) you shouldn't be able\nto use ALTER SYSTEM to disable ALTER SYSTEM. It sounds like those\nflags might be strong enough to prevent that. If it turns out they're\nnot we can always add more flags.\n\nIt's not entirely clear to me what our wider vision is here. Some\npeople seem to want a whole series of flags that can disable various\nthings that the superuser might otherwise be able to do, which is fine\nwith me, except that we have no plan to disable all of the things a\nsuperuser can do to get filesystem/OS access, and I don't think it's\npossible to construct such a plan. To do so, we'd have to lock down\nthe superuser account to the point where it can't create functions\nwritten in any untrusted procedural language -- in particular, C\nfunctions -- which would preclude installing most extensions; and we'd\nalso have to forbid direct access to the catalogs. I think those kinds\nof restrictions are basically untenable. A service provider might not\nwant a customer to have the ability to do those kinds of things, but\nsome user must retain those capabilities, at the very least to handle\nemergencies. So, the solution there seems to be for the service\nprovider to be the superuser and the customer to not be the\nsuper-user, rather than for the service provider and the customer to\nboth be super-user but with some attempt at sandboxing. I'm not trying\nto kill this particular proposal, which I think is broadly reasonable,\nbut I'm still uncomfortable with the fact that it looks a lot like\npseudo-security.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 11 Feb 2024 19:28:20 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Sun, Feb 11, 2024, at 14:58, Robert Haas wrote:\n> It's not entirely clear to me what our wider vision is here. Some\n> people seem to want a whole series of flags that can disable various\n> things that the superuser might otherwise be able to do,\n\nYes, that's what bothers me a little with the idea of a special fix for this special case.\n\nOn Thu, Sep 7, 2023, at 22:27, Tom Lane wrote:\n> If you nonetheless feel that that's a good idea for your use case,\n> you can implement the restriction with an event trigger or the like.\n\nOn Fri, Sep 15, 2023, at 11:18, Daniel Gustafsson wrote:\n>> On 11 Sep 2023, at 15:50, Magnus Hagander <[email protected]> wrote:\n>> \n>> On Sat, Sep 9, 2023 at 5:14 PM Alvaro Herrera <[email protected]> wrote:\n>>> \n>>> On 2023-Sep-08, Magnus Hagander wrote:\n>>> \n>>>> Now, it might be that you don't care at all about the *security* side\n>>>> of the feature, and only care about the convenience side. But in that\n>>>> case, the original suggestion from Tom of using an even trigger seems\n>>>> like a fine enough solution?\n>>> \n>>> ALTER SYSTEM, like all system-wide commands, does not trigger event\n>>> triggers. These are per-database only.\n>>> \n>>> https://www.postgresql.org/docs/16/event-trigger-matrix.html\n>> \n>> Hah, didn't think of that. And yes, that's a very good point. But one\n>> way to fix that would be to actually make event triggers for system\n>> wide commands, which would then be useful for other things as well...\n>\n> Wouldn't having system wide EVTs be a generic solution which could be the\n> infrastructure for this requested change as well as others in the same area?\n\n+1\n\nI like the wider vision of providing the necessary infrastructure to provide a solution for the general case.\n\n/Joel\n\n\n",
"msg_date": "Tue, 13 Feb 2024 08:05:03 +0100",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Feb 13, 2024 at 2:05 AM Joel Jacobson <[email protected]> wrote:\n> > Wouldn't having system wide EVTs be a generic solution which could be the\n> > infrastructure for this requested change as well as others in the same area?\n>\n> +1\n>\n> I like the wider vision of providing the necessary infrastructure to provide a solution for the general case.\n\nWe don't seem to be making much progress here.\n\nAs far as I can see from reading the thread, most people agree that\nit's reasonable to have some way to disable ALTER SYSTEM, but there\nare at least six competing ideas about how to do that:\n\n1. command-line option\n2. GUC\n3. event trigger\n4. extension\n5. sentinel file\n6. remove permissions on postgresql.auto.conf\n\nAs I see it, (5) or (6) are most convenient for the system\nadministrator, since they let that person make changes without needing\nto log into the database or, really, worry very much about the\ndatabase's usual configuration mechanisms at all, and (5) seems like\nless work to implement than (6), because (6) probably breaks a bunch\nof client tools in weird ways that might not be easy for us to\ndiscover during patch review. (1) doesn't allow changing things at\nruntime, and might require the system administrator to fiddle with the\nstartup scripts, which seems like it could be inconvenient. (2) and\n(3) seem like they put the superuser in a position to easily reverse a\npolicy about what the superuser ought to do, but in the case of (2),\nthat can be mitigated if the GUC can only be set in postgresql.conf\nand not elsewhere. (4) has no real advantages except for allowing core\nto maintain the fiction that we don't support this while actually\nsupporting it; I think we should reject that approach outright.\n\nSo what I'd like to see is a patch that implements (5), or in the\nalternative (2) but with the GUC being PGC_SIGHUP and\nGUC_DISALLOW_IN_AUTO_FILE. 
I believe there would be adequate consensus\nto proceed with either of those approaches. Anybody feel like coding\nit up?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 12:37:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
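Of the six ideas Robert enumerates, the sentinel-file approach (5) is easy to model. The sketch below is illustrative only, not PostgreSQL source code: it borrows the file name `disable_alter_system` from Robert's earlier suggestion, and the helper function is hypothetical.

```python
# Illustrative model of idea (5): refuse ALTER SYSTEM whenever a sentinel
# file exists in the data directory. Not actual PostgreSQL code; the
# sentinel name "disable_alter_system" follows Robert's earlier suggestion.
import os
import tempfile


def alter_system_allowed(pgdata: str) -> bool:
    """ALTER SYSTEM would be refused while the sentinel file exists."""
    return not os.path.exists(os.path.join(pgdata, "disable_alter_system"))


pgdata = tempfile.mkdtemp()
print(alter_system_allowed(pgdata))  # no sentinel yet -> True

# An administrator "disables" ALTER SYSTEM by creating the sentinel...
open(os.path.join(pgdata, "disable_alter_system"), "w").close()
print(alter_system_allowed(pgdata))  # -> False

# ...and re-enables it by simply removing the file, with no restart,
# reload, or database login needed -- the convenience argued for above.
os.remove(os.path.join(pgdata, "disable_alter_system"))
print(alter_system_allowed(pgdata))  # -> True
```

Because the toggle is just a file, every file in the data directory keeps uniform permissions, which is the property Robert notes that backup and rewind tools implicitly depend on.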
{
"msg_contents": "On Thu, 14 Mar 2024 at 17:37, Robert Haas <[email protected]> wrote:\n> or in the\n> alternative (2) but with the GUC being PGC_SIGHUP and\n> GUC_DISALLOW_IN_AUTO_FILE. I believe there would be adequate consensus\n> to proceed with either of those approaches. Anybody feel like coding\n> it up?\n\nHere is a slightly modified version of Gabriele's original patch,\nwhich already implemented the GUC approach. The changes I made are adding\nPGC_SIGHUP and GUC_DISALLOW_IN_AUTO_FILE as well as adding some more\ndocs.",
"msg_date": "Thu, 14 Mar 2024 19:27:57 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
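The intended effect of the PGC_SIGHUP plus GUC_DISALLOW_IN_AUTO_FILE combination can be sketched as a toy model. This is a simplified illustration only; the real logic lives in PostgreSQL's guc.c and is considerably more involved, and the `source_accepted` helper here is hypothetical.

```python
# Toy model (not guc.c): a GUC flagged GUC_DISALLOW_IN_AUTO_FILE accepts a
# value from postgresql.conf or the command line, but rejects one coming
# from postgresql.auto.conf -- i.e. one written by ALTER SYSTEM.
GUC_DISALLOW_IN_AUTO_FILE = 0x1


def source_accepted(flags: int, source: str) -> bool:
    """Would a setting from this source be accepted for a GUC with these flags?"""
    if source == "auto_file" and flags & GUC_DISALLOW_IN_AUTO_FILE:
        return False  # ALTER SYSTEM cannot set (or unset) this GUC
    return True


flags = GUC_DISALLOW_IN_AUTO_FILE
print(source_accepted(flags, "config_file"))   # postgresql.conf: True
print(source_accepted(flags, "command_line"))  # postgres -c ...: True
print(source_accepted(flags, "auto_file"))     # via ALTER SYSTEM: False
```

This is why the combination addresses Robert's two concerns: the setting can still be changed at runtime via a config-file edit plus reload (PGC_SIGHUP), but ALTER SYSTEM cannot be used to re-enable itself.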
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> As far as I can see from reading the thread, most people agree that\n> it's reasonable to have some way to disable ALTER SYSTEM, but there\n> are at least six competing ideas about how to do that:\n\n> 1. command-line option\n> 2. GUC\n> 3. event trigger\n> 4. extension\n> 5. sentinel file\n> 6. remove permissions on postgresql.auto.conf\n\nWith the possible exception of #1, every one of these is easily\ndefeatable by an uncooperative superuser. I'm not excited about\nadding a \"security\" feature with such obvious holes in it.\nWe reverted MAINTAIN last year for much less obvious holes;\nhow is it that we're going to look the other way on this one?\n\n#2 with the GUC_DISALLOW_IN_AUTO_FILE flag can be made secure\n(I think) by putting the main postgresql.conf file outside the\ndata directory and then making it not owned by or writable by the\npostgres user. But I doubt that's a common configuration, and\nI'm sure we will get complaints from people who failed to set it\nup that way. The proposed patch certainly doesn't bother to\ndocument the hazard.\n\nReally we'd need to do something about removing superusers'\naccess to the filesystem in order to build something with\nfewer holes. I'm not against inventing such a feature,\nbut it'd take a fair amount of work and likely would end\nin a noticeably less usable system (no plpython for example).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Mar 2024 15:13:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 3:13 PM Tom Lane <[email protected]> wrote:\n> With the possible exception of #1, every one of these is easily\n> defeatable by an uncooperative superuser. I'm not excited about\n> adding a \"security\" feature with such obvious holes in it.\n> We reverted MAINTAIN last year for much less obvious holes;\n> how is it that we're going to look the other way on this one?\n\nWe're going to document that it's not a security feature along the\nlines of what Magnus suggested in\nhttp://postgr.es/m/CABUevEx9m=CV8=WpXVW+rtVVs858kDJ6YpRkExV7n+F6MK05CQ@mail.gmail.com\n\nAnd then maybe someday we'll do this:\n\n> Really we'd need to do something about removing superusers'\n> access to the filesystem in order to build something with\n> fewer holes. I'm not against inventing such a feature,\n> but it'd take a fair amount of work and likely would end\n> in a noticeably less usable system (no plpython for example).\n\nYep. It would be useful if you replied to the portion of\nhttp://postgr.es/m/CA+TgmoasUgkZ27x0XZH4EdmQ_b6JbRT6cSUxf+pHdgj-ESk_zA@mail.gmail.com\nwhere I enumerate the methods that I know about for the superuser to\nget filesystem access. I don't think it's going to be practical to\nblock all of those methods in a single commit, and I'm not entirely\nconvinced that we can ever close all the holes without compromising\nthe superuser's ability to do necessary system administration tasks,\nbut maybe it's possible, and documenting the list of such methods\nwould make it a lot easier for users to understand the risks and\nhackers to pick problems to try to tackle.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 15:23:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Mar 14, 2024 at 3:13 PM Tom Lane <[email protected]> wrote:\n>> With the possible exception of #1, every one of these is easily\n>> defeatable by an uncooperative superuser. I'm not excited about\n>> adding a \"security\" feature with such obvious holes in it.\n\n> We're going to document that it's not a security feature along the\n> lines of what Magnus suggested in\n> http://postgr.es/m/CABUevEx9m=CV8=WpXVW+rtVVs858kDJ6YpRkExV7n+F6MK05CQ@mail.gmail.com\n\nThe patch-of-record contains no such wording. And if this isn't a\nsecurity feature, then what is it? If you have to say to your\n(super) users \"please don't mess with the system configuration\",\nyou might as well just trust them not to do it the easy way as not\nto do it the hard way. If they're untrustworthy, why have they\ngot superuser?\n\nWhat I think this is is a loaded foot-gun painted in kid-friendly\ncolors. People will use it and then file CVEs about how it did\nnot turn out to be as secure as they imagined (probably without\nreading the documentation).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Mar 2024 16:08:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 4:08 PM Tom Lane <[email protected]> wrote:\n> The patch-of-record contains no such wording.\n\nI plan to fix that, if nobody else beats me to it.\n\n> And if this isn't a\n> security feature, then what is it? If you have to say to your\n> (super) users \"please don't mess with the system configuration\",\n> you might as well just trust them not to do it the easy way as not\n> to do it the hard way. If they're untrustworthy, why have they\n> got superuser?\n\nI mean, I feel like this question has been asked and answered before,\nmultiple times, on this thread. If you sincerely don't understand the\nuse case, I can try again to explain it. But somehow I feel like it's\nmore that you just don't like the idea, which is fair, but it seems\nlike a considerable number of people feel otherwise.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 16:38:35 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 1:38 PM Robert Haas <[email protected]> wrote:\n> On Thu, Mar 14, 2024 at 4:08 PM Tom Lane <[email protected]> wrote:\n> > The patch-of-record contains no such wording.\n>\n> I plan to fix that, if nobody else beats me to it.\n>\n> > And if this isn't a\n> > security feature, then what is it? If you have to say to your\n> > (super) users \"please don't mess with the system configuration\",\n> > you might as well just trust them not to do it the easy way as not\n> > to do it the hard way. If they're untrustworthy, why have they\n> > got superuser?\n>\n> I mean, I feel like this question has been asked and answered before,\n> multiple times, on this thread. If you sincerely don't understand the\n> use case, I can try again to explain it. But somehow I feel like it's\n> more that you just don't like the idea, which is fair, but it seems\n> like a considerable number of people feel otherwise.\n\nI know I'm jumping into a long thread here, but I've been following it\nout of interest. I'm sympathetic to the use case, since I used to work\nat a Postgres cloud provider, and while our system intentionally did\nnot give our end users superuser privileges, I can imagine other\nmanaged environments where that's not an issue. I'd like to give\nanswering this question again a shot, because I think this has been a\npersistent misunderstanding in this thread, and I don't think it's\nbeen made all that clear.\n\nIt's not a security feature: it's a usability feature.\n\nIt's a usability feature because, when Postgres configuration is\nmanaged by an outside mechanism (e.g., as in a Kubernetes\nenvironment), ALTER SYSTEM currently allows a superuser to make\nchanges that appear to work, but may be discarded at some point in the\nfuture when that outside mechanism updates the config. 
They may also\nbe represented incorrectly in a management dashboard if that dashboard\nis based on the values in the outside configuration mechanism, rather\nthan values directly from Postgres.\n\nIn this case, the end user with access to Postgres superuser\nprivileges presumably also has access to the outside configuration\nmechanism. The goal is not to prevent them from changing settings, but\nto offer guard rails that prevent them from changing settings in a way\nthat will be unstable (revertible by a future update) or confusing\n(not showing up in a management UI).\n\nThere are challenges here in making sure this is _not_ seen as a\nsecurity feature. But I do think the feature itself is sensible and\nworthwhile.\n\nThanks,\nMaciek\n\n\n",
"msg_date": "Thu, 14 Mar 2024 14:14:29 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 5:15 PM Maciek Sakrejda <[email protected]> wrote:\n> It's not a security feature: it's a usability feature.\n>\n> It's a usability feature because, when Postgres configuration is\n> managed by an outside mechanism (e.g., as in a Kubernetes\n> environment), ALTER SYSTEM currently allows a superuser to make\n> changes that appear to work, but may be discarded at some point in the\n> future when that outside mechanism updates the config. They may also\n> be represented incorrectly in a management dashboard if that dashboard\n> is based on the values in the outside configuration mechanism, rather\n> than values directly from Postgres.\n>\n> In this case, the end user with access to Postgres superuser\n> privileges presumably also has access to the outside configuration\n> mechanism. The goal is not to prevent them from changing settings, but\n> to offer guard rails that prevent them from changing settings in a way\n> that will be unstable (revertible by a future update) or confusing\n> (not showing up in a management UI).\n>\n> There are challenges here in making sure this is _not_ seen as a\n> security feature. But I do think the feature itself is sensible and\n> worthwhile.\n\nThis is what I would have said if I'd tried to offer an explanation,\nexcept you said it better than I would have done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 19:43:15 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 07:43:15PM -0400, Robert Haas wrote:\n> On Thu, Mar 14, 2024 at 5:15 PM Maciek Sakrejda <[email protected]> wrote:\n> > It's not a security feature: it's a usability feature.\n> >\n> > It's a usability feature because, when Postgres configuration is\n> > managed by an outside mechanism (e.g., as in a Kubernetes\n> > environment), ALTER SYSTEM currently allows a superuser to make\n> > changes that appear to work, but may be discarded at some point in the\n> > future when that outside mechanism updates the config. They may also\n> > be represented incorrectly in a management dashboard if that dashboard\n> > is based on the values in the outside configuration mechanism, rather\n> > than values directly from Postgres.\n> >\n> > In this case, the end user with access to Postgres superuser\n> > privileges presumably also has access to the outside configuration\n> > mechanism. The goal is not to prevent them from changing settings, but\n> > to offer guard rails that prevent them from changing settings in a way\n> > that will be unstable (revertible by a future update) or confusing\n> > (not showing up in a management UI).\n> >\n> > There are challenges here in making sure this is _not_ seen as a\n> > security feature. But I do think the feature itself is sensible and\n> > worthwhile.\n> \n> This is what I would have said if I'd tried to offer an explanation,\n> except you said it better than I would have done.\n\nI do think the docs need to clearly say this is not a security feature.\nIn fact, I wonder if the ALTER SYSTEM error message should explain the\nGUC that is causing the failure.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 14 Mar 2024 22:58:37 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, 14 Mar 2024 at 22:15, Maciek Sakrejda <[email protected]> wrote:\n> In this case, the end user with access to Postgres superuser\n> privileges presumably also has access to the outside configuration\n> mechanism. The goal is not to prevent them from changing settings, but\n> to offer guard rails that prevent them from changing settings in a way\n> that will be unstable (revertible by a future update) or confusing\n> (not showing up in a management UI).\n\nGreat explanation! Attached is a much-changed patch that updates the\ndocs and code to reflect this. I particularly liked your use of the\nword \"guard rail\" as that reflects the intent of the feature very well\nIMO. So I included that wording in both the GUC group and the error\ncode.",
"msg_date": "Fri, 15 Mar 2024 11:03:18 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "> On 15 Mar 2024, at 03:58, Bruce Momjian <[email protected]> wrote:\n> \n> On Thu, Mar 14, 2024 at 07:43:15PM -0400, Robert Haas wrote:\n>> On Thu, Mar 14, 2024 at 5:15 PM Maciek Sakrejda <[email protected]> wrote:\n>>> It's not a security feature: it's a usability feature.\n>>> \n>>> It's a usability feature because, when Postgres configuration is\n>>> managed by an outside mechanism (e.g., as in a Kubernetes\n>>> environment), ALTER SYSTEM currently allows a superuser to make\n>>> changes that appear to work, but may be discarded at some point in the\n>>> future when that outside mechanism updates the config. They may also\n>>> be represented incorrectly in a management dashboard if that dashboard\n>>> is based on the values in the outside configuration mechanism, rather\n>>> than values directly from Postgres.\n>>> \n>>> In this case, the end user with access to Postgres superuser\n>>> privileges presumably also has access to the outside configuration\n>>> mechanism. The goal is not to prevent them from changing settings, but\n>>> to offer guard rails that prevent them from changing settings in a way\n>>> that will be unstable (revertible by a future update) or confusing\n>>> (not showing up in a management UI).\n>>> \n>>> There are challenges here in making sure this is _not_ seen as a\n>>> security feature. But I do think the feature itself is sensible and\n>>> worthwhile.\n>> \n>> This is what I would have said if I'd tried to offer an explanation,\n>> except you said it better than I would have done.\n> \n> I do think the docs need to clearly say this is not a security feature.\n\nA usability feature whose purpose is to guard against a superuser willingly\nacting against how the system is managed, or not even knowing how it is\nmanaged, does have a certain security feature smell. 
We've already had a few\nCVEs filed against usability features so I don't think Tom's fears are at all\nungrounded.\n\nAnother quirk for the documentation of this: if I disable ALTER SYSTEM I would\nassume that postgresql.auto.conf is no longer consumed, but it still is (and\nstill needs to be), so maybe \"enable/disable\" is the wrong choice of words?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 15 Mar 2024 11:08:14 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Fri, 15 Mar 2024 at 11:08, Daniel Gustafsson <[email protected]> wrote:\n> Another quirk for the documentation of this: if I disable ALTER SYSTEM I would\n> assume that postgresql.auto.conf is no longer consumed, but it still is (and\n> still need to be), so maybe \"enable/disable\" is the wrong choice of words?\n\nUpdated the docs to reflect this quirk. But I kept the same name for\nthe GUC for now, because I couldn't come up with a better name myself.\nIf someone suggests a better name, I'm happy to change it though.",
"msg_date": "Fri, 15 Mar 2024 12:09:10 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Fri, Mar 15, 2024 at 7:09 AM Jelte Fennema-Nio <[email protected]> wrote:\n> On Fri, 15 Mar 2024 at 11:08, Daniel Gustafsson <[email protected]> wrote:\n> > Another quirk for the documentation of this: if I disable ALTER SYSTEM I would\n> > assume that postgresql.auto.conf is no longer consumed, but it still is (and\n> > still need to be), so maybe \"enable/disable\" is the wrong choice of words?\n>\n> Updated the docs to reflect this quirk. But I kept the same name for\n> the GUC for now, because I couldn't come up with a better name myself.\n> If someone suggests a better name, I'm happy to change it though.\n\nHmm. So in this patch, we have a whole new kind of GUC - guard rails -\nof which enable_alter_system is the first member. Is that what we\nwant? I would have been somewhat inclined to find an existing section\nof postgresql.auto.conf for this parameter, perhaps \"platform and\nversion compatibility\". But if we're going to add a bunch of similar\nGUCs, maybe grouping them all together is the way to go.\n\nEven if that is what we're going to do, do we want to call them \"guard\nrails\"? I'm not sure I'd find that name terribly clear, as a user. We\nknow what we mean right now because we're having a very active\ndiscussion about this topic, but it might not seem as clear to someone\ncoming at it fresh.\n\nOn balance, I'm disinclined to add a new category for this. If we get\nto a point where we have several of these and we want to break them\nout into a new category, we can do it then. Maybe by that time the\nnaming will seem more clear, too.\n\nI also don't think it's good enough to just say that this isn't a\nsecurity feature. Talk is cheap. I think we need to say why it's not a\nsecurity feature. 
So my proposal is something like this, taking a\nbunch of text from Jelte's patch and some inspiration from Magnus's\nearlier remarks:\n\n==\nWhen <literal>enable_alter_system</literal> is set to\n<literal>off</literal>, an error is returned if the <command>ALTER\nSYSTEM</command> command is used. This parameter can only be set in\nthe <filename>postgresql.conf</filename> file or on the server command\nline. The default value is <literal>on</literal>.\n\nNote that this setting cannot be regarded as a security feature. It\nonly disables the <literal>ALTER SYSTEM</literal> command. It does not\nprevent a superuser from changing the configuration remotely using\nother means. A superuser has many ways of executing shell commands at\nthe operating system level, and can therefore modify\n<literal>postgresql.auto.conf</literal> regardless of the value of\nthis setting. The purpose of the setting is to prevent\n<emphasis>accidental</emphasis> modifications via <literal>ALTER\nSYSTEM</literal> in environments where <literal>PostgreSQL</literal>'s\nconfiguration is managed by some outside mechanism. In such\nenvironments, using <command>ALTER SYSTEM</command> to make\nconfiguration changes might appear to work, but then may be discarded\nat some point in the future when that outside mechanism updates the\nconfiguration. Setting this parameter to <literal>false</literal> can\nhelp to avoid such mistakes.\n==\n\nI agree with Daniel's comment that Tom's concerns about people filing\nCVEs are not without merit; indeed, I said the same thing in my first\npost to this thread. However, I also believe that's not a sufficient\nreason for rejecting a feature that many people seem to want. 
I think\nthe root of this problem is that our documentation is totally unclear\nabout the fact that we don't intend for there to be privilege\nseparation between the operating system user and the PostgreSQL\nsuperuser; people want there to be a distinction, and think there is.\nHence CVE-2019-9193, for example. Several people, including me, wrote\nblog posts about how that's not a security vulnerability, but while I\nwas researching mine, I went looking for where in the documentation we\nactually SAY that there's no privilege separation between the OS user\nand the superuser. The only mention I found at the time was the\nPL/perlu documentation, which said this:\n\n\"The writer of a PL/PerlU function must take care that the function\ncannot be used to do anything unwanted, since it will be able to do\nanything that could be done by a user logged in as the database\nadministrator.\"\n\nThat statement, from the official documentation, in my mind at least,\nDOES confirm that we don't intend privilege separation, but it's\nreally oblique. You have to think through the fact that the superuser\nhas to be the one to install plperlu, and that plperlu functions can\nusurp the OS user; since both of those things are documented to be the\ncase, it follows that we know and expect that the superuser can usurp\nthe OS user. But someone who is wondering how PostgreSQL's security\nmodel works is not going to read the plperlu documentation and make\nthe inferences I just described. It's crazy to me that a principle\nfrequently cited as gospel on this mailing list and others is nearly\nundocumented. Obviously, even if we did document it clearly, people\ncould still get confused (or just disagree with our position) and file\nCVEs anyway, but we're not helping our case by having nothing to cite.\n\nA difficulty is where to PUT such a mention in the documentation.\nThere's not a single section title in the top-level documentation\nindex that includes the word \"security\". 
Perhaps figuring out how to\ndocument this is best left to a separate thread, and there's also the\nquestion of whether a new section that talks about this also ought to\ntalk about anything else. But I feel like we're way overdue to do\nsomething about this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Mar 2024 08:57:34 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "> On 18 Mar 2024, at 13:57, Robert Haas <[email protected]> wrote:\n\n> my proposal is something like this, taking a\n> bunch of text from Jelte's patch and some inspiration from Magnus's\n> earlier remarks:\n\nI still think any wording should clearly mention that settings in the file are\nstill applied. The proposed wording says to implicitly but to avoid confusion\nI think it should be explicit.\n\n> Perhaps figuring out how to\n> document this is best left to a separate thread, and there's also the\n> question of whether a new section that talks about this also ought to\n> talk about anything else. But I feel like we're way overdue to do\n> something about this.\n\nSeconded, both that it needs to be addressed and that it should be done on a\nseparate thread from this one.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 18 Mar 2024 14:09:04 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, 18 Mar 2024 at 13:57, Robert Haas <[email protected]> wrote:\n> I would have been somewhat inclined to find an existing section\n> of postgresql.auto.conf for this parameter, perhaps \"platform and\n> version compatibility\".\n\nI tried to find an existing section, but I couldn't find any that this\nnew GUC would fit into naturally. \"Version and Platform Compatibility\n/ Previous PostgreSQL Versions\" (the one you suggested) seems wrong\ntoo. The GUCs there are to get back to Postgres behaviour from\nprevious versions. So that section would only make sense if we'd turn\nenable_alter_system off by default (which obviously no-one in this\nthread suggests/wants).\n\nIf you have another suggestion for an existing category that we should\nuse, feel free to share. But imho, none of the existing ones are a\ngood fit.\n\n> Even if that is what we're going to do, do we want to call them \"guard\n> rails\"? I'm not sure I'd find that name terribly clear, as a user.\n\nIf anyone has a better suggestion, I'm happy to change it.\n\n\nOn Mon, 18 Mar 2024 at 14:09, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 18 Mar 2024, at 13:57, Robert Haas <[email protected]> wrote:\n>\n> > my proposal is something like this, taking a\n> > bunch of text from Jelte's patch and some inspiration from Magnus's\n> > earlier remarks:\n>\n> I still think any wording should clearly mention that settings in the file are\n> still applied. The proposed wording says to implicitly but to avoid confusion\n> I think it should be explicit.\n\nI updated the first two paragraphs with Robert his wording (and did\nnot remove the third one as that addresses the point made by Daniel)",
"msg_date": "Mon, 18 Mar 2024 15:12:25 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 2:09 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 18 Mar 2024, at 13:57, Robert Haas <[email protected]> wrote:\n>\n> > my proposal is something like this, taking a\n> > bunch of text from Jelte's patch and some inspiration from Magnus's\n> > earlier remarks:\n>\n> I still think any wording should clearly mention that settings in the file are\n> still applied. The proposed wording says to implicitly but to avoid confusion\n> I think it should be explicit.\n\nI haven't kept up with the thread, but in general I'd prefer it to\nactually turn off parsing the file as well. I think just turning off\nthe ability to change it -- including the ability to *revert* changes\nthat were made to it before -- is going to be confusing.\n\nBut, if we have decided it shouldn't do that, then IMHO we should\nconsider naming it maybe enable_alter_system_command instead -- since\nwe're only disabling the alter system command, not the actual feature\nin total.\n\n\n> > Perhaps figuring out how to\n> > document this is best left to a separate thread, and there's also the\n> > question of whether a new section that talks about this also ought to\n> > talk about anything else. But I feel like we're way overdue to do\n> > something about this.\n>\n> Seconded, both that it needs to be addressed and that it should be done on a\n> separate thread from this one.\n\n+1.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 18 Mar 2024 16:34:26 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "> On 18 Mar 2024, at 16:34, Magnus Hagander <[email protected]> wrote:\n> \n> On Mon, Mar 18, 2024 at 2:09 PM Daniel Gustafsson <[email protected]> wrote:\n>> \n>>> On 18 Mar 2024, at 13:57, Robert Haas <[email protected]> wrote:\n>> \n>>> my proposal is something like this, taking a\n>>> bunch of text from Jelte's patch and some inspiration from Magnus's\n>>> earlier remarks:\n>> \n>> I still think any wording should clearly mention that settings in the file are\n>> still applied. The proposed wording says to implicitly but to avoid confusion\n>> I think it should be explicit.\n> \n> I haven't kept up with the thread, but in general I'd prefer it to\n> actually turn off parsing the file as well. I think just turning off\n> the ability to change it -- including the ability to *revert* changes\n> that were made to it before -- is going to be confusing.\n\nWouldn't that break pgBackrest which IIRC write to .auto.conf directly\nwithout using ALTER SYSTEM?\n\n> But, if we have decided it shouldn't do that, then IMHO we should\n> consider naming it maybe enable_alter_system_command instead -- since\n> we're only disabling the alter system command, not the actual feature\n> in total.\n\nGood point.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 18 Mar 2024 16:44:20 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 4:44 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 18 Mar 2024, at 16:34, Magnus Hagander <[email protected]> wrote:\n> >\n> > On Mon, Mar 18, 2024 at 2:09 PM Daniel Gustafsson <[email protected]> wrote:\n> >>\n> >>> On 18 Mar 2024, at 13:57, Robert Haas <[email protected]> wrote:\n> >>\n> >>> my proposal is something like this, taking a\n> >>> bunch of text from Jelte's patch and some inspiration from Magnus's\n> >>> earlier remarks:\n> >>\n> >> I still think any wording should clearly mention that settings in the file are\n> >> still applied. The proposed wording says to implicitly but to avoid confusion\n> >> I think it should be explicit.\n> >\n> > I haven't kept up with the thread, but in general I'd prefer it to\n> > actually turn off parsing the file as well. I think just turning off\n> > the ability to change it -- including the ability to *revert* changes\n> > that were made to it before -- is going to be confusing.\n>\n> Wouldn't that break pgBackrest which IIRC write to .auto.conf directly\n> without using ALTER SYSTEM?\n\nUgh of course. And not only that, it would also break pg_basebackup\nwhich does the same.\n\nSo I guess that's not a good idea. I guess nobody anticipated this\nwhen that was done:)\n\n\n> > But, if we have decided it shouldn't do that, then IMHO we should\n> > consider naming it maybe enable_alter_system_command instead -- since\n> > we're only disabling the alter system command, not the actual feature\n> > in total.\n>\n> Good point.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 18 Mar 2024 16:46:33 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 7:12 AM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Mon, 18 Mar 2024 at 13:57, Robert Haas <[email protected]> wrote:\n> > I would have been somewhat inclined to find an existing section\n> > of postgresql.auto.conf for this parameter, perhaps \"platform and\n> > version compatibility\".\n>\n> I tried to find an existing section, but I couldn't find any that this\n> new GUC would fit into naturally. \"Version and Platform Compatibility\n> / Previous PostgreSQL Versions\" (the one you suggested) seems wrong\n> too. The GUCs there are to get back to Postgres behaviour from\n> previous versions. So that section would only make sense if we'd turn\n> enable_alter_system off by default (which obviously no-one in this\n> thread suggests/wants).\n>\n> If you have another suggestion for an existing category that we should\n> use, feel free to share. But imho, none of the existing ones are a\n> good fit.\n\n+1 on Version and Platform Compatibility. Maybe it just needs a new\nsubsection there? This is for compatibility with a \"deployment\nplatform\". The \"Platform and Client Compatibility\" subsection has just\none entry, so a new subsection with also just one entry seems\ndefensible, maybe just \"Deployment Compatibility\"? I think it's also\nplausible that there will be other similar settings for managed\ndeployments in the future.\n\n> > Even if that is what we're going to do, do we want to call them \"guard\n> > rails\"? I'm not sure I'd find that name terribly clear, as a user.\n>\n> If anyone has a better suggestion, I'm happy to change it.\n\nNo better suggestion at the moment, but while I used the term to\nexplain the feature, I also don't think that's a great official name.\nFor one thing, the section could easily be misinterpreted as guard\nrails for end-users who are new to Postgres. 
Also, I think it's more\ncolloquial in tone than Postgres docs conventions.\n\nFurther, I think we may want to change the GUC name itself. All the\nother GUCs that start with enable_ control planner behavior:\n\nmaciek=# select name from pg_settings where name like 'enable_%';\n name\n--------------------------------\n enable_async_append\n enable_bitmapscan\n enable_gathermerge\n enable_hashagg\n enable_hashjoin\n enable_incremental_sort\n enable_indexonlyscan\n enable_indexscan\n enable_material\n enable_memoize\n enable_mergejoin\n enable_nestloop\n enable_parallel_append\n enable_parallel_hash\n enable_partition_pruning\n enable_partitionwise_aggregate\n enable_partitionwise_join\n enable_presorted_aggregate\n enable_seqscan\n enable_sort\n enable_tidscan\n(21 rows)\n\nDo we really want to break that pattern?\n\n\n",
"msg_date": "Mon, 18 Mar 2024 09:18:46 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 11:46 AM Magnus Hagander <[email protected]> wrote:\n> > Wouldn't that break pgBackrest which IIRC write to .auto.conf directly\n> > without using ALTER SYSTEM?\n>\n> Ugh of course. And not only that, it would also break pg_basebackup\n> which does the same.\n>\n> So I guess that's not a good idea. I guess nobody anticipated this\n> when that was done:)\n\nI'm also +1 for the idea that the feature should only disable ALTER\nSYSTEM, not postgresql.auto.conf. I can't really see any reason why it\nneeds to do both, and it might be more convenient if it didn't. If\nyou're managing PostgreSQL's configuration externally, you might find\nit convenient to write the configuration you're managing into\npostgresql.auto.conf. Or you might want to write it to\npostgresql.conf. Or you might want to do something more complicated\nwith include directives or whatever. But there's no reason why you\n*couldn't* want to use postgresql.auto.conf, and on the other hand I\ndon't see how anyone benefits from that file not being read. That just\nseems confusing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Mar 2024 13:24:44 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 12:19 PM Maciek Sakrejda <[email protected]> wrote:\n> +1 on Version and Platform Compatibility. Maybe it just needs a new\n> subsection there? This is for compatibility with a \"deployment\n> platform\". The \"Platform and Client Compatibility\" subsection has just\n> one entry, so a new subsection with also just one entry seems\n> defensible, maybe just \"Deployment Compatibility\"? I think it's also\n> plausible that there will be other similar settings for managed\n> deployments in the future.\n\nRight, we're adding this because of environments like Kubernetes,\nwhich isn't a version, but it is a platform, or at least a deployment\nmode, which is why I thought of that section. I think for now we\nshould just file this under \"Other platforms and clients,\" which only\nhas one existing setting. If the number of settings of this type\ngrows, we can split it out.\n\n> Do we really want to break that pattern?\n\nUsing enable_* as code for \"this is a planner GUC\" is a pretty stupid\npattern, honestly, but I agree with you that it's long-established and\nwe probably shouldn't deviate from it lightly. Perhaps just rename to\nallow_alter_system?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Mar 2024 13:27:43 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 10:27 AM Robert Haas <[email protected]> wrote:\n> Right, we're adding this because of environments like Kubernetes,\n> which isn't a version, but it is a platform, or at least a deployment\n> mode, which is why I thought of that section. I think for now we\n> should just file this under \"Other platforms and clients,\" which only\n> has one existing setting. If the number of settings of this type\n> grows, we can split it out.\n\nFair enough, +1.\n\n> Using enable_* as code for \"this is a planner GUC\" is a pretty stupid\n> pattern, honestly, but I agree with you that it's long-established and\n> we probably shouldn't deviate from it lightly. Perhaps just rename to\n> allow_alter_system?\n\n+1\n\n\n",
"msg_date": "Mon, 18 Mar 2024 10:37:45 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 12:37 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Feb 13, 2024 at 2:05 AM Joel Jacobson <[email protected]> wrote:\n> > > Wouldn't having system wide EVTs be a generic solution which could be the\n> > > infrastructure for this requested change as well as others in the same area?\n> >\n> > +1\n> >\n> > I like the wider vision of providing the necessary infrastructure to provide a solution for the general case.\n>\n> We don't seem to be making much progress here.\n>\n> As far as I can see from reading the thread, most people agree that\n> it's reasonable to have some way to disable ALTER SYSTEM, but there\n> are at least six competing ideas about how to do that:\n>\n> 1. command-line option\n> 2. GUC\n> 3. event trigger\n> 4. extension\n> 5. sentinel file\n> 6. remove permissions on postgresql.auto.conf\n>\n> As I see it, (5) or (6) are most convenient for the system\n> administrator, since they let that person make changes without needing\n> to log into the database or, really, worry very much about the\n> database's usual configuration mechanisms at all, and (5) seems like\n> less work to implement than (6), because (6) probably breaks a bunch\n> of client tools in weird ways that might not be easy for us to\n> discover during patch review. (1) doesn't allow changing things at\n> runtime, and might require the system administrator to fiddle with the\n> startup scripts, which seems like it could be inconvenient. (2) and\n> (3) seem like they put the superuser in a position to easily reverse a\n> policy about what the superuser ought to do, but in the case of (2),\n> that can be mitigated if the GUC can only be set in postgresql.conf\n> and not elsewhere. 
(4) has no real advantages except for allowing core\n> to maintain the fiction that we don't support this while actually\n> supporting it; I think we should reject that approach outright.\n>\n\nYou know it's funny, you say #4 has no advantage and should be\nrejected outright, but AFAICT\n\na) no one has actually laid out why it wouldn't work for them,\nb) and it's the one solution that can be implemented now\nc) and that implementation would be backwards compatible with some set\nof existing releases\nd) and certainly anyone running k8s or config management system would\nhave the ability to install\ne) and it could be custom tailored to individual deployments as needed\n(including other potential commands that some systems might care\nabout)\nf) and it seems like the least likely option to be mistaken for a\nsecurity feature\ng) and also seems pretty safe wrt not breaking existing tooling (like\n5/6 might do)\n\nLooking at it, you could make the argument that #4 is actually the\nbest of the solutions proposed, except it has the one drawback that it\nrequires folks to double down on the fiction that we think extensions\nare a good way to build solutions when really everyone just wants to\nhave everything in core.\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Mon, 18 Mar 2024 16:07:35 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 4:07 PM Robert Treat <[email protected]> wrote:\n> You know it's funny, you say #4 has no advantage and should be\n> rejected outright, but AFAICT\n>\n> a) no one has actually laid out why it wouldn't work for them,\n> b) and it's the one solution that can be implemented now\n> c) and that implementation would be backwards compatible with some set\n> of existing releases\n> d) and certainly anyone running k8s or config management system would\n> have the ability to install\n> e) and it could be custom tailored to individual deployments as needed\n> (including other potential commands that some systems might care\n> about)\n> f) and it seems like the least likely option to be mistaken for a\n> security feature\n> g) and also seems pretty safe wrt not breaking existing tooling (like\n> 5/6 might do)\n>\n> Looking at it, you could make the argument that #4 is actually the\n> best of the solutions proposed, except it has the one drawback that it\n> requires folks to double down on the fiction that we think extensions\n> are a good way to build solutions when really everyone just wants to\n> have everything in core.\n\nI think that all of this is true except for (c). I think we'd need a\nnew hook to make it work.\n\nThat said, I think that extensions are a good way of implementing some\nfunctionality, but not this functionality. Extensions are a good\napproach when there's a bunch of stuff core can't know but an\nextension author can. For instance, the FDW interface caters to\nsituations where the extension author knows how to access some data\nthat PostgreSQL doesn't know how to access; and the operator class\nstuff is useful when the extension author knows how some user-defined\ndata type should behave and we don't. But there's not really a\nsubstantial policy question here. All we do by pushing a feature like\nthis out of core is wash our hands of it. Your (f) argues that might\nbe a good thing, but I don't think so. 
When we know that a feature is\nwidely-needed, it's better to have one good implementation of it in\ncore than several perhaps not-so-good implementations out of core.\nThat allows us to focus all of our efforts on that one implementation\ninstead of splitting them across several -- which is the whole selling\npoint of open source, really -- and it makes it easier for users who\nwant the feature to get access to it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Mar 2024 16:59:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Going to agree with Robert Treat here about an extension being a great\nsolution. I resisted posting earlier as I wanted to see how this all pans\nout, but I wrote a quick little POC extension some months ago that does\nthe disabling and works well (and cannot be easily worked around).\n\nOn Mon, Mar 18, 2024 at 4:59 PM Robert Haas <[email protected]> wrote:\n\n> I think that all of this is true except for (c). I think we'd need a\n> new hook to make it work.\n>\n\nSeems we can just use ProcessUtility and:\nif (IsA(parsetree, AlterSystemStmt) { ereport(ERROR, ...\n\nWhen we know that a feature is\n> widely-needed, it's better to have one good implementation of it in\n> core than several perhaps not-so-good implementations out of core.\n>\n\nMeh, maybe. This one seems pretty dirt simple. Granted, I have expanded my\noriginal POC to allow *some* things to be changed by ALTER SYSTEM, but the\noriginal use case warrants a very small extension.\n\nThat allows us to focus all of our efforts on that one implementation\n> instead of splitting them across several -- which is the whole selling\n> point of open source, really -- and it makes it easier for users who\n> want the feature to get access to it.\n>\n\nWell, yeah, but they have to wait until version 18 at best, while an\nextension can run on any current version and probably be pretty\nfuture-proof as well.\n\nCheers,\nGreg\n\nGoing to agree with Robert Treat here about an extension being a great solution. I resisted posting earlier as I wanted to see how this all pans out, but I wrote a quick little POC extension some months ago that does the disabling and works well (and cannot be easily worked around).On Mon, Mar 18, 2024 at 4:59 PM Robert Haas <[email protected]> wrote:I think that all of this is true except for (c). 
I think we'd need a\nnew hook to make it work.Seems we can just use ProcessUtility and:if (IsA(parsetree, AlterSystemStmt) { ereport(ERROR, ...When we know that a feature is\nwidely-needed, it's better to have one good implementation of it in\ncore than several perhaps not-so-good implementations out of core.\nMeh, maybe. This one seems pretty dirt simple. Granted, I have expanded my original POC to allow *some* things to be changed by ALTER SYSTEM, but the original use case warrants a very small extension.That allows us to focus all of our efforts on that one implementation\ninstead of splitting them across several -- which is the whole selling\npoint of open source, really -- and it makes it easier for users who\nwant the feature to get access to it.Well, yeah, but they have to wait until version 18 at best, while an extension can run on any current version and probably be pretty future-proof as well.Cheers,Greg",
"msg_date": "Mon, 18 Mar 2024 19:38:07 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "I want to remind everyone of this from Gabriele's first message that \nstarted this thread:\n\n> At the moment, a possible workaround is that `ALTER SYSTEM` can be blocked\n> by making the postgresql.auto.conf read only, but the returned message is\n> misleading and that’s certainly bad user experience (which is very\n> important in a cloud native environment):\n> \n> \n> ```\n> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> ERROR: could not open file \"postgresql.auto.conf\": Permission denied\n> ```\n\nI think making the config file read-only is a fine solution. If you \ndon't want postgres to mess with the config files, forbid it with the \npermission system.\n\nProblems with pg_rewind, pg_basebackup were mentioned with that \napproach. I think if you want the config files to be managed outside \nPostgreSQL, by kubernetes, patroni or whatever, it would be good for \nthem to be read-only to the postgres user anyway, even if we had a \nmechanism to disable ALTER SYSTEM. So it would be good to fix the \nproblems with those tools anyway.\n\nThe error message is not great, I agree with that. Can we improve it? \nMaybe just add a HINT like this:\n\npostgres=# ALTER SYSTEM SET wal_level TO minimal;\nERROR: could not open file \"postgresql.auto.conf\" for writing: \nPermission denied\nHINT: Configuration might be managed outside PostgreSQL\n\n\nPerhaps we could make that even better with a GUC though. I propose a \nGUC called 'configuration_managed_externally = true / false\". 
If you set \nit to true, we prevent ALTER SYSTEM and make the error message more \ndefinitive:\n\npostgres=# ALTER SYSTEM SET wal_level TO minimal;\nERROR: configuration is managed externally\n\nAs a bonus, if that GUC is set, we could even check at server startup \nthat all the configuration files are not writable by the postgres user, \nand print a warning or refuse to start up if they are.\n\n(Another way to read this proposal is to rename the GUC that's been \ndiscussed in this thread to 'configuration_managed_externally'. That \nmakes it look less like a security feature, and describes the intended \nuse case.)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 19 Mar 2024 11:26:20 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 5:26 AM Heikki Linnakangas <[email protected]> wrote:\n\n> I want to remind everyone of this from Gabriele's first message that\n> started this thread:\n>\n> > At the moment, a possible workaround is that `ALTER SYSTEM` can be\n> blocked\n> > by making the postgresql.auto.conf read only, but the returned message is\n> > misleading and that’s certainly bad user experience (which is very\n> > important in a cloud native environment):\n> >\n> >\n> > ```\n> > postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> > ERROR: could not open file \"postgresql.auto.conf\": Permission denied\n> > ```\n>\n> I think making the config file read-only is a fine solution. If you\n> don't want postgres to mess with the config files, forbid it with the\n> permission system.\n>\n> Problems with pg_rewind, pg_basebackup were mentioned with that\n> approach. I think if you want the config files to be managed outside\n> PostgreSQL, by kubernetes, patroni or whatever, it would be good for\n> them to be read-only to the postgres user anyway, even if we had a\n> mechanism to disable ALTER SYSTEM. So it would be good to fix the\n> problems with those tools anyway.\n>\n> The error message is not great, I agree with that. Can we improve it?\n> Maybe just add a HINT like this:\n>\n> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> ERROR: could not open file \"postgresql.auto.conf\" for writing:\n> Permission denied\n> HINT: Configuration might be managed outside PostgreSQL\n>\n>\n> Perhaps we could make that even better with a GUC though. I propose a\n> GUC called 'configuration_managed_externally = true / false\". If you set\n> it to true, we prevent ALTER SYSTEM and make the error message more\n> definitive:\n>\n> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> ERROR: configuration is managed externally\n>\n> As a bonus, if that GUC is set, we could even check at server startup\n> that all the configuration files are not writable by the postgres user,\n> and print a warning or refuse to start up if they are.\n>\n> (Another way to read this proposal is to rename the GUC that's been\n> discussed in this thread to 'configuration_managed_externally'. That\n> makes it look less like a security feature, and describes the intended\n> use case.)\n>\n>\n>\n\nI agree with pretty much all of this.\n\ncheers\n\nandrew",
"msg_date": "Tue, 19 Mar 2024 07:49:10 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, 18 Mar 2024 at 18:27, Robert Haas <[email protected]> wrote:\n> I think for now we\n> should just file this under \"Other platforms and clients,\" which only\n> has one existing setting. If the number of settings of this type\n> grows, we can split it out.\n\nDone. I also included a patch to rename COMPAT_OPTIONS_CLIENTS to\nCOMPAT_OPTIONS_OTHER, since that enum variant naming doesn't match the\nnew intent of the section.\n\nOn Tue, 19 Mar 2024 at 10:26, Heikki Linnakangas <[email protected]> wrote:\n> (Another way to read this proposal is to rename the GUC that's been\n> discussed in this thread to 'configuration_managed_externally'. That\n> makes it look less like a security feature, and describes the intended\n> use case.)\n\nI like this idea of naming the GUC in such a way. I swapped the words\na bit and went for externally_managed_configuration, since order\nmatches other GUCs e.g. standard_conforming_strings. But if you feel\nstrongly about the ordering of the words, I'm happy to change it back.\n\nFor the errorcode I now went for ERRCODE_FEATURE_NOT_SUPPORTED, which\nseemed most fitting.",
"msg_date": "Tue, 19 Mar 2024 14:13:21 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On 3/19/24 07:49, Andrew Dunstan wrote:\n> \n> \n> On Tue, Mar 19, 2024 at 5:26 AM Heikki Linnakangas <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> I want to remind everyone of this from Gabriele's first message that\n> started this thread:\n> \n> > At the moment, a possible workaround is that `ALTER SYSTEM` can\n> be blocked\n> > by making the postgresql.auto.conf read only, but the returned\n> message is\n> > misleading and that’s certainly bad user experience (which is very\n> > important in a cloud native environment):\n> >\n> >\n> > ```\n> > postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> > ERROR: could not open file \"postgresql.auto.conf\": Permission denied\n> > ```\n> \n> I think making the config file read-only is a fine solution. If you\n> don't want postgres to mess with the config files, forbid it with the\n> permission system.\n> \n> Problems with pg_rewind, pg_basebackup were mentioned with that\n> approach. I think if you want the config files to be managed outside\n> PostgreSQL, by kubernetes, patroni or whatever, it would be good for\n> them to be read-only to the postgres user anyway, even if we had a\n> mechanism to disable ALTER SYSTEM. So it would be good to fix the\n> problems with those tools anyway.\n> \n> The error message is not great, I agree with that. Can we improve it?\n> Maybe just add a HINT like this:\n> \n> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> ERROR: could not open file \"postgresql.auto.conf\" for writing:\n> Permission denied\n> HINT: Configuration might be managed outside PostgreSQL\n> \n> \n> Perhaps we could make that even better with a GUC though. I propose a\n> GUC called 'configuration_managed_externally = true / false\". 
If you\n> set\n> it to true, we prevent ALTER SYSTEM and make the error message more\n> definitive:\n> \n> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> ERROR: configuration is managed externally\n> \n> As a bonus, if that GUC is set, we could even check at server startup\n> that all the configuration files are not writable by the postgres user,\n> and print a warning or refuse to start up if they are.\n> \n> (Another way to read this proposal is to rename the GUC that's been\n> discussed in this thread to 'configuration_managed_externally'. That\n> makes it look less like a security feature, and describes the intended\n> use case.)\n> \n> \n> \n> \n> I agree with pretty much all of this.\n\n\n+1 me too.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Tue, 19 Mar 2024 10:39:25 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Perhaps we could make that even better with a GUC though. I propose a \n> GUC called 'configuration_managed_externally = true / false\". If you set \n> it to true, we prevent ALTER SYSTEM and make the error message more \n> definitive:\n\n> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> ERROR: configuration is managed externally\n\n> As a bonus, if that GUC is set, we could even check at server startup \n> that all the configuration files are not writable by the postgres user, \n> and print a warning or refuse to start up if they are.\n\nI like this idea. The \"bonus\" is not optional though, because\nsetting the files' ownership/permissions is the only way to be\nsure that the prohibition is even a little bit bulletproof.\n\nOne small issue: how do we make that work on Windows? Have recent\nversions grown anything that looks like real file permissions?\n\nAnother question is whether this should be one-size-fits-all for\nall the configuration files. I can imagine situations where\nyou'd like to lock down postgresql[.auto].conf but not pg_hba.conf.\nBut maybe that can wait for somebody to show up with a use-case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Mar 2024 10:51:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, 19 Mar 2024 at 15:52, Tom Lane <[email protected]> wrote:\n> I like this idea. The \"bonus\" is not optional though, because\n> setting the files' ownership/permissions is the only way to be\n> sure that the prohibition is even a little bit bulletproof.\n\nI don't agree with this. The only \"normal\" way of modifying\npostgresql.auto.conf from within postgres is using ALTER SYSTEM, so\nsimply disabling ALTER SYSTEM seems enough to me.\n\n> Another question is whether this should be one-size-fits-all for\n> all the configuration files. I can imagine situations where\n> you'd like to lock down postgresql[.auto].conf but not pg_hba.conf.\n> But maybe that can wait for somebody to show up with a use-case.\n\nAfaik there's no way to modify pg_hba.conf from within postgres, only\nread it. (except for COPY TO FILE/PROGRAM etc) So, I don't think we\nneed to worry about this now.",
"msg_date": "Tue, 19 Mar 2024 16:36:43 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> On Tue, 19 Mar 2024 at 15:52, Tom Lane <[email protected]> wrote:\n>> I like this idea. The \"bonus\" is not optional though, because\n>> setting the files' ownership/permissions is the only way to be\n>> sure that the prohibition is even a little bit bulletproof.\n\n> I don't agree with this. The only \"normal\" way of modifying\n> postgresql.auto.conf from within postgres is using ALTER SYSTEM, so\n> simply disabling ALTER SYSTEM seems enough to me.\n\nI've said this repeatedly: it's not enough. The only reason we need\nany feature whatsoever is that somebody doesn't trust their database\nsuperusers to not try to modify the configuration. Given that\nrequirement, merely disabling ALTER SYSTEM isn't a solution, it's a\nfig leaf that might fool incompetent auditors but no more.\n\nIf you aren't willing to build a solution that blocks off mods\nusing COPY TO FILE/PROGRAM and other readily-available-to-superusers\ntools (plpythonu for instance), I think you shouldn't bother asking\nfor a feature at all. Just trust your superusers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Mar 2024 12:05:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, 19 Mar 2024 at 17:05, Tom Lane <[email protected]> wrote:\n> I've said this repeatedly: it's not enough. The only reason we need\n> any feature whatsoever is that somebody doesn't trust their database\n> superusers to not try to modify the configuration.\n\nAnd as everyone else on this thread has said: It is enough. Because\nthe point is not security, the point is hinting to a superuser that a\nworkflow they know from other systems (or an ALTER SYSTEM command they\ncopied from the internet) is not the intended way to modify their\nserver configuration on the system they are currently working on.\n\nI feel like the docs and error message in the current active patch are\nvery clear on that. If you think they are not clear, feel free to\nsuggest what could clarify the intent of this feature. But at this\npoint, it's really starting to seem to me like you're willingly trying\nto interpret this feature as a thing that it is not (i.e. a security\nfeature).\n\n\n",
"msg_date": "Tue, 19 Mar 2024 17:53:59 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 12:05 PM Tom Lane <[email protected]> wrote:\n\n> If you aren't willing to build a solution that blocks off mods\n> using COPY TO FILE/PROGRAM and other readily-available-to-superusers\n> tools (plpythonu for instance), I think you shouldn't bother asking\n> for a feature at all. Just trust your superusers.\n>\n\nThere is a huge gap between using a well-documented standard tool like\nALTER SYSTEM and going out of your way to modify the configuration files\nthrough trickery. I think we need to only solve the former as in \"hey,\nplease don't do that because your changes will be overwritten\"\n\nCheers,\nGreg",
"msg_date": "Tue, 19 Mar 2024 12:56:01 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "> On 19 Mar 2024, at 17:53, Jelte Fennema-Nio <[email protected]> wrote:\n> \n> On Tue, 19 Mar 2024 at 17:05, Tom Lane <[email protected]> wrote:\n>> I've said this repeatedly: it's not enough. The only reason we need\n>> any feature whatsoever is that somebody doesn't trust their database\n>> superusers to not try to modify the configuration.\n> \n> And as everyone else on this thread has said: It is enough. Because\n> the point is not security, the point is hinting to a superuser that a\n> workflow they know from other systems (or an ALTER SYSTEM command they\n> copied from the internet) is not the intended way to modify their\n> server configuration on the system they are currently working on.\n\nWell. Protection against superusers randomly copying ALTER SYSTEM commands\nfrom the internet actually does turn this into a security feature =)\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 19 Mar 2024 18:56:08 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "> On 19 Mar 2024, at 15:51, Tom Lane <[email protected]> wrote:\n> \n> Heikki Linnakangas <[email protected]> writes:\n>> Perhaps we could make that even better with a GUC though. I propose a \n>> GUC called 'configuration_managed_externally = true / false\". If you set \n>> it to true, we prevent ALTER SYSTEM and make the error message more \n>> definitive:\n> \n>> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n>> ERROR: configuration is managed externally\n> \n>> As a bonus, if that GUC is set, we could even check at server startup \n>> that all the configuration files are not writable by the postgres user, \n>> and print a warning or refuse to start up if they are.\n> \n> I like this idea. The \"bonus\" is not optional though, because\n> setting the files' ownership/permissions is the only way to be\n> sure that the prohibition is even a little bit bulletproof.\n\nAgreed, assuming we can solve the below..\n\n> One small issue: how do we make that work on Windows? Have recent\n> versions grown anything that looks like real file permissions?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 19 Mar 2024 18:57:10 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 3:52 PM Tom Lane <[email protected]> wrote:\n>\n> Heikki Linnakangas <[email protected]> writes:\n> > Perhaps we could make that even better with a GUC though. I propose a\n> > GUC called 'configuration_managed_externally = true / false\". If you set\n> > it to true, we prevent ALTER SYSTEM and make the error message more\n> > definitive:\n>\n> > postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> > ERROR: configuration is managed externally\n>\n> > As a bonus, if that GUC is set, we could even check at server startup\n> > that all the configuration files are not writable by the postgres user,\n> > and print a warning or refuse to start up if they are.\n>\n> I like this idea. The \"bonus\" is not optional though, because\n> setting the files' ownership/permissions is the only way to be\n> sure that the prohibition is even a little bit bulletproof.\n>\n> One small issue: how do we make that work on Windows? Have recent\n> versions grown anything that looks like real file permissions?\n\nWindows has had full ACL support since 1993. The easiest way to do\nwhat you're doing here is to just set a DENY permission on the\npostgres operating system user.\n\n\n//Magnus\n\n\n",
"msg_date": "Tue, 19 Mar 2024 19:28:12 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Greg Sabino Mullane:\n> On Tue, Mar 19, 2024 at 12:05 PM Tom Lane <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> If you aren't willing to build a solution that blocks off mods\n> using COPY TO FILE/PROGRAM and other readily-available-to-superusers\n> tools (plpythonu for instance), I think you shouldn't bother asking\n> for a feature at all. Just trust your superusers.\n> \n> \n> There is a huge gap between using a well-documented standard tool like \n> ALTER SYSTEM and going out of your way to modify the configuration files \n> through trickery. I think we need to only solve the former as in \"hey, \n> please don't do that because your changes will be overwritten\"\n\nRecap: The requested feature is not supposed to be a security feature. \nIt is supposed to prevent the admin from accidentally doing the wrong \nthing - but not from willfully doing the same through different means.\n\nThis very much sounds like a \"warning\" - how about turning the feature \ninto one?\n\nHave a GUC warn_on_alter_system = \"<message>\", which allows the \nkubernetes operator to set it to something like \"hey, please don't do \nthat because your changes will be overwritten. Use xyz operator instead.\".\n\nThis will hardly be taken as a security feature by anyone, but should \nessentially achieve what is asked for.\n\nA more sophisticated way would be to make that GUC throw an error, but \nhave a syntax for ALTER SYSTEM to override this - i.e. similar to a \n--force flag.\n\nBest,\n\nWolfgang\n\n\n",
"msg_date": "Tue, 19 Mar 2024 21:53:46 +0100",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 2:28 PM Magnus Hagander <[email protected]> wrote:\n\n> On Tue, Mar 19, 2024 at 3:52 PM Tom Lane <[email protected]> wrote:\n> >\n> > Heikki Linnakangas <[email protected]> writes:\n> > > Perhaps we could make that even better with a GUC though. I propose a\n> > > GUC called 'configuration_managed_externally = true / false\". If you\n> set\n> > > it to true, we prevent ALTER SYSTEM and make the error message more\n> > > definitive:\n> >\n> > > postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> > > ERROR: configuration is managed externally\n> >\n> > > As a bonus, if that GUC is set, we could even check at server startup\n> > > that all the configuration files are not writable by the postgres user,\n> > > and print a warning or refuse to start up if they are.\n> >\n> > I like this idea. The \"bonus\" is not optional though, because\n> > setting the files' ownership/permissions is the only way to be\n> > sure that the prohibition is even a little bit bulletproof.\n> >\n> > One small issue: how do we make that work on Windows? Have recent\n> > versions grown anything that looks like real file permissions?\n>\n> Windows has had full ACL support since 1993. The easiest way to do\n> what you're doing here is to just set a DENY permission on the\n> postgres operating system user.\n>\n>\n>\n>\n\n\nYeah. See <\nhttps://learn.microsoft.com/en-us/windows-server/administration/windows-commands/icacls>\nfor example.\n\ncheers\n\nandrew",
"msg_date": "Tue, 19 Mar 2024 18:24:47 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On Tue, Mar 19, 2024 at 2:28 PM Magnus Hagander <[email protected]> wrote:\n>> Windows has had full ACL support since 1993. The easiest way to do\n>> what you're doing here is to just set a DENY permission on the\n>> postgres operating system user.\n\n> Yeah. See <\n> https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/icacls>\n> for example.\n\nCool. Maybe somebody should take a fresh look at the places where\nwe're assuming Windows has nothing comparable to Unix permissions\n(for example, checking for world readability of ssl_key_file).\nIt's off-topic for this thread though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Mar 2024 18:35:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi,\n\nOn Tue, Mar 19, 2024 at 10:51:50AM -0400, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n> > Perhaps we could make that even better with a GUC though. I propose a \n> > GUC called 'configuration_managed_externally = true / false\". If you set \n> > it to true, we prevent ALTER SYSTEM and make the error message more \n> > definitive:\n> \n> > postgres=# ALTER SYSTEM SET wal_level TO minimal;\n> > ERROR: configuration is managed externally\n> \n> > As a bonus, if that GUC is set, we could even check at server startup \n> > that all the configuration files are not writable by the postgres user, \n> > and print a warning or refuse to start up if they are.\n> \n> I like this idea. The \"bonus\" is not optional though, because\n> setting the files' ownership/permissions is the only way to be\n> sure that the prohibition is even a little bit bulletproof.\n\nIsn't this going to break pgbackrest restore then, which (AIUI, and was\nmentioned upthread) writes recovery configs into postgresql.auto.conf? \nOr do I misunderstand the proposal? I think it would be awkward if only\nroot users are able to run pgbackrest restore. I have added David to the\nCC list to make him aware of this, in case he was not following this\nthread.\n\nThe other candidate for breakage that was mentioned was pg_basebackup\n-R, but I guess that could be worked around.\n\n\nMichael\n\n\n",
"msg_date": "Wed, 20 Mar 2024 10:30:54 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": ">\n> As a bonus, if that GUC is set, we could even check at server startup that\n> all the configuration files are not writable by the postgres user,\n> and print a warning or refuse to start up if they are.\n>\n\nUgh, please let's not do this. This was bouncing around in my head last\nnight, and this is really a quite radical change - especially just to\nhandle the given ask, which is to prevent a specific command from running.\nNot implement a brand new security system. There are so many ways this\ncould go wrong if we start having separate permissions for some of our\nfiles. In addition to backups and other tools that need to write to the\nconf files as the postgres user, what about systems that create a new\ncluster automatically e.g. Patroni? It will now need elevated privs just to\ncreate the conf files and assign the new ownership to them. Lots of moving\npieces there and ways things could go wrong. So a big -1 from me, as they\nsay/ :)\n\nCheers,\nGreg",
"msg_date": "Wed, 20 Mar 2024 09:04:21 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, 20 Mar 2024 at 14:04, Greg Sabino Mullane <[email protected]> wrote:\n>>\n>> As a bonus, if that GUC is set, we could even check at server startup that all the configuration files are not writable by the postgres user,\n>> and print a warning or refuse to start up if they are.\n>\n>\n> Ugh, please let's not do this. This was bouncing around in my head last night, and this is really a quite radical change - especially just to handle the given ask, which is to prevent a specific command from running. Not implement a brand new security system. There are so many ways this could go wrong if we start having separate permissions for some of our files. In addition to backups and other tools that need to write to the conf files as the postgres user, what about systems that create a new cluster automatically e.g. Patroni? It will now need elevated privs just to create the conf files and assign the new ownership to them. Lots of moving pieces there and ways things could go wrong. So a big -1 from me, as they say/ :)\n\n\nWell put. I don't think the effort of making all tooling handle this\ncorrectly is worth the benefit that it brings. afaict everyone on this\nthread that actually wants to use this feature would be happy with the\nfunctionality that the current patch provides (i.e. having\npostgresql.auto.conf writable, but having ALTER SYSTEM error out).\n\n\n",
"msg_date": "Wed, 20 Mar 2024 16:06:56 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 11:07 AM Jelte Fennema-Nio <[email protected]> wrote:\n> > Ugh, please let's not do this. This was bouncing around in my head last night, and this is really a quite radical change - especially just to handle the given ask, which is to prevent a specific command from running. Not implement a brand new security system. There are so many ways this could go wrong if we start having separate permissions for some of our files. In addition to backups and other tools that need to write to the conf files as the postgres user, what about systems that create a new cluster automatically e.g. Patroni? It will now need elevated privs just to create the conf files and assign the new ownership to them. Lots of moving pieces there and ways things could go wrong. So a big -1 from me, as they say/ :)\n>\n> Well put. I don't think the effort of making all tooling handle this\n> correctly is worth the benefit that it brings. afaict everyone on this\n> thread that actually wants to use this feature would be happy with the\n> functionality that the current patch provides (i.e. having\n> postgresql.auto.conf writable, but having ALTER SYSTEM error out).\n\nYeah, I agree with this completely. I don't understand why people who\nhate the feature and hope it dies in a fire get to decide how it has\nto work.\n\nAnd also, if we verify that the configuration files are all read-only\nat the OS level, that also prevents the external tool from managing\nthem. Well, it can: it can make them non-read-only after server start,\nthen modify them, then make them read-only again, and it can make sure\nthat if the system crashes, it again marks them read-only before\ntrying to start PG. But it seems quite obvious that this will be\ninconvenient and difficult to get right. 
I find it quite easy to\nunderstand the idea that someone wants the PostgreSQL configuration to\nbe managed by Kubernetes rather than via ALTER SYSTEM, but I can't\nthink of any scenario when you just don't want to be able to manage\nthe configuration at all. Who in the world would want that?\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Mar 2024 15:03:38 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 8:04 PM Robert Haas <[email protected]> wrote:\n\n> On Wed, Mar 20, 2024 at 11:07 AM Jelte Fennema-Nio <[email protected]>\n> wrote:\n> > > Ugh, please let's not do this. This was bouncing around in my head\n> last night, and this is really a quite radical change - especially just to\n> handle the given ask, which is to prevent a specific command from running.\n> Not implement a brand new security system. There are so many ways this\n> could go wrong if we start having separate permissions for some of our\n> files. In addition to backups and other tools that need to write to the\n> conf files as the postgres user, what about systems that create a new\n> cluster automatically e.g. Patroni? It will now need elevated privs just to\n> create the conf files and assign the new ownership to them. Lots of moving\n> pieces there and ways things could go wrong. So a big -1 from me, as they\n> say/ :)\n> >\n> > Well put. I don't think the effort of making all tooling handle this\n> > correctly is worth the benefit that it brings. afaict everyone on this\n> > thread that actually wants to use this feature would be happy with the\n> > functionality that the current patch provides (i.e. having\n> > postgresql.auto.conf writable, but having ALTER SYSTEM error out).\n>\n> Yeah, I agree with this completely. I don't understand why people who\n> hate the feature and hope it dies in a fire get to decide how it has\n> to work.\n>\n> And also, if we verify that the configuration files are all read-only\n> at the OS level, that also prevents the external tool from managing\n> them. Well, it can: it can make them non-read-only after server start,\n> then modify them, then make them read-only again, and it can make sure\n> that if the system crashes, it again marks them read-only before\n> trying to start PG. But it seems quite obvious that this will be\n> inconvenient and difficult to get right. 
I find it quite easy to\n> understand the idea that someone wants the PostgreSQL configuration to\n> be managed by Kubernetes rather than via ALTER SYSTEM, but I can't\n> think of any scenario when you just don't want to be able to manage\n> the configuration at all. Who in the world would want that?\n>\n\nYeah, I don't see why it's our responsibility to decide what permissions\npeople should have on their config files.\n\nI would argue that having the default permissions not allow postgres to\nedit its own config files *except* for postgresql.auto.conf would be a\nbetter default than what we have now, but that's completely independent of\nthe patch being discussed on this thread. (And FWIW also already solved on\ndebian-based platforms for example, which put the main config files in /etc\nwith postgres only having read permissions on them - and having the\n*packagers* adapt such things for their platforms in general seems like a\nbetter place).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 20 Mar 2024 20:11:32 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 3:11 PM Magnus Hagander <[email protected]> wrote:\n> I would argue that having the default permissions not allow postgres to edit it's own config files *except* for postgresql.auto.conf would be a better default than what we have now, but that's completely independent of the patch being discussed on this thread. (And FWIW also already solved on debian-based platforms for example, which but the main config files in /etc with postgres only having read permissions on them - and having the *packagers* adapt such things for their platforms in general seems like a better place).\n\nI don't think that I agree that it's categorically better, but it\nmight be better for some people or in some circumstances. I very much\ndo agree that it's a packaging question rather than our job to sort\nout.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Mar 2024 15:14:47 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 8:14 PM Robert Haas <[email protected]> wrote:\n\n> On Wed, Mar 20, 2024 at 3:11 PM Magnus Hagander <[email protected]>\n> wrote:\n> > I would argue that having the default permissions not allow postgres to\n> edit it's own config files *except* for postgresql.auto.conf would be a\n> better default than what we have now, but that's completely independent of\n> the patch being discussed on this thread. (And FWIW also already solved on\n> debian-based platforms for example, which but the main config files in /etc\n> with postgres only having read permissions on them - and having the\n> *packagers* adapt such things for their platforms in general seems like a\n> better place).\n>\n> I don't think that I agree that it's categorically better, but it\n> might be better for some people or in some circumstances. I very much\n> do agree that it's a packaging question rather than our job to sort\n> out.\n>\n\nRight, what I meant is that making it a packaging decision is the better\nplace. Wherever it goes, allowing the administrator to choose what fits\nthem should be made possible.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Mar 20, 2024 at 8:14 PM Robert Haas <[email protected]> wrote:On Wed, Mar 20, 2024 at 3:11 PM Magnus Hagander <[email protected]> wrote:\n> I would argue that having the default permissions not allow postgres to edit it's own config files *except* for postgresql.auto.conf would be a better default than what we have now, but that's completely independent of the patch being discussed on this thread. 
(And FWIW also already solved on debian-based platforms for example, which but the main config files in /etc with postgres only having read permissions on them - and having the *packagers* adapt such things for their platforms in general seems like a better place).\n\nI don't think that I agree that it's categorically better, but it\nmight be better for some people or in some circumstances. I very much\ndo agree that it's a packaging question rather than our job to sort\nout.Right, what I meant is that making it a packaging decision is the better place. Wherever it goes, allowing the administrator to choose what fits them should be made possible. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Wed, 20 Mar 2024 20:16:52 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 3:17 PM Magnus Hagander <[email protected]> wrote:\n> Right, what I meant is that making it a packaging decision is the better place. Wherever it goes, allowing the administrator to choose what fits them should be made possible.\n\n+1. Which is also the justification for this patch, when it comes\nright down to it. The administrator gets to decide how the contents of\npostgresql.conf are to be managed on their particular installation.\nThey can decide that postgresql.conf should be writable by the same\nuser that runs PostgreSQL, or not. And they should also be able to\ndecide that ALTER SYSTEM is an OK way to change configuration, or that\nit isn't. How we enable them to make that decision is a point for\ndiscussion, and how exactly we phrase the documentation is a point for\ndiscussion, but we have no business trying to impose conditions, as if\nthey're only allowed to make that decision if they conform to some\n(IMHO ridiculous) requirements that we dictate from on high. It's\ntheir system, not ours.\n\nI mean, for crying out loud, users can set enable_seqscan=off in\npostgresql.conf and GLOBALLY DISABLE SEQUENTIAL SCANS. They can set\nzero_damaged_pages=on in postgresql.conf and silently remove vast\nquantities of data without knowing that they're doing anything. We\ndon't even question that stuff ... although we probably should be\nquestioning the second one, because, in my experience, it's just a\nfoot-gun and never solves anything. Nonetheless, as of today, we have\nit. So somehow we're talking ourselves into believing that letting the\nuser just shut off ALTER SYSTEM, without taking any other action as a\nprerequisite, is more scary than those things.\n\nIt's not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Mar 2024 15:52:04 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 20, 2024 at 08:11:32PM +0100, Magnus Hagander wrote:\n> (And FWIW also already solved on debian-based platforms for example,\n> which but the main config files in /etc with postgres only having read\n> permissions on them \n\nJFTR - Debian/Ubuntu keep postgresql.conf under /etc/postgresql, but\nthat directory is owned by the postgres user by default and it can\nchange the configuration files (if that wasn't the case, external tools\nlike Patroni that run under the postgres user and manage postgresql.conf\nwould work much less easily on them).\n\n\nMichael\n\n\n",
"msg_date": "Wed, 20 Mar 2024 20:55:16 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On 3/20/24 22:30, Michael Banck wrote:\n> \n> On Tue, Mar 19, 2024 at 10:51:50AM -0400, Tom Lane wrote:\n>> Heikki Linnakangas <[email protected]> writes:\n>>> Perhaps we could make that even better with a GUC though. I propose a\n>>> GUC called 'configuration_managed_externally = true / false\". If you set\n>>> it to true, we prevent ALTER SYSTEM and make the error message more\n>>> definitive:\n>>\n>>> postgres=# ALTER SYSTEM SET wal_level TO minimal;\n>>> ERROR: configuration is managed externally\n>>\n>>> As a bonus, if that GUC is set, we could even check at server startup\n>>> that all the configuration files are not writable by the postgres user,\n>>> and print a warning or refuse to start up if they are.\n>>\n>> I like this idea. The \"bonus\" is not optional though, because\n>> setting the files' ownership/permissions is the only way to be\n>> sure that the prohibition is even a little bit bulletproof.\n> \n> Isn't this going to break pgbackrest restore then, which (AIUI, and was\n> mentioned upthread) writes recovery configs into postgresql.auto.conf?\n> Or do I misunderstand the proposal? I think it would be awkward if only\n> root users are able to run pgbackrest restore. I have added David to the\n> CC list to make him aware of this, in case he was not following this\n> thread.\n\nIt doesn't sound like people are in favor of requiring read-only \npermissions for postgresql.auto.conf, but in any case it would not be a \nbig issue for pgBackRest or other backup solutions as far as I can see.\n\npgBackRest stores all permissions and ownership so a restore by the user \nwill bring everything back just as it was. Restoring as root sounds bad \non the face of it, but for managed environments like k8s it would not be \nall that unusual.\n\nThere is also the option of restoring and then modifying permissions \nlater, or in pgBackRest use the --type=preserve option to leave \npostgresql.auto.conf as it is. 
Permissions could also be updated before \nthe backup tool is run and then set back.\n\nSince this feature is intended for managed environments scripting these \nkinds of changes should be pretty easy and not a barrier to using any \nbackup tool.\n\nRegards,\n-David\n\n\n",
"msg_date": "Thu, 21 Mar 2024 10:21:08 +1300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 8:52 PM Robert Haas <[email protected]> wrote:\n\n> On Wed, Mar 20, 2024 at 3:17 PM Magnus Hagander <[email protected]>\n> wrote:\n> > Right, what I meant is that making it a packaging decision is the better\n> place. Wherever it goes, allowing the administrator to choose what fits\n> them should be made possible.\n>\n> +1. Which is also the justification for this patch, when it comes\n> right down to it. The administrator gets to decide how the contents of\n> postgresql.conf are to be managed on their particular installation.\n>\n\nNot really. The administrator can *already* do that. It's trivial.\n\nThis patch is about doing it in a way that doesn't produce as ugly a\nmessage.But if we're \"delegating\" it to packagers and \"os administrators\",\nthen the problem is already solved. This patch is about trying to solve it\n*without* involving the packagers or OS administrators.\n\nNot saying we shouldn't do it, but I'd argue the exact opposite of yours\naboe, which is that it's very much not the justification of the patch :)\n\n\n\n> They can decide that postgresql.conf should be writable by the same\n> user that runs PostgreSQL, or not. And they should also be able to\n> decide that ALTER SYSTEM is an OK way to change configuration, or that\n> it isn't. How we enable them to make that decision is a point for\n> discussion, and how exactly we phrase the documentation is a point for\n> discussion, but we have no business trying to impose conditions, as if\n> they're only allowed to make that decision if they conform to some\n> (IMHO ridiculous) requirements that we dictate from on high. It's\n> their system, not ours.\n>\n\nAgreed on all those except they can already do this. It's just that the\nerror message is ugly. 
The path of least resistance would be to just\nspecifically detect a permissions error on the postgresql.auto.conf file\nwhen you try to do ALTER SYSTEM, and throw at least an error hint about\n\"you must allow writing to this file for the feature to work\".\n\nSo this patch isn't at all about enabling this functionality. It's about\nmaking it more user friendly.\n\n\nI mean, for crying out loud, users can set enable_seqscan=off in\n> postgresql.conf and GLOBALLY DISABLE SEQUENTIAL SCANS. They can set\n>\n\nThis is actually a good example, because it's kind of like this patch. It\ndoesn't *actually* disable the ability to run sequential scans, it just\ndisables the \"usual way\". Just like this patch doesn't prevent the\nsuperuser from editing the config, but it does prevent them droin doing it\n\"the usual way\".\n\n\n\n> zero_damaged_pages=on in postgresql.conf and silently remove vast\n> quantities of data without knowing that they're doing anything. We\n> don't even question that stuff ... although we probably should be\n>\n\nI like how you got this far and didn't even mention fsync=off :)\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Mar 20, 2024 at 8:52 PM Robert Haas <[email protected]> wrote:On Wed, Mar 20, 2024 at 3:17 PM Magnus Hagander <[email protected]> wrote:\n> Right, what I meant is that making it a packaging decision is the better place. Wherever it goes, allowing the administrator to choose what fits them should be made possible.\n\n+1. Which is also the justification for this patch, when it comes\nright down to it. The administrator gets to decide how the contents of\npostgresql.conf are to be managed on their particular installation.Not really. The administrator can *already* do that. 
It's trivial.This patch is about doing it in a way that doesn't produce as ugly a message.But if we're \"delegating\" it to packagers and \"os administrators\", then the problem is already solved. This patch is about trying to solve it *without* involving the packagers or OS administrators.Not saying we shouldn't do it, but I'd argue the exact opposite of yours aboe, which is that it's very much not the justification of the patch :) \nThey can decide that postgresql.conf should be writable by the same\nuser that runs PostgreSQL, or not. And they should also be able to\ndecide that ALTER SYSTEM is an OK way to change configuration, or that\nit isn't. How we enable them to make that decision is a point for\ndiscussion, and how exactly we phrase the documentation is a point for\ndiscussion, but we have no business trying to impose conditions, as if\nthey're only allowed to make that decision if they conform to some\n(IMHO ridiculous) requirements that we dictate from on high. It's\ntheir system, not ours.Agreed on all those except they can already do this. It's just that the error message is ugly. The path of least resistance would be to just specifically detect a permissions error on the postgresql.auto.conf file when you try to do ALTER SYSTEM, and throw at least an error hint about \"you must allow writing to this file for the feature to work\".So this patch isn't at all about enabling this functionality. It's about making it more user friendly.\nI mean, for crying out loud, users can set enable_seqscan=off in\npostgresql.conf and GLOBALLY DISABLE SEQUENTIAL SCANS. They can setThis is actually a good example, because it's kind of like this patch. It doesn't *actually* disable the ability to run sequential scans, it just disables the \"usual way\". Just like this patch doesn't prevent the superuser from editing the config, but it does prevent them droin doing it \"the usual way\". 
\nzero_damaged_pages=on in postgresql.conf and silently remove vast\nquantities of data without knowing that they're doing anything. We\ndon't even question that stuff ... although we probably should beI like how you got this far and didn't even mention fsync=off :)-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 21 Mar 2024 03:30:43 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 10:30 PM Magnus Hagander <[email protected]> wrote:\n> Not really. The administrator can *already* do that. It's trivial.\n>\n> This patch is about doing it in a way that doesn't produce as ugly a message.But if we're \"delegating\" it to packagers and \"os administrators\", then the problem is already solved. This patch is about trying to solve it *without* involving the packagers or OS administrators.\n>\n> Not saying we shouldn't do it, but I'd argue the exact opposite of yours aboe, which is that it's very much not the justification of the patch :)\n\nOK, that's a fair way of looking at it, too (and also you break client tools).\n\n>> I mean, for crying out loud, users can set enable_seqscan=off in\n>> postgresql.conf and GLOBALLY DISABLE SEQUENTIAL SCANS. They can set\n>\n> This is actually a good example, because it's kind of like this patch. It doesn't *actually* disable the ability to run sequential scans, it just disables the \"usual way\". Just like this patch doesn't prevent the superuser from editing the config, but it does prevent them droin doing it \"the usual way\".\n\nGood point.\n\n>> zero_damaged_pages=on in postgresql.conf and silently remove vast\n>> quantities of data without knowing that they're doing anything. We\n>> don't even question that stuff ... although we probably should be\n>\n> I like how you got this far and didn't even mention fsync=off :)\n\nHa!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Mar 2024 08:42:04 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 10:31 PM Magnus Hagander <[email protected]> wrote:\n>\n> On Wed, Mar 20, 2024 at 8:52 PM Robert Haas <[email protected]> wrote:\n>>\n>> On Wed, Mar 20, 2024 at 3:17 PM Magnus Hagander <[email protected]> wrote:\n>> > Right, what I meant is that making it a packaging decision is the better place. Wherever it goes, allowing the administrator to choose what fits them should be made possible.\n>>\n>> +1. Which is also the justification for this patch, when it comes\n>> right down to it. The administrator gets to decide how the contents of\n>> postgresql.conf are to be managed on their particular installation.\n>\n>\n> Not really. The administrator can *already* do that. It's trivial.\n>\n> This patch is about doing it in a way that doesn't produce as ugly a message.But if we're \"delegating\" it to packagers and \"os administrators\", then the problem is already solved. This patch is about trying to solve it *without* involving the packagers or OS administrators.\n>\n> Not saying we shouldn't do it, but I'd argue the exact opposite of yours aboe, which is that it's very much not the justification of the patch :)\n>\n>\n>>\n>> They can decide that postgresql.conf should be writable by the same\n>> user that runs PostgreSQL, or not. And they should also be able to\n>> decide that ALTER SYSTEM is an OK way to change configuration, or that\n>> it isn't. How we enable them to make that decision is a point for\n>> discussion, and how exactly we phrase the documentation is a point for\n>> discussion, but we have no business trying to impose conditions, as if\n>> they're only allowed to make that decision if they conform to some\n>> (IMHO ridiculous) requirements that we dictate from on high. It's\n>> their system, not ours.\n>\n>\n> Agreed on all those except they can already do this. It's just that the error message is ugly. 
The path of least resistance would be to just specifically detect a permissions error on the postgresql.auto.conf file when you try to do ALTER SYSTEM, and throw at least an error hint about \"you must allow writing to this file for the feature to work\".\n>\n> So this patch isn't at all about enabling this functionality. It's about making it more user friendly.\n>\n>\n>> I mean, for crying out loud, users can set enable_seqscan=off in\n>> postgresql.conf and GLOBALLY DISABLE SEQUENTIAL SCANS. They can set\n>\n>\n> This is actually a good example, because it's kind of like this patch. It doesn't *actually* disable the ability to run sequential scans, it just disables the \"usual way\". Just like this patch doesn't prevent the superuser from editing the config, but it does prevent them droin doing it \"the usual way\".\n>\n>\n>>\n>> zero_damaged_pages=on in postgresql.conf and silently remove vast\n>> quantities of data without knowing that they're doing anything. We\n>> don't even question that stuff ... although we probably should be\n>\n>\n> I like how you got this far and didn't even mention fsync=off :)\n>\n\nAnd yet somehow query hints are more scary than ALL of these things. Go figure!\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Thu, 21 Mar 2024 11:37:39 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 9:13 AM Jelte Fennema-Nio <[email protected]> wrote:\n> On Mon, 18 Mar 2024 at 18:27, Robert Haas <[email protected]> wrote:\n> > I think for now we\n> > should just file this under \"Other platforms and clients,\" which only\n> > has one existing setting. If the number of settings of this type\n> > grows, we can split it out.\n>\n> Done. I also included a patch to rename COMPAT_OPTIONS_CLIENTS to\n> COMPAT_OPTIONS_OTHER, since that enum variant naming doesn't match the\n> new intent of the section.\n\nI reviewed these patches. I think 0001 probably isn't strictly\nnecessary, but I don't think it's problematic either. And I'm quite\nhappy with 0002 also. In particular, I think the documentation - which\nmust be by far the most important of the patch - does an excellent job\nexplaining the limitations of this feature. My only quibbles are:\n\n- 0002 deletes a blank line from postgresql.conf.sample, and I think\nit shouldn't; and\n- I think the last sentence of the documentation is odd and could be\ndropped; who would expect changing a GUC to reset the contents of a\nconfig file, anyway?\n\nSince those are just minor points, that brings us to the question of\nwhether there is consensus to proceed with this. I believe that there\nis a clear consensus that there should be some way to disable ALTER\nSYSTEM. Sure, some people, particularly Tom, disagree, but I don't\nthink there is any way of counting up the votes that leads to the\nconclusion that we shouldn't have this feature at all. If someone\nfeels otherwise, show us how you counted the votes. What is less clear\nis whether there is a consensus in favor of this particular method of\ndisabling ALTER SYSTEM, namely, via a GUC. The two alternate\napproaches that seem to enjoy some level of support are (a) an\nextension or (b) changing the permissions on the files.\n\nI haven't tried to count up how many people are specifically in favor\nof each approach. 
I personally think that it doesn't matter very much,\nbecause I interpret the comments in favor of one or another\nimplementation as saying \"I want us to have this feature and of the\npossible approaches I prefer $WHATEVER\" rather than \"the only\narchitecturally acceptable approach to this feature is $WHATEVER and\nif we can't have that then i'd rather have nothing at all.\" Of course,\nlike everything else, that conclusion is open to debate, and certainly\nto correction by the people who have voted in favor of one of the\nalternate approaches, if I've misinterpreted their views.\n\nBut, as a practical matter, this is the patch we have, because this is\nthe patch that Gabriele and Jelte took time to write and polish.\nNobody else has taken the opportunity to produce a competing one. And,\nif we nevertheless insist that it has to be done some other way, I\nthink the inevitable result will be that nothing gets into this\nrelease at all, because we're less than 2 weeks from feature freeze,\nand there's not time for a complete do-over of something that was\noriginally proposed all the way back in September. And my reading of\nthe thread, at least, is that more people will be happy if something\ngets committed here, even if it's not exactly what they would have\npreferred, than if we get nothing at all.\n\nI'm going to wait a few days for any final comments. If it becomes\nclear that there is in fact no consensus to commit this version of the\npatch set (or something very similar) then I'll mark this as Returned\nwith Feedback. Otherwise, I plan to commit these patches (perhaps\nafter adjusting in accordance with my comments above).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Mar 2024 13:29:46 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> Since those are just minor points, that brings us to the question of\n> whether there is consensus to proceed with this. I believe that there\n> is a clear consensus that there should be some way to disable ALTER\n> SYSTEM. Sure, some people, particularly Tom, disagree, but I don't\n> think there is any way of counting up the votes that leads to the\n> conclusion that we shouldn't have this feature at all.\n\nFWIW, I never objected to the idea of being able to disable ALTER\nSYSTEM. I felt that it ought to be part of a larger feature that\nwould provide a more bulletproof guarantee that a superuser can't\nalter the system configuration; but I'm clearly in the minority\non that. I'm content with just having it disable ALTER SYSTEM\nand no more, as long as the documentation is sufficiently clear\nthat an uncooperative superuser can easily bypass this if you don't\nback it up with filesystem-level controls.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Mar 2024 13:47:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 1:47 PM Tom Lane <[email protected]> wrote:\n> FWIW, I never objected to the idea of being able to disable ALTER\n> SYSTEM. I felt that it ought to be part of a larger feature that\n> would provide a more bulletproof guarantee that a superuser can't\n> alter the system configuration; but I'm clearly in the minority\n> on that. I'm content with just having it disable ALTER SYSTEM\n> and no more, as long as the documentation is sufficiently clear\n> that an uncooperative superuser can easily bypass this if you don't\n> back it up with filesystem-level controls.\n\nOK, great. The latest patch doesn't specifically talk about backing it\nup with filesystem-level controls, but it does clearly say that this\nfeature is not going to stop a determined superuser from bypassing the\nfeature, which I think is the appropriate level of detail. We don't\nactually know whether a user has filesystem-level controls available\non their system that are equal to the task; certainly chmod isn't good\nenough, unless you can prevent the superuser from just running chmod\nagain, which you probably can't. An FS-level immutable flag or some\nother kind of OS-level wizardry might well get the job done, but I\ndon't think our documentation needs to speculate about that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Mar 2024 14:13:21 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> OK, great. The latest patch doesn't specifically talk about backing it\n> up with filesystem-level controls, but it does clearly say that this\n> feature is not going to stop a determined superuser from bypassing the\n> feature, which I think is the appropriate level of detail. We don't\n> actually know whether a user has filesystem-level controls available\n> on their system that are equal to the task; certainly chmod isn't good\n> enough, unless you can prevent the superuser from just running chmod\n> again, which you probably can't. An FS-level immutable flag or some\n> other kind of OS-level wizardry might well get the job done, but I\n> don't think our documentation needs to speculate about that.\n\nTrue. For postgresql.conf, you can put it outside the data directory\nand make it be owned by some other user, and the job is done. It's\nharder for postgresql.auto.conf because that always lives in the data\ndirectory which is necessarily postgres-writable, so even if you\ndid those two things to it the superuser could just rename or\nremove it and then write postgresql.auto.conf of his choosing.\n\nI wonder whether this feature should include teaching the server\nto ignore postgresql.auto.conf altogether, which would make it\nrelatively easy to get to a bulletproof configuration.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Mar 2024 14:26:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 7:27 PM Tom Lane <[email protected]> wrote:\n\n> Robert Haas <[email protected]> writes:\n> > OK, great. The latest patch doesn't specifically talk about backing it\n> > up with filesystem-level controls, but it does clearly say that this\n> > feature is not going to stop a determined superuser from bypassing the\n> > feature, which I think is the appropriate level of detail. We don't\n> > actually know whether a user has filesystem-level controls available\n> > on their system that are equal to the task; certainly chmod isn't good\n> > enough, unless you can prevent the superuser from just running chmod\n> > again, which you probably can't. An FS-level immutable flag or some\n> > other kind of OS-level wizardry might well get the job done, but I\n> > don't think our documentation needs to speculate about that.\n>\n> True. For postgresql.conf, you can put it outside the data directory\n> and make it be owned by some other user, and the job is done. It's\n> harder for postgresql.auto.conf because that always lives in the data\n> directory which is necessarily postgres-writable, so even if you\n> did those two things to it the superuser could just rename or\n> remove it and then write postgresql.auto.conf of his choosing.\n>\n\nJust to add to that -- if you use chattr +i on it, the superuser in\npostgres won't be able to rename it -- only the actual root user.\n\nJust chowning it won't help of course, then the rename part works.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 25 Mar 2024 19:30:03 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 2:26 PM Tom Lane <[email protected]> wrote:\n> I wonder whether this feature should include teaching the server\n> to ignore postgresql.auto.conf altogether, which would make it\n> relatively easy to get to a bulletproof configuration.\n\nThis has been debated a few times on the thread already, but a number\nof problems with that idea have been raised, and as far as I can see,\neveryone who suggested went on to recant and agree that we shouldn't\ndo that. If you feel a strong need to relitigate that, please check\nthe prior discussion first.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Mar 2024 14:45:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 01:29:46PM -0400, Robert Haas wrote:\n> What is less clear is whether there is a consensus in favor of this\n> particular method of disabling ALTER SYSTEM, namely, via a GUC. The\n> two alternate approaches that seem to enjoy some level of support are\n> (a) an extension or (b) changing the permissions on the files.\n\nI am wondering if the fact that you would be able to do:\n\n ALTER SYSTEM SET externally_managed_configuration = false\n\nand then be unable to use ALTER SYSTEM to revert the change is\nsignificant. I can't think of many such cases.\n\nIsn't \"configuration\" too generic a term for disabling ALTER SYSTEM?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 25 Mar 2024 15:16:09 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, 25 Mar 2024 at 20:16, Bruce Momjian <[email protected]> wrote:\n> I am wondering if the fact that you would be able to do:\n>\n> ALTER SYSTEM SET externally_managed_configuration = false\n>\n> and then be unable to use ALTER SYSTEM to revert the change is\n> significant.\n\nThis is not possible, due to the externally_managed_configuration GUC\nhaving the GUC_DISALLOW_IN_AUTO_FILE flag.\n\n> Isn't \"configuration\" too generic a term for disabling ALTER SYSTEM?\n\nmaybe \"externally_managed_auto_config\"\n\n\n",
"msg_date": "Mon, 25 Mar 2024 21:40:55 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 09:40:55PM +0100, Jelte Fennema-Nio wrote:\n> On Mon, 25 Mar 2024 at 20:16, Bruce Momjian <[email protected]> wrote:\n> > I am wondering if the fact that you would be able to do:\n> >\n> > ALTER SYSTEM SET externally_managed_configuration = false\n> >\n> > and then be unable to use ALTER SYSTEM to revert the change is\n> > significant.\n> \n> This is not possible, due to the externally_managed_configuration GUC\n> having the GUC_DISALLOW_IN_AUTO_FILE flag.\n\nAh, good, thanks.\n\n> > Isn't \"configuration\" too generic a term for disabling ALTER SYSTEM?\n> \n> maybe \"externally_managed_auto_config\"\n\nHow many people associate \"auto\" with ALTER SYSTEM? I assume not many. \n\nTo me, externally_managed_configuration is promising a lot more than it\ndelivers because there is still a lot of ocnfiguration it doesn't\ncontrol. I am also confused why the purpose of the feature, external\nmanagement of configuation, is part of the variable name. We usually\nname parameters for what they control.\n\nIt seems this is really controlling the ability to alter system\nvariables at the SQL level, maybe sql_alter_system_vars.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 25 Mar 2024 17:04:31 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 5:04 PM Bruce Momjian <[email protected]> wrote:\n> > > Isn't \"configuration\" too generic a term for disabling ALTER SYSTEM?\n> >\n> > maybe \"externally_managed_auto_config\"\n>\n> How many people associate \"auto\" with ALTER SYSTEM? I assume not many.\n>\n> To me, externally_managed_configuration is promising a lot more than it\n> delivers because there is still a lot of ocnfiguration it doesn't\n> control. I am also confused why the purpose of the feature, external\n> management of configuation, is part of the variable name. We usually\n> name parameters for what they control.\n\nI actually agree with this. I wasn't going to quibble with it because\nother people seemed to like it. But I think something like\nallow_alter_system would be better, as it would describe the exact\nthing that the parameter does, rather than how we think the parameter\nought to be used.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Mar 2024 08:11:33 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "> On 26 Mar 2024, at 13:11, Robert Haas <[email protected]> wrote:\n> On Mon, Mar 25, 2024 at 5:04 PM Bruce Momjian <[email protected]> wrote:\n\n>> To me, externally_managed_configuration is promising a lot more than it\n>> delivers because there is still a lot of ocnfiguration it doesn't\n>> control. I am also confused why the purpose of the feature, external\n>> management of configuation, is part of the variable name. We usually\n>> name parameters for what they control.\n> \n> I actually agree with this. I wasn't going to quibble with it because\n> other people seemed to like it. But I think something like\n> allow_alter_system would be better, as it would describe the exact\n> thing that the parameter does, rather than how we think the parameter\n> ought to be used.\n\n+Many. Either allow_alter_system or enable_alter_system_command is IMO\npreferrable, not least because someone might use this without using any\nexternal configuration tool, making the name even more misleading.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 26 Mar 2024 13:15:25 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "At 2024-03-26 08:11:33 -0400, [email protected] wrote:\n>\n> On Mon, Mar 25, 2024 at 5:04 PM Bruce Momjian <[email protected]> wrote:\n> > > > Isn't \"configuration\" too generic a term for disabling ALTER SYSTEM?\n> > >\n> > > maybe \"externally_managed_auto_config\"\n> >\n> > How many people associate \"auto\" with ALTER SYSTEM? I assume not many.\n> >\n> > To me, externally_managed_configuration is promising a lot more than it\n> > delivers because there is still a lot of ocnfiguration it doesn't\n> > control. I am also confused why the purpose of the feature, external\n> > management of configuation, is part of the variable name. We usually\n> > name parameters for what they control.\n> \n> I actually agree with this. I wasn't going to quibble with it because\n> other people seemed to like it. But I think something like\n> allow_alter_system would be better, as it would describe the exact\n> thing that the parameter does, rather than how we think the parameter\n> ought to be used.\n\nYes, \"externally_managed_configuration\" raises far more questions than\nit answers. \"enable_alter_system\" is clearer in terms of what to expect\nwhen you set it. \"enable_alter_system_command\" is rather long, but even\nbetter in that it is specific enough to not promise anything about not\nallowing superusers to change the configuration some other way.\n\n-- Abhijit (as someone who could find a use for this feature)\n\n\n",
"msg_date": "Tue, 26 Mar 2024 18:25:04 +0530",
"msg_from": "Abhijit Menon-Sen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 8:55 AM Abhijit Menon-Sen <[email protected]> wrote:\n> Yes, \"externally_managed_configuration\" raises far more questions than\n> it answers. \"enable_alter_system\" is clearer in terms of what to expect\n> when you set it. \"enable_alter_system_command\" is rather long, but even\n> better in that it is specific enough to not promise anything about not\n> allowing superusers to change the configuration some other way.\n\nIt was previously suggested that we shouldn't start the GUC name with\n\"enable,\" since those are all planner GUCs currently. It's sort of a\nsilly precedent, but we have it, so that's why I proposed \"allow\"\ninstead.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Mar 2024 09:43:58 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Mar 25, 2024 at 5:04 PM Bruce Momjian <[email protected]> wrote:\n>> To me, externally_managed_configuration is promising a lot more than it\n>> delivers because there is still a lot of ocnfiguration it doesn't\n>> control. I am also confused why the purpose of the feature, external\n>> management of configuation, is part of the variable name. We usually\n>> name parameters for what they control.\n\n> I actually agree with this. I wasn't going to quibble with it because\n> other people seemed to like it. But I think something like\n> allow_alter_system would be better, as it would describe the exact\n> thing that the parameter does, rather than how we think the parameter\n> ought to be used.\n\n+1. The overpromise-and-underdeliver aspect of the currently proposed\nname is a lot of the reason I've been unhappy and kept pushing for it\nto lock things down more. \"allow_alter_system\" is a lot more\nstraightforward about exactly what it does, and if that is all we want\nit to do, then a name like that is good.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Mar 2024 10:23:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 10:23:51AM -0400, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n> > On Mon, Mar 25, 2024 at 5:04 PM Bruce Momjian <[email protected]> wrote:\n> >> To me, externally_managed_configuration is promising a lot more than it\n> >> delivers because there is still a lot of ocnfiguration it doesn't\n> >> control. I am also confused why the purpose of the feature, external\n> >> management of configuation, is part of the variable name. We usually\n> >> name parameters for what they control.\n> \n> > I actually agree with this. I wasn't going to quibble with it because\n> > other people seemed to like it. But I think something like\n> > allow_alter_system would be better, as it would describe the exact\n> > thing that the parameter does, rather than how we think the parameter\n> > ought to be used.\n> \n> +1. The overpromise-and-underdeliver aspect of the currently proposed\n> name is a lot of the reason I've been unhappy and kept pushing for it\n> to lock things down more. \"allow_alter_system\" is a lot more\n> straightforward about exactly what it does, and if that is all we want\n> it to do, then a name like that is good.\n\nI am thinking \"enable_alter_system_command\" is probably good because we\nalready use \"enable\" so why not reuse that idea, and I think \"command\"\nis needed because we need to clarify we are talking about the command,\nand not generic altering of the system. We could use\n\"enable_sql_alter_system\" if people want something shorter.\n\nWill people think this allows non-root users to use ALTER SYSTEM if\nenabled?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 26 Mar 2024 12:35:39 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am thinking \"enable_alter_system_command\" is probably good because we\n> already use \"enable\" so why not reuse that idea, and I think \"command\"\n> is needed because we need to clarify we are talking about the command,\n> and not generic altering of the system. We could use\n> \"enable_sql_alter_system\" if people want something shorter.\n\nRobert already mentioned why not use \"enable_\": up to now that prefix\nhas only been applied to planner plan-type-enabling GUCs. I'd be okay\nwith \"allow_alter_system_command\", although I find it unnecessarily\nverbose.\n\n> Will people think this allows non-root users to use ALTER SYSTEM if\n> enabled?\n\nThey'll soon find out differently, so I'm not concerned about that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Mar 2024 13:23:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "\n\n> On Mar 27, 2024, at 3:53 AM, Tom Lane <[email protected]> wrote:\n> \n> Bruce Momjian <[email protected]> writes:\n>> I am thinking \"enable_alter_system_command\" is probably good because we\n>> already use \"enable\" so why not reuse that idea, and I think \"command\"\n>> is needed because we need to clarify we are talking about the command,\n>> and not generic altering of the system. We could use\n>> \"enable_sql_alter_system\" if people want something shorter.\n> \n> Robert already mentioned why not use \"enable_\": up to now that prefix\n> has only been applied to planner plan-type-enabling GUCs. I'd be okay\n> with \"allow_alter_system_command\", although I find it unnecessarily\n> verbose.\n\nAgree. I don’t think “_command” adds much clarity.\n\nCheers\n\nAndrew\n\n\n",
"msg_date": "Wed, 27 Mar 2024 11:53:55 +1030",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, 27 Mar 2024 at 02:24, Andrew Dunstan <[email protected]> wrote:\n> Agree. I don’t think “_command” adds much clarity.\n\nAlright, changed the GUC name to \"allow_alter_system\" since that seems\nto have the most \"votes\". One other option would be to call it simply\n\"alter_system\", just like \"jit\" is not called \"allow_jit\" or\n\"enable_jit\".\n\nBut personally I feel that the \"allow_alter_system\" is clearer than\nplain \"alter_system\" for the GUC name.",
"msg_date": "Wed, 27 Mar 2024 15:43:28 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 03:43:28PM +0100, Jelte Fennema-Nio wrote:\n> + </term>\n> + <listitem>\n> + <para>\n> + When <literal>allow_alter_system</literal> is set to\n> + <literal>on</literal>, an error is returned if the <command>ALTER\n> + SYSTEM</command> command is used. This parameter can only be set in\n> + the <filename>postgresql.conf</filename> file or on the server command\n> + line. The default value is <literal>on</literal>.\n> + </para>\n\nUh, the above is clearly wrong. I think you mean \"off\" on the second line.\n\n> +\n> + <para>\n> + Note that this setting cannot be regarded as a security feature. It\n> + only disables the <literal>ALTER SYSTEM</literal> command. It does not\n> + prevent a superuser from changing the configuration remotely using\n\nWhy \"remotely\"?\n\n> + other means. A superuser has many ways of executing shell commands at\n> + the operating system level, and can therefore modify\n> + <literal>postgresql.auto.conf</literal> regardless of the value of\n> + this setting. The purpose of the setting is to prevent\n> + <emphasis>accidental</emphasis> modifications via <literal>ALTER\n> + SYSTEM</literal> in environments where\n> + <productname>PostgreSQL</productname> its configuration is managed by\n\n\"its\"?\n\n> + some outside mechanism. In such environments, using <command>ALTER\n> + SYSTEM</command> to make configuration changes might appear to work,\n> + but then may be discarded at some point in the future when that outside\n\n\"might\"\n\n> + mechanism updates the configuration. Setting this parameter to\n> + <literal>on</literal> can help to avoid such mistakes.\n> + </para>\n\n\"off\"\n\nIs this really a patch we think we can push into PG 17. I am having my\ndoubts.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 27 Mar 2024 11:01:28 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 10:43 AM Jelte Fennema-Nio <[email protected]> wrote:\n> Alright, changed the GUC name to \"allow_alter_system\" since that seems\n> to have the most \"votes\". One other option would be to call it simply\n> \"alter_system\", just like \"jit\" is not called \"allow_jit\" or\n> \"enable_jit\".\n>\n> But personally I feel that the \"allow_alter_system\" is clearer than\n> plain \"alter_system\" for the GUC name.\n\nI agree, and have committed your 0001.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Mar 2024 11:05:55 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 11:01 AM Bruce Momjian <[email protected]> wrote:\n> Uh, the above is clearly wrong. I think you mean \"off\" on the second line.\n\nWoops. When the name changed from externally_managed_configuration to\nallow_alter_system, the sense of it was reversed, and I guess Jelte\nmissed flipping the documentation references around. I likely would\nhave made the same mistake, but it's easily fixed.\n\n> > +\n> > + <para>\n> > + Note that this setting cannot be regarded as a security feature. It\n> > + only disables the <literal>ALTER SYSTEM</literal> command. It does not\n> > + prevent a superuser from changing the configuration remotely using\n>\n> Why \"remotely\"?\n\nThis wording was suggested upthread. I think the point here is that if\nthe superuser is logging in from the local machine, it's obvious that\nthey can do whatever they want. The point is to emphasize that a\nsuperuser without a local login can, too.\n\n> \"its\"?\n\nYeah, that seems like an extra word.\n\n> > + some outside mechanism. In such environments, using <command>ALTER\n> > + SYSTEM</command> to make configuration changes might appear to work,\n> > + but then may be discarded at some point in the future when that outside\n>\n> \"might\"\n\nThis does not seem like a mistake to me. I'm not sure why you think it is.\n\n> > + mechanism updates the configuration. Setting this parameter to\n> > + <literal>on</literal> can help to avoid such mistakes.\n> > + </para>\n>\n> \"off\"\n\nThis is another case that needs to be fixed now that the sense of the\nGUC is reversed. (We'd better make sure the code has the test the\nright way around, too.)\n\n> Is this really a patch we think we can push into PG 17. I am having my\n> doubts.\n\nIf the worst thing that happens in PG 17 is that we push a patch that\nneeds a few documentation corrections, we're going to be doing\nfabulously well.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Mar 2024 11:10:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, 27 Mar 2024 at 16:10, Robert Haas <[email protected]> wrote:\n>\n> On Wed, Mar 27, 2024 at 11:01 AM Bruce Momjian <[email protected]> wrote:\n> > Uh, the above is clearly wrong. I think you mean \"off\" on the second line.\n>\n> Woops. When the name changed from externally_managed_configuration to\n> allow_alter_system, the sense of it was reversed, and I guess Jelte\n> missed flipping the documentation references around.\n\nYeah, that's definitely what happened. I did change a few, but I\nindeed missed a few others (or maybe flipped some twice by accident,\nor hadn't been flipped before when it reversed previously).\n\n> > Why \"remotely\"?\n>\n> This wording was suggested upthread. I think the point here is that if\n> the superuser is logging in from the local machine, it's obvious that\n> they can do whatever they want. The point is to emphasize that a\n> superuser without a local login can, too.\n\nChanged this from \"remotely using other means\" to \"using other SQL commands\".\n\n> > \"its\"?\n>\n> Yeah, that seems like an extra word.\n\nChanged this to \"the configuration of PostgreSQL\"\n\n> > > + some outside mechanism. In such environments, using <command>ALTER\n> > > + SYSTEM</command> to make configuration changes might appear to work,\n> > > + but then may be discarded at some point in the future when that outside\n> >\n> > \"might\"\n>\n> This does not seem like a mistake to me. I'm not sure why you think it is.\n\nI also think the original sentence was correct, but I don't think it\nread very naturally. Changed it now in hopes to improve that.\n\n> > > + mechanism updates the configuration. Setting this parameter to\n> > > + <literal>on</literal> can help to avoid such mistakes.\n> > > + </para>\n> >\n> > \"off\"\n>\n> This is another case that needs to be fixed now that the sense of the\n> GUC is reversed. 
(We'd better make sure the code has the test the\n> right way around, too.)\n\nFixed this one too, and the code is the right way around.",
"msg_date": "Wed, 27 Mar 2024 16:50:27 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": ">\n> The purpose of the setting is to prevent <emphasis>accidental</emphasis>\n> modifications via <literal>ALTER SYSTEM</literal> in environments where\n\n\nThe emphasis on 'accidental' seems a bit heavy here, and odd. Surely, just\n\"to prevent modifications via ALTER SYSTEM in environments where...\" is\nenough?\n\nCheers,\nGreg",
"msg_date": "Wed, 27 Mar 2024 13:05:08 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, 27 Mar 2024 at 13:05, Greg Sabino Mullane <[email protected]>\nwrote:\n\n> The purpose of the setting is to prevent <emphasis>accidental</emphasis>\n>> modifications via <literal>ALTER SYSTEM</literal> in environments where\n>\n>\n> The emphasis on 'accidental' seems a bit heavy here, and odd. Surely, just\n> \"to prevent modifications via ALTER SYSTEM in environments where...\" is\n> enough?\n>\n\nNot necessarily disagreeing, but it's very important nobody ever mistake\nthis for a security feature. I don't know if the extra word \"accidental\" is\nnecessary, but I think that's the motivation.",
"msg_date": "Wed, 27 Mar 2024 13:12:17 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 10:12 AM Isaac Morland <[email protected]>\nwrote:\n\n> On Wed, 27 Mar 2024 at 13:05, Greg Sabino Mullane <[email protected]>\n> wrote:\n>\n>> The purpose of the setting is to prevent <emphasis>accidental</emphasis>\n>>> modifications via <literal>ALTER SYSTEM</literal> in environments where\n>>\n>>\n>> The emphasis on 'accidental' seems a bit heavy here, and odd. Surely,\n>> just \"to prevent modifications via ALTER SYSTEM in environments where...\"\n>> is enough?\n>>\n>\n> Not necessarily disagreeing, but it's very important nobody ever mistake\n> this for a security feature. I don't know if the extra word \"accidental\" is\n> necessary, but I think that's the motivation.\n>\n\nPrevent non-malicious modifications via ALTER SYSTEM in environments where\n...\n\nDavid J.",
"msg_date": "Wed, 27 Mar 2024 10:34:57 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 1:12 PM Isaac Morland <[email protected]> wrote:\n> On Wed, 27 Mar 2024 at 13:05, Greg Sabino Mullane <[email protected]> wrote:\n>>> The purpose of the setting is to prevent <emphasis>accidental</emphasis> modifications via <literal>ALTER SYSTEM</literal> in environments where\n>> The emphasis on 'accidental' seems a bit heavy here, and odd. Surely, just \"to prevent modifications via ALTER SYSTEM in environments where...\" is enough?\n> Not necessarily disagreeing, but it's very important nobody ever mistake this for a security feature. I don't know if the extra word \"accidental\" is necessary, but I think that's the motivation.\n\nI think the emphasis is entirely warranted in this case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Mar 2024 14:46:01 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024, 11:46 Robert Haas <[email protected]> wrote:\n\n> On Wed, Mar 27, 2024 at 1:12 PM Isaac Morland <[email protected]>\n> wrote:\n> > On Wed, 27 Mar 2024 at 13:05, Greg Sabino Mullane <[email protected]>\n> wrote:\n> >>> The purpose of the setting is to prevent\n> <emphasis>accidental</emphasis> modifications via <literal>ALTER\n> SYSTEM</literal> in environments where\n> >> The emphasis on 'accidental' seems a bit heavy here, and odd. Surely,\n> just \"to prevent modifications via ALTER SYSTEM in environments where...\"\n> is enough?\n> > Not necessarily disagreeing, but it's very important nobody ever mistake\n> this for a security feature. I don't know if the extra word \"accidental\" is\n> necessary, but I think that's the motivation.\n>\n> I think the emphasis is entirely warranted in this case.\n\n\n+1. And while \"non-malicious\" may technically be more correct, I don't\nthink it's any clearer.",
"msg_date": "Wed, 27 Mar 2024 12:10:02 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 04:50:27PM +0100, Jelte Fennema-Nio wrote:\n> > This wording was suggested upthread. I think the point here is that if\n> > the superuser is logging in from the local machine, it's obvious that\n> > they can do whatever they want. The point is to emphasize that a\n> > superuser without a local login can, too.\n> \n> Changed this from \"remotely using other means\" to \"using other SQL commands\".\n\nYes, I like the SQL emphasis since \"remote\" just doesn't seem like the\nright thing to highlight here.\n\n> > > > + some outside mechanism. In such environments, using <command>ALTER\n> > > > + SYSTEM</command> to make configuration changes might appear to work,\n> > > > + but then may be discarded at some point in the future when that outside\n> > >\n> > > \"might\"\n> >\n> > This does not seem like a mistake to me. I'm not sure why you think it is.\n> \n> I also think the original sentence was correct, but I don't think it\n> read very naturally. Changed it now in hopes to improve that.\n\nSo, might means \"possibility\" while \"may\" means permission, so \"might\"\nis clearer here.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 27 Mar 2024 18:06:32 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 11:05:55AM -0400, Robert Haas wrote:\n> On Wed, Mar 27, 2024 at 10:43 AM Jelte Fennema-Nio <[email protected]> wrote:\n> > Alright, changed the GUC name to \"allow_alter_system\" since that seems\n> > to have the most \"votes\". One other option would be to call it simply\n> > \"alter_system\", just like \"jit\" is not called \"allow_jit\" or\n> > \"enable_jit\".\n> >\n> > But personally I feel that the \"allow_alter_system\" is clearer than\n> > plain \"alter_system\" for the GUC name.\n> \n> I agree, and have committed your 0001.\n\nSo, I email \"Is this really a patch we think we can push into PG 17. I\nam having my doubts,\" and the patch is applied a few hours after my\nemail. Wow.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 27 Mar 2024 18:09:02 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 06:09:02PM -0400, Bruce Momjian wrote:\n> On Wed, Mar 27, 2024 at 11:05:55AM -0400, Robert Haas wrote:\n> > On Wed, Mar 27, 2024 at 10:43 AM Jelte Fennema-Nio <[email protected]> wrote:\n> > > Alright, changed the GUC name to \"allow_alter_system\" since that seems\n> > > to have the most \"votes\". One other option would be to call it simply\n> > > \"alter_system\", just like \"jit\" is not called \"allow_jit\" or\n> > > \"enable_jit\".\n> > >\n> > > But personally I feel that the \"allow_alter_system\" is clearer than\n> > > plain \"alter_system\" for the GUC name.\n> > \n> > I agree, and have committed your 0001.\n> \n> So, I email \"Is this really a patch we think we can push into PG 17. I\n> am having my doubts,\" and the patch is applied a few hours after my\n> email. Wow.\n\nAlso odd is that I don't see the commit in git master, so now I am\nconfused.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 27 Mar 2024 18:13:22 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 3:13 PM Bruce Momjian <[email protected]> wrote:\n\n> On Wed, Mar 27, 2024 at 06:09:02PM -0400, Bruce Momjian wrote:\n> > On Wed, Mar 27, 2024 at 11:05:55AM -0400, Robert Haas wrote:\n> > > On Wed, Mar 27, 2024 at 10:43 AM Jelte Fennema-Nio <[email protected]>\n> wrote:\n> > > > Alright, changed the GUC name to \"allow_alter_system\" since that\n> seems\n> > > > to have the most \"votes\". One other option would be to call it simply\n> > > > \"alter_system\", just like \"jit\" is not called \"allow_jit\" or\n> > > > \"enable_jit\".\n> > > >\n> > > > But personally I feel that the \"allow_alter_system\" is clearer than\n> > > > plain \"alter_system\" for the GUC name.\n> > >\n> > > I agree, and have committed your 0001.\n> >\n> > So, I email \"Is this really a patch we think we can push into PG 17. I\n> > am having my doubts,\" and the patch is applied a few hours after my\n> > email. Wow.\n>\n> Also odd is that I don't see the commit in git master, so now I am\n> confused.\n>\n\nThe main feature being discussed is in the 0002 patch while Robert pushed a\ndoc section rename in the 0001 patch.\n\nDavid J.",
"msg_date": "Wed, 27 Mar 2024 15:18:26 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 3:18 PM David G. Johnston <\[email protected]> wrote:\n\n> On Wed, Mar 27, 2024 at 3:13 PM Bruce Momjian <[email protected]> wrote:\n>\n>> On Wed, Mar 27, 2024 at 06:09:02PM -0400, Bruce Momjian wrote:\n>> > On Wed, Mar 27, 2024 at 11:05:55AM -0400, Robert Haas wrote:\n>> > > On Wed, Mar 27, 2024 at 10:43 AM Jelte Fennema-Nio <\n>> [email protected]> wrote:\n>> > > > Alright, changed the GUC name to \"allow_alter_system\" since that\n>> seems\n>> > > > to have the most \"votes\". One other option would be to call it\n>> simply\n>> > > > \"alter_system\", just like \"jit\" is not called \"allow_jit\" or\n>> > > > \"enable_jit\".\n>> > > >\n>> > > > But personally I feel that the \"allow_alter_system\" is clearer than\n>> > > > plain \"alter_system\" for the GUC name.\n>> > >\n>> > > I agree, and have committed your 0001.\n>> >\n>> > So, I email \"Is this really a patch we think we can push into PG 17. I\n>> > am having my doubts,\" and the patch is applied a few hours after my\n>> > email. Wow.\n>>\n>> Also odd is that I don't see the commit in git master, so now I am\n>> confused.\n>>\n>\n> The main feature being discussed is in the 0002 patch while Robert pushed\n> a doc section rename in the 0001 patch.\n>\n>\nWell, the internal category name was changed though the docs did remain\nunchanged.\n\nDavid J.",
"msg_date": "Wed, 27 Mar 2024 15:20:38 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 11:10:31AM -0400, Robert Haas wrote:\n> > Is this really a patch we think we can push into PG 17. I am having my\n> > doubts.\n> \n> If the worst thing that happens in PG 17 is that we push a patch that\n> needs a few documentation corrections, we're going to be doing\n> fabulously well.\n\nMy point is that we are designing the user API in the last weeks of the\ncommitfest, which usually ends badly for us, and the fact the docs were\nnot even right in the patch just reenforces that concern.\n\nBut, as I stated in another email, you said you committed the patch,\nyet I don't see it committed in git master, so I am confused.\n\nAh, I figured it out. You were talking about the GUC renaming:\n\n\tcommit de7e96bd0fc\n\tAuthor: Robert Haas <[email protected]>\n\tDate: Wed Mar 27 10:45:28 2024 -0400\n\t\n\t Rename COMPAT_OPTIONS_CLIENT to COMPAT_OPTIONS_OTHER.\n\t\n\t The user-facing name is \"Other Platforms and Clients\", but the\n\t internal name seems too focused on clients specifically, especially\n\t given the plan to add a new setting to this session that is about\n\t platform or deployment model compatibility rather than client\n\t compatibility.\n\t\n\t Jelte Fennema-Nio\n\t\n\t Discussion: http://postgr.es/m/CAGECzQTfMbDiM6W3av+3weSnHxJvPmuTEcjxVvSt91sQBdOxuQ@mail.gmail.com\n\nPlease ignore my complaints, and my apologies.\n\nAs far as the GUC change, let's just be careful since we have a bad\nhistory of pushing things near the end that we regret. I am not saying\nthat would be this feature, but let's be careful.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 27 Mar 2024 18:23:54 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 03:20:38PM -0700, David G. Johnston wrote:\n> On Wed, Mar 27, 2024 at 3:18 PM David G. Johnston <[email protected]>\n> wrote:\n> \n> On Wed, Mar 27, 2024 at 3:13 PM Bruce Momjian <[email protected]> wrote:\n> \n> On Wed, Mar 27, 2024 at 06:09:02PM -0400, Bruce Momjian wrote:\n> > On Wed, Mar 27, 2024 at 11:05:55AM -0400, Robert Haas wrote:\n> > > On Wed, Mar 27, 2024 at 10:43 AM Jelte Fennema-Nio <\n> [email protected]> wrote:\n> > > > Alright, changed the GUC name to \"allow_alter_system\" since that\n> seems\n> > > > to have the most \"votes\". One other option would be to call it\n> simply\n> > > > \"alter_system\", just like \"jit\" is not called \"allow_jit\" or\n> > > > \"enable_jit\".\n> > > >\n> > > > But personally I feel that the \"allow_alter_system\" is clearer\n> than\n> > > > plain \"alter_system\" for the GUC name.\n> > >\n> > > I agree, and have committed your 0001.\n> >\n> > So, I email \"Is this really a patch we think we can push into PG 17.\n> I\n> > am having my doubts,\" and the patch is applied a few hours after my\n> > email. Wow.\n> \n> Also odd is that I don't see the commit in git master, so now I am\n> confused.\n> \n> \n> The main feature being discussed is in the 0002 patch while Robert pushed a\n> doc section rename in the 0001 patch.\n> \n> \n> \n> Well, the internal category name was changed though the docs did remain\n> unchanged.\n\nYes, I figured that out, thank you.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 27 Mar 2024 18:24:15 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, 27 Mar 2024 at 23:23, Bruce Momjian <[email protected]> wrote:\n>\n> On Wed, Mar 27, 2024 at 11:10:31AM -0400, Robert Haas wrote:\n> > > Is this really a patch we think we can push into PG 17. I am having my\n> > > doubts.\n> >\n> > If the worst thing that happens in PG 17 is that we push a patch that\n> > needs a few documentation corrections, we're going to be doing\n> > fabulously well.\n>\n> My point is that we are designing the user API in the last weeks of the\n> commitfest, which usually ends badly for us, and the fact the docs were\n> not even right in the patch just reenforces that concern.\n\nThis user API is exactly the same as the original patch from Gabriele\nin September (apart from enable->allow). And we spent half a year\ndiscussing other designs for the user API. So I disagree that we're\ndesigning the user API in the last weeks of the commitfest.\n\n\n",
"msg_date": "Thu, 28 Mar 2024 00:00:43 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, 27 Mar 2024 at 20:10, Maciek Sakrejda <[email protected]> wrote:\n>\n> On Wed, Mar 27, 2024, 11:46 Robert Haas <[email protected]> wrote:\n>>\n>> On Wed, Mar 27, 2024 at 1:12 PM Isaac Morland <[email protected]> wrote:\n>> > On Wed, 27 Mar 2024 at 13:05, Greg Sabino Mullane <[email protected]> wrote:\n>> >>> The purpose of the setting is to prevent <emphasis>accidental</emphasis> modifications via <literal>ALTER SYSTEM</literal> in environments where\n>> >> The emphasis on 'accidental' seems a bit heavy here, and odd. Surely, just \"to prevent modifications via ALTER SYSTEM in environments where...\" is enough?\n>> > Not necessarily disagreeing, but it's very important nobody ever mistake this for a security feature. I don't know if the extra word \"accidental\" is necessary, but I think that's the motivation.\n>>\n>> I think the emphasis is entirely warranted in this case.\n>\n> +1. And while \"non-malicious\" may technically be more correct, I don't think it's any clearer.\n\nAttached is a new version of the patch with some sentences reworded. I\nchanged accidentally to mistakenly (which still has emphasis). And I\nhope with the rewording it's now clearer to the reader why that\nemphasis is there.",
"msg_date": "Thu, 28 Mar 2024 00:43:29 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, 27 Mar 2024 at 23:06, Bruce Momjian <[email protected]> wrote:\n> > > > > + some outside mechanism. In such environments, using <command>ALTER\n> > > > > + SYSTEM</command> to make configuration changes might appear to work,\n> > > > > + but then may be discarded at some point in the future when that outside\n> > > >\n> > > > \"might\"\n> > >\n> > > This does not seem like a mistake to me. I'm not sure why you think it is.\n> >\n> > I also think the original sentence was correct, but I don't think it\n> > read very naturally. Changed it now in hopes to improve that.\n>\n> So, might means \"possibility\" while \"may\" means permission, so \"might\"\n> is clearer here.\n\nAaah, I misunderstood your original feedback then. I thought you\ndidn't like the use of \"might\" in \"might appear to work\". But I now\nrealize you meant \"may be discarded\" should be changed to \"might be\ndiscarded\". I addressed that in my latest version of the patch.\n\n\n",
"msg_date": "Thu, 28 Mar 2024 00:47:46 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 12:47:46AM +0100, Jelte Fennema-Nio wrote:\n> On Wed, 27 Mar 2024 at 23:06, Bruce Momjian <[email protected]> wrote:\n> > > > > > + some outside mechanism. In such environments, using <command>ALTER\n> > > > > > + SYSTEM</command> to make configuration changes might appear to work,\n> > > > > > + but then may be discarded at some point in the future when that outside\n> > > > >\n> > > > > \"might\"\n> > > >\n> > > > This does not seem like a mistake to me. I'm not sure why you think it is.\n> > >\n> > > I also think the original sentence was correct, but I don't think it\n> > > read very naturally. Changed it now in hopes to improve that.\n> >\n> > So, might means \"possibility\" while \"may\" means permission, so \"might\"\n> > is clearer here.\n> \n> Aaah, I misunderstood your original feedback then. I thought you\n> didn't like the use of \"might\" in \"might appear to work\". But I now\n> realize you meant \"may be discarded\" should be changed to \"might be\n> discarded\". I addressed that in my latest version of the patch.\n\nThanks. I did the may/might/can changes in the docs years ago so I\nremember the distinction.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 27 Mar 2024 19:58:56 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 12:43:29AM +0100, Jelte Fennema-Nio wrote:\n> + <varlistentry id=\"guc-allow-alter-system\" xreflabel=\"allow_alter_system\">\n> + <term><varname>allow_alter_system</varname> (<type>boolean</type>)\n> + <indexterm>\n> + <primary><varname>allow_alter_system</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + When <literal>allow_alter_system</literal> is set to\n> + <literal>off</literal>, an error is returned if the <command>ALTER\n> + SYSTEM</command> command is used. This parameter can only be set in\n\n\"command is used.\" -> \"command is issued.\" ?\n\n> + the <filename>postgresql.conf</filename> file or on the server command\n> + line. The default value is <literal>on</literal>.\n> + </para>\n> +\n> + <para>\n> + Note that this setting cannot be regarded as a security feature. It\n\n\"setting cannot be regarded\" -> \"setting should not be regarded\"\n\n> + only disables the <literal>ALTER SYSTEM</literal> command. It does not\n> + prevent a superuser from changing the configuration using other SQL\n> + commands. A superuser has many ways of executing shell commands at\n> + the operating system level, and can therefore modify\n> + <literal>postgresql.auto.conf</literal> regardless of the value of\n> + this setting.\n\nI like that you explained how this can be bypassed.\n\n> +\n> + <para>\n> + Turning this setting off is intended for environments where the\n> + configuration of <productname>PostgreSQL</productname> is managed by\n> + some outside mechanism.\n> + In such environments, a well intenioned superuser user might\n> + <emphasis>mistakenly</emphasis> use <command>ALTER SYSTEM</command>\n> + to change the configuration instead of using the outside mechanism.\n> + This might even appear to update the configuration as intended, but\n\n\"This might even appear to update\" -> \"This might temporarily update\"\n\n> + then might be discarded at some point in the future when that outside\n\n\"that outside\" -> \"the outside\"\n\n> + mechanism updates the configuration.\n> + Setting this parameter to <literal>off</literal> can\n> + help to avoid such mistakes.\n\n\"help to avoid\" -> \"help avoid\"\n\n> + </para>\n> +\n> + <para>\n> + This parameter only controls the use of <command>ALTER SYSTEM</command>.\n> + The settings stored in <filename>postgresql.auto.conf</filename> always\n\n\"always\" -> \"still\"\n\nShould this paragraph be moved after or as part of the paragraph about\nmodifying postgresql.auto.conf?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 27 Mar 2024 20:17:09 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 5:17 PM Bruce Momjian <[email protected]> wrote:\n\n> On Thu, Mar 28, 2024 at 12:43:29AM +0100, Jelte Fennema-Nio wrote:\n> > + <varlistentry id=\"guc-allow-alter-system\"\n> xreflabel=\"allow_alter_system\">\n> > + <term><varname>allow_alter_system</varname> (<type>boolean</type>)\n> > + <indexterm>\n> > + <primary><varname>allow_alter_system</varname> configuration\n> parameter</primary>\n> > + </indexterm>\n> > + </term>\n> > + <listitem>\n> > + <para>\n> > + When <literal>allow_alter_system</literal> is set to\n> > + <literal>off</literal>, an error is returned if the\n> <command>ALTER\n> > + SYSTEM</command> command is used. This parameter can only be\n> set in\n>\n> \"command is used.\" -> \"command is issued.\" ?\n>\n\n\"command is executed\" seems even better. I'd take used over issued.\n\n\n> > + the <filename>postgresql.conf</filename> file or on the server\n> command\n> > + line. The default value is <literal>on</literal>.\n> > + </para>\n> > +\n> > + <para>\n> > + Note that this setting cannot be regarded as a security\n> feature. It\n>\n> \"setting cannot be regarded\" -> \"setting should not be regarded\"\n>\n\n\"setting must not be regarded\" is the correct option here. Stronger than\nshould; we are unable to control whether someone can/does regard it\ndifferently.\n\n\n> > +\n> > + <para>\n> > + Turning this setting off is intended for environments where the\n> > + configuration of <productname>PostgreSQL</productname> is\n> managed by\n> > + some outside mechanism.\n> > + In such environments, a well intenioned superuser user might\n> > + <emphasis>mistakenly</emphasis> use <command>ALTER\n> SYSTEM</command>\n> > + to change the configuration instead of using the outside\n> mechanism.\n> > + This might even appear to update the configuration as intended,\n> but\n>\n> \"This might even appear to update\" -> \"This might temporarily update\"\n>\n\nI strongly prefer temporarily over may/might/could.\n\n\n\n>\n> > + then might be discarded at some point in the future when that\n> outside\n>\n> \"that outside\" -> \"the outside\"\n>\n\nFeel like \"external\" is a more context appropriate term here than \"outside\".\n\nExternal also has precedent.\nhttps://www.postgresql.org/docs/current/config-setting.html#CONFIG-INCLUDES\n\"External tools may also modify postgresql.auto.conf. It is not recommended\nto do this while the server is running,\"\n\nThat suggests using \"external tools\" instead of \"outside mechanisms\"\n\nThis section is also the main entry point for users into the configuration\nsubsystem and hasn't been updated to reflect this new feature. That seems\nlike an oversight that needs to be corrected.\n\n> + </para>\n> > +\n> > + <para>\n> > + This parameter only controls the use of <command>ALTER\n> SYSTEM</command>.\n> > + The settings stored in\n> <filename>postgresql.auto.conf</filename> always\n>\n> \"always\" -> \"still\"\n>\n\nNeither qualifier is needed, nor does one seem clearly better than the\nother. Always is true so the weaker \"still\" seems like the worse choice.\n\nThe following is a complete and clear sentence.\n\nThe settings stored in postgresql.auto.conf take effect even if\nallow_alter_system is set to off.\n\n\n> Should this paragraph be moved after or as part of the paragraph about\n> modifying postgresql.auto.conf?\n>\n>\nI like it by itself.\n\nDavid J.",
"msg_date": "Wed, 27 Mar 2024 17:43:06 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 5:43 PM David G. Johnston <\[email protected]> wrote:\n\n>\n> This section is also the main entry point for users into the configuration\n> subsystem and hasn't been updated to reflect this new feature. That seems\n> like an oversight that needs to be corrected.\n>\n>\nShouldn't the \"alter system\" reference page receive an update as well?\n\nDavid J.\n\nOn Wed, Mar 27, 2024 at 5:43 PM David G. Johnston <[email protected]> wrote:This section is also the main entry point for users into the configuration subsystem and hasn't been updated to reflect this new feature. That seems like an oversight that needs to be corrected.Shouldn't the \"alter system\" reference page receive an update as well?David J.",
"msg_date": "Wed, 27 Mar 2024 17:45:50 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, 28 Mar 2024 at 01:43, David G. Johnston\n<[email protected]> wrote:\n>\n> On Wed, Mar 27, 2024 at 5:17 PM Bruce Momjian <[email protected]> wrote:\n>>\n>> <snip many documentation suggestions>\n\nI addressed them all I think. Mostly the small changes that were\nsuggested, but I rewrote the sentence with \"might be discarded\". And I\nadded references to the new GUC in both places suggested by David.",
"msg_date": "Thu, 28 Mar 2024 10:24:34 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, 28 Mar 2024 at 10:24, Jelte Fennema-Nio <[email protected]> wrote:\n> I addressed them all I think. Mostly the small changes that were\n> suggested, but I rewrote the sentence with \"might be discarded\". And I\n> added references to the new GUC in both places suggested by David.\n\nChanged the error hint to use \"external tool\" too. And removed a\nduplicate header definition of AllowAlterSystem (I moved it to guc.h\nso it was together with other definitions a few patches ago, but\napparently forgot to remove it from guc_tables.h)",
"msg_date": "Thu, 28 Mar 2024 10:41:50 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 5:42 AM Jelte Fennema-Nio <[email protected]> wrote:\n> On Thu, 28 Mar 2024 at 10:24, Jelte Fennema-Nio <[email protected]> wrote:\n> > I addressed them all I think. Mostly the small changes that were\n> > suggested, but I rewrote the sentence with \"might be discarded\". And I\n> > added references to the new GUC in both places suggested by David.\n>\n> Changed the error hint to use \"external tool\" too. And removed a\n> duplicate header definition of AllowAlterSystem (I moved it to guc.h\n> so it was together with other definitions a few patches ago, but\n> apparently forgot to remove it from guc_tables.h)\n\nI disagree with a lot of these changes. I think the old version was\nmostly better. But I can live with a lot of it if it makes other\npeople happy. However:\n\n+ Which might result in unintended behavior, such as the external tool\n+ discarding the change at some later point in time when it updates the\n+ configuration.\n\nThis is not OK from a grammatical point of view. You can't just start\na sentence with \"which\" like this. You could replace \"Which\" with\n\"This\", though.\n\n+ if (!AllowAlterSystem)\n+ {\n+\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"ALTER SYSTEM is not allowed in this environment\"),\n+ errhint(\"Global configuration changes should be made using an\nexternal tool, not by using ALTER SYSTEM.\")));\n+ }\n\nThe extra blank line should go. The brackets should go. And I think\nthe errhint should go, too, because the errhint implies that we know\nwhy the user chose to set allow_alter_system=false. There's no reason\nfor this message to be opinionated about that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Mar 2024 07:57:24 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, 28 Mar 2024 at 12:57, Robert Haas <[email protected]> wrote:\n> I disagree with a lot of these changes. I think the old version was\n> mostly better. But I can live with a lot of it if it makes other\n> people happy.\n\nI'd have been fine with many of the previous versions of the docs too.\n(I'm not a native english speaker though, so that might be part of it)\n\n> However:\n\nAttached is a patch with your last bit of feedback addressed.",
"msg_date": "Thu, 28 Mar 2024 13:23:36 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 6:24 PM Bruce Momjian <[email protected]> wrote:\n> Please ignore my complaints, and my apologies.\n>\n> As far as the GUC change, let's just be careful since we have a bad\n> history of pushing things near the end that we regret. I am not saying\n> that would be this feature, but let's be careful.\n\nEven if what I had pushed was the patch itself, so what? This patch\nhas been sitting around, largely unchanged, for six months. There has\nbeen plenty of time for wordsmithing the documentation, yet nobody got\ninterested in doing it until I expressed interest in committing the\npatch. Meanwhile, there are over 100 other patches that no committer\nis paying attention to right now, some of which could probably really\nbenefit from some wordsmithing of the documentation. It drives me\ninsane that this is the patch everyone is getting worked up about.\nThis is a 27-line code change that does something many people want,\nand we're acting like the future of the project depends on it. Both I\nand others have committed thousands of lines of new code over the last\nfew months that could easily be full of bugs that will eat your data\nwithout nearly the scrutiny that this patch is getting.\n\nTo be honest, I had every intention of pushing the main patch right\nafter I pushed that preliminary patch, but I stopped because I saw you\nhad emailed the thread. I'm pretty sure that I would have missed the\nfact that the documentation hadn't been properly updated for the fact\nthat the sense of the GUC had been inverted. That would have been\nembarrassing, and I would have had to push a follow-up commit to fix\nthat. 
But no real harm would have been done, except that somebody\nsurely would have seized on my mistake as proof that this patch wasn't\nready to be committed and that I was being irresponsible and\ninconsiderate by pushing forward with it, which is a garbage argument.\nCommitters make mistakes like that all the time, every week, even\nevery day. It doesn't mean that they're bad committers, and it doesn't\nmean that the patches suck. Some of the patches that get committed do\nsuck, but it's not because there are a few words wrong in the\ndocumentation.\n\nLet's please, please stop pretending like this patch is somehow\ndeserving of special scrutiny. There's barely even anything to\nscrutinize. It's literally if (!variable) ereport(...) plus some\nboilerplate and docs. I entirely agree that, because of the risk of\nsomeone filing a bogus CVE, the docs do need to be carefully worded.\nBut, I'm going to be honest: I feel completely confident in my ability\nto review a patch well enough to know whether the documentation for a\nsingle test-and-ereport has been done up to project standard. It\nsaddens and frustrates me that you don't seem to agree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Mar 2024 08:38:24 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 08:38:24AM -0400, Robert Haas wrote:\n> Let's please, please stop pretending like this patch is somehow\n> deserving of special scrutiny. There's barely even anything to\n> scrutinize. It's literally if (!variable) ereport(...) plus some\n> boilerplate and docs. I entirely agree that, because of the risk of\n> someone filing a bogus CVE, the docs do need to be carefully worded.\n> But, I'm going to be honest: I feel completely confident in my ability\n> to review a patch well enough to know whether the documentation for a\n> single test-and-ereport has been done up to project standard. It\n> saddens and frustrates me that you don't seem to agree.\n\nThe concern about this patch is not its contents but because it is our\nfirst attempt at putting limits on the superuser for an external tool. \nIf done improperly, this could open a flood of problems, including CVE\nand user confusion, which would reflect badly on the project.\n\nI think the email discussion has expressed those concerns clearly, and\nit is only recently that we have gotten to a stage where we are ready to\nadd this, and doing this near the closing of the last commitfest can be\na valid concern. I do agree with your analysis of other patches in the\ncommitfest, but I just don't see them stretching our boundaries like\nthis patch.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 28 Mar 2024 13:45:54 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 1:46 PM Bruce Momjian <[email protected]> wrote:\n> The concern about this patch is not its contents but because it is our\n> first attempt at putting limits on the superuser for an external tool.\n> If done improperly, this could open a flood of problems, including CVE\n> and user confusion, which would reflect badly on the project.\n>\n> I think the email discussion has expressed those concerns clearly, and\n> it is only recently that we have gotten to a stage where we are ready to\n> add this, and doing this near the closing of the last commitfest can be\n> a valid concern. I do agree with your analysis of other patches in the\n> commitfest, but I just don't see them stretching our boundaries like\n> this patch.\n\nI do understand the concern, and I'm not saying that you're wrong to\nhave it at some level, but I do sincerely think it's excessive. I\ndon't think this is even close to being the scariest patch in this\nrelease, or even in this CommitFest. I also agree that doing things\nnear the end of the last CommitFest isn't great, because even if your\npatch is fantastic, people start to think maybe you're only committing\nit to beat the deadline, and then the conversation can get unpleasant.\nHowever, I don't think that's really what is happening here. If this\npatch gets bounced out of this release, it won't be in any better\nshape a year from now than it is right now. It can't be, because the\ncode is completely trivial; and the documentation has already been\nextensively wordsmithed. Surely we don't need another whole release\ncycle to polish three paragraphs of documentation. I think it has to\nbe right to get this done while we're all thinking about it and the\nissue is fresh in everybody's mind.\n\nHow would you like to proceed from here? I think that in addressing\nall of the comments given in the last few days, the documentation has\ngotten modestly worse. 
I think it was crisp and clear before, and now\nit feels a little ... over-edited. But if you're happy with the latest\nversion, we can go with that. Or, do you need more time to review?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Mar 2024 14:43:38 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 02:43:38PM -0400, Robert Haas wrote:\n> How would you like to proceed from here? I think that in addressing\n> all of the comments given in the last few days, the documentation has\n> gotten modestly worse. I think it was crisp and clear before, and now\n> it feels a little ... over-edited. But if you're happy with the latest\n> version, we can go with that. Or, do you need more time to review?\n\nI am fine with moving ahead. I thought my later emails explaining we\nhave to be careful communicated that.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 28 Mar 2024 15:33:00 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 01:23:36PM +0100, Jelte Fennema-Nio wrote:\n> + <para>\n> + Turning this setting off is intended for environments where the\n> + configuration of <productname>PostgreSQL</productname> is managed by\n> + some external tool.\n> + In such environments, a well intentioned superuser might\n> + <emphasis>mistakenly</emphasis> use <command>ALTER SYSTEM</command>\n> + to change the configuration instead of using the external tool.\n> + This might result in unintended behavior, such as the external tool\n> + discarding the change at some later point in time when it updates the\n\n\"discarding\" -> \"overwriting\" ?\n\n> + <para>\n> + <literal>ALTER SYSTEM</literal> can be disabled by setting\n> + <xref linkend=\"guc-allow-alter-system\"/> to <literal>off</literal>, but this\n> + is no security mechanism (as explained in detail in the documentation for\n\n\"is no\" -> \"is not a\"\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 28 Mar 2024 16:24:05 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 3:33 PM Bruce Momjian <[email protected]> wrote:\n> I am fine with moving ahead. I thought my later emails explaining we\n> have to be careful communicated that.\n\nOK. Thanks for clarifying. I've committed the patch with the two\nwording changes that you suggested in your subsequent email.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 08:46:33 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 08:46:33AM -0400, Robert Haas wrote:\n> On Thu, Mar 28, 2024 at 3:33 PM Bruce Momjian <[email protected]> wrote:\n> > I am fine with moving ahead. I thought my later emails explaining we\n> > have to be careful communicated that.\n> \n> OK. Thanks for clarifying. I've committed the patch with the two\n> wording changes that you suggested in your subsequent email.\n\nGreat, I know this has been frustrating, and you are right that this\nwouldn't have been any simpler next year.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 29 Mar 2024 10:47:43 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 10:48 AM Bruce Momjian <[email protected]> wrote:\n> On Fri, Mar 29, 2024 at 08:46:33AM -0400, Robert Haas wrote:\n> > On Thu, Mar 28, 2024 at 3:33 PM Bruce Momjian <[email protected]> wrote:\n> > > I am fine with moving ahead. I thought my later emails explaining we\n> > > have to be careful communicated that.\n> >\n> > OK. Thanks for clarifying. I've committed the patch with the two\n> > wording changes that you suggested in your subsequent email.\n>\n> Great, I know this has been frustrating, and you are right that this\n> wouldn't have been any simpler next year.\n\nThanks, Bruce.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 12:06:08 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Possibility to disable `ALTER SYSTEM`"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 18097\nLogged by: Jim Keener\nEmail address: [email protected]\nPostgreSQL version: 15.0\nOperating system: Linux\nDescription: \n\nGiven this table:\r\n\r\nCREATE TABLE test_table (\r\nid bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,\r\ncreated_at timestamptz NOT NULL DEFAULT now()\r\n);\r\n\r\nThe following work:\r\n\r\n* alter table test_table add created_local_y text GENERATED ALWAYS AS\n(EXTRACT(isoyear FROM created_at AT TIME ZONE 'America/New_York')) STORED;\r\n\r\n* alter table test_table add created_local_w text GENERATED ALWAYS AS\n(EXTRACT(week FROM created_at AT TIME ZONE 'America/New_York')) STORED;\r\n\r\n* alter table test_table add created_local text GENERATED ALWAYS AS\n(EXTRACT(isoyear FROM created_at AT TIME ZONE 'America/New_York')::text ||\n'|' || EXTRACT(week FROM created_at AT TIME ZONE 'America/New_York')::text)\nSTORED;\r\n\r\n* CREATE INDEX ON test_table ((EXTRACT(isoyear FROM created_at AT TIME ZONE\n'America/New_York') || '|' || EXTRACT(week FROM created_at AT TIME ZONE\n'America/New_York')));\r\n\r\nHowever, the following DOES NOT work with an error of (ERROR: generation\nexpression is not immutable):\r\n\r\n* alter table test_table add created_local text GENERATED ALWAYS AS\n(EXTRACT(isoyear FROM created_at AT TIME ZONE 'America/New_York') || '|' ||\nEXTRACT(week FROM created_at AT TIME ZONE 'America/New_York')) STORED;\r\n\r\nGiven that casting shouldn't \"increase\" the immutability of an expression,\nand expression indexes need also be immutable afaik, I think that there is a\nbug somewhere here?\r\n\r\nThank you,\r\nJim",
"msg_date": "Fri, 08 Sep 2023 03:47:49 +0000",
"msg_from": "PG Bug reporting form <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "On Thursday, September 7, 2023, PG Bug reporting form <\[email protected]> wrote:\n\n> The following bug has been logged on the website:\n>\n> Bug reference: 18097\n> Logged by: Jim Keener\n> Email address: [email protected]\n> PostgreSQL version: 15.0\n> Operating system: Linux\n> Description:\n>\n> However, the following DOES NOT work with an error of (ERROR: generation\n> expression is not immutable):\n>\n> * alter table test_table add created_local text GENERATED ALWAYS AS\n> (EXTRACT(isoyear FROM created_at AT TIME ZONE 'America/New_York') || '|' ||\n> EXTRACT(week FROM created_at AT TIME ZONE 'America/New_York')) STORED;\n>\n> Given that casting shouldn't \"increase\" the immutability of an expression,\n> and expression indexes need also be immutable afaik, I think that there is\n> a\n> bug somewhere here?\n>\n\nCasting very much can be a non-immutable activity, dates being the prime\nexample, and I presume going from numeric to text is indeed defined to be\nstable hence the error. This is probably due to needing to consult locale\nfor deciding how to represent the decimal places divider. 
This is one of\nthe few places, assuming you write the function to set an environment\nfixing locale to some known value like you did with the time zones, where\ncreating an immutable function around a stable expression makes sense.\n\nDavid J.",
"msg_date": "Fri, 8 Sep 2023 08:11:42 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "The issue here, though, is that it works as an expression for an index, but doesn't work as a generated column unless I explicitly cast it to text (which should have happened implicitly anyways). (The cast is turning a non-immutable expression to be immutable.)\n\nI'm also able to make generated fields for the individual function calls, but concatenation doesn't work without the explicit cast.\n\nJim\n\nOn September 8, 2023 11:11:42 AM EDT, \"David G. Johnston\" <[email protected]> wrote:\n>On Thursday, September 7, 2023, PG Bug reporting form <\n>[email protected]> wrote:\n>\n>> The following bug has been logged on the website:\n>>\n>> Bug reference: 18097\n>> Logged by: Jim Keener\n>> Email address: [email protected]\n>> PostgreSQL version: 15.0\n>> Operating system: Linux\n>> Description:\n>>\n>> However, the following DOES NOT work with an error of (ERROR: generation\n>> expression is not immutable):\n>>\n>> * alter table test_table add created_local text GENERATED ALWAYS AS\n>> (EXTRACT(isoyear FROM created_at AT TIME ZONE 'America/New_York') || '|' ||\n>> EXTRACT(week FROM created_at AT TIME ZONE 'America/New_York')) STORED;\n>>\n>> Given that casting shouldn't \"increase\" the immutability of an expression,\n>> and expression indexes need also be immutable afaik, I think that there is\n>> a\n>> bug somewhere here?\n>>\n>\n>Casting very much can be a non-immutable activity, dates being the prime\n>example, and I presume going from numeric to text is indeed defined to be\n>stable hence the error. This is probably due to needing to consult locale\n>for deciding how to represent the decimal places divider. This is one of\n>the few places, assuming you write the function to set an environment\n>fixing locale to some know value like you did with the time zones, where\n>creating an immutable function around a stable expression makes sense.\n>\n>David J.\n\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.",
"msg_date": "Fri, 08 Sep 2023 11:22:07 -0400",
"msg_from": "James Keener <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "James Keener <[email protected]> writes:\n> The issue here, though, is that it works as an expression for an index, but doesn't work as a generated column unless I explicitly cast it to text (which should have happened implicitly anyways). (The cast is turning a non-immutable expression to be immutable.)\n\nThe reason that the generated expression fails is that (if you don't\nexplicitly cast to text) then it relies on anytextcat(anynonarray,text),\nwhich is only stable, and can't be marked any more restrictively because\ndepending on the type of the non-text argument the corresponding output\nfunction might not be immutable.\n\nBut then why doesn't the equivalent index definition spit up?\nI found the answer in indexcmds.c's CheckMutability():\n\n /*\n * First run the expression through the planner. This has a couple of\n * important consequences. First, function default arguments will get\n * inserted, which may affect volatility (consider \"default now()\").\n * Second, inline-able functions will get inlined, which may allow us to\n * conclude that the function is really less volatile than it's marked. As\n * an example, polymorphic functions must be marked with the most volatile\n * behavior that they have for any input type, but once we inline the\n * function we may be able to conclude that it's not so volatile for the\n * particular input type we're dealing with.\n *\n * We assume here that expression_planner() won't scribble on its input.\n */\n expr = expression_planner(expr);\n\n /* Now we can search for non-immutable functions */\n return contain_mutable_functions((Node *) expr);\n\nApplying expression_planner() solves the problem because it inlines\nanytextcat(anynonarray,text), resolving that the required cast is\nnumeric->text which is immutable. The code for generated expressions\nomits that step and arrives at the less desirable answer. I wonder\nwhere else we have the same issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Sep 2023 12:08:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "[ moving to pgsql-hackers ]\n\nI wrote:\n> Applying expression_planner() solves the problem because it inlines\n> anytextcat(anynonarray,text), resolving that the required cast is\n> numeric->text which is immutable. The code for generated expressions\n> omits that step and arrives at the less desirable answer. I wonder\n> where else we have the same issue.\n\nAfter digging around, I could only find one other place where\noutside-the-planner code was doing this wrong: AddRelationNewConstraints\ncan come to the wrong conclusion about whether it's safe to use\nmissingMode. So here's a patch series to resolve this. I split it\ninto three parts mostly because 0002 will only go back to v12 where\nwe added GENERATED, but the missingMode bug exists in v11.\n\nThere are a couple of points worth bikeshedding perhaps. I didn't\nspend much thought on the wrapper functions' names, but it's surely\ntrue that the semantic difference between contain_mutable_functions\nand ContainMutableFunctions is quite un-apparent from those names.\nAnybody got a better idea? It also seemed about fifty-fifty whether\nto make the wrappers' argument types be Node * or Expr *. I stuck\nwith Expr * because that's what the predecessor code CheckMutability()\nused, but that's not a very strong argument.\n\nBTW, the test function in 0003 might look funny:\n\nCREATE FUNCTION foolme(timestamptz DEFAULT clock_timestamp())\n RETURNS timestamptz\n IMMUTABLE AS 'select $1' LANGUAGE sql;\n\nbut AFAICS it's perfectly legit. The function itself is indeed immutable,\nsince it's only \"select $1\"; it's the default argument that's volatile.\n\nI'll add this to the open CF 2023-11, but we really ought to\nget it committed before that so we can ship these bug fixes in\nNovember's releases.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 09 Sep 2023 15:18:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "I wrote:\n> After digging around, I could only find one other place where\n> outside-the-planner code was doing this wrong: AddRelationNewConstraints\n> can come to the wrong conclusion about whether it's safe to use\n> missingMode. So here's a patch series to resolve this.\n\nArgh ... I forgot to mention that there's one other place that\nthis patch series doesn't address, which is that publicationcmds.c's\ncheck_simple_rowfilter_expr() also checks for volatile functions\nwithout having preprocessed the expression. I'm not entirely sure\nthat there's a reachable problem in the direction of underestimating\nthe expression's volatility, given that that logic rejects non-builtin\nfunctions entirely: it seems unlikely that any immutable builtin\nfunction would have a volatile default expression. But it definitely\nseems possible that there would be a problem in the other direction,\nleading to rejecting row filter expressions that we'd like to allow,\nmuch as in bug #18097.\n\nI'm not sure about a good way to resolve this. Simply applying\nexpression simplification ahead of the check would break the code's\nintent of rejecting non-builtin operators, in the case where such\nan operator resolves to an inline-able builtin function. I find\nthe entire design of check_simple_rowfilter_expr pretty questionable\nanyway, and there are a bunch of dubious (and undocumented) individual\ndecisions like allowing whole-row Vars, using FirstNormalObjectId\nrather than FirstUnpinnedObjectId as the cutoff, etc. So I'm not\nplanning to touch that code, but somebody who was paying attention\nwhen it was written might want to take a second look.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 09 Sep 2023 18:56:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "Hi,\n\nI noticed that the patchset needs a review and decided to take a look.\n\n> There are a couple of points worth bikeshedding perhaps. I didn't\n> spend much thought on the wrapper functions' names, but it's surely\n> true that the semantic difference between contain_mutable_functions\n> and ContainMutableFunctions is quite un-apparent from those names.\n> Anybody got a better idea?\n\nOh no! We encountered one of the most difficult problems in computer\nscience [1].\n\nContainMutableFunctionsAfterPerformingPlannersTransformations() would\nbe somewhat long but semantically correct. It can be shortened to\nContainMutableFunctionsAfterTransformations() or perhaps\nTransformedExprContainMutableFunctions(). Personally I don't mind long\nnames. This being said, ContainMutableFunctions() doesn't disgusts my\nsense of beauty too much either. All in all any name will do IMO.\nNaturally ContainVolatileFunctions() should be renamed consistently\nwith ContainMutableFunctions().\n\nI couldn't find anything wrong with 0001..0003. The parches were\ntested in several environments and passed `make check-world`. I\nsuggest merging them.\n\n[1]: https://martinfowler.com/bliki/TwoHardThings.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 14 Nov 2023 15:10:05 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "Aleksander Alekseev <[email protected]> writes:\n>> There are a couple of points worth bikeshedding perhaps. I didn't\n>> spend much thought on the wrapper functions' names, but it's surely\n>> true that the semantic difference between contain_mutable_functions\n>> and ContainMutableFunctions is quite un-apparent from those names.\n>> Anybody got a better idea?\n\n> Oh no! We encountered one of the most difficult problems in computer\n> science [1].\n\nIndeed :-(. Looking at it again this morning, I'm thinking of\nusing \"contain_mutable_functions_after_planning\" --- what do you\nthink of that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Nov 2023 12:48:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "Hi,\n\n> > Oh no! We encountered one of the most difficult problems in computer\n> > science [1].\n>\n> Indeed :-(. Looking at it again this morning, I'm thinking of\n> using \"contain_mutable_functions_after_planning\" --- what do you\n> think of that?\n\nIt's better but creates an impression that the actual planning will be\ninvolved. According to the comments for expression_planner():\n\n```\n * Currently, we disallow sublinks in standalone expressions, so there's no\n * real \"planning\" involved here. (That might not always be true though.)\n```\n\nI'm not very well familiar with the part of code responsible for\nplanning, but I find this inconsistency confusing.\n\nSince the code is written for people to be read and is read more often\nthan written personally I believe that longer and more descriptive\nnames are better. Something like\ncontain_mutable_functions_after_planner_transformations(). This being\nsaid, in practice one should read the comments to learn about corner\ncases, pre- and postconditions anyway, so maybe it's not that a big\ndeal. I think of contain_mutable_functions_after_transformations() as\na good compromise between the length and descriptiveness.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 15 Nov 2023 14:44:09 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "Aleksander Alekseev <[email protected]> writes:\n>>> Oh no! We encountered one of the most difficult problems in computer\n>>> science [1].\n\n>> Indeed :-(. Looking at it again this morning, I'm thinking of\n>> using \"contain_mutable_functions_after_planning\" --- what do you\n>> think of that?\n\n> It's better but creates an impression that the actual planning will be\n> involved.\n\nTrue, but from the perspective of the affected code, the question is\nbasically \"did you call expression_planner() yet\". So I like this\nnaming for that connection, whereas something based on \"transformation\"\ndoesn't really connect to anything in existing function names.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Nov 2023 10:15:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "Hi,\n\n> True, but from the perspective of the affected code, the question is\n> basically \"did you call expression_planner() yet\". So I like this\n> naming for that connection, whereas something based on \"transformation\"\n> doesn't really connect to anything in existing function names.\n\nFair enough.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 15 Nov 2023 18:58:57 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "Aleksander Alekseev <[email protected]> writes:\n>> True, but from the perspective of the affected code, the question is\n>> basically \"did you call expression_planner() yet\". So I like this\n>> naming for that connection, whereas something based on \"transformation\"\n>> doesn't really connect to anything in existing function names.\n\n> Fair enough.\n\nPushed like that, then. Thanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Nov 2023 10:06:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "Hello,\n\nA customer encountered an issue while restoring a dump of its database \nafter applying 15.6 minor version.\n\nIt seems due to this fix :\n\n > Fix function volatility checking for GENERATED and DEFAULT \nexpressions (Tom Lane)\n > These places could fail to detect insertion of a volatile function \ndefault-argument expression, or decide that a polymorphic function is \nvolatile although it is actually immutable on the datatype of interest. \nThis could lead to improperly rejecting or accepting a GENERATED clause, \nor to mistakenly applying the constant-default-value optimization in \nALTER TABLE ADD COLUMN.\n\nRelated commit 9057ddbef\n\n\nI managed to reproduce it with a simple test case :\n\n\nCREATE SCHEMA s1;\nCREATE SCHEMA s2;\n\nCREATE FUNCTION s2.f1 (c1 text) RETURNS text\nLANGUAGE SQL IMMUTABLE\nAS $$\n SELECT c1\n$$;\n\nCREATE FUNCTION s2.f2 (c1 text) RETURNS text\nLANGUAGE SQL IMMUTABLE\nAS $$\n SELECT s2.f1 (c1);\n$$;\n\nCREATE TABLE s1.t1 (c1 text, c2 text GENERATED ALWAYS AS (s2.f2 (c1)) \nSTORED);\n\nCREATE FUNCTION s1.f3 () RETURNS SETOF s1.t1\nLANGUAGE sql\nAS $$\n SELECT *\n FROM s1.t1\n$$;\n\nThe resulting dump is attached.\n\nYou will notice that the table s1.t1 is created before the function \ns2.f1. This is due to the function s1.f3 which returns a SETOF s1.t1\n\nI understand Postgres has to create s1.t1 before s1.f3. Unfortunately, \nthe function s2.f1 is created later.\n\nWhen we try to restore the dump, we have this error :\nCREATE TABLE s1.t1 (\n c1 text,\n c2 text GENERATED ALWAYS AS (s2.f2(c1)) STORED\n);\npsql:b2.sql:61: ERROR: function s2.f1(text) does not exist\nLINE 2: SELECT s2.f1(c1);\n ^\nHINT: No function matches the given name and argument types. You might \nneed to add explicit type casts.\nQUERY:\nSELECT s2.f1(c1);\n\nCONTEXT: SQL function \"f2\" during inlining\n\nThanks to Jordi Morillo, Alexis Lucazeau, Matthieu Honel for reporting this.\n\nRegards,\n\n-- \nAdrien NAYRAT",
"msg_date": "Wed, 25 Sep 2024 13:07:13 +0200",
"msg_from": "Adrien Nayrat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "Adrien Nayrat <[email protected]> writes:\n> A customer encountered an issue while restoring a dump of its database \n> after applying 15.6 minor version.\n> It seems due to this fix :\n>>> Fix function volatility checking for GENERATED and DEFAULT \n>>> expressions (Tom Lane)\n\nI don't believe this example has anything to do with that.\n\n> CREATE SCHEMA s1;\n> CREATE SCHEMA s2;\n> CREATE FUNCTION s2.f1 (c1 text) RETURNS text\n> LANGUAGE SQL IMMUTABLE\n> AS $$\n> SELECT c1\n> $$;\n> CREATE FUNCTION s2.f2 (c1 text) RETURNS text\n> LANGUAGE SQL IMMUTABLE\n> AS $$\n> SELECT s2.f1 (c1);\n> $$;\n> CREATE TABLE s1.t1 (c1 text, c2 text GENERATED ALWAYS AS (s2.f2 (c1)) \n> STORED);\n\nThe problem here is that to pg_dump, the body of s2.f2 is just an\nopaque string, so it has no idea that that depends on s2.f1, and\nit ends up picking a dump order that doesn't respect that\ndependency.\n\nIt used to be that there wasn't much you could do about this\nexcept choose object names that wouldn't cause the problem.\nIn v14 and up there's another way, at least for SQL-language\nfunctions: you can write the function in SQL spec style.\n\nCREATE FUNCTION s2.f2 (c1 text) RETURNS text\nIMMUTABLE\nBEGIN ATOMIC\n SELECT s2.f1 (c1);\nEND;\n\nThen the dependency is visible, both to the server and to pg_dump,\nand you get a valid dump order.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Sep 2024 10:41:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "On 9/25/24 4:41 PM, Tom Lane wrote:\n> Adrien Nayrat <[email protected]> writes:\n>> A customer encountered an issue while restoring a dump of its database\n>> after applying 15.6 minor version.\n>> It seems due to this fix :\n>>>> Fix function volatility checking for GENERATED and DEFAULT\n>>>> expressions (Tom Lane)\n> \n> I don't believe this example has anything to do with that.\n\nI've done a git bisect between 15.5 and 15.6 and this commit trigger the \nerror.\n\n\n\n> \n>> CREATE SCHEMA s1;\n>> CREATE SCHEMA s2;\n>> CREATE FUNCTION s2.f1 (c1 text) RETURNS text\n>> LANGUAGE SQL IMMUTABLE\n>> AS $$\n>> SELECT c1\n>> $$;\n>> CREATE FUNCTION s2.f2 (c1 text) RETURNS text\n>> LANGUAGE SQL IMMUTABLE\n>> AS $$\n>> SELECT s2.f1 (c1);\n>> $$;\n>> CREATE TABLE s1.t1 (c1 text, c2 text GENERATED ALWAYS AS (s2.f2 (c1))\n>> STORED);\n> \n> The problem here is that to pg_dump, the body of s2.f2 is just an\n> opaque string, so it has no idea that that depends on s2.f1, and\n> it ends up picking a dump order that doesn't respect that\n> dependency.\n> \n> It used to be that there wasn't much you could do about this\n> except choose object names that wouldn't cause the problem.\n\nI see. So I understand we were lucky it worked before the commit added \nthe check of volatility in generated column ?\n\n> In v14 and up there's another way, at least for SQL-language\n> functions: you can write the function in SQL spec style.\n> \n> CREATE FUNCTION s2.f2 (c1 text) RETURNS text\n> IMMUTABLE\n> BEGIN ATOMIC\n> SELECT s2.f1 (c1);\n> END;\n> \n> Then the dependency is visible, both to the server and to pg_dump,\n> and you get a valid dump order.\n> \n\nOh, thanks !\n\n-- \nAdrien NAYRAT\nhttps://pro.anayrat.info\n\n\n\n",
"msg_date": "Wed, 25 Sep 2024 18:36:45 +0200",
"msg_from": "Adrien Nayrat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
},
{
"msg_contents": "Adrien Nayrat <[email protected]> writes:\n> I see. So I understand we were lucky it worked before the commit added \n> the check of volatility in generated column ?\n\nPretty much. There are other cases that could trigger expansion\nof such a function before the restore is complete. It is unfortunate\nthat this bit you in a minor release, but there are lots of other\nways you could have tripped over the missing dependency unexpectedly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Sep 2024 12:48:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18097: Immutable expression not allowed in generated at"
}
] |
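[Annotation: the `BEGIN ATOMIC` rewrite Tom Lane suggests at the end of this thread can be sketched as a self-contained script. The schema and function names come from Adrien Nayrat's test case; the `pg_depend` query at the end is an assumed way to inspect the recorded dependency, not something taken from the thread.]

```sql
CREATE SCHEMA s1;
CREATE SCHEMA s2;

CREATE FUNCTION s2.f1 (c1 text) RETURNS text
LANGUAGE SQL IMMUTABLE
AS $$ SELECT c1 $$;

-- SQL-standard function body: the reference to s2.f1 is resolved at
-- CREATE time, so the dependency is recorded in pg_depend and pg_dump
-- can emit the objects in a valid order.
CREATE FUNCTION s2.f2 (c1 text) RETURNS text
IMMUTABLE
BEGIN ATOMIC
  SELECT s2.f1(c1);
END;

CREATE TABLE s1.t1 (
  c1 text,
  c2 text GENERATED ALWAYS AS (s2.f2(c1)) STORED
);

-- Assumed inspection query: list the objects s2.f2 now depends on.
SELECT pg_describe_object(refclassid, refobjid, refobjsubid)
FROM pg_depend
WHERE classid = 'pg_proc'::regclass
  AND objid = 's2.f2(text)'::regprocedure
  AND deptype = 'n';
```

With the `LANGUAGE SQL ... AS $$ ... $$` form of s2.f2, the same inspection query returns no row for s2.f1, which is why pg_dump cannot order the objects correctly.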
[
{
"msg_contents": "Hi all, I was reading code of COPY FROM and I found some suspicious\nredundant assignment for tuple descriptor and number of attributes. Is\nit a behavior on purpose, or an accidently involved by the refactor in\nc532d15? Patch is attached.",
"msg_date": "Fri, 8 Sep 2023 12:23:17 +0800",
"msg_from": "Jingtang Zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Suspicious redundant assignment in COPY FROM"
},
{
"msg_contents": "On Fri, Sep 08, 2023 at 12:23:17PM +0800, Jingtang Zhang wrote:\n> Hi all, I was reading code of COPY FROM and I found some suspicious\n> redundant assignment for tuple descriptor and number of attributes. Is\n> it a behavior on purpose, or an accidently involved by the refactor in\n> c532d15? Patch is attached.\n\nThis looks like a copy-pasto to me, as the tuple descriptor coming\nfrom the relation is just used for sanity checks on the attributes\ndepending on the options by the caller for the COPY.\n\nThe assignment of num_phys_attrs could be kept at the same place as\non HEAD, a bit closer to the palloc0() where it is used.\n--\nMichael",
"msg_date": "Fri, 8 Sep 2023 14:41:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suspicious redundant assignment in COPY FROM"
},
{
"msg_contents": "Michael Paquier <[email protected]> 于2023年9月8日周五 13:42写道:\n\nThanks, Michael~\n\n\n> The assignment of num_phys_attrs could be kept at the same place as\n> on HEAD, a bit closer to the palloc0() where it is used.\n>\n\nAgreed with this principle. Patch is modified and attached.\n\n--\nJingtang",
"msg_date": "Fri, 8 Sep 2023 13:54:51 +0800",
"msg_from": "Jingtang Zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suspicious redundant assignment in COPY FROM"
},
{
"msg_contents": "On Fri, Sep 08, 2023 at 01:54:51PM +0800, Jingtang Zhang wrote:\n> Agreed with this principle. Patch is modified and attached.\n\nDone as of e434e21e1.\n--\nMichael",
"msg_date": "Sat, 9 Sep 2023 21:13:38 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suspicious redundant assignment in COPY FROM"
}
] |
[
{
"msg_contents": "Hello,\n\nI got a trouble report here:\nhttps://github.com/heterodb/pg-strom/issues/636\n\nIt says that PG-Strom raised an error when the HAVING clause used\nnon-grouping-keys,\neven though the vanilla PostgreSQL successfully processed the query.\n\nSELECT MAX(c0) FROM t0 GROUP BY t0.c1 HAVING t0.c0<MIN(t0.c0);\n\nHowever, I'm not certain what is the right behavior here.\nThe \"c0\" column does not appear in the GROUP BY clause, thus we cannot\nknow its individual\nvalues after the group-by stage, right?\nSo, what does the \"HAVING t0.c0<MIN(t0.c0)\" evaluate here?\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <[email protected]>\n\n\n",
"msg_date": "Fri, 8 Sep 2023 16:42:57 +0900",
"msg_from": "Kohei KaiGai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using non-grouping-keys at HAVING clause"
},
{
"msg_contents": "On 9/8/23 09:42, Kohei KaiGai wrote:\n> Hello,\n> \n> I got a trouble report here:\n> https://github.com/heterodb/pg-strom/issues/636\n> \n> It says that PG-Strom raised an error when the HAVING clause used\n> non-grouping-keys,\n> even though the vanilla PostgreSQL successfully processed the query.\n> \n> SELECT MAX(c0) FROM t0 GROUP BY t0.c1 HAVING t0.c0<MIN(t0.c0);\n> \n> However, I'm not certain what is the right behavior here.\n> The \"c0\" column does not appear in the GROUP BY clause, thus we cannot\n> know its individual\n> values after the group-by stage, right?\n\nWrong. c1 is the primary key and so c0 is functionally dependent on it. \n Grouping by the PK is equivalent to grouping by all of the columns in \nthe table.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 8 Sep 2023 12:07:52 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using non-grouping-keys at HAVING clause"
},
{
"msg_contents": "2023年9月8日(金) 19:07 Vik Fearing <[email protected]>:\n>\n> On 9/8/23 09:42, Kohei KaiGai wrote:\n> > Hello,\n> >\n> > I got a trouble report here:\n> > https://github.com/heterodb/pg-strom/issues/636\n> >\n> > It says that PG-Strom raised an error when the HAVING clause used\n> > non-grouping-keys,\n> > even though the vanilla PostgreSQL successfully processed the query.\n> >\n> > SELECT MAX(c0) FROM t0 GROUP BY t0.c1 HAVING t0.c0<MIN(t0.c0);\n> >\n> > However, I'm not certain what is the right behavior here.\n> > The \"c0\" column does not appear in the GROUP BY clause, thus we cannot\n> > know its individual\n> > values after the group-by stage, right?\n>\n> Wrong. c1 is the primary key and so c0 is functionally dependent on it.\n> Grouping by the PK is equivalent to grouping by all of the columns in\n> the table.\n>\nWow! Thanks, I got the point. Indeed, it is equivalent to the grouping\nby all the columns.\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <[email protected]>\n\n\n",
"msg_date": "Fri, 8 Sep 2023 21:25:17 +0900",
"msg_from": "Kohei KaiGai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using non-grouping-keys at HAVING clause"
}
] |
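[Annotation: Vik Fearing's functional-dependency point can be reproduced with a minimal, self-contained example. The table definition below is assumed for illustration — it mirrors the t0/c0/c1 names in the thread, with c1 declared as the primary key as Vik's reply implies.]

```sql
CREATE TABLE t0 (
    c1 int PRIMARY KEY,
    c0 int
);

-- Because c1 is t0's primary key, every other column of t0 is
-- functionally dependent on it, so c0 may appear in HAVING (and in
-- the SELECT list) without being listed in GROUP BY:
SELECT max(c0) FROM t0 GROUP BY t0.c1 HAVING t0.c0 < min(t0.c0);

-- Drop the PRIMARY KEY constraint and the same query fails with:
--   ERROR:  column "t0.c0" must appear in the GROUP BY clause
--   or be used in an aggregate function
```

In other words, grouping by the primary key is equivalent to grouping by all of the table's columns, which is why PostgreSQL accepts the query that surprised the PG-Strom reporter.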
[
{
"msg_contents": "Dear hackers,\n\nI recently found a weird behaviour involving FDW (postgres_fdw) and\nplanning.\n\nHere’s a simplified use-case:\n\nGiven a remote table (say on server2) with the following definition:\n\nCREATE TABLE t1(\n ts timestamp without time zone,\n x bigint,\n x2 text\n);\n--Then populate t1 table:INSERT INTO t1\n SELECT\n current_timestamp - 1000*random()*'1 day'::interval\n ,x\n ,''||x\n FROM\n generate_series(1,100000) as x;\n\n\nThis table is imported in a specific schema on server1 (we do not use\nuse_remote_estimate) also with t1 name in a specific schema:\n\nOn server1:\n\nCREATE SERVER server2\n FOREIGN DATA WRAPPER postgres_fdw\n OPTIONS (\n host '127.0.0.1',\n port '9002',\n dbname 'postgres',\n use_remote_estimate 'false'\n );\nCREATE USER MAPPING FOR jc\n SERVER server2\n OPTIONS (user 'jc');\nCREATE SCHEMA remote;\n\nIMPORT FOREIGN SCHEMA public\n FROM SERVER server2\n INTO remote ;\n\nOn a classic PostgreSQL 15 version the following query using date_trunc()\nis executed and results in the following plan:\n\njc=# explain (verbose,analyze) select date_trunc('day',ts), count(1)\nfrom remote.t1 group by date_trunc('day',ts) order by 1;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=216.14..216.64 rows=200 width=16) (actual\ntime=116.699..116.727 rows=1001 loops=1)\n Output: (date_trunc('day'::text, ts)), (count(1))\n Sort Key: (date_trunc('day'::text, t1.ts))\n Sort Method: quicksort Memory: 79kB\n -> HashAggregate (cost=206.00..208.50 rows=200 width=16) (actual\ntime=116.452..116.532 rows=1001 loops=1)\n Output: (date_trunc('day'::text, ts)), count(1)\n Group Key: date_trunc('day'::text, t1.ts)\n Batches: 1 Memory Usage: 209kB\n -> Foreign Scan on remote.t1 (cost=100.00..193.20 rows=2560\nwidth=8) (actual time=0.384..106.225 rows=100000 loops=1)\n Output: date_trunc('day'::text, ts)\n Remote SQL: SELECT ts FROM 
public.t1\n Planning Time: 0.077 ms\n Execution Time: 117.028 ms\n\n\nWhereas the same query with date_bin()\n\njc=# explain (verbose,analyze) select\ndate_bin('1day',ts,'2023-01-01'), count(1) from remote.t1 group by 1\norder by 1;\n\n QUERY PLAN\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Foreign Scan (cost=113.44..164.17 rows=200 width=16) (actual\ntime=11.297..16.312 rows=1001 loops=1)\n Output: (date_bin('1 day'::interval, ts, '2023-01-01\n00:00:00'::timestamp without time zone)), (count(1))\n Relations: Aggregate on (remote.t1)\n Remote SQL: SELECT date_bin('1 day'::interval, ts, '2023-01-01\n00:00:00'::timestamp without time zone), count(1) FROM public.t1 GROUP\nBY 1 ORDER BY date_bin('1 day'::interval, ts, '2023-01-01\n00:00:00'::timestamp without time zone) ASC NULLS LAST\n Planning Time: 0.114 ms\n Execution Time: 16.599 ms\n\n\n\nWith date_bin() the whole expression is pushed down to the remote server,\nwhereas with date_trunc() it’s not.\n\nI dived into the code and live debugged. It turns out that decisions to\npushdown or not a whole query depends on many factors like volatility and\ncollation. In the date_trunc() case, the problem is all about collation (\ndate_trunc() on timestamp without time zone). 
And decision is made in the\nforeign_expr_walker() in deparse.c (\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/postgres_fdw/deparse.c;h=efaf387890e3f85c419748ec3af5d1e9696c9c4c;hb=86648dcdaec67b83cec20a9d25b45ec089a7c624#l468\n)\n\nFirst the function is tested as shippable (able to be pushed down) and\ndate_trunc() and date_bin() both are.\n\nThen parameters sub-expressions are evaluated with collation and\n“shippability”, and they all are with both functions.\n\nThen we arrive at this code portion:\n\nif (fe->inputcollid == InvalidOid)\n /* OK, inputs are all noncollatable */ ;else if (inner_cxt.state !=\nFDW_COLLATE_SAFE ||\n fe->inputcollid != inner_cxt.collation)\n return false;\n\nFor date_trunc() function :\n\n -\n\n fe variable contains the sub-expressions/arguments merged constraints\n such as fe->inputcollid. This field is evaluated to 100 (default\n collation) so codes jumps to else statement and evaluates the if\n predicates. This 100 inputcollationid is due to text predicate 'day'.\n -\n\n inner_cxt.state contains FDW_COLLATE_STATE but inner_cxt.collation\n contains 0 (InvalidOid) so the control flow returns false thus the\n function cannot be pushed down.\n\nFor date_bin() function :\n\n - fe variable contains the sub-expressions/arguments merged constraints.\n Here, fe->inputcollid is evaluated to 0 (InvalidOid) thus skips the else\n statement and continues the control flow in the function.\n\nFor date_bin(), all arguments are “non-collatable” arguments (timestamp\nwithout time zone and interval).\n\nSo the situation is that date_trunc() is a “non-collatable” function\nfailing to be pushed down whereas it may be a good idea to do so.\n\nMaybe we could add another condition to the first if statement in order to\nallow a “no-collation” function to be pushed down even if they have\n“collatable” parameters. 
I’m not sure about the possible regressions of\nbehaviour of this change, but it seems to work fine with date_trunc() and\ndate_part() (which suffers the same problem).\n\nHere’s the following change\n\n/*\n* If function's input collation is not derived from a foreign\n* Var, it can't be sent to remote.\n*/if (fe->inputcollid == InvalidOid ||\n fe->funccollid == InvalidOid)\n /* OK, inputs are all noncollatable */ ;else if (inner_cxt.state !=\nFDW_COLLATE_SAFE ||\n fe->inputcollid != inner_cxt.collation)\n return false;\n\nI don’t presume this patch is free from side effects or fits all use-cases.\n\nA patch (tiny) is attached to this email. This patch works against\nmaster/head at the time of writing.\nThank you for any thoughts.\n\n-- \nJean-Christophe Arnu",
"msg_date": "Fri, 8 Sep 2023 16:41:42 +0200",
"msg_from": "Jean-Christophe Arnu <[email protected]>",
"msg_from_op": true,
"msg_subject": "FDW pushdown of non-collated functions"
},
{
"msg_contents": "Dear Hackers,\n\nI figured out this email was sent at release time. The worst time to ask\nfor thoughts on a subject IMHO. Anyway, I hope this email will pop the\ntopic over the stack!\nThank you!\n\nLe ven. 8 sept. 2023 à 16:41, Jean-Christophe Arnu <[email protected]> a\nécrit :\n\n> Dear hackers,\n>\n> I recently found a weird behaviour involving FDW (postgres_fdw) and\n> planning.\n>\n> Here’s a simplified use-case:\n>\n> Given a remote table (say on server2) with the following definition:\n>\n> CREATE TABLE t1(\n> ts timestamp without time zone,\n> x bigint,\n> x2 text\n> );\n> --Then populate t1 table:INSERT INTO t1\n> SELECT\n> current_timestamp - 1000*random()*'1 day'::interval\n> ,x\n> ,''||x\n> FROM\n> generate_series(1,100000) as x;\n>\n>\n> This table is imported in a specific schema on server1 (we do not use\n> use_remote_estimate) also with t1 name in a specific schema:\n>\n> On server1:\n>\n> CREATE SERVER server2\n> FOREIGN DATA WRAPPER postgres_fdw\n> OPTIONS (\n> host '127.0.0.1',\n> port '9002',\n> dbname 'postgres',\n> use_remote_estimate 'false'\n> );\n> CREATE USER MAPPING FOR jc\n> SERVER server2\n> OPTIONS (user 'jc');\n> CREATE SCHEMA remote;\n>\n> IMPORT FOREIGN SCHEMA public\n> FROM SERVER server2\n> INTO remote ;\n>\n> On a classic PostgreSQL 15 version the following query using date_trunc()\n> is executed and results in the following plan:\n>\n> jc=# explain (verbose,analyze) select date_trunc('day',ts), count(1) from remote.t1 group by date_trunc('day',ts) order by 1;\n> QUERY PLAN -----------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=216.14..216.64 rows=200 width=16) (actual time=116.699..116.727 rows=1001 loops=1)\n> Output: (date_trunc('day'::text, ts)), (count(1))\n> Sort Key: (date_trunc('day'::text, t1.ts))\n> Sort Method: quicksort Memory: 79kB\n> -> HashAggregate (cost=206.00..208.50 rows=200 width=16) (actual 
time=116.452..116.532 rows=1001 loops=1)\n> Output: (date_trunc('day'::text, ts)), count(1)\n> Group Key: date_trunc('day'::text, t1.ts)\n> Batches: 1 Memory Usage: 209kB\n> -> Foreign Scan on remote.t1 (cost=100.00..193.20 rows=2560 width=8) (actual time=0.384..106.225 rows=100000 loops=1)\n> Output: date_trunc('day'::text, ts)\n> Remote SQL: SELECT ts FROM public.t1\n> Planning Time: 0.077 ms\n> Execution Time: 117.028 ms\n>\n>\n> Whereas the same query with date_bin()\n>\n> jc=# explain (verbose,analyze) select date_bin('1day',ts,'2023-01-01'), count(1) from remote.t1 group by 1 order by 1;\n> QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Foreign Scan (cost=113.44..164.17 rows=200 width=16) (actual time=11.297..16.312 rows=1001 loops=1)\n> Output: (date_bin('1 day'::interval, ts, '2023-01-01 00:00:00'::timestamp without time zone)), (count(1))\n> Relations: Aggregate on (remote.t1)\n> Remote SQL: SELECT date_bin('1 day'::interval, ts, '2023-01-01 00:00:00'::timestamp without time zone), count(1) FROM public.t1 GROUP BY 1 ORDER BY date_bin('1 day'::interval, ts, '2023-01-01 00:00:00'::timestamp without time zone) ASC NULLS LAST\n> Planning Time: 0.114 ms\n> Execution Time: 16.599 ms\n>\n>\n>\n> With date_bin() the whole expression is pushed down to the remote server,\n> whereas with date_trunc() it’s not.\n>\n> I dived into the code and live debugged. It turns out that decisions to\n> pushdown or not a whole query depends on many factors like volatility and\n> collation. In the date_trunc() case, the problem is all about collation (\n> date_trunc() on timestamp without time zone). 
And decision is made in the\n> foreign_expr_walker() in deparse.c (\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/postgres_fdw/deparse.c;h=efaf387890e3f85c419748ec3af5d1e9696c9c4c;hb=86648dcdaec67b83cec20a9d25b45ec089a7c624#l468\n> )\n>\n> First the function is tested as shippable (able to be pushed down) and\n> date_trunc() and date_bin() both are.\n>\n> Then parameters sub-expressions are evaluated with collation and\n> “shippability”, and they all are with both functions.\n>\n> Then we arrive at this code portion:\n>\n> if (fe->inputcollid == InvalidOid)\n> /* OK, inputs are all noncollatable */ ;else if (inner_cxt.state != FDW_COLLATE_SAFE ||\n> fe->inputcollid != inner_cxt.collation)\n> return false;\n>\n> For date_trunc() function :\n>\n> -\n>\n> fe variable contains the sub-expressions/arguments merged constraints\n> such as fe->inputcollid. This field is evaluated to 100 (default\n> collation) so codes jumps to else statement and evaluates the if\n> predicates. This 100 inputcollationid is due to text predicate 'day'.\n> -\n>\n> inner_cxt.state contains FDW_COLLATE_STATE but inner_cxt.collation\n> contains 0 (InvalidOid) so the control flow returns false thus the\n> function cannot be pushed down.\n>\n> For date_bin() function :\n>\n> - fe variable contains the sub-expressions/arguments merged\n> constraints. Here, fe->inputcollid is evaluated to 0 (InvalidOid) thus\n> skips the else statement and continues the control flow in the\n> function.\n>\n> For date_bin(), all arguments are “non-collatable” arguments (timestamp\n> without time zone and interval).\n>\n> So the situation is that date_trunc() is a “non-collatable” function\n> failing to be pushed down whereas it may be a good idea to do so.\n>\n> Maybe we could add another condition to the first if statement in order to\n> allow a “no-collation” function to be pushed down even if they have\n> “collatable” parameters. 
I’m not sure about the possible regressions of\n> behaviour of this change, but it seems to work fine with date_trunc() and\n> date_part() (which suffers the same problem).\n>\n> Here’s the following change\n>\n> /*\n> * If function's input collation is not derived from a foreign\n> * Var, it can't be sent to remote.\n> */\n> if (fe->inputcollid == InvalidOid ||\n>     fe->funccollid == InvalidOid)\n>     /* OK, inputs are all noncollatable */ ;\n> else if (inner_cxt.state != FDW_COLLATE_SAFE ||\n>          fe->inputcollid != inner_cxt.collation)\n>     return false;\n>\n> I don’t presume this patch is free from side effects or fits all use-cases.\n>\n> A patch (tiny) is attached to this email. This patch works against\n> master/head at the time of writing.\n> Thank you for any thoughts.\n>\n> --\n> Jean-Christophe Arnu\n>\n\n-- \nJean-Christophe Arnu",
"msg_date": "Thu, 5 Oct 2023 15:36:55 +0200",
"msg_from": "Jean-Christophe Arnu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FDW pushdown of non-collated functions"
},
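The collation check quoted from deparse.c above can be modeled as a tiny standalone function. The sketch below is not PostgreSQL source: the collation OIDs and the FDW_COLLATE_* states are simplified stand-ins, using the values reported in the debugging session (inputcollid 100 from date_trunc's 'day' text argument, InvalidOid for date_bin's arguments).

```python
# Standalone model of the collation check in foreign_expr_walker();
# constants and the merged inner-context state are simplified stand-ins,
# not the actual PostgreSQL definitions.

INVALID_OID = 0          # InvalidOid
DEFAULT_COLLATION = 100  # OID of the default collation, per the debug session

FDW_COLLATE_NONE = "none"      # expression is of a noncollatable type
FDW_COLLATE_SAFE = "safe"      # collation derives from a foreign Var
FDW_COLLATE_UNSAFE = "unsafe"  # collation derives from something else

def function_is_shippable(inputcollid, inner_state, inner_collation):
    """Mirror of the quoted if/else: True when the function call may be
    deparsed and sent to the remote server."""
    if inputcollid == INVALID_OID:
        return True  # OK, inputs are all noncollatable
    if inner_state != FDW_COLLATE_SAFE or inputcollid != inner_collation:
        return False
    return True

# date_bin(interval, timestamp, timestamp): no collatable inputs at all.
print(function_is_shippable(INVALID_OID, FDW_COLLATE_NONE, INVALID_OID))        # True

# date_trunc(text, timestamp): the 'day' constant forces inputcollid = 100,
# but no collation was derived from a foreign Var (inner_collation stays 0).
print(function_is_shippable(DEFAULT_COLLATION, FDW_COLLATE_SAFE, INVALID_OID))  # False
```

This makes the asymmetry in the thread concrete: date_bin() short-circuits on the first branch, while date_trunc() falls into the else branch and fails the `inputcollid != inner_cxt.collation` comparison.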
{
"msg_contents": "Hi Jean-Christophe,\n\nOn Fri, Sep 8, 2023 at 11:30 PM Jean-Christophe Arnu <[email protected]> wrote:\n>\n> Maybe we could add another condition to the first if statement in order to allow a “no-collation” function to be pushed down even if they have “collatable” parameters. I’m not sure about the possible regressions of behaviour of this change, but it\nseems to work fine with date_trunc() and date_part() (which suffers\nthe same problem).\n\nThat may not work since the output of the function may be dependent\nupon the collation of the inputs.\n\nThere were similar discussions earlier, e.g.\nhttps://www.postgresql.org/message-id/flat/CACowWR1ARWyRepRxGfijMcsw%2BH84Dj8x2o9N3kvz%3Dz1p%2B6b45Q%40mail.gmail.com.\n\nReading Tom's first reply there, you may work around this by declaring\nthe collation explicitly.\n\nBriefly reading Tom's reply, the problem seems to be trusting whether\nthe default collations on the local and the foreign server are the same\nor not. A simple fix may be to add a foreign-server-level option\ndeclaring that the default collation on the foreign server is the same\nas on the local server. But given that the problem has remained unsolved\nfor at least 7 years, maybe such a simple fix is not enough.\n\nAnother solution would be to attach another attribute to a function\nindicating whether the output of that function depends upon the input\ncollations or not. Doing that just for FDW may not be acceptable\nthough.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 6 Oct 2023 17:46:19 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FDW pushdown of non-collated functions"
},
{
"msg_contents": "Hi Ashutosh,\n\nOn Fri, Oct 6, 2023 at 14:16, Ashutosh Bapat <[email protected]>\nwrote:\n\n> Hi Jean-Christophe,\n>\n> On Fri, Sep 8, 2023 at 11:30 PM Jean-Christophe Arnu <[email protected]>\n> wrote:\n> >\n> > Maybe we could add another condition to the first if statement in order\n> to allow a “no-collation” function to be pushed down even if they have\n> “collatable” parameters. I’m not sure about the possible regressions of\n> behaviour of this change, but it\n> seems to work fine with date_trunc() and date_part() (which suffers\n> the same problem).\n>\n> That may not work since the output of the function may be dependent\n> upon the collation of the inputs.\n>\n> There were similar discussions earlier, e.g.\n>\n> https://www.postgresql.org/message-id/flat/CACowWR1ARWyRepRxGfijMcsw%2BH84Dj8x2o9N3kvz%3Dz1p%2B6b45Q%40mail.gmail.com\n>\n> Reading Tom's first reply there, you may work around this by declaring\n> the collation explicitly.\n>\n\nThanks for your reply. I did not catch these messages in the archive.\nThanks for spotting them.\n\n\n> Briefly reading Tom's reply, the problem seems to be trusting whether\n> the default collations on the local and the foreign server are the same\n> or not. A simple fix may be to add a foreign-server-level option\n> declaring that the default collation on the foreign server is the same\n> as on the local server. But given that the problem has remained unsolved\n> for at least 7 years, maybe such a simple fix is not enough.\n>\n\nI studied the postgres_fdw source code a bit and the problem is not as easy to\nsolve: one could set a per-\"server\" option telling that the default remote\ncollation is aligned with the local one, but nothing guarantees that the\nparameter collation is known on the «remote» side.\n\n\n>\n> Another solution would be to attach another attribute to a function\n> indicating whether the output of that function depends upon the input\n> collations or not. 
Doing that just for FDW may not be acceptable\n> though.\n>\n\nYes, definitely. I thought\n\nAnyway, you're right, after 7 years, this is a really difficult problem to\nsolve and there's no straightforward solution (to my eyes).\nThanks again for your kind explanations\nRegards\n\n-- \nJean-Christophe Arnu\n",
"msg_date": "Tue, 10 Oct 2023 23:42:57 +0200",
"msg_from": "Jean-Christophe Arnu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FDW pushdown of non-collated functions"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nCurrently obtaining the base type of a domain involves a somewhat long\nrecursive query. Consider:\n\n```\ncreate domain mytext as text;\ncreate domain mytext_child_1 as mytext;\ncreate domain mytext_child_2 as mytext_child_1;\n```\n\nTo get `mytext_child_2` base type we can do:\n\n```\nWITH RECURSIVE\nrecurse AS (\n SELECT\n oid,\n typbasetype,\n COALESCE(NULLIF(typbasetype, 0), oid) AS base\n FROM pg_type\n UNION\n SELECT\n t.oid,\n b.typbasetype,\n COALESCE(NULLIF(b.typbasetype, 0), b.oid) AS base\n FROM recurse t\n JOIN pg_type b ON t.typbasetype = b.oid\n)\nSELECT\n oid::regtype,\n base::regtype\nFROM recurse\nWHERE typbasetype = 0 and oid = 'mytext_child_2'::regtype;\n\n oid | base\n----------------+------\n mytext_child_2 | text\n```\n\nCore has the `getBaseType` function, which already gets a domain base type\nrecursively.\n\nI've attached a patch that exposes a `pg_basetype` SQL function that uses\n`getBaseType`, so the long query above just becomes:\n\n```\nselect pg_basetype('mytext_child_2'::regtype);\n pg_basetype\n-------------\n text\n(1 row)\n```\n\nTests and docs are added.\n\nBest regards,\nSteve Chavez",
"msg_date": "Sat, 9 Sep 2023 01:17:02 -0300",
"msg_from": "Steve Chavez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add pg_basetype() function to obtain a DOMAIN base type"
},
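The recursive CTE above walks pg_type.typbasetype until it reaches 0. The same walk can be sketched over a toy mapping; the OIDs and the dictionary below are made up for illustration, while in the server getBaseType() performs this loop over the pg_type syscache.

```python
# Toy model of resolving a domain's base type: follow typbasetype links
# until 0 ("not a domain"), as the recursive CTE and getBaseType() do.

# oid -> typbasetype (0 means the type is not a domain, as in pg_type)
TYPBASETYPE = {
    25: 0,      # text
    100: 25,    # mytext -> text
    101: 100,   # mytext_child_1 -> mytext
    102: 101,   # mytext_child_2 -> mytext_child_1
}

def get_base_type(oid):
    """Recurse through domain levels until reaching a non-domain type."""
    base = TYPBASETYPE[oid]
    while base != 0:
        oid, base = base, TYPBASETYPE[base]
    return oid

print(get_base_type(102))  # 25, i.e. text: the bottom of the domain chain
print(get_base_type(25))   # 25: a non-domain type maps to itself
```

Note that, like getBaseType(), this drills down to the bottom of an arbitrarily long domain chain in one call, which is exactly what the proposed pg_basetype() exposes at the SQL level.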
{
"msg_contents": "Just to give a data point for the need of this function:\n\nhttps://dba.stackexchange.com/questions/231879/how-to-get-the-basetype-of-a-domain-in-pg-type\n\nThis is also a common use case for services/extensions that require\npostgres metadata for their correct functioning, like postgREST or\npg_graphql.\n\nHere's a query for getting domain base types, taken from the postgREST\ncodebase:\nhttps://github.com/PostgREST/postgrest/blob/531a183b44b36614224fda432335cdaa356b4a0a/src/PostgREST/SchemaCache.hs#L342-L364\n\nSo having `pg_basetype` would be really helpful in those cases.\n\nLooking forward to hearing any feedback. Or if this would be a bad idea.\n\nBest regards,\nSteve Chavez\n\nOn Sat, 9 Sept 2023 at 01:17, Steve Chavez <[email protected]> wrote:\n\n> Hello hackers,\n>\n> Currently obtaining the base type of a domain involves a somewhat long\n> recursive query. Consider:\n>\n> ```\n> create domain mytext as text;\n> create domain mytext_child_1 as mytext;\n> create domain mytext_child_2 as mytext_child_1;\n> ```\n>\n> To get `mytext_child_2` base type we can do:\n>\n> ```\n> WITH RECURSIVE\n> recurse AS (\n> SELECT\n> oid,\n> typbasetype,\n> COALESCE(NULLIF(typbasetype, 0), oid) AS base\n> FROM pg_type\n> UNION\n> SELECT\n> t.oid,\n> b.typbasetype,\n> COALESCE(NULLIF(b.typbasetype, 0), b.oid) AS base\n> FROM recurse t\n> JOIN pg_type b ON t.typbasetype = b.oid\n> )\n> SELECT\n> oid::regtype,\n> base::regtype\n> FROM recurse\n> WHERE typbasetype = 0 and oid = 'mytext_child_2'::regtype;\n>\n> oid | base\n> ----------------+------\n> mytext_child_2 | text\n> ```\n>\n> Core has the `getBaseType` function, which already gets a domain base type\n> recursively.\n>\n> I've attached a patch that exposes a `pg_basetype` SQL function that uses\n> `getBaseType`, so the long query above just becomes:\n>\n> ```\n> select pg_basetype('mytext_child_2'::regtype);\n> pg_basetype\n> -------------\n> text\n> (1 row)\n> ```\n>\n> Tests and docs are added.\n>\n> 
Best regards,\n> Steve Chavez\n>\n",
"msg_date": "Tue, 19 Sep 2023 11:20:24 -0300",
"msg_from": "Steve Chavez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "Hi, Steve!\n\nOn Tue, Sep 19, 2023 at 8:36 PM Steve Chavez <[email protected]> wrote:\n>\n> Just to give a data point for the need of this function:\n>\n> https://dba.stackexchange.com/questions/231879/how-to-get-the-basetype-of-a-domain-in-pg-type\n>\n> This is also a common use case for services/extensions that require postgres metadata for their correct functioning, like postgREST or pg_graphql.\n>\n> Here's a query for getting domain base types, taken from the postgREST codebase:\n> https://github.com/PostgREST/postgrest/blob/531a183b44b36614224fda432335cdaa356b4a0a/src/PostgREST/SchemaCache.hs#L342-L364\n>\n> So having `pg_basetype` would be really helpful in those cases.\n>\n> Looking forward to hearing any feedback. Or if this would be a bad idea.\n\nI think this is a good idea. It's nice to have a simple (and fast)\nbuilt-in function to call instead of investing complex queries over\nthe system catalog.\n\nThe one thing triggering my perfectionism is that the patch does two\nsyscache lookups instead of one. In order to fit into one syscache\nlookup we could add \"bool missing_ok\" argument to\ngetBaseTypeAndTypmod(). However, getBaseTypeAndTypmod() is heavily\nused in our codebase. So, changing its signature would be invasive.\nCould we invent getBaseTypeAndTypmodExtended() (ideas for a better\nname?) that does all the job and supports \"bool missing_ok\" argument,\nand have getBaseTypeAndTypmod() as a wrapper with the same signature?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 27 Sep 2023 20:22:48 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
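The refactoring suggested here — an "extended" variant carrying a missing_ok flag, with the old name kept as a thin wrapper so existing callers are untouched — can be sketched as follows. The function names and the toy type mapping are illustrative only, not actual PostgreSQL code.

```python
# Sketch of the wrapper pattern: the extended function does the work and
# honours missing_ok; the original signature delegates to it unchanged.

KNOWN_TYPES = {100: 25, 25: 0}  # toy oid -> typbasetype mapping

def get_base_type_extended(typid, missing_ok=False):
    if typid not in KNOWN_TYPES:
        if missing_ok:
            return None  # single lookup; caller handles the missing type
        raise LookupError(f"cache lookup failed for type {typid}")
    base = KNOWN_TYPES[typid]
    return typid if base == 0 else get_base_type_extended(base, missing_ok)

def get_base_type(typid):
    """Old signature preserved: still errors out on a missing type."""
    return get_base_type_extended(typid, missing_ok=False)

print(get_base_type(100))                            # 25
print(get_base_type_extended(999, missing_ok=True))  # None
```

The point of the pattern is that the "does the type exist?" probe and the base-type resolution happen in the same lookup, instead of an existence check followed by a second resolution call.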
{
"msg_contents": "On Thu, Sep 28, 2023 at 11:56 AM Alexander Korotkov\n<[email protected]> wrote:\n>\n> The one thing triggering my perfectionism is that the patch does two\n> syscache lookups instead of one. In order to fit into one syscache\n> lookup we could add \"bool missing_ok\" argument to\n> getBaseTypeAndTypmod(). However, getBaseTypeAndTypmod() is heavily\n> used in our codebase. So, changing its signature would be invasive.\n> Could we invent getBaseTypeAndTypmodExtended() (ideas for a better\n> name?) that does all the job and supports \"bool missing_ok\" argument,\n> and have getBaseTypeAndTypmod() as a wrapper with the same signature?\n>\n\nhi.\nattached patch, not 100% confident it's totally correct, but one\nsyscache lookup.\nanother function getBaseTypeAndTypmodExtended added.\n\ngetBaseTypeAndTypmodExtended function signature:\nOid getBaseTypeAndTypmodExtended(Oid typid, int32 *typmod, bool missing_ok).\n\nbased on Steve Chavez's patch, minor doc changes.",
"msg_date": "Sat, 18 Nov 2023 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 12:22 AM Alexander Korotkov\n<[email protected]> wrote:\n> The one thing triggering my perfectionism is that the patch does two\n> syscache lookups instead of one.\n\nFor an admin function used interactively, I'm not sure why that\nmatters? Or do you see another use case?\n\n\n",
"msg_date": "Mon, 4 Dec 2023 16:10:36 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "On Mon, Dec 4, 2023 at 5:11 PM John Naylor <[email protected]> wrote:\n>\n> On Thu, Sep 28, 2023 at 12:22 AM Alexander Korotkov\n> <[email protected]> wrote:\n> > The one thing triggering my perfectionism is that the patch does two\n> > syscache lookups instead of one.\n>\n> For an admin function used interactively, I'm not sure why that\n> matters? Or do you see another use case?\n\nI did a minor refactor based on v1-0001.\nI think pg_basetype should stay in \"9.26.4. System Catalog Information\nFunctions\", so I placed it before pg_char_to_encoding.\nThat way the functions listed in \"Table 9.73. System Catalog Information\nFunctions\" keep their alphabetical ordering.\nI slightly changed src/include/catalog/pg_proc.dat; now it looks\nvery similar to pg_typeof:\n\nsrc6=# \\\\df pg_typeof\n List of functions\n Schema | Name | Result data type | Argument data types | Type\n------------+-----------+------------------+---------------------+------\n pg_catalog | pg_typeof | regtype | \"any\" | func\n(1 row)\n\nsrc6=# \\\\df pg_basetype\n List of functions\n Schema | Name | Result data type | Argument data types | Type\n------------+-------------+------------------+---------------------+------\n pg_catalog | pg_basetype | regtype | \"any\" | func\n(1 row)\n\nv2-0001 is as is in the first email thread; 0002 contains my changes based on v2-0001.",
"msg_date": "Tue, 2 Jan 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "Hi,\n\nOn 1/2/24 01:00, jian he wrote:\n> On Mon, Dec 4, 2023 at 5:11 PM John Naylor <[email protected]> wrote:\n>>\n>> On Thu, Sep 28, 2023 at 12:22 AM Alexander Korotkov\n>> <[email protected]> wrote:\n>>> The one thing triggering my perfectionism is that the patch does two\n>>> syscache lookups instead of one.\n>>\n>> For an admin function used interactively, I'm not sure why that\n>> matters? Or do you see another use case?\n> \n> I did a minor refactor based on v1-0001.\n> I think pg_basetype should stay at \"9.26.4. System Catalog Information\n> Functions\".\n> So I placed it before pg_char_to_encoding.\n> Now functions listed on \"Table 9.73. System Catalog Information\n> Functions\" will look like alphabetical ordering.\n> I slightly changed the src/include/catalog/pg_proc.dat.\n> now it looks like very similar to pg_typeof\n> \n> src6=# \\df pg_typeof\n> List of functions\n> Schema | Name | Result data type | Argument data types | Type\n> ------------+-----------+------------------+---------------------+------\n> pg_catalog | pg_typeof | regtype | \"any\" | func\n> (1 row)\n> \n> src6=# \\df pg_basetype\n> List of functions\n> Schema | Name | Result data type | Argument data types | Type\n> ------------+-------------+------------------+---------------------+------\n> pg_catalog | pg_basetype | regtype | \"any\" | func\n> (1 row)\n> \n> v2-0001 is as is in the first email thread, 0002 is my changes based on v2-0001.\n\n\nI think the patch(es) look reasonable, so just a couple minor comments.\n\n1) We already have pg_typeof() function, so maybe we should use a\nsimilar naming convention pg_basetypeof()?\n\n2) I was going to suggest using \"any\" argument, just like pg_typeof, but\nI see 0002 patch already does that. Thanks!\n\n3) I think the docs probably need some formatting - wrapping lines (to\nmake it consistent with the nearby stuff) and similar stuff.\n\n\nOther than that it looks fine to me. 
It's a simple patch, so if we can\nagree on the naming I'll get it cleaned up and pushed.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 16 Feb 2024 19:16:18 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "On Sat, Feb 17, 2024 at 2:16 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On 1/2/24 01:00, jian he wrote:\n> > On Mon, Dec 4, 2023 at 5:11 PM John Naylor <[email protected]> wrote:\n> >>\n> >> On Thu, Sep 28, 2023 at 12:22 AM Alexander Korotkov\n> >> <[email protected]> wrote:\n> >>> The one thing triggering my perfectionism is that the patch does two\n> >>> syscache lookups instead of one.\n> >>\n> >> For an admin function used interactively, I'm not sure why that\n> >> matters? Or do you see another use case?\n> >\n> > I did a minor refactor based on v1-0001.\n> > I think pg_basetype should stay at \"9.26.4. System Catalog Information\n> > Functions\".\n> > So I placed it before pg_char_to_encoding.\n> > Now functions listed on \"Table 9.73. System Catalog Information\n> > Functions\" will look like alphabetical ordering.\n> > I slightly changed the src/include/catalog/pg_proc.dat.\n> > now it looks like very similar to pg_typeof\n> >\n> > src6=# \\df pg_typeof\n> > List of functions\n> > Schema | Name | Result data type | Argument data types | Type\n> > ------------+-----------+------------------+---------------------+------\n> > pg_catalog | pg_typeof | regtype | \"any\" | func\n> > (1 row)\n> >\n> > src6=# \\df pg_basetype\n> > List of functions\n> > Schema | Name | Result data type | Argument data types | Type\n> > ------------+-------------+------------------+---------------------+------\n> > pg_catalog | pg_basetype | regtype | \"any\" | func\n> > (1 row)\n> >\n> > v2-0001 is as is in the first email thread, 0002 is my changes based on v2-0001.\n>\n>\n> I think the patch(es) look reasonable, so just a couple minor comments.\n>\n> 1) We already have pg_typeof() function, so maybe we should use a\n> similar naming convention pg_basetypeof()?\n>\nI am ok with pg_basetypeof.\n\n\n",
"msg_date": "Sat, 17 Feb 2024 08:57:50 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "\n\nOn 2/17/24 01:57, jian he wrote:\n> On Sat, Feb 17, 2024 at 2:16 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> On 1/2/24 01:00, jian he wrote:\n>>> On Mon, Dec 4, 2023 at 5:11 PM John Naylor <[email protected]> wrote:\n>>>>\n>>>> On Thu, Sep 28, 2023 at 12:22 AM Alexander Korotkov\n>>>> <[email protected]> wrote:\n>>>>> The one thing triggering my perfectionism is that the patch does two\n>>>>> syscache lookups instead of one.\n>>>>\n>>>> For an admin function used interactively, I'm not sure why that\n>>>> matters? Or do you see another use case?\n>>>\n>>> I did a minor refactor based on v1-0001.\n>>> I think pg_basetype should stay at \"9.26.4. System Catalog Information\n>>> Functions\".\n>>> So I placed it before pg_char_to_encoding.\n>>> Now functions listed on \"Table 9.73. System Catalog Information\n>>> Functions\" will look like alphabetical ordering.\n>>> I slightly changed the src/include/catalog/pg_proc.dat.\n>>> now it looks like very similar to pg_typeof\n>>>\n>>> src6=# \\df pg_typeof\n>>> List of functions\n>>> Schema | Name | Result data type | Argument data types | Type\n>>> ------------+-----------+------------------+---------------------+------\n>>> pg_catalog | pg_typeof | regtype | \"any\" | func\n>>> (1 row)\n>>>\n>>> src6=# \\df pg_basetype\n>>> List of functions\n>>> Schema | Name | Result data type | Argument data types | Type\n>>> ------------+-------------+------------------+---------------------+------\n>>> pg_catalog | pg_basetype | regtype | \"any\" | func\n>>> (1 row)\n>>>\n>>> v2-0001 is as is in the first email thread, 0002 is my changes based on v2-0001.\n>>\n>>\n>> I think the patch(es) look reasonable, so just a couple minor comments.\n>>\n>> 1) We already have pg_typeof() function, so maybe we should use a\n>> similar naming convention pg_basetypeof()?\n>>\n> I am ok with pg_basetypeof.\n\nAn alternative approach would be modifying pg_typeof() to optionally\ndetermine the base type, 
depending on a new argument which would default\nto \"false\" (i.e. the current behavior).\n\nSo you'd do\n\n SELECT pg_typeof(x);\n\nor\n\n SELECT pg_typeof(x, false);\n\nto get the current behavior, or\n\n SELECT pg_typeof(x, true);\n\nto determine the base type.\n\n\nPerhaps this would be better than adding a new function doing almost the\nsame thing as pg_typeof(). But I haven't tried, maybe it doesn't work\nfor some reason, or maybe we don't want to do it this way ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 17 Feb 2024 19:49:21 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 2/17/24 01:57, jian he wrote:\n>> On Sat, Feb 17, 2024 at 2:16 AM Tomas Vondra\n>> <[email protected]> wrote:\n>>> 1) We already have pg_typeof() function, so maybe we should use a\n>>> similar naming convention pg_basetypeof()?\n\n>> I am ok with pg_basetypeof.\n\n> An alternative approach would be modifying pg_typeof() to optionally\n> determine the base type, depending on a new argument which would default\n> to \"false\" (i.e. the current behavior).\n\nForgive me for not having read the thread, but I wonder why we want\nthis to duplicate the functionality of pg_typeof() at all. My first\nreaction to the requirement given in the thread subject is to write\na function that takes a type OID and returns another type OID\n(or the same OID, if it's not a domain). If you want to determine\nthe base type of some namable object, you could combine the functions\nlike \"basetypeof(pg_typeof(x))\". But ISTM there are other use cases\nwhere you'd have a type OID. Then having to construct an object to\napply a pg_typeof-like function to would be difficult.\n\nI don't have an immediate proposal for exactly what to call such a\nfunction, but naming it by analogy to pg_typeof would be questionable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 17 Feb 2024 14:20:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "\n\nOn 2/17/24 20:20, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 2/17/24 01:57, jian he wrote:\n>>> On Sat, Feb 17, 2024 at 2:16 AM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>> 1) We already have pg_typeof() function, so maybe we should use a\n>>>> similar naming convention pg_basetypeof()?\n> \n>>> I am ok with pg_basetypeof.\n> \n>> An alternative approach would be modifying pg_typeof() to optionally\n>> determine the base type, depending on a new argument which would default\n>> to \"false\" (i.e. the current behavior).\n> \n> Forgive me for not having read the thread, but I wonder why we want\n> this to duplicate the functionality of pg_typeof() at all. My first\n> reaction to the requirement given in the thread subject is to write\n> a function that takes a type OID and returns another type OID\n> (or the same OID, if it's not a domain). If you want to determine\n> the base type of some namable object, you could combine the functions\n> like \"basetypeof(pg_typeof(x))\". But ISTM there are other use cases\n> where you'd have a type OID. Then having to construct an object to\n> apply a pg_typeof-like function to would be difficult.\n> \n\nYeah, I think you're right - the initial message does actually seem to\nindicate it needs to pass type \"type OID\" to the function, not some\narbitrary expression (and then process a type of it). So modeling it per\npg_typeof(any) would not even work.\n\nAlso, now that I looked at the v2 patch again, I see it only really\ntweaked the pg_proc.dat entry, but the code still does PG_GETARG_OID (so\nthe \"any\" bit is not really correct).\n\n> I don't have an immediate proposal for exactly what to call such a\n> function, but naming it by analogy to pg_typeof would be questionable.\n> \n\nAre you objecting to the pg_basetypeof() name, or just to it accepting\n\"any\" argument? 
I think pg_basetypeof(regtype) would work ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 18 Feb 2024 00:29:45 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 2/17/24 20:20, Tom Lane wrote:\n>> I don't have an immediate proposal for exactly what to call such a\n>> function, but naming it by analogy to pg_typeof would be questionable.\n\n> Are you objecting to the pg_basetypeof() name, or just to it accepting\n> \"any\" argument? I think pg_basetypeof(regtype) would work ...\n\nI'm not sure. \"pg_basetypeof\" seems like it invites confusion with\n\"pg_typeof\", but I don't really have a better idea. Perhaps\n\"pg_baseofdomain(regtype)\"? I'm not especially thrilled with that,\neither.\n\nAlso, just to be clear, we intend this to drill down to the bottom\nnon-domain type, right? Do we need a second function that goes\ndown only one level? I'm inclined to say \"no\", mainly because\n(1) that would complicate the naming situation even more, and\n(2) that use-case is pretty easy to handle with a sub-select.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 17 Feb 2024 19:47:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "On Sun, Feb 18, 2024 at 2:49 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> An alternative approach would be modifying pg_typeof() to optionally\n> determine the base type, depending on a new argument which would default\n> to \"false\" (i.e. the current behavior).\n>\n> So you'd do\n>\n> SELECT pg_typeof(x);\n>\n> or\n>\n> SELECT pg_typeof(x, false);\n>\n> to get the current behavior, or and\n>\n> SELECT pg_typeof(x, true);\n>\n> to determine the base type.\n>\n> Perhaps this would be better than adding a new function doing almost the\n> same thing as pg_typeof(). But I haven't tried, maybe it doesn't work\n> for some reason, or maybe we don't want to do it this way ...\n>\n\npg_typeof is quite hot.\ngetting the base type of a domain is niche.\n\nchanging pg_typeof requires extra effort to make it compatible with\nprevious behavior.\nbundling it together seems not worth it.\n\n\n",
"msg_date": "Sun, 18 Feb 2024 09:30:44 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "On Sun, Feb 18, 2024 at 7:29 AM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n> Also, now that I looked at the v2 patch again, I see it only really\n> tweaked the pg_proc.dat entry, but the code still does PG_GETARG_OID (so\n> the \"any\" bit is not really correct).\n>\n\nPG_GETARG_OID part indeed is wrong. so I change to following:\n\n+Datum\n+pg_basetype(PG_FUNCTION_ARGS)\n+{\n+ Oid oid;\n+\n+ oid = get_fn_expr_argtype(fcinfo->flinfo, 0);\n+ if (!SearchSysCacheExists1(TYPEOID, ObjectIdGetDatum(oid)))\n+ PG_RETURN_NULL();\n+\n+ PG_RETURN_OID(getBaseType(oid));\n+}\n\nI still name the function as pg_basetype, feel free to change it.\n\n+ <row>\n+ <entry role=\"func_table_entry\"><para role=\"func_signature\">\n+ <indexterm>\n+ <primary>pg_basetype</primary>\n+ </indexterm>\n+ <function>pg_basetype</function> ( <type>\"any\"</type> )\n+ <returnvalue>regtype</returnvalue>\n+ </para>\n+ <para>\n+ Returns the OID of the base type of a domain or if the\nargument is a basetype it returns the same type.\n+ If there's a chain of domain dependencies, it will recurse\nuntil finding the base type.\n+ </para>\ncompared with pg_typeof's explanation, I feel like pg_basetype's\nexplanation doesn't seem accurate.\nHowever, I don't know how to rephrase it.",
"msg_date": "Mon, 19 Feb 2024 15:21:15 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "looking at it again.\nI found out we can just simply do\n`\nDatum\npg_basetype(PG_FUNCTION_ARGS)\n{\nOid oid;\n\noid = get_fn_expr_argtype(fcinfo->flinfo, 0);\nPG_RETURN_OID(getBaseType(oid));\n}\n`\n\nif the type is not a domain, work the same as pg_typeof.\nif the type is domain, pg_typeof return as is, pg_basetype return the\nbase type.\nso it only diverges when the argument type is a type of domain.\n\nthe doc:\n <function>pg_basetype</function> ( <type>\"any\"</type> )\n <returnvalue>regtype</returnvalue>\n </para>\n <para>\n Returns the OID of the base type of a domain. If the argument\nis not a type of domain,\n return the OID of the data type of the argument just like <link\nlinkend=\"function-pg-typeof\"><function>pg_typeof()</function></link>.\n If there's a chain of domain dependencies, it will recurse\nuntil finding the base type.\n </para>\n\n\nalso, I think this way, we only do one syscache lookup.",
"msg_date": "Mon, 18 Mar 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 2:01 AM jian he <[email protected]> wrote:\n>\n> looking at it again.\n> I found out we can just simply do\n> `\n> Datum\n> pg_basetype(PG_FUNCTION_ARGS)\n> {\n> Oid oid;\n>\n> oid = get_fn_expr_argtype(fcinfo->flinfo, 0);\n> PG_RETURN_OID(getBaseType(oid));\n> }\n> `\n>\n> if the type is not a domain, work the same as pg_typeof.\n> if the type is domain, pg_typeof return as is, pg_basetype return the\n> base type.\n> so it only diverges when the argument type is a type of domain.\n>\n> the doc:\n> <function>pg_basetype</function> ( <type>\"any\"</type> )\n> <returnvalue>regtype</returnvalue>\n> </para>\n> <para>\n> Returns the OID of the base type of a domain. If the argument\n> is not a type of domain,\n> return the OID of the data type of the argument just like <link\n> linkend=\"function-pg-typeof\"><function>pg_typeof()</function></link>.\n> If there's a chain of domain dependencies, it will recurse\n> until finding the base type.\n> </para>\n>\n>\n> also, I think this way, we only do one syscache lookup.\n\nLooks good to me. But should it be named pg_basetypeof()?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 18 Mar 2024 13:24:47 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "Alexander Korotkov <[email protected]> writes:\n> On Mon, Mar 18, 2024 at 2:01 AM jian he <[email protected]> wrote:\n>> `\n>> Datum\n>> pg_basetype(PG_FUNCTION_ARGS)\n>> {\n>> \tOid oid;\n>> \n>> \toid = get_fn_expr_argtype(fcinfo->flinfo, 0);\n>> \tPG_RETURN_OID(getBaseType(oid));\n>> }\n>> `\n\n> Looks good to me. But should it be named pg_basetypeof()?\n\nI still don't like this approach. It forces the function to be\nused in a particular way that's highly redundant with pg_typeof.\nI think we'd be better off with\n\npg_basetype(PG_FUNCTION_ARGS)\n{\n\tOid typid = PG_GETARG_OID(0);\n\n\tPG_RETURN_OID(getBaseType(typid));\n}\n\nThe use-case that the other definition handles would be implemented\nlike\n\n\tpg_basetype(pg_typeof(expression))\n\nbut there are other use-cases. For example, if you want to know\nthe base types of the columns of a table, you could do something\nlike\n\nselect attname, pg_basetype(atttypid) from pg_attribute\n where attrelid = 'foo'::regclass order by attnum;\n\nbut that functionality is simply not available with the other\ndefinition.\n\nPerhaps there's an argument for providing both things, but that\nfeels like overkill to me. I doubt that pg_basetype(pg_typeof())\nis going to be so common as to need a shorthand.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Mar 2024 11:43:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 11:43 PM Tom Lane <[email protected]> wrote:\n>\n> Alexander Korotkov <[email protected]> writes:\n> > On Mon, Mar 18, 2024 at 2:01 AM jian he <[email protected]> wrote:\n> >> `\n> >> Datum\n> >> pg_basetype(PG_FUNCTION_ARGS)\n> >> {\n> >> Oid oid;\n> >>\n> >> oid = get_fn_expr_argtype(fcinfo->flinfo, 0);\n> >> PG_RETURN_OID(getBaseType(oid));\n> >> }\n> >> `\n>\n> > Looks good to me. But should it be named pg_basetypeof()?\n>\n> I still don't like this approach. It forces the function to be\n> used in a particular way that's highly redundant with pg_typeof.\n> I think we'd be better off with\n>\n> pg_basetype(PG_FUNCTION_ARGS)\n> {\n> Oid typid = PG_GETARG_OID(0);\n>\n> PG_RETURN_OID(getBaseType(typid));\n> }\n>\n> The use-case that the other definition handles would be implemented\n> like\n>\n> pg_basetype(pg_typeof(expression))\n>\n\ntrying to do it this way.\nnot sure the following error message is expected.\n\nSELECT pg_basetype(-1);\nERROR: cache lookup failed for type 4294967295",
"msg_date": "Thu, 21 Mar 2024 10:34:36 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "On Thu, Mar 21, 2024 at 10:34 AM jian he <[email protected]> wrote:\n>\n> On Mon, Mar 18, 2024 at 11:43 PM Tom Lane <[email protected]> wrote:\n> >\n> > Alexander Korotkov <[email protected]> writes:\n> > > On Mon, Mar 18, 2024 at 2:01 AM jian he <[email protected]> wrote:\n> > >> `\n> > >> Datum\n> > >> pg_basetype(PG_FUNCTION_ARGS)\n> > >> {\n> > >> Oid oid;\n> > >>\n> > >> oid = get_fn_expr_argtype(fcinfo->flinfo, 0);\n> > >> PG_RETURN_OID(getBaseType(oid));\n> > >> }\n> > >> `\n> >\n> > > Looks good to me. But should it be named pg_basetypeof()?\n> >\n> > I still don't like this approach. It forces the function to be\n> > used in a particular way that's highly redundant with pg_typeof.\n> > I think we'd be better off with\n> >\n> > pg_basetype(PG_FUNCTION_ARGS)\n> > {\n> > Oid typid = PG_GETARG_OID(0);\n> >\n> > PG_RETURN_OID(getBaseType(typid));\n> > }\n> >\n> > The use-case that the other definition handles would be implemented\n> > like\n> >\n> > pg_basetype(pg_typeof(expression))\n> >\n>\n> trying to do it this way.\n> not sure the following error message is expected.\n>\n> SELECT pg_basetype(-1);\n> ERROR: cache lookup failed for type 4294967295\n\nI think the error message should be fine.\neven though\n`select '-1'::oid::regtype;` return 4294967295.\n\nI noticed psql \\dD didn't return the basetype of a domain.\none of the usage of this feature would be in psql \\dD.\n\nnow we can:\n\\dD mytext_child_2\n List of domains\n Schema | Name | Type | Basetype | Collation |\nNullable | Default | Check\n--------+----------------+----------------+----------+-----------+----------+---------+-------\n public | mytext_child_2 | mytext_child_1 | text | |\n | |\n(1 row)",
"msg_date": "Thu, 28 Mar 2024 10:54:08 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "jian he <[email protected]> writes:\n> I noticed psql \\dD didn't return the basetype of a domain.\n> one of the usage of this feature would be in psql \\dD.\n\nYour 0002 will cause \\dD to fail entirely against an older server.\nI'm not necessarily against adding this info, but you can't just\nignore the expectations for psql \\d commands:\n\n * Support for the various \\d (\"describe\") commands. Note that the current\n * expectation is that all functions in this file will succeed when working\n * with servers of versions 9.2 and up. It's okay to omit irrelevant\n * information for an old server, but not to fail outright. (But failing\n * against a pre-9.2 server is allowed.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Mar 2024 22:59:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "jian he <[email protected]> writes:\n> trying to do it this way.\n> not sure the following error message is expected.\n\n> SELECT pg_basetype(-1);\n> ERROR: cache lookup failed for type 4294967295\n\nYeah, that's not really OK. You could say it's fine for bogus input,\nbut we've found over the years that it's better for catalog inspection\nfunctions like this to be forgiving of bad input. Otherwise,\nyour query can blow up in unexpected ways due to race conditions\n(ie somebody just dropped the type you are interested in).\n\nA fairly common solution to that is to return NULL for bad input,\nbut in this case we could just have it return the OID unchanged.\n\nEither way though, we can't use getBaseType as-is. We could imagine\nextending that function to support a \"noerror\"-like flag, but I\nbelieve it's already a hot-spot and I'd rather not complicate it\nfurther. So what I suggest doing is just duplicating the code;\nthere's not very much of it.\n\nI did a little polishing of the docs and test cases too, ending\nwith the v7 attached. I think this is about ready to go unless\nthere are objections to the definition.\n\nNot sure what I think about your 0002 proposal to extend \\dD\nwith this. Aside from the server-version-compatibility problem,\nI think it's about 90% redundant because \\dD already shows\nthe immediate base type. The new column would only be\ndifferent in the case of nested domains, which I think are\nnot common. \\dD's output is already pretty wide, so on the\nwhole I'm inclined to leave it alone.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 28 Mar 2024 16:47:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
},
{
"msg_contents": "I wrote:\n> A fairly common solution to that is to return NULL for bad input,\n> but in this case we could just have it return the OID unchanged.\n\nAfter sleeping on it, I concluded that was a bad idea and we'd\nbe best off returning NULL for invalid type OIDs. So this is\njust about back to Steve's original proposal, except for being\na bit more bulletproof against races with DROP TYPE.\nPushed that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Mar 2024 14:00:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_basetype() function to obtain a DOMAIN base type"
}
] |
[
{
"msg_contents": "I happened to notice this bit in fix_expr_common's processing\nof ScalarArrayOpExprs:\n\n set_sa_opfuncid(saop);\n record_plan_function_dependency(root, saop->opfuncid);\n\n if (!OidIsValid(saop->hashfuncid))\n record_plan_function_dependency(root, saop->hashfuncid);\n\n if (!OidIsValid(saop->negfuncid))\n record_plan_function_dependency(root, saop->negfuncid);\n\nSurely those if-conditions are exactly backward, and we should be\nrecording nonzero hashfuncid and negfuncid entries, not zero ones.\nAs-is, the code's a no-op because record_plan_function_dependency\nwill ignore OIDs less than FirstUnpinnedObjectId, including zero.\n\n\"git blame\" blames 50e17ad28 and 29f45e299 for these, so v14\nhas only half the problem of later branches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 09 Sep 2023 19:22:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Surely this code in setrefs.c is wrong?"
},
{
"msg_contents": "On Sun, 10 Sept 2023 at 11:22, Tom Lane <[email protected]> wrote:\n> if (!OidIsValid(saop->hashfuncid))\n> record_plan_function_dependency(root, saop->hashfuncid);\n>\n> if (!OidIsValid(saop->negfuncid))\n> record_plan_function_dependency(root, saop->negfuncid);\n>\n> Surely those if-conditions are exactly backward, and we should be\n> recording nonzero hashfuncid and negfuncid entries, not zero ones.\n\nThat's certainly not coded as I intended. Perhaps I got my wires\ncrossed and mixed up OidIsValid and InvalidOid and without reading\ncorrectly somehow thought OidIsValid was for the inverse case.\n\nI'll push fixes once the 16.0 release is out of the way.\n\nDavid\n\n\n",
"msg_date": "Sun, 10 Sep 2023 21:07:53 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Surely this code in setrefs.c is wrong?"
},
{
"msg_contents": "On Sun, 10 Sept 2023 at 21:07, David Rowley <[email protected]> wrote:\n>\n> On Sun, 10 Sept 2023 at 11:22, Tom Lane <[email protected]> wrote:\n> > if (!OidIsValid(saop->hashfuncid))\n> > record_plan_function_dependency(root, saop->hashfuncid);\n> >\n> > if (!OidIsValid(saop->negfuncid))\n> > record_plan_function_dependency(root, saop->negfuncid);\n> >\n> > Surely those if-conditions are exactly backward, and we should be\n> > recording nonzero hashfuncid and negfuncid entries, not zero ones.\n>\n\n> I'll push fixes once the 16.0 release is out of the way.\n\nFixed in ee3a551e9.\n\nDavid\n\n\n",
"msg_date": "Thu, 14 Sep 2023 11:57:11 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Surely this code in setrefs.c is wrong?"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI noticed that BufferUsage counters' values are strangely different for the\nsame queries on REL_15_STABLE and REL_16_STABLE. For example, I run\n\nCREATE EXTENSION pg_stat_statements;\nCREATE TEMP TABLE test(b int);\nINSERT INTO test(b) SELECT generate_series(1,1000);\nSELECT query, local_blks_hit, local_blks_read, local_blks_written,\n local_blks_dirtied, temp_blks_written FROM pg_stat_statements;\n\nand output on REL_15_STABLE contains\n\nquery | INSERT INTO test(b) SELECT generate_series($1,$2)\nlocal_blks_hit | 1005\nlocal_blks_read | 2\nlocal_blks_written | 5\nlocal_blks_dirtied | 5\ntemp_blks_written | 0\n\nwhile output on REL_16_STABLE contains\n\nquery | INSERT INTO test(b) SELECT generate_series($1,$2)\nlocal_blks_hit | 1006\nlocal_blks_read | 0\nlocal_blks_written | 0\nlocal_blks_dirtied | 5\ntemp_blks_written | 8\n\n\nI found a bug that causes one of the differences. Wrong counter is\nincremented\nin ExtendBufferedRelLocal(). The attached patch fixes it and should be\napplied\nto REL_16_STABLE and master. With the patch applied output contains\n\nquery | INSERT INTO test(b) SELECT generate_series($1,$2)\nlocal_blks_hit | 1006\nlocal_blks_read | 0\nlocal_blks_written | 8\nlocal_blks_dirtied | 5\ntemp_blks_written | 0\n\n\nI still wonder why local_blks_written is greater than it was on\nREL_15_STABLE,\nand why local_blks_read became zero. These changes are caused by fcdda1e4b5.\nThis code is new to me, and I'm still trying to understand whether it's a\nbug\nin computing the counters or just changes in how many blocks are\nread/written\nduring the query execution. If anyone can help me, I would be grateful.\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/\n\nHi hackers,I noticed that BufferUsage counters' values are strangely different for thesame queries on REL_15_STABLE and REL_16_STABLE. 
For example, I runCREATE EXTENSION pg_stat_statements;CREATE TEMP TABLE test(b int);INSERT INTO test(b) SELECT generate_series(1,1000);SELECT query, local_blks_hit, local_blks_read, local_blks_written, local_blks_dirtied, temp_blks_written FROM pg_stat_statements;and output on REL_15_STABLE containsquery | INSERT INTO test(b) SELECT generate_series($1,$2)local_blks_hit | 1005local_blks_read | 2local_blks_written | 5local_blks_dirtied | 5temp_blks_written | 0while output on REL_16_STABLE containsquery | INSERT INTO test(b) SELECT generate_series($1,$2)local_blks_hit | 1006local_blks_read | 0local_blks_written | 0local_blks_dirtied | 5temp_blks_written | 8I found a bug that causes one of the differences. Wrong counter is incrementedin ExtendBufferedRelLocal(). The attached patch fixes it and should be appliedto REL_16_STABLE and master. With the patch applied output containsquery | INSERT INTO test(b) SELECT generate_series($1,$2)local_blks_hit | 1006local_blks_read | 0local_blks_written | 8local_blks_dirtied | 5temp_blks_written | 0I still wonder why local_blks_written is greater than it was on REL_15_STABLE,and why local_blks_read became zero. These changes are caused by fcdda1e4b5.This code is new to me, and I'm still trying to understand whether it's a bugin computing the counters or just changes in how many blocks are read/writtenduring the query execution. If anyone can help me, I would be grateful.Best regards,Karina LitskevichPostgres Professional: http://postgrespro.com/",
"msg_date": "Mon, 11 Sep 2023 09:08:04 +0300",
"msg_from": "Karina Litskevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "BufferUsage counters' values have changed"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 9:08 AM Karina Litskevich <\[email protected]> wrote:\n\n> I found a bug that causes one of the differences. Wrong counter is\n> incremented\n> in ExtendBufferedRelLocal(). The attached patch fixes it and should be\n> applied\n> to REL_16_STABLE and master.\n>\n\n I've forgotten to attach the patch. Here it is.",
"msg_date": "Mon, 11 Sep 2023 09:23:59 +0300",
"msg_from": "Karina Litskevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BufferUsage counters' values have changed"
},
{
"msg_contents": "Hi,\n\nOn Mon, 11 Sept 2023 at 14:28, Karina Litskevich\n<[email protected]> wrote:\n>\n> Hi hackers,\n>\n> I noticed that BufferUsage counters' values are strangely different for the\n> same queries on REL_15_STABLE and REL_16_STABLE. For example, I run\n>\n> CREATE EXTENSION pg_stat_statements;\n> CREATE TEMP TABLE test(b int);\n> INSERT INTO test(b) SELECT generate_series(1,1000);\n> SELECT query, local_blks_hit, local_blks_read, local_blks_written,\n> local_blks_dirtied, temp_blks_written FROM pg_stat_statements;\n>\n> and output on REL_15_STABLE contains\n>\n> query | INSERT INTO test(b) SELECT generate_series($1,$2)\n> local_blks_hit | 1005\n> local_blks_read | 2\n> local_blks_written | 5\n> local_blks_dirtied | 5\n> temp_blks_written | 0\n>\n> while output on REL_16_STABLE contains\n>\n> query | INSERT INTO test(b) SELECT generate_series($1,$2)\n> local_blks_hit | 1006\n> local_blks_read | 0\n> local_blks_written | 0\n> local_blks_dirtied | 5\n> temp_blks_written | 8\n>\n>\n> I found a bug that causes one of the differences. Wrong counter is incremented\n> in ExtendBufferedRelLocal(). The attached patch fixes it and should be applied\n> to REL_16_STABLE and master. With the patch applied output contains\n\nNice finding! I agree, it should be changed.\n\n> query | INSERT INTO test(b) SELECT generate_series($1,$2)\n> local_blks_hit | 1006\n> local_blks_read | 0\n> local_blks_written | 8\n> local_blks_dirtied | 5\n> temp_blks_written | 0\n>\n>\n> I still wonder why local_blks_written is greater than it was on REL_15_STABLE,\n> and why local_blks_read became zero. These changes are caused by fcdda1e4b5.\n> This code is new to me, and I'm still trying to understand whether it's a bug\n> in computing the counters or just changes in how many blocks are read/written\n> during the query execution. If anyone can help me, I would be grateful.\n\nI spent some time on it:\n\nlocal_blks_read became zero because:\n1_ One more cache hit. 
It was supposed to be local_blks_read but it is\nlocal_blks_hit now. This is an assumption, I didn't check this deeply.\n2_ Before fcdda1e4b5, there was one local_blks_read coming from\nbuf = ReadBufferExtended(rel, VISIBILITYMAP_FORKNUM, blkno,\nRBM_ZERO_ON_ERROR, NULL) in freespace.c -> ReadBuffer_common() ->\npgBufferUsage.local_blks_read++.\nBut buf = ReadBufferExtended(rel, VISIBILITYMAP_FORKNUM, blkno,\nRBM_ZERO_ON_ERROR, NULL) is moved into the else case, so it didn't\ncalled and local_blks_read isn't incremented.\n\nlocal_blks_written is greater because of the combination of fcdda1e4b5\nand 00d1e02be2.\nIn PG_15:\nRelationGetBufferForTuple() -> ReadBufferBI(P_NEW, RBM_ZERO_AND_LOCK)\n-> ReadBufferExtended() -> ReadBuffer_common() ->\npgBufferUsage.local_blks_written++; (called 5 times) [0]\nIn PG_16:\n1_ 5 of the local_blks_written is coming from:\nRelationGetBufferForTuple() -> RelationAddBlocks() ->\nExtendBufferedRelBy() -> ExtendBufferedRelCommon() ->\nExtendBufferedRelLocal() -> pgBufferUsage.local_blks_written +=\nextend_by; (extend_by is 1, this is called 5 times) [1]\n2_ 3 of the local_blks_written is coming from:\nRelationGetBufferForTuple() -> RecordAndGetPageWithFreeSpace() ->\nfsm_set_and_search() -> fsm_readbuf() -> fsm_extend() ->\nExtendBufferedRelTo() -> ExtendBufferedRelCommon() ->\nExtendBufferedRelLocal() -> pgBufferUsage.local_blks_written +=\nextend_by; (extend_by is 3, this is called 1 time) [2]\n\nI think [0] is the same path as [1] but [2] is new. 'fsm extends'\nwasn't counted in local_blks_written in PG_15. Calling\nExtendBufferedRelTo() from fsm_extend() caused 'fsm extends' to be\ncounted in local_blks_written. I am not sure which one is correct.\n\nI hope these help.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Wed, 13 Sep 2023 16:04:00 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BufferUsage counters' values have changed"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-11 09:23:59 +0300, Karina Litskevich wrote:\n> On Mon, Sep 11, 2023 at 9:08 AM Karina Litskevich <\n> [email protected]> wrote:\n> \n> > I found a bug that causes one of the differences. Wrong counter is\n> > incremented\n> > in ExtendBufferedRelLocal(). The attached patch fixes it and should be\n> > applied\n> > to REL_16_STABLE and master.\n> >\n> \n> I've forgotten to attach the patch. Here it is.\n\n> From 999a3d533a9b74c8568cc8a3d715c287de45dd2c Mon Sep 17 00:00:00 2001\n> From: Karina Litskevich <[email protected]>\n> Date: Thu, 7 Sep 2023 17:44:40 +0300\n> Subject: [PATCH v1] Fix local_blks_written counter incrementation\n> \n> ---\n> src/backend/storage/buffer/localbuf.c | 2 +-\n> 1 file changed, 1 insertion(+), 1 deletion(-)\n> \n> diff --git a/src/backend/storage/buffer/localbuf.c b/src/backend/storage/buffer/localbuf.c\n> index 1735ec7141..567b8d15ef 100644\n> --- a/src/backend/storage/buffer/localbuf.c\n> +++ b/src/backend/storage/buffer/localbuf.c\n> @@ -431,7 +431,7 @@ ExtendBufferedRelLocal(BufferManagerRelation bmr,\n> \n> \t*extended_by = extend_by;\n> \n> -\tpgBufferUsage.temp_blks_written += extend_by;\n> +\tpgBufferUsage.local_blks_written += extend_by;\n> \n> \treturn first_block;\n> }\n> -- \n> 2.34.1\n> \n\nUgh, you're right.\n\nThe naming of local vs temp here is pretty unfortunate imo. I wonder if we\nought to at least dd a comment to BufferUsage clarifying the situation? Just\nreading the comments therein one would be hard pressed to figure out which of\nthe variables temp table activity should be added to.\n\nI don't think we currently can write a test for this in the core tests, as the\nrelevant data isn't visible anywhere, iirc. Thus I added a test to\npg_stat_statements. Afaict it should be stable?\n\nRunning the attached patch through CI, planning to push after that succeeds,\nunless somebody has a comment?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 13 Sep 2023 11:59:39 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BufferUsage counters' values have changed"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-13 16:04:00 +0300, Nazir Bilal Yavuz wrote:\n> On Mon, 11 Sept 2023 at 14:28, Karina Litskevich\n> <[email protected]> wrote:\n> >\n> > Hi hackers,\n> >\n> > I noticed that BufferUsage counters' values are strangely different for the\n> > same queries on REL_15_STABLE and REL_16_STABLE. For example, I run\n> >\n> > CREATE EXTENSION pg_stat_statements;\n> > CREATE TEMP TABLE test(b int);\n> > INSERT INTO test(b) SELECT generate_series(1,1000);\n> > SELECT query, local_blks_hit, local_blks_read, local_blks_written,\n> > local_blks_dirtied, temp_blks_written FROM pg_stat_statements;\n> >\n> > and output on REL_15_STABLE contains\n> >\n> > query | INSERT INTO test(b) SELECT generate_series($1,$2)\n> > local_blks_hit | 1005\n> > local_blks_read | 2\n> > local_blks_written | 5\n> > local_blks_dirtied | 5\n> > temp_blks_written | 0\n> >\n> > while output on REL_16_STABLE contains\n> >\n> > query | INSERT INTO test(b) SELECT generate_series($1,$2)\n> > local_blks_hit | 1006\n> > local_blks_read | 0\n> > local_blks_written | 0\n> > local_blks_dirtied | 5\n> > temp_blks_written | 8\n> >\n> >\n> > I found a bug that causes one of the differences. Wrong counter is incremented\n> > in ExtendBufferedRelLocal(). The attached patch fixes it and should be applied\n> > to REL_16_STABLE and master. With the patch applied output contains\n>\n> Nice finding! I agree, it should be changed.\n>\n> > query | INSERT INTO test(b) SELECT generate_series($1,$2)\n> > local_blks_hit | 1006\n> > local_blks_read | 0\n> > local_blks_written | 8\n> > local_blks_dirtied | 5\n> > temp_blks_written | 0\n> >\n> >\n> > I still wonder why local_blks_written is greater than it was on REL_15_STABLE,\n> > and why local_blks_read became zero. 
These changes are caused by fcdda1e4b5.\n> > This code is new to me, and I'm still trying to understand whether it's a bug\n> > in computing the counters or just changes in how many blocks are read/written\n> > during the query execution. If anyone can help me, I would be grateful.\n>\n> I spent some time on it:\n>\n> local_blks_read became zero because:\n> 1_ One more cache hit. It was supposed to be local_blks_read but it is\n> local_blks_hit now. This is an assumption, I didn't check this deeply.\n> 2_ Before fcdda1e4b5, there was one local_blks_read coming from\n> buf = ReadBufferExtended(rel, VISIBILITYMAP_FORKNUM, blkno,\n> RBM_ZERO_ON_ERROR, NULL) in freespace.c -> ReadBuffer_common() ->\n> pgBufferUsage.local_blks_read++.\n> But buf = ReadBufferExtended(rel, VISIBILITYMAP_FORKNUM, blkno,\n> RBM_ZERO_ON_ERROR, NULL) is moved into the else case, so it didn't\n> called and local_blks_read isn't incremented.\n\nThat imo is a legitimate difference / improvement. The read we previously did\nhere was unnecessary.\n\n\n> local_blks_written is greater because of the combination of fcdda1e4b5\n> and 00d1e02be2.\n> In PG_15:\n> RelationGetBufferForTuple() -> ReadBufferBI(P_NEW, RBM_ZERO_AND_LOCK)\n> -> ReadBufferExtended() -> ReadBuffer_common() ->\n> pgBufferUsage.local_blks_written++; (called 5 times) [0]\n> In PG_16:\n> 1_ 5 of the local_blks_written is coming from:\n> RelationGetBufferForTuple() -> RelationAddBlocks() ->\n> ExtendBufferedRelBy() -> ExtendBufferedRelCommon() ->\n> ExtendBufferedRelLocal() -> pgBufferUsage.local_blks_written +=\n> extend_by; (extend_by is 1, this is called 5 times) [1]\n> 2_ 3 of the local_blks_written is coming from:\n> RelationGetBufferForTuple() -> RecordAndGetPageWithFreeSpace() ->\n> fsm_set_and_search() -> fsm_readbuf() -> fsm_extend() ->\n> ExtendBufferedRelTo() -> ExtendBufferedRelCommon() ->\n> ExtendBufferedRelLocal() -> pgBufferUsage.local_blks_written +=\n> extend_by; (extend_by is 3, this is called 1 time) [2]\n>\n> I 
think [0] is the same path as [1] but [2] is new. 'fsm extends'\n> wasn't counted in local_blks_written in PG_15. Calling\n> ExtendBufferedRelTo() from fsm_extend() caused 'fsm extends' to be\n> counted in local_blks_written. I am not sure which one is correct.\n\nI think it's correct to count the fsm writes here. The pg_stat_statement\ncolumns aren't specific to the main relation for or such... If anything it was\na bug to not count them before.\n\nThanks for looking into this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Sep 2023 12:10:30 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BufferUsage counters' values have changed"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-13 11:59:39 -0700, Andres Freund wrote:\n> Running the attached patch through CI, planning to push after that succeeds,\n> unless somebody has a comment?\n\nAnd pushed.\n\nThanks Karina for the report and fix!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Sep 2023 19:22:40 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BufferUsage counters' values have changed"
},
{
"msg_contents": "Nazir, Andres, thank you both for help!\n\nOn Wed, Sep 13, 2023 at 10:10 PM Andres Freund <[email protected]> wrote:\n\n> On 2023-09-13 16:04:00 +0300, Nazir Bilal Yavuz wrote:\n> > local_blks_read became zero because:\n> > 1_ One more cache hit. It was supposed to be local_blks_read but it is\n> > local_blks_hit now. This is an assumption, I didn't check this deeply.\n> > 2_ Before fcdda1e4b5, there was one local_blks_read coming from\n> > buf = ReadBufferExtended(rel, VISIBILITYMAP_FORKNUM, blkno,\n> > RBM_ZERO_ON_ERROR, NULL) in freespace.c -> ReadBuffer_common() ->\n> > pgBufferUsage.local_blks_read++.\n> > But buf = ReadBufferExtended(rel, VISIBILITYMAP_FORKNUM, blkno,\n> > RBM_ZERO_ON_ERROR, NULL) is moved into the else case, so it didn't\n> > called and local_blks_read isn't incremented.\n>\n> That imo is a legitimate difference / improvement. The read we previously\n> did\n> here was unnecessary.\n>\n>\n> > local_blks_written is greater because of the combination of fcdda1e4b5\n> > and 00d1e02be2.\n> > In PG_15:\n> > RelationGetBufferForTuple() -> ReadBufferBI(P_NEW, RBM_ZERO_AND_LOCK)\n> > -> ReadBufferExtended() -> ReadBuffer_common() ->\n> > pgBufferUsage.local_blks_written++; (called 5 times) [0]\n> > In PG_16:\n> > 1_ 5 of the local_blks_written is coming from:\n> > RelationGetBufferForTuple() -> RelationAddBlocks() ->\n> > ExtendBufferedRelBy() -> ExtendBufferedRelCommon() ->\n> > ExtendBufferedRelLocal() -> pgBufferUsage.local_blks_written +=\n> > extend_by; (extend_by is 1, this is called 5 times) [1]\n> > 2_ 3 of the local_blks_written is coming from:\n> > RelationGetBufferForTuple() -> RecordAndGetPageWithFreeSpace() ->\n> > fsm_set_and_search() -> fsm_readbuf() -> fsm_extend() ->\n> > ExtendBufferedRelTo() -> ExtendBufferedRelCommon() ->\n> > ExtendBufferedRelLocal() -> pgBufferUsage.local_blks_written +=\n> > extend_by; (extend_by is 3, this is called 1 time) [2]\n> >\n> > I think [0] is the same path as [1] but [2] is new. 'fsm extends'\n> > wasn't counted in local_blks_written in PG_15. Calling\n> > ExtendBufferedRelTo() from fsm_extend() caused 'fsm extends' to be\n> > counted in local_blks_written. I am not sure which one is correct.\n>\n> I think it's correct to count the fsm writes here. The pg_stat_statement\n> columns aren't specific to the main relation for or such... If anything it\n> was\n> a bug to not count them before.\n>\n\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/",
"msg_date": "Fri, 15 Sep 2023 10:37:10 +0300",
"msg_from": "Karina Litskevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BufferUsage counters' values have changed"
}
] |
[
{
"msg_contents": "I only add below:\n\nDatum fake_dinstance2(PG_FUNCTION_ARGS)\n{\n\tPG_RETURN_INT16(0);\n}\nin src/backend/utils/adt/int8.c, and then I run “make install”,\nbut I can’t find fake_dinstance2 in src/backend/utils/fmgrtab.c, which is\ngenerated by src/backend/utils/Gen_fmgrtab.pl. What else do I need to add?\n\n",
"msg_date": "Mon, 11 Sep 2023 22:52:53 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to add built-in func?"
},
{
"msg_contents": "Hi,\n\n> I only add below:\n>\n> Datum fake_dinstance2(PG_FUNCTION_ARGS)\n> {\n> PG_RETURN_INT16(0);\n> }\n> in src/backend/utils/adt/int8.c, and the I run “make install”,\n> But I can’t find the fake_distance2 in src/backend/utils/fmgrtab.c which is\n> generated by src/backend/utils/Gen_fmgrtab.pl. What else do I need to add?\n\nIf the goal is to add a function that can be executed by a user (e.g.\nvia psql) you have to add it to pg_proc.dat, or alternatively (and\noften better) add a corresponding extension to /contrib/. You can find\na complete example here [1] for instance, see v4-0001 patch and the\nfunction pg_get_relation_publishing_info(). Make sure it has a proper\nvolatility [2]. The patch [3] shows how to add an extension.\n\n[1]: https://postgr.es/m/CAAWbhmjcnoV7Xu6LHr_hxqWmVtehv404bvDye%2BQZcUDSg8NSKw%40mail.gmail.com\n[2]: https://www.postgresql.org/docs/current/xfunc-volatility.html\n[3]: https://postgr.es/m/CAJ7c6TMSat6qjPrrrK0tRTgZsdXwFAbkDn5gjeDtFnUFrjZX-g%40mail.gmail.com\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 11 Sep 2023 18:51:59 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to add built-in func?"
},
{
"msg_contents": "Hi\n\nOn Mon, Sep 11, 2023 at 17:59, jacktby jacktby <[email protected]>\nwrote:\n\n> I only add below:\n>\n> Datum fake_dinstance2(PG_FUNCTION_ARGS)\n> {\n> PG_RETURN_INT16(0);\n> }\n> in src/backend/utils/adt/int8.c, and the I run “make install”,\n> But I can’t find the fake_distance2 in src/backend/utils/fmgrtab.c which is\n> generated by src/backend/utils/Gen_fmgrtab.pl. What else do I need to add?\n>\n\nyou need to add the function metadata to pg_proc.dat\n\nFor free oid use unused_oids script\n\nRegards\n\nPavel",
"msg_date": "Mon, 11 Sep 2023 18:18:28 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to add built-in func?"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 18:18, Pavel Stehule <[email protected]>\nwrote:\n\n> Hi\n>\n> On Mon, Sep 11, 2023 at 17:59, jacktby jacktby <[email protected]>\n> wrote:\n>\n>> I only add below:\n>>\n>> Datum fake_dinstance2(PG_FUNCTION_ARGS)\n>> {\n>> PG_RETURN_INT16(0);\n>> }\n>> in src/backend/utils/adt/int8.c, and the I run “make install”,\n>> But I can’t find the fake_distance2 in src/backend/utils/fmgrtab.c which\n>> is\n>> generated by src/backend/utils/Gen_fmgrtab.pl. What else do I need to\n>> add?\n>>\n>\n> you need to add the function metadata to pg_proc.dat\n>\n> For free oid use unused_oids script\n>\n\nhttps://www.postgresql.org/docs/current/system-catalog-initial-data.html\n\nhttps://www.highgo.ca/2021/03/04/how-to-create-a-system-information-function-in-postgresql/\n\n\n\n> Regards\n>\n> Pavel\n>\n",
"msg_date": "Mon, 11 Sep 2023 18:19:49 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to add built-in func?"
},
{
"msg_contents": "\n\n> On Sep 11, 2023 at 23:51, Aleksander Alekseev <[email protected]> wrote:\n> \n> Hi,\n> \n>> I only add below:\n>> \n>> Datum fake_dinstance2(PG_FUNCTION_ARGS)\n>> {\n>> PG_RETURN_INT16(0);\n>> }\n>> in src/backend/utils/adt/int8.c, and the I run “make install”,\n>> But I can’t find the fake_distance2 in src/backend/utils/fmgrtab.c which is\n>> generated by src/backend/utils/Gen_fmgrtab.pl. What else do I need to add?\n> \n> If the goal is to add a function that can be executed by a user (e.g.\n> via psql) you have to add it to pg_proc.dat, or alternatively (and\n> often better) add a corresponding extension to /contrib/. You can find\n> a complete example here [1] for instance, see v4-0001 patch and the\n> function pg_get_relation_publishing_info(). Make sure it has a proper\n> volatility [2]. The patch [3] shows how to add an extension.\n> \n> [1]: https://postgr.es/m/CAAWbhmjcnoV7Xu6LHr_hxqWmVtehv404bvDye%2BQZcUDSg8NSKw%40mail.gmail.com\n> [2]: https://www.postgresql.org/docs/current/xfunc-volatility.html\n> [3]: https://postgr.es/m/CAJ7c6TMSat6qjPrrrK0tRTgZsdXwFAbkDn5gjeDtFnUFrjZX-g%40mail.gmail.com\n> -- \n> Best regards,\n> Aleksander Alekseev\nI need to use it for a new operator in my pg.\n\n",
"msg_date": "Tue, 12 Sep 2023 00:28:05 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to add built-in func?"
},
{
"msg_contents": "On 2023-09-11 12:28, jacktby jacktby wrote:\n>> 2023年9月11日 23:51,Aleksander Alekseev <[email protected]> 写道:\n>> often better) add a corresponding extension to /contrib/. You can find\n>> a complete example here [1] for instance, see v4-0001 patch and the\n>> function pg_get_relation_publishing_info(). Make sure it has a proper\n>> volatility [2]. The patch [3] shows how to add an extension.\n>> \n>> [1]: \n>> https://postgr.es/m/CAAWbhmjcnoV7Xu6LHr_hxqWmVtehv404bvDye%2BQZcUDSg8NSKw%40mail.gmail.com\n>> [2]: https://www.postgresql.org/docs/current/xfunc-volatility.html\n>> [3]: \n>> https://postgr.es/m/CAJ7c6TMSat6qjPrrrK0tRTgZsdXwFAbkDn5gjeDtFnUFrjZX-g%40mail.gmail.com\n>> --\n> I need to make it used for a new operator in my pg.\n\nYou can implement both a function and an operator (and all that goes \nwith)\nin an extension, without having to hack at all on PostgreSQL itself.\nYou can then, if it seems generally useful enough, offer that extension\nto go in contrib/. If it's agreed to be something everyone should have,\nit could then make its way into core.\n\nDo you have it working as an extension yet? That can be a good way\nto start, separating the difficulties you have to solve from the ones\nyou don't have to solve yet.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 11 Sep 2023 12:34:05 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to add built-in func?"
},
{
"msg_contents": "\n\n> On Sep 12, 2023 at 00:34, Chapman Flack <[email protected]> wrote:\n> \n> On 2023-09-11 12:28, jacktby jacktby wrote:\n>>> On Sep 11, 2023 at 23:51, Aleksander Alekseev <[email protected]> wrote:\n>>> often better) add a corresponding extension to /contrib/. You can find\n>>> a complete example here [1] for instance, see v4-0001 patch and the\n>>> function pg_get_relation_publishing_info(). Make sure it has a proper\n>>> volatility [2]. The patch [3] shows how to add an extension.\n>>> [1]: https://postgr.es/m/CAAWbhmjcnoV7Xu6LHr_hxqWmVtehv404bvDye%2BQZcUDSg8NSKw%40mail.gmail.com\n>>> [2]: https://www.postgresql.org/docs/current/xfunc-volatility.html\n>>> [3]: https://postgr.es/m/CAJ7c6TMSat6qjPrrrK0tRTgZsdXwFAbkDn5gjeDtFnUFrjZX-g%40mail.gmail.com\n>>> --\n>> I need to make it used for a new operator in my pg.\n> \n> You can implement both a function and an operator (and all that goes with)\n> in an extension, without having to hack at all on PostgreSQL itself.\n> You can then, if it seems generally useful enough, offer that extension\n> to go in contrib/. If it's agreed to be something everyone should have,\n> it could then make its way into core.\n> \n> Do you have it working as an extension yet? That can be a good way\n> to start, separating the difficulties you have to solve from the ones\n> you don't have to solve yet.\n> \n> Regards,\n> -Chap\nI solved it, but I need to use it in my new grammar, so I have to add it into core. That’s necessary. Thanks. But my own storage engine is implemented as an extension. Extensions are a good idea and I’m using them now.\n\n",
"msg_date": "Tue, 12 Sep 2023 13:11:23 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to add built-in func?"
}
] |
[
{
"msg_contents": "Hi,\n\nI have fallen into this trap and others have too. If you run\nEXPLAIN(ANALYZE) no de-toasting happens. This makes query-runtimes\ndiffer a lot. The bigger point is that the average user expects more\nfrom EXPLAIN(ANALYZE) than what it provides. This can be surprising. You\ncan force detoasting during explain with explicit calls to length(), but\nthat is tedious. Those of us who are forced to work using java stacks,\norms and still store mostly documents fall into this trap sooner or\nlater. I have already received some good feedback on this one, so this\nis an issue that bothers quite a few people out there.\n\nAttached is a patch for addressing the issue in the form of adding another\nparameter to explain. I don't know if that is a good idea, but I got\nsome feedback that a solution to this problem would be appreciated by\nsome people out there. It would also be nice to reflect the detoasting\nin the \"buffers\" option of explain as well. The change for detoasting is\nonly a few lines though.\n\nSo the idea was to allow this\n\nEXPLAIN (ANALYZE, DETOAST) SELECT * FROM sometable;\n\nand perform the detoasting step additionally during the explain. This\njust gives a more realistic runtime and by playing around with the\nparameter and comparing the execution-times of the query one even gets\nan impression about the detoasting cost involved in a query. 
Since the\nparameter is purely optional, it would not affect any existing measures.\n\nIt is not uncommon that the runtime of explain-analyze is way\nunrealistic in the real world, where people use PostgreSQL to store\nlarger and larger documents inside tables and not using Large-Objects.\n\n\nHere is a video of the effect (in an exagerated form):\nhttps://www.stepanrutz.com/short_detoast_subtitles.mp4\n\nIt would be great to get some feedback on the subject and how to address\nthis, maybe in totally different ways.\n\nGreetings from cologne, Stepan\n\n\nStepan Rutz - IT Consultant, Cologne Germany, stepan.rutz AT gmx.de",
"msg_date": "Tue, 12 Sep 2023 10:59:30 +0200",
"msg_from": "stepan rutz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "On Tue, 12 Sept 2023 at 12:56, stepan rutz <[email protected]> wrote:\n>\n> Hi,\n>\n> I have fallen into this trap and others have too. If you run\n> EXPLAIN(ANALYZE) no de-toasting happens. This makes query-runtimes\n> differ a lot. The bigger point is that the average user expects more\n> from EXPLAIN(ANALYZE) than what it provides. This can be suprising. You\n> can force detoasting during explain with explicit calls to length(), but\n> that is tedious. Those of us who are forced to work using java stacks,\n> orms and still store mostly documents fall into this trap sooner or\n> later. I have already received some good feedback on this one, so this\n> is an issue that bother quite a few people out there.\n\nYes, the lack of being able to see the impact of detoasting (amongst\nothers) in EXPLAIN (ANALYZE) can hide performance issues.\n\n> It would be great to get some feedback on the subject and how to address\n> this, maybe in totally different ways.\n\nHmm, maybe we should measure the overhead of serializing the tuples instead.\nThe difference between your patch and \"serializing the tuples, but not\nsending them\" is that serializing also does the detoasting, but also\nincludes any time spent in the serialization functions of the type. So\nan option \"SERIALIZE\" which measures all the time the server spent on\nthe query (except the final step of sending the bytes to the client)\nwould likely be more useful than \"just\" detoasting.\n\n> 0001_explain_analyze_and_detoast.patch\n\nI notice that this patch creates and destroys a memory context for\nevery tuple that the DestReceiver receives. I think that's quite\nwasteful, as you should be able to create only one memory context and\nreset it before (or after) each processed tuple. 
That also reduces the\ndifferences in measurements between EXPLAIN and normal query\nprocessing of the tuples - after all, we don't create new memory\ncontexts for every tuple in the normal DestRemote receiver either,\nright?\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 12 Sep 2023 14:25:40 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Hi Matthias,\n\nthanks for your feedback.\n\nI wasn't sure on the memory-contexts. I was actually also unsure on\nwhether the array\n\n    TupleTableSlot.tts_isnull\n\nis also set up correctly by the previous call to slot_getallattrs(slot).\nThis would allow me to get rid of even more code from this patch, which is\nin the loop and determines whether a field is null or not. I switched to\nusing tts_isnull from TupleTableSlot now, seems to be ok and the docs\nsay it is safe to use.\n\nAlso I reuse the MemoryContext throughout the lifetime of the receiver.\nNot sure if that makes a difference. One more thing I noticed. During\nexplain.c the DestReceiver's destroy callback was never invoked. I added\na line to do that, however I am unsure whether this is the right place\nor a good idea in the first place. This potentially affects plain\nanalyze calls as well, though seems to behave nicely. Even when I\nexplain (analyze) select * into ....\n\nThis is the call I am talking about, which was missing from explain.c\n\n    dest->rDestroy(dest);\n\n\nAttached a new patch. Hoping for feedback,\n\nGreetings, Stepan\n\n\nOn 12.09.23 14:25, Matthias van de Meent wrote:\n> On Tue, 12 Sept 2023 at 12:56, stepan rutz<[email protected]> wrote:\n>> Hi,\n>>\n>> I have fallen into this trap and others have too. If you run\n>> EXPLAIN(ANALYZE) no de-toasting happens. This makes query-runtimes\n>> differ a lot. The bigger point is that the average user expects more\n>> from EXPLAIN(ANALYZE) than what it provides. This can be suprising. You\n>> can force detoasting during explain with explicit calls to length(), but\n>> that is tedious. Those of us who are forced to work using java stacks,\n>> orms and still store mostly documents fall into this trap sooner or\n>> later. 
I have already received some good feedback on this one, so this\n>> is an issue that bother quite a few people out there.\n> Yes, the lack of being able to see the impact of detoasting (amongst\n> others) in EXPLAIN (ANALYZE) can hide performance issues.\n>\n>> It would be great to get some feedback on the subject and how to address\n>> this, maybe in totally different ways.\n> Hmm, maybe we should measure the overhead of serializing the tuples instead.\n> The difference between your patch and \"serializing the tuples, but not\n> sending them\" is that serializing also does the detoasting, but also\n> includes any time spent in the serialization functions of the type. So\n> an option \"SERIALIZE\" which measures all the time the server spent on\n> the query (except the final step of sending the bytes to the client)\n> would likely be more useful than \"just\" detoasting.\n>\n>> 0001_explain_analyze_and_detoast.patch\n> I notice that this patch creates and destroys a memory context for\n> every tuple that the DestReceiver receives. I think that's quite\n> wasteful, as you should be able to create only one memory context and\n> reset it before (or after) each processed tuple. That also reduces the\n> differences in measurements between EXPLAIN and normal query\n> processing of the tuples - after all, we don't create new memory\n> contexts for every tuple in the normal DestRemote receiver either,\n> right?\n>\n> Kind regards,\n>\n> Matthias van de Meent",
"msg_date": "Tue, 12 Sep 2023 17:16:00 +0200",
"msg_from": "stepan rutz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Matthias van de Meent <[email protected]> writes:\n> Hmm, maybe we should measure the overhead of serializing the tuples instead.\n> The difference between your patch and \"serializing the tuples, but not\n> sending them\" is that serializing also does the detoasting, but also\n> includes any time spent in the serialization functions of the type. So\n> an option \"SERIALIZE\" which measures all the time the server spent on\n> the query (except the final step of sending the bytes to the client)\n> would likely be more useful than \"just\" detoasting.\n\n+1, that was my immediate reaction to the proposal as well. Some\noutput functions are far from cheap. Doing only the detoast part\nseems like it's still misleading.\n\nDo we need to go as far as offering both text-output and binary-output\noptions?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Sep 2023 11:26:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Hi Stepan & all,\n\nOn Tue, 12 Sep 2023 17:16:00 +0200\nstepan rutz <[email protected]> wrote:\n\n...\n> Attached a new patch. Hoping for feedback,\n\nNice addition to EXPLAIN!\n\nOn the feature front, what about adding the actual detoasting/serializing time\nin the explain output?\n\nThat could be:\n\n => explain (analyze,serialize,costs off,timing off) \n select * from test_detoast;\n QUERY PLAN \n ─────────────────────────────────────────────────────────\n Seq Scan on public.test_detoast (actual rows=Nv loops=N)\n Planning Time: N ms\n Execution Time: N ms\n Serialize Time: N ms\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 13:09:04 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Hi Tom, Hi Matthias,\n\nyou are right of course. I have looked at the code from printtup.c and\nmade a new version of the patch.\n\nThanks for the MemoryContextReset hint too (@Matthias)\n\nThis time it is called EXPLAIN(ANALYZE,SERIALIZE) (hey, it also sounds\nnicer phonetically)\n\nIf the option SERIALIZE is active, the output functions are called and\nthey perform the detoasting, which I have even checked.\n\nSo things are better this way, however I hardcoded the output option\n\"Text\" (format=0). In printtup.c there is an incoming array which\napplies Text (format=0) or Binary (format=1) for each column\nindividually. I am not sure whether this is even needed. I left in the\nif-statement from printtup.c which calls the binary output method of a\ngiven type. The result of the output is ignored and apparently free'd\nbecause of the memory-context-reset at the end.\n\nPlease also note that I added a call to DestReceiver's rDestroy hook,\nwhich was missing from explain.c before altogether.\n\nFeedback is appreciated.\n\n/Stepan\n\n\nOn 12.09.23 17:26, Tom Lane wrote:\n> Matthias van de Meent <[email protected]> writes:\n>> Hmm, maybe we should measure the overhead of serializing the tuples instead.\n>> The difference between your patch and \"serializing the tuples, but not\n>> sending them\" is that serializing also does the detoasting, but also\n>> includes any time spent in the serialization functions of the type. So\n>> an option \"SERIALIZE\" which measures all the time the server spent on\n>> the query (except the final step of sending the bytes to the client)\n>> would likely be more useful than \"just\" detoasting.\n> +1, that was my immediate reaction to the proposal as well. Some\n> output functions are far from cheap. Doing only the detoast part\n> seems like it's still misleading.\n>\n> Do we need to go as far as offering both text-output and binary-output\n> options?\n>\n> \t\t\tregards, tom lane",
"msg_date": "Thu, 14 Sep 2023 21:27:18 +0200",
"msg_from": "stepan rutz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Hi,\n\nplease see a revised version of yesterday's mail. The patch attached now\nprovides the following:\n\nEXPLAIN(ANALYZE,SERIALIZE)\n\nand\n\nEXPLAIN(ANALYZE,SERIALIZEBINARY)\n\nand timing output.\n\nBoth options perform the serialization during analyze and provide an\nadditional output in the plan like this:\n\n\ntemplate1=# explain (analyze,serialize) select * from t12 limit 1;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n\n ...\n\n Serialized Bytes: 36 bytes\n Execution Time: 0.035 ms\n(5 rows)\n\nor also this\n\n\ntemplate1=# explain (analyze,serialize) select * from t1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Seq Scan on t1 (cost=0.00..1.02 rows=2 width=19) (actual\ntime=0.101..0.111 rows=5 loops=1)\n Planning Time: 0.850 ms\n Serialized Bytes: 85777978 bytes\n Execution Time: 354.284 ms\n(4 rows)\n\n\nIt's tempting to divide Serialized-Bytes by Execution-Time to get an idea\nof the serialization bandwidth. This is /dev/null serialization though.\nThe results are length-counted and then discarded.\n\nSince detoasting happens implicitly during serialization, the number of\nbytes becomes huge in this case and accounts for the detoasted lengths\nas well. I tried to get the number of bytes sent for the protocol's\nmessages and the attribute headers correctly. For the actual values I am\nquite sure I get the correct measures, as one can really tell by sending\nmore values across. Null is 4 bytes on the wire interestingly. I didn't\nknow that, but it makes sense, since it's using the same prefix\nlength-field as all values do.\n\nI have checked the JDBC driver and it uses binary and text formats\ndepending on an attribute's type oid. 
So having the SERIALIZEBINARY\noption is not accurate, as in reality both formats can occur for the\nsame tuple.\n\nPlease provide some feedback on the new patch and let me know if this\nmakes sense. In general this kind of option for EXPLAIN is a good thing\nfor sure.\n\n\nGreetings,\n\nStepan\n\n\nOn 14.09.23 21:27, stepan rutz wrote:\n> Hi Tom, Hi Matthias,\n>\n> you are right of course. I have looked at the code from printtup.c and\n> made a new version of the patch.\n>\n> Thanks for the MemoryContextReset hint too (@Matthias)\n>\n> This time is called EXPLAIN(ANALYZE,SERIALIZE) (hey, it also sounds\n> nicer phonetically)\n>\n> If the option SERIALIZE is active, the output functions are called and\n> they perform the detoasting, which I have even checked.\n>\n> So things are better this way, however I hardcoded the output option\n> \"Text\" (format=0). In printtup.c there is an incoming array which\n> applies Text (format=0) or Binary (format=1) for each column\n> individually. I am not sure whether this is even needed. I left in the\n> if-statement from printtup.c which calls the binary output method of a\n> given type. The result of the output is ignored and apparently free'd\n> because of the memory-context-reset at the end.\n>\n> Please also note, that I added a call to DestReceiver's rDestroy hook,\n> which was missing from explain.c before altogether.\n>\n> Feedback is appreciated.\n>\n> /Stepan\n>\n>\n> On 12.09.23 17:26, Tom Lane wrote:\n>> Matthias van de Meent <[email protected]> writes:\n>>> Hmm, maybe we should measure the overhead of serializing the tuples\n>>> instead.\n>>> The difference between your patch and \"serializing the tuples, but not\n>>> sending them\" is that serializing also does the detoasting, but also\n>>> includes any time spent in the serialization functions of the type. 
So\n>>> an option \"SERIALIZE\" which measures all the time the server spent on\n>>> the query (except the final step of sending the bytes to the client)\n>>> would likely be more useful than \"just\" detoasting.\n>> +1, that was my immediate reaction to the proposal as well. Some\n>> output functions are far from cheap. Doing only the detoast part\n>> seems like it's still misleading.\n>>\n>> Do we need to go as far as offering both text-output and binary-output\n>> options?\n>>\n>> regards, tom lane",
"msg_date": "Fri, 15 Sep 2023 22:09:41 +0200",
"msg_from": "stepan rutz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Hi,\n\nOn 9/15/23 22:09, stepan rutz wrote:\n> Hi,\n> \n> please see a revised version yesterday's mail. The patch attached now\n> provides the following:\n> \n> EXPLAIN(ANALYZE,SERIALIZE)\n> \n> and\n> \n> EXPLAIN(ANALYZE,SERIALIZEBINARY)\n> \n\nI haven't looked at the patch in detail yet, but this option name looks\na bit strange/inconsistent. Either it should be SERIALIZE_BINARY (to\nmatch other multi-word options), or maybe there should be just SERIALIZE\nwith a parameter to determine text/binary (like FORMAT, for example).\n\nSo we'd do either\n\n EXPLAIN (SERIALIZE)\n EXPLAIN (SERIALIZE TEXT)\n\nto get serialization to text (which I guess 99% of people will do), or\n\n EXPLAIN (SERIALIZE BINARY)\n\nto get binary.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Nov 2023 18:49:34 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Hi Tomas,\n\nyou are right of course. Thanks!\n\nI have attached a new version of the patch that supports the syntax like\nsuggested. The previous patch was inconsistent in style indeed.\n\nexplain (analyze, serialize)\n\nand\n\nexplain (analyze, serialize binary)\n\nThat doesn't make too much of a difference for most scenarios I am\ncertain. However the serialize option itself does. Mostly because it\nperforms the detoasting and that was a trap for me in the past with just\nplain analyze.\n\n\nEg this scenario really is not too far fetched in a world where people\nhave large JSONB values.\n\n\ndb1=# create table test(id bigint, val text);\n\ndb1=# insert into test(val) select string_agg(s::text, ',') from (select\ngenerate_series(1, 10_000_000) as s) as a1;\n\nnow we have a cell that has roughly 80Mb in it. A large detoasting that\nwill happen in real life but not in explain(analyze).\n\nand then...\n\ndb1=# explain (analyze) select * from test;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..22.00 rows=1200 width=40) (actual\ntime=0.018..0.020 rows=1 loops=1)\n Planning Time: 0.085 ms\n Execution Time: 0.044 ms\n(3 rows)\n\ndb1=# explain (analyze, serialize) select * from test;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..22.00 rows=1200 width=40) (actual\ntime=0.023..0.027 rows=1 loops=1)\n Planning Time: 0.077 ms\n Execution Time: 303.281 ms\n Serialized Bytes: 78888953 Bytes. Mode Text. Bandwidth 248.068 MB/sec\n(4 rows)\n\ndb1=#\n\nSo the explain(analyze) does not process the ~80 MB in 0.044ms in any\nway of course.\n\nActually I could print the serialized bytes using 1. grouping-separators\n(eg 78_888_953) or 2. 
in the way pg_size_pretty does it.\n\nIf doing it the pg_size_pretty way I am uncertain if it would be ok to\nquery the actual pg_size_pretty function via its (certainly frozen) oid\nof 3166 and do OidFunctionCall1(3166...) to invoke it. Otherwise I'd say\nit would be nice if the code from that function would be made available\nas a utility function for all c-code. Any suggestions on this topic?\n\nRegards,\n\n/Stepan\n\n\nOn 02.11.23 18:49, Tomas Vondra wrote:\n> Hi,\n>\n> On 9/15/23 22:09, stepan rutz wrote:\n>> Hi,\n>>\n>> please see a revised version yesterday's mail. The patch attached now\n>> provides the following:\n>>\n>> EXPLAIN(ANALYZE,SERIALIZE)\n>>\n>> and\n>>\n>> EXPLAIN(ANALYZE,SERIALIZEBINARY)\n>>\n> I haven't looked at the patch in detail yet, but this option name looks\n> a bit strange/inconsistent. Either it should be SERIALIZE_BINARY (to\n> match other multi-word options), or maybe there should be just SERIALIZE\n> with a parameter to determine text/binary (like FORMAT, for example).\n>\n> So we'd do either\n>\n> EXPLAIN (SERIALIZE)\n> EXPLAIN (SERIALIZE TEXT)\n>\n> to get serialization to text (which I guess 99% of people will do), or\n>\n> EXPLAIN (SERIALIZE BINARY)\n>\n> to get binary.\n>\n>\n> regards\n>",
"msg_date": "Thu, 2 Nov 2023 20:09:30 +0100",
"msg_from": "stepan rutz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "\n\nOn 11/2/23 20:09, stepan rutz wrote:\n> Hi Thomas,\n> \n> you are right of course. Thanks!\n> \n> I have attached a new version of the patch that supports the syntax like\n> suggested. The previous patch was insonsistent in style indeed.\n> \n> explain (analyze, serialize)\n> \n> and\n> \n> explain (analyze, serialize binary)\n> \n> That doesn't make too much of a difference for most scenarios I am\n> certain. However the the seralize option itself does. Mostly because it\n> performs the detoasting and that was a trap for me in the past with just\n> plain analyze.\n> \n> \n> Eg this scenario really is not too far fetched in a world where people\n> have large JSONB values.\n> \n> \n> db1=# create table test(id bigint, val text);\n> \n> db1=# insert into test(val) select string_agg(s::text, ',') from (select\n> generate_series(1, 10_000_000) as s) as a1;\n> \n> now we have a cell that has roughly 80Mb in it. A large detoasting that\n> will happen in reallife but in explain(analyze).\n> \n> and then...\n> \n> db1=# explain (analyze) select * from test;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------\n> Seq Scan on test (cost=0.00..22.00 rows=1200 width=40) (actual\n> time=0.018..0.020 rows=1 loops=1)\n> Planning Time: 0.085 ms\n> Execution Time: 0.044 ms\n> (3 rows)\n> \n> db1=# explain (analyze, serialize) select * from test;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------\n> Seq Scan on test (cost=0.00..22.00 rows=1200 width=40) (actual\n> time=0.023..0.027 rows=1 loops=1)\n> Planning Time: 0.077 ms\n> Execution Time: 303.281 ms\n> Serialized Bytes: 78888953 Bytes. Mode Text. Bandwidth 248.068 MB/sec\n> (4 rows)\n> \n> db1=#\n> \n> So the explain(analyze) does not process the ~80 MB in 0.044ms in any\n> way of course.\n\nHonestly, I see absolutely no point in printing this. 
I have little idea\nwhat to do with the \"bytes\". We have to transfer this over network, but\nsurely there's other data not included in this sum, right?\n\nBut the bandwidth seems pretty bogus/useless, as it's calculated from\nexecution time, which includes everything else, not just serialization.\nSo what does it say? It certainly does not include the network transfer.\n\nIMO we should either print nothing or just the bytes. I don't see the\npoint in printing the mode, which is determined by the command.\n\n> \n> Actually I could print the serialized bytes using 1. grouping-separators\n> (eg 78_888_953) or 2. in the way pg_size_pretty does it.\n> \n> If doing it the pg_size_pretty way I am uncertain if it would be ok to\n> query the actual pg_size_pretty function via its (certainly frozen) oid\n> of 3166 and do OidFunctionCall1(3166...) to invoke it. Otherwise I'd say\n> it would be nice if the code from that function would be made available\n> as a utility function for all c-code. Any suggestions on this topic?\n> \n\nI'm rather skeptical about this proposal, mostly because it makes it\nharder to process the explain output in scripts etc.\n\nBut more importantly, it's a completely separate matter from what this\npatch does, so if you want to pursue this, I suggest you start a\nseparate thread. If you want to introduce separators, surely this is not\nthe only place that should do it (e.g. why not to do that for \"rows\" or\n\"cost\" estimates)?\n\nBTW if you really want to print amount of memory, maybe print it in\nkilobytes, like every other place in explain.c? Also, explain generally\nprints stuff in \"key: value\" style (in text format).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Nov 2023 20:32:27 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Hi Thomas,\n\nindeed by doing less the code also becomes trivial and\nExplainPropertyInteger can be used as a one-liner.\n\nMy intention was to actually get the realistic payload-bytes from the\nwire-protocol counted by the serialization option. I am also adding the\nprotocol bits and the length of the data that is generated by\nserialization output-functions. So it should (hopefully) be the real\nnumbers.\n\nAttached is a new revision of the patch that prints kB (floor'ed by\ninteger-division by 1024). Maybe that is also misleading and bytes would\nbe nicer (though harder to read).\n\nThe output is now as follows:\n\ndb1=# explain (analyze, serialize) select * from test;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..22.00 rows=1200 width=40) (actual\ntime=0.014..0.017 rows=1 loops=1)\n Planning Time: 0.245 ms\n Execution Time: 292.983 ms\n Serialized Bytes: 77039 kB\n(4 rows)\n\nDefinitely a lot nicer and controlled by ExplainPropertyInteger in\nterms of formatting.\n\nThe main motivation was to actually get a correct feeling for the\nexecution time. Actually counting the bytes gives an impression of what\nwould go over the wire. Only the big numbers matter here of course.\n\nRegards, Stepan\n\n\n\nOn 02.11.23 20:32, Tomas Vondra wrote:\n>\n> On 11/2/23 20:09, stepan rutz wrote:\n>> Hi Thomas,\n>>\n>> you are right of course. Thanks!\n>>\n>> I have attached a new version of the patch that supports the syntax like\n>> suggested. The previous patch was insonsistent in style indeed.\n>>\n>> explain (analyze, serialize)\n>>\n>> and\n>>\n>> explain (analyze, serialize binary)\n>>\n>> That doesn't make too much of a difference for most scenarios I am\n>> certain. However the the seralize option itself does. 
Mostly because it\n>> performs the detoasting and that was a trap for me in the past with just\n>> plain analyze.\n>>\n>>\n>> Eg this scenario really is not too far fetched in a world where people\n>> have large JSONB values.\n>>\n>>\n>> db1=# create table test(id bigint, val text);\n>>\n>> db1=# insert into test(val) select string_agg(s::text, ',') from (select\n>> generate_series(1, 10_000_000) as s) as a1;\n>>\n>> now we have a cell that has roughly 80Mb in it. A large detoasting that\n>> will happen in reallife but in explain(analyze).\n>>\n>> and then...\n>>\n>> db1=# explain (analyze) select * from test;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------\n>> Seq Scan on test (cost=0.00..22.00 rows=1200 width=40) (actual\n>> time=0.018..0.020 rows=1 loops=1)\n>> Planning Time: 0.085 ms\n>> Execution Time: 0.044 ms\n>> (3 rows)\n>>\n>> db1=# explain (analyze, serialize) select * from test;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------\n>> Seq Scan on test (cost=0.00..22.00 rows=1200 width=40) (actual\n>> time=0.023..0.027 rows=1 loops=1)\n>> Planning Time: 0.077 ms\n>> Execution Time: 303.281 ms\n>> Serialized Bytes: 78888953 Bytes. Mode Text. Bandwidth 248.068 MB/sec\n>> (4 rows)\n>>\n>> db1=#\n>>\n>> So the explain(analyze) does not process the ~80 MB in 0.044ms in any\n>> way of course.\n> Honestly, I see absolutely no point in printing this. I have little idea\n> what to do with the \"bytes\". We have to transfer this over network, but\n> surely there's other data not included in this sum, right?\n>\n> But the bandwidth seems pretty bogus/useless, as it's calculated from\n> execution time, which includes everything else, not just serialization.\n> So what does it say? It certainly does not include the network transfer.\n>\n> IMO we should either print nothing or just the bytes. 
I don't see the\n> point in printing the mode, which is determined by the command.\n>\n>> Actually I could print the serialized bytes using 1. grouping-separators\n>> (eg 78_888_953) or 2. in the way pg_size_pretty does it.\n>>\n>> If doing it the pg_size_pretty way I am uncertain if it would be ok to\n>> query the actual pg_size_pretty function via its (certainly frozen) oid\n>> of 3166 and do OidFunctionCall1(3166...) to invoke it. Otherwise I'd say\n>> it would be nice if the code from that function would be made available\n>> as a utility function for all c-code. Any suggestions on this topic?\n>>\n> I'm rather skeptical about this proposal, mostly because it makes it\n> harder to process the explain output in scripts etc.\n>\n> But more importantly, it's a completely separate matter from what this\n> patch does, so if you want to pursue this, I suggest you start a\n> separate thread. If you want to introduce separators, surely this is not\n> the only place that should do it (e.g. why not to do that for \"rows\" or\n> \"cost\" estimates)?\n>\n> BTW if you really want to print amount of memory, maybe print it in\n> kilobytes, like every other place in explain.c? Also, explain generally\n> prints stuff in \"key: value\" style (in text format).\n>\n>\n> regards\n>",
"msg_date": "Thu, 2 Nov 2023 20:59:41 +0100",
"msg_from": "stepan rutz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "On Thu, 2 Nov 2023 at 20:32, Tomas Vondra <[email protected]> wrote:\n> On 11/2/23 20:09, stepan rutz wrote:\n> > db1=# explain (analyze, serialize) select * from test;\n> > QUERY PLAN\n> > ---------------------------------------------------------------------------------------------------\n> > Seq Scan on test (cost=0.00..22.00 rows=1200 width=40) (actual\n> > time=0.023..0.027 rows=1 loops=1)\n> > Planning Time: 0.077 ms\n> > Execution Time: 303.281 ms\n> > Serialized Bytes: 78888953 Bytes. Mode Text. Bandwidth 248.068 MB/sec\n> [...]\n> BTW if you really want to print amount of memory, maybe print it in\n> kilobytes, like every other place in explain.c?\n\nIsn't node width in bytes, or is it an opaque value not to be\ninterpreted by users? I've never really investigated that part of\nPostgres' explain output...\n\n> Also, explain generally\n> prints stuff in \"key: value\" style (in text format).\n\nThat'd be key: metrickey=metricvalue for expanded values like those in\nplan nodes and the buffer usage, no?\n\n> > Serialized Bytes: 78888953 Bytes. Mode Text. Bandwidth 248.068 MB/sec\n\nI was thinking more along the lines of something like this:\n\n[...]\nExecution Time: xxx ms\nSerialization: time=yyy.yyy (in ms) size=yyy (in KiB, or B) mode=text\n(or binary)\n\nThis is significantly different from your output, as it doesn't hide\nthe measured time behind a lossy calculation of bandwidth, but gives\nthe measured data to the user; allowing them to derive their own\nprecise bandwidth if they're so inclined.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 2 Nov 2023 21:02:08 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "\n\nOn 11/2/23 21:02, Matthias van de Meent wrote:\n> On Thu, 2 Nov 2023 at 20:32, Tomas Vondra <[email protected]> wrote:\n>> On 11/2/23 20:09, stepan rutz wrote:\n>>> db1=# explain (analyze, serialize) select * from test;\n>>> QUERY PLAN\n>>> ---------------------------------------------------------------------------------------------------\n>>> Seq Scan on test (cost=0.00..22.00 rows=1200 width=40) (actual\n>>> time=0.023..0.027 rows=1 loops=1)\n>>> Planning Time: 0.077 ms\n>>> Execution Time: 303.281 ms\n>>> Serialized Bytes: 78888953 Bytes. Mode Text. Bandwidth 248.068 MB/sec\n>> [...]\n>> BTW if you really want to print amount of memory, maybe print it in\n>> kilobytes, like every other place in explain.c?\n> \n> Isn't node width in bytes, or is it an opaque value not to be\n> interpreted by users? I've never really investigated that part of\n> Postgres' explain output...\n> \n\nRight, \"width=\" is always in bytes. But fields like amount of sorted\ndata is in kB, and this seems closer to that.\n\n>> Also, explain generally\n>> prints stuff in \"key: value\" style (in text format).\n> \n> That'd be key: metrickey=metricvalue for expanded values like those in\n> plan nodes and the buffer usage, no?\n> \n\nPossibly. But the proposed output does neither. Also, it starts with\n\"Serialized Bytes\" but then prints info about bandwidth.\n\n\n>>> Serialized Bytes: 78888953 Bytes. Mode Text. Bandwidth 248.068 MB/sec\n> \n> I was thinking more along the lines of something like this:\n> \n> [...]\n> Execution Time: xxx ms\n> Serialization: time=yyy.yyy (in ms) size=yyy (in KiB, or B) mode=text\n> (or binary)\n> > This is significantly different from your output, as it doesn't hide\n> the measured time behind a lossy calculation of bandwidth, but gives\n> the measured data to the user; allowing them to derive their own\n> precise bandwidth if they're so inclined.\n> \n\nMight work. 
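Deriving it client-side is then a trivial post-processing step. For illustration, a quick sketch against the proposed field names (which are of course not settled; I'm plugging in the numbers from the outputs upthread, taking time in ms and size in kB):

```python
import re

# Hypothetical post-processing of the proposed one-line format; the
# field names follow the proposal quoted above, not committed syntax.
line = 'Serialization: time=303.281 size=77039 mode=text'

m = re.search(r'time=([0-9.]+) size=([0-9]+) mode=([a-z]+)', line)
time_ms, size_kb, mode = float(m.group(1)), int(m.group(2)), m.group(3)

# kB -> MiB and ms -> s: the same arithmetic the earlier in-server
# Bandwidth output performed, now done by the reader on demand.
mb_per_sec = (size_kb / 1024) / (time_ms / 1000)
print(round(mb_per_sec, 1), 'MB/sec', mode)
```

So anyone who wants a bandwidth figure can recompute it precisely, without EXPLAIN baking a lossy number into its output.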
I'm still not convinced we need to include the mode, or that\nthe size is that interesting/useful, though.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Nov 2023 22:25:16 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "On Thu, 2 Nov 2023 at 22:25, Tomas Vondra <[email protected]> wrote:\n>\n>\n>\n> On 11/2/23 21:02, Matthias van de Meent wrote:\n> > On Thu, 2 Nov 2023 at 20:32, Tomas Vondra <[email protected]> wrote:\n> >> On 11/2/23 20:09, stepan rutz wrote:\n> >>> db1=# explain (analyze, serialize) select * from test;\n> >>> QUERY PLAN\n> >>> ---------------------------------------------------------------------------------------------------\n> >>> Seq Scan on test (cost=0.00..22.00 rows=1200 width=40) (actual\n> >>> time=0.023..0.027 rows=1 loops=1)\n> >>> Planning Time: 0.077 ms\n> >>> Execution Time: 303.281 ms\n> >>> Serialized Bytes: 78888953 Bytes. Mode Text. Bandwidth 248.068 MB/sec\n> >> [...]\n> >> BTW if you really want to print amount of memory, maybe print it in\n> >> kilobytes, like every other place in explain.c?\n> >\n> > Isn't node width in bytes, or is it an opaque value not to be\n> > interpreted by users? I've never really investigated that part of\n> > Postgres' explain output...\n> >\n>\n> Right, \"width=\" is always in bytes. But fields like amount of sorted\n> data is in kB, and this seems closer to that.\n>\n> >> Also, explain generally\n> >> prints stuff in \"key: value\" style (in text format).\n> >\n> > That'd be key: metrickey=metricvalue for expanded values like those in\n> > plan nodes and the buffer usage, no?\n> >\n>\n> Possibly. But the proposed output does neither. Also, it starts with\n> \"Serialized Bytes\" but then prints info about bandwidth.\n>\n>\n> >>> Serialized Bytes: 78888953 Bytes. Mode Text. 
Bandwidth 248.068 MB/sec\n> >\n> > I was thinking more along the lines of something like this:\n> >\n> > [...]\n> > Execution Time: xxx ms\n> > Serialization: time=yyy.yyy (in ms) size=yyy (in KiB, or B) mode=text\n> > (or binary)\n> > > This is significantly different from your output, as it doesn't hide\n> > the measured time behind a lossy calculation of bandwidth, but gives\n> > the measured data to the user; allowing them to derive their own\n> > precise bandwidth if they're so inclined.\n> >\n>\n> Might work. I'm still not convinced we need to include the mode, or that\n> the size is that interesting/useful, though.\n\nI'd say size is interesting for systems where network bandwidth is\nconstrained, but CPU isn't. We currently only show estimated widths &\naccurate number of tuples returned, but that's not an accurate\nexplanation of why your 30-row 3GB resultset took 1h to transmit on a\n10mbit line - that is only explained by the bandwidth of your\nconnection and the size of the dataset. As we can measure the size of\nthe returned serialized dataset here, I think it's in the interest of\nany debuggability to also present it to the user. Sadly, we don't have\ngood measures of bandwidth without sending that data across, so that's\nthe only metric that we can't show here, but total query data size is\ndefinitely something that I'd be interested in here.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 2 Nov 2023 22:33:57 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "\n\nOn 11/2/23 22:33, Matthias van de Meent wrote:\n> On Thu, 2 Nov 2023 at 22:25, Tomas Vondra <[email protected]> wrote:\n>>\n>>\n>>\n>> On 11/2/23 21:02, Matthias van de Meent wrote:\n>>> On Thu, 2 Nov 2023 at 20:32, Tomas Vondra <[email protected]> wrote:\n>>>> On 11/2/23 20:09, stepan rutz wrote:\n>>>>> db1=# explain (analyze, serialize) select * from test;\n>>>>> QUERY PLAN\n>>>>> ---------------------------------------------------------------------------------------------------\n>>>>> Seq Scan on test (cost=0.00..22.00 rows=1200 width=40) (actual\n>>>>> time=0.023..0.027 rows=1 loops=1)\n>>>>> Planning Time: 0.077 ms\n>>>>> Execution Time: 303.281 ms\n>>>>> Serialized Bytes: 78888953 Bytes. Mode Text. Bandwidth 248.068 MB/sec\n>>>> [...]\n>>>> BTW if you really want to print amount of memory, maybe print it in\n>>>> kilobytes, like every other place in explain.c?\n>>>\n>>> Isn't node width in bytes, or is it an opaque value not to be\n>>> interpreted by users? I've never really investigated that part of\n>>> Postgres' explain output...\n>>>\n>>\n>> Right, \"width=\" is always in bytes. But fields like amount of sorted\n>> data is in kB, and this seems closer to that.\n>>\n>>>> Also, explain generally\n>>>> prints stuff in \"key: value\" style (in text format).\n>>>\n>>> That'd be key: metrickey=metricvalue for expanded values like those in\n>>> plan nodes and the buffer usage, no?\n>>>\n>>\n>> Possibly. But the proposed output does neither. Also, it starts with\n>> \"Serialized Bytes\" but then prints info about bandwidth.\n>>\n>>\n>>>>> Serialized Bytes: 78888953 Bytes. Mode Text. 
Bandwidth 248.068 MB/sec\n>>>\n>>> I was thinking more along the lines of something like this:\n>>>\n>>> [...]\n>>> Execution Time: xxx ms\n>>> Serialization: time=yyy.yyy (in ms) size=yyy (in KiB, or B) mode=text\n>>> (or binary)\n>>>> This is significantly different from your output, as it doesn't hide\n>>> the measured time behind a lossy calculation of bandwidth, but gives\n>>> the measured data to the user; allowing them to derive their own\n>>> precise bandwidth if they're so inclined.\n>>>\n>>\n>> Might work. I'm still not convinced we need to include the mode, or that\n>> the size is that interesting/useful, though.\n> \n> I'd say size is interesting for systems where network bandwidth is\n> constrained, but CPU isn't. We currently only show estimated widths &\n> accurate number of tuples returned, but that's not an accurate\n> explanation of why your 30-row 3GB resultset took 1h to transmit on a\n> 10mbit line - that is only explained by the bandwidth of your\n> connection and the size of the dataset. As we can measure the size of\n> the returned serialized dataset here, I think it's in the interest of\n> any debugability to also present it to the user. Sadly, we don't have\n> good measures of bandwidth without sending that data across, so that's\n> the only metric that we can't show here, but total query data size is\n> definitely something that I'd be interested in here.\n\nYeah, I agree with that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Nov 2023 23:24:36 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Hi,\n\nI've taken the liberty to update this patch, and register it in the\ncommitfest app to not lose track of progress [0].\n\nThe attached v8 patch measures scratch memory allocations (with MEMORY\noption), total time spent in serialization (with TIMING on, measures\nare inclusive of unseparated memcpy to the message buffer), and a\ncount of produced bytes plus the output format used (text or binary).\nIt's a light rework of the earlier 0007 patch, I've reused tests and\nsome infrastructure, while the implementation details and comments\nhave been updated significantly.\n\nI think we can bikeshed on format and names, but overall I think the\npatch is in a very decent shape.\n\nStepan, thank you for your earlier work, and feel free to check it out\nor pick it up again if you want to; else I'll try to get this done.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://commitfest.postgresql.org/47/4852/",
"msg_date": "Mon, 26 Feb 2024 20:30:56 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Hi Matthias, thanks for picking it up. I still believe this is valuable\nto a lot of people out there. Thanks for dealing with my proposal.\nMatthias, Tom, Tomas, everyone.\n\nTwo (more or less) controversial remarks from my side.\n\n1. Actually serialization should be the default for \"analyze\" in\nexplain, as current analyze doesn't detoast and thus distorts the result\nin extreme (but common) cases easily by many orders of magnitude (see my\noriginal video on that one). So current \"explain analyze\" only works for\nsome queries and since detoasting is really transparent, it is quite\nsomething to leave detoasting out of explain analyze. This surprises\npeople all the time, since explain analyze suggests it \"runs\" the query\nmore realistically.\n\n2. The bandwidth I computed in one of the previous versions of the patch\nwas certainly cluttering up the explain output and it is misleading, yes,\nbut then again it performs a calculation people will now do in their\nhead. The \"bandwidth\" here is how much data your query gets out of the\nbackend by means of the query and the deserialization. So of course if\nyou do id-lookups you get a single row and such queries do have a lower\ndata-retrieval bandwidth compared to bulk queries. However having some\nmeasure of how fast data is delivered from the backend especially on\nlarger joins is still a good indicator of one aspect of a query.\n\nSorry for the remarks. Both are not really important, just restating my\npoints here. 
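For concreteness, the figure my earlier revision printed was simply the byte count (taken as MiB) divided by the total execution time. A standalone sketch of that arithmetic, reproducing the numbers from the outputs upthread (illustration only, not code from the patch):

```python
# Reconstruct both figures quoted upthread from the raw byte count:
# 78888953 bytes over 303.281 ms gave Bandwidth 248.068 MB/sec, and a
# later revision floored the same count to 77039 kB. Dividing by total
# execution time is also why the bandwidth number is lossy: the
# denominator includes far more than just serialization.

def bandwidth_mb_per_sec(serialized_bytes, execution_time_ms):
    mib = serialized_bytes / (1024 * 1024)     # bytes -> MiB
    return mib / (execution_time_ms / 1000.0)  # per second

def floored_kb(serialized_bytes):
    return serialized_bytes // 1024            # integer kB division

print(round(bandwidth_mb_per_sec(78888953, 303.281), 3))  # 248.068
print(floored_kb(78888953))                               # 77039
```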
I understand the objections and reasons that speak against\nboth points and believe the current scope is just right.\n\n/Stepan\n\n\n\nOn 26.02.24 20:30, Matthias van de Meent wrote:\n> Hi,\n>\n> I've taken the liberty to update this patch, and register it in the\n> commitfest app to not lose track of progress [0].\n>\n> The attached v8 patch measures scratch memory allocations (with MEMORY\n> option), total time spent in serialization (with TIMING on, measures\n> are inclusive of unseparated memcpy to the message buffer), and a\n> count of produced bytes plus the output format used (text or binary).\n> It's a light rework of the earlier 0007 patch, I've reused tests and\n> some infrastructure, while the implementation details and comments\n> have been updated significantly.\n>\n> I think we can bikeshed on format and names, but overall I think the\n> patch is in a very decent shape.\n>\n> Stepan, thank you for your earlier work, and feel free to check it out\n> or pick it up again if you want to; else I'll try to get this done.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n>\n> [0] https://commitfest.postgresql.org/47/4852/\n\n\n",
"msg_date": "Mon, 26 Feb 2024 21:54:11 +0100",
"msg_from": "stepan rutz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "On Mon, 26 Feb 2024 at 21:54, stepan rutz <[email protected]> wrote:\n>\n> Hi Matthias, thanks for picking it up. I still believe this is valuable\n> to a lot of people out there. Thanks for dealing with my proposal.\n> Matthias, Tom, Tomas everyone.\n>\n> Two (more or less) controversial remarks from side.\n>\n> 1. Actually serialization should be the default for \"analyze\" in\n> explain, as current analyze doesn't detoast and thus distorts the result\n> in extreme (but common) cases easily by many order of magnitude (see my\n> original video on that one). So current \"explain analyze\" only works for\n> some queries and since detoasting is really transparent, it is quite\n> something to leave detoasting out of explain analyze. This surprises\n> people all the time, since explain analyze suggests it \"runs\" the query\n> more realistically.\n\nI'm not sure about this, but it could easily be a mid-beta decision\n(if this is introduced before the feature freeze of 17, whenever that\nis).\n\n> 2. The bandwidth I computed in one of the previous versions of the patch\n> was certainly cluttering up the explain output and it is misleading yes,\n> but then again it performs a calculation people will now do in their\n> head. The \"bandwidth\" here is how much data your query gets out of\n> backend by means of the query and the deserialization. So of course if\n> you do id-lookups you get a single row and such querries do have a lower\n> data-retrieval bandwidth compared to bulk querries.\n\nI think that's a job for post-processing the EXPLAIN output by the\nuser. If we don't give users the raw data, they won't be able to do\ntheir own cross-referenced processing: \"5MB/sec\" doesn't tell you how\nmuch time or data was actually spent.\n\n> However having some\n> measure of how fast data is delivered from the backend especially on\n> larger joins is still a good indicator of one aspect of a query.\n\nI'm not sure about that. 
Network speed is a big limiting factor that\nwe can't measure here, and the size on disk is often going to be\nsmaller than the data size when transferred across the network.\n\n> Sorry for the remarks. Both are not really important, just restating my\n> points here. I understand the objections and reasons that speak against\n> both points and believe the current scope is just right.\n\nNo problem. Remarks from users (when built on solid arguments) are\nalways welcome, even if we may not always agree on the specifics.\n\n------>8------\n\nAttached is v9, which is rebased on master to handle 24eebc65's\nchanged signature of pq_sendcountedtext.\nIt now also includes autocompletion, and a second patch which adds\ndocumentation to give users insights into this new addition to\nEXPLAIN.\n\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Tue, 12 Mar 2024 13:20:10 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Matthias van de Meent <[email protected]> writes:\n> Attached is v9, which is rebased on master to handle 24eebc65's\n> changed signature of pq_sendcountedtext.\n> It now also includes autocompletion, and a second patch which adds\n> documentation to give users insights into this new addition to\n> EXPLAIN.\n\nI took a quick look through this. Some comments in no particular\norder:\n\nDocumentation is not optional, so I don't really see the point of\nsplitting this into two patches.\n\nIIUC, it's not possible to use the SERIALIZE option when explaining\nCREATE TABLE AS, because you can't install the instrumentation tuple\nreceiver when the IntoRel one is needed. I think that's fine because\nno serialization would happen in that case anyway, but should we\nthrow an error if that combination is requested? Blindly reporting\nthat zero bytes were serialized seems like it'd confuse people.\n\nI'd lose the stuff about measuring memory consumption. Nobody asked\nfor that and the total is completely misleading, because in reality\nwe'll reclaim the memory used after each row. It would allow cutting\nthe text-mode output down to one line, too, instead of having your\nown format that's not like anything else.\n\nI thought the upthread agreement was to display the amount of\ndata sent rounded to kilobytes, so why is the code displaying\nan exact byte count?\n\nI don't especially care for magic numbers like these:\n\n+\t\t/* see printtup.h why we add 18 bytes here. These are the infos\n+\t\t * needed for each attribute plus the attribute's name */\n+\t\treceiver->metrics.bytesSent += (int64) namelen + 1 + 18;\n\nIf the protocol is ever changed in a way that invalidates this,\nthere's about 0 chance that somebody would remember to touch\nthis code.\n\nHowever, isn't the bottom half of serializeAnalyzeStartup doing\nexactly what the comment above it says we don't do, that is accounting\nfor the RowDescription message? 
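(For the record, the 18 is the six fixed-size per-attribute fields of a v3 RowDescription entry: table OID, attribute number, type OID, type length, type modifier, and format code; the field name and its NUL terminator come on top. A standalone illustration of that accounting, not code from the patch:)

```python
# Per-attribute size of a protocol-v3 RowDescription entry, matching
# the quoted 'namelen + 1 + 18' accounting.
FIXED_PER_ATTR = (
    4    # table OID (int32)
    + 2  # attribute number (int16)
    + 4  # data type OID (int32)
    + 2  # data type size (int16)
    + 4  # type modifier (int32)
    + 2  # format code (int16)
)

def rowdesc_attr_bytes(attname):
    # name bytes + NUL terminator + fixed-size fields
    return len(attname.encode()) + 1 + FIXED_PER_ATTR

print(FIXED_PER_ATTR)            # 18
print(rowdesc_attr_bytes('id'))  # 21
```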
Frankly I agree with the comment that\nit's not worth troubling over, so I'd just drop that code rather than\nfinding a solution for the magic-number problem.\n\nDon't bother with duplicating valgrind-related logic in\nserializeAnalyzeReceive --- that's irrelevant to actual users.\n\nThis seems like cowboy coding:\n\n+\tself->destRecevier.mydest = DestNone;\n\nYou should define a new value of the CommandDest enum and\nintegrate this receiver type into the support functions\nin dest.c.\n\nBTW, \"destRecevier\" is misspelled...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Apr 2024 11:47:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "On Tue, 2 Apr 2024 at 17:47, Tom Lane <[email protected]> wrote:\n>\n> Matthias van de Meent <[email protected]> writes:\n> > Attached is v9, which is rebased on master to handle 24eebc65's\n> > changed signature of pq_sendcountedtext.\n> > It now also includes autocompletion, and a second patch which adds\n> > documentation to give users insights into this new addition to\n> > EXPLAIN.\n>\n> I took a quick look through this. Some comments in no particular\n> order:\n\nThanks!\n\n> Documentation is not optional, so I don't really see the point of\n> splitting this into two patches.\n\nI've seen the inverse several times, but I've merged them in the\nattached version 10.\n\n> IIUC, it's not possible to use the SERIALIZE option when explaining\n> CREATE TABLE AS, because you can't install the instrumentation tuple\n> receiver when the IntoRel one is needed. I think that's fine because\n> no serialization would happen in that case anyway, but should we\n> throw an error if that combination is requested? Blindly reporting\n> that zero bytes were serialized seems like it'd confuse people.\n\nI think it's easily explained as no rows were transferred to the\nclient. If there is actual confusion, we can document it, but\nconfusing disk with network is not a case I'd protect against. See\nalso: EXPLAIN (ANALYZE, SERIALIZE) INSERT without the RETURNING\nclause.\n\n> I'd lose the stuff about measuring memory consumption. Nobody asked\n> for that and the total is completely misleading, because in reality\n> we'll reclaim the memory used after each row. It would allow cutting\n> the text-mode output down to one line, too, instead of having your\n> own format that's not like anything else.\n\nDone.\n\n> I thought the upthread agreement was to display the amount of\n> data sent rounded to kilobytes, so why is the code displaying\n> an exact byte count?\n\nProbably it was because the other explain code I referenced was using\nbytes in the json/yaml format. 
Fixed.\n\n> I don't especially care for magic numbers like these:\n>\n> + /* see printtup.h why we add 18 bytes here. These are the infos\n> + * needed for each attribute plus the attribute's name */\n> + receiver->metrics.bytesSent += (int64) namelen + 1 + 18;\n>\n> If the protocol is ever changed in a way that invalidates this,\n> there's about 0 chance that somebody would remember to touch\n> this code.\n> However, isn't the bottom half of serializeAnalyzeStartup doing\n> exactly what the comment above it says we don't do, that is accounting\n> for the RowDescription message? Frankly I agree with the comment that\n> it's not worth troubling over, so I'd just drop that code rather than\n> finding a solution for the magic-number problem.\n\nIn the comment above I intended to explain that it takes negligible\ntime to serialize the RowDescription message (when compared to all\nother tasks of explain), so skipping the actual writing of the message\nwould be fine.\nI'm not sure I agree with not including the size of RowDescription\nitself though, as wide results can have a very large RowDescription\noverhead; up to several times the returned data in cases where few\nrows are returned.\n\nEither way, I've removed that part of the code.\n\n> Don't bother with duplicating valgrind-related logic in\n> serializeAnalyzeReceive --- that's irrelevant to actual users.\n\nRemoved. I've instead added buffer usage, as I realised that wasn't\ncovered yet, and quite important to detect excessive detoasting (it's\nnot covered at the top-level scan).\n\n> This seems like cowboy coding:\n>\n> + self->destRecevier.mydest = DestNone;\n>\n> You should define a new value of the CommandDest enum and\n> integrate this receiver type into the support functions\n> in dest.c.\n\nDone.\n\n> BTW, \"destRecevier\" is misspelled...\n\nThanks, fixed.\n\n\nKind regards,\n\nMatthias van de Meent.",
"msg_date": "Wed, 3 Apr 2024 18:59:40 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Matthias van de Meent <[email protected]> writes:\n> On Tue, 2 Apr 2024 at 17:47, Tom Lane <[email protected]> wrote:\n>> IIUC, it's not possible to use the SERIALIZE option when explaining\n>> CREATE TABLE AS, because you can't install the instrumentation tuple\n>> receiver when the IntoRel one is needed. I think that's fine because\n>> no serialization would happen in that case anyway, but should we\n>> throw an error if that combination is requested? Blindly reporting\n>> that zero bytes were serialized seems like it'd confuse people.\n\n> I think it's easily explained as no rows were transfered to the\n> client. If there is actual confusion, we can document it, but\n> confusing disk with network is not a case I'd protect against. See\n> also: EXPLAIN (ANALYZE, SERIALIZE) INSERT without the RETURNING\n> clause.\n\nFair enough. There were a couple of spots in the code where I thought\nthis was important to comment about, though.\n\n>> However, isn't the bottom half of serializeAnalyzeStartup doing\n>> exactly what the comment above it says we don't do, that is accounting\n>> for the RowDescription message? Frankly I agree with the comment that\n>> it's not worth troubling over, so I'd just drop that code rather than\n>> finding a solution for the magic-number problem.\n\n> I'm not sure I agree with not including the size of RowDescription\n> itself though, as wide results can have a very large RowDescription\n> overhead; up to several times the returned data in cases where few\n> rows are returned.\n\nMeh --- if we're rounding off to kilobytes, you're unlikely to see it.\nIn any case, if we start counting overhead messages, where shall we\nstop? Should we count the eventual CommandComplete or ReadyForQuery,\nfor instance? I'm content to say that this measures data only; that\nseems to jibe with other aspects of EXPLAIN's behavior.\n\n> Removed. 
I've instead added buffer usage, as I realised that wasn't\n> covered yet, and quite important to detect excessive detoasting (it's\n> not covered at the top-level scan).\n\nDuh, good catch.\n\nI've pushed this after a good deal of cosmetic polishing -- for\nexample, I spent some effort on making serializeAnalyzeReceive\nlook as much like printtup as possible, in hopes of making it\neasier to keep the two functions in sync in future.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Apr 2024 17:50:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "On Wed, 3 Apr 2024 at 23:50, Tom Lane <[email protected]> wrote:\n>\n> Matthias van de Meent <[email protected]> writes:\n>> On Tue, 2 Apr 2024 at 17:47, Tom Lane <[email protected]> wrote:\n>>> IIUC, it's not possible to use the SERIALIZE option when explaining\n>>> CREATE TABLE AS, because you can't install the instrumentation tuple\n>>> receiver when the IntoRel one is needed. I think that's fine because\n>>> no serialization would happen in that case anyway, but should we\n>>> throw an error if that combination is requested? Blindly reporting\n>>> that zero bytes were serialized seems like it'd confuse people.\n>\n>> I think it's easily explained as no rows were transfered to the\n>> client. If there is actual confusion, we can document it, but\n>> confusing disk with network is not a case I'd protect against. See\n>> also: EXPLAIN (ANALYZE, SERIALIZE) INSERT without the RETURNING\n>> clause.\n>\n> Fair enough. There were a couple of spots in the code where I thought\n> this was important to comment about, though.\n\nYeah, I'll agree with that.\n\n>>> However, isn't the bottom half of serializeAnalyzeStartup doing\n>>> exactly what the comment above it says we don't do, that is accounting\n>>> for the RowDescription message? Frankly I agree with the comment that\n>>> it's not worth troubling over, so I'd just drop that code rather than\n>>> finding a solution for the magic-number problem.\n>\n>> I'm not sure I agree with not including the size of RowDescription\n>> itself though, as wide results can have a very large RowDescription\n>> overhead; up to several times the returned data in cases where few\n>> rows are returned.\n>\n> Meh --- if we're rounding off to kilobytes, you're unlikely to see it.\n> In any case, if we start counting overhead messages, where shall we\n> stop? Should we count the eventual CommandComplete or ReadyForQuery,\n> for instance? 
I'm content to say that this measures data only; that\n> seems to jibe with other aspects of EXPLAIN's behavior.\n\nFine with me.\n\n> > Removed. I've instead added buffer usage, as I realised that wasn't\n> > covered yet, and quite important to detect excessive detoasting (it's\n> > not covered at the top-level scan).\n>\n> Duh, good catch.\n>\n> I've pushed this after a good deal of cosmetic polishing -- for\n> example, I spent some effort on making serializeAnalyzeReceive\n> look as much like printtup as possible, in hopes of making it\n> easier to keep the two functions in sync in future.\n\nThanks for the review, and for pushing!\n\n-Matthias\n\n\n",
"msg_date": "Thu, 4 Apr 2024 09:52:19 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "On Wed, 3 Apr 2024 at 23:50, Tom Lane <[email protected]> wrote:\n> I've pushed this after a good deal of cosmetic polishing -- for\n> example, I spent some effort on making serializeAnalyzeReceive\n> look as much like printtup as possible, in hopes of making it\n> easier to keep the two functions in sync in future.\n\nUpthread at [0], Stepan mentioned that we should default to SERIALIZE\nwhen ANALYZE is enabled. I suspect a patch in that direction would\nprimarily contain updates in the test plan outputs, but I've not yet\nworked on that.\n\nDoes anyone else have a strong opinion for or against adding SERIALIZE\nto the default set of explain features enabled with ANALYZE?\n\nI'll add this to \"Decisions to Recheck Mid-Beta\"-section if nobody has\na clear objection.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://postgr.es/m/ea885631-21f1-425a-97ed-c4bfb8cf9c63%40gmx.de\n\n\n",
"msg_date": "Wed, 10 Apr 2024 12:01:46 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Matthias van de Meent <[email protected]> writes:\n> Upthread at [0], Stepan mentioned that we should default to SERIALIZE\n> when ANALYZE is enabled. I suspect a patch in that direction would\n> primarily contain updates in the test plan outputs, but I've not yet\n> worked on that.\n\n> Does anyone else have a strong opinion for or against adding SERIALIZE\n> to the default set of explain features enabled with ANALYZE?\n\nI am 100% dead set against that, because it would silently render\nEXPLAIN outputs from different versions quite non-comparable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Apr 2024 09:57:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "First of all thanks for bringing this Feature to PostgreSQL. From a\nregular-user perspective (not everyone is a Pro) it is very misleading\nthat ANALYZE doesn't do what it suggests it does. To run the query into\nsome kind of /dev/null type of destination is feasible and that is what\npeople end up doing after they have fallen into the \"de-toasting\" trap.\n\nHaving SERIALIZE is a great improvement for certain. When I said that\nSERIALIZE should be the default, then this came mostly out of very\nsurprising subjective experiences in the past. Turning it on certainly\nalters some existing benchmarks and timings. That is destructive in a\nway and would destroy some existing work and measures. I lack the\noverall understanding of the consequences, so please don't follow this\n(emotional) advice.\n\nSo thanks again! and this will really help a lot of people. The people\nactually bothering with EXPLAIN options are likely to explore the\ndocumentation and now have a hint about this pitfall. The EXPLAIN part\nof PostgreSQL \"feels\" a lot better now.\n\nI appreciate all of your work on this issue, which came up without being\non some kind of plan and of course for the overall work on PostgreSQL.\n\n/Stepan\n\nOn 4/10/24 15:57, Tom Lane wrote:\n\n> Matthias van de Meent <[email protected]> writes:\n>> Upthread at [0], Stepan mentioned that we should default to SERIALIZE\n>> when ANALYZE is enabled. I suspect a patch in that direction would\n>> primarily contain updates in the test plan outputs, but I've not yet\n>> worked on that.\n>> Does anyone else have a strong opinion for or against adding SERIALIZE\n>> to the default set of explain features enabled with ANALYZE?\n> I am 100% dead set against that, because it would silently render\n> EXPLAIN outputs from different versions quite non-comparable.\n>\n> \t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Apr 2024 20:14:00 +0200",
"msg_from": "stepan rutz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": ">\n> So thanks again! and this will really help a lot of people.\n\n\nI'd like to echo this thanks to you all.\n\nWhile looking to add support for SERIALIZE in an explain visualisation tool\nI work on, I realised there isn't yet an equivalent auto_explain parameter\nfor SERIALIZE. I'm not sure if this is a deliberate omission (perhaps for a\nsimilar reason planning time is not included in auto_explain?), but I\ndidn't see it mentioned above, so I thought best to ask in case not.\n\nThanks again,\nMichael\n\nSo thanks again! and this will really help a lot of people.I'd like to echo this thanks to you all. While looking to add support for SERIALIZE in an explain visualisation tool I work on, I realised there isn't yet an equivalent auto_explain parameter for SERIALIZE. I'm not sure if this is a deliberate omission (perhaps for a similar reason planning time is not included in auto_explain?), but I didn't see it mentioned above, so I thought best to ask in case not.Thanks again,Michael",
"msg_date": "Mon, 8 Jul 2024 17:54:33 +0100",
"msg_from": "Michael Christofides <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "Michael Christofides <[email protected]> writes:\n> While looking to add support for SERIALIZE in an explain visualisation tool\n> I work on, I realised there isn't yet an equivalent auto_explain parameter\n> for SERIALIZE. I'm not sure if this is a deliberate omission (perhaps for a\n> similar reason planning time is not included in auto_explain?), but I\n> didn't see it mentioned above, so I thought best to ask in case not.\n\nI'm not sure there's a need for it. When a query runs under\nauto_explain, the output values will be sent to the client,\nso those cycles should be accounted for anyway, no?\n\n(Perhaps the auto_explain documentation should mention this?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2024 13:08:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
},
{
"msg_contents": "> I'm not sure there's a need for it. When a query runs under\n> auto_explain, the output values will be sent to the client,\n> so those cycles should be accounted for anyway, no?\n>\n\nYes, great point, the total duration reported by auto_explain includes it.\nExplicit serialization stats might still be helpful for folks when it is\nthe bottleneck, but less useful for sure (especially if nothing else causes\nbig discrepancies between the duration reported by auto_explain and the\n\"actual total time\" of the root node).\n\n(Perhaps the auto_explain documentation should mention this?)\n>\n\nI'd value this. I notice the folks working on the other new explain\nparameter (memory) opted to add a comment to the auto_explain source code\nto say it wasn't supported.\n\nThanks again,\nMichael\n\nI'm not sure there's a need for it. When a query runs under\nauto_explain, the output values will be sent to the client,\nso those cycles should be accounted for anyway, no?Yes, great point, the total duration reported by auto_explain includes it. Explicit serialization stats might still be helpful for folks when it is the bottleneck, but less useful for sure (especially if nothing else causes big discrepancies between the duration reported by auto_explain and the \"actual total time\" of the root node).\n(Perhaps the auto_explain documentation should mention this?) I'd value this. I notice the folks working on the other new explain parameter (memory) opted to add a comment to the auto_explain source code to say it wasn't supported. Thanks again,Michael",
"msg_date": "Mon, 8 Jul 2024 19:13:07 +0100",
"msg_from": "Michael Christofides <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Detoasting optionally to make Explain-Analyze less misleading"
}
] |
[
{
"msg_contents": "Hi,\n\nFound server crash on RHEL 9/s390x platform with below test case -\n\n*Machine details:*\n\n\n\n\n\n\n\n*[edb@9428da9d2137 postgres]$ cat /etc/redhat-release AlmaLinux release 9.2\n(Turquoise Kodkod)[edb@9428da9d2137 postgres]$ lscpuArchitecture:\ns390x CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits\nphysical, 48 bits virtual Byte Order: Big Endian*\n*Configure command:*\n./configure --prefix=/home/edb/postgres/ --with-lz4 --with-zstd --with-llvm\n--with-perl --with-python --with-tcl --with-openssl --enable-nls\n--with-libxml --with-libxslt --with-systemd --with-libcurl --without-icu\n--enable-debug --enable-cassert --with-pgport=5414\n\n\n*Test case:*\nCREATE TABLE rm32044_t1\n(\n pkey integer,\n val text\n);\nCREATE TABLE rm32044_t2\n(\n pkey integer,\n label text,\n hidden boolean\n);\nCREATE TABLE rm32044_t3\n(\n pkey integer,\n val integer\n);\nCREATE TABLE rm32044_t4\n(\n pkey integer\n);\ninsert into rm32044_t1 values ( 1 , 'row1');\ninsert into rm32044_t1 values ( 2 , 'row2');\ninsert into rm32044_t2 values ( 1 , 'hidden', true);\ninsert into rm32044_t2 values ( 2 , 'visible', false);\ninsert into rm32044_t3 values (1 , 1);\ninsert into rm32044_t3 values (2 , 1);\n\npostgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON rm32044_t1.pkey\n= rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON rm32044_t3.pkey =\nrm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. 
Attempting reset: Failed.\n\n*backtrace:*\n[edb@9428da9d2137 postgres]$ gdb bin/postgres\ndata/qemu_postgres_20230911-140628_65620.core\nCore was generated by `postgres: edb postgres [local] SELECT '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00000000010a8366 in heap_compute_data_size\n(tupleDesc=tupleDesc@entry=0x1ba3d10,\nvalues=values@entry=0x1ba4168, isnull=isnull@entry=0x1ba41a8) at\nheaptuple.c:227\n227 VARATT_CAN_MAKE_SHORT(DatumGetPointer(val)))\n[Current thread is 1 (LWP 65597)]\nMissing separate debuginfos, use: dnf debuginfo-install\nglibc-2.34-60.el9.s390x libcap-2.48-8.el9.s390x\nlibedit-3.1-37.20210216cvs.el9.s390x libffi-3.4.2-7.el9.s390x\nlibgcc-11.3.1-4.3.el9.alma.s390x libgcrypt-1.10.0-10.el9_2.s390x\nlibgpg-error-1.42-5.el9.s390x libstdc++-11.3.1-4.3.el9.alma.s390x\nlibxml2-2.9.13-3.el9_2.1.s390x libzstd-1.5.1-2.el9.s390x\nllvm-libs-15.0.7-1.el9.s390x lz4-libs-1.9.3-5.el9.s390x\nncurses-libs-6.2-8.20210508.el9.s390x openssl-libs-3.0.7-17.el9_2.s390x\nsystemd-libs-252-14.el9_2.3.s390x xz-libs-5.2.5-8.el9_0.s390x\n(gdb) bt\n#0 0x00000000010a8366 in heap_compute_data_size\n(tupleDesc=tupleDesc@entry=0x1ba3d10,\nvalues=values@entry=0x1ba4168, isnull=isnull@entry=0x1ba41a8) at\nheaptuple.c:227\n#1 0x00000000010a9bb0 in heap_form_minimal_tuple\n(tupleDescriptor=0x1ba3d10, values=0x1ba4168, isnull=0x1ba41a8) at\nheaptuple.c:1484\n#2 0x00000000016553fa in ExecCopySlotMinimalTuple (slot=<optimized out>)\nat ../../../../src/include/executor/tuptable.h:472\n#3 tuplesort_puttupleslot (state=state@entry=0x1be4d18,\nslot=slot@entry=0x1ba4120)\nat tuplesortvariants.c:610\n#4 0x00000000012dc0e0 in ExecIncrementalSort (pstate=0x1acb4d8) at\nnodeIncrementalSort.c:716\n#5 0x00000000012b32c6 in ExecProcNode (node=0x1acb4d8) at\n../../../src/include/executor/executor.h:273\n#6 ExecutePlan (execute_once=<optimized out>, dest=0x1ade698,\ndirection=<optimized out>, numberTuples=0, sendTuples=<optimized out>,\noperation=CMD_SELECT, 
use_parallel_mode=<optimized out>,\nplanstate=0x1acb4d8, estate=0x1acb258) at execMain.c:1670\n#7 standard_ExecutorRun (queryDesc=0x19ad338, direction=<optimized out>,\ncount=0, execute_once=<optimized out>) at execMain.c:365\n#8 0x00000000014a6ae2 in PortalRunSelect (portal=portal@entry=0x1a63558,\nforward=forward@entry=true, count=0, count@entry=9223372036854775807,\ndest=dest@entry=0x1ade698) at pquery.c:924\n#9 0x00000000014a84e0 in PortalRun (portal=portal@entry=0x1a63558,\ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\nrun_once=run_once@entry=true, dest=dest@entry=0x1ade698, altdest=0x1ade698,\nqc=0x40007ff7b0) at pquery.c:768\n#10 0x00000000014a3c1c in exec_simple_query (\n    query_string=0x19ea0e8 \"SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2\nON rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON\nrm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\")\nat postgres.c:1274\n#11 0x00000000014a57aa in PostgresMain (dbname=<optimized out>,\nusername=<optimized out>) at postgres.c:4637\n#12 0x00000000013fdaf6 in BackendRun (port=0x1a132c0, port=0x1a132c0) at\npostmaster.c:4464\n#13 BackendStartup (port=0x1a132c0) at postmaster.c:4192\n#14 ServerLoop () at postmaster.c:1782\n#15 0x00000000013fec34 in PostmasterMain (argc=argc@entry=3,\nargv=argv@entry=0x19a59a0)\nat postmaster.c:1466\n#16 0x0000000001096faa in main (argc=<optimized out>, argv=0x19a59a0) at\nmain.c:198\n\n(gdb) p val\n$1 = 0\n```\n\nDoes anybody have any idea about this?\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com",
"msg_date": "Tue, 12 Sep 2023 15:27:21 +0530",
"msg_from": "Suraj Kharage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Server crash on RHEL 9/s390x platform against PG16"
},
{
"msg_contents": "Few more details on this:\n\n(gdb) p val\n$1 = 0\n(gdb) p i\n$2 = 3\n(gdb) f 3\n#3 0x0000000001a1ef70 in ExecCopySlotMinimalTuple (slot=0x202e4f8) at\n../../../../src/include/executor/tuptable.h:472\n472 return slot->tts_ops->copy_minimal_tuple(slot);\n(gdb) p *slot\n$3 = {type = T_TupleTableSlot, tts_flags = 16, tts_nvalid = 8, tts_ops =\n0x1b6dcc8 <TTSOpsVirtual>, tts_tupleDescriptor = 0x202e0e8, tts_values =\n0x202e540, tts_isnull = 0x202e580, tts_mcxt = 0x1f54550, tts_tid =\n{ip_blkid = {bi_hi = 65535,\n bi_lo = 65535}, ip_posid = 0}, tts_tableOid = 0}\n(gdb) p *slot->tts_tupleDescriptor\n$2 = {natts = 8, tdtypeid = 2249, tdtypmod = -1, tdrefcount = -1, constr =\n0x0, attrs = 0x202cd28}\n\n(gdb) p slot.tts_values[3]\n$4 = 0\n(gdb) p slot.tts_values[2]\n$5 = 1\n(gdb) p slot.tts_values[1]\n$6 = 34027556\n\n\nAs per the resultslot, it has 0 value for the third attribute (column\nlable).\nIm testing this on the docker container and facing some issues with gdb\nhence could not able to debug it further.\n\nHere is a explain plan:\n\npostgres=# explain (verbose, costs off) SELECT * FROM rm32044_t1 LEFT JOIN\nrm32044_t2 ON rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN\nrm32044_t4 ON rm32044_t3.pkey = rm32044_t4.pkey order by\nrm32044_t1.pkey,label,hidden;\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Incremental Sort\n Output: rm32044_t1.pkey, rm32044_t1.val, rm32044_t2.pkey,\nrm32044_t2.label, rm32044_t2.hidden, rm32044_t3.pkey, rm32044_t3.val,\nrm32044_t4.pkey\n Sort Key: rm32044_t1.pkey, rm32044_t2.label, rm32044_t2.hidden\n Presorted Key: rm32044_t1.pkey\n -> Merge Left Join\n Output: rm32044_t1.pkey, rm32044_t1.val, rm32044_t2.pkey,\nrm32044_t2.label, rm32044_t2.hidden, rm32044_t3.pkey, rm32044_t3.val,\nrm32044_t4.pkey\n Merge Cond: (rm32044_t1.pkey = rm32044_t2.pkey)\n -> Sort\n Output: rm32044_t3.pkey, 
rm32044_t3.val, rm32044_t4.pkey,\nrm32044_t1.pkey, rm32044_t1.val\n Sort Key: rm32044_t1.pkey\n -> Nested Loop\n Output: rm32044_t3.pkey, rm32044_t3.val,\nrm32044_t4.pkey, rm32044_t1.pkey, rm32044_t1.val\n -> Merge Left Join\n Output: rm32044_t3.pkey, rm32044_t3.val,\nrm32044_t4.pkey\n Merge Cond: (rm32044_t3.pkey = rm32044_t4.pkey)\n -> Sort\n Output: rm32044_t3.pkey, rm32044_t3.val\n Sort Key: rm32044_t3.pkey\n -> Seq Scan on public.rm32044_t3\n Output: rm32044_t3.pkey,\nrm32044_t3.val\n -> Sort\n Output: rm32044_t4.pkey\n Sort Key: rm32044_t4.pkey\n -> Seq Scan on public.rm32044_t4\n Output: rm32044_t4.pkey\n -> Materialize\n Output: rm32044_t1.pkey, rm32044_t1.val\n -> Seq Scan on public.rm32044_t1\n Output: rm32044_t1.pkey, rm32044_t1.val\n -> Sort\n Output: rm32044_t2.pkey, rm32044_t2.label, rm32044_t2.hidden\n Sort Key: rm32044_t2.pkey\n -> Seq Scan on public.rm32044_t2\n Output: rm32044_t2.pkey, rm32044_t2.label,\nrm32044_t2.hidden\n(34 rows)\n\n\nIt seems like while building the innerslot for merge join, the value for\nattnum 1 is not getting fetched correctly.\n\nOn Tue, Sep 12, 2023 at 3:27 PM Suraj Kharage <\[email protected]> wrote:\n\n> Hi,\n>\n> Found server crash on RHEL 9/s390x platform with below test case -\n>\n> *Machine details:*\n>\n>\n>\n>\n>\n>\n>\n> *[edb@9428da9d2137 postgres]$ cat /etc/redhat-release AlmaLinux release\n> 9.2 (Turquoise Kodkod)[edb@9428da9d2137 postgres]$ lscpuArchitecture:\n> s390x CPU op-mode(s): 32-bit, 64-bit Address sizes: 39\n> bits physical, 48 bits virtual Byte Order: Big Endian*\n> *Configure command:*\n> ./configure --prefix=/home/edb/postgres/ --with-lz4 --with-zstd\n> --with-llvm --with-perl --with-python --with-tcl --with-openssl\n> --enable-nls --with-libxml --with-libxslt --with-systemd --with-libcurl\n> --without-icu --enable-debug --enable-cassert --with-pgport=5414\n>\n>\n> *Test case:*\n> CREATE TABLE rm32044_t1\n> (\n> pkey integer,\n> val text\n> );\n> CREATE TABLE rm32044_t2\n> (\n> pkey integer,\n> 
label text,\n> hidden boolean\n> );\n> CREATE TABLE rm32044_t3\n> (\n> pkey integer,\n> val integer\n> );\n> CREATE TABLE rm32044_t4\n> (\n> pkey integer\n> );\n> insert into rm32044_t1 values ( 1 , 'row1');\n> insert into rm32044_t1 values ( 2 , 'row2');\n> insert into rm32044_t2 values ( 1 , 'hidden', true);\n> insert into rm32044_t2 values ( 2 , 'visible', false);\n> insert into rm32044_t3 values (1 , 1);\n> insert into rm32044_t3 values (2 , 1);\n>\n> postgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON\n> rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON\n> rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> The connection to the server was lost. Attempting reset: Failed.\n>\n> *backtrace:*\n> [edb@9428da9d2137 postgres]$ gdb bin/postgres\n> data/qemu_postgres_20230911-140628_65620.core\n> Core was generated by `postgres: edb postgres [local] SELECT '.\n> Program terminated with signal SIGSEGV, Segmentation fault.\n> #0 0x00000000010a8366 in heap_compute_data_size (tupleDesc=tupleDesc@entry=0x1ba3d10,\n> values=values@entry=0x1ba4168, isnull=isnull@entry=0x1ba41a8) at\n> heaptuple.c:227\n> 227 VARATT_CAN_MAKE_SHORT(DatumGetPointer(val)))\n> [Current thread is 1 (LWP 65597)]\n> Missing separate debuginfos, use: dnf debuginfo-install\n> glibc-2.34-60.el9.s390x libcap-2.48-8.el9.s390x\n> libedit-3.1-37.20210216cvs.el9.s390x libffi-3.4.2-7.el9.s390x\n> libgcc-11.3.1-4.3.el9.alma.s390x libgcrypt-1.10.0-10.el9_2.s390x\n> libgpg-error-1.42-5.el9.s390x libstdc++-11.3.1-4.3.el9.alma.s390x\n> libxml2-2.9.13-3.el9_2.1.s390x libzstd-1.5.1-2.el9.s390x\n> llvm-libs-15.0.7-1.el9.s390x lz4-libs-1.9.3-5.el9.s390x\n> ncurses-libs-6.2-8.20210508.el9.s390x openssl-libs-3.0.7-17.el9_2.s390x\n> 
systemd-libs-252-14.el9_2.3.s390x xz-libs-5.2.5-8.el9_0.s390x\n> (gdb) bt\n> #0 0x00000000010a8366 in heap_compute_data_size (tupleDesc=tupleDesc@entry=0x1ba3d10,\n> values=values@entry=0x1ba4168, isnull=isnull@entry=0x1ba41a8) at\n> heaptuple.c:227\n> #1 0x00000000010a9bb0 in heap_form_minimal_tuple\n> (tupleDescriptor=0x1ba3d10, values=0x1ba4168, isnull=0x1ba41a8) at\n> heaptuple.c:1484\n> #2 0x00000000016553fa in ExecCopySlotMinimalTuple (slot=<optimized out>)\n> at ../../../../src/include/executor/tuptable.h:472\n> #3 tuplesort_puttupleslot (state=state@entry=0x1be4d18, slot=slot@entry=0x1ba4120)\n> at tuplesortvariants.c:610\n> #4 0x00000000012dc0e0 in ExecIncrementalSort (pstate=0x1acb4d8) at\n> nodeIncrementalSort.c:716\n> #5 0x00000000012b32c6 in ExecProcNode (node=0x1acb4d8) at\n> ../../../src/include/executor/executor.h:273\n> #6 ExecutePlan (execute_once=<optimized out>, dest=0x1ade698,\n> direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>,\n> operation=CMD_SELECT, use_parallel_mode=<optimized out>,\n> planstate=0x1acb4d8, estate=0x1acb258) at execMain.c:1670\n> #7 standard_ExecutorRun (queryDesc=0x19ad338, direction=<optimized out>,\n> count=0, execute_once=<optimized out>) at execMain.c:365\n> #8 0x00000000014a6ae2 in PortalRunSelect (portal=portal@entry=0x1a63558,\n> forward=forward@entry=true, count=0, count@entry=9223372036854775807,\n> dest=dest@entry=0x1ade698) at pquery.c:924\n> #9 0x00000000014a84e0 in PortalRun (portal=portal@entry=0x1a63558,\n> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\n> run_once=run_once@entry=true, dest=dest@entry=0x1ade698,\n> altdest=0x1ade698, qc=0x40007ff7b0) at pquery.c:768\n> #10 0x00000000014a3c1c in exec_simple_query (\n> query_string=0x19ea0e8 \"SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2\n> ON rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON\n> rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\")\n> at 
postgres.c:1274\n> #11 0x00000000014a57aa in PostgresMain (dbname=<optimized out>,\n> username=<optimized out>) at postgres.c:4637\n> #12 0x00000000013fdaf6 in BackendRun (port=0x1a132c0, port=0x1a132c0) at\n> postmaster.c:4464\n> #13 BackendStartup (port=0x1a132c0) at postmaster.c:4192\n> #14 ServerLoop () at postmaster.c:1782\n> #15 0x00000000013fec34 in PostmasterMain (argc=argc@entry=3,\n> argv=argv@entry=0x19a59a0) at postmaster.c:1466\n> #16 0x0000000001096faa in main (argc=<optimized out>, argv=0x19a59a0) at\n> main.c:198\n>\n> (gdb) p val\n> $1 = 0\n> ```\n>\n> Does anybody have any idea about this?\n>\n> --\n> --\n>\n> Thanks & Regards,\n> Suraj kharage,\n>\n>\n>\n> edbpostgres.com\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com",
"msg_date": "Mon, 18 Sep 2023 11:20:41 +0530",
"msg_from": "Suraj Kharage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server crash on RHEL 9/s390x platform against PG16"
},
{
"msg_contents": "It looks like an issue with JIT. If I disable the JIT then the above query\nruns successfully.\n\npostgres=# set jit to off;\n\nSET\n\npostgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON rm32044_t1.pkey\n= rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON rm32044_t3.pkey =\nrm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\n\n pkey | val | pkey | label | hidden | pkey | val | pkey\n\n------+------+------+---------+--------+------+-----+------\n\n 1 | row1 | 1 | hidden | t | 1 | 1 |\n\n 1 | row1 | 1 | hidden | t | 2 | 1 |\n\n 2 | row2 | 2 | visible | f | 1 | 1 |\n\n 2 | row2 | 2 | visible | f | 2 | 1 |\n\n(4 rows)\n\nAny idea on this?\n\nOn Mon, Sep 18, 2023 at 11:20 AM Suraj Kharage <\[email protected]> wrote:\n\n> Few more details on this:\n>\n> (gdb) p val\n> $1 = 0\n> (gdb) p i\n> $2 = 3\n> (gdb) f 3\n> #3 0x0000000001a1ef70 in ExecCopySlotMinimalTuple (slot=0x202e4f8) at\n> ../../../../src/include/executor/tuptable.h:472\n> 472 return slot->tts_ops->copy_minimal_tuple(slot);\n> (gdb) p *slot\n> $3 = {type = T_TupleTableSlot, tts_flags = 16, tts_nvalid = 8, tts_ops =\n> 0x1b6dcc8 <TTSOpsVirtual>, tts_tupleDescriptor = 0x202e0e8, tts_values =\n> 0x202e540, tts_isnull = 0x202e580, tts_mcxt = 0x1f54550, tts_tid =\n> {ip_blkid = {bi_hi = 65535,\n> bi_lo = 65535}, ip_posid = 0}, tts_tableOid = 0}\n> (gdb) p *slot->tts_tupleDescriptor\n> $2 = {natts = 8, tdtypeid = 2249, tdtypmod = -1, tdrefcount = -1, constr =\n> 0x0, attrs = 0x202cd28}\n>\n> (gdb) p slot.tts_values[3]\n> $4 = 0\n> (gdb) p slot.tts_values[2]\n> $5 = 1\n> (gdb) p slot.tts_values[1]\n> $6 = 34027556\n>\n>\n> As per the resultslot, it has 0 value for the third attribute (column\n> lable).\n> Im testing this on the docker container and facing some issues with gdb\n> hence could not able to debug it further.\n>\n> Here is a explain plan:\n>\n> postgres=# explain (verbose, costs off) SELECT * FROM rm32044_t1 LEFT JOIN\n> rm32044_t2 ON rm32044_t1.pkey = rm32044_t2.pkey, 
rm32044_t3 LEFT JOIN\n> rm32044_t4 ON rm32044_t3.pkey = rm32044_t4.pkey order by\n> rm32044_t1.pkey,label,hidden;\n>\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Incremental Sort\n> Output: rm32044_t1.pkey, rm32044_t1.val, rm32044_t2.pkey,\n> rm32044_t2.label, rm32044_t2.hidden, rm32044_t3.pkey, rm32044_t3.val,\n> rm32044_t4.pkey\n> Sort Key: rm32044_t1.pkey, rm32044_t2.label, rm32044_t2.hidden\n> Presorted Key: rm32044_t1.pkey\n> -> Merge Left Join\n> Output: rm32044_t1.pkey, rm32044_t1.val, rm32044_t2.pkey,\n> rm32044_t2.label, rm32044_t2.hidden, rm32044_t3.pkey, rm32044_t3.val,\n> rm32044_t4.pkey\n> Merge Cond: (rm32044_t1.pkey = rm32044_t2.pkey)\n> -> Sort\n> Output: rm32044_t3.pkey, rm32044_t3.val, rm32044_t4.pkey,\n> rm32044_t1.pkey, rm32044_t1.val\n> Sort Key: rm32044_t1.pkey\n> -> Nested Loop\n> Output: rm32044_t3.pkey, rm32044_t3.val,\n> rm32044_t4.pkey, rm32044_t1.pkey, rm32044_t1.val\n> -> Merge Left Join\n> Output: rm32044_t3.pkey, rm32044_t3.val,\n> rm32044_t4.pkey\n> Merge Cond: (rm32044_t3.pkey = rm32044_t4.pkey)\n> -> Sort\n> Output: rm32044_t3.pkey, rm32044_t3.val\n> Sort Key: rm32044_t3.pkey\n> -> Seq Scan on public.rm32044_t3\n> Output: rm32044_t3.pkey,\n> rm32044_t3.val\n> -> Sort\n> Output: rm32044_t4.pkey\n> Sort Key: rm32044_t4.pkey\n> -> Seq Scan on public.rm32044_t4\n> Output: rm32044_t4.pkey\n> -> Materialize\n> Output: rm32044_t1.pkey, rm32044_t1.val\n> -> Seq Scan on public.rm32044_t1\n> Output: rm32044_t1.pkey, rm32044_t1.val\n> -> Sort\n> Output: rm32044_t2.pkey, rm32044_t2.label, rm32044_t2.hidden\n> Sort Key: rm32044_t2.pkey\n> -> Seq Scan on public.rm32044_t2\n> Output: rm32044_t2.pkey, rm32044_t2.label,\n> rm32044_t2.hidden\n> (34 rows)\n>\n>\n> It seems like while building the innerslot for merge join, the value for\n> attnum 1 is not getting fetched correctly.\n>\n> On Tue, Sep 12, 2023 at 3:27 PM 
Suraj Kharage <\n> [email protected]> wrote:\n>\n>> Hi,\n>>\n>> Found server crash on RHEL 9/s390x platform with below test case -\n>>\n>> *Machine details:*\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> *[edb@9428da9d2137 postgres]$ cat /etc/redhat-release AlmaLinux release\n>> 9.2 (Turquoise Kodkod)[edb@9428da9d2137 postgres]$ lscpuArchitecture:\n>> s390x CPU op-mode(s): 32-bit, 64-bit Address sizes: 39\n>> bits physical, 48 bits virtual Byte Order: Big Endian*\n>> *Configure command:*\n>> ./configure --prefix=/home/edb/postgres/ --with-lz4 --with-zstd\n>> --with-llvm --with-perl --with-python --with-tcl --with-openssl\n>> --enable-nls --with-libxml --with-libxslt --with-systemd --with-libcurl\n>> --without-icu --enable-debug --enable-cassert --with-pgport=5414\n>>\n>>\n>> *Test case:*\n>> CREATE TABLE rm32044_t1\n>> (\n>> pkey integer,\n>> val text\n>> );\n>> CREATE TABLE rm32044_t2\n>> (\n>> pkey integer,\n>> label text,\n>> hidden boolean\n>> );\n>> CREATE TABLE rm32044_t3\n>> (\n>> pkey integer,\n>> val integer\n>> );\n>> CREATE TABLE rm32044_t4\n>> (\n>> pkey integer\n>> );\n>> insert into rm32044_t1 values ( 1 , 'row1');\n>> insert into rm32044_t1 values ( 2 , 'row2');\n>> insert into rm32044_t2 values ( 1 , 'hidden', true);\n>> insert into rm32044_t2 values ( 2 , 'visible', false);\n>> insert into rm32044_t3 values (1 , 1);\n>> insert into rm32044_t3 values (2 , 1);\n>>\n>> postgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON\n>> rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON\n>> rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\n>> server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> The connection to the server was lost. Attempting reset: Failed.\n>> The connection to the server was lost. 
Attempting reset: Failed.\n>>\n>> *backtrace:*\n>> [edb@9428da9d2137 postgres]$ gdb bin/postgres\n>> data/qemu_postgres_20230911-140628_65620.core\n>> Core was generated by `postgres: edb postgres [local] SELECT '.\n>> Program terminated with signal SIGSEGV, Segmentation fault.\n>> #0 0x00000000010a8366 in heap_compute_data_size\n>> (tupleDesc=tupleDesc@entry=0x1ba3d10, values=values@entry=0x1ba4168,\n>> isnull=isnull@entry=0x1ba41a8) at heaptuple.c:227\n>> 227 VARATT_CAN_MAKE_SHORT(DatumGetPointer(val)))\n>> [Current thread is 1 (LWP 65597)]\n>> Missing separate debuginfos, use: dnf debuginfo-install\n>> glibc-2.34-60.el9.s390x libcap-2.48-8.el9.s390x\n>> libedit-3.1-37.20210216cvs.el9.s390x libffi-3.4.2-7.el9.s390x\n>> libgcc-11.3.1-4.3.el9.alma.s390x libgcrypt-1.10.0-10.el9_2.s390x\n>> libgpg-error-1.42-5.el9.s390x libstdc++-11.3.1-4.3.el9.alma.s390x\n>> libxml2-2.9.13-3.el9_2.1.s390x libzstd-1.5.1-2.el9.s390x\n>> llvm-libs-15.0.7-1.el9.s390x lz4-libs-1.9.3-5.el9.s390x\n>> ncurses-libs-6.2-8.20210508.el9.s390x openssl-libs-3.0.7-17.el9_2.s390x\n>> systemd-libs-252-14.el9_2.3.s390x xz-libs-5.2.5-8.el9_0.s390x\n>> (gdb) bt\n>> #0 0x00000000010a8366 in heap_compute_data_size\n>> (tupleDesc=tupleDesc@entry=0x1ba3d10, values=values@entry=0x1ba4168,\n>> isnull=isnull@entry=0x1ba41a8) at heaptuple.c:227\n>> #1 0x00000000010a9bb0 in heap_form_minimal_tuple\n>> (tupleDescriptor=0x1ba3d10, values=0x1ba4168, isnull=0x1ba41a8) at\n>> heaptuple.c:1484\n>> #2 0x00000000016553fa in ExecCopySlotMinimalTuple (slot=<optimized out>)\n>> at ../../../../src/include/executor/tuptable.h:472\n>> #3 tuplesort_puttupleslot (state=state@entry=0x1be4d18, slot=slot@entry=0x1ba4120)\n>> at tuplesortvariants.c:610\n>> #4 0x00000000012dc0e0 in ExecIncrementalSort (pstate=0x1acb4d8) at\n>> nodeIncrementalSort.c:716\n>> #5 0x00000000012b32c6 in ExecProcNode (node=0x1acb4d8) at\n>> ../../../src/include/executor/executor.h:273\n>> #6 ExecutePlan (execute_once=<optimized out>, dest=0x1ade698,\n>> 
direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>,\n>> operation=CMD_SELECT, use_parallel_mode=<optimized out>,\n>> planstate=0x1acb4d8, estate=0x1acb258) at execMain.c:1670\n>> #7 standard_ExecutorRun (queryDesc=0x19ad338, direction=<optimized out>,\n>> count=0, execute_once=<optimized out>) at execMain.c:365\n>> #8 0x00000000014a6ae2 in PortalRunSelect (portal=portal@entry=0x1a63558,\n>> forward=forward@entry=true, count=0, count@entry=9223372036854775807,\n>> dest=dest@entry=0x1ade698) at pquery.c:924\n>> #9 0x00000000014a84e0 in PortalRun (portal=portal@entry=0x1a63558,\n>> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\n>> run_once=run_once@entry=true, dest=dest@entry=0x1ade698,\n>> altdest=0x1ade698, qc=0x40007ff7b0) at pquery.c:768\n>> #10 0x00000000014a3c1c in exec_simple_query (\n>> query_string=0x19ea0e8 \"SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2\n>> ON rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON\n>> rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\")\n>> at postgres.c:1274\n>> #11 0x00000000014a57aa in PostgresMain (dbname=<optimized out>,\n>> username=<optimized out>) at postgres.c:4637\n>> #12 0x00000000013fdaf6 in BackendRun (port=0x1a132c0, port=0x1a132c0) at\n>> postmaster.c:4464\n>> #13 BackendStartup (port=0x1a132c0) at postmaster.c:4192\n>> #14 ServerLoop () at postmaster.c:1782\n>> #15 0x00000000013fec34 in PostmasterMain (argc=argc@entry=3,\n>> argv=argv@entry=0x19a59a0) at postmaster.c:1466\n>> #16 0x0000000001096faa in main (argc=<optimized out>, argv=0x19a59a0) at\n>> main.c:198\n>>\n>> (gdb) p val\n>> $1 = 0\n>> ```\n>>\n>> Does anybody have any idea about this?\n>>\n>> --\n>> --\n>>\n>> Thanks & Regards,\n>> Suraj kharage,\n>>\n>>\n>>\n>> edbpostgres.com\n>>\n>\n>\n> --\n> --\n>\n> Thanks & Regards,\n> Suraj kharage,\n>\n>\n>\n> edbpostgres.com\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com",
"msg_date": "Mon, 9 Oct 2023 08:21:18 +0530",
"msg_from": "Suraj Kharage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server crash on RHEL 9/s390x platform against PG16"
},
{
"msg_contents": "Here is clang version:\n\n[edb@9428da9d2137]$ clang --version\n\nclang version 15.0.7 (Red Hat 15.0.7-2.el9)\n\nTarget: s390x-ibm-linux-gnu\n\nThread model: posix\n\nInstalledDir: /usr/bin\n\n\nLet me know if any further information is needed.\n\nOn Mon, Oct 9, 2023 at 8:21 AM Suraj Kharage <[email protected]>\nwrote:\n\n> It looks like an issue with JIT. If I disable the JIT then the above query\n> runs successfully.\n>\n> postgres=# set jit to off;\n>\n> SET\n>\n> postgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON\n> rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON\n> rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\n>\n> pkey | val | pkey | label | hidden | pkey | val | pkey\n>\n> ------+------+------+---------+--------+------+-----+------\n>\n> 1 | row1 | 1 | hidden | t | 1 | 1 |\n>\n> 1 | row1 | 1 | hidden | t | 2 | 1 |\n>\n> 2 | row2 | 2 | visible | f | 1 | 1 |\n>\n> 2 | row2 | 2 | visible | f | 2 | 1 |\n>\n> (4 rows)\n>\n> Any idea on this?\n>\n> On Mon, Sep 18, 2023 at 11:20 AM Suraj Kharage <\n> [email protected]> wrote:\n>\n>> Few more details on this:\n>>\n>> (gdb) p val\n>> $1 = 0\n>> (gdb) p i\n>> $2 = 3\n>> (gdb) f 3\n>> #3 0x0000000001a1ef70 in ExecCopySlotMinimalTuple (slot=0x202e4f8) at\n>> ../../../../src/include/executor/tuptable.h:472\n>> 472 return slot->tts_ops->copy_minimal_tuple(slot);\n>> (gdb) p *slot\n>> $3 = {type = T_TupleTableSlot, tts_flags = 16, tts_nvalid = 8, tts_ops =\n>> 0x1b6dcc8 <TTSOpsVirtual>, tts_tupleDescriptor = 0x202e0e8, tts_values =\n>> 0x202e540, tts_isnull = 0x202e580, tts_mcxt = 0x1f54550, tts_tid =\n>> {ip_blkid = {bi_hi = 65535,\n>> bi_lo = 65535}, ip_posid = 0}, tts_tableOid = 0}\n>> (gdb) p *slot->tts_tupleDescriptor\n>> $2 = {natts = 8, tdtypeid = 2249, tdtypmod = -1, tdrefcount = -1, constr\n>> = 0x0, attrs = 0x202cd28}\n>>\n>> (gdb) p slot.tts_values[3]\n>> $4 = 0\n>> (gdb) p slot.tts_values[2]\n>> $5 = 1\n>> (gdb) p slot.tts_values[1]\n>> 
$6 = 34027556\n>>\n>>\n>> As per the resultslot, it has 0 value for the third attribute (column\n>> lable).\n>> Im testing this on the docker container and facing some issues with gdb\n>> hence could not able to debug it further.\n>>\n>> Here is a explain plan:\n>>\n>> postgres=# explain (verbose, costs off) SELECT * FROM rm32044_t1 LEFT\n>> JOIN rm32044_t2 ON rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN\n>> rm32044_t4 ON rm32044_t3.pkey = rm32044_t4.pkey order by\n>> rm32044_t1.pkey,label,hidden;\n>>\n>> QUERY PLAN\n>>\n>>\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Incremental Sort\n>> Output: rm32044_t1.pkey, rm32044_t1.val, rm32044_t2.pkey,\n>> rm32044_t2.label, rm32044_t2.hidden, rm32044_t3.pkey, rm32044_t3.val,\n>> rm32044_t4.pkey\n>> Sort Key: rm32044_t1.pkey, rm32044_t2.label, rm32044_t2.hidden\n>> Presorted Key: rm32044_t1.pkey\n>> -> Merge Left Join\n>> Output: rm32044_t1.pkey, rm32044_t1.val, rm32044_t2.pkey,\n>> rm32044_t2.label, rm32044_t2.hidden, rm32044_t3.pkey, rm32044_t3.val,\n>> rm32044_t4.pkey\n>> Merge Cond: (rm32044_t1.pkey = rm32044_t2.pkey)\n>> -> Sort\n>> Output: rm32044_t3.pkey, rm32044_t3.val, rm32044_t4.pkey,\n>> rm32044_t1.pkey, rm32044_t1.val\n>> Sort Key: rm32044_t1.pkey\n>> -> Nested Loop\n>> Output: rm32044_t3.pkey, rm32044_t3.val,\n>> rm32044_t4.pkey, rm32044_t1.pkey, rm32044_t1.val\n>> -> Merge Left Join\n>> Output: rm32044_t3.pkey, rm32044_t3.val,\n>> rm32044_t4.pkey\n>> Merge Cond: (rm32044_t3.pkey = rm32044_t4.pkey)\n>> -> Sort\n>> Output: rm32044_t3.pkey, rm32044_t3.val\n>> Sort Key: rm32044_t3.pkey\n>> -> Seq Scan on public.rm32044_t3\n>> Output: rm32044_t3.pkey,\n>> rm32044_t3.val\n>> -> Sort\n>> Output: rm32044_t4.pkey\n>> Sort Key: rm32044_t4.pkey\n>> -> Seq Scan on public.rm32044_t4\n>> Output: rm32044_t4.pkey\n>> -> Materialize\n>> Output: rm32044_t1.pkey, rm32044_t1.val\n>> -> Seq Scan on 
public.rm32044_t1\n>> Output: rm32044_t1.pkey, rm32044_t1.val\n>> -> Sort\n>> Output: rm32044_t2.pkey, rm32044_t2.label,\n>> rm32044_t2.hidden\n>> Sort Key: rm32044_t2.pkey\n>> -> Seq Scan on public.rm32044_t2\n>> Output: rm32044_t2.pkey, rm32044_t2.label,\n>> rm32044_t2.hidden\n>> (34 rows)\n>>\n>>\n>> It seems like while building the innerslot for merge join, the value for\n>> attnum 1 is not getting fetched correctly.\n>>\n>> On Tue, Sep 12, 2023 at 3:27 PM Suraj Kharage <\n>> [email protected]> wrote:\n>>\n>>> Hi,\n>>>\n>>> Found server crash on RHEL 9/s390x platform with below test case -\n>>>\n>>> *Machine details:*\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> *[edb@9428da9d2137 postgres]$ cat /etc/redhat-release AlmaLinux release\n>>> 9.2 (Turquoise Kodkod)[edb@9428da9d2137 postgres]$ lscpuArchitecture:\n>>> s390x CPU op-mode(s): 32-bit, 64-bit Address sizes: 39\n>>> bits physical, 48 bits virtual Byte Order: Big Endian*\n>>> *Configure command:*\n>>> ./configure --prefix=/home/edb/postgres/ --with-lz4 --with-zstd\n>>> --with-llvm --with-perl --with-python --with-tcl --with-openssl\n>>> --enable-nls --with-libxml --with-libxslt --with-systemd --with-libcurl\n>>> --without-icu --enable-debug --enable-cassert --with-pgport=5414\n>>>\n>>>\n>>> *Test case:*\n>>> CREATE TABLE rm32044_t1\n>>> (\n>>> pkey integer,\n>>> val text\n>>> );\n>>> CREATE TABLE rm32044_t2\n>>> (\n>>> pkey integer,\n>>> label text,\n>>> hidden boolean\n>>> );\n>>> CREATE TABLE rm32044_t3\n>>> (\n>>> pkey integer,\n>>> val integer\n>>> );\n>>> CREATE TABLE rm32044_t4\n>>> (\n>>> pkey integer\n>>> );\n>>> insert into rm32044_t1 values ( 1 , 'row1');\n>>> insert into rm32044_t1 values ( 2 , 'row2');\n>>> insert into rm32044_t2 values ( 1 , 'hidden', true);\n>>> insert into rm32044_t2 values ( 2 , 'visible', false);\n>>> insert into rm32044_t3 values (1 , 1);\n>>> insert into rm32044_t3 values (2 , 1);\n>>>\n>>> postgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON\n>>> rm32044_t1.pkey = 
rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON\n>>> rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\n>>> server closed the connection unexpectedly\n>>> This probably means the server terminated abnormally\n>>> before or while processing the request.\n>>> The connection to the server was lost. Attempting reset: Failed.\n>>> The connection to the server was lost. Attempting reset: Failed.\n>>>\n>>> *backtrace:*\n>>> [edb@9428da9d2137 postgres]$ gdb bin/postgres\n>>> data/qemu_postgres_20230911-140628_65620.core\n>>> Core was generated by `postgres: edb postgres [local] SELECT '.\n>>> Program terminated with signal SIGSEGV, Segmentation fault.\n>>> #0 0x00000000010a8366 in heap_compute_data_size\n>>> (tupleDesc=tupleDesc@entry=0x1ba3d10, values=values@entry=0x1ba4168,\n>>> isnull=isnull@entry=0x1ba41a8) at heaptuple.c:227\n>>> 227 VARATT_CAN_MAKE_SHORT(DatumGetPointer(val)))\n>>> [Current thread is 1 (LWP 65597)]\n>>> Missing separate debuginfos, use: dnf debuginfo-install\n>>> glibc-2.34-60.el9.s390x libcap-2.48-8.el9.s390x\n>>> libedit-3.1-37.20210216cvs.el9.s390x libffi-3.4.2-7.el9.s390x\n>>> libgcc-11.3.1-4.3.el9.alma.s390x libgcrypt-1.10.0-10.el9_2.s390x\n>>> libgpg-error-1.42-5.el9.s390x libstdc++-11.3.1-4.3.el9.alma.s390x\n>>> libxml2-2.9.13-3.el9_2.1.s390x libzstd-1.5.1-2.el9.s390x\n>>> llvm-libs-15.0.7-1.el9.s390x lz4-libs-1.9.3-5.el9.s390x\n>>> ncurses-libs-6.2-8.20210508.el9.s390x openssl-libs-3.0.7-17.el9_2.s390x\n>>> systemd-libs-252-14.el9_2.3.s390x xz-libs-5.2.5-8.el9_0.s390x\n>>> (gdb) bt\n>>> #0 0x00000000010a8366 in heap_compute_data_size\n>>> (tupleDesc=tupleDesc@entry=0x1ba3d10, values=values@entry=0x1ba4168,\n>>> isnull=isnull@entry=0x1ba41a8) at heaptuple.c:227\n>>> #1 0x00000000010a9bb0 in heap_form_minimal_tuple\n>>> (tupleDescriptor=0x1ba3d10, values=0x1ba4168, isnull=0x1ba41a8) at\n>>> heaptuple.c:1484\n>>> #2 0x00000000016553fa in ExecCopySlotMinimalTuple (slot=<optimized\n>>> out>) at 
../../../../src/include/executor/tuptable.h:472\n>>> #3 tuplesort_puttupleslot (state=state@entry=0x1be4d18, slot=slot@entry=0x1ba4120)\n>>> at tuplesortvariants.c:610\n>>> #4 0x00000000012dc0e0 in ExecIncrementalSort (pstate=0x1acb4d8) at\n>>> nodeIncrementalSort.c:716\n>>> #5 0x00000000012b32c6 in ExecProcNode (node=0x1acb4d8) at\n>>> ../../../src/include/executor/executor.h:273\n>>> #6 ExecutePlan (execute_once=<optimized out>, dest=0x1ade698,\n>>> direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>,\n>>> operation=CMD_SELECT, use_parallel_mode=<optimized out>,\n>>> planstate=0x1acb4d8, estate=0x1acb258) at execMain.c:1670\n>>> #7 standard_ExecutorRun (queryDesc=0x19ad338, direction=<optimized\n>>> out>, count=0, execute_once=<optimized out>) at execMain.c:365\n>>> #8 0x00000000014a6ae2 in PortalRunSelect (portal=portal@entry=0x1a63558,\n>>> forward=forward@entry=true, count=0, count@entry=9223372036854775807,\n>>> dest=dest@entry=0x1ade698) at pquery.c:924\n>>> #9 0x00000000014a84e0 in PortalRun (portal=portal@entry=0x1a63558,\n>>> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\n>>> run_once=run_once@entry=true, dest=dest@entry=0x1ade698,\n>>> altdest=0x1ade698, qc=0x40007ff7b0) at pquery.c:768\n>>> #10 0x00000000014a3c1c in exec_simple_query (\n>>> query_string=0x19ea0e8 \"SELECT * FROM rm32044_t1 LEFT JOIN\n>>> rm32044_t2 ON rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN\n>>> rm32044_t4 ON rm32044_t3.pkey = rm32044_t4.pkey order by\n>>> rm32044_t1.pkey,label,hidden;\") at postgres.c:1274\n>>> #11 0x00000000014a57aa in PostgresMain (dbname=<optimized out>,\n>>> username=<optimized out>) at postgres.c:4637\n>>> #12 0x00000000013fdaf6 in BackendRun (port=0x1a132c0, port=0x1a132c0) at\n>>> postmaster.c:4464\n>>> #13 BackendStartup (port=0x1a132c0) at postmaster.c:4192\n>>> #14 ServerLoop () at postmaster.c:1782\n>>> #15 0x00000000013fec34 in PostmasterMain (argc=argc@entry=3,\n>>> argv=argv@entry=0x19a59a0) at 
postmaster.c:1466\n>>> #16 0x0000000001096faa in main (argc=<optimized out>, argv=0x19a59a0) at\n>>> main.c:198\n>>>\n>>> (gdb) p val\n>>> $1 = 0\n>>> ```\n>>>\n>>> Does anybody have any idea about this?\n>>>\n>>> --\n>>> --\n>>>\n>>> Thanks & Regards,\n>>> Suraj kharage,\n>>>\n>>>\n>>>\n>>> edbpostgres.com\n>>>\n>>\n>>\n>> --\n>> --\n>>\n>> Thanks & Regards,\n>> Suraj kharage,\n>>\n>>\n>>\n>> edbpostgres.com\n>>\n>\n>\n> --\n> --\n>\n> Thanks & Regards,\n> Suraj kharage,\n>\n>\n>\n> edbpostgres.com\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com\n\nHere is clang version:\n[edb@9428da9d2137]$ clang --version\nclang version 15.0.7 (Red Hat 15.0.7-2.el9)\nTarget: s390x-ibm-linux-gnu\nThread model: posix\nInstalledDir: /usr/binLet me know if any further information is needed.On Mon, Oct 9, 2023 at 8:21 AM Suraj Kharage <[email protected]> wrote:It looks like an issue with JIT. If I disable the JIT then the above query runs successfully.\npostgres=# set jit to off;\nSET\npostgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\n pkey | val | pkey | label | hidden | pkey | val | pkey \n------+------+------+---------+--------+------+-----+------\n 1 | row1 | 1 | hidden | t | 1 | 1 | \n 1 | row1 | 1 | hidden | t | 2 | 1 | \n 2 | row2 | 2 | visible | f | 1 | 1 | \n 2 | row2 | 2 | visible | f | 2 | 1 | \n(4 rows)Any idea on this?On Mon, Sep 18, 2023 at 11:20 AM Suraj Kharage <[email protected]> wrote:Few more details on this:\t\t\t (gdb) p val$1 = 0(gdb) p i$2 = 3(gdb) f 3#3 0x0000000001a1ef70 in ExecCopySlotMinimalTuple (slot=0x202e4f8) at ../../../../src/include/executor/tuptable.h:472472\t\treturn slot->tts_ops->copy_minimal_tuple(slot);(gdb) p *slot$3 = {type = T_TupleTableSlot, tts_flags = 16, tts_nvalid = 8, tts_ops = 0x1b6dcc8 <TTSOpsVirtual>, tts_tupleDescriptor = 0x202e0e8, tts_values = 
0x202e540, tts_isnull = 0x202e580, tts_mcxt = 0x1f54550, tts_tid = {ip_blkid = {bi_hi = 65535, bi_lo = 65535}, ip_posid = 0}, tts_tableOid = 0}(gdb) p *slot->tts_tupleDescriptor$2 = {natts = 8, tdtypeid = 2249, tdtypmod = -1, tdrefcount = -1, constr = 0x0, attrs = 0x202cd28}(gdb) p slot.tts_values[3]$4 = 0(gdb) p slot.tts_values[2]$5 = 1(gdb) p slot.tts_values[1]$6 = 34027556As per the resultslot, it has 0 value for the third attribute (column lable).Im testing this on the docker container and facing some issues with gdb hence could not able to debug it further.Here is a explain plan:postgres=# explain (verbose, costs off) SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------------------- Incremental Sort Output: rm32044_t1.pkey, rm32044_t1.val, rm32044_t2.pkey, rm32044_t2.label, rm32044_t2.hidden, rm32044_t3.pkey, rm32044_t3.val, rm32044_t4.pkey Sort Key: rm32044_t1.pkey, rm32044_t2.label, rm32044_t2.hidden Presorted Key: rm32044_t1.pkey -> Merge Left Join Output: rm32044_t1.pkey, rm32044_t1.val, rm32044_t2.pkey, rm32044_t2.label, rm32044_t2.hidden, rm32044_t3.pkey, rm32044_t3.val, rm32044_t4.pkey Merge Cond: (rm32044_t1.pkey = rm32044_t2.pkey) -> Sort Output: rm32044_t3.pkey, rm32044_t3.val, rm32044_t4.pkey, rm32044_t1.pkey, rm32044_t1.val Sort Key: rm32044_t1.pkey -> Nested Loop Output: rm32044_t3.pkey, rm32044_t3.val, rm32044_t4.pkey, rm32044_t1.pkey, rm32044_t1.val -> Merge Left Join Output: rm32044_t3.pkey, rm32044_t3.val, rm32044_t4.pkey Merge Cond: (rm32044_t3.pkey = rm32044_t4.pkey) -> Sort Output: rm32044_t3.pkey, rm32044_t3.val Sort Key: rm32044_t3.pkey -> Seq Scan on public.rm32044_t3 Output: rm32044_t3.pkey, rm32044_t3.val -> Sort Output: rm32044_t4.pkey Sort Key: 
rm32044_t4.pkey -> Seq Scan on public.rm32044_t4 Output: rm32044_t4.pkey -> Materialize Output: rm32044_t1.pkey, rm32044_t1.val -> Seq Scan on public.rm32044_t1 Output: rm32044_t1.pkey, rm32044_t1.val -> Sort Output: rm32044_t2.pkey, rm32044_t2.label, rm32044_t2.hidden Sort Key: rm32044_t2.pkey -> Seq Scan on public.rm32044_t2 Output: rm32044_t2.pkey, rm32044_t2.label, rm32044_t2.hidden(34 rows)It seems like while building the innerslot for merge join, the value for attnum 1 is not getting fetched correctly.On Tue, Sep 12, 2023 at 3:27 PM Suraj Kharage <[email protected]> wrote:Hi,Found server crash on RHEL 9/s390x platform with below test case - Machine details:[edb@9428da9d2137 postgres]$ cat /etc/redhat-release AlmaLinux release 9.2 (Turquoise Kodkod)[edb@9428da9d2137 postgres]$ lscpuArchitecture: s390x CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Big EndianConfigure command:./configure --prefix=/home/edb/postgres/ --with-lz4 --with-zstd --with-llvm --with-perl --with-python --with-tcl --with-openssl --enable-nls --with-libxml --with-libxslt --with-systemd --with-libcurl --without-icu --enable-debug --enable-cassert --with-pgport=5414Test case:CREATE TABLE rm32044_t1( pkey integer, val text);CREATE TABLE rm32044_t2( pkey integer, label text, hidden boolean);CREATE TABLE rm32044_t3( pkey integer, val integer);CREATE TABLE rm32044_t4( pkey integer);insert into rm32044_t1 values ( 1 , 'row1');insert into rm32044_t1 values ( 2 , 'row2');insert into rm32044_t2 values ( 1 , 'hidden', true);insert into rm32044_t2 values ( 2 , 'visible', false);insert into rm32044_t3 values (1 , 1);insert into rm32044_t3 values (2 , 1);postgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;server closed the connection unexpectedly\tThis probably means the server terminated abnormally\tbefore or 
while processing the request.The connection to the server was lost. Attempting reset: Failed.The connection to the server was lost. Attempting reset: Failed.backtrace:[edb@9428da9d2137 postgres]$ gdb bin/postgres data/qemu_postgres_20230911-140628_65620.coreCore was generated by `postgres: edb postgres [local] SELECT '.Program terminated with signal SIGSEGV, Segmentation fault.#0 0x00000000010a8366 in heap_compute_data_size (tupleDesc=tupleDesc@entry=0x1ba3d10, values=values@entry=0x1ba4168, isnull=isnull@entry=0x1ba41a8) at heaptuple.c:227227\t\t\t\tVARATT_CAN_MAKE_SHORT(DatumGetPointer(val)))[Current thread is 1 (LWP 65597)]Missing separate debuginfos, use: dnf debuginfo-install glibc-2.34-60.el9.s390x libcap-2.48-8.el9.s390x libedit-3.1-37.20210216cvs.el9.s390x libffi-3.4.2-7.el9.s390x libgcc-11.3.1-4.3.el9.alma.s390x libgcrypt-1.10.0-10.el9_2.s390x libgpg-error-1.42-5.el9.s390x libstdc++-11.3.1-4.3.el9.alma.s390x libxml2-2.9.13-3.el9_2.1.s390x libzstd-1.5.1-2.el9.s390x llvm-libs-15.0.7-1.el9.s390x lz4-libs-1.9.3-5.el9.s390x ncurses-libs-6.2-8.20210508.el9.s390x openssl-libs-3.0.7-17.el9_2.s390x systemd-libs-252-14.el9_2.3.s390x xz-libs-5.2.5-8.el9_0.s390x(gdb) bt#0 0x00000000010a8366 in heap_compute_data_size (tupleDesc=tupleDesc@entry=0x1ba3d10, values=values@entry=0x1ba4168, isnull=isnull@entry=0x1ba41a8) at heaptuple.c:227#1 0x00000000010a9bb0 in heap_form_minimal_tuple (tupleDescriptor=0x1ba3d10, values=0x1ba4168, isnull=0x1ba41a8) at heaptuple.c:1484#2 0x00000000016553fa in ExecCopySlotMinimalTuple (slot=<optimized out>) at ../../../../src/include/executor/tuptable.h:472#3 tuplesort_puttupleslot (state=state@entry=0x1be4d18, slot=slot@entry=0x1ba4120) at tuplesortvariants.c:610#4 0x00000000012dc0e0 in ExecIncrementalSort (pstate=0x1acb4d8) at nodeIncrementalSort.c:716#5 0x00000000012b32c6 in ExecProcNode (node=0x1acb4d8) at ../../../src/include/executor/executor.h:273#6 ExecutePlan (execute_once=<optimized out>, dest=0x1ade698, direction=<optimized out>, 
numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1acb4d8, estate=0x1acb258) at execMain.c:1670#7 standard_ExecutorRun (queryDesc=0x19ad338, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:365#8 0x00000000014a6ae2 in PortalRunSelect (portal=portal@entry=0x1a63558, forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0x1ade698) at pquery.c:924#9 0x00000000014a84e0 in PortalRun (portal=portal@entry=0x1a63558, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x1ade698, altdest=0x1ade698, qc=0x40007ff7b0) at pquery.c:768#10 0x00000000014a3c1c in exec_simple_query ( query_string=0x19ea0e8 \"SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\") at postgres.c:1274#11 0x00000000014a57aa in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4637#12 0x00000000013fdaf6 in BackendRun (port=0x1a132c0, port=0x1a132c0) at postmaster.c:4464#13 BackendStartup (port=0x1a132c0) at postmaster.c:4192#14 ServerLoop () at postmaster.c:1782#15 0x00000000013fec34 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x19a59a0) at postmaster.c:1466#16 0x0000000001096faa in main (argc=<optimized out>, argv=0x19a59a0) at main.c:198(gdb) p val$1 = 0```Does anybody have any idea about this?-- --Thanks & Regards, Suraj kharage, edbpostgres.com\n-- --Thanks & Regards, Suraj kharage, edbpostgres.com\n-- --Thanks & Regards, Suraj kharage, edbpostgres.com\n-- --Thanks & Regards, Suraj kharage, edbpostgres.com",
"msg_date": "Thu, 12 Oct 2023 16:12:18 +0530",
"msg_from": "Suraj Kharage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server crash on RHEL 9/s390x platform against PG16"
},
{
"msg_contents": "On Sun, Oct 8, 2023 at 10:55 PM Suraj Kharage <\[email protected]> wrote:\n\n> It looks like an issue with JIT. If I disable the JIT then the above query\n> runs successfully.\n>\n> postgres=# set jit to off;\n>\n> SET\n>\n> postgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON\n> rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON\n> rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\n>\n> pkey | val | pkey | label | hidden | pkey | val | pkey\n>\n> ------+------+------+---------+--------+------+-----+------\n>\n> 1 | row1 | 1 | hidden | t | 1 | 1 |\n>\n> 1 | row1 | 1 | hidden | t | 2 | 1 |\n>\n> 2 | row2 | 2 | visible | f | 1 | 1 |\n>\n> 2 | row2 | 2 | visible | f | 2 | 1 |\n>\n> (4 rows)\n>\n> Any idea on this?\n>\n\nNo, but I found a few previous threads complaining about JIT not working on\ns390x.\n\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/20200715091509.GA3354074%40msg.df7cb.de\n\nThe most interesting email I found in those threads was this one:\n\nhttp://postgr.es/m/[email protected]\n\nThe backtrace there is different from the one you posted here in\nsignificant ways, but it seems like both that case and this one involve a\nnull pointer showing up for a non-null pass-by-reference datum. That\ndoesn't seem like a whole lot to go on, but maybe somebody who understands\nthe JIT stuff better than I do will have an idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\nOn Sun, Oct 8, 2023 at 10:55 PM Suraj Kharage <[email protected]> wrote:It looks like an issue with JIT. 
If I disable the JIT then the above query runs successfully.\npostgres=# set jit to off;\nSET\npostgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON rm32044_t1.pkey = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON rm32044_t3.pkey = rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\n pkey | val | pkey | label | hidden | pkey | val | pkey \n------+------+------+---------+--------+------+-----+------\n 1 | row1 | 1 | hidden | t | 1 | 1 | \n 1 | row1 | 1 | hidden | t | 2 | 1 | \n 2 | row2 | 2 | visible | f | 1 | 1 | \n 2 | row2 | 2 | visible | f | 2 | 1 | \n(4 rows)Any idea on this?No, but I found a few previous threads complaining about JIT not working on s390x.https://www.postgresql.org/message-id/[email protected]://www.postgresql.org/message-id/[email protected]://www.postgresql.org/message-id/20200715091509.GA3354074%40msg.df7cb.deThe most interesting email I found in those threads was this one:http://postgr.es/m/[email protected] The backtrace there is different from the one you posted here in significant ways, but it seems like both that case and this one involve a null pointer showing up for a non-null pass-by-reference datum. That doesn't seem like a whole lot to go on, but maybe somebody who understands the JIT stuff better than I do will have an idea.-- Robert HaasEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 16 Oct 2023 11:06:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server crash on RHEL 9/s390x platform against PG16"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-12 15:27:21 +0530, Suraj Kharage wrote:\n> *[edb@9428da9d2137 postgres]$ cat /etc/redhat-release AlmaLinux release 9.2\n> (Turquoise Kodkod)[edb@9428da9d2137 postgres]$ lscpuArchitecture:\n> s390x CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits\n\nCan you provide the rest of the lscpu output? There have been issues with Z14\nvs Z15:\nhttps://github.com/llvm/llvm-project/issues/53009\n\nYou're apparently not hitting that, but given that fact, you either are on a\nslightly older CPU, or you have applied a patch to work around it. Because\notherwise your build instructions below would hit that problem, I think.\n\n\n> physical, 48 bits virtual Byte Order: Big Endian*\n> *Configure command:*\n> ./configure --prefix=/home/edb/postgres/ --with-lz4 --with-zstd --with-llvm\n> --with-perl --with-python --with-tcl --with-openssl --enable-nls\n> --with-libxml --with-libxslt --with-systemd --with-libcurl --without-icu\n> --enable-debug --enable-cassert --with-pgport=5414\n\nHm, based on \"--with-libcurl\" this isn't upstream postgres, correct? 
Have you\nverified the issue reproduces on upstream postgres?\n\n> \n> *Test case:*\n> CREATE TABLE rm32044_t1\n> (\n> pkey integer,\n> val text\n> );\n> CREATE TABLE rm32044_t2\n> (\n> pkey integer,\n> label text,\n> hidden boolean\n> );\n> CREATE TABLE rm32044_t3\n> (\n> pkey integer,\n> val integer\n> );\n> CREATE TABLE rm32044_t4\n> (\n> pkey integer\n> );\n> insert into rm32044_t1 values ( 1 , 'row1');\n> insert into rm32044_t1 values ( 2 , 'row2');\n> insert into rm32044_t2 values ( 1 , 'hidden', true);\n> insert into rm32044_t2 values ( 2 , 'visible', false);\n> insert into rm32044_t3 values (1 , 1);\n> insert into rm32044_t3 values (2 , 1);\n> \n> postgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON rm32044_t1.pkey\n> = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON rm32044_t3.pkey =\n> rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\n\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> The connection to the server was lost. Attempting reset: Failed.\n\nI tried this on both master and 16, without hitting this issue.\n\nIf you can reproduce the issue on upstream postgres, can you share more about\nyour configuration?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Oct 2023 16:47:43 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server crash on RHEL 9/s390x platform against PG16"
},
{
"msg_contents": "On Sat, Oct 21, 2023 at 5:17 AM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-09-12 15:27:21 +0530, Suraj Kharage wrote:\n> > *[edb@9428da9d2137 postgres]$ cat /etc/redhat-release AlmaLinux release\n> 9.2\n> > (Turquoise Kodkod)[edb@9428da9d2137 postgres]$ lscpuArchitecture:\n> > s390x CPU op-mode(s): 32-bit, 64-bit Address sizes: 39\n> bits\n>\n> Can you provide the rest of the lscpu output? There have been issues with\n> Z14\n> vs Z15:\n> https://github.com/llvm/llvm-project/issues/53009\n>\n> You're apparently not hitting that, but given that fact, you either are on\n> a\n> slightly older CPU, or you have applied a patch to work around it. Because\n> otherwise your build instructions below would hit that problem, I think.\n>\n>\n> > physical, 48 bits virtual Byte Order: Big Endian*\n> > *Configure command:*\n> > ./configure --prefix=/home/edb/postgres/ --with-lz4 --with-zstd\n> --with-llvm\n> > --with-perl --with-python --with-tcl --with-openssl --enable-nls\n> > --with-libxml --with-libxslt --with-systemd --with-libcurl --without-icu\n> > --enable-debug --enable-cassert --with-pgport=5414\n>\n> Hm, based on \"--with-libcurl\" this isn't upstream postgres, correct? 
Have\n> you\n> verified the issue reproduces on upstream postgres?\n>\n\nYes, I can reproduce this on upstream postgres master and v16 branch.\n\nHere are details:\n\n./configure --prefix=/home/edb/postgres/ --with-zstd --with-llvm\n--with-perl --with-python --with-tcl --with-openssl --enable-nls\n--with-libxml --with-libxslt --with-systemd --without-icu --enable-debug\n--enable-cassert --with-pgport=5414 CFLAGS=\"-g -O0\"\n\n\n\n[edb@9428da9d2137 postgres]$ cat /etc/redhat-release\n\nAlmaLinux release 9.2 (Turquoise Kodkod)\n\n\n[edb@9428da9d2137 edbas]$ lscpu\n\nArchitecture: s390x\n\n CPU op-mode(s): 32-bit, 64-bit\n\n Address sizes: 39 bits physical, 48 bits virtual\n\n Byte Order: Big Endian\n\nCPU(s): 9\n\n On-line CPU(s) list: 0-8\n\nVendor ID: GenuineIntel\n\n Model name: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz\n\n CPU family: 6\n\n Model: 158\n\n Thread(s) per core: 1\n\n Core(s) per socket: 1\n\n Socket(s): 9\n\n Stepping: 10\n\n BogoMIPS: 5200.00\n\n Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr\npge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht pbe syscall nx\npdpe1gb lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni\npclmulqdq dtes64 ds_cpl ssse3 sdbg fma cx\n\n 16 xtpr pcid sse4_1 sse4_2 movbe popcnt aes xsave\navx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 avx2\nbmi2 erms xsaveopt arat\n\nCaches (sum of all):\n\n L1d: 288 KiB (9 instances)\n\n L1i: 288 KiB (9 instances)\n\n L2: 2.3 MiB (9 instances)\n\n L3: 108 MiB (9 instances)\n\nVulnerabilities:\n\n Itlb multihit: KVM: Mitigation: VMX unsupported\n\n L1tf: Mitigation; PTE Inversion\n\n Mds: Vulnerable; SMT Host state unknown\n\n Meltdown: Vulnerable\n\n Mmio stale data: Vulnerable\n\n Spec store bypass: Vulnerable\n\n Spectre v1: Vulnerable: __user pointer sanitization and\nusercopy barriers only; no swapgs barriers\n\n Spectre v2: Vulnerable, STIBP: disabled\n\n Srbds: Unknown: Dependent on hypervisor status\n\n Tsx async abort: Not 
affected\n\n\n[edb@9428da9d2137 postgres]$ clang --version\n\nclang version 15.0.7 (Red Hat 15.0.7-2.el9)\n\nTarget: s390x-ibm-linux-gnu\n\nThread model: posix\n\nInstalledDir: /usr/bin\n\n\n[edb@9428da9d2137 postgres]$ rpm -qa | grep llvm\n\n*llvm*-libs-15.0.7-1.el9.s390x\n\n*llvm*-15.0.7-1.el9.s390x\n\n*llvm*-test-15.0.7-1.el9.s390x\n\n*llvm*-static-15.0.7-1.el9.s390x\n\n*llvm*-devel-15.0.7-1.el9.s390x\n\nPlease let me know if any further information is required.\n\n\n> >\n> > *Test case:*\n> > CREATE TABLE rm32044_t1\n> > (\n> > pkey integer,\n> > val text\n> > );\n> > CREATE TABLE rm32044_t2\n> > (\n> > pkey integer,\n> > label text,\n> > hidden boolean\n> > );\n> > CREATE TABLE rm32044_t3\n> > (\n> > pkey integer,\n> > val integer\n> > );\n> > CREATE TABLE rm32044_t4\n> > (\n> > pkey integer\n> > );\n> > insert into rm32044_t1 values ( 1 , 'row1');\n> > insert into rm32044_t1 values ( 2 , 'row2');\n> > insert into rm32044_t2 values ( 1 , 'hidden', true);\n> > insert into rm32044_t2 values ( 2 , 'visible', false);\n> > insert into rm32044_t3 values (1 , 1);\n> > insert into rm32044_t3 values (2 , 1);\n> >\n> > postgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON\n> rm32044_t1.pkey\n> > = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON rm32044_t3.pkey =\n> > rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\n>\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > The connection to the server was lost. Attempting reset: Failed.\n> > The connection to the server was lost. 
Attempting reset: Failed.\n>\n> I tried this on both master and 16, without hitting this issue.\n>\n> If you can reproduce the issue on upstream postgres, can you share more\n> about\n> your configuration?\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com\n\nOn Sat, Oct 21, 2023 at 5:17 AM Andres Freund <[email protected]> wrote:Hi,\n\nOn 2023-09-12 15:27:21 +0530, Suraj Kharage wrote:\n> *[edb@9428da9d2137 postgres]$ cat /etc/redhat-release AlmaLinux release 9.2\n> (Turquoise Kodkod)[edb@9428da9d2137 postgres]$ lscpuArchitecture:\n> s390x CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits\n\nCan you provide the rest of the lscpu output? There have been issues with Z14\nvs Z15:\nhttps://github.com/llvm/llvm-project/issues/53009\n\nYou're apparently not hitting that, but given that fact, you either are on a\nslightly older CPU, or you have applied a patch to work around it. Because\notherwise your build instructions below would hit that problem, I think.\n\n\n> physical, 48 bits virtual Byte Order: Big Endian*\n> *Configure command:*\n> ./configure --prefix=/home/edb/postgres/ --with-lz4 --with-zstd --with-llvm\n> --with-perl --with-python --with-tcl --with-openssl --enable-nls\n> --with-libxml --with-libxslt --with-systemd --with-libcurl --without-icu\n> --enable-debug --enable-cassert --with-pgport=5414\n\nHm, based on \"--with-libcurl\" this isn't upstream postgres, correct? 
Have you\nverified the issue reproduces on upstream postgres?Yes, I can reproduce this on upstream postgres master and v16 branch.Here are details:\n./configure --prefix=/home/edb/postgres/ --with-zstd --with-llvm --with-perl --with-python --with-tcl --with-openssl --enable-nls --with-libxml --with-libxslt --with-systemd --without-icu --enable-debug --enable-cassert --with-pgport=5414 CFLAGS=\"-g -O0\"[edb@9428da9d2137 postgres]$ cat /etc/redhat-releaseAlmaLinux release 9.2 (Turquoise Kodkod)[edb@9428da9d2137 edbas]$ lscpuArchitecture: s390x CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Big EndianCPU(s): 9 On-line CPU(s) list: 0-8Vendor ID: GenuineIntel Model name: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz CPU family: 6 Model: 158 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 9 Stepping: 10 BogoMIPS: 5200.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht pbe syscall nx pdpe1gb lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq dtes64 ds_cpl ssse3 sdbg fma cx 16 xtpr pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 avx2 bmi2 erms xsaveopt aratCaches (sum of all): L1d: 288 KiB (9 instances) L1i: 288 KiB (9 instances) L2: 2.3 MiB (9 instances) L3: 108 MiB (9 instances)Vulnerabilities: Itlb multihit: KVM: Mitigation: VMX unsupported L1tf: Mitigation; PTE Inversion Mds: Vulnerable; SMT Host state unknown Meltdown: Vulnerable Mmio stale data: Vulnerable Spec store bypass: Vulnerable Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Spectre v2: Vulnerable, STIBP: disabled Srbds: Unknown: Dependent on hypervisor status Tsx async abort: Not affected[edb@9428da9d2137 postgres]$ clang --versionclang version 15.0.7 (Red Hat 15.0.7-2.el9)Target: s390x-ibm-linux-gnuThread model: posixInstalledDir: /usr/bin[edb@9428da9d2137 postgres]$ 
rpm -qa | grep llvmllvm-libs-15.0.7-1.el9.s390xllvm-15.0.7-1.el9.s390xllvm-test-15.0.7-1.el9.s390xllvm-static-15.0.7-1.el9.s390x\nllvm-devel-15.0.7-1.el9.s390x Please let me know if any further information is required.\n\n> \n> *Test case:*\n> CREATE TABLE rm32044_t1\n> (\n> pkey integer,\n> val text\n> );\n> CREATE TABLE rm32044_t2\n> (\n> pkey integer,\n> label text,\n> hidden boolean\n> );\n> CREATE TABLE rm32044_t3\n> (\n> pkey integer,\n> val integer\n> );\n> CREATE TABLE rm32044_t4\n> (\n> pkey integer\n> );\n> insert into rm32044_t1 values ( 1 , 'row1');\n> insert into rm32044_t1 values ( 2 , 'row2');\n> insert into rm32044_t2 values ( 1 , 'hidden', true);\n> insert into rm32044_t2 values ( 2 , 'visible', false);\n> insert into rm32044_t3 values (1 , 1);\n> insert into rm32044_t3 values (2 , 1);\n> \n> postgres=# SELECT * FROM rm32044_t1 LEFT JOIN rm32044_t2 ON rm32044_t1.pkey\n> = rm32044_t2.pkey, rm32044_t3 LEFT JOIN rm32044_t4 ON rm32044_t3.pkey =\n> rm32044_t4.pkey order by rm32044_t1.pkey,label,hidden;\n\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> The connection to the server was lost. Attempting reset: Failed.\n\nI tried this on both master and 16, without hitting this issue.\n\nIf you can reproduce the issue on upstream postgres, can you share more about\nyour configuration?\n\nGreetings,\n\nAndres Freund\n-- --Thanks & Regards, Suraj kharage, edbpostgres.com",
"msg_date": "Mon, 23 Oct 2023 09:36:36 +0530",
"msg_from": "Suraj Kharage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server crash on RHEL 9/s390x platform against PG16"
}
] |
[
{
"msg_contents": "Hi,\n\nI created a tiny patch that documents that the code block following\nPG_TRY() cannot have any return statement.\n\nPlease CC me, as I'm not subscribed to this list.",
"msg_date": "Tue, 12 Sep 2023 14:54:24 +0200",
"msg_from": "Serpent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Document that PG_TRY block cannot have a return statement"
},
{
"msg_contents": "Serpent <[email protected]> writes:\n> I created a tiny patch that documents that the code block following\n> PG_TRY() cannot have any return statement.\n\nAFAIK, this is wrong. The actual requirement is already stated\nin the comment:\n\n * ... The error recovery code\n * can either do PG_RE_THROW to propagate the error outwards, or do a\n * (sub)transaction abort.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Sep 2023 10:29:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that PG_TRY block cannot have a return statement"
},
{
"msg_contents": "Serpent <[email protected]> writes:\n> I'm talking about this part:\n\n> PG_TRY();\n> {\n> ... code that might throw ereport(ERROR) ...\n> }\n\nAh. Your phrasing needs work for clarity then. Also, \"return\"\nis hardly the only way to break it; break, continue, or goto\nleading out of the PG_TRY are other possibilities. Maybe more\nlike \"The XXX code must exit normally (by control reaching\nthe end) if it does not throw ereport(ERROR).\" Not quite sure\nwhat to use for XXX.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Sep 2023 11:22:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that PG_TRY block cannot have a return statement"
},
{
"msg_contents": "Hi,\n\nWhat about this wording:\n\nThe code that might throw ereport(ERROR) cannot contain any non-local\ncontrol flow other than ereport(ERROR) e.g.: return, goto, break, continue.\nIn other words, once PG_TRY() is executed, either PG_CATCH() or PG_FINALLY()\nmust be executed as well.\n\nI used 'code that might throw ereport(ERROR)' for XXX since this is what's\nused earlier in the comment.\n\nOn Tue, 12 Sept 2023 at 17:22, Tom Lane <[email protected]> wrote:\n\n> Serpent <[email protected]> writes:\n> > I'm talking about this part:\n>\n> > PG_TRY();\n> > {\n> > ... code that might throw ereport(ERROR) ...\n> > }\n>\n> Ah. Your phrasing needs work for clarity then. Also, \"return\"\n> is hardly the only way to break it; break, continue, or goto\n> leading out of the PG_TRY are other possibilities. Maybe more\n> like \"The XXX code must exit normally (by control reaching\n> the end) if it does not throw ereport(ERROR).\" Not quite sure\n> what to use for XXX.\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 13 Sep 2023 14:49:17 +0200",
"msg_from": "Serpent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document that PG_TRY block cannot have a return statement"
},
{
"msg_contents": "LGTM!\n\nSerpent <[email protected]> 于2024年8月15日周四 15:01写道:\n\n> Hi,\n>\n> What about this wording:\n>\n> The code that might throw ereport(ERROR) cannot contain any non local\n> control flow other than ereport(ERROR) e.g.: return, goto, break, continue.\n> In other words once PG_TRY() is executed, either PG_CATCH() or\n> PG_FINALLY() must be executed as well.\n>\n> I used 'code that might throw ereport(ERROR)' for XXX since this is what's\n> used earlier in the comment.\n>\n> On Tue, 12 Sept 2023 at 17:22, Tom Lane <[email protected]> wrote:\n>\n>> Serpent <[email protected]> writes:\n>> > I'm talking about this part:\n>>\n>> > PG_TRY();\n>> > {\n>> > ... code that might throw ereport(ERROR) ...\n>> > }\n>>\n>> Ah. Your phrasing needs work for clarity then. Also, \"return\"\n>> is hardly the only way to break it; break, continue, or goto\n>> leading out of the PG_TRY are other possibilities. Maybe more\n>> like \"The XXX code must exit normally (by control reaching\n>> the end) if it does not throw ereport(ERROR).\" Not quite sure\n>> what to use for XXX.\n>>\n>> regards, tom lane\n>>\n>\n\n-- \nBest regards !\nXiaoran Wang\n\nLGTM!Serpent <[email protected]> 于2024年8月15日周四 15:01写道:Hi,What about this wording:The code that might throw ereport(ERROR) cannot contain any non local control flow other than ereport(ERROR) e.g.: return, goto, break, continue.In other words once PG_TRY() is executed, either PG_CATCH() or PG_FINALLY() must be executed as well.I used 'code that might throw ereport(ERROR)' for XXX since this is what's used earlier in the comment.On Tue, 12 Sept 2023 at 17:22, Tom Lane <[email protected]> wrote:Serpent <[email protected]> writes:\n> I'm talking about this part:\n\n> PG_TRY();\n> {\n> ... code that might throw ereport(ERROR) ...\n> }\n\nAh. Your phrasing needs work for clarity then. Also, \"return\"\nis hardly the only way to break it; break, continue, or goto\nleading out of the PG_TRY are other possibilities. 
Maybe more\nlike \"The XXX code must exit normally (by control reaching\nthe end) if it does not throw ereport(ERROR).\" Not quite sure\nwhat to use for XXX.\n\n regards, tom lane\n\n-- Best regards !Xiaoran Wang",
"msg_date": "Mon, 19 Aug 2024 12:03:58 +0800",
"msg_from": "Xiaoran Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that PG_TRY block cannot have a return statement"
}
] |
[
{
"msg_contents": "\nOne of the frustrations with using the \"C\" locale (or any deterministic\nlocale) is that the following returns false:\n\n SELECT 'á' = 'á'; -- false\n\nbecause those are the unicode sequences U&'\\0061\\0301' and U&'\\00E1',\nrespectively, so memcmp() returns non-zero. But it's really the same\ncharacter with just a different representation, and if you normalize\nthem they are equal:\n\n SELECT normalize('á') = normalize('á'); -- true\n\nThe idea is to have a new data type, say \"UTEXT\", that normalizes the\ninput so that it can have an improved notion of equality while still\nusing memcmp().\n\nUnicode guarantees that \"the results of normalizing a string on one\nversion will always be the same as normalizing it on any other version,\nas long as the string contains only assigned characters according to\nboth versions\"[1]. It also guarantees that it \"will not reallocate,\nremove, or reassign\" characters[2]. That means that we can normalize in\na forward-compatible way as long as we don't allow the use of\nunassigned code points.\n\nI looked at the standard to see what it had to say, and is discusses\nnormalization, but a standard UCS string with an unassigned code point\nis not an error. Without a data type to enforce the constraint that\nthere are no unassigned code points, we can't guarantee forward\ncompatibility. Some other systems support NVARCHAR, but I didn't see\nany guarantee of normalization or blocking unassigned code points\nthere, either.\n\nUTEXT benefits:\n * slightly better natural language semantics than TEXT with\ndeterministic collation\n * still deterministic=true\n * fast memcmp()-based comparisons\n * no breaking semantic changes as unicode evolves\n\nTEXT allows unassigned code points, and generally returns the same byte\nsequences that were orgiinally entered; therefore UTEXT is not a\nreplacement for TEXT.\n\nUTEXT could be built-in or it could be an extension or in contrib. 
If\nan extension, we'd probably want to at least expose a function that can\ndetect unassigned code points, so that it's easy to be consistent with\nthe auto-generated unicode tables. I also notice that there already is\nan unassigned code points table in saslprep.c, but it seems to be\nfrozen as of Unicode 3.2, and I'm not sure why.\n\nQuestions:\n\n * Would this be useful enough to justify a new data type? Would it be\nconfusing about when to choose one versus the other?\n * Would cross-type comparisons between TEXT and UTEXT become a major\nproblem that would reduce the utility?\n * Should \"some_utext_value = some_text_value\" coerce the LHS to TEXT\nor the RHS to UTEXT?\n * Other comments or am I missing something?\n\nRegards,\n\tJeff Davis\n\n\n[1] https://unicode.org/reports/tr15/\n[2] https://www.unicode.org/policies/stability_policy.html\n\n\n",
"msg_date": "Tue, 12 Sep 2023 15:47:10 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On 13.09.23 00:47, Jeff Davis wrote:\n> The idea is to have a new data type, say \"UTEXT\", that normalizes the\n> input so that it can have an improved notion of equality while still\n> using memcmp().\n\nI think a new type like this would obviously be suboptimal because it's \nnonstandard and most people wouldn't use it.\n\nI think a better direction here would be to work toward making \nnondeterministic collations usable on the global/database level and then \nencouraging users to use those.\n\nIt's also not clear which way the performance tradeoffs would fall.\n\nNondeterministic collations are obviously going to be slower, but by how \nmuch? People have accepted moving from C locale to \"real\" locales \nbecause they needed those semantics. Would it be any worse moving from \nreal locales to \"even realer\" locales?\n\nOn the other hand, a utext type would either require a large set of its \nown functions and operators, or you would have to inject text-to-utext \ncasts in places, which would also introduce overhead.\n\n\n",
"msg_date": "Mon, 2 Oct 2023 10:47:48 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Mon, Oct 2, 2023 at 3:42 PM Peter Eisentraut <[email protected]> wrote:\n> I think a better direction here would be to work toward making\n> nondeterministic collations usable on the global/database level and then\n> encouraging users to use those.\n\nIt seems to me that this overlooks one of the major points of Jeff's\nproposal, which is that we don't reject text input that contains\nunassigned code points. That decision turns out to be really painful.\nHere, Jeff mentions normalization, but I think it's a major issue with\ncollation support. If new code points are added, users can put them\ninto the database before they are known to the collation library, and\nthen when they become known to the collation library the sort order\nchanges and indexes break. Would we endorse a proposal to make\npg_catalog.text with encoding UTF-8 reject code points that aren't yet\nknown to the collation library? To do so would be tighten things up\nconsiderably from where they stand today, and the way things stand\ntoday is already rigid enough to cause problems for some users. But if\nwe're not willing to do that then I find it easy to understand why\nJeff wants an alternative type that does.\n\nNow, there is still the question of whether such a data type would\nproperly belong in core or even contrib rather than being an\nout-of-core project. It's not obvious to me that such a data type\nwould get enough traction that we'd want it to be part of PostgreSQL\nitself. But at the same time I can certainly understand why Jeff finds\nthe status quo problematic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 2 Oct 2023 16:06:09 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 03:47:10PM -0700, Jeff Davis wrote:\n> One of the frustrations with using the \"C\" locale (or any deterministic\n> locale) is that the following returns false:\n> \n> SELECT 'á' = 'á'; -- false\n> \n> because those are the unicode sequences U&'\\0061\\0301' and U&'\\00E1',\n> respectively, so memcmp() returns non-zero. But it's really the same\n> character with just a different representation, and if you normalize\n> them they are equal:\n> \n> SELECT normalize('á') = normalize('á'); -- true\n\nI think you misunderstand Unicode normalization and equivalence. There\nis no standard Unicode `normalize()` that would cause the above equality\npredicate to be true. If you normalize to NFD (normal form decomposed)\nthen a _prefix_ of those two strings will be equal, but that's clearly\nnot what you're looking for.\n\nPostgreSQL already has Unicode normalization support, though it would be\nnice to also have form-insensitive indexing and equality predicates.\n\nThere are two ways to write 'á' in Unicode: one is pre-composed (one\ncodepoint) and the other is decomposed (two codepoints in this specific\ncase), and it would be nice to be able to preserve input form when\nstoring strings but then still be able to index and match them\nform-insensitively (in the case of 'á' both equivalent representations\nshould be considered equal, and for UNIQUE indexes they should be\nconsidered the same).\n\nYou could also have functions that perform lossy normalization in the\nsort of way that soundex does, such as first normalizing to NFD then\ndropping all combining codepoints which then could allow 'á' to be eq to\n'a'. But this would not be a Unicode normalization function.\n\nNico\n-- \n\n\n",
"msg_date": "Mon, 2 Oct 2023 15:27:08 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Mon, 2023-10-02 at 15:27 -0500, Nico Williams wrote:\n> I think you misunderstand Unicode normalization and equivalence. \n> There\n> is no standard Unicode `normalize()` that would cause the above\n> equality\n> predicate to be true. If you normalize to NFD (normal form\n> decomposed)\n> then a _prefix_ of those two strings will be equal, but that's\n> clearly\n> not what you're looking for.\n\n From [1]:\n\n\"Unicode Normalization Forms are formally defined normalizations of\nUnicode strings which make it possible to determine whether any two\nUnicode strings are equivalent to each other. Depending on the\nparticular Unicode Normalization Form, that equivalence can either be a\ncanonical equivalence or a compatibility equivalence... A binary\ncomparison of the transformed strings will then determine equivalence.\"\n\nNFC and NFD are based on Canonical Equivalence.\n\n\"Canonical equivalence is a fundamental equivalency between characters\nor sequences of characters which represent the same abstract character,\nand which when correctly displayed should always have the same visual\nappearance and behavior.\"\n\nCan you explain why NFC (the default form of normalization used by the\npostgres normalize() function), followed by memcmp(), is not the right\nthing to use to determine Canonical Equivalence?\n\nOr are you saying that Canonical Equivalence is not a useful thing to\ntest?\n\nWhat do you mean about the \"prefix\"?\n\nIn Postgres today:\n\n SELECT normalize(U&'\\0061\\0301', nfc)::bytea; -- \\xc3a1\n SELECT normalize(U&'\\00E1', nfc)::bytea; -- \\xc3a1\n\n SELECT normalize(U&'\\0061\\0301', nfd)::bytea; -- \\x61cc81\n SELECT normalize(U&'\\00E1', nfd)::bytea; -- \\x61cc81\n\nwhich looks useful to me, but I assume you are saying that it doesn't\ngeneralize well to other cases?\n\n[1] https://unicode.org/reports/tr15/\n\n> There are two ways to write 'á' in Unicode: one is pre-composed (one\n> codepoint) and the other is decomposed (two 
codepoints in this\n> specific\n> case), and it would be nice to be able to preserve input form when\n> storing strings but then still be able to index and match them\n> form-insensitively (in the case of 'á' both equivalent\n> representations\n> should be considered equal, and for UNIQUE indexes they should be\n> considered the same).\n\nSometimes preserving input differences is a good thing, other times\nit's not, depending on the context. Almost any data type has some\naspects of the input that might not be preserved -- leading zeros in a\nnumber, or whitespace in jsonb, etc.\n\nIf text is stored as normalized with NFC, it could be frustrating if\nthe retrieved string has a different binary representation than the\nsource data. But it could also be frustrating to look at two strings\nmade up of ordinary characters that look identical and for the database\nto consider them unequal.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 03 Oct 2023 12:15:10 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Mon, 2023-10-02 at 16:06 -0400, Robert Haas wrote:\n> It seems to me that this overlooks one of the major points of Jeff's\n> proposal, which is that we don't reject text input that contains\n> unassigned code points. That decision turns out to be really painful.\n\nYeah, because we lose forward-compatibility of some useful operations.\n\n> Here, Jeff mentions normalization, but I think it's a major issue\n> with\n> collation support. If new code points are added, users can put them\n> into the database before they are known to the collation library, and\n> then when they become known to the collation library the sort order\n> changes and indexes break.\n\nThe collation version number may reflect the change in understanding\nabout assigned code points that may affect collation -- though I'd like\nto understand whether this is guaranteed or not.\n\nRegardless, given that (a) we don't have a good story for migrating to\nnew collation versions; and (b) it would be painful to rebuild indexes\neven if we did; then you are right that it's a problem.\n\n> Would we endorse a proposal to make\n> pg_catalog.text with encoding UTF-8 reject code points that aren't\n> yet\n> known to the collation library? To do so would be tighten things up\n> considerably from where they stand today, and the way things stand\n> today is already rigid enough to cause problems for some users.\n\nWhat problems exist today due to the rigidity of text?\n\nI assume you mean because we reject invalid byte sequences? Yeah, I'm\nsure that causes a problem for some (especially migrations), but it's\ndifficult for me to imagine a database working well with no rules at\nall for the the basic data types.\n\n> Now, there is still the question of whether such a data type would\n> properly belong in core or even contrib rather than being an\n> out-of-core project. 
It's not obvious to me that such a data type\n> would get enough traction that we'd want it to be part of PostgreSQL\n> itself.\n\nAt minimum I think we need to have some internal functions to check for\nunassigned code points. That belongs in core, because we generate the\nunicode tables from a specific version.\n\nI also think we should expose some SQL functions to check for\nunassigned code points. That sounds useful, especially since we already\nexpose normalization functions.\n\nOne could easily imagine a domain with CHECK(NOT\ncontains_unassigned(a)). Or an extension with a data type that uses the\ninternal functions.\n\nWhether we ever get to a core data type -- and more importantly,\nwhether anyone uses it -- I'm not sure.\n\n> But at the same time I can certainly understand why Jeff finds\n> the status quo problematic.\n\nYeah, I am looking for a better compromise between:\n\n * everything is memcmp() and 'á' sometimes doesn't equal 'á'\n(depending on code point sequence)\n * everything is constantly changing, indexes break, and text\ncomparisons are slow\n\nA stable idea of unicode normalization based on using only assigned\ncode points is very tempting.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 03 Oct 2023 12:54:46 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, Oct 03, 2023 at 12:15:10PM -0700, Jeff Davis wrote:\n> On Mon, 2023-10-02 at 15:27 -0500, Nico Williams wrote:\n> > I think you misunderstand Unicode normalization and equivalence. \n> > There is no standard Unicode `normalize()` that would cause the\n> > above equality predicate to be true. If you normalize to NFD\n> > (normal form decomposed) then a _prefix_ of those two strings will\n> > be equal, but that's clearly not what you're looking for.\n\nUgh, My client is not displying 'a' correctly, thus I misunderstood your\npost.\n\n> From [1]:\n\nHere's what you wrote in your post:\n\n| [...] But it's really the same\n| character with just a different representation, and if you normalize\n| them they are equal:\n|\n| SELECT normalize('á') = normalize('á'); -- true\n\nbut my client is not displying 'a' correctly! (It displays like 'a' but\nit should display like 'á'.)\n\nBah. So I'd (mis)interpreted you as saying that normalize('a') should\nequal normalize('á'). Please disregard that part of my reply.\n\n> > There are two ways to write 'á' in Unicode: one is pre-composed (one\n> > codepoint) and the other is decomposed (two codepoints in this\n> > specific case), and it would be nice to be able to preserve input\n> > form when storing strings but then still be able to index and match\n> > them form-insensitively (in the case of 'á' both equivalent\n> > representations should be considered equal, and for UNIQUE indexes\n> > they should be considered the same).\n> \n> Sometimes preserving input differences is a good thing, other times\n> it's not, depending on the context. Almost any data type has some\n> aspects of the input that might not be preserved -- leading zeros in a\n> number, or whitespace in jsonb, etc.\n\nAlmost every Latin input mode out there produces precomposed characters\nand so they effectively produce NFC. 
I'm not sure if the same is true\nfor, e.g., Hangul (Korean) and various other scripts.\n\nBut there are things out there that produce NFD. Famously Apple's HFS+\nuses NFD (or something very close to NFD). So if you cut-n-paste things\nthat got normalized to NFD and paste them into contexts where\nnormalization isn't done, then you might start wanting to alter those\ncontexts to either normalize or be form-preserving/form-insensitive.\nSometimes you don't get to normalize, so you have to pick form-\npreserving/form-insensitive behavior.\n\n> If text is stored as normalized with NFC, it could be frustrating if\n> the retrieved string has a different binary representation than the\n> source data. But it could also be frustrating to look at two strings\n> made up of ordinary characters that look identical and for the database\n> to consider them unequal.\n\nExactly. If you have such a case you might like the option to make your\ndatabase form-preserving and form-insensitive. That means that indices\nneed to normalize strings, but tables need to store unnormalized\nstrings.\n\nZFS (filesystems are a bit like databases) does just that!\n\nNico\n-- \n\n\n",
"msg_date": "Tue, 3 Oct 2023 15:15:17 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, 2023-10-03 at 15:15 -0500, Nico Williams wrote:\n> Ugh, My client is not displying 'a' correctly\n\nUgh. Is that an argument in favor of normalization or against?\n\nI've also noticed that some fonts render the same character a bit\ndifferently depending on the constituent code points. For instance, if\nthe accent is its own code point, it seems to be more prominent than if\na single code point represents both the base character and the accent.\nThat seems to be a violation, but I can understand why that might be\nuseful.\n\n> \n> Almost every Latin input mode out there produces precomposed\n> characters\n> and so they effectively produce NFC.\n\nThe problem is not the normal case, the problem will be things like\nobscure input methods, some kind of software that's being too clever,\nor some kind of malicious user trying to confuse the database.\n\n> \n> That means that indices\n> need to normalize strings, but tables need to store unnormalized\n> strings.\n\nThat's an interesting idea. Would the equality operator normalize\nfirst, or are you saying that the index would need to recheck the\nresults?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 03 Oct 2023 15:34:44 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Mon, 2023-10-02 at 10:47 +0200, Peter Eisentraut wrote:\n> I think a better direction here would be to work toward making \n> nondeterministic collations usable on the global/database level and\n> then \n> encouraging users to use those.\n> \n> It's also not clear which way the performance tradeoffs would fall.\n> \n> Nondeterministic collations are obviously going to be slower, but by\n> how \n> much? People have accepted moving from C locale to \"real\" locales \n> because they needed those semantics. Would it be any worse moving\n> from \n> real locales to \"even realer\" locales?\n\nIf you normalize first, then you can get some semantic improvements\nwithout giving up on the stability and performance of memcmp(). That\nseems like a win with zero costs in terms of stability or performance\n(except perhaps some extra text->utext casts).\n\nGoing to a \"real\" locale gives more semantic benefits but at a very\nhigh cost: depending on a collation provider library, dealing with\ncollation changes, and performance costs. While supporting the use of\nnondeterministic collations at the database level may be a good idea,\nit's not helping to reach the compromise that I'm trying to reach in\nthis thread.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 03 Oct 2023 15:55:32 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, Oct 03, 2023 at 03:34:44PM -0700, Jeff Davis wrote:\n> On Tue, 2023-10-03 at 15:15 -0500, Nico Williams wrote:\n> > Ugh, My client is not displying 'a' correctly\n> \n> Ugh. Is that an argument in favor of normalization or against?\n\nHeheh, well, it's an argument in favor of more software getting this\nright (darn it).\n\nIt's also an argument for building a time machine so HFS+ can just\nalways have used NFC. But the existence of UTF-16 is proof that time\nmachines don't exist (or that only bad actors have them).\n\n> I've also noticed that some fonts render the same character a bit\n> differently depending on the constituent code points. For instance, if\n> the accent is its own code point, it seems to be more prominent than if\n> a single code point represents both the base character and the accent.\n> That seems to be a violation, but I can understand why that might be\n> useful.\n\nYes, that happens. Did you know that the ASCII character set was\ndesigned with overstrike in mind for typing of accented Latin\ncharacters? Unicode combining sequences are kinda like that, but more\ncomplex.\n\nYes, the idea really was that you could write a<BS>' (or '<BS>a) to get �.\nThat's how people did it with typewriters anyways.\n\n> > Almost every Latin input mode out there produces precomposed\n> > characters and so they effectively produce NFC.\n> \n> The problem is not the normal case, the problem will be things like\n> obscure input methods, some kind of software that's being too clever,\n> or some kind of malicious user trying to confuse the database.\n\n_HFS+ enters the chat_\n\n> > That means that indices\n> > need to normalize strings, but tables need to store unnormalized\n> > strings.\n> \n> That's an interesting idea. Would the equality operator normalize\n> first, or are you saying that the index would need to recheck the\n> results?\n\nYou can optimize this to avoid having to normalize first. 
Most strings\nare not equal, and they tend to differ early. And most strings will\nlikely be ASCII-mostly or in the same form anyways. So you can just\nwalk a cursor down each string looking at two bytes, and if they are\nboth ASCII then you move each cursor forward by one byte, and if then\nare not both ASCII then you take a slow path where you normalize one\ngrapheme cluster at each cursor (if necessary) and compare that. (ZFS\ndoes this.)\n\nYou can also assume ASCII-mostly, load as many bits of each string\n(padding as needed) as will fit in SIMD registers, compare and check\nthat they're all ASCII, and if not then jump to the slow path.\n\nYou can also normalize one grapheme cluster at a time when hashing\n(e.g., for hash indices), thus avoiding a large allocation if the string\nis large.\n\nNico\n-- \n\n\n",
"msg_date": "Tue, 3 Oct 2023 18:01:16 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, Oct 3, 2023 at 3:54 PM Jeff Davis <[email protected]> wrote:\n> I assume you mean because we reject invalid byte sequences? Yeah, I'm\n> sure that causes a problem for some (especially migrations), but it's\n> difficult for me to imagine a database working well with no rules at\n> all for the the basic data types.\n\nThere's a very popular commercial database where, or so I have been\nled to believe, any byte sequence at all is accepted when you try to\nput values into the database. The rumors I've heard -- I have not\nplayed with it myself -- are that when you try to do anything, byte\nsequences that are not valid in the configured encoding are treated as\nsingle-byte characters or something of that sort. So like if you had\nUTF-8 as the encoding and the first byte of the string is something\nthat can only appear as a continuation byte in UTF-8, I think that\nbyte is just treated as a separate character. I don't quite know how\nyou make all of the operations work that way, but it seems like\nthey've come up with a somewhat-consistent set of principles that are\napplied across the board. Very different from the PG philosophy, of\ncourse. And I'm not saying it's better. But it does eliminate the\nproblem of being unable to load data into the database, because in\nsuch a model there's no such thing as invalidly-encoded data. Instead,\nan encoding like UTF-8 is effectively extended so that every byte\nsequence represents *something*. Whether that something is what you\nwanted is another story.\n\nAt any rate, if we were to go in the direction of rejecting code\npoints that aren't yet assigned, or aren't yet known to the collation\nlibrary, that's another way for data loading to fail. Which feels like\nvery defensible behavior, but not what everyone wants, or is used to.\n\n> At minimum I think we need to have some internal functions to check for\n> unassigned code points. 
That belongs in core, because we generate the\n> unicode tables from a specific version.\n\nThat's a good idea.\n\n> I also think we should expose some SQL functions to check for\n> unassigned code points. That sounds useful, especially since we already\n> expose normalization functions.\n\nThat's a good idea, too.\n\n> One could easily imagine a domain with CHECK(NOT\n> contains_unassigned(a)). Or an extension with a data type that uses the\n> internal functions.\n\nYeah.\n\n> Whether we ever get to a core data type -- and more importantly,\n> whether anyone uses it -- I'm not sure.\n\nSame here.\n\n> Yeah, I am looking for a better compromise between:\n>\n> * everything is memcmp() and 'á' sometimes doesn't equal 'á'\n> (depending on code point sequence)\n> * everything is constantly changing, indexes break, and text\n> comparisons are slow\n>\n> A stable idea of unicode normalization based on using only assigned\n> code points is very tempting.\n\nThe fact that there are multiple types of normalization and multiple\nnotions of equality doesn't make this easier.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Oct 2023 13:16:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 03:47:10PM -0700, Jeff Davis wrote:\n> The idea is to have a new data type, say \"UTEXT\", that normalizes the\n> input so that it can have an improved notion of equality while still\n> using memcmp().\n\nA UTEXT type would be helpful for specifying that the text must be\nUnicode (in which transform?) even if the character data encoding for\nthe database is not UTF-8.\n\nMaybe UTF8 might be a better name for the new type, since it would\ndenote the transform (and would allow for UTF16 and UTF32 some day,\nthough it's doubtful those would ever happen).\n\nBut it's one thing to specify Unicode (and transform) in the type and\nanother to specify an NF to normalize to on insert or on lookup.\n\nHow about new column constraint keywords, such as NORMALIZE (meaning\nnormalize on insert) and NORMALIZED (meaning reject non-canonical form\ntext), with an optional parenthetical by which to specify a non-default\nform? (These would apply to TEXT as well when the default encoding for\nthe DB is UTF-8.)\n\nOne could then ALTER TABLE to add this to existing tables.\n\nThis would also make it easier to add a form-preserving/form-insensitive\nmode later if it turns out to be useful or necessary, maybe making it\nthe default for Unicode text in new tables.\n\n> Questions:\n> \n> * Would this be useful enough to justify a new data type? Would it be\n> confusing about when to choose one versus the other?\n\nYes. See above. I think I'd rather have it be called UTF8, and the\nnormalization properties of it to be specified as column constraints.\n\n> * Would cross-type comparisons between TEXT and UTEXT become a major\n> problem that would reduce the utility?\n\nMaybe when the database's encoding is UTF_8 then UTEXT (or UTF8) can be an alias\nof TEXT.\n\n> * Should \"some_utext_value = some_text_value\" coerce the LHS to TEXT\n> or the RHS to UTEXT?\n\nOoh, this is nice! If the TEXT is _not_ UTF-8 then it could be\nconverted to UTF-8. 
So I think which is RHS and which is LHS doesn't\nmatter -- it's which is UTF-8, and if both are then the only thing left\nto do is normalize, and for that I'd take the LHS' form if the LHS is\nUTF-8, else the RHS'.\n\nNico\n-- \n\n\n",
"msg_date": "Wed, 4 Oct 2023 12:23:41 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, Oct 4, 2023 at 1:27 PM Nico Williams <[email protected]> wrote:\n> A UTEXT type would be helpful for specifying that the text must be\n> Unicode (in which transform?) even if the character data encoding for\n> the database is not UTF-8.\n\nThat's actually pretty thorny ... because right now client_encoding\nspecifies the encoding to be used for all data sent to the client. So\nwould we convert the data from UTF8 to the selected client encoding?\nOr what?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Oct 2023 13:47:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
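The conversion Robert is worried about is fallible: a UTF-8 value sent to a client with a narrower client_encoding may simply have no representation there. A small Python illustration (the sample string is arbitrary):

```python
# Server-side text, conceptually UTF-8; the client asked for LATIN1.
value = "Grüße from 東京"

try:
    wire_bytes = value.encode("latin-1")  # transcode toward client_encoding
except UnicodeEncodeError:
    # The CJK characters have no LATIN1 representation, so the
    # conversion to the client's encoding cannot succeed.
    wire_bytes = None

assert wire_bytes is None
```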
{
"msg_contents": "On 2023-10-04 13:47, Robert Haas wrote:\n> On Wed, Oct 4, 2023 at 1:27 PM Nico Williams <[email protected]> \n> wrote:\n>> A UTEXT type would be helpful for specifying that the text must be\n>> Unicode (in which transform?) even if the character data encoding for\n>> the database is not UTF-8.\n> \n> That's actually pretty thorny ... because right now client_encoding\n> specifies the encoding to be used for all data sent to the client. So\n> would we convert the data from UTF8 to the selected client encoding?\n\nThe SQL standard would have me able to:\n\nCREATE TABLE foo (\n a CHARACTER VARYING CHARACTER SET UTF8,\n b CHARACTER VARYING CHARACTER SET LATIN1\n)\n\nand so on, and write character literals like\n\n_UTF8'Hello, world!' and _LATIN1'Hello, world!'\n\nand have those columns and data types independently contain what\nthey can contain, without constraints imposed by one overall\ndatabase encoding.\n\nObviously, we're far from being able to do that. But should it\nbecome desirable to get closer, would it be worthwhile to also\ntry to follow how the standard would have it look?\n\nClearly, part of the job would involve making the wire protocol\nable to transmit binary values and identify their encodings.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 04 Oct 2023 14:02:50 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, Oct 4, 2023 at 2:02 PM Chapman Flack <[email protected]> wrote:\n> Clearly, part of the job would involve making the wire protocol\n> able to transmit binary values and identify their encodings.\n\nRight. Which unfortunately is moving the goal posts into the\nstratosphere compared to any other work mentioned so far. I agree it\nwould be great. But not if you want concrete progress any time soon.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Oct 2023 14:05:58 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, 4 Oct 2023 at 14:05, Chapman Flack <[email protected]> wrote:\n\n> On 2023-10-04 13:47, Robert Haas wrote:\n>\n\n\n> The SQL standard would have me able to:\n>\n> CREATE TABLE foo (\n> a CHARACTER VARYING CHARACTER SET UTF8,\n> b CHARACTER VARYING CHARACTER SET LATIN1\n> )\n>\n> and so on, and write character literals like\n>\n> _UTF8'Hello, world!' and _LATIN1'Hello, world!'\n>\n> and have those columns and data types independently contain what\n> they can contain, without constraints imposed by one overall\n> database encoding.\n>\n> Obviously, we're far from being able to do that. But should it\n> become desirable to get closer, would it be worthwhile to also\n> try to follow how the standard would have it look?\n>\n> Clearly, part of the job would involve making the wire protocol\n> able to transmit binary values and identify their encodings.\n>\n\nI would go in the other direction (note: I’m ignoring all backward\ncompatibility considerations related to the current design of Postgres).\n\nAlways store only UTF-8 in the database, and send only UTF-8 on the wire\nprotocol. If we still want to have a concept of \"client encoding\", have the\nclient libpq take care of translating the bytes between the bytes used by\nthe caller and the bytes sent on the wire.\n\nNote that you could still define columns as you say, but the character set\nspecification would effectively act simply as a CHECK constraint on the\ncharacters allowed, essentially CHECK (column_name ~ '^[...all characters\nin encoding...]$*'). We don't allow different on-disk representations of\ndates or other data types; except when we really need to, and then we have\nmultiple data types (e.g. int vs. float) rather than different ways of\nstoring the same datatype.\n\nWhat about characters not in UTF-8? If a character is important enough for\nus to worry about in Postgres, it’s important enough to get a U+ number\nfrom the Unicode Consortium, which automatically puts it in UTF-8. 
In the\nmodern context, \"plain text\" means \"UTF-8 encoded text\", as far as I'm\nconcerned.",
"msg_date": "Wed, 4 Oct 2023 14:14:45 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, 2023-10-04 at 13:16 -0400, Robert Haas wrote:\n> any byte sequence at all is accepted when you try to\n> put values into the database.\n\nWe support SQL_ASCII, which allows something similar.\n\n> At any rate, if we were to go in the direction of rejecting code\n> points that aren't yet assigned, or aren't yet known to the collation\n> library, that's another way for data loading to fail.\n\nA failure during data loading is either a feature or a bug, depending\non whether you are the one loading the data or the one trying to make\nsense of it later ;-)\n\n> Which feels like\n> very defensible behavior, but not what everyone wants, or is used to.\n\nYeah, there are many reasons someone might want to accept unassigned\ncode points. An obvious one is if their application is on a newer\nversion of unicode where the codepoint *is* assigned.\n\n> \n> The fact that there are multiple types of normalization and multiple\n> notions of equality doesn't make this easier.\n\nNFC is really the only one that makes sense.\n\nNFD is semantically the same as NFC, but expanded into a larger\nrepresentation. NFKC/NFKD are based on a more relaxed notion of\nequality -- kind of like non-deterministic collations. These other\nforms might make sense in certain cases, but not general use.\n\nI believe that having a kind of text data type where it's stored in NFC\nand compared with memcmp() would be a good place for many users to be -\n- probably most users. It's got all the performance and stability\nbenefits of memcmp(), with slightly richer semantics. It's less likely\nthat someone malicious can confuse the database by using different\nrepresentations of the same character.\n\nThe problem is that it's not universally better for everyone: there are\ncertainly users who would prefer that the codepoints they send to the\ndatabase are preserved exactly, and also users who would like to be\nable to use unassigned code points.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 04 Oct 2023 13:15:03 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
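The relationships Jeff describes -- NFD as an expanded spelling of the same text, NFKC as a relaxed notion of equality -- can be demonstrated with Python's unicodedata module (used here only for illustration):

```python
import unicodedata

s = "\u00e9"  # 'é', precomposed

# NFD expands to the decomposed representation of the same character...
assert unicodedata.normalize("NFD", s) == "e\u0301"
# ...and NFC folds it back: same semantics, larger representation in NFD.
assert unicodedata.normalize("NFC", "e\u0301") == s

# NFKC applies compatibility mappings -- a relaxed equality, e.g. the
# 'fi' ligature U+FB01 becomes the two letters 'fi'. NFC leaves it alone.
assert unicodedata.normalize("NFKC", "\ufb01") == "fi"
assert unicodedata.normalize("NFC", "\ufb01") == "\ufb01"

# After NFC, plain byte comparison sees equivalent strings as equal.
assert (unicodedata.normalize("NFC", "caf\u00e9").encode()
        == unicodedata.normalize("NFC", "cafe\u0301").encode())
```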
{
"msg_contents": "On Wed, 2023-10-04 at 14:02 -0400, Chapman Flack wrote:\n> The SQL standard would have me able to:\n> \n> CREATE TABLE foo (\n> a CHARACTER VARYING CHARACTER SET UTF8,\n> b CHARACTER VARYING CHARACTER SET LATIN1\n> )\n> \n> and so on, and write character literals like\n> \n> _UTF8'Hello, world!' and _LATIN1'Hello, world!'\n\nIs there a use case for that? UTF-8 is able to encode any unicode code\npoint, it's relatively compact, and it's backwards-compatible with 7-\nbit ASCII. If you have a variety of text data in your system (and in\nmany cases even if not), then UTF-8 seems like the right solution.\n\nText data encoded 17 different ways requires a lot of bookkeeping in\nthe type system, and it also requires injecting a bunch of fallible\ntranscoding operators around just to compare strings.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 04 Oct 2023 13:38:15 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, Oct 04, 2023 at 01:38:15PM -0700, Jeff Davis wrote:\n> On Wed, 2023-10-04 at 14:02 -0400, Chapman Flack wrote:\n> > The SQL standard would have me able to:\n> > \n> > [...]\n> > _UTF8'Hello, world!' and _LATIN1'Hello, world!'\n> \n> Is there a use case for that? UTF-8 is able to encode any unicode code\n> point, it's relatively compact, and it's backwards-compatible with 7-\n> bit ASCII. If you have a variety of text data in your system (and in\n> many cases even if not), then UTF-8 seems like the right solution.\n> \n> Text data encoded 17 different ways requires a lot of bookkeeping in\n> the type system, and it also requires injecting a bunch of fallible\n> transcoding operators around just to compare strings.\n\nBetter that than TEXT blobs w/ the encoding given by the `CREATE\nDATABASE` or `initdb` default!\n\nIt'd be a lot _less_ fragile to have all text tagged with an encoding\n(indirectly, via its type which then denotes the encoding).\n\nThat would be a lot of work, but starting with just a UTF-8 text type\nwould be an improvement.\n\nNico\n-- \n\n\n",
"msg_date": "Wed, 4 Oct 2023 16:15:06 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On 2023-10-04 16:38, Jeff Davis wrote:\n> On Wed, 2023-10-04 at 14:02 -0400, Chapman Flack wrote:\n>> The SQL standard would have me able to:\n>> \n>> CREATE TABLE foo (\n>> a CHARACTER VARYING CHARACTER SET UTF8,\n>> b CHARACTER VARYING CHARACTER SET LATIN1\n>> )\n>> \n>> and so on\n> \n> Is there a use case for that? UTF-8 is able to encode any unicode code\n> point, it's relatively compact, and it's backwards-compatible with 7-\n> bit ASCII. If you have a variety of text data in your system (and in\n> many cases even if not), then UTF-8 seems like the right solution.\n\nWell, for what reason does anybody run PG now with the encoding set\nto anything besides UTF-8? I don't really have my finger on that pulse.\nCould it be that it bloats common strings in their local script, and\nwith enough of those to store, it could matter to use the local\nencoding that stores them more economically?\n\nAlso, while any Unicode transfer format can encode any Unicode code\npoint, I'm unsure whether it's yet the case that {any Unicode code\npoint} is a superset of every character repertoire associated with\nevery non-Unicode encoding.\n\nThe cheap glaring counterexample is SQL_ASCII. Half those code points\nare *nobody knows what Unicode character* (or even *whether*). I'm not\ninsisting that's a good thing, but it is a thing.\n\nIt might be a very tidy future to say all text is Unicode and all\nserver encodings are UTF-8, but I'm not sure it wouldn't still\nbe a good step on the way to be able to store some things in\ntheir own encodings. We have JSON and XML now, two data types\nthat are *formally defined* to accept any Unicode content, and\nwe hedge and mumble and say (well, as long as it goes in the\nserver encoding) and that makes me sad. Things like that should\nbe easy to handle even without declaring UTF-8 as a server-wide\nencoding ... 
they already are their own distinct data types, and\ncould conceivably know their own encodings.\n\nBut there again, it's possible that going with unconditional\nUTF-8 for JSON or XML documents could, in some regions, bloat them.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 04 Oct 2023 17:32:50 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, 2023-10-04 at 14:14 -0400, Isaac Morland wrote:\n> Always store only UTF-8 in the database\n\nWhat problem does that solve? I don't see our encoding support as a big\nsource of problems, given that database-wide UTF-8 already works fine.\nIn fact, some postgres features only work with UTF-8.\n\nI agree that we shouldn't add a bunch of bookkeeping and type system\nsupport for per-column encodings without a clear use case, because that\nwould have a cost. But right now it's just a database-wide thing.\n\nI don't see encodings as a major area to solve problems or innovate. At\nthe end of the day, encodings have little semantic significance, and\ntherefore limited upside and limited downside. Collations and\nnormalization get more interesting, but those are happening at a higher\nlayer than the encoding.\n\n\n> What about characters not in UTF-8?\n\nHonestly I'm not clear on this topic. Are the \"private use\" areas in\nunicode enough to cover use cases for characters not recognized by\nunicode? Which encodings in postgres can represent characters that\ncan't be automatically transcoded (without failure) to unicode?\n\nObviously if we have some kind of unicode-based type, it would only\nwork with encodings that are a subset of unicode.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 04 Oct 2023 14:37:40 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, Oct 04, 2023 at 05:32:50PM -0400, Chapman Flack wrote:\n> Well, for what reason does anybody run PG now with the encoding set\n> to anything besides UTF-8? I don't really have my finger on that pulse.\n\nBecause they still have databases that didn't use UTF-8 10 or 20 years\nago that they haven't migrated to UTF-8?\n\nIt's harder to think of why one might _want_ to store text in any\nencoding other than UTF-8 for _new_ databases.\n\nThough too there's no reason that it should be impossible other than\nlack of developer interest: as long as text is tagged with its encoding,\nit should be possible to store text in any number of encodings.\n\n> Could it be that it bloats common strings in their local script, and\n> with enough of those to store, it could matter to use the local\n> encoding that stores them more economically?\n\nUTF-8 bloat is not likely worth the trouble. UTF-8 is only clearly\nbloaty when compared to encodings with 1-byte code units, like\nISO-8859-*. For CJK UTF-8 is not much more bloaty than native\nnon-Unicode encodings like SHIFT_JIS.\n\nUTF-8 is not much bloatier than UTF-16 in general either.\n\nBloat is not really a good reason to avoid Unicode or any specific TF.\n\n> Also, while any Unicode transfer format can encode any Unicode code\n> point, I'm unsure whether it's yet the case that {any Unicode code\n> point} is a superset of every character repertoire associated with\n> every non-Unicode encoding.\n\nIt's not always been the case that Unicode is a strict superset of all\ncurrently-in-use human scripts. Making Unicode a strict superset of all\ncurrently-in-use human scripts seems to be the Unicode Consortium's aim.\n\nI think you're asking why not just use UTF-8 for everything, all the\ntime. It's a fair question. I don't have a reason to answer in the\nnegative (maybe someone else does). 
But that doesn't mean that one\ncouldn't want to store text in many encodings (e.g., for historical\nreasons).\n\nNico\n-- \n\n\n",
"msg_date": "Wed, 4 Oct 2023 17:15:47 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
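Nico's claims about relative bloat can be checked directly; this Python snippet compares byte lengths for a short mixed ASCII/Japanese string (the sample string is arbitrary):

```python
text = "PostgreSQL は速い"  # 11 ASCII characters plus 3 Japanese characters

sizes = {
    "utf-8": len(text.encode("utf-8")),          # 11*1 + 3*3 = 20 bytes
    "utf-16-le": len(text.encode("utf-16-le")),  # 14*2       = 28 bytes
    "shift_jis": len(text.encode("shift_jis")),  # 11*1 + 3*2 = 17 bytes
}

# UTF-8 comes out smaller than UTF-16 here and only modestly larger than
# the native Shift_JIS encoding, matching the argument above.
assert sizes["utf-8"] < sizes["utf-16-le"]
assert sizes["utf-8"] - sizes["shift_jis"] <= 3
```

Pure-ideographic text shifts the ratio further against UTF-8 (3 bytes per character vs. 2), but once any ASCII is mixed in the gap narrows quickly.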
{
"msg_contents": "On Wed, 2023-10-04 at 16:15 -0500, Nico Williams wrote:\n> Better that than TEXT blobs w/ the encoding given by the `CREATE\n> DATABASE` or `initdb` default!\n\n From an engineering perspective, yes, per-column encodings would be\nmore flexible. But I still don't understand who exactly would use that,\nand why.\n\nIt would take an awful lot of effort to implement and make the code\nmore complex, so we'd really need to see some serious demand for that.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 04 Oct 2023 16:01:26 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, Oct 04, 2023 at 04:01:26PM -0700, Jeff Davis wrote:\n> On Wed, 2023-10-04 at 16:15 -0500, Nico Williams wrote:\n> > Better that than TEXT blobs w/ the encoding given by the `CREATE\n> > DATABASE` or `initdb` default!\n> \n> From an engineering perspective, yes, per-column encodings would be\n> more flexible. But I still don't understand who exactly would use that,\n> and why.\n\nSay you have a bunch of text files in different encodings for reasons\n(historical). And now say you want to store them in a database so you\ncan index them and search them. Sure, you could use a filesystem, but\nyou want an RDBMS. Well, the answer to this is \"convert all those files\nto UTF-8\".\n\n> It would take an awful lot of effort to implement and make the code\n> more complex, so we'd really need to see some serious demand for that.\n\nYes, it's better to just use UTF-8.\n\nThe DB could implement conversions to/from other codesets and encodings\nfor clients that insist on it. Why would clients insist anyways?\nBetter to do the conversions at the clients.\n\nIn the middle its best to just have Unicode, and specifically UTF-8,\nthen push all conversions to the edges of the system.\n\nNico\n-- \n\n\n",
"msg_date": "Wed, 4 Oct 2023 18:43:37 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, 4 Oct 2023 at 17:37, Jeff Davis <[email protected]> wrote:\n\n> On Wed, 2023-10-04 at 14:14 -0400, Isaac Morland wrote:\n> > Always store only UTF-8 in the database\n>\n> What problem does that solve? I don't see our encoding support as a big\n> source of problems, given that database-wide UTF-8 already works fine.\n> In fact, some postgres features only work with UTF-8.\n>\n\nMy idea is in the context of a suggestion that we support specifying the\nencoding per column. I don't mean to suggest eliminating the ability to set\na server-wide encoding, although I doubt there is any use case for using\nanything other than UTF-8 except for an old database that hasn’t been\nconverted yet.\n\nI see no reason to write different strings using different encodings in the\ndata files, depending on what column they belong to. The various text types\nare all abstract data types which store sequences of characters (not\nbytes); if one wants bytes, then one has to encode them. Of course, if one\nwants UTF-8 bytes, then the encoding is, under the covers, the identity\nfunction, but conceptually it is still taking the characters stored in the\ndatabase and converting them to bytes according to a specific encoding.\n\nBy contrast, although I don’t see it as a top-priority use case, I can\nimagine somebody wanting to restrict the characters stored in a particular\ncolumn to characters that can be encoded in a particular encoding. That is\nwhat \"CHARACTER SET LATIN1\" and so on should mean.\n\n> What about characters not in UTF-8?\n>\n> Honestly I'm not clear on this topic. Are the \"private use\" areas in\n> unicode enough to cover use cases for characters not recognized by\n> unicode? 
Which encodings in postgres can represent characters that\n> can't be automatically transcoded (without failure) to unicode?\n>\n\nHere I’m just anticipating a hypothetical objection, “what about characters\nthat can’t be represented in UTF-8?” to my suggestion to always use UTF-8\nand I’m saying we shouldn’t care about them. I believe the answers to your\nquestions in this paragraph are “yes”, and “none”.",
"msg_date": "Wed, 4 Oct 2023 21:02:21 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
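Isaac's reading of \"CHARACTER SET LATIN1\" as a constraint on the characters allowed, rather than a storage format, amounts to a representability check. A minimal Python stand-in (the function name is made up for illustration):

```python
def representable_in(value: str, encoding: str) -> bool:
    """True if every character of `value` can be encoded in `encoding`."""
    try:
        value.encode(encoding)
        return True
    except UnicodeEncodeError:
        return False

assert representable_in("déjà vu", "latin-1")
assert not representable_in("naïve ☃", "latin-1")  # U+2603 is not in Latin-1
assert representable_in("naïve ☃", "utf-8")
```

Under this model the on-disk bytes could stay UTF-8 everywhere; the declared character set only narrows what the column will accept, exactly like a CHECK constraint.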
{
"msg_contents": "On Wed, Oct 4, 2023 at 9:02 PM Isaac Morland <[email protected]> wrote:\n>> > What about characters not in UTF-8?\n>>\n>> Honestly I'm not clear on this topic. Are the \"private use\" areas in\n>> unicode enough to cover use cases for characters not recognized by\n>> unicode? Which encodings in postgres can represent characters that\n>> can't be automatically transcoded (without failure) to unicode?\n>\n> Here I’m just anticipating a hypothetical objection, “what about characters that can’t be represented in UTF-8?” to my suggestion to always use UTF-8 and I’m saying we shouldn’t care about them. I believe the answers to your questions in this paragraph are “yes”, and “none”.\n\nYears ago, I remember SJIS being cited as an example of an encoding\nthat had characters which weren't part of Unicode. I don't know\nwhether this is still a live issue.\n\nBut I do think that sometimes users are reluctant to perform encoding\nconversions on the data that they have. Sometimes they're not\ncompletely certain what encoding their data is in, and sometimes\nthey're worried that the encoding conversion might fail or produce\nwrong answers. In theory, if your existing data is validly encoded and\nyou know what encoding it's in and it's easily mapped onto UTF-8,\nthere's no problem. You can just transcode it and be done. But a lot\nof times the reality is a lot messier than that.\n\nWhich gives me some sympathy with the idea of wanting multiple\ncharacter sets within a database. Such a feature exists in some other\ndatabase systems and is, presumably, useful to some people. On the\nother hand, to do that in PostgreSQL, we'd need to propagate the\ncharacter set/encoding information into all of the places that\ncurrently get the typmod and collation, and that is not a small number\nof places. 
It's a lot of infrastructure for the project to carry\naround for a feature that's probably only going to continue to become\nless relevant.\n\nI suppose you never know, though. Maybe the Unicode consortium will\nexplode in a tornado of fiery rage and there will be dueling standards\nmaking war over the proper way of representing an emoji of a dog\neating broccoli for decades to come. In that case, our hypothetical\nmulti-character-set feature might seem prescient.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 5 Oct 2023 07:31:54 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Thu, 5 Oct 2023 at 07:32, Robert Haas <[email protected]> wrote:\n\n\n> But I do think that sometimes users are reluctant to perform encoding\n> conversions on the data that they have. Sometimes they're not\n> completely certain what encoding their data is in, and sometimes\n> they're worried that the encoding conversion might fail or produce\n> wrong answers. In theory, if your existing data is validly encoded and\n> you know what encoding it's in and it's easily mapped onto UTF-8,\n> there's no problem. You can just transcode it and be done. But a lot\n> of times the reality is a lot messier than that.\n>\n\nIn the case you describe, the users don’t have text at all; they have\nbytes, and a vague belief about what encoding the bytes might be in and\ntherefore what characters they are intended to represent. The correct way\nto store that in the database is using bytea. Text types should be for when\nyou know what characters you want to store. In this scenario, the\nimplementation detail of what encoding the database uses internally to\nwrite the data on the disk doesn't matter, any more than it matters to a\ncasual user how a table is stored on disk.\n\nSimilarly, I don't believe we have a \"YMD\" data type which stores year,\nmonth, and day, without being specific as to whether it's Gregorian or\nJulian; if you have that situation, make a 3-tuple type or do something\nelse. \"Date\" is for when you actually know what day you want to record.",
"msg_date": "Thu, 5 Oct 2023 09:10:23 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Thu, 2023-10-05 at 07:31 -0400, Robert Haas wrote:\n> It's a lot of infrastructure for the project to carry\n> around for a feature that's probably only going to continue to become\n> less relevant.\n\nAgreed, at least until we understand the set of users per-column\nencoding is important to. I acknowledge that the presence of per-column\nencoding in the standard is some kind of signal there, but not enough\nby itself to justify something so invasive.\n\n> I suppose you never know, though.\n\nOn balance I think it's better to keep the code clean enough that we\ncan adapt to whatever unanticipated things happen in the future; rather\nthan to make the code very complicated trying to anticipate everything,\nand then being completely unable to adapt it when something\nunanticipated happens anyway.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 05 Oct 2023 10:30:51 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Thu, Oct 05, 2023 at 07:31:54AM -0400, Robert Haas wrote:\n> [...] On the other hand, to do that in PostgreSQL, we'd need to\n> propagate the character set/encoding information into all of the\n> places that currently get the typmod and collation, and that is not a\n> small number of places. It's a lot of infrastructure for the project\n> to carry around for a feature that's probably only going to continue\n> to become less relevant.\n\nText+encoding can be just like bytea with a one- or two-byte prefix\nindicating what codeset+encoding it's in. That'd be how to encode\nsuch text values on the wire, though on disk the column's type should\nindicate the codeset+encoding, so no need to add a prefix to the value.\n\nComplexity would creep in around when and whether to perform automatic\nconversions. The easy answer would be \"never, on the server side\", but\non the client side it might be useful to convert to/from the locale's\ncodeset+encoding when displaying to the user or accepting user input.\n\nIf there's no automatic server-side codeset/encoding conversions then\nthe server-side cost of supporting non-UTF-8 text should not be too high\ndev-wise -- it's just (famous last words) a generic text type\nparameterized by codeset+ encoding type. There would not even be a hard\nneed for functions for conversions, though there would be demand for\nthem.\n\nBut I agree that if there's no need, there's no need. UTF-8 is great,\nand if only all PG users would just switch then there's not much more to\ndo.\n\nNico\n-- \n\n\n",
"msg_date": "Thu, 5 Oct 2023 14:14:54 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
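Nico's one- or two-byte-prefix wire format can be sketched as follows; the tag values and the codeset table are invented for illustration, not an actual protocol proposal:

```python
# Hypothetical one-byte codeset tags for text values on the wire.
CODESETS = {1: "utf-8", 2: "latin-1", 3: "shift_jis"}
TAGS = {name: tag for tag, name in CODESETS.items()}

def pack_text(value: str, encoding: str = "utf-8") -> bytes:
    """Prefix the encoded bytes with a one-byte codeset tag."""
    return bytes([TAGS[encoding]]) + value.encode(encoding)

def unpack_text(wire: bytes) -> str:
    """Recover the string by dispatching on the tag byte."""
    return wire[1:].decode(CODESETS[wire[0]])

assert unpack_text(pack_text("grüß", "latin-1")) == "grüß"
assert unpack_text(pack_text("héllo")) == "héllo"
```

As noted in the message, on disk the column's declared type would carry the codeset, so the prefix would only be needed for values in transit.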
{
"msg_contents": "On Thu, 2023-10-05 at 09:10 -0400, Isaac Morland wrote:\n> In the case you describe, the users don't have text at all; they have\n> bytes, and a vague belief about what encoding the bytes might be in\n> and therefore what characters they are intended to represent. The\n> correct way to store that in the database is using bytea.\n\nI wouldn't be so absolute. It's text data to the user, and is\npresumably working fine for them now, and if they switched to bytea\ntoday then 'foo' would show up as '\\x666f6f' in psql.\n\nThe point is that this is a somewhat messy problem because there's so\nmuch software out there that treats byte strings and textual data\ninterchangeably. Rust goes the extra mile to organize all of this, and\nit ends up with:\n\n * String -- always UTF-8, never NUL-terminated\n * CString -- NUL-terminated byte sequence with no internal NULs\n * OsString[3] -- needed to make a Path[4], which is needed to open a\nfile[5]\n * Vec<u8> -- any byte sequence\n\nand I suppose we could work towards offering better support for these\ndifferent types, the casts between them, and delivering them in a form\nthe client can understand. But I wouldn't describe it as a solved\nproblem with one \"correct\" solution.\n\nOne takeaway from this discussion is that it would be useful to provide\nmore flexibility in how values are represented to the client in a more\ngeneral way. In addition to encoding, representational issues have come\nup with binary formats, bytea, extra_float_digits, etc.\n\nThe collection of books by CJ Date & Hugh Darwen, et al. (sorry I don't\nremember exactly which books), made the theoretical case for explicitly\ndistinguishing values from representations at the language level. We're\nstarting to see that representational issues can't be satisfied with a\nfew special cases and hacks -- it's worth thinking about a general\nsolution to that problem. 
There was also a lot of relevant discussion\nabout how to think about overlapping domains (e.g. ASCII is valid in\nany of these text domains).\n\n> Text types should be for when you know what characters you want to\n> store. In this scenario, the implementation detail of what encoding\n> the database uses internally to write the data on the disk doesn't\n> matter, any more than it matters to a casual user how a table is\n> stored on disk.\n\nPerhaps the user and application do know, and there's some kind of\nsubtlety that we're missing, or some historical artefact that we're not\naccounting for, and that somehow makes UTF-8 unsuitable. Surely there\nare applications that treat certain byte sequences in non-standard\nways, and perhaps not all of those byte sequences can be reproduced by\ntranscoding from UTF-8 to the client_encoding. In any case, I would\nwant to understand in detail why a user thinks UTF8 is not good enough\nbefore I make too strong of a statement here.\n\nEven the terminal font that I use renders some \"identical\" unicode\ncharacters slightly differently depending on the code points from which\nthey are composed. I believe that's an intentional convenience to make\nit more apparent why the \"diff\" command (or other byte-based tool) is\nshowing a difference between two textually identical strings, but it's\nalso a violation of unicode. (This is another reason why normalization\nmight not be for everyone, but I believe it's still good in typical\ncases.)\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 05 Oct 2023 12:16:34 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "Nico Williams <[email protected]> writes:\n> Text+encoding can be just like bytea with a one- or two-byte prefix\n> indicating what codeset+encoding it's in. That'd be how to encode\n> such text values on the wire, though on disk the column's type should\n> indicate the codeset+encoding, so no need to add a prefix to the value.\n\nThe precedent of BOMs (byte order marks) suggests strongly that\nsuch a solution would be horrible to use.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Oct 2023 15:49:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Thu, Oct 05, 2023 at 03:49:37PM -0400, Tom Lane wrote:\n> Nico Williams <[email protected]> writes:\n> > Text+encoding can be just like bytea with a one- or two-byte prefix\n> > indicating what codeset+encoding it's in. That'd be how to encode\n> > such text values on the wire, though on disk the column's type should\n> > indicate the codeset+encoding, so no need to add a prefix to the value.\n> \n> The precedent of BOMs (byte order marks) suggests strongly that\n> such a solution would be horrible to use.\n\nThis is just how you encode the type of the string. You have any number\nof options. The point is that already PG can encode binary data, so if\nhow to encode text of disparate encodings on the wire, building on top\nof the encoding of bytea is an option.\n\n\n",
"msg_date": "Thu, 5 Oct 2023 14:52:37 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On 03.10.23 21:54, Jeff Davis wrote:\n>> Here, Jeff mentions normalization, but I think it's a major issue\n>> with\n>> collation support. If new code points are added, users can put them\n>> into the database before they are known to the collation library, and\n>> then when they become known to the collation library the sort order\n>> changes and indexes break.\n> \n> The collation version number may reflect the change in understanding\n> about assigned code points that may affect collation -- though I'd like\n> to understand whether this is guaranteed or not.\n\nThis is correct. The collation version number produced by ICU contains \nthe UCA version, which is effectively the Unicode version (14.0, 15.0, \netc.). Since new code point assignments can only come from new Unicode \nversions, a new assigned code point will always result in a different \ncollation version.\n\nFor example, with ICU 70 / CLDR 40 / Unicode 14:\n\nselect collversion from pg_collation where collname = 'unicode';\n= 153.112\n\nWith ICU 72 / CLDR 42 / Unicode 15:\n= 153.120\n\n> At minimum I think we need to have some internal functions to check for\n> unassigned code points. That belongs in core, because we generate the\n> unicode tables from a specific version.\n\nIf you want to be rigid about it, you also need to consider whether the \nUnicode version used by the ICU library in use matches the one used by \nthe in-core tables.\n\n\n\n",
"msg_date": "Fri, 6 Oct 2023 09:58:37 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On 05.10.23 19:30, Jeff Davis wrote:\n> Agreed, at least until we understand the set of users per-column\n> encoding is important to. I acknowledge that the presence of per-column\n> encoding in the standard is some kind of signal there, but not enough\n> by itself to justify something so invasive.\n\nThe per-column encoding support in SQL is clearly a legacy feature from \nbefore Unicode. If one were to write something like SQL today, one \nwould most likely just specify, \"everything is Unicode\".\n\n\n\n",
"msg_date": "Fri, 6 Oct 2023 10:10:59 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, 2023-10-06 at 09:58 +0200, Peter Eisentraut wrote:\n> If you want to be rigid about it, you also need to consider whether\n> the \n> Unicode version used by the ICU library in use matches the one used\n> by \n> the in-core tables.\n\nWhat problem are you concerned about here? I thought about it and I\ndidn't see an obvious issue.\n\nIf the ICU unicode version is ahead of the Postgres unicode version,\nand no unassigned code points are used according to the Postgres\nversion, then there's no problem.\n\nAnd in the other direction, there might be some code points that are\nassigned according to the postgres unicode version but unassigned\naccording to the ICU version. But that would be tracked by the\ncollation version as you pointed out earlier, so upgrading ICU would be\nlike any other ICU upgrade (with the same risks). Right?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 06 Oct 2023 10:22:48 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 3:15 PM Nico Williams <[email protected]> wrote:\n> Text+encoding can be just like bytea with a one- or two-byte prefix\n> indicating what codeset+encoding it's in. That'd be how to encode\n> such text values on the wire, though on disk the column's type should\n> indicate the codeset+encoding, so no need to add a prefix to the value.\n\nWell, that would be making the encoding a per-value property, rather\nthan a per-column property like collation as I proposed. I can't see\nthat working out very nicely, because encodings are\ncollation-specific. It wouldn't make any sense if the column collation\nwere en_US.UTF8 or ko_KR.eucKR or en_CA.ISO8859-1 (just to pick a few\nvalues that are legal on my machine) while data stored in the column\nwas from a whole bunch of different encodings, at most one of which\ncould be the one to which the column's collation applied. That would\nend up meaning, for example, that such a column was very hard to sort.\n\nFor that and other reasons, I suspect that the utility of storing data\nfrom a variety of different encodings in the same database column is\nquite limited. What I think people really want is a whole column in\nsome encoding that isn't the normal one for that database. That's not\nto say we should add such a feature, but if we do, I think it should\nbe that, not a different encoding for every individual value.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 Oct 2023 13:33:06 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, Oct 06, 2023 at 01:33:06PM -0400, Robert Haas wrote:\n> On Thu, Oct 5, 2023 at 3:15 PM Nico Williams <[email protected]> wrote:\n> > Text+encoding can be just like bytea with a one- or two-byte prefix\n> > indicating what codeset+encoding it's in. That'd be how to encode\n> > such text values on the wire, though on disk the column's type should\n> > indicate the codeset+encoding, so no need to add a prefix to the value.\n> \n> Well, that would be making the encoding a per-value property, rather\n> than a per-column property like collation as I proposed. I can't see\n\nOn-disk it would be just a property of the type, not part of the value.\n\nNico\n-- \n\n\n",
"msg_date": "Fri, 6 Oct 2023 12:38:45 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Thu, 2023-10-05 at 14:52 -0500, Nico Williams wrote:\n> This is just how you encode the type of the string. You have any\n> number\n> of options. The point is that already PG can encode binary data, so\n> if\n> how to encode text of disparate encodings on the wire, building on\n> top\n> of the encoding of bytea is an option.\n\nThere's another significant discussion going on here:\n\nhttps://www.postgresql.org/message-id/CA+TgmoZ8r8xb_73WzKHGb00cV3tpHV_U0RHuzzMFKvLepdu2Jw@mail.gmail.com\n\nabout how to handle binary formats better, so it's not clear to me that\nit's a great precedent to expand upon. At least not yet.\n\nI think it would be interesting to think more generally about these\nrepresentational issues in a way that accounds for binary formats,\nextra_float_digits, client_encoding, etc. But I see that as more of an\nissue with how the client expects to receive the data -- nobody has a\npresented a reason in this thread that we need per-column encodings on\nthe server.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 06 Oct 2023 10:42:09 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, Oct 6, 2023 at 1:38 PM Nico Williams <[email protected]> wrote:\n> On Fri, Oct 06, 2023 at 01:33:06PM -0400, Robert Haas wrote:\n> > On Thu, Oct 5, 2023 at 3:15 PM Nico Williams <[email protected]> wrote:\n> > > Text+encoding can be just like bytea with a one- or two-byte prefix\n> > > indicating what codeset+encoding it's in. That'd be how to encode\n> > > such text values on the wire, though on disk the column's type should\n> > > indicate the codeset+encoding, so no need to add a prefix to the value.\n> >\n> > Well, that would be making the encoding a per-value property, rather\n> > than a per-column property like collation as I proposed. I can't see\n>\n> On-disk it would be just a property of the type, not part of the value.\n\nI mean, that's not how it works.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 Oct 2023 14:17:32 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, Oct 06, 2023 at 02:17:32PM -0400, Robert Haas wrote:\n> On Fri, Oct 6, 2023 at 1:38 PM Nico Williams <[email protected]> wrote:\n> > On Fri, Oct 06, 2023 at 01:33:06PM -0400, Robert Haas wrote:\n> > > On Thu, Oct 5, 2023 at 3:15 PM Nico Williams <[email protected]> wrote:\n> > > > Text+encoding can be just like bytea with a one- or two-byte prefix\n> > > > indicating what codeset+encoding it's in. That'd be how to encode\n> > > > such text values on the wire, though on disk the column's type should\n> > > > indicate the codeset+encoding, so no need to add a prefix to the value.\n> > >\n> > > Well, that would be making the encoding a per-value property, rather\n> > > than a per-column property like collation as I proposed. I can't see\n> >\n> > On-disk it would be just a property of the type, not part of the value.\n> \n> I mean, that's not how it works.\n\nSure, because TEXT in PG doesn't have codeset+encoding as part of it --\nit's whatever the database's encoding is. Collation can and should be a\nporperty of a column, since for Unicode it wouldn't be reasonable to\nmake that part of the type. But codeset+encoding should really be a\nproperty of the type if PG were to support more than one. IMO.\n\n\n",
"msg_date": "Fri, 6 Oct 2023 13:25:44 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, Oct 6, 2023 at 2:25 PM Nico Williams <[email protected]> wrote:\n> > > > Well, that would be making the encoding a per-value property, rather\n> > > > than a per-column property like collation as I proposed. I can't see\n> > >\n> > > On-disk it would be just a property of the type, not part of the value.\n> >\n> > I mean, that's not how it works.\n>\n> Sure, because TEXT in PG doesn't have codeset+encoding as part of it --\n> it's whatever the database's encoding is. Collation can and should be a\n> porperty of a column, since for Unicode it wouldn't be reasonable to\n> make that part of the type. But codeset+encoding should really be a\n> property of the type if PG were to support more than one. IMO.\n\nNo, what I mean is, you can't just be like \"oh, the varlena will be\ndifferent in memory than on disk\" as if that were no big deal.\n\nI agree that, as an alternative to encoding being a column property,\nit could instead be completely a type property, meaning that if you\nwant to store, say, LATIN1 text in your UTF-8 database, you first\ncreate a latint1text data type and then use it, rather than, as in the\nmodel I proposed, creating a text column and then applying a setting\nlike ENCODING latin1 to it. I think that there might be some problems\nwith that model, but it could also have some benefits. If someone were\ngoing to make a run at implementing this, they might want to consider\nboth designs and evaluate the tradeoffs.\n\nBut, even if we were all convinced that this kind of feature was good\nto add, I think it would almost certainly be wrong to invent new\nvarlena features along the way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 Oct 2023 14:37:06 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, 2023-10-06 at 13:33 -0400, Robert Haas wrote:\n> What I think people really want is a whole column in\n> some encoding that isn't the normal one for that database.\n\nDo people really want that? I'd be curious to know why.\n\nA lot of modern projects are simply declaring UTF-8 to be the \"one true\nway\". I am not suggesting that we do that, but it seems odd to go in\nthe opposite direction and have greater flexibility for many encodings.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 06 Oct 2023 12:07:17 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, 6 Oct 2023 at 15:07, Jeff Davis <[email protected]> wrote:\n\n> On Fri, 2023-10-06 at 13:33 -0400, Robert Haas wrote:\n> > What I think people really want is a whole column in\n> > some encoding that isn't the normal one for that database.\n>\n> Do people really want that? I'd be curious to know why.\n>\n> A lot of modern projects are simply declaring UTF-8 to be the \"one true\n> way\". I am not suggesting that we do that, but it seems odd to go in\n> the opposite direction and have greater flexibility for many encodings.\n>\n\nAnd even if they want it, we can give it to them when we send/accept the\ndata from the client; just because they want to store ISO-8859-1 doesn't\nmean the actual bytes on the disk need to be that. And by \"client\" maybe I\nmean the client end of the network connection, and maybe I mean the program\nthat is calling in to libpq.\n\nIf they try to submit data that cannot possibly be encoded in the stated\nencoding because the bytes they submit don't correspond to any string in\nthat encoding, then that is unambiguously an error, just as trying to put\nFebruary 30 in a date column is an error.\n\nIs there a single other data type where anybody is even discussing letting\nthe client tell us how to write the data on disk?\n\nOn Fri, 6 Oct 2023 at 15:07, Jeff Davis <[email protected]> wrote:On Fri, 2023-10-06 at 13:33 -0400, Robert Haas wrote:\n> What I think people really want is a whole column in\n> some encoding that isn't the normal one for that database.\n\nDo people really want that? I'd be curious to know why.\n\nA lot of modern projects are simply declaring UTF-8 to be the \"one true\nway\". 
I am not suggesting that we do that, but it seems odd to go in\nthe opposite direction and have greater flexibility for many encodings.And even if they want it, we can give it to them when we send/accept the data from the client; just because they want to store ISO-8859-1 doesn't mean the actual bytes on the disk need to be that. And by \"client\" maybe I mean the client end of the network connection, and maybe I mean the program that is calling in to libpq.If they try to submit data that cannot possibly be encoded in the stated encoding because the bytes they submit don't correspond to any string in that encoding, then that is unambiguously an error, just as trying to put February 30 in a date column is an error.Is there a single other data type where anybody is even discussing letting the client tell us how to write the data on disk?",
"msg_date": "Fri, 6 Oct 2023 15:15:16 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, 6 Oct 2023, 21:08 Jeff Davis, <[email protected]> wrote:\n\n> On Fri, 2023-10-06 at 13:33 -0400, Robert Haas wrote:\n> > What I think people really want is a whole column in\n> > some encoding that isn't the normal one for that database.\n>\n> Do people really want that? I'd be curious to know why.\n>\n\nOne reason someone would like this is because a database cluster may have\nbeen initialized with something like --no-locale (thus getting defaulted to\nLC_COLLATE=C, which is desired behaviour and gets fast strcmp operations\nfor indexing, and LC_CTYPE=SQL_ASCII, which is not exactly expected but can\nbe sufficient for some workloads), but now that the data has grown they\nwant to use utf8.EN_US collations in some of their new and modern table's\nfields?\nOr, a user wants to maintain literal translation tables, where different\nencodings would need to be used for different languages to cover the full\nscript when Unicode might not cover the full character set yet.\nAdditionally, I'd imagine specialized encodings like Shift_JIS could be\nmore space efficient than UTF-8 for e.g. japanese text, which might be\nuseful for someone who wants to be a bit more frugal with storage when they\nknow text is guaranteed to be in some encoding's native language:\ncompression can do the same work, but also adds significant overhead.\n\nI've certainly experienced situations where I forgot to explicitly include\nthe encoding in initdb --no-locale and then only much later noticed that my\nbig data load is useless due to an inability to create UTF-8 collated\nindexes.\nI often use --no-locale to make string indexing fast (locales/collation are\nnot often important to my workload) and to block any environment variables\nfrom being carried over into the installation. 
An ability to set or update\nthe encoding of columns would help reduce the pain: I would no longer have\nto re-initialize the database or cluster from 0.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\nOn Fri, 6 Oct 2023, 21:08 Jeff Davis, <[email protected]> wrote:On Fri, 2023-10-06 at 13:33 -0400, Robert Haas wrote:\n> What I think people really want is a whole column in\n> some encoding that isn't the normal one for that database.\n\nDo people really want that? I'd be curious to know why.One reason someone would like this is because a database cluster may have been initialized with something like --no-locale (thus getting defaulted to LC_COLLATE=C, which is desired behaviour and gets fast strcmp operations for indexing, and LC_CTYPE=SQL_ASCII, which is not exactly expected but can be sufficient for some workloads), but now that the data has grown they want to use utf8.EN_US collations in some of their new and modern table's fields? Or, a user wants to maintain literal translation tables, where different encodings would need to be used for different languages to cover the full script when Unicode might not cover the full character set yet.Additionally, I'd imagine specialized encodings like Shift_JIS could be more space efficient than UTF-8 for e.g. japanese text, which might be useful for someone who wants to be a bit more frugal with storage when they know text is guaranteed to be in some encoding's native language: compression can do the same work, but also adds significant overhead.I've certainly experienced situations where I forgot to explicitly include the encoding in initdb --no-locale and then only much later noticed that my big data load is useless due to an inability to create UTF-8 collated indexes.I often use --no-locale to make string indexing fast (locales/collation are not often important to my workload) and to block any environment variables from being carried over into the installation. 
An ability to set or update the encoding of columns would help reduce the pain: I would no longer have to re-initialize the database or cluster from 0.Kind regards,Matthias van de MeentNeon (https://neon.tech)",
"msg_date": "Sat, 7 Oct 2023 00:30:00 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, 2023-10-04 at 13:16 -0400, Robert Haas wrote:\n> > At minimum I think we need to have some internal functions to check\n> > for\n> > unassigned code points. That belongs in core, because we generate\n> > the\n> > unicode tables from a specific version.\n> \n> That's a good idea.\n\nPatch attached.\n\nI added a new perl script to parse UnicodeData.txt and generate a\nlookup table (of ranges, which can be binary-searched).\n\nThe C entry point does the same thing as u_charType(), and I also\nmatched the enum numeric values for convenience. I didn't use\nu_charType() because I don't think this kind of unicode functionality\nshould depend on ICU, and I think it should match other Postgres\nUnicode functionality.\n\nStrictly speaking, I only needed to know whether it's unassigned or\nnot, not the general category. But it seemed easy enough to return the\ngeneral category, and it will be easier to create other potentially-\nuseful functions on top of this.\n\nThe tests do require ICU though, because I compare with the results of\nu_charType().\n\nRegards,\n\tJeff Davis",
"msg_date": "Fri, 06 Oct 2023 18:18:01 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, Oct 6, 2023 at 3:07 PM Jeff Davis <[email protected]> wrote:\n> On Fri, 2023-10-06 at 13:33 -0400, Robert Haas wrote:\n> > What I think people really want is a whole column in\n> > some encoding that isn't the normal one for that database.\n>\n> Do people really want that? I'd be curious to know why.\n\nBecause it's a feature that exists in other products and so having it\neases migrations and/or replication of data between systems.\n\nI'm not saying that there are a lot of people who want this, any more.\nI think there used to be more interest in it. But the point of the\ncomment was that people who want multiple character set support want\nit as a per-column property, not a per-value property. I've never\nheard of anyone wanting to store text blobs in multiple distinct\ncharacter sets in the same column. But I have heard of people wanting\ntext blobs in multiple distinct character sets in the same database,\neach one in its own column.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Oct 2023 15:08:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On 07.10.23 03:18, Jeff Davis wrote:\n> On Wed, 2023-10-04 at 13:16 -0400, Robert Haas wrote:\n>>> At minimum I think we need to have some internal functions to check\n>>> for\n>>> unassigned code points. That belongs in core, because we generate\n>>> the\n>>> unicode tables from a specific version.\n>> That's a good idea.\n> Patch attached.\n\nCan you restate what this is supposed to be for? This thread appears to \nhave morphed from \"let's normalize everything\" to \"let's check for \nunassigned code points\", but I'm not sure what we are aiming for now.\n\n\n\n",
"msg_date": "Tue, 10 Oct 2023 08:44:50 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On 06.10.23 19:22, Jeff Davis wrote:\n> On Fri, 2023-10-06 at 09:58 +0200, Peter Eisentraut wrote:\n>> If you want to be rigid about it, you also need to consider whether\n>> the\n>> Unicode version used by the ICU library in use matches the one used\n>> by\n>> the in-core tables.\n> What problem are you concerned about here? I thought about it and I\n> didn't see an obvious issue.\n> \n> If the ICU unicode version is ahead of the Postgres unicode version,\n> and no unassigned code points are used according to the Postgres\n> version, then there's no problem.\n> \n> And in the other direction, there might be some code points that are\n> assigned according to the postgres unicode version but unassigned\n> according to the ICU version. But that would be tracked by the\n> collation version as you pointed out earlier, so upgrading ICU would be\n> like any other ICU upgrade (with the same risks). Right?\n\nIt might be alright in this particular combination of circumstances. \nBut in general if we rely on these tables for correctness (e.g., check \nthat a string is normalized before passing it to a function that \nrequires it to be normalized), we would need to consider this. The \ncorrect fix would then probably be to not use our own tables but use \nsome ICU function to achieve the desired task.\n\n\n\n",
"msg_date": "Tue, 10 Oct 2023 08:47:31 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 2:44 AM Peter Eisentraut <[email protected]> wrote:\n> Can you restate what this is supposed to be for? This thread appears to\n> have morphed from \"let's normalize everything\" to \"let's check for\n> unassigned code points\", but I'm not sure what we are aiming for now.\n\nJeff can say what he wants it for, but one obvious application would\nbe to have the ability to add a CHECK constraint that forbids\ninserting unassigned code points into your database, which would be\nuseful if you're worried about forward-compatibility with collation\ndefinitions that might be extended to cover those code points in the\nfuture. Another application would be to find data already in your\ndatabase that has this potential problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Oct 2023 10:02:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, 2023-10-10 at 10:02 -0400, Robert Haas wrote:\n> On Tue, Oct 10, 2023 at 2:44 AM Peter Eisentraut\n> <[email protected]> wrote:\n> > Can you restate what this is supposed to be for? This thread\n> > appears to\n> > have morphed from \"let's normalize everything\" to \"let's check for\n> > unassigned code points\", but I'm not sure what we are aiming for\n> > now.\n\nIt was a \"pre-proposal\", so yes, the goalposts have moved a bit. Right\nnow I'm aiming to get some primitives in place that will be useful by\nthemselves, but also that we can potentially build on.\n\nAttached is a new version of the patch which introduces some SQL\nfunctions as well:\n\n * unicode_is_valid(text): returns true if all codepoints are\nassigned, false otherwise\n * unicode_version(): version of unicode Postgres is built with\n * icu_unicode_version(): version of Unicode ICU is built with\n\nI'm not 100% clear on the consequences of differences between the PG\nunicode version and the ICU unicode version, but because normalization\nuses the Postgres version of Unicode, I believe the Postgres version of\nUnicode should also be available to determine whether a code point is\nassigned or not.\n\nWe may also find it interesting to use the PG Unicode tables for regex\ncharacter classification. This is just an idea and we can discuss\nwhether that makes sense or not, but having the primitives in place\nseems like a good idea regardless.\n\n> Jeff can say what he wants it for, but one obvious application would\n> be to have the ability to add a CHECK constraint that forbids\n> inserting unassigned code points into your database, which would be\n> useful if you're worried about forward-compatibility with collation\n> definitions that might be extended to cover those code points in the\n> future. Another application would be to find data already in your\n> database that has this potential problem.\n\nExactly. 
Avoiding unassigned code points also allows you to be forward-\ncompatible with normalization.\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 10 Oct 2023 18:08:41 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On 10.10.23 16:02, Robert Haas wrote:\n> On Tue, Oct 10, 2023 at 2:44 AM Peter Eisentraut <[email protected]> wrote:\n>> Can you restate what this is supposed to be for? This thread appears to\n>> have morphed from \"let's normalize everything\" to \"let's check for\n>> unassigned code points\", but I'm not sure what we are aiming for now.\n> \n> Jeff can say what he wants it for, but one obvious application would\n> be to have the ability to add a CHECK constraint that forbids\n> inserting unassigned code points into your database, which would be\n> useful if you're worried about forward-compatibility with collation\n> definitions that might be extended to cover those code points in the\n> future.\n\nI don't see how this would really work in practice. Whether your data \nhas unassigned code points or not, when the collations are updated to \nthe next Unicode version, the collations will have a new version number, \nand so you need to run the refresh procedure in any case.\n\n\n\n",
"msg_date": "Wed, 11 Oct 2023 08:51:27 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On 11.10.23 03:08, Jeff Davis wrote:\n> * unicode_is_valid(text): returns true if all codepoints are\n> assigned, false otherwise\n\nWe need to be careful about precise terminology. \"Valid\" has a defined \nmeaning for Unicode. A byte sequence can be valid or not as UTF-8. But \na string containing unassigned code points is not not-\"valid\" as Unicode.\n\n> * unicode_version(): version of unicode Postgres is built with\n> * icu_unicode_version(): version of Unicode ICU is built with\n\nThis seems easy enough, but it's not clear what users would actually do \nwith that.\n\n\n\n",
"msg_date": "Wed, 11 Oct 2023 08:56:13 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, 2023-10-11 at 08:56 +0200, Peter Eisentraut wrote:\n> On 11.10.23 03:08, Jeff Davis wrote:\n> > * unicode_is_valid(text): returns true if all codepoints are\n> > assigned, false otherwise\n> \n> We need to be careful about precise terminology. \"Valid\" has a\n> defined \n> meaning for Unicode. A byte sequence can be valid or not as UTF-8. \n> But \n> a string containing unassigned code points is not not-\"valid\" as\n> Unicode.\n\nAgreed. Perhaps \"unicode_assigned()\" is better?\n\n> > * unicode_version(): version of unicode Postgres is built with\n> > * icu_unicode_version(): version of Unicode ICU is built with\n> \n> This seems easy enough, but it's not clear what users would actually\n> do \n> with that.\n\nJust there to make it visible. If it affects the semantics (which it\ndoes currently for normalization) it seems wise to have some way to\naccess the version.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 11 Oct 2023 00:37:46 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, 2023-10-11 at 08:51 +0200, Peter Eisentraut wrote:\n> I don't see how this would really work in practice. Whether your\n> data \n> has unassigned code points or not, when the collations are updated to\n> the next Unicode version, the collations will have a new version\n> number, \n> and so you need to run the refresh procedure in any case.\n\nEven with a version number, we don't provide a great refresh procedure\nor document how it should be done. In practice, avoiding unassigned\ncode points might mitigate some kinds of problems, especially for glibc\nwhich has a very coarse version number.\n\nIn any case, a CHECK constraint to avoid unassigned code points has\nutility to be forward-compatible with normalization, and also might\njust be a good sanity check.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 11 Oct 2023 00:53:39 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, 2023-10-11 at 08:56 +0200, Peter Eisentraut wrote:\n> We need to be careful about precise terminology. \"Valid\" has a\n> defined \n> meaning for Unicode. A byte sequence can be valid or not as UTF-8. \n> But \n> a string containing unassigned code points is not not-\"valid\" as\n> Unicode.\n\nNew patch attached, function name is \"unicode_assigned\".\n\nI believe the patch has utility as-is, but I've been brainstorming a\nfew more ideas that could build on it:\n\n* Add a per-database option to enforce only storing assigned unicode\ncode points.\n\n* (More radical) Add a per-database option to normalize all text in\nNFC.\n\n* Do character classification in Unicode rather than relying on\nglibc/ICU. This would affect regex character classes, etc., but not\naffect upper/lower/initcap nor collation. I did some experiments and\nthe General Category doesn't change a lot: a total of 197 characters\nchanged their General Category since Unicode 6.0.0, and only 5 since\nICU 11.0.0. I'm not quite sure how to expose this, but it seems like a\nnicer way to handle it than tying it into the collation provider.\n\nRegards,\n\tJeff Davis",
"msg_date": "Mon, 16 Oct 2023 20:32:19 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "\tJeff Davis wrote:\n\n> I believe the patch has utility as-is, but I've been brainstorming a\n> few more ideas that could build on it:\n> \n> * Add a per-database option to enforce only storing assigned unicode\n> code points.\n\nThere's a problem in the fact that the set of assigned code points is\nexpanding with every Unicode release, which happens about every year.\n\nIf we had this option in Postgres 11 released in 2018 it would use\nUnicode 11, and in 2023 this feature would reject thousands of code\npoints that have been assigned since then.\n\nAside from that, aborting a transaction because there's an\nunassigned code point in a string feels like doing too much,\ntoo late.\nThe programs that want to filter out unwanted code points\ndo it before they hit the database, client-side.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Tue, 17 Oct 2023 17:07:40 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 11:07 AM Daniel Verite <[email protected]> wrote:\n> There's a problem in the fact that the set of assigned code points is\n> expanding with every Unicode release, which happens about every year.\n>\n> If we had this option in Postgres 11 released in 2018 it would use\n> Unicode 11, and in 2023 this feature would reject thousands of code\n> points that have been assigned since then.\n\nAre code points assigned from a gapless sequence? That is, is the\nimplementation of codepoint_is_assigned(char) just 'codepoint <\nSOME_VALUE' and SOME_VALUE increases over time?\n\nIf so, we could consider having a function that lets you specify the\nbound as an input parameter. But whether anyone would use it, or know\nhow to set that input parameter, is questionable. The real issue here\nis whether you can figure out which of the code points that you could\nput into the database already have collation definitions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 11:12:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, 17 Oct 2023 at 11:15, Robert Haas <[email protected]> wrote:\n\n\n> Are code points assigned from a gapless sequence? That is, is the\n> implementation of codepoint_is_assigned(char) just 'codepoint <\n> SOME_VALUE' and SOME_VALUE increases over time?\n>\n\nNot even close. Code points are organized in blocks, e.g. for mathematical\nsymbols or Ethiopic script. Sometimes new blocks are added, sometimes new\ncharacters are added to existing blocks. Where they go is a combination of\nconvenience, history, and planning.",
"msg_date": "Tue, 17 Oct 2023 11:38:07 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 11:38 AM Isaac Morland <[email protected]> wrote:\n> On Tue, 17 Oct 2023 at 11:15, Robert Haas <[email protected]> wrote:\n>> Are code points assigned from a gapless sequence? That is, is the\n>> implementation of codepoint_is_assigned(char) just 'codepoint <\n>> SOME_VALUE' and SOME_VALUE increases over time?\n>\n> Not even close. Code points are organized in blocks, e.g. for mathematical symbols or Ethiopic script. Sometimes new blocks are added, sometimes new characters are added to existing blocks. Where they go is a combination of convenience, history, and planning.\n\nAh. Good to know.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 11:43:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, 2023-10-17 at 17:07 +0200, Daniel Verite wrote:\n> There's a problem in the fact that the set of assigned code points is\n> expanding with every Unicode release, which happens about every year.\n> \n> If we had this option in Postgres 11 released in 2018 it would use\n> Unicode 11, and in 2023 this feature would reject thousands of code\n> points that have been assigned since then.\n\nThat wouldn't be good for everyone, but might it be good for some\nusers?\n\nWe already expose normalization functions. If users are depending on\nnormalization, and they have unassigned code points in their system,\nthat will break when we update Unicode. By restricting themselves to\nassigned code points, normalization is guaranteed to be forward-\ncompatible.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 17 Oct 2023 09:32:18 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Mon, 2023-10-16 at 20:32 -0700, Jeff Davis wrote:\n> On Wed, 2023-10-11 at 08:56 +0200, Peter Eisentraut wrote:\n> > We need to be careful about precise terminology. \"Valid\" has a\n> > defined \n> > meaning for Unicode. A byte sequence can be valid or not as UTF-\n> > 8. \n> > But \n> > a string containing unassigned code points is not not-\"valid\" as\n> > Unicode.\n> \n> New patch attached, function name is \"unicode_assigned\".\n\nI plan to commit something like v3 early next week unless someone else\nhas additional comments or I missed a concern.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 27 Oct 2023 14:15:00 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "bowerbird and hammerkop didn't like commit a02b37fc. They're still\nusing the old 3rd build system that is not tested by CI. It's due for\nremoval in the 17 cycle IIUC but in the meantime I guess the new\ncodegen script needs to be invoked by something under src/tools/msvc?\n\n varlena.obj : error LNK2019: unresolved external symbol\nunicode_category referenced in function unicode_assigned\n[H:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n\n\n",
"msg_date": "Fri, 3 Nov 2023 10:51:12 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, Oct 06, 2023 at 02:37:06PM -0400, Robert Haas wrote:\n> > Sure, because TEXT in PG doesn't have codeset+encoding as part of it --\n> > it's whatever the database's encoding is. Collation can and should be a\n> > porperty of a column, since for Unicode it wouldn't be reasonable to\n> > make that part of the type. But codeset+encoding should really be a\n> > property of the type if PG were to support more than one. IMO.\n> \n> No, what I mean is, you can't just be like \"oh, the varlena will be\n> different in memory than on disk\" as if that were no big deal.\n\nIt would have to be the same in memory as on disk, indeed, but you might\nneed new types in C as well for that.\n\n> I agree that, as an alternative to encoding being a column property,\n> it could instead be completely a type property, meaning that if you\n> want to store, say, LATIN1 text in your UTF-8 database, you first\n> create a latint1text data type and then use it, rather than, as in the\n> model I proposed, creating a text column and then applying a setting\n> like ENCODING latin1 to it. I think that there might be some problems\n\nYes, that was the idea.\n\n> with that model, but it could also have some benefits. [...]\n\nMainly, I think, whether you want PG to do automatic codeset conversions\n(ugly and problematic) or not, like for when using text functions.\n\nAutomatic codeset conversions are problematic because a) it can be lossy\n(so what to do when it is?) and b) automatic type conversions can be\nsurprising.\n\nUltimately the client would have to do its own codeset conversions, if\nit wants them, or treat text in codesets other than its local one as\nblobs and leave it for a higher app layer to deal with.\n\nI wouldn't want to propose automatic codeset conversions. If you'd want\nthat then you might as well declare it has to all be UTF-8 and say no to\nany other codesets.\n\n> But, even if we were all convinced that this kind of feature was good\n> to add, I think it would almost certainly be wrong to invent new\n> varlena features along the way.\n\nYes.\n\nNico\n-- \n\n\n",
"msg_date": "Thu, 2 Nov 2023 17:38:47 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, Oct 04, 2023 at 01:16:22PM -0400, Robert Haas wrote:\n> There's a very popular commercial database where, or so I have been\n> led to believe, any byte sequence at all is accepted when you try to\n> put values into the database. [...]\n\nIn other circles we call this \"just-use-8\".\n\nZFS, for example, has an option to require that filenames be valid\nUTF-8 or not, and if not it will accept any garbage (other than ASCII\nNUL and /, for obvious reasons).\n\nFor filesystems the situation is a bit dire because:\n\n - strings at the system call boundary have never been tagged with a\n codeset (in the beginning there was only ASCII)\n - there has never been a standard codeset to use at the system call\n boundary, \n - there have been multiple codesets in use for decades\n\nso filesystems have to be prepared to be tolerant of garbage, at least\nuntil only Unicode is left (UTF-16 on Windows filesystems, UTF-8 for\nmost others).\n\nThis is another reason that ZFS has form-insensitive/form-preserving\nbehavior: if you want to use non-UTF-8 filenames then names or\nsubstrings thereof that look like valid UTF-8 won't accidentally be\nbroken by normalization.\n\nIf PG never tagged strings with codesets on the wire then PG has the\nsame problem, especially since there's multiple implementations of the\nPG wire protocol.\n\nSo I can see why a \"popular database\" might want to take this approach.\n\nFor the longer run though, either move to supporting only UTF-8, or\nallow multiple text types each with a codeset specified in its type.\n\n> At any rate, if we were to go in the direction of rejecting code\n> points that aren't yet assigned, or aren't yet known to the collation\n> library, that's another way for data loading to fail. Which feels like\n> very defensible behavior, but not what everyone wants, or is used to.\n\nYes. See points about ZFS. I do think ZFS struck a good balance.\n\nPG could take the ZFS approach and add functions for use in CHECK\nconstraints that enforce valid UTF-8, valid Unicode (no use of\nunassigned codepoints, no use of private use codepoints not configured\ninto the database), etc.\n\nComing back to the \"just-use-8\" thing, a database could have a text type\nwhere the codeset is not specified, one or more text types where the\ncodeset is specified, manual or automatic codeset conversions, and\nwhatever enforcement functions make sense. Provided that the type\ninformation is not lost at the edges.\n\n> > Whether we ever get to a core data type -- and more importantly,\n> > whether anyone uses it -- I'm not sure.\n> \n> Same here.\n\nA TEXTutf8 type (whatever name you want to give it) could be useful as a\nway to a) opt into heavier enforcement w/o having to write CHECK\nconstraints, b) documentation of intent, all provided that the type is\nnot lost on the wire nor in memory.\n\nSupport for other codesets is less important.\n\nNico\n-- \n\n\n",
"msg_date": "Thu, 2 Nov 2023 17:54:49 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 05:07:40PM +0200, Daniel Verite wrote:\n> > * Add a per-database option to enforce only storing assigned unicode\n> > code points.\n> \n> There's a problem in the fact that the set of assigned code points is\n> expanding with every Unicode release, which happens about every year.\n> \n> If we had this option in Postgres 11 released in 2018 it would use\n> Unicode 11, and in 2023 this feature would reject thousands of code\n> points that have been assigned since then.\n\nYes, and that's desirable if PG were to normalize text as Jeff proposes,\nsince then PG wouldn't know how to normalize text containing codepoints\nassigned after that. At that point to use those codepoints you'd have\nto upgrade PG -- not too unreasonable.\n\nNico\n-- \n\n\n",
"msg_date": "Thu, 2 Nov 2023 18:17:33 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Wed, Oct 04, 2023 at 01:15:03PM -0700, Jeff Davis wrote:\n> > The fact that there are multiple types of normalization and multiple\n> > notions of equality doesn't make this easier.\n\nAnd then there's text that isn't normalized to any of them.\n\n> NFC is really the only one that makes sense.\n\nYes.\n\nMost input modes produce NFC, though there may be scripts (like Hangul)\nwhere input modes might produce NFD, so I wouldn't say NFC is universal.\n\nUnfortunately HFS+ uses NFD so NFD can leak into places naturally enough\nthrough OS X.\n\n> I believe that having a kind of text data type where it's stored in NFC\n> and compared with memcmp() would be a good place for many users to be -\n> - probably most users. It's got all the performance and stability\n> benefits of memcmp(), with slightly richer semantics. It's less likely\n> that someone malicious can confuse the database by using different\n> representations of the same character.\n> \n> The problem is that it's not universally better for everyone: there are\n> certainly users who would prefer that the codepoints they send to the\n> database are preserved exactly, and also users who would like to be\n> able to use unassigned code points.\n\nThe alternative is form insensitivity, where you compare strings as\nequal even if they aren't memcmp() eq as long as they are equal when\nnormalized. This can be made fast, though not as fast as memcmp().\n\nThe problem with form insensitivity is that you might have to implement\nit in numerous places. In ZFS there's only a few, but in a database\nevery index type, for example, will need to hook in form insensitivity.\nIf so then that complexity would be a good argument to just normalize.\n\nNico\n-- \n\n\n",
"msg_date": "Thu, 2 Nov 2023 18:23:19 -0500",
"msg_from": "Nico Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, 2023-11-03 at 10:51 +1300, Thomas Munro wrote:\n> bowerbird and hammerkop didn't like commit a02b37fc. They're still\n> using the old 3rd build system that is not tested by CI. It's due\n> for\n> removal in the 17 cycle IIUC but in the meantime I guess the new\n> codegen script needs to be invoked by something under src/tools/msvc?\n> \n> varlena.obj : error LNK2019: unresolved external symbol\n> unicode_category referenced in function unicode_assigned\n> [H:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n\nI think I just need to add unicode_category.c to @pgcommonallfiles in\nMkvcbuild.pm. I'll do a trial commit tomorrow and see if that fixes it\nunless someone has a better suggestion.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 03 Nov 2023 00:49:37 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, 3 Nov 2023 at 20:49, Jeff Davis <[email protected]> wrote:\n>\n> On Fri, 2023-11-03 at 10:51 +1300, Thomas Munro wrote:\n> > bowerbird and hammerkop didn't like commit a02b37fc. They're still\n> > using the old 3rd build system that is not tested by CI. It's due\n> > for\n> > removal in the 17 cycle IIUC but in the meantime I guess the new\n> > codegen script needs to be invoked by something under src/tools/msvc?\n> >\n> > varlena.obj : error LNK2019: unresolved external symbol\n> > unicode_category referenced in function unicode_assigned\n> > [H:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n>\n> I think I just need to add unicode_category.c to @pgcommonallfiles in\n> Mkvcbuild.pm. I'll do a trial commit tomorrow and see if that fixes it\n> unless someone has a better suggestion.\n\n(I didn't realise this was being discussed.)\n\nThomas mentioned this to me earlier today. After looking I also\nconcluded that unicode_category.c needed to be added to\n@pgcommonallfiles. After looking at the time, I didn't expect you to\nbe around so opted just to push that to fix the MSVC buildfarm\nmembers.\n\nSorry for the duplicate effort and/or stepping on your toes.\n\nDavid\n\n\n",
"msg_date": "Fri, 3 Nov 2023 21:01:42 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Sat, Oct 28, 2023 at 4:15 AM Jeff Davis <[email protected]> wrote:\n>\n> I plan to commit something like v3 early next week unless someone else\n> has additional comments or I missed a concern.\n\nHi Jeff, is the CF entry titled \"Unicode character general category\nfunctions\" ready to be marked committed?\n\n\n",
"msg_date": "Fri, 3 Nov 2023 17:11:50 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, 2023-11-03 at 21:01 +1300, David Rowley wrote:\n> Thomas mentioned this to me earlier today. After looking I also\n> concluded that unicode_category.c needed to be added to\n> @pgcommonallfiles. After looking at the time, I didn't expect you to\n> be around so opted just to push that to fix the MSVC buildfarm\n> members.\n> \n> Sorry for the duplicate effort and/or stepping on your toes.\n\nThank you, no apology necessary.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 03 Nov 2023 11:42:55 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, 2023-11-03 at 17:11 +0700, John Naylor wrote:\n> On Sat, Oct 28, 2023 at 4:15 AM Jeff Davis <[email protected]> wrote:\n> > \n> > I plan to commit something like v3 early next week unless someone\n> > else\n> > has additional comments or I missed a concern.\n> \n> Hi Jeff, is the CF entry titled \"Unicode character general category\n> functions\" ready to be marked committed?\n\nDone, thank you.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 03 Nov 2023 11:43:57 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On 2023-10-04 23:32, Chapman Flack wrote:\n> Well, for what reason does anybody run PG now with the encoding set\n> to anything besides UTF-8? I don't really have my finger on that pulse.\n> Could it be that it bloats common strings in their local script, and\n> with enough of those to store, it could matter to use the local\n> encoding that stores them more economically?\n\nI do use CP1251 for storing some data which is coming in as XMLs in \nCP1251, and thus definitely fits. In UTF-8, that data would take exactly \n2x the size on disks (before compression, and pglz/lz4 won't help much \nwith that).\n\n-- Ph.\n\n\n",
"msg_date": "Fri, 03 Nov 2023 21:15:30 +0100",
"msg_from": "Phil Krylov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Fri, Nov 3, 2023 at 9:01 PM David Rowley <[email protected]> wrote:\n> On Fri, 3 Nov 2023 at 20:49, Jeff Davis <[email protected]> wrote:\n> > On Fri, 2023-11-03 at 10:51 +1300, Thomas Munro wrote:\n> > > bowerbird and hammerkop didn't like commit a02b37fc. They're still\n> > > using the old 3rd build system that is not tested by CI. It's due\n> > > for\n> > > removal in the 17 cycle IIUC but in the meantime I guess the new\n> > > codegen script needs to be invoked by something under src/tools/msvc?\n> > >\n> > > varlena.obj : error LNK2019: unresolved external symbol\n> > > unicode_category referenced in function unicode_assigned\n> > > [H:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n> >\n> > I think I just need to add unicode_category.c to @pgcommonallfiles in\n> > Mkvcbuild.pm. I'll do a trial commit tomorrow and see if that fixes it\n> > unless someone has a better suggestion.\n>\n> (I didn't realise this was being discussed.)\n>\n> Thomas mentioned this to me earlier today. After looking I also\n> concluded that unicode_category.c needed to be added to\n> @pgcommonallfiles. After looking at the time, I didn't expect you to\n> be around so opted just to push that to fix the MSVC buildfarm\n> members.\n\nShouldn't it be added unconditionally near unicode_norm.c? It looks\nlike it was accidentally made conditional on openssl, which might\nexplain why it worked for David but not for bowerbird.\n\n\n",
"msg_date": "Sat, 4 Nov 2023 10:56:44 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Sat, 4 Nov 2023 at 10:57, Thomas Munro <[email protected]> wrote:\n>\n> On Fri, Nov 3, 2023 at 9:01 PM David Rowley <[email protected]> wrote:\n> > On Fri, 3 Nov 2023 at 20:49, Jeff Davis <[email protected]> wrote:\n> > > I think I just need to add unicode_category.c to @pgcommonallfiles in\n> > > Mkvcbuild.pm. I'll do a trial commit tomorrow and see if that fixes it\n> > > unless someone has a better suggestion.\n> >\n> > Thomas mentioned this to me earlier today. After looking I also\n> > concluded that unicode_category.c needed to be added to\n> > @pgcommonallfiles. After looking at the time, I didn't expect you to\n> > be around so opted just to push that to fix the MSVC buildfarm\n> > members.\n>\n> Shouldn't it be added unconditionally near unicode_norm.c? It looks\n> like it was accidentally made conditional on openssl, which might\n> explain why it worked for David but not for bowerbird.\n\nWell, I did that one pretty poorly :-(\n\nI've just pushed a fix for that. Thanks.\n\nDavid\n\n\n",
"msg_date": "Sat, 4 Nov 2023 15:43:34 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Mon, 2023-10-02 at 16:06 -0400, Robert Haas wrote:\n> It seems to me that this overlooks one of the major points of Jeff's\n> proposal, which is that we don't reject text input that contains\n> unassigned code points. That decision turns out to be really painful.\n\nAttached is an implementation of a per-database option STRICT_UNICODE\nwhich enforces the use of assigned code points only.\n\nNot everyone would want to use it. There are lots of applications that\naccept free-form text, and that may include recently-assigned code\npoints not yet recognized by Postgres.\n\nBut it would offer protection/stability for some databases. It makes it\npossible to have a hard guarantee that Unicode normalization is\nstable[1]. And it may also mitigate the risk of collation changes --\nusing unassigned code points carries a high risk that the collation\norder changes as soon as the collation provider recognizes the\nassignment. (Though assigned code points can change, too, so limiting\nyourself to assigned code points is only a mitigation.)\n\nI worry slightly that users will think at first that they want only\nassigned code points, and then later figure out that the application\nhas increased in scope and now takes all kinds of free-form text. In\nthat case, the user can \"ALTER DATABASE ... STRICT_UNICODE FALSE\", and\nfollow up with some \"CHECK (unicode_assigned(...))\" constraints on the\nparticular fields that they'd like to protect.\n\nThere's some weirdness that the set of assigned code points as Postgres\nsees it may not match what a collation provider sees due to differing\nUnicode versions. That's not great -- perhaps we could check that code\npoints are considered assigned by *both* Postgres and ICU. I don't know\nif there's a way to tell if libc considers a code point to be assigned.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.unicode.org/policies/stability_policy.html#Normalization",
"msg_date": "Thu, 29 Feb 2024 17:02:51 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
},
{
"msg_contents": "On Thu, 2024-02-29 at 17:02 -0800, Jeff Davis wrote:\n> Attached is an implementation of a per-database option STRICT_UNICODE\n> which enforces the use of assigned code points only.\n\nThe CF app doesn't seem to point at the latest patch:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nwhich is perhaps why nobody has looked at it yet.\n\nBut in any case, I'm OK if this gets bumped to 18. I still think it's a\ngood feature, but some of the value will come later in v18 anyway, when\nI plan to propose support for case folding. Case folding is a version\nof lowercasing with compatibility guarantees when you only use assigned\ncode points.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 14 Mar 2024 11:07:00 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pre-proposal: unicode normalized text"
}
] |
[
{
"msg_contents": "Greetings Hackers,\n\nBeen a while! I’m working on some experiments with JSONB columns and GIN indexes, and have operated on the assumption that JSON Path operations would take advantage of GIN indexes, with json_path_ops as a nice optimization. But I’ve run into what appear to be some inconsistencies and oddities I’m hoping to figure out with your help.\n\nFor the examples in this email, I’m using this simple table:\n\nCREATE TABLE MOVIES (id SERIAL PRIMARY KEY, movie JSONB NOT NULL);\n\\copy movies(movie) from PROGRAM 'curl -s https://raw.githubusercontent.com/prust/wikipedia-movie-data/master/movies.json | jq -c \".[]\" | sed \"s|\\\\\\\\|\\\\\\\\\\\\\\\\|g\"';\ncreate index on movies using gin (movie);\nanalyze movies;\n\nThat gives me a simple table with around 3600 rows. Not a lot of data, but hopefully enough to demonstrate the issues.\n\nIssue 1: @@ vs @?\n-----------------\n\nI have been confused as to the difference between @@ vs @?: Why do these return different results?\n\ndavid=# select id from movies where movie @@ '$ ?(@.title == \"New Life Rescue\")';\n id\n----\n(0 rows)\n\ndavid=# select id from movies where movie @? '$ ?(@.title == \"New Life Rescue\")';\n id\n----\n 10\n(1 row)\n\nI posted this question on Stack Overflow (https://stackoverflow.com/q/77046554/79202), and from the suggestion I got there, it seems that @@ expects a boolean to be returned by the path query, while @? wraps it in an implicit exists(). Is that right?\n\nIf so, I’d like to submit a patch to the docs talking about this, and suggesting the use of jsonb_path_query() to test paths to see if they return a boolean or not.\n\n\nIssue 2: @? Index Use\n---------------------\n\nFrom Oleg’s (happy belated birthday!) notes (https://github.com/obartunov/sqljsondoc/blob/master/jsonpath.md#jsonpath-operators):\n\n\n> Operators @? and @@ are interchangeable:\n> \n> js @? '$.a' <=> js @@ 'exists($.a)’\n> js @@ '$.a == 1' <=> js @? '$ ? ($.a == 1)’\n\nFor the purposes of the above example, this appears to hold true: if I wrap the path query in exists(), @@ returns a result:\n\ndavid=# select id from movies where movie @@ 'exists($ ?(@.title == \"New Life Rescue\"))';\n id\n----\n 10\n(1 row)\n\nYay! However, @@ and @? don’t seem to use an index the same way: @@ uses a GIN index while @? does not.\n\nOr, no, fiddling with it again just now, I think I have still been confusing these operators! @@ was using the index with an explicit exists(), but @? was not…because I was still using an explicit exists.\n\nIn other words:\n\n* @@ 'exists($ ?($.year == 1944))' Uses the index\n* @? '$ ?(@.year == 1944)' Uses the index\n* @? 'exists($ ?($.year == 1944))' Does not use the index\n\nThat last one presumably doesn’t work, because there is an implicit exists() around the exists(), making it `exists(exists($ ?($.year == 1944)))`, which returns true for every row (true and false both exists)! 🤦🏻♂️.\n\nAnyway, if I have this right, I’d like to flesh out the docs a bit.\n\nIssue 3: Index Use for Comparison\n---------------------------------\n\nFrom the docs (https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING), I had assumed any JSON Path query would be able to use the GIN index. However while the use of the == JSON Path operator is able to take advantage of the GIN index, apparently the >= operator cannot:\n\ndavid=# explain analyze select id from movies where movie @? '$ ?($.year >= 2023)';\n QUERY PLAN ---------------------------------------------------------------------------------------------------------\n Seq Scan on movies (cost=0.00..3741.41 rows=366 width=4) (actual time=34.815..36.259 rows=192 loops=1)\n Filter: (movie @? '$?($.\"year\" >= 2023)'::jsonpath)\n Rows Removed by Filter: 36081\n Planning Time: 1.864 ms\n Execution Time: 36.338 ms\n(5 rows)\n\nIs this expected? Originally I tried with json_path_ops, which I can understand not working, since it stores hashes of paths, which would allow only exact matches. But a plain old GIN index doesn’t appear to work, either. Should it? Is there perhaps some other op class that would allow it to work? Or would I have to create a separate BTREE index on `movie -> 'year'`?\n\nThanks for your patience with my questions!\n\nBest,\n\nDavid",
"msg_date": "Tue, 12 Sep 2023 20:16:53 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "JSON Path and GIN Questions"
},
{
"msg_contents": "Hi David,\n\nOn 13/09/2023 02:16 CEST David E. Wheeler <[email protected]> wrote:\n\n> CREATE TABLE MOVIES (id SERIAL PRIMARY KEY, movie JSONB NOT NULL);\n> \\copy movies(movie) from PROGRAM 'curl -s https://raw.githubusercontent.com/prust/wikipedia-movie-data/master/movies.json | jq -c \".[]\" | sed \"s|\\\\\\\\|\\\\\\\\\\\\\\\\|g\"';\n> create index on movies using gin (movie);\n> analyze movies;\n>\n> I have been confused as to the difference between @@ vs @?: Why do these\n> return different results?\n>\n> david=# select id from movies where movie @@ '$ ?(@.title == \"New Life Rescue\")';\n> id\n> ----\n> (0 rows)\n>\n> david=# select id from movies where movie @? '$ ?(@.title == \"New Life Rescue\")';\n> id\n> ----\n> 10\n> (1 row)\n>\n> I posted this question on Stack Overflow (https://stackoverflow.com/q/77046554/79202),\n> and from the suggestion I got there, it seems that @@ expects a boolean to be\n> returned by the path query, while @? wraps it in an implicit exists(). Is that\n> right?\n\nThat's also my understanding. We had a discussion about the docs on @@, @?, and\njsonb_path_query on -general a while back [1]. Maybe it's useful also.\n\n> If so, I’d like to submit a patch to the docs talking about this, and\n> suggesting the use of jsonb_path_query() to test paths to see if they return\n> a boolean or not.\n\n+1\n\n[1] https://www.postgresql.org/message-id/CACJufxE01sxgvtG4QEvRZPzs_roggsZeVvBSGpjM5tzE5hMCLA%40mail.gmail.com\n\n--\nErik\n\n\n",
"msg_date": "Wed, 13 Sep 2023 03:00:07 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "Op 9/13/23 om 03:00 schreef Erik Wienhold:\n> Hi David,\n> \n> On 13/09/2023 02:16 CEST David E. Wheeler <[email protected]> wrote:\n> \n>> CREATE TABLE MOVIES (id SERIAL PRIMARY KEY, movie JSONB NOT NULL);\n>> \\copy movies(movie) from PROGRAM 'curl -s https://raw.githubusercontent.com/prust/wikipedia-movie-data/master/movies.json | jq -c \".[]\" | sed \"s|\\\\\\\\|\\\\\\\\\\\\\\\\|g\"';\n>> create index on movies using gin (movie);\n>> analyze movies;\n>>\n>> I have been confused as to the difference between @@ vs @?: Why do these\n>> return different results?\n>>\n>> david=# select id from movies where movie @@ '$ ?(@.title == \"New Life Rescue\")';\n>> id\n>> ----\n>> (0 rows)\n>>\n>> david=# select id from movies where movie @? '$ ?(@.title == \"New Life Rescue\")';\n>> id\n>> ----\n>> 10\n>> (1 row)\n>>\n>> I posted this question on Stack Overflow (https://stackoverflow.com/q/77046554/79202),\n>> and from the suggestion I got there, it seems that @@ expects a boolean to be\n>> returned by the path query, while @? wraps it in an implicit exists(). Is that\n>> right?\n> \n> That's also my understanding. We had a discussion about the docs on @@, @?, and\n> jsonb_path_query on -general a while back [1]. Maybe it's useful also.\n> \n>> If so, I’d like to submit a patch to the docs talking about this, and\n>> suggesting the use of jsonb_path_query() to test paths to see if they return\n>> a boolean or not.\n> \n> +1\n> \n> [1] https://www.postgresql.org/message-id/CACJufxE01sxgvtG4QEvRZPzs_roggsZeVvBSGpjM5tzE5hMCLA%40mail.gmail.com\n> \n> --\n> Erik\n\n\n\"All use of json*() functions preclude index usage.\"\n\nThat sentence is missing from the documentation.\n\n\nErik Rijkers\n\n\n\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 07:11:52 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Sep 13, 2023, at 01:11, Erik Rijkers <[email protected]> wrote:\n\n> \"All use of json*() functions preclude index usage.\"\n> \n> That sentence is missing from the documentation.\n\nWhere did that come from? Why wouldn’t JSON* functions use indexes? I see that the docs only mention operators; why would the corresponding functions behave the same?\n\nD",
"msg_date": "Wed, 13 Sep 2023 16:01:03 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "p 9/13/23 om 22:01 schreef David E. Wheeler:\n> On Sep 13, 2023, at 01:11, Erik Rijkers <[email protected]> wrote:\n> \n>> \"All use of json*() functions preclude index usage.\"\n>>\n>> That sentence is missing from the documentation.\n> \n> Where did that come from? Why wouldn’t JSON* functions use indexes? I see that the docs only mention operators; why would the corresponding functions behave the same?\n> \n> D\n\nSorry, perhaps my reply was a bit off-topic.\nBut you mentioned perhaps touching the docs and\nthe not-use-of-index is just so unexpected.\nCompare these two statements:\n\nselect count(id) from movies where\nmovie @? '$ ? (@.year == 2023)'\nTime: 1.259 ms\n (index used)\n\nselect count(id) from movies where\njsonb_path_match(movie, '$.year == 2023');\nTime: 17.260 ms\n (no index used - unexpectedly slower)\n\nWith these two indexes available:\n using gin (movie);\n using gin (movie jsonb_path_ops);\n\n(REL_15_STABLE; but it's the same in HEAD and\nthe not-yet-committed SQL/JSON patches.)\n\nErik Rijkers\n\n\n",
"msg_date": "Thu, 14 Sep 2023 06:04:28 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "Erik Rijkers <[email protected]> writes:\n> p 9/13/23 om 22:01 schreef David E. Wheeler:\n>> On Sep 13, 2023, at 01:11, Erik Rijkers <[email protected]> wrote:\n>>> \"All use of json*() functions preclude index usage.\"\n\n>> Where did that come from? Why wouldn’t JSON* functions use indexes? I see that the docs only mention operators; why would the corresponding functions behave the same?\n\n> Sorry, perhaps my reply was a bit off-topic.\n> But you mentioned perhaps touching the docs and\n> the not-use-of-index is just so unexpected.\n\nUnexpected to who? I think the docs make it pretty plain that only\noperators on indexed columns are considered as index qualifications.\nAdmittedly, 11.2 Index Types [1] makes the point only by not\ndiscussing any other case, but when you get to 11.10 Operator Classes\nand Operator Families [2] and discover that the entire index definition\nmechanism is based around operators not functions, you should be able\nto reach that conclusion. The point is made even more directly in\n38.16 Interfacing Extensions to Indexes [3], though I'll concede\nthat that's not material I'd expect the average PG user to read.\nAs far as json in particular is concerned, 8.14.4 jsonb Indexing [4]\nis pretty clear about what is or is not supported.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/current/indexes-types.html\n[2] https://www.postgresql.org/docs/current/indexes-opclass.html\n[3] https://www.postgresql.org/docs/current/xindex.html\n[4] https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING\n\n\n",
"msg_date": "Thu, 14 Sep 2023 00:41:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Sep 14, 2023, at 00:41, Tom Lane <[email protected]> wrote:\n\n> As far as json in particular is concerned, 8.14.4 jsonb Indexing [4]\n> is pretty clear about what is or is not supported.\n\nHow do you feel about this note, then?\n\ndiff --git a/doc/src/sgml/json.sgml b/doc/src/sgml/json.sgml\nindex b6c2ddbf55..7dda727f0d 100644\n--- a/doc/src/sgml/json.sgml\n+++ b/doc/src/sgml/json.sgml\n@@ -413,6 +413,13 @@ SELECT doc->'site_name' FROM websites\n Two GIN <quote>operator classes</quote> are provided, offering different\n performance and flexibility trade-offs.\n </para>\n+ <note>\n+ <para>\n+ As with all indexes, only operators on indexed columns are considered as\n+ index qualifications. In other words, only <type>jsonb</type> operators can\n+ take advantage of GIN indexes; <type>jsonb</type> functions cannot.\n+ </para>\n+ </note>\n <para>\n The default GIN operator class for <type>jsonb</type> supports queries with\n the key-exists operators <literal>?</literal>, <literal>?|</literal>\n\n\nBest,\n\nDavid",
"msg_date": "Fri, 15 Sep 2023 16:13:22 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Sep 12, 2023, at 21:00, Erik Wienhold <[email protected]> wrote:\n\n> That's also my understanding. We had a discussion about the docs on @@, @?, and\n> jsonb_path_query on -general a while back [1]. Maybe it's useful also.\n\nOkay, I’ll take a pass at expanding the docs on this. I think a little mini-tutorial on these two operators would be useful.\n\nMeanwhile, I’d like to re-up this question about the index qualification of non-equality JSON Path operators.\n\nOn Sep 12, 2023, at 20:16, David E. Wheeler <[email protected]> wrote:\n\n> Issue 3: Index Use for Comparison\n> ---------------------------------\n> \n> From the docs (https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING), I had assumed any JSON Path query would be able to use the GIN index. However while the use of the == JSON Path operator is able to take advantage of the GIN index, apparently the >= operator cannot:\n> \n> david=# explain analyze select id from movies where movie @? '$ ?($.year >= 2023)';\n> QUERY PLAN ---------------------------------------------------------------------------------------------------------\n> Seq Scan on movies (cost=0.00..3741.41 rows=366 width=4) (actual time=34.815..36.259 rows=192 loops=1)\n> Filter: (movie @? '$?($.\"year\" >= 2023)'::jsonpath)\n> Rows Removed by Filter: 36081\n> Planning Time: 1.864 ms\n> Execution Time: 36.338 ms\n> (5 rows)\n> \n> Is this expected? Originally I tried with json_path_ops, which I can understand not working, since it stores hashes of paths, which would allow only exact matches. But a plain old GIN index doesn’t appear to work, either. Should it? Is there perhaps some other op class that would allow it to work? Or would I have to create a separate BTREE index on `movie -> 'year'`?\n\nThanks,\n\nDavid",
"msg_date": "Fri, 15 Sep 2023 16:27:37 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> On Sep 14, 2023, at 00:41, Tom Lane <[email protected]> wrote:\n>> As far as json in particular is concerned, 8.14.4 jsonb Indexing [4]\n>> is pretty clear about what is or is not supported.\n\n> How do you feel about this note, then?\n\nI think it's unnecessary. If we did consider it necessary,\nwhy wouldn't just about every subsection in chapter 8 need\nsimilar wording?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Sep 2023 17:14:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "Op 9/15/23 om 22:27 schreef David E. Wheeler:\n> On Sep 12, 2023, at 21:00, Erik Wienhold <[email protected]> wrote:\n> \n>> That's also my understanding. We had a discussion about the docs on @@, @?, and\n>> jsonb_path_query on -general a while back [1]. Maybe it's useful also.\n> \n> Okay, I’ll take a pass at expanding the docs on this. I think a little mini-tutorial on these two operators would be useful.\n> \n> Meanwhile, I’d like to re-up this question about the index qualification of non-equality JSON Path operators.\n> \n> On Sep 12, 2023, at 20:16, David E. Wheeler <[email protected]> wrote:\n> \n>> Issue 3: Index Use for Comparison\n>> ---------------------------------\n>>\n>> From the docs (https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING), I had assumed any JSON Path query would be able to use the GIN index. However while the use of the == JSON Path operator is able to take advantage of the GIN index, apparently the >= operator cannot:\n>>\n>> david=# explain analyze select id from movies where movie @? '$ ?($.year >= 2023)';\n>> QUERY PLAN ---------------------------------------------------------------------------------------------------------\n>> Seq Scan on movies (cost=0.00..3741.41 rows=366 width=4) (actual time=34.815..36.259 rows=192 loops=1)\n>> Filter: (movie @? '$?($.\"year\" >= 2023)'::jsonpath)\n>> Rows Removed by Filter: 36081\n>> Planning Time: 1.864 ms\n>> Execution Time: 36.338 ms\n>> (5 rows)\n>>\n>> Is this expected? Originally I tried with json_path_ops, which I can understand not working, since it stores hashes of paths, which would allow only exact matches. But a plain old GIN index doesn’t appear to work, either. Should it? Is there perhaps some other op class that would allow it to work? Or would I have to create a separate BTREE index on `movie -> 'year'`?\n> \n\nmovie @? '$ ?($.year >= 2023)'\n\nI believe it is indeed not possible to have such a unequality-search use \nthe GIN index. 
It is another weakness of JSON that can be unexpected to \nthose not in the fullness of Knowledge of the manual. Yes, this too \nwould be good to explain in the doc where JSON indexes are explained.\n\nErik Rijkers\n\n> Thanks,\n> \n> David\n> \n\n\n",
"msg_date": "Sat, 16 Sep 2023 05:59:26 +0200",
"msg_from": "Erik Rijkers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Sep 15, 2023, at 20:36, Tom Lane <[email protected]> wrote:\n\n> I think that that indicates that you're putting the info in the\n> wrong place. Perhaps the right answer is to insert something\n> more explicit in section 11.2, which is the first place where\n> we really spend any effort discussing what can be indexed.\n\nFair enough. How ’bout this?\n\n--- a/doc/src/sgml/indices.sgml\n+++ b/doc/src/sgml/indices.sgml\n@@ -120,7 +120,7 @@ CREATE INDEX test1_id_index ON test1 (id);\n B-tree, Hash, GiST, SP-GiST, GIN, BRIN, and the extension <link\n linkend=\"bloom\">bloom</link>.\n Each index type uses a different\n- algorithm that is best suited to different types of queries.\n+ algorithm that is best suited to different types of queries and operators.\n By default, the <link linkend=\"sql-createindex\"><command>CREATE\n INDEX</command></link> command creates\n B-tree indexes, which fit the most common situations.\n@@ -132,6 +132,14 @@ CREATE INDEX <replaceable>name</replaceable> ON <replaceable>table</replaceable>\n </programlisting>\n </para>\n\n+ <note>\n+ <para>\n+ Only operators on indexed columns are considered as index qualifications.\n+ Functions never qualify for index usage, aside from\n+ <link linkend=\"indexes-expressional\">indexes on expressions</link>.\n+ </para>\n+ </note>\n+\n <sect2 id=\"indexes-types-btree\">\n <title>B-Tree</title>",
"msg_date": "Sat, 16 Sep 2023 13:43:47 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Sep 15, 2023, at 23:59, Erik Rijkers <[email protected]> wrote:\n\n> movie @? '$ ?($.year >= 2023)'\n> \n> I believe it is indeed not possible to have such a unequality-search use the GIN index. It is another weakness of JSON that can be unexpected to those not in the fullness of Knowledge of the manual. Yes, this too would be good to explain in the doc where JSON indexes are explained.\n\nIs that a limitation of GIN indexes in general? Or could there be opclass improvements in the future that would enable such comparisons?\n\nThanks,\n\nDavid",
"msg_date": "Sat, 16 Sep 2023 16:19:23 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Sep 12, 2023, at 21:00, Erik Wienhold <[email protected]> wrote:\n\n>> If so, I’d like to submit a patch to the docs talking about this, and\n>> suggesting the use of jsonb_path_query() to test paths to see if they return\n>> a boolean or not.\n> \n> +1\n\nI’ve started work on this; there’s so much to learn! Here’s a new example that surprised me a bit. Using the GPS tracker example from the docs [1] loaded into a `:json` psql variable, this output of this query makes perfect sense to me:\n\ndavid=# select jsonb_path_query(:'json', '$.track.segments.location[*] ? (@ < 14)');\n jsonb_path_query\n------------------\n 13.4034\n 13.2635\n\nBecause `[*]` selects all the values. This, however, I did not expect:\n\ndavid=# select jsonb_path_query(:'json', '$.track.segments.location ? (@[*] < 14)');\n jsonb_path_query\n------------------\n 13.4034\n 13.2635\n(2 rows)\n\nI had expected it to return two single-value arrays, instead:\n\n [13.4034]\n [13.2635]\n\nIt appears that the filter expression is doing some sub-selection, too. Is that expected?\n\nBest,\n\nDavid\n\n [1]: https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-SQLJSON-PATH",
"msg_date": "Sat, 16 Sep 2023 16:26:07 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On 16/09/2023 22:19 CEST David E. Wheeler <[email protected]> wrote:\n\n> On Sep 15, 2023, at 23:59, Erik Rijkers <[email protected]> wrote:\n>\n> > movie @? '$ ?($.year >= 2023)'\n> >\n> > I believe it is indeed not possible to have such a unequality-search use\n> > the GIN index. It is another weakness of JSON that can be unexpected to\n> > those not in the fullness of Knowledge of the manual. Yes, this too would\n> > be good to explain in the doc where JSON indexes are explained.\n>\n> Is that a limitation of GIN indexes in general? Or could there be opclass\n> improvements in the future that would enable such comparisons?\n\nThis detail is mentioned in docs [1]:\n\n\"For these operators, a GIN index extracts clauses of the form\n **accessors_chain = constant** out of the jsonpath pattern, and does the\n index search based on the keys and values mentioned in these clauses.\"\n\nI don't know if this is a general limitation of GIN indexes or just how these\noperators are implemented right now.\n\n[1] https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING\n\n--\nErik\n\n\n",
"msg_date": "Sat, 16 Sep 2023 22:50:13 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Sep 16, 2023, at 16:50, Erik Wienhold <[email protected]> wrote:\n\n> \"For these operators, a GIN index extracts clauses of the form\n> **accessors_chain = constant** out of the jsonpath pattern, and does the\n> index search based on the keys and values mentioned in these clauses.\"\n> \n> I don't know if this is a general limitation of GIN indexes or just how these\n> operators are implemented right now.\n> \n> [1] https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING\n\n\nThe detail that jumps out at me is this one on jsonb_path_ops:\n\n“Basically, each jsonb_path_ops index item is a hash of the value and the key(s) leading to it”\n\nBecause jsonb_path_ops indexes hashes, I would assume it would only support path equality. But it’s not clear to me from these docs that jsonb_ops also indexes hashes. Does it?\n\nBest,\n\nD",
"msg_date": "Sat, 16 Sep 2023 17:29:18 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On 16/09/2023 22:26 CEST David E. Wheeler <[email protected]> wrote:\n\n> I’ve started work on this; there’s so much to learn! Here’s a new example\n> that surprised me a bit. Using the GPS tracker example from the docs [1]\n> loaded into a `:json` psql variable, this output of this query makes perfect\n> sense to me:\n>\n> david=# select jsonb_path_query(:'json', '$.track.segments.location[*] ? (@ < 14)');\n> jsonb_path_query\n> ------------------\n> 13.4034\n> 13.2635\n>\n> Because `[*]` selects all the values. This, however, I did not expect:\n>\n> david=# select jsonb_path_query(:'json', '$.track.segments.location ? (@[*] < 14)');\n> jsonb_path_query\n> ------------------\n> 13.4034\n> 13.2635\n> (2 rows)\n>\n> I had expected it to return two single-value arrays, instead:\n>\n> [13.4034]\n> [13.2635]\n>\n> It appears that the filter expression is doing some sub-selection, too.\n> Is that expected?\n\nLooks like the effect of lax mode which may unwrap arrays when necessary [1].\nThe array unwrapping looks like the result of jsonb_array_elements().\n\nIt kinda works in strict mode:\n\n\tSELECT jsonb_path_query(:'json', 'strict $.track.segments[*].location ? (@[*] < 14)');\n\t\n\t jsonb_path_query\n\t-----------------------\n\t [47.763, 13.4034]\n\t [47.706, 13.2635]\n\t(2 rows)\n\nBut it does not remove elements from the matching arrays. Which I don't even\nexpect here because the path specifies the location array as the object to be\nreturned. The filter expression then only decides whether to return the\nlocation array or not. Nowhere in the docs does it say that the filter\nexpression itself removes any elements from a matched array.\n\nHere's a query that filter's out individual array elements. 
It's quite a\nmouthful (especially to preserve the order of array elements):\n\n\tWITH location AS (\n\t SELECT loc, row_number() OVER () AS array_num\n\t FROM jsonb_path_query(:'json', 'strict $.track.segments[*].location') loc\n\t),\n\telement AS (\n\t SELECT array_num, e.num AS elem_num, e.elem\n\t FROM location\n\t CROSS JOIN jsonb_array_elements(loc) WITH ORDINALITY AS e (elem, num)\n\t)\n\tSELECT jsonb_agg(elem ORDER BY elem_num)\n\tFROM element\n\tWHERE jsonb_path_exists(elem, '$ ? (@ < 14)')\n\tGROUP BY array_num;\n\t\n\t jsonb_agg\n\t---------------\n\t [13.2635]\n\t [13.4034]\n\t(2 rows)\n\n[1] https://www.postgresql.org/docs/current/functions-json.html#STRICT-AND-LAX-MODES\n\n--\nErik\n\n\n",
"msg_date": "Sun, 17 Sep 2023 00:13:56 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Sep 16, 2023, at 18:13, Erik Wienhold <[email protected]> wrote:\n\n> Looks like the effect of lax mode which may unwrap arrays when necessary [1].\n> The array unwrapping looks like the result of jsonb_array_elements().\n> \n> It kinda works in strict mode:\n> \n> SELECT jsonb_path_query(:'json', 'strict $.track.segments[*].location ? (@[*] < 14)');\n> \n> jsonb_path_query\n> -----------------------\n> [47.763, 13.4034]\n> [47.706, 13.2635]\n> (2 rows)\n> \n> But it does not remove elements from the matching arrays. Which I don't even\n> expect here because the path specifies the location array as the object to be\n> returned. The filter expression then only decides whether to return the\n> location array or not. Nowhere in the docs does it say that the filter\n> expression itself removes any elements from a matched array.\n\nYes, this is what I expected. It means “select the location array if any of its contents is less that 14.”\n\nI don’t understand why it’s different in lax mode, though, as `@[*]` is not a structural error; it confirms to the schema, as the docs say. The flattening in this case seems weird.\n\nAh, here’s why:, from the docs:\n\n\"Besides, comparison operators automatically unwrap their operands in the lax mode, so you can compare SQL/JSON arrays out-of-the-box.”\n\nThere follow some discussion of the need to specify `[*]` on segments in strict mode, but since that’s exactly what my example does (and the same for the locations array inside the filter), it doesn’t seem right to me that it would be unwrapped here.\n\n> Here's a query that filter's out individual array elements. It's quite a\n> mouthful (especially to preserve the order of array elements):\n\nWow fun, and yeah, it makes sense to take things apart in SQL for this sort of thing!\n\nBest,\n\nDavid",
"msg_date": "Sat, 16 Sep 2023 19:41:12 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> On Sep 15, 2023, at 20:36, Tom Lane <[email protected]> wrote:\n>> I think that that indicates that you're putting the info in the\n>> wrong place. Perhaps the right answer is to insert something\n>> more explicit in section 11.2, which is the first place where\n>> we really spend any effort discussing what can be indexed.\n\n> Fair enough. How ’bout this?\n\nAfter thinking about it for awhile, I think we need some more\ndiscursive explanation of what's allowed, perhaps along the lines\nof the attached. (I still can't shake the feeling that this is\nduplicative; but I can't find anything comparable until you get\ninto the weeds in Section V.)\n\nI put the new text at the end of section 11.1, but perhaps it\nbelongs a little further up in that section; it seems more\nimportant than some of the preceding paras.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 17 Sep 2023 12:20:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Sep 17, 2023, at 12:20, Tom Lane <[email protected]> wrote:\n\n> After thinking about it for awhile, I think we need some more\n> discursive explanation of what's allowed, perhaps along the lines\n> of the attached. (I still can't shake the feeling that this is\n> duplicative; but I can't find anything comparable until you get\n> into the weeds in Section V.)\n> \n> I put the new text at the end of section 11.1, but perhaps it\n> belongs a little further up in that section; it seems more\n> important than some of the preceding paras.\n\nI think this is useful, but also that it’s worth calling out explicitly that functions do not count as indexable operators. True by definition, of course, but I at least had assumed that since an operator is, in a sense, syntax sugar for a function call, they are in some sense the same thing.\n\nA header might be useful, something like “What Counts as an indexable expression”.\n\nBest,\n\nDavid",
"msg_date": "Sun, 17 Sep 2023 18:09:36 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Sep 12, 2023, at 21:00, Erik Wienhold <[email protected]> wrote:\n\n>> I posted this question on Stack Overflow (https://stackoverflow.com/q/77046554/79202),\n>> and from the suggestion I got there, it seems that @@ expects a boolean to be\n>> returned by the path query, while @? wraps it in an implicit exists(). Is that\n>> right?\n> \n> That's also my understanding. We had a discussion about the docs on @@, @?, and\n> jsonb_path_query on -general a while back [1]. Maybe it's useful also.\n\nHi, finally getting back to this, still fiddling to figure out the differences. From the thread you reference [1], is the point that @@ and jsonb_path_match() can only be properly used with a JSON Path expression that’s a predicate check?\n\nIf so, as far as I can tell, only exists() around the entire path query, or the deviation from the SQL standard that allows an expression to be a predicate?\n\nThis suggest to me that the \"Only the first item of the result is taken into account” bit from the docs may not be quite right. Consider this example:\n\ndavid=# select jsonb_path_query('{\"a\":[false,true,false]}', '$.a ?(@[*] == false)');\n jsonb_path_query\n------------------\n false\n false\n(2 rows)\n\ndavid=# select jsonb_path_match('{\"a\":[false,true,false]}', '$.a ?(@[*] == false)');\nERROR: single boolean result is expected\n\njsonb_path_match(), it turns out, only wants a single result. But furthermore perhaps the use of a filter predicate rather than a predicate expression for the entire path query is an error?\n\nCuriously, @@ seems okay with it:\n\ndavid=# select '{\"a\":[false,true,false]}'@@ '$.a ?(@[*] == false)';\n ?column? \n----------\n t\n\nNot a predicate query, and somehow returns true even though the first item of the result is false? Is that how it should be?\n\nBest,\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CACJufxE01sxgvtG4QEvRZPzs_roggsZeVvBSGpjM5tzE5hMCLA%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 8 Oct 2023 19:13:08 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On 2023-10-09 01:13 +0200, David E. Wheeler write:\n> On Sep 12, 2023, at 21:00, Erik Wienhold <[email protected]> wrote:\n> \n> >> I posted this question on Stack Overflow (https://stackoverflow.com/q/77046554/79202),\n> >> and from the suggestion I got there, it seems that @@ expects a boolean to be\n> >> returned by the path query, while @? wraps it in an implicit exists(). Is that\n> >> right?\n> > \n> > That's also my understanding. We had a discussion about the docs on @@, @?, and\n> > jsonb_path_query on -general a while back [1]. Maybe it's useful also.\n> \n> Hi, finally getting back to this, still fiddling to figure out the\n> differences. From the thread you reference [1], is the point that @@\n> and jsonb_path_match() can only be properly used with a JSON Path\n> expression that’s a predicate check?\n\nI think so. That's also supported by the existing docs which only\nmention \"JSON path predicate\" for @@ and jsonb_path_match().\n\n> If so, as far as I can tell, only exists() around the entire path\n> query, or the deviation from the SQL standard that allows an\n> expression to be a predicate?\n\nLooks like that. But note that exists() is also a filter expression.\nSo wrapping the entire jsonpath in exists() is also a deviation from the\nSQL standard which only allows predicates in filter expressions, i.e.\n'<path> ? (<predicate>)'.\n\n> This suggest to me that the \"Only the first item of the result is\n> taken into account” bit from the docs may not be quite right.\n\nYes, this was also the issue in the referenced thread[1]. 
I think my\nsuggesstion in [2] explains it (as far as I understand it).\n\n> Consider this example:\n> \n> david=# select jsonb_path_query('{\"a\":[false,true,false]}', '$.a ?(@[*] == false)');\n> jsonb_path_query\n> ------------------\n> false\n> false\n> (2 rows)\n> \n> david=# select jsonb_path_match('{\"a\":[false,true,false]}', '$.a ?(@[*] == false)');\n> ERROR: single boolean result is expected\n> \n> jsonb_path_match(), it turns out, only wants a single result. But\n> furthermore perhaps the use of a filter predicate rather than a\n> predicate expression for the entire path query is an error?\n\nYes, I think @@ and jsonb_path_match() should not be used with filter\nexpressions because the jsonpath returns whatever the path expression\nyields (which may be an actual boolean value in the jsonb). The filter\nexpression only filters (as the name suggests) what the path expression\nyields.\n\n> Curiously, @@ seems okay with it:\n> \n> david=# select '{\"a\":[false,true,false]}'@@ '$.a ?(@[*] == false)';\n> ?column? \n> ----------\n> t\n> \n> Not a predicate query, and somehow returns true even though the first\n> item of the result is false? Is that how it should be?\n\nYour example does a text search equivalent to:\n\n\tselect to_tsvector('{\"a\":[false,true,false]}') @@ plainto_tsquery('$.a ? (@[*] == true)')\n\nYou forgot the cast to jsonb. jsonb @@ jsonpath actually returns null:\n\n\ttest=# select '{\"a\":[false,true,false]}'::jsonb @@ '$.a ? (@[*] == false)';\n\t ?column?\n\t----------\n\t <null>\n\t(1 row)\n\nThis matches the note right after the docs for @@:\n\n\"The jsonpath operators @? and @@ suppress the following errors: missing\n object field or array element, unexpected JSON item type, datetime and\n numeric errors. The jsonpath-related functions described below can also\n be told to suppress these types of errors. 
This behavior might be\n helpful when searching JSON document collections of varying structure.\"\n\nThat would be the silent argument of jsonb_path_match():\n\n\ttest=# select jsonb_path_match('{\"a\":[false,true,false]}', '$.a ? (@[*] == false)', silent => true);\n\t jsonb_path_match \n\t------------------\n\t <null>\n\t(1 row)\n\n[1] https://www.postgresql.org/message-id/CACJufxE01sxgvtG4QEvRZPzs_roggsZeVvBSGpjM5tzE5hMCLA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/880194083.579916.1680598906819%40office.mailbox.org\n\n-- \nErik\n\n\n",
"msg_date": "Sat, 14 Oct 2023 04:50:05 +0200",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "Thanks for the reply, Erik. Have appreciated collaborating with you on a few different things lately!\n\n> On Oct 13, 2023, at 22:50, Erik Wienhold <[email protected]> wrote:\n\n>> Hi, finally getting back to this, still fiddling to figure out the\n>> differences. From the thread you reference [1], is the point that @@\n>> and jsonb_path_match() can only be properly used with a JSON Path\n>> expression that’s a predicate check?\n> \n> I think so. That's also supported by the existing docs which only\n> mention \"JSON path predicate\" for @@ and jsonb_path_match().\n\nOkay, good.\n\n>> If so, as far as I can tell, only exists() around the entire path\n>> query, or the deviation from the SQL standard that allows an\n>> expression to be a predicate?\n> \n> Looks like that. But note that exists() is also a filter expression.\n> So wrapping the entire jsonpath in exists() is also a deviation from the\n> SQL standard which only allows predicates in filter expressions, i.e.\n> '<path> ? (<predicate>)'.\n\nYeah. I’m starting to get the sense that the Postgres extension of the standard to allow predicates without filters is almost a different thing, like there are two Pg SQL/JSON Path languages:\n\n1. SQL Standard path language for selecting values and includes predicates. Returns the selected value(s). Supported by `@?` and jsonb_path_exists().\n\n2. The Postgres predicate path language which returns a boolean, akin to a WHERE expression. Supported by `@@` and jsonb_path_match()\n\nBoth are supported by jsonb_path_query(), but if you use a standard path you get the values and if you use a predicate path you get a boolean. This feels a big overloaded to me, TBH; I find myself wanting them to be separate types since the behaviors vary quite a bit!\n\n>> This suggest to me that the \"Only the first item of the result is\n>> taken into account” bit from the docs may not be quite right.\n> \n> Yes, this was also the issue in the referenced thread[1]. 
I think my\n> suggesstion in [2] explains it (as far as I understand it).\n\nYeah, lax vs. strict mode stuff definitely creates some added complexity. I see now I missed the rest of that thread; seeing the entire thread on one page[1] really helps. I’d like to take a stab at the doc improvements Tom suggests[2].\n\n>> jsonb_path_match(), it turns out, only wants a single result. But\n>> furthermore perhaps the use of a filter predicate rather than a\n>> predicate expression for the entire path query is an error?\n> \n> Yes, I think @@ and jsonb_path_match() should not be used with filter\n> expressions because the jsonpath returns whatever the path expression\n> yields (which may be an actual boolean value in the jsonb). The filter\n> expression only filters (as the name suggests) what the path expression\n> yields.\n\nAgreed. It only gets worse with a filter expression that selects a single value:\n\ndavid=# select jsonb_path_match('{\"a\":[false,true]}', '$.a ?(@[*] == false)');\n jsonb_path_match \n------------------\n f\n\nPresumably it returns false because the value selected is JSON `false`:\n\ndavid=# select jsonb_path_query('{\"a\":[false,true]}', '$.a ?(@[*] == false)');\n jsonb_path_query \n------------------\n false\n\nWhich seems misleading, frankly. Would it be possible to update jsonb_path_match and @@ to raise an error when the path expression is not a predicate?\n\n\n>> Curiously, @@ seems okay with it:\n>> \n>> david=# select '{\"a\":[false,true,false]}'@@ '$.a ?(@[*] == false)';\n>> ?column? \n>> ----------\n>> t\n>> \n>> Not a predicate query, and somehow returns true even though the first\n>> item of the result is false? Is that how it should be?\n> \n> Your example does a text search equivalent to:\n> \n> select to_tsvector('{\"a\":[false,true,false]}') @@ plainto_tsquery('$.a ? (@[*] == true)')\n> \n> You forgot the cast to jsonb. 
\n\nOh good grief 🤦🏻♂️\n\n> jsonb @@ jsonpath actually returns null:\n> \n> test=# select '{\"a\":[false,true,false]}'::jsonb @@ '$.a ? (@[*] == false)';\n> ?column?\n> ----------\n> <null>\n> (1 row)\n\nYes, much better, though see the result above that returns a single `false` and confuses things.\n\n> This matches the note right after the docs for @@:\n\nYeah, that makes sense. But here’s a bit about lax mode[3] that confuses me:\n\n> The lax mode facilitates matching of a JSON document structure and path expression if the JSON data does not conform to the expected schema. If an operand does not match the requirements of a particular operation, it can be automatically wrapped as an SQL/JSON array or unwrapped by converting its elements into an SQL/JSON sequence before performing this operation. Besides, comparison operators automatically unwrap their operands in the lax mode, so you can compare SQL/JSON arrays out-of-the-box.\n\nThis automatic flattening in lax mode seems odd, because it means you get different results in strict and lax mode where there are no errors. In lax mode, you get a set:\n\ndavid=# select jsonb_path_query('{\"a\":[1,2,3,4,5]}', '$.a ?(@[*] > 2)');\njsonb_path_query \n------------------\n3\n4\n5\n(3 rows)\n\nBut in strict mode, you get the array selected by `$.a`, which is more what I would expect:\n\ndavid=# select jsonb_path_query('{\"a\":[1,2,3,4,5]}', 'strict $.a ?(@[*] > 2)');\n jsonb_path_query \n------------------\n [1, 2, 3, 4, 5]\n\nThis seems like an odd inconsistency in return values, but perhaps the standard calls for this? I don’t have access to it, but MSSQL docs[4], at least, say:\n\n> * In **lax** mode, the function returns empty values if the path expression contains an error. 
For example, if you request the value **$.name**, and the JSON text doesn't contain a **name** key, the function returns null, but does not raise an error.\n> \n> * In **strict** mode, the function raises an error if the path expression contains an error.\n\nNo flattening, only error suppression. The Oracle docs[5] mention array flattening, but I don’t have it up and running to see if that means query *results* are flattened.\n\nBest,\n\nDavid\n\n\n[1] https://www.postgresql.org/message-id/flat/CACJufxE01sxgvtG4QEvRZPzs_roggsZeVvBSGpjM5tzE5hMCLA%40mail.gmail.com\n\n[2] https://www.postgresql.org/message-id/1229727.1680535592%40sss.pgh.pa.us\n\n[3] https://www.postgresql.org/docs/current/functions-json.html#STRICT-AND-LAX-MODES\n\n[4] https://learn.microsoft.com/en-us/sql/relational-databases/json/json-path-expressions-sql-server?view=sql-server-ver16#PATHMODE\n\n[5] https://docs.oracle.com/en/database/oracle/oracle-database/21/adjsn/json-path-expressions.html#GUID-8656CAB9-C293-4A99-BB62-F38F3CFC4C13\n\n",
"msg_date": "Sat, 14 Oct 2023 15:27:07 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Sep 17, 2023, at 18:09, David E. Wheeler <[email protected]> wrote:\n\n> I think this is useful, but also that it’s worth calling out explicitly that functions do not count as indexable operators. True by definition, of course, but I at least had assumed that since an operator is, in a sense, syntax sugar for a function call, they are in some sense the same thing.\n> \n> A header might be useful, something like “What Counts as an indexable expression”.\n\nHey Tom, are you still thinking about adding this bit to the docs? I took a quick look at master and didn’t see it there.\n\nThanks,\n\nDavid\n\n\n\n",
"msg_date": "Sun, 17 Dec 2023 12:55:05 -0500",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> Hey Tom, are you still thinking about adding this bit to the docs? I took a quick look at master and didn’t see it there.\n\nI'd waited because the discussion was still active, and then it\nkind of slipped off the radar. I'll take another look and push\nsome form of what I suggested. That doesn't really address the\njsonpath oddities you were on about, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 17 Dec 2023 16:08:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JSON Path and GIN Questions"
},
{
"msg_contents": "On Dec 17, 2023, at 16:08, Tom Lane <[email protected]> wrote:\n\n> I'd waited because the discussion was still active, and then it\n> kind of slipped off the radar. I'll take another look and push\n> some form of what I suggested.\n\nRight on.\n\n> That doesn't really address the\n> jsonpath oddities you were on about, though.\n\nNo, I attempted to address those in [a patch][1].\n\n [1]: https://commitfest.postgresql.org/45/4624/\n\nBest,\n\nDavid\n\n\n\n",
"msg_date": "Sun, 17 Dec 2023 18:30:10 -0500",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JSON Path and GIN Questions"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nThe proposal by Bertrand in CC to jumble CALL and SET in [1] was\r\nrejected at the time for a more robust solution to jumble DDL.\r\n\r\nMichael also in CC made this possible with commit 3db72ebcbe.\r\n\r\nThe attached patch takes advantage of the jumbling infrastructure\r\nadded in the above mentioned commit and jumbles the CALL statement\r\nin pg_stat_statements.\r\n\r\nThe patch also modifies existing test cases for CALL handling in pg_stat_statements\r\nand adds additional tests which prove that a CALL to an overloaded procedure\r\nwill generate a different query_id.\r\n\r\nAs far as the SET command mentioned in [1] is concerned, it is a bit more complex\r\nas it requires us to deal with A_Constants which is not very straightforward. We can surely\r\ndeal with SET currently by applying custom query jumbling logic to VariableSetStmt,\r\nbut this can be dealt with in a separate discussion.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n[1] https://www.postgresql.org/message-id/flat/36e5bffe-e989-194f-85c8-06e7bc88e6f7%40amazon.com",
"msg_date": "Wed, 13 Sep 2023 00:48:48 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Jumble the CALL command in pg_stat_statements"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 12:48:48AM +0000, Imseih (AWS), Sami wrote:\n> The patch also modifies existing test cases for CALL handling in pg_stat_statements\n> and adds additional tests which prove that a CALL to an overloaded procedure\n> will generate a different query_id.\n\n+CALL overload(1);\n+CALL overload('A');\n[...]\n+ 1 | 0 | CALL overload($1)\n+ 1 | 0 | CALL overload($1)\n\nThat's not surprising to me. We've historically relied on the\nfunction OID in the jumbling of a FuncExpr, so I'm OK with that. This\nmay look a bit surprising though if you have a schema that enforces\nthe same function name for several data types. Having a DEFAULT does\nthis:\nCREATE OR REPLACE PROCEDURE overload(i text, j bool DEFAULT true) AS\n$$ DECLARE\n r text;\nBEGIN\n SELECT i::text INTO r;\nEND; $$ LANGUAGE plpgsql;\n\nThen with these three, and a jumbling based on the OID gives:\n+CALL overload(1);\n+CALL overload('A');\n+CALL overload('A', false);\n[...]\n- 1 | 0 | CALL overload($1)\n+ 2 | 0 | CALL overload($1)\n\nStill this grouping is much better than having thousands of entries\nwith different values. I am not sure if we should bother improving\nthat more than what you suggest that, especially as FuncExpr->args can\nitself include Const nodes as far as I recall.\n\n> As far as the SET command mentioned in [1] is concerned, it is a bit more complex\n> as it requires us to deal with A_Constants which is not very straightforward. We can surely\n> deal with SET currently by applying custom query jumbling logic to VariableSetStmt,\n> but this can be dealt with in a separate discussion.\n\nAs VariableSetStmt is the top-most node structure for SET/RESET\ncommands, using a custom implementation may be wise in this case,\nparticularly for the args made of A_Const. 
I don't really want to go\ndown to the internals of A_Const outside its internal implementation,\nas these can be used for some queries where there are multiple\nkeywords separated by whitespaces for one single A_Const, like\nisolation level values in transaction commands. This would lead to\nappending the dollar-based variables in weird ways for some patterns.\nCough.\n\n /* transformed output-argument expressions */\n- List *outargs pg_node_attr(query_jumble_ignore);\n+ List *outargs;\n\nThis choice is a bit surprising. How does it influence the jumbling?\nFor example, if I add a query_jumble_ignore to it, the regression\ntests of pg_stat_statements still pass. This is going to require more\ntest coverage to prove that this addition is useful.\n--\nMichael",
"msg_date": "Wed, 13 Sep 2023 13:37:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jumble the CALL command in pg_stat_statements"
},
{
"msg_contents": "> Still this grouping is much better than having thousands of entries\r\n> with different values. I am not sure if we should bother improving\r\n> that more than what you suggest that, especially as FuncExpr->args can\r\n> itself include Const nodes as far as I recall.\r\n\r\nI agree.\r\n\r\n> As far as the SET command mentioned in [1] is concerned, it is a bit more complex\r\n> as it requires us to deal with A_Constants which is not very straightforward. We can surely\r\n> deal with SET currently by applying custom query jumbling logic to VariableSetStmt,\r\n> but this can be dealt with in a separate discussion.\r\n\r\n> As VariableSetStmt is the top-most node structure for SET/RESET\r\n> commands, using a custom implementation may be wise in this case,\r\n\r\nI do have a patch for this with test cases, 0001-v1-Jumble-the-SET-command.patch\r\nIf you feel this needs a separate discussion I can start one.\r\n\r\nIn the patch, the custom _jumbleVariableSetStmt jumbles\r\n the kind, name, is_local and number of arguments ( in case of a list ) \r\nand tracks the locations for normalization.\r\n\r\n> This choice is a bit surprising. How does it influence the jumbling?\r\n> For example, if I add a query_jumble_ignore to it, the regression\r\n> tests of pg_stat_statements still pass. This is going to require more\r\n> test coverage to prove that this addition is useful.\r\n\r\nCALL with OUT or INOUT args is a bit strange, because\r\nas the doc [1] mentions \"Arguments must be supplied for all procedure parameters \r\nthat lack defaults, including OUT parameters. 
However, arguments \r\nmatching OUT parameters are not evaluated, so it's customary\r\nto just write NULL for them.\"\r\n\r\nso for pgss, passing a NULL or some other value into OUT/INOUT args should \r\nbe normalized like IN args.\r\n\r\n0001-v2-Jumble-the-CALL-command-in-pg_stat_statements.patch adds\r\nthese test cases.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n[1] https://www.postgresql.org/docs/current/sql-call.html",
"msg_date": "Wed, 13 Sep 2023 23:09:19 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Jumble the CALL command in pg_stat_statements"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 11:09:19PM +0000, Imseih (AWS), Sami wrote:\n> I do have a patch for this with test cases, 0001-v1-Jumble-the-SET-command.patch\n> If you feel this needs a separate discussion I can start one.\n\nAgreed tha tthis should have its own thread with a proper subject.\n\n> In the patch, the custom _jumbleVariableSetStmt jumbles\n> the kind, name, is_local and number of arguments ( in case of a list ) \n> and tracks the locations for normalization.\n\nThere is much more going on here, like FunctionSetResetClause, or\nAlterSystemStmt with its generic_reset.\n\n+ foreach (l, expr->args)\n+ {\n+ A_Const *ac = (A_Const *) lfirst(l);\n+\n+ if(ac->type != T_String)\n+ RecordConstLocation(jstate, ac->location);\n+ }\n\nEven this part, I am not sure if it is always correct. Couldn't we\nhave cases where String's A_Const had better be recorded as const?\n\n> CALL with OUT or INOUT args is a bit strange, because\n> as the doc [1] mentions \"Arguments must be supplied for all procedure parameters \n> that lack defaults, including OUT parameters. However, arguments \n> matching OUT parameters are not evaluated, so it's customary\n> to just write NULL for them.\"\n> \n> so for pgss, passing a NULL or some other value into OUT/INOUT args should \n> be normalized like IN args.\n\nI've been studying this one, and I can see why you're right here.\nThis feels much more natural to include. The INOUT parameters get\nregistered twice at the same position, and the duplicates are\ndiscarded by pg_stat_statements, which is OK. The patch is straight\nfor the CALL part, so I have applied it.\n--\nMichael",
"msg_date": "Thu, 28 Sep 2023 15:42:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jumble the CALL command in pg_stat_statements"
}
] |
[
{
"msg_contents": "The definition of hashctl is shown below\n\ntypedef struct HASHCTL\n{\n long num_partitions; /* # partitions (must be power of 2) */\n long ssize; /* segment size */\n long dsize; /* (initial) directory size */\n long max_dsize; /* limit to dsize if dir\nsize is limited */\n long ffactor; /* fill factor */\n Size keysize; /* hash key length in bytes */\n Size entrysize; /* total user element size\nin bytes */\n HashValueFunc hash; /* hash function */\n HashCompareFunc match; /* key comparison function */\n HashCopyFunc keycopy; /* key copying function */\n HashAllocFunc alloc; /* memory allocator */\n MemoryContext hcxt; /* memory context to use\nfor allocations */\n HASHHDR *hctl; /* location of header in\nshared mem */\n} HASHCTL;\n\n\n/*\n* Key copying functions must have this signature. The return value is not\n* used. (The definition is set up to allow memcpy() and strlcpy() to be\n* used directly.)\n*/\ntypedef void *(*HashCopyFunc) (void *dest, const void *src, Size keysize);\n\nAccording to the description, the keycopy function only copies the key, but\nin reality it copies the entire entry, i.e., the key and the value, is the\nname wrong? 
This may make the developer pass in an inappropriate keycopy\nparameter when creating the htab.\n\nThanks",
"msg_date": "Wed, 13 Sep 2023 09:14:04 +0800",
"msg_from": "ywgrit <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Is_the_member_name_of_hashctl_inappropriate=EF=BC=9F?="
},
{
"msg_contents": "ywgrit <[email protected]> writes:\n> According to the description, the keycopy function only copies the key, but\n> in reality it copies the entire entry, i.e., the key and the value,\n\nOn what grounds do you claim that? dynahash.c only ever passes \"keysize\"\nas the size parameter.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Sep 2023 21:36:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re:_Is_the_member_name_of_hashctl_inappropriate?=\n =?UTF-8?Q?=EF=BC=9F?="
},
{
"msg_contents": "You are right, I came to this erroneous conclusion based on the following\nmishandled experiment: My test program is shown below.\n\ntypedef struct ColumnIdentifier\n{\n Oid relid;\n AttrNumber resno;\n} ColumnIdentifier;\n\ntypedef struct ColumnType\n{\n ColumnIdentifier colId;\n Oid vartype;\n} ColumnType;\n\n\nint\nColumnIdentifier_compare(const void *key1, const void *key2, Size keysize)\n{\nconst ColumnIdentifier *colId_key1 = (const ColumnIdentifier *) key1;\nconst ColumnIdentifier *colId_key2 = (const ColumnIdentifier *) key2;\n\nreturn colId_key1->relid == colId_key2->relid && colId_key1->resno ==\ncolId_key2->resno ? 0 : 1;\n}\n\n\nvoid *\nColumnIdentifier_copy(void *dest, const void *src, Size keysize)\n{\nColumnIdentifier *colId_dest = (ColumnIdentifier *) dest;\nColumnIdentifier *colId_src = (ColumnIdentifier *) src;\n\ncolId_dest->relid = colId_src->relid;\ncolId_dest->resno = colId_src->resno;\n\n return NULL; /* not used */\n}\n\n\n HASHCTL hashctl;\n hashctl.hash = tag_hash;\n hashctl.match = ColumnIdentifier_compare;\n hashctl.keycopy = ColumnIdentifier_copy;\n hashctl.keysize = sizeof(ColumnIdentifier);\n hashctl.entrysize = sizeof(ColumnType);\n HTAB *htab = hash_create(\"type of column\",\n 512 /* nelem */,\n &hashctl,\n HASH_ELEM | HASH_FUNCTION |\n HASH_COMPARE | HASH_KEYCOPY);\nColumnType *entry = NULL;\n\n ColumnIdentifier *colId = (ColumnIdentifier *)\nMemoryContextAllocZero(CurrentMemoryContext,\nsizeof(ColumnIdentifier));\n ColumnType *coltype = (ColumnType *)\nMemoryContextAllocZero(CurrentMemoryContext, sizeof(ColumnType));\n\n coltype->colId.relid = colId->relid = 16384;\n coltype->colId.resno = colId->resno = 1;\n coltype->vartype = INT4OID;\n\n hash_search(htab, coltype, HASH_ENTER, NULL);\n entry = hash_search(htab, colId, HASH_FIND, NULL);\n\n Assert(entry->colId.relid == colId->relid);\n Assert(entry->colId.resno == colId->resno);\n Assert(entry->vartype == INT4OID); // entry->vartype == 0\n\nAs shown above, 
entry->vartype is not assigned when keycopy copies only the\nkey. I modified ColumnIdentifier_copy as shown below so that the keycopy copies\nthe entire entry.\n\nvoid *\nColumnIdentifier_copy(void *dest, const void *src, Size keysize)\n{\nconst ColumnType *coltype_src = (const ColumnType *) src;\nColumnType *coltype_dest = (ColumnType *) dest;\n\n coltype_dest->colId.relid = coltype_src->colId.relid;\n coltype_dest->colId.resno = coltype_src->colId.resno;\n coltype_dest->vartype = coltype_src->vartype;\n\n return NULL; /* not used */\n}\n\nThe result is that entry->vartype is now the same as coltype->vartype, which\nleads me to believe that keycopy \"should\" copy the entire entry. Before\nsending the initial email, I looked at the implementation of\n\"hash_search_with_hash_value\" and found the line\n\"hashp->keycopy(ELEMENTKEY(currBucket), keyPtr, keysize)\", which made me\nwonder how the data field is copied into the HTAB?\n\nBut at the time I ignored a note above: \"Caller is expected to fill the\ndata field on return\". Now I know that the data field needs to be filled\nmanually, so it was my misuse. Thanks for the correction!\n\nThanks\n\nTom Lane <[email protected]> 于2023年9月13日周三 09:36写道:\n\n> ywgrit <[email protected]> writes:\n> > According to the description, the keycopy function only copies the key,\n> but\n> > in reality it copies the entire entry, i.e., the key and the value,\n>\n> On what grounds do you claim that? 
dynahash.c only ever passes \"keysize\"\n> as the size parameter.\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 13 Sep 2023 14:00:49 +0800",
"msg_from": "ywgrit <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re=3A_Is_the_member_name_of_hashctl_inappropriate=EF=BC=9F?="
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently, the psql's tab completion feature does not support properly \nfor ATTACH PARTITION. When <TAB> key is typed after \"ALTER TABLE \n<table_name> ATTACH PARTITION \", all possible table names should be \ndisplayed, however, foreign table names are not displayed. So I created \na patch that addresses this issue by ensuring that psql displays not \nonly normal table names but also foreign table names in this case.\n\nAny kind of feedback is appreciated.\n\nBest,\nTung Nguyen",
"msg_date": "Wed, 13 Sep 2023 10:18:46 +0900",
"msg_from": "bt23nguyent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tab completion for ATTACH PARTITION"
},
{
"msg_contents": "On 2023-Sep-13, bt23nguyent wrote:\n\n> Hi,\n> \n> Currently, the psql's tab completion feature does not support properly for\n> ATTACH PARTITION. When <TAB> key is typed after \"ALTER TABLE <table_name>\n> ATTACH PARTITION \", all possible table names should be displayed, however,\n> foreign table names are not displayed. So I created a patch that addresses\n> this issue by ensuring that psql displays not only normal table names but\n> also foreign table names in this case.\n\nSounds reasonable, but I think if we're going to have a specific query\nfor this case, we should make it a lot more precise. For example, any\nrelation that's already a partition cannot be attached; as can't any\nrelation that is involved in legacy inheritance as either parent or\nchild.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"This is a foot just waiting to be shot\" (Andrew Dunstan)\n\n\n",
"msg_date": "Wed, 13 Sep 2023 09:19:29 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for ATTACH PARTITION"
},
{
"msg_contents": "On 2023-09-13 12:19 a.m., Alvaro Herrera wrote:\n> On 2023-Sep-13, bt23nguyent wrote:\n>\n>> Hi,\n>>\n>> Currently, the psql's tab completion feature does not support properly for\n>> ATTACH PARTITION. When <TAB> key is typed after \"ALTER TABLE <table_name>\n>> ATTACH PARTITION \", all possible table names should be displayed, however,\n>> foreign table names are not displayed. So I created a patch that addresses\n>> this issue by ensuring that psql displays not only normal table names but\n>> also foreign table names in this case.\n> Sounds reasonable, but I think if we're going to have a specific query\n> for this case, we should make it a lot more precise. For example, any\n> relation that's already a partition cannot be attached; as can't any\n> relation that is involved in legacy inheritance as either parent or\n> child.\n\nI applied the patch and performed below tests. I think it would be \nbetter if \"attach partition\" can filter out those partitions which has \nalready been attached, just like \"detach partition\" is capable to filter \nout the partitions which has already been detached.\n\nHere are my test steps and results:\n\n\n### create a main PG cluster on port 5432 and run below commands:\n\nCREATE EXTENSION postgres_fdw;\nCREATE SERVER s1 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (dbname \n'postgres', host '127.0.0.1', port '5433');\nCREATE USER MAPPING for david SERVER s1 OPTIONS(user 'david');\nCREATE TABLE t (a INT, b TEXT) PARTITION BY RANGE (a);\nCREATE TABLE t_local PARTITION OF t FOR VALUES FROM (1) TO (10);\nCREATE FOREIGN TABLE t_s1 PARTITION OF t FOR VALUES FROM (11) TO (20) \nSERVER s1 OPTIONS(schema_name 'public', table_name 't');\nCREATE FOREIGN TABLE t_s1 SERVER s1 OPTIONS(schema_name 'public', \ntable_name 't');\n\n\n### create a foreign PG cluster on port 5433 and run below command:\nCREATE TABLE t (a INT, b TEXT);\n\n\n### \"detach partition\" can filter out already detached partition, in \nthis case, 
\"t_local\".\n\npostgres=# alter table t detach partition\ninformation_schema. public. t_local t_s1\npostgres=# alter table t detach partition t_s1 ;\nALTER TABLE\npostgres=# alter table t detach partition\ninformation_schema. public. t_local\n\n\n## before patch, \"attach partition\" can't display foreign table;\npostgres=# alter table t attach partition\ninformation_schema. public. t t_local\n\n\n### after patch, \"attach partition\" dose display the foreign table \n(patch works).\npostgres=# alter table t attach partition\ninformation_schema. public. t t_local t_s1\n\nIn both cases, the already attached partition \"t_local\" shows up. If it \ncan be filtered out then I believe better user experience.\n\n\nBest regards,\n\nDavid\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 10 Oct 2023 12:43:20 -0700",
"msg_from": "David Zhang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for ATTACH PARTITION"
},
{
"msg_contents": "On Wed, 13 Sept 2023 at 06:57, bt23nguyent <[email protected]> wrote:\n>\n> Hi,\n>\n> Currently, the psql's tab completion feature does not support properly\n> for ATTACH PARTITION. When <TAB> key is typed after \"ALTER TABLE\n> <table_name> ATTACH PARTITION \", all possible table names should be\n> displayed, however, foreign table names are not displayed. So I created\n> a patch that addresses this issue by ensuring that psql displays not\n> only normal table names but also foreign table names in this case.\n>\n> Any kind of feedback is appreciated.\n\nI have changed the status of the commitfest entry to RWF as Alvaro's\ncomments have not yet been addressed. Kindly post an updated version\nby addressing the comments and add a new commitfest entry for the\nsame.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 29 Jan 2024 08:39:28 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for ATTACH PARTITION"
}
] |
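Alvaro's precision criteria above (a relation that is already a partition cannot be attached, nor can a legacy-inheritance parent or child) can be illustrated outside of psql. The C sketch below is a hypothetical stand-in for the predicate such a more precise tab-completion catalog query would encode; the function name and parameters are invented for illustration and are not from tab-complete.c:

```c
#include <stdbool.h>

/* Hypothetical sketch, not actual PostgreSQL code: the filter a more
 * precise ATTACH PARTITION tab-completion query could apply.
 * relkind uses pg_class letters: 'r' plain table, 'f' foreign table,
 * 'p' partitioned table, 'v' view, and so on. */
bool
is_attach_candidate(char relkind, bool relispartition, bool in_legacy_inheritance)
{
    /* only plain, foreign, and partitioned tables can be attached */
    bool kind_ok = (relkind == 'r' || relkind == 'f' || relkind == 'p');

    /* a relation that is already a partition cannot be attached again,
     * and neither can a legacy-inheritance parent or child */
    return kind_ok && !relispartition && !in_legacy_inheritance;
}
```

Under this rule, David's already-attached "t_local" (relispartition = true) would be filtered out of the completion list, while the detached foreign table "t_s1" would still be offered.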
[
{
"msg_contents": "Yesterday noticed a TAP test assignment to an unused $result.\n\nPSA patch to remove that.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 13 Sep 2023 11:56:37 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "subscription TAP test has unused $result"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 8:43 AM Peter Smith <[email protected]> wrote:\n>\n> Yesterday noticed a TAP test assignment to an unused $result.\n>\n> PSA patch to remove that.\n>\n\nThough it is harmless I think we can clean it up. Your patch looks good to me.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 13 Sep 2023 10:14:43 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription TAP test has unused $result"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 10:14 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Sep 13, 2023 at 8:43 AM Peter Smith <[email protected]> wrote:\n> >\n> > Yesterday noticed a TAP test assignment to an unused $result.\n> >\n> > PSA patch to remove that.\n> >\n>\n> Though it is harmless I think we can clean it up. Your patch looks good to me.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 14 Sep 2023 14:40:35 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subscription TAP test has unused $result"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 7:10 PM Amit Kapila <[email protected]> wrote:\n>\n> > Though it is harmless I think we can clean it up. Your patch looks good to me.\n> >\n>\n> Pushed.\n>\n\nThanks!\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 15 Sep 2023 08:53:26 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: subscription TAP test has unused $result"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen a snapshot file reading fails in ImportSnapshot(), it errors out\nwith \"invalid snapshot identifier\". This message better suits for\nsnapshot identifier parsing errors which is being done just before the\nfile reading. The attached patch adds a generic file reading error\nmessage with path to help distinguish if the issue is with snapshot\nidentifier parsing or file reading.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 13 Sep 2023 11:40:25 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Have better wording for snapshot file reading failure"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 11:40:25AM +0530, Bharath Rupireddy wrote:\n> When a snapshot file reading fails in ImportSnapshot(), it errors out\n> with \"invalid snapshot identifier\". This message better suits for\n> snapshot identifier parsing errors which is being done just before the\n> file reading. The attached patch adds a generic file reading error\n> message with path to help distinguish if the issue is with snapshot\n> identifier parsing or file reading.\n\n f = AllocateFile(path, PG_BINARY_R);\n if (!f)\n ereport(ERROR,\n- (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n- errmsg(\"invalid snapshot identifier: \\\"%s\\\"\", idstr)));\n+ (errcode_for_file_access(),\n+ errmsg(\"could not open file \\\"%s\\\" for reading: %m\",\n+ path)));\n\nAgreed that this just looks like a copy-pasto. The path provides\nenough context about what's being read, so using this generic error\nmessage is fine. Will apply if there are no objections. \n--\nMichael",
"msg_date": "Wed, 13 Sep 2023 15:18:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
},
{
"msg_contents": "On 9/13/23 02:10, Bharath Rupireddy wrote:\n> Hi,\n>\n> When a snapshot file reading fails in ImportSnapshot(), it errors out\n> with \"invalid snapshot identifier\". This message better suits for\n> snapshot identifier parsing errors which is being done just before the\n> file reading. The attached patch adds a generic file reading error\n> message with path to help distinguish if the issue is with snapshot\n> identifier parsing or file reading.\n>\nI suggest error message to include \"snapshot\" keyword in message, like this:\n\nerrmsg(\"could not open snapshot file \\\"%s\\\" for reading: %m\",\n\nand also tweak other messages accordingly.\n\n\n-- \nKind Regards,\nYogesh Sharma\nPostgreSQL, Linux, and Networking Expert\nOpen Source Enthusiast and Advocate\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 06:02:46 -0400",
"msg_from": "Yogesh Sharma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 3:32 PM Yogesh Sharma\n<[email protected]> wrote:\n>\n> On 9/13/23 02:10, Bharath Rupireddy wrote:\n> > Hi,\n> >\n> > When a snapshot file reading fails in ImportSnapshot(), it errors out\n> > with \"invalid snapshot identifier\". This message better suits for\n> > snapshot identifier parsing errors which is being done just before the\n> > file reading. The attached patch adds a generic file reading error\n> > message with path to help distinguish if the issue is with snapshot\n> > identifier parsing or file reading.\n> >\n> I suggest error message to include \"snapshot\" keyword in message, like this:\n>\n> errmsg(\"could not open snapshot file \\\"%s\\\" for reading: %m\",\n>\n> and also tweak other messages accordingly.\n\n-1. The path includes the pg_snapshots there which is enough to give\nthe clue, so no need to say \"could not open snapshot file\". AFAICS,\nthis is the typical messaging followed across postgres code for\nAllocateFile failures.\n\n[1]\n/* Define pathname of exported-snapshot files */\n#define SNAPSHOT_EXPORT_DIR \"pg_snapshots\"\n\n /* OK, read the file */\n snprintf(path, MAXPGPATH, SNAPSHOT_EXPORT_DIR \"/%s\", idstr);\n\n f = AllocateFile(path, PG_BINARY_R);\n if (!f)\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not open file \\\"%s\\\" for reading: %m\",\n path)));\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Sep 2023 16:22:24 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
},
{
"msg_contents": "> On 13 Sep 2023, at 08:18, Michael Paquier <[email protected]> wrote:\n\n> f = AllocateFile(path, PG_BINARY_R);\n> if (!f)\n> ereport(ERROR,\n> - (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> - errmsg(\"invalid snapshot identifier: \\\"%s\\\"\", idstr)));\n> + (errcode_for_file_access(),\n> + errmsg(\"could not open file \\\"%s\\\" for reading: %m\",\n> + path)));\n> \n> Agreed that this just looks like a copy-pasto. The path provides\n> enough context about what's being read, so using this generic error\n> message is fine. Will apply if there are no objections.\n\n+1. This errmsg is already present so it eases the translation burden as well.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 13:19:38 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 01:19:38PM +0200, Daniel Gustafsson wrote:\n> +1. This errmsg is already present so it eases the translation burden as well.\n\nI was thinking about doing only that on HEAD, but there is an argument\nthat one could get confusing errors when dealing with snapshot imports\non back-branches as well, and it applies down to 11 without conflicts.\nSo, applied and backpatched.\n--\nMichael",
"msg_date": "Thu, 14 Sep 2023 10:33:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-14 10:33:33 +0900, Michael Paquier wrote:\n> On Wed, Sep 13, 2023 at 01:19:38PM +0200, Daniel Gustafsson wrote:\n> > +1. This errmsg is already present so it eases the translation burden as well.\n> \n> I was thinking about doing only that on HEAD, but there is an argument\n> that one could get confusing errors when dealing with snapshot imports\n> on back-branches as well, and it applies down to 11 without conflicts.\n> So, applied and backpatched.\n\nHuh. I don't think this is a good idea - and certainly not in the back\nbranches. The prior message made more sense, imo. The fact that the snapshot\nidentifier is a file is an implementation detail, no snapshot with the\nidentifier being exported is a user level detail. Hence that being mentioned\nin the error message.\n\nI can see an argument for treating ENOENT different than other errors though,\nand using the standard file opening error message for anything other than\nENOENT.\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Wed, 13 Sep 2023 19:07:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-13 19:07:24 -0700, Andres Freund wrote:\n> On 2023-09-14 10:33:33 +0900, Michael Paquier wrote:\n> > On Wed, Sep 13, 2023 at 01:19:38PM +0200, Daniel Gustafsson wrote:\n> > > +1. This errmsg is already present so it eases the translation burden as well.\n> > \n> > I was thinking about doing only that on HEAD, but there is an argument\n> > that one could get confusing errors when dealing with snapshot imports\n> > on back-branches as well, and it applies down to 11 without conflicts.\n> > So, applied and backpatched.\n> \n> Huh. I don't think this is a good idea - and certainly not in the back\n> branches. The prior message made more sense, imo. The fact that the snapshot\n> identifier is a file is an implementation detail, no snapshot with the\n> identifier being exported is a user level detail. Hence that being mentioned\n> in the error message.\n> \n> I can see an argument for treating ENOENT different than other errors though,\n> and using the standard file opening error message for anything other than\n> ENOENT.\n\nOh, and given that this actually changes the error code for an invalid\nsnapshot, I think this needs to be reverted. It's not that unlikely that\nthere's code out there that depends on getting ERRCODE_INVALID_PARAMETER_VALUE\nwhen the snapshot doesn't exist.\n\n- Andres\n\n\n",
"msg_date": "Wed, 13 Sep 2023 19:09:32 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 07:09:32PM -0700, Andres Freund wrote:\n> On 2023-09-13 19:07:24 -0700, Andres Freund wrote:\n>> Huh. I don't think this is a good idea - and certainly not in the back\n>> branches. The prior message made more sense, imo. The fact that the snapshot\n>> identifier is a file is an implementation detail, no snapshot with the\n>> identifier being exported is a user level detail. Hence that being mentioned\n>> in the error message.\n>> \n>> I can see an argument for treating ENOENT different than other errors though,\n>> and using the standard file opening error message for anything other than\n>> ENOENT.\n> \n> Oh, and given that this actually changes the error code for an invalid\n> snapshot, I think this needs to be reverted. It's not that unlikely that\n> there's code out there that depends on getting ERRCODE_INVALID_PARAMETER_VALUE\n> when the snapshot doesn't exist.\n\nAhem. This seems to be the only code path that tracks a failure on\nAllocateFile() where we don't show %m at all, while the error is\nmisleading in basically all the cases as errno holds the extra\ninformation telling somebody that something's going wrong, so I don't\nquite see how it is useful to tell \"invalid snapshot identifier\" on\nan EACCES or even ENOENT when opening this file, with zero information\nabout what's happening on top of that? Even on ENOENT, one can be\nconfused with the same error message generated a few lines above: if\nAllocateFile() fails, the snapshot identifier is correctly shaped, but\nits file is missing. If ENOENT is considered a particular case with\nthe old message, we'd still not know if this refers to the first\nfailure or the second failure.\n\nSaying that, I'm OK with reverting to the previous behavior on\nback-branches if you feel strongly about that.\n--\nMichael",
"msg_date": "Thu, 14 Sep 2023 13:33:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 01:33:39PM +0900, Michael Paquier wrote:\n> Ahem. This seems to be the only code path that tracks a failure on\n> AllocateFile() where we don't show %m at all, while the error is\n> misleading in basically all the cases as errno holds the extra\n> information telling somebody that something's going wrong, so I don't\n> quite see how it is useful to tell \"invalid snapshot identifier\" on\n> an EACCES or even ENOENT when opening this file, with zero information\n> about what's happening on top of that? Even on ENOENT, one can be\n> confused with the same error message generated a few lines above: if\n> AllocateFile() fails, the snapshot identifier is correctly shaped, but\n> its file is missing. If ENOENT is considered a particular case with\n> the old message, we'd still not know if this refers to the first\n> failure or the second failure.\n\nI see your point after thinking about it, the new message would show\nup when running a SET TRANSACTION SNAPSHOT with a value id, which is\nnot helpful either. Your idea of filtering out ENOENT may be the best\nmove to get more information on %m. Still, it looks to me that using\nthe same error message for both cases is incorrect. So, how about a\n\"could not find the requested snapshot\" if the snapshot ID is valid\nbut its file cannot be found? We don't have any tests for the failure\npaths, either, so I've added some.\n\nThis new suggestion is only for HEAD. I've reverted a0d87bc & co for\nnow.\n--\nMichael",
"msg_date": "Thu, 14 Sep 2023 16:29:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-14 16:29:22 +0900, Michael Paquier wrote:\n> On Thu, Sep 14, 2023 at 01:33:39PM +0900, Michael Paquier wrote:\n> > Ahem. This seems to be the only code path that tracks a failure on\n> > AllocateFile() where we don't show %m at all, while the error is\n> > misleading in basically all the cases as errno holds the extra\n> > information telling somebody that something's going wrong, so I don't\n> > quite see how it is useful to tell \"invalid snapshot identifier\" on\n> > an EACCES or even ENOENT when opening this file, with zero information\n> > about what's happening on top of that? Even on ENOENT, one can be\n> > confused with the same error message generated a few lines above: if\n> > AllocateFile() fails, the snapshot identifier is correctly shaped, but\n> > its file is missing. If ENOENT is considered a particular case with\n> > the old message, we'd still not know if this refers to the first\n> > failure or the second failure.\n> \n> I see your point after thinking about it, the new message would show\n> up when running a SET TRANSACTION SNAPSHOT with a value id, which is\n> not helpful either. Your idea of filtering out ENOENT may be the best\n> move to get more information on %m. Still, it looks to me that using\n> the same error message for both cases is incorrect.\n\nI wouldn't call it quite incorrect, but it's certainly a good idea to provide\nrelevant details for the rare case of errors other than ENOENT.\n\n\n> So, how about a \"could not find the requested snapshot\" if the snapshot ID\n> is valid but its file cannot be found?\n\nI'd probably just go for something like \"snapshot \\\"%s\\\" does not exist\",\nsimilar to what we report for unknown tables etc. Arguably changing the\nerrcode to ERRCODE_UNDEFINED_OBJECT would make this more precise?\n\n\n> This new suggestion is only for HEAD. 
I've reverted a0d87bc & co for\n> now.\n\nI think there's really no reason to backpatch this, so that makes sense to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 14 Sep 2023 17:33:35 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 05:33:35PM -0700, Andres Freund wrote:\n> I'd probably just go for something like \"snapshot \\\"%s\\\" does not exist\",\n> similar to what we report for unknown tables etc. Arguably changing the\n> errcode to ERRCODE_UNDEFINED_OBJECT would make this more precise?\n\nGood points. Updated as suggested in v2 attached.\n--\nMichael",
"msg_date": "Fri, 15 Sep 2023 14:20:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
},
{
"msg_contents": "On 9/14/23 20:33, Andres Freund wrote:\n> I'd probably just go for something like \"snapshot \\\"%s\\\" does not exist\",\n> similar to what we report for unknown tables etc. Arguably changing the\n> errcode to ERRCODE_UNDEFINED_OBJECT would make this more precise?\n\n+1 better informative message compare to the original patch.\n\n-- \nKind Regards,\nYogesh Sharma\nPostgreSQL, Linux, and Networking Expert\nOpen Source Enthusiast and Advocate\n\n\n\n",
"msg_date": "Sun, 17 Sep 2023 08:36:06 -0400",
"msg_from": "Yogesh Sharma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Have better wording for snapshot file reading failure"
}
] |
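The split this thread converged on — ENOENT after a successful identifier parse means the named snapshot was never exported, while any other errno is a genuine file-access failure worth reporting with %m — can be sketched as a small errno classifier. This is a hedged illustration of the control flow only, not the committed ImportSnapshot() code:

```c
#include <errno.h>

/* Illustrative sketch of the error-reporting split discussed above:
 * ENOENT means the user named a snapshot that does not exist
 * (cf. ERRCODE_UNDEFINED_OBJECT in the later patch), while any other
 * errno is a real file-access problem (cf. errcode_for_file_access()),
 * where the %m detail is the useful part of the message. */
const char *
snapshot_open_error(int saved_errno)
{
    if (saved_errno == ENOENT)
        return "snapshot does not exist";

    return "could not open file";
}
```

With this shape, a typo'd-but-parseable snapshot id yields the object-level message, and an EACCES on pg_snapshots still surfaces as a file error with errno detail.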
[
{
"msg_contents": "Hi hackers,\n\nIt's not necessary to fill the key field for most cases, since\nhash_search has already done that for you. For developer that\nusing memset to zero the entry structure after enter it, fill the\nkey field is a must, but IMHO that is not good coding style, we\nreally should not touch the key field after insert it into the\ndynahash.\n\nThis patch fixed some most abnormal ones, instead of refilling the\nkey field of primitive types, adding some assert might be a better\nchoice.\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Wed, 13 Sep 2023 14:46:30 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "[dynahash] do not refill the hashkey after hash_search"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 1:47 PM Junwang Zhao <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> It's not necessary to fill the key field for most cases, since\n> hash_search has already done that for you. For developer that\n> using memset to zero the entry structure after enter it, fill the\n> key field is a must, but IMHO that is not good coding style, we\n> really should not touch the key field after insert it into the\n> dynahash.\n\n- memset(part_entry, 0, sizeof(LogicalRepPartMapEntry));\n- part_entry->partoid = partOid;\n+ Assert(part_entry->partoid == partOid);\n+ memset(entry, 0, sizeof(LogicalRepRelMapEntry));\n\nThis is making an assumption that the non-key part of\nLogicalRepPartMapEntry will never get new members. Without knowing much\nabout this code, it seems like a risk in the abstract.\n\n> This patch fixed some most abnormal ones, instead of refilling the\n> key field of primitive types, adding some assert might be a better\n> choice.\n\nTaking a quick look, I didn't happen to see any existing asserts of this\nsort, so the patch doesn't seem to be making things more \"normal\". I did\nsee a few instances of /* hash_search already filled in the key */, so if\nwe do anything at all here, we might prefer that.\n\n- hash_search(uncommitted_enums, serialized++, HASH_ENTER, NULL);\n+ (void) hash_search(uncommitted_enums, serialized++, HASH_ENTER, NULL);\n\nI prefer explicit (void) for new code, but others may disagree. I don't\nthink we have a preferred style for this, so changing current usage will\njust cause unnecessary code churn.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Sep 13, 2023 at 1:47 PM Junwang Zhao <[email protected]> wrote:>> Hi hackers,>> It's not necessary to fill the key field for most cases, since> hash_search has already done that for you. 
For developer that> using memset to zero the entry structure after enter it, fill the> key field is a must, but IMHO that is not good coding style, we> really should not touch the key field after insert it into the> dynahash.-\t\tmemset(part_entry, 0, sizeof(LogicalRepPartMapEntry));-\t\tpart_entry->partoid = partOid;+\t\tAssert(part_entry->partoid == partOid);+\t\tmemset(entry, 0, sizeof(LogicalRepRelMapEntry));This is making an assumption that the non-key part of LogicalRepPartMapEntry will never get new members. Without knowing much about this code, it seems like a risk in the abstract.> This patch fixed some most abnormal ones, instead of refilling the> key field of primitive types, adding some assert might be a better> choice.Taking a quick look, I didn't happen to see any existing asserts of this sort, so the patch doesn't seem to be making things more \"normal\". I did see a few instances of /* hash_search already filled in the key */, so if we do anything at all here, we might prefer that.-\t\thash_search(uncommitted_enums, serialized++, HASH_ENTER, NULL);+\t\t(void) hash_search(uncommitted_enums, serialized++, HASH_ENTER, NULL);I prefer explicit (void) for new code, but others may disagree. I don't think we have a preferred style for this, so changing current usage will just cause unnecessary code churn. --John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 13 Sep 2023 15:22:29 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [dynahash] do not refill the hashkey after hash_search"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 4:22 PM John Naylor\n<[email protected]> wrote:\n>\n>\n> On Wed, Sep 13, 2023 at 1:47 PM Junwang Zhao <[email protected]> wrote:\n> >\n> > Hi hackers,\n> >\n> > It's not necessary to fill the key field for most cases, since\n> > hash_search has already done that for you. For developer that\n> > using memset to zero the entry structure after enter it, fill the\n> > key field is a must, but IMHO that is not good coding style, we\n> > really should not touch the key field after insert it into the\n> > dynahash.\n>\n> - memset(part_entry, 0, sizeof(LogicalRepPartMapEntry));\n> - part_entry->partoid = partOid;\n> + Assert(part_entry->partoid == partOid);\n> + memset(entry, 0, sizeof(LogicalRepRelMapEntry));\n>\n> This is making an assumption that the non-key part of LogicalRepPartMapEntry will never get new members. Without knowing much about this code, it seems like a risk in the abstract.\n\nWhat do you mean by 'the non-key part of LogicalRepPartMapEntry will\nnever get new members'?\n\ntypedef struct LogicalRepPartMapEntry\n{\n Oid partoid; /* LogicalRepPartMap's key */\n LogicalRepRelMapEntry relmapentry;\n} LogicalRepPartMapEntry;\n\npartoid has already been filled by hash_search with HASH_ENTER action,\nso I think the\nabove code should have the same effects.\n\n>\n> > This patch fixed some most abnormal ones, instead of refilling the\n> > key field of primitive types, adding some assert might be a better\n> > choice.\n>\n> Taking a quick look, I didn't happen to see any existing asserts of this sort, so the patch doesn't seem to be making things more \"normal\". 
I did see a few instances of /* hash_search already filled in the key */, so if we do anything at all here, we might prefer that.\n\nThere are some code using assert for this sort, for example in\n*ReorderBufferToastAppendChunk*:\n\n```\nent = (ReorderBufferToastEnt *)\nhash_search(txn->toast_hash, &chunk_id, HASH_ENTER, &found);\n\nif (!found)\n{\n Assert(ent->chunk_id == chunk_id); <------- this\nline, by Robert Haas\n ent->num_chunks = 0;\n ent->last_chunk_seq = 0;\n```\n\nand in *rebuild_database_list*, tom commented that the key has already\nbeen filled, which I think\nhe was trying to tell people no need to assign the key again.\n\n```\n/* we assume it isn't found because the hash was just created */\ndb = hash_search(dbhash, &newdb, HASH_ENTER, NULL);\n\n/* hash_search already filled in the key */ <------- this\nline, by Tom Lane\ndb->adl_score = score++;\n/* next_worker is filled in later */\n```\n\n>\n> - hash_search(uncommitted_enums, serialized++, HASH_ENTER, NULL);\n> + (void) hash_search(uncommitted_enums, serialized++, HASH_ENTER, NULL);\n>\n> I prefer explicit (void) for new code, but others may disagree. I don't think we have a preferred style for this, so changing current usage will just cause unnecessary code churn.\n>\n\nWhat I am concerned about is that if we change the key after\nhash_search with HASH_ENTER action, there\nare chances that if we assign a wrong value, it will be impossible to\nmatch that entry again.\n\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 13 Sep 2023 16:46:31 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [dynahash] do not refill the hashkey after hash_search"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 3:46 PM Junwang Zhao <[email protected]> wrote:\n>\n> On Wed, Sep 13, 2023 at 4:22 PM John Naylor\n> <[email protected]> wrote:\n\n> > - memset(part_entry, 0, sizeof(LogicalRepPartMapEntry));\n> > - part_entry->partoid = partOid;\n> > + Assert(part_entry->partoid == partOid);\n> > + memset(entry, 0, sizeof(LogicalRepRelMapEntry));\n> >\n> > This is making an assumption that the non-key part of\nLogicalRepPartMapEntry will never get new members. Without knowing much\nabout this code, it seems like a risk in the abstract.\n>\n> What do you mean by 'the non-key part of LogicalRepPartMapEntry will\n> never get new members'?\n\nI mean, if this struct:\n\n> typedef struct LogicalRepPartMapEntry\n> {\n> Oid partoid; /* LogicalRepPartMap's key */\n> LogicalRepRelMapEntry relmapentry;\n> } LogicalRepPartMapEntry;\n\n...gets a new member, it will not get memset when memsetting \"relmapentry\".\n\n> > Taking a quick look, I didn't happen to see any existing asserts of\nthis sort, so the patch doesn't seem to be making things more \"normal\". 
I\ndid see a few instances of /* hash_search already filled in the key */, so\nif we do anything at all here, we might prefer that.\n>\n> There are some code using assert for this sort, for example in\n> *ReorderBufferToastAppendChunk*:\n\n> and in *rebuild_database_list*, tom commented that the key has already\n> been filled, which I think\n> he was trying to tell people no need to assign the key again.\n\nOkay, we have examples of each.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 13 Sep 2023 16:28:45 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [dynahash] do not refill the hashkey after hash_search"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 5:28 PM John Naylor\n<[email protected]> wrote:\n>\n>\n> On Wed, Sep 13, 2023 at 3:46 PM Junwang Zhao <[email protected]> wrote:\n> >\n> > On Wed, Sep 13, 2023 at 4:22 PM John Naylor\n> > <[email protected]> wrote:\n>\n> > > - memset(part_entry, 0, sizeof(LogicalRepPartMapEntry));\n> > > - part_entry->partoid = partOid;\n> > > + Assert(part_entry->partoid == partOid);\n> > > + memset(entry, 0, sizeof(LogicalRepRelMapEntry));\n> > >\n> > > This is making an assumption that the non-key part of LogicalRepPartMapEntry will never get new members. Without knowing much about this code, it seems like a risk in the abstract.\n> >\n> > What do you mean by 'the non-key part of LogicalRepPartMapEntry will\n> > never get new members'?\n>\n> I mean, if this struct:\n>\n> > typedef struct LogicalRepPartMapEntry\n> > {\n> > Oid partoid; /* LogicalRepPartMap's key */\n> > LogicalRepRelMapEntry relmapentry;\n> > } LogicalRepPartMapEntry;\n>\n> ...gets a new member, it will not get memset when memsetting \"relmapentry\".\n\nok, I see. I will leave this case as it was.\n\n>\n> > > Taking a quick look, I didn't happen to see any existing asserts of this sort, so the patch doesn't seem to be making things more \"normal\". I did see a few instances of /* hash_search already filled in the key */, so if we do anything at all here, we might prefer that.\n> >\n> > There are some code using assert for this sort, for example in\n> > *ReorderBufferToastAppendChunk*:\n>\n> > and in *rebuild_database_list*, tom commented that the key has already\n> > been filled, which I think\n> > he was trying to tell people no need to assign the key again.\n>\n> Okay, we have examples of each.\n>\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n\nAdd a v2 with some change to fix warnings about unused-parameter.\n\nI will add this to Commit Fest.\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Thu, 14 Sep 2023 16:28:26 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [dynahash] do not refill the hashkey after hash_search"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 04:28:26PM +0800, Junwang Zhao wrote:\n> Add a v2 with some change to fix warnings about unused-parameter.\n> \n> I will add this to Commit Fest.\n\nThis looks reasonable to me. I've marked the commitfest entry as\nready-for-committer. I will plan on committing it in a couple of days\nunless John has additional feedback or would like to do the honors.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 12 Oct 2023 22:07:47 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [dynahash] do not refill the hashkey after hash_search"
},
{
"msg_contents": "On Fri, Oct 13, 2023 at 10:07 AM Nathan Bossart\n<[email protected]> wrote:\n>\n> On Thu, Sep 14, 2023 at 04:28:26PM +0800, Junwang Zhao wrote:\n> > Add a v2 with some change to fix warnings about unused-parameter.\n> >\n> > I will add this to Commit Fest.\n>\n> This looks reasonable to me. I've marked the commitfest entry as\n> ready-for-committer. I will plan on committing it in a couple of days\n> unless John has additional feedback or would like to do the honors.\n\n(I've been offline for a few weeks, and have been catching up this week.)\n\nI agree it's reasonable, but there are a couple small loose ends I'd\nlike to see addressed.\n\n- strlcpy(hentry->name, name, sizeof(hentry->name));\n\nThis might do with a comment stating we already set the value, (we've\nseen in this thread that some other code does this), but I don't feel\nstrongly about it.\n\n do\n {\n- hash_search(uncommitted_enums, serialized++, HASH_ENTER, NULL);\n+ (void) hash_search(uncommitted_enums, serialized++, HASH_ENTER, NULL);\n } while (OidIsValid(*serialized));\n\nI still consider this an unrelated and unnecessary cosmetic change.\n\n- NotificationHash *hentry;\n bool found;\n\n- hentry = (NotificationHash *) hash_search(pendingNotifies->hashtab,\n- &oldn,\n- HASH_ENTER,\n- &found);\n+ (void) hash_search(pendingNotifies->hashtab,\n+ &oldn,\n+ HASH_ENTER,\n+ &found);\n Assert(!found);\n- hentry->event = oldn;\n\nI'd prefer just adding \"Assert(hentry->event == oldn);\" and declaring\nhentry PG_USED_FOR_ASSERTS_ONLY.\n\n\n",
"msg_date": "Wed, 25 Oct 2023 12:12:58 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [dynahash] do not refill the hashkey after hash_search"
},
{
"msg_contents": "John Naylor <[email protected]> writes:\n> I'd prefer just adding \"Assert(hentry->event == oldn);\" and declaring\n> hentry PG_USED_FOR_ASSERTS_ONLY.\n\nI'm not aware of any other places where we have Asserts checking\nthat hash_search() honored its contract. Why do we need one here?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Oct 2023 01:21:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [dynahash] do not refill the hashkey after hash_search"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 12:21 PM Tom Lane <[email protected]> wrote:\n>\n> John Naylor <[email protected]> writes:\n> > I'd prefer just adding \"Assert(hentry->event == oldn);\" and declaring\n> > hentry PG_USED_FOR_ASSERTS_ONLY.\n>\n> I'm not aware of any other places where we have Asserts checking\n> that hash_search() honored its contract. Why do we need one here?\n\n[removing old CC]\nThe author pointed out here that we're not consistent in this regard:\n\nhttps://www.postgresql.org/message-id/CAEG8a3KEO_Kdt2Y5hFNWMEX3DpCXi9jtZOJY-GFUEE9QLgF%2Bbw%40mail.gmail.com\n\n...but I didn't try seeing where the balance lay. We can certainly\njust remove redundant assignments.\n\n\n",
"msg_date": "Wed, 25 Oct 2023 12:48:52 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [dynahash] do not refill the hashkey after hash_search"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 12:48:52PM +0700, John Naylor wrote:\n> On Wed, Oct 25, 2023 at 12:21 PM Tom Lane <[email protected]> wrote:\n>>\n>> John Naylor <[email protected]> writes:\n>> > I'd prefer just adding \"Assert(hentry->event == oldn);\" and declaring\n>> > hentry PG_USED_FOR_ASSERTS_ONLY.\n>>\n>> I'm not aware of any other places where we have Asserts checking\n>> that hash_search() honored its contract. Why do we need one here?\n> \n> [removing old CC]\n> The author pointed out here that we're not consistent in this regard:\n> \n> https://www.postgresql.org/message-id/CAEG8a3KEO_Kdt2Y5hFNWMEX3DpCXi9jtZOJY-GFUEE9QLgF%2Bbw%40mail.gmail.com\n> \n> ...but I didn't try seeing where the balance lay. We can certainly\n> just remove redundant assignments.\n\nWhile it probably doesn't hurt anything, IMHO it's unnecessary to verify\nthat hash_search() works every time it is called. This behavior seems\nunlikely to change anytime soon, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 25 Oct 2023 09:50:00 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [dynahash] do not refill the hashkey after hash_search"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 3:28 PM Junwang Zhao <[email protected]> wrote:\n>\n> Add a v2 with some change to fix warnings about unused-parameter.\n>\n> I will add this to Commit Fest.\n\nPushed v2 after removing asserts, as well as the unnecessary cast that\nI complained about earlier.\n\nSome advice: I was added as a reviewer in CF without my knowledge. I\nsee people doing it sometimes, but I don't recommend that, since the\nreviewer field in CF implies the person volunteered to continue giving\nfeedback for patch updates.\n\n\n",
"msg_date": "Thu, 30 Nov 2023 15:51:39 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [dynahash] do not refill the hashkey after hash_search"
}
] |
[
{
"msg_contents": "Looking at [email protected] I noticed that we had a a few instances\nof filenames in userfacing log messages (ie not elog or DEBUGx etc) not being\nquoted, where the vast majority are quoted like \\\"%s\\\". Any reason not to\nquote them as per the attached to be consistent across all log messages?\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 13 Sep 2023 13:48:12 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Quoting filename in using facing log messages"
},
{
"msg_contents": "On 13.09.23 13:48, Daniel Gustafsson wrote:\n> Looking at [email protected] I noticed that we had a a few instances\n> of filenames in userfacing log messages (ie not elog or DEBUGx etc) not being\n> quoted, where the vast majority are quoted like \\\"%s\\\". Any reason not to\n> quote them as per the attached to be consistent across all log messages?\n\nSince WAL file names have a predictable format, there is less pressure \nto quote them to avoid ambiguities. But in general we should try to be \nconsistent, so your patch makes sense to me.\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 13:55:57 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quoting filename in using facing log messages"
},
{
"msg_contents": "> On 13 Sep 2023, at 13:55, Peter Eisentraut <[email protected]> wrote:\n> \n> On 13.09.23 13:48, Daniel Gustafsson wrote:\n>> Looking at [email protected] I noticed that we had a a few instances\n>> of filenames in userfacing log messages (ie not elog or DEBUGx etc) not being\n>> quoted, where the vast majority are quoted like \\\"%s\\\". Any reason not to\n>> quote them as per the attached to be consistent across all log messages?\n> \n> Since WAL file names have a predictable format, there is less pressure to quote them to avoid ambiguities. But in general we should try to be consistent\n\nCorrect, this is all for consistency.\n\n> so your patch makes sense to me.\n\nThanks!\n\nIt might be worth concatenating the errmsg() while there since we typically\ndon't linebreak errmsg strings anymore for greppability:\n\n-\t errmsg(\"could not write to log file %s \"\n-\t\t\"at offset %u, length %zu: %m\",\n+\t errmsg(\"could not write to log file \\\"%s\\\" at offset %u, length %zu: %m\",\n\nI don't have strong feelings wrt that, just have a vague memory of \"concatenate\nwhen touching\" as an informal guideline.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 14:02:47 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Quoting filename in using facing log messages"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 02:02:47PM +0200, Daniel Gustafsson wrote:\n> It might be worth concatenating the errmsg() while there since we typically\n> don't linebreak errmsg strings anymore for greppability:\n> \n> -\t errmsg(\"could not write to log file %s \"\n> -\t\t\"at offset %u, length %zu: %m\",\n> +\t errmsg(\"could not write to log file \\\"%s\\\" at offset %u, length %zu: %m\",\n> \n> I don't have strong feelings wrt that, just have a vague memory of \"concatenate\n> when touching\" as an informal guideline.\n\nBecause these are slightly easier to grep when looking for a given\npattern in the tree.\n\n(I'm OK with your patch as well, FWIW.)\n--\nMichael",
"msg_date": "Thu, 14 Sep 2023 16:56:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Quoting filename in using facing log messages"
},
{
"msg_contents": "> On 14 Sep 2023, at 09:56, Michael Paquier <[email protected]> wrote:\n\n> (I'm OK with your patch as well, FWIW.)\n\nThanks for looking, pushed.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 14 Sep 2023 11:23:09 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Quoting filename in using facing log messages"
}
] |
[
{
"msg_contents": "Hello!\n\nThere is a table with a unique index on it and we have a query that\nsearching DISTINCT values on this table on columns of unique index. Example:\n\n\ncreate table a (n int);\ninsert into a (n) select x from generate_series(1, 140000) as g(x);\ncreate unique index on a (n);\nexplain select distinct n from a;\n QUERY PLAN\n\n------------------------------------------------------------------------------------\n Unique (cost=0.42..6478.42 rows=140000 width=4)\n -> Index Only Scan using a_n_idx on a (cost=0.42..6128.42 rows=140000\nwidth=4)\n(2 rows)\n\n\nWe can see that Unique node is redundant for this case. So I implemented a\nsimple patch that removes Unique node from the plan.\nAfter patch:\n\n\nexplain select distinct n from a;\n QUERY PLAN\n---------------------------------------------------------\n Seq Scan on a (cost=0.00..2020.00 rows=140000 width=4)\n(1 row)\n\n\nThe patch is rather simple and doesn't consider queries with joins. The\ncriteria when Unique node is should be removed is a case when a set of Vars\nin DISTINCT clause contains unique index columns from the same table.\nAnother example:\nCREATE TABLE a (n int, m int);\nCRETE UNIQUE INDEX ON a (n);\nSELECT DISTINCT (n,m) FROM a;\nThe Unique node should be deleted because n is contained in (n,m).\n\n\nThe patch doesn't consider these cases:\n 1. DISTINCT ON [EXPR]\n Because this case can need grouping.\n 2. Subqueries.\n Because this case can need grouping:\n CREATE TABLE a (n int);\n CREA UNIQUE INDEX ON a (n);\n SELECT DISTINCT g FROM (SELECT * FROM a) as g;\n 3. 
Joins, because it demands complication of code.\n Example:\n SELECT DISTINCT a.n1 JOIN b where a.n1 = b.n1;\n where a.n1 and b.n1 should be unique indexes and join qual should be\non this index columns.\n or\n a have a unique index on n1 and b is \"unique for a\" on join qual.\n\n\nI am wondering if there are opportunities for further development of this\npatch, in particular for JOIN cases.\nFor several levels of JOINs we should understand which set columns is\nunique for the every joinrel in query. In general terms I identified two\ncases when joinrel \"saves\" unique index from table: when tables are joined\nby unique index columns and when one table has unique index and it is\n\"unique_for\" (has one common tuple) another table.\n\n\nRegards,\nDamir Belyalov\nPostgres Professional",
"msg_date": "Wed, 13 Sep 2023 16:22:00 +0300",
"msg_from": "Damir Belyalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Redundant Unique plan node for table with a unique index"
},
{
"msg_contents": "> On 13 Sep 2023, at 15:22, Damir Belyalov <[email protected]> wrote:\n\n> There is a table with a unique index on it and we have a query that searching DISTINCT values on this table on columns of unique index.\n\n> We can see that Unique node is redundant for this case. So I implemented a simple patch that removes Unique node from the plan.\n\nIs this query pattern common enough to warrant spending time on in the planner\n(are there perhaps ORMs that generate such)? Have you measured the overhead of\nthis?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 15:28:18 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Redundant Unique plan node for table with a unique index"
},
{
"msg_contents": "On Thu, 14 Sept 2023 at 02:28, Damir Belyalov <[email protected]> wrote:\n> create table a (n int);\n> insert into a (n) select x from generate_series(1, 140000) as g(x);\n> create unique index on a (n);\n> explain select distinct n from a;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------\n> Unique (cost=0.42..6478.42 rows=140000 width=4)\n> -> Index Only Scan using a_n_idx on a (cost=0.42..6128.42 rows=140000 width=4)\n> (2 rows)\n>\n>\n> We can see that Unique node is redundant for this case. So I implemented a simple patch that removes Unique node from the plan.\n\nI don't think this is a good way to do this. The method you're using\nonly supports this optimisation when querying a table directly. If\nthere were subqueries, joins, etc then it wouldn't work as there are\nno unique indexes. You should probably have a look at [1] to see\nfurther details of an alternative method without the said limitations.\n\nDavid\n\n[1] https://postgr.es/m/flat/CAKU4AWqZvSyxroHkbpiHSCEAY2C41dG7VWs%3Dc188KKznSK_2Zg%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 14 Sep 2023 11:39:05 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Redundant Unique plan node for table with a unique index"
},
{
"msg_contents": ">\n>\n> I don't think this is a good way to do this. The method you're using\n> only supports this optimisation when querying a table directly. If\n> there were subqueries, joins, etc then it wouldn't work as there are\n> no unique indexes. You should probably have a look at [1] to see\n> further details of an alternative method without the said limitations.\n>\n> David\n>\n> [1]\n> https://postgr.es/m/flat/CAKU4AWqZvSyxroHkbpiHSCEAY2C41dG7VWs%3Dc188KKznSK_2Zg%40mail.gmail.com\n>\n>\nThe nullable tracking blocker probably has been removed by varnullingrels\nso I will start working on UniqueKey stuff very soon, thank you David\nfor remember of this feature!\n\n-- \nBest Regards\nAndy Fan\n\n\nI don't think this is a good way to do this. The method you're using\nonly supports this optimisation when querying a table directly. If\nthere were subqueries, joins, etc then it wouldn't work as there are\nno unique indexes. You should probably have a look at [1] to see\nfurther details of an alternative method without the said limitations.\n\nDavid\n\n[1] https://postgr.es/m/flat/CAKU4AWqZvSyxroHkbpiHSCEAY2C41dG7VWs%3Dc188KKznSK_2Zg%40mail.gmail.comThe nullable tracking blocker probably has been removed by varnullingrelsso I will start working on UniqueKey stuff very soon, thank you Davidfor remember of this feature!-- Best RegardsAndy Fan",
"msg_date": "Thu, 14 Sep 2023 09:17:58 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Redundant Unique plan node for table with a unique index"
},
{
"msg_contents": "Thank you for feedback and thread [1].\n\nRegards,\nDamir Belyalov\nPostgres Professional\n\nThank you for feedback and thread [1].Regards,Damir BelyalovPostgres Professional",
"msg_date": "Thu, 14 Sep 2023 11:44:08 +0300",
"msg_from": "Damir Belyalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Redundant Unique plan node for table with a unique index"
}
] |
[
{
"msg_contents": "Hi All,\n\nPlease find a small patch to improve code readability by modifying\nvariable name to reflect the logic involved - finding diff between end\nand start time of WAL sync.\n\n--\nThanks and Regards,\nKrishnakumar (KK)\n[Microsoft]",
"msg_date": "Wed, 13 Sep 2023 23:28:44 -0700",
"msg_from": "Krishnakumar R <[email protected]>",
"msg_from_op": true,
"msg_subject": "Small patch modifying variable name to reflect the logic involved"
},
{
"msg_contents": "> On 14 Sep 2023, at 08:28, Krishnakumar R <[email protected]> wrote:\n\n> Please find a small patch to improve code readability by modifying\n> variable name to reflect the logic involved - finding diff between end\n> and start time of WAL sync.\n\n-\tINSTR_TIME_ACCUM_DIFF(PendingWalStats.wal_sync_time, duration, start);\n+\tINSTR_TIME_SET_CURRENT(end);\n+\tINSTR_TIME_ACCUM_DIFF(PendingWalStats.wal_sync_time, end, start);\n\nAgreed, the duration is the result of the INSTR_TIME_ACCUM_DIFF calculation,\nnot what's stored in the instr_time variable.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 14 Sep 2023 11:30:54 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Small patch modifying variable name to reflect the logic involved"
},
{
"msg_contents": "> On 14 Sep 2023, at 11:30, Daniel Gustafsson <[email protected]> wrote:\n> \n>> On 14 Sep 2023, at 08:28, Krishnakumar R <[email protected]> wrote:\n> \n>> Please find a small patch to improve code readability by modifying\n>> variable name to reflect the logic involved - finding diff between end\n>> and start time of WAL sync.\n> \n> -\tINSTR_TIME_ACCUM_DIFF(PendingWalStats.wal_sync_time, duration, start);\n> +\tINSTR_TIME_SET_CURRENT(end);\n> +\tINSTR_TIME_ACCUM_DIFF(PendingWalStats.wal_sync_time, end, start);\n> \n> Agreed, the duration is the result of the INSTR_TIME_ACCUM_DIFF calculation,\n> not what's stored in the instr_time variable.\n\nAnd done, with a small fixup to handle another occurrence in the same file.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 15 Sep 2023 19:50:51 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Small patch modifying variable name to reflect the logic involved"
}
] |
[
{
"msg_contents": "Hi All,\n\nPlease find a small patch to improve code readability by fixing up the\nvariable name to indicate the WAL record reservation status. The\ninsertion is done later in the code based on the reservation status.\n\n--\nThanks and Regards,\nKrishnakumar (KK).\n[Microsoft]",
"msg_date": "Wed, 13 Sep 2023 23:48:30 -0700",
"msg_from": "Krishnakumar R <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fixup the variable name to indicate the WAL record reservation\n status."
},
{
"msg_contents": "At Wed, 13 Sep 2023 23:48:30 -0700, Krishnakumar R <[email protected]> wrote in \n> Please find a small patch to improve code readability by fixing up the\n> variable name to indicate the WAL record reservation status. The\n> insertion is done later in the code based on the reservation status.\n\nIMHO... Although \"reserved\" might be pertinent at the point of\nassignment, its applicability promptly diminishes in the subsequent\nuses. When the variable is first assigned, we know the record will\ninsert some bytes and advance the LSN. In other words, the variable\nsuggests \"to be inserted\", and promptly thereafter, the variable\nindicates that the record \"has been inserted\". Given this, \"inserted\"\nseems to be a better fit than \"reserved\".\n\nIn short, I would keep the variable name as it is.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 15 Sep 2023 10:12:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fixup the variable name to indicate the WAL record reservation\n status."
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently it is complained that wal_level changes require an instance\nrestart, I'm not familiar with this stuff so far and I didn't get any good\ninformation from searching the email archive. So I want to gather\nsome feedbacks from experts to see if it is possible and if not, why\nit would be the key blocker for this. Basically I agree that changing\nthe wal_level online will be a good experience for users.\n\n-- \nBest Regards\nAndy Fan\n\nHi, Currently it is complained that wal_level changes require an instancerestart, I'm not familiar with this stuff so far and I didn't get any goodinformation from searching the email archive. So I want to gather some feedbacks from experts to see if it is possible and if not, whyit would be the key blocker for this. Basically I agree that changing the wal_level online will be a good experience for users. -- Best RegardsAndy Fan",
"msg_date": "Thu, 14 Sep 2023 18:05:03 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is it possible to change wal_level online"
},
{
"msg_contents": "On Thu, Sep 14, 2023, at 7:05 AM, Andy Fan wrote:\n> Currently it is complained that wal_level changes require an instance\n> restart, I'm not familiar with this stuff so far and I didn't get any good\n> information from searching the email archive. So I want to gather \n> some feedbacks from experts to see if it is possible and if not, why\n> it would be the key blocker for this. Basically I agree that changing \n> the wal_level online will be a good experience for users. \n> \n\nThis topic was already discussed. See this thread [1] that was requesting to\nchange the wal_level default value. There might be other threads but I didn't\ntry hard to find them.\n\n\n[1] https://www.postgresql.org/message-id/20200608213215.mgk3cctlzvfuaqm6%40alap3.anarazel.de\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Thu, Sep 14, 2023, at 7:05 AM, Andy Fan wrote:Currently it is complained that wal_level changes require an instancerestart, I'm not familiar with this stuff so far and I didn't get any goodinformation from searching the email archive. So I want to gather some feedbacks from experts to see if it is possible and if not, whyit would be the key blocker for this. Basically I agree that changing the wal_level online will be a good experience for users. This topic was already discussed. See this thread [1] that was requesting tochange the wal_level default value. There might be other threads but I didn'ttry hard to find them.[1] https://www.postgresql.org/message-id/20200608213215.mgk3cctlzvfuaqm6%40alap3.anarazel.de--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 14 Sep 2023 10:21:59 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is it possible to change wal_level online"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 9:22 PM Euler Taveira <[email protected]> wrote:\n\n> On Thu, Sep 14, 2023, at 7:05 AM, Andy Fan wrote:\n>\n> Currently it is complained that wal_level changes require an instance\n> restart, I'm not familiar with this stuff so far and I didn't get any good\n> information from searching the email archive. So I want to gather\n> some feedbacks from experts to see if it is possible and if not, why\n> it would be the key blocker for this. Basically I agree that changing\n> the wal_level online will be a good experience for users.\n>\n>\n> This topic was already discussed. See this thread [1] that was requesting\n> to\n> change the wal_level default value. There might be other threads but I\n> didn't\n> try hard to find them.\n>\n>\n> [1]\n> https://www.postgresql.org/message-id/20200608213215.mgk3cctlzvfuaqm6%40alap3.anarazel.de\n>\n\nThank you Euler, this one is already the best one I ever found.\n\n\n-- \nBest Regards\nAndy Fan\n\nOn Thu, Sep 14, 2023 at 9:22 PM Euler Taveira <[email protected]> wrote:On Thu, Sep 14, 2023, at 7:05 AM, Andy Fan wrote:Currently it is complained that wal_level changes require an instancerestart, I'm not familiar with this stuff so far and I didn't get any goodinformation from searching the email archive. So I want to gather some feedbacks from experts to see if it is possible and if not, whyit would be the key blocker for this. Basically I agree that changing the wal_level online will be a good experience for users. This topic was already discussed. See this thread [1] that was requesting tochange the wal_level default value. There might be other threads but I didn'ttry hard to find them.[1] https://www.postgresql.org/message-id/20200608213215.mgk3cctlzvfuaqm6%40alap3.anarazel.deThank you Euler, this one is already the best one I ever found. -- Best RegardsAndy Fan",
"msg_date": "Fri, 15 Sep 2023 07:28:26 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is it possible to change wal_level online"
},
{
"msg_contents": "Hi, \n\nOn September 14, 2023 6:21:59 AM PDT, Euler Taveira <[email protected]> wrote:\n>On Thu, Sep 14, 2023, at 7:05 AM, Andy Fan wrote:\n>> Currently it is complained that wal_level changes require an instance\n>> restart, I'm not familiar with this stuff so far and I didn't get any good\n>> information from searching the email archive. So I want to gather \n>> some feedbacks from experts to see if it is possible and if not, why\n>> it would be the key blocker for this. Basically I agree that changing \n>> the wal_level online will be a good experience for users. \n>> \n>\n>This topic was already discussed. See this thread [1] that was requesting to\n>change the wal_level default value. There might be other threads but I didn't\n>try hard to find them.\n>\n>\n>[1] https://www.postgresql.org/message-id/20200608213215.mgk3cctlzvfuaqm6%40alap3.anarazel.de\n\nI think it's gotten a bit easier since then, because we now have global barriers, to implement the waiting that's mentioned in the email.\n\nPossibly we should do the switch to logical dynamically, without a dedicated wal_level. Whenever a logical slot exists, automatically increase the Wal level, whenever the last slot is dropped, lower it again. Plus some waiting to ensure every backend has knows about the new value. \n\nRegards,\n\nAndres \n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 14 Sep 2023 18:28:56 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is it possible to change wal_level online"
}
] |
[
{
"msg_contents": "Hi there,\n\nI have 1 trivial fix, 1 bug fix, and 1 suggestion about vacuumdb.\n\nFirst, I noticed that the help message of `vacuumdb` is a bit incorrect.\n\n`vacuumdb -?` displays the following message\n```\n...\n -n, --schema=PATTERN vacuum tables in the specified schema(s)\nonly\n -N, --exclude-schema=PATTERN do not vacuum tables in the specified\nschema(s)\n\n...\n```\nPATTERN should be changed to SCHEMA because -n and -N options don't support\npattern matching for schema names. The attached patch 0001 fixes this.\n\nSecond, when we use multiple -N options, vacuumdb runs incorrectly as shown\nbelow.\n```\n$ psql\n=# CREATE SCHEMA s1;\n=# CREATE SCHEMA s2;\n=# CREATE SCHEMA s3;\n=# CREATE TABLE s1.t(i int);\n=# CREATE TABLE s2.t(i int);\n=# CREATE TABLE s3.t(i int);\n=# ALTER SYSTEM SET log_statement TO 'all';\n=# SELECT pg_reload_conf();\n=# \\q\n$ vacuumdb -N s1 -N s2\n```\nWe expect that tables in schemas s1 and s2 should not be vacuumed, while\nthe\nothers should be. However, logfile says like this.\n```\nLOG: statement: VACUUM (SKIP_DATABASE_STATS) pg_catalog.pg_proc;\nLOG: statement: VACUUM (SKIP_DATABASE_STATS) pg_catalog.pg_proc;\n\n...\n\nLOG: statement: VACUUM (SKIP_DATABASE_STATS) s2.t;\nLOG: statement: VACUUM (SKIP_DATABASE_STATS) s1.t;\nLOG: statement: VACUUM (ONLY_DATABASE_STATS);\n```\nEven specified by -N, s1.t and s2.t are vacuumed, and also the others are\nvacuumed\ntwice. The attached patch 0002 fixes this.\n\nThird, for the description of the -N option, I wonder if \"vacuum all tables\nexcept\nin the specified schema(s)\" might be clearer. The current one says nothing\nabout\ntables not in the specified schema.\n\nThoughts?\n\nMasaki Kuwamura",
"msg_date": "Thu, 14 Sep 2023 20:21:51 +0900",
"msg_from": "Kuwamura Masaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "> On 14 Sep 2023, at 13:21, Kuwamura Masaki <[email protected]> wrote:\n\n> PATTERN should be changed to SCHEMA because -n and -N options don't support \n> pattern matching for schema names. The attached patch 0001 fixes this.\n\nTrue, there is no pattern matching performed. I wonder if it's worth lifting\nthe pattern matching from pg_dump into common code such that tools like this\ncan use it?\n\n> Second, when we use multiple -N options, vacuumdb runs incorrectly as shown below.\n> ...\n\n> Even specified by -N, s1.t and s2.t are vacuumed, and also the others are vacuumed \n> twice. The attached patch 0002 fixes this.\n\nI can reproduce that, a single -N works but adding multiple -N's makes none of\nthem excluded. The current coding does this:\n\n if (objfilter & OBJFILTER_SCHEMA_EXCLUDE)\n appendPQExpBufferStr(&catalog_query, \"OPERATOR(pg_catalog.!=) \");\n\nIf the join is instead made to exclude the oids in listed_objects with a left\njoin and a clause on object_oid being null I can make the current query work\nwithout adding a second clause. I don't have strong feelings wrt if we should\nadd a NOT IN () or fix this JOIN, but we shouldn't have a faulty join together\nwith the fix. With your patch the existing join is left in place, let's fix that.\n\n> Third, for the description of the -N option, I wonder if \"vacuum all tables except \n> in the specified schema(s)\" might be clearer. The current one says nothing about \n> tables not in the specified schema.\n\nMaybe, but the point of vacuumdb is to analyze a database so I'm not sure who\nwould expect anything else than vacuuming everything but the excluded schema\nwhen specifying -N. What else could \"vacuumdb -N foo\" be interpreted to do\nthat can be confusing?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 14 Sep 2023 14:06:51 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 02:06:51PM +0200, Daniel Gustafsson wrote:\n>> On 14 Sep 2023, at 13:21, Kuwamura Masaki <[email protected]> wrote:\n> \n>> PATTERN should be changed to SCHEMA because -n and -N options don't support \n>> pattern matching for schema names. The attached patch 0001 fixes this.\n> \n> True, there is no pattern matching performed. I wonder if it's worth lifting\n> the pattern matching from pg_dump into common code such that tools like this\n> can use it?\n\nI agree that this should be changed to SCHEMA. It might be tough to add\npattern matching with the current catalog query, and I don't know whether\nthere is demand for such a feature, but I wouldn't discourage someone from\ntrying.\n\n>> Second, when we use multiple -N options, vacuumdb runs incorrectly as shown below.\n>> ...\n> \n>> Even specified by -N, s1.t and s2.t are vacuumed, and also the others are vacuumed \n>> twice. The attached patch 0002 fixes this.\n> \n> I can reproduce that, a single -N works but adding multiple -N's makes none of\n> them excluded. The current coding does this:\n> \n> if (objfilter & OBJFILTER_SCHEMA_EXCLUDE)\n> appendPQExpBufferStr(&catalog_query, \"OPERATOR(pg_catalog.!=) \");\n> \n> If the join is instead made to exclude the oids in listed_objects with a left\n> join and a clause on object_oid being null I can make the current query work\n> without adding a second clause. I don't have strong feelings wrt if we should\n> add a NOT IN () or fix this JOIN, but we shouldn't have a faulty join together\n> with the fix. With your patch the existing join is left in place, let's fix that.\n\nYeah, I think we can fix the JOIN as you suggest. I quickly put a patch\ntogether to demonstrate. We should probably add some tests...\n\n>> Third, for the description of the -N option, I wonder if \"vacuum all tables except \n>> in the specified schema(s)\" might be clearer. The current one says nothing about \n>> tables not in the specified schema.\n> \n> Maybe, but the point of vacuumdb is to analyze a database so I'm not sure who\n> would expect anything else than vacuuming everything but the excluded schema\n> when specifying -N. What else could \"vacuumdb -N foo\" be interpreted to do\n> that can be confusing?\n\nI agree with Daniel on this one.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 14 Sep 2023 07:57:57 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "At Thu, 14 Sep 2023 07:57:57 -0700, Nathan Bossart <[email protected]> wrote in \n> On Thu, Sep 14, 2023 at 02:06:51PM +0200, Daniel Gustafsson wrote:\n> > I can reproduce that, a single -N works but adding multiple -N's makes none of\n> > them excluded. The current coding does this:\n> > \n> > if (objfilter & OBJFILTER_SCHEMA_EXCLUDE)\n> > appendPQExpBufferStr(&catalog_query, \"OPERATOR(pg_catalog.!=) \");\n> > \n> > If the join is instead made to exclude the oids in listed_objects with a left\n> > join and a clause on object_oid being null I can make the current query work\n> > without adding a second clause. I don't have strong feelings wrt if we should\n> > add a NOT IN () or fix this JOIN, but we shouldn't have a faulty join together\n> > with the fix. With your patch the existing join is left in place, let's fix that.\n> \n> Yeah, I think we can fix the JOIN as you suggest. I quickly put a patch\n> together to demonstrate. We should probably add some tests...\n\nIt seems to work fine. However, if we're aiming for consistent\nspacing, the \"IS NULL\" (two spaces in between) might be an concern.\n\n> >> Third, for the description of the -N option, I wonder if \"vacuum all tables except \n> >> in the specified schema(s)\" might be clearer. The current one says nothing about \n> >> tables not in the specified schema.\n> > \n> > Maybe, but the point of vacuumdb is to analyze a database so I'm not sure who\n> > would expect anything else than vacuuming everything but the excluded schema\n> > when specifying -N. What else could \"vacuumdb -N foo\" be interpreted to do\n> > that can be confusing?\n> \n> I agree with Daniel on this one.\n\n+1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 15 Sep 2023 11:39:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "> On 15 Sep 2023, at 04:39, Kyotaro Horiguchi <[email protected]> wrote:\n> At Thu, 14 Sep 2023 07:57:57 -0700, Nathan Bossart <[email protected]> wrote in \n\n>> Yeah, I think we can fix the JOIN as you suggest. I quickly put a patch\n>> together to demonstrate. \n\nLooks good from a quick skim.\n\n>> We should probably add some tests...\n\nAgreed.\n\n> It seems to work fine. However, if we're aiming for consistent\n> spacing, the \"IS NULL\" (two spaces in between) might be an concern.\n\nI don't think that's a problem. I would rather have readable C code and two\nspaces in the generated SQL than contorting the C code to produce less\nwhitespace in a query few will read in its generated form.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 15 Sep 2023 10:13:10 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 10:13:10AM +0200, Daniel Gustafsson wrote:\n>> On 15 Sep 2023, at 04:39, Kyotaro Horiguchi <[email protected]> wrote:\n>> It seems to work fine. However, if we're aiming for consistent\n>> spacing, the \"IS NULL\" (two spaces in between) might be an concern.\n> \n> I don't think that's a problem. I would rather have readable C code and two\n> spaces in the generated SQL than contorting the C code to produce less\n> whitespace in a query few will read in its generated form.\n\nI think we could pretty easily avoid the extra space and keep the C code\nrelatively readable. These sorts of things bug me, too (see 2af3336).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 15 Sep 2023 07:42:10 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "Thank you for all your reviews!\n\n>>> PATTERN should be changed to SCHEMA because -n and -N options don't\nsupport\n>>> pattern matching for schema names. The attached patch 0001 fixes this.\n>>\n>> True, there is no pattern matching performed. I wonder if it's worth\nlifting\n>> the pattern matching from pg_dump into common code such that tools like\nthis\n>> can use it?\n>\n> I agree that this should be changed to SCHEMA. It might be tough to add\n> pattern matching with the current catalog query, and I don't know whether\n> there is demand for such a feature, but I wouldn't discourage someone from\n> trying.\n\nI think that supporting pattern matching is quite nice.\nBut it will be not only tough but also a breaking change, I wonder.\nSo I guess this change should be commited either way.\n\n>>> Yeah, I think we can fix the JOIN as you suggest. I quickly put a patch\n>>> together to demonstrate.\n>\n> Looks good from a quick skim.\n\nI do agree with this updates. Thank you!\n\n>> We should probably add some tests...\n>\n> Agreed.\n\nThe attached patch includes new tests for this bug.\nAlso, I fixed the current test for -N option seems to be incorrect.\n\n>>> It seems to work fine. However, if we're aiming for consistent\n>>> spacing, the \"IS NULL\" (two spaces in between) might be an concern.\n>>\n>> I don't think that's a problem. I would rather have readable C code and\ntwo\n>> spaces in the generated SQL than contorting the C code to produce less\n>> whitespace in a query few will read in its generated form.\n>\n> I think we could pretty easily avoid the extra space and keep the C code\n> relatively readable. These sorts of things bug me, too (see 2af3336).\n\nThough I don't think it affects readability, I'm neutral about this.\n\n>> >> Third, for the description of the -N option, I wonder if \"vacuum all\ntables except\n>> >> in the specified schema(s)\" might be clearer. The current one says\nnothing about\n>> >> tables not in the specified schema.\n>> >\n>> > Maybe, but the point of vacuumdb is to analyze a database so I'm not\nsure who\n>> > would expect anything else than vacuuming everything but the excluded\nschema\n>> > when specifying -N. What else could \"vacuumdb -N foo\" be interpreted\nto do\n>> > that can be confusing?\n>>\n>> I agree with Daniel on this one.\n>\n> +1.\n\nThat make sense. I retract my suggestion.",
"msg_date": "Wed, 20 Sep 2023 18:46:32 +0900",
"msg_from": "Kuwamura Masaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "> On 20 Sep 2023, at 11:46, Kuwamura Masaki <[email protected]> wrote:\n\n> I think that supporting pattern matching is quite nice.\n> But it will be not only tough but also a breaking change, I wonder.\n> So I guess this change should be commited either way.\n\nI agree. Supporting pattern matching should, if anyone is interested in\ntrying, be done separately in its own thread, no need to move the goalposts\nhere. Sorry if I made it sound like so upthread.\n\n> The attached patch includes new tests for this bug.\n> Also, I fixed the current test for -N option seems to be incorrect.\n\nWhen sending an update, please include the previous patch as well with your new\ntests as a 0002 patch in a patchset. The CFBot can only apply and build/test\npatches when the entire patchset is attached to the email. The below\ntestresults indicate that the patch failed the new tests (which in a way is\ngood since without the fix the tests *should* fail), since the fix patch was\nnot applied:\n\n http://cfbot.cputube.org/masaki-kuwamura.html\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 20 Sep 2023 13:17:17 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": ">\n> I agree. Supporting pattern matching should, if anyone is interested in\n> trying, be done separately in its own thread, no need to move the goalposts\n> here. Sorry if I made it sound like so upthread.\n>\n I got it.\n\n\n> When sending an update, please include the previous patch as well with\n> your new\n> tests as a 0002 patch in a patchset. The CFBot can only apply and\n> build/test\n> patches when the entire patchset is attached to the email. The below\n> testresults indicate that the patch failed the new tests (which in a way is\n> good since without the fix the tests *should* fail), since the fix patch\n> was\n> not applied:\n>\n> http://cfbot.cputube.org/masaki-kuwamura.html\n>\nI'm sorry, I didn't know that. I attached both the test and fix patch to\nthis mail.\n(The fix patch is clearly Nathan-san's though)\nIf I'm still in a wrong way, please let me know.\n\nMasaki Kuwamura",
"msg_date": "Thu, 21 Sep 2023 10:53:09 +0900",
"msg_from": "Kuwamura Masaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "> On 21 Sep 2023, at 03:53, Kuwamura Masaki <[email protected]> wrote:\n\n> When sending an update, please include the previous patch as well with your new\n> tests as a 0002 patch in a patchset. The CFBot can only apply and build/test\n> patches when the entire patchset is attached to the email. The below\n> testresults indicate that the patch failed the new tests (which in a way is\n> good since without the fix the tests *should* fail), since the fix patch was\n> not applied:\n> \n> http://cfbot.cputube.org/masaki-kuwamura.html <http://cfbot.cputube.org/masaki-kuwamura.html>\n> I'm sorry, I didn't know that. I attached both the test and fix patch to this mail.\n\nNo worries at all. If you look at the page now you will see all green\ncheckmarks indicating that the patch was tested in CI. So now we know that\nyour tests fail without the fix and work with the fix applied, so all is well.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 21 Sep 2023 09:57:12 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": ">\n> No worries at all. If you look at the page now you will see all green\n> checkmarks indicating that the patch was tested in CI. So now we know that\n> your tests fail without the fix and work with the fix applied, so all is\n> well.\n>\n\nThank you for your kind words!\n\nAnd it seems to me that all concerns are resolved.\nI'll change the patch status to Ready for Committer.\nIf you have or find any flaw, let me know that.\n\nBest Regards,\n\nMasaki Kuwamura",
"msg_date": "Fri, 22 Sep 2023 18:08:18 +0900",
"msg_from": "Kuwamura Masaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "> On 22 Sep 2023, at 11:08, Kuwamura Masaki <[email protected]> wrote:\n> \n> No worries at all. If you look at the page now you will see all green\n> checkmarks indicating that the patch was tested in CI. So now we know that\n> your tests fail without the fix and work with the fix applied, so all is well.\n> \n> Thank you for your kind words! \n> \n> And it seems to me that all concerns are resolved.\n> I'll change the patch status to Ready for Committer.\n> If you have or find any flaw, let me know that.\n\nI had a look at this and tweaked the testcase a bit to make the diff smaller,\nas well as removed the (in some cases) superfluous space in the generated SQL\nquery mentioned upthread. The attached two patches is what I propose we commit\nto fix this, with a backpatch to v16 where it was introduced.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 22 Sep 2023 14:58:20 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 02:58:20PM +0200, Daniel Gustafsson wrote:\n> I had a look at this and tweaked the testcase a bit to make the diff smaller,\n> as well as removed the (in some cases) superfluous space in the generated SQL\n> query mentioned upthread. The attached two patches is what I propose we commit\n> to fix this, with a backpatch to v16 where it was introduced.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 23 Sep 2023 12:29:29 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "LGTM too!\n\n>> a bit to make the diff smaller,\nI couldn't think from that perspective. Thanks for your update, Daniel-san.\n\nMasaki Kuwamura",
"msg_date": "Sun, 24 Sep 2023 17:22:23 +0900",
"msg_from": "Kuwamura Masaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": "> On 24 Sep 2023, at 10:22, Kuwamura Masaki <[email protected]> wrote:\n> \n> LGTM too!\n\nI've applied this down to v16 now, thanks for the submission!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 25 Sep 2023 16:18:20 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
},
{
"msg_contents": ">\n> I've applied this down to v16 now, thanks for the submission!\n>\n\nThanks for pushing!\n\nMasaki Kuwamura",
"msg_date": "Tue, 26 Sep 2023 10:22:05 +0900",
"msg_from": "Kuwamura Masaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bug fix and documentation improvement about vacuumdb"
}
] |
[
{
"msg_contents": "A normal LRU cache or implement it reference to a research paper?\n\n\n",
"msg_date": "Thu, 14 Sep 2023 19:47:14 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "What's the eviction algorithm of newest pg version?"
},
{
"msg_contents": "On Fri, 15 Sept 2023 at 00:53, jacktby jacktby <[email protected]> wrote:\n> A normal LRU cache or implement it reference to a research paper?\n\nsrc/backend/storage/buffer/README\n\nDavid\n\n\n",
"msg_date": "Fri, 15 Sep 2023 00:55:36 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's the eviction algorithm of newest pg version?"
}
] |
[
{
"msg_contents": "In buffer README, I see “Pins may not be held across transaction boundaries, however.” I think for different transactions, they can pin the same buffer page, why not? For concurrent read transactions, they could read the one and the same buffer page.\n\n",
"msg_date": "Fri, 15 Sep 2023 00:05:28 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Buffer ReadMe Confuse"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 4:08 AM jacktby jacktby <[email protected]> wrote:\n\n> In buffer README, I see “Pins may not be held across transaction\n> boundaries, however.” I think for different transactions, they can pin the\n> same buffer page, why not? For concurrent read transactions, they could\n> read the one and the same buffer page.\n>\n>\nYou are right that different transactions can pin the same buffer,\nbut that does not conflict with what the README says, which is talking\nabout once the transaction is completed, all the Pins are removed.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 15 Sep 2023 07:26:47 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Buffer ReadMe Confuse"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen a table name is specified as the first argument of \\ev \nmeta-command, it reports the error message, the prompt string becomes \n\"-#\" and then the following valid query fails because the psql's query \nbuffer contains the garbage string generated by failure of \\ev. Please \nsee the following example.\n=# \\ev t\n\"public.t\" is not a view\n\n-# SELECT * FROM t;\nERROR: syntax error at or near \"public\" at character 1\nSTATEMENT: public.t AS\n\n\tSELECT * FROM t;\nI think this is a bug in psql's \\ev meta-command. Even when \\ev fails, \nit should not leave the garbage string in psql's query buffer and the \nfollowing query should be completed successfully.\nThis problem can be resolved by resetting the query buffer on error. You \ncan see the attached source code. After that, it will result in output \nlike the following:\n=# \\ev t\n\"public.t\" is not a view\n=# SELECT * FROM t;\n i\n---\n 1\n 2\n(2 rows)\n\nRyoga Yoshida",
"msg_date": "Fri, 15 Sep 2023 11:37:46 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug fix for psql's meta-command \\ev"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 11:37:46AM +0900, Ryoga Yoshida wrote:\n> I think this is a bug in psql's \\ev meta-command. Even when \\ev fails, it\n> should not leave the garbage string in psql's query buffer and the following\n> query should be completed successfully.\n\nRight. Good catch. Will look at that a bit more to see if the resets\nare correctly placed, particularly in light of \\ef.\n--\nMichael",
"msg_date": "Fri, 15 Sep 2023 14:26:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix for psql's meta-command \\ev"
},
{
"msg_contents": "At Fri, 15 Sep 2023 11:37:46 +0900, Ryoga Yoshida <[email protected]> wrote in \n> I think this is a bug in psql's \\ev meta-command. Even when \\ev fails,\n> it should not leave the garbage string in psql's query buffer and the\n> following query should be completed successfully.\n\nGood catch! I agree to this.\n\n> This problem can be resolved by resetting the query buffer on\n> error. You can see the attached source code. After that, it will\n> result in output like the following:\n\nWhile exec_command_ef_ev() currently preserves the existing content of\nthe query buffer in case of certain failures, This behavior doesn't\nseem to be particularly significant, especially given that both \\ef\nand \\ev are intended to overwrite the query buffer on success.\n\nWe have the option to fix get_create_object_cmd() and ensure\nexec_command_ef_ev() retains the existing content of the query buffer\non failure. However, this approach seems like overly cumbersome. So\nI'm +1 to this approach.\n\nA comment might be necessary to clarify that we need to wipe out the\nquery buffer because it could be overwritten with an incomplete query\nstring due to certain failures.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 15 Sep 2023 15:17:50 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix for psql's meta-command \\ev"
},
{
"msg_contents": "Hi,\n\nI came across the patch since it was marked as \"Needs review\" (and\nthen I realized that I mistakenly opened the upcoming commit fest, not\nthe current one...).\n\n> Good catch! I agree to this.\n>\n> > This problem can be resolved by resetting the query buffer on\n> > error. You can see the attached source code. After that, it will\n> > result in output like the following:\n>\n> While exec_command_ef_ev() currently preserves the existing content of\n> the query buffer in case of certain failures, This behavior doesn't\n> seem to be particularly significant, especially given that both \\ef\n> and \\ev are intended to overwrite the query buffer on success.\n>\n> We have the option to fix get_create_object_cmd() and ensure\n> exec_command_ef_ev() retains the existing content of the query buffer\n> on failure. However, this approach seems like overly cumbersome. So\n> I'm +1 to this approach.\n>\n> A comment might be necessary to clarify that we need to wipe out the\n> query buffer because it could be overwritten with an incomplete query\n> string due to certain failures.\n\nI tested the patch and it LGTM too. I don't have a strong opinion on\nwhether we should bother with a comment or not.\n\nAs a side note I wonder whether we shouldn't assume that query_buf is\nalways properly initialized elsewhere. But this is probably out of\nscope of this particular discussion.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 18 Sep 2023 18:54:50 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix for psql's meta-command \\ev"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 06:54:50PM +0300, Aleksander Alekseev wrote:\n> I tested the patch and it LGTM too. I don't have a strong opinion on\n> whether we should bother with a comment or not.\n> \n> As a side note I wonder whether we shouldn't assume that query_buf is\n> always properly initialized elsewhere. But this is probably out of\n> scope of this particular discussion.\n\nThe patch looks incorrect to me. In case you've not noticed, we'd\nstill have the same problem if do_edit() fails for a reason or\nanother, and there are plenty of these in this code path, even if I\nagree that all of them are very unlikely. For example:\n- Emulate a failure in do_edit(), any way is fine, like forcing a\nreturn false at the beginning of the routine.\n- Attempt \\ev on a valid view. This passes lookup_object_oid() and\nget_create_object_cmd(), fails at do_edit while switching the status\nto PSQL_CMD_ERROR.\n- The query buffer is incorrect, a follow-up query still fails.\n\nAdding a comment looks important to me once we consider the edit as a\npath that can fail and the edited query is only executed then reset\nwhen we have PSQL_CMD_NEWEDIT as status. I would suggest the patch\nattached instead, taking care of the error case of this thread and the\nones I've spotted.\n--\nMichael",
"msg_date": "Tue, 19 Sep 2023 12:53:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix for psql's meta-command \\ev"
},
{
"msg_contents": "On 2023-09-19 12:53, Michael Paquier wrote:\n> Adding a comment looks important to me once we consider the edit as a\n> path that can fail and the edited query is only executed then reset\n> when we have PSQL_CMD_NEWEDIT as status. I would suggest the patch\n> attached instead, taking care of the error case of this thread and the\n> ones I've spotted.\n\nThank you everyone for the reviews. I fixed the patch for the error and also added a comment.\nYou can see attached file.\n\nRyoga Yoshida",
"msg_date": "Tue, 19 Sep 2023 15:29:11 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bug fix for psql's meta-command \\ev"
},
{
"msg_contents": "On 2023-09-19 15:29, Ryoga Yoshida wrote:\n> You can see attached file.\n\nI didn't notice that Michael attached the patch file. Just ignore my \nfile. I apologize for the inconvenience.\n\nRyoga Yoshida\n\n\n",
"msg_date": "Tue, 19 Sep 2023 16:23:36 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bug fix for psql's meta-command \\ev"
},
{
"msg_contents": "Hi Michael,\n\n> The patch looks incorrect to me. In case you've not noticed, we'd\n> still have the same problem if do_edit() fails [...]\n\nYou are right, I missed it. Your patch is correct while the original\none is not quite so.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 19 Sep 2023 13:23:54 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix for psql's meta-command \\ev"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 01:23:54PM +0300, Aleksander Alekseev wrote:\n> You are right, I missed it. Your patch is correct while the original\n> one is not quite so.\n\nActually there was a bit more to it in the presence of \\e, that could\nalso get some unpredictible behaviors if some errors happen while\nediting a query, which is something unlikely, still leads to strange\nbehaviors on failure injections. I was considering first to move the\nreset in do_edit(), but also we have the case of \\e[v|f] where the\nbuffer has no edits so it felt a bit more natural to do that in the\nupper layer like in this patch.\n\nAnother aspect of all these code paths is the line number that can be\noptionally number after an object name for \\e[v|f] or a file name for\n\\e (in the latter case it is possible to have a line number without a\nfile name, as well). Anyway, we only fill the query buffer after\nvalidating all the options at hand. So, while the status is set to\nPSQL_CMD_ERROR, we'd still do a reset of the query buffer but nothing\ngot added to it yet.\n\nI've also considered a backpatch for this change, but at the end\ndiscarded this option, at least for now. I don't think that someone\nis relying on the existing behavior of having the query buffer still\naround on failure if \\ev or \\ef fail their object lookup as the\ncontents are undefined, because that's not unintuitive, but this\nchange is not critical enough to make it backpatchable if somebody's\nbeen actually relying on the previous behavior. I'm OK to revisit\nthis choice later on depending on the feedback, though.\n--\nMichael",
"msg_date": "Wed, 20 Sep 2023 09:32:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix for psql's meta-command \\ev"
},
{
"msg_contents": "On 2023-09-20 09:32, Michael Paquier wrote:\n> Actually there was a bit more to it in the presence of \\e, that could\n> also get some unpredictible behaviors if some errors happen while\n> editing a query, which is something unlikely, still leads to strange\n> behaviors on failure injections. I was considering first to move the\n> reset in do_edit(), but also we have the case of \\e[v|f] where the\n> buffer has no edits so it felt a bit more natural to do that in the\n> upper layer like in this patch.\n\nIndeed, similar behaviours can happen with the \\e. The patch you \ncommitted looks good to me. Thank you.\n\nRyoga Yoshida\n\n\n",
"msg_date": "Wed, 20 Sep 2023 14:08:34 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bug fix for psql's meta-command \\ev"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen archive_library is set to 'basic_archive' but \nbasic_archive.archive_directory is not set, WAL archiving doesn't work \nand only the following warning message is logged.\n\n $ emacs $PGDATA/postgresql.conf\n archive_mode = on\n archive_library = 'basic_archive'\n\n $ bin/pg_ctl -D $PGDATA restart\n ....\n WARNING: archive_mode enabled, yet archiving is not configured\n\nThe issue here is that this warning message doesn't suggest any hint \nregarding the cause of WAL archiving failure. In other words, I think \nthat the log message in this case should report that WAL archiving \nfailed because basic_archive.archive_directory is not set. Thus, I think \nit's worth implementing new patch that improves that warning message, \nand here is the patch for that.\n\nBest regards,\nTung Nguyen",
"msg_date": "Fri, 15 Sep 2023 18:38:37 +0900",
"msg_from": "bt23nguyent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "> On 15 Sep 2023, at 11:38, bt23nguyent <[email protected]> wrote:\n> \n> Hi,\n> \n> When archive_library is set to 'basic_archive' but basic_archive.archive_directory is not set, WAL archiving doesn't work and only the following warning message is logged.\n> \n> $ emacs $PGDATA/postgresql.conf\n> archive_mode = on\n> archive_library = 'basic_archive'\n> \n> $ bin/pg_ctl -D $PGDATA restart\n> ....\n> WARNING: archive_mode enabled, yet archiving is not configured\n> \n> The issue here is that this warning message doesn't suggest any hint regarding the cause of WAL archiving failure. In other words, I think that the log message in this case should report that WAL archiving failed because basic_archive.archive_directory is not set.\n\nThat doesn't seem unreasonable, and I can imagine other callbacks having the\nneed to give errhints as well to assist the user.\n\n> Thus, I think it's worth implementing new patch that improves that warning message, and here is the patch for that.\n\n-basic_archive_configured(ArchiveModuleState *state)\n+basic_archive_configured(ArchiveModuleState *state, const char **errmsg)\n\nThe variable name errmsg implies that it will contain the errmsg() data when it\nin fact is used for errhint() data, so it should be named accordingly.\n\nIt's probably better to define the interface as ArchiveCheckConfiguredCB\nfunctions returning an allocated string in the passed pointer which the caller\nis responsible for freeing.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 15 Sep 2023 11:57:31 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "On 2023-Sep-15, Daniel Gustafsson wrote:\n\n> -basic_archive_configured(ArchiveModuleState *state)\n> +basic_archive_configured(ArchiveModuleState *state, const char **errmsg)\n> \n> The variable name errmsg implies that it will contain the errmsg() data when it\n> in fact is used for errhint() data, so it should be named accordingly.\n> \n> It's probably better to define the interface as ArchiveCheckConfiguredCB\n> functions returning an allocated string in the passed pointer which the caller\n> is responsible for freeing.\n\nAlso note that this callback is documented in archive-modules.sgml, so\nthat needs to be updated as well. This also means you can't backpatch\nthis change, or you risk breaking external software that implements this\ninterface.\n\nI suggest that 'msg' shouldn't be a global variable. There's no need\nfor that AFAICS; but if there is, this is a terrible name for it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 15 Sep 2023 12:49:24 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "> On 15 Sep 2023, at 12:49, Alvaro Herrera <[email protected]> wrote:\n> \n> On 2023-Sep-15, Daniel Gustafsson wrote:\n> \n>> -basic_archive_configured(ArchiveModuleState *state)\n>> +basic_archive_configured(ArchiveModuleState *state, const char **errmsg)\n>> \n>> The variable name errmsg implies that it will contain the errmsg() data when it\n>> in fact is used for errhint() data, so it should be named accordingly.\n>> \n>> It's probably better to define the interface as ArchiveCheckConfiguredCB\n>> functions returning an allocated string in the passed pointer which the caller\n>> is responsible for freeing.\n> \n> Also note that this callback is documented in archive-modules.sgml, so\n> that needs to be updated as well. This also means you can't backpatch\n> this change, or you risk breaking external software that implements this\n> interface.\n\nAbsolutely, this is master only for v17.\n\n> I suggest that 'msg' shouldn't be a global variable. There's no need\n> for that AFAICS; but if there is, this is a terrible name for it.\n\nAgreed.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 15 Sep 2023 14:48:55 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 02:48:55PM +0200, Daniel Gustafsson wrote:\n>> On 15 Sep 2023, at 12:49, Alvaro Herrera <[email protected]> wrote:\n>> \n>> On 2023-Sep-15, Daniel Gustafsson wrote:\n>> \n>>> -basic_archive_configured(ArchiveModuleState *state)\n>>> +basic_archive_configured(ArchiveModuleState *state, const char **errmsg)\n>>> \n>>> The variable name errmsg implies that it will contain the errmsg() data when it\n>>> in fact is used for errhint() data, so it should be named accordingly.\n\nI have no objection to allowing this callback to provide additional\ninformation, but IMHO this should use errdetail() instead of errhint(). In\nthe provided patch, the new message explains how the module is not\nconfigured. It doesn't hint at how to fix it (although presumably one\ncould figure that out pretty easily).\n\n>>> It's probably better to define the interface as ArchiveCheckConfiguredCB\n>>> functions returning an allocated string in the passed pointer which the caller\n>>> is responsible for freeing.\n\nThat does seem more flexible.\n\n>> Also note that this callback is documented in archive-modules.sgml, so\n>> that needs to be updated as well. This also means you can't backpatch\n>> this change, or you risk breaking external software that implements this\n>> interface.\n> \n> Absolutely, this is master only for v17.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 15 Sep 2023 07:38:27 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "> On 15 Sep 2023, at 16:38, Nathan Bossart <[email protected]> wrote:\n\n> this should use errdetail() instead of errhint(). In\n> the provided patch, the new message explains how the module is not\n> configured. It doesn't hint at how to fix it (although presumably one\n> could figure that out pretty easily).\n\nFair point, I agree with your reasoning that errdetail seems more appropriate.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 15 Sep 2023 16:41:52 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "On 2023-09-15 23:38, Nathan Bossart wrote:\n> On Fri, Sep 15, 2023 at 02:48:55PM +0200, Daniel Gustafsson wrote:\n>>> On 15 Sep 2023, at 12:49, Alvaro Herrera <[email protected]> \n>>> wrote:\n>>> \n>>> On 2023-Sep-15, Daniel Gustafsson wrote:\n>>> \n>>>> -basic_archive_configured(ArchiveModuleState *state)\n>>>> +basic_archive_configured(ArchiveModuleState *state, const char \n>>>> **errmsg)\n>>>> \n>>>> The variable name errmsg implies that it will contain the errmsg() \n>>>> data when it\n>>>> in fact is used for errhint() data, so it should be named \n>>>> accordingly.\n> \n> I have no objection to allowing this callback to provide additional\n> information, but IMHO this should use errdetail() instead of errhint(). \n> In\n> the provided patch, the new message explains how the module is not\n> configured. It doesn't hint at how to fix it (although presumably one\n> could figure that out pretty easily).\n> \n>>>> It's probably better to define the interface as \n>>>> ArchiveCheckConfiguredCB\n>>>> functions returning an allocated string in the passed pointer which \n>>>> the caller\n>>>> is responsible for freeing.\n> \n> That does seem more flexible.\n> \n>>> Also note that this callback is documented in archive-modules.sgml, \n>>> so\n>>> that needs to be updated as well. This also means you can't \n>>> backpatch\n>>> this change, or you risk breaking external software that implements \n>>> this\n>>> interface.\n>> \n>> Absolutely, this is master only for v17.\n> \n> +1\n\nThank you for all of your comments!\n\nThey are all really constructive and I totally agree with the points you \nbrought up.\nI have updated the patch accordingly.\n\nPlease let me know if you have any further suggestions that I can \nimprove more.\n\nBest regards,\nTung Nguyen",
"msg_date": "Tue, 19 Sep 2023 18:21:59 +0900",
"msg_from": "bt23nguyent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "> On 19 Sep 2023, at 11:21, bt23nguyent <[email protected]> wrote:\n\n> Please let me know if you have any further suggestions that I can improve more.\n\n+ *logdetail = pstrdup(\"WAL archiving failed because basic_archive.archive_directory is not set\");\n\nNitpick: detail messages should end with a period per the error message style\nguide [0].\n\n- archiving will proceed only when it returns <literal>true</literal>.\n+ archiving will proceed only when it returns <literal>true</literal>. The\n+ archiver may also emit the detail explaining how the module is not configured\n+ to the sever log if the archive module has any. \n\nI think this paragraph needs to be updated to include how the returned\nlogdetail is emitted, since it currently shows the WARNING without mentioning\nthe added detail in case returned. It would also be good to mention that it\nshould be an allocated string which the caller can free.\n\n--\nDaniel Gustafsson\n\n[0] https://www.postgresql.org/docs/devel/error-style-guide.html\n\n",
"msg_date": "Wed, 20 Sep 2023 14:14:54 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "On 2023-09-20 21:14, Daniel Gustafsson wrote:\n>> On 19 Sep 2023, at 11:21, bt23nguyent <[email protected]> \n>> wrote:\n> \n>> Please let me know if you have any further suggestions that I can \n>> improve more.\n> \n> + *logdetail = pstrdup(\"WAL archiving failed because\n> basic_archive.archive_directory is not set\");\n> \n> Nitpick: detail messages should end with a period per the error message \n> style\n> guide [0].\n> \n\nYes! I totally missed this detail.\n\n> - archiving will proceed only when it returns \n> <literal>true</literal>.\n> + archiving will proceed only when it returns \n> <literal>true</literal>. The\n> + archiver may also emit the detail explaining how the module is\n> not configured\n> + to the sever log if the archive module has any.\n> \n> I think this paragraph needs to be updated to include how the returned\n> logdetail is emitted, since it currently shows the WARNING without \n> mentioning\n> the added detail in case returned. It would also be good to mention \n> that it\n> should be an allocated string which the caller can free.\n> \n> --\n> Daniel Gustafsson\n> \n> [0] https://www.postgresql.org/docs/devel/error-style-guide.html\n\n\nThank you for your kind review comment!\n\nI agree with you that this document update is not explanatory enough.\nSo here is an updated patch.\n\nIf there is any further suggestion, please let me know.\n\nBest regards,\nTung Nguyen",
"msg_date": "Thu, 21 Sep 2023 11:18:00 +0900",
"msg_from": "bt23nguyent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 11:18:00AM +0900, bt23nguyent wrote:\n> -basic_archive_configured(ArchiveModuleState *state)\n> +basic_archive_configured(ArchiveModuleState *state, char **logdetail)\n\nCould we do something more like GUC_check_errdetail() instead to maintain\nbackward compatibility with v16?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 15:20:47 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "> On 26 Sep 2023, at 00:20, Nathan Bossart <[email protected]> wrote:\n> \n> On Thu, Sep 21, 2023 at 11:18:00AM +0900, bt23nguyent wrote:\n>> -basic_archive_configured(ArchiveModuleState *state)\n>> +basic_archive_configured(ArchiveModuleState *state, char **logdetail)\n> \n> Could we do something more like GUC_check_errdetail() instead to maintain\n> backward compatibility with v16?\n\nWe'd still need something exported to call into which isn't in 16, so it\nwouldn't be more than optically backwards compatible since a module written for\n17 won't compile for 16, or am I missing something?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 08:13:45 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "On Tue, Sep 26, 2023 at 08:13:45AM +0200, Daniel Gustafsson wrote:\n>> On 26 Sep 2023, at 00:20, Nathan Bossart <[email protected]> wrote:\n>> \n>> On Thu, Sep 21, 2023 at 11:18:00AM +0900, bt23nguyent wrote:\n>>> -basic_archive_configured(ArchiveModuleState *state)\n>>> +basic_archive_configured(ArchiveModuleState *state, char **logdetail)\n>> \n>> Could we do something more like GUC_check_errdetail() instead to maintain\n>> backward compatibility with v16?\n> \n> We'd still need something exported to call into which isn't in 16, so it\n> wouldn't be more than optically backwards compatible since a module written for\n> 17 won't compile for 16, or am I missing something?\n\nI only mean that a module written for v16 could continue to be used in v17\nwithout any changes. You are right that a module that uses this new\nfunctionality wouldn't compile for v16. But IMHO the interface is nicer,\ntoo, since module authors wouldn't need to worry about allocating the space\nfor the string or formatting the message.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 12 Oct 2023 21:25:59 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "> On 13 Oct 2023, at 04:25, Nathan Bossart <[email protected]> wrote:\n> \n> On Tue, Sep 26, 2023 at 08:13:45AM +0200, Daniel Gustafsson wrote:\n>>> On 26 Sep 2023, at 00:20, Nathan Bossart <[email protected]> wrote:\n>>> \n>>> On Thu, Sep 21, 2023 at 11:18:00AM +0900, bt23nguyent wrote:\n>>>> -basic_archive_configured(ArchiveModuleState *state)\n>>>> +basic_archive_configured(ArchiveModuleState *state, char **logdetail)\n>>> \n>>> Could we do something more like GUC_check_errdetail() instead to maintain\n>>> backward compatibility with v16?\n>> \n>> We'd still need something exported to call into which isn't in 16, so it\n>> wouldn't be more than optically backwards compatible since a module written for\n>> 17 won't compile for 16, or am I missing something?\n> \n> I only mean that a module written for v16 could continue to be used in v17\n> without any changes. You are right that a module that uses this new\n> functionality wouldn't compile for v16.\n\nSure, but that also means that few if any existing modules will be updated to\nprovide this =).\n\n> But IMHO the interface is nicer,\n\nThat's a more compelling reason IMO. I'm not sure if I prefer the\nGUC_check_errdetail-like approach better, I would for sure not be opposed to\nreviewing a version of the patch doing it that way.\n\nTung Nguyen: are you interested in updating the patch along these lines\nsuggested by Nathan?\n\n> since module authors wouldn't need to worry about allocating the space\n> for the string or formatting the message.\n\nWell, they still need to format it; and calling <new_api>_errdetail(msg),\npstrdup(msg) or psprintf(msg) isn't a world of difference.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 13 Oct 2023 11:02:39 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "On Fri, Oct 13, 2023 at 11:02:39AM +0200, Daniel Gustafsson wrote:\n> That's a more compelling reason IMO. I'm not sure if I prefer the\n> GUC_check_errdetail-like approach better, I would for sure not be opposed to\n> reviewing a version of the patch doing it that way.\n> \n> Tung Nguyen: are you interested in updating the patch along these lines\n> suggested by Nathan?\n\nI gave it a try.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 5 Jan 2024 17:03:57 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "On Fri, Jan 05, 2024 at 05:03:57PM -0600, Nathan Bossart wrote:\n> I gave it a try.\n\nIs there any interest in this? If not, I'll withdraw the commitfest entry.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Feb 2024 12:51:58 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "> On 28 Feb 2024, at 19:51, Nathan Bossart <[email protected]> wrote:\n> \n> On Fri, Jan 05, 2024 at 05:03:57PM -0600, Nathan Bossart wrote:\n>> I gave it a try.\n> \n> Is there any interest in this? If not, I'll withdraw the commitfest entry.\n\nI'm still interested, please leave it in and I'll circle around to it.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 22:05:26 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 10:05:26PM +0100, Daniel Gustafsson wrote:\n>> On 28 Feb 2024, at 19:51, Nathan Bossart <[email protected]> wrote:\n>> Is there any interest in this? If not, I'll withdraw the commitfest entry.\n> \n> I'm still interested, please leave it in and I'll circle around to it.\n\nThanks, Daniel.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Feb 2024 15:06:35 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "> On 6 Jan 2024, at 00:03, Nathan Bossart <[email protected]> wrote:\n\n> I gave it a try.\n\nLooking at this again I think this is about ready to go in. My only comment is\nthat doc/src/sgml/archive-modules.sgml probably should be updated to refer to\nsetting the errdetail, especially since we document the errormessage there.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 4 Mar 2024 15:21:59 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "On Mon, Mar 04, 2024 at 03:21:59PM +0100, Daniel Gustafsson wrote:\n> Looking at this again I think this is about ready to go in. My only comment is\n> that doc/src/sgml/archive-modules.sgml probably should be updated to refer to\n> setting the errdetail, especially since we document the errormessage there.\n\nThanks for reviewing. How does this look?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 4 Mar 2024 11:22:52 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "> On 4 Mar 2024, at 18:22, Nathan Bossart <[email protected]> wrote:\n> \n> On Mon, Mar 04, 2024 at 03:21:59PM +0100, Daniel Gustafsson wrote:\n>> Looking at this again I think this is about ready to go in. My only comment is\n>> that doc/src/sgml/archive-modules.sgml probably should be updated to refer to\n>> setting the errdetail, especially since we document the errormessage there.\n> \n> Thanks for reviewing. How does this look?\n\nLooks good from a read-through, I like it. A few comments on the commit\nmessage only:\n\nactionable details about the source of the miconfiguration. This\ns/miconfiguration/misconfiguration/\n\nReviewed-by: Daniel Gustafsson, ¡lvaro Herrera\nAlvaro's name seems wrong.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 4 Mar 2024 21:27:23 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "On Mon, Mar 04, 2024 at 09:27:23PM +0100, Daniel Gustafsson wrote:\n> Looks good from a read-through, I like it. A few comments on the commit\n> message only:\n> \n> actionable details about the source of the miconfiguration. This\n> s/miconfiguration/misconfiguration/\n\nI reworded the commit message a bit to avoid the word \"misconfiguration,\"\nas it felt a bit misleading to me. In any case, this was fixed, albeit\nindirectly.\n\n> Reviewed-by: Daniel Gustafsson, �lvaro Herrera\n> Alvaro's name seems wrong.\n\nHm. It looks alright to me. I copied the name from his e-mail signature,\nwhich has an accent over the first 'A'. I assume that's why it's not\nshowing up correctly in some places.\n\nAnyway, I've committed this now. Thanks for taking a look!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 15:50:48 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Mon, Mar 04, 2024 at 09:27:23PM +0100, Daniel Gustafsson wrote:\n>> Reviewed-by: Daniel Gustafsson, ¡lvaro Herrera\n>> Alvaro's name seems wrong.\n\n> Hm. It looks alright to me. I copied the name from his e-mail signature,\n> which has an accent over the first 'A'. I assume that's why it's not\n> showing up correctly in some places.\n\nI think that git has an expectation of commit log entries being in\nUTF8. The committed message looks okay from my end, but maybe some\nencoding mangling happened to the version Daniel was looking at?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Mar 2024 17:05:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the log message output of basic_archive when\n basic_archive.archive_directory parameter is not set"
}
]
[
{
"msg_contents": "Hi,\n\nI found that pgBufferUsage.blk_{read|write}_time are zero although there\nare pgBufferUsage.local_blks_{read|written}. For example, when you run\n(track_io_timing should be on):\n\nCREATE EXTENSION pg_stat_statements;\nCREATE TEMP TABLE example_table (id serial PRIMARY KEY, data text);\nINSERT INTO example_table (data) SELECT 'Some data'\n FROM generate_series(1, 100000);\nUPDATE example_table SET data = 'Updated data';\nSELECT query, local_blks_read, local_blks_written,\n blk_read_time, blk_write_time FROM pg_stat_statements\n WHERE query like '%UPDATE%';\n\non master:\n\nquery | UPDATE example_table SET data = $1\nlocal_blks_read | 467\nlocal_blks_written | 2087\nblk_read_time | 0\nblk_write_time | 0\n\nThere are two reasons for that:\n\n1- When local_blks_{read|written} are incremented,\npgstat_count_io_op_time() is called with IOOBJECT_TEMP_RELATION. But in\npgstat_count_io_op_time(), pgBufferUsage.blk_{read|write}_time are\nincremented only when io_object is IOOBJECT_RELATION. The first patch\nattached fixes that.\n\n2- In ExtendBufferedRelLocal() and in ExtendBufferedRelShared(), extend\ncalls increment local_blks_written and shared_blks_written respectively.\nBut these extends are not counted while calculating the blk_write_time. If\nthere is no specific reason not to do that, I think these extends need to\nbe counted in blk_write_time. The second patch attached does that.\n\nResults after applying first patch:\n\nquery | UPDATE example_table SET data = $1\nlocal_blks_read | 467\nlocal_blks_written | 2087\nblk_read_time | 0.30085\nblk_write_time | 1.475123\n\nResults after applying both patches:\n\nquery | UPDATE example_table SET data = $1\nlocal_blks_read | 467\nlocal_blks_written | 2087\nblk_read_time | 0.329597\nblk_write_time | 4.050305\n\nAny kind of feedback would be appreciated.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Fri, 15 Sep 2023 12:46:56 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 9:24 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> I found that pgBufferUsage.blk_{read|write}_time are zero although there are pgBufferUsage.local_blks_{read|written}\n\nYes, good catch. This is a bug. I will note that at least in 15 and\nlikely before, pgBufferUsage.local_blks_written is incremented for\nlocal buffers but pgBufferUsage.blk_write_time is only added to for\nshared buffers (in FlushBuffer()). I think it makes sense to propose a\nbug fix to stable branches counting blk_write_time for local buffers\nas well.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 15 Sep 2023 09:30:24 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "Hi,\n\nOn Fri, 15 Sept 2023 at 16:30, Melanie Plageman\n<[email protected]> wrote:\n>\n> Yes, good catch. This is a bug. I will note that at least in 15 and\n> likely before, pgBufferUsage.local_blks_written is incremented for\n> local buffers but pgBufferUsage.blk_write_time is only added to for\n> shared buffers (in FlushBuffer()). I think it makes sense to propose a\n> bug fix to stable branches counting blk_write_time for local buffers\n> as well.\n\nI attached the PG16+ (after pg_stat_io) and PG15- (before pg_stat_io)\nversions of the same patch.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Tue, 3 Oct 2023 12:37:37 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 12:34 PM Melanie Plageman\n<[email protected]> wrote:\n> On Fri, Sep 15, 2023 at 9:24 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > I found that pgBufferUsage.blk_{read|write}_time are zero although there are pgBufferUsage.local_blks_{read|written}\n>\n> Yes, good catch. This is a bug. I will note that at least in 15 and\n> likely before, pgBufferUsage.local_blks_written is incremented for\n> local buffers but pgBufferUsage.blk_write_time is only added to for\n> shared buffers (in FlushBuffer()). I think it makes sense to propose a\n> bug fix to stable branches counting blk_write_time for local buffers\n> as well.\n\nMy first thought was to wonder whether this was even a bug. I\nremembered that EXPLAIN treats shared, local, and temp buffers as\nthree separate categories of things. But it seems that someone decided\nto conflate two of them for I/O timing purposes:\n\n        if (has_timing)\n        {\n            appendStringInfoString(es->str, \"\nshared/local\");\n\n^^^^ Notice this bit in particular.\n\n            if (!INSTR_TIME_IS_ZERO(usage->blk_read_time))\n                appendStringInfo(es->str, \" read=%0.3f\",\n\nINSTR_TIME_GET_MILLISEC(usage->blk_read_time));\n            if (!INSTR_TIME_IS_ZERO(usage->blk_write_time))\n                appendStringInfo(es->str, \"\nwrite=%0.3f\",\n\nINSTR_TIME_GET_MILLISEC(usage->blk_write_time));\n            if (has_temp_timing)\n                appendStringInfoChar(es->str, ',');\n        }\n        if (has_temp_timing)\n        {\n            appendStringInfoString(es->str, \" temp\");\n            if\n(!INSTR_TIME_IS_ZERO(usage->temp_blk_read_time))\n                appendStringInfo(es->str, \" read=%0.3f\",\n\nINSTR_TIME_GET_MILLISEC(usage->temp_blk_read_time));\n            if\n(!INSTR_TIME_IS_ZERO(usage->temp_blk_write_time))\n                appendStringInfo(es->str, \"\nwrite=%0.3f\",\n\nINSTR_TIME_GET_MILLISEC(usage->temp_blk_write_time));\n        }\n\nGiven that, I'm inclined to agree that this is a bug. But we might\nneed to go through and make sure all of the code that deals with these\ncounters is on the same page about what the values represent. Maybe\nthere is code lurking somewhere that thinks these counters are only\nsupposed to include \"shared\" rather than, as the fragment above\nsuggests, \"shared/local\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Oct 2023 12:44:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "Hi,\n\nOn Tue, 3 Oct 2023 at 19:44, Robert Haas <[email protected]> wrote:\n>\n> Given that, I'm inclined to agree that this is a bug. But we might\n> need to go through and make sure all of the code that deals with these\n> counters is on the same page about what the values represent. Maybe\n> there is code lurking somewhere that thinks these counters are only\n> supposed to include \"shared\" rather than, as the fragment above\n> suggests, \"shared/local\".\n\nThank you for the guidance.\n\nWhat do you think about the second patch, counting extend calls'\ntimings in blk_write_time? In my opinion, if something increments\n{shared|local}_blks_written, then it needs to be counted in\nblk_write_time too. I am not sure why it was decided like that.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 5 Oct 2023 13:25:36 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 6:25 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> What do you think about the second patch, counting extend calls'\n> timings in blk_write_time? In my opinion if something increments\n> {shared|local}_blks_written, then it needs to be counted in\n> blk_write_time too. I am not sure why it is decided like that.\n\nI agree that an extend should be counted the same way as a write. But\nI'm suspicious that here too we have confusion about whether\nblk_write_time is supposed to be covering shared buffers and local\nbuffers or just shared buffers.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 5 Oct 2023 08:51:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Thu, Oct 05, 2023 at 08:51:40AM -0400, Robert Haas wrote:\n> On Thu, Oct 5, 2023 at 6:25 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>> What do you think about the second patch, counting extend calls'\n>> timings in blk_write_time? In my opinion if something increments\n>> {shared|local}_blks_written, then it needs to be counted in\n>> blk_write_time too. I am not sure why it is decided like that.\n> \n> I agree that an extend should be counted the same way as a write. But\n> I'm suspicious that here too we have confusion about whether\n> blk_write_time is supposed to be covering shared buffers and local\n> buffers or just shared buffers.\n\nAgreed.\n\nIn ~14, as far as I can see blk_write_time is only incremented for\nshared buffers. FWIW, I agree that we should improve these stats for\nlocal buffers but I am not on board with a solution where we'd use the\nsame counter for local and shared buffers while we've historically\nonly counted the former, because that could confuse existing\nmonitoring queries. It seems to me that the right solution is to do\nthe same separation as temp blocks with two separate counters, without\na backpatch. I'd like to go as far as renaming blk_read_time and\nblk_write_time to respectively shared_blk_read_time and\nshared_blk_write_time to know exactly what the type of block dealt\nwith is when querying this data, particularly for pg_stat_statements's\nsake.\n--\nMichael",
"msg_date": "Tue, 10 Oct 2023 09:54:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "Hi,\n\nOn Tue, 10 Oct 2023 at 03:54, Michael Paquier <[email protected]> wrote:\n>\n> In ~14, as far as I can see blk_write_time is only incremented for\n> shared buffers. FWIW, I agree that we should improve these stats for\n> local buffers but I am not on board with a solution where we'd use the\n> same counter for local and shared buffers while we've historically\n> only counted the former, because that could confuse existing\n> monitoring queries. It seems to me that the right solution is to do\n> the same separation as temp blocks with two separate counters, without\n> a backpatch. I'd like to go as far as renaming blk_read_time and\n> blk_write_time to respectively shared_blk_read_time and\n> shared_blk_write_time to know exactly what the type of block dealt\n> with is when querying this data, particularly for pg_stat_statements's\n> sake.\n\nYes, that could be a better solution. Also, having more detailed stats\nfor shared and local buffers is helpful. I updated patches in line\nwith that:\n\n0001: Counts extends same way as a write.\n0002: Rename blk_{read|write}_time as shared_blk_{read|write}_time.\n0003: Add new local_blk_{read|write}_time variables.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Mon, 16 Oct 2023 13:07:07 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 01:07:07PM +0300, Nazir Bilal Yavuz wrote:\n> Yes, that could be a better solution. Also, having more detailed stats\n> for shared and local buffers is helpful. I updated patches in line\n> with that:\n> \n> 0001: Counts extends same way as a write.\n\nIt can change existing query results on an already-released branch,\nbut we already count the number of blocks when doing a relation\nextension, so counting the write time is something I'd rather fix in\nv16. If you have any objections, let me know.\n\n> 0002: Rename blk_{read|write}_time as shared_blk_{read|write}_time.\n\nNote that `git diff --check` complains here.\n\n--- a/contrib/pg_stat_statements/pg_stat_statements--1.1--1.2.sql\n+++ b/contrib/pg_stat_statements/pg_stat_statements--1.1--1.2.sql\n@@ -30,8 +30,8 @@ CREATE FUNCTION pg_stat_statements(IN showtext boolean,\n OUT local_blks_written int8,\n OUT temp_blks_read int8,\n OUT temp_blks_written int8,\n- OUT blk_read_time float8,\n- OUT blk_write_time float8\n+ OUT shared_blk_read_time float8,\n+ OUT shared_blk_write_time float8\n\nDoing that in an extension upgrade script is incorrect. These should\nnot be touched. \n\n- Total time the statement spent reading data file blocks, in milliseconds\n+ Total time the statement spent reading shared data file blocks, in milliseconds\n\nOr just shared blocks? That's what we use elsewhere for\npg_stat_statements. \"shared data file blocks\" sounds a bit confusing\nfor relation file blocks read/written from/to shared buffers.\n\n> 0003: Add new local_blk_{read|write}_time variables.\n\n DATA = pg_stat_statements--1.4.sql \\\n+ pg_stat_statements--1.11--1.12.sql \\\n pg_stat_statements--1.10--1.11.sql \\\n\nThere is no need to bump again pg_stat_statements, as it has already\nbeen bumped to 1.11 on HEAD per the recent commit 5a3423ad8ee1 from\nDaniel. So the new changes can just be added to 1.11.\n--\nMichael",
"msg_date": "Tue, 17 Oct 2023 17:40:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
    "msg_contents": "Hi,\n\nThanks for the review!\n\nOn Tue, 17 Oct 2023 at 11:40, Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Oct 16, 2023 at 01:07:07PM +0300, Nazir Bilal Yavuz wrote:\n> > Yes, that could be a better solution. Also, having more detailed stats\n> > for shared and local buffers is helpful. I updated patches in line\n> > with that:\n> >\n> > 0001: Counts extends same way as a write.\n>\n> It can change existing query results on an already-released branch,\n> but we already count the number of blocks when doing a relation\n> extension, so counting the write time is something I'd rather fix in\n> v16. If you have any objections, let me know.\n\nI agree.\n\n>\n> > 0002: Rename blk_{read|write}_time as shared_blk_{read|write}_time.\n>\n> Note that `git diff --check` complains here.\n>\n> --- a/contrib/pg_stat_statements/pg_stat_statements--1.1--1.2.sql\n> +++ b/contrib/pg_stat_statements/pg_stat_statements--1.1--1.2.sql\n> @@ -30,8 +30,8 @@ CREATE FUNCTION pg_stat_statements(IN showtext boolean,\n> OUT local_blks_written int8,\n> OUT temp_blks_read int8,\n> OUT temp_blks_written int8,\n> - OUT blk_read_time float8,\n> - OUT blk_write_time float8\n> + OUT shared_blk_read_time float8,\n> + OUT shared_blk_write_time float8\n>\n> Doing that in an extension upgrade script is incorrect. These should\n> not be touched.\n>\n> - Total time the statement spent reading data file blocks, in milliseconds\n> + Total time the statement spent reading shared data file blocks, in milliseconds\n>\n> Or just shared blocks? That's what we use elsewhere for\n> pg_stat_statements. \"shared data file blocks\" sounds a bit confusing\n> for relation file blocks read/written from/to shared buffers.\n>\n> > 0003: Add new local_blk_{read|write}_time variables.\n>\n> DATA = pg_stat_statements--1.4.sql \\\n> + pg_stat_statements--1.11--1.12.sql \\\n> pg_stat_statements--1.10--1.11.sql \\\n>\n> There is no need to bump again pg_stat_statements, as it has already\n> been bumped to 1.11 on HEAD per the recent commit 5a3423ad8ee1 from\n> Daniel. So the new changes can just be added to 1.11.\n\nI updated patches based on your comments. v4 is attached.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Tue, 17 Oct 2023 16:44:25 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 04:44:25PM +0300, Nazir Bilal Yavuz wrote:\n> I updated patches based on your comments. v4 is attached.\n\nThanks for the new versions. I have applied 0001 and backpatched it\nfor now. 0002 and 0003 look in much cleaner shape than previously.\n--\nMichael",
"msg_date": "Wed, 18 Oct 2023 14:56:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Tue, Oct 03, 2023 at 12:44:36PM -0400, Robert Haas wrote:\n> My first thought was to wonder whether this was even a bug. I\n> remembered that EXPLAIN treats shared, local, and temp buffers as\n> three separate categories of things. But it seems that someone decided\n> to conflate two of them for I/O timing purposes:\n> \n> if (has_timing)\n> {\n> appendStringInfoString(es->str, \"\n> shared/local\");\n> \n> ^^^^ Notice this bit in particular.\n\nI was reviewing the whole, and this is an oversight specific to\nefb0ef909f60, because we've never incremented the write/read counters\nfor local buffers, even with this commit applied, for both the EXPLAIN\nreports and anything stored in pg_stat_statement. It seems to me that\nthe origin of the confusion comes down to pg_stat_database where\nblk_{read|write}_time increments on both local and shared blocks, but\non EXPLAIN this stuff only reflects data about shared buffers. So the\n\"shared\" part of the string is right, but the \"local\" part is not in\nv15 and v16.\n--\nMichael",
"msg_date": "Thu, 19 Oct 2023 11:25:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 02:56:42PM +0900, Michael Paquier wrote:\n> Thanks for the new versions. I have applied 0001 and backpatched it\n> for now. 0002 and 0003 look in much cleaner shape than previously.\n\n0002 and 0003 have now been applied. I have split 0003 into two parts\nat the end, mainly on clarity grounds: one for the counters with\nEXPLAIN and a second for pg_stat_statements.\n\nThere were a few things in the patch set. Per my notes:\n- Some incorrect indentation.\n- The additions of show_buffer_usage() did not handle correctly the\naddition of a comma before/after the local timing block. The code\narea for has_local_timing needs to check for has_temp_timing, while\nthe area of has_shared_timing needs to check for (has_local_timing ||\nhas_temp_timing).\n- explain.sgml was missing an update for the information related to\nthe read/write timings of the local blocks.\n\nRemains what we should do about the \"shared/local\" string in\nshow_buffer_usage() for v16 and v15, as \"local\" is unrelated to that.\nPerhaps we should just switch to \"shared\" in this case or just remove\nthe string entirely? Still that implies changing the output of\nEXPLAIN on a stable branch in this case, so there could be an argument\nfor leaving this stuff alone.\n--\nMichael",
"msg_date": "Thu, 19 Oct 2023 14:26:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "Hi,\n\nOn Thu, 19 Oct 2023 at 08:26, Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Oct 18, 2023 at 02:56:42PM +0900, Michael Paquier wrote:\n> > Thanks for the new versions. I have applied 0001 and backpatched it\n> > for now. 0002 and 0003 look in much cleaner shape than previously.\n>\n> 0002 and 0003 have now been applied. I have split 0003 into two parts\n> at the end, mainly on clarity grounds: one for the counters with\n> EXPLAIN and a second for pg_stat_statements.\n>\n> There were a few things in the patch set. Per my notes:\n> - Some incorrect indentation.\n> - The additions of show_buffer_usage() did not handle correctly the\n> addition of a comma before/after the local timing block. The code\n> area for has_local_timing needs to check for has_temp_timing, while\n> the area of has_shared_timing needs to check for (has_local_timing ||\n> has_temp_timing).\n> - explain.sgml was missing an update for the information related to\n> the read/write timings of the local blocks.\n\nThanks for the changes, push and feedback!\n\n>\n> Remains what we should do about the \"shared/local\" string in\n> show_buffer_usage() for v16 and v15, as \"local\" is unrelated to that.\n> Perhaps we should just switch to \"shared\" in this case or just remove\n> the string entirely? Still that implies changing the output of\n> EXPLAIN on a stable branch in this case, so there could be an argument\n> for leaving this stuff alone.\n\nI think switching it to 'shared' makes sense. That shouldn't confuse\nexisting monitoring queries much as the numbers won't change, right?\nAlso, if we keep 'shared/local' there could be similar complaints to\nthis thread in the future; so, at least adding comments can be\nhelpful.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Fri, 27 Oct 2023 16:58:20 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 04:58:20PM +0300, Nazir Bilal Yavuz wrote:\n> I think switching it to 'shared' makes sense. That shouldn't confuse\n> existing monitoring queries much as the numbers won't change, right?\n> Also, if we keep 'shared/local' there could be similar complaints to\n> this thread in the future; so, at least adding comments can be\n> helpful.\n\nThe problem is that it may impact existing tools that do explain\noutput deparsing. One of them is https://explain.depesz.com/ that\nHubert Depesz Lubaczewski has implemented, and it would be sad to\nbreak anything related to it.\n\nI am adding Hubert in CC for comments about changing this\n\"shared/local\" to \"shared\" on a released branch. Knowing that\n\"shared\" and \"local\" will need to be handled as separate terms in 17~\nanyway, perhaps that's not a big deal, but let's be sure.\n--\nMichael",
"msg_date": "Mon, 30 Oct 2023 10:45:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 10:45:05AM +0900, Michael Paquier wrote:\n> On Fri, Oct 27, 2023 at 04:58:20PM +0300, Nazir Bilal Yavuz wrote:\n> > I think switching it to 'shared' makes sense. That shouldn't confuse\n> > existing monitoring queries much as the numbers won't change, right?\n> > Also, if we keep 'shared/local' there could be similar complaints to\n> > this thread in the future; so, at least adding comments can be\n> > helpful.\n> \n> The problem is that it may impact existing tools that do explain\n> output deparsing. One of them is https://explain.depesz.com/ that\n> Hubert Depesz Lubaczewski has implemented, and it would be sad to\n> break anything related to it.\n> \n> I am adding Hubert in CC for comments about changing this\n> \"shared/local\" to \"shared\" on a released branch. Knowing that\n> \"shared\" and \"local\" will need to be handled as separate terms in 17~\n> anyway, perhaps that's not a big deal, but let's be sure.\n\nHi,\nsome things will definitely break, but that's 100% OK. The change seems\nneeded, and I can update my parser to deal with it :)\n\nBest regards,\n\ndepesz\n\n\n\n",
"msg_date": "Mon, 30 Oct 2023 15:14:16 +0100",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 03:14:16PM +0100, hubert depesz lubaczewski wrote:\n> some things will definitely break, but that's 100% OK. The change seems\n> needed, and I can update my parser to deal with it :)\n\nThanks for the input. I was looking yesterday if this code was\navailable somewhere, but couldn't find it.. Until this morning:\nhttps://gitlab.com/depesz/explain.depesz.com.git\n\nAnd.. It looks like things would become better if we change\n\"shared/local\" to \"shared\", because the parsing code seems to have an\nissue once you add a '/'. All the fields in I/O Timings are\nconsidered as part of a Node, and they're just included in the output.\nNow, pasting a plan that includes \"shared/local\" drops entirely the\nstring from the output result, so some information is lost. In short,\nimagine that we have the following string in a node:\nI/O Timings: shared/local write=23.77\n\nThis would show up like that, meaning that the context where the\nwrite/read timings happened is lost:\nI/O Timings: write=23.77\n\nIf we switch back to \"shared\", the context would be kept around. Of\ncourse, this does not count for all the parsers that may be out\nthere, but at least that's something.\n--\nMichael",
"msg_date": "Tue, 31 Oct 2023 08:17:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 08:17:52AM +0900, Michael Paquier wrote:\n> Thanks for the input. I was looking yesterday if this code was\n> available somewhere, but couldn't find it.. Until this morning:\n> https://gitlab.com/depesz/explain.depesz.com.git\n\nWell, the parser itself is https://gitlab.com/depesz/Pg--Explain/ :)\n\n> And.. It looks like things would become better if we change\n> \"shared/local\" to \"shared\", because the parsing code seems to have an\n> issue once you add a '/'. All the fields in I/O Timings are\n> considered as part of a Node, and they're just included in the output.\n> Now, pasting a plan that includes \"shared/local\" drops entirely the\n> string from the output result, so some information is lost. In short,\n> imagine that we have the following string in a node:\n> I/O Timings: shared/local write=23.77\n> \n> This would show up like that, meaning that the context where the\n> write/read timings happened is lost:\n> I/O Timings: write=23.77\n> \n> If we switch back to \"shared\", the context would be kept around. Of\n> course, this does not count for all the parsers that may be out\n> there, but at least that's something.\n\nWell, if it's possible to deduce what is the meaning in given line,\nI can add the logic to do the deduction to parser.\n\nAlso, I want to say that I appreciate being looped in the discussion.\n\nBest regards,\n\ndepesz\n\n\n\n",
"msg_date": "Tue, 31 Oct 2023 15:11:03 +0100",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 03:11:03PM +0100, hubert depesz lubaczewski wrote:\n> On Tue, Oct 31, 2023 at 08:17:52AM +0900, Michael Paquier wrote:\n>> Thanks for the input. I was looking yesterday if this code was\n>> available somewhere, but couldn't find it.. Until this morning:\n>> https://gitlab.com/depesz/explain.depesz.com.git\n> \n> Well, the parser itself is https://gitlab.com/depesz/Pg--Explain/ :)\n\nThat was close enough ;)\n\n> Well, if it's possible to deduce what is the meaning in given line,\n> I can add the logic to do the deduction to parser.\n> \n> Also, I want to say that I appreciate being looped in the discussion.\n\nI lost sight of this thread, so my apologies for the delay. The patch\nto fix the description of the EXPLAIN field has now been applied to\nv15 and v16.\n--\nMichael",
"msg_date": "Thu, 14 Dec 2023 10:02:05 +0100",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
},
{
"msg_contents": "> > Well, if it's possible to deduce what is the meaning in given line,\n> > I can add the logic to do the deduction to parser.\n> > Also, I want to say that I appreciate being looped in the discussion.\n> I lost sight of this thread, so my apologies for the delay. The patch\n> to fix the description of the EXPLAIN field has now been applied to\n> v15 and v16.\n\nThanks. Will do my best to update the parser soon.\n\nBest regards,\n\ndepesz\n\n\n\n",
"msg_date": "Fri, 15 Dec 2023 14:10:58 +0100",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBufferUsage.blk_{read|write}_time are zero although there are\n pgBufferUsage.local_blks_{read|written}"
}
] |
[
{
"msg_contents": "I’m trying to implement a new column store for pg, is there a good example to reference?\n\n",
"msg_date": "Fri, 15 Sep 2023 20:31:06 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Implement a column store for pg?"
},
{
"msg_contents": "> On 15 Sep 2023, at 14:31, jacktby jacktby <[email protected]> wrote:\n> \n> I’m trying to implement a new column store for pg, is there a good example to reference?\n\nThere are open-source forks of postgres that have column-stores, like Greenplum\nfor example. Be sure to check the license and existence of any patents on any\ncode before studying it though. The most recent attempt to make a column-store\nfor PostgreSQL was, IIRC, zedstore. The zedstore thread might give some\ninsights:\n\nhttps://postgr.es/m/CALfoeiuF-m5jg51mJUPm5GN8u396o5sA2AF5N97vTRAEDYac7w@mail.gmail.com\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 15 Sep 2023 14:47:18 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement a column store for pg?"
},
{
"msg_contents": "\n> 2023年9月15日 20:31,jacktby jacktby <[email protected]> 写道:\n> \n> I’m trying to implement a new column store for pg, is there a good example to reference?\nThat’s too complex, I just need to know the interface about design a column store. In fact, I just need a simple example, and I will implement it by myself, what I’m confusing is that, I don’t know how to implement a MVCC, because old version is tuple, this will make a big difference to the transaction? \n\n",
"msg_date": "Fri, 15 Sep 2023 21:13:39 +0800",
"msg_from": "jacktby jacktby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Implement a column store for pg?"
},
{
    "msg_contents": "On Fri, Sep 15, 2023 at 10:21 AM jacktby jacktby <[email protected]> wrote:\n\n> > I’m trying to implement a new column store for pg, is there a good\n> example to reference?\n> That’s too complex, I just need to know the interface about design a\n> column store. In fact, I just need a simple example, and I will implement\n> it by myself, what I’m confusing is that, I don’t know how to implement a\n> MVCC, because old version is tuple, this will make a big difference to the\n> transaction?\n\n\nIf you're looking for the simplest version of a columnar implementation for\nPostgres, I'd check out Citus' original cstore implemented via FDW. It\nhasn't been updated in years, but it's still one of the faster simple\ncolumnar implementations out there https://github.com/citusdata/cstore_fdw\n\n--\nJonah H. Harris",
"msg_date": "Fri, 15 Sep 2023 12:24:47 -0400",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement a column store for pg?"
}
] |
[
{
"msg_contents": "Hi,\n\nI believe SET ROLE documentation makes a slightly incomplete statement\nabout what happens when a superuser uses SET ROLE.\n\nThe documentation reading suggests that the superuser would lose all their\nprivileges. However, they still retain the ability to use `SET ROLE` again.\n\nThe attached patch adds this bit to the documentation.\n\n-- \nY.",
"msg_date": "Fri, 15 Sep 2023 11:26:16 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "SET ROLE documentation improvement"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 11:26:16AM -0700, Yurii Rashkovskii wrote:\n> I believe SET ROLE documentation makes a slightly incomplete statement\n> about what happens when a superuser uses SET ROLE.\n> \n> The documentation reading suggests that the superuser would lose all their\n> privileges. However, they still retain the ability to use `SET ROLE` again.\n> \n> The attached patch adds this bit to the documentation.\n\nIMO this is arguably covered by the following note:\n\n The specified <replaceable class=\"parameter\">role_name</replaceable>\n must be a role that the current session user is a member of.\n (If the session user is a superuser, any role can be selected.)\n\nBut I don't see a big issue with clarifying things further as you propose.\n\nI think another issue is that the aforementioned note doesn't mention the\nnew SET option added in 3d14e17.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 15 Sep 2023 13:47:11 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
    "msg_contents": "On Fri, Sep 15, 2023 at 1:47 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Fri, Sep 15, 2023 at 11:26:16AM -0700, Yurii Rashkovskii wrote:\n> > I believe SET ROLE documentation makes a slightly incomplete statement\n> > about what happens when a superuser uses SET ROLE.\n> >\n> > The documentation reading suggests that the superuser would lose all\n> their\n> > privileges. However, they still retain the ability to use `SET ROLE`\n> again.\n> >\n> > The attached patch adds this bit to the documentation.\n>\n> IMO this is arguably covered by the following note:\n>\n> The specified <replaceable class=\"parameter\">role_name</replaceable>\n> must be a role that the current session user is a member of.\n> (If the session user is a superuser, any role can be selected.)\n>\n>\nI agree that this may be considered sufficient coverage, but I believe that\ngiving contextual clarification goes a long way to help people understand.\nDocumentation reading can be challenging.\n\n\n> But I don't see a big issue with clarifying things further as you propose.\n>\n> I think another issue is that the aforementioned note doesn't mention the\n> new SET option added in 3d14e17.\n>\n\nHow do you think we should word it in that note to make it useful?\n\n\n-- \nY.",
"msg_date": "Fri, 15 Sep 2023 14:36:16 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 02:36:16PM -0700, Yurii Rashkovskii wrote:\n> On Fri, Sep 15, 2023 at 1:47 PM Nathan Bossart <[email protected]>\n> wrote:\n>> I think another issue is that the aforementioned note doesn't mention the\n>> new SET option added in 3d14e17.\n> \n> How do you think we should word it in that note to make it useful?\n\nMaybe something like this:\n\n\tThe current session user must have the SET option for the specified\n\trole_name, either directly or indirectly via a chain of memberships\n\twith the SET option.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 15:09:45 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 3:09 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Fri, Sep 15, 2023 at 02:36:16PM -0700, Yurii Rashkovskii wrote:\n> > On Fri, Sep 15, 2023 at 1:47 PM Nathan Bossart <[email protected]\n> >\n> > wrote:\n> >> I think another issue is that the aforementioned note doesn't mention\n> the\n> >> new SET option added in 3d14e17.\n> >\n> > How do you think we should word it in that note to make it useful?\n>\n> Maybe something like this:\n>\n> The current session user must have the SET option for the specified\n> role_name, either directly or indirectly via a chain of memberships\n> with the SET option.\n>\n\nThis is a good start, indeed. I've amended my patch to include it.\n\n-- \nY.",
"msg_date": "Tue, 26 Sep 2023 08:33:25 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
    "msg_contents": "On Tue, Sep 26, 2023 at 08:33:25AM -0700, Yurii Rashkovskii wrote:\n> This is a good start, indeed. I've amended my patch to include it.\n\nThanks for the new patch.\n\nLooking again, I'm kind of hesitant to add too much qualification to this\nnote about losing superuser privileges. If we changed it to\n\n\tNote that when a superuser chooses to SET ROLE to a non-superuser role,\n\tthey lose their superuser privileges, except for the privilege to\n\tchange to another role again using SET ROLE or RESET ROLE.\n\nit almost seems to imply that a non-superuser role could obtain the ability\nto switch to any role if they first SET ROLE to a superuser. In practice,\nthat's true because they could just give the session role SUPERUSER, but I\ndon't think that's the intent of this section.\n\nI thought about changing it to something like\n\n\tNote that when a superuser chooses to SET ROLE to a non-superuser role,\n\tthey lose their superuser privileges. However, if the current session\n\tuser is a superuser, they retain the ability to set the current user\n\tidentifier to any role via SET ROLE and RESET ROLE.\n\nbut it seemed weird to me to single out superusers here when it's always\ntrue that the current session user retains the ability to SET ROLE to any\nrole they have the SET option on. That is already covered above in the\n\"Description\" section, so I don't really see the need to belabor the point\nby adding qualifications to the \"Notes\" section. ISTM the point of these\ncouple of paragraphs in the \"Notes\" section is to explain the effects on\nprivileges for schemas, tables, etc.\n\nI still think we should update the existing note about privileges for\nSET/RESET ROLE to something like the following:\n\ndiff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml\nindex 13bad1bf66..c91a95f5af 100644\n--- a/doc/src/sgml/ref/set_role.sgml\n+++ b/doc/src/sgml/ref/set_role.sgml\n@@ -41,8 +41,10 @@ RESET ROLE\n </para>\n \n <para>\n- The specified <replaceable class=\"parameter\">role_name</replaceable>\n- must be a role that the current session user is a member of.\n+ The current session user must have the <literal>SET</option> for the\n+ specified <replaceable class=\"parameter\">role_name</replaceable>, either\n+ directly or indirectly via a chain of memberships with the\n+ <literal>SET</literal> option.\n (If the session user is a superuser, any role can be selected.)\n </para>\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 10 Nov 2023 11:41:07 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
    "msg_contents": "On Fri, Nov 10, 2023 at 11:11 PM Nathan Bossart\n<[email protected]> wrote:\n>\n> On Tue, Sep 26, 2023 at 08:33:25AM -0700, Yurii Rashkovskii wrote:\n> > This is a good start, indeed. I've amended my patch to include it.\n>\n> Thanks for the new patch.\n>\n> Looking again, I'm kind of hesitant to add too much qualification to this\n> note about losing superuser privileges. If we changed it to\n>\n> Note that when a superuser chooses to SET ROLE to a non-superuser role,\n> they lose their superuser privileges, except for the privilege to\n> change to another role again using SET ROLE or RESET ROLE.\n>\n> it almost seems to imply that a non-superuser role could obtain the ability\n> to switch to any role if they first SET ROLE to a superuser. In practice,\n> that's true because they could just give the session role SUPERUSER, but I\n> don't think that's the intent of this section.\n>\n> I thought about changing it to something like\n>\n> Note that when a superuser chooses to SET ROLE to a non-superuser role,\n> they lose their superuser privileges. However, if the current session\n> user is a superuser, they retain the ability to set the current user\n> identifier to any role via SET ROLE and RESET ROLE.\n>\n> but it seemed weird to me to single out superusers here when it's always\n> true that the current session user retains the ability to SET ROLE to any\n> role they have the SET option on. That is already covered above in the\n> \"Description\" section, so I don't really see the need to belabor the point\n> by adding qualifications to the \"Notes\" section. ISTM the point of these\n> couple of paragraphs in the \"Notes\" section is to explain the effects on\n> privileges for schemas, tables, etc.\n>\n> I still think we should update the existing note about privileges for\n> SET/RESET ROLE to something like the following:\n>\n> diff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml\n> index 13bad1bf66..c91a95f5af 100644\n> --- a/doc/src/sgml/ref/set_role.sgml\n> +++ b/doc/src/sgml/ref/set_role.sgml\n> @@ -41,8 +41,10 @@ RESET ROLE\n> </para>\n>\n> <para>\n> - The specified <replaceable class=\"parameter\">role_name</replaceable>\n> - must be a role that the current session user is a member of.\n> + The current session user must have the <literal>SET</option> for the\n> + specified <replaceable class=\"parameter\">role_name</replaceable>, either\n> + directly or indirectly via a chain of memberships with the\n> + <literal>SET</literal> option.\n> (If the session user is a superuser, any role can be selected.)\n> </para>\n>\n> --\n> I have Reviewed the patch. Patch applies neatly without any issues. Documentation build was successful and there was no Spell-check issue also. I did not find any issues. The patch looks good to me.\n>\n>Thanks and Regards,\n>Shubham Khanna.",
"msg_date": "Tue, 5 Dec 2023 10:50:40 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 12:41 PM Nathan Bossart\n<[email protected]> wrote:\n> On Tue, Sep 26, 2023 at 08:33:25AM -0700, Yurii Rashkovskii wrote:\n> > This is a good start, indeed. I've amended my patch to include it.\n>\n> Thanks for the new patch.\n>\n> Looking again, I'm kind of hesitant to add too much qualification to this\n> note about losing superuser privileges.\n\nThe note in question is:\n\n <para>\n Note that when a superuser chooses to <command>SET ROLE</command> to a\n non-superuser role, they lose their superuser privileges.\n </para>\n\nIt's not entirely clear to me what the point of this note is. I think\nwe could consider removing it entirely, on the theory that it's just\npoorly-stated special case of what's already been said in the\ndescription, i.e. \"permissions checking for SQL commands is carried\nout as though the named role were the one that had logged in\noriginally\" and \"The specified <replaceable\nclass=\"parameter\">role_name</replaceable> must be a role that the\ncurrent session user is a member of.\"\n\nI think it's also possible that what the author of this paragraph\nmeant was that role attributes like CREATEDB, CREATEROLE, REPLICATION,\nand SUPERUSER follow the current user, not the session user. If we\nthink that was the point of this paragraph, we could make it say that\nmore clearly. 
However, I'm not sure that really needs to be mentioned,\nbecause \"permissions checking for SQL commands is carried out as\nthough the named role were the one that had logged in originally\"\nseems to cover that ground along with everything else.\n\n> I still think we should update the existing note about privileges for\n> SET/RESET ROLE to something like the following:\n>\n> diff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml\n> index 13bad1bf66..c91a95f5af 100644\n> --- a/doc/src/sgml/ref/set_role.sgml\n> +++ b/doc/src/sgml/ref/set_role.sgml\n> @@ -41,8 +41,10 @@ RESET ROLE\n> </para>\n>\n> <para>\n> - The specified <replaceable class=\"parameter\">role_name</replaceable>\n> - must be a role that the current session user is a member of.\n> + The current session user must have the <literal>SET</option> for the\n> + specified <replaceable class=\"parameter\">role_name</replaceable>, either\n> + directly or indirectly via a chain of memberships with the\n> + <literal>SET</literal> option.\n> (If the session user is a superuser, any role can be selected.)\n> </para>\n\nThis is a good change; I should have done this when SET was added.\n\nAnother change we could consider is revising \"permissions checking for\nSQL commands is carried out as though the named role were the one that\nhad logged in originally\" to mention that SET ROLE and SET SESSION\nAUTHORIZATION are exceptions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 15:58:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 03:58:59PM -0400, Robert Haas wrote:\n> On Fri, Nov 10, 2023 at 12:41 PM Nathan Bossart\n> <[email protected]> wrote:\n>> Looking again, I'm kind of hesitant to add too much qualification to this\n>> note about losing superuser privileges.\n> \n> The note in question is:\n> \n> <para>\n> Note that when a superuser chooses to <command>SET ROLE</command> to a\n> non-superuser role, they lose their superuser privileges.\n> </para>\n> \n> It's not entirely clear to me what the point of this note is. I think\n> we could consider removing it entirely, on the theory that it's just\n> poorly-stated special case of what's already been said in the\n> description, i.e. \"permissions checking for SQL commands is carried\n> out as though the named role were the one that had logged in\n> originally\" and \"The specified <replaceable\n> class=\"parameter\">role_name</replaceable> must be a role that the\n> current session user is a member of.\"\n\n+1. IMHO these kinds of special mentions of SUPERUSER tend to be\nredundant, and, as evidenced by this thread, confusing. I'll update the\npatch.\n\n> I think it's also possible that what the author of this paragraph\n> meant was that role attributes like CREATEDB, CREATEROLE, REPLICATION,\n> and SUPERUSER follow the current user, not the session user. If we\n> think that was the point of this paragraph, we could make it say that\n> more clearly. 
However, I'm not sure that really needs to be mentioned,\n> because \"permissions checking for SQL commands is carried out as\n> though the named role were the one that had logged in originally\"\n> seems to cover that ground along with everything else.\n\n+1\n\n>> I still think we should update the existing note about privileges for\n>> SET/RESET ROLE to something like the following:\n>>\n>> diff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml\n>> index 13bad1bf66..c91a95f5af 100644\n>> --- a/doc/src/sgml/ref/set_role.sgml\n>> +++ b/doc/src/sgml/ref/set_role.sgml\n>> @@ -41,8 +41,10 @@ RESET ROLE\n>> </para>\n>>\n>> <para>\n>> - The specified <replaceable class=\"parameter\">role_name</replaceable>\n>> - must be a role that the current session user is a member of.\n>> + The current session user must have the <literal>SET</option> for the\n>> + specified <replaceable class=\"parameter\">role_name</replaceable>, either\n>> + directly or indirectly via a chain of memberships with the\n>> + <literal>SET</literal> option.\n>> (If the session user is a superuser, any role can be selected.)\n>> </para>\n> \n> This is a good change; I should have done this when SET was added.\n\nCool.\n\n> Another change we could consider is revising \"permissions checking for\n> SQL commands is carried out as though the named role were the one that\n> had logged in originally\" to mention that SET ROLE and SET SESSION\n> AUTHORIZATION are exceptions.\n\nThat seems like a resonable idea, although it might require a few rounds of\nwordsmithing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 15:45:06 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 03:45:06PM -0500, Nathan Bossart wrote:\n> On Fri, Mar 22, 2024 at 03:58:59PM -0400, Robert Haas wrote:\n>> On Fri, Nov 10, 2023 at 12:41 PM Nathan Bossart\n>> <[email protected]> wrote:\n>>> I still think we should update the existing note about privileges for\n>>> SET/RESET ROLE to something like the following:\n>>>\n>>> diff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml\n>>> index 13bad1bf66..c91a95f5af 100644\n>>> --- a/doc/src/sgml/ref/set_role.sgml\n>>> +++ b/doc/src/sgml/ref/set_role.sgml\n>>> @@ -41,8 +41,10 @@ RESET ROLE\n>>> </para>\n>>>\n>>> <para>\n>>> - The specified <replaceable class=\"parameter\">role_name</replaceable>\n>>> - must be a role that the current session user is a member of.\n>>> + The current session user must have the <literal>SET</option> for the\n>>> + specified <replaceable class=\"parameter\">role_name</replaceable>, either\n>>> + directly or indirectly via a chain of memberships with the\n>>> + <literal>SET</literal> option.\n>>> (If the session user is a superuser, any role can be selected.)\n>>> </para>\n>> \n>> This is a good change; I should have done this when SET was added.\n> \n> Cool.\n\nActually, shouldn't this one be back-patched to v16? If so, I'd do that\none separately from the other changes we are discussing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 15:51:34 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 4:51 PM Nathan Bossart <[email protected]> wrote:\n> Actually, shouldn't this one be back-patched to v16? If so, I'd do that\n> one separately from the other changes we are discussing.\n\nSure, that seems fine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 23 Mar 2024 08:37:20 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Sat, Mar 23, 2024 at 08:37:20AM -0400, Robert Haas wrote:\n> On Fri, Mar 22, 2024 at 4:51 PM Nathan Bossart <[email protected]> wrote:\n>> Actually, shouldn't this one be back-patched to v16? If so, I'd do that\n>> one separately from the other changes we are discussing.\n> \n> Sure, that seems fine.\n\nCommitted that part.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 24 Mar 2024 15:34:49 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "\n\n> On 24 Mar 2024, at 23:34, Nathan Bossart <[email protected]> wrote:\n> \n> Committed that part.\n\nHi Nathan and Yurii!\n\nCan I ask you please to help me with determining status of CF item [0]. Is it committed or there's something to move to next CF?\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/47/4572/\n\n",
"msg_date": "Tue, 9 Apr 2024 09:21:39 +0300",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Tue, Apr 09, 2024 at 09:21:39AM +0300, Andrey M. Borodin wrote:\n> Can I ask you please to help me with determining status of CF item\n> [0]. Is it committed or there's something to move to next CF?\n\nOnly half of the patch has been applied as of 3330a8d1b792. Yurii and\nNathan, could you follow up with the rest? Moving the patch to the\nnext CF makes sense, and the last thread update is rather recent.\n--\nMichael",
"msg_date": "Thu, 11 Apr 2024 10:33:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 10:33:15AM +0900, Michael Paquier wrote:\n> On Tue, Apr 09, 2024 at 09:21:39AM +0300, Andrey M. Borodin wrote:\n>> Can I ask you please to help me with determining status of CF item\n>> [0]. Is it committed or there's something to move to next CF?\n> \n> Only half of the patch has been applied as of 3330a8d1b792. Yurii and\n> Nathan, could you follow up with the rest? Moving the patch to the\n> next CF makes sense, and the last thread update is rather recent.\n\nAFAICT there are two things remaining:\n\n* Remove the \"they lose their superuser privileges\" note.\n* Note that SET ROLE and SET SESSION AUTHORIZATION are exceptions.\n\nWhile I think these are good changes, I don't sense any urgency here, so\nI'm treating this as v18 material at this point.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 09:03:32 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 10:03 AM Nathan Bossart\n<[email protected]> wrote:\n> On Thu, Apr 11, 2024 at 10:33:15AM +0900, Michael Paquier wrote:\n> > On Tue, Apr 09, 2024 at 09:21:39AM +0300, Andrey M. Borodin wrote:\n> >> Can I ask you please to help me with determining status of CF item\n> >> [0]. Is it committed or there's something to move to next CF?\n> >\n> > Only half of the patch has been applied as of 3330a8d1b792. Yurii and\n> > Nathan, could you follow up with the rest? Moving the patch to the\n> > next CF makes sense, and the last thread update is rather recent.\n>\n> AFAICT there are two things remaining:\n>\n> * Remove the \"they lose their superuser privileges\" note.\n> * Note that SET ROLE and SET SESSION AUTHORIZATION are exceptions.\n>\n> While I think these are good changes, I don't sense any urgency here, so\n> I'm treating this as v18 material at this point.\n\nI suggest that we close the existing CF entry as committed and\nsomebody can start a new one for whatever remains. I think that will\nbe less confusing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 11:36:52 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 11:36:52AM -0400, Robert Haas wrote:\n> I suggest that we close the existing CF entry as committed and\n> somebody can start a new one for whatever remains. I think that will\n> be less confusing.\n\nDone: https://commitfest.postgresql.org/48/4923/.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 11:48:30 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 11:48:30AM -0500, Nathan Bossart wrote:\n> On Thu, Apr 11, 2024 at 11:36:52AM -0400, Robert Haas wrote:\n>> I suggest that we close the existing CF entry as committed and\n>> somebody can start a new one for whatever remains. I think that will\n>> be less confusing.\n> \n> Done: https://commitfest.postgresql.org/48/4923/.\n\nWhile it's fresh on my mind, I very hastily hacked together a draft of what\nI believe is the remaining work.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 11 Apr 2024 12:03:12 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 1:03 PM Nathan Bossart <[email protected]> wrote:\n> On Thu, Apr 11, 2024 at 11:48:30AM -0500, Nathan Bossart wrote:\n> > On Thu, Apr 11, 2024 at 11:36:52AM -0400, Robert Haas wrote:\n> >> I suggest that we close the existing CF entry as committed and\n> >> somebody can start a new one for whatever remains. I think that will\n> >> be less confusing.\n> >\n> > Done: https://commitfest.postgresql.org/48/4923/.\n>\n> While it's fresh on my mind, I very hastily hacked together a draft of what\n> I believe is the remaining work.\n\nThat looks fine to me. And if others agree, I think it's fine to just\ncommit this now, post-freeze. It's only a doc change, and a\nback-patchable one if you want to go that route.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 13:38:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 01:38:37PM -0400, Robert Haas wrote:\n> On Thu, Apr 11, 2024 at 1:03 PM Nathan Bossart <[email protected]> wrote:\n>> While it's fresh on my mind, I very hastily hacked together a draft of what\n>> I believe is the remaining work.\n> \n> That looks fine to me. And if others agree, I think it's fine to just\n> commit this now, post-freeze. It's only a doc change, and a\n> back-patchable one if you want to go that route.\n\nNo objections here. I'll give this a few days for others to comment. I'm\nnot particularly interested in back-patching this since it's arguably not\nfixing anything that's incorrect, but if anyone really wants me to, I will.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 11 Apr 2024 14:21:49 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 02:21:49PM -0500, Nathan Bossart wrote:\n> No objections here. I'll give this a few days for others to comment. I'm\n> not particularly interested in back-patching this since it's arguably not\n> fixing anything that's incorrect, but if anyone really wants me to, I will.\n\nHEAD looks fine based on what I'm reading in the patch. If there are\nmore voices in favor of a backpatch, it could always happen later.\n--\nMichael",
"msg_date": "Fri, 12 Apr 2024 09:54:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
},
{
"msg_contents": "On Fri, Apr 12, 2024 at 09:54:24AM +0900, Michael Paquier wrote:\n> On Thu, Apr 11, 2024 at 02:21:49PM -0500, Nathan Bossart wrote:\n>> No objections here. I'll give this a few days for others to comment. I'm\n>> not particularly interested in back-patching this since it's arguably not\n>> fixing anything that's incorrect, but if anyone really wants me to, I will.\n> \n> HEAD looks fine based on what I'm reading in the patch. If there are\n> more voices in favor of a backpatch, it could always happen later.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 15 Apr 2024 14:11:44 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SET ROLE documentation improvement"
}
] |
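The SET-option membership rule settled on in the thread above (committed as 3330a8d1b792) can be sketched with a few SQL statements. Role names here are hypothetical, and the syntax is as of PostgreSQL 16:

```sql
-- The committed wording: the current session user must hold the SET option
-- on the target role, either directly or via a chain of memberships.
CREATE ROLE alice LOGIN;
CREATE ROLE staff;
CREATE ROLE reporting;

GRANT staff TO alice WITH SET TRUE;      -- direct membership with SET option
GRANT reporting TO staff WITH SET TRUE;  -- second link in the chain

-- Connected as alice:
SET ROLE staff;       -- allowed: direct SET-option membership
SET ROLE reporting;   -- allowed: indirect, via the chain through staff
RESET ROLE;           -- always returns to the session user

-- A membership granted WITH SET FALSE confers the role's privileges
-- (if inherited) but does not permit SET ROLE to that role.
```

As the parenthetical note in the patched documentation says, a superuser session user can select any role regardless of these memberships.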
[
{
"msg_contents": "Hi,\n\nIt appears that 16.0 improved some of the checks in ALTER ROLE. Previously,\nit was possible to do the following (assuming current_user is a bootstrap\nuser):\n\n```\nALTER ROLE current_user NOSUPERUSER\n```\n\nAs of 16.0, this produces an error:\n\n```\nERROR: permission denied to alter role\nDETAIL: The bootstrap user must have the SUPERUSER attribute.\n```\n\nThe attached patch documents this behavior by providing a bit more\nclarification to the following statement:\n\n\"Database superusers can change any of these settings for any role.\"\n\n\n-- \nY.",
"msg_date": "Fri, 15 Sep 2023 11:46:35 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "ALTER ROLE documentation improvement"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 11:46:35AM -0700, Yurii Rashkovskii wrote:\n> It appears that 16.0 improved some of the checks in ALTER ROLE. Previously,\n> it was possible to do the following (assuming current_user is a bootstrap\n> user):\n> \n> ```\n> ALTER ROLE current_user NOSUPERUSER\n> ```\n> \n> As of 16.0, this produces an error:\n> \n> ```\n> ERROR: permission denied to alter role\n> DETAIL: The bootstrap user must have the SUPERUSER attribute.\n> ```\n> \n> The attached patch documents this behavior by providing a bit more\n> clarification to the following statement:\n> \n> \"Database superusers can change any of these settings for any role.\"\n\nI think this could also be worth a mention in the glossary [0]. BTW the\nglossary calls this role the \"bootstrap superuser\", but the DETAIL message\ncalls it the \"bootstrap user\". Perhaps we should standardize on one name.\n\n[0] https://www.postgresql.org/docs/devel/glossary.html\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 15 Sep 2023 13:53:01 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER ROLE documentation improvement"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 1:53 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Fri, Sep 15, 2023 at 11:46:35AM -0700, Yurii Rashkovskii wrote:\n> > It appears that 16.0 improved some of the checks in ALTER ROLE.\n> Previously,\n> > it was possible to do the following (assuming current_user is a bootstrap\n> > user):\n> >\n> > ```\n> > ALTER ROLE current_user NOSUPERUSER\n> > ```\n> >\n> > As of 16.0, this produces an error:\n> >\n> > ```\n> > ERROR: permission denied to alter role\n> > DETAIL: The bootstrap user must have the SUPERUSER attribute.\n> > ```\n> >\n> > The attached patch documents this behavior by providing a bit more\n> > clarification to the following statement:\n> >\n> > \"Database superusers can change any of these settings for any role.\"\n>\n> I think this could also be worth a mention in the glossary [0]. BTW the\n> glossary calls this role the \"bootstrap superuser\", but the DETAIL message\n> calls it the \"bootstrap user\". Perhaps we should standardize on one name.\n>\n> [0] https://www.postgresql.org/docs/devel/glossary.html\n>\n>\nThank you for the feedback. I've updated the glossary and updated the\nterminology to be consistent. Please see the new patch attached.\n\n-- \nY.",
"msg_date": "Fri, 15 Sep 2023 14:25:38 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ALTER ROLE documentation improvement"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 02:25:38PM -0700, Yurii Rashkovskii wrote:\n> Thank you for the feedback. I've updated the glossary and updated the\n> terminology to be consistent. Please see the new patch attached.\n\nThanks for the new version of the patch.\n\n This user owns all system catalog tables in each database. It is also the role\n from which all granted permissions originate. Because of these things, this\n- role may not be dropped.\n+ role may not be dropped. This role must always be a superuser, it can't be changed\n+ to be a non-superuser.\n\nI think we should move this note to the sentence just below that mentions\nits superuserness. Maybe it could look something like this:\n\n\tThis role also behaves as a normal database superuser, and its\n\tsuperuser status cannot be revoked.\n\n+ Database superusers can change any of these settings for any role, except for\n+ changing <literal>SUPERUSER</literal> to <literal>NOSUPERUSER</literal>\n+ for a <glossterm linkend=\"glossary-bootstrap-superuser\">bootstrap superuser</glossterm>.\n\nnitpick: s/a bootstrap superuser/the bootstrap superuser\n\n #: commands/user.c:871\n #, c-format\n-msgid \"The bootstrap user must have the %s attribute.\"\n+msgid \"The bootstrap superuser must have the %s attribute.\"\n msgstr \"Der Bootstrap-Benutzer muss das %s-Attribut haben.\"\n\nNo need to update the translation files. Those are updated separately in\nthe pgtranslation repo.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 14:59:18 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER ROLE documentation improvement"
},
{
"msg_contents": "On Tue, 26 Sept 2023 at 04:38, Nathan Bossart <[email protected]> wrote:\n>\n> On Fri, Sep 15, 2023 at 02:25:38PM -0700, Yurii Rashkovskii wrote:\n> > Thank you for the feedback. I've updated the glossary and updated the\n> > terminology to be consistent. Please see the new patch attached.\n>\n> Thanks for the new version of the patch.\n>\n> This user owns all system catalog tables in each database. It is also the role\n> from which all granted permissions originate. Because of these things, this\n> - role may not be dropped.\n> + role may not be dropped. This role must always be a superuser, it can't be changed\n> + to be a non-superuser.\n>\n> I think we should move this note to the sentence just below that mentions\n> its superuserness. Maybe it could look something like this:\n>\n> This role also behaves as a normal database superuser, and its\n> superuser status cannot be revoked.\n\nModified\n\n> + Database superusers can change any of these settings for any role, except for\n> + changing <literal>SUPERUSER</literal> to <literal>NOSUPERUSER</literal>\n> + for a <glossterm linkend=\"glossary-bootstrap-superuser\">bootstrap superuser</glossterm>.\n>\n> nitpick: s/a bootstrap superuser/the bootstrap superuser\n\nModified\n\n> #: commands/user.c:871\n> #, c-format\n> -msgid \"The bootstrap user must have the %s attribute.\"\n> +msgid \"The bootstrap superuser must have the %s attribute.\"\n> msgstr \"Der Bootstrap-Benutzer muss das %s-Attribut haben.\"\n>\n> No need to update the translation files. Those are updated separately in\n> the pgtranslation repo.\n\nRemoved the translation changes\n\nThe attached v3 version patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Sun, 14 Jan 2024 16:17:41 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER ROLE documentation improvement"
},
{
"msg_contents": "On Sun, Jan 14, 2024 at 04:17:41PM +0530, vignesh C wrote:\n> The attached v3 version patch has the changes for the same.\n\nLGTM. I'll wait a little while longer for additional feedback, but if none\nmaterializes, I'll commit this soon.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 14 Jan 2024 19:59:41 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER ROLE documentation improvement"
},
{
"msg_contents": "On Sun, Jan 14, 2024 at 6:59 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Sun, Jan 14, 2024 at 04:17:41PM +0530, vignesh C wrote:\n> > The attached v3 version patch has the changes for the same.\n>\n> LGTM. I'll wait a little while longer for additional feedback, but if none\n> materializes, I'll commit this soon.\n>\n>\nLGTM too. I didn't go looking for anything else related to this, but the\nproposed changes all look needed.\n\nDavid J.\n\nOn Sun, Jan 14, 2024 at 6:59 PM Nathan Bossart <[email protected]> wrote:On Sun, Jan 14, 2024 at 04:17:41PM +0530, vignesh C wrote:\n> The attached v3 version patch has the changes for the same.\n\nLGTM. I'll wait a little while longer for additional feedback, but if none\nmaterializes, I'll commit this soon.LGTM too. I didn't go looking for anything else related to this, but the proposed changes all look needed.David J.",
"msg_date": "Thu, 18 Jan 2024 14:44:35 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER ROLE documentation improvement"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 02:44:35PM -0700, David G. Johnston wrote:\n> LGTM too. I didn't go looking for anything else related to this, but the\n> proposed changes all look needed.\n\nCommitted. Thanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 18 Jan 2024 21:42:39 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ALTER ROLE documentation improvement"
}
] |
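The 16.0 behavior that prompted the ALTER ROLE thread above can be shown in a short transcript. The error text is as quoted in the thread's first message, and the bootstrap superuser is assumed to have the usual default name postgres:

```sql
-- As the bootstrap superuser in PostgreSQL 16+:
ALTER ROLE CURRENT_USER NOSUPERUSER;
-- ERROR:  permission denied to alter role
-- DETAIL:  The bootstrap user must have the SUPERUSER attribute.

-- Other attributes of the bootstrap superuser can still be changed, and
-- any other role may be made NOSUPERUSER as usual:
ALTER ROLE some_other_admin NOSUPERUSER;  -- hypothetical role, succeeds
```

This is the restriction the committed documentation change spells out: the bootstrap superuser's superuser status cannot be revoked.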
[
{
"msg_contents": "Hi,\n\nAt one time 12 years ago, fn_collation was stored in FmgrInfo,\nwith a comment saying it was really \"parse-time-determined\ninformation about the arguments, rather than about the function\nitself\" but saying \"it's convenient\" to store it in FmgrInfo\nrather than in FunctionCallInfoData.\n\nBut in d64713d, fn_collation was booted out of FmgrInfo and into\nFunctionCallInfoData, with this commit comment: \"Since collation\nis effectively an argument, not a property of the function, FmgrInfo\nis really the wrong place for it; and this becomes critical in cases\nwhere a cached FmgrInfo is used for varying purposes that might need\ndifferent collation settings.\"\n\nHowever, fn_expr is still there in FmgrInfo, with exactly the same\nvague rationale in the comment about being \"convenient\" to keep in\nFmgrInfo rather than in the function call info where it might more\nlogically belong.\n\nIs there a good quick story for why that's ok for fn_expr and not\nfor fn_collation? In particular, what can I count on?\n\nCan I count on, if a FmgrInfo has a non-null fn_expr, that all\nforthcoming function calls based on it will have:\n\n- the same value for fcinfo->nargs ? (matching the count of fn_expr)\n- the same arg types (and same polymorphic arg resolved types) ?\n\nThen what about fcinfo->fn_collation? Can it vary from one call\nto another in those circumstances? Or can that only happen when\nfn_expr is null, and a cached FmgrInfo is being used for varying\npurposes /other than/ \"as part of an SQL expression\"?\n\nAre there ever circumstances where a FmgrInfo with a non-null\nfn_expr is reused with a different fn_expr?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 15 Sep 2023 16:39:42 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": true,
"msg_subject": "semantics of \"convenient to store\" in FmgrInfo ?"
}
] |
[
{
"msg_contents": "I wrote a patch to change psql's display of zero privileges after a user's\nreported confusion with the psql output for zero vs. default privileges [1].\nAdmittedly, zero privileges is a rare use case [2] but I think psql should not\nconfuse the user in the off chance that this happens.\n\nWith this change psql now prints \"(none)\" for zero privileges instead of\nnothing. This affects the following meta commands:\n\n \\db+ \\dD+ \\df+ \\dL+ \\dl+ \\dn+ \\dp \\dT+ \\l\n\nDefault privileges start as NULL::aclitem[] in various catalog columns but\nrevoking the default privileges leaves an empty aclitem array. Using\n\\pset null '(null)' as a workaround to spot default privileges does not work\nbecause the meta commands ignore this setting.\n\nThe privileges shown by \\dconfig+ and \\ddp as well as the column privileges\nshown by \\dp are not affected by this change because those privileges are reset\nto NULL instead of leaving empty arrays.\n\nCommands \\des+ and \\dew+ are not covered in src/test/regress because no foreign\ndata wrapper is available at this point to create a foreign server.\n\n[1] https://www.postgresql.org/message-id/efdd465d-a795-6188-7f71-7cdb4b2be031%40mtneva.com\n[2] https://www.postgresql.org/message-id/31246.1693337238%40sss.pgh.pa.us\n\n--\nErik",
"msg_date": "Sun, 17 Sep 2023 21:31:50 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix output of zero privileges in psql"
},
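The NULL-versus-empty-aclitem[] distinction described in the message above can be reproduced as follows. This is a sketch: \dn+ is one of the affected meta-commands, and the schema owner is assumed to be the current user:

```sql
CREATE SCHEMA demo;
\dn+ demo
-- "Access privileges" is blank: pg_namespace.nspacl is NULL, so the
-- built-in default privileges apply.

REVOKE ALL ON SCHEMA demo FROM CURRENT_USER, PUBLIC;
\dn+ demo
-- nspacl is now an empty aclitem array, i.e. zero privileges, yet the
-- column is still blank -- indistinguishable from the default case,
-- which is the confusion the patch addresses.

SELECT nspacl IS NULL AS is_default, nspacl::text
FROM pg_namespace WHERE nspname = 'demo';
-- per the thread's description, revoking the defaults leaves '{}'
-- rather than NULL, so is_default is false here
```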
{
"msg_contents": "On 17/09/2023 21:31 CEST Erik Wienhold <[email protected]> wrote:\n\n> This affects the following meta commands:\n>\n> \\db+ \\dD+ \\df+ \\dL+ \\dl+ \\dn+ \\dp \\dT+ \\l\n\nalso \\des+ and \\dew+\n\n--\nErik\n\n\n",
"msg_date": "Sun, 17 Sep 2023 21:37:10 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Sun, 2023-09-17 at 21:31 +0200, Erik Wienhold wrote:\n> I wrote a patch to change psql's display of zero privileges after a user's\n> reported confusion with the psql output for zero vs. default privileges [1].\n> Admittedly, zero privileges is a rare use case [2] but I think psql should not\n> confuse the user in the off chance that this happens.\n> \n> With this change psql now prints \"(none)\" for zero privileges instead of\n> nothing. This affects the following meta commands:\n> \n> \\db+ \\dD+ \\df+ \\dL+ \\dl+ \\dn+ \\dp \\dT+ \\l\n> \n> Default privileges start as NULL::aclitem[] in various catalog columns but\n> revoking the default privileges leaves an empty aclitem array. Using\n> \\pset null '(null)' as a workaround to spot default privileges does not work\n> because the meta commands ignore this setting.\n> \n> The privileges shown by \\dconfig+ and \\ddp as well as the column privileges\n> shown by \\dp are not affected by this change because those privileges are reset\n> to NULL instead of leaving empty arrays.\n> \n> Commands \\des+ and \\dew+ are not covered in src/test/regress because no foreign\n> data wrapper is available at this point to create a foreign server.\n> \n> [1] https://www.postgresql.org/message-id/efdd465d-a795-6188-7f71-7cdb4b2be031%40mtneva.com\n> [2] https://www.postgresql.org/message-id/31246.1693337238%40sss.pgh.pa.us\n\nReading that thread, I had the impression that there was more support for\nhonoring \"\\pset null\" rather than unconditionally displaying \"(none)\".\n\nThe simple attached patch does it like that. What do you think?\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 06 Oct 2023 22:32:41 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-10-06 22:32 +0200, Laurenz Albe write:\n> On Sun, 2023-09-17 at 21:31 +0200, Erik Wienhold wrote:\n> > I wrote a patch to change psql's display of zero privileges after a user's\n> > reported confusion with the psql output for zero vs. default privileges [1].\n> > Admittedly, zero privileges is a rare use case [2] but I think psql should not\n> > confuse the user in the off chance that this happens.\n> > \n> > With this change psql now prints \"(none)\" for zero privileges instead of\n> > nothing. This affects the following meta commands:\n> > \n> > \\db+ \\dD+ \\df+ \\dL+ \\dl+ \\dn+ \\dp \\dT+ \\l\n> > \n> > Default privileges start as NULL::aclitem[] in various catalog columns but\n> > revoking the default privileges leaves an empty aclitem array. Using\n> > \\pset null '(null)' as a workaround to spot default privileges does not work\n> > because the meta commands ignore this setting.\n> > \n> > The privileges shown by \\dconfig+ and \\ddp as well as the column privileges\n> > shown by \\dp are not affected by this change because those privileges are reset\n> > to NULL instead of leaving empty arrays.\n> > \n> > Commands \\des+ and \\dew+ are not covered in src/test/regress because no foreign\n> > data wrapper is available at this point to create a foreign server.\n> > \n> > [1] https://www.postgresql.org/message-id/efdd465d-a795-6188-7f71-7cdb4b2be031%40mtneva.com\n> > [2] https://www.postgresql.org/message-id/31246.1693337238%40sss.pgh.pa.us\n> \n> Reading that thread, I had the impression that there was more support for\n> honoring \"\\pset null\" rather than unconditionally displaying \"(none)\".\n\nI took Tom's response in the -general thread to mean that we could fix\n\\pset null also as a \"nice to have\" but not as a solution to the display\nof zero privileges.\n\nOnly fixing \\pset null has one drawback IMO because it only affects how\ndefault privileges (more common) are printed. 
The edge case of zero\nprivileges (less common) gets lost in a bunch of NULL output. And I\nassume most users change the default \\pset null to some non-empty string\nin their psqlrc (I do).\n\nFor example with your patch applied:\n\n\tcreate table t1 (a int);\n\tcreate table t2 (a int);\n\tcreate table t3 (a int);\n\n\trevoke all on t2 from :USER;\n\n\t\\pset null <NULL>\n\t\\dp t1|t2|t3\n\t Access privileges\n\t Schema | Name | Type | Access privileges | Column privileges | Policies\n\t--------+------+-------+-------------------+-------------------+----------\n\t public | t1 | table | <NULL> | |\n\t public | t2 | table | | |\n\t public | t3 | table | <NULL> | |\n\t(3 rows)\n\nInstead of only displaying the zero privileges with my patch and default\n\\pset null:\n\n\t\\pset null ''\n\t\\dp t1|t2|t3\n\t Access privileges\n\t Schema | Name | Type | Access privileges | Column privileges | Policies\n\t--------+------+-------+-------------------+-------------------+----------\n\t public | t1 | table | | |\n\t public | t2 | table | (none) | |\n\t public | t3 | table | | |\n\t(3 rows)\n\nI guess if most tables have any non-default privileges then both\nsolutions are equally good.\n\n> The simple attached patch does it like that. What do you think?\n\nLGTM.\n\n-- \nErik\n\n\n",
"msg_date": "Sat, 7 Oct 2023 05:07:50 +0200",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Sat, 2023-10-07 at 05:07 +0200, Erik Wienhold wrote:\n> On 2023-10-06 22:32 +0200, Laurenz Albe write:\n> > On Sun, 2023-09-17 at 21:31 +0200, Erik Wienhold wrote:\n> > > I wrote a patch to change psql's display of zero privileges after a user's\n> > > reported confusion with the psql output for zero vs. default privileges [1].\n> > > Admittedly, zero privileges is a rare use case [2] but I think psql should not\n> > > confuse the user in the off chance that this happens.\n> > > \n> > > [1] https://www.postgresql.org/message-id/efdd465d-a795-6188-7f71-7cdb4b2be031%40mtneva.com\n> > > [2] https://www.postgresql.org/message-id/31246.1693337238%40sss.pgh.pa.us\n> > \n> > Reading that thread, I had the impression that there was more support for\n> > honoring \"\\pset null\" rather than unconditionally displaying \"(none)\".\n> \n> For example with your patch applied:\n> \n> create table t1 (a int);\n> create table t2 (a int);\n> create table t3 (a int);\n> \n> revoke all on t2 from :USER;\n> \n> \\pset null <NULL>\n> \\dp t1|t2|t3\n> Access privileges\n> Schema | Name | Type | Access privileges | Column privileges | Policies\n> --------+------+-------+-------------------+-------------------+----------\n> public | t1 | table | <NULL> | |\n> public | t2 | table | | |\n> public | t3 | table | <NULL> | |\n> (3 rows)\n> \n> Instead of only displaying the zero privileges with my patch and default\n> \\pset null:\n> \n> \\pset null ''\n> \\dp t1|t2|t3\n> Access privileges\n> Schema | Name | Type | Access privileges | Column privileges | Policies\n> --------+------+-------+-------------------+-------------------+----------\n> public | t1 | table | | |\n> public | t2 | table | (none) | |\n> public | t3 | table | | |\n> (3 rows)\n> \n> I guess if most tables have any non-default privileges then both\n> solutions are equally good.\n\nIt is a tough call.\n\nFor somebody who knows PostgreSQL well enough to know that default privileges are\nrepresented by NULL 
values, my solution is probably more appealing.\n\nIt seems that we both had the goal of distinguishing the cases of default and\nzero privileges, but for a beginner, both versions are confusing. better would\nprobably be\n\n Access privileges\n Schema | Name | Type | Access privileges | Column privileges | Policies\n --------+------+-------+-------------------+-------------------+----------\n public | t1 | table | default | default |\n public | t2 | table | | default |\n public | t3 | table | default | default |\n\nThe disadvantage of this (and the advantage of my proposal) is that it might\nconfuse experienced users (and perhaps automated tools) if the output changes\ntoo much.\n\n> > The simple attached patch does it like that. What do you think?\n> \n> LGTM.\n\nIf you are happy enough with my patch, shall we mark it as ready for committer?\nOr do you want to have a stab at something like I suggested above?\n\nYours,\nLaurenz Albe\n",
"msg_date": "Sat, 07 Oct 2023 14:29:58 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-10-07 14:29 +0200, Laurenz Albe write:\n> On Sat, 2023-10-07 at 05:07 +0200, Erik Wienhold wrote:\n> > On 2023-10-06 22:32 +0200, Laurenz Albe write:\n> > > On Sun, 2023-09-17 at 21:31 +0200, Erik Wienhold wrote:\n> > > > I wrote a patch to change psql's display of zero privileges after a user's\n> > > > reported confusion with the psql output for zero vs. default privileges [1].\n> > > > Admittedly, zero privileges is a rare use case [2] but I think psql should not\n> > > > confuse the user in the off chance that this happens.\n> > > > \n> > > > [1] https://www.postgresql.org/message-id/efdd465d-a795-6188-7f71-7cdb4b2be031%40mtneva.com\n> > > > [2] https://www.postgresql.org/message-id/31246.1693337238%40sss.pgh.pa.us\n> > > \n> > > Reading that thread, I had the impression that there was more support for\n> > > honoring \"\\pset null\" rather than unconditionally displaying \"(none)\".\n> > \n> > For example with your patch applied:\n> > \n> > create table t1 (a int);\n> > create table t2 (a int);\n> > create table t3 (a int);\n> > \n> > revoke all on t2 from :USER;\n> > \n> > \\pset null <NULL>\n> > \\dp t1|t2|t3\n> > Access privileges\n> > Schema | Name | Type | Access privileges | Column privileges | Policies\n> > --------+------+-------+-------------------+-------------------+----------\n> > public | t1 | table | <NULL> | |\n> > public | t2 | table | | |\n> > public | t3 | table | <NULL> | |\n> > (3 rows)\n> > \n> > Instead of only displaying the zero privileges with my patch and default\n> > \\pset null:\n> > \n> > \\pset null ''\n> > \\dp t1|t2|t3\n> > Access privileges\n> > Schema | Name | Type | Access privileges | Column privileges | Policies\n> > --------+------+-------+-------------------+-------------------+----------\n> > public | t1 | table | | |\n> > public | t2 | table | (none) | |\n> > public | t3 | table | | |\n> > (3 rows)\n> > \n> > I guess if most tables have any non-default privileges then both\n> > solutions are 
equally good.\n> \n> It is a tough call.\n> \n> For somebody who knows PostgreSQL well enough to know that default\n> privileges are represented by NULL values, my solution is probably\n> more appealing.\n> \n> It seems that we both had the goal of distinguishing the cases of\n> default and zero privileges, but for a beginner, both versions are\n> confusing. better would probably be\n> \n> Access privileges\n> Schema | Name | Type | Access privileges | Column privileges | Policies\n> --------+------+-------+-------------------+-------------------+----------\n> public | t1 | table | default | default |\n> public | t2 | table | | default |\n> public | t3 | table | default | default |\n\nAh yes. The problem seems to be more with default privileges producing\nno output right now. I was just focusing on the zero privs edge case.\n\n> The disadvantage of this (and the advantage of my proposal) is that it\n> might confuse experienced users (and perhaps automated tools) if the\n> output changes too much.\n\nI agree that your patch is less invasive under default settings. But is\nthe output of meta commands considered part of the interface where we\nneed to be cautious about not breaking clients?\n\nI've written quite a few scripts that parse results from psql's stdout,\nbut always with simple queries to have control over columns and the\nformatting of values. I always expect meta command output to change\nwith the next release because to me they look more like a human-readable\ninterface, e.g. the localizable header which of course one can still\nhide with --tuples-only.\n\n> > > The simple attached patch does it like that. What do you think?\n> > \n> > LGTM.\n> \n> If you are happy enough with my patch, shall we mark it as ready for\n> committer?\n\nI amended your patch to also document the effect of \\pset null in this\ncase. 
See the attached v2.\n\n> Or do you want to have a stab at something like I suggested above?\n\nNot right now if the user can just use \\pset null 'default' with your\npatch.\n\n-- \nErik",
"msg_date": "Sat, 7 Oct 2023 20:41:04 +0200",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Sat, 2023-10-07 at 20:41 +0200, Erik Wienhold wrote:\n> > If you are happy enough with my patch, shall we mark it as ready for\n> > committer?\n> \n> I amended your patch to also document the effect of \\pset null in this\n> case. See the attached v2.\n\n+1\n\nIf you mention in ddl.sgml that you can use \"\\pset null\" to distinguish\ndefault from no privileges, you should mention that this only works with\npsql. Many people out there don't use psql.\n\nAlso, I'm not sure if \"zero privileges\" will be readily understood by\neverybody. Perhaps \"no privileges at all, even for the object owner\"\nwould be a better wording.\n\nPerhaps it would also be good to mention this in the psql documentation.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sun, 08 Oct 2023 06:14:12 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-10-08 06:14 +0200, Laurenz Albe write:\n> On Sat, 2023-10-07 at 20:41 +0200, Erik Wienhold wrote:\n> > > If you are happy enough with my patch, shall we mark it as ready for\n> > > committer?\n> > \n> > I amended your patch to also document the effect of \\pset null in this\n> > case. See the attached v2.\n> \n> +1\n> \n> If you mention in ddl.sgml that you can use \"\\pset null\" to distinguish\n> default from no privileges, you should mention that this only works with\n> psql. Many people out there don't use psql.\n\nI don't think this is necessary because that section in ddl.sgml is\nalready about psql and \\dp.\n\n> Also, I'm not sure if \"zero privileges\" will be readily understood by\n> everybody. Perhaps \"no privileges at all, even for the object owner\"\n> would be a better wording.\n\nChanged in v3 to \"empty privileges\" with an explanation that this means\n\"no privileges at all, even for the object owner\".\n\n> Perhaps it would also be good to mention this in the psql documentation.\n\nJust once under \\pset null with a reference to section 5.7? Something\nlike \"This is also useful for distinguishing default privileges from\nempty privileges as explained in Section 5.7.\"\n\nOr instead under every command affected by this change? \\dp and \\ddp\nalready have such a reference (\"The meaning of the privilege display is\nexplained in Section 5.7.\")\n\nI prefer the first one because it's less effort ;) Also done in v3.\n\n-- \nErik",
"msg_date": "Mon, 9 Oct 2023 03:53:27 +0200",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Sun, Oct 8, 2023 at 6:55 PM Erik Wienhold <[email protected]> wrote:\n\n> On 2023-10-08 06:14 +0200, Laurenz Albe write:\n> > On Sat, 2023-10-07 at 20:41 +0200, Erik Wienhold wrote:\n> > > > If you are happy enough with my patch, shall we mark it as ready for\n> > > > committer?\n> > >\n> > > I amended your patch to also document the effect of \\pset null in this\n> > > case. See the attached v2.\n> >\n> > +1\n> >\n> > If you mention in ddl.sgml that you can use \"\\pset null\" to distinguish\n> > default from no privileges, you should mention that this only works with\n> > psql. Many people out there don't use psql.\n>\n> I don't think this is necessary because that section in ddl.sgml is\n> already about psql and \\dp.\n>\n\nI agree that we are simply detailing how psql makes this information\navailable to the reader and leave users of other clients on their own to\nfigure out their own methods - which many clients probably have handled for\nthem anyway.\n\nFor us, I would suggest the following wording:\n\nIn addition to the situation of printing all acl items, the Access and\nColumn privileges columns report two other situations specially. In the\nrare case where all privileges for an object have been explicitly removed,\nincluding from the owner and PUBLIC, (i.e., has empty privileges) these\ncolumns will display NULL. The other case is where the built-in default\nprivileges are in effect, in which case these columns will display the\nempty string. (Note that by default psql will print NULL as an empty\nstring, so in order to visually distinguish these two cases you will need\nto issue the \\pset null meta-command and choose some other string to print\nfor NULLs). Built-in default privileges include all privileges for the\nowner, as well as those granted to PUBLIC per for relevant object types as\ndescribed above. 
The built-in default privileges are only in effect if the\nobject has not been the target of a GRANT or REVOKE and also has not had\nits default privileges modified using ALTER DEFAULT PRIVILEGES. (???: if it\nis possible to revert back to the state of built-in privileges that would\nneed to be described here.)\n\n\nThe above removes the parenthetical regarding null in the catalogs, this is\nintentional as it seems that the goal here is to use psql instead of the\ncatalogs and adding its use of null being printed as the empty string just\nseems likely to add confusion.\n\n\n> > Also, I'm not sure if \"zero privileges\" will be readily understood by\n> > everybody. Perhaps \"no privileges at all, even for the object owner\"\n> > would be a better wording.\n>\n> Changed in v3 to \"empty privileges\" with an explanation that this means\n> \"no privileges at all, even for the object owner\".\n>\n\n+1\n\nWe probably should add the two terms to the glossary:\n\nempty privileges: all privileges explicitly revoked from the owner and\nPUBLIC (where applicable), and none otherwise granted.\n\nbuilt-in default privileges: owner having all privileges and no privileges\ngranted or removed via ALTER DEFAULT PRIVILEGES\n\n\n> > Perhaps it would also be good to mention this in the psql documentation.\n>\n> Just once under \\pset null with a reference to section 5.7? Something\n> like \"This is also useful for distinguishing default privileges from\n> empty privileges as explained in Section 5.7.\"\n>\n> Or instead under every command affected by this change? 
\\dp and \\ddp\n> already have such a reference (\"The meaning of the privilege display is\n> explained in Section 5.7.\")\n>\n> I prefer the first one because it's less effort ;) Also done in v3.\n>\n\nWe've chosen a poor default and I'd rather inform the user of specific\nmeta-commands to be wary of this poor default and change it at the point\nthey will be learning about the meta-commands adversely affected.\n\nThat said, I'd be willing to document that these commands, because they are\naffected by empty string versus null, require a non-empty-string value for\n\\pset null and will choose the string '<null>' for the duration of the\nmeta-command's execution if the user's setting is incompatible.\n\nDavid J.",
"msg_date": "Sun, 8 Oct 2023 19:58:15 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, 2023-10-09 at 03:53 +0200, Erik Wienhold wrote:\n> On 2023-10-08 06:14 +0200, Laurenz Albe write:\n> > On Sat, 2023-10-07 at 20:41 +0200, Erik Wienhold wrote:\n> > > > If you are happy enough with my patch, shall we mark it as ready for\n> > > > committer?\n> > > \n> > > I amended your patch to also document the effect of \\pset null in this\n> > > case. See the attached v2.\n> > \n> > +1\n> > \n> > If you mention in ddl.sgml that you can use \"\\pset null\" to distinguish\n> > default from no privileges, you should mention that this only works with\n> > psql. Many people out there don't use psql.\n> \n> I don't think this is necessary because that section in ddl.sgml is\n> already about psql and \\dp.\n\nYou are right.\n\n> > Also, I'm not sure if \"zero privileges\" will be readily understood by\n> > everybody. Perhaps \"no privileges at all, even for the object owner\"\n> > would be a better wording.\n> \n> Changed in v3 to \"empty privileges\" with an explanation that this means\n> \"no privileges at all, even for the object owner\".\n\nLooks good.\n\n> > Perhaps it would also be good to mention this in the psql documentation.\n> \n> Just once under \\pset null with a reference to section 5.7? Something\n> like \"This is also useful for distinguishing default privileges from\n> empty privileges as explained in Section 5.7.\"\n> \n> Or instead under every command affected by this change? \\dp and \\ddp\n> already have such a reference (\"The meaning of the privilege display is\n> explained in Section 5.7.\")\n> \n> I prefer the first one because it's less effort ;) Also done in v3.\n\nI think that sufficient.\n\nI tinkered a bit with your documentation. For example, the suggestion to\n\"\\pset null\" seemed to be in an inappropriate place. Tell me what you think.\n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 09 Oct 2023 09:54:23 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Sun, 2023-10-08 at 19:58 -0700, David G. Johnston wrote:\n> For us, I would suggest the following wording:\n> \n> In addition to the situation of printing all acl items, the Access and Column\n> privileges columns report two other situations specially. In the rare case\n> where all privileges for an object have been explicitly removed, including\n> from the owner and PUBLIC, (i.e., has empty privileges) these columns will\n> display NULL. The other case is where the built-in default privileges are\n> in effect, in which case these columns will display the empty string.\n> (Note that by default psql will print NULL as an empty string, so in order\n> to visually distinguish these two cases you will need to issue the \\pset null\n> meta-command and choose some other string to print for NULLs). Built-in\n> default privileges include all privileges for the owner, as well as those\n> granted to PUBLIC per for relevant object types as described above.\n\nThat doesn't look like an improvement over the latest patches to me.\n\n> The built-in default privileges are only in effect if the object has not been\n> the target of a GRANT or REVOKE and also has not had its default privileges\n> modified using ALTER DEFAULT PRIVILEGES. (???: if it is possible to revert\n> back to the state of built-in privileges that would need to be described here.)\n\nI don't think that we need to mention ALTER DEFAULT PRIVILEGES there. 
If\nthe default privileges have been altered, the ACL will not be stored as\nNULL in the catalogs.\n\n> The above removes the parenthetical regarding null in the catalogs, this is\n> intentional as it seems that the goal here is to use psql instead of the\n> catalogs and adding its use of null being printed as the empty string just\n> seems likely to add confusion.\n\nTo me, mentioning the default privileges are stored as NULLs in the catalogs\nis not an invitation to view the privileges with catalog queries, but\ninformation about implementation details that explains why default privileges\nare displayed the way they are.\n\n> We probably should add the two terms to the glossary:\n> \n> empty privileges: all privileges explicitly revoked from the owner and PUBLIC\n> (where applicable), and none otherwise granted.\n> \n> built-in default privileges: owner having all privileges and no privileges\n> granted or removed via ALTER DEFAULT PRIVILEGES\n\n\"Empty privileges\" are too unimportant to warrant an index entry.\n\nI can see the value of an index entry\n\n<indexterm>\n <primary>privilege</primary>\n <secondary>default</secondary>\n</indexterm>\n\nDone in the attached v5 of the patch, even though this is not really\nthe business of this patch.\n\n> > > Perhaps it would also be good to mention this in the psql documentation.\n> \n> We've chosen a poor default and I'd rather inform the user of specific meta-commands\n> to be wary of this poor default and change it at the point they will be learning\n> about the meta-commands adversely affected.\n> \n> That said, I'd be willing to document that these commands, because they are affected\n> by empty string versus null, require a non-empty-string value for \\pset null and will\n> choose the string '<null>' for the duration of the meta-command's execution if the\n> user's setting is incompatible.\n\nI am not certain I understood you correctly.\nAre you advocating for adding a mention of \"\\pset null\" to every backslash 
command\nthat displays privileges? That is excessive, in my opinion.\n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 09 Oct 2023 10:29:07 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, Oct 9, 2023 at 1:29 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Sun, 2023-10-08 at 19:58 -0700, David G. Johnston wrote:\n>\n> > The built-in default privileges are only in effect if the object has not\n> been\n> > the target of a GRANT or REVOKE and also has not had its default\n> privileges\n> > modified using ALTER DEFAULT PRIVILEGES. (???: if it is possible to\n> revert\n> > back to the state of built-in privileges that would need to be described\n> here.)\n>\n> I don't think that we need to mention ALTER DEFAULT PRIVILEGES there. If\n> the default privileges have been altered, the ACL will not be stored as\n> NULL in the catalogs.\n>\n\nIt's already mentioned there, I just felt bringing the mention of ADP and\ngrant/revoke both invalidating the built-in default privileges closer\ntogether would be an improvement.\n\n\n>\n> > The above removes the parenthetical regarding null in the catalogs, this\n> is\n> > intentional as it seems that the goal here is to use psql instead of the\n> > catalogs and adding its use of null being printed as the empty string\n> just\n> > seems likely to add confusion.\n>\n> To me, mentioning the default privileges are stored as NULLs in the\n> catalogs\n> is not an invitation to view the privileges with catalog queries, but\n> information about implementation details that explains why default\n> privileges\n> are displayed the way they are.\n>\n\nCalling it an implementation detail leads me to conclude the point does not\nbelong in the user-facing administration documentation.\n\n>\n> > > > Perhaps it would also be good to mention this in the psql\n> documentation.\n> >\n> > We've chosen a poor default and I'd rather inform the user of specific\n> meta-commands\n> > to be wary of this poor default and change it at the point they will be\n> learning\n> > about the meta-commands adversely affected.\n> >\n> > That said, I'd be willing to document that these commands, because they\n> are affected\n> > by 
empty string versus null, require a non-empty-string value for \\pset\n> null and will\n> choose the string '<null>' for the duration of the\n> meta-command's\n> execution if the\n> user's setting is incompatible.\n>\n> I am not certain I understood you correctly.\n> Are you advocating for adding a mention of \"\\pset null\" to every backslash\n> command\n> that displays privileges? That is excessive, in my opinion.\n>\n\nYes, I am. I suppose the argument that any user of those commands is\nexpected to have read the ddl/privileges chapter would suffice for me\nthough.\n\nMy point with the second paragraph is that we could, instead of documenting\nthe caveat about null printing as empty strings is to instead issue an\nimplicit \"\\pset null '<null>'\" as part of these commands, and a \"\\pset\nnull\" afterward, conditioned upon it not already being set to a non-empty\nvalue. IOW, the special-casing we do today but actually do it in a way\nthat distinguishes the two cases instead of forcing them to be\nindistinguishable.\n\nDavid J.",
"msg_date": "Mon, 9 Oct 2023 09:30:22 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, 2023-10-09 at 09:30 -0700, David G. Johnston wrote:\n> On Mon, Oct 9, 2023 at 1:29 AM Laurenz Albe <[email protected]> wrote:\n> > On Sun, 2023-10-08 at 19:58 -0700, David G. Johnston wrote:\n> > \n> > > The built-in default privileges are only in effect if the object has not been\n> > > the target of a GRANT or REVOKE and also has not had its default privileges\n> > > modified using ALTER DEFAULT PRIVILEGES. (???: if it is possible to revert\n> > > back to the state of built-in privileges that would need to be described here.)\n> > \n> > I don't think that we need to mention ALTER DEFAULT PRIVILEGES there. If\n> > the default privileges have been altered, the ACL will not be stored as\n> > NULL in the catalogs.\n> \n> It's already mentioned there, I just felt bringing the mention of ADP and\n> grant/revoke both invalidating the built-in default privileges closer together\n> would be an improvement.\n\nAh, sorry, I misread your comment. You are right. But then, the effects of\nALTER DEFAULT PRIVILEGES are discussed later in the paragraph anyway, and we don't\nhave to repeat that here.\n\n> > \n> > To me, mentioning the default privileges are stored as NULLs in the catalogs\n> > is not an invitation to view the privileges with catalog queries, but\n> > information about implementation details that explains why default privileges\n> > are displayed the way they are.\n> \n> Calling it an implementation detail leads me to conclude the point does not\n> belong in the user-facing administration documentation.\n\nAgain, you have a point there. However, I find that detail useful, as it explains\nthe user-facing behavior. Anyway, I don't think it is the job of this patch to\nremove that pre-existing detail.\n\n> > Are you advocating for adding a mention of \"\\pset null\" to every backslash command\n> > that displays privileges? That is excessive, in my opinion.\n> \n> Yes, I am. 
I suppose the argument that any user of those commands is expected\n> to have read the ddl/privileges chapter would suffice for me though.\n\nThanks. Then let's leave it like that.\n\n> My point with the second paragraph is that we could, instead of documenting the\n> caveat about null printing as empty strings is to instead issue an implicit\n> \"\\pset null '<null>'\" as part of these commands, and a \"\\pset null\" afterward,\n> conditioned upon it not already being set to a non-empty value. IOW, the\n> special-casing we do today but actually do it in a way that distinguishes the\n> two cases instead of forcing them to be indistinguishable.\n\n-1\n\nThe whole point of this patch is to make psql behave consistently with respect to\nNULLs in meta-commands. Your suggestion would subvert that idea.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 09 Oct 2023 20:56:29 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Mon, 2023-10-09 at 09:30 -0700, David G. Johnston wrote:\n>> My point with the second paragraph is that we could, instead of documenting the\n>> caveat about null printing as empty strings is to instead issue an implicit\n>> \"\\pset null '<null>'\" as part of these commands, and a \"\\pset null\" afterward,\n>> conditioned upon it not already being set to a non-empty value. IOW, the\n>> special-casing we do today but actually do it in a way that distinguishes the\n>> two cases instead of forcing them to be indistinguishable.\n\n> -1\n\n> The whole point of this patch is to make psql behave consistently with respect to\n> NULLs in meta-commands. Your suggestion would subvert that idea.\n\nYeah. There is a lot of attraction in having \\pset null affect these\ndisplays just like all other ones. The fact that they act differently\nnow is a wart, not something we should replace with a different special\ncase behavior.\n\nAlso, I'm fairly concerned about not changing the default behavior here.\nThe fact that this behavior has stood for a couple dozen years without\nmany complaints indicates that there's not all that big a problem to be\nsolved. I doubt that a new default behavior will be well received,\neven if it's arguably better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Oct 2023 15:13:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, Oct 9, 2023 at 12:13 PM Tom Lane <[email protected]> wrote:\n\n> Laurenz Albe <[email protected]> writes:\n> > On Mon, 2023-10-09 at 09:30 -0700, David G. Johnston wrote:\n> >> My point with the second paragraph is that we could, instead of\n> documenting the\n> >> caveat about null printing as empty strings is to instead issue an\n> implicit\n> >> \"\\pset null '<null>'\" as part of these commands, and a \"\\pset null\"\n> afterward,\n> >> conditioned upon it not already being set to a non-empty value. IOW,\n> the\n> >> special-casing we do today but actually do it in a way that\n> distinguishes the\n> >> two cases instead of forcing them to be indistinguishable.\n>\n> > -1\n>\n> > The whole point of this patch is to make psql behave consistently with\n> respect to\n> > NULLs in meta-commands. Your suggestion would subvert that idea.\n>\n> Yeah. There is a lot of attraction in having \\pset null affect these\n> displays just like all other ones. The fact that they act differently\n> now is a wart, not something we should replace with a different special\n> case behavior.\n>\n> Also, I'm fairly concerned about not changing the default behavior here.\n> The fact that this behavior has stood for a couple dozen years without\n> many complaints indicates that there's not all that big a problem to be\n> solved. I doubt that a new default behavior will be well received,\n> even if it's arguably better.\n>\n\nMy position is that the default behavior should be changed such that the\ndistinction these reports need to make between empty privileges and default\nprivileges is impossible for the user to remove. I suppose the best direct\nsolution given that goal is for the query to simply do away with any\nreliance on NULL being an indicator. 
Output an i18n'd \"no permissions\npresent\" line in the rare empty permissions situation and leave the empty\nstring for built-in default.\n\nIf the consensus is to not actually fix these views to make them\nenvironment invariant in their accuracy then so be it. Having them obey\n\\pset null seems like a half-measure but it at least is an improvement over\nthe status quo such that users are capable of altering their system to make\nthe reports distinguish the two cases if they realize the need.\n\nI do agree that my suggestion regarding the implicit \\pset null, while\nplausible, leaves the wart that NULL is being used to mean something\nspecific. Better is to just use a label for that specific thing.\n\nDavid J.\n\nOn Mon, Oct 9, 2023 at 12:13 PM Tom Lane <[email protected]> wrote:Laurenz Albe <[email protected]> writes:\n> On Mon, 2023-10-09 at 09:30 -0700, David G. Johnston wrote:\n>> My point with the second paragraph is that we could, instead of documenting the\n>> caveat about null printing as empty strings is to instead issue an implicit\n>> \"\\pset null '<null>'\" as part of these commands, and a \"\\pset null\" afterward,\n>> conditioned upon it not already being set to a non-empty value. IOW, the\n>> special-casing we do today but actually do it in a way that distinguishes the\n>> two cases instead of forcing them to be indistinguishable.\n\n> -1\n\n> The whole point of this patch is to make psql behave consistently with respect to\n> NULLs in meta-commands. Your suggestion would subvert that idea.\n\nYeah. There is a lot of attraction in having \\pset null affect these\ndisplays just like all other ones. The fact that they act differently\nnow is a wart, not something we should replace with a different special\ncase behavior.\n\nAlso, I'm fairly concerned about not changing the default behavior here.\nThe fact that this behavior has stood for a couple dozen years without\nmany complaints indicates that there's not all that big a problem to be\nsolved. 
I doubt that a new default behavior will be well received,\neven if it's arguably better.My position is that the default behavior should be changed such that the distinction these reports need to make between empty privileges and default privileges is impossible for the user to remove. I suppose the best direct solution given that goal is for the query to simply do away with any reliance on NULL being an indicator. Output an i18n'd \"no permissions present\" line in the rare empty permissions situation and leave the empty string for built-in default.If the consensus is to not actually fix these views to make them environment invariant in their accuracy then so be it. Having them obey \\pset null seems like a half-measure but it at least is an improvement over the status quo such that users are capable of altering their system to make the reports distinguish the two cases if they realize the need.I do agree that my suggestion regarding the implicit \\pset null, while plausible, leaves the wart that NULL is being used to mean something specific. Better is to just use a label for that specific thing.David J.",
"msg_date": "Mon, 9 Oct 2023 13:34:47 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, 2023-10-09 at 15:13 -0400, Tom Lane wrote:\n> Laurenz Albe <[email protected]> writes:\n> > The whole point of this patch is to make psql behave consistently with respect to\n> > NULLs in meta-commands.\n> \n> Yeah. There is a lot of attraction in having \\pset null affect these\n> displays just like all other ones. The fact that they act differently\n> now is a wart, not something we should replace with a different special\n> case behavior.\n> \n> Also, I'm fairly concerned about not changing the default behavior here.\n> The fact that this behavior has stood for a couple dozen years without\n> many complaints indicates that there's not all that big a problem to be\n> solved. I doubt that a new default behavior will be well received,\n> even if it's arguably better.\n\nI understand your concern. People who have \"\\pset null\" in their .psqlrc\nmight be surprised if suddenly \"<null>\" starts appearing in the output\nof \\l.\n\nBut then the people who have \"\\pset null\" in their .psqlrc are typically\nalready somewhat experienced and might have less trouble dealing with that\nchange (but they are more likely to bleat on the mailing list, granted).\n\nUsers with little experience won't notice any difference, so they won't\nbe confused by the change.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 10 Oct 2023 08:31:57 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-10-09 09:54 +0200, Laurenz Albe wrote:\n> \n> I tinkered a bit with your documentation. For example, the suggestion to\n> \"\\pset null\" seemed to be in an inappropriate place. Tell me what you think.\n\n+1 You're right to put that sentence right after the explanation of\nempty privileges.\n\n-- \nErik\n\n\n",
"msg_date": "Sat, 14 Oct 2023 02:45:17 +0200",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-10-09 10:29 +0200, Laurenz Albe wrote:\n> On Sun, 2023-10-08 at 19:58 -0700, David G. Johnston wrote:\n> > We probably should add the two terms to the glossary:\n> > \n> > empty privileges: all privileges explicitly revoked from the owner and PUBLIC\n> > (where applicable), and none otherwise granted.\n> > \n> > built-in default privileges: owner having all privileges and no privileges\n> > granted or removed via ALTER DEFAULT PRIVILEGES\n> \n> \"Empty privileges\" are too unimportant to warrant an index entry.\n> \n> I can see the value of an index entry\n> \n> <indexterm>\n> <primary>privilege</primary>\n> <secondary>default</secondary>\n> </indexterm>\n> \n> Done in the attached v5 of the patch, even though this is not really\n> the business of this patch.\n\n+1\n\n-- \nErik\n\n\n",
"msg_date": "Sat, 14 Oct 2023 02:46:54 +0200",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-10-09 22:34 +0200, David G. Johnston wrote:\n> On Mon, Oct 9, 2023 at 12:13 PM Tom Lane <[email protected]> wrote:\n> > Yeah. There is a lot of attraction in having \\pset null affect these\n> > displays just like all other ones. The fact that they act differently\n> > now is a wart, not something we should replace with a different special\n> > case behavior.\n> >\n> > Also, I'm fairly concerned about not changing the default behavior here.\n> > The fact that this behavior has stood for a couple dozen years without\n> > many complaints indicates that there's not all that big a problem to be\n> > solved. I doubt that a new default behavior will be well received,\n> > even if it's arguably better.\n> >\n> \n> My position is that the default behavior should be changed such that the\n> distinction these reports need to make between empty privileges and default\n> privileges is impossible for the user to remove. I suppose the best direct\n> solution given that goal is for the query to simply do away with any\n> reliance on NULL being an indicator. Output an i18n'd \"no permissions\n> present\" line in the rare empty permissions situation and leave the empty\n> string for built-in default.\n\nMy initial patch does exactly that.\n\n> If the consensus is to not actually fix these views to make them\n> environment invariant in their accuracy then so be it. Having them obey\n> \\pset null seems like a half-measure but it at least is an improvement over\n> the status quo such that users are capable of altering their system to make\n> the reports distinguish the two cases if they realize the need.\n\nI agree.\n\n-- \nErik\n\n\n",
"msg_date": "Sat, 14 Oct 2023 03:04:05 +0200",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Sat, 2023-10-14 at 02:45 +0200, Erik Wienhold wrote:\n> On 2023-10-09 09:54 +0200, Laurenz Albe write:\n> > \n> > I tinkered a bit with your documentation. For example, the suggestion to\n> > \"\\pset null\" seemed to be in an inappropriate place. Tell me what you think.\n> \n> +1 You're right to put that sentence right after the explanation of\n> empty privileges.\n\nThanks for looking.\n\nDavid, how do you feel about this? I am wondering whether to set this patch\n\"ready for committer\" or not.\n\nThere is Tom wondering upthread whether changing psql's behavior like that\nis too much of a compatibility break or not, but I guess it is alright to\nleave that final verdict to a committer.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 16 Oct 2023 17:56:10 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-10-16 17:56 +0200, Laurenz Albe wrote:\n> David, how do you feel about this? I am wondering whether to set this patch\n> \"ready for committer\" or not.\n> \n> There is Tom wondering upthread whether changing psql's behavior like that\n> is too much of a compatibility break or not, but I guess it is alright to\n> leave that final verdict to a committer.\n\nWhat's the process for the CommitFest now since we settled on your\npatch? This is my first time being involved in this, so still learning.\nI see that you've withdrawn your initial patch [1] but this thread is\nstill attached to my patch [2]. Should I edit my CF entry (or withdraw\nit) and you reactivate yours?\n\n[1] https://commitfest.postgresql.org/45/4603/\n[2] https://commitfest.postgresql.org/45/4593/\n\n-- \nErik\n\n\n",
"msg_date": "Mon, 16 Oct 2023 23:51:39 +0200",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, 2023-10-16 at 23:51 +0200, Erik Wienhold wrote:\n> What's the process for the CommitFest now since we settled on your\n> patch? This is my first time being involved in this, so still learning.\n> I'see that you've withdrawn your initial patch [1] but this thread is\n> still attached to my patch [2]. Should I edit my CF entry (or withdraw\n> it) and you reactivate yours?\n\nI don't think it makes sense to have two competing commitfest entries,\nand we should leave it on this thread. If you are concerned about\nauthorship, we could both be mentioned as co-authors, if the patch ever\ngets committed.\n\nI'd still like to wait for feedback from David before I change anything.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 17 Oct 2023 03:17:11 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 6:19 PM Laurenz Albe <[email protected]>\nwrote:\n\n> On Mon, 2023-10-16 at 23:51 +0200, Erik Wienhold wrote:\n> > What's the process for the CommitFest now since we settled on your\n> > patch? This is my first time being involved in this, so still learning.\n> > I see that you've withdrawn your initial patch [1] but this thread is\n> > still attached to my patch [2]. Should I edit my CF entry (or withdraw\n> > it) and you reactivate yours?\n>\n> I don't think it makes sense to have two competing commitfest entries,\n> and we should leave it on this thread. If you are concerned about\n> authorship, we could both be mentioned as co-authors, if the patch ever\n> gets committed.\n>\n> I'd still like to wait for feedback from David before I change anything.\n>\n>\nReading both threads I'm not seeing any specific rejection of the solution\nthat we simply represent empty privileges as \"(none)\".\n\nI see an apparent consensus that if we do continue to represent it as NULL\nthat the printout should respect \\pset null.\n\nTom commented in favor of (none) on the other thread with some commentary\nregarding how extremely rare it is; then turns around and is \"fairly\nconcerned\" about changing the current blank presentation of its value which\nis going to happen for some people regardless of which approach is chosen.\n(idk...maybe in favor of (none))\n\nPeter's comment doesn't strike me as recognizing that (none) is even an\noption on the table...(n/a)\n\nStuart, the original user complainant, seems to favor (none)\n\nErik seems to favor (none)\n\nI favor (none)\n\nIt's unclear to me whether you Laurenz are against (none) or just trying to\ngo with the group consensus that doesn't actually seem to be against (none).\n\nI'm in favor of iterating on v1 on this thread (haven't read it yet) and\npresenting it as the proposed solution. 
If that ends up getting shot down\nwe can revive v5 (still need to review as well).\n\nWe should probably post on that thread that this one exists and post a link\nto it.\n\nDavid J.",
"msg_date": "Mon, 16 Oct 2023 19:05:08 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, 2023-10-16 at 19:05 -0700, David G. Johnston wrote:\n> Reading both threads I'm not seeing any specific rejection of the\n> solution that we simply represent empty privileges as \"(none)\".\n> \n> I see an apparent consensus that if we do continue to represent it\n> as NULL that the printout should respect \\pset null.\n> \n> Tom commented in favor of (none) on the other thread with some\n> commentary regarding how extremely rare it is; then turns around\n> and is \"fairly concerned\" about changing the current blank presentation\n> of its value which is going to happen for some people regardless\n> of which approach is chosen.\n> \n> Stuart, the original user complainant, seems to favor (none)\n> \n> Erik seems to favor (none)\n> \n> I favor (none)\n> \n> It's unclear to me whether you Laurenz are against (none) or just\n> trying to go with the group consensus that doesn't actually seem to\n> be against (none).\n> \n> I'm in favor of iterating on v1 on this thread (haven't read it yet)\n> and presenting it as the proposed solution. If that ends up getting\n> shot down we can revive v5 (still need to review as well).\n\nThanks for that summary. I prefer my version (simply display NULLs\nas NULLs), but I am not wedded to it. I had the impression that Tom\nwould prefer that too, but is worried if the incompatibility introduced\nwould outweigh the small benefit (of either patch).\n\nSo it is clear that we don't have a consensus.\n\nI don't think I want to go ahead with my version of the patch unless\nthere is more support for it. I can review Erik's original code, if\nthat design meets with more favor.\n\n> We should probably post on that thread that this one exists and post a link to it.\n\nPerhaps a good idea, yes.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 17 Oct 2023 15:05:56 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Mon, 2023-10-16 at 19:05 -0700, David G. Johnston wrote:\n>> Reading both threads I'm not seeing any specific rejection of the\n>> solution that we simply represent empty privileges as \"(none)\".\n\n> Thanks for that summary. I prefer my version (simply display NULLs\n> as NULLs), but I am not wedded to it. I had the impression that Tom\n> would prefer that too, but is woried if the incompatibility introduced\n> would outweigh the small benefit (of either patch).\n\n> So it is clear that we don't have a consensus.\n\nFWIW, my druthers are to make the describe.c queries honor \\pset null\n(not only for privileges, but anywhere else they fail to) and do\nnothing beyond that. I think that'll generally reduce the surprise\nfactor, while anything else we might opt to do will increase it.\n\nIf that fails to garner a consensus, my second choice would be to\ndo that plus translate empty-but-not-null ACLs to \"(none)\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 12:33:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-10-17 04:05 +0200, David G. Johnston wrote:\n> Erik seems to favor (none)\n\nYes, with a slight favor for \"(none)\" because it's the least disruptive\nto users who change \\pset null to a non-blank string. The output of \\dp\netc. would still look the same for default privileges.\n\nBut I'm also okay with respecting \\pset null because it is so simple.\nWe will no longer hide the already documented null representation of\ndefault privileges by allowing the user to display the ACL as it is.\nAnd with Laurenz' patch we will also document the special case of zero\nprivileges and how to distinguish it.\n\n-- \nErik\n\n\n",
"msg_date": "Fri, 20 Oct 2023 04:13:27 +0200",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Fri, 2023-10-20 at 04:13 +0200, Erik Wienhold wrote:\n> On 2023-10-17 04:05 +0200, David G. Johnston wrote:\n> > Erik seems to favors (none)\n> \n> Yes, with a slight favor for \"(none)\" because it's the least disruptive\n> to users who change \\pset null to a non-blank string. The output of \\dp\n> etc. would still look the same for default privileges.\n> \n> But I'm also okay with respecting \\pset null because it is so simple.\n> We will no longer hide the already documented null representation of\n> default privileges by allowing the user to display the ACL as it is.\n> And with Laurenz' patch we will also document the special case of zero\n> privileges and how to distinguish it.\n\nIf you want to proceed with your patch, you could send a new version.\n\nI think it could do with an added line of documentation to the\n\"Privileges\" chapter, and I'd say that the regression tests should be\nin \"psql.sql\" (not that it is very important).\n\nI am not sure how to proceed. Perhaps it would indeed be better to have\ntwo competing commitfest entries. Both could be \"ready for committer\",\nand the committers can decide what they prefer.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 20 Oct 2023 08:42:58 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "Laurenz Albe <[email protected]> writes:\n> I am not sure how to proceed. Perhaps it would indeed be better to have\n> two competing commitfest entries. Both could be \"ready for committer\",\n> and the committers can decide what they prefer.\n\nAs near as I can tell, doing both things (the \\pset null fix and\nsubstituting \"(none)\" for empty privileges) would be an acceptable\nanswer to everyone who has commented. Let's proceed with both\npatches, or combine them into one if there are merge conflicts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Oct 2023 15:34:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Fri, Oct 20, 2023 at 12:34 PM Tom Lane <[email protected]> wrote:\n\n> Laurenz Albe <[email protected]> writes:\n> > I am not sure how to proceed. Perhaps it would indeed be better to have\n> > two competing commitfest entries. Both could be \"ready for committer\",\n> > and the committers can decide what they prefer.\n>\n> As near as I can tell, doing both things (the \\pset null fix and\n> substituting \"(none)\" for empty privileges) would be an acceptable\n> answer to everyone who has commented. Let's proceed with both\n> patches, or combine them into one if there are merge conflicts.\n>\n>\nI'm under the impression that removing the null representation of empty\nprivileges by making them (none) removes all known \\d commands that output\nnulls and don't obey \\pset null. At least, the existing \\pset null patch\ndoesn't purport to fix anything that would become not applicable if the\n(none) patch goes in. I.e., at present they are mutually exclusive.\n\nDavid J.",
"msg_date": "Fri, 20 Oct 2023 12:40:11 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Fri, Oct 20, 2023 at 12:34 PM Tom Lane <[email protected]> wrote:\n>> As near as I can tell, doing both things (the \\pset null fix and\n>> substituting \"(none)\" for empty privileges) would be an acceptable\n>> answer to everyone who has commented. Let's proceed with both\n>> patches, or combine them into one if there are merge conflicts.\n\n> I'm under the impression that removing the null representation of empty\n> privileges by making them (none) removes all known \\d commands that output\n> nulls and don't obey \\pset null.\n\nHow so? IIUC the proposal is to substitute \"(none)\" for empty-string\nACLs, not null ACLs. The \\pset change should be addressing an\nindependent case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Oct 2023 15:57:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Fri, Oct 20, 2023 at 12:57 PM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Fri, Oct 20, 2023 at 12:34 PM Tom Lane <[email protected]> wrote:\n> >> As near as I can tell, doing both things (the \\pset null fix and\n> >> substituting \"(none)\" for empty privileges) would be an acceptable\n> >> answer to everyone who has commented. Let's proceed with both\n> >> patches, or combine them into one if there are merge conflicts.\n>\n> > I'm under the impression that removing the null representation of empty\n> > privileges by making them (none) removes all known \\d commands that\n> output\n> > nulls and don't obey \\pset null.\n>\n> How so? IIUC the proposal is to substitute \"(none)\" for empty-string\n> ACLs, not null ACLs. The \\pset change should be addressing an\n> independent case.\n>\n\nOk, I found my mis-understanding and better understand where you are all\ncoming from now; I apparently had the usage of NULL flip-flopped.\n\nTaking pg_tablespace as an example. Its \"spcacl\" column produces NULL for\ndefault privileges and '{}'::text[] for empty privileges.\n\nThus, today:\nempty: array_to_string('{}'::text[], '\\n') produces an empty string\ndefault: array_to_string(null, '\\n') produces null which then was printed\nas a hard-coded empty string via forcibly changing \\pset null\n\nI was thinking the two cases were reversed.\n\nMy proposal for changing printACLColumn is thus:\n\ncase when spcacl is null then ''\n when cardinality(spcacl) = 0 then '(none)'\n else array_to_string(spcacl, E'\\\\n')\nend as \"Access privileges\"\n\nIn short, I don't want default privileges to start to obey \\pset null when\nit never has before and is documented as displaying the empty string. 
I do\nwant the empty string produced by empty privileges to change to (none) so\nthat it no longer is indistinguishable from our choice of presentation for\nthe default privilege case.\n\nMechanically, we remove the existing \\pset null for these metacommands and\nmove it into the query via the added CASE expression in the printACLColumn\nfunction.\n\nThis gets rid of NULL as an output value for this column and makes the\npatch regarding \\pset null discussion unnecessary. And it leaves the\nexisting well-established empty string for default privileges alone (and\nchanging this is what I believe Tom is against and I agree on that point).\n\nDavid J.",
"msg_date": "Fri, 20 Oct 2023 13:35:06 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-10-20 08:42 +0200, Laurenz Albe wrote:\n> If you want to proceed with your patch, you could send a new version.\n\nv2 attached.\n\n> I think it could do with an added line of documentation to the\n> \"Privileges\" chapter, and I'd say that the regression tests should be\n> in \"psql.sql\" (not that it is very important).\n\nI added some docs. There will be merge conflicts when combining with\nyour v5! Also moved the regression tests to psql.sql which is the right\nplace for testing meta commands.\n\n-- \nErik",
"msg_date": "Sat, 21 Oct 2023 04:02:21 +0200",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-10-20 22:35 +0200, David G. Johnston wrote:\n> Ok, I found my mis-understanding and better understand where you are all\n> coming from now; I apparently had the usage of NULL flip-flopped.\n> \n> Taking pg_tablespace as an example. Its \"spcacl\" column produces NULL for\n> default privileges and '{}'::text[] for empty privileges.\n> \n> Thus, today:\n> empty: array_to_string('{}'::text[], '\\n') produces an empty string\n> default: array_to_string(null, '\\n') produces null which then was printed\n> as a hard-coded empty string via forcibly changing \\pset null\n> \n> I was thinking the two cases were reversed.\n> \n> My proposal for changing printACLColumn is thus:\n> \n> case when spcacl is null then ''\n>      when cardinality(spcacl) = 0 then '(none)'\n>      else array_to_string(spcacl, E'\\\\n')\n> end as \"Access privileges\"\n> \n> In short, I don't want default privileges to start to obey \\pset null when\n> it never has before and is documented as displaying the empty string. I do\n> want the empty string produced by empty privileges to change to (none) so\n> that it no longer is indistinguishable from our choice of presentation for\n> the default privilege case.\n> \n> Mechanically, we remove the existing \\pset null for these metacommands and\n> move it into the query via the added CASE expression in the printACLColumn\n> function.\n> \n> This gets rid of NULL as an output value for this column and makes the\n> patch regarding \\pset null discussion unnecessary. And it leaves the\n> existing well-established empty string for default privileges alone (and\n> changing this is what I believe Tom is against and I agree on that point).\n\nI haven't thought of this yet. The attached v3 of my initial patch\ndoes that. It also includes Laurenz' fix to no longer ignore \\pset null\n(minus the doc changes that suggest using \\pset null to distinguish\nbetween default and empty privileges because that's no longer needed).\n\n-- \nErik",
"msg_date": "Sat, 21 Oct 2023 04:29:44 +0200",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Fri, Oct 20, 2023 at 7:29 PM Erik Wienhold <[email protected]> wrote:\n\n> On 2023-10-20 22:35 +0200, David G. Johnston wrote:\n> > In short, I don't want default privileges to start to obey \\pset null\n> when\n> > it never has before and is documented as displaying the empty string.  I\n> do\n> > want the empty string produced by empty privileges to change to (none) so\n> > that it no longer is indistinguishable from our choice of presentation\n> for\n> > the default privilege case.\n>\n> I haven't thought off this yet.  The attached v3 of my initial patch\n> does that.  It also includes Laurenz' fix to no longer ignore \\pset null\n> (minus the doc changes that suggest using \\pset null to distinguish\n> between default and empty privileges because that's no longer needed).\n>\n>\nThank you.\n\nIt looks good to me as-is, with one possible nit.\n\nI wonder if it would be clearer to say:\n\n\"If the Access privileges column is *blank* for a given object...\"\n\ninstead of \"empty\" to avoid having both \"empty [string]\" and \"empty\nprivileges\" present in the same paragraph and the empty string not\npertaining to the empty privileges.\n\nDavid J.",
"msg_date": "Sun, 22 Oct 2023 20:57:48 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Sat, 2023-10-21 at 04:29 +0200, Erik Wienhold wrote:\n> The attached v3 of my initial patch\n> does that. It also includes Laurenz' fix to no longer ignore \\pset null\n> (minus the doc changes that suggest using \\pset null to distinguish\n> between default and empty privileges because that's no longer needed).\n\nThanks!\n\nI went over the patch, fixed some problems and added some more stuff from\nmy patch.\n\nIn particular:\n\n --- a/doc/src/sgml/ddl.sgml\n +++ b/doc/src/sgml/ddl.sgml\n @@ -2353,7 +2353,9 @@ GRANT SELECT (col1), UPDATE (col1) ON mytable TO miriam_rw;\n <para>\n If the <quote>Access privileges</quote> column is empty for a given\n object, it means the object has default privileges (that is, its\n - privileges entry in the relevant system catalog is null). Default\n + privileges entry in the relevant system catalog is null). The column shows\n + <literal>(none)</literal> for empty privileges (that is, no privileges at\n + all, even for the object owner — a rare occurrence). Default\n privileges always include all privileges for the owner, and can include\n some privileges for <literal>PUBLIC</literal> depending on the object\n type, as explained above. The first <command>GRANT</command>\n\nThis description of empty privileges is smack in the middle of describing\ndefault privileges. 
I thought that was confusing and moved it to its\nown paragraph.\n\n --- a/src/bin/psql/describe.c\n +++ b/src/bin/psql/describe.c\n @@ -6718,7 +6680,13 @@ static void\n printACLColumn(PQExpBuffer buf, const char *colname)\n {\n appendPQExpBuffer(buf,\n - \"pg_catalog.array_to_string(%s, E'\\\\n') AS \\\"%s\\\"\",\n + \"CASE\\n\"\n + \" WHEN %s IS NULL THEN ''\\n\"\n + \" WHEN pg_catalog.cardinality(%s) = 0 THEN '%s'\\n\"\n + \" ELSE pg_catalog.array_to_string(%s, E'\\\\n')\\n\"\n + \"END AS \\\"%s\\\"\",\n + colname,\n + colname, gettext_noop(\"(none)\"),\n colname, gettext_noop(\"Access privileges\"));\n }\n\nThis erroneously displays NULL as empty string and subverts my changes.\nI have removed the first branch of the CASE expression.\n\n --- a/src/test/regress/expected/psql.out\n +++ b/src/test/regress/expected/psql.out\n @@ -6663,3 +6663,97 @@ DROP ROLE regress_du_role0;\n DROP ROLE regress_du_role1;\n DROP ROLE regress_du_role2;\n DROP ROLE regress_du_admin;\n +-- Test empty privileges.\n +BEGIN;\n +WARNING: there is already a transaction in progress\n\nThis warning is caused by a pre-existing error in the regression test, which\nforgot to close the transaction. 
I have added a COMMIT at the appropriate place.\n\n +ALTER TABLESPACE regress_tblspace OWNER TO CURRENT_USER;\n +REVOKE ALL ON TABLESPACE regress_tblspace FROM CURRENT_USER;\n +\\db+ regress_tblspace\n + List of tablespaces\n + Name | Owner | Location | Access privileges | Options | Size | Description \n +------------------+------------------------+-----------------+-------------------+---------+---------+-------------\n + regress_tblspace | regress_zeropriv_owner | pg_tblspc/16385 | (none) | | 0 bytes | \n +(1 row)\n\nThis test is not stable, since it contains the OID of the tablespace, which\nis different every time.\n\n +ALTER DATABASE :\"DBNAME\" OWNER TO CURRENT_USER;\n +REVOKE ALL ON DATABASE :\"DBNAME\" FROM CURRENT_USER, PUBLIC;\n +\\l :\"DBNAME\"\n + List of databases\n + Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges \n +------------+------------------------+-----------+-----------------+---------+-------+------------+-----------+-------------------\n + regression | regress_zeropriv_owner | SQL_ASCII | libc | C | C | | | (none)\n +(1 row)\n\nThis test is also not stable, since it depends on the locale definition\nof the regression test database. If you use \"make installcheck\", that could\nbe a different locale.\n\nI think that these tests are not absolutely necessary, and the other tests\nare sufficient. Consequently, I took the simple road of removing them.\n\nI also tried to improve the commit message.\n\nPatch attached.\n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 23 Oct 2023 11:40:02 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Monday, October 23, 2023, Laurenz Albe <[email protected]> wrote:\n\n>\n>         --- a/src/bin/psql/describe.c\n>         +++ b/src/bin/psql/describe.c\n>         @@ -6718,7 +6680,13 @@ static void\n>         printACLColumn(PQExpBuffer buf, const char *colname)\n>         {\n>                 appendPQExpBuffer(buf,\n>         -                                          \"pg_catalog.array_to_string(%s, E'\\\\n') AS\n> \\\"%s\\\"\",\n>         +                                          \"CASE\\n\"\n>         +                                          \"  WHEN %s IS NULL THEN ''\\n\"\n>         +                                          \"  WHEN pg_catalog.cardinality(%s) = 0 THEN '%s'\\n\"\n>         +                                          \"  ELSE pg_catalog.array_to_string(%s, E'\\\\n')\\n\"\n>         +                                          \"END AS \\\"%s\\\"\",\n>         +                                          colname,\n>         +                                          colname, gettext_noop(\"(none)\"),\n>                                                colname, gettext_noop(\"Access privileges\"));\n>         }\n>\n> This erroneously displays NULL as empty string and subverts my changes.\n> I have removed the first branch of the CASE expression.\n>\n\nThere is no error here, the current consensus which this patch implements\nis to not change the documented “default privileges display as blank”.  We\nare solving the empty privileges are not distinguishable complaint by\nprinting (none) for them.\n\nDavid J.",
"msg_date": "Mon, 23 Oct 2023 07:03:54 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, 2023-10-23 at 07:03 -0700, David G. Johnston wrote:\n> On Monday, October 23, 2023, Laurenz Albe <[email protected]> wrote:\n> > \n> >         --- a/src/bin/psql/describe.c\n> >         +++ b/src/bin/psql/describe.c\n> >         @@ -6718,7 +6680,13 @@ static void\n> >         printACLColumn(PQExpBuffer buf, const char *colname)\n> >         {\n> >                 appendPQExpBuffer(buf,\n> >         -                                          \"pg_catalog.array_to_string(%s, E'\\\\n') AS \\\"%s\\\"\",\n> >         +                                          \"CASE\\n\"\n> >         +                                          \"  WHEN %s IS NULL THEN ''\\n\"\n> >         +                                          \"  WHEN pg_catalog.cardinality(%s) = 0 THEN '%s'\\n\"\n> >         +                                          \"  ELSE pg_catalog.array_to_string(%s, E'\\\\n')\\n\"\n> >         +                                          \"END AS \\\"%s\\\"\",\n> >         +                                          colname,\n> >         +                                          colname, gettext_noop(\"(none)\"),\n> >                                                colname, gettext_noop(\"Access privileges\"));\n> >         }\n> > \n> > This erroneously displays NULL as empty string and subverts my changes.\n> > I have removed the first branch of the CASE expression.\n> \n> There is no error here, the current consensus which this patch implements is to\n> not change the documented “default privileges display as blank”.  We are solving\n> the empty privileges are not distinguishable complaint by printing (none) for them.\n\nErik's latest patch included my changes to display NULL as NULL in psql,\nso that \"\\pset null\" works as expected.\nBut he left the above hunk from his original patch, and that hunk replaces\nNULL with an empty string, so \"\\pset null\" wouldn't work for the display\nof default privileges.\n\nHe didn't notice it because he didn't have a regression test that displays\ndefault privileges with non-empty \"\\pset null\".\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 23 Oct 2023 16:10:09 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Monday, October 23, 2023, Laurenz Albe <[email protected]> wrote:\n\n> On Mon, 2023-10-23 at 07:03 -0700, David G. Johnston wrote:\n> > On Monday, October 23, 2023, Laurenz Albe <[email protected]>\n> wrote:\n> > >\n> > > --- a/src/bin/psql/describe.c\n> > > +++ b/src/bin/psql/describe.c\n> > > @@ -6718,7 +6680,13 @@ static void\n> > > printACLColumn(PQExpBuffer buf, const char *colname)\n> > > {\n> > > appendPQExpBuffer(buf,\n> > > - \"pg_catalog.array_to_string(%s, E'\\\\n') AS\n> \\\"%s\\\"\",\n> > > + \"CASE\\n\"\n> > > + \" WHEN %s IS NULL THEN ''\\n\"\n> > > + \" WHEN pg_catalog.cardinality(%s) = 0 THEN\n> '%s'\\n\"\n> > > + \" ELSE pg_catalog.array_to_string(%s,\n> E'\\\\n')\\n\"\n> > > + \"END AS \\\"%s\\\"\",\n> > > + colname,\n> > > + colname, gettext_noop(\"(none)\"),\n> > > colname, gettext_noop(\"Access privileges\"));\n> > > }\n> > >\n> > > This erroneously displays NULL as empty string and subverts my changes.\n> > > I have removed the first branch of the CASE expression.\n> >\n> > There is no error here, the current consensus which this patch\n> implements is to\n> > not change the documented “default privileges display as blank”. We are\n> solving\n> > the empty privileges are not distinguishable complaint by printing\n> (none) for them.\n>\n> Erik's latest patch included my changes to display NULL as NULL in psql,\n> so that \"\\pset null\" works as expected.\n>\n\nNo, per the commit message, issuing an explicit \\pset null is a kludge and\nit gets rid of the hack in favor of making the query itself produce an\nempty string. i.e., we choose a poor implementation to get the documented\nbehavior and we are cleaning that up as an aside to the main fix.\n\nChanging the behavior so that default privileges print in correspondence to\nthe setting of \\pset null is, IME, off the table for this patch. 
Its one\nand only goal is to reliably distinguish empty and default privileges.\nThat is our extant bug.\n\nWe document default privileges print as an empty string - I do not think we\nshould change the definition to \"default privileges print null which can be\ncontrolled via \\pset null\", and regardless of preference doing so is not a\nbug.\n\nDavid J.",
"msg_date": "Mon, 23 Oct 2023 07:26:17 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> We document default privileges print as an empty string - I do not think we\n> should change the definition to \"default privileges print null which can be\n> controlled via \\pset null\", and regardless of preference doing so is not a\n> bug.\n\nWell, \"if it's documented it's not a bug\" is a defensible argument\nfor calling something not a bug, but it doesn't address the question\nof whether changing it would be an improvement. I think it would be,\nand I object to your claim that we have a consensus to not do that.\n\nThe core of the problem here, IMO, is exactly that there is confusion\nbetween whether a visibly empty string represents NULL (default\nprivileges) or an empty ACL (no privileges). I believe we have agreed\nthat printing \"(none)\" or some other clearly-not-an-ACL-entry string\nfor the second case is an improvement, even though that's a change\nin behavior. That doesn't mean that changing the other case can't\nalso be an improvement. I think it'd be less confusing all around\nif this instance of NULL prints like other instances of NULL.\n\nIOW, the current definition is \"NULL privileges print as an empty\nstring no matter what\", and I don't think that serves to reduce\nconfusion about whether an ACL is NULL or not. We ought to be doing\nwhat we can to make clear that such an ACL *is* NULL, because\notherwise people won't understand how it differs from an empty ACL.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Oct 2023 10:57:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, Oct 23, 2023 at 7:57 AM Tom Lane <[email protected]> wrote:\n\n>\n> IOW, the current definition is \"NULL privileges print as an empty\n> string no matter what\", and I don't think that serves to reduce\n> confusion about whether an ACL is NULL or not.  We ought to be doing\n> what we can to make clear that such an ACL *is* NULL, because\n> otherwise people won't understand how it differs from an empty ACL.\n>\n>\nI tend to prefer the argument that these views are for human consumption\nand should be designed with that in mind.  Allowing the user to decide\nwhether they prefer NULL to print as the empty string or something else\nworks within that framework.  And the change of behavior for everyone with\na non-default \\pset gets accepted under that framework as well.\n\nI would suggest that we make the expected presence of NULL as an input\nexplicit:\n\ncase when spcacl is null then null\n     when cardinality(spcacl) = 0 then '(none)' -- so as not to confuse\nit with null being printed also as an empty string\n     else array_to_string(spcacl, E'\\\\n')\nend as \"Access privileges\"\n\nI would offer up:\n\nwhen spcacl is null then '(default)'\n\nalong with not translating (none) and (default) and thus making the data\ncontents of these views environment independent.  But minimizing the\nvariance of these command's output across systems doesn't seem like a\ndesign goal that is likely to gain consensus and is excessive when viewed\nwithin the framework of these being only for human consumption.\n\nDavid J.",
"msg_date": "Mon, 23 Oct 2023 08:35:03 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, 2023-10-23 at 08:35 -0700, David G. Johnston wrote:\n> I tend to prefer the argument that these views are for human consumption and should\n> be designed with that in mind.\n\nTrue, although given the shape of ACLs, it takes a somewhat trained human to\nconsume the string representation.  But we won't be able to hide the fact that\ndefault ACLs are NULL in the catalogs.  We can leave them empty, we can show\nthem as \"(default)\" or we can let the user choose with \"\\pset null\".\n\n\n> I would suggest that we make the expected presence of NULL as an input explicit:\n> I would offer up:\n> \n> when spcacl is null then '(default)'\n\nNoted.\n\n> along with not translating (none) and (default) and thus making the data contents\n> of these views environment independent.  But minimizing the variance of these command's\n> output across systems doesn't seem like a design goal that is likely to gain consensus\n> and is excessive when viewed within the framework of these being only for human consumption.\n\nI didn't understand this completely.  You want default privileges displayed as\n\"(default)\", but are you for or against \"\\pset null\" to have its normal effect on\nthe output of backslash commands in all other cases?\n\nSpeaking of consensus, it seems to me that Tom, Erik and I are in consensus.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 23 Oct 2023 19:26:10 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Monday, October 23, 2023, Laurenz Albe <[email protected]> wrote:\n\n> On Mon, 2023-10-23 at 08:35 -0700, David G. Johnston wrote:\n>\n\n> > along with not translating (none) and (default) and thus making the data\n> contents\n> > of these views environment independent.  But minimizing the variance of\n> these command's\n> > output across systems doesn't seem like a design goal that is likely to\n> gain consensus\n> > and is excessive when viewed within the framework of these being only\n> for human consumption.\n>\n> I didn't understand this completely.  You want default privileges\n> displayed as\n> \"(default)\", but are you for or against \"\\pset null\" to have its\n> normal effect on\n> the output of backslash commands in all other cases?\n\n\nI haven’t inspected other cases but to my knowledge we don’t typically\nrepresent non-unknown things using NULL so I’m not expecting other places\nto have this representation problem.\n\nI don’t think any of our meta-command outputs should modify pset null.\nLeft join cases should be considered unknown, represented as NULL, and obey\nthe user’s setting.\n\nI do believe that we should be against exposing, like in this case, any\ninternal implementation detail that encodes something (e.g., default\nprivileges) as NULL in the catalogs, to the user of the psql meta-commands.\n\nI won’t argue that exposing such NULLS is wrong, just it would be preferable\nIME to avoid doing so.  NULL means unknown or not applicable and default\nprivileges are neither of those things.  I get why our catalogs choose such\nan encoding and agree with it, and users that find the need to consult the\ncatalogs will need to learn such details.  But we should strive for them to\nbe able to survive with psql meta-commands.\n\nDavid J.",
"msg_date": "Mon, 23 Oct 2023 11:37:13 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, 2023-10-23 at 11:37 -0700, David G. Johnston wrote:\n> > I didn't understand this completely. You want default privileges displayed as\n> > \"(default)\", but are you for or against \"\\pset null\" to have its normal effect on\n> > the output of backslash commands in all other cases?\n> \n> I haven’t inspected other cases but to my knowledge we don’t typically represent\n> non-unknown things using NULL so I’m not expecting other places to have this\n> representation problem.\n\nThe first example that comes to my mind is the \"ICU Locale\" and the \"ICU Rules\"\nin the output of \\l. There are many others.\n\n> I don’t think any of our meta-command outputs should modify pset null.\n> Left join cases should be considered unknown, represented as NULL, and obey the\n> user’s setting.\n\nThat's what I think too. psql output should respect \"\\pset null\".\nSo it looks like we agree on that.\n\n> I do believe that we should be against exposing, like in this case, any internal\n> implementation detail that encodes something (e.g., default privileges) as NULL\n> in the catalogs, to the user of the psql meta-commands.\n> \n> I won’t argue that exposing such NULLS is wrong, just it would preferable IME\n> to avoid doing so. NULL means unknown or not applicable and default privileges\n> are neither of those things. I get why our catalogs choose such an encoding and\n> agree with it, and users that find the need to consult the catalogs will need to\n> learn such details. 
But we should strive for them to be able to survive with\n> psql meta-commands.\n\nSure, it would be best to hide this implementation detail from the user.\nThe correct way to do that would be to fake an ACL entry like \"laurenz=arwdDxt/laurenz\"\nif there is a NULL in the catalog, but that would add a ton of special-case\ncode to psql, which does not look appealing at all.\n\nSo we cannot completely hide the implementation, but perhaps \"(default)\" would\nbe less confusing than a NULL value.\n\nIf everybody agrees, I can modify the patch to do that.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 24 Oct 2023 04:35:41 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Mon, 2023-10-23 at 11:37 -0700, David G. Johnston wrote:\n>> I do believe that we should be against exposing, like in this case, any internal\n>> implementation detail that encodes something (e.g., default privileges) as NULL\n>> in the catalogs, to the user of the psql meta-commands.\n\n> Sure, it would be best to hide this implementation detail from the user.\n> The correct way to do that would be to fake an ACL entry like \"laurenz=arwdDxt/laurenz\"\n> if there is a NULL in the catalog, but that would add a ton of special-case\n> code to psql, which does not look appealing at all.\n\nFor better or worse, that *is* the backend's catalog representation,\nand I don't think that psql would be doing our users a service by\ntrying to obscure the fact. They'd run into it anyway the moment\nthey look at the catalogs with anything but a \\d-something command.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Oct 2023 22:43:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Monday, October 23, 2023, Laurenz Albe <[email protected]> wrote:\n\n> On Mon, 2023-10-23 at 11:37 -0700, David G. Johnston wrote:\n> > > I didn't understand this completely. You want default privileges\n> displayed as\n> > > \"(default)\", but are you for or against \"\\pset null\" to have its\n> normal effect on\n> > > the output of backslash commands in all other cases?\n> >\n> > I haven’t inspected other cases but to my knowledge we don’t typically\n> represent\n> > non-unknown things using NULL so I’m not expecting other places to have\n> this\n> > representation problem.\n>\n> The first example that comes to my mind is the \"ICU Locale\" and the \"ICU\n> Rules\"\n> in the output of \\l. There are many others.\n\n\nBoth of those fall into “this null means there is no value for these\n(because we aren’t using icu)”. I have no qualms with leaving true nulls\nrepresented as themselves. Clean slate maybe I print “(not using icu)”\nthere instead of null but it isn’t worth the effort to change.\n\n>\n> > I won’t argue that exposing such NULLS is wrong, just it would\n> preferable IME\n> > to avoid doing so. NULL means unknown or not applicable and default\n> privileges\n> > are neither of those things. I get why our catalogs choose such an\n> encoding and\n> > agree with it, and users that find the need to consult the catalogs will\n> need to\n> > learn such details. 
But we should strive for them to be able to survive\n> with\n> > psql meta-commands.\n>\n> Sure, it would be best to hide this implementation detail from the user.\n> The correct way to do that would be to fake an ACL entry like\n> \"laurenz=arwdDxt/laurenz\"\n> if there is a NULL in the catalog, but that would add a ton of special-case\n> code to psql, which does not look appealing at all.\n\n\nMore generically it would be “[PUBLIC=]/???/postgres” and\n{OWNER}=???/postgres\n\nIt would ideally be a function call for psql and a system info function\nusable for anyone.\n\nDavid J.\n\nOn Monday, October 23, 2023, Laurenz Albe <[email protected]> wrote:On Mon, 2023-10-23 at 11:37 -0700, David G. Johnston wrote:\n> > I didn't understand this completely. You want default privileges displayed as\n> > \"(default)\", but are you for or against \"\\pset null\" to have its normal effect on\n> > the output of backslash commands in all other cases?\n> \n> I haven’t inspected other cases but to my knowledge we don’t typically represent\n> non-unknown things using NULL so I’m not expecting other places to have this\n> representation problem.\n\nThe first example that comes to my mind is the \"ICU Locale\" and the \"ICU Rules\"\nin the output of \\l. There are many others.Both of those fall into “this null means there is no value for these (because we aren’t using icu)”. I have no qualms with leaving true nulls represented as themselves. Clean slate maybe I print “(not using icu)” there instead of null but it isn’t worth the effort to change.\n> \n> I won’t argue that exposing such NULLS is wrong, just it would preferable IME\n> to avoid doing so. NULL means unknown or not applicable and default privileges\n> are neither of those things. I get why our catalogs choose such an encoding and\n> agree with it, and users that find the need to consult the catalogs will need to\n> learn such details. 
But we should strive for them to be able to survive with\n> psql meta-commands.\n\nSure, it would be best to hide this implementation detail from the user.\nThe correct way to do that would be to fake an ACL entry like \"laurenz=arwdDxt/laurenz\"\nif there is a NULL in the catalog, but that would add a ton of special-case\ncode to psql, which does not look appealing at all.More generically it would be “[PUBLIC=]/???/postgres” and {OWNER}=???/postgresIt would ideally be a function call for psql and a system info function usable for anyone.David J.",
"msg_date": "Mon, 23 Oct 2023 19:54:37 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Monday, October 23, 2023, Tom Lane <[email protected]> wrote:\n\n> Laurenz Albe <[email protected]> writes:\n> > On Mon, 2023-10-23 at 11:37 -0700, David G. Johnston wrote:\n> >> I do believe that we should be against exposing, like in this case, any\n> internal\n> >> implementation detail that encodes something (e.g., default privileges)\n> as NULL\n> >> in the catalogs, to the user of the psql meta-commands.\n>\n> > Sure, it would be best to hide this implementation detail from the user.\n> > The correct way to do that would be to fake an ACL entry like\n> \"laurenz=arwdDxt/laurenz\"\n> > if there is a NULL in the catalog, but that would add a ton of\n> special-case\n> > code to psql, which does not look appealing at all.\n>\n> For better or worse, that *is* the backend's catalog representation,\n> and I don't think that psql would be doing our users a service by\n> trying to obscure the fact. They'd run into it anyway the moment\n> they look at the catalogs with anything but a \\d-something command.\n>\n\nWhich many may never do, and those few that do will see immediately that\nthe catalog uses null where they expected to see “(default)” and realize we\nmade a presentational choice in the interests of readability and their\nquery will need to make a choice regarding the null and empty arrays as\nwell.\n\nDavid J.\n\nOn Monday, October 23, 2023, Tom Lane <[email protected]> wrote:Laurenz Albe <[email protected]> writes:\n> On Mon, 2023-10-23 at 11:37 -0700, David G. 
Johnston wrote:\n>> I do believe that we should be against exposing, like in this case, any internal\n>> implementation detail that encodes something (e.g., default privileges) as NULL\n>> in the catalogs, to the user of the psql meta-commands.\n\n> Sure, it would be best to hide this implementation detail from the user.\n> The correct way to do that would be to fake an ACL entry like \"laurenz=arwdDxt/laurenz\"\n> if there is a NULL in the catalog, but that would add a ton of special-case\n> code to psql, which does not look appealing at all.\n\nFor better or worse, that *is* the backend's catalog representation,\nand I don't think that psql would be doing our users a service by\ntrying to obscure the fact. They'd run into it anyway the moment\nthey look at the catalogs with anything but a \\d-something command.\nWhich many may never do, and those few that do will see immediately that the catalog uses null where they expected to see “(default)” and realize we made a presentational choice in the interests of readability and their query will need to make a choice regarding the null and empty arrays as well.David J.",
"msg_date": "Mon, 23 Oct 2023 20:01:56 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, 2023-10-23 at 22:43 -0400, Tom Lane wrote:\n> Laurenz Albe <[email protected]> writes:\n> > On Mon, 2023-10-23 at 11:37 -0700, David G. Johnston wrote:\n> > > I do believe that we should be against exposing, like in this case, any internal\n> > > implementation detail that encodes something (e.g., default privileges) as NULL\n> > > in the catalogs, to the user of the psql meta-commands.\n> \n> > Sure, it would be best to hide this implementation detail from the user.\n> > The correct way to do that would be to fake an ACL entry like \"laurenz=arwdDxt/laurenz\"\n> > if there is a NULL in the catalog, but that would add a ton of special-case\n> > code to psql, which does not look appealing at all.\n> \n> For better or worse, that *is* the backend's catalog representation,\n> and I don't think that psql would be doing our users a service by\n> trying to obscure the fact. They'd run into it anyway the moment\n> they look at the catalogs with anything but a \\d-something command.\n\n... for example with a client like pgAdmin, which is a frequent choice\nof many PostgreSQL beginners (they display empty privileges).\n\nYes, it is \"(default)\" or NULL. The former is friendlier for beginners,\nthe latter incurs less backward incompatibility.\n\nI could live with either solution, but I am still leaning towards NULL.\n\nI ran the regression tests with a patch that displays \"(default)\",\nand I counted 22 failures, excluding the one added by my patch.\nThe tests can of course be fixed, but perhaps that serves as a measure\nof the backward incompatibility.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 24 Oct 2023 08:34:42 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 10:46 AM Laurenz Albe <[email protected]> wrote:\n>\n> On Sat, 2023-10-21 at 04:29 +0200, Erik Wienhold wrote:\n> > The attached v3 of my initial patch\n> > does that. It also includes Laurenz' fix to no longer ignore \\pset null\n> > (minus the doc changes that suggest using \\pset null to distinguish\n> > between default and empty privileges because that's no longer needed).\n>\n> Thanks!\n>\n> I went over the patch, fixed some problems and added some more stuff from\n> my patch.\n>\n> In particular:\n>\n> --- a/doc/src/sgml/ddl.sgml\n> +++ b/doc/src/sgml/ddl.sgml\n> @@ -2353,7 +2353,9 @@ GRANT SELECT (col1), UPDATE (col1) ON mytable TO miriam_rw;\n> <para>\n> If the <quote>Access privileges</quote> column is empty for a given\n> object, it means the object has default privileges (that is, its\n> - privileges entry in the relevant system catalog is null). Default\n> + privileges entry in the relevant system catalog is null). The column shows\n> + <literal>(none)</literal> for empty privileges (that is, no privileges at\n> + all, even for the object owner — a rare occurrence). Default\n> privileges always include all privileges for the owner, and can include\n> some privileges for <literal>PUBLIC</literal> depending on the object\n> type, as explained above. The first <command>GRANT</command>\n>\n> This description of empty privileges is smack in the middle of describing\n> default privileges. 
I thought that was confusing and moved it to its\n> own paragraph.\n>\n> --- a/src/bin/psql/describe.c\n> +++ b/src/bin/psql/describe.c\n> @@ -6718,7 +6680,13 @@ static void\n> printACLColumn(PQExpBuffer buf, const char *colname)\n> {\n> appendPQExpBuffer(buf,\n> - \"pg_catalog.array_to_string(%s, E'\\\\n') AS \\\"%s\\\"\",\n> + \"CASE\\n\"\n> + \" WHEN %s IS NULL THEN ''\\n\"\n> + \" WHEN pg_catalog.cardinality(%s) = 0 THEN '%s'\\n\"\n> + \" ELSE pg_catalog.array_to_string(%s, E'\\\\n')\\n\"\n> + \"END AS \\\"%s\\\"\",\n> + colname,\n> + colname, gettext_noop(\"(none)\"),\n> colname, gettext_noop(\"Access privileges\"));\n> }\n>\n> This erroneously displays NULL as empty string and subverts my changes.\n> I have removed the first branch of the CASE expression.\n>\n> --- a/src/test/regress/expected/psql.out\n> +++ b/src/test/regress/expected/psql.out\n> @@ -6663,3 +6663,97 @@ DROP ROLE regress_du_role0;\n> DROP ROLE regress_du_role1;\n> DROP ROLE regress_du_role2;\n> DROP ROLE regress_du_admin;\n> +-- Test empty privileges.\n> +BEGIN;\n> +WARNING: there is already a transaction in progress\n>\n> This warning is caused by a pre-existing error in the regression test, which\n> forgot to close the transaction. 
I have added a COMMIT at the appropriate place.\n>\n> +ALTER TABLESPACE regress_tblspace OWNER TO CURRENT_USER;\n> +REVOKE ALL ON TABLESPACE regress_tblspace FROM CURRENT_USER;\n> +\\db+ regress_tblspace\n> + List of tablespaces\n> + Name | Owner | Location | Access privileges | Options | Size | Description\n> +------------------+------------------------+-----------------+-------------------+---------+---------+-------------\n> + regress_tblspace | regress_zeropriv_owner | pg_tblspc/16385 | (none) | | 0 bytes |\n> +(1 row)\n>\n> This test is not stable, since it contains the OID of the tablespace, which\n> is different every time.\n>\n> +ALTER DATABASE :\"DBNAME\" OWNER TO CURRENT_USER;\n> +REVOKE ALL ON DATABASE :\"DBNAME\" FROM CURRENT_USER, PUBLIC;\n> +\\l :\"DBNAME\"\n> + List of databases\n> + Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges\n> +------------+------------------------+-----------+-----------------+---------+-------+------------+-----------+-------------------\n> + regression | regress_zeropriv_owner | SQL_ASCII | libc | C | C | | | (none)\n> +(1 row)\n>\n> This test is also not stable, since it depends on the locale definition\n> of the regression test database. If you use \"make installcheck\", that could\n> be a different locale.\n>\n> I think that these tests are not absolutely necessary, and the other tests\n> are sufficient. Consequently, I took the simple road of removing them.\n>\n> I also tried to improve the commit message.\n>\n> Patch attached.\n\nI tested the Patch for the modified changes and it is working fine.\n\nThanks and regards,\nShubham Khanna.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 10:56:12 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Wed, 2023-11-08 at 10:56 +0530, Shubham Khanna wrote:\n> I tested the Patch for the modified changes and it is working fine.\n\nThanks for the review!\n\nI wonder how to proceed with this patch. The main disagreement is\nwhether default privileges should be displayed as NULL (less invasive,\nbut more confusing for beginners) or \"(default)\" (more invasive,\nbut nicer for beginners).\n\nDavid is for \"(default)\", Tom and me are for NULL, and I guess Erik\nwould also prefer \"(default)\", since that was how his original\npatch did it, IIRC. I think I could live with both solutions.\n\nKind of a stalemate. Who wants to tip the scales?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 08 Nov 2023 13:23:28 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-11-08 13:23 +0100, Laurenz Albe wrote:\n> I wonder how to proceed with this patch. The main disagreement is\n> whether default privileges should be displayed as NULL (less invasive,\n> but more confusing for beginners) or \"(default)\" (more invasive,\n> but nicer for beginners).\n\nAre there any reports from beginners being confused about default\nprivileges being NULL or being displayed as a blank string in psql?\nThis is usually resolved with a pointer to the docs if it comes up in\ndiscussions or the user makes the mental leap and checks the docs\nhimself. Both patches add some details to the docs to explain psql's\noutput.\n\n> David is for \"(default)\", Tom and me are for NULL, and I guess Erik\n> would also prefer \"(default)\", since that was how his original\n> patch did it, IIRC. I think I could live with both solutions.\n>\n> Kind of a stalemate. Who wants to tip the scales?\n\nYes I had a slight preference for my patch but I'd go with yours (\\pset\nnull) now. I followed the discussion after my last mail but had nothing\nmore to add that wasn't already said. Tom then wrote that NULL is the\ncatalog's representation for the default privileges and obscuring that\nfact in psql is not doing any service to the users. This convinced me\nbecause users may have to deal with aclitem[] being NULL anyway at some\npoint if they need to check privileges in more detail. So it makes\nabsolutely sense that psql is transparent about that.\n\n-- \nErik\n\n\n",
"msg_date": "Thu, 9 Nov 2023 03:40:24 +0100",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Thu, 2023-11-09 at 03:40 +0100, Erik Wienhold wrote:\n> On 2023-11-08 13:23 +0100, Laurenz Albe wrote:\n> > I wonder how to proceed with this patch. The main disagreement is\n> > whether default privileges should be displayed as NULL (less invasive,\n> > but more confusing for beginners) or \"(default)\" (more invasive,\n> > but nicer for beginners).\n> \n> Are there any reports from beginners being confused about default\n> privileges being NULL or being displayed as a blank string in psql?\n> This is usually resolved with a pointer to the docs if it comes up in\n> discussions or the user makes the mental leap and checks the docs\n> himself. Both patches add some details to the docs to explain psql's\n> output.\n\nRight.\n\n> > David is for \"(default)\", Tom and me are for NULL, and I guess Erik\n> > would also prefer \"(default)\", since that was how his original\n> > patch did it, IIRC. I think I could live with both solutions.\n> > \n> > Kind of a stalemate. Who wants to tip the scales?\n> \n> Yes I had a slight preference for my patch but I'd go with yours (\\pset\n> null) now. I followed the discussion after my last mail but had nothing\n> more to add that wasn't already said. Tom then wrote that NULL is the\n> catalog's representation for the default privileges and obscuring that\n> fact in psql is not doing any service to the users. This convinced me\n> because users may have to deal with aclitem[] being NULL anyway at some\n> point if they need to check privileges in more detail. So it makes\n> absolutely sense that psql is transparent about that.\n\nThanks for the feedback. I'll set the patch to \"ready for committer\" then.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 09 Nov 2023 08:38:43 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "Laurenz Albe <[email protected]> writes:\n> Thanks for the feedback. I'll set the patch to \"ready for committer\" then.\n\nSo, just to clarify, we're settling on your v4 from [1]?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n\n",
"msg_date": "Thu, 09 Nov 2023 14:19:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-11-09 20:19 +0100, Tom Lane wrote:\n> Laurenz Albe <[email protected]> writes:\n> > Thanks for the feedback. I'll set the patch to \"ready for committer\" then.\n> \n> So, just to clarify, we're settling on your v4 from [1]?\n> \n> [1] https://www.postgresql.org/message-id/[email protected]\n\nYes from my side.\n\n-- \nErik\n\n\n",
"msg_date": "Mon, 13 Nov 2023 11:27:04 +0100",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, 2023-11-13 at 11:27 +0100, Erik Wienhold wrote:\n> On 2023-11-09 20:19 +0100, Tom Lane wrote:\n> > Laurenz Albe <[email protected]> writes:\n> > > Thanks for the feedback. I'll set the patch to \"ready for committer\" then.\n> > \n> > So, just to clarify, we're settling on your v4 from [1]?\n> > \n> > [1] https://www.postgresql.org/message-id/[email protected]\n> \n> Yes from my side.\n\n+1\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 13 Nov 2023 20:36:17 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 12:36 PM Laurenz Albe <[email protected]>\nwrote:\n\n> On Mon, 2023-11-13 at 11:27 +0100, Erik Wienhold wrote:\n> > On 2023-11-09 20:19 +0100, Tom Lane wrote:\n> > > Laurenz Albe <[email protected]> writes:\n> > > > Thanks for the feedback. I'll set the patch to \"ready for\n> committer\" then.\n> > >\n> > > So, just to clarify, we're settling on your v4 from [1]?\n> > >\n> > > [1]\n> https://www.postgresql.org/message-id/[email protected]\n> >\n> > Yes from my side.\n>\n> +1\n>\n>\n+0.5 for the reasons already stated; but I get and accept the argument for\nNULL.\n\nI will reiterate my preference for writing an explicit IS NULL branch in\nthe case expression instead of relying upon the strict-ness of\narray_to_string.\n\n+ \"CASE\\n\"\n WHEN %s IS NULL THEN NULL\n+ \" WHEN pg_catalog.cardinality(%s) = 0 THEN '%s'\\n\"\n+ \" ELSE pg_catalog.array_to_string(%s, E'\\\\n')\\n\"\n+ \"END AS \\\"%s\\\"\",\n\nDavid J.\n\nOn Mon, Nov 13, 2023 at 12:36 PM Laurenz Albe <[email protected]> wrote:On Mon, 2023-11-13 at 11:27 +0100, Erik Wienhold wrote:\n> On 2023-11-09 20:19 +0100, Tom Lane wrote:\n> > Laurenz Albe <[email protected]> writes:\n> > > Thanks for the feedback. I'll set the patch to \"ready for committer\" then.\n> > \n> > So, just to clarify, we're settling on your v4 from [1]?\n> > \n> > [1] https://www.postgresql.org/message-id/[email protected]\n> \n> Yes from my side.\n\n+1+0.5 for the reasons already stated; but I get and accept the argument for NULL.I will reiterate my preference for writing an explicit IS NULL branch in the case expression instead of relying upon the strict-ness of array_to_string.+\t\t\t\t\t \"CASE\\n\" WHEN %s IS NULL THEN NULL+\t\t\t\t\t \" WHEN pg_catalog.cardinality(%s) = 0 THEN '%s'\\n\"+\t\t\t\t\t \" ELSE pg_catalog.array_to_string(%s, E'\\\\n')\\n\"+\t\t\t\t\t \"END AS \\\"%s\\\"\",David J.",
"msg_date": "Mon, 13 Nov 2023 12:44:21 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Mon, Nov 13, 2023 at 12:36 PM Laurenz Albe <[email protected]>\n> wrote:\n>> On Mon, 2023-11-13 at 11:27 +0100, Erik Wienhold wrote:\n>>> On 2023-11-09 20:19 +0100, Tom Lane wrote:\n>>>> So, just to clarify, we're settling on your v4 from [1]?\n\n>>> Yes from my side.\n\n>> +1\n\n> +0.5 for the reasons already stated; but I get and accept the argument for\n> NULL.\n\nPatch pushed with minor adjustments, mainly rewriting some comments.\n\nOne notable change is that I dropped the newline whitespace printed\nby printACLColumn. That was contrary to the policy expressed in the\nfunction's comment, and IMO it made -E output look worse not better.\nThe problem is that the calling code determines the indentation\nthat this targetlist item should have, and we don't want to outdent\nfrom that. I think it's better to make it one line, even though\nthat will run a bit over 80 columns.\n\nI also got rid of the use of a created superuser in the test case.\nThe test seems pretty duplicative to me anyway, so let's just not\ntest the object types that need superuser.\n\n> I will reiterate my preference for writing an explicit IS NULL branch in\n> the case expression instead of relying upon the strict-ness of\n> array_to_string.\n\nMeh. We were relying on that already, and it wasn't a problem.\nI might have done it, except that it'd have made the one line\neven longer and harder to read (and slower to execute, probably).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Nov 2023 15:49:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On Mon, 2023-11-13 at 15:49 -0500, Tom Lane wrote:\n> Patch pushed with minor adjustments, mainly rewriting some comments.\n\nThank you!\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 14 Nov 2023 05:40:14 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix output of zero privileges in psql"
},
{
"msg_contents": "On 2023-11-13 21:49 +0100, Tom Lane wrote:\n> Patch pushed with minor adjustments, mainly rewriting some comments.\n\nThanks a lot!\n\n-- \nErik\n\n\n",
"msg_date": "Tue, 14 Nov 2023 18:17:43 +0100",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix output of zero privileges in psql"
}
]
[
{
"msg_contents": "The docs for `to_regtype()` say, “this function will return NULL rather than throwing an error if the name is not found.” And it’s true most of the time:\n\ndavid=# select to_regtype('foo'), to_regtype('clam');\n to_regtype | to_regtype\n------------+------------\n [null] | [null]\n\nBut not others:\n\ndavid=# select to_regtype('inteval second');\nERROR: syntax error at or near \"second\"\nLINE 1: select to_regtype('inteval second');\n ^\nCONTEXT: invalid type name \"inteval second”\n\nI presume this has something to do with not catching errors from the parser?\n\ndavid=# select to_regtype('clam bake');\nERROR: syntax error at or near \"bake\"\nLINE 1: select to_regtype('clam bake');\n ^\nCONTEXT: invalid type name \"clam bake\"\n\nBest,\n\nDavid",
"msg_date": "Sun, 17 Sep 2023 18:13:56 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "to_regtype() Raises Error"
},
{
"msg_contents": "On 18/09/2023 00:13 CEST David E. Wheeler <[email protected]> wrote:\n\n> The docs for `to_regtype()` say, “this function will return NULL rather than\n> throwing an error if the name is not found.” And it’s true most of the time:\n>\n> david=# select to_regtype('foo'), to_regtype('clam');\n> to_regtype | to_regtype\n> ------------+------------\n> [null] | [null]\n>\n> But not others:\n>\n> david=# select to_regtype('inteval second');\n> ERROR: syntax error at or near \"second\"\n> LINE 1: select to_regtype('inteval second');\n> ^\n> CONTEXT: invalid type name \"inteval second”\n\nProbably a typo and you meant 'interval second' which works.\n\n> I presume this has something to do with not catching errors from the parser?\n>\n> david=# select to_regtype('clam bake');\n> ERROR: syntax error at or near \"bake\"\n> LINE 1: select to_regtype('clam bake');\n> ^\n> CONTEXT: invalid type name \"clam bake\"\n\nDouble-quoting the type name to treat it as an identifier works:\n\n\ttest=# select to_regtype('\"clam bake\"');\n\t to_regtype\n\t------------\n\t <NULL>\n\t(1 row)\n\nSo it's basically a matter of keywords vs. identifiers.\n\n--\nErik\n\n\n",
"msg_date": "Mon, 18 Sep 2023 00:41:01 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "On 9/18/23 00:41, Erik Wienhold wrote:\n> On 18/09/2023 00:13 CEST David E. Wheeler <[email protected]> wrote:\n> \n>> The docs for `to_regtype()` say, “this function will return NULL rather than\n>> throwing an error if the name is not found.” And it’s true most of the time:\n>>\n>> david=# select to_regtype('foo'), to_regtype('clam');\n>> to_regtype | to_regtype\n>> ------------+------------\n>> [null] | [null]\n>>\n>> But not others:\n>>\n>> david=# select to_regtype('inteval second');\n>> ERROR: syntax error at or near \"second\"\n>> LINE 1: select to_regtype('inteval second');\n>> ^\n>> CONTEXT: invalid type name \"inteval second”\n> \n> Probably a typo and you meant 'interval second' which works.\nNo, that is precisely the point. The result should be null instead of \nan error.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Mon, 18 Sep 2023 00:57:04 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "Vik Fearing <[email protected]> writes:\n> No, that is precisely the point. The result should be null instead of \n> an error.\n\nYeah, ideally so, but the cost/benefit of making it happen seems\npretty unattractive for now. See the soft-errors thread at [1],\nparticularly [2] (but searching in that thread for references to\nregclassin, regtypein, and to_reg* will find additional detail).\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/3bbbb0df-7382-bf87-9737-340ba096e034%40postgrespro.ru\n[2] https://www.postgresql.org/message-id/3342239.1671988406%40sss.pgh.pa.us\n\n\n",
"msg_date": "Sun, 17 Sep 2023 19:28:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "On 18/09/2023 00:57 CEST Vik Fearing <[email protected]> wrote:\n\n> On 9/18/23 00:41, Erik Wienhold wrote:\n> > On 18/09/2023 00:13 CEST David E. Wheeler <[email protected]> wrote:\n> >\n> >> david=# select to_regtype('inteval second');\n> >> ERROR: syntax error at or near \"second\"\n> >> LINE 1: select to_regtype('inteval second');\n> >> ^\n> >> CONTEXT: invalid type name \"inteval second”\n> >\n> > Probably a typo and you meant 'interval second' which works.\n>\n> No, that is precisely the point. The result should be null instead of\n> an error.\n\nWell, the docs say \"return NULL rather than throwing an error if the name is\nnot found\". To me \"name is not found\" implies that it has to be valid syntax\nfirst to even have a name that can be looked up.\n\nString 'inteval second' is a syntax error when interpreted as a type name.\nThe same when I want to create a table with that typo:\n\n\ttest=# create table t (a inteval second);\n\tERROR: syntax error at or near \"second\"\n\tLINE 1: create table t (a inteval second);\n\nAnd a custom function is always an option:\n\n\tcreate function to_regtype_lax(name text)\n\t returns regtype\n\t language plpgsql\n\t as $$\n\tbegin\n\t return to_regtype(name);\n\texception\n\t when others then\n\t return null;\n\tend\n\t$$;\n\n\ttest=# select to_regtype_lax('inteval second');\n\t to_regtype_lax\n\t----------------\n\t <NULL>\n\t(1 row)\n\n--\nErik\n\n\n",
"msg_date": "Mon, 18 Sep 2023 01:33:07 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "On Sep 17, 2023, at 19:28, Tom Lane <[email protected]> wrote:\n\n>> No, that is precisely the point. The result should be null instead of\n>> an error.\n> \n> Yeah, ideally so, but the cost/benefit of making it happen seems\n> pretty unattractive for now. See the soft-errors thread at [1],\n> particularly [2] (but searching in that thread for references to\n> regclassin, regtypein, and to_reg* will find additional detail).\n\nFor my purposes I’m wrapping it in an exception-catching PL/pgSQL function, but it might be worth noting the condition in which it *will* raise an error on the docs.\n\nBest,\n\nDavid",
"msg_date": "Sun, 17 Sep 2023 19:37:58 -0400",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "On Mon, 2023-09-18 at 00:57 +0200, Vik Fearing wrote:\n> On 9/18/23 00:41, Erik Wienhold wrote:\n> > On 18/09/2023 00:13 CEST David E. Wheeler <[email protected]> wrote:\n> > > The docs for `to_regtype()` say, “this function will return NULL rather than\n> > > throwing an error if the name is not found.” And it’s true most of the time:\n> > > \n> > > david=# select to_regtype('foo'), to_regtype('clam');\n> > > to_regtype | to_regtype\n> > > ------------+------------\n> > > [null] | [null]\n> > > \n> > > But not others:\n> > > \n> > > david=# select to_regtype('inteval second');\n> > > ERROR: syntax error at or near \"second\"\n> > > LINE 1: select to_regtype('inteval second');\n> > > ^\n> > > CONTEXT: invalid type name \"inteval second”\n> > \n> > Probably a typo and you meant 'interval second' which works.\n> No, that is precisely the point. The result should be null instead of \n> an error.\n\nRight. I debugged into this, and found this comment to typeStringToTypeName():\n\n * If the string cannot be parsed as a type, an error is raised,\n * unless escontext is an ErrorSaveContext node, in which case we may\n * fill that and return NULL. But note that the ErrorSaveContext option\n * is mostly aspirational at present: errors detected by the main\n * grammar, rather than here, will still be thrown.\n\n\"escontext\" is an ErrorSaveContext node, and it is the parser failing.\n\nNot sure if we can do anything about that or if it is worth the effort.\n\nPerhaps the documentation could reflect the implementation.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 18 Sep 2023 01:49:22 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
    "msg_contents": "On Sun, Sep 17, 2023 at 5:34 PM Erik Wienhold <[email protected]> wrote:\n\n> On 18/09/2023 00:57 CEST Vik Fearing <[email protected]> wrote:\n>\n> > On 9/18/23 00:41, Erik Wienhold wrote:\n> > > On 18/09/2023 00:13 CEST David E. Wheeler <[email protected]>\n wrote:\n> > >\n> > >> david=# select to_regtype('inteval second');\n> > >> ERROR: syntax error at or near \"second\"\n> > >> LINE 1: select to_regtype('inteval second');\n> > >> ^\n> > >> CONTEXT: invalid type name \"inteval second”\n> > >\n> > > Probably a typo and you meant 'interval second' which works.\n> >\n> > No, that is precisely the point. The result should be null instead of\n> > an error.\n>\n> Well, the docs say \"return NULL rather than throwing an error if the name\n> is\n> not found\".\n\n\n\n> To me \"name is not found\" implies that it has to be valid syntax\n> first to even have a name that can be looked up.\n>\n\nExcept there is nothing in the typed literal value that is actually a\nsyntactical problem from the perspective of the user. IOW, the following\nwork just fine:\n\nselect to_regtype('character varying'), to_regtype('interval second');\n\nNo need for quotes and the space doesn't produce an issue (and in fact\nadding double quotes to the above causes them to not match since the\nquoting is taken literally and not syntactically)\n\nThe failure to return NULL exposes an implementation detail that we\nshouldn't be exposing. As Tom said, maybe doing better is too hard to be\nworthwhile, but that doesn't mean our current behavior is somehow correct.\n\nPut differently, there is no syntax involved when the value being provided\nis the text literal name of a type as it is stored in pg_type.typname, so\nthe presence of a syntax error is wrong.\n\nDavid J.",
"msg_date": "Sun, 17 Sep 2023 17:58:48 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
    "msg_contents": "On 2023-09-17 20:58, David G. Johnston wrote:\n> Put differently, there is no syntax involved when the value being \n> provided\n> is the text literal name of a type as it is stored in pg_type.typname, \n> so\n> the presence of a syntax error is wrong.\n\nWell, the situation is a little weirder than that, because of the \nexistence\nof SQL standard types with multiple-token names; when you provide the\nvalue 'character varying', you are not providing a name found in\npg_type.typname (while, if you call the same type 'varchar', you are).\nFor 'character varying', the parser is necessarily involved.\n\nThe case with 'interval second' is similar, but even different still;\nthat isn't a multiple-token type name, but a type name with a\nstandard-specified bespoke way of writing a typmod. Another place\nthe parser is necessarily involved, doing another job. (AFAICT,\nto_regtype is happy with a typmod attached to the input, and\nhappily ignores it, so to_regtype('interval second') gives you\ninterval, to_regtype('character varying(20)') gives you\ncharacter varying, and so on.)\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Sun, 17 Sep 2023 21:24:59 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
    "msg_contents": "On Sun, Sep 17, 2023 at 6:25 PM Chapman Flack <[email protected]> wrote:\n\n> On 2023-09-17 20:58, David G. Johnston wrote:\n> > Put differently, there is no syntax involved when the value being\n> > provided\n> > is the text literal name of a type as it is stored in pg_type.typname,\n> > so\n> > the presence of a syntax error is wrong.\n>\n> Well, the situation is a little weirder than that, because of the\n> existence\n> of SQL standard types with multiple-token names; when you provide the\n> value 'character varying', you are not providing a name found in\n> pg_type.typname (while, if you call the same type 'varchar', you are).\n> For 'character varying', the parser is necessarily involved.\n>\n\nWhy don't we just populate pg_type with these standard mandated names too?\n\n\n>\n> The case with 'interval second' is similar, but even different still;\n> that isn't a multiple-token type name, but a type name with a\n> standard-specified bespoke way of writing a typmod. Another place\n> the parser is necessarily involved, doing another job. (AFAICT,\n> to_regtype is happy with a typmod attached to the input, and\n> happily ignores it, so to_regtype('interval second') gives you\n> interval, to_regtype('character varying(20)') gives you\n> character varying, and so on.)\n>\n>\nSeems doable to teach the lookup code that suffixes of the form (n) should\nbe ignored when matching the base type, plus maybe some kind of special\ncase for standard mandated typmods on their specific types. There is some\nambiguity possible when doing that though:\n\ncreate type \"interval second\" as (x int, y int);\nselect to_regtype('interval second'); --> interval\n\nOr just write a function that deals with the known forms dictated by the\nstandard and delegate checking for valid names against that first before\nconsulting pg_type? It might be some code duplication but it isn't like\nit's a quickly moving target and I have to imagine it would be faster and\nallow us to readily implement the soft-error contract.\n\nDavid J.",
"msg_date": "Sun, 17 Sep 2023 18:58:13 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "On 2023-09-17 21:58, David G. Johnston wrote:\n> ambiguity possible when doing that though:\n> \n> create type \"interval second\" as (x int, y int);\n> select to_regtype('interval second'); --> interval\n\nNot ambiguity really: that composite type you just made was named\nwith a single <delimited identifier>, which is one token. (Also,\nbeing delimited makes it case-sensitive, and always distinct from\nan SQL keyword; consider the different types char and \"char\". Ah,\nthat SQL committee!)\n\nThe argument to regtype there is a single, case-insensitive,\n<regular identifier>, a <separator>, and another <regular identifier>,\nwhere in this case the first identifier happens to name a type, the\nsecond one happens to be a typmod, and the separator is rather\nsimple as <separator> goes.\n\nIn this one, both identifiers are part of the type name, and the\nseparator a little more flamboyant.\n\nselect to_regtype('character /* hi!\nam I part of the type name? /* what, me too? */ ok! */ -- huh!\nvarying');\n to_regtype\n-------------------\n character varying\n\nAs the backend already has one parser that knows all those\nlexical and grammar productions, I don't imagine it would be\nvery appealing to have a second implementation of some of them.\nObviously, to_regtype could add some simplifying requirements\n(like \"only whitespace for the separator please\"), but as you\nsee above, it currently doesn't.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Sun, 17 Sep 2023 22:44:13 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
    "msg_contents": "On Sunday, September 17, 2023, Chapman Flack <[email protected]> wrote:\n\n>\n> In this one, both identifiers are part of the type name, and the\n> separator a little more flamboyant.\n>\n> select to_regtype('character /* hi!\n> am I part of the type name? /* what, me too? */ ok! */ -- huh!\n> varying');\n> to_regtype\n> -------------------\n> character varying\n>\n\nSo, maybe we should be saying:\n\nParses a string of text, extracts a potential type name from it, and\ntranslates that name into an OID. Failure to extract a valid potential\ntype name results in an error while a failure to determine that the\nextracted name is known to the system results in a null output.\n\nI take specific exception to describing your example as a “textual type\nname”.\n\nDavid J.",
"msg_date": "Sun, 17 Sep 2023 19:58:44 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "Hey there, coming back to this. I poked at the logs in the master branch and saw no mention of to_regtype; did I miss it?\n\nOn Sep 17, 2023, at 10:58 PM, David G. Johnston <[email protected]> wrote:\n\n> Parses a string of text, extracts a potential type name from it, and translates that name into an OID. Failure to extract a valid potential type name results in an error while a failure to determine that the extracted name is known to the system results in a null output.\n> \n> I take specific exception to describing your example as a “textual type name”.\n\nMore docs seem like a reasonable compromise. Perhaps it’d be useful to also describe when an error is likely and when it’s not.\n\nBest,\n\nDavid\n\n\n\n",
"msg_date": "Mon, 29 Jan 2024 10:45:01 -0500",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
    "msg_contents": "On Mon, Jan 29, 2024 at 8:45 AM David E. Wheeler <[email protected]>\nwrote:\n\n> Hey there, coming back to this. I poked at the logs in the master branch\n> and saw no mention of to_regtype; did I miss it?\n>\n\nWith no feedback regarding my final suggestion I lost interest in it and\nnever produced a patch.\n\n\n> On Sep 17, 2023, at 10:58 PM, David G. Johnston <\n> [email protected]> wrote:\n>\n> > Parses a string of text, extracts a potential type name from it, and\n> translates that name into an OID. Failure to extract a valid potential\n> type name results in an error while a failure to determine that the\n> extracted name is known to the system results in a null output.\n> >\n> > I take specific exception to describing your example as a “textual type\n> name”.\n>\n> More docs seem like a reasonable compromise. Perhaps it’d be useful to\n> also describe when an error is likely and when it’s not.\n>\n>\nSeems like most just want to leave well enough alone and deal with the rare\nquestion for oddball input on the mailing list. If you are interested\nenough to come back after 4 months I'd suggest you write up and submit a\npatch. I'm happy to review it and see if that is enough to get a committer\nto respond.\n\nDavid J.",
"msg_date": "Fri, 2 Feb 2024 13:23:48 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "On Feb 2, 2024, at 3:23 PM, David G. Johnston <[email protected]> wrote:\n\n> Seems like most just want to leave well enough alone and deal with the rare question for oddball input on the mailing list. If you are interested enough to come back after 4 months I'd suggest you write up and submit a patch. I'm happy to review it and see if that is enough to get a committer to respond.\n\nLOL, “interested enough” is less the right term than “triaging email backlog and following up on a surprisingly controversial question.” I also just like to see decisions made and issues closed one way or another.\n\nAnyway, I’m happy to submit a documentation patch along the lines you suggested.\n\nD\n\n\n\n",
"msg_date": "Fri, 2 Feb 2024 15:33:51 -0500",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "On Feb 2, 2024, at 15:33, David E. Wheeler <[email protected]> wrote:\n\n> Anyway, I’m happy to submit a documentation patch along the lines you suggested.\n\nHow’s this?\n\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -25460,11 +25460,12 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n <returnvalue>regtype</returnvalue>\n </para>\n <para>\n- Translates a textual type name to its OID. A similar result is\n+ Parses a string of text, extracts a potential type name from it, and\n+ translates that name into an OID. A similar result is\n obtained by casting the string to type <type>regtype</type> (see\n- <xref linkend=\"datatype-oid\"/>); however, this function will return\n- <literal>NULL</literal> rather than throwing an error if the name is\n- not found.\n+ <xref linkend=\"datatype-oid\"/>). Failure to extract a valid potential\n+ type name results in an error; however, if the extracted names is not\n+ known to the system, this function will return <literal>NULL</literal>.\n </para></entry>\n </row>\n </tbody>\n\nDoes similar wording need to apply to other `to_reg*` functions?\n\nBest,\n\nDavid",
"msg_date": "Sun, 4 Feb 2024 14:20:38 -0500",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "On 2024-02-04 20:20 +0100, David E. Wheeler wrote:\n> On Feb 2, 2024, at 15:33, David E. Wheeler <[email protected]> wrote:\n> \n> > Anyway, I’m happy to submit a documentation patch along the lines you suggested.\n> \n> How’s this?\n> \n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -25460,11 +25460,12 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> <returnvalue>regtype</returnvalue>\n> </para>\n> <para>\n> - Translates a textual type name to its OID. A similar result is\n> + Parses a string of text, extracts a potential type name from it, and\n> + translates that name into an OID. A similar result is\n> obtained by casting the string to type <type>regtype</type> (see\n> - <xref linkend=\"datatype-oid\"/>); however, this function will return\n> - <literal>NULL</literal> rather than throwing an error if the name is\n> - not found.\n> + <xref linkend=\"datatype-oid\"/>). Failure to extract a valid potential\n> + type name results in an error; however, if the extracted names is not\n\nHere \"extracted names\" should be \"extracted name\" (singular).\nOtherwise, the text looks good.\n\n> + known to the system, this function will return <literal>NULL</literal>.\n> </para></entry>\n> </row>\n> </tbody>\n> \n> Does similar wording need to apply to other `to_reg*` functions?\n\nJust to_regtype() is fine IMO. The other to_reg* functions don't throw\nerrors on similar input, e.g.:\n\n\ttest=> select to_regproc('foo bar');\n\t to_regproc\n\t------------\n\t <NULL>\n\t(1 row)\n\n-- \nErik\n\n\n",
"msg_date": "Mon, 5 Feb 2024 01:11:48 +0100",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "On Feb 4, 2024, at 19:11, Erik Wienhold <[email protected]> wrote:\n\n> Here \"extracted names\" should be \"extracted name\" (singular).\n> Otherwise, the text looks good.\n\nAh, thank you. Updated patch attached.\n\nBest,\n\nDavid",
"msg_date": "Mon, 5 Feb 2024 09:01:12 -0500",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "\nOn Feb 5, 2024, at 09:01, David E. Wheeler <[email protected]> wrote:\n\n> Ah, thank you. Updated patch attached.\n\nI’ve moved this patch into the to_regtype patch thread[1], since it exhibits the same behavior.\n\nBest,\n\nDavid\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 09:48:15 -0500",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "Merged this change into the [to_regtypemod patch](https://commitfest.postgresql.org/47/4807/), which has exactly the same issue.\n\nThe new status of this patch is: Needs review\n",
"msg_date": "Wed, 21 Feb 2024 16:54:32 +0000",
"msg_from": "David Wheeler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to_regtype() Raises Error"
},
{
"msg_contents": "On Feb 21, 2024, at 11:54 AM, David Wheeler <[email protected]> wrote:\n\n> Merged this change into the [to_regtypemod patch](https://commitfest.postgresql.org/47/4807/), which has exactly the same issue.\n> \n> The new status of this patch is: Needs review\n\nBah, withdrawn.\n\nD\n\n\n\n",
"msg_date": "Wed, 21 Feb 2024 11:56:49 -0500",
"msg_from": "\"David E. Wheeler\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: to_regtype() Raises Error"
}
] |
[
{
    "msg_contents": "Hello postgres hackers:\nI recently noticed that function \"cursor_to_xmlschema\" can lead to a crash if the\ncursor parameter points to the query itself. Here is an example:\n\npostgres=# SELECT cursor_to_xmlschema('' :: refcursor, TRUE , FALSE , 'xxx' ) into temp;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\nThe reason could be that this function doesn't ensure the cursor is correctly\nopened, as a \"select into\" statement can't be opened as a cursor. Although it may\nbe challenging to perform a perfect check in this scenario, it seems sufficient\njust to check the tuple descriptor of the portal, since only a query that\nreturns tuples can be opened as a cursor.\n\nIn my opinion, self-pointing cursors like this do not make practical sense.\nThis bug was discovered through randomly generated SQL statements.\n\nBest regards,\nBoyu Yang",
"msg_date": "Mon, 18 Sep 2023 13:00:59 +0800",
"msg_from": "\"=?UTF-8?B?5p2o5Lyv5a6HKOmVv+Wggik=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?ZnVuY3Rpb24gImN1cnNvcl90b194bWxzY2hlbWEiIGNhdXNlcyBhIGNyYXNo?="
},
{
"msg_contents": "\"=?UTF-8?B?5p2o5Lyv5a6HKOmVv+Wggik=?=\" <[email protected]> writes:\n> I recently notice that function \"cursor_to_xmlschema\" can lead to a crash if the\n> cursor parameter points to the query itself. Here is an example:\n\n> postgres=# SELECT cursor_to_xmlschema('' :: refcursor, TRUE , FALSE , 'xxx' ) into temp;\n> server closed the connection unexpectedly\n\nHmm, yeah, it's blindly assuming that whatever portal you point it at\nmust have a result tupdesc, which in general PORTAL_MULTI_QUERY\nportals wouldn't.\n\nI always ask myself \"where else did we make the same mistake?\".\nTrawling through the callers of SPI_cursor_find and GetPortalByName,\nI couldn't find any other places that might get a hard failure.\nThere are some where you might get weird error messages, eg\n\n\tregression=# select cursor_to_xml('',1,false,false,'');\n\tERROR: portal \"\" cannot be run\n\nbut maybe that's good enough. The thing that is disturbing is that\nSPI_cursor_find is a documented entry point, and the documentation\ndoesn't warn that what you get back might not be a cursor-like\nobject, so it seems like external callers might have this bug too.\n\nI thought about having SPI_cursor_find refuse to return things that\naren't cursors, but I see that plpgsql's exec_stmt_open and\nexec_stmt_forc use SPI_cursor_find just to check for a duplicate\ncursor name. There, it seems like we don't want any additional\nfiltering, because any portal will create a name conflict whether\nit's a cursor or not. 
While we could change those two callers to\nuse GetPortalByName instead, any such change would make things\nstrictly worse for outside callers that are doing the same thing.\n\nAlso, PerformPortalFetch and PerformPortalClose have this very\ninteresting kluge:\n\n /*\n * Disallow empty-string cursor name (conflicts with protocol-level\n * unnamed portal).\n */\n if (!stmt->portalname || stmt->portalname[0] == '\\0')\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_CURSOR_NAME),\n errmsg(\"invalid cursor name: must not be empty\")));\n\nWe could imagine making SPI_cursor_find do likewise, but again\nI'm not sure if that'd be good for all callers. In any case,\nthe unnamed portal isn't the only source of this sort of problem.\nThe argument for doing it would not be to protect careless callers\nlike cursor_to_xmlschema, but to prevent SPI clients from having\nside-effects on the state of the unnamed portal.\n\nOn balance I'm satisfied with just changing cursor_to_xmlschema\nas you suggest, though I'd probably phrase the error message\nas \"portal %s is not a cursor\". However, I feel like we'd better\nextend the documentation for SPI_cursor_find to point out that\nyou might get something that isn't functionally like a cursor.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Sep 2023 13:01:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: function \"cursor_to_xmlschema\" causes a crash"
}
] |
[
{
"msg_contents": "We started with\n\nStatus summary: Needs review: 227. Waiting on Author: 37. Ready for \nCommitter: 30. Committed: 40. Rejected: 1. Returned with Feedback: 1. \nWithdrawn: 1. Total: 337.\n\nNow we are at\n\nStatus summary: Needs review: 199. Waiting on Author: 33. Ready for \nCommitter: 32. Committed: 53. Withdrawn: 4. Rejected: 1. Returned with \nFeedback: 16. Total: 338.\n\nThis doesn't look like much on paper, but I think there has been good \nprogress on a lot of threads that is not reflected in status changes. I \nthink a number of patches are about to be committed, and there are also \na number of relatively small patches that currently don't have reviewers \nbut that could be disposed of pretty quickly.\n\n\n",
"msg_date": "Mon, 18 Sep 2023 11:20:57 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest 2023-09 half-time"
}
] |
[
{
"msg_contents": "The GiST README says:\n\n> If the F_FOLLOW_RIGHT flag is not set, a scan compares the NSN on the\n> child and the LSN it saw in the parent. If NSN < LSN, the scan looked\n> at the parent page before the downlink was inserted, so it should\n> follow the rightlink. Otherwise the scan saw the downlink in the\n> parent page, and will/did follow that as usual.\n\nWhile the code does this (in gistget.c):\n\n> \tif (!XLogRecPtrIsInvalid(pageItem->data.parentlsn) &&\n> \t\t(GistFollowRight(page) ||\n> \t\t pageItem->data.parentlsn < GistPageGetNSN(page)) &&\n> \t\topaque->rightlink != InvalidBlockNumber /* sanity check */ )\n> \t{\n> \t\t/* There was a page split, follow right link to add pages */\n\nNote the comparison on LSN and NSN. The code seems correct, but the \nREADME got it backwards.\n\nThe narrow fix would be to change the \"NSN < LSN\" to \"LSN < NSN\" in the \nREADME. But I propose the attached patch to reword the sentence a little \nmore.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Mon, 18 Sep 2023 14:09:43 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix GIST readme on LSN vs NSN"
},
{
"msg_contents": "> On 18 Sep 2023, at 13:09, Heikki Linnakangas <[email protected]> wrote:\n\n> I propose the attached patch to reword the sentence a little more.\n\nLGTM, +1\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 18 Sep 2023 13:53:28 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix GIST readme on LSN vs NSN"
},
{
"msg_contents": "On 18/09/2023 14:53, Daniel Gustafsson wrote:\n>> On 18 Sep 2023, at 13:09, Heikki Linnakangas <[email protected]> wrote:\n> \n>> I propose the attached patch to reword the sentence a little more.\n> \n> LGTM, +1\n\nCommitted, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 19 Sep 2023 11:57:58 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix GIST readme on LSN vs NSN"
}
] |
[
{
    "msg_contents": "Hi,\n\nit seems dikkop is unhappy again, this time because of some OpenSSL\nstuff. I'm not sure it's our problem - it might be issues with the other\npackages, or maybe something FreeBSD specific, not sure.\n\nWe did some investigation of an unrelated issue on dikkop about a month\nago [1], so it wasn't doing/reporting the buildfarm stuff for a while.\nAfter that I had to poweroff/move the machine, and unfortunately it\ndidn't boot after that - it's a rpi4 so maybe the SD card got damaged or\nsomething, not sure.\n\nI used the opportunity to install the new 14-BETA1 (instead of the\n14-current snapshot), but unfortunately it started having issues :-(\n\nBoth 11 and 12 failed with weird openssl segfaults in plpython tests,\nsee [2] and [3]. And 13 is stuck in some openssl stuff in plpython\ntests, with 100% CPU usage (for ~30h now):\n\n#0 0x00000000850e86c0 in OPENSSL_sk_insert ()\n from /usr/local/lib/libcrypto.so.11\n#1 0x00000000850a5848 in CRYPTO_set_ex_data ()\n from /usr/local/lib/libcrypto.so.11\n...\n\nFull backtrace attached. I'm not sure what could possibly be causing\nthis, except maybe something in FreeBSD? Or maybe there's some confusion\nabout libraries? No idea.\n\nThe system is entirely new, there's only a handful of packages installed\n(full list attached), and I don't think I did anything strange or much\ndifferent from the previous 14-current install.\n\nAny ideas what might be causing this?\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/b2bc5c16-899e-ca99-26ed-e623b4259ec7%40enterprisedb.com\n\n[2]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2023-09-16%2021%3A10%3A45\n\n[3]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2023-09-17%2000%3A01%3A42\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 18 Sep 2023 15:11:27 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> it seems dikkop is unhappy again, this time because of some OpenSSL\n> stuff. I'm not sure it's our problem - it might be issues with the other\n> packages, or maybe something FreeBSD specific, not sure.\n> ...\n> Both 11 and 12 failed with a weird openssl segfaults in plpython tests,\n> see [2] and [3]. And 13 is stuck in some openssl stuff in plpython\n> tests, with 100% CPU usage (for ~30h now):\n\nEven weirder, its latest REL_11 run got past that, and instead failed\nin pltcl [1]. I suppose in an hour or two we'll know if v12 also\nchanged behavior.\n\nThe pltcl test case that is failing is annotated\n\n-- Test usage of Tcl's \"clock\" command. In recent Tcl versions this\n-- command fails without working \"unknown\" support, so it's a good canary\n-- for initialization problems.\n\nwhich is mighty suggestive, but I'm not sure what to look at exactly.\nPerhaps apply \"ldd\" or local equivalent to those languages' .so files\nand see if they link to the same versions of indirectly-required\nlibraries as Postgres is linking to?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2023-09-18%2013%3A59%3A40\n\n\n",
"msg_date": "Mon, 18 Sep 2023 14:52:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "IDK, but I tried installing tcl87 as you showed in packages.txt, and\nREL_11_STABLE said:\n\nchecking for tclsh... no\nchecking for tcl... no\nchecking for tclsh8.6... no\nchecking for tclsh86... no\nchecking for tclsh8.5... no\nchecking for tclsh85... no\nchecking for tclsh8.4... no\nchecking for tclsh84... no\nconfigure: error: Tcl shell not found\n\nIt seems like our configure stuff knows only about older tcl, so how\ndid you get past that?\n\nThe other thing that springs to mind, without any particular theory,\nis that FreeBSD 14 switched to OpenSSL 3 (but hadn't done so yet in\nyour old current snapshot).\n\n\n",
"msg_date": "Tue, 19 Sep 2023 08:41:08 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 03:11:27PM +0200, Tomas Vondra wrote:\n> Both 11 and 12 failed with a weird openssl segfaults in plpython tests,\n> see [2] and [3]. And 13 is stuck in some openssl stuff in plpython\n> tests, with 100% CPU usage (for ~30h now):\n> \n> #0 0x00000000850e86c0 in OPENSSL_sk_insert ()\n> from /usr/local/lib/libcrypto.so.11\n> #1 0x00000000850a5848 in CRYPTO_set_ex_data ()\n> from /usr/local/lib/libcrypto.so.11\n> ...\n> \n> Full backtrace attached. I'm not sure what could possibly be causing\n> this, except maybe something in FreeBSD? Or maybe there's some confusion\n> about libraries? No idea.\n\nFWIW, I've seen such corrupted and time-sensitive stacks in the past\nin the plpython tests in builds when python linked to a SSL library\ndifferent than what's linked with the backend. So that smells like a\npackaging issue to me.\n--\nMichael",
"msg_date": "Tue, 19 Sep 2023 08:45:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 2:04 PM Michael Paquier <[email protected]> wrote:\n> On Mon, Sep 18, 2023 at 03:11:27PM +0200, Tomas Vondra wrote:\n> > Both 11 and 12 failed with a weird openssl segfaults in plpython tests,\n> > see [2] and [3]. And 13 is stuck in some openssl stuff in plpython\n> > tests, with 100% CPU usage (for ~30h now):\n> >\n> > #0 0x00000000850e86c0 in OPENSSL_sk_insert ()\n> > from /usr/local/lib/libcrypto.so.11\n> > #1 0x00000000850a5848 in CRYPTO_set_ex_data ()\n> > from /usr/local/lib/libcrypto.so.11\n> > ...\n> >\n> > Full backtrace attached. I'm not sure what could possibly be causing\n> > this, except maybe something in FreeBSD? Or maybe there's some confusion\n> > about libraries? No idea.\n>\n> FWIW, I've seen such corrupted and time-sensitive stacks in the past\n> in the plpython tests in builds when python linked to a SSL library\n> different than what's linked with the backend. So that smells like a\n> packaging issue to me.\n\nCould it be confusion due to the presence of OpenSSL 3.0 in the\nFreeBSD base system (/usr/include, /usr/lib) combined with the\npresence of OpenSSL 1.1.1 installed with \"pkg install openssl\"\n(/usr/local/include, /usr/local/lib)? Tomas, does it help if you \"pkg\nremove openssl\"?\n\n\n",
"msg_date": "Tue, 19 Sep 2023 14:25:21 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "On 9/19/23 04:25, Thomas Munro wrote:\n> On Tue, Sep 19, 2023 at 2:04 PM Michael Paquier <[email protected]> wrote:\n>> On Mon, Sep 18, 2023 at 03:11:27PM +0200, Tomas Vondra wrote:\n>>> Both 11 and 12 failed with a weird openssl segfaults in plpython tests,\n>>> see [2] and [3]. And 13 is stuck in some openssl stuff in plpython\n>>> tests, with 100% CPU usage (for ~30h now):\n>>>\n>>> #0 0x00000000850e86c0 in OPENSSL_sk_insert ()\n>>> from /usr/local/lib/libcrypto.so.11\n>>> #1 0x00000000850a5848 in CRYPTO_set_ex_data ()\n>>> from /usr/local/lib/libcrypto.so.11\n>>> ...\n>>>\n>>> Full backtrace attached. I'm not sure what could possibly be causing\n>>> this, except maybe something in FreeBSD? Or maybe there's some confusion\n>>> about libraries? No idea.\n>>\n>> FWIW, I've seen such corrupted and time-sensitive stacks in the past\n>> in the plpython tests in builds when python linked to a SSL library\n>> different than what's linked with the backend. So that smells like a\n>> packaging issue to me.\n> \n> Could it be confusion due to the presence of OpenSSL 3.0 in the\n> FreeBSD base system (/usr/include, /usr/lib) combined with the\n> presence of OpenSSL 1.1.1 installed with \"pkg install openssl\"\n> (/usr/local/include, /usr/local/lib)? Tomas, does it help if you \"pkg\n> remove openssl\"?\n\nOh! That might be it - I didn't realize FreeBSD already has openssl 3.0\nalready included in the base system, so perhaps installing 1.1.1v leads\nto some serious confusion ...\n\nAfter some off-list discussion with Alvaro I tried removing the 1.1.1v\nand installed the openssl31 package, which apparently resolved this (at\nwhich point it ran into the unrelated tcl issue).\n\nStill, this confusion seems rather unexpected, and I'm not sure if\nhaving both 3.0 (from base) and 3.1 (from package) could lead to the\nsame confusion / crashes. 
Not sure if it's \"our\" problem ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 19 Sep 2023 18:11:06 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "\n\nOn 9/18/23 20:52, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> it seems dikkop is unhappy again, this time because of some OpenSSL\n>> stuff. I'm not sure it's our problem - it might be issues with the other\n>> packages, or maybe something FreeBSD specific, not sure.\n>> ...\n>> Both 11 and 12 failed with a weird openssl segfaults in plpython tests,\n>> see [2] and [3]. And 13 is stuck in some openssl stuff in plpython\n>> tests, with 100% CPU usage (for ~30h now):\n> \n> Even weirder, its latest REL_11 run got past that, and instead failed\n> in pltcl [1]. I suppose in an hour or two we'll know if v12 also\n> changed behavior.\n> \n\nOh, yeah. Sorry for not mentioning this yesterday ...\n\nI tried removing the openssl-1.1.1v and installed 3.1 instead, which\napparently allowed it to pass the plpython tests. I guess it's due to\nsome sort of confusion with the openssl-3.0 included in FreeBSD base\n(which I didn't realize is there).\n\n> The pltcl test case that is failing is annotated\n> \n> -- Test usage of Tcl's \"clock\" command. 
In recent Tcl versions this\n> -- command fails without working \"unknown\" support, so it's a good canary\n> -- for initialization problems.\n> \n> which is mighty suggestive, but I'm not sure what to look at exactly.\n> Perhaps apply \"ldd\" or local equivalent to those languages' .so files\n> and see if they link to the same versions of indirectly-required\n> libraries as Postgres is linking to?\n> \n> \t\t\tregards, tom lane\n> \n\nI have no experience with tcl, but I tried this in the two tclsh\nversions installed no the system (8.6 and 8.7):\n\nbsd@freebsd:~ $ tclsh8.7\n% clock scan \"1/26/2010\"\ntime value too large/small to represent\n\nbsd@freebsd:~ $ tclsh8.6\n% clock scan \"1/26/2010\"\ntime value too large/small to represent\n\nAFAIK this is what the tcl_date_week(2010,1,26) translates to.\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 19 Sep 2023 18:21:35 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> I have no experience with tcl, but I tried this in the two tclsh\n> versions installed no the system (8.6 and 8.7):\n\n> bsd@freebsd:~ $ tclsh8.7\n> % clock scan \"1/26/2010\"\n> time value too large/small to represent\n\n> bsd@freebsd:~ $ tclsh8.6\n> % clock scan \"1/26/2010\"\n> time value too large/small to represent\n\n> AFAIK this is what the tcl_date_week(2010,1,26) translates to.\n\nOh, interesting. On my FreeBSD 13.1 arm64 system, it works:\n\n$ tclsh8.6\n% clock scan \"1/26/2010\"\n1264482000\n\nI am now suspicious that there's some locale effect that we have\nnot observed before (though why not?). What is the result of\nthe \"locale\" command on your box? Mine gives\n\n$ locale\nLANG=C.UTF-8\nLC_CTYPE=\"C.UTF-8\"\nLC_COLLATE=\"C.UTF-8\"\nLC_TIME=\"C.UTF-8\"\nLC_NUMERIC=\"C.UTF-8\"\nLC_MONETARY=\"C.UTF-8\"\nLC_MESSAGES=\"C.UTF-8\"\nLC_ALL=\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Sep 2023 12:45:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "\n\nOn 9/19/23 18:45, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> I have no experience with tcl, but I tried this in the two tclsh\n>> versions installed no the system (8.6 and 8.7):\n> \n>> bsd@freebsd:~ $ tclsh8.7\n>> % clock scan \"1/26/2010\"\n>> time value too large/small to represent\n> \n>> bsd@freebsd:~ $ tclsh8.6\n>> % clock scan \"1/26/2010\"\n>> time value too large/small to represent\n> \n>> AFAIK this is what the tcl_date_week(2010,1,26) translates to.\n> \n> Oh, interesting. On my FreeBSD 13.1 arm64 system, it works:\n> \n> $ tclsh8.6\n> % clock scan \"1/26/2010\"\n> 1264482000\n> \n> I am now suspicious that there's some locale effect that we have\n> not observed before (though why not?). What is the result of\n> the \"locale\" command on your box? Mine gives\n> \n> $ locale\n> LANG=C.UTF-8\n> LC_CTYPE=\"C.UTF-8\"\n> LC_COLLATE=\"C.UTF-8\"\n> LC_TIME=\"C.UTF-8\"\n> LC_NUMERIC=\"C.UTF-8\"\n> LC_MONETARY=\"C.UTF-8\"\n> LC_MESSAGES=\"C.UTF-8\"\n> LC_ALL=\n> \n\nbsd@freebsd:~ $ locale\nLANG=C.UTF-8\nLC_CTYPE=\"C.UTF-8\"\nLC_COLLATE=\"C.UTF-8\"\nLC_TIME=\"C.UTF-8\"\nLC_NUMERIC=\"C.UTF-8\"\nLC_MONETARY=\"C.UTF-8\"\nLC_MESSAGES=\"C.UTF-8\"\nLC_ALL=\n\nbsd@freebsd:~ $ tclsh8.6\n% clock scan \"1/26/2010\"\ntime value too large/small to represent\n\nHowever, I wonder if there's something wrong with tcl itself,\nconsidering this:\n\n% clock format 1360558800 -format %D\n02/11/2013\n% clock scan 02/11/2013 -format %D\ntime value too large/small to represent\n\nThat's a bit strange - it seems tcl can format a timestamp, but then\ncan't read it back in for some reason ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 19 Sep 2023 21:40:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> bsd@freebsd:~ $ tclsh8.6\n> % clock scan \"1/26/2010\"\n> time value too large/small to represent\n\nIn hopes of replicating this, I tried installing FreeBSD 14-BETA2\naarch64 on my Pi 3B. This test case works fine:\n\n$ tclsh8.6\n% clock scan \"1/26/2010\"\n1264482000\n\n$ tclsh8.7\n% clock scan \"1/26/2010\"\n1264482000\n\nand unsurprisingly, pltcl's regression tests pass. I surmise\nthat something is broken in BETA1 that they fixed in BETA2.\n\nplpython works too, with the python 3.9 package (and no older\npython).\n\nHowever, all is not peachy, because plperl doesn't work.\nTrying to CREATE EXTENSION either plperl or plperlu leads\nto a libperl panic:\n\npl_regression=# create extension plperl;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\n\nwith this in the postmaster log:\n\npanic: pthread_key_create failed\n\nThat message is certainly not ours, so it must be coming out of libperl.\n\nAnother thing that seemed strange is that ecpg's preproc.o takes\nO(forever) to compile. I killed the build after observing that the\ncompiler had gotten to 40 minutes of CPU time, and redid that step\nwith PROFILE=-O0, which allowed it to compile in 20 seconds or so.\n(I also tried -O1, but gave up after a few minutes.) This machine\ncan compile the main backend grammar in a minute or two, so there is\nsomething very odd there.\n\nI'm coming to the conclusion that 14-BETA is, well, beta grade.\nI'll be interested to see if you get the same results when you\nupdate to BETA2.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Sep 2023 19:24:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "\n\nOn 9/20/23 01:24, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> bsd@freebsd:~ $ tclsh8.6\n>> % clock scan \"1/26/2010\"\n>> time value too large/small to represent\n> \n> In hopes of replicating this, I tried installing FreeBSD 14-BETA2\n> aarch64 on my Pi 3B. This test case works fine:\n> \n> $ tclsh8.6\n> % clock scan \"1/26/2010\"\n> 1264482000\n> \n> $ tclsh8.7\n> % clock scan \"1/26/2010\"\n> 1264482000\n> \n> and unsurprisingly, pltcl's regression tests pass. I surmise\n> that something is broken in BETA1 that they fixed in BETA2.\n> \n> plpython works too, with the python 3.9 package (and no older\n> python).\n> \n> However, all is not peachy, because plperl doesn't work.\n> Trying to CREATE EXTENSION either plperl or plperlu leads\n> to a libperl panic:\n> \n> pl_regression=# create extension plperl;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Succeeded.\n> \n> with this in the postmaster log:\n> \n> panic: pthread_key_create failed\n> \n> That message is certainly not ours, so it must be coming out of libperl.\n> \n> Another thing that seemed strange is that ecpg's preproc.o takes\n> O(forever) to compile. I killed the build after observing that the\n> compiler had gotten to 40 minutes of CPU time, and redid that step\n> with PROFILE=-O0, which allowed it to compile in 20 seconds or so.\n> (I also tried -O1, but gave up after a few minutes.) 
This machine\n> can compile the main backend grammar in a minute or two, so there is\n> something very odd there.\n> \n> I'm coming to the conclusion that 14-BETA is, well, beta grade.\n> I'll be interested to see if you get the same results when you\n> update to BETA2.\n\nThanks, I'll try that when I'll be at the office next week.\n\nretards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 20 Sep 2023 19:59:28 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "On 9/20/23 19:59, Tomas Vondra wrote:\n> \n> \n> On 9/20/23 01:24, Tom Lane wrote:\n>> Tomas Vondra <[email protected]> writes:\n>>> bsd@freebsd:~ $ tclsh8.6\n>>> % clock scan \"1/26/2010\"\n>>> time value too large/small to represent\n>>\n>> In hopes of replicating this, I tried installing FreeBSD 14-BETA2\n>> aarch64 on my Pi 3B. This test case works fine:\n>>\n>> $ tclsh8.6\n>> % clock scan \"1/26/2010\"\n>> 1264482000\n>>\n>> $ tclsh8.7\n>> % clock scan \"1/26/2010\"\n>> 1264482000\n>>\n>> and unsurprisingly, pltcl's regression tests pass. I surmise\n>> that something is broken in BETA1 that they fixed in BETA2.\n>>\n>> plpython works too, with the python 3.9 package (and no older\n>> python).\n>>\n>> However, all is not peachy, because plperl doesn't work.\n>> Trying to CREATE EXTENSION either plperl or plperlu leads\n>> to a libperl panic:\n>>\n>> pl_regression=# create extension plperl;\n>> server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> The connection to the server was lost. Attempting reset: Succeeded.\n>>\n>> with this in the postmaster log:\n>>\n>> panic: pthread_key_create failed\n>>\n>> That message is certainly not ours, so it must be coming out of libperl.\n>>\n>> Another thing that seemed strange is that ecpg's preproc.o takes\n>> O(forever) to compile. I killed the build after observing that the\n>> compiler had gotten to 40 minutes of CPU time, and redid that step\n>> with PROFILE=-O0, which allowed it to compile in 20 seconds or so.\n>> (I also tried -O1, but gave up after a few minutes.) 
This machine\n>> can compile the main backend grammar in a minute or two, so there is\n>> something very odd there.\n>>\n>> I'm coming to the conclusion that 14-BETA is, well, beta grade.\n>> I'll be interested to see if you get the same results when you\n>> update to BETA2.\n> \n> Thanks, I'll try that when I'll be at the office next week.\n> \n\nFWIW when I disabled tcl, the tests pass (it's running with --nostatus\n--nosend, so it's not visible on the buildfarm site). Including the\nplperl stuff:\n\n============== running regression test queries ==============\ntest plperl ... ok 397 ms\ntest plperl_lc ... ok 152 ms\ntest plperl_trigger ... ok 374 ms\ntest plperl_shared ... ok 163 ms\ntest plperl_elog ... ok 184 ms\ntest plperl_util ... ok 210 ms\ntest plperl_init ... ok 150 ms\ntest plperlu ... ok 117 ms\ntest plperl_array ... ok 228 ms\ntest plperl_call ... ok 189 ms\ntest plperl_transaction ... ok 412 ms\ntest plperl_plperlu ... ok 238 ms\n\n======================\n All 12 tests passed.\n======================\n\nI wonder if this got broken between BETA1 and BETA2.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 20 Sep 2023 20:09:53 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "\n\nOn 9/20/23 20:09, Tomas Vondra wrote:\n> On 9/20/23 19:59, Tomas Vondra wrote:\n>>\n>>\n>> On 9/20/23 01:24, Tom Lane wrote:\n>>> Tomas Vondra <[email protected]> writes:\n>>>> bsd@freebsd:~ $ tclsh8.6\n>>>> % clock scan \"1/26/2010\"\n>>>> time value too large/small to represent\n>>>\n>>> In hopes of replicating this, I tried installing FreeBSD 14-BETA2\n>>> aarch64 on my Pi 3B. This test case works fine:\n>>>\n>>> $ tclsh8.6\n>>> % clock scan \"1/26/2010\"\n>>> 1264482000\n>>>\n>>> $ tclsh8.7\n>>> % clock scan \"1/26/2010\"\n>>> 1264482000\n>>>\n>>> and unsurprisingly, pltcl's regression tests pass. I surmise\n>>> that something is broken in BETA1 that they fixed in BETA2.\n>>>\n>>> plpython works too, with the python 3.9 package (and no older\n>>> python).\n>>>\n>>> However, all is not peachy, because plperl doesn't work.\n>>> Trying to CREATE EXTENSION either plperl or plperlu leads\n>>> to a libperl panic:\n>>>\n>>> pl_regression=# create extension plperl;\n>>> server closed the connection unexpectedly\n>>> This probably means the server terminated abnormally\n>>> before or while processing the request.\n>>> The connection to the server was lost. Attempting reset: Succeeded.\n>>>\n>>> with this in the postmaster log:\n>>>\n>>> panic: pthread_key_create failed\n>>>\n>>> That message is certainly not ours, so it must be coming out of libperl.\n>>>\n>>> Another thing that seemed strange is that ecpg's preproc.o takes\n>>> O(forever) to compile. I killed the build after observing that the\n>>> compiler had gotten to 40 minutes of CPU time, and redid that step\n>>> with PROFILE=-O0, which allowed it to compile in 20 seconds or so.\n>>> (I also tried -O1, but gave up after a few minutes.) 
This machine\n>>> can compile the main backend grammar in a minute or two, so there is\n>>> something very odd there.\n>>>\n>>> I'm coming to the conclusion that 14-BETA is, well, beta grade.\n>>> I'll be interested to see if you get the same results when you\n>>> update to BETA2.\n>>\n>> Thanks, I'll try that when I'll be at the office next week.\n>>\n> \n> FWIW when I disabled tcl, the tests pass (it's running with --nostatus\n> --nosend, so it's not visible on the buildfarm site). Including the\n> plperl stuff:\n> \n> ============== running regression test queries ==============\n> test plperl ... ok 397 ms\n> test plperl_lc ... ok 152 ms\n> test plperl_trigger ... ok 374 ms\n> test plperl_shared ... ok 163 ms\n> test plperl_elog ... ok 184 ms\n> test plperl_util ... ok 210 ms\n> test plperl_init ... ok 150 ms\n> test plperlu ... ok 117 ms\n> test plperl_array ... ok 228 ms\n> test plperl_call ... ok 189 ms\n> test plperl_transaction ... ok 412 ms\n> test plperl_plperlu ... ok 238 ms\n> \n> ======================\n> All 12 tests passed.\n> ======================\n> \n> I wonder if this got broken between BETA1 and BETA2.\n> \n\nHmmm, I got to install BETA2 yesterday, but I still se the tcl failure:\n\n select tcl_date_week(2010,1,26);\n- tcl_date_week\n----------------\n- 04\n-(1 row)\n-\n+ERROR: time value too large/small to represent\n+CONTEXT: time value too large/small to represent\n+ while executing\n+\"ConvertLocalToUTC $date[set date {}] $TZData($timezone) 2361222\"\n+ (procedure \"FreeScan\" line 86)\n+ invoked from within\n+\"FreeScan $string $base $timezone $locale\"\n+ (procedure \"::tcl::clock::scan\" line 68)\n+ invoked from within\n+\"::tcl::clock::scan 1/26/2010\"\n+ (\"uplevel\" body line 1)\n+ invoked from within\n+\"uplevel 1 [info level 0]\"\n+ (procedure \"::tcl::clock::scan\" line 4)\n+ invoked from within\n+\"clock scan \"$2/$3/$1\"\"\n+ (procedure \"__PLTcl_proc_55335\" line 3)\n+ invoked from within\n+\"__PLTcl_proc_55335 2010 1 26\"\n+in 
PL/Tcl function \"tcl_date_week\"\n select tcl_date_week(2001,10,24);\n\nI wonder what's the difference between the systems ... All I did was\nwriting the BETA2 image to SD card, and install a couple packages:\n\n pkg install xml2c libxslt gettext-tools ccache tcl tcl87 \\\n p5-Test-Harness p5-IPC-Run gmake htop bash screen \\\n python tcl86 nano p5-Test-LWP-UserAgent \\\n p5-LWP-Protocol-https\n\nAnd then\n\n perl ./run_branches.pl --run-all --nosend --nostatus --verbose\n\nwith the buildfarm config used by dikkop.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 26 Sep 2023 16:03:01 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> Hmmm, I got to install BETA2 yesterday, but I still se the tcl failure:\n\nHuh. I'm baffled as to what's up there. Is it possible that this is\nactually a hardware-based difference? I didn't think there was much\ndifference between Pi 3B and Pi 4, but we're running out of other\nexplanations.\n\n> I wonder what's the difference between the systems ... All I did was\n> writing the BETA2 image to SD card, and install a couple packages:\n\nI reinstalled BETA3, since that's out now, but see no change in\nbehavior.\n\nI did discover that plperl works for me after adding --with-openssl\nto the configure options. Not sure if it's worth digging any further\nthan that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Sep 2023 17:50:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "\nOn 9/26/23 23:50, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> Hmmm, I got to install BETA2 yesterday, but I still se the tcl failure:\n> \n> Huh. I'm baffled as to what's up there. Is it possible that this is\n> actually a hardware-based difference? I didn't think there was much\n> difference between Pi 3B and Pi 4, but we're running out of other\n> explanations.\n> \n\nHmm, yeah. Which FreeBSD image did you install? armv7 or aarch64?\n\n>> I wonder what's the difference between the systems ... All I did was\n>> writing the BETA2 image to SD card, and install a couple packages:\n> \n> I reinstalled BETA3, since that's out now, but see no change in\n> behavior.\n> \n> I did discover that plperl works for me after adding --with-openssl\n> to the configure options. Not sure if it's worth digging any further\n> than that.\n> \n\nNo idea. Seems broken, but no time to investigate further at the moment.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 27 Sep 2023 11:06:10 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 9/26/23 23:50, Tom Lane wrote:\n>> Huh. I'm baffled as to what's up there. Is it possible that this is\n>> actually a hardware-based difference? I didn't think there was much\n>> difference between Pi 3B and Pi 4, but we're running out of other\n>> explanations.\n\n> Hmm, yeah. Which FreeBSD image did you install? armv7 or aarch64?\n\nhttps://download.freebsd.org/releases/arm64/aarch64/ISO-IMAGES/14.0/FreeBSD-14.0-BETA3-arm64-aarch64-RPI.img.xz\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Sep 2023 09:38:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "On 9/27/23 15:38, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 9/26/23 23:50, Tom Lane wrote:\n>>> Huh. I'm baffled as to what's up there. Is it possible that this is\n>>> actually a hardware-based difference? I didn't think there was much\n>>> difference between Pi 3B and Pi 4, but we're running out of other\n>>> explanations.\n> \n>> Hmm, yeah. Which FreeBSD image did you install? armv7 or aarch64?\n> \n> https://download.freebsd.org/releases/arm64/aarch64/ISO-IMAGES/14.0/FreeBSD-14.0-BETA3-arm64-aarch64-RPI.img.xz\n> \n\nThanks, that's the image I've used. This is really strange ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 27 Sep 2023 21:33:24 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 9/27/23 15:38, Tom Lane wrote:\n>> Tomas Vondra <[email protected]> writes:\n>>> Hmm, yeah. Which FreeBSD image did you install? armv7 or aarch64?\n\n>> https://download.freebsd.org/releases/arm64/aarch64/ISO-IMAGES/14.0/FreeBSD-14.0-BETA3-arm64-aarch64-RPI.img.xz\n\n> Thanks, that's the image I've used. This is really strange ...\n\nI've now laid my hands on a Pi 4B, and with that exact same SD card\nplugged in, I get the same results I did with the 3B+: pltcl\nregression tests pass, and so does the manual check with tclsh8.[67].\nSo it seems like the \"different CPU\" theory doesn't survive contact\nwith reality either.\n\nI'm completely baffled, but I do notice that \"clock scan\" without\na -format option is deprecated according to the Tcl man page.\nMaybe we should stop relying on deprecated behavior and put in\na -format option?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Sep 2023 18:05:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "Does the image lack a /etc/localtime file/link, but perhaps one of you\ndid something to create it?\n\nThis came up with the CI image:\nhttps://www.postgresql.org/message-id/flat/20230731191510.pebqeiuo2sbmlcfh%40awork3.anarazel.de\nAlso mentioned at: https://wiki.tcl-lang.org/page/clock+scan\n\n\n",
"msg_date": "Sat, 30 Sep 2023 12:25:56 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> Does the image lack a /etc/localtime file/link, but perhaps one of you\n> did something to create it?\n\nHah! I thought it had to be some sort of locale effect, but I failed\nto think of that as a contributor :-(. My installation does have\n/etc/localtime, and removing it duplicates Tomas' syndrome.\n\nI also find that if I add \"-gmt 1\" to the clock invocation, it's happy\nwith or without /etc/localtime. So I think we should modify the test\ncase to use that to reduce its environmental sensitivity. Will\ngo make it so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Sep 2023 19:57:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> This came up with the CI image:\n> https://www.postgresql.org/message-id/flat/20230731191510.pebqeiuo2sbmlcfh%40awork3.anarazel.de\n\nBTW, after re-reading that thread, I think the significant\ndifference is that these FreeBSD images don't force you to\nselect a timezone during setup, unlike what I recall seeing\nwhen installing x86_64 FreeBSD. You're not forced to run\nbsdconfig at all, and even if you do it doesn't make you\nenter the sub-menu where you can pick a timezone. I recall\nthat I did do that while setting mine up, but I'll bet\nTomas skipped it. I'm not sure at this point whether FreeBSD\nchanged behavior since 13.x, or this is a difference between\ntheir preferred installation processes for x86 vs. ARM.\nBut in any case, it's clearly easier to get into the\nno-/etc/localtime state with these systems than I thought\nbefore.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Sep 2023 20:46:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
},
{
"msg_contents": "On 9/30/23 01:57, Tom Lane wrote:\n> Thomas Munro <[email protected]> writes:\n>> Does the image lack a /etc/localtime file/link, but perhaps one of you\n>> did something to create it?\n> \n> Hah! I thought it had to be some sort of locale effect, but I failed\n> to think of that as a contributor :-(. My installation does have\n> /etc/localtime, and removing it duplicates Tomas' syndrome.\n> \n> I also find that if I add \"-gmt 1\" to the clock invocation, it's happy\n> with or without /etc/localtime. So I think we should modify the test\n> case to use that to reduce its environmental sensitivity. Will\n> go make it so.\n> \n\nFWIW I've defined the timezone (copying it into /etc/localtime), and\nthat seems to have resolved the issue (well, maybe it's the \"-gmt 1\"\ntweak, not sure).\n\nI wonder how come it worked with the earlier image - I don't recall\ndefining the timezone (AFAIK I only did the bare minimum to get it\nworking), but maybe I did.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 4 Oct 2023 13:04:16 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: dikkop seems unhappy because of openssl stuff (FreeBSD 14-BETA1)"
}
]
[
{
"msg_contents": "This fails since 1349d2790b\n\ncommit 1349d2790bf48a4de072931c722f39337e72055e\nAuthor: David Rowley <[email protected]>\nDate: Tue Aug 2 23:11:45 2022 +1200\n\n Improve performance of ORDER BY / DISTINCT aggregates\n\nts=# CREATE TABLE t (a int, b text) PARTITION BY RANGE (a);\nts=# CREATE TABLE td PARTITION OF t DEFAULT;\nts=# INSERT INTO t SELECT 1 AS a, '' AS b;\nts=# SET enable_partitionwise_aggregate=on;\nts=# explain SELECT a, COUNT(DISTINCT b) FROM t GROUP BY a;\nERROR: XX000: could not find pathkey item to sort\nLOCATION: prepare_sort_from_pathkeys, createplan.c:6235\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 18 Sep 2023 09:02:09 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 10:02 PM Justin Pryzby <[email protected]> wrote:\n\n> This fails since 1349d2790b\n>\n> commit 1349d2790bf48a4de072931c722f39337e72055e\n> Author: David Rowley <[email protected]>\n> Date: Tue Aug 2 23:11:45 2022 +1200\n>\n> Improve performance of ORDER BY / DISTINCT aggregates\n>\n> ts=# CREATE TABLE t (a int, b text) PARTITION BY RANGE (a);\n> ts=# CREATE TABLE td PARTITION OF t DEFAULT;\n> ts=# INSERT INTO t SELECT 1 AS a, '' AS b;\n> ts=# SET enable_partitionwise_aggregate=on;\n> ts=# explain SELECT a, COUNT(DISTINCT b) FROM t GROUP BY a;\n> ERROR: XX000: could not find pathkey item to sort\n> LOCATION: prepare_sort_from_pathkeys, createplan.c:6235\n\n\nThanks for the report! I've looked at it a little bit. In function\nadjust_group_pathkeys_for_groupagg we add the pathkeys in ordered\naggregates to root->group_pathkeys. But if the new added pathkeys do\nnot have EC members that match the targetlist or can be computed from\nthe targetlist, prepare_sort_from_pathkeys would have problem computing\nsort column info for the new added pathkeys. In the given example, the\npathkey representing 'b' can not match or be computed from the current\ntargetlist, so prepare_sort_from_pathkeys emits the error.\n\nMy first thought about the fix is that we artificially add resjunk\ntarget entries to parse->targetList for the ordered aggregates'\narguments that are ORDER BY expressions, as attached. While this can\nfix the given query, it would cause Assert failure for the query in\nsql/triggers.sql.\n\n-- inserts only\ninsert into my_table values (1, 'AAA'), (2, 'BBB')\n on conflict (a) do\n update set b = my_table.b || ':' || excluded.b;\n\nI haven't looked into how that happens.\n\nAny thoughts?\n\nThanks\nRichard",
"msg_date": "Tue, 19 Sep 2023 18:36:11 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Tue, 19 Sept 2023 at 23:45, Richard Guo <[email protected]> wrote:\n> My first thought about the fix is that we artificially add resjunk\n> target entries to parse->targetList for the ordered aggregates'\n> arguments that are ORDER BY expressions, as attached. While this can\n> fix the given query, it would cause Assert failure for the query in\n> sql/triggers.sql.\n\n> Any thoughts?\n\nUnfortunately, we can't do that as it'll lead to target entries\nexisting in the GroupAggregate's target list that have been\naggregated.\n\npostgres=# explain verbose SELECT a, COUNT(DISTINCT b) FROM rp GROUP BY a;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Append (cost=88.17..201.39 rows=400 width=44)\n -> GroupAggregate (cost=88.17..99.70 rows=200 width=44)\n Output: rp.a, count(DISTINCT rp.b), rp.b\n\nYour patch adds rp.b as an output column of the GroupAggregate.\nLogically, that column cannot exist there as there is no correct\nsingle value of rp.b after aggregation.\n\nI think the fix needs to go into create_agg_path(). The problem is\nthat for AGG_SORTED we do:\n\nif (aggstrategy == AGG_SORTED)\n pathnode->path.pathkeys = subpath->pathkeys; /* preserves order */\n\nwhich assumes that all of the columns before the aggregate will be\navailable after the aggregate. That likely used to work ok before\n1349d2790 as the planner wouldn't have requested any Pathkeys for\ncolumns that were not available below the Agg node.\n\nWe can no longer take the subpath pathkey's verbatim. 
We need to strip\noff pathkeys for columns that are not in pathnode's targetlist.\n\nI've attached a patch which adds a new function to pathkeys.c to strip\noff any PathKeys in a list that don't have a corresponding item in the\ngiven PathTarget and just return a prefix of the input pathkey list up\nuntil the first expr that can't be found.\n\nI'm concerned that this patch will be too much overhead when creating\npaths when a PathKey's EquivalenceClass has a large number of members\nfrom partitioned tables. I wondered if we should instead just check\nif the subpath's pathkeys match root->group_pathkeys and if they do\nset the AggPath's pathkeys to list_copy_head(subpath->pathkeys,\nroot->num_groupby_pathkeys), that'll be much cheaper, but it just\nfeels a bit too much like a special case.\n\nDavid",
"msg_date": "Tue, 3 Oct 2023 09:11:43 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Tue, 3 Oct 2023 at 09:11, David Rowley <[email protected]> wrote:\n> I'm concerned that this patch will be too much overhead when creating\n> paths when a PathKey's EquivalenceClass has a large number of members\n> from partitioned tables.\n\nI just tried out the patch to see how much it affects the performance\nof the planner. I think we need to find a better way to strip off the\npathkeys for the columns that have been aggregated.\n\nSetup:\ncreate table lp (a int, b int) partition by list (a);\nselect 'create table lp'||x||' partition of lp for values in('||x||')'\nfrom generate_series(0,999)x;\n\\gexec\n\n\\pset pager off\nset enable_partitionwise_aggregate=1;\n\nBenchmark query:\nexplain (summary on) select a,count(*) from lp group by a;\n\nMaster:\nPlanning Time: 23.945 ms\nPlanning Time: 23.887 ms\nPlanning Time: 23.927 ms\n\nperf top:\n 7.39% libc.so.6 [.] __memmove_avx_unaligned_erms\n 6.98% [kernel] [k] clear_page_rep\n 5.69% postgres [.] bms_is_subset\n 5.07% postgres [.] fetch_upper_rel\n 4.41% postgres [.] bms_equal\n\nPatched:\nPlanning Time: 41.410 ms\nPlanning Time: 41.474 ms\nPlanning Time: 41.488 ms\n\nperf top:\n 19.02% postgres [.] bms_is_subset\n 6.91% postgres [.] find_ec_member_matching_expr\n 5.93% libc.so.6 [.] __memmove_avx_unaligned_erms\n 5.55% [kernel] [k] clear_page_rep\n 4.07% postgres [.] fetch_upper_rel\n 3.46% postgres [.] bms_equal\n\n> I wondered if we should instead just check\n> if the subpath's pathkeys match root->group_pathkeys and if they do\n> set the AggPath's pathkeys to list_copy_head(subpath->pathkeys,\n> root->num_groupby_pathkeys), that'll be much cheaper, but it just\n> feels a bit too much like a special case.\n\nI tried this approach (patch attached) and it does perform better than\nthe other patch:\n\ncreate_agg_path_fix2.patch:\nPlanning Time: 24.357 ms\nPlanning Time: 24.293 ms\nPlanning Time: 24.259 ms\n\n 7.45% libc.so.6 [.] 
__memmove_avx_unaligned_erms\n 6.90% [kernel] [k] clear_page_rep\n 5.56% postgres [.] bms_is_subset\n 5.38% postgres [.] bms_equal\n\nI wonder if the attached patch is too much of a special case fix. I\nguess from the lack of complaints previously that there are no other\ncases where we could possibly have pathkeys that belong to columns\nthat are aggregated. I've not gone to much effort to see if I can\ncraft a case that hits this without the ORDER BY/DISTINCT aggregate\noptimisation, however.\n\nDavid",
"msg_date": "Tue, 3 Oct 2023 20:16:07 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Tue, 3 Oct 2023 at 20:16, David Rowley <[email protected]> wrote:\n> I wonder if the attached patch is too much of a special case fix. I\n> guess from the lack of complaints previously that there are no other\n> cases where we could possibly have pathkeys that belong to columns\n> that are aggregated. I've not gone to much effort to see if I can\n> craft a case that hits this without the ORDER BY/DISTINCT aggregate\n> optimisation, however.\n\nI spent more time on this today. I'd been wondering if there was any\nreason why create_agg_path() would receive a subpath with pathkeys\nthat were anything but the PlannerInfo's group_pathkeys. I mean, how\ncould we do Group Aggregate if it wasn't? I wondered if grouping sets\nmight change that, but it seems the group_pathkeys will be set to the\ninitial grouping set.\n\nGiven that, it would seem it's safe to just trim off any pathkey that\nwas added to the group_pathkeys by\nadjust_group_pathkeys_for_groupagg().\nPlannerInfo.num_groupby_pathkeys marks the number of pathkeys that\nexisted in group_pathkeys before adjust_group_pathkeys_for_groupagg()\nmade any additions, so we can just trim the list length back to that.\n\nI've done this in the attached patch. I also considered if it was\nworth adding a regression test for this and I concluded that there are\nbetter ways to test for this and considered if we should add some code\nto createplan.c to check that all Path pathkeys have corresponding\nitems in the PathTarget. I've included an additional patch which adds\nsome code in USE_ASSERT_CHECKING builds to verify this. 
Without the\nfix it's simple enough to trigger this with a query such as:\n\nselect two,count(distinct four) from tenk1 group by two order by two;\n\nWithout the fix the additional asserts cause the regression tests to\nfail, but with the fix everything passes.\n\nJustin's case is quite an obscure way to hit this as it requires\npartitionwise aggregation plus a single partition so that the Append\nis removed due to only having a single subplan in setrefs.c. If there\nhad been 2 partitions, then the AppendPath wouldn't have inherited the\nsubpath's pathkeys per code at the end of create_append_path().\n\nSo in short, I propose the attached fix without any regression tests\nbecause I feel that any regression test would just mark that there was\na bug in create_agg_path() and not really help with ensuring we don't\nend up with some similar problem in the future.\n\nI have some concerns that the assert_pathkeys_in_target() function\nmight be a little heavyweight for USE_ASSERT_CHECKING builds. So I'm\nnot proposing to commit that without further discussion.\n\nDoes anyone feel differently?\n\nIf not, I plan to push the attached\nstrip_aggregate_pathkeys_from_aggpaths_v2.patch early next week.\n\nDavid",
"msg_date": "Thu, 5 Oct 2023 19:26:24 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 2:26 PM David Rowley <[email protected]> wrote:\n\n> So in short, I propose the attached fix without any regression tests\n> because I feel that any regression test would just mark that there was\n> a big in create_agg_path() and not really help with ensuring we don't\n> end up with some similar problem in the future.\n\n\nIf the pathkeys that were added by adjust_group_pathkeys_for_groupagg()\nare computable from the targetlist, it seems that we do not need to trim\nthem off, because prepare_sort_from_pathkeys() will add resjunk target\nentries for them. But it's also no harm if we trim them off. So I\nthink the patch is a pretty safe fix. +1 to it.\n\n\n> I have some concerns that the assert_pathkeys_in_target() function\n> might be a little heavyweight for USE_ASSERT_CHECKING builds. So I'm\n> not proposing to commit that without further discussion.\n\n\nYeah, it looks like some heavy to call assert_pathkeys_in_target() for\neach path node. Can we run some benchmarks to see how much overhead it\nwould bring to USE_ASSERT_CHECKING build?\n\nThanks\nRichard",
"msg_date": "Sun, 8 Oct 2023 18:52:38 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Sun, 8 Oct 2023 at 23:52, Richard Guo <[email protected]> wrote:\n> On Thu, Oct 5, 2023 at 2:26 PM David Rowley <[email protected]> wrote:\n>>\n>> So in short, I propose the attached fix without any regression tests\n>> because I feel that any regression test would just mark that there was\n>> a big in create_agg_path() and not really help with ensuring we don't\n>> end up with some similar problem in the future.\n>\n>\n> If the pathkeys that were added by adjust_group_pathkeys_for_groupagg()\n> are computable from the targetlist, it seems that we do not need to trim\n> them off, because prepare_sort_from_pathkeys() will add resjunk target\n> entries for them. But it's also no harm if we trim them off. So I\n> think the patch is a pretty safe fix. +1 to it.\n\nhmm, I think one of us does not understand what is going on here. I\ntried to explain in [1] why we *need* to strip off the pathkeys added\nby adjust_group_pathkeys_for_groupagg().\n\nGiven the following example:\n\ncreate table ab (a int,b int);\nexplain (costs off) select a,count(distinct b) from ab group by a;\n QUERY PLAN\n----------------------------\n GroupAggregate\n Group Key: a\n -> Sort\n Sort Key: a, b\n -> Seq Scan on ab\n(5 rows)\n\nadjust_group_pathkeys_for_groupagg() will add the pathkey for the \"b\"\ncolumn and that results in the Sort node sorting on {a,b}. It's\nsimply not at all valid to have the GroupAggregate path claim that its\npathkeys are also (effectively) {a,b}\" as \"b\" does not and cannot\nlegally exist after the aggregation takes place. We cannot put a\nresjunk \"b\" in the targetlist of the GroupAggregate either as there\ncould be any number \"b\" values aggregated.\n\nCan you explain why you think we can put a resjunk \"b\" in the target\nlist of the GroupAggregate in the above case?\n\n>>\n>> I have some concerns that the assert_pathkeys_in_target() function\n>> might be a little heavyweight for USE_ASSERT_CHECKING builds. 
So I'm\n>> not proposing to commit that without further discussion.\n>\n>\n> Yeah, it looks like some heavy to call assert_pathkeys_in_target() for\n> each path node. Can we run some benchmarks to see how much overhead it\n> would bring to USE_ASSERT_CHECKING build?\n\nI think it'll be easy to show that there is an overhead to it. It'll\nbe in the realm of the ~41ms patched vs ~24ms unpatched that I showed\nin [2]. That's quite an extreme case.\n\nMaybe it's worth checking the total planning time spent in a run of\nthe regression tests with and without the patch to see how much\noverhead it adds to the \"average case\".\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvpJJigQRW29TppTOPYp+Aui0mtd3MpfRxyKv=N-tB62jQ@mail.gmail.com\n[2] https://postgr.es/m/CAApHDvo7RzcQYw-gnkZr6QCijCqf8vJLkJ4XFk-KawvyAw109Q@mail.gmail.com\n\n\n",
"msg_date": "Mon, 9 Oct 2023 12:42:13 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Mon, Oct 9, 2023 at 7:42 AM David Rowley <[email protected]> wrote:\n\n> On Sun, 8 Oct 2023 at 23:52, Richard Guo <[email protected]> wrote:\n> > If the pathkeys that were added by adjust_group_pathkeys_for_groupagg()\n> > are computable from the targetlist, it seems that we do not need to trim\n> > them off, because prepare_sort_from_pathkeys() will add resjunk target\n> > entries for them. But it's also no harm if we trim them off. So I\n> > think the patch is a pretty safe fix. +1 to it.\n>\n> hmm, I think one of us does not understand what is going on here. I\n> tried to explain in [1] why we *need* to strip off the pathkeys added\n> by adjust_group_pathkeys_for_groupagg().\n\n\nSorry I didn't make myself clear. I understand why we need to trim off\nthe pathkeys added by adjust_group_pathkeys_for_groupagg(). What I\nmeant was that if the new added pathkeys are *computable* from the\nexisting target entries, then prepare_sort_from_pathkeys() will add\nresjunk target entries for them, so there seems to be no problem even if\nwe do not trim them off. For example\n\nexplain (verbose, costs off)\nselect a, count(distinct a+1) from prt1 group by a order by a;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Result\n Output: prt1.a, (count(DISTINCT ((prt1.a + 1))))\n -> Merge Append\n Sort Key: prt1.a, ((prt1.a + 1))\n -> GroupAggregate\n Output: prt1.a, count(DISTINCT ((prt1.a + 1))), ((prt1.a +\n1))\n Group Key: prt1.a\n -> Sort\n Output: prt1.a, ((prt1.a + 1))\n Sort Key: prt1.a, ((prt1.a + 1))\n -> Seq Scan on public.prt1_p1 prt1\n Output: prt1.a, (prt1.a + 1)\n ...\n\nExpression 'a+1' is *computable* from the existing entry 'a', so we just\nadd a new resjunk target entry for 'a+1', and there is no error planning\nthis query. 
But if we change 'a+1' to something that is not computable,\nthen we would have problems (without your fix), and the reason has been\nwell explained by your messages.\n\nexplain (verbose, costs off)\nselect a, count(distinct b) from prt1 group by a order by a;\nERROR: could not find pathkey item to sort\n\nHaving said that, I think it's the right thing to do to trim off the new\nadded pathkeys, even if they are *computable*. In the plan above, the\n'(prt1.a + 1)' in GroupAggregate's targetlist and MergeAppend's\npathkeys are actually redundant. It's good to remove it.\n\n\n> Can you explain why you think we can put a resjunk \"b\" in the target\n> list of the GroupAggregate in the above case?\n\n\nHmm, I don't think we can do that, because 'b' is not *computable* from\nthe existing target entries, as I explained above.\n\nThanks\nRichard",
"msg_date": "Mon, 9 Oct 2023 10:08:13 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Mon, 9 Oct 2023 at 12:42, David Rowley <[email protected]> wrote:\n> Maybe it's worth checking the total planning time spent in a run of\n> the regression tests with and without the patch to see how much\n> overhead it adds to the \"average case\".\n\nI've now pushed the patch that trims off the Pathkeys for the ORDER BY\n/ DISTINCT aggregates.\n\nAs for the patch to verify the pathkeys during create plan, I patched\nmaster with the attached plan_times.patch.txt and used the following\nto check the time spent in the planner for 3 runs of make\ninstallcheck.\n\n$ for i in {1..3}; do pg_ctl start -D pgdata -l plantime.log >\n/dev/null && cd pg_src && make installcheck > /dev/null && cd .. &&\ngrep \"planning time in\" plantime.log|sed -E -e 's/.*planning time in\n(.*) nanoseconds/\\1/'|awk '{nanoseconds += $1} END{print nanoseconds}'\n&& pg_ctl stop -D pgdata > /dev/null && rm plantime.log; done\n\nMaster:\n1855788104\n1839655412\n1740769066\n\nPatched:\n1917797221\n1766606115\n1881322655\n\nThose results are a bit noisy. Perhaps a few more runs might yield\nmore consistency, but it seems that there's not too much overhead to\nit. If I take the minimum value out of the 3 runs from each, it comes\nto about 1.5% extra time spent in planning. Perhaps that's OK.\n\nDavid",
"msg_date": "Mon, 9 Oct 2023 17:13:09 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Mon, Oct 9, 2023 at 12:13 PM David Rowley <[email protected]> wrote:\n\n> I've now pushed the patch that trims off the Pathkeys for the ORDER BY\n> / DISTINCT aggregates.\n\n\nThanks for pushing!\n\n\n> Those results are a bit noisy. Perhaps a few more runs might yield\n> more consistency, but it seems that there's not too much overhead to\n> it. If I take the minimum value out of the 3 runs from each, it comes\n> to about 1.5% extra time spent in planning. Perhaps that's OK.\n\n\nI agree that the overhead is acceptable, especially it only happens in\nUSE_ASSERT_CHECKING builds.\n\nThanks\nRichard",
"msg_date": "Mon, 9 Oct 2023 13:41:36 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "Hello David,\n\n09.10.2023 07:13, David Rowley wrote:\n> On Mon, 9 Oct 2023 at 12:42, David Rowley <[email protected]> wrote:\n>> Maybe it's worth checking the total planning time spent in a run of\n>> the regression tests with and without the patch to see how much\n>> overhead it adds to the \"average case\".\n> I've now pushed the patch that trims off the Pathkeys for the ORDER BY\n> / DISTINCT aggregates.\n>\n\nI've stumbled upon the same error, but this time it apparently has another\ncause. It can be produced (on REL_16_STABLE and master) as follows:\nCREATE TABLE t (a int, b int) PARTITION BY RANGE (a);\nCREATE TABLE td PARTITION OF t DEFAULT;\nCREATE TABLE tp1 PARTITION OF t FOR VALUES FROM (1) TO (2);\nSET enable_partitionwise_aggregate = on;\nSET parallel_setup_cost = 0;\nSELECT a, sum(b order by b) FROM t GROUP BY a ORDER BY a;\n\nERROR: could not find pathkey item to sort\n\n`git bisect` for this anomaly blames the same commit 1349d2790.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 13 Mar 2024 20:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Thu, 14 Mar 2024 at 06:00, Alexander Lakhin <[email protected]> wrote:\n> I've stumbled upon the same error, but this time it apparently has another\n> cause. It can be produced (on REL_16_STABLE and master) as follows:\n> CREATE TABLE t (a int, b int) PARTITION BY RANGE (a);\n> CREATE TABLE td PARTITION OF t DEFAULT;\n> CREATE TABLE tp1 PARTITION OF t FOR VALUES FROM (1) TO (2);\n> SET enable_partitionwise_aggregate = on;\n> SET parallel_setup_cost = 0;\n> SELECT a, sum(b order by b) FROM t GROUP BY a ORDER BY a;\n>\n> ERROR: could not find pathkey item to sort\n>\n> `git bisect` for this anomaly blames the same commit 1349d2790.\n\nThanks for finding and for the recreator script.\n\nI've attached a patch which fixes the problem for me.\n\nOn debugging this I uncovered some other stuff that looks broken which\nseems to be caused by partition-wise aggregates. With your example\nquery, in get_useful_pathkeys_for_relation(), we call\nrelation_can_be_sorted_early() to check if the pathkey can be used as\na set of pathkeys in useful_pathkeys_list. The problem is that in\nyour query the 'rel' is the base relation belonging to the partitioned\ntable and relation_can_be_sorted_early() looks through the targetlist\nfor that relation and finds columns \"a\" and \"b\" in there. The problem\nis \"b\" has been aggregated away as partial aggregation has taken place\ndue to the partition-wise aggregation. I believe whichever rel we\nshould be using there should have an Aggref in the target exprs rather\nthan the plain unaggregated column. I've added Robert and Ashutosh to\nsee what their thoughts are on this.\n\nDavid",
"msg_date": "Thu, 14 Mar 2024 12:00:24 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 4:30 AM David Rowley <[email protected]> wrote:\n\n> On Thu, 14 Mar 2024 at 06:00, Alexander Lakhin <[email protected]>\n> wrote:\n> > I've stumbled upon the same error, but this time it apparently has\n> another\n> > cause. It can be produced (on REL_16_STABLE and master) as follows:\n> > CREATE TABLE t (a int, b int) PARTITION BY RANGE (a);\n> > CREATE TABLE td PARTITION OF t DEFAULT;\n> > CREATE TABLE tp1 PARTITION OF t FOR VALUES FROM (1) TO (2);\n> > SET enable_partitionwise_aggregate = on;\n> > SET parallel_setup_cost = 0;\n> > SELECT a, sum(b order by b) FROM t GROUP BY a ORDER BY a;\n> >\n> > ERROR: could not find pathkey item to sort\n> >\n> > `git bisect` for this anomaly blames the same commit 1349d2790.\n>\n> Thanks for finding and for the recreator script.\n>\n> I've attached a patch which fixes the problem for me.\n>\n> On debugging this I uncovered some other stuff that looks broken which\n> seems to caused by partition-wise aggregates. With your example\n> query, in get_useful_pathkeys_for_relation(), we call\n> relation_can_be_sorted_early() to check if the pathkey can be used as\n> a set of pathkeys in useful_pathkeys_list. The problem is that in\n> your query the 'rel' is the base relation belonging to the partitioned\n> table and relation_can_be_sorted_early() looks through the targetlist\n> for that relation and finds columns \"a\" and \"b\" in there. The problem\n> is \"b\" has been aggregated away as partial aggregation has taken place\n> due to the partition-wise aggregation. I believe whichever rel we\n> should be using there should have an Aggref in the target exprs rather\n> than the plain unaggregated column. I've added Robert and Ashutosh to\n> see what their thoughts are on this.\n>\n\nI don't understand why root->query_pathkeys has both a and b. \"a\" is there\nbecause of GROUP BY and ORDER BY clause. 
But why \"b\"?\n\nUnder the debugger this is what I observed: generate_useful_gather_paths()\ngets called twice, once for the base relation and second time for the upper\nrelation.\n\nWhen it's called for base relation, it includes \"a\" and \"b\" both in the\nuseful pathkeys. The plan doesn't use sortedness on b. But I don't think\nthat's the problem of the relation used. It looks like root->query_pathkeys\ncontaining \"b\" may be a problem.\n\nWhen it's called for upper relation, the reltarget has \"a\" and Aggref() and\nit includes only \"a\" in the useful pathkeys which is as per your\nexpectation.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Thu, 14 Mar 2024 10:53:32 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Thu, 14 Mar 2024 at 18:23, Ashutosh Bapat\n<[email protected]> wrote:\n> I don't understand why root->query_pathkeys has both a and b. \"a\" is there because of GROUP BY and ORDER BY clause. But why \"b\"?\n\nSo that the ORDER BY aggregate function can be evaluated without\nnodeAgg.c having to perform the sort. See\nadjust_group_pathkeys_for_groupagg().\n\nDavid\n\n\n",
"msg_date": "Thu, 14 Mar 2024 23:15:32 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Thu, 14 Mar 2024 at 12:00, David Rowley <[email protected]> wrote:\n> I've attached a patch which fixes the problem for me.\n\nI've pushed the patch to fix gather_grouping_paths(). The issue with\nthe RelOptInfo having the incorrect PathTarget->exprs after the\npartial phase of partition-wise aggregate remains.\n\nDavid\n\n\n",
"msg_date": "Fri, 15 Mar 2024 11:58:09 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 3:45 PM David Rowley <[email protected]> wrote:\n\n> On Thu, 14 Mar 2024 at 18:23, Ashutosh Bapat\n> <[email protected]> wrote:\n> > I don't understand why root->query_pathkeys has both a and b. \"a\" is\n> there because of GROUP BY and ORDER BY clause. But why \"b\"?\n>\n> So that the ORDER BY aggregate function can be evaluated without\n> nodeAgg.c having to perform the sort. See\n> adjust_group_pathkeys_for_groupagg().\n>\n\nThanks. To me, it looks like we are gathering pathkeys, which if used to\nsort the result of overall join, would avoid sorting in as many as\naggregates as possible.\n\nrelation_can_be_sorted_early() finds, pathkeys which if used to sort the\ngiven relation, would help sorting the overall join. Contrary to what I\nsaid earlier, it might help if the base relation is sorted on \"a\" and \"b\".\nWhat I find weird is that the sorting is not pushed down to the partitions,\nwhere it would help most.\n\n#explain verbose SELECT a, sum(b order by b) FROM t GROUP BY a ORDER BY a;\n QUERY PLAN\n\n------------------------------------------------------------------------------------\n GroupAggregate (cost=362.21..398.11 rows=200 width=12)\n Output: t.a, sum(t.b ORDER BY t.b)\n Group Key: t.a\n -> Sort (cost=362.21..373.51 rows=4520 width=8)\n Output: t.a, t.b\n Sort Key: t.a, t.b\n -> Append (cost=0.00..87.80 rows=4520 width=8)\n -> Seq Scan on public.tp1 t_1 (cost=0.00..32.60 rows=2260\nwidth=8)\n Output: t_1.a, t_1.b\n -> Seq Scan on public.td t_2 (cost=0.00..32.60 rows=2260\nwidth=8)\n Output: t_2.a, t_2.b\n(11 rows)\n\nand that's the case even without parallel plans\n\n#explain verbose SELECT a, sum(b order by b) FROM t GROUP BY a ORDER BY a;\n QUERY PLAN\n\n------------------------------------------------------------------------------------\n GroupAggregate (cost=362.21..398.11 rows=200 width=12)\n Output: t.a, sum(t.b ORDER BY t.b)\n Group Key: t.a\n -> Sort (cost=362.21..373.51 rows=4520 width=8)\n Output: 
t.a, t.b\n         Sort Key: t.a, t.b\n         ->  Append  (cost=0.00..87.80 rows=4520 width=8)\n               ->  Seq Scan on public.tp1 t_1  (cost=0.00..32.60 rows=2260\nwidth=8)\n                     Output: t_1.a, t_1.b\n               ->  Seq Scan on public.td t_2  (cost=0.00..32.60 rows=2260\nwidth=8)\n                     Output: t_2.a, t_2.b\n(11 rows)\n\nBut it could be just because the corresponding plan was not found to be\noptimal. May be because there isn't enough data in those tables.\n\nIf the problem you speculate is different from this one, I am not able to\nsee it. It might help give an example query or explain more.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 18 Mar 2024 11:20:02 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
},
{
"msg_contents": "On Mon, 18 Mar 2024 at 18:50, Ashutosh Bapat\n<[email protected]> wrote:\n> If the problem you speculate is different from this one, I am not able to see it. It might help give an example query or explain more.\n\nI looked at this again and I might have been wrong about there being a\nproblem. I set a breakpoint in create_gather_merge_path() and\nadjusted the startup and total cost to 1 when I saw the pathkeys\ncontaining {a,b}. It turns out this is the non-partitionwise\naggregate path, and of course, the targetlist there does contain the\n\"b\" column, so it's fine in that case that the pathkeys are {a,b}. I\nhad previously thought that this was for the partition-wise aggregate\nplan, in which case the targetlist would contain a, sum(b order by b),\nof which there's no single value of \"b\" that we can legally sort by.\n\nHere's the full plan.\n\npostgres=# explain verbose SELECT a, sum(b order by b) FROM t GROUP BY\na ORDER BY a;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n GroupAggregate (cost=1.00..25.60 rows=200 width=12)\n Output: t.a, sum(t.b ORDER BY t.b)\n Group Key: t.a\n -> Gather Merge (cost=1.00..1.00 rows=4520 width=8)\n Output: t.a, t.b\n Workers Planned: 2\n -> Sort (cost=158.36..163.07 rows=1882 width=8)\n Output: t.a, t.b\n Sort Key: t.a, t.b\n -> Parallel Append (cost=0.00..56.00 rows=1882 width=8)\n -> Parallel Seq Scan on public.tp1 t_1\n(cost=0.00..23.29 rows=1329 width=8)\n Output: t_1.a, t_1.b\n -> Parallel Seq Scan on public.td t_2\n(cost=0.00..23.29 rows=1329 width=8)\n Output: t_2.a, t_2.b\n(14 rows)\n\nDavid\n\n\n",
"msg_date": "Wed, 20 Mar 2024 14:28:02 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16: XX000: could not find pathkey item to sort"
}
] |
[
{
"msg_contents": "While replaying our production workload we have found Postgres spending a\nlot of time inside TimescaleDB planner. The planner itself need an\ninformation about whether a table involved is a TimescaleDB hypertable. So\nplanner need an access to TimescaleDB internal metainformation tables. This\nplanner access become extremely slow when you have a lot of tables involved\nand you are beyond fast-path lock limit\n\nThis humble example makes only 2330 tps on REL_15_STABLE but 27041tps on\npatched version with 64 slots for fast-path locks.\n\n\\set bid random(1,1000)\n\nBEGIN;\nselect bbalance from pgbench_branches where bid = :bid\nUNION\nselect bbalance from pgbench_branches2 where bid = :bid\nUNION\nselect bbalance from pgbench_branches3 where bid = :bid\nUNION\nselect bbalance from pgbench_branches4 where bid = :bid\nUNION\nselect bbalance from pgbench_branches5 where bid = :bid\nUNION\nselect bbalance from pgbench_branches6 where bid = :bid\nUNION\nselect bbalance from pgbench_branches7 where bid = :bid\nUNION\nselect bbalance from pgbench_branches8 where bid = :bid\nUNION\nselect bbalance from pgbench_branches9 where bid = :bid\nUNION\nselect bbalance from pgbench_branches10 where bid = :bid\nUNION\nselect bbalance from pgbench_branches11 where bid = :bid\nUNION\nselect bbalance from pgbench_branches12 where bid = :bid\nUNION\nselect bbalance from pgbench_branches13 where bid = :bid\nUNION\nselect bbalance from pgbench_branches14 where bid = :bid\nUNION\nselect bbalance from pgbench_branches15 where bid = :bid\nUNION\nselect bbalance from pgbench_branches16 where bid = :bid\nUNION\nselect bbalance from pgbench_branches17 where bid = :bid\nUNION\nselect bbalance from pgbench_branches18 where bid = :bid\nUNION\nselect bbalance from pgbench_branches19 where bid = :bid\nUNION\nselect bbalance from pgbench_branches20 where bid = :bid;\nEND;\n\nFirst i try to make the number of fast-path locks as a GUC parameter. 
But\nit implies a lot of changes with PGPROC structure. Next I implement it as a\ncompile-time parameter.",
"msg_date": "Mon, 18 Sep 2023 17:47:54 +0300",
"msg_from": "Sergey Sergey <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] fastpacth-locks compile time options"
},
{
"msg_contents": "Hope this patch will be usefull/\n\nOn Mon, Sep 18, 2023 at 5:47 PM Sergey Sergey <[email protected]> wrote:\n\n> While replaying our production workload we have found Postgres spending a\n> lot of time inside TimescaleDB planner. The planner itself need an\n> information about whether a table involved is a TimescaleDB hypertable. So\n> planner need an access to TimescaleDB internal metainformation tables. This\n> planner access become extremely slow when you have a lot of tables involved\n> and you are beyond fast-path lock limit\n>\n> This humble example makes only 2330 tps on REL_15_STABLE but 27041tps on\n> patched version with 64 slots for fast-path locks.\n>\n> \\set bid random(1,1000)\n>\n> BEGIN;\n> select bbalance from pgbench_branches where bid = :bid\n> UNION\n> select bbalance from pgbench_branches2 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches3 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches4 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches5 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches6 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches7 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches8 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches9 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches10 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches11 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches12 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches13 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches14 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches15 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches16 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches17 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches18 where bid = :bid\n> UNION\n> select bbalance from 
pgbench_branches19 where bid = :bid\n> UNION\n> select bbalance from pgbench_branches20 where bid = :bid;\n> END;\n>\n> First i try to make the number of fast-path locks as a GUC parameter. But\n> it implies a lot of changes with PGPROC structure. Next I implement it as a\n> compile-time parameter.\n>",
"msg_date": "Mon, 18 Sep 2023 17:49:51 +0300",
"msg_from": "Sergey Sergey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] fastpacth-locks compile time options"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 05:49:51PM +0300, Sergey Sergey wrote:\n> Hope this patch will be usefull/\n\n- uint64 fpLockBits; /* lock modes held for each fast-path slot */\n+ uint8 fpLockBits[FP_LOCK_SLOTS_PER_BACKEND]; /* lock modes\n\nIf my maths are right, this makes PGPROC 8 bytes larger with 16 slots\nby default. That is not a good idea.\n\n+ --runstatedir=DIR modifiable per-process data [LOCALSTATEDIR/run]\n\nAnd this points out that ./configure has been generated with one of\nDebian's autoreconf commands, which is something to avoid.\n\nI am not sure that this patch is a good idea long-term. Wouldn't it\nbe better to invent new and more scalable concepts able to tackle\nbottlenecks around these code paths instead of using compile-time\ntweaks like that?\n--\nMichael",
"msg_date": "Tue, 19 Sep 2023 08:52:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fastpacth-locks compile time options"
},
{
"msg_contents": "Thank you for response.\n\nOn Tue, Sep 19, 2023 at 2:52 AM Michael Paquier <[email protected]> wrote:\n\n> On Mon, Sep 18, 2023 at 05:49:51PM +0300, Sergey Sergey wrote:\n> > Hope this patch will be usefull/\n>\n> - uint64 fpLockBits; /* lock modes held for each fast-path\n> slot */\n> + uint8 fpLockBits[FP_LOCK_SLOTS_PER_BACKEND]; /* lock\n> modes\n>\n> If my maths are right, this makes PGPROC 8 bytes larger with 16 slots\n> by default. That is not a good idea.\n>\n\nYou maths are correct. I can't estimate overall effect of this PGPROC\ngrows.\nOur typical setup include 768Gb RAM. It looks like space-for-time\noptimization.\nI check ordinary pgbench for patched and unpatched version.Total average tps\nare the same. Patched version has very stable tps values during test.\n\n>\n> + --runstatedir=DIR modifiable per-process data [LOCALSTATEDIR/run]\n>\n> And this points out that ./configure has been generated with one of\n> Debian's autoreconf commands, which is something to avoid.\n>\n\nYes, first i try to build it Debian way.\nI can rebuild ./configure with autoconf.\n\n\n>\n> I am not sure that this patch is a good idea long-term. Wouldn't it\n> be better to invent new and more scalable concepts able to tackle\n> bottlenecks around these code paths instead of using compile-time\n> tweaks like that?\n>\n\nAnother one way is to replace fixed arrays inside PGPROC\n\nuint8 fpLockBits[FP_LOCK_SLOTS_PER_BACKEND]; /* lock\nmodes\nOid fpRelId[FP_LOCK_SLOTS_PER_BACKEND]; /* slots for rel oids */\n\nwith pointers to arrays allocated outside PGPROC.\n\nWe also can use c99 flexible array pointers feature. This way we should make\nstructure like\n\nstruct FPLock\n{\n uint8 fpLockBit;\n Oid fpRelid;\n}\n\nSet the array of struct FPLock at the end of PGPROC structure. 
And\ncalculate memory\nallocation for PGPROC using some GUC variable.\n\nThis two ways seems so complex for me.\n\n\n\n> --\n> Michael\n>",
"msg_date": "Tue, 19 Sep 2023 09:01:09 +0300",
"msg_from": "Sergey Sergey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] fastpacth-locks compile time options"
}
] |
[
{
"msg_contents": "Hi,\n\nThis came up in [0] and opinions besides my own would be welcome.\n\nThere is a function cannotCastJsonbValue in jsonb.c, and it throws \nerrors\nof this form:\n\nERRCODE_INVALID_PARAMETER_VALUE \"cannot cast jsonb %1$s to type %2$s\"\n\nwhere %1 is one of the possible JsonbValue types (null, string, numeric,\nboolean, array, object, or \"array or object\" for jbvBinary). %2 is the \nname\nof some SQL type.\n\nI question the errcode because I do not see a lot of precedent for\nERRCODE_INVALID_PARAMETER_VALUE in this sort of context; it seems more\noften used for a weird value of some behavioral parameter passed to\na function.\n\nThe bigger deal is I question the wording, because although calls to\nthis function are made from various jsonb_foo cast functions, the\nconditions for calling it don't involve the SQL type foo. This message\nonly means that you don't have the type of JsonbValue you thought\nyou were going to cast to the SQL type. I think that's what it should \nsay.\n\nLet me lay out a little more of the picture, by contrasting the way \nthese\njsonb casts work (which may be as specified in SQL/JSON, I don't have a\ncopy) with the way XMLCAST works in SQL/XML.\n\nWhen you XMLCAST some XML value to some target SQL type TD, then there\nis a corresponding XML Schema type XMLT chosen based on TD. For example,\nif you are casting to SQL's SMALLINT, XMLT will be chosen as xs:integer.\nThere are then two things that happen in sequence:\n1) whatever XML type you have is hit with the XQuery expression\n \"cast as xs:integer\", and then\n2) the xs:integer is cast to SQL's SMALLINT and returned.\n\nWhat our jsonb_foo casts do starts out the same way: based on\nthe target SQL type, there's a corresponding JsonbValue type\nchosen. Target SQL type SMALLINT => jbvNumeric, for example.\n\nBut step 2 is not like the SQL/XML case: there is no attempt\nto cast any other kind of JsonbValue to jbvNumeric. 
If the value\nisn't already of that JSON type, it's an error. (It's like an\nalternate-universe version of the SQL/XML rules, where the\nXQuery \"cast as\" in step 1 is \"treat as\" instead.)\n\nAnd then step 3 is unchanged: the JsonbValue of the expected\ntype (which it had to already be) is cast to the wanted SQL\ntype.\n\nConsider these two examples:\n\nselect '\"32768\"'::jsonb::smallint;\nINVALID_PARAMETER_VALUE cannot cast jsonb string to type smallint\n\nselect '32768'::jsonb::smallint;\nNUMERIC_VALUE_OUT_OF_RANGE smallint out of range\n\nThe second message is clearly from step 3, the actual attempt\nto cast a value to smallint, and is what you would expect.\n\nThe first message is from step 2, and it really only means\n\"jsonb string where jsonb numeric expected\", but for whatever SQL\ntype you ask for that corresponds to jsonb numeric in step 2,\nyou get a custom version of the message phrased as \"can't cast to\"\nyour target SQL type instead. To me, that just disguises what is\nreally happening. 
(It's not a matter of \"can't\" cast \"32768\" to\n32768, after all; it's a matter of \"won't\" do any casting in\nstep 2.)\n\nIt matters because the patch being discussed in [0] is\ncomplexified by trying to produce a matching message; it\nactually requires passing the ultimate wanted SQL type as an\nextra argument to a function that has no other reason to\nneed it, and could easily produce a message like \"jsonb string\nwhere jsonb numeric expected\" without it.\n\nTo me, when a situation like that crops up, it suggests that the\nmessage is kind of misrepresenting the logic.\n\nIt would make me happy if the message could be changed, and maybe\nERRCODE_INVALID_PARAMETER_VALUE also changed, perhaps to one of\nthe JSON-specific ones in the 2203x range.\n\nBy the same token, the message and the errcode are established\ncurrent behavior, so there can be sound arguments against changing\nthem (even though that means weird logic in rewriting the expression).\n\nThoughts?\n\nRegards,\n-Chap\n\n[0] \nhttps://www.postgresql.org/message-id/43a988594ac91a63dc4bb49a94303a42%40anastigmatix.net\n\n\n",
"msg_date": "Mon, 18 Sep 2023 12:55:00 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questioning an errcode and message in jsonb.c"
},
{
"msg_contents": "Hi,\n\n  Thanks for raising this issue in a more public way:)\n\nOn Tue, Sep 19, 2023 at 12:55 AM Chapman Flack <[email protected]>\nwrote:\n\n>\n> It would make me happy if the message could be changed, and maybe\n> ERRCODE_INVALID_PARAMETER_VALUE also changed, perhaps to one of\n> the JSON-specific ones in the 2203x range.\n>\n\nI'd agree with this.\n\n\n> By the same token, the message and the errcode are established\n> current behavior, so there can be sound arguments against changing\n> them (even though that means weird logic in rewriting the expression).\n>\n\nThis is not a technology issue, I'd be pretty willing to see what some\nmore experienced people say about this. I think just documenting the\nimpatible behavior is an option as well.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Tue, 19 Sep 2023 15:55:48 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questioning an errcode and message in jsonb.c"
},
{
"msg_contents": "On 18.09.23 18:55, Chapman Flack wrote:\n> It would make me happy if the message could be changed, and maybe\n> ERRCODE_INVALID_PARAMETER_VALUE also changed, perhaps to one of\n> the JSON-specific ones in the 2203x range.\n\nWhat is an example of a statement or function call that causes this \nerror? Then we can look in the SQL standard for guidance.\n\n\n",
"msg_date": "Wed, 20 Sep 2023 07:50:14 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questioning an errcode and message in jsonb.c"
},
{
"msg_contents": "Hi Peter,\n\nOn Wed, Sep 20, 2023 at 4:51 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 18.09.23 18:55, Chapman Flack wrote:\n> > It would make me happy if the message could be changed, and maybe\n> > ERRCODE_INVALID_PARAMETER_VALUE also changed, perhaps to one of\n> > the JSON-specific ones in the 2203x range.\n>\n> What is an example of a statement or function call that causes this\n> error? Then we can look in the SQL standard for guidance.\n>\n\nThanks for showing interest in this. The issue comes from this situation.\n\ncreate table tb(a jsonb);\n\ninsert into tb select '{\"a\": \"foo\", \"b\": 100000000}';\n\n\nselect cast(a->'a' as numeric) from tb;\nERRCODE_INVALID_PARAMETER_VALUE cannot cast jsonb string to type numeric\n\nthe call stack is:\n0 in errstart of elog.c:351\n1 in errstart_cold of elog.c:333\n2 in cannotCastJsonbValue of jsonb.c:2033\n3 in jsonb_numeric of jsonb.c:2063\n4 in ExecInterpExpr of execExprInterp.c:758\n\nselect cast(a->'b' as int2) from tb;\nNUMERIC_VALUE_OUT_OF_RANGE smallint out of range\n\nthe call stack is:\n1 in errstart_cold of elog.c:333\n2 in numeric_int2 of numeric.c:4503\n3 in DirectFunctionCall1Coll of fmgr.c:785\n4 in jsonb_int2 of jsonb.c:2086\n\nThere are 2 different errcode involved here and there are two different\nfunctions that play part in it (jsonb_numeric and numeric_int2). and\nthe error code jsonb_numeric used is improper as well.\n\nThe difference is not very huge, but it would be cool if we can make\nit better, If something really improves here, it will make the code in [0]\ncleaner as well. 
the bad code in [0]:\n\n+Datum\n+jsonb_finish_numeric(PG_FUNCTION_ARGS)\n+{\n+ JsonbValue *v = (JsonbValue *)PG_GETARG_POINTER(0);\n+ Oid final_oid = PG_GETARG_OID(1);\n+ if (v->type != jbvNumeric)\n+ cannotCastJsonbValue(v->type, format_type_be(final_oid));\n+ PG_RETURN_NUMERIC(v->val.numeric);\n+}\n\nTo match the error message in the older version, I have to input\na {final_oid} argument in jsonb_finish_numeric function which\nis not good.\n\nAs to how to redesign the error message is a bit confusing to\nme, it would be good to see the proposal code as well.\n\nThe only concern from me is that the new error from newer\nversion is not compatible with the older versions, which may matters\nmatters or doesn't match, I don't know.\n\n[0]\nhttps://www.postgresql.org/message-id/43a988594ac91a63dc4bb49a94303a42%40anastigmatix.net\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 22 Sep 2023 08:38:24 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questioning an errcode and message in jsonb.c"
},
{
"msg_contents": "On 2023-09-21 20:38, Andy Fan wrote:\n> insert into tb select '{\"a\": \"foo\", \"b\": 100000000}';\n> ...\n> select cast(a->'a' as numeric) from tb;\n> ERRCODE_INVALID_PARAMETER_VALUE cannot cast jsonb string to type \n> numeric\n> ...\n> select cast(a->'b' as int2) from tb;\n> NUMERIC_VALUE_OUT_OF_RANGE smallint out of range\n\n... and perhaps driving home the point:\n\ninsert into tb select '{\"a\": \"1\", \"b\": 100000000}';\nselect cast(a->'a' as int2) from tb;\nERRCODE_INVALID_PARAMETER_VALUE cannot cast jsonb string to type \nsmallint\n\nwhich illustrates that:\n\n1) it is of no consequence whether the non-numeric JSON type of\nthe cast source is something that does or doesn't look castable to\nnumeric: in the first-step test that produces this message, the\nonly thing tested is whether the JSON type of the source is JSON\nnumeric. If it is not, there will be no attempt to cast it.\n\n2) it is immaterial what the SQL target type of the cast is;\nthe message will misleadingly say \"to smallint\" if you are\ncasting to smallint, or \"to double precision\" if you are casting\nto that, but the only thing that has been tested is whether the\nsource has JSON type numeric.\n\nThe message in this case only really means \"JSON type string\nwhere JSON type numeric needed\".\n\nThe issue is fully general:\n\ninsert into tb select '{\"a\": 1}';\nselect cast(a->'a' as boolean) from tb;\nERRCODE_INVALID_PARAMETER_VALUE cannot cast jsonb numeric to type \nboolean\n\nAgain, all that has been tested is whether the JSON type is\nJSON boolean. 
If it is not, no effort is made to cast it, and\nthe message really only means \"JSON type numeric where\nJSON type boolean needed\".\n\nThe most annoying cases are the ones where JSON type numeric\nis needed, because of the several different SQL types that one\nmight want as the ultimate target type, so extra machinations\nare needed to get this message to misleadingly mention that\nultimate type.\n\nAs I mentioned in my earlier message, the behavior here\ndiffers from the exactly analogous specified behavior for\nXMLCAST in SQL/XML. I am not saying the behavior here is\nwrong; perhaps SQL/JSON has chosen to specify it differently\n(I haven't got a copy). But I pointed out the difference as\nit may help to pinpoint the relevant part of the spec.\n\nIn the SQL/XML XMLCAST, the same two-step process exists:\na first step that is only concerned with the XML Schema\ntype (say, is it xs:string or xs:decimal?), and a second\nstep where the right xs type is then cast to the wanted SQL type.\n\nThe difference is, XMLCAST in the first step will try to\ncast a different xs type to the right xs type. By contrast\nour JSON casting simply requires the JSON type to be the\nright JSON type, or fails. And for all I know, that different\napproach may be as specified in SQL/JSON.\n\nBut I would not have it use ERRCODE_INVALID_PARAMETER_VALUE,\nor issue a message talking about the ultimate SQL type when the\nonly thing checked in that step is the JSON type ... unless\nthe spec really says to do so.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 21 Sep 2023 21:16:07 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questioning an errcode and message in jsonb.c"
},
{
"msg_contents": "Hi Chap,\n\n\n> As to how to redesign the error message is a bit confusing to\n> me, it would be good to see the proposal code as well.\n>\n> The only concern from me is that the new error from the newer\n> version is not compatible with the older versions, which may matter\n> or may not, I don't know.\n>\n>\nDo you mind providing the patch you have in mind? Let's just ignore\nthe compatibility issue for now. I think that would be pretty helpful for\nfurther discussion.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 22 Sep 2023 11:02:25 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questioning an errcode and message in jsonb.c"
},
{
"msg_contents": "On 22.09.23 02:38, Andy Fan wrote:\n> create table tb(a jsonb);\n> \n> insert into tb select '{\"a\": \"foo\", \"b\": 100000000}';\n> \n> \n> select cast(a->'a' as numeric) from tb;\n> \n> ERRCODE_INVALID_PARAMETER_VALUE cannot cast jsonb string to type numeric\n> \n> the call stack is:\n> 0 in errstart of elog.c:351\n> 1 in errstart_cold of elog.c:333\n> 2 in cannotCastJsonbValue of jsonb.c:2033\n> 3 in jsonb_numeric of jsonb.c:2063\n> 4 in ExecInterpExpr of execExprInterp.c:758\n> \n> select cast(a->'b' as int2) from tb;\n> NUMERIC_VALUE_OUT_OF_RANGE smallint out of range\n> \n> the call stack is:\n> 1 in errstart_cold of elog.c:333\n> 2 in numeric_int2 of numeric.c:4503\n> 3 in DirectFunctionCall1Coll of fmgr.c:785\n> 4 in jsonb_int2 of jsonb.c:2086\n> \n> There are 2 different errcode involved here and there are two different\n> functions that play part in it (jsonb_numeric and numeric_int2). and\n> the error code jsonb_numeric used is improper as well.\n\nThis looks like an undesirable inconsistency.\n\nYou asked about the SQL standard. The error code \nNUMERIC_VALUE_OUT_OF_RANGE appears as part of a failure of the <cast \nspecification>. The error code ERRCODE_INVALID_PARAMETER_VALUE appears \nonly as part of processing host parameters in <externally-invoked \nprocedure>. Of course, in PostgreSQL, function calls and casts are \nrelated under the hood, so you could maybe make arguments for both. 
But \nI think we already use ERRCODE_INVALID_PARAMETER_VALUE more broadly than \nthe standard, so I would tend to prefer going in the direction of \nNUMERIC_VALUE_OUT_OF_RANGE when in doubt.\n\nWe could also consider these operators a special case of JSON_VALUE, in \nwhich case the following would apply:\n\n\"\"\"\nIf IDT cannot be cast to target type DT according to the Syntax Rules of \nSubclause 6.13, “<cast specification>”, then let TEMPST be data \nexception — SQL/JSON item cannot be cast to target type (2203G).\n\"\"\"\n\nWe do have a definition of this in errcodes.txt but don't use it \nanywhere. Maybe the patches for SQL/JSON currently being reviewed will \nuse it.\n\n\n\n",
"msg_date": "Fri, 6 Oct 2023 08:57:12 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questioning an errcode and message in jsonb.c"
}
] |
[
{
"msg_contents": "Hi,\n\nPer a complaint in another thread[1], I started to look into the global\nvariables in pgoutput.\n\nCurrently we have several global variables in pgoutput, but each of them is\ninherently local to an individual pgoutput instance. This could cause issues if\nwe switch to different output plugin instance in one session and could fail to\nreset their value in case of errors. The analysis for each variable is as\nfollows:\n\n- static HTAB *RelationSyncCache = NULL;\n\npgoutput creates this hash table under cacheMemoryContext to remember the\nrelation schemas that have been sent, but it's local to an individual pgoutput\ninstance, and because it's under global memory context, the hashtable is not\nproperly cleared in error paths which means it has a risk of being accessed in\na different output plugin instance. This was also mentioned in another thread[2].\n\nSo I think we'd better allocate this under output plugin private context. \n\nBut note that, instead of completely moving the hash table into the output\nplugin private data, we need to keep the static pointer variable for the map to\nbe accessed by the syscache callbacks. This is because syscache callbacks won't\nbe un-registered even after shutting down the output plugin, so we need a\nstatic pointer to cache the map pointer so that callbacks can check it.\n\n- static bool publish_no_origin;\n\nThis flag is also local to pgoutput instance, and we didn't reset the flag in\noutput shutdown callback, so if we consume changes from different slots, then\nthe second call would reuse the flag value that is set in the first call which\nis unexpected. 
To completely avoid this issue, we think we'd better move this\nflag to output plugin private data structure.\n\nExample:\n SELECT data FROM pg_logical_slot_peek_binary_changes('isolation_slot_1', NULL, NULL, 'proto_version', '1', 'publication_names', 'pub', 'origin', 'none'); --- Set origin in this call.\n SELECT data FROM pg_logical_slot_peek_binary_changes('isolation_slot_2', NULL, NULL, 'proto_version', '1', 'publication_names', 'pub'); -- Didn't set origin, but will reuse the origin flag in the first call.\n\n- static bool in_streaming;\n\nWhile on it, I feel we can also move this flag to private data, although I didn't\nsee problems for this one.\n\n- static bool publications_valid;\n\nI thought we need to move this to private data as well, but we need to access this in a\nsyscache callback, which means we need to keep the static variable.\n\nAttached are the patches to change in_streaming, publish_no_origin and RelationSyncCache.\nSuggestions and comments are welcome.\n\n[1] https://www.postgresql.org/message-id/20230821182732.t3qc75i5s5xvovls%40awork3.anarazel.de\n[2] https://www.postgresql.org/message-id/CAA4eK1LJ%3DCSsxETs5ydqP58OiWPiwodx%3DJqw89LQ7fMrRWqK9w%40mail.gmail.com\n\nBest Regards,\nHou Zhijie",
"msg_date": "Tue, 19 Sep 2023 04:10:39 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 04:10:39AM +0000, Zhijie Hou (Fujitsu) wrote:\n> Currently we have serval global variables in pgoutput, but each of them is\n> inherently local to an individual pgoutput instance. This could cause issues if\n> we switch to different output plugin instance in one session and could miss to\n> reset their value in case of errors. The analysis for each variable is as\n> follows:\n\n(Moved the last block of the message as per the relationship between\nRelationSyncCache and publications_valid).\n\n> - static HTAB *RelationSyncCache = NULL;\n> \n> pgoutput creates this hash table under cacheMemoryContext to remember the\n> relation schemas that have been sent, but it's local to an individual pgoutput\n> instance, and because it's under global memory context, the hashtable is not\n> properly cleared in error paths which means it has a risk of being accessed in\n> a different output plugin instance. This was also mentioned in another thread[2].\n> \n> So I think we'd better allocate this under output plugin private context. \n> \n> But note that, instead of completely moving the hash table into the output\n> plugin private data, we need to to keep the static pointer variable for the map to\n> be accessed by the syscache callbacks. 
This is because syscache callbacks won't\n> be un-registered even after shutting down the output plugin, so we need a\n> static pointer to cache the map pointer so that callbacks can check it.\n>\n> - static bool publications_valid;\n> \n> I thought we need to move this to private data as well, but we need to access this in a\n> syscache callback, which means we need to keep the static variable.\n\nFWIW, I think that keeping publications_valid makes the code kind of\nconfusing once 0001 is applied, because this makes the handling of the\ncached data for relations and publications even more inconsistent than\nit is now, with a mixed bag of two different logics caused by the\nrelationship between the synced relation cache and the publication\ncache: RelationSyncCache tracks if relations should be rebuilt, while\npublications_valid does it for the publication data, but both are\nstill static and could be shared by multiple pgoutput contexts. On\ntop of that, publications_valid is hidden at the top of pgoutput.c\nwithin a bunch of declarations and no comments to explain why it's\nhere (spoiler: to handle the cache rebuilds with its reset in the\ncache callback).\n\nI agree that CacheMemoryContext is not really a good idea to cache the\ndata only proper to a pgoutput session and that tracking a context in\nthe output data makes the whole cleanup attractive, but it also seems\nto me that we should avoid entirely the use of relcache callbacks if\nthe intention is to have one RelationSyncEntry per pgoutput. The\npatch does something different than HEAD and than having one\nRelationSyncEntry per pgoutput: RelationSyncEntry can reference \n*everything*, with its data stored in multiple memory contexts as of\none per pgoutput. It looks like RelationSyncEntry should be a list\nor a hash table, at least, so that it can refer to multiple pgoutput\nstates. Knowing that a session can only use one replication slot with\nMyReplicationSlot, not sure that's worth bothering with. 
As a whole,\n0001 with its changes for RelationSyncCache doesn't seem like an\nimprovement to me.\n\n> - static bool publish_no_origin;\n> \n> This flag is also local to pgoutput instance, and we didn't reset the flag in\n> output shutdown callback, so if we consume changes from different slots, then\n> the second call would reuse the flag value that is set in the first call which\n> is unexpected. To completely avoid this issue, we think we'd better move this\n> flag to output plugin private data structure.\n\nYep, that's incorrect.\n\n> - static bool in_streaming;\n> \n> While on it, I feel we can also move this flag to private data, although I didn't\n> see problems for this one.\n\nMoving this one to the private state data makes sense to me, as it\ntracks the streaming of one PGOutputData.\n\nNote that we name RelSchemaSyncCache twice in the code, but it does\nnot exist.\n--\nMichael",
"msg_date": "Tue, 19 Sep 2023 14:43:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "Hi Hou-san.\n\nGiven there are some issues raised about the 0001 patch [1] I am\nskipping that one until I see the replies.\n\nMeanwhile, here are some review comments for the patches v1-0002 and v1-0003\n\n////////////////////\nv1-0002\n\n======\nCommit message\n\n1.\nThe pgoutput module uses a global variable(publish_no_origin) to cache the\naction for the origin filter. But we only initialize publish_no_origin when\nuser specifies the \"origin\" in the output paramters which means we could refer\nto an uninitialized variable if user didn't specify the paramter.\n\n~\n\n1a.\n\ntypos\n/variable(publish_no_origin)/variable (publish_no_origin)/\n/paramters/parameters/\n/paramter./parameter./\n\n~\n\n1b.\n\"...we could refer to an uninitialized variable\"\n\nI'm not sure what this means. Previously it was static, so it wouldn't\nbe \"uninitialised\"; it would be false. Perhaps there might be a stale\nvalue from a previous pgoutput, but IIUC that's the point made by your\nnext paragraph (\"Besides, we don't...\")\n\n~~~\n\n2.\nTo improve it, the patch stores the map within the private data of the output\nplugin so that it will get initialized and reset along with the output plugin\ncontext.\n\n2a.\n/To improve it,/To fix this/\n\n~\n\n2b.\n\"stores the map\"\n\nWhat map? This might be a cut/paste error from the v1-0001 patch comment.\n\n////////////////////\nv1-0003\n\n======\nCommit message\n\n1.\nMissing patch comment.\n\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\n2. 
maybe_send_schema\n\n- if (in_streaming)\n+ if (data->in_streaming)\n set_schema_sent_in_streamed_txn((PGOutputData *) ctx->output_plugin_private,\n relentry, topxid);\n~\n\nSince you added a new 'data' variable, you might as well make use of\nit here instead of doing \"(PGOutputData *) ctx->output_plugin_private\"\nagain.\n\n======\nsrc/include/replication/pgoutput.h\n\n3.\n MemoryContext cachectx; /* private memory context for cache data */\n\n+ bool in_streaming;\n+\n\nEven though there was no comment previously when this was static, IMO\nit is better to comment on all the structure fields where possible.\n\n------\n[1] https://www.postgresql.org/message-id/ZQk1Ca_eFDTmBiZy%40paquier.xyz\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 20 Sep 2023 11:42:12 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 12:48 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> - static bool publish_no_origin;\n>\n> This flag is also local to pgoutput instance, and we didn't reset the flag in\n> output shutdown callback, so if we consume changes from different slots, then\n> the second call would reuse the flag value that is set in the first call which\n> is unexpected. To completely avoid this issue, we think we'd better move this\n> flag to output plugin private data structure.\n>\n> Example:\n> SELECT data FROM pg_logical_slot_peek_binary_changes('isolation_slot_1', NULL, NULL, 'proto_version', '1', 'publication_names', 'pub', 'origin', 'none'); --- Set origin in this call.\n> SELECT data FROM pg_logical_slot_peek_binary_changes('isolation_slot_2', NULL, NULL, 'proto_version', '1', 'publication_names', 'pub'); -- Didn't set origin, but will reuse the origin flag in the first call.\n>\n\n char *origin;\n+ bool publish_no_origin;\n } PGOutputData;\n\nDo we really need a new parameter in the above structure? Can't we just\nuse the existing origin in the same structure? Please remember if this\nneeds to be backpatched then it may not be a good idea to add a new\nparameter in the structure but apart from that having two members to\nrepresent similar information doesn't seem advisable to me. I feel for the\nbackbranch we can just use PGOutputData->origin for comparison and for\nHEAD, we can remove origin and just use a boolean to avoid any extra\ncost for comparisons for each change.\n\nCan we add a test case to cover this case?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 26 Sep 2023 14:10:17 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Tuesday, September 19, 2023 1:44 PM Michael Paquier <[email protected]> wrote:\n> \n> On Tue, Sep 19, 2023 at 04:10:39AM +0000, Zhijie Hou (Fujitsu) wrote:\n> > Currently we have serval global variables in pgoutput, but each of\n> > them is inherently local to an individual pgoutput instance. This\n> > could cause issues if we switch to different output plugin instance in\n> > one session and could miss to reset their value in case of errors. The\n> > analysis for each variable is as\n> > follows:\n> \n> (Moved the last block of the message as per the relationship between\n> RelationSyncCache and publications_valid).\n> \n> > - static HTAB *RelationSyncCache = NULL;\n> >\n> > pgoutput creates this hash table under cacheMemoryContext to remember\n> > the relation schemas that have been sent, but it's local to an\n> > individual pgoutput instance, and because it's under global memory\n> > context, the hashtable is not properly cleared in error paths which\n> > means it has a risk of being accessed in a different output plugin instance.\n> This was also mentioned in another thread[2].\n> >\n> > So I think we'd better allocate this under output plugin private context.\n> >\n> > But note that, instead of completely moving the hash table into the\n> > output plugin private data, we need to to keep the static pointer\n> > variable for the map to be accessed by the syscache callbacks. 
This is\n> > because syscache callbacks won't be un-registered even after shutting\n> > down the output plugin, so we need a static pointer to cache the map pointer\n> so that callbacks can check it.\n> >\n> > - static bool publications_valid;\n> >\n> > I thought we need to move this to private data as well, but we need to\n> > access this in a syscache callback, which means we need to keep the static\n> variable.\n> \n> FWIW, I think that keeping publications_valid makes the code kind of confusing\n> once 0001 is applied, because this makes the handling of the cached data for\n> relations and publications even more inconsistent than it is now, with a mixed\n> bag of two different logics caused by the relationship between the synced\n> relation cache and the publication\n> cache: RelationSyncCache tracks if relations should be rebuilt, while\n> publications_valid does it for the publication data, but both are still static and\n> could be shared by multiple pgoutput contexts. On top of that,\n> publications_valid is hidden at the top of pgoutput.c within a bunch of\n> declarations and no comments to explain why it's here (spoiler: to handle the\n> cache rebuilds with its reset in the cache callback).\n> \n> I agree that CacheMemoryContext is not really a good idea to cache the data\n> only proper to a pgoutput session and that tracking a context in the output\n> data makes the whole cleanup attractive, but it also seems to me that we\n> should avoid entirely the use of relcache callbacks if the intention is to have one\n> RelationSyncEntry per pgoutput. The patch does something different than\n> HEAD and than having one RelationSyncEntry per pgoutout: RelationSyncEntry\n> can reference *everything*, with its data stored in multiple memory contexts as\n> of one per pgoutput. It looks like RelationSyncEntry should be a list or a hash\n> table, at least, so as it can refer to multiple pgoutput states. 
Knowing that a\n> session can only use one replication slot with MyReplicationSlot, not sure that's\n> worth bothering with. As a whole,\n> 0001 with its changes for RelationSyncCache don't seem like an improvement\n> to me.\n> \n\nThanks for your comments. Currently, I am not sure how to avoid the use of the\nsyscache callback functions, so I think the change for RelationSyncCache needs\nmore thought and I will retry later if I find another way.\n\nBest Regards,\nHou zj\n\n\n",
"msg_date": "Tue, 26 Sep 2023 13:49:42 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Tuesday, September 26, 2023 4:40 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Tue, Sep 19, 2023 at 12:48 PM Zhijie Hou (Fujitsu) <[email protected]>\r\n> wrote:\r\n> >\r\n> > - static bool publish_no_origin;\r\n> >\r\n> > This flag is also local to pgoutput instance, and we didn't reset the\r\n> > flag in output shutdown callback, so if we consume changes from\r\n> > different slots, then the second call would reuse the flag value that\r\n> > is set in the first call which is unexpected. To completely avoid this\r\n> > issue, we think we'd better move this flag to output plugin private data\r\n> structure.\r\n> >\r\n> > Example:\r\n> > SELECT data FROM pg_logical_slot_peek_binary_changes('isolation_slot_1',\r\n> NULL, NULL, 'proto_version', '1', 'publication_names', 'pub', 'origin', 'none'); ---\r\n> Set origin in this call.\r\n> > SELECT data FROM pg_logical_slot_peek_binary_changes('isolation_slot_2',\r\n> NULL, NULL, 'proto_version', '1', 'publication_names', 'pub'); -- Didn't set\r\n> origin, but will reuse the origin flag in the first call.\r\n> >\r\n> \r\n> char *origin;\r\n> + bool publish_no_origin;\r\n> } PGOutputData;\r\n> \r\n> Do we really need a new parameter in above structure? Can't we just use the\r\n> existing origin in the same structure? Please remember if this needs to be\r\n> backpatched then it may not be good idea to add new parameter in the\r\n> structure but apart from that having two members to represent similar\r\n> information doesn't seem advisable to me. I feel for backbranch we can just use\r\n> PGOutputData->origin for comparison and for HEAD, we can remove origin\r\n> and just use a boolean to avoid any extra cost for comparisions for each\r\n> change.\r\n\r\nOK, I agree to remove the origin string on HEAD and we can add that back\r\nwhen we support other origin value. I also modified to use the string for comparison\r\nas suggested for back-branch. 
I will also test it locally to confirm it doesn't affect\r\nthe perf.\r\n\r\n> \r\n> Can we add a test case to cover this case?\r\n\r\nAdded one in replorigin.sql.\r\n\r\nAttached is the patch set for publish_no_origin and in_streaming, including the\r\npatch (v2-PG16-0001) for back-branches. Since the patch for the hash table needs\r\nmore thought, I didn't post it this time.\r\n\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Tue, 26 Sep 2023 13:55:10 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Tue, Sep 26, 2023 at 01:55:10PM +0000, Zhijie Hou (Fujitsu) wrote:\n> On Tuesday, September 26, 2023 4:40 PM Amit Kapila <[email protected]> wrote:\n>> Do we really need a new parameter in above structure? Can't we just use the\n>> existing origin in the same structure? Please remember if this needs to be\n>> backpatched then it may not be good idea to add new parameter in the\n>> structure but apart from that having two members to represent similar\n>> information doesn't seem advisable to me. I feel for backbranch we can just use\n>> PGOutputData->origin for comparison and for HEAD, we can remove origin\n>> and just use a boolean to avoid any extra cost for comparisions for each\n>> change.\n> \n> OK, I agree to remove the origin string on HEAD and we can add that back\n> when we support other origin value. I also modified to use the string for comparison\n> as suggested for back-branch. I will also test it locally to confirm it doesn't affect\n> the perf.\n\nErr, actually, I am going to disagree here for the patch of HEAD. It\nseems to me that there is zero need for pgoutput.h and we don't need\nto show PGOutputData to the world. The structure is internal to\npgoutput.c and used only by its internal static routines.\n\nDoing a codesearch in the Debian repos or just github shows that it is\nused nowhere else, as well, something not really surprising as the\nstructure is filled and maintained in the file.\n--\nMichael",
"msg_date": "Wed, 27 Sep 2023 12:40:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 9:10 AM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Sep 26, 2023 at 01:55:10PM +0000, Zhijie Hou (Fujitsu) wrote:\n> > On Tuesday, September 26, 2023 4:40 PM Amit Kapila <[email protected]> wrote:\n> >> Do we really need a new parameter in above structure? Can't we just use the\n> >> existing origin in the same structure? Please remember if this needs to be\n> >> backpatched then it may not be good idea to add new parameter in the\n> >> structure but apart from that having two members to represent similar\n> >> information doesn't seem advisable to me. I feel for backbranch we can just use\n> >> PGOutputData->origin for comparison and for HEAD, we can remove origin\n> >> and just use a boolean to avoid any extra cost for comparisions for each\n> >> change.\n> >\n> > OK, I agree to remove the origin string on HEAD and we can add that back\n> > when we support other origin value. I also modified to use the string for comparison\n> > as suggested for back-branch. I will also test it locally to confirm it doesn't affect\n> > the perf.\n>\n> Err, actually, I am going to disagree here for the patch of HEAD. It\n> seems to me that there is zero need for pgoutput.h and we don't need\n> to show PGOutputData to the world. The structure is internal to\n> pgoutput.c and used only by its internal static routines.\n>\n\nDo you disagree with the approach for the PG16 patch or HEAD? You\nmentioned HEAD but your argument sounds like you disagree with a\ndifferent approach for PG16.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Sep 2023 09:39:19 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 09:39:19AM +0530, Amit Kapila wrote:\n> On Wed, Sep 27, 2023 at 9:10 AM Michael Paquier <[email protected]> wrote:\n>> Err, actually, I am going to disagree here for the patch of HEAD. It\n>> seems to me that there is zero need for pgoutput.h and we don't need\n>> to show PGOutputData to the world. The structure is internal to\n>> Pgoutput.c and used only by its internal static routines.\n> \n> Do you disagree with the approach for the PG16 patch or HEAD? You\n> mentioned HEAD but your argument sounds like you disagree with a\n> different approach for PG16.\n\nOnly HEAD where the structure should be moved from pgoutput.h to\npgoutput.c, IMO. The proposed patch for PG16 is OK as the size of the\nstructure should not change in a branch already released.\n--\nMichael",
"msg_date": "Wed, 27 Sep 2023 13:16:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 9:46 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Sep 27, 2023 at 09:39:19AM +0530, Amit Kapila wrote:\n> > On Wed, Sep 27, 2023 at 9:10 AM Michael Paquier <[email protected]> wrote:\n> >> Err, actually, I am going to disagree here for the patch of HEAD. It\n> >> seems to me that there is zero need for pgoutput.h and we don't need\n> >> to show PGOutputData to the world. The structure is internal to\n> >> Pgoutput.c and used only by its internal static routines.\n> >\n> > Do you disagree with the approach for the PG16 patch or HEAD? You\n> > mentioned HEAD but your argument sounds like you disagree with a\n> > different approach for PG16.\n>\n> Only HEAD where the structure should be moved from pgoutput.h to\n> pgoutput.c, IMO.\n>\n\nIt's like that from the beginning. Now, even if we want to move, your\nsuggestion is not directly related to this patch as we are just\nchanging one field, and that too to fix a bug. We should start a\nseparate thread to gather a broader consensus if we want to move the\nexposed structure to an internal file.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Sep 2023 10:15:24 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Wednesday, September 27, 2023 12:45 PM Amit Kapila <[email protected]>\r\n> \r\n> On Wed, Sep 27, 2023 at 9:46 AM Michael Paquier <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Wed, Sep 27, 2023 at 09:39:19AM +0530, Amit Kapila wrote:\r\n> > > On Wed, Sep 27, 2023 at 9:10 AM Michael Paquier <[email protected]>\r\n> wrote:\r\n> > >> Err, actually, I am going to disagree here for the patch of HEAD.\r\n> > >> It seems to me that there is zero need for pgoutput.h and we don't\r\n> > >> need to show PGOutputData to the world. The structure is internal\r\n> > >> to Pgoutput.c and used only by its internal static routines.\r\n> > >\r\n> > > Do you disagree with the approach for the PG16 patch or HEAD? You\r\n> > > mentioned HEAD but your argument sounds like you disagree with a\r\n> > > different approach for PG16.\r\n> >\r\n> > Only HEAD where the structure should be moved from pgoutput.h to\r\n> > pgoutput.c, IMO.\r\n> >\r\n> \r\n> It's like that from the beginning. Now, even if we want to move, your\r\n> suggestion is not directly related to this patch as we are just changing one field,\r\n> and that too to fix a bug. We should start a separate thread to gather a broader\r\n> consensus if we want to move the exposed structure to an internal file.\r\n\r\nWhile searching the code, I noticed one postgres fork where PGOutputData is\r\nused in other places; although it's a separate fork, it seems better to\r\ndiscuss the removal separately.\r\n\r\n[1] https://github.com/Tencent/TBase/blob/7cf7f8afbcab7290538ad5e65893561710be3dfa/src/backend/replication/walsender.c#L100\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Wed, 27 Sep 2023 04:51:29 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 10:15:24AM +0530, Amit Kapila wrote:\n> It's like that from the beginning. Now, even if we want to move, your\n> suggestion is not directly related to this patch as we are just\n> changing one field, and that too to fix a bug. We should start a\n> separate thread to gather a broader consensus if we want to move the\n> exposed structure to an internal file.\n\nAs you wish. You are planning to take care of the patches 0001 and\n0002 posted on this thread, I guess?\n--\nMichael",
"msg_date": "Wed, 27 Sep 2023 13:56:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 10:26 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Sep 27, 2023 at 10:15:24AM +0530, Amit Kapila wrote:\n> > It's like that from the beginning. Now, even if we want to move, your\n> > suggestion is not directly related to this patch as we are just\n> > changing one field, and that too to fix a bug. We should start a\n> > separate thread to gather a broader consensus if we want to move the\n> > exposed structure to an internal file.\n>\n> As you wish.\n>\n\nThanks.\n\n>\n> You are planning to take care of the patches 0001 and\n> 0002 posted on this thread, I guess?\n>\n\nI have tested and reviewed\nv2-0001-Maintain-publish_no_origin-in-output-plugin-priv and\nv2-PG16-0001-Maintain-publish_no_origin-in-output-plugin-priva patches\nposted in the email [1]. I'll push those unless there are more\ncomments on them. I have briefly looked at\nv2-0002-Move-in_streaming-to-output-private-data in the same email [1]\nbut didn't think about it in detail (like whether there is any live\nbug that can be fixed or is just an improvement). If you wanted to\nlook and commit v2-0002-Move-in_streaming-to-output-private-data, I am\nfine with that?\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB57164B085332DB677DBFA8E994C3A%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Sep 2023 10:51:52 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 04:51:29AM +0000, Zhijie Hou (Fujitsu) wrote:\n> While searching the code, I noticed one postgres fork where the PGoutputData is\n> used in other places, although it's a separate fork, but it seems better to\n> discuss the removal separately.\n> \n> [1] https://github.com/Tencent/TBase/blob/7cf7f8afbcab7290538ad5e65893561710be3dfa/src/backend/replication/walsender.c#L100\n\nIndeed. Interesting.\n--\nMichael",
"msg_date": "Wed, 27 Sep 2023 14:47:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 10:51:52AM +0530, Amit Kapila wrote:\n> I have briefly looked at\n> v2-0002-Move-in_streaming-to-output-private-data in the same email [1]\n> but didn't think about it in detail (like whether there is any live\n> bug that can be fixed or is just an improvement).\n\nThis looks like an improvement to me, as at the startup of a stream\nthe flag is forcibly reset to a false state. So, you cannot really\nreach a state where a second stream could be started within the same\nsession but with a flag incorrectly set to true. Tracking that in the\nstate data of pgoutput is cleaner, definitely.\n\n> If you wanted to\n> look and commit v2-0002-Move-in_streaming-to-output-private-data, I am\n> fine with that?\n\nSure. I found the concept behind 0002 sound. Feel free to go ahead\nwith 0001, and I can always look at the second. Always happy to help.\n--\nMichael",
"msg_date": "Wed, 27 Sep 2023 16:57:06 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 04:57:06PM +0900, Michael Paquier wrote:\n> Sure. I found the concept behind 0002 sound. Feel free to go ahead\n> with 0001, and I can always look at the second. Always happy to help.\n\nFor the sake of the archives:\n- Amit has applied 0001 down to 16 as of 54ccfd65868c.\n- I've applied 0002 after on HEAD as of 9210afd3bcd6.\n--\nMichael",
"msg_date": "Fri, 29 Sep 2023 10:41:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move global variables of pgoutput to plugin private scope."
}
] |
[
{
"msg_contents": "Hi,\n\nIssue1:\nVACUUM and ANALYZE docs explain that the parameter of BUFFER_USAGE_LIMIT \nis optional as follows. But this is not true. The argument, size, is \nrequired for BUFFER_USAGE_LIMIT. So the docs should be fixed this issue.\nBUFFER_USAGE_LIMIT [ size ]\nhttps://www.postgresql.org/docs/devel/sql-vacuum.html\nhttps://www.postgresql.org/docs/devel/sql-analyze.html\n\nIssue2:\nSizes may also be specified as a string containing the numerical size \nfollowed by any one of the following memory units: kB (kilobytes), MB \n(megabytes), GB (gigabytes), or TB (terabytes).\nVACUUM and ANALYZE docs explain that the argument of BUFFER_USAGE_LIMIT \naccepts the units like kB (kilobytes), MB (megabytes), GB (gigabytes), \nor TB (terabytes). But it also actually accepts B(bytes) as an unit. So \nthe docs should include \"B(bytes)\" as an unit that the argument of \nBUFFER_USAGE_LIMIT can accept.\n\nYou can see the patch in the attached file.\n\nRyoga Yoshida",
"msg_date": "Tue, 19 Sep 2023 17:59:11 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix bug in VACUUM and ANALYZE docs"
},
{
"msg_contents": "On 2023-09-19 17:59, Ryoga Yoshida wrote:\n> Hi,\n> \n> Issue1:\n> VACUUM and ANALYZE docs explain that the parameter of\n> BUFFER_USAGE_LIMIT is optional as follows. But this is not true. The\n> argument, size, is required for BUFFER_USAGE_LIMIT. So the docs should\n> be fixed this issue.\n> BUFFER_USAGE_LIMIT [ size ]\n> https://www.postgresql.org/docs/devel/sql-vacuum.html\n> https://www.postgresql.org/docs/devel/sql-analyze.html\n> \n> Issue2:\n> Sizes may also be specified as a string containing the numerical size\n> followed by any one of the following memory units: kB (kilobytes), MB\n> (megabytes), GB (gigabytes), or TB (terabytes).\n> VACUUM and ANALYZE docs explain that the argument of\n> BUFFER_USAGE_LIMIT accepts the units like kB (kilobytes), MB\n> (megabytes), GB (gigabytes), or TB (terabytes). But it also actually\n> accepts B(bytes) as an unit. So the docs should include \"B(bytes)\" as\n> an unit that the argument of BUFFER_USAGE_LIMIT can accept.\n> \n> You can see the patch in the attached file.\n\nThanks for the patch.\nYou're right. It looks good to me.\n\n-- \nRegards,\nShinya Kato\nNTT DATA GROUP CORPORATION\n\n\n",
"msg_date": "Wed, 20 Sep 2023 09:43:15 +0900",
"msg_from": "Shinya Kato <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix bug in VACUUM and ANALYZE docs"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 09:43:15AM +0900, Shinya Kato wrote:\n> Thanks for the patch.\n> You're right. It looks good to me.\n\nRight, it feels like there has been a lot of copy-paste in this area\nof the docs. Applied down to 16.\n--\nMichael",
"msg_date": "Wed, 20 Sep 2023 13:39:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix bug in VACUUM and ANALYZE docs"
},
{
"msg_contents": "On Wed, 20 Sep 2023 13:39:02 +0900\nMichael Paquier <[email protected]> wrote:\n\n> On Wed, Sep 20, 2023 at 09:43:15AM +0900, Shinya Kato wrote:\n\n> > You're right. It looks good to me. \n> \n> Right, it feels like there has been a lot of copy-paste in this area\n> of the docs. Applied down to 16.\n\nI signed up to review, but I think that perhaps commitfest\nhttps://commitfest.postgresql.org/45/4574/\nneeds marking as applied and done?\n\nRegards,\n\nKarl <[email protected]>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Sun, 24 Sep 2023 18:30:32 -0500",
"msg_from": "\"Karl O. Pinc\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix bug in VACUUM and ANALYZE docs"
},
{
"msg_contents": "On Sun, Sep 24, 2023 at 06:30:32PM -0500, Karl O. Pinc wrote:\n> I signed up to review, but I think that perhaps commitfest\n> https://commitfest.postgresql.org/45/4574/\n> needs marking as applied and done?\n\nIndeed. I did not notice that there was a CF entry for this one.\nClosed it now.\n--\nMichael",
"msg_date": "Mon, 25 Sep 2023 09:00:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix bug in VACUUM and ANALYZE docs"
}
] |
[
{
"msg_contents": "I'm working on an index access method. I have a function which can appear\nin a projection list which should be evaluated by the access method itself.\nExample:\n\nSELECT title, my_special_function(body)\nFROM books\nWHERE book_id <===> 42;\n\n\"<===>\" is the operator that invokes the access method. The value returned\nby my_special_function() gets calculated during the index scan, and depends\non information that exists only in the index.\n\nHow do I get the system to pull the value from the index instead of trying\nto calculate it?\n\nSo far, I have created a CustomScan and set it using set_rel_pathlist_hook.\nThe hook function gives us a PlannerInfo, RelOptInfo, Index, and\nRangeTblEntry. So far as I can tell, only RelOptInfo.reltarget.exprs gives\nus any info on the SELECT expressions, but unfortunately, the exprs are Var\nnodes that contain the (title, body) columns from above, and do not say\nanything about my_special_function().\n\nWhere do I find the actual final projection exprs?\n\nAm I using the right hook?\n\nIs there any example code out there on how to do this?\n\nI know this is possible, because the docs for PathTarget say this:\n\nPathTarget\n *\n * This struct contains what we need to know during planning about the\n * targetlist (output columns) that a Path will compute. Each RelOptInfo\n * includes a default PathTarget, which its individual Paths may simply\n * reference. However, in some cases a Path may compute outputs different\n * from other Paths, and in that case we make a custom PathTarget for it.\n *\n*For example, an indexscan might return index expressions that would *\notherwise need to be explicitly calculated.*\n\n-- \nChris Cleveland\n312-339-2677 mobile\n\nI'm working on an index access method. I have a function which can \nappear in a projection list which should be evaluated by the access \nmethod itself. 
Example:SELECT title, my_special_function(body)FROM booksWHERE book_id <===> 42;\"<===>\"\n is the operator that invokes the access method. The value returned by \nmy_special_function() gets calculated during the index scan, and depends\n on information that exists only in the index.How do I get the system to pull the value from the index instead of trying to calculate it?So\n far, I have created a CustomScan and set it using \nset_rel_pathlist_hook. The hook function gives us a PlannerInfo, \nRelOptInfo, Index, and RangeTblEntry. So far as I can tell, only \nRelOptInfo.reltarget.exprs gives us any info on the SELECT expressions, \nbut unfortunately, the exprs are Var nodes that contain the (title, \nbody) columns from above, and do not say anything about \nmy_special_function().Where do I find the actual final projection exprs?Am I using the right hook?Is there any example code out there on how to do this?I know this is possible, because the docs for PathTarget say this:PathTarget * * This struct contains what we need to know during planning about the * targetlist (output columns) that a Path will compute. Each RelOptInfo * includes a default PathTarget, which its individual Paths may simply * reference. However, in some cases a Path may compute outputs different * from other Paths, and in that case we make a custom PathTarget for it. * For example, an indexscan might return index expressions that would * otherwise need to be explicitly calculated.-- Chris Cleveland312-339-2677 mobile",
"msg_date": "Tue, 19 Sep 2023 09:32:04 -0500",
"msg_from": "Chris Cleveland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Projection pushdown to index access method"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 12:35 PM Chris Cleveland\n<[email protected]> wrote:\n> I'm working on an index access method. I have a function which can appear in a projection list which should be evaluated by the access method itself. Example:\n>\n> SELECT title, my_special_function(body)\n> FROM books\n> WHERE book_id <===> 42;\n>\n> \"<===>\" is the operator that invokes the access method. The value returned by my_special_function() gets calculated during the index scan, and depends on information that exists only in the index.\n>\n> How do I get the system to pull the value from the index instead of trying to calculate it?\n\nI don't see how you can do this in general, because there's no\nguarantee that the plan will be an Index Scan or Index Only Scan\ninstead of a Seq Scan or Bitmap Heap/Index Scan.\n\n> So far, I have created a CustomScan and set it using set_rel_pathlist_hook. The hook function gives us a PlannerInfo, RelOptInfo, Index, and RangeTblEntry. So far as I can tell, only RelOptInfo.reltarget.exprs gives us any info on the SELECT expressions, but unfortunately, the exprs are Var nodes that contain the (title, body) columns from above, and do not say anything about my_special_function().\n\nSo what does the EXPLAIN plan look like?\n\nI'm not quite sure what's happening here, but the planner likes to\nmake plans that just fetch attributes from all the relations being\njoined (here, there's just one) and then perform the calculation of\nany expressions at the very end, as the final step, or at least as the\nfinal step at that subquery level. And if it plans to ask your custom\nscan for title, body, and book_id and then compute\nmy_special_function(body) after the fact, the thing you want to happen\nis not going to happen. If the planner can be induced to ask your\ncustom scan for my_special_function(body), then I *think* you should\nbe able to arrange to get that value any way you like and just return\nit. 
But I don't quite know how to induce the planner to do that -- and\nespecially if this query involved more than one table, because of the\nplanner's tendency to postpone expression evaluation until after joins\nare done.\n\n> I know this is possible, because the docs for PathTarget say this:\n>\n> PathTarget\n> *\n> * This struct contains what we need to know during planning about the\n> * targetlist (output columns) that a Path will compute. Each RelOptInfo\n> * includes a default PathTarget, which its individual Paths may simply\n> * reference. However, in some cases a Path may compute outputs different\n> * from other Paths, and in that case we make a custom PathTarget for it.\n> * For example, an indexscan might return index expressions that would\n> * otherwise need to be explicitly calculated.\n\nIt's just worth keeping in mind that the planner and the executor are\nvery tightly bound together here. This may be one of those cases where\ngetting the executor to do what you want is the easy part, and getting\nthe planner to produce a plan that tells it to do that is the hard\npart.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Sep 2023 13:10:29 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Projection pushdown to index access method"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Sep 19, 2023 at 12:35 PM Chris Cleveland\n> <[email protected]> wrote:\n>> I'm working on an index access method. I have a function which can appear in a projection list which should be evaluated by the access method itself. Example:\n>> ...\n>> How do I get the system to pull the value from the index instead of trying to calculate it?\n\n> I don't see how you can do this in general, because there's no\n> guarantee that the plan will be an Index Scan or Index Only Scan\n> instead of a Seq Scan or Bitmap Heap/Index Scan.\n\nYeah. There is some adjacent functionality for indexed expressions,\nwhich maybe you could use, but it has a lot of shortcomings yet.\nFor example:\n\nregression=# create or replace function f(x int) returns int as $$begin return x+1; end$$ language plpgsql strict immutable cost 1000;\nCREATE FUNCTION\nregression=# create table mytable (id int, x int);\nCREATE TABLE\nregression=# create index on mytable(x, f(x));\nCREATE INDEX\nregression=# set enable_seqscan TO 0;\nSET\nregression=# set enable_bitmapscan TO 0;\nSET\nregression=# explain verbose select f(x) from mytable;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------\n Index Only Scan using mytable_x_f_idx on public.mytable (cost=0.15..5728.06 rows=2260 width=4)\n Output: (f(x))\n(2 rows)\n\nIf you examine the plan tree closely you can confirm that it is pulling\nf(x) from the index rather than recomputing it. So maybe you could get\nsomewhere by pretending that my_special_function(body) is an indexed\nexpression. However, there are a couple of big gotchas, which this\nexample illustrates:\n\n1. The index has to also provide x (or for you, \"body\") or else the\nplanner fails to detect that an IOS is applicable. 
This comes back\nto the point Robert made about the planner preferring to think about\npulling individual Vars from tables: we don't believe the index is\nusable in an IOS unless it provides all the Vars the query needs from\nthat table. This wouldn't be hard to fix exactly; the problem is to\nfix it without spending exponential amounts of planning time in\ncheck_index_only. We'd have to detect that all uses of \"x\" appear in\nthe context \"f(x)\" in order to realize that we don't need to be able\nto fetch \"x\" itself.\n\n2. Costing doesn't account for the fact that we've avoided runtime\ncomputation of f(), thus the IOS plan may not be preferred over\nother plan shapes, which is why I had to force it above. Again,\nthis is pretty closely tied to the fact that we don't recognize\nuntil very late in the game that we can get f(x) from the index.\n\n3. This only works for an index-only scan, not regular index scans.\nThere's some early discussion happening about unifying IOS and\nregular scans a bit more, which perhaps would allow relaxing that\n(and maybe even solve issue #1?). But it's a long way off yet.\n\nIf my_special_function() is supposed to always be applied to an\nindexed column, then issue #1 would fortuitously not be a problem\nfor you. But #2 is a pain, and #3 might be a deal-breaker for you.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Sep 2023 13:46:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Projection pushdown to index access method"
}
] |
[
{
"msg_contents": "Several places bypass the buffer manager and use direct smgrextend() \ncalls to populate a new relation: Index AM build methods, rewriteheap.c \nand RelationCopyStorage(). There's fair amount of duplicated code to \nWAL-log the pages, calculate checksums, call smgrextend(), and finally \ncall smgrimmedsync() if needed. The duplication is tedious and \nerror-prone. For example, if we want to optimize by WAL-logging multiple \npages in one record, that needs to be implemented in each AM separately. \nCurrently only sorted GiST index build does that but it would be equally \nbeneficial in all of those places.\n\nAnd I believe we got the smgrimmedsync() logic slightly wrong in a \nnumber of places [1]. And it's not great for latency, we could let the \ncheckpointer do the fsyncing lazily, like Robert mentioned in the same \nthread.\n\nThe attached patch centralizes that pattern to a new bulk writing \nfacility, and changes all those AMs to use it. The facility buffers 32 \npages and WAL-logs them in record, calculates checksums. You could \nimagine a lot of further optimizations, like writing those 32 pages in \none vectored pvwrite() call [2], and not skipping the buffer manager \nwhen the relation is small. But the scope of this initial version is \nmostly to refactor the existing code.\n\nOne new optimization included here is to let the checkpointer do the \nfsyncing if possible. That gives a big speedup when e.g. restoring a \nschema-only dump with lots of relations.\n\n[1] \nhttps://www.postgresql.org/message-id/58effc10-c160-b4a6-4eb7-384e95e6f9e3%40iki.fi\n\n[2] \nhttps://www.postgresql.org/message-id/CA+hUKGJkOiOCa+mag4BF+zHo7qo=o9CFheB8=g6uT5TUm2gkvA@mail.gmail.com\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Tue, 19 Sep 2023 18:13:47 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Relation bulk write facility"
},
{
"msg_contents": "On 19/09/2023 17:13, Heikki Linnakangas wrote:\n> The attached patch centralizes that pattern to a new bulk writing\n> facility, and changes all those AMs to use it.\n\nHere's a new rebased version of the patch.\n\nThis includes fixes to the pageinspect regression test. They were \nexplained in the commit message, but I forgot to include the actual test \nchanges. That should fix the cfbot failures.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Fri, 17 Nov 2023 11:37:21 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-17 11:37:21 +0100, Heikki Linnakangas wrote:\n> The new facility makes it easier to optimize bulk loading, as the\n> logic for buffering, WAL-logging, and syncing the relation only needs\n> to be implemented once. It's also less error-prone: We have had a\n> number of bugs in how a relation is fsync'd - or not - at the end of a\n> bulk loading operation. By centralizing that logic to one place, we\n> only need to write it correctly once.\n\nOne thing I'd like to use the centralized handling for is to track such\nwrites in pg_stat_io. I don't mean as part of the initial patch, just that\nthat's another reason I like the facility.\n\n\n> The new facility is faster for small relations: Instead of of calling\n> smgrimmedsync(), we register the fsync to happen at next checkpoint,\n> which avoids the fsync latency. That can make a big difference if you\n> are e.g. restoring a schema-only dump with lots of relations.\n\nI think this would be a huge win for people running their application tests\nagainst postgres.\n\n\n> +\tbulkw = bulkw_start_smgr(dst, forkNum, use_wal);\n> +\n> \tnblocks = smgrnblocks(src, forkNum);\n> \n> \tfor (blkno = 0; blkno < nblocks; blkno++)\n> \t{\n> +\t\tPage\t\tpage;\n> +\n> \t\t/* If we got a cancel signal during the copy of the data, quit */\n> \t\tCHECK_FOR_INTERRUPTS();\n> \n> -\t\tsmgrread(src, forkNum, blkno, buf.data);\n> +\t\tpage = bulkw_alloc_buf(bulkw);\n> +\t\tsmgrread(src, forkNum, blkno, page);\n> \n> \t\tif (!PageIsVerifiedExtended(page, blkno,\n> \t\t\t\t\t\t\t\t\tPIV_LOG_WARNING | PIV_REPORT_STAT))\n> @@ -511,30 +514,9 @@ RelationCopyStorage(SMgrRelation src, SMgrRelation dst,\n> \t\t * page this is, so we have to log the full page including any unused\n> \t\t * space.\n> \t\t */\n> -\t\tif (use_wal)\n> -\t\t\tlog_newpage(&dst->smgr_rlocator.locator, forkNum, blkno, page, false);\n> -\n> -\t\tPageSetChecksumInplace(page, blkno);\n> -\n> -\t\t/*\n> -\t\t * Now write the page. 
We say skipFsync = true because there's no\n> -\t\t * need for smgr to schedule an fsync for this write; we'll do it\n> -\t\t * ourselves below.\n> -\t\t */\n> -\t\tsmgrextend(dst, forkNum, blkno, buf.data, true);\n> +\t\tbulkw_write(bulkw, blkno, page, false);\n\nI wonder if bulkw_alloc_buf() is a good name - if you naively read this\nchange, it looks like it'll just leak memory. It also might be taken to be\nvalid until freed, which I don't think is the case?\n\n\n> +/*-------------------------------------------------------------------------\n> + *\n> + * bulk_write.c\n> + *\t Efficiently and reliably populate a new relation\n> + *\n> + * The assumption is that no other backends access the relation while we are\n> + * loading it, so we can take some shortcuts. Alternatively, you can use the\n> + * buffer manager as usual, if performance is not critical, but you must not\n> + * mix operations through the buffer manager and the bulk loading interface at\n> + * the same time.\n\n From \"Alternatively\" onward this is is somewhat confusing.\n\n\n> + * We bypass the buffer manager to avoid the locking overhead, and call\n> + * smgrextend() directly. A downside is that the pages will need to be\n> + * re-read into shared buffers on first use after the build finishes. That's\n> + * usually a good tradeoff for large relations, and for small relations, the\n> + * overhead isn't very significant compared to creating the relation in the\n> + * first place.\n\nFWIW, I doubt the \"isn't very significant\" bit is really true.\n\n\n> + * One tricky point is that because we bypass the buffer manager, we need to\n> + * register the relation for fsyncing at the next checkpoint ourselves, and\n> + * make sure that the relation is correctly fsync by us or the checkpointer\n> + * even if a checkpoint happens concurrently.\n\n\"fsync'ed\" or such? 
Otherwise this reads awkwardly for me.\n\n\n\n> + *\n> + *\n> + * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group\n> + * Portions Copyright (c) 1994, Regents of the University of California\n> + *\n> + *\n> + * IDENTIFICATION\n> + *\t src/backend/storage/smgr/bulk_write.c\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> +#include \"postgres.h\"\n> +\n> +#include \"access/xloginsert.h\"\n> +#include \"access/xlogrecord.h\"\n> +#include \"storage/bufmgr.h\"\n> +#include \"storage/bufpage.h\"\n> +#include \"storage/bulk_write.h\"\n> +#include \"storage/proc.h\"\n> +#include \"storage/smgr.h\"\n> +#include \"utils/rel.h\"\n> +\n> +#define MAX_BUFFERED_PAGES XLR_MAX_BLOCK_ID\n> +\n> +typedef struct BulkWriteBuffer\n> +{\n> +\tPage\t\tpage;\n> +\tBlockNumber blkno;\n> +\tbool\t\tpage_std;\n> +\tint16\t\torder;\n> +} BulkWriteBuffer;\n> +\n\nThe name makes it sound like this struct itself contains a buffer - but it's\njust pointing to one. *BufferRef or such maybe?\n\nI was wondering how you dealt with the alignment of buffers given the struct\ndefinition, which is what lead me to look at this...\n\n\n> +/*\n> + * Bulk writer state for one relation fork.\n> + */\n> +typedef struct BulkWriteState\n> +{\n> +\t/* Information about the target relation we're writing */\n> +\tSMgrRelation smgr;\n\nIsn't there a danger of this becoming a dangling pointer? 
At least until\nhttps://postgr.es/m/CA%2BhUKGJ8NTvqLHz6dqbQnt2c8XCki4r2QvXjBQcXpVwxTY_pvA%40mail.gmail.com\nis merged?\n\n\n> +\tForkNumber\tforknum;\n> +\tbool\t\tuse_wal;\n> +\n> +\t/* We keep several pages buffered, and WAL-log them in batches */\n> +\tint\t\t\tnbuffered;\n> +\tBulkWriteBuffer buffers[MAX_BUFFERED_PAGES];\n> +\n> +\t/* Current size of the relation */\n> +\tBlockNumber pages_written;\n> +\n> +\t/* The RedoRecPtr at the time that the bulk operation started */\n> +\tXLogRecPtr\tstart_RedoRecPtr;\n> +\n> +\tPage\t\tzeropage;\t\t/* workspace for filling zeroes */\n\nWe really should just have one such page in shared memory somewhere... For WAL\nwrites as well.\n\nBut until then - why do you allocate the page? Seems like we could just use a\nstatic global variable?\n\n\n> +/*\n> + * Write all buffered pages to disk.\n> + */\n> +static void\n> +bulkw_flush(BulkWriteState *bulkw)\n> +{\n> +\tint\t\t\tnbuffered = bulkw->nbuffered;\n> +\tBulkWriteBuffer *buffers = bulkw->buffers;\n> +\n> +\tif (nbuffered == 0)\n> +\t\treturn;\n> +\n> +\tif (nbuffered > 1)\n> +\t{\n> +\t\tint\t\t\to;\n> +\n> +\t\tqsort(buffers, nbuffered, sizeof(BulkWriteBuffer), buffer_cmp);\n> +\n> +\t\t/*\n> +\t\t * Eliminate duplicates, keeping the last write of each block.\n> +\t\t * (buffer_cmp uses 'order' as the last sort key)\n> +\t\t */\n\nHuh - which use cases would actually cause duplicate writes?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Nov 2023 16:04:16 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On 19/11/2023 02:04, Andres Freund wrote:\n> On 2023-11-17 11:37:21 +0100, Heikki Linnakangas wrote:\n>> The new facility makes it easier to optimize bulk loading, as the\n>> logic for buffering, WAL-logging, and syncing the relation only needs\n>> to be implemented once. It's also less error-prone: We have had a\n>> number of bugs in how a relation is fsync'd - or not - at the end of a\n>> bulk loading operation. By centralizing that logic to one place, we\n>> only need to write it correctly once.\n> \n> One thing I'd like to use the centralized handling for is to track such\n> writes in pg_stat_io. I don't mean as part of the initial patch, just that\n> that's another reason I like the facility.\n\nOh I didn't realize they're not counted at the moment.\n\n>> +\tbulkw = bulkw_start_smgr(dst, forkNum, use_wal);\n>> +\n>> \tnblocks = smgrnblocks(src, forkNum);\n>> \n>> \tfor (blkno = 0; blkno < nblocks; blkno++)\n>> \t{\n>> +\t\tPage\t\tpage;\n>> +\n>> \t\t/* If we got a cancel signal during the copy of the data, quit */\n>> \t\tCHECK_FOR_INTERRUPTS();\n>> \n>> -\t\tsmgrread(src, forkNum, blkno, buf.data);\n>> +\t\tpage = bulkw_alloc_buf(bulkw);\n>> +\t\tsmgrread(src, forkNum, blkno, page);\n>> \n>> \t\tif (!PageIsVerifiedExtended(page, blkno,\n>> \t\t\t\t\t\t\t\t\tPIV_LOG_WARNING | PIV_REPORT_STAT))\n>> @@ -511,30 +514,9 @@ RelationCopyStorage(SMgrRelation src, SMgrRelation dst,\n>> \t\t * page this is, so we have to log the full page including any unused\n>> \t\t * space.\n>> \t\t */\n>> -\t\tif (use_wal)\n>> -\t\t\tlog_newpage(&dst->smgr_rlocator.locator, forkNum, blkno, page, false);\n>> -\n>> -\t\tPageSetChecksumInplace(page, blkno);\n>> -\n>> -\t\t/*\n>> -\t\t * Now write the page. 
We say skipFsync = true because there's no\n>> -\t\t * need for smgr to schedule an fsync for this write; we'll do it\n>> -\t\t * ourselves below.\n>> -\t\t */\n>> -\t\tsmgrextend(dst, forkNum, blkno, buf.data, true);\n>> +\t\tbulkw_write(bulkw, blkno, page, false);\n> \n> I wonder if bulkw_alloc_buf() is a good name - if you naively read this\n> change, it looks like it'll just leak memory. It also might be taken to be\n> valid until freed, which I don't think is the case?\n\nYeah, I'm not very happy with this interface. The model is that you get \na buffer to write to by calling bulkw_alloc_buf(). Later, you hand it \nover to bulkw_write(), which takes ownership of it and frees it later. \nThere is no other function to free it, although currently the buffer is \njust palloc'd so you could call pfree on it.\n\nHowever, I'd like to not expose that detail to the callers. I'm \nimagining that in the future we might optimize further, by having a \nlarger e.g. 1 MB buffer, and carve the 8kB blocks from that. Then \nopportunistically, if you fill the buffers sequentially, bulk_write.c \ncould do one smgrextend() to write the whole 1 MB chunk.\n\nI renamed it to bulkw_get_buf() now, and made it return a new \nBulkWriteBuffer typedef instead of a plain Page. The point of the new \ntypedef is to distinguish a buffer returned by bulkw_get_buf() from a \nPage or char[BLCKSZ] that you might palloc on your own. That indeed \nrevealed some latent bugs in gistbuild.c where I had mixed up buffers \nreturned by bulkw_alloc_buf() and palloc'd buffers.\n\n(The previous version of this patch called a different struct \nBulkWriteBuffer, but I renamed that to PendingWrite; see below. Don't be \nconfused!)\n\nI think this helps a little, but I'm still not very happy with it. 
I'll \ngive it some more thought after sleeping over it, but in the meanwhile, \nI'm all ears for suggestions.\n\n>> +/*-------------------------------------------------------------------------\n>> + *\n>> + * bulk_write.c\n>> + *\t Efficiently and reliably populate a new relation\n>> + *\n>> + * The assumption is that no other backends access the relation while we are\n>> + * loading it, so we can take some shortcuts. Alternatively, you can use the\n>> + * buffer manager as usual, if performance is not critical, but you must not\n>> + * mix operations through the buffer manager and the bulk loading interface at\n>> + * the same time.\n> \n> From \"Alternatively\" onward this is is somewhat confusing.\n\nRewrote that to just \"Do not mix operations through the regular buffer \nmanager and the bulk loading interface!\"\n\n>> + * One tricky point is that because we bypass the buffer manager, we need to\n>> + * register the relation for fsyncing at the next checkpoint ourselves, and\n>> + * make sure that the relation is correctly fsync by us or the checkpointer\n>> + * even if a checkpoint happens concurrently.\n> \n> \"fsync'ed\" or such? Otherwise this reads awkwardly for me.\n\nYep, fixed.\n\n>> +typedef struct BulkWriteBuffer\n>> +{\n>> +\tPage\t\tpage;\n>> +\tBlockNumber blkno;\n>> +\tbool\t\tpage_std;\n>> +\tint16\t\torder;\n>> +} BulkWriteBuffer;\n>> +\n> \n> The name makes it sound like this struct itself contains a buffer - but it's\n> just pointing to one. *BufferRef or such maybe?\n> \n> I was wondering how you dealt with the alignment of buffers given the struct\n> definition, which is what lead me to look at this...\n\nI renamed this to PendingWrite, and the field that holds these \n\"pending_writes\". 
Think of it as a queue of writes that haven't been \nperformed yet.\n\n>> +/*\n>> + * Bulk writer state for one relation fork.\n>> + */\n>> +typedef struct BulkWriteState\n>> +{\n>> +\t/* Information about the target relation we're writing */\n>> +\tSMgrRelation smgr;\n> \n> Isn't there a danger of this becoming a dangling pointer? At least until\n> https://postgr.es/m/CA%2BhUKGJ8NTvqLHz6dqbQnt2c8XCki4r2QvXjBQcXpVwxTY_pvA%40mail.gmail.com\n> is merged?\n\nGood point. I just added a FIXME comment to remind about that, hoping \nthat that patch gets merged soon. If not, I'll come up with a different fix.\n\n>> +\tForkNumber\tforknum;\n>> +\tbool\t\tuse_wal;\n>> +\n>> +\t/* We keep several pages buffered, and WAL-log them in batches */\n>> +\tint\t\t\tnbuffered;\n>> +\tBulkWriteBuffer buffers[MAX_BUFFERED_PAGES];\n>> +\n>> +\t/* Current size of the relation */\n>> +\tBlockNumber pages_written;\n>> +\n>> +\t/* The RedoRecPtr at the time that the bulk operation started */\n>> +\tXLogRecPtr\tstart_RedoRecPtr;\n>> +\n>> +\tPage\t\tzeropage;\t\t/* workspace for filling zeroes */\n> \n> We really should just have one such page in shared memory somewhere... For WAL\n> writes as well.\n> \n> But until then - why do you allocate the page? Seems like we could just use a\n> static global variable?\n\nI made it a static global variable for now. 
(The palloc way was copied \nover from nbtsort.c)\n\n>> +/*\n>> + * Write all buffered pages to disk.\n>> + */\n>> +static void\n>> +bulkw_flush(BulkWriteState *bulkw)\n>> +{\n>> +\tint\t\t\tnbuffered = bulkw->nbuffered;\n>> +\tBulkWriteBuffer *buffers = bulkw->buffers;\n>> +\n>> +\tif (nbuffered == 0)\n>> +\t\treturn;\n>> +\n>> +\tif (nbuffered > 1)\n>> +\t{\n>> +\t\tint\t\t\to;\n>> +\n>> +\t\tqsort(buffers, nbuffered, sizeof(BulkWriteBuffer), buffer_cmp);\n>> +\n>> +\t\t/*\n>> +\t\t * Eliminate duplicates, keeping the last write of each block.\n>> +\t\t * (buffer_cmp uses 'order' as the last sort key)\n>> +\t\t */\n> \n> Huh - which use cases would actually cause duplicate writes?\n\nHmm, nothing anymore I guess. Many AMs used to write zero pages as a \nplaceholder and come back to fill them in later, but bulk_write.c now \nhandles that itself.\n\nRemoved that, and replaced it with an assertion in buffer_cmp() \nthat there are no duplicates.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Sat, 25 Nov 2023 01:19:16 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Melanie just reminded me about an older thread about this same thing:\nhttps://www.postgresql.org/message-id/CAAKRu_ZQEpk6Q1WtNLgfXBdCmdU5xN_w0boVO6faO_Ax%2Bckjig%40mail.gmail.com. \nI had completely forgotten about that.\n\nMelanie's patches in that thread implemented the same optimization of \navoiding the fsync() if no checkpoint has happened during the index \nbuild. My patch here also implements batching the WAL records of \nmultiple blocks, which was not part of those older patches. OTOH, those \npatches included an additional optimization of not bypassing the shared \nbuffer cache if the index is small. That seems sensible too.\n\nIn this new patch, I subconsciously implemented an API close to what I \nsuggested at the end of that old thread.\n\nSo I'd like to continue this effort based on this new patch. We can add \nthe bypass-buffer-cache optimization later on top of this. With the new \nAPI that this introduces, it should be an isolated change to the \nimplementation, with no changes required to the callers.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 14 Dec 2023 14:02:01 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Sat, 25 Nov 2023 at 06:49, Heikki Linnakangas <[email protected]> wrote:\n>\n> On 19/11/2023 02:04, Andres Freund wrote:\n> > On 2023-11-17 11:37:21 +0100, Heikki Linnakangas wrote:\n> >> The new facility makes it easier to optimize bulk loading, as the\n> >> logic for buffering, WAL-logging, and syncing the relation only needs\n> >> to be implemented once. It's also less error-prone: We have had a\n> >> number of bugs in how a relation is fsync'd - or not - at the end of a\n> >> bulk loading operation. By centralizing that logic to one place, we\n> >> only need to write it correctly once.\n> >\n> > One thing I'd like to use the centralized handling for is to track such\n> > writes in pg_stat_io. I don't mean as part of the initial patch, just that\n> > that's another reason I like the facility.\n>\n> Oh I didn't realize they're not counted at the moment.\n>\n> >> + bulkw = bulkw_start_smgr(dst, forkNum, use_wal);\n> >> +\n> >> nblocks = smgrnblocks(src, forkNum);\n> >>\n> >> for (blkno = 0; blkno < nblocks; blkno++)\n> >> {\n> >> + Page page;\n> >> +\n> >> /* If we got a cancel signal during the copy of the data, quit */\n> >> CHECK_FOR_INTERRUPTS();\n> >>\n> >> - smgrread(src, forkNum, blkno, buf.data);\n> >> + page = bulkw_alloc_buf(bulkw);\n> >> + smgrread(src, forkNum, blkno, page);\n> >>\n> >> if (!PageIsVerifiedExtended(page, blkno,\n> >> PIV_LOG_WARNING | PIV_REPORT_STAT))\n> >> @@ -511,30 +514,9 @@ RelationCopyStorage(SMgrRelation src, SMgrRelation dst,\n> >> * page this is, so we have to log the full page including any unused\n> >> * space.\n> >> */\n> >> - if (use_wal)\n> >> - log_newpage(&dst->smgr_rlocator.locator, forkNum, blkno, page, false);\n> >> -\n> >> - PageSetChecksumInplace(page, blkno);\n> >> -\n> >> - /*\n> >> - * Now write the page. 
We say skipFsync = true because there's no\n> >> - * need for smgr to schedule an fsync for this write; we'll do it\n> >> - * ourselves below.\n> >> - */\n> >> - smgrextend(dst, forkNum, blkno, buf.data, true);\n> >> + bulkw_write(bulkw, blkno, page, false);\n> >\n> > I wonder if bulkw_alloc_buf() is a good name - if you naively read this\n> > change, it looks like it'll just leak memory. It also might be taken to be\n> > valid until freed, which I don't think is the case?\n>\n> Yeah, I'm not very happy with this interface. The model is that you get\n> a buffer to write to by calling bulkw_alloc_buf(). Later, you hand it\n> over to bulkw_write(), which takes ownership of it and frees it later.\n> There is no other function to free it, although currently the buffer is\n> just palloc'd so you could call pfree on it.\n>\n> However, I'd like to not expose that detail to the callers. I'm\n> imagining that in the future we might optimize further, by having a\n> larger e.g. 1 MB buffer, and carve the 8kB blocks from that. Then\n> opportunistically, if you fill the buffers sequentially, bulk_write.c\n> could do one smgrextend() to write the whole 1 MB chunk.\n>\n> I renamed it to bulkw_get_buf() now, and made it return a new\n> BulkWriteBuffer typedef instead of a plain Page. The point of the new\n> typedef is to distinguish a buffer returned by bulkw_get_buf() from a\n> Page or char[BLCKSZ] that you might palloc on your own. That indeed\n> revealed some latent bugs in gistbuild.c where I had mixed up buffers\n> returned by bulkw_alloc_buf() and palloc'd buffers.\n>\n> (The previous version of this patch called a different struct\n> BulkWriteBuffer, but I renamed that to PendingWrite; see below. Don't be\n> confused!)\n>\n> I think this helps a little, but I'm still not very happy with it. 
I'll\n> give it some more thought after sleeping over it, but in the meanwhile,\n> I'm all ears for suggestions.\n>\n> >> +/*-------------------------------------------------------------------------\n> >> + *\n> >> + * bulk_write.c\n> >> + * Efficiently and reliably populate a new relation\n> >> + *\n> >> + * The assumption is that no other backends access the relation while we are\n> >> + * loading it, so we can take some shortcuts. Alternatively, you can use the\n> >> + * buffer manager as usual, if performance is not critical, but you must not\n> >> + * mix operations through the buffer manager and the bulk loading interface at\n> >> + * the same time.\n> >\n> > From \"Alternatively\" onward this is is somewhat confusing.\n>\n> Rewrote that to just \"Do not mix operations through the regular buffer\n> manager and the bulk loading interface!\"\n>\n> >> + * One tricky point is that because we bypass the buffer manager, we need to\n> >> + * register the relation for fsyncing at the next checkpoint ourselves, and\n> >> + * make sure that the relation is correctly fsync by us or the checkpointer\n> >> + * even if a checkpoint happens concurrently.\n> >\n> > \"fsync'ed\" or such? Otherwise this reads awkwardly for me.\n>\n> Yep, fixed.\n>\n> >> +typedef struct BulkWriteBuffer\n> >> +{\n> >> + Page page;\n> >> + BlockNumber blkno;\n> >> + bool page_std;\n> >> + int16 order;\n> >> +} BulkWriteBuffer;\n> >> +\n> >\n> > The name makes it sound like this struct itself contains a buffer - but it's\n> > just pointing to one. *BufferRef or such maybe?\n> >\n> > I was wondering how you dealt with the alignment of buffers given the struct\n> > definition, which is what lead me to look at this...\n>\n> I renamed this to PendingWrite, and the field that holds these\n> \"pending_writes\". 
Think of it as a queue of writes that haven't been\n> performed yet.\n>\n> >> +/*\n> >> + * Bulk writer state for one relation fork.\n> >> + */\n> >> +typedef struct BulkWriteState\n> >> +{\n> >> + /* Information about the target relation we're writing */\n> >> + SMgrRelation smgr;\n> >\n> > Isn't there a danger of this becoming a dangling pointer? At least until\n> > https://postgr.es/m/CA%2BhUKGJ8NTvqLHz6dqbQnt2c8XCki4r2QvXjBQcXpVwxTY_pvA%40mail.gmail.com\n> > is merged?\n>\n> Good point. I just added a FIXME comment to remind about that, hoping\n> that that patch gets merged soon. If not, I'll come up with a different fix.\n>\n> >> + ForkNumber forknum;\n> >> + bool use_wal;\n> >> +\n> >> + /* We keep several pages buffered, and WAL-log them in batches */\n> >> + int nbuffered;\n> >> + BulkWriteBuffer buffers[MAX_BUFFERED_PAGES];\n> >> +\n> >> + /* Current size of the relation */\n> >> + BlockNumber pages_written;\n> >> +\n> >> + /* The RedoRecPtr at the time that the bulk operation started */\n> >> + XLogRecPtr start_RedoRecPtr;\n> >> +\n> >> + Page zeropage; /* workspace for filling zeroes */\n> >\n> > We really should just have one such page in shared memory somewhere... For WAL\n> > writes as well.\n> >\n> > But until then - why do you allocate the page? Seems like we could just use a\n> > static global variable?\n>\n> I made it a static global variable for now. 
(The palloc way was copied\n> over from nbtsort.c)\n>\n> >> +/*\n> >> + * Write all buffered pages to disk.\n> >> + */\n> >> +static void\n> >> +bulkw_flush(BulkWriteState *bulkw)\n> >> +{\n> >> + int nbuffered = bulkw->nbuffered;\n> >> + BulkWriteBuffer *buffers = bulkw->buffers;\n> >> +\n> >> + if (nbuffered == 0)\n> >> + return;\n> >> +\n> >> + if (nbuffered > 1)\n> >> + {\n> >> + int o;\n> >> +\n> >> + qsort(buffers, nbuffered, sizeof(BulkWriteBuffer), buffer_cmp);\n> >> +\n> >> + /*\n> >> + * Eliminate duplicates, keeping the last write of each block.\n> >> + * (buffer_cmp uses 'order' as the last sort key)\n> >> + */\n> >\n> > Huh - which use cases would actually cause duplicate writes?\n>\n> Hmm, nothing anymore I guess. Many AMs used to write zero pages as a\n> placeholder and come back to fill them in later, but now that\n> bulk_write.c handles that,\n>\n> Removed that, and replaced it with with an assertion in buffer_cmp()\n> that there are no duplicates.\n\nThere are few compilation errors reported by CFBot at [1], patch needs\nto be rebased:\n[02:38:12.675] In file included from ../../../../src/include/postgres.h:45,\n[02:38:12.675] from nbtsort.c:41:\n[02:38:12.675] nbtsort.c: In function ‘_bt_load’:\n[02:38:12.675] nbtsort.c:1309:57: error: ‘BTPageState’ has no member\nnamed ‘btps_page’\n[02:38:12.675] 1309 | Assert(dstate->maxpostingsize <=\nBTMaxItemSize(state->btps_page) &&\n[02:38:12.675] | ^~\n[02:38:12.675] ../../../../src/include/c.h:864:9: note: in definition\nof macro ‘Assert’\n[02:38:12.675] 864 | if (!(condition)) \\\n[02:38:12.675] | ^~~~~~~~~\n[02:38:12.675] ../../../../src/include/c.h:812:29: note: in expansion\nof macro ‘TYPEALIGN_DOWN’\n[02:38:12.675] 812 | #define MAXALIGN_DOWN(LEN)\nTYPEALIGN_DOWN(MAXIMUM_ALIGNOF, (LEN))\n[02:38:12.675] | ^~~~~~~~~~~~~~\n[02:38:12.675] ../../../../src/include/access/nbtree.h:165:3: note: in\nexpansion of macro ‘MAXALIGN_DOWN’\n[02:38:12.675] 165 | (MAXALIGN_DOWN((PageGetPageSize(page) - \\\n\n[1] - 
https://cirrus-ci.com/task/5299954164432896\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 9 Jan 2024 12:20:45 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On 09/01/2024 08:50, vignesh C wrote:\n> There are few compilation errors reported by CFBot at [1], patch needs\n> to be rebased:\n\nHere you go.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 10 Jan 2024 11:26:36 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Fri, Nov 24, 2023 at 10:22 PM Heikki Linnakangas <[email protected]> wrote:\n> Yeah, I'm not very happy with this interface. The model is that you get\n> a buffer to write to by calling bulkw_alloc_buf(). Later, you hand it\n> over to bulkw_write(), which takes ownership of it and frees it later.\n> There is no other function to free it, although currently the buffer is\n> just palloc'd so you could call pfree on it.\n\nI think we should try to pick prefixes that are one or more words\nrather than using word fragments. bulkw is an awkward prefix even for\npeople whose first language is English, and probably more awkward for\nothers.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Jan 2024 11:17:22 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere was a CFbot test failure last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4575/\n[2] https://cirrus-ci.com/task/4990764426461184\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 16:50:13 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On 10/01/2024 18:17, Robert Haas wrote:\n> I think we should try to pick prefixes that are one or more words\n> rather than using word fragments. bulkw is an awkward prefix even for\n> people whose first language is English, and probably more awkward for\n> others.\n\nRenamed the 'bulkw' variables to 'bulkstate', and the functions to have \nthe smgr_bulk_* prefix.\n\nI was tempted to use just bulk_* as the prefix, but I'm afraid e.g. \nbulk_write() is too generic.\n\nOn 22/01/2024 07:50, Peter Smith wrote:\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there was a CFbot test failure last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n\nFixed the headerscheck failure by adding appropriate #includes.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 25 Jan 2024 22:07:04 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Committed this. Thanks everyone!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 23 Feb 2024 16:27:34 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On 23/02/2024 16:27, Heikki Linnakangas wrote:\n> Committed this. Thanks everyone!\n\nBuildfarm animals 'sifaka' and 'longfin' are not happy, I will investigate..\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 23 Feb 2024 17:12:30 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Buildfarm animals 'sifaka' and 'longfin' are not happy, I will investigate..\n\nThose are mine, let me know if you need local investigation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Feb 2024 10:15:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On 23/02/2024 17:15, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> Buildfarm animals 'sifaka' and 'longfin' are not happy, I will investigate..\n> \n> Those are mine, let me know if you need local investigation.\n\nThanks, the error message was clear enough:\n\n> bulk_write.c:78:3: error: redefinition of typedef 'BulkWriteState' is a C11 feature [-Werror,-Wtypedef-redefinition]\n> } BulkWriteState;\n> ^\n> ../../../../src/include/storage/bulk_write.h:20:31: note: previous definition is here\n> typedef struct BulkWriteState BulkWriteState;\n> ^\n> 1 error generated.\n\nFixed now, but I'm a bit surprised that neither other buildfarm members nor cirrus CI \ncaught that. I also tried to reproduce it locally by adding \n-Wtypedef-redefinition, but my version of clang didn't produce any \nwarnings. Are there any extra compiler options on those animals or \nsomething?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 23 Feb 2024 17:43:28 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Thanks, the error message was clear enough:\n>> bulk_write.c:78:3: error: redefinition of typedef 'BulkWriteState' is a C11 feature [-Werror,-Wtypedef-redefinition]\n>> } BulkWriteState;\n\n> Fixed now, but I'm a bit surprised other buildfarm members nor cirrus CI \n> caught that. I also tried to reproduce it locally by adding \n> -Wtypedef-redefinition, but my version of clang didn't produce any \n> warnings. Are there any extra compiler options on those animals or \n> something?\n\nThey use Apple's standard compiler (clang 15 or so), but\n\n 'CC' => 'ccache clang -std=gnu99',\n\nso maybe the -std has something to do with it. I installed that\n(or -std=gnu90 as appropriate to branch) on most of my build\nsetups back when we started the C99 push.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Feb 2024 11:06:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 04:27:34PM +0200, Heikki Linnakangas wrote:\n> Committed this. Thanks everyone!\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2024-02-24%2015%3A13%3A14 got:\nTRAP: failed Assert(\"(uintptr_t) buffer == TYPEALIGN(PG_IO_ALIGN_SIZE, buffer)\"), File: \"md.c\", Line: 472, PID: 43188608\n\nwith this stack trace:\n#5 0x10005cf0 in ExceptionalCondition (conditionName=0x1015d790 <XLogBeginInsert+80> \"`\", fileName=0x0, lineNumber=16780064) at assert.c:66\n#6 0x102daba8 in mdextend (reln=0x1042628c <PageSetChecksumInplace+44>, forknum=812540744, blocknum=33, buffer=0x306e6000, skipFsync=812539904) at md.c:472\n#7 0x102d6760 in smgrextend (reln=0x306e6670, forknum=812540744, blocknum=33, buffer=0x306e6000, skipFsync=812539904) at smgr.c:541\n#8 0x104c8dac in smgr_bulk_flush (bulkstate=0x306e6000) at bulk_write.c:245\n#9 0x107baf24 in _bt_blwritepage (wstate=0x100d0a14 <datum_image_eq@AF65_7+404>, buf=0x304f13b0, blkno=807631240) at nbtsort.c:638\n#10 0x107bccd8 in _bt_buildadd (wstate=0x104c9384 <smgr_bulk_start_rel+132>, state=0x304eb190, itup=0xe10, truncextra=805686672) at nbtsort.c:984\n#11 0x107bc86c in _bt_sort_dedup_finish_pending (wstate=0x3b6, state=0x19, dstate=0x3) at nbtsort.c:1036\n#12 0x107bc188 in _bt_load (wstate=0x10, btspool=0x0, btspool2=0x0) at nbtsort.c:1331\n#13 0x107bd4ec in _bt_leafbuild (btspool=0x101589fc <ProcessInvalidationMessages+188>, btspool2=0x0) at nbtsort.c:571\n#14 0x107be028 in btbuild (heap=0x304d2a00, index=0x4e1f, indexInfo=0x3) at nbtsort.c:329\n#15 0x1013538c in index_build (heapRelation=0x2, indexRelation=0x10bdc518 <getopt_long+2464664>, indexInfo=0x2, isreindex=10, parallel=false) at index.c:3047\n#16 0x101389e0 in index_create (heapRelation=0x1001aac0 <palloc+192>, indexRelationName=0x20 <error: Cannot access memory at address 0x20>, indexRelationId=804393376, parentIndexRelid=805686672,\n parentConstraintId=268544704, relFileNumber=805309688, 
indexInfo=0x3009a328, indexColNames=0x30237a20, accessMethodId=403, tableSpaceId=0, collationIds=0x304d29d8, opclassIds=0x304d29f8,\n opclassOptions=0x304d2a18, coloptions=0x304d2a38, reloptions=0, flags=0, constr_flags=0, allow_system_table_mods=false, is_internal=false, constraintId=0x2ff211b4) at index.c:1260\n#17 0x1050342c in DefineIndex (tableId=19994, stmt=0x2ff21370, indexRelationId=0, parentIndexId=0, parentConstraintId=0, total_parts=0, is_alter_table=false, check_rights=true, check_not_in_use=true,\n skip_build=false, quiet=false) at indexcmds.c:1204\n#18 0x104b4474 in ProcessUtilitySlow (pstate=<error reading variable>, pstmt=0x3009a408, queryString=0x30099730 \"CREATE INDEX dupindexcols_i ON dupindexcols (f1, id, f1 text_pattern_ops);\",\n\nIf there are other ways I should poke at it, let me know.\n\n\n\n",
"msg_date": "Sat, 24 Feb 2024 09:23:45 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 6:24 AM Noah Misch <[email protected]> wrote:\n> On Fri, Feb 23, 2024 at 04:27:34PM +0200, Heikki Linnakangas wrote:\n> > Committed this. Thanks everyone!\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2024-02-24%2015%3A13%3A14 got:\n> TRAP: failed Assert(\"(uintptr_t) buffer == TYPEALIGN(PG_IO_ALIGN_SIZE, buffer)\"), File: \"md.c\", Line: 472, PID: 43188608\n>\n> with this stack trace:\n> #5 0x10005cf0 in ExceptionalCondition (conditionName=0x1015d790 <XLogBeginInsert+80> \"`\", fileName=0x0, lineNumber=16780064) at assert.c:66\n> #6 0x102daba8 in mdextend (reln=0x1042628c <PageSetChecksumInplace+44>, forknum=812540744, blocknum=33, buffer=0x306e6000, skipFsync=812539904) at md.c:472\n> #7 0x102d6760 in smgrextend (reln=0x306e6670, forknum=812540744, blocknum=33, buffer=0x306e6000, skipFsync=812539904) at smgr.c:541\n> #8 0x104c8dac in smgr_bulk_flush (bulkstate=0x306e6000) at bulk_write.c:245\n\nSo that's:\n\nstatic const PGIOAlignedBlock zero_buffer = {{0}}; /* worth BLCKSZ */\n\n...\n smgrextend(bulkstate->smgr, bulkstate->forknum,\n bulkstate->pages_written++,\n &zero_buffer,\n true);\n\n... where PGIOAlignedBlock is:\n\ntypedef union PGIOAlignedBlock\n{\n#ifdef pg_attribute_aligned\n pg_attribute_aligned(PG_IO_ALIGN_SIZE)\n#endif\n char data[BLCKSZ];\n...\n\nWe see this happen with both xlc and gcc (new enough to know how to do\nthis). One idea would be that the AIX *linker* is unable to align it,\nas that is the common tool-chain component here (and unlike stack and\nheap objects, this scope is the linker's job). There is a\npre-existing example of a zero-buffer that is at file scope like that:\npg_prewarm.c. Perhaps it doesn't get tested?\n\nHmm.\n\n\n",
"msg_date": "Sun, 25 Feb 2024 07:52:16 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 07:52:16AM +1300, Thomas Munro wrote:\n> On Sun, Feb 25, 2024 at 6:24 AM Noah Misch <[email protected]> wrote:\n> > On Fri, Feb 23, 2024 at 04:27:34PM +0200, Heikki Linnakangas wrote:\n> > > Committed this. Thanks everyone!\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2024-02-24%2015%3A13%3A14 got:\n> > TRAP: failed Assert(\"(uintptr_t) buffer == TYPEALIGN(PG_IO_ALIGN_SIZE, buffer)\"), File: \"md.c\", Line: 472, PID: 43188608\n> >\n> > with this stack trace:\n> > #5 0x10005cf0 in ExceptionalCondition (conditionName=0x1015d790 <XLogBeginInsert+80> \"`\", fileName=0x0, lineNumber=16780064) at assert.c:66\n> > #6 0x102daba8 in mdextend (reln=0x1042628c <PageSetChecksumInplace+44>, forknum=812540744, blocknum=33, buffer=0x306e6000, skipFsync=812539904) at md.c:472\n> > #7 0x102d6760 in smgrextend (reln=0x306e6670, forknum=812540744, blocknum=33, buffer=0x306e6000, skipFsync=812539904) at smgr.c:541\n> > #8 0x104c8dac in smgr_bulk_flush (bulkstate=0x306e6000) at bulk_write.c:245\n> \n> So that's:\n> \n> static const PGIOAlignedBlock zero_buffer = {{0}}; /* worth BLCKSZ */\n> \n> ...\n> smgrextend(bulkstate->smgr, bulkstate->forknum,\n> bulkstate->pages_written++,\n> &zero_buffer,\n> true);\n> \n> ... where PGIOAlignedBlock is:\n> \n> typedef union PGIOAlignedBlock\n> {\n> #ifdef pg_attribute_aligned\n> pg_attribute_aligned(PG_IO_ALIGN_SIZE)\n> #endif\n> char data[BLCKSZ];\n> ...\n> \n> We see this happen with both xlc and gcc (new enough to know how to do\n> this). One idea would be that the AIX *linker* is unable to align it,\n> as that is the common tool-chain component here (and unlike stack and\n> heap objects, this scope is the linker's job). There is a\n> pre-existing example of a zero-buffer that is at file scope like that:\n> pg_prewarm.c. 
Perhaps it doesn't get tested?\n> \n> Hmm.\n\nGCC docs do say \"For some linkers, the maximum supported alignment may be very\nvery small.\", but AIX \"man LD\" says \"data sections are aligned on a boundary\nso as to satisfy the alignment of all CSECTs in the sections\". It also has -H\nand -K flags to force some particular higher alignment.\n\nOn GNU/Linux x64, gcc correctly records alignment=2**12 for the associated\nsection (.rodata for bulk_write.o zero_buffer, .bss for pg_prewarm.o\nblockbuffer). If I'm reading this right, neither AIX gcc nor xlc is marking\nthe section with sufficient alignment, in bulk_write.o or pg_prewarm.o:\n\n$ /opt/cfarm/binutils-latest/bin/objdump --section-headers ~/farm/*/HEAD/pgsqlkeep.*/src/backend/storage/smgr/bulk_write.o\n\n/home/nm/farm/gcc64/HEAD/pgsqlkeep.2024-02-24_00-03-22/src/backend/storage/smgr/bulk_write.o: file format aix5coff64-rs6000\n\nSections:\nIdx Name Size VMA LMA File off Algn\n 0 .text 0000277c 0000000000000000 0000000000000000 000000f0 2**2\n CONTENTS, ALLOC, LOAD, RELOC, CODE\n 1 .data 000000e4 000000000000277c 000000000000277c 0000286c 2**3\n CONTENTS, ALLOC, LOAD, RELOC, DATA\n 2 .debug 0001f7ea 0000000000000000 0000000000000000 00002950 2**3\n CONTENTS\n\n/home/nm/farm/xlc32/HEAD/pgsqlkeep.2024-02-24_15-12-23/src/backend/storage/smgr/bulk_write.o: file format aixcoff-rs6000\n\nSections:\nIdx Name Size VMA LMA File off Algn\n 0 .text 00000880 00000000 00000000 00000180 2**2\n CONTENTS, ALLOC, LOAD, RELOC, CODE\n 1 .data 0000410c 00000880 00000880 00000a00 2**3\n CONTENTS, ALLOC, LOAD, RELOC, DATA\n 2 .bss 00000000 0000498c 0000498c 00000000 2**3\n ALLOC\n 3 .debug 00008448 00000000 00000000 00004b24 2**3\n CONTENTS\n 4 .except 00000018 00000000 00000000 00004b0c 2**3\n CONTENTS, LOAD\n\n$ /opt/cfarm/binutils-latest/bin/objdump --section-headers ~/farm/*/HEAD/pgsqlkeep.*/contrib/pg_prewarm/pg_prewarm.o\n\n/home/nm/farm/gcc32/HEAD/pgsqlkeep.2024-01-21_03-16-12/contrib/pg_prewarm/pg_prewarm.o: file format 
aixcoff-rs6000\n\nSections:\nIdx Name Size VMA LMA File off Algn\n 0 .text 00000a6c 00000000 00000000 000000b4 2**2\n CONTENTS, ALLOC, LOAD, RELOC, CODE\n 1 .data 00000044 00000a6c 00000a6c 00000b20 2**3\n CONTENTS, ALLOC, LOAD, RELOC, DATA\n 2 .bss 00002550 00000ab0 00000ab0 00000000 2**3\n ALLOC\n 3 .debug 0001c50e 00000000 00000000 00000b64 2**3\n CONTENTS\n\n/home/nm/farm/gcc64/HEAD/pgsqlkeep.2024-02-15_17-13-04/contrib/pg_prewarm/pg_prewarm.o: file format aix5coff64-rs6000\n\nSections:\nIdx Name Size VMA LMA File off Algn\n 0 .text 00000948 0000000000000000 0000000000000000 00000138 2**2\n CONTENTS, ALLOC, LOAD, RELOC, CODE\n 1 .data 00000078 0000000000000948 0000000000000948 00000a80 2**3\n CONTENTS, ALLOC, LOAD, RELOC, DATA\n 2 .bss 00002640 00000000000009c0 00000000000009c0 00000000 2**3\n ALLOC\n 3 .debug 0001d887 0000000000000000 0000000000000000 00000af8 2**3\n CONTENTS\n\n\n",
"msg_date": "Sat, 24 Feb 2024 11:50:24 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 8:50 AM Noah Misch <[email protected]> wrote:\n> On GNU/Linux x64, gcc correctly records alignment=2**12 for the associated\n> section (.rodata for bulk_write.o zero_buffer, .bss for pg_prewarm.o\n> blockbuffer). If I'm reading this right, neither AIX gcc nor xlc is marking\n> the section with sufficient alignment, in bulk_write.o or pg_prewarm.o:\n\nAh, that is a bit of a hazard that we should probably document.\n\nI guess the ideas to fix this would be: use smgrzeroextend() instead\nof this coding, and/or perhaps look at the coding of pg_pwrite_zeros()\n(function-local static) for any other place that needs such a thing,\nif it would be satisfied by function-local scope?\n\n\n",
"msg_date": "Sun, 25 Feb 2024 09:12:43 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 9:12 AM Thomas Munro <[email protected]> wrote:\n> On Sun, Feb 25, 2024 at 8:50 AM Noah Misch <[email protected]> wrote:\n> > On GNU/Linux x64, gcc correctly records alignment=2**12 for the associated\n> > section (.rodata for bulk_write.o zero_buffer, .bss for pg_prewarm.o\n> > blockbuffer). If I'm reading this right, neither AIX gcc nor xlc is marking\n> > the section with sufficient alignment, in bulk_write.o or pg_prewarm.o:\n>\n> Ah, that is a bit of a hazard that we should probably document.\n>\n> I guess the ideas to fix this would be: use smgrzeroextend() instead\n> of this coding, and/or perhaps look at the coding of pg_pwrite_zeros()\n> (function-local static) for any other place that needs such a thing,\n> if it would be satisfied by function-local scope?\n\nErm, wait, how does that function-local static object work differently?\n\n\n",
"msg_date": "Sun, 25 Feb 2024 09:13:47 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 09:13:47AM +1300, Thomas Munro wrote:\n> On Sun, Feb 25, 2024 at 9:12 AM Thomas Munro <[email protected]> wrote:\n> > On Sun, Feb 25, 2024 at 8:50 AM Noah Misch <[email protected]> wrote:\n> > > On GNU/Linux x64, gcc correctly records alignment=2**12 for the associated\n> > > section (.rodata for bulk_write.o zero_buffer, .bss for pg_prewarm.o\n> > > blockbuffer). If I'm reading this right, neither AIX gcc nor xlc is marking\n> > > the section with sufficient alignment, in bulk_write.o or pg_prewarm.o:\n> >\n> > Ah, that is a bit of a hazard that we should probably document.\n> >\n> > I guess the ideas to fix this would be: use smgrzeroextend() instead\n> > of this coding, and/or perhaps look at the coding of pg_pwrite_zeros()\n> > (function-local static) for any other place that needs such a thing,\n> > if it would be satisfied by function-local scope?\n\nTrue. Alternatively, could arrange for \"#define PG_O_DIRECT 0\" on AIX, which\ndisables the alignment assertions (and debug_io_direct).\n\n> Erm, wait, how does that function-local static object work differently?\n\nI don't know specifically, but I expect they're different parts of the gcc\nimplementation. Aligning an xcoff section may entail some xcoff-specific gcc\ncomponent. Aligning a function-local object just changes the early\ninstructions of the function; it's independent of the object format.\n\n\n",
"msg_date": "Sat, 24 Feb 2024 12:26:12 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-24 11:50:24 -0800, Noah Misch wrote:\n> > We see this happen with both xlc and gcc (new enough to know how to do\n> > this). One idea would be that the AIX *linker* is unable to align it,\n> > as that is the common tool-chain component here (and unlike stack and\n> > heap objects, this scope is the linker's job). There is a\n> > pre-existing example of a zero-buffer that is at file scope like that:\n> > pg_prewarm.c. Perhaps it doesn't get tested?\n> >\n> > Hmm.\n>\n> GCC docs do say \"For some linkers, the maximum supported alignment may be very\n> very small.\", but AIX \"man LD\" says \"data sections are aligned on a boundary\n> so as to satisfy the alignment of all CSECTs in the sections\". It also has -H\n> and -K flags to force some particular higher alignment.\n\nSome xlc manual [1] states that\n\n n must be a positive power of 2, or NIL. NIL can be specified as either\n __attribute__((aligned())) or __attribute__((aligned)); this is the same as\n specifying the maximum system alignment (16 bytes on all UNIX platforms).\n\nWhich does seems to suggest that this is a platform restriction.\n\n\nLet's just drop AIX. This isn't the only alignment issue we've found and the\nsolution for those isn't so much a fix as forcing everyone to carefully only\nlook into one direction and not notice the cliffs to either side.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.ibm.com/docs/en/SSGH2K_13.1.2/com.ibm.compilers.aix.doc/proguide.pdf\n\n\n",
"msg_date": "Sat, 24 Feb 2024 13:29:36 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On 24 February 2024 23:29:36 EET, Andres Freund <[email protected]> wrote:\n>Hi,\n>\n>On 2024-02-24 11:50:24 -0800, Noah Misch wrote:\n>> > We see this happen with both xlc and gcc (new enough to know how to do\n>> > this). One idea would be that the AIX *linker* is unable to align it,\n>> > as that is the common tool-chain component here (and unlike stack and\n>> > heap objects, this scope is the linker's job). There is a\n>> > pre-existing example of a zero-buffer that is at file scope like that:\n>> > pg_prewarm.c. Perhaps it doesn't get tested?\n>> >\n>> > Hmm.\n>>\n>> GCC docs do say \"For some linkers, the maximum supported alignment may be very\n>> very small.\", but AIX \"man LD\" says \"data sections are aligned on a boundary\n>> so as to satisfy the alignment of all CSECTs in the sections\". It also has -H\n>> and -K flags to force some particular higher alignment.\n>\n>Some xlc manual [1] states that\n>\n> n must be a positive power of 2, or NIL. NIL can be specified as either\n> __attribute__((aligned())) or __attribute__((aligned)); this is the same as\n> specifying the maximum system alignment (16 bytes on all UNIX platforms).\n>\n>Which does seems to suggest that this is a platform restriction.\n\nMy reading of that paragraph is that you can set it to any powet of two, and it should work. 16 bytes is just what you get if you set it to NIL.\n\n>Let's just drop AIX. This isn't the only alignment issue we've found and the\n>solution for those isn't so much a fix as forcing everyone to carefully only\n>look into one direction and not notice the cliffs to either side.\n\nI think the way that decision should go is that as long as someone is willing to step up and do the work keep AIX support going, we support it. To be clear, that someone is not me. 
Anyone willing to do it?\n\nRegarding the issue at hand, perhaps we should define PG_IO_ALIGN_SIZE as 16 on AIX, if that's the best the linker can do on that platform.\n\nWe could also make the allocation 2*PG_IO_ALIGN_SIZE and round up the starting address ourselves to PG_IO_ALIGN_SIZE. Or allocate it in the heap.\n\n- Heikki\n\n\n",
"msg_date": "Sun, 25 Feb 2024 00:06:14 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 11:06 AM Heikki Linnakangas <[email protected]> wrote:\n> Regarding the issue at hand, perhaps we should define PG_IO_ALIGN_SIZE as 16 on AIX, if that's the best the linker can do on that platform.\n\nYou'll probably get either an error or silently fall back to buffered\nI/O, if direct I/O is enabled and you try to read/write a badly\naligned buffer. That's documented (they offer finfo() to query it,\nbut it's always 4KB for the same sort of reasons as it is on every\nother OS).\n\n\n",
"msg_date": "Sun, 25 Feb 2024 11:16:43 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 11:16 AM Thomas Munro <[email protected]> wrote:\n> On Sun, Feb 25, 2024 at 11:06 AM Heikki Linnakangas <[email protected]> wrote:\n> > Regarding the issue at hand, perhaps we should define PG_IO_ALIGN_SIZE as 16 on AIX, if that's the best the linker can do on that platform.\n>\n> You'll probably get either an error or silently fall back to buffered\n> I/O, if direct I/O is enabled and you try to read/write a badly\n> aligned buffer. That's documented (they offer finfo() to query it,\n> but it's always 4KB for the same sort of reasons as it is on every\n> other OS).\n\nI guess it's the latter (\"to work efficiently\" sounds like it isn't\ngoing to reject the request):\n\nhttps://www.ibm.com/docs/en/aix/7.3?topic=tuning-direct-io\n\nIf you make it < 4KB then all direct I/O would be affected, not just\nthis one place, so then you might as well just not allow direct I/O on\nAIX at all, to avoid giving a false impression that it does something.\n(Note that if we think the platform lacks O_DIRECT we don't make those\nassertions about alignment).\n\nFWIW I'm aware of one other thing that is wrong with our direct I/O\nsupport on AIX: it should perhaps be using a different flag. I\ncreated a wiki page to defer thinking about any AIX issues\nuntil/unless at least one real, live user shows up, which hasn't\nhappened yet: https://wiki.postgresql.org/wiki/AIX\n\n\n",
"msg_date": "Sun, 25 Feb 2024 11:37:50 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On 25/02/2024 00:37, Thomas Munro wrote:\n> On Sun, Feb 25, 2024 at 11:16 AM Thomas Munro <[email protected]> wrote:\n>> On Sun, Feb 25, 2024 at 11:06 AM Heikki Linnakangas <[email protected]> wrote:\n>>> Regarding the issue at hand, perhaps we should define PG_IO_ALIGN_SIZE as 16 on AIX, if that's the best the linker can do on that platform.\n>>\n>> You'll probably get either an error or silently fall back to buffered\n>> I/O, if direct I/O is enabled and you try to read/write a badly\n>> aligned buffer. That's documented (they offer finfo() to query it,\n>> but it's always 4KB for the same sort of reasons as it is on every\n>> other OS).\n> \n> I guess it's the latter (\"to work efficiently\" sounds like it isn't\n> going to reject the request):\n> \n> https://www.ibm.com/docs/en/aix/7.3?topic=tuning-direct-io\n> \n> If you make it < 4KB then all direct I/O would be affected, not just\n> this one place, so then you might as well just not allow direct I/O on\n> AIX at all, to avoid giving a false impression that it does something.\n> (Note that if we think the platform lacks O_DIRECT we don't make those\n> assertions about alignment).\n> \n> FWIW I'm aware of one other thing that is wrong with our direct I/O\n> support on AIX: it should perhaps be using a different flag. I\n> created a wiki page to defer thinking about any AIX issues\n> until/unless at least one real, live user shows up, which hasn't\n> happened yet: https://wiki.postgresql.org/wiki/AIX\n\nHere's a patch that effectively disables direct I/O on AIX. I'm inclined \nto commit this as a quick fix to make the buildfarm green again.\n\nI agree with Andres though, that unless someone raises their hand and \nvolunteers to properly maintain the AIX support, we should drop it. The \ncurrent AIX buildfarm members are running AIX 7.1, which has been out of \nsupport since May 2023 \n(https://www.ibm.com/support/pages/aix-support-lifecycle-information). 
\nSee also older thread on this [0].\n\nNoah, you're running the current AIX buildfarm animals. How much effort \nare you interested to put into AIX support?\n\n[0] \nhttps://www.postgresql.org/message-id/20220702183354.a6uhja35wta7agew%40alap3.anarazel.de\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Sun, 25 Feb 2024 16:34:47 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Sun, Feb 25, 2024 at 04:34:47PM +0200, Heikki Linnakangas wrote:\n> I agree with Andres though, that unless someone raises their hand and\n> volunteers to properly maintain the AIX support, we should drop it.\n\nThere's no way forward in which AIX support stops doing net harm. Even if AIX\nenthusiasts intercepted would-be buildfarm failures and fixed them before\nbuildfarm.postgresql.org could see them, the damage from the broader community\nseeing the AIX-specific code would outweigh the benefits of AIX support. I've\nnow disabled the animals for v17+, though each may do one more run before\npicking up the disable.\n\nMy upthread observation about xcoff section alignment was a red herring. gcc\npopulates symbol-level alignment, and section-level alignment is unnecessary\nif symbol-level alignment is correct. The simplest workaround for $SUBJECT\nAIX failure would be to remove the \"const\", based on the results of the\nattached test program. The pg_prewarm.c var is like al4096_static in the\noutputs below, hence the lack of trouble there. 
The bulk_write.c var is like\nal4096_static_const_initialized.\n\n==== gcc 8.3.0\nal4096 4096 @ 0x11000c000 (mod 0)\nal4096_initialized 4096 @ 0x110000fd0 (mod 4048 - BUG)\nal4096_const 4096 @ 0x11000f000 (mod 0)\nal4096_const_initialized 4096 @ 0x10000cd00 (mod 3328 - BUG)\nal4096_static 4096 @ 0x110005000 (mod 0)\nal4096_static_initialized 4096 @ 0x110008000 (mod 0)\nal4096_static_const 4096 @ 0x100000c10 (mod 3088 - BUG)\nal4096_static_const_initialized 4096 @ 0x100003c10 (mod 3088 - BUG)\n==== xlc 12.01.0000.0000\nal4096 4096 @ 0x110008000 (mod 0)\nal4096_initialized 4096 @ 0x110004000 (mod 0)\nal4096_const 4096 @ 0x11000b000 (mod 0)\nal4096_const_initialized 4096 @ 0x100007000 (mod 0)\nal4096_static 4096 @ 0x11000e000 (mod 0)\nal4096_static_initialized 4096 @ 0x110001000 (mod 0)\nal4096_static_const 4096 @ 0x110011000 (mod 0)\nal4096_static_const_initialized 4096 @ 0x1000007d0 (mod 2000 - BUG)\n==== ibm-clang 17.1.1.2\nal4096 4096 @ 0x110001000 (mod 0)\nal4096_initialized 4096 @ 0x110004000 (mod 0)\nal4096_const 4096 @ 0x100001000 (mod 0)\nal4096_const_initialized 4096 @ 0x100005000 (mod 0)\nal4096_static 4096 @ 0x110008000 (mod 0)\nal4096_static_initialized 4096 @ 0x11000b000 (mod 0)\nal4096_static_const 4096 @ 0x100009000 (mod 0)\nal4096_static_const_initialized 4096 @ 0x10000d000 (mod 0)",
"msg_date": "Sun, 25 Feb 2024 11:43:22 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Noah Misch <[email protected]> writes:\n> On Sun, Feb 25, 2024 at 04:34:47PM +0200, Heikki Linnakangas wrote:\n>> I agree with Andres though, that unless someone raises their hand and\n>> volunteers to properly maintain the AIX support, we should drop it.\n\n> There's no way forward in which AIX support stops doing net harm. Even if AIX\n> enthusiasts intercepted would-be buildfarm failures and fixed them before\n> buildfarm.postgresql.org could see them, the damage from the broader community\n> seeing the AIX-specific code would outweigh the benefits of AIX support. I've\n> now disabled the animals for v17+, though each may do one more run before\n> picking up the disable.\n\nSo, we now need to strip the remnants of AIX support from the code and\ndocs? I don't see that much of it, but it's misleading to leave it\nthere.\n\n(BTW, I still want to nuke the remaining snippets of HPPA support.\nI don't think it does anybody any good to make it look like that's\nstill expected to work.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Feb 2024 14:51:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 1:21 AM Tom Lane <[email protected]> wrote:\n> So, we now need to strip the remnants of AIX support from the code and\n> docs? I don't see that much of it, but it's misleading to leave it\n> there.\n>\n> (BTW, I still want to nuke the remaining snippets of HPPA support.\n> I don't think it does anybody any good to make it look like that's\n> still expected to work.)\n\n+1 for removing things that don't work (or that we think probably don't work).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Feb 2024 09:42:03 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 09:42:03AM +0530, Robert Haas wrote:\n> On Mon, Feb 26, 2024 at 1:21 AM Tom Lane <[email protected]> wrote:\n>> So, we now need to strip the remnants of AIX support from the code and\n>> docs? I don't see that much of it, but it's misleading to leave it\n>> there.\n>>\n>> (BTW, I still want to nuke the remaining snippets of HPPA support.\n>> I don't think it does anybody any good to make it look like that's\n>> still expected to work.)\n> \n> +1 for removing things that don't work (or that we think probably don't work).\n\nSeeing this stuff eat developer time because of the debugging of weird\nissues while having a very limited impact for end-users is sad, so +1\nfor a cleanup of any remnants if this disappears.\n--\nMichael",
"msg_date": "Mon, 26 Feb 2024 13:18:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On 26/02/2024 06:18, Michael Paquier wrote:\n> On Mon, Feb 26, 2024 at 09:42:03AM +0530, Robert Haas wrote:\n>> On Mon, Feb 26, 2024 at 1:21 AM Tom Lane <[email protected]> wrote:\n>>> So, we now need to strip the remnants of AIX support from the code and\n>>> docs? I don't see that much of it, but it's misleading to leave it\n>>> there.\n>>>\n>>> (BTW, I still want to nuke the remaining snippets of HPPA support.\n>>> I don't think it does anybody any good to make it look like that's\n>>> still expected to work.)\n>>\n>> +1 for removing things that don't work (or that we think probably don't work).\n> \n> Seeing this stuff eat developer time because of the debugging of weird\n> issues while having a very limited impact for end-users is sad, so +1\n> for a cleanup of any remnants if this disappears.\n\nHere's a patch to fully remove AIX support.\n\nOne small issue that warrants some discussion (in sanity_check.sql):\n\n> --- When ALIGNOF_DOUBLE==4 (e.g. AIX), the C ABI may impose 8-byte alignment on\n> +-- When MAXIMUM_ALIGNOF==8 but ALIGNOF_DOUBLE==4, the C ABI may impose 8-byte alignment\n> -- some of the C types that correspond to TYPALIGN_DOUBLE SQL types. To ensure\n> -- catalog C struct layout matches catalog tuple layout, arrange for the tuple\n> -- offset of each fixed-width, attalign='d' catalog column to be divisible by 8\n> -- unconditionally. Keep such columns before the first NameData column of the\n> -- catalog, since packagers can override NAMEDATALEN to an odd number.\n> +-- (XXX: I'm not sure if any of the supported platforms have MAXIMUM_ALIGNOF==8 and\n> +-- ALIGNOF_DOUBLE==4. Perhaps we should just require that\n> +-- ALIGNOF_DOUBLE==MAXIMUM_ALIGNOF)\n\nWhat do y'all think of adding a check for \nALIGNOF_DOUBLE==MAXIMUM_ALIGNOF to configure.ac and meson.build? It's \nnot a requirement today, but I believe AIX was the only platform where \nthat was not true. 
With AIX gone, that combination won't be tested, and \nwe will probably break it sooner or later.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 28 Feb 2024 00:24:01 +0400",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> What do y'all think of adding a check for \n> ALIGNOF_DOUBLE==MAXIMUM_ALIGNOF to configure.ac and meson.build? It's \n> not a requirement today, but I believe AIX was the only platform where \n> that was not true. With AIX gone, that combination won't be tested, and \n> we will probably break it sooner or later.\n\n+1, and then probably revert the whole test addition of 79b716cfb7a.\n\nI did a quick scrape of the buildfarm, and identified these as the\nonly animals reporting ALIGNOF_DOUBLE less than 8:\n\n$ grep 'alignment of double' alignments | grep -v ' 8$'\n hornet | 2024-02-22 16:26:16 | checking alignment of double... 4\n lapwing | 2024-02-27 12:40:15 | checking alignment of double... (cached) 4\n mandrill | 2024-02-19 01:03:47 | checking alignment of double... 4\n sungazer | 2024-02-21 00:22:48 | checking alignment of double... 4\n tern | 2024-02-22 13:25:12 | checking alignment of double... 4\n\nWith AIX out of the picture, lapwing will be the only remaining\nanimal testing MAXALIGN less than 8. That seems like a single\npoint of failure ... should we spin up another couple 32-bit\nanimals? I had supposed that my faithful old PPC animal mamba\nwas helping to check this, but I see that under NetBSD it's\njoined the ALIGNOF_DOUBLE==8 crowd.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Feb 2024 15:45:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-27 15:45:45 -0500, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n> With AIX out of the picture, lapwing will be the only remaining\n> animal testing MAXALIGN less than 8. That seems like a single\n> point of failure ... should we spin up another couple 32-bit\n> animals? I had supposed that my faithful old PPC animal mamba\n> was helping to check this, but I see that under NetBSD it's\n> joined the ALIGNOF_DOUBLE==8 crowd.\n\nI can set up a i386 animal, albeit on an amd64 kernel. But I don't think the\nlatter matters.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Feb 2024 12:59:14 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 9:24 AM Heikki Linnakangas <[email protected]> wrote:\n> Here's a patch to fully remove AIX support.\n\n--- a/doc/src/sgml/installation.sgml\n+++ b/doc/src/sgml/installation.sgml\n@@ -3401,7 +3401,7 @@ export MANPATH\n <para>\n <productname>PostgreSQL</productname> can be expected to work on current\n versions of these operating systems: Linux, Windows,\n- FreeBSD, OpenBSD, NetBSD, DragonFlyBSD, macOS, AIX, Solaris, and illumos.\n+ FreeBSD, OpenBSD, NetBSD, DragonFlyBSD, macOS, Solaris, and illumos.\n\nThere is also a little roll-of-honour of operating systems we used to\nsupport, just a couple of paragraphs down, where AIX should appear.\n\n\n",
"msg_date": "Wed, 28 Feb 2024 11:30:00 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 12:24:01AM +0400, Heikki Linnakangas wrote:\n> Here's a patch to fully remove AIX support.\n\n> Subject: [PATCH 1/1] Remove AIX support\n> \n> There isn't a lot of user demand for AIX support, no one has stepped\n> up to the plate to properly maintain it, so it's best to remove it\n\nRegardless of how someone were to step up to maintain it, we'd be telling them\nsuch contributions have negative value and must stop. We're expelling AIX due\nto low demand, compiler bugs, its ABI, and its shlib symbol export needs.\n\n> altogether. AIX is still supported for stable versions.\n> \n> The acute issue that triggered this decision was that after commit\n> 8af2565248, the AIX buildfarm members have been hitting this\n> assertion:\n> \n> TRAP: failed Assert(\"(uintptr_t) buffer == TYPEALIGN(PG_IO_ALIGN_SIZE, buffer)\"), File: \"md.c\", Line: 472, PID: 2949728\n> \n> Apperently the \"pg_attribute_aligned(a)\" attribute doesn't work on AIX\n> (linker?) for values larger than PG_IO_ALIGN_SIZE.\n\nNo; see https://postgr.es/m/20240225194322.a5%40rfd.leadboat.com\n\n\n",
"msg_date": "Tue, 27 Feb 2024 19:52:59 -0800",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-28 00:24:01 +0400, Heikki Linnakangas wrote:\n> Here's a patch to fully remove AIX support.\n\nThomas mentioned to me that cfbot failed with this applied:\nhttps://cirrus-ci.com/task/6348635416297472\nhttps://api.cirrus-ci.com/v1/artifact/task/6348635416297472/log/tmp_install/log/initdb-template.log\n\ninitdb: error while loading shared libraries: libpq.so.5: cannot open shared object file: No such file or directory\n\n\nWhile I couldn't reproduce the failure, I did notice that locally with the\npatch applied, system libpq ended up getting used. Which isn't pre-installed\nin the CI environment, explaining the failure.\n\nThe problem is due to this hunk:\n> @@ -401,10 +376,6 @@ install-lib-static: $(stlib) installdirs-lib\n> \n> install-lib-shared: $(shlib) installdirs-lib\n> ifdef soname\n> -# we don't install $(shlib) on AIX\n> -# (see http://archives.postgresql.org/message-id/52EF20B2E3209443BC37736D00C3C1380A6E79FE@EXADV1.host.magwien.gv.at)\n> -ifneq ($(PORTNAME), aix)\n> -\t$(INSTALL_SHLIB) $< '$(DESTDIR)$(libdir)/$(shlib)'\n> ifneq ($(PORTNAME), cygwin)\n> ifneq ($(PORTNAME), win32)\n> ifneq ($(shlib), $(shlib_major))\n\nSo the versioned name didn't end up getting installed anymore, leading to\nbroken symlinks in the install directory.\n\n\n\n> diff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl b/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n> index 86cc01a640b..fc6b00224f6 100644\n> --- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n> +++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n> @@ -400,9 +400,6 @@ is(scalar(@tblspc_tars), 1, 'one tablespace tar was created');\n> SKIP:\n> {\n> \tmy $tar = $ENV{TAR};\n> -\t# don't check for a working tar here, to accommodate various odd\n> -\t# cases such as AIX. 
If tar doesn't work the init_from_backup below\n> -\t# will fail.\n> \tskip \"no tar program available\", 1\n> \t if (!defined $tar || $tar eq '');\n\nMaybe better to not remove the whole comment, just the reference to AIX?\n\n\n> diff --git a/src/test/regress/sql/sanity_check.sql b/src/test/regress/sql/sanity_check.sql\n> index 7f338d191c6..2e9d5ebef3f 100644\n> --- a/src/test/regress/sql/sanity_check.sql\n> +++ b/src/test/regress/sql/sanity_check.sql\n> @@ -21,12 +21,15 @@ SELECT relname, relkind\n> AND relfilenode <> 0;\n> \n> --\n> --- When ALIGNOF_DOUBLE==4 (e.g. AIX), the C ABI may impose 8-byte alignment on\n> +-- When MAXIMUM_ALIGNOF==8 but ALIGNOF_DOUBLE==4, the C ABI may impose 8-byte alignment\n> -- some of the C types that correspond to TYPALIGN_DOUBLE SQL types. To ensure\n> -- catalog C struct layout matches catalog tuple layout, arrange for the tuple\n> -- offset of each fixed-width, attalign='d' catalog column to be divisible by 8\n> -- unconditionally. Keep such columns before the first NameData column of the\n> -- catalog, since packagers can override NAMEDATALEN to an odd number.\n> +-- (XXX: I'm not sure if any of the supported platforms have MAXIMUM_ALIGNOF==8 and\n> +-- ALIGNOF_DOUBLE==4. Perhaps we should just require that\n> +-- ALIGNOF_DOUBLE==MAXIMUM_ALIGNOF)\n> --\n> WITH check_columns AS (\n> SELECT relname, attname,\n\nI agree, this should be an error, and we should then remove the test.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Feb 2024 01:26:42 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-27 12:59:14 -0800, Andres Freund wrote:\n> On 2024-02-27 15:45:45 -0500, Tom Lane wrote:\n> > Heikki Linnakangas <[email protected]> writes:\n> > With AIX out of the picture, lapwing will be the only remaining\n> > animal testing MAXALIGN less than 8. That seems like a single\n> > point of failure ... should we spin up another couple 32-bit\n> > animals? I had supposed that my faithful old PPC animal mamba\n> > was helping to check this, but I see that under NetBSD it's\n> > joined the ALIGNOF_DOUBLE==8 crowd.\n>\n> I can set up a i386 animal, albeit on an amd64 kernel. But I don't think the\n> latter matters.\n\nThat animal is now running, named \"adder\". Due to a typo there are still\nspurious errors on the older branches, but I've triggered those to be re-run.\n\nCurrently adder builds with autconf on older branches and with meson on newer\nones. Is it worth setting up two animals so we cover both ac and meson with 32\nbit on 16/HEAD?\n\nThere's something odd about how we fail when not specifying the correct PERL\nat configure time:\n/home/bf/bf-build/adder/REL_13_STABLE/pgsql.build/../pgsql/src/pl/plperl/Util.c: loadable library and perl binaries are mismatched (got first handshake key 0x93c0080, needed 0x9580080)\n\nNot sure what gets linked against what wrongly. But I'm also not sure it's\nworth the energy to investigate.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Feb 2024 03:13:47 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Committed, after fixing the various little things you pointed out:\n\nOn 28/02/2024 00:30, Thomas Munro wrote:\n> --- a/doc/src/sgml/installation.sgml\n> +++ b/doc/src/sgml/installation.sgml\n> @@ -3401,7 +3401,7 @@ export MANPATH\n> <para>\n> <productname>PostgreSQL</productname> can be expected to work on current\n> versions of these operating systems: Linux, Windows,\n> - FreeBSD, OpenBSD, NetBSD, DragonFlyBSD, macOS, AIX, Solaris, and illumos.\n> + FreeBSD, OpenBSD, NetBSD, DragonFlyBSD, macOS, Solaris, and illumos.\n> \n> There is also a little roll-of-honour of operating systems we used to\n> support, just a couple of paragraphs down, where AIX should appear.\n\nAdded.\n\nOn 28/02/2024 05:52, Noah Misch wrote:\n> Regardless of how someone were to step up to maintain it, we'd be telling them\n> such contributions have negative value and must stop. We're expelling AIX due\n> to low demand, compiler bugs, its ABI, and its shlib symbol export needs.\n\nReworded.\n\n>> Apperently the \"pg_attribute_aligned(a)\" attribute doesn't work on AIX\n>> (linker?) 
for values larger than PG_IO_ALIGN_SIZE.\n> \n> No; see https://postgr.es/m/20240225194322.a5%40rfd.leadboat.com\n\nOk, reworded.\n\nOn 28/02/2024 11:26, Andres Freund wrote:\n> On 2024-02-28 00:24:01 +0400, Heikki Linnakangas wrote:\n> The problem is due to this hunk:\n>> @@ -401,10 +376,6 @@ install-lib-static: $(stlib) installdirs-lib\n>> \n>> install-lib-shared: $(shlib) installdirs-lib\n>> ifdef soname\n>> -# we don't install $(shlib) on AIX\n>> -# (see http://archives.postgresql.org/message-id/52EF20B2E3209443BC37736D00C3C1380A6E79FE@EXADV1.host.magwien.gv.at)\n>> -ifneq ($(PORTNAME), aix)\n>> -\t$(INSTALL_SHLIB) $< '$(DESTDIR)$(libdir)/$(shlib)'\n>> ifneq ($(PORTNAME), cygwin)\n>> ifneq ($(PORTNAME), win32)\n>> ifneq ($(shlib), $(shlib_major))\n> \n> So the versioned name didn't end up getting installed anymore, leading to\n> broken symlinks in the install directory.\n\nFixed, thanks!\n\n>> diff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl b/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n>> index 86cc01a640b..fc6b00224f6 100644\n>> --- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n>> +++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n>> @@ -400,9 +400,6 @@ is(scalar(@tblspc_tars), 1, 'one tablespace tar was created');\n>> SKIP:\n>> {\n>> \tmy $tar = $ENV{TAR};\n>> -\t# don't check for a working tar here, to accommodate various odd\n>> -\t# cases such as AIX. If tar doesn't work the init_from_backup below\n>> -\t# will fail.\n>> \tskip \"no tar program available\", 1\n>> \t if (!defined $tar || $tar eq '');\n> \n> Maybe better to not remove the whole comment, just the reference to AIX?\n\nOk, done\n\n>> diff --git a/src/test/regress/sql/sanity_check.sql b/src/test/regress/sql/sanity_check.sql\n>> index 7f338d191c6..2e9d5ebef3f 100644\n>> --- a/src/test/regress/sql/sanity_check.sql\n>> +++ b/src/test/regress/sql/sanity_check.sql\n>> @@ -21,12 +21,15 @@ SELECT relname, relkind\n>> AND relfilenode <> 0;\n>> \n>> --\n>> --- When ALIGNOF_DOUBLE==4 (e.g. 
AIX), the C ABI may impose 8-byte alignment on\n>> +-- When MAXIMUM_ALIGNOF==8 but ALIGNOF_DOUBLE==4, the C ABI may impose 8-byte alignment\n>> -- some of the C types that correspond to TYPALIGN_DOUBLE SQL types. To ensure\n>> -- catalog C struct layout matches catalog tuple layout, arrange for the tuple\n>> -- offset of each fixed-width, attalign='d' catalog column to be divisible by 8\n>> -- unconditionally. Keep such columns before the first NameData column of the\n>> -- catalog, since packagers can override NAMEDATALEN to an odd number.\n>> +-- (XXX: I'm not sure if any of the supported platforms have MAXIMUM_ALIGNOF==8 and\n>> +-- ALIGNOF_DOUBLE==4. Perhaps we should just require that\n>> +-- ALIGNOF_DOUBLE==MAXIMUM_ALIGNOF)\n>> --\n>> WITH check_columns AS (\n>> SELECT relname, attname,\n> \n> I agree, this should be an error, and we should then remove the test.\n\nDone.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 28 Feb 2024 15:25:19 +0400",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 28/02/2024 00:30, Thomas Munro wrote:\n>> I agree, this should be an error, and we should then remove the test.\n\n> Done.\n\nThe commit that added that test added a support function\n\"get_columns_length\" which is now unused. Should we get rid of that\nas well?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Feb 2024 11:04:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On 28/02/2024 18:04, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> On 28/02/2024 00:30, Thomas Munro wrote:\n>>> I agree, this should be an error, and we should then remove the test.\n> \n>> Done.\n> \n> The commit that added that test added a support function\n> \"get_columns_length\" which is now unused. Should we get rid of that\n> as well?\n\nI see you just removed it; thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 28 Feb 2024 23:47:33 +0400",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 28/02/2024 18:04, Tom Lane wrote:\n>> The commit that added that test added a support function\n>> \"get_columns_length\" which is now unused. Should we get rid of that\n>> as well?\n\n> I see you just removed it; thanks!\n\nIn the no-good-deed-goes-unpunished department: crake is reporting\nthat this broke our cross-version upgrade tests. I'll go fix that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Feb 2024 17:23:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Hi,\n\nOn Sat, Feb 24, 2024 at 01:29:36PM -0800, Andres Freund wrote:\n> Let's just drop AIX. This isn't the only alignment issue we've found and the\n> solution for those isn't so much a fix as forcing everyone to carefully only\n> look into one direction and not notice the cliffs to either side.\n\nWhile I am not against dropping AIX (and certainly won't step up to\nmaintain it just for fun), I don't think burying this inside some\n\"Relation bulk write facility\" thread is helpful; I have changed the\nthread title as a first step.\n\nThe commit message says there is not a lot of user demand and that might\nbe right, but contrary to other fringe OSes that got removed like HPPA\nor Irix, I believe Postgres on AIX is still used in production and if\nso, probably in a mission-critical manner at some old-school\ninstitutions (in fact, one of our customers does just that) and not as a\nthought-experiment. It is probably well-known among Postgres hackers\nthat AIX support is problematic/a burden, but the current users might\nnot be aware of this.\n\nNot sure what to do about this (especially now that this has been\ncommitted), maybe there should have been be a public deprecation notice\nfirst for v17... On the other hand, that might not work if important\nfeatures like direct-IO would have to be bumped from v17 just because of\nAIX.\n\nI posted about this on Twitter and Mastodon to see whether anybody\ncomplains and did not get a lot of feedback.\n\nIn any case, users will have a couple of years to migrate as usual if\nthey upgrade to v16.\n\n\nMichael\n\n\n",
"msg_date": "Thu, 29 Feb 2024 09:13:04 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Remove AIX Support (was: Re: Relation bulk write facility)"
},
{
"msg_contents": "> On 29 Feb 2024, at 09:13, Michael Banck <[email protected]> wrote:\n\n> In any case, users will have a couple of years to migrate as usual if\n> they upgrade to v16.\n\nAs you say, there are many years left of AIX being supported so there is plenty\nof runway for planning a migration.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 29 Feb 2024 09:40:55 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove AIX Support (was: Re: Relation bulk write facility)"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-29 09:13:04 +0100, Michael Banck wrote:\n> The commit message says there is not a lot of user demand and that might\n> be right, but contrary to other fringe OSes that got removed like HPPA\n> or Irix, I believe Postgres on AIX is still used in production and if\n> so, probably in a mission-critical manner at some old-school\n> institutions (in fact, one of our customers does just that) and not as a\n> thought-experiment. It is probably well-known among Postgres hackers\n> that AIX support is problematic/a burden, but the current users might\n> not be aware of this.\n\nThen these users should have paid somebody to actually do maintenance work on\nthe AIX support,o it doesn't regularly stand in the way of implementing\nvarious things.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Feb 2024 00:57:31 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove AIX Support (was: Re: Relation bulk write facility)"
},
{
"msg_contents": "Hi,\n\nOn Thu, Feb 29, 2024 at 12:57:31AM -0800, Andres Freund wrote:\n> On 2024-02-29 09:13:04 +0100, Michael Banck wrote:\n> > The commit message says there is not a lot of user demand and that might\n> > be right, but contrary to other fringe OSes that got removed like HPPA\n> > or Irix, I believe Postgres on AIX is still used in production and if\n> > so, probably in a mission-critical manner at some old-school\n> > institutions (in fact, one of our customers does just that) and not as a\n> > thought-experiment. It is probably well-known among Postgres hackers\n> > that AIX support is problematic/a burden, but the current users might\n> > not be aware of this.\n> \n> Then these users should have paid somebody to actually do maintenance work on\n> the AIX support,o it doesn't regularly stand in the way of implementing\n> various things.\n\nRight, absolutely.\n\nBut: did we ever tell them to do that? I don't think it's reasonable for\nthem to expect to follow -hackers and jump in when somebody grumbles\nabout AIX being a burden somewhere deep down a thread...\n\n\nMichael\n\n\n",
"msg_date": "Thu, 29 Feb 2024 10:24:24 +0100",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove AIX Support (was: Re: Relation bulk write facility)"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-29 10:24:24 +0100, Michael Banck wrote:\n> On Thu, Feb 29, 2024 at 12:57:31AM -0800, Andres Freund wrote:\n> > On 2024-02-29 09:13:04 +0100, Michael Banck wrote:\n> > > The commit message says there is not a lot of user demand and that might\n> > > be right, but contrary to other fringe OSes that got removed like HPPA\n> > > or Irix, I believe Postgres on AIX is still used in production and if\n> > > so, probably in a mission-critical manner at some old-school\n> > > institutions (in fact, one of our customers does just that) and not as a\n> > > thought-experiment. It is probably well-known among Postgres hackers\n> > > that AIX support is problematic/a burden, but the current users might\n> > > not be aware of this.\n> > \n> > Then these users should have paid somebody to actually do maintenance work on\n> > the AIX support,o it doesn't regularly stand in the way of implementing\n> > various things.\n> \n> Right, absolutely.\n> \n> But: did we ever tell them to do that? I don't think it's reasonable for\n> them to expect to follow -hackers and jump in when somebody grumbles\n> about AIX being a burden somewhere deep down a thread...\n\nWell, the thing is that it's commonly going to be deep down some threads that\nportability problems cause pain. This is far from the only time. Just a few\nthreads:\n\nhttps://postgr.es/m/CA+TgmoauCAv+p4Z57PqgVgNxsApxKs3Yh9mDLdUDB8fep-s=1w@mail.gmail.com\nhttps://postgr.es/m/CA+hUKGK=DOC+hE-62FKfZy=Ybt5uLkrg3zCZD-jFykM-iPn8yw@mail.gmail.com\nhttps://postgr.es/m/[email protected]\nhttps://postgr.es/m/[email protected]\nhttps://postgr.es/m/[email protected]\nhttps://postgr.es/m/20220820204401.vrf5kejih6jofvqb%40awork3.anarazel.de\nhttps://postgr.es/m/E1oWpzF-002EG4-AG%40gemulon.postgresql.org\n\nThis is far from all.\n\nThe only platform rivalling AIX on the pain-caused metric is windows. And at\nleast that can be tested via CI (or locally). 
We've been relying on the gcc\nbuildfarm to be able to maintain AIX at all, and that's not a resource that\nscales to many users.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Feb 2024 01:35:31 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove AIX Support (was: Re: Relation bulk write facility)"
},
{
"msg_contents": "> On 29 Feb 2024, at 10:24, Michael Banck <[email protected]> wrote:\n> On Thu, Feb 29, 2024 at 12:57:31AM -0800, Andres Freund wrote:\n\n>> Then these users should have paid somebody to actually do maintenance work on\n>> the AIX support,o it doesn't regularly stand in the way of implementing\n>> various things.\n> \n> Right, absolutely.\n> \n> But: did we ever tell them to do that? I don't think it's reasonable for\n> them to expect to follow -hackers and jump in when somebody grumbles\n> about AIX being a burden somewhere deep down a thread...\n\nHaving spent a fair bit of time within open source projects that companies rely\non, my experience is that those companies who need to hear such news have zero\ninteraction with the project and most of time don't even know the project\ncommunity exist. That conversely also means that the project don't know they\nexist. If their consultants and suppliers, who have a higher probability of\nknowing this, hasn't told them then it's highly unlikely that anything we say\nwill get across.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 29 Feb 2024 10:55:48 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove AIX Support (was: Re: Relation bulk write facility)"
},
{
"msg_contents": "Hi,\nHistorically many public hospitals I work for had IBM Power hardware.\nThe SMT8 (8 threads/cores) capabilities of Power CPU are useful to lower Oracle licence & support cost. We migrate to PostgreSQL and it runs very well on Power, especially since the (relatively) recent parallel executions features of the RDBMS match very well the CPU capabilities.\nWe chose to run PostgreSQL on Debian/Power (Little Endian) since ppc64le is an official Debian port. No AIX then. Only problem is that we still need to access Oracle databases and it can be useful to read directly with oracle_fdw but this tool needs an instant client and it's not open source of course. Oracle provides a binary but they don't provide patches for Debian/Power Little Endian (strange situation...) Just to say that of course we chose Linux for PostgreSQL but sometimes things are not so easy... We could have chosen AIX and we still have a ???? about interoperability.\nBest regards,\nPhil\n________________________________\nDe : Andres Freund <[email protected]>\nEnvoyé : jeudi 29 février 2024 10:35\nÀ : Michael Banck <[email protected]>\nCc : Noah Misch <[email protected]>; Thomas Munro <[email protected]>; Heikki Linnakangas <[email protected]>; Peter Smith <[email protected]>; Robert Haas <[email protected]>; vignesh C <[email protected]>; pgsql-hackers <[email protected]>; Melanie Plageman <[email protected]>\nObjet : Re: Remove AIX Support (was: Re: Relation bulk write facility)\n\nHi,\n\nOn 2024-02-29 10:24:24 +0100, Michael Banck wrote:\n> On Thu, Feb 29, 2024 at 12:57:31AM -0800, Andres Freund wrote:\n> > On 2024-02-29 09:13:04 +0100, Michael Banck wrote:\n> > > The commit message says there is not a lot of user demand and that might\n> > > be right, but contrary to other fringe OSes that got removed like HPPA\n> > > or Irix, I believe Postgres on AIX is still used in production and if\n> > > so, probably in a mission-critical manner at some old-school\n> > > institutions (in 
fact, one of our customers does just that) and not as a\n> > > thought-experiment. It is probably well-known among Postgres hackers\n> > > that AIX support is problematic/a burden, but the current users might\n> > > not be aware of this.\n> >\n> > Then these users should have paid somebody to actually do maintenance work on\n> > the AIX support,o it doesn't regularly stand in the way of implementing\n> > various things.\n>\n> Right, absolutely.\n>\n> But: did we ever tell them to do that? I don't think it's reasonable for\n> them to expect to follow -hackers and jump in when somebody grumbles\n> about AIX being a burden somewhere deep down a thread...\n\nWell, the thing is that it's commonly going to be deep down some threads that\nportability problems cause pain. This is far from the only time. Just a few\nthreads:\n\nhttps://postgr.es/m/CA+TgmoauCAv+p4Z57PqgVgNxsApxKs3Yh9mDLdUDB8fep-s=1w@mail.gmail.com\nhttps://postgr.es/m/CA+hUKGK=DOC+hE-62FKfZy=Ybt5uLkrg3zCZD-jFykM-iPn8yw@mail.gmail.com\nhttps://postgr.es/m/[email protected]\nhttps://postgr.es/m/[email protected]\nhttps://postgr.es/m/[email protected]\nhttps://postgr.es/m/20220820204401.vrf5kejih6jofvqb%40awork3.anarazel.de\nhttps://postgr.es/m/E1oWpzF-002EG4-AG%40gemulon.postgresql.org\n\nThis is far from all.\n\nThe only platform rivalling AIX on the pain-caused metric is windows. And at\nleast that can be tested via CI (or locally). We've been relying on the gcc\nbuildfarm to be able to maintain AIX at all, and that's not a resource that\nscales to many users.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 29 Feb 2024 19:12:02 +0000",
"msg_from": "Phil Florent <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Remove AIX Support (was: Re: Relation bulk write facility)"
},
{
"msg_contents": "On Fri, Feb 23, 2024 at 04:27:34PM +0200, Heikki Linnakangas wrote:\n> Committed this. Thanks everyone!\n\nCommit 8af2565 wrote:\n> --- /dev/null\n> +++ b/src/backend/storage/smgr/bulk_write.c\n\n> +/*\n> + * Finish bulk write operation.\n> + *\n> + * This WAL-logs and flushes any remaining pending writes to disk, and fsyncs\n> + * the relation if needed.\n> + */\n> +void\n> +smgr_bulk_finish(BulkWriteState *bulkstate)\n> +{\n> +\t/* WAL-log and flush any remaining pages */\n> +\tsmgr_bulk_flush(bulkstate);\n> +\n> +\t/*\n> +\t * When we wrote out the pages, we passed skipFsync=true to avoid the\n> +\t * overhead of registering all the writes with the checkpointer. Register\n> +\t * the whole relation now.\n> +\t *\n> +\t * There is one hole in that idea: If a checkpoint occurred while we were\n> +\t * writing the pages, it already missed fsyncing the pages we had written\n> +\t * before the checkpoint started. A crash later on would replay the WAL\n> +\t * starting from the checkpoint, therefore it wouldn't replay our earlier\n> +\t * WAL records. So if a checkpoint started after the bulk write, fsync\n> +\t * the files now.\n> +\t */\n> +\tif (!SmgrIsTemp(bulkstate->smgr))\n> +\t{\n\nShouldn't this be \"if (bulkstate->use_wal)\"? The GetRedoRecPtr()-based\ndecision is irrelevant to the !wal case. Either we don't need fsync at all\n(TEMP or UNLOGGED) or smgrDoPendingSyncs() will do it (wal_level=minimal). I\ndon't see any functional problem, but this likely arranges for an unnecessary\nsync when a checkpoint starts between mdcreate() and here. 
(The mdcreate()\nsync may also be unnecessary, but that's longstanding.)\n\n> +\t\t/*\n> +\t\t * Prevent a checkpoint from starting between the GetRedoRecPtr() and\n> +\t\t * smgrregistersync() calls.\n> +\t\t */\n> +\t\tAssert((MyProc->delayChkptFlags & DELAY_CHKPT_START) == 0);\n> +\t\tMyProc->delayChkptFlags |= DELAY_CHKPT_START;\n> +\n> +\t\tif (bulkstate->start_RedoRecPtr != GetRedoRecPtr())\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * A checkpoint occurred and it didn't know about our writes, so\n> +\t\t\t * fsync() the relation ourselves.\n> +\t\t\t */\n> +\t\t\tMyProc->delayChkptFlags &= ~DELAY_CHKPT_START;\n> +\t\t\tsmgrimmedsync(bulkstate->smgr, bulkstate->forknum);\n> +\t\t\telog(DEBUG1, \"flushed relation because a checkpoint occurred concurrently\");\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\tsmgrregistersync(bulkstate->smgr, bulkstate->forknum);\n> +\t\t\tMyProc->delayChkptFlags &= ~DELAY_CHKPT_START;\n> +\t\t}\n> +\t}\n> +}\n\nThis is an elegant optimization.\n\n\n",
"msg_date": "Mon, 1 Jul 2024 13:52:50 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "Thanks for poking at this!\n\nOn 01/07/2024 23:52, Noah Misch wrote:\n> Commit 8af2565 wrote:\n>> --- /dev/null\n>> +++ b/src/backend/storage/smgr/bulk_write.c\n> \n>> +/*\n>> + * Finish bulk write operation.\n>> + *\n>> + * This WAL-logs and flushes any remaining pending writes to disk, and fsyncs\n>> + * the relation if needed.\n>> + */\n>> +void\n>> +smgr_bulk_finish(BulkWriteState *bulkstate)\n>> +{\n>> +\t/* WAL-log and flush any remaining pages */\n>> +\tsmgr_bulk_flush(bulkstate);\n>> +\n>> +\t/*\n>> +\t * When we wrote out the pages, we passed skipFsync=true to avoid the\n>> +\t * overhead of registering all the writes with the checkpointer. Register\n>> +\t * the whole relation now.\n>> +\t *\n>> +\t * There is one hole in that idea: If a checkpoint occurred while we were\n>> +\t * writing the pages, it already missed fsyncing the pages we had written\n>> +\t * before the checkpoint started. A crash later on would replay the WAL\n>> +\t * starting from the checkpoint, therefore it wouldn't replay our earlier\n>> +\t * WAL records. So if a checkpoint started after the bulk write, fsync\n>> +\t * the files now.\n>> +\t */\n>> +\tif (!SmgrIsTemp(bulkstate->smgr))\n>> +\t{\n> \n> Shouldn't this be \"if (bulkstate->use_wal)\"? The GetRedoRecPtr()-based\n> decision is irrelevant to the !wal case. Either we don't need fsync at all\n> (TEMP or UNLOGGED) or smgrDoPendingSyncs() will do it (wal_level=minimal).\n\nThe point of GetRedoRecPtr() is to detect if a checkpoint has started \nconcurrently. It works for that purpose whether or not the bulk load is \nWAL-logged. It is not compared with the LSNs of WAL records written by \nthe bulk load.\n\nUnlogged tables do need to be fsync'd. The scenario is:\n\n1. Bulk load an unlogged table.\n2. Shut down Postgres cleanly\n3. Pull power plug from server, and restart.\n\nWe talked about this earlier in the \"Unlogged relation copy is not \nfsync'd\" thread [1]. 
I had already forgotten about that; that bug \nactually still exists in back branches, and we should fix it..\n\n[1] \nhttps://www.postgresql.org/message-id/flat/65e94fc8-ce1d-dd02-3be3-fda0fe8f2965%40iki.fi\n\n> I don't see any functional problem, but this likely arranges for an\n> unnecessary sync when a checkpoint starts between mdcreate() and\n> here. (The mdcreate() sync may also be unnecessary, but that's\n> longstanding.)\nHmm, yes we might do two fsyncs() with wal_level=minimal, unnecessarily. \nIt seems hard to eliminate the redundancy. smgr_bulk_finish() could skip \nthe fsync, if it knew that smgrDoPendingSyncs() will do it later. \nHowever, smgrDoPendingSyncs() might also decide to WAL-log the relation \ninstead of fsyncing it, and in that case we do still need the fsync.\n\nFortunately, fsync() on a file that's already flushed to disk is pretty \ncheap.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 2 Jul 2024 00:53:05 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Tue, Jul 02, 2024 at 12:53:05AM +0300, Heikki Linnakangas wrote:\n> On 01/07/2024 23:52, Noah Misch wrote:\n> > Commit 8af2565 wrote:\n> > > --- /dev/null\n> > > +++ b/src/backend/storage/smgr/bulk_write.c\n> > \n> > > +/*\n> > > + * Finish bulk write operation.\n> > > + *\n> > > + * This WAL-logs and flushes any remaining pending writes to disk, and fsyncs\n> > > + * the relation if needed.\n> > > + */\n> > > +void\n> > > +smgr_bulk_finish(BulkWriteState *bulkstate)\n> > > +{\n> > > +\t/* WAL-log and flush any remaining pages */\n> > > +\tsmgr_bulk_flush(bulkstate);\n> > > +\n> > > +\t/*\n> > > +\t * When we wrote out the pages, we passed skipFsync=true to avoid the\n> > > +\t * overhead of registering all the writes with the checkpointer. Register\n> > > +\t * the whole relation now.\n> > > +\t *\n> > > +\t * There is one hole in that idea: If a checkpoint occurred while we were\n> > > +\t * writing the pages, it already missed fsyncing the pages we had written\n> > > +\t * before the checkpoint started. A crash later on would replay the WAL\n> > > +\t * starting from the checkpoint, therefore it wouldn't replay our earlier\n> > > +\t * WAL records. So if a checkpoint started after the bulk write, fsync\n> > > +\t * the files now.\n> > > +\t */\n> > > +\tif (!SmgrIsTemp(bulkstate->smgr))\n> > > +\t{\n> > \n> > Shouldn't this be \"if (bulkstate->use_wal)\"? The GetRedoRecPtr()-based\n> > decision is irrelevant to the !wal case. Either we don't need fsync at all\n> > (TEMP or UNLOGGED) or smgrDoPendingSyncs() will do it (wal_level=minimal).\n> \n> The point of GetRedoRecPtr() is to detect if a checkpoint has started\n> concurrently. It works for that purpose whether or not the bulk load is\n> WAL-logged. It is not compared with the LSNs of WAL records written by the\n> bulk load.\n\nI think the significance of start_RedoRecPtr is it preceding all records\nneeded to recreate the bulk write. 
If start_RedoRecPtr==GetRedoRecPtr() and\nwe crash after commit, we're indifferent to whether the rel gets synced at a\ncheckpoint before that crash or rebuilt from WAL after that crash. If\nstart_RedoRecPtr!=GetRedoRecPtr(), some WAL of the bulk write is already\ndeleted, so only smgrimmedsync() suffices. Overall, while it is not compared\nwith LSNs in WAL records, it's significant only to the extent that such a WAL\nrecord exists. What am I missing?\n\n> Unlogged tables do need to be fsync'd. The scenario is:\n> \n> 1. Bulk load an unlogged table.\n> 2. Shut down Postgres cleanly\n> 3. Pull power plug from server, and restart.\n> \n> We talked about this earlier in the \"Unlogged relation copy is not fsync'd\"\n> thread [1]. I had already forgotten about that; that bug actually still\n> exists in back branches, and we should fix it..\n> \n> [1] https://www.postgresql.org/message-id/flat/65e94fc8-ce1d-dd02-3be3-fda0fe8f2965%40iki.fi\n\nAh, that's right. I agree this code suffices for unlogged. As a further\noptimization, it would be valid to ignore GetRedoRecPtr() for unlogged and\nalways call smgrregistersync(). (For any rel, smgrimmedsync() improves on\nsmgrregistersync() only if we fail to reach the shutdown checkpoint. Without\na shutdown checkpoint, unlogged rels get reset anyway.)\n\n> > I don't see any functional problem, but this likely arranges for an\n> > unnecessary sync when a checkpoint starts between mdcreate() and\n> > here. (The mdcreate() sync may also be unnecessary, but that's\n> > longstanding.)\n> Hmm, yes we might do two fsyncs() with wal_level=minimal, unnecessarily. It\n> seems hard to eliminate the redundancy. smgr_bulk_finish() could skip the\n> fsync, if it knew that smgrDoPendingSyncs() will do it later. 
However,\n> smgrDoPendingSyncs() might also decide to WAL-log the relation instead of\n> fsyncing it, and in that case we do still need the fsync.\n\nWe do not need the fsync in the \"WAL-log the relation instead\" case; see\nhttps://postgr.es/m/[email protected]\n\nSo maybe like this:\n\n if (use_wal) /* includes init forks */\n current logic;\n else if (unlogged)\n smgrregistersync;\n /* else temp || (permanent && wal_level=minimal): nothing to do */\n\n> Fortunately, fsync() on a file that's already flushed to disk is pretty\n> cheap.\n\nYep. I'm more concerned about future readers wondering why the function is\nusing LSNs to decide what to do about data that doesn't appear in WAL. A\ncomment could be another way to fix that, though.\n\n\n",
"msg_date": "Mon, 1 Jul 2024 16:24:48 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On 02/07/2024 02:24, Noah Misch wrote:\n> On Tue, Jul 02, 2024 at 12:53:05AM +0300, Heikki Linnakangas wrote:\n>> On 01/07/2024 23:52, Noah Misch wrote:\n>>> Commit 8af2565 wrote:\n>>>> --- /dev/null\n>>>> +++ b/src/backend/storage/smgr/bulk_write.c\n>>>\n>>>> +/*\n>>>> + * Finish bulk write operation.\n>>>> + *\n>>>> + * This WAL-logs and flushes any remaining pending writes to disk, and fsyncs\n>>>> + * the relation if needed.\n>>>> + */\n>>>> +void\n>>>> +smgr_bulk_finish(BulkWriteState *bulkstate)\n>>>> +{\n>>>> +\t/* WAL-log and flush any remaining pages */\n>>>> +\tsmgr_bulk_flush(bulkstate);\n>>>> +\n>>>> +\t/*\n>>>> +\t * When we wrote out the pages, we passed skipFsync=true to avoid the\n>>>> +\t * overhead of registering all the writes with the checkpointer. Register\n>>>> +\t * the whole relation now.\n>>>> +\t *\n>>>> +\t * There is one hole in that idea: If a checkpoint occurred while we were\n>>>> +\t * writing the pages, it already missed fsyncing the pages we had written\n>>>> +\t * before the checkpoint started. A crash later on would replay the WAL\n>>>> +\t * starting from the checkpoint, therefore it wouldn't replay our earlier\n>>>> +\t * WAL records. So if a checkpoint started after the bulk write, fsync\n>>>> +\t * the files now.\n>>>> +\t */\n>>>> +\tif (!SmgrIsTemp(bulkstate->smgr))\n>>>> +\t{\n>>>\n>>> Shouldn't this be \"if (bulkstate->use_wal)\"? The GetRedoRecPtr()-based\n>>> decision is irrelevant to the !wal case. Either we don't need fsync at all\n>>> (TEMP or UNLOGGED) or smgrDoPendingSyncs() will do it (wal_level=minimal).\n>>\n>> The point of GetRedoRecPtr() is to detect if a checkpoint has started\n>> concurrently. It works for that purpose whether or not the bulk load is\n>> WAL-logged. It is not compared with the LSNs of WAL records written by the\n>> bulk load.\n> \n> I think the significance of start_RedoRecPtr is it preceding all records\n> needed to recreate the bulk write. 
If start_RedoRecPtr==GetRedoRecPtr() and\n> we crash after commit, we're indifferent to whether the rel gets synced at a\n> checkpoint before that crash or rebuilt from WAL after that crash. If\n> start_RedoRecPtr!=GetRedoRecPtr(), some WAL of the bulk write is already\n> deleted, so only smgrimmedsync() suffices. Overall, while it is not compared\n> with LSNs in WAL records, it's significant only to the extent that such a WAL\n> record exists. What am I missing?\n\nYou're right. You pointed out below that we don't need to register or \nimmediately fsync the relation if it was not WAL-logged, I missed that.\n\nIn the alternative universe that we did need to fsync() even in !use_wal \ncase, the point of the start_RedoRecPtr==GetRedoRecPtr() was to detect \nwhether the last checkpoint \"missed\" fsyncing the files that we wrote. \nBut the point is moot now.\n\n>> Unlogged tables do need to be fsync'd. The scenario is:\n>>\n>> 1. Bulk load an unlogged table.\n>> 2. Shut down Postgres cleanly\n>> 3. Pull power plug from server, and restart.\n>>\n>> We talked about this earlier in the \"Unlogged relation copy is not fsync'd\"\n>> thread [1]. I had already forgotten about that; that bug actually still\n>> exists in back branches, and we should fix it..\n>>\n>> [1] https://www.postgresql.org/message-id/flat/65e94fc8-ce1d-dd02-3be3-fda0fe8f2965%40iki.fi\n> \n> Ah, that's right. I agree this code suffices for unlogged. As a further\n> optimization, it would be valid to ignore GetRedoRecPtr() for unlogged and\n> always call smgrregistersync(). (For any rel, smgrimmedsync() improves on\n> smgrregistersync() only if we fail to reach the shutdown checkpoint. Without\n> a shutdown checkpoint, unlogged rels get reset anyway.)\n> \n>>> I don't see any functional problem, but this likely arranges for an\n>>> unnecessary sync when a checkpoint starts between mdcreate() and\n>>> here. 
(The mdcreate() sync may also be unnecessary, but that's\n>>> longstanding.)\n>> Hmm, yes we might do two fsyncs() with wal_level=minimal, unnecessarily. It\n>> seems hard to eliminate the redundancy. smgr_bulk_finish() could skip the\n>> fsync, if it knew that smgrDoPendingSyncs() will do it later. However,\n>> smgrDoPendingSyncs() might also decide to WAL-log the relation instead of\n>> fsyncing it, and in that case we do still need the fsync.\n> \n> We do not need the fsync in the \"WAL-log the relation instead\" case; see\n> https://postgr.es/m/[email protected]\n\nAh, true, I missed that log_newpage_range() loads the pages to the \nbuffer cache and dirties them. That kind of sucks actually, I wish it \ndidn't need to dirty the buffers.\n\n> So maybe like this:\n> \n> if (use_wal) /* includes init forks */\n> current logic;\n> else if (unlogged)\n> smgrregistersync;\n> /* else temp || (permanent && wal_level=minimal): nothing to do */\n\nMakes sense, except that we cannot distinguish between unlogged \nrelations and permanent relations with !use_wal here.\n\nIt would be nice to have relpersistence flag in SMgrRelation. I remember \nwanting to have that before, although I don't remember what the context \nwas exactly.\n\n>> Fortunately, fsync() on a file that's already flushed to disk is pretty\n>> cheap.\n> \n> Yep. I'm more concerned about future readers wondering why the function is\n> using LSNs to decide what to do about data that doesn't appear in WAL. A\n> comment could be another way to fix that, though.\n\nAgreed, this is all very subtle, and deserves a good comment. What do \nyou think of the attached?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Tue, 2 Jul 2024 14:42:50 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On Tue, Jul 02, 2024 at 02:42:50PM +0300, Heikki Linnakangas wrote:\n> On 02/07/2024 02:24, Noah Misch wrote:\n> > On Tue, Jul 02, 2024 at 12:53:05AM +0300, Heikki Linnakangas wrote:\n\n> log_newpage_range() loads the pages to the buffer\n> cache and dirties them. That kinds of sucks actually, I wish it didn't need\n> to dirty the buffers.\n\nAgreed.\n\n> > > Fortunately, fsync() on a file that's already flushed to disk is pretty\n> > > cheap.\n> > \n> > Yep. I'm more concerned about future readers wondering why the function is\n> > using LSNs to decide what to do about data that doesn't appear in WAL. A\n> > comment could be another way to fix that, though.\n> \n> Agreed, this is all very subtle, and deserves a good comment. What do you\n> think of the attached?\n\nLooks good. Thanks. pgindent doesn't preserve all your indentation, but it\ndoesn't make things objectionable, either.\n\n\n",
"msg_date": "Tue, 2 Jul 2024 20:41:31 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation bulk write facility"
},
{
"msg_contents": "On 03/07/2024 06:41, Noah Misch wrote:\n> On Tue, Jul 02, 2024 at 02:42:50PM +0300, Heikki Linnakangas wrote:\n>> On 02/07/2024 02:24, Noah Misch wrote:\n>>> On Tue, Jul 02, 2024 at 12:53:05AM +0300, Heikki Linnakangas wrote:\n>>>> Fortunately, fsync() on a file that's already flushed to disk is pretty\n>>>> cheap.\n>>>\n>>> Yep. I'm more concerned about future readers wondering why the function is\n>>> using LSNs to decide what to do about data that doesn't appear in WAL. A\n>>> comment could be another way to fix that, though.\n>>\n>> Agreed, this is all very subtle, and deserves a good comment. What do you\n>> think of the attached?\n> \n> Looks good. Thanks. pgindent doesn't preserve all your indentation, but it\n> doesn't make things objectionable, either.\n\nCommitted, thanks!\n\n(Sorry for the delay, I had forgotten about this already and found it \nonly now sedimented at the bottom of my inbox)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 16 Aug 2024 15:18:31 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation bulk write facility"
}
] |
[
{
"msg_contents": "Hi,\n\nWhat I am concerned about from the report [1] is that this comment is\na bit too terse; it might cause a misunderstanding that extensions can\ndo different things than we intend to allow:\n\n /*\n * 6. Finally, give extensions a chance to manipulate the path list.\n */\n if (set_join_pathlist_hook)\n set_join_pathlist_hook(root, joinrel, outerrel, innerrel,\n jointype, &extra);\n\nSo I would like to propose to extend the comment to explain what they\ncan do, as in the comment about set_rel_pathlist_hook() in allpaths.c.\nAttached is a patch for that.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CACawEhV%3D%2BQ0HXrcDergbTR9EkVFukgRPMTZbRFL-YK5CRmvYag%40mail.gmail.com",
"msg_date": "Wed, 20 Sep 2023 19:05:31 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Comment about set_join_pathlist_hook()"
},
{
"msg_contents": "On Wed, 20 Sept 2023 at 22:06, Etsuro Fujita <[email protected]> wrote:\n> So I would like to propose to extend the comment to explain what they\n> can do, as in the comment about set_rel_pathlist_hook() in allpaths.c.\n> Attached is a patch for that.\n\nLooks good to me.\n\nI see you've copy/edited the comment just above the call to\nset_rel_pathlist_hook(). That makes sense.\n\nDavid\n\n\n",
"msg_date": "Wed, 20 Sep 2023 22:49:18 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comment about set_join_pathlist_hook()"
},
{
"msg_contents": "On Wed, Sep 20, 2023, at 5:05 PM, Etsuro Fujita wrote:\n> Hi,\n>\n> What I am concerned about from the report [1] is that this comment is\n> a bit too terse; it might cause a misunderstanding that extensions can\n> do different things than we intend to allow:\n>\n> /*\n> * 6. Finally, give extensions a chance to manipulate the path list.\n> */\n> if (set_join_pathlist_hook)\n> set_join_pathlist_hook(root, joinrel, outerrel, innerrel,\n> jointype, &extra);\n>\n> So I would like to propose to extend the comment to explain what they\n> can do, as in the comment about set_rel_pathlist_hook() in allpaths.c.\n> Attached is a patch for that.\n\nIt makes sense. But why do you restrict addition to pathlist by only the add_path() routine? It can fail to add a path to the pathlist. We need to find out the result of the add_path operation and need to check the pathlist each time. So, sometimes, it can be better to add a path manually.\nOne more slip-up could be prevented by the comment: removing a path from the pathlist we should remember about the cheapest_* pointers.\nAlso, it may be good to remind a user, that jointype and extra->sjinfo->jointype aren't the same all the time.\n\n> [1] \n> https://www.postgresql.org/message-id/CACawEhV%3D%2BQ0HXrcDergbTR9EkVFukgRPMTZbRFL-YK5CRmvYag%40mail.gmail.com\n\n-- \nRegards,\nAndrei Lepikhov\n\n\n",
"msg_date": "Thu, 21 Sep 2023 09:49:16 +0700",
"msg_from": "\"Lepikhov Andrei\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comment about set_join_pathlist_hook()"
},

{
"msg_contents": "Hi,\n\nOn Thu, Sep 21, 2023 at 11:49 AM Lepikhov Andrei\n<[email protected]> wrote:\n> On Wed, Sep 20, 2023, at 5:05 PM, Etsuro Fujita wrote:\n> > What I am concerned about from the report [1] is that this comment is\n> > a bit too terse; it might cause a misunderstanding that extensions can\n> > do different things than we intend to allow:\n> >\n> > /*\n> > * 6. Finally, give extensions a chance to manipulate the path list.\n> > */\n> > if (set_join_pathlist_hook)\n> > set_join_pathlist_hook(root, joinrel, outerrel, innerrel,\n> > jointype, &extra);\n> >\n> > So I would like to propose to extend the comment to explain what they\n> > can do, as in the comment about set_rel_pathlist_hook() in allpaths.c.\n> > Attached is a patch for that.\n>\n> It makes sense. But why do you restrict addition to pathlist by only the add_path() routine? It can fail to add a path to the pathlist. We need to find out the result of the add_path operation and need to check the pathlist each time. So, sometimes, it can be better to add a path manually.\n\nI do not agree with you on this point; I think you can do so at your\nown responsibility, but I think it is better for extensions to use\nadd_path(), because that makes them stable. (Assuming that add_path()\nhas a bug and we change the logic of it to fix the bug, extensions\nthat do not follow the standard procedure might not work anymore.)\n\n> One more slip-up could be prevented by the comment: removing a path from the pathlist we should remember about the cheapest_* pointers.\n\nDo we really need to do this? 
I mean we do set_cheapest() afterward.\nSee standard_join_search().\n\n> Also, it may be good to remind a user, that jointype and extra->sjinfo->jointype aren't the same all the time.\n\nThat might be an improvement, but IMO that is not the point here,\nbecause the purpose of expanding the comment is to avoid extensions doing\ndifferent things than we intend to allow.\n\nThanks for looking!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 21 Sep 2023 14:53:01 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comment about set_join_pathlist_hook()"
},
{
"msg_contents": "On Thu, Sep 21, 2023, at 12:53 PM, Etsuro Fujita wrote:\n> Hi,\n>\n> On Thu, Sep 21, 2023 at 11:49 AM Lepikhov Andrei\n> <[email protected]> wrote:\n>> On Wed, Sep 20, 2023, at 5:05 PM, Etsuro Fujita wrote:\n>> > What I am concerned about from the report [1] is that this comment is\n>> > a bit too terse; it might cause a misunderstanding that extensions can\n>> > do different things than we intend to allow:\n>> >\n>> > /*\n>> > * 6. Finally, give extensions a chance to manipulate the path list.\n>> > */\n>> > if (set_join_pathlist_hook)\n>> > set_join_pathlist_hook(root, joinrel, outerrel, innerrel,\n>> > jointype, &extra);\n>> >\n>> > So I would like to propose to extend the comment to explain what they\n>> > can do, as in the comment about set_rel_pathlist_hook() in allpaths.c.\n>> > Attached is a patch for that.\n>>\n>> It makes sense. But why do you restrict addition to pathlist by only the add_path() routine? It can fail to add a path to the pathlist. We need to find out the result of the add_path operation and need to check the pathlist each time. So, sometimes, it can be better to add a path manually.\n>\n> I do not agree with you on this point; I think you can do so at your\n> own responsibility, but I think it is better for extensions to use\n> add_path(), because that makes them stable. (Assuming that add_path()\n> has a bug and we change the logic of it to fix the bug, extensions\n> that do not follow the standard procedure might not work anymore.)\n\nOk, I got it.This question related to the add_path() interface itself, not to the comment. It is awkward to check every time in the pathlist the result of the add_path.\n\n>> One more slip-up could be prevented by the comment: removing a path from the pathlist we should remember about the cheapest_* pointers.\n>\n> Do we really need to do this? I mean we do set_cheapest() afterward.\n> See standard_join_search().\n\nAgree, in the case of current join it doesn't make sense. 
I got stuck in this situation because, when providing an additional path at the current level, I need to arrange paths for the inner and outer too.\n\nThanks for the explanation!\n\n-- \nRegards,\nAndrei Lepikhov\n\n\n",
"msg_date": "Thu, 21 Sep 2023 13:11:17 +0700",
"msg_from": "\"Lepikhov Andrei\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comment about set_join_pathlist_hook()"
},
{
"msg_contents": "Hi,\n\nOn Wed, Sep 20, 2023 at 7:49 PM David Rowley <[email protected]> wrote:\n> On Wed, 20 Sept 2023 at 22:06, Etsuro Fujita <[email protected]> wrote:\n> > So I would like to propose to extend the comment to explain what they\n> > can do, as in the comment about set_rel_pathlist_hook() in allpaths.c.\n> > Attached is a patch for that.\n>\n> Looks good to me.\n>\n> I see you've copy/edited the comment just above the call to\n> set_rel_pathlist_hook(). That makes sense.\n\nCool! Pushed.\n\nThanks for taking a look!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 21 Sep 2023 20:07:56 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comment about set_join_pathlist_hook()"
}
] |
[
{
"msg_contents": "Hi,\n\nOn the latest master head, I can see a $subject bug that seems to be related\ncommit #b0e96f311985:\n\nHere is the table definition:\ncreate table foo(i int, j int, CONSTRAINT pk PRIMARY KEY(i) DEFERRABLE);\n\nAnd after restore from the dump, it shows a descriptor where column 'i' not\nmarked NOT NULL:\n\n=# \\d foo\n Table \"public.foo\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n i | integer | | |\n j | integer | | |\nIndexes:\n \"pk\" PRIMARY KEY, btree (i) DEFERRABLE\n\nThe pg_attribute entry:\n\n=# select attname, attnotnull from pg_attribute\nwhere attrelid = 'foo'::regclass and attnum > 0;\n\n attname | attnotnull\n---------+------------\n i | f\n j | f\n(2 rows)\n\n-- \nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 20 Sep 2023 18:28:36 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dump-restore losing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On 2023-Sep-20, Amul Sul wrote:\n\n> On the latest master head, I can see a $subject bug that seems to be related\n> commit #b0e96f311985:\n> \n> Here is the table definition:\n> create table foo(i int, j int, CONSTRAINT pk PRIMARY KEY(i) DEFERRABLE);\n\nInteresting, thanks for the report. Your attribution to that commit is\ncorrect. The table is dumped like this:\n\nCREATE TABLE public.foo (\n i integer CONSTRAINT pgdump_throwaway_notnull_0 NOT NULL NO INHERIT,\n j integer\n);\nALTER TABLE ONLY public.foo\n ADD CONSTRAINT pk PRIMARY KEY (i) DEFERRABLE;\nALTER TABLE ONLY public.foo DROP CONSTRAINT pgdump_throwaway_notnull_0;\n\nso the problem here is that the deferrable PK is not considered a reason\nto keep attnotnull set, so we produce a throwaway constraint that we\nthen drop. This is already bogus, but what is more bogus is the fact\nthat the backend accepts the DROP CONSTRAINT at all.\n\nThe pg_dump failing should be easy to fix, but fixing the backend to\nerror out sounds more critical. So, the reason for this behavior is\nthat RelationGetIndexList doesn't want to list an index that isn't\nmarked indimmediate as a primary key. 
I can easily hack around that by\ndoing\n\n\tdiff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c\n\tindex 7234cb3da6..971d9c8738 100644\n\t--- a/src/backend/utils/cache/relcache.c\n\t+++ b/src/backend/utils/cache/relcache.c\n\t@@ -4794,7 +4794,6 @@ RelationGetIndexList(Relation relation)\n\t\t\t * check them.\n\t\t\t */\n\t\t\tif (!index->indisunique ||\n\t-\t\t\t!index->indimmediate ||\n\t\t\t\t!heap_attisnull(htup, Anum_pg_index_indpred, NULL))\n\t\t\t\tcontinue;\n\t \n\t@@ -4821,6 +4820,9 @@ RelationGetIndexList(Relation relation)\n\t\t\t\t relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE))\n\t\t\t\tpkeyIndex = index->indexrelid;\n\t \n\t+\t\tif (!index->indimmediate)\n\t+\t\t\tcontinue;\n\t+\n\t\t\tif (!index->indisvalid)\n\t\t\t\tcontinue;\n\n\nBut of course this is not great, since it impacts unrelated bits of code\nthat are relying on relation->pkindex or RelationGetIndexAttrBitmap\nhaving their current behavior with non-immediate index.\n\nI think a real solution is to stop relying on RelationGetIndexAttrBitmap\nin ATExecDropNotNull(). (And, again, pg_dump needs some patch as well\nto avoid printing a throwaway NOT NULL constraint at this point.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 20 Sep 2023 16:59:49 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore losing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 8:29 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2023-Sep-20, Amul Sul wrote:\n>\n> > On the latest master head, I can see a $subject bug that seems to be\n> related\n> > commit #b0e96f311985:\n> >\n> > Here is the table definition:\n> > create table foo(i int, j int, CONSTRAINT pk PRIMARY KEY(i) DEFERRABLE);\n>\n> Interesting, thanks for the report. Your attribution to that commit is\n> correct. The table is dumped like this:\n>\n> CREATE TABLE public.foo (\n> i integer CONSTRAINT pgdump_throwaway_notnull_0 NOT NULL NO INHERIT,\n> j integer\n> );\n> ALTER TABLE ONLY public.foo\n> ADD CONSTRAINT pk PRIMARY KEY (i) DEFERRABLE;\n> ALTER TABLE ONLY public.foo DROP CONSTRAINT pgdump_throwaway_notnull_0;\n>\n> so the problem here is that the deferrable PK is not considered a reason\n> to keep attnotnull set, so we produce a throwaway constraint that we\n> then drop. This is already bogus, but what is more bogus is the fact\n> that the backend accepts the DROP CONSTRAINT at all.\n>\n> The pg_dump failing should be easy to fix, but fixing the backend to\n> error out sounds more critical. So, the reason for this behavior is\n> that RelationGetIndexList doesn't want to list an index that isn't\n> marked indimmediate as a primary key. 
I can easily hack around that by\n> doing\n>\n> diff --git a/src/backend/utils/cache/relcache.c\n> b/src/backend/utils/cache/relcache.c\n> index 7234cb3da6..971d9c8738 100644\n> --- a/src/backend/utils/cache/relcache.c\n> +++ b/src/backend/utils/cache/relcache.c\n> @@ -4794,7 +4794,6 @@ RelationGetIndexList(Relation relation)\n> * check them.\n> */\n> if (!index->indisunique ||\n> - !index->indimmediate ||\n> !heap_attisnull(htup,\n> Anum_pg_index_indpred, NULL))\n> continue;\n>\n> @@ -4821,6 +4820,9 @@ RelationGetIndexList(Relation relation)\n> relation->rd_rel->relkind ==\n> RELKIND_PARTITIONED_TABLE))\n> pkeyIndex = index->indexrelid;\n>\n> + if (!index->indimmediate)\n> + continue;\n> +\n> if (!index->indisvalid)\n> continue;\n>\n>\n> But of course this is not great, since it impacts unrelated bits of code\n> that are relying on relation->pkindex or RelationGetIndexAttrBitmap\n> having their current behavior with non-immediate index.\n>\n\nTrue, but still wondering why would relation->rd_pkattr skipped for a\ndeferrable primary key, which seems to be a bit incorrect to me since it\nsensing that relation doesn't have PK at all. Anyway, that is unrelated.\n\n\n> I think a real solution is to stop relying on RelationGetIndexAttrBitmap\n> in ATExecDropNotNull(). (And, again, pg_dump needs some patch as well\n> to avoid printing a throwaway NOT NULL constraint at this point.)\n>\n\nI might not have understood this, but I think, if it is ok to skip\nthrowaway NOT\nNULL for deferrable PK then that would be enough for the reported issue\nto be fixed. I quickly tried with the attached patch which looks sufficient\nto skip that, but, TBH, I haven't thought carefully about this change.\n\nRegards,\nAmul",
"msg_date": "Fri, 22 Sep 2023 16:25:21 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dump-restore losing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On Fri, 22 Sept 2023 at 18:45, Amul Sul <[email protected]> wrote:\n>\n>\n>\n> On Wed, Sep 20, 2023 at 8:29 PM Alvaro Herrera <[email protected]> wrote:\n>>\n>> On 2023-Sep-20, Amul Sul wrote:\n>>\n>> > On the latest master head, I can see a $subject bug that seems to be related\n>> > commit #b0e96f311985:\n>> >\n>> > Here is the table definition:\n>> > create table foo(i int, j int, CONSTRAINT pk PRIMARY KEY(i) DEFERRABLE);\n>>\n>> Interesting, thanks for the report. Your attribution to that commit is\n>> correct. The table is dumped like this:\n>>\n>> CREATE TABLE public.foo (\n>> i integer CONSTRAINT pgdump_throwaway_notnull_0 NOT NULL NO INHERIT,\n>> j integer\n>> );\n>> ALTER TABLE ONLY public.foo\n>> ADD CONSTRAINT pk PRIMARY KEY (i) DEFERRABLE;\n>> ALTER TABLE ONLY public.foo DROP CONSTRAINT pgdump_throwaway_notnull_0;\n>>\n>> so the problem here is that the deferrable PK is not considered a reason\n>> to keep attnotnull set, so we produce a throwaway constraint that we\n>> then drop. This is already bogus, but what is more bogus is the fact\n>> that the backend accepts the DROP CONSTRAINT at all.\n>>\n>> The pg_dump failing should be easy to fix, but fixing the backend to\n>> error out sounds more critical. So, the reason for this behavior is\n>> that RelationGetIndexList doesn't want to list an index that isn't\n>> marked indimmediate as a primary key. 
I can easily hack around that by\n>> doing\n>>\n>> diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c\n>> index 7234cb3da6..971d9c8738 100644\n>> --- a/src/backend/utils/cache/relcache.c\n>> +++ b/src/backend/utils/cache/relcache.c\n>> @@ -4794,7 +4794,6 @@ RelationGetIndexList(Relation relation)\n>> * check them.\n>> */\n>> if (!index->indisunique ||\n>> - !index->indimmediate ||\n>> !heap_attisnull(htup, Anum_pg_index_indpred, NULL))\n>> continue;\n>>\n>> @@ -4821,6 +4820,9 @@ RelationGetIndexList(Relation relation)\n>> relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE))\n>> pkeyIndex = index->indexrelid;\n>>\n>> + if (!index->indimmediate)\n>> + continue;\n>> +\n>> if (!index->indisvalid)\n>> continue;\n>>\n>>\n>> But of course this is not great, since it impacts unrelated bits of code\n>> that are relying on relation->pkindex or RelationGetIndexAttrBitmap\n>> having their current behavior with non-immediate index.\n>\n>\n> True, but still wondering why would relation->rd_pkattr skipped for a\n> deferrable primary key, which seems to be a bit incorrect to me since it\n> sensing that relation doesn't have PK at all. Anyway, that is unrelated.\n>\n>>\n>> I think a real solution is to stop relying on RelationGetIndexAttrBitmap\n>> in ATExecDropNotNull(). (And, again, pg_dump needs some patch as well\n>> to avoid printing a throwaway NOT NULL constraint at this point.)\n>\n>\n> I might not have understood this, but I think, if it is ok to skip throwaway NOT\n> NULL for deferrable PK then that would be enough for the reported issue\n> to be fixed. I quickly tried with the attached patch which looks sufficient\n> to skip that, but, TBH, I haven't thought carefully about this change.\n\nI did not see any test addition for this, can we add it?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 20 Jan 2024 07:54:52 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore losing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On Sat, Jan 20, 2024 at 7:55 AM vignesh C <[email protected]> wrote:\n\n> On Fri, 22 Sept 2023 at 18:45, Amul Sul <[email protected]> wrote:\n> >\n> >\n> >\n> > On Wed, Sep 20, 2023 at 8:29 PM Alvaro Herrera <[email protected]>\n> wrote:\n> >>\n> >> On 2023-Sep-20, Amul Sul wrote:\n> >>\n> >> > On the latest master head, I can see a $subject bug that seems to be\n> related\n> >> > commit #b0e96f311985:\n> >> >\n> >> > Here is the table definition:\n> >> > create table foo(i int, j int, CONSTRAINT pk PRIMARY KEY(i)\n> DEFERRABLE);\n> >>\n> >> Interesting, thanks for the report. Your attribution to that commit is\n> >> correct. The table is dumped like this:\n> >>\n> >> CREATE TABLE public.foo (\n> >> i integer CONSTRAINT pgdump_throwaway_notnull_0 NOT NULL NO INHERIT,\n> >> j integer\n> >> );\n> >> ALTER TABLE ONLY public.foo\n> >> ADD CONSTRAINT pk PRIMARY KEY (i) DEFERRABLE;\n> >> ALTER TABLE ONLY public.foo DROP CONSTRAINT pgdump_throwaway_notnull_0;\n> >>\n> >> so the problem here is that the deferrable PK is not considered a reason\n> >> to keep attnotnull set, so we produce a throwaway constraint that we\n> >> then drop. This is already bogus, but what is more bogus is the fact\n> >> that the backend accepts the DROP CONSTRAINT at all.\n> >>\n> >> The pg_dump failing should be easy to fix, but fixing the backend to\n> >> error out sounds more critical. So, the reason for this behavior is\n> >> that RelationGetIndexList doesn't want to list an index that isn't\n> >> marked indimmediate as a primary key. 
I can easily hack around that by\n> >> doing\n> >>\n> >> diff --git a/src/backend/utils/cache/relcache.c\n> b/src/backend/utils/cache/relcache.c\n> >> index 7234cb3da6..971d9c8738 100644\n> >> --- a/src/backend/utils/cache/relcache.c\n> >> +++ b/src/backend/utils/cache/relcache.c\n> >> @@ -4794,7 +4794,6 @@ RelationGetIndexList(Relation relation)\n> >> * check them.\n> >> */\n> >> if (!index->indisunique ||\n> >> - !index->indimmediate ||\n> >> !heap_attisnull(htup,\n> Anum_pg_index_indpred, NULL))\n> >> continue;\n> >>\n> >> @@ -4821,6 +4820,9 @@ RelationGetIndexList(Relation relation)\n> >> relation->rd_rel->relkind ==\n> RELKIND_PARTITIONED_TABLE))\n> >> pkeyIndex = index->indexrelid;\n> >>\n> >> + if (!index->indimmediate)\n> >> + continue;\n> >> +\n> >> if (!index->indisvalid)\n> >> continue;\n> >>\n> >>\n> >> But of course this is not great, since it impacts unrelated bits of code\n> >> that are relying on relation->pkindex or RelationGetIndexAttrBitmap\n> >> having their current behavior with non-immediate index.\n> >\n> >\n> > True, but still wondering why would relation->rd_pkattr skipped for a\n> > deferrable primary key, which seems to be a bit incorrect to me since it\n> > sensing that relation doesn't have PK at all. Anyway, that is unrelated.\n> >\n> >>\n> >> I think a real solution is to stop relying on RelationGetIndexAttrBitmap\n> >> in ATExecDropNotNull(). (And, again, pg_dump needs some patch as well\n> >> to avoid printing a throwaway NOT NULL constraint at this point.)\n> >\n> >\n> > I might not have understood this, but I think, if it is ok to skip\n> throwaway NOT\n> > NULL for deferrable PK then that would be enough for the reported issue\n> > to be fixed. 
I quickly tried with the attached patch which looks\n> sufficient\n> > to skip that, but, TBH, I haven't thought carefully about this change.\n>\n> I did not see any test addition for this, can we add it?\n>\n\nOk, added it in the attached version.\n\nThis was an experimental patch, I was looking for the comment on the\nproposed\napproach -- whether we could simply skip the throwaway NOT NULL constraint\nfor\ndeferred PK constraint. Moreover, skip that for any PK constraint.\n\nRegards,\nAmul",
"msg_date": "Tue, 23 Jan 2024 17:41:26 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dump-restore losing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "Hi hackers,\n\n>> I did not see any test addition for this, can we add it?\n>\n>\n> Ok, added it in the attached version.\n>\n> This was an experimental patch, I was looking for the comment on the proposed\n> approach -- whether we could simply skip the throwaway NOT NULL constraint for\n> deferred PK constraint. Moreover, skip that for any PK constraint.\n\nI confirm that the patch fixes the bug. All the tests pass. Looks like\nRfC to me.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Mar 2024 15:34:20 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore losing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On Mon, 4 Mar 2024 at 12:34, Aleksander Alekseev\n<[email protected]> wrote:\n>\n> > This was an experimental patch, I was looking for the comment on the proposed\n> > approach -- whether we could simply skip the throwaway NOT NULL constraint for\n> > deferred PK constraint. Moreover, skip that for any PK constraint.\n>\n> I confirm that the patch fixes the bug. All the tests pass. Looks like\n> RfC to me.\n>\n\nI don't think that this is the right fix. ISTM that the real issue is\nthat dropping a NOT NULL constraint should not mark the column as\nnullable if it is part of a PK, whether or not that PK is deferrable\n-- a deferrable PK still marks a column as not nullable.\n\nThe reason pg_dump creates these throwaway NOT NULL constraints is to\navoid a table scan to check for NULLs when the PK is later created.\nThat rationale still applies to deferrable PKs, so we still want the\nthrowaway NOT NULL constraints in that case, otherwise we'd be hurting\nperformance of restore.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 4 Mar 2024 13:50:21 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore losing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On 2024-Mar-04, Dean Rasheed wrote:\n\n> I don't think that this is the right fix. ISTM that the real issue is\n> that dropping a NOT NULL constraint should not mark the column as\n> nullable if it is part of a PK, whether or not that PK is deferrable\n> -- a deferrable PK still marks a column as not nullable.\n\nYeah. As I said upthread, a good fix seems to require no longer relying\non RelationGetIndexAttrBitmap to obtain the columns in the primary key,\nbecause that function does not include deferred primary keys. I came up\nwith the attached POC, which seems to fix the reported problem, but of\ncourse it needs more polish, a working test case, and verifying whether\nthe new function should be used in more places -- in particular, whether\nit can be used to revert the changes to RelationGetIndexList that\nb0e96f311985 did.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La persona que no quería pecar / estaba obligada a sentarse\n en duras y empinadas sillas / desprovistas, por cierto\n de blandos atenuantes\" (Patricio Vogel)",
"msg_date": "Tue, 5 Mar 2024 13:36:11 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore losing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On Tue, 5 Mar 2024 at 12:36, Alvaro Herrera <[email protected]> wrote:\n>\n> Yeah. As I said upthread, a good fix seems to require no longer relying\n> on RelationGetIndexAttrBitmap to obtain the columns in the primary key,\n> because that function does not include deferred primary keys. I came up\n> with the attached POC, which seems to fix the reported problem, but of\n> course it needs more polish, a working test case, and verifying whether\n> the new function should be used in more places -- in particular, whether\n> it can be used to revert the changes to RelationGetIndexList that\n> b0e96f311985 did.\n>\n\nLooking at the other places that call RelationGetIndexAttrBitmap()\nwith INDEX_ATTR_BITMAP_PRIMARY_KEY, they all appear to want to include\ndeferrable PKs, since they are relying on the result to see which\ncolumns are not nullable.\n\nSo there are other bugs here. For example:\n\nCREATE TABLE foo (id int PRIMARY KEY DEFERRABLE, val text);\nCREATE TABLE bar (LIKE foo);\n\nnow fails to mark bar.id as not nullable, whereas prior to\nb0e96f311985 it would have been.\n\nSo I think RelationGetIndexAttrBitmap() should include deferrable PKs,\nbut not all the changes made to RelationGetIndexList() by b0e96f311985\nneed reverting.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 5 Mar 2024 14:58:10 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore losing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On 2024-Mar-05, Dean Rasheed wrote:\n\n> Looking at the other places that call RelationGetIndexAttrBitmap()\n> with INDEX_ATTR_BITMAP_PRIMARY_KEY, they all appear to want to include\n> deferrable PKs, since they are relying on the result to see which\n> columns are not nullable.\n\nHmm, I find this pretty surprising, but you are right. Somehow I had\nthe idea that INDEX_ATTR_BITMAP_PRIMARY_KEY was used for planning\nactivities so I didn't want to modify its behavior ... but clearly\nthat's not at all the case. It's only used for DDL, and one check in\nlogical replication.\n\n> So there are other bugs here. For example:\n> \n> CREATE TABLE foo (id int PRIMARY KEY DEFERRABLE, val text);\n> CREATE TABLE bar (LIKE foo);\n> \n> now fails to mark bar.id as not nullable, whereas prior to\n> b0e96f311985 it would have been.\n\nFun. (Thankfully, easy to fix. But I'll add this as a test too.)\n\n> So I think RelationGetIndexAttrBitmap() should include deferrable PKs,\n\nYeah, I'll go make it so. I think I'll add a test for the case that\nchanges behavior in logical replication first (which is that the target\nrelation of logical replication is currently not marked as updatable,\nwhen its PK is deferrable).\n\n> but not all the changes made to RelationGetIndexList() by b0e96f311985\n> need reverting.\n\nI'll give this a look too.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Once again, thank you and all of the developers for your hard work on\nPostgreSQL. This is by far the most pleasant management experience of\nany database I've worked on.\" (Dan Harris)\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00247.php\n\n\n",
"msg_date": "Tue, 5 Mar 2024 18:18:45 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On 2024-Mar-05, Dean Rasheed wrote:\n\n> So I think RelationGetIndexAttrBitmap() should include deferrable PKs,\n\nI tried this, but it doesn't actually lead to a good place, because if\nwe allow deferrable PKs to identify rows, then they are not useful to\nfind the tuple to update when replicating. Consider the following case:\n\n$node_publisher->safe_psql('postgres',\n\t'create table deferred_pk (id int primary key initially deferred, hidden int, value text)');\n$node_subscriber->safe_psql('postgres',\n\t'create table deferred_pk (id int primary key initially deferred, hidden int, value text)');\n$node_subscriber->safe_psql('postgres',\n\t'alter subscription tap_sub refresh publication');\n\n$node_publisher->safe_psql('postgres',\n\t\"insert into deferred_pk (id, hidden, value) values (1, 1, 'first')\");\n$node_publisher->wait_for_catchup('tap_sub');\n$node_publisher->safe_psql('postgres',\n\tqq{\n\tbegin;\n\tinsert into deferred_pk values (1, 2, 'conflicting');\n\tupdate deferred_pk set value = value || ', updated' where id = 1 and hidden = 2;\n\tupdate deferred_pk set id = 3, value = value || ', updated' where hidden = 2;\n\tcommit});\n$node_publisher->wait_for_catchup('tap_sub');\nmy $pubdata = $node_publisher->safe_psql('postgres',\n\t'select * from deferred_pk order by id');\nmy $subsdata = $node_subscriber->safe_psql('postgres',\n\t'select * from deferred_pk order by id');\nis($subsdata, $pubdata, \"data is equal\");\n\nHere, the publisher's transaction first creates a new record with the\nsame PK, which only works because the PK is deferred; then we update its\npayload column. When this is replicated, the row is identified by the\nPK ... 
but replication actually updates the other row, because it's\nfound first:\n\n# Failed test 'data is equal'\n# at t/003_constraints.pl line 163.\n# got: '1|2|conflicting\n# 3|2|conflicting, updated, updated'\n# expected: '1|1|first\n# 3|2|conflicting, updated, updated'\n\nActually, is that what happened here? I'm not sure, but clearly this is\nbogus.\n\nSo I think the original developers of REPLICA IDENTITY had the right\nidea here (commit 07cacba983ef), and we mustn't change this aspect,\nbecause it'll lead to data corruption in replication. Using a deferred\nPK for DDL considerations seems OK, but it seems certain that for actual\ndata replication it's going to be a disaster.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 7 Mar 2024 14:00:15 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On Thu, 7 Mar 2024 at 13:00, Alvaro Herrera <[email protected]> wrote:\n>\n> So I think the original developers of REPLICA IDENTITY had the right\n> idea here (commit 07cacba983ef), and we mustn't change this aspect,\n> because it'll lead to data corruption in replication. Using a deferred\n> PK for DDL considerations seems OK, but it seems certain that for actual\n> data replication it's going to be a disaster.\n>\n\nYes, that makes sense. If I understand correctly though, the\nreplication code uses relation->rd_replidindex (not\nrelation->rd_pkindex), although sometimes it's the same thing. So can\nwe get away with making sure that RelationGetIndexList() doesn't set\nrelation->rd_replidindex to a deferrable PK, while still allowing\nrelation->rd_pkindex to be one?\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 7 Mar 2024 13:42:07 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On 2024-Mar-07, Dean Rasheed wrote:\n\n> On Thu, 7 Mar 2024 at 13:00, Alvaro Herrera <[email protected]> wrote:\n> >\n> > So I think the original developers of REPLICA IDENTITY had the right\n> > idea here (commit 07cacba983ef), and we mustn't change this aspect,\n> > because it'll lead to data corruption in replication. Using a deferred\n> > PK for DDL considerations seems OK, but it seems certain that for actual\n> > data replication it's going to be a disaster.\n> \n> Yes, that makes sense. If I understand correctly though, the\n> replication code uses relation->rd_replidindex (not\n> relation->rd_pkindex), although sometimes it's the same thing. So can\n> we get away with making sure that RelationGetIndexList() doesn't set\n> relation->rd_replidindex to a deferrable PK, while still allowing\n> relation->rd_pkindex to be one?\n\nWell, not really, because the logical replication code for some reason\nuses GetRelationIdentityOrPK(), which falls back to rd->pk_index (via\nRelationGetPrimaryKeyIndex) if rd_replindex is not set.\n\nMaybe we can add a flag RelationData->rd_ispkdeferred, so that\nRelationGetPrimaryKeyIndex returned InvalidOid for deferrable PKs; then\nlogical replication would continue to not know about this PK, which\nperhaps is what we want. I'll do some testing with this.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 7 Mar 2024 16:10:38 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On 2024-Mar-07, Alvaro Herrera wrote:\n\n> Maybe we can add a flag RelationData->rd_ispkdeferred, so that\n> RelationGetPrimaryKeyIndex returned InvalidOid for deferrable PKs; then\n> logical replication would continue to not know about this PK, which\n> perhaps is what we want. I'll do some testing with this.\n\nThis seems to work okay.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n“Cuando no hay humildad las personas se degradan” (A. Christie)",
"msg_date": "Thu, 7 Mar 2024 18:32:35 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 11:02 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2024-Mar-07, Alvaro Herrera wrote:\n>\n> > Maybe we can add a flag RelationData->rd_ispkdeferred, so that\n> > RelationGetPrimaryKeyIndex returned InvalidOid for deferrable PKs; then\n> > logical replication would continue to not know about this PK, which\n> > perhaps is what we want. I'll do some testing with this.\n>\n> This seems to work okay.\n>\n\nThank you for working on this, the patch works nicely.\n\nRegards,\nAmul\n\nOn Thu, Mar 7, 2024 at 11:02 PM Alvaro Herrera <[email protected]> wrote:On 2024-Mar-07, Alvaro Herrera wrote:\n\n> Maybe we can add a flag RelationData->rd_ispkdeferred, so that\n> RelationGetPrimaryKeyIndex returned InvalidOid for deferrable PKs; then\n> logical replication would continue to not know about this PK, which\n> perhaps is what we want. I'll do some testing with this.\n\nThis seems to work okay.Thank you for working on this, the patch works nicely.Regards,Amul",
"msg_date": "Fri, 8 Mar 2024 11:28:00 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On Thu, 7 Mar 2024 at 17:32, Alvaro Herrera <[email protected]> wrote:\n>\n> This seems to work okay.\n>\n\nYes, this looks good. I tested it against CREATE TABLE ... LIKE, and\nit worked as expected. It might be worth adding a test case for that,\nto ensure that it doesn't get broken in the future. Do we also want a\ntest case that does what pg_dump would do:\n\n - Add a NOT NULL constraint\n - Add a deferrable PK constraint\n - Drop the NOT NULL constraint\n - Check the column is still not nullable\n\nLooking at rel.h, I think that the new field should probably come\nafter rd_pkindex, under the comment \"data managed by\nRelationGetIndexList\", and have its own comment.\n\nAlso, if I'm nitpicking, the new field and local variables should use\nthe term \"deferrable\" rather than \"deferred\". A DEFERRABLE constraint\ncan be set to be either DEFERRED or IMMEDIATE within a transaction,\nbut \"deferrable\" is the right term to use to describe the persistent\nproperty of an index/constraint that can be deferred. (The same\nobjection applies to the field name \"indimmediate\", but it's too late\nto change that.)\n\nAlso, for neatness/consistency, the new field should probably be reset\nin load_relcache_init_file(), alongside rd_pkindex, though I don't\nthink it can matter in practice.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 8 Mar 2024 11:18:41 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 12:32 PM Alvaro Herrera <[email protected]> wrote:\n> On 2024-Mar-07, Alvaro Herrera wrote:\n> > Maybe we can add a flag RelationData->rd_ispkdeferred, so that\n> > RelationGetPrimaryKeyIndex returned InvalidOid for deferrable PKs; then\n> > logical replication would continue to not know about this PK, which\n> > perhaps is what we want. I'll do some testing with this.\n>\n> This seems to work okay.\n\nThere is a CommitFest entry for this patch. Should that entry be\nclosed in view of the not-NULL revert\n(6f8bb7c1e9610dd7af20cdaf74c4ff6e6d678d44)?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2024 10:08:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On 2024-May-14, Robert Haas wrote:\n\n> On Thu, Mar 7, 2024 at 12:32 PM Alvaro Herrera <[email protected]> wrote:\n> > On 2024-Mar-07, Alvaro Herrera wrote:\n> > > Maybe we can add a flag RelationData->rd_ispkdeferred, so that\n> > > RelationGetPrimaryKeyIndex returned InvalidOid for deferrable PKs; then\n> > > logical replication would continue to not know about this PK, which\n> > > perhaps is what we want. I'll do some testing with this.\n> >\n> > This seems to work okay.\n> \n> There is a CommitFest entry for this patch. Should that entry be\n> closed in view of the not-NULL revert\n> (6f8bb7c1e9610dd7af20cdaf74c4ff6e6d678d44)?\n\nUhmm, I didn't realize there was a CF entry. I don't know why it was\nthere; this should have been an open item, not a bugfix CF entry.\n\nThis had already been committed as 270af6f0df76 (the day before it was\nsent to the next commitfest). This commit wasn't included in the\nreverted set, though, so you still get deferrable PKs from\nRelationGetIndexList. I don't think this is necessarily a bad thing,\nthough these don't have any usefulness as things stand (and if we deal\nwith PKs by forcing not-null constraints to be underneath, then we won't\nneed them either).\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Postgres is bloatware by design: it was built to house\n PhD theses.\" (Joey Hellerstein, SIGMOD annual conference 2002)\n\n\n",
"msg_date": "Tue, 14 May 2024 16:42:04 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On Tue, May 14, 2024 at 10:42 AM Alvaro Herrera <[email protected]> wrote:\n> This had already been committed as 270af6f0df76 (the day before it was\n> sent to the next commitfest). This commit wasn't included in the\n> reverted set, though, so you still get deferrable PKs from\n> RelationGetIndexList. I don't think this is necessarily a bad thing,\n> though these don't have any usefulness as things stand (and if we deal\n> with PKs by forcing not-null constraints to be underneath, then we won't\n> need them either).\n\nSo, are you saying this should be marked Committed in the commitfest?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2024 10:55:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On 2024-May-14, Robert Haas wrote:\n\n> On Tue, May 14, 2024 at 10:42 AM Alvaro Herrera <[email protected]> wrote:\n> > This had already been committed as 270af6f0df76 (the day before it was\n> > sent to the next commitfest). This commit wasn't included in the\n> > reverted set, though, so you still get deferrable PKs from\n> > RelationGetIndexList. I don't think this is necessarily a bad thing,\n> > though these don't have any usefulness as things stand (and if we deal\n> > with PKs by forcing not-null constraints to be underneath, then we won't\n> > need them either).\n> \n> So, are you saying this should be marked Committed in the commitfest?\n\nYeah. I've done so.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Pero la cosa no es muy grave ...\" (le petit Nicolas -- René Goscinny)\n\n\n",
"msg_date": "Tue, 14 May 2024 17:11:53 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
},
{
"msg_contents": "On Tue, May 14, 2024 at 11:11 AM Alvaro Herrera <[email protected]> wrote:\n> > So, are you saying this should be marked Committed in the commitfest?\n>\n> Yeah. I've done so.\n\nGreat, thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2024 11:17:43 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\n column(s)."
}
] |
[
{
"msg_contents": "I got a complaint that pg_upgrade --check fails to raise red flags when\nthe source database contains type abstime when upgrading from pg11. The\ntype (along with reltime and tinterval) was removed by pg12.\n\n\nIn passing, while testing this, I noticed that the translation\ninfrastructure in pg_upgrade/util.c is broken: we do have the messages\nin the translation catalog, but the translations for the messages from\nprep_status are never displayed. So I made the quick hack of adding _()\naround the fmt, and this was the result:\n\nVerificando Consistencia en Vivo en el Servidor Antiguo\n-------------------------------------------------------\nVerificando las versiones de los clústers éxito\nVerificando que el usuario de base de datos es el usuario de instalaciónéxito\nVerificando los parámetros de conexión de bases de datos éxito\nVerificando transacciones preparadas éxito\nVerificando tipos compuestos definidos por el sistema en tablas de usuarioéxito\nVerificando tipos de datos reg* en datos de usuario éxito\nVerificando contrib/isn con discordancia en mecanismo de paso de bigintéxito\nChecking for incompatible \"aclitem\" data type in user tables éxito\nChecking for removed \"abstime\" data type in user tables éxito\nChecking for removed \"reltime\" data type in user tables éxito\nChecking for removed \"tinterval\" data type in user tables éxito\nVerificando conversiones de codificación definidas por el usuarioéxito\nVerificando operadores postfix definidos por el usuario éxito\nVerificando funciones polimórficas incompatibles éxito\nVerificando tablas WITH OIDS éxito\nVerificando columnas de usuario del tipo «sql_identifier» éxito\nVerificando la presencia de las bibliotecas requeridas éxito\nVerificando que el usuario de base de datos es el usuario de instalaciónéxito\nVerificando transacciones preparadas éxito\nVerificando los directorios de tablespaces para el nuevo clústeréxito\n\nNote how nicely they line up ... not. 
There is some code that claims to\ndo this correctly, but apparently it counts bytes, not characters, and\nalso it appears to be measuring the original rather than the\ntranslation.\n\nI think we're trimming the strings in the wrong places. We need to\napply _() to the originals, not the trimmed ones. Anyway, clearly\nnobody has looked at this very much.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"We’ve narrowed the problem down to the customer’s pants being in a situation\n of vigorous combustion\" (Robert Haas, Postgres expert extraordinaire)",
"msg_date": "Wed, 20 Sep 2023 18:54:24 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_upgrade --check fails to warn about abstime"
},
{
"msg_contents": "> +/*\n> + * check_for_removed_data_type_usage\n> + *\n> + * similar to the above, but for types that were removed in 12.\n> + */\n> +static void\n> +check_for_removed_data_type_usage(ClusterInfo *cluster, const char *datatype)\n\nSeems like you could make this more generic instead of hardcoding \nversion 12, and then you could use it for any future removed types as \nwell.\n\nJust a thought.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 20 Sep 2023 12:45:36 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --check fails to warn about abstime"
},
{
"msg_contents": "On 2023-Sep-20, Tristan Partin wrote:\n\n> > +/*\n> > + * check_for_removed_data_type_usage\n> > + *\n> > + * similar to the above, but for types that were removed in 12.\n> > + */\n> > +static void\n> > +check_for_removed_data_type_usage(ClusterInfo *cluster, const char *datatype)\n> \n> Seems like you could make this more generic instead of hardcoding version\n> 12, and then you could use it for any future removed types as well.\n\nYeah, I thought about that, and then closed that with \"we can whack it\naround when we need it\". At this point I imagine there's very few other\ndatatypes we can remove from the core server, if any.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n\n\n",
"msg_date": "Wed, 20 Sep 2023 19:58:07 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade --check fails to warn about abstime"
},
{
"msg_contents": "On Wed Sep 20, 2023 at 12:58 PM CDT, Alvaro Herrera wrote:\n> On 2023-Sep-20, Tristan Partin wrote:\n>\n> > > +/*\n> > > + * check_for_removed_data_type_usage\n> > > + *\n> > > + * similar to the above, but for types that were removed in 12.\n> > > + */\n> > > +static void\n> > > +check_for_removed_data_type_usage(ClusterInfo *cluster, const char *datatype)\n> > \n> > Seems like you could make this more generic instead of hardcoding version\n> > 12, and then you could use it for any future removed types as well.\n>\n> Yeah, I thought about that, and then closed that with \"we can whack it\n> around when we need it\". At this point I imagine there's very few other\n> datatypes we can remove from the core server, if any.\n\nMakes complete sense to me. Patch looks good to me with one comment.\n\n> + pg_fatal(\"Your installation contains the \\\"%s\\\" data type in user tables.\\n\"\n> + \"Data type \\\"%s\\\" has been removed in PostgreSQL version 12,\\n\"\n> + \"so this cluster cannot currently be upgraded. You can drop the\\n\"\n> + \"problem columns, or change them to another data type, and restart\\n\"\n> + \"the upgrade. A list of the problem columns is in the file:\\n\"\n> + \" %s\", datatype, datatype, output_path);\n\nI would wrap the second \\\"%s\\\" in commas.\n\n> Data type, \"abstime\", has been...\n\nMaybe also add a \"The\" to start that sentence to make it less terse. Up \nto you.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 20 Sep 2023 13:13:03 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --check fails to warn about abstime"
},
{
"msg_contents": "Thanks, Alvaro, for working on this.\n\nThe patch looks good to me.\n\n+ * similar to the above, but for types that were removed in 12.\nComment can start with a capital letter.\n\nAlso, We need to backport the same, right?\n\nOn Wed, Sep 20, 2023 at 10:24 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> I got a complaint that pg_upgrade --check fails to raise red flags when\n> the source database contains type abstime when upgrading from pg11. The\n> type (along with reltime and tinterval) was removed by pg12.\n>\n>\n> In passing, while testing this, I noticed that the translation\n> infrastructure in pg_upgrade/util.c is broken: we do have the messages\n> in the translation catalog, but the translations for the messages from\n> prep_status are never displayed. So I made the quick hack of adding _()\n> around the fmt, and this was the result:\n>\n> Verificando Consistencia en Vivo en el Servidor Antiguo\n> -------------------------------------------------------\n> Verificando las versiones de los clústers éxito\n> Verificando que el usuario de base de datos es el usuario de\n> instalaciónéxito\n> Verificando los parámetros de conexión de bases de datos éxito\n> Verificando transacciones preparadas éxito\n> Verificando tipos compuestos definidos por el sistema en tablas de\n> usuarioéxito\n> Verificando tipos de datos reg* en datos de usuario éxito\n> Verificando contrib/isn con discordancia en mecanismo de paso de\n> bigintéxito\n> Checking for incompatible \"aclitem\" data type in user tables éxito\n> Checking for removed \"abstime\" data type in user tables éxito\n> Checking for removed \"reltime\" data type in user tables éxito\n> Checking for removed \"tinterval\" data type in user tables éxito\n> Verificando conversiones de codificación definidas por el usuarioéxito\n> Verificando operadores postfix definidos por el usuario éxito\n> Verificando funciones polimórficas incompatibles éxito\n> Verificando tablas WITH OIDS éxito\n> Verificando 
columnas de usuario del tipo «sql_identifier» éxito\n> Verificando la presencia de las bibliotecas requeridas éxito\n> Verificando que el usuario de base de datos es el usuario de\n> instalaciónéxito\n> Verificando transacciones preparadas éxito\n> Verificando los directorios de tablespaces para el nuevo clústeréxito\n>\n> Note how nicely they line up ... not. There is some code that claims to\n> do this correctly, but apparently it counts bytes, not characters, and\n> also it appears to be measuring the original rather than the\n> translation.\n>\n> I think we're trimming the strings in the wrong places. We need to\n> apply _() to the originals, not the trimmed ones. Anyway, clearly\n> nobody has looked at this very much.\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n> \"We’ve narrowed the problem down to the customer’s pants being in a\n> situation\n> of vigorous combustion\" (Robert Haas, Postgres expert extraordinaire)\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com",
"msg_date": "Thu, 21 Sep 2023 12:05:43 +0530",
"msg_from": "Suraj Kharage <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --check fails to warn about abstime"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 06:54:24PM +0200, Álvaro Herrera wrote:\n> I got a complaint that pg_upgrade --check fails to raise red flags when\n> the source database contains type abstime when upgrading from pg11. The\n> type (along with reltime and tinterval) was removed by pg12.\n\nWow, I never added code to pg_upgrade to check for that, and no one\ncomplained either.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 21 Sep 2023 21:46:04 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --check fails to warn about abstime"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Wed, Sep 20, 2023 at 06:54:24PM +0200, Álvaro Herrera wrote:\n>> I got a complaint that pg_upgrade --check fails to raise red flags when\n>> the source database contains type abstime when upgrading from pg11. The\n>> type (along with reltime and tinterval) was removed by pg12.\n\n> Wow, I never added code to pg_upgrade to check for that, and no one\n> complained either.\n\nYeah, so most people had indeed listened to warnings and moved away\nfrom those datatypes. I'm inclined to think that adding code for this\nat this point is a bit of a waste of time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Sep 2023 23:18:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --check fails to warn about abstime"
},
{
"msg_contents": "On 2023-Sep-21, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n\n> > Wow, I never added code to pg_upgrade to check for that, and no one\n> > complained either.\n> \n> Yeah, so most people had indeed listened to warnings and moved away\n> from those datatypes. I'm inclined to think that adding code for this\n> at this point is a bit of a waste of time.\n\nThe migrations from versions prior to 12 have not stopped yet, and I did\nreceive a complaint about it. Because the change is so simple, I'm\ninclined to patch it anyway, late though it is.\n\nI decided to follow Tristan's advice to add the version number as a\nparameter to the new function; this way, the knowledge of where was what\ndropped is all in the callsite and none in the function. It\nlooked a bit schizoid otherwise.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Postgres is bloatware by design: it was built to house\n PhD theses.\" (Joey Hellerstein, SIGMOD annual conference 2002)",
"msg_date": "Fri, 22 Sep 2023 13:14:23 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade --check fails to warn about abstime"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 4:44 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2023-Sep-21, Tom Lane wrote:\n>\n> > Bruce Momjian <[email protected]> writes:\n>\n> > > Wow, I never added code to pg_upgrade to check for that, and no one\n> > > complained either.\n> >\n> > Yeah, so most people had indeed listened to warnings and moved away\n> > from those datatypes. I'm inclined to think that adding code for this\n> > at this point is a bit of a waste of time.\n>\n> The migrations from versions prior to 12 have not stopped yet, and I did\n> receive a complaint about it. Because the change is so simple, I'm\n> inclined to patch it anyway, late though it is.\n>\n> I decided to follow Tristan's advice to add the version number as a\n> parameter to the new function; this way, the knowledge of where was what\n> dropped is all in the callsite and none in the function. It\n> looked a bit schizoid otherwise.\n>\n\nyeah, looks good to me.\n\n\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n> \"Postgres is bloatware by design: it was built to house\n> PhD theses.\" (Joey Hellerstein, SIGMOD annual conference 2002)\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com\n\nOn Fri, Sep 22, 2023 at 4:44 PM Alvaro Herrera <[email protected]> wrote:On 2023-Sep-21, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n\n> > Wow, I never added code to pg_upgrade to check for that, and no one\n> > complained either.\n> \n> Yeah, so most people had indeed listened to warnings and moved away\n> from those datatypes. I'm inclined to think that adding code for this\n> at this point is a bit of a waste of time.\n\nThe migrations from versions prior to 12 have not stopped yet, and I did\nreceive a complaint about it. 
Because the change is so simple, I'm\ninclined to patch it anyway, late though it is.\n\nI decided to follow Tristan's advice to add the version number as a\nparameter to the new function; this way, the knowledge of where was what\ndropped is all in the callsite and none in the function. It\nlooked a bit schizoid otherwise.yeah, looks good to me. \n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Postgres is bloatware by design: it was built to house\n PhD theses.\" (Joey Hellerstein, SIGMOD annual conference 2002)\n-- --Thanks & Regards, Suraj kharage, edbpostgres.com",
"msg_date": "Wed, 27 Sep 2023 11:10:53 +0530",
"msg_from": "Suraj Kharage <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --check fails to warn about abstime"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently we claim to support all versions of LLVM from 3.9 up. It's\nnow getting quite inconvenient to test changes on older releases with\nsingle digit major versions, because they aren't available through\nusual package channels on current distributions, and frankly it feels\nlike pointless busy-work to build those older versions from source\n(not to mention that it takes hoooouuurrs to compile that much C++).\nAt the other end of the window, we've also been back-patching support\nfor the latest LLVM versions into all supported releases, which might\nmake slightly more sense, but I don't know.\n\nFor the trailing end of the window, would it make sense to say that\nwhen PostgreSQL 17 ships, it doesn't need to support any LLVM versions\nthat are no longer available in the default package repositories of\ncurrent major distros?\n\nI'm trying to understand the practical constraints. Perhaps a package\nmaintainer could correct me if I have this wrong. Distros typically\nsupport a range of releases from the past few years, and then bless\none as 'default' by making it the one you get if you install a meta\npackage eg 'llvm' without a number (for example, on Debian 12 this is\nLLVM 14, though LLVM 13 is still available). Having a default\nencourages sharing, eg one LLVM library can be used by many different\nthings. The maintainer of the PostgreSQL package then chooses which\none to link against, and it's usually the default one unless we can't\nuse that one yet for technical reasons (a situation that might arise\nfrom time to time in bleeding edge distros). 
So if we just knew the\n*oldest default* on every live distro at release time, I assume no\npackage maintainer would get upset if we ripped out support for\neverything older, and that'd let us vacuum a lot of old versions out\nof our tree.\n\nA more conservative horizon would be: which is the *oldest* LLVM you\ncan still get through the usual channels on every relevant distro, for\nthe benefit of people compiling from source, who for some reason want\nto use a version older then the default on their distro? I don't know\nwhat the motivation would be.\n\nWhat reason could there be to be more conservative than that?\n\nI wonder if there is a good way to make this sort of thing more\nsystematic. If we could agree on a guiding principle vaguely like the\nabove, then perhaps we just need a wiki page that lists relevant\ndistributions, versions and EOL dates, that could be used to reduce\nthe combinations of stuff we have to consider and make the pruning\ndecisions into no-brainers.\n\n\n",
"msg_date": "Thu, 21 Sep 2023 10:54:09 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "Hi,\n\nOn Thu, 2023-09-21 at 10:54 +1200, Thomas Munro wrote:\n> I'm trying to understand the practical constraints. Perhaps a package\n> maintainer could correct me if I have this wrong. Distros typically\n> support a range of releases from the past few years, and then bless\n> one as 'default' by making it the one you get if you install a meta\n> package eg 'llvm' without a number (for example, on Debian 12 this is\n> LLVM 14, though LLVM 13 is still available). Having a default\n> encourages sharing, eg one LLVM library can be used by many different\n> things. The maintainer of the PostgreSQL package then chooses which\n> one to link against, and it's usually the default one unless we can't\n> use that one yet for technical reasons (a situation that might arise\n> from time to time in bleeding edge distros). So if we just knew the\n> *oldest default* on every live distro at release time, I assume no\n> package maintainer would get upset if we ripped out support for\n> everything older, and that'd let us vacuum a lot of old versions out\n> of our tree.\n\nRPM packager speaking:\n\nEven though older LLVM versions exist on both RHEL and Fedora, they\ndon't provide older Clang packages, which means we have to link to the\nlatest release anyway (like currently Fedora 38 packages are waiting for\nLLVM 16 patch, as they cannot be linked against LLVM 15)\n\nRegards,\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n",
"msg_date": "Thu, 21 Sep 2023 01:27:38 +0100",
"msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> I wonder if there is a good way to make this sort of thing more\n> systematic. If we could agree on a guiding principle vaguely like the\n> above, then perhaps we just need a wiki page that lists relevant\n> distributions, versions and EOL dates, that could be used to reduce\n> the combinations of stuff we have to consider and make the pruning\n> decisions into no-brainers.\n\nFWIW, I think \"compile older Postgres on newer infrastructure\"\nis a more common and more defensible scenario than \"compile\nnewer Postgres on older infrastructure\". We've spent a ton of\neffort on the latter scenario (and I've helped lead the charge\nin many cases), but I think the real-world demand for it isn't\ntruly that high once you get beyond a year or two back. On the\nother hand, if you have an app that depends on PG 11 behavioral\ndetails and you don't want to update it right now, you might\nnonetheless need to put that server onto recent infrastructure\nfor practical reasons.\n\nThus, I think it's worthwhile to spend effort on back-patching\nnew-LLVM compatibility fixes into old PG branches, but I agree\nthat newer PG branches can drop compatibility with obsolete\nLLVM versions.\n\nLLVM is maybe not the poster child for these concerns -- for\neither direction of compatibility problems, someone could build\nwithout JIT support and not really be dead in the water.\n\nIn any case, I agree with your prior decision to not touch v11\nfor this. With that branch's next release being its last,\nI think the odds of introducing a bug we can't fix later\noutweigh any arguable portability gain.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Sep 2023 01:28:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "> On 21 Sep 2023, at 07:28, Tom Lane <[email protected]> wrote:\n> \n> Thomas Munro <[email protected]> writes:\n>> I wonder if there is a good way to make this sort of thing more\n>> systematic. If we could agree on a guiding principle vaguely like the\n>> above, then perhaps we just need a wiki page that lists relevant\n>> distributions, versions and EOL dates, that could be used to reduce\n>> the combinations of stuff we have to consider and make the pruning\n>> decisions into no-brainers.\n\nAs someone who on occasion poke at OpenSSL compat code I would very much like a\nmore structured approach around dealing with dependencies.\n\n> Thus, I think it's worthwhile to spend effort on back-patching\n> new-LLVM compatibility fixes into old PG branches, but I agree\n> that newer PG branches can drop compatibility with obsolete\n> LLVM versions.\n\n+1\n\n> LLVM is maybe not the poster child for these concerns -- for\n> either direction of compatibility problems, someone could build\n> without JIT support and not really be dead in the water.\n\nRight, OpenSSL on the other hand might be better example since removing TLS\nsupport is likely a no-show. I can see both the need to use an old OpenSSL\nversion in a backbranch due to certifications etc, as well as a requirement in\nother cases to use the latest version due to CVE's.\n\n> In any case, I agree with your prior decision to not touch v11\n> for this. With that branch's next release being its last,\n> I think the odds of introducing a bug we can't fix later\n> outweigh any arguable portability gain.\n\nAgreed.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 21 Sep 2023 11:39:00 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 12:27 PM Devrim Gündüz <[email protected]> wrote:\n> Even though older LLVM versions exist on both RHEL and Fedora, they\n> don't provide older Clang packages, which means we have to link to the\n> latest release anyway (like currently Fedora 38 packages are waiting for\n> LLVM 16 patch, as they cannot be linked against LLVM 15)\n\nThat's quite interesting, because it means that RHEL doesn't act as\nthe \"lanterne route\" for this, ie the most conservative relevant\ndistribution.\n\nIf we used Debian as a yardstick, PostgreSQL 17 wouldn't need anything\nolder than LLVM 14 AFAICS. Who else do we need to ask? Where could\nwe find this sort of information in machine-readable form (that is\nfeedback I got discussing the wiki page idea with people, ie that it\nwould be bound to become stale and abandoned)?\n\nFresh from doing battle with this stuff, I wanted to see what it would\nlook like if we dropped 3.9...13 in master:\n\n 11 files changed, 12 insertions(+), 367 deletions(-)\n\nI noticed in passing that the LLVMOrcRegisterJITEventListener\nconfigure probes are not present in meson.",
"msg_date": "Thu, 19 Oct 2023 08:13:56 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "*rouge\n\n\n",
"msg_date": "Thu, 19 Oct 2023 08:18:01 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "We could go further. With LLVM 14 as the minimum we can just use\nopaque pointers everywhere, and delete more conditional code in\nmaster. Tested on 14-18.\n\nI explored using the new pass manager everywhere too. It almost\nworked, but I couldn't see how to override the inlining threshold\nbefore LLVM 16[1], even in C++, so we couldn't fix that with a\nllvmjit_wrap.cpp hack.\n\nI like this. How can I find out if someone would shout at me for\ndropping LLVM 13?\n\n[1] https://github.com/llvm/llvm-project/commit/4fa328074efd7eefdbb314b8f6e9f855e443ca20",
"msg_date": "Fri, 20 Oct 2023 15:36:21 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "Rebased. I also noticed this woefully out of date line:\n\n- PGAC_PATH_PROGS(LLVM_CONFIG, llvm-config llvm-config-7\nllvm-config-6.0 llvm-config-5.0 llvm-config-4.0 llvm-config-3.9)\n+ PGAC_PATH_PROGS(LLVM_CONFIG, llvm-config llvm-config-17\nllvm-config-16 llvm-config-15 llvm-config-14)",
"msg_date": "Sun, 22 Oct 2023 15:04:15 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "Hi,\n\nCan we also check if the clang's version is compatible with llvm's version\nin llvm.m4? I have multiple llvm toolchains installed on my system and I\nhave to specify the $CLANG and $LLVM_CONFIG variables each time I build the\nserver against a toolchain that is not present in $PATH. If one of the\nvariables is missing, the build system will pick up a default one whose\nversion might not be compatible with the other. E.g., If we use clang-16\nand llvm-config-15, there will be issues when creating indexes for bitcodes\nat the end of installation.\n\nThere will be errors look like\n\n```\nLLVM ERROR: ThinLTO cannot create input file: Unknown attribute kind (86)\n(Producer: 'LLVM16.0.6' Reader: 'LLVM 15.0.7')\nPLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/\nand include the crash backtrace.\nStack dump:\n0. Program arguments: /usr/lib/llvm15/bin/llvm-lto -thinlto\n-thinlto-action=thinlink -o postgres.index.bc postgres/access/brin/brin.bc\npostgres/access/brin/brin_bloom.bc postgres/acces\ns/brin/brin_inclusion.bc postgres/access/brin/brin_minmax.bc\npostgres/access/brin/brin_minmax_multi.bc\npostgres/access/brin/brin_pageops.bc postgres/access/brin/brin_revmap.bc\npostgres/acce\nss/brin/brin_tuple.bc postgres/access/brin/brin_validate.bc\npostgres/access/brin/brin_xlog.bc postgres/access/common/attmap.bc\npostgres/access/common/bufmask.bc postgres/access/common/detoa\nst.bc postgres/access/common/heaptuple.bc\npostgres/access/common/indextuple.bc postgres/access/common/printsimple.bc\npostgres/access/common/printtup.bc postgres/access/common/relation.bc po\nstgres/access/common/reloptions.bc postgres/access/common/scankey.bc\npostgres/access/common/session.bc postgres/access/common/syncscan.bc\npostgres/access/common/toast_compression.bc postgre\ns/access/common/toast_internals.bc postgres/access/common/tupconvert.bc\npostgres/access/common/tupdesc.bc postgres/access/gin/ginarrayproc.bc\npostgres/access/gin/ginbtree.bc 
postgres/access\n/gin/ginbulk.bc postgres/access/gin/gindatapage.bc\npostgres/access/gin/ginentrypage.bc postgres/access/gin/ginfast.bc\npostgres/access/gin/ginget.bc postgres/access/gin/gininsert.bc postgres\n/access/gin/ginlogic.bc postgres/access/gin/ginpostinglist.bc\npostgres/access/gin/ginscan.bc postgres/access/gin/ginutil.bc\npostgres/access/gin/ginvacuum.bc postgres/access/gin/ginvalidate.\nbc postgres/access/gin/ginxlog.bc postgres/access/gist/gist.bc\npostgres/access/gist/gistbuild.bc postgres/access/gist/gistbuildbuffers.bc\npostgres/access/gist/gistget.bc postgres/access/gis\nt/gistproc.bc postgres/access/gist/gistscan.bc\npostgres/access/gist/gistsplit.bc postgres/access/gist/gistutil.bc\npostgres/access/gist/gistvacuum.bc postgres/access/gist/gistvalidate.bc pos\ntgres/access/gist/gistxlog.bc postgres/access/hash/hash.bc\npostgres/access/hash/hash_xlog.bc postgres/access/hash/hashfunc.bc\npostgres/access/hash/hashinsert.bc postgres/access/hash/hashovf\nl.bc postgres/access/hash/hashpage.bc postgres/access/hash/hashsearch.bc\npostgres/access/hash/hashsort.bc postgres/access/hash/hashutil.bc\npostgres/access/hash/hashvalidate.bc postgres/acce\nss/heap/heapam.bc postgres/access/heap/heapam_handler.bc\npostgres/access/heap/heapam_visibility.bc postgres/access/heap/heaptoast.bc\npostgres/access/heap/hio.bc postgres/access/heap/prunehe\n```\n\nIf we can check the llvm-config versions and clang versions at the\nconfiguration phase we can detect the problem earlier.\n\nBest Regards,\nXing\n\n\n\n\n\n\n\n\nOn Sun, Oct 22, 2023 at 10:07 AM Thomas Munro <[email protected]>\nwrote:\n\n> Rebased. 
I also noticed this woefully out of date line:\n>\n> - PGAC_PATH_PROGS(LLVM_CONFIG, llvm-config llvm-config-7\n> llvm-config-6.0 llvm-config-5.0 llvm-config-4.0 llvm-config-3.9)\n> + PGAC_PATH_PROGS(LLVM_CONFIG, llvm-config llvm-config-17\n> llvm-config-16 llvm-config-15 llvm-config-14)\n>",
"msg_date": "Sun, 22 Oct 2023 10:46:17 +0800",
"msg_from": "Xing Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
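Xing's proposal above boils down to comparing the major versions the two tools report before any bitcode is emitted, and failing configure early on a mismatch. A minimal shell sketch of that idea (hypothetical helper names, not actual llvm.m4 code; real configure logic would obtain the version strings from `$CLANG --version` and `$LLVM_CONFIG --version`):

```shell
# Extract the leading major version from a version string such as
# "clang version 16.0.6" or "15.0.7".
major_of() {
    printf '%s\n' "$1" | sed -n 's/[^0-9]*\([0-9][0-9]*\).*/\1/p' | head -n1
}

# Reject the combination where clang is newer than the LLVM libraries --
# the case that produced the "Unknown attribute kind" error above.
check_compat() {
    cv=$(major_of "$1")
    lv=$(major_of "$2")
    if [ "$cv" -gt "$lv" ]; then
        echo "incompatible: clang $cv is newer than LLVM $lv"
        return 1
    fi
    echo "ok: clang $cv, LLVM $lv"
}

check_compat "clang version 16.0.6" "15.0.7"   # rejected
check_compat "clang version 14.0.0" "14.0.0"   # accepted
```

Whether `<=` rather than `==` is the right acceptance rule is exactly the open question discussed downthread.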
{
"msg_contents": "On Sun, Oct 22, 2023 at 3:46 PM Xing Guo <[email protected]> wrote:\n> Can we also check if the clang's version is compatible with llvm's version in llvm.m4? I have multiple llvm toolchains installed on my system and I have to specify the $CLANG and $LLVM_CONFIG variables each time I build the server against a toolchain that is not present in $PATH. If one of the variables is missing, the build system will pick up a default one whose version might not be compatible with the other. E.g., If we use clang-16 and llvm-config-15, there will be issues when creating indexes for bitcodes at the end of installation.\n\nHmm. Problems that occur to me:\n\n1. We need to decide if our rule is that clang must be <= llvm, or\n==. I think this question has been left unanswered in the past when\nit has come up. So far I think <= would be enough to avoid the error\nyou showed but can we find where this policy (ie especially\ncommitments for future releases) is written down in LLVM literature?\n2. Apple's clang lies about its version (I don't know the story\nbehind that, but my wild guess is that someone from marketing wanted\nthe compiler's version numbers to align with xcode's version numbers?\nthey're off by 1 or something like that).\n\nAnother idea could be to produce some bitcode with clang, and then\ncheck if a relevant LLVM tool can deal with it.\n\n\n",
"msg_date": "Mon, 23 Oct 2023 08:23:30 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
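The "produce some bitcode and see if LLVM can deal with it" idea above could take roughly this shape: compile a trivial bitcode file with the chosen clang, then ask llvm-lto from the chosen LLVM install to thin-link it. This is a hypothetical sketch, not configure code; the llvm-lto invocation mirrors the failing command in Xing's report, and the function reports "skipped" when either tool is absent:

```shell
# Probe whether $1 (a clang) emits bitcode that $2 (an llvm-lto) can read.
probe_bitcode() {
    clang=$1 lto=$2
    command -v "$clang" >/dev/null 2>&1 || { echo skipped; return 0; }
    command -v "$lto" >/dev/null 2>&1 || { echo skipped; return 0; }
    tmp=$(mktemp -d) || return 1
    echo 'int probe(void) { return 0; }' > "$tmp/conftest.c"
    if "$clang" -c -O2 -flto=thin -emit-llvm \
            -o "$tmp/conftest.bc" "$tmp/conftest.c" 2>/dev/null &&
       "$lto" -thinlto -thinlto-action=thinlink \
            -o "$tmp/conftest.index.bc" "$tmp/conftest.bc" 2>/dev/null
    then echo compatible
    else echo incompatible
    fi
    rm -rf "$tmp"
}

probe_bitcode "${CLANG:-clang}" "${LLVM_LTO:-llvm-lto}"
```

A behavioral probe like this sidesteps both open questions above: it doesn't need a written-down version-compatibility policy, and it isn't fooled by Apple's clang lying about its version.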
{
"msg_contents": "Hi, \n\nOn October 21, 2023 7:46:17 PM PDT, Xing Guo <[email protected]> wrote:\n>Can we also check if the clang's version is compatible with llvm's version\n>in llvm.m4? I have multiple llvm toolchains installed on my system and I\n>have to specify the $CLANG and $LLVM_CONFIG variables each time I build the\n>server against a toolchain that is not present in $PATH. If one of the\n>variables is missing, the build system will pick up a default one whose\n>version might not be compatible with the other. E.g., If we use clang-16\n>and llvm-config-15, there will be issues when creating indexes for bitcodes\n>at the end of installation.\n\nIt's unfortunately not that obvious to figure out what is compatible and what not. Older clang versions work, except if too old. Newer versions sometimes work. We could perhaps write a script that will find many, but not all, incompatibilities. \n\nFor the meson build I made it just use clang belonging to the llvm install - but that's very painful when building against an assert enabled llvm, clang is slower by an order of magnitude or so.\n\nI wonder if we should change the search order to 1) CLANG, iff explicitly specified, 2) use explicitly specified or inferred llvm-config, 3) only if that didn't find clang, search path. \n\n>wrote:\n>\n>> Rebased. I also noticed this woefully out of date line:\n>>\n>> - PGAC_PATH_PROGS(LLVM_CONFIG, llvm-config llvm-config-7\n>> llvm-config-6.0 llvm-config-5.0 llvm-config-4.0 llvm-config-3.9)\n>> + PGAC_PATH_PROGS(LLVM_CONFIG, llvm-config llvm-config-17\n>> llvm-config-16 llvm-config-15 llvm-config-14)\n>>\n\nIt's outdated, but not completely absurd - back then often no llvm-config -> llvm-config-XY was installed, but these days there pretty much always is.\n\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sun, 22 Oct 2023 12:24:58 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
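The search order Andres floats could be sketched like this (hypothetical function, not the real configure/meson logic; `LLVM_BINDIR` here stands in for `$($LLVM_CONFIG --bindir)`):

```shell
pick_clang() {
    # 1) an explicitly specified CLANG always wins
    if [ -n "$CLANG" ]; then
        echo "$CLANG"
    # 2) otherwise prefer the clang shipped alongside the selected LLVM
    elif [ -n "$LLVM_BINDIR" ] && [ -x "$LLVM_BINDIR/clang" ]; then
        echo "$LLVM_BINDIR/clang"
    # 3) only then fall back to whatever clang is on PATH
    else
        command -v clang || true
    fi
}
```

Step 2 is what ties the compiler to the LLVM install by default, while step 1 preserves the escape hatch for the assert-enabled-LLVM case where a matching clang would be unbearably slow.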
{
"msg_contents": "Here are some systematic rules I'd like to propose to anchor this\nstuff to reality and avoid future doubt and litigation:\n\n1. Build farm animals testing LLVM determine the set of OSes and LLVM\nversions we consider.\n2. We exclude OSes that will be out of full vendor support when a\nrelease ships.\n3. We exclude OSes that don't bless an LLVM release (eg macOS running\nan arbitrarily picked version), and animals running only to cover\nancient LLVM compiled from source for coverage (Andres's sid\nmenagerie).\n\nBy these rules we can't require LLVM 14 for another year, because\nUbuntu and Amazon Linux are standing in the way*:\n\n animal        | arch    | llvm_version | os     | os_release | end_of_support\n---------------+---------+--------------+--------+------------+----------------\n branta        | s390x   | 10.0.0       | Ubuntu | 20.04      | 2025-04-01\n splitfin      | aarch64 | 10.0.0       | Ubuntu | 20.04      | 2025-04-01\n urutau        | s390x   | 10.0.0       | Ubuntu | 20.04      | 2025-04-01\n massasauga    | aarch64 | 11.1.0       | Amazon | 2          | 2025-06-30\n snakefly      | aarch64 | 11.1.0       | Amazon | 2          | 2025-06-30\n sarus         | s390x   | 14.0.0       | Ubuntu | 22.04      | 2027-06-01\n shiner        | aarch64 | 14.0.0       | Ubuntu | 22.04      | 2027-06-01\n turbot        | aarch64 | 14.0.0       | Ubuntu | 22.04      | 2027-06-01\n lora          | s390x   | 15.0.7       | RHEL   | 9          | 2027-05-31\n mamushi       | s390x   | 15.0.7       | RHEL   | 9          | 2027-05-31\n nicator       | ppc64le | 15.0.7       | Alma   | 9          | 2027-05-31\n oystercatcher | aarch64 | 15.0.7       | Alma   | 9          | 2027-05-31\n\nIdeally more distros would be present in this vacuum-horizon decision\ntable, but I don't think it'd change the conclusion: 10 is the\ntrailing edge. Therefore the attached patch scales back its ambition\nto that release. 
Tested on LLVM 10-18.\n\nIf I pushed this we'd need to disable or upgrade the following to\navoid failure in configure on master:\n\n animal      | arch               | llvm_version | os     | os_release | end_of_support\n-------------+--------------------+--------------+--------+------------+----------------\n dragonet    | x86_64             | 3.9.1        | Debian | sid        |\n phycodurus  | x86_64             | 3.9.1        | Debian | sid        |\n desmoxytes  | x86_64             | 4.0.1        | Debian | sid        |\n petalura    | x86_64             | 4.0.1        | Debian | sid        |\n mantid      | x86_64             | 5.0.1        | CentOS | 7          | 2019-08-06\n idiacanthus | x86_64             | 5.0.2        | Debian | sid        |\n pogona      | x86_64             | 5.0.2        | Debian | sid        |\n cotinga     | s390x              | 6.0.0        | Ubuntu | 18.04      | 2023-06-01\n vimba       | aarch64            | 6.0.0        | Ubuntu | 18.04      | 2023-06-01\n komodoensis | x86_64             | 6.0.1        | Debian | sid        |\n topminnow   | mips64el; -mabi=32 | 6.0.1        | Debian | 8          | 2018-06-17\n xenodermus  | x86_64             | 6.0.1        | Debian | sid        |\n alimoche    | aarch64            | 7.0.1        | Debian | 10         | 2022-09-10\n blackneck   | aarch64            | 7.0.1        | Debian | 10         | 2022-09-10\n bonito      | ppc64le            | 7.0.1        | Fedora | 29         | 2019-11-26\n\n*Some distros announce EOL date by month without saying which day, so\nin my data collecting operation I just punched in the first of the\nmonth, *shrug*",
"msg_date": "Wed, 25 Oct 2023 18:47:20 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> Here are some systematic rules I'd like to propose to anchor this\n> stuff to reality and avoid future doubt and litigation:\n\n> 1. Build farm animals testing LLVM determine the set of OSes and LLVM\n> versions we consider.\n> 2. We exclude OSes that will be out of full vendor support when a\n> release ships.\n> 3. We exclude OSes that don't bless an LLVM release (eg macOS running\n> an arbitrarily picked version), and animals running only to cover\n> ancient LLVM compiled from source for coverage (Andres's sid\n> menagerie).\n\nSeems generally reasonable. Maybe rephrase 3 as \"We consider only\nan OS release's default LLVM version\"? Or a bit more forgivingly,\n\"... only LLVM versions available from the OS vendor\"? Also,\nwhat's an OS vendor? You rejected macOS which is fine, but\nI think the packages available from MacPorts or Homebrew should\nbe considered.\n\nYou could imagine somebody trying to game the system by standing up a\nbuildfarm animal running some really arbitrary combination of versions\n--- but what would be the point? I think we can deal with that\nwhen/if it happens. But \"macOS running an LLVM version available\nfrom MacPorts\" doesn't seem arbitrary.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Oct 2023 02:11:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 7:12 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > 3. We exclude OSes that don't bless an LLVM release (eg macOS running\n> > an arbitrarily picked version), and animals running only to cover\n> > ancient LLVM compiled from source for coverage (Andres's sid\n> > menagerie).\n>\n> Seems generally reasonable. Maybe rephrase 3 as \"We consider only\n> an OS release's default LLVM version\"? Or a bit more forgivingly,\n> \"... only LLVM versions available from the OS vendor\"? Also,\n> what's an OS vendor? You rejected macOS which is fine, but\n> I think the packages available from MacPorts or Homebrew should\n> be considered.\n\nOK. For me the key differences are that they are independent of OS\nreleases and time lines, they promptly add new releases, they have a\nwide back-catalogue of the old releases and they let the user decide\nwhich to use. So I don't think they constrain us and it makes no\nsense to try to apply 'end of support' logic to them.\n\nhttps://formulae.brew.sh/formula/llvm\nhttps://ports.macports.org/search/?q=llvm&name=on\n\n(Frustratingly, the ancient releases of clang don't actually seem to\nwork on MacPorts at least on aarch64, and they tell you so when you\ntry to install them.)\n\nThe BSDs may be closer to macOS in that respect too, since they have\nports separate from OS releases and they offer a rolling and wide\nrange of LLVMs and generally default to a very new one, so I don't\nthink they constrain us either. It's really Big Linux that is\ndrawing the lines in the sand here, though (unusually) not\nRHEL-and-frenemies as they've opted for rolling to the latest in every\nminor release as Devrim explained.\n\n\n",
"msg_date": "Thu, 26 Oct 2023 09:51:47 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "Hi,\n\nOn Thu, 2023-10-19 at 08:13 +1300, Thomas Munro wrote:\n> If we used Debian as a yardstick, PostgreSQL 17 wouldn't need anything\n> older than LLVM 14 AFAICS. Who else do we need to ask? \n\nLLVM 15 is the minimum one for the platforms that I build the packages\non. So LLVM >= 14 is great for HEAD.\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n",
"msg_date": "Thu, 26 Oct 2023 15:36:07 +0100",
"msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "So it sounds like we're in agreement that it is time to require LLVM\n10+ in master. Could the owners (CC'd) of the following animals\nplease turn off --with-llvm on master (and future 17+ branches), or\nconsider upgrading to a modern OS release? Otherwise they'll turn\nred.\n\n animal      | arch               | llvm_version | os     | os_release | end_of_support\n-------------+--------------------+--------------+--------+------------+----------------\n mantid      | x86_64             | 5.0.1        | CentOS | 7          | 2019-08-06\n cotinga     | s390x              | 6.0.0        | Ubuntu | 18.04      | 2023-06-01\n vimba       | aarch64            | 6.0.0        | Ubuntu | 18.04      | 2023-06-01\n topminnow   | mips64el; -mabi=32 | 6.0.1        | Debian | 8          | 2018-06-17\n alimoche    | aarch64            | 7.0.1        | Debian | 10         | 2022-09-10\n blackneck   | aarch64            | 7.0.1        | Debian | 10         | 2022-09-10\n\nAnd of course Andres would need to do the same for his coverage\nanimals in that range:\n\n animal      | arch               | llvm_version | os     | os_release | end_of_support\n-------------+--------------------+--------------+--------+------------+----------------\n dragonet    | x86_64             | 3.9.1        | Debian | sid        |\n phycodurus  | x86_64             | 3.9.1        | Debian | sid        |\n desmoxytes  | x86_64             | 4.0.1        | Debian | sid        |\n petalura    | x86_64             | 4.0.1        | Debian | sid        |\n idiacanthus | x86_64             | 5.0.2        | Debian | sid        |\n pogona      | x86_64             | 5.0.2        | Debian | sid        |\n komodoensis | x86_64             | 6.0.1        | Debian | sid        |\n xenodermus  | x86_64             | 6.0.1        | Debian | sid        |\n\n\n",
"msg_date": "Thu, 2 Nov 2023 10:46:52 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-02 10:46:52 +1300, Thomas Munro wrote:\n> So it sounds like we're in agreement that it is time to require LLVM\n> 10+ in master. Could the owners (CC'd) of the following animals\n> please turn off --with-llvm on master (and future 17+ branches), or\n> consider upgrading to a modern OS release? Otherwise they'll turn\n> red.\n>\n> And of course Andres would need to do the same for his coverage\n> animals in that range:\n> \n> animal | arch | llvm_version | os | os_release\n> | end_of_support\n> -------------+--------------------+--------------+--------+------------+----------------\n> dragonet | x86_64 | 3.9.1 | Debian | sid |\n> phycodurus | x86_64 | 3.9.1 | Debian | sid |\n> desmoxytes | x86_64 | 4.0.1 | Debian | sid |\n> petalura | x86_64 | 4.0.1 | Debian | sid |\n> idiacanthus | x86_64 | 5.0.2 | Debian | sid |\n> pogona | x86_64 | 5.0.2 | Debian | sid |\n> komodoensis | x86_64 | 6.0.1 | Debian | sid |\n> xenodermus | x86_64 | 6.0.1 | Debian | sid |\n\nWould you want me to do this now or just before you apply the patch?\n\nI think I should stand up a few more replacement animals to cover older llvm\nversions with assertions enabled...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 4 Nov 2023 09:38:18 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "On 25.10.23 07:47, Thomas Munro wrote:\n> Ideally more distros would be present in this vacuum-horizon decision\n> table, but I don't think it'd change the conclusion: 10 is the\n> trailing edge. Therefore the attached patch scales back its ambition\n> to that release. Tested on LLVM 10-18.\n\nThis patch and the associated reasoning look good to me. I think this \nis good to go for PG17.\n\n\n",
"msg_date": "Thu, 14 Dec 2023 11:19:38 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Ready for Committer\", but it looks\nlike it failed when the CFbot test for it was last run [1]. Please\nhave a look and post an updated version.\n\n======\n[1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4640\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 10:49:59 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "Thanks all for the discussion. Pushed. A few build farm animals will\nnow fail in the configure step as discussed, and need some adjustment\n(ie disable LLVM or upgrade to LLVM 10+ for the master branch).\n\nNext year I think we should be able to do a much bigger cleanup, by\nmoving to LLVM 14+.\n\n\n",
"msg_date": "Thu, 25 Jan 2024 16:44:43 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 4:44 PM Thomas Munro <[email protected]> wrote:\n> ... A few build farm animals will\n> now fail in the configure step as discussed, and need some adjustment\n> (ie disable LLVM or upgrade to LLVM 10+ for the master branch).\n\nOwners pinged.\n\n\n",
"msg_date": "Fri, 26 Jan 2024 10:41:22 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
},
{
"msg_contents": "On 1/25/24 13:41, Thomas Munro wrote:\n> On Thu, Jan 25, 2024 at 4:44 PM Thomas Munro <[email protected]> wrote:\n>> ... A few build farm animals will\n>> now fail in the configure step as discussed, and need some adjustment\n>> (ie disable LLVM or upgrade to LLVM 10+ for the master branch).\n> \n> Owners pinged.\n\nI think I fixed up the 4 or 6 under my name...\n\nRegards,\nMark\n\n\n\n",
"msg_date": "Thu, 25 Jan 2024 14:39:01 -0800",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guiding principle for dropping LLVM versions?"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen --buffer-usage-limit option is specified, vacuumdb issues VACUUM or \nVACUUM ANALYZE command with BUFFER_USAGE_LIMIT option. Also if \n--buffer-usage-limit and -Z options are specified, vacuumdb should issue \nANALYZE command with BUFFER_USAGE_LIMIT option. But it does not. That \nis, vacuumdb -Z seems to fail to handle --buffer-usage-limit option. \nThis seems a bug.\n\nYou can see my patch in the attached file and how it works by adding -e \noption in vacuumdb.\n\nRyoga Yoshida",
"msg_date": "Thu, 21 Sep 2023 10:44:49 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug fix in vacuumdb --buffer-usage-limit xxx -Z"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 10:44:49AM +0900, Ryoga Yoshida wrote:\n> When --buffer-usage-limit option is specified, vacuumdb issues VACUUM or\n> VACUUM ANALYZE command with BUFFER_USAGE_LIMIT option. Also if\n> --buffer-usage-limit and -Z options are specified, vacuumdb should issue\n> ANALYZE command with BUFFER_USAGE_LIMIT option. But it does not. That is,\n> vacuumdb -Z seems to fail to handle --buffer-usage-limit option. This seems\n> a bug.\n> \n> You can see my patch in the attached file and how it works by adding -e\n> option in vacuumdb.\n\nGood catch, indeed the option is missing from the ANALYZE commands\nbuilt under analyze_only. I can also notice that we have no tests for\nthis option in src/bin/scripts/t checking the shape of the commands\ngenerated. Could you add something for ANALYZE and VACUUM? The\noption could just be appended in one of the existing cases, for\ninstance.\n--\nMichael",
"msg_date": "Thu, 21 Sep 2023 11:32:57 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix in vacuumdb --buffer-usage-limit xxx -Z"
},
{
"msg_contents": "On Thu, 21 Sept 2023 at 13:45, Ryoga Yoshida\n<[email protected]> wrote:\n> When --buffer-usage-limit option is specified, vacuumdb issues VACUUM or\n> VACUUM ANALYZE command with BUFFER_USAGE_LIMIT option. Also if\n> --buffer-usage-limit and -Z options are specified, vacuumdb should issue\n> ANALYZE command with BUFFER_USAGE_LIMIT option. But it does not. That\n> is, vacuumdb -Z seems to fail to handle --buffer-usage-limit option.\n> This seems a bug.\n>\n> You can see my patch in the attached file and how it works by adding -e\n> option in vacuumdb.\n\nThanks for the report and the patch. I agree this has been overlooked.\n\nI also wonder if we should be escaping the buffer-usage-limit string\nsent in the comment. It seems quite an unlikely attack vector, as the\nuser would have command line access and could likely just use psql\nanyway, but I had thought about something along the lines of:\n\n$ vacuumdb --buffer-usage-limit \"1MB'); drop database postgres;--\" postgres\nvacuumdb: vacuuming database \"postgres\"\nvacuumdb: error: processing of database \"postgres\" failed: ERROR:\nVACUUM cannot run inside a transaction block\n\nseems that won't work, due to sending multiple commands at once, but I\nthink we should fix it anyway.\n\nDavid\n\n\n",
"msg_date": "Thu, 21 Sep 2023 16:18:55 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix in vacuumdb --buffer-usage-limit xxx -Z"
},
{
"msg_contents": "On Thu, 21 Sept 2023 at 16:18, David Rowley <[email protected]> wrote:\n> Thanks for the report and the patch. I agree this has been overlooked.\n>\n> I also wonder if we should be escaping the buffer-usage-limit string\n> sent in the comment. It seems quite an unlikely attack vector, as the\n> user would have command line access and could likely just use psql\n> anyway, but I had thought about something along the lines of:\n>\n> $ vacuumdb --buffer-usage-limit \"1MB'); drop database postgres;--\" postgres\n> vacuumdb: vacuuming database \"postgres\"\n> vacuumdb: error: processing of database \"postgres\" failed: ERROR:\n> VACUUM cannot run inside a transaction block\n>\n> seems that won't work, due to sending multiple commands at once, but I\n> think we should fix it anyway.\n\nI've pushed your patch plus some additional code to escape the text\nspecified in --buffer-usage-limit before passing it to the server in\ncommit 5cfba1ad6\n\nThanks again for the report.\n\nDavid\n\n\n",
"msg_date": "Thu, 21 Sep 2023 17:50:26 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix in vacuumdb --buffer-usage-limit xxx -Z"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 05:50:26PM +1200, David Rowley wrote:\n> I've pushed your patch plus some additional code to escape the text\n> specified in --buffer-usage-limit before passing it to the server in\n> commit 5cfba1ad6\n\nThat was fast. If I may ask, why don't you have some regression tests\nfor the two code paths of vacuumdb that append this option to the\ncommands generated for VACUUM and ANALYZE?\n--\nMichael",
"msg_date": "Thu, 21 Sep 2023 14:59:25 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix in vacuumdb --buffer-usage-limit xxx -Z"
},
{
"msg_contents": "On Thu, 21 Sept 2023 at 17:59, Michael Paquier <[email protected]> wrote:\n> That was fast. If I may ask, why don't you have some regression tests\n> for the two code paths of vacuumdb that append this option to the\n> commands generated for VACUUM and ANALYZE?\n\nI think we have differing standards for what constitutes a useful\ntest. For me, there would have to be a much higher likelihood of this\never getting broken again.\n\nI deem it pretty unlikely that someone will accidentally remove the\ncode that I just committed, and a test to verify that vacuumdb -Z\n--buffer-usage-limit ... passes the BUFFER_USAGE_LIMIT option would\nlikely just forever mark that we once had a trivial bug that forgot to\ninclude the --buffer-usage-limit when -Z was specified.\n\nIf others feel strongly that a test is worthwhile, then I'll reconsider.\n\nDavid\n\n\n",
"msg_date": "Thu, 21 Sep 2023 18:56:29 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix in vacuumdb --buffer-usage-limit xxx -Z"
},
{
"msg_contents": "On 2023-09-21 14:50, David Rowley wrote:\n> I've pushed your patch plus some additional code to escape the text\n> specified in --buffer-usage-limit before passing it to the server in\n> commit 5cfba1ad6\n> \n> Thanks again for the report.\n\nThank you for the commit. I hadn't noticed the escaping issue, and it \nseemed like it would be difficult for me to fix, so I appreciate your \nhelp.\n\nRyoga Yoshida\n\n\n",
"msg_date": "Thu, 21 Sep 2023 17:06:59 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bug fix in vacuumdb --buffer-usage-limit xxx -Z"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 06:56:29PM +1200, David Rowley wrote:\n> I deem it pretty unlikely that someone will accidentally remove the\n> code that I just committed and a test to test that vacuumdb -Z\n> --buffer-usage-limit ... passes the BUFFER_USAGE_LIMIT option would\n> likely just forever mark that we once had a trivial bug that forgot to\n> include the --buffer-usage-limit when -Z was specified.\n\nPerhaps so.\n\n> If others feel strongly that a test is worthwhile, then I'll reconsider.\n\nI don't know if you would like that, but the addition is as simple as\nthe attached, FYI.\n--\nMichael",
"msg_date": "Fri, 22 Sep 2023 14:21:46 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix in vacuumdb --buffer-usage-limit xxx -Z"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nWhile developing my patch, I found that the CI for macOS failed with an unknown error [1].\nDo you know why it happened? Please tell me if you have workarounds...\n\nIt failed the test at \"Upload 'ccache' cache\". The Cirrus app showed the following message:\n\n> Persistent worker failed to start the task: remote agent failed: failed to run agent: wait: remote command exited without exit status or exit signal\n\n[1]: https://cirrus-ci.com/task/5439712639320064\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n",
"msg_date": "Thu, 21 Sep 2023 03:25:12 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "CI: Unfamiliar error while testing macOS"
},
{
"msg_contents": "> On 21 Sep 2023, at 05:25, Hayato Kuroda (Fujitsu) <[email protected]> wrote:\n> \n> Dear hackers,\n> \n> While developing my patch, I found that the CI for macOS failed with unknown error [1].\n> Do you know the reason why it happened? Please tell me if you have workarounds...\n> \n> It failed the test at \"Upload 'ccache' cache\". The Cirrus app said a following message:\n> \n>> Persistent worker failed to start the task: remote agent failed: failed to run agent: wait: remote command exited without exit status or exit signal\n\nThat looks like a transient infrastructure error and not something related to\nyour patch, it will most likely go away for the next run.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 21 Sep 2023 10:00:36 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CI: Unfamiliar error while testing macOS"
},
{
"msg_contents": "Dear Daniel,\n\nThank you for the confirmation!\n\n> \n> That looks like a transient infrastructure error and not something related to\n> your patch, it will most likely go away for the next run.\n>\n\nI checked the next run and found that the tests passed on all platforms.\nI'm still not sure of the root cause, but for now I will forget about it.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n",
"msg_date": "Fri, 22 Sep 2023 02:44:41 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: CI: Unfamiliar error while testing macOS"
}
] |
[
{
"msg_contents": "Hi,\n\nI am confused about the new subscription parameter: password_required.\n\nI have two instances. The publisher's pg_hba is configured to allow \nconnections without authentication. On the subscriber, I have an \nunprivileged user with pg_create_subscription and CREATE on the database.\n\nI tried using a superuser to create a subscription without setting the \npassword_required parameter (the default is true). Then I changed the \nowner to the unprivileged user.\n\nThis user can use the subscription without limitation (including ALTER \nSUBSCRIPTION ENABLE / DISABLE). The \\dRs+ metacommand shows that a \npassword is required, which is not the case (or it is but it's not \nenforced).\n\nIs this normal? I was expecting the ALTER SUBSCRIPTION .. OWNER to fail.\n\nWhen I try to drop the subscription with the unprivileged user or a \nsuperuser, I get an error:\n\nERROR: password is required\nDETAIL: Non-superuser cannot connect if the server does not request a \npassword.\nHINT: Target server's authentication method must be changed, or set \npassword_required=false in the subscription parameters.\n\nI have to re-change the subscription owner to the superuser, to be able \nto drop it.\n\n(See password_required.sql and password_required.log)\n\nI tried the same setup and changed the connection string to add an \napplication_name with the unprivileged user. In this case, I am reminded \nthat I need a password. I tried modifying password_required to false \nwith the superuser and modifying the connection string with the \nunprivileged user again. It fails with:\n\nHINT: Subscriptions with the password_required option set to false may \nonly be created or modified by the superuser.\n\nI think that this part works as intended.\n\nI tried dropping the subscription with the unprivileged user: it works. \nIs this normal (given the previous message)?\n\n(see password_required2.sql and password_required2.log)\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com",
"msg_date": "Thu, 21 Sep 2023 11:58:37 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questions about the new subscription parameter: password_required"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 8:03 AM Benoit Lobréau\n<[email protected]> wrote:\n> I am confused about the new subscription parameter: password_required.\n>\n> I have two instances. The publisher's pg_hba is configured too allow\n> connections without authentication. On the subscriber, I have an\n> unprivileged user with pg_create_subscription and CREATE on the database.\n>\n> I tried using a superuser to create a subsciption without setting the\n> password_required parameter (the default is true). Then I changed the\n> owner to the unprivileged user.\n>\n> This user can use the subscription without limitation (including ALTER\n> SUBSCRIPTION ENABLE / DISABLE). The \\dRs+ metacommand shows that a\n> password is requiered, which is not the case (or it is but it's not\n> enforced).\n>\n> Is this normal? I was expecting the ALTER SUBSCRIPTION .. OWNER to fail.\n\nWhich one? I see 2 ALTER SUBSCRIPTION ... OWNER commands in\npassword_required.log and 1 more in password_required2.log, but\nthey're all performed by the superuser, who is entitled to do anything\nthey want.\n\nThe intention here is that most subscriptions will have\npasswordrequired=true. If such a subscription is owned by a superuser,\nthe superuser can still use them however they like. If owned by a\nnon-superuser, they can use them however they like *provided* that the\npassword must be used to authenticate. If the superuser wants a\nnon-superuser to be able to own a subscription that doesn't use a\npassword, the superuser can set that up by configuring\npasswordrequired=false. But then that non-superuser is not allowed to\nfurther manipulate that subscription.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Sep 2023 14:29:43 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions about the new subscription parameter: password_required"
},
{
"msg_contents": "On 9/21/23 20:29, Robert Haas wrote:\n> Which one? I see 2 ALTER SUBSCRIPTION ... OWNER commands in\n> password_required.log and 1 more in password_required2.log, but\n> they're all performed by the superuser, who is entitled to do anything\n> they want.\n\nThank you for taking the time to respond!\n\nI expected the ALTER SUBSCRIPTION ... OWNER command in \npassword_required.log to fail because the end result of the command is a \nnon-superuser owning a subscription with password_required=true, but the \nconnection string has no password keyword, and the authentication scheme \nused doesn't require one anyway.\n\nThe description of the password_required parameter doesn't clearly state \nwhat will fail or when the configuration is enforced (during CREATE \nSUBSCRIPTION and ALTER SUBSCRIPTION .. CONNECTION):\n\n\"\"\" https://www.postgresql.org/docs/16/sql-createsubscription.html\nSpecifies whether connections to the publisher made as a result of this \nsubscription must use password authentication. This setting is ignored \nwhen the subscription is owned by a superuser. The default is true. Only \nsuperusers can set this value to false.\n\"\"\"\n\nThe description of pg_subscription.subpasswordrequired doesn't either:\n\n\"\"\" https://www.postgresql.org/docs/16/catalog-pg-subscription.html\nIf true, the subscription will be required to specify a password for \nauthentication\n\"\"\"\n\nCan we consider adding something like this to clarify?\n\n\"\"\"\nThis parameter is enforced when the CREATE SUBSCRIPTION or ALTER \nSUBSCRIPTION .. CONNECTION commands are executed. Therefore, it's \npossible to alter the ownership of a subscription with \npassword_required=true to a non-superuser.\n\"\"\"\n\nIs the DROP SUBSCRIPTION failure in password_required.log expected for \nboth superuser and non-superuser?\n\nIs the DROP SUBSCRIPTION success in password_required2.log expected?\n(i.e., with password_required=false, the only action a non-superuser can \nperform is dropping the subscription. Since they own it, it is \nunderstandable).\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Fri, 22 Sep 2023 10:25:20 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questions about the new subscription parameter: password_required"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 4:25 AM Benoit Lobréau\n<[email protected]> wrote:\n> Can we consider adding something like this to clarify?\n>\n> \"\"\"\n> This parameter is enforced when the CREATE SUBSCRIPTION or ALTER\n> SUBSCRIPTION .. CONNECTION commands are executed. Therefore, it's\n> possible to alter the ownership of a subscription with\n> password_required=true to a non-superuser.\n> \"\"\"\n\nI'm not sure of the exact wording, but there was another recent thread\ncomplaining about this being unclear, so it seems like some\nclarification is needed.\n\n[ adding Jeff Davis in case he wants to weigh in here ]\n\n> Is the DROP SUBSCRIPTION failure in password_required.log expected for\n> both superuser and non-superuser?\n>\n> Is the DROP SUBSCRIPTION success in password_required2.log expected?\n> (i.e., with password_require=false, the only action a non-superuser can\n> perform is dropping the subscription. Since they own it, it is\n> understandable).\n\nI haven't checked this, but I think what's happening here is that DROP\nSUBSCRIPTION tries to drop the remote slot, which requires making a\nconnection, which can trigger the error. You might get different\nresults if you did ALTER SUBSCRIPTION ... SET (slot_name = none)\nfirst.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Sep 2023 08:36:41 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions about the new subscription parameter: password_required"
},
{
"msg_contents": "On 9/22/23 14:36, Robert Haas wrote:\n> I haven't checked this, but I think what's happening here is that DROP\n> SUBSCRIPTION tries to drop the remote slot, which requires making a\n> connection, which can trigger the error. You might get different\n> results if you did ALTER SUBSCRIPTION ... SET (slot_name = none)\n> first.\n\nYou're right, it comes from the connection to drop the slot.\n\nBut the code for DropSubscription in \nsrc/backend/commands/subscriptioncmds.c tries to connect before testing \nif the slot is NONE / NULL. So it doesn't work to DISABLE the \nsubscription and set the slot to NONE.\n\n\n1522     must_use_password = !superuser_arg(subowner) && form->subpasswordrequired;\n...\n1685     wrconn = walrcv_connect(conninfo, true, must_use_password,\n                                 subname, &err);\n         if (wrconn == NULL)\n         {\n             if (!slotname)\n             {\n                 /* be tidy */\n                 list_free(rstates);\n                 table_close(rel, NoLock);\n                 return;\n             }\n             else\n             {\n                 ReportSlotConnectionError(rstates, subid, slotname, err);\n             }\n         }\n\n\nReading the code, I think I understand why the postgres user cannot drop \nthe slot:\n\n* the owner is sub_owner (not a superuser)\n* and form->subpasswordrequired is true\n\nShould there be a test to check if the user executing the query is \nsuperuser? Maybe it's handled differently? (I am not very familiar with \nthe code).\n\nI don't understand (yet?) why I can do ALTER SUBSCRIPTIONs after changing \nthe ownership to an unprivileged user (must_use_password should be true \nalso in that case).\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Fri, 22 Sep 2023 16:59:19 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questions about the new subscription parameter: password_required"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 10:59 AM Benoit Lobréau\n<[email protected]> wrote:\n> You're right, it comes from the connection to drop the slot.\n>\n> But the code for DropSubscription in\n> src/backend/commands/subscriptioncmds.c tries to connect before testing\n> if the slot is NONE / NULL. So it doesn't work to DISABLE the\n> subscription and set the slot to NONE.\n\nSo I'm seeing this:\n\n    if (!slotname && rstates == NIL)\n    {\n        table_close(rel, NoLock);\n        return;\n    }\n\n    load_file(\"libpqwalreceiver\", false);\n\n    wrconn = walrcv_connect(conninfo, true, must_use_password,\n                            subname, &err);\n\nThat looks like it's intended to return if there's nothing to do, and\nthe comments say as much. But that (!slotname && rstates == NIL) test\nlooks sketchy. It seems like we should bail out early if *either*\n!slotname *or* rstates == NIL, or for that matter if all of the\nrstates have rstate->relid == 0 or rstate->state ==\nSUBREL_STATE_SYNCDONE. Maybe we need to push setting up the connection\ninside the foreach(lc, rstates) loop and do it only once we're sure we\nwant to do something. Or at least, I don't understand why we don't\nbail out immediately in all cases where slotname is NULL, regardless\nof rstates. Am I missing something here?\n\n> Reading the code, I think I understand why the postgres user cannot drop\n> the slot:\n>\n> * the owner is sub_owner (not a superuser)\n> * and form->subpasswordrequired is true\n>\n> Should there be a test to check if the user executing the query is\n> superuser? Maybe it's handled differently? (I am not very familiar with\n> the code).\n\nI think that there normally shouldn't be any problem here, because if\nform->subpasswordrequired is true, we expect that the connection\nstring should contain a password which the remote side actually uses,\nor we expect the subscription to be owned by the superuser. If neither\nof those things is the case, then either the superuser made a\nsubscription that doesn't use a password owned by a non-superuser\nwithout fixing subpasswordrequired, or else the configuration on the\nremote side has changed so that it now doesn't use the password when\nformerly it did. In the first case, perhaps it would be fine to go\nahead and drop the slot, but in the second case I don't think it's OK\nfrom a security point of view, because the command is going to behave the\nsame way no matter who executes the drop command, and a non-superuser\nwho drops the slot shouldn't be permitted to rely on the postgres\nprocess's identity to do anything on a remote node -- including\ndropping a replication slot. So I tentatively think that this behavior is\ncorrect.\n\n> I don't understand (yet?) why I can do ALTER SUBSCRIPTIONs after changing\n> the ownership to an unprivileged user (must_use_password should be true\n> also in that case).\n\nMaybe you're altering it in a way that doesn't involve any connections\nor any changes to the connection string? There's no security issue if,\nsay, you rename it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Sep 2023 15:58:55 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions about the new subscription parameter: password_required"
},
{
"msg_contents": "On Fri, 2023-09-22 at 08:36 -0400, Robert Haas wrote:\n> On Fri, Sep 22, 2023 at 4:25 AM Benoit Lobréau\n> <[email protected]> wrote:\n> > Can we consider adding something like this to clarify?\n> > \n> > \"\"\"\n> > This parameter is enforced when the CREATE SUBSCRIPTION or ALTER\n> > SUBSCRIPTION .. CONNECTION commands are executed. Therefore, it's\n> > possible to alter the ownership of a subscription with\n> > password_required=true to a non-superuser.\n> > \"\"\"\n> \n> I'm not sure of the exact wording, but there was another recent\n> thread\n> complaining about this being unclear, so it seems like some\n> clarification is needed.\n\nIIUC there is really one use case here, which is for superuser to\ndefine a subscription including the connection, and then change the\nowner to a non-superuser to actually run it (without being able to\ntouch the connection string itself). I'd just document that in its own\nsection, and mention a few caveats / mistakes to avoid. For instance,\nwhen the superuser is defining the connection, don't forget to set\npassword_required=false, so that when you reassign to a non-superuser\nthen the connection doesn't break.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 22 Sep 2023 18:57:19 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions about the new subscription parameter:\n password_required"
},
{
"msg_contents": "On 9/22/23 21:58, Robert Haas wrote:\n> I think that there normally shouldn't be any problem here, because if\n> form->subpasswordrequired is true, we expect that the connection\n> string should contain a password which the remote side actually uses,\n> or we expect the subscription to be owned by the superuser. If neither\n> of those things is the case, then either the superuser made a\n> subscription that doesn't use a password owned by a non-superuser\n> without fixing subpasswordrequired, or else the configuration on the\n> remote side has changed so that it now doesn't use the password when\n> formerly it did. In the first case, perhaps it would be fine to go\n> ahead and drop the slot, but in the second case I don't think it's OK\n> from a security point of view, because the command is going to behave the\n> same way no matter who executes the drop command, and a non-superuser\n> who drops the slot shouldn't be permitted to rely on the postgres\n> process's identity to do anything on a remote node -- including\n> dropping a replication slot. So I tentatively think that this behavior is\n> correct.\n\nI must admit I hadn't considered the implication when the configuration \non the remote side has changed and we use a non-superuser. I see how it \ncould be problematic.\n\nI will try to come up with a documentation patch.\n\n> Maybe you're altering it in a way that doesn't involve any connections\n> or any changes to the connection string? There's no security issue if,\n> say, you rename it.\n\nI looked at the code again. Indeed, of the ALTER SUBSCRIPTION commands, \nonly ALTER SUBSCRIPTION .. CONNECTION uses walrcv_check_conninfo().\n\nI checked the other thread (Re: [16+] subscription can end up in \ninconsistent state [1]) and will try the patch. Is it the thread you \nwere referring to earlier?\n\n[1] \nhttps://www.postgresql.org/message-id/flat/5dff4caf26f45ce224a33a5e18e110b93a351b2f.camel%40j-davis.com#ff4a06505de317b1ad436b8102a69446\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Tue, 26 Sep 2023 16:27:03 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questions about the new subscription parameter: password_required"
},
{
"msg_contents": "On 9/26/23 16:27, Benoit Lobréau wrote:\n> I will try to come up with a documentation patch.\n\nThis is my attempt at a documentation patch.\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com",
"msg_date": "Tue, 26 Sep 2023 18:21:04 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questions about the new subscription parameter: password_required"
},
{
"msg_contents": "On Tue, 2023-09-26 at 18:21 +0200, Benoit Lobréau wrote:\n> On 9/26/23 16:27, Benoit Lobréau wrote:\n> > I will try to come up with a documentation patch.\n> \n> This is my attempt at a documentation patch.\n> \n\n\n + If the ownership of a subscription with\n<literal>password_required=true</literal>\n + is transferred to a non-superuser, they will gain full control\nover the subscription\n + but will not be able to modify it's connection string.\n\nI think you mean false, right?\n\n + If the ownership of a subscription with\n<literal>password_required=true</literal>\n + has been transferred to a non-superuser, it must be reverted to a\nsuperuser for\n + the DROP operation to succeed.\n\nThat's only needed if the superuser transfers a subscription with\npassword_required=true to a non-superuser and the connection string\ndoes not contain a password. In that case, the subscription is already\nin a failing state, not just for DROP. Ideally we'd have some other\nwarning in the docs not to do that -- maybe in CREATE and ALTER.\n\nAlso, if the subscription is in that kind of failing state, there are\nother ways to get out of it as well, like disabling it and setting\nconnection=none, then dropping it.\n\nThe whole thing is fairly delicate. As soon as you work slightly\noutside of the intended use, password_required starts causing\nunexpected things to happen.\n\nAs I said earlier, I think the best thing to do is to just have a\nsection that describes when to use password_required, what specific\nthings you should do to satisfy that case, and what caveats you should\navoid. Something like:\n\n \"If you want to have a subscription using a connection string without\na password managed by a non-superuser, then: [ insert SQL steps here ].\nWarning: if the connection string doesn't contain a password, make sure\nto set password_required=false before transferring ownership, otherwise\nit will start failing.\"\n\nDocumenting the behavior is good, too, but I find the behavior\ndifficult to document, so examples will help.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 10:00:12 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions about the new subscription parameter:\n password_required"
},
{
"msg_contents": "On Tue, Sep 26, 2023 at 1:00 PM Jeff Davis <[email protected]> wrote:\n> As I said earlier, I think the best thing to do is to just have a\n> section that describes when to use password_required, what specific\n> things you should do to satisfy that case, and what caveats you should\n> avoid. Something like:\n>\n> \"If you want to have a subscription using a connection string without\n> a password managed by a non-superuser, then: [ insert SQL steps here ].\n> Warning: if the connection string doesn't contain a password, make sure\n> to set password_required=false before transferring ownership, otherwise\n> it will start failing.\"\n>\n> Documenting the behavior is good, too, but I find the behavior\n> difficult to document, so examples will help.\n\nYeah, I think something like that could make sense, with an\nappropriate amount of word-smithing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Sep 2023 13:53:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions about the new subscription parameter: password_required"
},
{
    "msg_contents": "On 9/26/23 19:00, Jeff Davis wrote:\n> + If the ownership of a subscription with\n> <literal>password_required=true</literal>\n> + is transferred to a non-superuser, they will gain full control\n> over the subscription\n> + but will not be able to modify it's connection string.\n> \n> I think you mean false, right?\n\nNo, but I was wrong. At the beginning of the thread, I was surprised it \nwas even possible to change the ownership to a non-superuser because it \nshouldn't work and commands like ENABLE didn't complain in the terminal.\nThen Robert Haas explained to me that it's ok because the superuser can \ndo whatever he wants. I came back to it later and somehow convinced \nmyself it was working. Sorry.\n\n> + If the ownership of a subscription with\n> <literal>password_required=true</literal>\n> + has been transferred to a non-superuser, it must be reverted to a\n> superuser for\n> + the DROP operation to succeed.\n> \n> That's only needed if the superuser transfers a subscription with\n> password_required=true to a non-superuser and the connection string\n> does not contain a password. In that case, the subscription is already\n> in a failing state, not just for DROP. Ideally we'd have some other\n> warning in the docs not to do that -- maybe in CREATE and ALTER.\n\nYes, I forgot the connection string bit.\n\n> Also, if the subscription is in that kind of failing state, there are\n> other ways to get out of it as well, like disabling it and setting\n> connection=none, then dropping it.\n\nThe code for DropSubscription in\nsrc/backend/commands/subscriptioncmds.c tries to connect before testing\nif the slot is NONE / NULL. So it doesn't work to DISABLE the\nsubscription and set the slot to NONE.\n\nRobert Haas proposed something in the following message but I am a \nlittle out of my depth here ...\n\nhttps://www.postgresql.org/message-id/af9435ae-18df-6a9e-2374-2de23009518c%40dalibo.com\n\n> The whole thing is fairly delicate. 
As soon as you work slightly\n> outside of the intended use, password_required starts causing\n> unexpected things to happen.\n> \n> As I said earlier, I think the best thing to do is to just have a\n> section that describes when to use password_required, what specific\n> things you should do to satisfy that case, and what caveats you should\n> avoid. Something like:\n> \n> \"If you want to have a subscription using a connection string without\n> a password managed by a non-superuser, then: [ insert SQL steps here ].\n> Warning: if the connection string doesn't contain a password, make sure\n> to set password_required=false before transferring ownership, otherwise\n> it will start failing.\"\n\nOk, I will do it that way. Would you prefer this section to be in the \nALTER SUBSCRIPTION or the CREATE SUBSCRIPTION doc?\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Thu, 28 Sep 2023 11:15:37 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questions about the new subscription parameter: password_required"
},
{
    "msg_contents": "On 9/23/23 03:57, Jeff Davis wrote:\n> IIUC there is really one use case here, which is for superuser to\n> define a subscription including the connection, and then change the\n> owner to a non-superuser to actually run it (without being able to\n> touch the connection string itself). I'd just document that in its own\n> section, and mention a few caveats / mistakes to avoid. For instance,\n> when the superuser is defining the connection, don't forget to set\n> password_required=false, so that when you reassign to a non-superuser\n> then the connection doesn't break.\n\nHi,\n\nI tried adding a section in \"Logical Replication > Subscription\" with \nthe text you suggested and links in the CREATE / ALTER SUBSCRIPTION commands.\n\nIs it better?\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com",
"msg_date": "Fri, 13 Oct 2023 11:18:33 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questions about the new subscription parameter: password_required"
},
{
"msg_contents": "On Fri, 2023-10-13 at 11:18 +0200, Benoit Lobréau wrote:\n> I tried adding a section in \"Logical Replication > Subscription\" with\n> the text you suggested and links in the CREATE / ALTER SUBSRIPTION\n> commands.\n> \n> Is it better ?\n\n\nMinor comments:\n\n * Use possessive \"its\" instead of the contraction, i.e. \"before\ntransferring its ownership\".\n * I like that docs cover the case where a password is specified, but\nthe remote server doesn't require one. But the warning is the wrong\nplace to explain that, it should be in the main behavioral description\nin 31.2.2.\n * The warning feels like it has too many negatives and confused me at\nfirst. I struggled myself a bit to come up with something less\nconfusing, but perhaps less is more: \"Ensure that password_required is\nproperly set before transferring ownership of a subscription to a non-\nsuperuser, otherwise the subscription may start to fail.\"\n * Missing space in the warning after \"password_required = true\"\n * Mention that a non-superuser-owned subscription with\npassword_required = false is partially locked down, e.g. the owner\ncan't change the connection string any more.\n * 31.2.2 could probably be in the CREATE SUBSCRIPTION docs instead,\nand linked from the ALTER docs. That's fairly normal for other commands\nand I'm not sure there needs to be a separate section in logical\nreplication. I don't have a strong opinion here.\n\nI like the changes; this is a big improvement. I'll leave it to Robert\nto commit it, so that he can ensure it matches how he expected the\nfeature to be used and sufficiently covers the behavioral aspects.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 13 Oct 2023 11:54:07 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions about the new subscription parameter:\n password_required"
},
{
"msg_contents": "Hi, how about having links (instead of just\n<literal>password_required=false</literal>) in alter_subscription.sgml\nand logical-replication.sgml so the user can navigate easily back to\nthe CREATE SUBSCRIPTION parameters \"password_required\" part.\n\nFor example, alter_subscription.sgml does this already for \"two_phase\"\nand \"copy_data\" but not for \"password_required\" (??)\n\n======\nKind Regards,\nPeter Smith\nFujitsu Australia\n\nOn Sat, Oct 14, 2023 at 5:57 AM Jeff Davis <[email protected]> wrote:\n>\n> On Fri, 2023-10-13 at 11:18 +0200, Benoit Lobréau wrote:\n> > I tried adding a section in \"Logical Replication > Subscription\" with\n> > the text you suggested and links in the CREATE / ALTER SUBSRIPTION\n> > commands.\n> >\n> > Is it better ?\n>\n>\n> Minor comments:\n>\n> * Use possessive \"its\" instead of the contraction, i.e. \"before\n> transferring its ownership\".\n> * I like that docs cover the case where a password is specified, but\n> the remote server doesn't require one. But the warning is the wrong\n> place to explain that, it should be in the main behavioral description\n> in 31.2.2.\n> * The warning feels like it has too many negatives and confused me at\n> first. I struggled myself a bit to come up with something less\n> confusing, but perhaps less is more: \"Ensure that password_required is\n> properly set before transferring ownership of a subscription to a non-\n> superuser, otherwise the subscription may start to fail.\"\n> * Missing space in the warning after \"password_required = true\"\n> * Mention that a non-superuser-owned subscription with\n> password_required = false is partially locked down, e.g. the owner\n> can't change the connection string any more.\n> * 31.2.2 could probably be in the CREATE SUBSCRIPTION docs instead,\n> and linked from the ALTER docs. That's fairly normal for other commands\n> and I'm not sure there needs to be a separate section in logical\n> replication. 
I don't have a strong opinion here.\n>\n> I like the changes; this is a big improvement. I'll leave it to Robert\n> to commit it, so that he can ensure it matches how he expected the\n> feature to be used and sufficiently covers the behavioral aspects.\n>\n> Regards,\n> Jeff Davis\n>\n>\n>\n\n\n",
"msg_date": "Mon, 16 Oct 2023 09:47:33 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions about the new subscription parameter: password_required"
}
] |
[
{
"msg_contents": "",
"msg_date": "Thu, 21 Sep 2023 21:15:59 +0800",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to do profile for pg?"
},
{
    "msg_contents": "Hi jacktby,\n\nPostgreSQL is literally a large and complicated program in C. Thus it\ncan be profiled as such. E.g. you can use `perf` and build flamegraphs\nusing `perf record`. Often pgbench is an adequate tool to compare\nbefore and after results. There are many other tools available\ndepending on what exactly you want to profile - CPU, lock contention,\ndisk I/O, etc. People write books (plural) on the subject. Personally\nI would recommend \"System Performance, Enterprise and the Cloud, 2nd\nEdition\" and \"BPF Performance Tools\" by Brendan Gregg.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 21 Sep 2023 17:02:59 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to do profile for pg?"
},
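[Editor's note: as a concrete starting point for the `perf` + flamegraph route mentioned above — a sketch assuming a Linux host with perf installed, a running PostgreSQL server, and Brendan Gregg's FlameGraph scripts checked out. The PID selection, database name, and script paths are placeholders.]

```shell
# Sample on-CPU stacks of a backend while pgbench generates load.
# "mydb", the pgrep selection, and the script paths are placeholders.
pgbench -c 8 -T 60 mydb &                         # background load
perf record -F 99 -g -p "$(pgrep -n postgres)" -- sleep 30
perf script > out.stacks
# Fold the stacks and render an SVG flamegraph (scripts from the
# brendangregg/FlameGraph repository):
./stackcollapse-perf.pl out.stacks | ./flamegraph.pl > pg.svg
```

This requires a running server and root (or perf_event) privileges, so treat it as a recipe rather than a runnable script.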
{
    "msg_contents": "but I need a quick demo to see the memory profiling or CPU profiling. I hope a blog or a video which is better for me. Thanks.\n\n---- Replied Message ----\nFrom: Aleksander Alekseev <[email protected]>\nDate: 09/21/2023 22:02\nTo: pgsql-hackers <[email protected]>\nCc: [email protected]\nSubject: Re: how to do profile for pg?\n\nHi jacktby,\n\nPostgreSQL is literally a large and complicated program in C. Thus it\ncan be profiled as such. E.g. you can use `perf` and build flamegraphs\nusing `perf record`. Often pgbench is an adequate tool to compare\nbefore and after results. There are many other tools available\ndepending on what exactly you want to profile - CPU, lock contention,\ndisk I/O, etc. People write books (plural) on the subject. Personally\nI would recommend \"System Performance, Enterprise and the Cloud, 2nd\nEdition\" and \"BPF Performance Tools\" by Brendan Gregg.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 21 Sep 2023 22:22:48 +0800",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to do profile for pg?"
},
{
    "msg_contents": "Hi,\n\n> but I need a quick demo to see the memory profiling or CPU profiling. I hope a blog or a video which is better for me. Thanks\n\nWell, then I guess you better hurry with reading these books :)\n\nThere is no shortcut I'm afraid. One of the first things that Brendan\nexplains is how to do benchmarks *properly*. This is far from being\ntrivial and often you may be measuring something other than what you\nintend. E.g. you may think that you are profiling CPU while in fact\nthere is lock contention and CPU is not even a bottleneck. Another\nthing worth considering which is often neglected is to make sure your\noptimization doesn't cause any degradations under different workloads.\n\nLast but not least you should be mindful of different configuration\nparameters of PostgreSQL - shared_buffers, synchronous_commit = off,\nto name a few, and also understand the architecture of the system\nquite well. In this context I recommend Database System Concepts, 7th\nEdition by Avi Silberschatz et al and also CMU Intro to Database\nSystems [1] and CMU Advanced Database Systems [2] courses.\n\n[1]: https://www.youtube.com/playlist?list=PLSE8ODhjZXjZaHA6QcxDfJ0SIWBzQFKEG\n[2]: https://www.youtube.com/playlist?list=PLSE8ODhjZXjYzlLMbX3cR0sxWnRM7CLFn\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 21 Sep 2023 20:21:16 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to do profile for pg?"
},
{
"msg_contents": "This talk by D. Dolgov \nhttps://www.postgresql.eu/events/pgconfeu2022/sessions/session/3861/slides/325/Dmitrii_Dolgov_PGConf_EU_2022.pdf\nmight be insightful. Or not, because you need to fill in a lot of\nblanks. Maybe you can find a recording of that talk somewhere, if\nyou're lucky.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n#error \"Operator lives in the wrong universe\"\n (\"Use of cookies in real-time system development\", M. Gleixner, M. Mc Guire)\n\n\n",
"msg_date": "Thu, 21 Sep 2023 19:49:58 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to do profile for pg?"
},
{
"msg_contents": "Hi,\n\nThis talk from Andres seems to have some relevant information for you:\n\nhttps://www.youtube.com/watch?v=HghP4D72Noc\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Thu, 21 Sep 2023 21:25:03 +0200",
"msg_from": "David Geier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to do profile for pg?"
}
] |
[
{
    "msg_contents": "Hi all,\n\nIt has been mentioned a few times now that, as Meson has been\nintegrated in the tree, the next step would be to get rid of the\ncustom scripts in src/tools/msvc/ and moving forward only support\nMeson when building with VS compilers. As far as I understand, nobody\nhas sent a patch to do that yet, so here is one.\n\nPlease find attached a patch to move the needle in this sense. Here\nare some notes:\n- Meson depends on msvc/gendef.pl, so I've renamed it to\nmsvc_gendef.pl in src/tools/.\n- install-windows.sgml could be OK if entirely gone, moving more\ndetailed instructions to the meson page instead.\n- What to do with src/tools/msvc/dummylib/? It would still be useful\nfor src/tools/win32tzlist.pl but it also seems to me that we should be\nable to live without it as perl's Win32 should have evolved quite a\nbit? I need to test this, but I'd like to think that we are OK with a\nremoval of it. If people want to keep it, I'm OK with that as well.\n\nThe documentation for Meson could be improved more, I think, when\nbuilding with either MinGW's or VS compilers. Particularly, something\nthat was itching me is how we can improve the instructions about the\nextra packages one would need to deploy to make the builds work, as a\nstraight removal of install-windows.sgml loses references, but we are\nOK if we rely on a packager to do the dependency job. For example,\nI've been having a good ride with Strawberry Perl and Chocolatey,\nlinking my build of Meson with MinGW or Visual, but I also recall\nAndres being allergic to Strawberry, so I am not sure if folks are OK\nwith directly mentioning it in the docs, for instance.\n\nOne thing that we could do is add a subsection in the installation\nnotes close to MinGW, but for Visual Studio that applies to Meson.\nOne good thing with the removal of install-windows.sgml is that it is\npossible to clean up a few things that have rotted in the docs, and\ncould be reworked from scratch. 
For example, we still recommend\nActive Perl, but it has become impossible to use in builds as perl\ncommands cannot be invoked without extra steps that we don't\nrecommend, while an installation needs to be registered in their\ncentralized system. That's not user-friendly. I've noticed that\nAndrew has been relying on strawberry perl as well with Chocolatey for\nsome buildfarm members.\n\nAs of today, I can see that the only buildfarm members relying on\nthese scripts are bowerbird and hamerkop, so these two would fail if\nthe patch attached were to be applied today. I am adding Andrew\nD. and the maintainers of hamerkop (SRA-OSS) in CC for comments.\n\nThoughts?\n--\nMichael",
"msg_date": "Fri, 22 Sep 2023 10:12:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove MSVC scripts from the tree"
},
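[Editor's note: for context, the Meson workflow that would replace the src/tools/msvc scripts looks roughly like this — a sketch assuming a "Developer Command Prompt for VS" so that cl.exe is on PATH; the install prefix is a placeholder.]

```shell
# From a Visual Studio developer prompt; the prefix is a placeholder.
meson setup build --prefix=c:/pgsql --buildtype=release
meson compile -C build
meson test -C build
meson install -C build
```

These are standard Meson commands; the exact options a Windows build of PostgreSQL needs should be taken from the installation documentation rather than from this sketch.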
{
"msg_contents": "On 22.09.23 03:12, Michael Paquier wrote:\n> It has been mentioned a few times now that, as Meson has been\n> integrated in the tree, the next step would be to get rid of the\n> custom scripts in src/tools/msvc/ and moving forward only support\n> Meson when building with VS compilers.\n\nFirst we need to fix things so that we can build using meson from a \ndistribution tarball, which is the subject of \n<https://commitfest.postgresql.org/44/4357/>.\n\n\n\n",
"msg_date": "Fri, 22 Sep 2023 08:06:57 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 08:06:57AM +0200, Peter Eisentraut wrote:\n> First we need to fix things so that we can build using meson from a\n> distribution tarball, which is the subject of\n> <https://commitfest.postgresql.org/44/4357/>.\n\nThanks, missed this one.\n--\nMichael",
"msg_date": "Mon, 25 Sep 2023 13:00:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
    "msg_contents": "On 2023-09-21 Th 21:12, Michael Paquier wrote:\n>\n> As of today, I can see that the only buildfarm members relying on\n> these scripts are bowerbird and hamerkop, so these two would fail if\n> the patch attached were to be applied today. I am adding Andrew\n> D. and the maintainers of hamerkop (SRA-OSS) in CC for comments.\n>\n\nChanging bowerbird to use meson should not be difficult, just needs some \nTUITs.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 25 Sep 2023 11:19:12 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Fri, 22 Sep 2023 10:12:29 +0900\nMichael Paquier <[email protected]> wrote:\n\n> As of today, I can see that the only buildfarm members relying on\n> these scripts are bowerbird and hamerkop, so these two would fail if\n> the patch attached were to be applied today. I am adding Andrew\n> D. and the maintainers of hamerkop (SRA-OSS) in CC for comments.\n\nhamerkop is not yet prepared for Meson builds, but we plan to work on this support soon. \nIf we go with Meson builds exclusively right now, we will have to temporarily remove the master/HEAD for a while.\n\nBest Regards.\n-- \nSRA OSS LLC\nChen Ningwei<[email protected]>\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 14:25:12 +0900",
"msg_from": "NINGWEI CHEN <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
    "msg_contents": "On 2023-09-26 Tu 01:25, NINGWEI CHEN wrote:\n> On Fri, 22 Sep 2023 10:12:29 +0900\n> Michael Paquier <[email protected]> wrote:\n>\n>> As of today, I can see that the only buildfarm members relying on\n>> these scripts are bowerbird and hamerkop, so these two would fail if\n>> the patch attached were to be applied today. I am adding Andrew\n>> D. and the maintainers of hamerkop (SRA-OSS) in CC for comments.\n>\n> hamerkop is not yet prepared for Meson builds, but we plan to work on this support soon.\n> If we go with Meson builds exclusively right now, we will have to temporarily remove the master/HEAD for a while.\n>\n> Best Regards.\n\nI don't think we should switch to that until you're ready.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 26 Sep 2023 12:17:04 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 22.09.23 03:12, Michael Paquier wrote:\n> It has been mentioned a few times now that, as Meson has been\n> integrated in the tree, the next step would be to get rid of the\n> custom scripts in src/tools/msvc/ and moving forward only support\n> Meson when building with VS compilers. As far as I understand, nobody\n> has sent a patch to do that yet, so here is one.\n> \n> Please find attached a patch to move the needle in this sense.\n\nYour patch still leaves various mentions of Mkvcbuild.pm and Project.pm \nin other files, including in\n\nconfig/perl.m4\nmeson.build\nsrc/bin/pg_basebackup/Makefile\nsrc/bin/pgevent/meson.build\nsrc/common/Makefile\nsrc/common/meson.build\nsrc/interfaces/libpq/Makefile\nsrc/port/Makefile\n\nA few of these comments are like \"see $otherfile for the reason\", which \nmeans that if we delete $otherfile, we should move that information to a \nnew site somehow.\n\n\n\n",
"msg_date": "Fri, 29 Sep 2023 11:26:55 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
    "msg_contents": "On Fri, Sep 29, 2023 at 11:26:55AM +0200, Peter Eisentraut wrote:\n> Your patch still leaves various mentions of Mkvcbuild.pm and Project.pm in\n> other files, including in\n\nIndeed, thanks. I didn't think to check for references to these\nmodules.\n\n> config/perl.m4\n\nHere is the thing:\n# switches for symbols not beginning with underscore. Some exceptions are the\n# Windows-specific -D_USE_32BIT_TIME_T and -D__MINGW_USE_VC2005_COMPAT; see\n# Mkvcbuild.pm for details.\n\nAnd we'd lose quite some information here. meson.build loops back to\nperl.m4 for this part. Reformatting the comments in perl.m4 should do\nthe job.\n\n> meson.build\n\nThese were in Project::_new() and WriteItemDefinitionGroup(). Just\nremoving the reference does not remove any information.\n\n> src/bin/pg_basebackup/Makefile\n\n-# If you add or remove files here, also update Mkvcbuild.pm, which only knows\n-# about OBJS, not BBOBJS, and thus has to be manually updated to stay in sync\n-# with this list.\nThis can be removed, I guess.\n\n> src/bin/pgevent/meson.build\n\nThe reference can be removed. The original says nothing about the use\nof DisableLinkerWarnings() in this case.\n\n> src/common/Makefile\n> src/common/meson.build\n\nThese two have the same copy-pasted comment, and the reference can be\nremoved.\n\n> src/interfaces/libpq/Makefile\n\nCan be removed once the MSVC files are gone.\n\n> src/port/Makefile\n\nCan be removed.\n\nAttached is a v2 with these adjustments, for now.\n--\nMichael",
"msg_date": "Mon, 2 Oct 2023 16:38:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 02.10.23 09:38, Michael Paquier wrote:\n> Attached is a v2 with these adjustments, for now.\n\nGeneral comments:\n\n- I think we can't just delete install-windows.sgml. Some of that \ncontent needs to be moved over to installation.sgml. As a simple \nexample, install-windows.sgml shows which MSVC versions are supported. \nThat information should surely be kept.\n\n- Is src/backend/utils/README.Gen_dummy_probes still correct after this? \n AFAICT, the Perl-based MSVC build system uses Gen_dummy_probes.pl, but \nthe meson build uses Gen_dummy_probes.sed even on Windows. Is that \ncorrect, intended?\n\n- src/port/pgstrsignal.c contains a comment that it is not \"built in \nMSVC builds\", but AFAICT, that is only correct for the legacy Perl-based \nbuild system, not for meson. Again, is that correct, intended?\n\n\nDetail comments:\n\n(Btw., whatever orderfile you used for the diff, I find that confusing.)\n\n* config/perl.m4: This now contains all the required information, but \nmaybe break the text into paragraphs a bit?\n\n\n* doc/src/sgml/installation.sgml:\n\nI think this paragraph should just be removed altogether:\n\n <para>\n If you are building <productname>PostgreSQL</productname> for Microsoft\n- Windows, read this chapter if you intend to build with MinGW or Cygwin;\n- but if you intend to build with Microsoft's <productname>Visual\n- C++</productname>, see <xref linkend=\"install-windows\"/> instead.\n+ Windows, read this chapter if you intend to build with Meson, MinGW or\n+ Cygwin.\n </para>\n\nHere\n\n <para>\n PostgreSQL can be built using Cygwin, a Linux-like environment for\n Windows, but that method is inferior to the native Windows build\n- <phrase condition=\"standalone-ignore\">(see <xref \nlinkend=\"install-windows\"/>)</phrase> and\n- running a server under Cygwin is no longer recommended.\n+ with Meson, and running a server under Cygwin is no longer recommended.\n </para>\n\nI think \"with Meson\" should be removed. 
The tradeoff is Cygwin vs. \nnative, it doesn't have anything to do with Meson.\n\nAlso, I think this paragraph needs a complete revision, along with \nhowever install-windows.sgml gets integrated:\n\n <para>\n- PostgreSQL for Windows can be built using MinGW, a Unix-like build\n [...]\n\n\n* meson.build: I think these comments are unnecessary and can be removed:\n\n-# From Project.pm\n+# MSVC flags\n\n+ # Preprocessor definitions.\n\n\n* src/bin/pgevent/meson.build: After consideration, I think this \ncomment should just be removed:\n\n-# FIXME: copied from Mkvcbuild.pm, but I don't think that's the right \napproach\n+# FIXME: this may not not the right approach..\n\nThe original site in Mkvcbuild.pm does not contain a comment, so we \nshould accept that as canonical. It doesn't help much if we carry \naround a comment like \"this might be wrong\" indefinitely without any \nfurther supporting material.\n\n\n* src/common/Makefile and src/common/meson.build: This change is losing \nthe period at the end of the first sentence:\n\n # A few files are currently only built for frontend, not server\n-# (Mkvcbuild.pm has a copy of this list, too). logging.c is excluded\n-# from OBJS_FRONTEND_SHLIB (shared library) as a matter of policy,\n-# because it is not appropriate for general purpose libraries such\n-# as libpq to report errors directly.\n+# logging.c is excluded from OBJS_FRONTEND_SHLIB (shared library) as\n+# a matter of policy, because it is not appropriate for general purpose\n+# libraries such as libpq to report errors directly.\n\n\n\n",
"msg_date": "Thu, 5 Oct 2023 09:38:51 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
    "msg_contents": "On Thu, Oct 05, 2023 at 09:38:51AM +0200, Peter Eisentraut wrote:\n> - I think we can't just delete install-windows.sgml. Some of that content\n> needs to be moved over to installation.sgml. As a simple example,\n> install-windows.sgml shows which MSVC versions are supported. That\n> information should surely be kept.\n\nI've been thinking about the whole structure for a bit, but with the\nMSVC scripts gone and the fact that we would rely on meson, moving\nthis information to the section under the platform-specific notes is\nfeeling kind of natural here. Here is a possible split of the\ninformation across several sections: \n- The requirements:\n-- ActiveState Perl could be entirely removed, IMO. Perhaps we should\nreplace that with a reference to strawberry-perl, chocolatey or similar?\nI am not sure about the best approach here, so for now I've kept the\nbits about active perl.\n-- bison and flex, which would become hard requirements on Windows\nwith Visual Studio now. Perhaps this could be unified with the patch\nfor distprep later on, but here we have specifics for Windows.\n-- All the other optional requirements, tcl, etc.\n- MinGW notes.\n- Visual Studio notes, with the versions of Visual Studio supported,\ndownload links, and a bit more.\n- Notes specific about 64b builds.\n\nThe attached is a bit crude and requires adjustments, but it shows the\nidea.\n\n> - Is src/backend/utils/README.Gen_dummy_probes still correct after this?\n> AFAICT, the Perl-based MSVC build system uses Gen_dummy_probes.pl, but the\n> meson build uses Gen_dummy_probes.sed even on Windows. Is that correct,\n> intended?\n\nInteresting point. This may depend on the environment at the end? As\nfar as I can see, sed is currently a hard requirement in the meson\nbuild and we'd fail if the command cannot be used. 
The buildfarm\nmachines that test meson are able to find sed, making\nGen_dummy_probes.pl not necessary:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=drongo&dt=2023-10-11%2020%3A21%3A17&stg=configure\n\nSo the $1000 question is: could there be a point in keeping the perl\nscript around if sed cannot be found? The buildfarm coverage is\ncurrently saying no thanks to chocolatey, at least. The VM images\ncompiled by Andres for the CI seem to have the same opinion.\n\n> - src/port/pgstrsignal.c contains a comment that it is not \"built in MSVC\n> builds\", but AFAICT, that is only correct for the legacy Perl-based build\n> system, not for meson. Again, is that correct, intended?\n\nIndeed, it's built under meson for WIN32. Good find.\n\n> Detail comments:\n> \n> (Btw., whatever orderfile you used for the diff, I find that confusing.)\n\nHere is my configuration for that:\nhttps://github.com/michaelpq/home/blob/main/.gitconfig_orderfile\n\n> * config/perl.m4: This now contains all the required information, but maybe\n> break the text into paragraphs a bit?\n\nSure. 
I've attempted something here.\n\n> * doc/src/sgml/installation.sgml:\n> \n> I think this paragraph should just be removed altogether:\n> \n> <para>\n> If you are building <productname>PostgreSQL</productname> for Microsoft\n> - Windows, read this chapter if you intend to build with MinGW or Cygwin;\n> - but if you intend to build with Microsoft's <productname>Visual\n> - C++</productname>, see <xref linkend=\"install-windows\"/> instead.\n> + Windows, read this chapter if you intend to build with Meson, MinGW or\n> + Cygwin.\n> </para>\n\nOkay.\n\n> Here\n> \n> <para>\n> PostgreSQL can be built using Cygwin, a Linux-like environment for\n> Windows, but that method is inferior to the native Windows build\n> - <phrase condition=\"standalone-ignore\">(see <xref\n> linkend=\"install-windows\"/>)</phrase> and\n> - running a server under Cygwin is no longer recommended.\n> + with Meson, and running a server under Cygwin is no longer recommended.\n> </para>\n> \n> I think \"with Meson\" should be removed. The tradeoff is Cygwin vs. native,\n> it doesn't have anything to do with Meson.\n\nOkay.\n\n> Also, I think this paragraph needs a complete revision, along with however\n> install-windows.sgml gets integrated:\n> \n> <para>\n> - PostgreSQL for Windows can be built using MinGW, a Unix-like build\n> [...]\n\nSure, see above for details.\n\n> * meson.build: I think these comments are unnecessary and can be removed:\n> \n> -# From Project.pm\n> +# MSVC flags\n> \n> + # Preprocessor definitions.\n\nOkay.\n\n> * src/bin/pgevent/meson.build: After consideration, I think this comment\n> should just be removed:\n> \n> -# FIXME: copied from Mkvcbuild.pm, but I don't think that's the right\n> approach\n> +# FIXME: this may not not the right approach..\n> \n> The original site in Mkvcbuild.pm does not contain a comment, so we should\n> accept that as canonical. 
It doesn't help much if we carry around a comment\n> like \"this might be wrong\" indefinitely without any further supporting\n> material.\n\nHmm, okay. I was not sure about this one but fine for me to drop it.\n\n> * src/common/Makefile and src/common/meson.build: This change is losing the\n> period at the end of the first sentence:\n> \n> # A few files are currently only built for frontend, not server\n> -# (Mkvcbuild.pm has a copy of this list, too). logging.c is excluded\n> -# from OBJS_FRONTEND_SHLIB (shared library) as a matter of policy,\n> -# because it is not appropriate for general purpose libraries such\n> -# as libpq to report errors directly.\n> +# logging.c is excluded from OBJS_FRONTEND_SHLIB (shared library) as\n> +# a matter of policy, because it is not appropriate for general purpose\n> +# libraries such as libpq to report errors directly.\n\nFixed.\n--\nMichael",
"msg_date": "Thu, 12 Oct 2023 14:23:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 12.10.23 07:23, Michael Paquier wrote:\n>> - Is src/backend/utils/README.Gen_dummy_probes still correct after this?\n>> AFAICT, the Perl-based MSVC build system uses Gen_dummy_probes.pl, but the\n>> meson build uses Gen_dummy_probes.sed even on Windows. Is that correct,\n>> intended?\n> Interesting point. This may depend on the environment at the end? As\n> far as I can see, sed is currently a hard requirement in the meson\n> build and we'd fail if the command cannot be used. The buildfarm\n> machines that test meson are able to find sed, making\n> Gen_dummy_probes.pl not necessary:\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=drongo&dt=2023-10-11%2020%3A21%3A17&stg=configure\n> \n> So the $1000 question is: could there be a point in keeping the perl\n> script around if sed cannot be found? The buildfarm coverage is\n> currently saying no thanks to chocolatey, at least. The VM images\n> compiled by Andres for the CI seem to have the same opinion.\n\nI don't think we should rely on sed being there on Windows. Maybe it's \ntrue now on the handful of buildfarm/CI machines and early adopters, but \ndo we have any indication that that is systematic or just an accident?\n\nSince we definitely require Perl now, we could just as well use the Perl \nscript and avoid this issue.\n\nAttached is a Perl version of the sed script, converted by hand (so not \nthe super-verbose s2p thing). It's basically just the sed script with \nsemicolons added and the backslashes in the regular expressions moved \naround. I think we could use something like that for all platforms now.",
"msg_date": "Wed, 8 Nov 2023 09:41:19 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "\nOn 2023-11-08 We 03:41, Peter Eisentraut wrote:\n> On 12.10.23 07:23, Michael Paquier wrote:\n>>> - Is src/backend/utils/README.Gen_dummy_probes still correct after \n>>> this?\n>>> AFAICT, the Perl-based MSVC build system uses Gen_dummy_probes.pl, \n>>> but the\n>>> meson build uses Gen_dummy_probes.sed even on Windows. Is that \n>>> correct,\n>>> intended?\n>> Interesting point. This may depend on the environment at the end? As\n>> far as I can see, sed is currently a hard requirement in the meson\n>> build and we'd fail if the command cannot be used. The buildfarm\n>> machines that test meson are able to find sed, making\n>> Gen_dummy_probes.pl not necessary:\n>> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=drongo&dt=2023-10-11%2020%3A21%3A17&stg=configure \n>>\n>>\n>> So the $1000 question is: could there be a point in keeping the perl\n>> script around if sed cannot be found? The buildfarm coverage is\n>> currently saying no thanks to chocolatey, at least. The VM images\n>> compiled by Andres for the CI seem to have the same opinion.\n>\n> I don't think we should rely on sed being there on Windows. Maybe \n> it's true now on the handful of buildfarm/CI machines and early \n> adopters, but do we have any indication that that is systematic or \n> just an accident?\n>\n> Since we definitely require Perl now, we could just as well use the \n> Perl script and avoid this issue.\n>\n> Attached is a Perl version of the sed script, converted by hand (so \n> not the super-verbose s2p thing). It's basically just the sed script \n> with semicolons added and the backslashes in the regular expressions \n> moved around. I think we could use something like that for all \n> platforms now.\n\n\n\nI think it's alright, but please don't use literal tabs, use \\t, even in \na character class.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 8 Nov 2023 09:47:08 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Wed, Nov 08, 2023 at 09:41:19AM +0100, Peter Eisentraut wrote:\n> I don't think we should rely on sed being there on Windows. Maybe it's true\n> now on the handful of buildfarm/CI machines and early adopters, but do we\n> have any indication that that is systematic or just an accident?\n\nOr both? When doing builds based on MinGW in the past I vaguely\nrecall getting annoyed that I needed to look for sed as one thing, so\nyour suggestion could simplify the experience a bit.\n\n> Since we definitely require Perl now, we could just as well use the Perl\n> script and avoid this issue.\n>\n> Attached is a Perl version of the sed script, converted by hand (so not the\n> super-verbose s2p thing). It's basically just the sed script with\n> semicolons added and the backslashes in the regular expressions moved\n> around. I think we could use something like that for all platforms now.\n\nSounds like a good idea to me now that perl is a hard requirement.\n+1.\n--\nMichael",
"msg_date": "Thu, 9 Nov 2023 08:05:20 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 09.11.23 00:05, Michael Paquier wrote:\n>> Attached is a Perl version of the sed script, converted by hand (so not the\n>> super-verbose s2p thing). It's basically just the sed script with\n>> semicolons added and the backslashes in the regular expressions moved\n>> around. I think we could use something like that for all platforms now.\n> \n> Sounds like a good idea to me now that perl is a hard requirement.\n> +1.\n\nHow about this patch as a comprehensive solution?",
"msg_date": "Fri, 10 Nov 2023 08:38:21 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 08:38:21AM +0100, Peter Eisentraut wrote:\n> How about this patch as a comprehensive solution?\n> 8 files changed, 26 insertions(+), 339 deletions(-)\n\nThanks for the patch. The numbers are here, and the patch looks\nsensible.\n\nThe contents of probes.h without --enable-trace are exactly the same\nbefore and after the patch.\n\nIn short, +1 to switch to what you are proposing here.\n--\nMichael",
"msg_date": "Mon, 13 Nov 2023 14:30:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 13.11.23 06:30, Michael Paquier wrote:\n> On Fri, Nov 10, 2023 at 08:38:21AM +0100, Peter Eisentraut wrote:\n>> How about this patch as a comprehensive solution?\n>> 8 files changed, 26 insertions(+), 339 deletions(-)\n> \n> Thanks for the patch. The numbers are here, and the patch looks\n> sensible.\n> \n> The contents of probes.h without --enable-trace are exactly the same\n> before and after the patch.\n> \n> In short, +1 to switch to what you are proposing here.\n\ndone\n\n\n\n",
"msg_date": "Tue, 14 Nov 2023 11:02:40 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "Other than the documentation details and the business about \nGen_dummy_probes, which has been dealt with separately, this patch looks \nsolid to me.\n\nOn 12.10.23 07:23, Michael Paquier wrote:\n> On Thu, Oct 05, 2023 at 09:38:51AM +0200, Peter Eisentraut wrote:\n>> - I think we can't just delete install-windows.sgml. Some of that content\n>> needs to be moved over to installation.sgml. As a simple example,\n>> install-windows.sgml shows which MSVC versions are supported. That\n>> information should surely be kept.\n> \n> I've been thinking about the whole structure for a bit, but with the\n> MSVC scripts gone and the fact that we would rely on meson, moving\n> this information to the section under the platform-specific notes is\n> feeling kind of natural here. Here is a possible split of the\n> information across several sections:\n> - The requirements:\n> -- ActiveState Perl could be entirely removed, IMO. Perhaps we should\n> replace that to a reference to raspberry-perl, chocolatey or similar?\n> I am not sure about the best approach here, so for now I've kept the\n> bits about active perl.\n> -- bison and flex, which would become hard requirements on Windows\n> with Visual Studio now. Perhaps this could be unified with the patch\n> for distprep later on, but here we have specifics for Windows.\n> -- All the other optional requirements, tcl, etc.\n> - MinGW notes.\n> - Visual Studio notes, with the versions of visual supported, download\n> links, and a bit more.\n> - Notes specific about 64b builds.\n> \n> The attached is a bit crude and requires adjustments, but it shows the\n> idea.\n\nIt's tricky. Eventually, we would like to reduce some of the \nduplication, like the whole list of requirements. But there are some \nWindows-specific details in there, so I don't know.\n\nMy suggestion would be:\n\nMake a new <sect2 id=\"installation-notes-windows\"> titled \"Windows\" at \nthe end of installation.sgml (after the Solaris section). 
Dump most of \nthe content from install-windows.sgml in there (except the stuff about \nthe old build system). Rename the existing section \"MinGW/Native \nWindows\" to just \"MinGW\" and make some minor adjustments, similar to \nyour patch.\n\nThat way, we can move forward, and we can adjust and trim the details of \nthe documentation over time.\n\n\n",
"msg_date": "Tue, 14 Nov 2023 11:18:52 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 11:18:52AM +0100, Peter Eisentraut wrote:\n> Other than the documentation details and the business about\n> Gen_dummy_probes, which has been dealt with separately, this patch looks\n> solid to me.\n\nThanks.\n\n> It's tricky. Eventually, we would like to reduce some of the duplication,\n> like the whole list of requirements. But there are some Windows-specific\n> details in there, so I don't know.\n\nYes, that's something I was considering but polluting the meson\ndependency list with Windows-specific links was not the best way\nforward to me, because there are few users who care about knowing\nwhere Active Perl or an equivalent is located.\n\n> My suggestion would be:\n> \n> Make a new <sect2 id=\"installation-notes-windows\"> titled \"Windows\" at the\n> end of installation.sgml (after the Solaris section). Dump most of the\n> content from install-windows.sgml in there (except the stuff about the old\n> build system). Rename the existing section \"MinGW/Native Windows\" to just\n> \"MinGW\" and make some minor adjustments, similar to your patch.\n>\n> That way, we can move forward, and we can adjust and trim the details of the\n> documentation over time.\n\nThe latest patch I have sent is close to that, actually. Instead of\ncreating a new section, I have integrated the contents of\ninstall-windows.sgml into the existing section dedicated to MinGW and\nnative Windows because some parts apply to both of them, like the\ncrash reporting facility. So this gave the following structure: \n- sect2 MinGW/Native Windows\n-- sect3 Requirements\n-- sect3 MinGW\n-- sect3 Visual Studio\n-- sect3 Special Considerations for 64-Bit Windows\n-- sect3 Collecting Crash Dumps on Windows\n\nThe last parts affects both MinGW and VS builds, while the first\nrequirement part applies only to native (references to MinGW are only\nthere to handle dependencies for the builds). 
So I'm OK to live with\na bit of duplication across two sect2 rather than attempt to unify\nthem, while renaming the current MinGW/native section.\n\nWith the requirements and the SDK-related guidelines, all the\ninformation seems from the original install-windows.sgml seems to be\naround. Hopefully I did not miss a spot.\n\nAttached is a v4. hamerkop and bowerbird still rely on that in the\nbuildfarm today.\n--\nMichael",
"msg_date": "Wed, 15 Nov 2023 13:49:06 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 15.11.23 05:49, Michael Paquier wrote:\n> Attached is a v4.\n\nI'm happy with that.\n\n(Note, however, that your rebase didn't pick up commits e7814b40d0 and \nb41b1a7f49 that I did yesterday. Please check that again.)\n\n\n\n",
"msg_date": "Wed, 15 Nov 2023 11:27:13 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-15 13:49:06 +0900, Michael Paquier wrote:\n> The latest patch I have sent is close to that, actually. Instead of\n> creating a new section, I have integrated the contents of\n> install-windows.sgml into the existing section dedicated to MinGW and\n> native Windows because some parts apply to both of them, like the\n> crash reporting facility. So this gave the following structure:\n> - sect2 MinGW/Native Windows\n> -- sect3 Requirements\n> -- sect3 MinGW\n> -- sect3 Visual Studio\n> -- sect3 Special Considerations for 64-Bit Windows\n> -- sect3 Collecting Crash Dumps on Windows\n\nIt doesn't seem like your patch has it quite that way? I see\n\n <sect2 id=\"installation-notes-mingw\">\n <title>MinGW</title>\n...\n <sect2 id=\"installation-notes-windows\">\n <title>Windows</title>\n\nWhere \"Windows\" actually seems to solely describe visual studio? That seems\nconfusing.\n\n\n> diff --git a/src/port/pgstrsignal.c b/src/port/pgstrsignal.c\n> index 7d76d1cca9..8c10a760c6 100644\n> --- a/src/port/pgstrsignal.c\n> +++ b/src/port/pgstrsignal.c\n> @@ -6,9 +6,6 @@\n> * On platforms compliant with modern POSIX, this just wraps strsignal(3).\n> * Elsewhere, we do the best we can.\n> *\n> - * This file is not currently built in MSVC builds, since it's useless\n> - * on non-Unix platforms.\n> - *\n> * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group\n> * Portions Copyright (c) 1994, Regents of the University of California\n> *\n\nHuh, so this was wrong since the code was added? 
For a moment I thought I'd\nunintentionally promoted it to be built by default, but ...\n\n\n> index eca930ae47..14c9905b60 100644\n> --- a/src/bin/pgevent/meson.build\n> +++ b/src/bin/pgevent/meson.build\n> @@ -14,7 +14,6 @@ pgevent_sources += rc_bin_gen.process(win32ver_rc, extra_args: [\n>\n> pgevent_sources += windows.compile_resources('pgmsgevent.rc')\n>\n> -# FIXME: copied from Mkvcbuild.pm, but I don't think that's the right approach\n> pgevent_link_args = []\n> if cc.get_id() == 'msvc'\n> pgevent_link_args += '/ignore:4104'\n\nI think it's worth leaving a trail indicating that adding this\nwarning-suppression is dubious at best. It seems to pretty obviously paper\nover us exporting the symbols the wrong way:\nhttps://learn.microsoft.com/en-us/cpp/error-messages/tool-errors/linker-tools-warning-lnk4104?view=msvc-170\n\nWhich pretty clearly explains that pgevent.def is wrong.\n\nI just can't really test it, nor does it have test. Otherwise I might have\nfixed it.\n\n\n> @@ -53,10 +53,25 @@ AC_DEFUN([PGAC_CHECK_PERL_CONFIGS],\n> # would be fatal to try to compile PL/Perl to a different libc ABI than core\n> # Postgres uses. The available information says that most symbols that affect\n> # Perl's own ABI begin with letters, so it's almost sufficient to adopt -D\n> -# switches for symbols not beginning with underscore. Some exceptions are the\n> -# Windows-specific -D_USE_32BIT_TIME_T and -D__MINGW_USE_VC2005_COMPAT; see\n> -# Mkvcbuild.pm for details. We absorb the former when Perl reports it. Perl\n> -# never reports the latter, and we don't attempt to deduce when it's needed.\n> +# switches for symbols not beginning with underscore.\n> +\n> +# Some exceptions are the Windows-specific -D_USE_32BIT_TIME_T and\n> +# -D__MINGW_USE_VC2005_COMPAT. To be exact, Windows offers several 32-bit ABIs.\n> +# Perl is sensitive to sizeof(time_t), one of the ABI dimensions. To get\n> +# 32-bit time_t, use \"cl -D_USE_32BIT_TIME_T\" or plain \"gcc\". 
For 64-bit\n> +# time_t, use \"gcc -D__MINGW_USE_VC2005_COMPAT\" or plain \"cl\". Before MSVC\n> +# 2005, plain \"cl\" chose 32-bit time_t. PostgreSQL doesn't support building\n> +# with pre-MSVC-2005 compilers, but it does support linking to Perl built with\n> +# such a compiler. MSVC-built Perl 5.13.4 and later report -D_USE_32BIT_TIME_T\n> +# in $Config{ccflags} if applicable, but MinGW-built Perl never reports\n> +# -D_USE_32BIT_TIME_T despite typically needing it.\n\nHm, it's pretty odd to have comments about cl.exe here, given that it can't\neven be used with msvc.\n\nMy impression from testing this is that absorbing the flag from perl suffices\nwith strawberry perl and mingw perl, both when building with mingw and msvc.\n\n\n> +# Ignore the $Config{ccflags} opinion about -D_USE_32BIT_TIME_T, and use a\n> +# runtime test to deduce the ABI Perl expects. Specifically, test use of\n> +# PL_modglobal, which maps to a PerlInterpreter field whose position depends\n> +# on sizeof(time_t). We absorb the former when Perl reports it. Perl never\n> +# reports the latter, and we don't attempt to deduce when it's needed.\n\nI don't think this is implemented anywhere now?\n\n\n> + <para>\n> + PostgreSQL for Windows can be built using meson, as described\n> + in <xref linkend=\"install-meson\"/>.\n> + The native Windows port requires a 32 or 64-bit version of Windows\n> + 2000 or later. Earlier operating systems do\n> + not have sufficient infrastructure (but Cygwin may be used on\n> + those).\n> + </para>\n\nIs this actually true? I don't think we build on win2k...\n\n\n> + <para>\n> + Native builds of <application>psql</application> don't support command\n> + line editing. 
The <productname>Cygwin</productname> build does support\n> + command line editing, so it should be used where psql is needed for\n> + interactive use on <productname>Windows</productname>.\n> + </para>\n\nFWIW, the last time I tested it, readline worked.\n\nhttps://postgr.es/m/20221124023251.k4dnbmxuxmqzq7w3%40awork3.anarazel.de\n\n\n> + <para>\n> + PostgreSQL can be built using the Visual C++ compiler suite from Microsoft.\n> + These compilers can be either from <productname>Visual Studio</productname>,\n> + <productname>Visual Studio Express</productname> or some versions of the\n> + <productname>Microsoft Windows SDK</productname>. If you do not already have a\n> + <productname>Visual Studio</productname> environment set up, the easiest\n> + ways are to use the compilers from\n> + <productname>Visual Studio 2022</productname> or those in the\n> + <productname>Windows SDK 10</productname>, which are both free downloads\n> + from Microsoft.\n> + </para>\n\nI think we need a reference to mingw somewhere around here. I don't think\neverybody can be expected to just know that they should not have navigated to\n\"Windows\" but \"MinGW\".\n\n\n\n> + <variablelist>\n> + <varlistentry>\n> + <term><productname>ActiveState Perl</productname></term>\n> + <listitem><para>\n> + ActiveState Perl is required to run the build generation scripts. MinGW\n> + or Cygwin Perl will not work. 
It must also be present in the PATH.\n> + Binaries can be downloaded from\n> + <ulink url=\"https://www.activestate.com\"></ulink>\n> + (Note: version 5.14 or later is required,\n> + the free Standard Distribution is sufficient).\n> + </para></listitem>\n> + </varlistentry>\n\nContinuing to recommend ActiveState perl seems dubious, but I guess that's\nmaterial for another patch.\n\n\n> + <varlistentry>\n> + <term><productname>Bison</productname> and\n> + <productname>Flex</productname></term>\n> + <listitem>\n> + <para>\n> + <productname>Bison</productname> and <productname>Flex</productname> are\n> + required to build from Git, but not required when building from a release\n> + file. Only <productname>Bison</productname> versions 2.3 and later\n> + will work. <productname>Flex</productname> must be version 2.5.35 or later.\n> + </para>\n> +\n> + <para>\n> + Both <productname>Bison</productname> and <productname>Flex</productname>\n> + are included in the <productname>msys</productname> tool suite, available\n> + from <ulink url=\"http://www.mingw.org/wiki/MSYS\"></ulink> as part of the\n> + <productname>MinGW</productname> compiler suite.\n> + </para>\n> +\n> + <para>\n> + You will need to add the directory containing\n> + <filename>flex.exe</filename> and <filename>bison.exe</filename> to the\n> + PATH environment variable. In the case of MinGW, the directory is the\n> + <filename>\\msys\\1.0\\bin</filename> subdirectory of your MinGW\n> + installation directory.\n> + </para>\n\nI found it a lot easier to use https://github.com/lexxmark/winflexbison\n\n\n\n> + <varlistentry>\n> + <term><productname>MIT Kerberos</productname></term>\n> + <listitem><para>\n> + Required for GSSAPI authentication support. 
MIT Kerberos can be\n> + downloaded from\n> + <ulink url=\"https://web.mit.edu/Kerberos/dist/index.html\"></ulink>.\n> + </para></listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><productname>libxml2</productname> and\n> + <productname>libxslt</productname></term>\n> + <listitem><para>\n> + Required for XML support. Binaries can be downloaded from\n> + <ulink url=\"https://zlatkovic.com/pub/libxml\"></ulink> or source from\n> + <ulink url=\"http://xmlsoft.org\"></ulink>. Note that libxml2 requires iconv,\n> + which is available from the same download location.\n> + </para></listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><productname>LZ4</productname></term>\n> + <listitem><para>\n> + Required for supporting <productname>LZ4</productname> compression.\n> + Binaries and source can be downloaded from\n> + <ulink url=\"https://github.com/lz4/lz4/releases\"></ulink>.\n> + </para></listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><productname>Zstandard</productname></term>\n> + <listitem><para>\n> + Required for supporting <productname>Zstandard</productname> compression.\n> + Binaries and source can be downloaded from\n> + <ulink url=\"https://github.com/facebook/zstd/releases\"></ulink>.\n> + </para></listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><productname>OpenSSL</productname></term>\n> + <listitem><para>\n> + Required for SSL support. Binaries can be downloaded from\n> + <ulink url=\"https://slproweb.com/products/Win32OpenSSL.html\"></ulink>\n> + or source from <ulink url=\"https://www.openssl.org\"></ulink>.\n> + </para></listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><productname>ossp-uuid</productname></term>\n> + <listitem><para>\n> + Required for UUID-OSSP support (contrib only). 
Source can be\n> + downloaded from\n> + <ulink url=\"http://www.ossp.org/pkg/lib/uuid/\"></ulink>.\n> + </para></listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><productname>Python</productname></term>\n> + <listitem><para>\n> + Required for building <application>PL/Python</application>. Binaries can\n> + be downloaded from <ulink url=\"https://www.python.org\"></ulink>.\n> + </para></listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><productname>zlib</productname></term>\n> + <listitem><para>\n> + Required for compression support in <application>pg_dump</application>\n> + and <application>pg_restore</application>. Binaries can be downloaded\n> + from <ulink url=\"https://www.zlib.net\"></ulink>.\n> + </para></listitem>\n> + </varlistentry>\n\n\nExcept for openssl, where the link is somewhat valuable, the rest don't really\nseem to be specific to windows.\n\n\n> + <sect3 id=\"install-windows-full-64-bit\">\n> + <title>Special Considerations for 64-Bit Windows</title>\n> + <para>\n> + PostgreSQL will only build for the x64 architecture on 64-bit Windows.\n> + </para>\n> + <para>\n> + Mixing 32- and 64-bit versions in the same build tree is not supported.\n> + The build system will automatically detect if it's running in a 32- or\n> + 64-bit environment, and build PostgreSQL accordingly. For this reason, it\n> + is important to start the correct command prompt before building.\n> + </para>\n\nIsn't this directly contradicting the earlier\n> + The native Windows port requires a 32 or 64-bit version of Windows\n> + 2000 or later. Earlier operating systems do\n?\n\n> + <para>\n> + To use a server-side third party library such as <productname>Python</productname> or\n> + <productname>OpenSSL</productname>, this library <emphasis>must</emphasis> also be\n> + 64-bit. There is no support for loading a 32-bit library in a 64-bit\n> + server. 
Several of the third party libraries that PostgreSQL supports may\n> + only be available in 32-bit versions, in which case they cannot be used with\n> + 64-bit PostgreSQL.\n> + </para>\n> + </sect3>\n\nI.e. cannot be used with postgres at all.\n\n\nThank you for working on this!\n\n\n- Andres\n\n\n",
"msg_date": "Wed, 15 Nov 2023 17:07:03 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 11:27:13AM +0100, Peter Eisentraut wrote:\n> (Note, however, that your rebase didn't pick up commits e7814b40d0 and\n> b41b1a7f49 that I did yesterday. Please check that again.)\n\nIndeed. I need to absorb that properly.\n--\nMichael",
"msg_date": "Thu, 16 Nov 2023 10:15:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Wed, Nov 15, 2023 at 05:07:03PM -0800, Andres Freund wrote:\n> On 2023-11-15 13:49:06 +0900, Michael Paquier wrote:\n> Where \"Windows\" actually seems to solely describe visual studio? That seems\n> confusing.\n\nYeah, switch that to Visual.\n\n> Huh, so this was wrong since the code was added? For a moment I thought I'd\n> unintentionally promoted it to be built by default, but ...\n\nYes, I was wondering if there could be an argument for simplifying\nsome code here by pushing more logic into this wrapper, but I'm\nfinding that a bit unappealing, and building it under Visual has no\nactual consequence: it seems that we never call pg_strsignal() under\nWIN32.\n\n>> -# FIXME: copied from Mkvcbuild.pm, but I don't think that's the right approach\n>> pgevent_link_args = []\n>> if cc.get_id() == 'msvc'\n>> pgevent_link_args += '/ignore:4104'\n> \n> I think it's worth leaving a trail indicating that adding this\n> warning-suppression is dubious at best. It seems to pretty obviously paper\n> over us exporting the symbols the wrong way:\n> https://learn.microsoft.com/en-us/cpp/error-messages/tool-errors/linker-tools-warning-lnk4104?view=msvc-170\n> \n> Which pretty clearly explains that pgevent.def is wrong.\n> \n> I just can't really test it, nor does it have test. Otherwise I might have\n> fixed it.\n\nAgreed that there is a good argument for removing it at some point,\nwith a separate investigation. I've just added a XXX comment for now.\n\n>> @@ -53,10 +53,25 @@ AC_DEFUN([PGAC_CHECK_PERL_CONFIGS],\n>> # would be fatal to try to compile PL/Perl to a different libc ABI than core\n>> # Postgres uses. The available information says that most symbols that affect\n>> # Perl's own ABI begin with letters, so it's almost sufficient to adopt -D\n>> -# switches for symbols not beginning with underscore. Some exceptions are the\n>> -# Windows-specific -D_USE_32BIT_TIME_T and -D__MINGW_USE_VC2005_COMPAT; see\n>> -# Mkvcbuild.pm for details. 
We absorb the former when Perl reports it. Perl\n>> -# never reports the latter, and we don't attempt to deduce when it's needed.\n>> +# switches for symbols not beginning with underscore.\n>> +\n>> +# Some exceptions are the Windows-specific -D_USE_32BIT_TIME_T and\n>> +# -D__MINGW_USE_VC2005_COMPAT. To be exact, Windows offers several 32-bit ABIs.\n>> +# Perl is sensitive to sizeof(time_t), one of the ABI dimensions. To get\n>> +# 32-bit time_t, use \"cl -D_USE_32BIT_TIME_T\" or plain \"gcc\". For 64-bit\n>> +# time_t, use \"gcc -D__MINGW_USE_VC2005_COMPAT\" or plain \"cl\". Before MSVC\n>> +# 2005, plain \"cl\" chose 32-bit time_t. PostgreSQL doesn't support building\n>> +# with pre-MSVC-2005 compilers, but it does support linking to Perl built with\n>> +# such a compiler. MSVC-built Perl 5.13.4 and later report -D_USE_32BIT_TIME_T\n>> +# in $Config{ccflags} if applicable, but MinGW-built Perl never reports\n>> +# -D_USE_32BIT_TIME_T despite typically needing it.\n> \n> Hm, it's pretty odd to have comments about cl.exe here, given that it can't\n> even be used with msvc.\n> \n> My impression from testing this is that absorbing the flag from perl suffices\n> with strawberry perl and mingw perl, both when building with mingw and msvc.\n\nI was a bit uncomfortable with removing these references, but I\nsuspect that you are right and that they're outdated artifacts of the\npast. So I'm OK to remove the cl and gcc parts as the flags come from\n$PERL.\n\n>> +# Ignore the $Config{ccflags} opinion about -D_USE_32BIT_TIME_T, and use a\n>> +# runtime test to deduce the ABI Perl expects. Specifically, test use of\n>> +# PL_modglobal, which maps to a PerlInterpreter field whose position depends\n>> +# on sizeof(time_t). We absorb the former when Perl reports it. 
Perl never\n>> +# reports the latter, and we don't attempt to deduce when it's needed.\n> \n> I don't think this is implemented anywhere now?\n\nIndeed, that's now gone.\n\n>> + <para>\n>> + PostgreSQL for Windows can be built using meson, as described\n>> + in <xref linkend=\"install-meson\"/>.\n>> + The native Windows port requires a 32 or 64-bit version of Windows\n>> + 2000 or later. Earlier operating systems do\n>> + not have sufficient infrastructure (but Cygwin may be used on\n>> + those).\n>> + </para>\n> \n> Is this actually true? I don't think we build on win2k...\n\nNah, this is a reference outdated for ages. 495ed0ef2d72 has even\nbumped _WIN32_WINNT to require Windows 10 as the minimal runtime\nversion supported, so this needs to be updated and backpatched. The\nfirst two sentences can be simplified like that:\n- The native Windows port requires a 32 or 64-bit version of Windows\n- 2000 or later. Earlier operating systems do\n- not have sufficient infrastructure (but Cygwin may be used on\n- those).\n+ The native Windows port requires a 32 or 64-bit version of Windows\n+ 10 or later. Earlier operating systems do not have sufficient\n+ infrastructure.\n\nEven the second sentence could be entirely removed, I don't see much\nadvantage in keeping it. Would you be OK with that, as a separate\npatch? I've updated the refernce in the attached.\n\n>> + <para>\n>> + Native builds of <application>psql</application> don't support command\n>> + line editing. The <productname>Cygwin</productname> build does support\n>> + command line editing, so it should be used where psql is needed for\n>> + interactive use on <productname>Windows</productname>.\n>> + </para>\n> \n> FWIW, the last time I tested it, readline worked.\n> \n> https://postgr.es/m/20221124023251.k4dnbmxuxmqzq7w3%40awork3.anarazel.de\n\nOkay. I couldn't really make it work, FWIW. Perhaps this is just\nsomething that could be tweaked in a different patch. 
What you are\nmentioning requires quite a few steps, and I am not sure if this is\nthe safest and/or the easiest way to achieve that, TBH. I'd keep that\nas a separate investigation for now.\n\n>> + <para>\n>> + PostgreSQL can be built using the Visual C++ compiler suite from Microsoft.\n>> + These compilers can be either from <productname>Visual Studio</productname>,\n>> + <productname>Visual Studio Express</productname> or some versions of the\n>> + <productname>Microsoft Windows SDK</productname>. If you do not already have a\n>> + <productname>Visual Studio</productname> environment set up, the easiest\n>> + ways are to use the compilers from\n>> + <productname>Visual Studio 2022</productname> or those in the\n>> + <productname>Windows SDK 10</productname>, which are both free downloads\n>> + from Microsoft.\n>> + </para>\n> \n> I think we need a reference to mingw somewhere around here. I don't think\n> everybody can be expected to just know that they should not have navigated to\n> \"Windows\" but \"MinGW\".\n\nHmm. But if this is a section only for Visual, it doesn't make sense\nto me to mention MinGW here? I am not sure to follow how this is in\nline with the previous comments.\n\n> Continuing to recommend ActiveState perl seems dubious, but I guess that's\n> material for another patch.\n\nI want to see this reference entirely gone at the end with more\nstuff trimmed. For now I'm focusing on a simpler restructure.\n\n>> + <varlistentry>\n>> + <term><productname>Bison</productname> and\n>> + <productname>Flex</productname></term>\n>> + <listitem>\n>> + <para>\n>> + <productname>Bison</productname> and <productname>Flex</productname> are\n>> + required to build from Git, but not required when building from a release\n>> + file. Only <productname>Bison</productname> versions 2.3 and later\n>> + will work. 
<productname>Flex</productname> must be version 2.5.35 or later.\n>> + </para>\n>> +\n>> + <para>\n>> + Both <productname>Bison</productname> and <productname>Flex</productname>\n>> + are included in the <productname>msys</productname> tool suite, available\n>> + from <ulink url=\"http://www.mingw.org/wiki/MSYS\"></ulink> as part of the\n>> + <productname>MinGW</productname> compiler suite.\n>> + </para>\n>> +\n>> + <para>\n>> + You will need to add the directory containing\n>> + <filename>flex.exe</filename> and <filename>bison.exe</filename> to the\n>> + PATH environment variable. In the case of MinGW, the directory is the\n>> + <filename>\\msys\\1.0\\bin</filename> subdirectory of your MinGW\n>> + installation directory.\n>> + </para>\n> \n> I found it a lot easier to use https://github.com/lexxmark/winflexbison\n\nAnd I've been using chocolatey to fetch some dependencies. I think\nthat trimming this stuff should be discussed in a separate patch.\n\n> Except for openssl, where the link is somewhat valuable, the rest don't really\n> seem to be specific to windows.\n\nYeah, these are historic. Still they can be useful for the Visual\nbuilds in some cases, I guess? I am not sure if it's worth pushing\nthese dependencies to the main meson page, somewhat polluting it for\nreferences that most people don't really care about. Anyway, I'm\ntempted to be less ambitious in a first step and just move that in the\ncompatibility section.\n\n>> + <sect3 id=\"install-windows-full-64-bit\">\n>> + <title>Special Considerations for 64-Bit Windows</title>\n>> + <para>\n>> + PostgreSQL will only build for the x64 architecture on 64-bit Windows.\n>> + </para>\n>> + <para>\n>> + Mixing 32- and 64-bit versions in the same build tree is not supported.\n>> + The build system will automatically detect if it's running in a 32- or\n>> + 64-bit environment, and build PostgreSQL accordingly. 
For this reason, it\n>> + is important to start the correct command prompt before building.\n>> + </para>\n> \n>> Isn't this directly contradicting the earlier\n>> + The native Windows port requires a 32 or 64-bit version of Windows\n>> + 2000 or later. Earlier operating systems do\n> ?\n\nHow it that? Mixing 32b and 64b libraries is not related to the\nminimal runtime version. This is just telling to not mix both.\n--\nMichael",
"msg_date": "Fri, 17 Nov 2023 10:01:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 16.11.23 02:07, Andres Freund wrote:\n> It doesn't seem like your patch has it quite that way? I see\n> \n> <sect2 id=\"installation-notes-mingw\">\n> <title>MinGW</title>\n> ...\n> <sect2 id=\"installation-notes-windows\">\n> <title>Windows</title>\n> \n> Where \"Windows\" actually seems to solely describe visual studio? That seems\n> confusing.\n\nI had suggested this arrangement as a way to reduce churn in this patch \nset. We'd just move over the existing separate chapter into a new \nsection, and then later consider further rearrangements.\n\nIt's not always clear where all of these things should go, as there are \nso many dimensions. For example, the existing sentence \"After you have \neverything installed, it is suggested that you run psql under CMD.EXE, \nas the MSYS console has buffering issues.\", does that apply to MinGW, or \nreally MSYS, or does it also apply if you build with Visual Something?\n\nUltimately, I don't think MinGW needs to be its own section.\n\n\n",
"msg_date": "Mon, 20 Nov 2023 08:00:09 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 08:00:09AM +0100, Peter Eisentraut wrote:\n> I had suggested this arrangement as a way to reduce churn in this patch set.\n> We'd just move over the existing separate chapter into a new section, and\n> then later consider further rearrangements.\n>\n> It's not always clear where all of these things should go, as there are so\n> many dimensions. For example, the existing sentence \"After you have\n> everything installed, it is suggested that you run psql under CMD.EXE, as\n> the MSYS console has buffering issues.\", does that apply to MinGW, or really\n> MSYS, or does it also apply if you build with Visual Something?\n\nEven for this specific one, are you sure that it still applies? :D\n\n> Ultimately, I don't think MinGW needs to be its own section.\n\nYes, agreed. The end result should be one single sect2 for Windows\ndivided into multiple sect3, perhaps themselves divided into more\nsect4 for each build method. As a whole, before refactoring all that,\nI'd be in favor of a slightly different strategy once the MSVC scripts\nand install-windows.sgml with its stuff specific to src/tools/msvc are\ngone:\n- Review all this section from the docs and trim them from everything\nthat we think is now irrelevant.\n- Look at the rest and see how it can be efficiently refactored into\nbalanced sections.\n\nYour suggestion to create a new sect2 for \"Windows\" as much as Andres'\nsuggestion are OK by as an intermediate step, and I suspect that the\nend result will likely not be that.\n--\nMichael",
"msg_date": "Mon, 20 Nov 2023 17:03:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Tue, Sep 26, 2023 at 12:17:04PM -0400, Andrew Dunstan wrote:\n> On 2023-09-26 Tu 01:25, NINGWEI CHEN wrote:\n>> hamerkop is not yet prepared for Meson builds, but we plan to work on this support soon.\n>> If we go with Meson builds exclusively right now, we will have to temporarily remove the master/HEAD for a while.\n> \n> I don't think we should switch to that until you're ready.\n\nAgreed that it would just be breaking a build for the sake of breaking\nit. Saying that, the last exchange that we had about hamerkop\nswitching to meson was two months ago. Are there any plans to do the\nswitch?\n--\nMichael",
"msg_date": "Mon, 4 Dec 2023 17:05:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "\nOn 2023-12-04 Mo 03:05, Michael Paquier wrote:\n> On Tue, Sep 26, 2023 at 12:17:04PM -0400, Andrew Dunstan wrote:\n>> On 2023-09-26 Tu 01:25, NINGWEI CHEN wrote:\n>>> hamerkop is not yet prepared for Meson builds, but we plan to work on this support soon.\n>>> If we go with Meson builds exclusively right now, we will have to temporarily remove the master/HEAD for a while.\n>> I don't think we should switch to that until you're ready.\n> Agreed that it would just be breaking a build for the sake of breaking\n> it. Saying that, the last exchange that we had about hamerkop\n> switching to meson was two months ago. Are there any plans to do the\n> switch?\n\n\nI just had a look at shifting bowerbird to use meson, and it got stymied \nat the c99 test, which apparently doesn't compile with anything less \nthan VS2019.\n\nI can upgrade bowerbird, but that will take rather longer. It looks like \nhamerkop is in th4e same boat.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 4 Dec 2023 15:11:47 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Mon, Dec 04, 2023 at 03:11:47PM -0500, Andrew Dunstan wrote:\n> I just had a look at shifting bowerbird to use meson, and it got stymied at\n> the c99 test, which apparently doesn't compile with anything less than\n> VS2019.\n> \n> I can upgrade bowerbird, but that will take rather longer. It looks like\n> hamerkop is in th4e same boat.\n\nOkay. Thanks for the update.\n--\nMichael",
"msg_date": "Tue, 5 Dec 2023 07:29:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 04.12.23 21:11, Andrew Dunstan wrote:\n> I just had a look at shifting bowerbird to use meson, and it got stymied \n> at the c99 test, which apparently doesn't compile with anything less \n> than VS2019.\n\nIf that is the case, then wouldn't that invalidate the documented claim \nthat you can build with VS2015 or newer?\n\n\n\n",
"msg_date": "Wed, 6 Dec 2023 07:18:26 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 6:31 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Nov 15, 2023 at 05:07:03PM -0800, Andres Freund wrote:\n> > On 2023-11-15 13:49:06 +0900, Michael Paquier wrote:\n> > Where \"Windows\" actually seems to solely describe visual studio? That seems\n> > confusing.\n>\n> Yeah, switch that to Visual.\n>\n> > Huh, so this was wrong since the code was added? For a moment I thought I'd\n> > unintentionally promoted it to be built by default, but ...\n>\n> Yes, I was wondering if there could be an argument for simplifying\n> some code here by pushing more logic into this wrapper, but I'm\n> finding that a bit unappealing, and building it under Visual has no\n> actual consequence: it seems that we never call pg_strsignal() under\n> WIN32.\n>\n> >> -# FIXME: copied from Mkvcbuild.pm, but I don't think that's the right approach\n> >> pgevent_link_args = []\n> >> if cc.get_id() == 'msvc'\n> >> pgevent_link_args += '/ignore:4104'\n> >\n> > I think it's worth leaving a trail indicating that adding this\n> > warning-suppression is dubious at best. It seems to pretty obviously paper\n> > over us exporting the symbols the wrong way:\n> > https://learn.microsoft.com/en-us/cpp/error-messages/tool-errors/linker-tools-warning-lnk4104?view=msvc-170\n> >\n> > Which pretty clearly explains that pgevent.def is wrong.\n> >\n> > I just can't really test it, nor does it have test. Otherwise I might have\n> > fixed it.\n>\n> Agreed that there is a good argument for removing it at some point,\n> with a separate investigation. I've just added a XXX comment for now.\n>\n> >> @@ -53,10 +53,25 @@ AC_DEFUN([PGAC_CHECK_PERL_CONFIGS],\n> >> # would be fatal to try to compile PL/Perl to a different libc ABI than core\n> >> # Postgres uses. The available information says that most symbols that affect\n> >> # Perl's own ABI begin with letters, so it's almost sufficient to adopt -D\n> >> -# switches for symbols not beginning with underscore. 
Some exceptions are the\n> >> -# Windows-specific -D_USE_32BIT_TIME_T and -D__MINGW_USE_VC2005_COMPAT; see\n> >> -# Mkvcbuild.pm for details. We absorb the former when Perl reports it. Perl\n> >> -# never reports the latter, and we don't attempt to deduce when it's needed.\n> >> +# switches for symbols not beginning with underscore.\n> >> +\n> >> +# Some exceptions are the Windows-specific -D_USE_32BIT_TIME_T and\n> >> +# -D__MINGW_USE_VC2005_COMPAT. To be exact, Windows offers several 32-bit ABIs.\n> >> +# Perl is sensitive to sizeof(time_t), one of the ABI dimensions. To get\n> >> +# 32-bit time_t, use \"cl -D_USE_32BIT_TIME_T\" or plain \"gcc\". For 64-bit\n> >> +# time_t, use \"gcc -D__MINGW_USE_VC2005_COMPAT\" or plain \"cl\". Before MSVC\n> >> +# 2005, plain \"cl\" chose 32-bit time_t. PostgreSQL doesn't support building\n> >> +# with pre-MSVC-2005 compilers, but it does support linking to Perl built with\n> >> +# such a compiler. MSVC-built Perl 5.13.4 and later report -D_USE_32BIT_TIME_T\n> >> +# in $Config{ccflags} if applicable, but MinGW-built Perl never reports\n> >> +# -D_USE_32BIT_TIME_T despite typically needing it.\n> >\n> > Hm, it's pretty odd to have comments about cl.exe here, given that it can't\n> > even be used with msvc.\n> >\n> > My impression from testing this is that absorbing the flag from perl suffices\n> > with strawberry perl and mingw perl, both when building with mingw and msvc.\n>\n> I was a bit uncomfortable with removing these references, but I\n> suspect that you are right and that they're outdated artifacts of the\n> past. So I'm OK to remove the cl and gcc parts as the flags come from\n> $PERL.\n>\n> >> +# Ignore the $Config{ccflags} opinion about -D_USE_32BIT_TIME_T, and use a\n> >> +# runtime test to deduce the ABI Perl expects. Specifically, test use of\n> >> +# PL_modglobal, which maps to a PerlInterpreter field whose position depends\n> >> +# on sizeof(time_t). We absorb the former when Perl reports it. 
Perl never\n> >> +# reports the latter, and we don't attempt to deduce when it's needed.\n> >\n> > I don't think this is implemented anywhere now?\n>\n> Indeed, that's now gone.\n>\n> >> + <para>\n> >> + PostgreSQL for Windows can be built using meson, as described\n> >> + in <xref linkend=\"install-meson\"/>.\n> >> + The native Windows port requires a 32 or 64-bit version of Windows\n> >> + 2000 or later. Earlier operating systems do\n> >> + not have sufficient infrastructure (but Cygwin may be used on\n> >> + those).\n> >> + </para>\n> >\n> > Is this actually true? I don't think we build on win2k...\n>\n> Nah, this is a reference outdated for ages. 495ed0ef2d72 has even\n> bumped _WIN32_WINNT to require Windows 10 as the minimal runtime\n> version supported, so this needs to be updated and backpatched. The\n> first two sentences can be simplified like that:\n> - The native Windows port requires a 32 or 64-bit version of Windows\n> - 2000 or later. Earlier operating systems do\n> - not have sufficient infrastructure (but Cygwin may be used on\n> - those).\n> + The native Windows port requires a 32 or 64-bit version of Windows\n> + 10 or later. Earlier operating systems do not have sufficient\n> + infrastructure.\n>\n> Even the second sentence could be entirely removed, I don't see much\n> advantage in keeping it. Would you be OK with that, as a separate\n> patch? I've updated the refernce in the attached.\n>\n> >> + <para>\n> >> + Native builds of <application>psql</application> don't support command\n> >> + line editing. The <productname>Cygwin</productname> build does support\n> >> + command line editing, so it should be used where psql is needed for\n> >> + interactive use on <productname>Windows</productname>.\n> >> + </para>\n> >\n> > FWIW, the last time I tested it, readline worked.\n> >\n> > https://postgr.es/m/20221124023251.k4dnbmxuxmqzq7w3%40awork3.anarazel.de\n>\n> Okay. I couldn't really make it work, FWIW. 
Perhaps this is just\n> something that could be tweaked in a different patch. What you are\n> mentioning requires quite a few steps, and I am not sure if this is\n> the safest and/or the easiest way to achieve that, TBH. I'd keep that\n> as a separate investigation for now.\n>\n> >> + <para>\n> >> + PostgreSQL can be built using the Visual C++ compiler suite from Microsoft.\n> >> + These compilers can be either from <productname>Visual Studio</productname>,\n> >> + <productname>Visual Studio Express</productname> or some versions of the\n> >> + <productname>Microsoft Windows SDK</productname>. If you do not already have a\n> >> + <productname>Visual Studio</productname> environment set up, the easiest\n> >> + ways are to use the compilers from\n> >> + <productname>Visual Studio 2022</productname> or those in the\n> >> + <productname>Windows SDK 10</productname>, which are both free downloads\n> >> + from Microsoft.\n> >> + </para>\n> >\n> > I think we need a reference to mingw somewhere around here. I don't think\n> > everybody can be expected to just know that they should not have navigated to\n> > \"Windows\" but \"MinGW\".\n>\n> Hmm. But if this is a section only for Visual, it doesn't make sense\n> to me to mention MinGW here? I am not sure to follow how this is in\n> line with the previous comments.\n>\n> > Continuing to recommend ActiveState perl seems dubious, but I guess that's\n> > material for another patch.\n>\n> I want to see this reference entirely gone at the end with more\n> stuff trimmed. For now I'm focusing on a simpler restructure.\n>\n> >> + <varlistentry>\n> >> + <term><productname>Bison</productname> and\n> >> + <productname>Flex</productname></term>\n> >> + <listitem>\n> >> + <para>\n> >> + <productname>Bison</productname> and <productname>Flex</productname> are\n> >> + required to build from Git, but not required when building from a release\n> >> + file. Only <productname>Bison</productname> versions 2.3 and later\n> >> + will work. 
<productname>Flex</productname> must be version 2.5.35 or later.\n> >> + </para>\n> >> +\n> >> + <para>\n> >> + Both <productname>Bison</productname> and <productname>Flex</productname>\n> >> + are included in the <productname>msys</productname> tool suite, available\n> >> + from <ulink url=\"http://www.mingw.org/wiki/MSYS\"></ulink> as part of the\n> >> + <productname>MinGW</productname> compiler suite.\n> >> + </para>\n> >> +\n> >> + <para>\n> >> + You will need to add the directory containing\n> >> + <filename>flex.exe</filename> and <filename>bison.exe</filename> to the\n> >> + PATH environment variable. In the case of MinGW, the directory is the\n> >> + <filename>\\msys\\1.0\\bin</filename> subdirectory of your MinGW\n> >> + installation directory.\n> >> + </para>\n> >\n> > I found it a lot easier to use https://github.com/lexxmark/winflexbison\n>\n> And I've been using chocolatey to fetch some dependencies. I think\n> that trimming this stuff should be discussed in a separate patch.\n>\n> > Except for openssl, where the link is somewhat valuable, the rest don't really\n> > seem to be specific to windows.\n>\n> Yeah, these are historic. Still they can be useful for the Visual\n> builds in some cases, I guess? I am not sure if it's worth pushing\n> these dependencies to the main meson page, somewhat polluting it for\n> references that most people don't really care about. Anyway, I'm\n> tempted to be less ambitious in a first step and just move that in the\n> compatibility section.\n>\n> >> + <sect3 id=\"install-windows-full-64-bit\">\n> >> + <title>Special Considerations for 64-Bit Windows</title>\n> >> + <para>\n> >> + PostgreSQL will only build for the x64 architecture on 64-bit Windows.\n> >> + </para>\n> >> + <para>\n> >> + Mixing 32- and 64-bit versions in the same build tree is not supported.\n> >> + The build system will automatically detect if it's running in a 32- or\n> >> + 64-bit environment, and build PostgreSQL accordingly. 
For this reason, it\n> >> + is important to start the correct command prompt before building.\n> >> + </para>\n> >\n> >> Isn't this directly contradicting the earlier\n> >> + The native Windows port requires a 32 or 64-bit version of Windows\n> >> + 2000 or later. Earlier operating systems do\n> > ?\n>\n> How it that? Mixing 32b and 64b libraries is not related to the\n> minimal runtime version. This is just telling to not mix both.\n> --\n>\nPatch is not applying. Please share the Rebased Version. Please find the error:\n\nD:\\Project\\Postgres>git am D:\\Project\\Patch\\v5-0001-Remove-MSVC-scripts.patch\nerror: patch failed: doc/src/sgml/filelist.sgml:38\nerror: doc/src/sgml/filelist.sgml: patch does not apply\nerror: patch failed: src/tools/msvc/Mkvcbuild.pm:1\nerror: src/tools/msvc/Mkvcbuild.pm: patch does not apply\nerror: patch failed: src/tools/msvc/Solution.pm:1\nerror: src/tools/msvc/Solution.pm: patch does not apply\nhint: Use 'git am --show-current-patch=diff' to see the failed patch\nApplying: Remove MSVC scripts\nPatch failed at 0001 Remove MSVC scripts\nWhen you have resolved this problem, run \"git am --continue\".\nIf you prefer to skip this patch, run \"git am --skip\" instead.\nTo restore the original branch and stop patching, run \"git am --abort\".\n\nThanks and Regards,\nShubham Khanna.\n\n\n",
"msg_date": "Wed, 6 Dec 2023 12:15:50 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Wed, Dec 06, 2023 at 12:15:50PM +0530, Shubham Khanna wrote:\n> Patch is not applying. Please share the Rebased Version. Please find the error:\n\nThanks. Here you go with a v6.\n\n> D:\\Project\\Postgres>git am D:\\Project\\Patch\\v5-0001-Remove-MSVC-scripts.patch\n> error: patch failed: doc/src/sgml/filelist.sgml:38\n> error: doc/src/sgml/filelist.sgml: patch does not apply\n\nThis is caused by the recent addition of targets-meson.\n\n> error: patch failed: src/tools/msvc/Mkvcbuild.pm:1\n> error: src/tools/msvc/Mkvcbuild.pm: patch does not apply\n> error: patch failed: src/tools/msvc/Solution.pm:1\n> error: src/tools/msvc/Solution.pm: patch does not apply\n\nAnd some stuff because these have been updated.\n--\nMichael",
"msg_date": "Wed, 6 Dec 2023 16:28:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "\nOn 2023-12-06 We 01:18, Peter Eisentraut wrote:\n> On 04.12.23 21:11, Andrew Dunstan wrote:\n>> I just had a look at shifting bowerbird to use meson, and it got \n>> stymied at the c99 test, which apparently doesn't compile with \n>> anything less than VS2019.\n>\n> If that is the case, then wouldn't that invalidate the documented \n> claim that you can build with VS2015 or newer?\n\n\nIndeed it would.\n\nHere's what the Microsoft site says at \n<https://learn.microsoft.com/en-us/cpp/build/reference/std-specify-language-standard-version?view=msvc-170>:\n\n\n> You can invoke the Microsoft C compiler by using the /TC or /Tc \n> compiler option. It's used by default for code that has a .c file \n> extension, unless overridden by a /TP or /Tp option. The default C \n> compiler (that is, the compiler when /std:c11 or /std:c17 isn't \n> specified) implements ANSI C89, but includes several Microsoft \n> extensions, some of which are part of ISO C99. Some Microsoft \n> extensions to C89 can be disabled by using the /Za compiler option, \n> but others remain in effect. It isn't possible to specify strict C89 \n> conformance. The compiler doesn't implement several required features \n> of C99, so it isn't possible to specify C99 conformance, either.\n\nBut the VS2019 compiler implements enough of C99 to pass our meson test, \nunlike VS2017. Maybe the test is too strict. After all, we know we can \nin fact build with the earlier versions.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 6 Dec 2023 11:27:44 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 06.12.23 17:27, Andrew Dunstan wrote:\n> But the VS2019 compiler implements enough of C99 to pass our meson test, \n> unlike VS2017. Maybe the test is too strict. After all, we know we can \n> in fact build with the earlier versions.\n\nI just realized that the C99 test is actually our own, not provided by \nmeson. (See \"c99_test\" in meson.build.)\n\nCan you try disabling a few bits of that to see what makes it pass for \nyou? I suspect it's the structfunc() call.\n\n\n\n",
"msg_date": "Wed, 6 Dec 2023 18:24:37 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "\nOn 2023-12-06 We 12:24, Peter Eisentraut wrote:\n> On 06.12.23 17:27, Andrew Dunstan wrote:\n>> But the VS2019 compiler implements enough of C99 to pass our meson \n>> test, unlike VS2017. Maybe the test is too strict. After all, we know \n>> we can in fact build with the earlier versions.\n>\n> I just realized that the C99 test is actually our own, not provided by \n> meson. (See \"c99_test\" in meson.build.)\n>\n> Can you try disabling a few bits of that to see what makes it pass for \n> you? I suspect it's the structfunc() call.\n\n\nYes, if I comment out the call to structfunc() the test passes on VS2017 \n(compiler version 19.15.26726)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 6 Dec 2023 15:52:10 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 06.12.23 21:52, Andrew Dunstan wrote:\n> \n> On 2023-12-06 We 12:24, Peter Eisentraut wrote:\n>> On 06.12.23 17:27, Andrew Dunstan wrote:\n>>> But the VS2019 compiler implements enough of C99 to pass our meson \n>>> test, unlike VS2017. Maybe the test is too strict. After all, we know \n>>> we can in fact build with the earlier versions.\n>>\n>> I just realized that the C99 test is actually our own, not provided by \n>> meson. (See \"c99_test\" in meson.build.)\n>>\n>> Can you try disabling a few bits of that to see what makes it pass for \n>> you? I suspect it's the structfunc() call.\n> \n> \n> Yes, if I comment out the call to structfunc() the test passes on VS2017 \n> (compiler version 19.15.26726)\n\nThis is strange, because we use code like that in the tree. There must \nbe some small detail that trips it up here.\n\nPerhaps try moving the definition of struct named_init_test outside of \nthe function, or make it a typedef.\n\n\n",
"msg_date": "Thu, 7 Dec 2023 08:07:42 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 2023-Dec-07, Peter Eisentraut wrote:\n\n> On 06.12.23 21:52, Andrew Dunstan wrote:\n\n> > Yes, if I comment out the call to structfunc() the test passes on VS2017\n> > (compiler version 19.15.26726)\n> \n> This is strange, because we use code like that in the tree. There must be\n> some small detail that trips it up here.\n\nWell, We have things like these\n\ntypedef struct _archiveOpts\n{\n\t...\n} ArchiveOpts;\n#define ARCHIVE_OPTS(...) &(ArchiveOpts){__VA_ARGS__}\n\nXL_ROUTINE is quite similar.\n\nThese are then used like\n ARCHIVE_OPTS(.tag = \"pg_largeobject\",\n .description = \"pg_largeobject\",\n .section = SECTION_PRE_DATA,\n .createStmt = loOutQry->data));\n\nso the difference is that we're passing a pointer to a struct, not\nthe struct bare, which is what c99_test is doing:\n\nstruct named_init_test {\n int a;\n int b;\n};\n\nint main() {\n ...\n structfunc((struct named_init_test){1, 0});\n}\n\nMaybe this would work if the function received the pointer too?\n\n\nextern void structfunc(struct named_init_test *);\n\n structfunc(&(struct named_init_test){1, 0});\n\nThe fact that this is called \"structfunc\" makes me wonder if the author\ndid indeed want to test passing a struct to the function. That'd be\nodd, since the interesting thing in this line is the expression used to\ninitialize the struct argument. (We do pass structs, eg. ObjectAddress\nto check_object_ownership; old code.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No renuncies a nada. No te aferres a nada.\"\n\n\n",
"msg_date": "Thu, 7 Dec 2023 12:33:35 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-04 15:11:47 -0500, Andrew Dunstan wrote:\n> I just had a look at shifting bowerbird to use meson, and it got stymied at\n> the c99 test, which apparently doesn't compile with anything less than\n> VS2019.\n\nWhat error or warning is being raised by msvc?\n\nAndres\n\n\n",
"msg_date": "Thu, 7 Dec 2023 09:05:13 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-07 12:33:35 +0100, Alvaro Herrera wrote:\n> Well, We have things like these\n> \n> typedef struct _archiveOpts\n> {\n> \t...\n> } ArchiveOpts;\n> #define ARCHIVE_OPTS(...) &(ArchiveOpts){__VA_ARGS__}\n> \n> XL_ROUTINE is quite similar.\n> \n> These are then used like\n> ARCHIVE_OPTS(.tag = \"pg_largeobject\",\n> .description = \"pg_largeobject\",\n> .section = SECTION_PRE_DATA,\n> .createStmt = loOutQry->data));\n> \n> so the difference is that we're passing a pointer to a struct, not\n> the struct bare, which is what c99_test is doing:\n> \n> struct named_init_test {\n> int a;\n> int b;\n> };\n> \n> int main() {\n> ...\n> structfunc((struct named_init_test){1, 0});\n> }\n> \n> Maybe this would work if the function received the pointer too?\n> \n> extern void structfunc(struct named_init_test *);\n> \n> structfunc(&(struct named_init_test){1, 0});\n> \n> The fact that this is called \"structfunc\" makes me wonder if the author\n> did indeed want to test passing a struct to the function. That'd be\n> odd, since the interesting thing in this line is the expression used to\n> initialize the struct argument. (We do pass structs, eg. ObjectAddress\n> to check_object_ownership; old code.)\n\nIt seems like both might be interesting? But I think there's no reason to not\nevolve this test if we need to. I think I wrote it testing with a few old *nix\ncompilers to see where -std=c99 was needed, not more. It's not too surprising\nthat it might need some massaging for older msvc...\n\n\nHowever: I used godbolt to compile the test code on msvc, and it seems to\nbuild with 19.15 (which is the version Andrew referenced upthread), with a\nwarning that's triggered independent of the structfunc bit.\n\nhttps://godbolt.org/z/j99E9MeEK\n\n\nAndrew, could you attach meson.log from the failed build?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 7 Dec 2023 09:34:00 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
    "msg_contents": "\nOn 2023-12-07 Th 12:34, Andres Freund wrote:\n> Hi,\n>\n> On 2023-12-07 12:33:35 +0100, Alvaro Herrera wrote:\n>> Well, We have things like these\n>>\n>> typedef struct _archiveOpts\n>> {\n>> \t...\n>> } ArchiveOpts;\n>> #define ARCHIVE_OPTS(...) &(ArchiveOpts){__VA_ARGS__}\n>>\n>> XL_ROUTINE is quite similar.\n>>\n>> These are then used like\n>> ARCHIVE_OPTS(.tag = \"pg_largeobject\",\n>> .description = \"pg_largeobject\",\n>> .section = SECTION_PRE_DATA,\n>> .createStmt = loOutQry->data));\n>>\n>> so the difference is that we're passing a pointer to a struct, not\n>> the struct bare, which is what c99_test is doing:\n>>\n>> struct named_init_test {\n>> int a;\n>> int b;\n>> };\n>>\n>> int main() {\n>> ...\n>> structfunc((struct named_init_test){1, 0});\n>> }\n>>\n>> Maybe this would work if the function received the pointer too?\n>>\n>> extern void structfunc(struct named_init_test *);\n>>\n>> structfunc(&(struct named_init_test){1, 0});\n>>\n>> The fact that this is called \"structfunc\" makes me wonder if the author\n>> did indeed want to test passing a struct to the function. That'd be\n>> odd, since the interesting thing in this line is the expression used to\n>> initialize the struct argument. (We do pass structs, eg. ObjectAddress\n>> to check_object_ownership; old code.)\n> It seems like both might be interesting? But I think there's no reason to not\n> evolve this test if we need to. I think I wrote it testing with a few old *nix\n> compilers to see where -std=c99 was needed, not more. It's not too surprising\n> that it might need some massaging for older msvc...\n>\n>\n> However: I used godbolt to compile the test code on msvc, and it seems to\n> build with 19.15 (which is the version Andrew referenced upthread), with a\n> warning that's triggered independent of the structfunc bit.\n>\n> https://godbolt.org/z/j99E9MeEK\n>\n>\n> Andrew, could you attach meson.log from the failed build?\n>\n>\n\nThe odd thing is I tried to reproduce the issue and instead it's now \ncompiling with VS2017. The only thing I have changed on the machine was \nto install VS2022 alongside VS2017, as well as switching which perl to \nlink to, which should have no effect on this.\n\n\nSo never mind, we make progress.\n\n\nNot sure about VS 2015 though.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 7 Dec 2023 13:49:44 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-07 13:49:44 -0500, Andrew Dunstan wrote:\n> On 2023-12-07 Th 12:34, Andres Freund wrote:\n> > However: I used godbolt to compile the test code on msvc, and it seems to\n> > build with 19.15 (which is the version Andrew referenced upthread), with a\n> > warning that's triggered independent of the structfunc bit.\n> > \n> > https://godbolt.org/z/j99E9MeEK\n> > \n> > \n> > Andrew, could you attach meson.log from the failed build?\n> > \n> > \n> \n> The odd thing is I tried to reproduce the issue and instead it's now\n> compiling with VS2017. The only thing I have changed on the machine was to\n> install VS2022 alongside VS2017, as well as switching which perl to link to,\n> which should have no effect on this.\n\nThe error might have been due to an older C runtime. I think installing visual\nstudio 2022 might have lead to updating the C runtime associated with VS 2017\nto a newer release of VS 2017. Or even updated the version of VS 2017 - I find\nthe version numbers of msvc vs visual studio incomprehensible, but I think\n19.15.26726 is from 2017, missing a lot of bugfixes that were made to VS 2017\nsince.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 7 Dec 2023 11:01:18 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Thu, Dec 7, 2023 at 5:27 AM Andrew Dunstan <[email protected]> wrote:\n> But the VS2019 compiler implements enough of C99 to pass our meson test,\n> unlike VS2017. Maybe the test is too strict. After all, we know we can\n> in fact build with the earlier versions.\n\n. o O { I wish master would systematically drop support for compilers\nthat were out of 'mainstream' vendor support. }\n\n\n",
"msg_date": "Fri, 8 Dec 2023 08:50:47 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Fri, Dec 08, 2023 at 08:50:47AM +1300, Thomas Munro wrote:\n> . o O { I wish master would systematically drop support for compilers\n> that were out of 'mainstream' vendor support. }\n\nCalling for a patch once, twice ;p \n\nFWIW, I would not mind marking VS 2019 as the minimum requirement on\nHEAD once the MSVC scripts are gone. The oldest VS version tested in\nthe buildfarm is hamerkop with VS2017, still under the MSVC scripts.\nI'd like to believe that a switch to meson implies a newer version of\nVS installed there.\n--\nMichael",
"msg_date": "Fri, 8 Dec 2023 14:23:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Tue, Dec 05, 2023 at 07:29:59AM +0900, Michael Paquier wrote:\n> Okay. Thanks for the update.\n\nWhile in Prague, Andres and Peter E. have mentioned me that we perhaps\nhad better move on with this patch sooner than later, without waiting\nfor the two buildfarm members to do the switch because much more\ncleanup is required for the documentation once the scripts are\nremoved.\n\nSo, any objections with the patch as presented to remove the scripts\nwhile moving the existing doc blocks from install-windows.sgml that\nstill need more discussion?\n--\nMichael",
"msg_date": "Wed, 13 Dec 2023 15:23:14 +0100",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "\nOn 2023-12-13 We 09:23, Michael Paquier wrote:\n> On Tue, Dec 05, 2023 at 07:29:59AM +0900, Michael Paquier wrote:\n>> Okay. Thanks for the update.\n> While in Prague, Andres and Peter E. have mentioned me that we perhaps\n> had better move on with this patch sooner than later, without waiting\n> for the two buildfarm members to do the switch because much more\n> cleanup is required for the documentation once the scripts are\n> removed.\n>\n> So, any objections with the patch as presented to remove the scripts\n> while moving the existing doc blocks from install-windows.sgml that\n> still need more discussion?\n\n\n\nTBH I'd prefer to wait. But I have had a couple more urgent things on my \nplate. I hope to get back to it before New Year. In the meantime I have \nswitched bowerbird to building only STABLE branches.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 13 Dec 2023 16:27:12 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Mon, 4 Dec 2023 17:05:24 +0900\nMichael Paquier <[email protected]> wrote:\n\n> On Tue, Sep 26, 2023 at 12:17:04PM -0400, Andrew Dunstan wrote:\n> > On 2023-09-26 Tu 01:25, NINGWEI CHEN wrote:\n> >> hamerkop is not yet prepared for Meson builds, but we plan to work on this support soon.\n> >> If we go with Meson builds exclusively right now, we will have to temporarily remove the master/HEAD for a while.\n> > \n> > I don't think we should switch to that until you're ready.\n> \n> Agreed that it would just be breaking a build for the sake of breaking\n> it. Saying that, the last exchange that we had about hamerkop\n> switching to meson was two months ago. Are there any plans to do the\n> switch?\n> --\n> Michael\n\n\nSorry for the delayed response. \nWe are currently working on transitioning to meson build at hamerkop and \nanticipating that this can be accomplished by no later than January.\n\nIf the old build scripts are removed before that, hamerkop will be temporarily \ntaken off the master branch, and will rejoin once the adjustment is done.\n\n\nBest Regards.\n-- \nSRA OSS LLC\nChen Ningwei<[email protected]>\n\n\n",
"msg_date": "Thu, 14 Dec 2023 11:43:14 +0900",
"msg_from": "NINGWEI CHEN <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Thu, Dec 14, 2023 at 11:43:14AM +0900, NINGWEI CHEN wrote:\n> Sorry for the delayed response. \n> We are currently working on transitioning to meson build at hamerkop and \n> anticipating that this can be accomplished by no later than January.\n> \n> If the old build scripts are removed before that, hamerkop will be temporarily \n> taken off the master branch, and will rejoin once the adjustment is done.\n\nThanks for the update. Let's move on with that on HEAD then. I've\nwanted some room to work on improving the set of docs for v17.\n--\nMichael",
"msg_date": "Sat, 16 Dec 2023 10:07:04 +0100",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Wed, 6 Dec 2023 at 12:59, Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Dec 06, 2023 at 12:15:50PM +0530, Shubham Khanna wrote:\n> > Patch is not applying. Please share the Rebased Version. Please find the error:\n>\n> Thanks. Here you go with a v6.\n\nFew comments:\n1) Now that the MSVC build scripts are removed, should we have the\nreference to \"MSVC build scripts\" here?\nltree.h:\n.....\n/*\n * LOWER_NODE used to be defined in the Makefile via the compile flags.\n * However the MSVC build scripts neglected to do the same which resulted in\n * MSVC builds not using LOWER_NODE. Since then, the MSVC scripts have been\n * modified to look for -D compile flags in Makefiles, so here, in order to\n * get the historic behavior of LOWER_NODE not being defined on MSVC, we only\n * define it when not building in that environment. This is important as we\n * want to maintain the same LOWER_NODE behavior after a pg_upgrade.\n */\n#ifndef _MSC_VER\n#define LOWER_NODE\n#endif\n.....\n\n2) I had seen that if sed/gzip is not available meson build will fail:\n2.a)\nProgram gsed sed found: NO\nmeson.build:334:6: ERROR: Program 'gsed sed' not found or not executable\n\n2.b)\nProgram gzip found: NO\nmeson.build:337:7: ERROR: Program 'gzip' not found or not executable\n\nShould we mention sed and gzip here?\n+ <varlistentry>\n+ <term><productname>Bison</productname> and\n+ <productname>Flex</productname></term>\n+ <listitem>\n+ <para>\n+ <productname>Bison</productname> and\n<productname>Flex</productname> are\n+ required. Only <productname>Bison</productname> versions 2.3 and later\n+ will work. <productname>Flex</productname> must be version\n2.5.35 or later.\n+ </para>\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 18 Dec 2023 16:19:33 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 18.12.23 11:49, vignesh C wrote:\n> Few comments:\n> 1) Now that the MSVC build scripts are removed, should we have the\n> reference to \"MSVC build scripts\" here?\n> ltree.h:\n\nI think this note is correct and can be kept, as it explains the \nhistorical context.\n\n> 2) I had seen that if sed/gzip is not available meson build will fail:\n> 2.a)\n> Program gsed sed found: NO\n> meson.build:334:6: ERROR: Program 'gsed sed' not found or not executable\n\nYes, this would need to be improved. Currently, sed is only required if \neither selinux or dtrace is enabled, which isn't supported on Windows. \nBut we should adjust the build scripts to not fail the top-level setup \nrun unless those options are enabled.\n\n> 2.b)\n> Program gzip found: NO\n> meson.build:337:7: ERROR: Program 'gzip' not found or not executable\n\ngzip is only required for certain test suites, so again we should adjust \nthe build scripts to not fail the build but instead skip the tests as \nappropriate.\n\n\n\n",
"msg_date": "Mon, 18 Dec 2023 14:52:41 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 18.12.23 14:52, Peter Eisentraut wrote:\n>> 2) I had seen that if sed/gzip is not available meson build will fail:\n>> 2.a)\n>> Program gsed sed found: NO\n>> meson.build:334:6: ERROR: Program 'gsed sed' not found or not executable\n> \n> Yes, this would need to be improved. Currently, sed is only required if \n> either selinux or dtrace is enabled, which isn't supported on Windows. \n> But we should adjust the build scripts to not fail the top-level setup \n> run unless those options are enabled.\n> \n>> 2.b)\n>> Program gzip found: NO\n>> meson.build:337:7: ERROR: Program 'gzip' not found or not executable\n> \n> gzip is only required for certain test suites, so again we should adjust \n> the build scripts to not fail the build but instead skip the tests as \n> appropriate.\n\nHere are patches for these two issues. More testing would be appreciated.",
"msg_date": "Tue, 19 Dec 2023 16:24:02 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Tue Dec 19, 2023 at 9:24 AM CST, Peter Eisentraut wrote:\n> On 18.12.23 14:52, Peter Eisentraut wrote:\n> >> 2) I had seen that if sed/gzip is not available meson build will fail:\n> >> 2.a)\n> >> Program gsed sed found: NO\n> >> meson.build:334:6: ERROR: Program 'gsed sed' not found or not executable\n> > \n> > Yes, this would need to be improved. Currently, sed is only required if \n> > either selinux or dtrace is enabled, which isn't supported on Windows. \n> > But we should adjust the build scripts to not fail the top-level setup \n> > run unless those options are enabled.\n> > \n> >> 2.b)\n> >> Program gzip found: NO\n> >> meson.build:337:7: ERROR: Program 'gzip' not found or not executable\n> > \n> > gzip is only required for certain test suites, so again we should adjust \n> > the build scripts to not fail the build but instead skip the tests as \n> > appropriate.\n>\n> Here are patches for these two issues. More testing would be appreciated.\n\nMeson looks good to me!\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 19 Dec 2023 10:10:45 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "Hi,\n\nOn Tue, 19 Dec 2023 at 18:24, Peter Eisentraut <[email protected]> wrote:\n>\n> On 18.12.23 14:52, Peter Eisentraut wrote:\n> >> 2) I had seen that if sed/gzip is not available meson build will fail:\n> >> 2.a)\n> >> Program gsed sed found: NO\n> >> meson.build:334:6: ERROR: Program 'gsed sed' not found or not executable\n> >\n> > Yes, this would need to be improved. Currently, sed is only required if\n> > either selinux or dtrace is enabled, which isn't supported on Windows.\n> > But we should adjust the build scripts to not fail the top-level setup\n> > run unless those options are enabled.\n> >\n> >> 2.b)\n> >> Program gzip found: NO\n> >> meson.build:337:7: ERROR: Program 'gzip' not found or not executable\n> >\n> > gzip is only required for certain test suites, so again we should adjust\n> > the build scripts to not fail the build but instead skip the tests as\n> > appropriate.\n>\n> Here are patches for these two issues. More testing would be appreciated.\n\n0001-meson-Require-sed-only-when-needed:\n\n+sed = find_program(get_option('SED'), 'sed', native: true,\n+ required: get_option('dtrace').enabled() or\nget_option('selinux').enabled())\n\ndtrace is disabled as default but selinux is set to auto. So, meson\ncould find selinux ( because of the auto ) and fail to find sed, then\ncompilation will fail with:\ncontrib/sepgsql/meson.build:34:19: ERROR: Tried to use not-found\nexternal program in \"command\"\n\nI think we need to require sed when dtrace or selinux is found, not by\nlooking at the return value of the get_option().enabled().\n\nSecond patch looks good to me.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 19 Dec 2023 19:44:48 +0300",
"msg_from": "Nazir Bilal Yavuz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 02:52:41PM +0100, Peter Eisentraut wrote:\n> On 18.12.23 11:49, vignesh C wrote:\n>> Few comments:\n>> 1) Now that the MSVC build scripts are removed, should we have the\n>> reference to \"MSVC build scripts\" here?\n>> ltree.h:\n> \n> I think this note is correct and can be kept, as it explains the historical\n> context.\n\nYeah, that's something I was pondering about for a bit a few weeks ago\nbut keeping the historical context is still the most important piece\nto me.\n\n>> 2) I had seen that if sed/gzip is not available meson build will fail:\n>> 2.a)\n>> Program gsed sed found: NO\n>> meson.build:334:6: ERROR: Program 'gsed sed' not found or not executable\n> \n> Yes, this would need to be improved. Currently, sed is only required if\n> either selinux or dtrace is enabled, which isn't supported on Windows. But\n> we should adjust the build scripts to not fail the top-level setup run\n> unless those options are enabled.\n> \n>> 2.b)\n>> Program gzip found: NO\n>> meson.build:337:7: ERROR: Program 'gzip' not found or not executable\n> \n> gzip is only required for certain test suites, so again we should adjust the\n> build scripts to not fail the build but instead skip the tests as\n> appropriate.\n\nOops.\n--\nMichael",
"msg_date": "Wed, 20 Dec 2023 08:52:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 05:03:28PM +0900, Michael Paquier wrote:\n> Your suggestion to create a new sect2 for \"Windows\" as much as Andres'\n> suggestion are OK by as an intermediate step, and I suspect that the\n> end result will likely not be that.\n\nIt took me some time to get back to this one, and just applied the\npatch removing the scripts.\n\nAt the end, I have gone with the addition of a subsection named\n\"Visual\" for now in the platform-specific notes, keeping all the\ninformation originally in install-windows.sgml the same. A proposal\nof patch to clean up the docs is on my TODO list for the next CF.\n--\nMichael",
"msg_date": "Wed, 20 Dec 2023 09:48:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 04:24:02PM +0100, Peter Eisentraut wrote:\n> Here are patches for these two issues. More testing would be appreciated.\n>\n> --- a/contrib/basebackup_to_shell/meson.build\n> +++ b/contrib/basebackup_to_shell/meson.build\n> @@ -24,7 +24,7 @@ tests += {\n> 'tests': [\n> 't/001_basic.pl',\n> ],\n> - 'env': {'GZIP_PROGRAM': gzip.path(),\n> - 'TAR': tar.path()},\n> + 'env': {'GZIP_PROGRAM': gzip.found() ? gzip.path() : '',\n> + 'TAR': tar.found() ? tar.path() : '' },\n> },\n\nHmm. Interesting. So this basically comes down to the fact that GZIP\nand TAR are required in ./configure because distcheck has a hard\ndependency on both, but we don't support this target in meson. Is\nthat right?\n--\nMichael",
"msg_date": "Wed, 20 Dec 2023 10:14:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 19.12.23 17:44, Nazir Bilal Yavuz wrote:\n> I think we need to require sed when dtrace or selinux is found, not by\n> looking at the return value of the get_option().enabled().\n\nRight. I think the correct condition would be\n\nsed = find_program(get_option('SED'), 'sed', native: true,\n required: dtrace.found() or selinux.found())\n\nI was trying to avoid that because it would require moving the \nfind_program() to somewhere later in the top-level meson.build, but I \nsuppose we have to do it that way.\n\n\n\n",
"msg_date": "Wed, 20 Dec 2023 08:31:01 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 20.12.23 02:14, Michael Paquier wrote:\n> Hmm. Interesting. So this basically comes down to the fact that GZIP\n> and TAR are required in ./configure because distcheck has a hard\n> dependency on both, but we don't support this target in meson. Is\n> that right?\n\nNo, the issue is that gzip and tar are not required by configure (it \nwill proceed if they are not found), but they are currently required by \nmeson.build (it will error if they are not found).\n\nThey are used in two different areas. One is for \"make dist\", but that \ndoesn't affect meson anyway.\n\nThe other is various test suites. The test suites are already set up to \nskip tests when gzip and tar are not found.\n\n\n\n",
"msg_date": "Wed, 20 Dec 2023 08:36:53 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 2023-12-20 09:48:37 +0900, Michael Paquier wrote:\n> On Mon, Nov 20, 2023 at 05:03:28PM +0900, Michael Paquier wrote:\n> > Your suggestion to create a new sect2 for \"Windows\" as much as Andres'\n> > suggestion are OK by as an intermediate step, and I suspect that the\n> > end result will likely not be that.\n> \n> It took me some time to get back to this one, and just applied the\n> patch removing the scripts.\n\nWohooo!\n\n\n",
"msg_date": "Wed, 20 Dec 2023 02:26:48 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Tue, 19 Dec 2023 at 20:54, Peter Eisentraut <[email protected]> wrote:\n>\n> On 18.12.23 14:52, Peter Eisentraut wrote:\n> >> 2) I had seen that if sed/gzip is not available meson build will fail:\n> >> 2.a)\n> >> Program gsed sed found: NO\n> >> meson.build:334:6: ERROR: Program 'gsed sed' not found or not executable\n> >\n> > Yes, this would need to be improved. Currently, sed is only required if\n> > either selinux or dtrace is enabled, which isn't supported on Windows.\n> > But we should adjust the build scripts to not fail the top-level setup\n> > run unless those options are enabled.\n> >\n> >> 2.b)\n> >> Program gzip found: NO\n> >> meson.build:337:7: ERROR: Program 'gzip' not found or not executable\n> >\n> > gzip is only required for certain test suites, so again we should adjust\n> > the build scripts to not fail the build but instead skip the tests as\n> > appropriate.\n>\n> Here are patches for these two issues. More testing would be appreciated.\n\nThanks for the patches, Windows build is successful without these binaries.\nIn linux when I try with Dtrace enabled, it throws the following error:\nCompiler for C supports arguments -fPIC: YES\nCompiler for C supports link arguments -Wl,--as-needed: YES\nConfiguring pg_config_paths.h using configuration\n\nsrc/include/utils/meson.build:39:2: ERROR: Tried to use not-found\nexternal program in \"command\"\n\nWith Dtrace enabled we should throw the original error that we were\ngetting i.e.:\nERROR: Program sed not found or not executable\n\nAnother observation is that we could include the executable name in\nthis case something like:\nERROR: Tried to use not-found external program \"sed\" in \"command\"\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 20 Dec 2023 16:15:55 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
    "msg_contents": "On 2023-12-20 16:15:55 +0530, vignesh C wrote:\n> On Tue, 19 Dec 2023 at 20:54, Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 18.12.23 14:52, Peter Eisentraut wrote:\n> > >> 2) I had seen that if sed/gzip is not available meson build will fail:\n> > >> 2.a)\n> > >> Program gsed sed found: NO\n> > >> meson.build:334:6: ERROR: Program 'gsed sed' not found or not executable\n> > >\n> > > Yes, this would need to be improved. Currently, sed is only required if\n> > > either selinux or dtrace is enabled, which isn't supported on Windows.\n> > > But we should adjust the build scripts to not fail the top-level setup\n> > > run unless those options are enabled.\n> > >\n> > >> 2.b)\n> > >> Program gzip found: NO\n> > >> meson.build:337:7: ERROR: Program 'gzip' not found or not executable\n> > >\n> > > gzip is only required for certain test suites, so again we should adjust\n> > > the build scripts to not fail the build but instead skip the tests as\n> > > appropriate.\n> >\n> > Here are patches for these two issues. More testing would be appreciated.\n> \n> Thanks for the patches, Windows build is successful without these binaries.\n> In linux when I try with Dtrace enabled, it throws the following error:\n> Compiler for C supports arguments -fPIC: YES\n> Compiler for C supports link arguments -Wl,--as-needed: YES\n> Configuring pg_config_paths.h using configuration\n> \n> src/include/utils/meson.build:39:2: ERROR: Tried to use not-found\n> external program in \"command\"\n> \n> With Dtrace enabled we should throw the original error that we were\n> getting i.e.:\n> ERROR: Program sed not found or not executable\n\nI think the problem is that the current formulation in the patches doesn't\nwith deal with dtrace=auto. I think we ought to make that in a proper feature\nchecking block instead of just checking the presence of the dtrace binary.\n\nHm, or perhaps we should just get rid of sed use altogether. The sepgsql case\nis trivially translateable to perl, and postprocess_dtrace.sed isn't\nmuch harder.\n\n\nOTOH, I actually don't think it's valid to not have sed when you have\ndtrace. Erroring out in a weird way in such an artificially constructed test\ndoesn't really seem like a problem.\n\n\n> Another observation is that we could include the executable name in\n> this case something like:\n> ERROR: Tried to use not-found external program \"sed\" in \"command\"\n\nIt's a meson generated message, so you'd need to open a bug report / feature\nrequest for it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 20 Dec 2023 03:40:27 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "> On 20 Dec 2023, at 01:48, Michael Paquier <[email protected]> wrote:\n> \n> On Mon, Nov 20, 2023 at 05:03:28PM +0900, Michael Paquier wrote:\n>> Your suggestion to create a new sect2 for \"Windows\" as much as Andres'\n>> suggestion are OK by as an intermediate step, and I suspect that the\n>> end result will likely not be that.\n> \n> It took me some time to get back to this one, and just applied the\n> patch removing the scripts.\n\nThe Buildfarm complains that Win32::Registry can't be found:\n\nCan't locate Win32/Registry.pm in @INC (you may need to install the Win32::Registry module) (@INC entries checked: src/test/perl src/tools/msvc src/backend/catalog src/backend/utils/mb/Unicode src/bin/pg_rewind src/test/ssl/t src/tools/msvc/dummylib /usr/local/lib64/perl5/5.38 /usr/local/share/perl5/5.38 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5) at ./src/tools/win32tzlist.pl line 21.\nBEGIN failed--compilation aborted at ./src/tools/win32tzlist.pl line 21.\n\nhttps://brekka.postgresql.org/cgi-bin/show_log.pl?nm=koel&dt=2023-12-20%2013%3A19%3A04\n\nThis could perhaps be related to this patch removing the module in\nsrc/tools/msvc/dummylib/Win32/Registry.pm ?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 20 Dec 2023 14:31:41 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 20.12.23 12:40, Andres Freund wrote:\n> Hm, or perhaps we should just get rid of sed use altogether. The sepgsql case\n> is trivially translateable to perl, and postprocess_dtrace.sed isn't\n> much harder.\n\nMaybe yeah, but also it seems fine as is and we can easily fix the \npresent issue ...\n\n> OTOH, I actually don't think it's valid to not have sed when you have\n> dtrace. Erroring out in a weird way in such an artificially constructed test\n> doesn't really seem like a problem.\n\nAgreed. So let's just make it not-required, and that should work.\n\nUpdated patch set attached.",
"msg_date": "Wed, 20 Dec 2023 16:43:55 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "\nOn 2023-12-20 We 08:31, Daniel Gustafsson wrote:\n>> On 20 Dec 2023, at 01:48, Michael Paquier <[email protected]> wrote:\n>>\n>> On Mon, Nov 20, 2023 at 05:03:28PM +0900, Michael Paquier wrote:\n>>> Your suggestion to create a new sect2 for \"Windows\" as much as Andres'\n>>> suggestion are OK by as an intermediate step, and I suspect that the\n>>> end result will likely not be that.\n>> It took me some time to get back to this one, and just applied the\n>> patch removing the scripts.\n> The Buildfarm complains that Win32::Registry can't be found:\n>\n> Can't locate Win32/Registry.pm in @INC (you may need to install the Win32::Registry module) (@INC entries checked: src/test/perl src/tools/msvc src/backend/catalog src/backend/utils/mb/Unicode src/bin/pg_rewind src/test/ssl/t src/tools/msvc/dummylib /usr/local/lib64/perl5/5.38 /usr/local/share/perl5/5.38 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5) at ./src/tools/win32tzlist.pl line 21.\n> BEGIN failed--compilation aborted at ./src/tools/win32tzlist.pl line 21.\n>\n> https://brekka.postgresql.org/cgi-bin/show_log.pl?nm=koel&dt=2023-12-20%2013%3A19%3A04\n>\n> This could perhaps be related to this patch removing the module in\n> src/tools/msvc/dummylib/Win32/Registry.pm ?\n>\n\n\nIt is. I've fixed the buildfarm to stop checking this script.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 20 Dec 2023 11:02:49 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 11:03 AM Andrew Dunstan <[email protected]> wrote:\n> > This could perhaps be related to this patch removing the module in\n> > src/tools/msvc/dummylib/Win32/Registry.pm ?\n>\n> It is. I've fixed the buildfarm to stop checking this script.\n\nThanks! But I wonder whether the script itself also needs to be\nchanged? Are we expecting that the 'use Win32::Registry' in\nwin32tzlist.pl would be satisfied externally in some case?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Dec 2023 11:32:22 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "\nOn 2023-12-20 We 11:32, Robert Haas wrote:\n> On Wed, Dec 20, 2023 at 11:03 AM Andrew Dunstan <[email protected]> wrote:\n>>> This could perhaps be related to this patch removing the module in\n>>> src/tools/msvc/dummylib/Win32/Registry.pm ?\n>> It is. I've fixed the buildfarm to stop checking this script.\n> Thanks! But I wonder whether the script itself also needs to be\n> changed? Are we expecting that the 'use Win32::Registry' in\n> win32tzlist.pl would be satisfied externally in some case?\n>\n\nYes, the module will normally be present on a Windows perl. The only \nreason we had dummylib was so we could check the perl scripts on Unix.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 20 Dec 2023 12:22:42 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "> On 20 Dec 2023, at 18:22, Andrew Dunstan <[email protected]> wrote:\n> On 2023-12-20 We 11:32, Robert Haas wrote:\n>> On Wed, Dec 20, 2023 at 11:03 AM Andrew Dunstan <[email protected]> wrote:\n>>>> This could perhaps be related to this patch removing the module in\n>>>> src/tools/msvc/dummylib/Win32/Registry.pm ?\n>>> It is. I've fixed the buildfarm to stop checking this script.\n>> Thanks! But I wonder whether the script itself also needs to be\n>> changed? Are we expecting that the 'use Win32::Registry' in\n>> win32tzlist.pl would be satisfied externally in some case?\n> \n> Yes, the module will normally be present on a Windows perl. The only reason we had dummylib was so we could check the perl scripts on Unix.\n\nThanks for taking care of it!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 20 Dec 2023 19:00:50 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 07:00:50PM +0100, Daniel Gustafsson wrote:\n> On 20 Dec 2023, at 18:22, Andrew Dunstan <[email protected]> wrote:\n>> Yes, the module will normally be present on a Windows perl. The\n>> only reason we had dummylib was so we could check the perl scripts on\n>> Unix.\n> \n> Thanks for taking care of it!\n\nThanks!\n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 08:34:14 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Wed, 20 Dec 2023 at 21:13, Peter Eisentraut <[email protected]> wrote:\n>\n> On 20.12.23 12:40, Andres Freund wrote:\n> > Hm, or perhaps we should just get rid of sed use altogether. The sepgsql case\n> > is trivially translateable to perl, and postprocess_dtrace.sed isn't\n> > much harder.\n>\n> Maybe yeah, but also it seems fine as is and we can easily fix the\n> present issue ...\n>\n> > OTOH, I actually don't think it's valid to not have sed when you have\n> > dtrace. Erroring out in a weird way in such an artificially constructed test\n> > doesn't really seem like a problem.\n>\n> Agreed. So let's just make it not-required, and that should work.\n>\n> Updated patch set attached.\n\nThanks for the patches.\nI noticed one issue about the flex 2.5.35 version mentioned.\nI noticed some warning with meson build in windows with flex 2.5.35\nfor several files:\n python = find_program(get_option('PYTHON'), required: true, native: true)\n flex = find_program(get_option('FLEX'), native: true, version: '>= 2.5.35')\n bison = find_program(get_option('BISON'), native: true, version: '>= 2.3')\n-sed = find_program(get_option('SED'), 'sed', native: true)\n+sed = find_program(get_option('SED'), 'sed', native: true, required: false)\n prove = find_program(get_option('PROVE'), native: true, required: false)\n tar = find_program(get_option('TAR'), native: true)\n\n\nCompiling C object\nsrc/test/isolation/isolationtester.exe.p/meson-generated_.._specscanner.c.obj\nsrc/test/isolation/specscanner.c(2): warning C4129: 'W': unrecognized\ncharacter escape sequence\nsrc/test/isolation/specscanner.c(2): warning C4129: 'P': unrecognized\ncharacter escape sequence\nsrc/test/isolation/specscanner.c(2): warning C4129: 'p': unrecognized\ncharacter escape sequence\nsrc/test/isolation/specscanner.c(2): warning C4129: 's': unrecognized\ncharacter escape sequence\nsrc/test/isolation/specscanner.c(2): warning C4129: 'i': unrecognized\ncharacter escape sequence\n\nI noticed this is because the lex file getting added without escape\ncharacters in the C file:\n#line 2 \"D:\\postgres\\pg_meson\\src\\backend\\utils\\adt\\jsonpath_scan.l\"\n\nThere were no warnings when I used flex 2.6.4.\n\nDid anyone else get these warnings with the flex 2.5.35 version?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 21 Dec 2023 12:05:58 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 20.12.23 18:22, Andrew Dunstan wrote:\n> \n> On 2023-12-20 We 11:32, Robert Haas wrote:\n>> On Wed, Dec 20, 2023 at 11:03 AM Andrew Dunstan <[email protected]> \n>> wrote:\n>>>> This could perhaps be related to this patch removing the module in\n>>>> src/tools/msvc/dummylib/Win32/Registry.pm ?\n>>> It is. I've fixed the buildfarm to stop checking this script.\n>> Thanks! But I wonder whether the script itself also needs to be\n>> changed? Are we expecting that the 'use Win32::Registry' in\n>> win32tzlist.pl would be satisfied externally in some case?\n>>\n> \n> Yes, the module will normally be present on a Windows perl. The only \n> reason we had dummylib was so we could check the perl scripts on Unix.\n\nBut this use case still exists. Right now, running\n\n ./src/tools/perlcheck/pgperlsyncheck .\n\nfails because this module is missing. So I think we need to put the \ndummy module back somehow.\n\n\n\n",
"msg_date": "Thu, 21 Dec 2023 08:31:57 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 21.12.23 07:35, vignesh C wrote:\n> I noticed this is because the lex file getting added without escape\n> characters in the C file:\n> #line 2 \"D:\\postgres\\pg_meson\\src\\backend\\utils\\adt\\jsonpath_scan.l\"\n> \n> There were no warnings when I used flex 2.6.4.\n> \n> Did anyone else get these warnings with the flex 2.5.35 version?\n\nIt appears that this is an issue related to building in a separate build \ndirectory, not something specific to meson. The solution would be to \nuse an appropriately new flex, as you have done.\n\n\n\n",
"msg_date": "Thu, 21 Dec 2023 08:37:54 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-21 08:31:57 +0100, Peter Eisentraut wrote:\n> On 20.12.23 18:22, Andrew Dunstan wrote:\n> > \n> > On 2023-12-20 We 11:32, Robert Haas wrote:\n> > > On Wed, Dec 20, 2023 at 11:03 AM Andrew Dunstan\n> > > <[email protected]> wrote:\n> > > > > This could perhaps be related to this patch removing the module in\n> > > > > src/tools/msvc/dummylib/Win32/Registry.pm ?\n> > > > It is. I've fixed the buildfarm to stop checking this script.\n> > > Thanks! But I wonder whether the script itself also needs to be\n> > > changed? Are we expecting that the 'use Win32::Registry' in\n> > > win32tzlist.pl would be satisfied externally in some case?\n> > > \n> > \n> > Yes, the module will normally be present on a Windows perl. The only\n> > reason we had dummylib was so we could check the perl scripts on Unix.\n> \n> But this use case still exists. Right now, running\n> \n> ./src/tools/perlcheck/pgperlsyncheck .\n> \n> fails because this module is missing. So I think we need to put the dummy\n> module back somehow.\n\nCan't we teach the tool that it should not validate src/tools/win32tzlist.pl\non !windows? It's obviously windows specific code, and it's special case\nenough that there doesn't seem like a need to develop it on !windows.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 20 Dec 2023 23:39:15 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 11:39:15PM -0800, Andres Freund wrote:\n> Can't we teach the tool that it should not validate src/tools/win32tzlist.pl\n> on !windows? It's obviously windows specific code, and it's special case\n> enough that there doesn't seem like a need to develop it on !windows.\n\nI am not really excited about keeping a dummy library for the sake of\na script checking if this WIN32-only file is correctly written, and\nI've never used pgperlsyncheck, TBH, since it exists in af616ce48347.\nAnyway, we could just tweak the list of files returned by\nfind_perl_files as win32tzlist.pl is valid for perltidy and\nperlcritic.\n\nAndrew, was the original target of pgperlsyncheck committers and\nhackers who played with the MSVC scripts but could not run sanity\nchecks on Windows (see [1])? There are a few more cases like the\nUnicode scripts or some of the stuff in src/tools/ where that can be\nuseful still these are not touched on a daily basis. The rest of the\npm files are for TAP tests, one for Unicode. I'm OK to tweak the\nscript, still, if its main purpose is gone..\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 17:01:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On 20.12.23 16:43, Peter Eisentraut wrote:\n> On 20.12.23 12:40, Andres Freund wrote:\n>> Hm, or perhaps we should just get rid of sed use altogether. The \n>> sepgsql case\n>> is trivially translateable to perl, and postprocess_dtrace.sed isn't\n>> much harder.\n> \n> Maybe yeah, but also it seems fine as is and we can easily fix the \n> present issue ...\n> \n>> OTOH, I actually don't think it's valid to not have sed when you have\n>> dtrace. Erroring out in a weird way in such an artificially \n>> constructed test\n>> doesn't really seem like a problem.\n> \n> Agreed. So let's just make it not-required, and that should work.\n> \n> Updated patch set attached.\n\nI have committed these two.\n\n\n",
"msg_date": "Thu, 21 Dec 2023 10:12:46 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "\nOn 2023-12-21 Th 03:01, Michael Paquier wrote:\n> On Wed, Dec 20, 2023 at 11:39:15PM -0800, Andres Freund wrote:\n>> Can't we teach the tool that it should not validate src/tools/win32tzlist.pl\n>> on !windows? It's obviously windows specific code, and it's special case\n>> enough that there doesn't seem like a need to develop it on !windows.\n> I am not really excited about keeping a dummy library for the sake of\n> a script checking if this WIN32-only file is correctly written, and\n> I've never used pgperlsyncheck, TBH, since it exists in af616ce48347.\n> Anyway, we could just tweak the list of files returned by\n> find_perl_files as win32tzlist.pl is valid for perltidy and\n> perlcritic.\n>\n> Andrew, was the original target of pgperlsyncheck committers and\n> hackers who played with the MSVC scripts but could not run sanity\n> checks on Windows (see [1])?\n\n\nyes.\n\n\n> There are a few more cases like the\n> Unicode scripts or some of the stuff in src/tools/ where that can be\n> useful still these are not touched on a daily basis. The rest of the\n> pm files are for TAP tests, one for Unicode. I'm OK to tweak the\n> script, still, if its main purpose is gone..\n>\n> [1]: https://www.postgresql.org/message-id/[email protected]\n\n\nI'm actually a bit dubious about win32tzlist.pl. Win32::Registry is not \npresent in a recent Strawberry Perl installation, and its latest version \nsays it is obsolete, although it's still included in the cpan bundle \nlibwin32.\n\nI wonder who has actually run the script any time recently?\n\nIn any case, we can probably work around the syncheck issue by making \nthe module a runtime requirement rather than a compile time requirement, \nby using \"require\" instead of \"use\".\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 21 Dec 2023 15:43:32 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 03:43:32PM -0500, Andrew Dunstan wrote:\n> On 2023-12-21 Th 03:01, Michael Paquier wrote:\n>> Andrew, was the original target of pgperlsyncheck committers and\n>> hackers who played with the MSVC scripts but could not run sanity\n>> checks on Windows (see [1])?\n> \n> \n> yes.\n\nOkay, thanks. Wouldn't it be better to remove it at the end? With\nthe main use case behind its introduction being gone, it is less\nattractive to keep maintaining it. If some people have been using it\nin their workflows, I'm OK to keep it but the rest of the tree can be\nchecked at runtime as well.\n\n> I'm actually a bit dubious about win32tzlist.pl. Win32::Registry is not\n> present in a recent Strawberry Perl installation, and its latest version\n> says it is obsolete, although it's still included in the cpan bundle\n> libwin32.\n> \n> I wonder who has actually run the script any time recently?\n\nHmm... I've never run it with meson on Win32.\n\n> In any case, we can probably work around the syncheck issue by making the\n> module a runtime requirement rather than a compile time requirement, by\n> using \"require\" instead of \"use\".\n\nInteresting. Another trick would be needed for HKEY_LOCAL_MACHINE,\nlike what the dummylib but local to win32tzlist.pl. Roughly among\nthese lines:\n-use Win32::Registry;\n+use Config;\n+\n+require Win32::Registry;\n \n my $tzfile = 'src/bin/initdb/findtimezone.c';\n \n+if ($Config{osname} ne 'MSWin32' && $Config{osname} ne 'msys')\n+{\n+\tuse vars qw($HKEY_LOCAL_MACHINE);\n+}\n--\nMichael",
"msg_date": "Fri, 22 Dec 2023 08:20:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "\nOn 2023-12-21 Th 18:20, Michael Paquier wrote:\n> On Thu, Dec 21, 2023 at 03:43:32PM -0500, Andrew Dunstan wrote:\n>> On 2023-12-21 Th 03:01, Michael Paquier wrote:\n>>> Andrew, was the original target of pgperlsyncheck committers and\n>>> hackers who played with the MSVC scripts but could not run sanity\n>>> checks on Windows (see [1])?\n>>\n>> yes.\n> Okay, thanks. Wouldn't it be better to remove it at the end? With\n> the main use case behind its introduction being gone, it is less\n> attractive to keep maintaining it. If some people have been using it\n> in their workflows, I'm OK to keep it but the rest of the tree can be\n> checked at runtime as well.\n>\n>> I'm actually a bit dubious about win32tzlist.pl. Win32::Registry is not\n>> present in a recent Strawberry Perl installation, and its latest version\n>> says it is obsolete, although it's still included in the cpan bundle\n>> libwin32.\n>>\n>> I wonder who has actually run the script any time recently?\n> Hmm... I've never run it with meson on Win32.\n\n\nTurns out I was wrong - Windows sometimes doesn't find files nicely. It \nis present in my Strawberry installation.\n\n\n>\n>> In any case, we can probably work around the syncheck issue by making the\n>> module a runtime requirement rather than a compile time requirement, by\n>> using \"require\" instead of \"use\".\n> Interesting. Another trick would be needed for HKEY_LOCAL_MACHINE,\n> like what the dummylib but local to win32tzlist.pl. Roughly among\n> these lines:\n> -use Win32::Registry;\n> +use Config;\n> +\n> +require Win32::Registry;\n> \n> my $tzfile = 'src/bin/initdb/findtimezone.c';\n> \n> +if ($Config{osname} ne 'MSWin32' && $Config{osname} ne 'msys')\n> +{\n> +\tuse vars qw($HKEY_LOCAL_MACHINE);\n> +}\n\n\nI've done it a bit differently, but the same idea. I have tested that \nwhat I committed passes checks on Unix and works on Windows.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 22 Dec 2023 09:07:21 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove MSVC scripts from the tree"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 09:07:21AM -0500, Andrew Dunstan wrote:\n> I've done it a bit differently, but the same idea. I have tested that what I\n> committed passes checks on Unix and works on Windows.\n\nSounds fine by me. Thanks for the quick turnaround!\n--\nMichael",
"msg_date": "Sun, 24 Dec 2023 10:43:38 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove MSVC scripts from the tree"
}
] |
[
{
"msg_contents": "Hi all,\n\n\nGood day!\n\n\nI am a newbee to PostgreSQL and recently came across an idea about\ntype-casting tablespace OID.\n\nThe motibation is that when I have to upgrade a PostgreSQL database, we\nneed to join other tables to\n\ntrack tablespace name. I have just created a simple patch to resolve this.\n\n\nHope you can take a look with this.\n\n\nMy Execution Sample:\n\n# After Patch:\n\n------------------------------------------------------------------------\n\npostgres=# SELECT oid,oid::regtablespace,spcname from pg_tablespace ;\n\n oid | oid | spcname\n\n------+------------+------------\n\n 1663 | pg_default | pg_default\n\n 1664 | pg_global | pg_global\n\n(2 rows)\n\n------------------------------------------------------------------------\n\n\n# Before Patch\n\n------------------------------------------------------------------------\n\npostgres-# SELECT oid,oid::regtablespace,spcname from pg_tablespace ;\n\nERROR: syntax error at or near \"oid\"\n\nLINE 1: oid | oid | spcname\n\n ^\n\n------------------------------------------------------------------------\n\n\nI added the \"::regtablespace\" part to source.\n\nNote: While developing, I also had to add several rows to pgcatalog tables.\n\n Please point out if any OID newly assigned is not appropriate.\n\n\nKind Regards,\n\nYuki Tei",
"msg_date": "Fri, 22 Sep 2023 13:49:56 +0900",
"msg_from": "=?UTF-8?B?56iL44KG44GN?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Try adding type cast with tablespace"
},
{
"msg_contents": "Hello,Yuki.\n\nMy understanding is that your patch is aimed to enrich an alias type for oid.\nThere are already some alias types for oid so I think it is good to\nadd regtablespace for convenience.\n\nhttps://www.postgresql.org/docs/16/datatype-oid.html#DATATYPE-OID-TABLE\n\nActually,I also felt it is a bit of a hassle to join tables to find\ntablespace name from pg_database,\nit is convenient if I can use regtablespace alias.\n\nTherefore I think it is good to add regtablespace alias,but I’m also\nnewbie pgsql-hackers.\nWe need some senior hackers’s opinions.\n\nKind Regards,\nKenichiro Tanaka\n>\n> Hi all,\n>\n>\n> Good day!\n>\n>\n> I am a newbee to PostgreSQL and recently came across an idea about type-casting tablespace OID.\n>\n> The motibation is that when I have to upgrade a PostgreSQL database, we need to join other tables to\n>\n> track tablespace name. I have just created a simple patch to resolve this.\n>\n>\n> Hope you can take a look with this.\n>\n>\n> My Execution Sample:\n>\n> # After Patch:\n>\n> ------------------------------------------------------------------------\n>\n> postgres=# SELECT oid,oid::regtablespace,spcname from pg_tablespace ;\n>\n> oid | oid | spcname\n>\n> ------+------------+------------\n>\n> 1663 | pg_default | pg_default\n>\n> 1664 | pg_global | pg_global\n>\n> (2 rows)\n>\n> ------------------------------------------------------------------------\n>\n>\n> # Before Patch\n>\n> ------------------------------------------------------------------------\n>\n> postgres-# SELECT oid,oid::regtablespace,spcname from pg_tablespace ;\n>\n> ERROR: syntax error at or near \"oid\"\n>\n> LINE 1: oid | oid | spcname\n>\n> ^\n>\n> ------------------------------------------------------------------------\n>\n>\n> I added the \"::regtablespace\" part to source.\n>\n> Note: While developing, I also had to add several rows to pgcatalog tables.\n>\n> Please point out if any OID newly assigned is not appropriate.\n>\n>\n> Kind Regards,\n>\n> Yuki Tei\n\n\n",
"msg_date": "Fri, 29 Sep 2023 09:40:00 +0900",
"msg_from": "Kenichiro Tanaka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Try adding type cast with tablespace"
},
{
"msg_contents": "Kenichiro Tanaka <[email protected]> writes:\n> Therefore I think it is good to add regtablespace alias,but I’m also\n> newbie pgsql-hackers.\n> We need some senior hackers’s opinions.\n\nWell ... for my two cents, I'm kind of down on this, mainly because\nI don't understand where we'd stop. I don't want to end up in a\nscenario where every system catalog is expected to have a reg*\ntype to go with it, because that'd create a lot of make-work.\n\nThe original idea of the reg* types was to create an easy way to do\nOID lookups in catalogs where the correct lookup rule is more\ncomplicated than\n\t(SELECT oid FROM some_catalog WHERE name = 'foo')\nSo that motivates making reg* types for objects with\nschema-qualified names, and even more so for functions and\ntypes which have specialized syntax. There was also some\nconsideration of which object types frequently need lookups.\nIIRC, regrole got in partly because unprivileged users can't\nselect from pg_authid.\n\nI don't really see that tablespaces meet the bar of any of these\npast criteria: they don't have complex lookup rules nor are they\nall that commonly used (IME anyway). So if we accept this patch,\nwe're essentially saying that every catalog should have a reg*\ntype, and that's not a conclusion I want to reach. We have 11\nreg* types at the moment (only 9 if you discount the legacy\nregproc and regoper ones), whereas there are about 30 catalogs\nthat have name columns. Do we really want to open those\nfloodgates?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Sep 2023 21:12:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Try adding type cast with tablespace"
}
] |
[
{
"msg_contents": "Hi,\n\npgstat_report_wal() calls pgstat_flush_wal() and pgstat_flush_io(). When \ncalling them, pgstat_report_wal() specifies its argument \"force\" as the \nargument of them, as follows. But according to the code of \npgstat_flush_wal() and pgstat_flush_io(), their argument is \"nowait\" and \nits meaning seems the opposite of \"force\". This means that, even when \ncheckpointer etc calls pgstat_report_wal() with force=true to forcibly \nflush the statistics, pgstat_flush_wal() and pgstat_flush_io() skip \nflushing the statistics if they fail to acquire the lock immediately \nbecause they are called with nowait=true. This seems unexpected behavior \nand a bug.\nvoid\npgstat_report_wal(bool force)\n{\n\tpgstat_flush_wal(force);\n\n\tpgstat_flush_io(force);\n}\n\nBTW, pgstat_report_stat() treats \"nowait\" and \"force\" as the opposite \none, as follows.\n/* don't wait for lock acquisition when !force */\nnowait = !force;\n\nRyoga Yoshida\n\n\n",
"msg_date": "Fri, 22 Sep 2023 13:58:37 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Doesn't pgstat_report_wal() handle the argument \"force\" incorrectly"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 01:58:37PM +0900, Ryoga Yoshida wrote:\n> pgstat_report_wal() calls pgstat_flush_wal() and pgstat_flush_io(). When\n> calling them, pgstat_report_wal() specifies its argument \"force\" as the\n> argument of them, as follows. But according to the code of\n> pgstat_flush_wal() and pgstat_flush_io(), their argument is \"nowait\" and its\n> meaning seems the opposite of \"force\". This means that, even when\n> checkpointer etc calls pgstat_report_wal() with force=true to forcibly flush\n> the statistics, pgstat_flush_wal() and pgstat_flush_io() skip flushing the\n> statistics if they fail to acquire the lock immediately because they are\n> called with nowait=true. This seems unexpected behavior and a bug.\n\nIt seems to me that you are right here. It would make sense to me to\nsay that force=true is equivalent to nowait=false, as in \"I'm OK to\nwait on the lockas I want to make sure that the stats are flushed at\nthis point\". Currently force=true means nowait=true, as in \"I'm OK to\nnot have the stats flushed if I cannot take the lock\".\n\nSeeing the three callers of pgstat_report_wal(), the checkpointer\nwants to force its way twice, and the WAL writer does not care if they\nare not flushed immediately at it loops forever in this path.\n\nA comment at the top of pgstat_report_wal() would be nice to document\nthat a bit better, at least.\n--\nMichael",
"msg_date": "Mon, 25 Sep 2023 09:56:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doesn't pgstat_report_wal() handle the argument \"force\"\n incorrectly"
},
{
"msg_contents": "On 2023-09-25 09:56, Michael Paquier wrote:\n> It seems to me that you are right here. It would make sense to me to\n> say that force=true is equivalent to nowait=false, as in \"I'm OK to\n> wait on the lockas I want to make sure that the stats are flushed at\n> this point\". Currently force=true means nowait=true, as in \"I'm OK to\n> not have the stats flushed if I cannot take the lock\".\n> \n> Seeing the three callers of pgstat_report_wal(), the checkpointer\n> wants to force its way twice, and the WAL writer does not care if they\n> are not flushed immediately at it loops forever in this path.\n> \n> A comment at the top of pgstat_report_wal() would be nice to document\n> that a bit better, at least.\n\nThank you for the review. Certainly, adding a comments is a good idea. I \nadded a comment.\n\nRyoga Yoshida",
"msg_date": "Mon, 25 Sep 2023 11:27:27 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Doesn't pgstat_report_wal() handle the argument \"force\"\n incorrectly"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 11:27:27AM +0900, Ryoga Yoshida wrote:\n> Thank you for the review. Certainly, adding a comments is a good idea. I\n> added a comment.\n\nHmm. How about the attached version with some tweaks?\n--\nMichael",
"msg_date": "Mon, 25 Sep 2023 12:47:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doesn't pgstat_report_wal() handle the argument \"force\"\n incorrectly"
},
{
"msg_contents": "On 2023-09-25 12:47, Michael Paquier wrote:\nin attached file\n> +\t/* like in pgstat.c, don't wait for lock acquisition when !force */\n\nIsn't it the case with force=true and !force that it doesn't wait for \nthe lock acquisition. In fact, force may be false.\n\nRyoga Yoshida\n\n\n",
"msg_date": "Mon, 25 Sep 2023 14:16:22 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Doesn't pgstat_report_wal() handle the argument \"force\"\n incorrectly"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 02:16:22PM +0900, Ryoga Yoshida wrote:\n> On 2023-09-25 12:47, Michael Paquier wrote:\n> in attached file\n>> +\t/* like in pgstat.c, don't wait for lock acquisition when !force */\n> \n> Isn't it the case with force=true and !force that it doesn't wait for the\n> lock acquisition. In fact, force may be false.\n\nWe would not wait on the lock if force=false, which would do\nnowait=true. And !force reads the same to me as force=false.\n\nAnyway, I am OK to remove this part. That seems to confuse you, so\nyou may not be the only one who would read this comment.\n\nAnother idea would be to do like in pgstat.c by adding the following\nline, then use \"nowait\" to call each sub-function:\nnowait = !force;\npgstat_flush_wal(nowait);\npgstat_flush_io(nowait);\n--\nMichael",
"msg_date": "Mon, 25 Sep 2023 14:38:57 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doesn't pgstat_report_wal() handle the argument \"force\"\n incorrectly"
},
{
"msg_contents": "On 2023-09-25 14:38, Michael Paquier wrote:\n> We would not wait on the lock if force=false, which would do\n> nowait=true. And !force reads the same to me as force=false.\n> \n> Anyway, I am OK to remove this part. That seems to confuse you, so\n> you may not be the only one who would read this comment.\n\nWhen I first read it, I didn't read that !force as force=false, so \nremoving it might be better.\n\n> Another idea would be to do like in pgstat.c by adding the following\n> line, then use \"nowait\" to call each sub-function:\n> nowait = !force;\n> pgstat_flush_wal(nowait);\n> pgstat_flush_io(nowait);\n\nThat's very clear and I think it's good.\n\nRyoga Yoshida\n\n\n",
"msg_date": "Mon, 25 Sep 2023 14:49:50 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Doesn't pgstat_report_wal() handle the argument \"force\"\n incorrectly"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 02:49:50PM +0900, Ryoga Yoshida wrote:\n> On 2023-09-25 14:38, Michael Paquier wrote:\n>> Another idea would be to do like in pgstat.c by adding the following\n>> line, then use \"nowait\" to call each sub-function:\n>> nowait = !force;\n>> pgstat_flush_wal(nowait);\n>> pgstat_flush_io(nowait);\n> \n> That's very clear and I think it's good.\n\nDone this way down to 15, then, with more comment polishing.\n--\nMichael",
"msg_date": "Tue, 26 Sep 2023 09:33:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doesn't pgstat_report_wal() handle the argument \"force\"\n incorrectly"
}
] |
[
{
"msg_contents": "Hi,\n\npgstat_report_wal() calls pgstat_flush_wal() and pgstat_flush_io(). When \ncalling them, pgstat_report_wal() specifies its argument \"force\" as the \nargument of them, as follows. But according to the code of \npgstat_flush_wal() and pgstat_flush_io(), their argument is \"nowait\" and \nits meaning seems the opposite of \"force\". This means that, even when \ncheckpointer etc calls pgstat_report_wal() with force=true to forcibly \nflush the statistics, pgstat_flush_wal() and pgstat_flush_io() skip \nflushing the statistics if they fail to acquire the lock immediately \nbecause they are called with nowait=true. This seems unexpected behavior \nand a bug.\nvoid\npgstat_report_wal(bool force)\n{\n pgstat_flush_wal(force);\n\n pgstat_flush_io(force);\n}\n\nBTW, pgstat_report_stat() treats \"nowait\" and \"force\" as the opposite \none, as follows.\n/* don't wait for lock acquisition when !force */\nnowait = !force;\n\nRyoga Yoshida",
"msg_date": "Fri, 22 Sep 2023 14:11:14 +0900",
"msg_from": "Ryoga Yoshida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Doesn't pgstat_report_wal() handle the argument \"force\" incorrectly"
}
] |