[
{
"msg_contents": "Hi,\n\nOn my buildfarm host (for all my animals) I noted that slapd was by far the\nbiggest contributor to syslog. Even though there's not normally slapd\nrunning. It's of course the slapds started by various tests.\n\nWould anybody mind if I add 'logfile_only' to slapd's config in LdapServer.pm?\nThat still leaves a few logline, from before the config file parsing, but it's\na lot better than all requests getting logged.\n\nObviously I also could reconfigure syslog to just filter this stuff, but it\nseems that the tests shouldn't spam like that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Mar 2023 15:37:08 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "slapd logs to syslog during tests"
},
{
"msg_contents": "\n\n> On Mar 11, 2023, at 6:37 PM, Andres Freund <[email protected]> wrote:\n> \n> Hi,\n> \n> On my buildfarm host (for all my animals) I noted that slapd was by far the\n> biggest contributor to syslog. Even though there's not normally slapd\n> running. It's of course the slapds started by various tests.\n> \n> Would anybody mind if I add 'logfile_only' to slapd's config in LdapServer.pm?\n> That still leaves a few logline, from before the config file parsing, but it's\n> a lot better than all requests getting logged.\n\nMakes sense\n\nCheers \n\nAndrew\n\n\n",
"msg_date": "Sat, 11 Mar 2023 20:19:17 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slapd logs to syslog during tests"
},
{
"msg_contents": "On 2023-03-11 Sa 18:37, Andres Freund wrote:\n> Hi,\n>\n> On my buildfarm host (for all my animals) I noted that slapd was by far the\n> biggest contributor to syslog. Even though there's not normally slapd\n> running. It's of course the slapds started by various tests.\n>\n> Would anybody mind if I add 'logfile_only' to slapd's config in LdapServer.pm?\n> That still leaves a few logline, from before the config file parsing, but it's\n> a lot better than all requests getting logged.\n>\n> Obviously I also could reconfigure syslog to just filter this stuff, but it\n> seems that the tests shouldn't spam like that.\n>\n\nHi, Andres,\n\n\nare you moving ahead with this?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-11 Sa 18:37, Andres Freund\n wrote:\n\n\nHi,\n\nOn my buildfarm host (for all my animals) I noted that slapd was by far the\nbiggest contributor to syslog. Even though there's not normally slapd\nrunning. It's of course the slapds started by various tests.\n\nWould anybody mind if I add 'logfile_only' to slapd's config in LdapServer.pm?\nThat still leaves a few logline, from before the config file parsing, but it's\na lot better than all requests getting logged.\n\nObviously I also could reconfigure syslog to just filter this stuff, but it\nseems that the tests shouldn't spam like that.\n\n\n\n\n\nHi, Andres,\n\n\nare you moving ahead with this?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 16 Mar 2023 16:14:58 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slapd logs to syslog during tests"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 9:15 AM Andrew Dunstan <[email protected]> wrote:\n> On 2023-03-11 Sa 18:37, Andres Freund wrote:\n> On my buildfarm host (for all my animals) I noted that slapd was by far the\n> biggest contributor to syslog. Even though there's not normally slapd\n> running. It's of course the slapds started by various tests.\n>\n> Would anybody mind if I add 'logfile_only' to slapd's config in LdapServer.pm?\n> That still leaves a few logline, from before the config file parsing, but it's\n> a lot better than all requests getting logged.\n>\n> Obviously I also could reconfigure syslog to just filter this stuff, but it\n> seems that the tests shouldn't spam like that.\n>\n>\n> Hi, Andres,\n>\n> are you moving ahead with this?\n\n+1 for doing so. It has befuddled me before that I had to hunt down\nerror messages by tracing system calls[1], and that's useless for\nreaders of CI/BF logs.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJdwNiwM5iWXVh050kKw5p3VCMJyoFyCpPbEf6ZNOC1pw%40mail.gmail.com#efe8bb4695171288e4e600391df3f9fa\n\n\n",
"msg_date": "Fri, 17 Mar 2023 10:54:20 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slapd logs to syslog during tests"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-16 16:14:58 -0400, Andrew Dunstan wrote:\n> are you moving ahead with this?\n\nI got sidetracked trying to make slapd stop any and all syslog access, but it\ndoesn't look like that's possible. But working on commiting the logfile-only\napproach now. Planning to backpatch this, unless somebody protests very soon.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Mar 2023 17:48:18 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slapd logs to syslog during tests"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> I got sidetracked trying to make slapd stop any and all syslog access, but it\n> doesn't look like that's possible. But working on commiting the logfile-only\n> approach now. Planning to backpatch this, unless somebody protests very soon.\n\nSadly, buildfarm seems to be having some indigestion with this ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 23:52:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slapd logs to syslog during tests"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-16 23:52:04 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I got sidetracked trying to make slapd stop any and all syslog access, but it\n> > doesn't look like that's possible. But working on commiting the logfile-only\n> > approach now. Planning to backpatch this, unless somebody protests very soon.\n> \n> Sadly, buildfarm seems to be having some indigestion with this ...\n\nUnfortunately even just slightly older versions don't have the logfile-only\noption :(.\n\nFor a bit I thought we were out of options, because 'loglevel 0' works, but\nI was not seeing any contents in the logfile we specify. But as it turns out,\nthe logfile we (before this patch already) specify, don't contain anything\never, because:\n logfile <filename>\n Specify a file for recording slapd debug messages. By default these messages only go to stderr, are not recorded anywhere else, and are\n unrelated to messages exposed by the loglevel configuration parameter. Specifying a logfile copies messages to both stderr and the logfile.\nand\n loglevel <integer> [...]\n Specify the level at which debugging statements and operation statistics should be syslogged (currently logged to the syslogd(8) LOG_LOCAL4\n\nyet using logfile-only does prevent things from ending up in syslog.\n\nBecause it's not at all confusing that a 'loglevel' option doesn't influence\nat all what ends up in the file controlled by 'logfile'.\n\n\nGiven that 'loglevel 0' works and doesn't actually reduce the amount of\nlogging available, that seems to be the way to go.\n\n\nIff we actually want slapd logging, the stderr logging can be turned on (and\npotentially redirected to a logfile via logfile or redirection). But\nunfortunately it seems that the debug level can only be set on the server\ncommandline. And has a significant sideeffect:\n -d debug-level\n Turn on debugging as defined by debug-level. If this option is specified, even with a zero argument, slapd will not fork or disassociate\n from the invoking terminal\n\nWhich means the server can't be started anymore as we do currently do, we'd\nhave to use IPC::Run::start. I hacked that together locally, but that's more\nthan I think I can get right at my current level of tiredness.\n\n\nSo unless somebody has a better idea, I'm gonna replace 'logfile-only on' with\n'loglevel 0' for now. I also am open to reverting and trying again tomorrow.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Mar 2023 22:43:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slapd logs to syslog during tests"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-16 22:43:17 -0700, Andres Freund wrote:\n> So unless somebody has a better idea, I'm gonna replace 'logfile-only on' with\n> 'loglevel 0' for now. I also am open to reverting and trying again tomorrow.\n\nDid that now. I used the commandline option -s0 instead of loglevel 0, as that\nprevents even the first message.\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Thu, 16 Mar 2023 23:58:29 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slapd logs to syslog during tests"
}
]
[
{
"msg_contents": "Sehr geehrte Damen und Herren,\n \nich bitte um Löschung meines Accountes mit dem Benutzername [email protected]. Ich bitte um Bestätigung. Vielen Dank. \n \nMit freundlichen Grüßen,\n\nKoray Ili\n\nEmail: [email protected]\nTel.: +49 (0) 179 3474321\nAdresse: Hansaring 21, 46483 Wesel, Deutschland\n",
"msg_date": "Sun, 12 Mar 2023 12:35:40 +0100",
"msg_from": "Koray Ili <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Account_l=C3=B6schen?="
},
{
"msg_contents": "On 2023-Mar-12, Koray Ili wrote:\n\n> Sehr geehrte Damen und Herren,\n> \n> ich bitte um Löschung meines Accountes mit dem Benutzername\n> [...]\n\nThis has been taken care of.\n\n-- \nÁlvaro Herrera\n\n\n",
"msg_date": "Sun, 12 Mar 2023 14:22:50 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Account =?utf-8?Q?l=C3=B6schen?="
}
]
[
{
"msg_contents": "Hi,\n\nCurrently we don't support \"IF NOT EXISTS\" for Create publication and\nCreate subscription, I felt it would be useful to add this \"IF NOT\nEXISTS\" which will create publication/subscription only if the object\ndoes not exist.\nAttached patch for handling the same.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Sun, 12 Mar 2023 20:08:23 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Implement IF NOT EXISTS for CREATE PUBLICATION AND CREATE\n SUBSCRIPTION"
},
{
"msg_contents": "vignesh C <[email protected]> writes:\n> Currently we don't support \"IF NOT EXISTS\" for Create publication and\n> Create subscription, I felt it would be useful to add this \"IF NOT\n> EXISTS\" which will create publication/subscription only if the object\n> does not exist.\n> Attached patch for handling the same.\n> Thoughts?\n\nI generally dislike IF NOT EXISTS options, because they are so\nsemantically squishy: when the command is over, you cannot make any\nassumptions whatsoever about the properties of the object, beyond\nthe bare fact that it exists. I do not think we should implement\nsuch options without a pretty compelling argument that there is a\nuse-case for them. \"I felt it would be useful\" doesn't meet the\nbar IMO.\n\nCREATE OR REPLACE doesn't have this semantic problem, but I'm not\nsure whether it's a useful approach for these types of objects.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Mar 2023 13:27:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement IF NOT EXISTS for CREATE PUBLICATION AND CREATE\n SUBSCRIPTION"
}
]
[
{
"msg_contents": "Hi all,\n\nI was looking for a way to track actual schema changes after database migrations\nin a VCS. Preferably, the schema definition should come from a trusted source\nlike pg_dump and should consist of small files.\nThis patch was born out of that need.\n\nThis patch adds the structured output format to pg_dump.\nThis format is a plaintext output split up into multiple files and the\nresulting small files are stored in a directory path based on the dumped object.\nThis format can be restored by feeding its plaintext toc file (restore-dump.sql)\nto psql. The output is also suitable for manipulating the files with standard\nediting tools.\n\nThis patch is a WIP (V1). The patch is against master and it compiles\nsuccessfully on macOS 13.2.1 aarch64 and on Debian 11 arm64.\nTo test, execute pg_dump --format=structured --file=/path/to/outputdir dbname\n\nWhat do you think of this feature, any chance it will be added to pg_dump once\nthe patch is ready?\nIs the chosen name \"structured\" appropriate?\n\nThanks for any feedback.\n\n--\nAttila Soki",
"msg_date": "Sun, 12 Mar 2023 21:36:36 +0100",
"msg_from": "Attila Soki <[email protected]>",
"msg_from_op": true,
"msg_subject": "WIP Patch: pg_dump structured"
},
{
"msg_contents": "Attila Soki <[email protected]> writes:\n> This patch adds the structured output format to pg_dump.\n> This format is a plaintext output split up into multiple files and the\n> resulting small files are stored in a directory path based on the dumped object.\n\nWon't this fail completely with SQL objects whose names aren't suitable\nto be pathname components? \"A/B\" is a perfectly good name so far as\nSQL is concerned. You could also have problems with collisions on\ncase-insensitive filesystems.\n\n> This format can be restored by feeding its plaintext toc file (restore-dump.sql)\n> to psql. The output is also suitable for manipulating the files with standard\n> editing tools.\n\nThis seems a little contradictory: if you want to edit the individual\nfiles, you'd have to also update restore-dump.sql, or else it's pointless.\nIt might make more sense to consider this as a write-only dump format\nand not worry about whether it can be restored directly.\n\n> What do you think of this feature, any chance it will be added to pg_dump once\n> the patch is ready?\n\nI'm not clear on how big the use-case is. It's not really obvious to\nme that this'd have any benefit over the existing plain-text dump\ncapability. You can edit those files too, at least till the schema\ngets too big for your editor. (But if you've got many many thousand\nSQL objects, a file-per-SQL-object directory will also be no fun to\ndeal with.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Mar 2023 16:50:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WIP Patch: pg_dump structured"
},
{
"msg_contents": "\n\n> On 12 Mar 2023, at 21:50, Tom Lane <[email protected]> wrote:\n> \n> Attila Soki <[email protected]> writes:\n>> This patch adds the structured output format to pg_dump.\n>> This format is a plaintext output split up into multiple files and the\n>> resulting small files are stored in a directory path based on the dumped object.\n> \n> Won't this fail completely with SQL objects whose names aren't suitable\n> to be pathname components? \"A/B\" is a perfectly good name so far as\n> SQL is concerned. You could also have problems with collisions on\n> case-insensitive filesystems.\n\nThe “A/B” case is handled in _CleanFilename function, the slash and other\nproblematic characters are replaced.\nYou are right about the case-insensivity, this is not handled and will fail. I forgot\nto handle that. I trying to find a way to handle this.\n\n> \n>> This format can be restored by feeding its plaintext toc file (restore-dump.sql)\n>> to psql. The output is also suitable for manipulating the files with standard\n>> editing tools.\n> \n> This seems a little contradictory: if you want to edit the individual\n> files, you'd have to also update restore-dump.sql, or else it's pointless.\n> It might make more sense to consider this as a write-only dump format\n> and not worry about whether it can be restored directly.\n\nThe main motivation was to track changes with VCS at the file (object) level,\nediting small files was intended as a second possible use case.\nI did not know that a write-only format would go.\n\n> \n>> What do you think of this feature, any chance it will be added to pg_dump once\n>> the patch is ready?\n> \n> I'm not clear on how big the use-case is. It's not really obvious to\n> me that this'd have any benefit over the existing plain-text dump\n> capability. You can edit those files too, at least till the schema\n> gets too big for your editor. (But if you've got many many thousand\n> SQL objects, a file-per-SQL-object directory will also be no fun to\n> deal with.)\n\n\nI use something like this (a previous version) to track several thousand\nobjects. But I'm not sure if that would have a wide user base.\nTherefore the wip to see if there is interest in this feature.\nI think the advantage of having many small files is that it is recognizable\nwhich file (object) is involved in a commit and that the SQL functions and\ntables get a change history.\n\nThank you for your feedback.\n\nRegards,\nAttila Soki\n\n",
"msg_date": "Sun, 12 Mar 2023 22:56:06 +0100",
"msg_from": "Attila Soki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Patch: pg_dump structured"
},
{
"msg_contents": "On 12 Mar 2023, at 22:56, Attila Soki <[email protected]> wrote:\n>> On 12 Mar 2023, at 21:50, Tom Lane <[email protected]> wrote:\n\n>> Won't this fail completely with SQL objects whose names aren't suitable\n>> to be pathname components? \"A/B\" is a perfectly good name so far as\n>> SQL is concerned. You could also have problems with collisions on\n>> case-insensitive filesystems.\n> \n\n> You are right about the case-insensivity, this is not handled and will fail. I forgot\n> to handle that. I trying to find a way to handle this.\n\nHi Tom,\n\nThank you for your feedback.\n\nThis is an updated version of the pg_dump structured wip patch (V2) with the\nfollowing changes:\n- to avoid path collisions on case insensitive filessystems, all path components\n created from user input are suffixed with the hex representation of a 32 bit\n hash. “A/B” and “a/b” will get different suffixes.\n- all path components are now filesystem safe\n\nAll this is a proposal, if you know a better solution please let me know.\n\nThis patch is a WIP (V2). The patch is against master and it compiles\nsuccessfully on macOS 13.2.1 aarch64 and on Debian 11 arm64.\nTo test, execute pg_dump --format=structured --file=/path/to/outputdir dbname\n\n> \n>> \n>>> This format can be restored by feeding its plaintext toc file (restore-dump.sql)\n>>> to psql. The output is also suitable for manipulating the files with standard\n>>> editing tools.\n>> \n>> This seems a little contradictory: if you want to edit the individual\n>> files, you'd have to also update restore-dump.sql, or else it's pointless.\n>> It might make more sense to consider this as a write-only dump format\n>> and not worry about whether it can be restored directly.\n> \n> The main motivation was to track changes with VCS at the file (object) level,\n> editing small files was intended as a second possible use case.\n> I did not know that a write-only format would go.\n\nDeclaring this format as a write-only dump would allow a more flexible directory\nstructure since we wouldn't have to maintain the restore order.\n\n> \n>> \n>>> What do you think of this feature, any chance it will be added to pg_dump once\n>>> the patch is ready?\n>> \n>> I'm not clear on how big the use-case is. It's not really obvious to\n>> me that this'd have any benefit over the existing plain-text dump\n>> capability. You can edit those files too, at least till the schema\n>> gets too big for your editor. (But if you've got many many thousand\n>> SQL objects, a file-per-SQL-object directory will also be no fun to\n>> deal with.)\n\nHere is a sample use case to demonstrate how this format could be used to track\nschema changes with git. The main difference from using the existing plain-text\nschema dump is, that this format makes it possible to keep a history of the\nactual changes made to the individual objects. 
For example, to determine which\nmigrations have changed the foo function.\n\n# import the schema into the repository\ncd /path/to/my_app_code\npg_dump --format=structured --schema-only --file=foo_schema foodb\ngit add foo_schema --all\ngit commit foo_schema -m'initial commit foo_schema'\n\n# make changes in the db\n(my_app migrate foodb)\n(psql foodb < tweak.sql)\n\n# get a fresh dump\nrm -rf foo_schema\npg_dump --format=structured --schema-only --file=foo_schema foodb\n\n# now inspect the changes under foo_schema: there may be changed, new and\n# missing files\ngit status foo_schema\n\n# commit all schema changes\ngit add foo_schema -u\ngit commit foo_schema -m'changes from migration foodb'\n\n# later, inspect changes\ngit log --stat\n\n# show the history of one object\ngit log -p -- \"foo_schema/path/to/FUNCTIONS/foo.sql\"\n\nSure, the user base for this is narrow.\n\nThanks for any feedback.\n\n—\nBest regards\nAttila Soki",
"msg_date": "Thu, 23 Mar 2023 16:34:05 +0100",
"msg_from": "Attila Soki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WIP Patch: pg_dump structured"
}
]
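The V2 patch in the thread above describes making every path component filesystem-safe and suffixing it with the hex form of a 32-bit hash of the original name, so that "A/B" and "a/b" stay distinct even on case-insensitive filesystems. The stand-alone sketch below only illustrates that idea; it is not the code from the posted patch, and the helper names and the choice of FNV-1a are assumptions made for brevity.

```c
/*
 * Sketch: sanitize an SQL object name into a filesystem-safe path
 * component, then append the hex form of a 32-bit hash of the original
 * name so that names differing only in case (or in stripped characters)
 * remain distinct on case-insensitive filesystems.
 */
#include <ctype.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t
fnv1a_32(const char *s)
{
	uint32_t	h = 2166136261u;

	for (; *s != '\0'; s++)
	{
		h ^= (unsigned char) *s;
		h *= 16777619u;
	}
	return h;
}

/* Build a safe path component for "name" in buf (buflen > 16 assumed). */
static void
clean_filename(const char *name, char *buf, size_t buflen)
{
	uint32_t	hash = fnv1a_32(name);	/* hash the original, unmodified name */
	size_t		i;
	size_t		j = 0;

	for (i = 0; name[i] != '\0' && j + 10 < buflen; i++)
	{
		unsigned char c = (unsigned char) name[i];

		buf[j++] = (isalnum(c) || c == '_' || c == '-') ? (char) c : '_';
	}
	snprintf(buf + j, buflen - j, "-%08x", hash);	/* 8 hex digits + NUL */
}

int
main(void)
{
	char		a[64];
	char		b[64];

	clean_filename("A/B", a, sizeof(a));
	clean_filename("a/b", b, sizeof(b));
	printf("%s\n%s\n", a, b);	/* e.g. A_B-xxxxxxxx vs a_b-yyyyyyyy */
	return 0;
}
```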
[
{
"msg_contents": "Hi,\n\nOver in [1], I thought for a moment that a new function\nWaitLatchUs(..., timeout_us, ...) was going to be useful to fix that\nbug report, at least in master, until I realised the required Linux\nsyscall is a little too new (for example RHEL 9 shipped May '22,\nDebian 12 is expected to be declared \"stable\" in a few months). So\nI'm kicking this proof-of-concept over into a new thread to talk about\nin the next cycle, in case it turns out to be useful later.\n\nThere probably isn't too much call for very high resolution sleeping.\nMost time-based sleeping is probably bad, but when it's legitimately\nused to spread CPU or I/O out (instead of illegitimate use for\npolling-based algorithms), it seems nice to be able to use all the\naccuracy your hardware can provide, and yet it is still important to\nbe able to process other kinds of events, so WaitLatchUs() seems like\na better building block than pg_usleep().\n\nOne question is whether it'd be better to use nanoseconds instead,\nsince the relevant high-resolution primitives use those under the\ncovers (struct timespec). On the other hand, microseconds are a good\nmatch for our TimestampTz which is the ultimate source of many of our\ntimeout decisions. I suppose we could also consider an interface with\nan absolute timeout instead, and then stop thinking about the units so\nmuch.\n\nAs mentioned in that other thread, the only systems that currently\nseem to be able to sleep less than 1ms through these multiplexing APIs\nare: Linux 5.11+ (epoll_pwait2()), FreeBSD (kevent()), macOS (ditto).\nEverything else will round up to milliseconds at the kernel interface\n(because poll(), epoll_wait() and WaitForMultipleObjects() take those)\nor later inside the kernel due to kernel tick rounding. There might\nbe ways to do better on Windows with separate timer events, but I\ndon't know.\n\n[1] https://www.postgresql.org/message-id/flat/CAAKRu_b-q0hXCBUCAATh0Z4Zi6UkiC0k2DFgoD3nC-r3SkR3tg%40mail.gmail.com",
"msg_date": "Mon, 13 Mar 2023 18:23:02 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Microsecond-based timeouts"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-13 18:23:02 +1300, Thomas Munro wrote:\n> One question is whether it'd be better to use nanoseconds instead,\n> since the relevant high-resolution primitives use those under the\n> covers (struct timespec). On the other hand, microseconds are a good\n> match for our TimestampTz which is the ultimate source of many of our\n> timeout decisions.\n\nIt's hard to believe we'll need nanosecond sleeps anytime soon, given that\neven very trivial syscalls take on the order of 100ns.\n\nIt's not like we couldn't add another function for waiting for nanoseconds at\na later point.\n\n\n> I suppose we could also consider an interface with an absolute timeout\n> instead, and then stop thinking about the units so much.\n\nThat seesm pretty awful to use, and we'd just end up with the same question at\nthe callsites.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Mar 2023 14:59:19 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Microsecond-based timeouts"
}
]
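The proof of concept discussed above replaces pg_usleep()-style sleeping with a latch wait, so the backend can still react to latch wake-ups and postmaster death while sleeping with sub-millisecond resolution. The fragment below is a sketch of how a caller might use the proposed function: WaitLatchUs() does not exist in released PostgreSQL, its signature is assumed here to mirror WaitLatch() with a microsecond timeout, and the exact headers (e.g. utils/wait_event.h) vary by version. It would only compile in backend context.

```c
#include "postgres.h"

#include "miscadmin.h"
#include "storage/latch.h"
#include "utils/timestamp.h"
#include "utils/wait_event.h"

/*
 * Sleep until 'wakeup' (TimestampTz is microseconds) while still
 * servicing latch wake-ups and exiting on postmaster death.
 */
static void
sleep_until(TimestampTz wakeup)
{
	for (;;)
	{
		TimestampTz now = GetCurrentTimestamp();
		int			rc;

		if (now >= wakeup)
			break;

		/* Proposed API: like WaitLatch(), but the timeout is in microseconds. */
		rc = WaitLatchUs(MyLatch,
						 WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
						 wakeup - now,
						 WAIT_EVENT_PG_SLEEP);
		if (rc & WL_LATCH_SET)
		{
			ResetLatch(MyLatch);
			CHECK_FOR_INTERRUPTS();
		}
	}
}
```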
[
{
"msg_contents": "Hi!\n\nI found a bug in jsonb_in function (it converts json from sting representation\n into jsonb internal representation).\n\nTo reproduce this bug (the way I found it) you should get 8bit instance of postgres db:\n\n1. add en_US locale (dpkg-reconfigure locales in debian)\n2. initdb with latin1 encoding: \n\nLANG=en_US ./initdb --encoding=LATIN1 -D my_pg_data\n\n3. run database and execute the query:\n\nSELECT E'{\\x0a\"\\x5cb\\x5c\"\\x5c\\x5c\\x5c/\\x5cb\\x5cf\\x5cn\\x5cr\\x5ct\\x5c\"\\x5c\\x5c\\x5c\\x5crZt\\x5c\"\\x5c\\x5c\\x5c/\\x5cb\\x5c\"\\x5c\\x5c\\x5c/\\x5cb\\x5c\"\\x5cu000f0\\x5cu000f0000000000000000000000000000000000000000000000000000000\\x5cuDFFF000000000000000000000000000000000000000000000000000000000000\"0000000000000000000000000000000\\x5cu0000000000000000000\\xb4\\x5cuDBFF\\x5cuDFFF00000000000000000002000000000000000000000000000000000000000000000000000000000000000\\x5cuDBFF'::jsonb;\n\nIn postgres 14 and 15, the backend will crash.\n\nThe packtrace produce with ASan is in the attached file.\n\nThis bug was found while fuzzing postgres input functions, using AFL++.\nFor now we are using lightweight wrapper around input functions that \ncreate minimal environment for these functions to run conversion, and run the, in fuzzer.\n\n\nMy colleagues (they will come here shortly) have narrowed down this query to \n\nSELECT E'\\n\"\\\\u00000\"'::jsonb;\n\nand says that is crashes even in utf8 locale.\n\nThey also have a preliminary version of patch to fix it. They will tell about it soon, I hope.\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su",
"msg_date": "Mon, 13 Mar 2023 17:18:15 +0300",
"msg_from": "Nikolay Shaplov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug in jsonb_in function (14 & 15 version are affected)"
},
{
"msg_contents": "Nikolay Shaplov <[email protected]> writes:\n> I found a bug in jsonb_in function (it converts json from sting representation\n> into jsonb internal representation).\n\nYeah. Looks like json_lex_string is failing to honor the invariant\nthat it needs to set token_terminator ... although the documentation\nof the function certainly isn't helping. I think we need the attached.\n\nA nice side benefit is that the error context reports get a lot more\nuseful --- somebody should have inquired before as to why they were\nso bogus.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 13 Mar 2023 13:58:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in jsonb_in function (14 & 15 version are affected)"
},
{
"msg_contents": "В Пн, 13/03/2023 в 13:58 -0400, Tom Lane пишет:\n> Nikolay Shaplov <[email protected]> writes:\n> > I found a bug in jsonb_in function (it converts json from sting representation\n> > into jsonb internal representation).\n> \n> Yeah. Looks like json_lex_string is failing to honor the invariant\n> that it needs to set token_terminator ... although the documentation\n> of the function certainly isn't helping. I think we need the attached.\n> \n> A nice side benefit is that the error context reports get a lot more\n> useful --- somebody should have inquired before as to why they were\n> so bogus.\n> \n> regards, tom lane\n> \n\nGood day, Tom and all.\n\nMerged patch looks like start of refactoring.\n\nColleague (Nikita Glukhov) propose further refactoring of jsonapi.c:\n- use of inline functions instead of macroses,\n- more uniform their usage in token success or error reporting,\n- simplify json_lex_number and its usage a bit.\nAlso he added tests for fixed bug.\n\n\n-----\n\nRegards,\nYura Sokolov.",
"msg_date": "Thu, 16 Mar 2023 08:18:01 +0300",
"msg_from": "Yura Sokolov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug in jsonb_in function (14 & 15 version are affected)"
}
]
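The invariant Tom points to is that a token scanner must leave token_terminator pointing at the end of whatever it consumed on every return path, error paths included, because the error-context report is built from the token_start/token_terminator pointers. The toy lexer below only illustrates that pattern; it is not the code from src/common/jsonapi.c, and its structure and names are invented for the example.

```c
#include <stdio.h>

typedef struct ToyLexer
{
	const char *input;
	const char *token_start;
	const char *token_terminator;	/* one past the end of the current token */
} ToyLexer;

/* Scan a double-quoted string starting at lex->token_start. */
static int
toy_lex_string(ToyLexer *lex)
{
	const char *s = lex->token_start + 1;	/* skip the opening quote */

	for (; *s != '\0'; s++)
	{
		if (*s == '\\')
		{
			if (s[1] == '\0')
				break;			/* dangling escape: report below */
			s++;				/* skip the escaped character */
			continue;
		}
		if (*s == '"')
		{
			lex->token_terminator = s + 1;	/* success: just past the quote */
			return 0;
		}
	}

	/* Error path: establish the invariant before reporting, too. */
	lex->token_terminator = s;
	return -1;
}

int
main(void)
{
	const char *src = "\"abc\\";
	ToyLexer	lex = {0};

	lex.input = lex.token_start = src;
	if (toy_lex_string(&lex) != 0)
		fprintf(stderr, "unterminated string at offset %ld\n",
				(long) (lex.token_terminator - lex.input));
	return 0;
}
```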
[
{
"msg_contents": "Hi,\n\nI want to suggest a patch against master (it may also be worth backporting\nit) that makes it possible to use longer filenames (such as those with\nabsolute paths) in `BackgroundWorker.bgw_library_name`.\n\n`BackgroundWorker.bgw_library_name` currently allows names up to\nBGW_MAXLEN-1, which is generally sufficient if `$libdir` expansion is used.\n\nHowever, there are use cases where [potentially] longer names are\nexpected/desired; for example, test benches (where library files may not\n[or can not] be copied to Postgres installation) or alternative library\ninstallation methods that do not put them into $libdir.\n\nThe patch is backwards-compatible and ensures that bgw_library_name stays\n*at least* as long as BGW_MAXLEN. Existing external code that uses\nBGW_MAXLEN is a length boundary (for example, in `strncpy`) will continue\nto work as expected.\n\nThe trade-off of this patch is that the `BackgroundWorker` structure\nbecomes larger. From my perspective, this is a reasonable cost (less than a\nkilobyte of extra space per worker).\n\nThe patch builds and `make check` succeeds.\n\nAny feedback is welcome!\n\n-- \nhttp://omnigres.org\nYurii",
"msg_date": "Mon, 13 Mar 2023 07:57:47 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 07:57:47AM -0700, Yurii Rashkovskii wrote:\n> However, there are use cases where [potentially] longer names are\n> expected/desired; for example, test benches (where library files may not\n> [or can not] be copied to Postgres installation) or alternative library\n> installation methods that do not put them into $libdir.\n> \n> The patch is backwards-compatible and ensures that bgw_library_name stays\n> *at least* as long as BGW_MAXLEN. Existing external code that uses\n> BGW_MAXLEN is a length boundary (for example, in `strncpy`) will continue\n> to work as expected.\n\nI see that BGW_MAXLEN was originally set to 64 in 2013 (7f7485a) [0], but\nwas increased to 96 in 2018 (3a4b891) [1]. It seems generally reasonable\nto me to increase the length of bgw_library_name further for the use-case\nyou describe, but I wonder if it'd be better to simply increase BGW_MAXLEN\nagain. However, IIUC bgw_library_name is the only field that is likely to\nbe used for absolute paths, so only increasing that one to MAXPGPATH makes\nsense.\n\n[0] https://postgr.es/m/CA%2BTgmoYtQQ-JqAJPxZg3Mjg7EqugzqQ%2BZBrpnXo95chWMCZsXw%40mail.gmail.com\n[1] https://postgr.es/m/304a21ab-a9d6-264a-f688-912869c0d7c6%402ndquadrant.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 Mar 2023 10:35:28 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "Nathan,\n\nThank you for your review.\n\nIndeed, my motivation for doing the change the way I did it was that only\nbgw_library_name is expected to be longer, whereas it is much less of a\nconcern for other fields. If we have increased BGW_MAXLEN, it would have\nincreased the size of BackgroundWorker for little to no benefit.\n\nOn Mon, Mar 13, 2023 at 10:35 AM Nathan Bossart <[email protected]>\nwrote:\n\n> On Mon, Mar 13, 2023 at 07:57:47AM -0700, Yurii Rashkovskii wrote:\n> > However, there are use cases where [potentially] longer names are\n> > expected/desired; for example, test benches (where library files may not\n> > [or can not] be copied to Postgres installation) or alternative library\n> > installation methods that do not put them into $libdir.\n> >\n> > The patch is backwards-compatible and ensures that bgw_library_name stays\n> > *at least* as long as BGW_MAXLEN. Existing external code that uses\n> > BGW_MAXLEN is a length boundary (for example, in `strncpy`) will continue\n> > to work as expected.\n>\n> I see that BGW_MAXLEN was originally set to 64 in 2013 (7f7485a) [0], but\n> was increased to 96 in 2018 (3a4b891) [1]. It seems generally reasonable\n> to me to increase the length of bgw_library_name further for the use-case\n> you describe, but I wonder if it'd be better to simply increase BGW_MAXLEN\n> again. However, IIUC bgw_library_name is the only field that is likely to\n> be used for absolute paths, so only increasing that one to MAXPGPATH makes\n> sense.\n>\n> [0]\n> https://postgr.es/m/CA%2BTgmoYtQQ-JqAJPxZg3Mjg7EqugzqQ%2BZBrpnXo95chWMCZsXw%40mail.gmail.com\n> [1]\n> https://postgr.es/m/304a21ab-a9d6-264a-f688-912869c0d7c6%402ndquadrant.com\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>\n\n\n--\nhttp://omnigres.org\nYurii\n\nNathan,Thank you for your review. Indeed, my motivation for doing the change the way I did it was that only bgw_library_name is expected to be longer, whereas it is much less of a concern for other fields. If we have increased BGW_MAXLEN, it would have increased the size of BackgroundWorker for little to no benefit. On Mon, Mar 13, 2023 at 10:35 AM Nathan Bossart <[email protected]> wrote:On Mon, Mar 13, 2023 at 07:57:47AM -0700, Yurii Rashkovskii wrote:\n> However, there are use cases where [potentially] longer names are\n> expected/desired; for example, test benches (where library files may not\n> [or can not] be copied to Postgres installation) or alternative library\n> installation methods that do not put them into $libdir.\n> \n> The patch is backwards-compatible and ensures that bgw_library_name stays\n> *at least* as long as BGW_MAXLEN. Existing external code that uses\n> BGW_MAXLEN is a length boundary (for example, in `strncpy`) will continue\n> to work as expected.\n\nI see that BGW_MAXLEN was originally set to 64 in 2013 (7f7485a) [0], but\nwas increased to 96 in 2018 (3a4b891) [1]. It seems generally reasonable\nto me to increase the length of bgw_library_name further for the use-case\nyou describe, but I wonder if it'd be better to simply increase BGW_MAXLEN\nagain. However, IIUC bgw_library_name is the only field that is likely to\nbe used for absolute paths, so only increasing that one to MAXPGPATH makes\nsense.\n\n[0] https://postgr.es/m/CA%2BTgmoYtQQ-JqAJPxZg3Mjg7EqugzqQ%2BZBrpnXo95chWMCZsXw%40mail.gmail.com\n[1] https://postgr.es/m/304a21ab-a9d6-264a-f688-912869c0d7c6%402ndquadrant.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n--http://omnigres.orgYurii",
"msg_date": "Mon, 13 Mar 2023 10:48:26 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "> On 13 Mar 2023, at 18:35, Nathan Bossart <[email protected]> wrote:\n> \n> On Mon, Mar 13, 2023 at 07:57:47AM -0700, Yurii Rashkovskii wrote:\n>> However, there are use cases where [potentially] longer names are\n>> expected/desired; for example, test benches (where library files may not\n>> [or can not] be copied to Postgres installation) or alternative library\n>> installation methods that do not put them into $libdir.\n>> \n>> The patch is backwards-compatible and ensures that bgw_library_name stays\n>> *at least* as long as BGW_MAXLEN. Existing external code that uses\n>> BGW_MAXLEN is a length boundary (for example, in `strncpy`) will continue\n>> to work as expected.\n> \n> I see that BGW_MAXLEN was originally set to 64 in 2013 (7f7485a) [0], but\n> was increased to 96 in 2018 (3a4b891) [1]. It seems generally reasonable\n> to me to increase the length of bgw_library_name further for the use-case\n> you describe, but I wonder if it'd be better to simply increase BGW_MAXLEN\n> again. However, IIUC bgw_library_name is the only field that is likely to\n> be used for absolute paths, so only increasing that one to MAXPGPATH makes\n> sense.\n\nYeah, raising just bgw_library_name to MAXPGPATH seems reasonable here. While\nthe memory usage does grow it's still quite modest, and has an upper limit in\nmax_worker_processes.\n\nWhile here, I wonder if we should document what BGW_MAXLEN is defined as in\nbgworker.sgml?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 15 Mar 2023 10:38:34 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 10:38:34AM +0100, Daniel Gustafsson wrote:\n> While here, I wonder if we should document what BGW_MAXLEN is defined as in\n> bgworker.sgml?\n\nI am -0.5 for this. If you are writing a new background worker, it's\nprobably reasonable to expect that you can locate the definition of\nBGW_MAXLEN. Also, I think there's a good chance that we'd forget to update\nsuch documentation the next time we adjust it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 20 Apr 2023 16:32:43 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "> On 21 Apr 2023, at 01:32, Nathan Bossart <[email protected]> wrote:\n> \n> On Wed, Mar 15, 2023 at 10:38:34AM +0100, Daniel Gustafsson wrote:\n>> While here, I wonder if we should document what BGW_MAXLEN is defined as in\n>> bgworker.sgml?\n> \n> I am -0.5 for this. If you are writing a new background worker, it's\n> probably reasonable to expect that you can locate the definition of\n> BGW_MAXLEN. \n\nOf course. The question is if it's a helpful addition for someone who is\nreading the documentation section on implementing background workers where we\nexplicitly mention BGW_MAXLEN without saying what it is.\n\n> Also, I think there's a good chance that we'd forget to update\n> such documentation the next time we adjust it.\n\nThere is that, but once set to MAXPGPATH it seems unlikely to change\nparticularly often so it seems the wrong thing to optimize for.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 10:49:48 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 10:49:48AM +0200, Daniel Gustafsson wrote:\n> On 21 Apr 2023, at 01:32, Nathan Bossart <[email protected]> wrote:\n>> I am -0.5 for this. If you are writing a new background worker, it's\n>> probably reasonable to expect that you can locate the definition of\n>> BGW_MAXLEN. \n> \n> Of course. The question is if it's a helpful addition for someone who is\n> reading the documentation section on implementing background workers where we\n> explicitly mention BGW_MAXLEN without saying what it is.\n\nIMHO it's better to have folks use the macro so that their calls to\nsnprintf(), etc. are updated when BGW_MAXLEN is changed. But I can't say\nI'm strongly opposed to adding the value to the docs if you think it is\nhelpful.\n\n>> Also, I think there's a good chance that we'd forget to update\n>> such documentation the next time we adjust it.\n> \n> There is that, but once set to MAXPGPATH it seems unlikely to change\n> particularly often so it seems the wrong thing to optimize for.\n\nTrue.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Apr 2023 13:44:51 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "Hi,\n\n> The trade-off of this patch is that the `BackgroundWorker` structure becomes larger. From my perspective, this is a reasonable cost (less than a kilobyte of extra space per worker).\n\nAgree.\n\n> The patch is backwards-compatible and ensures that bgw_library_name stays *at least* as long as BGW_MAXLEN. Existing external code that uses BGW_MAXLEN is a length boundary (for example, in `strncpy`) will continue to work as expected.\n\nThere is a mistake in the comment though:\n\n```\n+/*\n+ * Ensure bgw_function_name's size is backwards-compatible and sensible\n+ */\n+StaticAssertDecl(MAXPGPATH >= BGW_MAXLEN, \"MAXPGPATH must be at least\nequal to BGW_MAXLEN\");\n```\n\nlibrary_name, not function_name. Also I think the comment should be\nmore detailed, something like \"prior to PG17 we used ... but since\nPG17 ... which may cause problems if ...\".\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 24 Apr 2023 14:01:30 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "Aleksander,\n\nOn Mon, Apr 24, 2023 at 1:01 PM Aleksander Alekseev <\[email protected]> wrote:\n\n> > The patch is backwards-compatible and ensures that bgw_library_name\n> stays *at least* as long as BGW_MAXLEN. Existing external code that uses\n> BGW_MAXLEN is a length boundary (for example, in `strncpy`) will continue\n> to work as expected.\n>\n> There is a mistake in the comment though:\n>\n> ```\n> +/*\n> + * Ensure bgw_function_name's size is backwards-compatible and sensible\n> + */\n> +StaticAssertDecl(MAXPGPATH >= BGW_MAXLEN, \"MAXPGPATH must be at least\n> equal to BGW_MAXLEN\");\n> ```\n>\n> library_name, not function_name. Also I think the comment should be\n> more detailed, something like \"prior to PG17 we used ... but since\n> PG17 ... which may cause problems if ...\".\n>\n\nThis is a very good catch and a suggestion. I've slightly reworked the\npatch, and I also made this static assertion to have less indirection and,\ntherefore, make it easier to understand the premise.\nVersion 2 is attached.\n\n-- \nY.",
"msg_date": "Mon, 24 Apr 2023 13:40:10 +0200",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "Hi,\n\n> This is a very good catch and a suggestion. I've slightly reworked the patch, and I also made this static assertion to have less indirection and, therefore, make it easier to understand the premise.\n> Version 2 is attached.\n\n```\nsizeof((BackgroundWorker *)NULL)->bgw_library_name\n```\n\nI'm pretty confident something went wrong with the parentheses in v2.\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 24 Apr 2023 19:30:19 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "You're absolutely right. Here's v3.\n\n\nOn Mon, Apr 24, 2023 at 6:30 PM Aleksander Alekseev <\[email protected]> wrote:\n\n> Hi,\n>\n> > This is a very good catch and a suggestion. I've slightly reworked the\n> patch, and I also made this static assertion to have less indirection and,\n> therefore, make it easier to understand the premise.\n> > Version 2 is attached.\n>\n> ```\n> sizeof((BackgroundWorker *)NULL)->bgw_library_name\n> ```\n>\n> I'm pretty confident something went wrong with the parentheses in v2.\n>\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nY.",
"msg_date": "Mon, 24 Apr 2023 19:43:51 +0200",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "Hi,\n\n> You're absolutely right. Here's v3.\n\nPlease avoid using top posting [1].\n\nThe commit message may require a bit of tweaking by the committer but\nother than that the patch seems to be fine. I'm going to mark it as\nRfC in a bit unless anyone objects.\n\n[1]: https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 26 Apr 2023 15:07:18 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "On Wed, Apr 26, 2023 at 03:07:18PM +0300, Aleksander Alekseev wrote:\n> The commit message may require a bit of tweaking by the committer but\n> other than that the patch seems to be fine. I'm going to mark it as\n> RfC in a bit unless anyone objects.\n\nIn v4, I've introduced a new BGW_LIBLEN macro and set it to the default\nvalue of MAXPGPATH (1024). This way, the value can live in bgworker.h like\nthe other BGW_* macros do. Plus, this should make the assertion that\nchecks for backward compatibility unnecessary. Since bgw_library_name is\nessentially a path, I can see the argument that we should just set\nBGW_LIBLEN to MAXPGPATH directly. I'm curious what folks think about this.\n\nI also changed the added sizeofs to use the macro for consistency with the\nsurrounding code.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 30 Jun 2023 14:39:56 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "Hi Nathan,\n\nOn Fri, Jun 30, 2023 at 2:39 PM Nathan Bossart <[email protected]>\nwrote:\n\n>\n> In v4, I've introduced a new BGW_LIBLEN macro and set it to the default\n> value of MAXPGPATH (1024). This way, the value can live in bgworker.h like\n> the other BGW_* macros do. Plus, this should make the assertion that\n> checks for backward compatibility unnecessary. Since bgw_library_name is\n> essentially a path, I can see the argument that we should just set\n> BGW_LIBLEN to MAXPGPATH directly. I'm curious what folks think about this.\n>\n\nThank you for revising the patch. While this is relatively minor, I think\nit should be set to MAXPGPATH directly to clarify their relationship.\n\n-- \nY.\n\nHi Nathan,On Fri, Jun 30, 2023 at 2:39 PM Nathan Bossart <[email protected]> wrote:\n\nIn v4, I've introduced a new BGW_LIBLEN macro and set it to the default\nvalue of MAXPGPATH (1024). This way, the value can live in bgworker.h like\nthe other BGW_* macros do. Plus, this should make the assertion that\nchecks for backward compatibility unnecessary. Since bgw_library_name is\nessentially a path, I can see the argument that we should just set\nBGW_LIBLEN to MAXPGPATH directly. I'm curious what folks think about this.Thank you for revising the patch. While this is relatively minor, I think it should be set to MAXPGPATH directly to clarify their relationship.-- Y.",
"msg_date": "Sun, 2 Jul 2023 16:37:52 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "On Sun, Jul 02, 2023 at 04:37:52PM -0700, Yurii Rashkovskii wrote:\n> Thank you for revising the patch. While this is relatively minor, I think\n> it should be set to MAXPGPATH directly to clarify their relationship.\n\nCommitted. I set the size to MAXPGPATH directly instead of inventing a new\nmacro with the same value.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 3 Jul 2023 15:08:42 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "Nathan,\n\nOn Mon, Jul 3, 2023 at 3:08 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Sun, Jul 02, 2023 at 04:37:52PM -0700, Yurii Rashkovskii wrote:\n> > Thank you for revising the patch. While this is relatively minor, I think\n> > it should be set to MAXPGPATH directly to clarify their relationship.\n>\n> Committed. I set the size to MAXPGPATH directly instead of inventing a new\n> macro with the same value.\n>\n\nGreat, thank you! The reason I was leaving the other constant in place to\nmake upgrading extensions trivial (so that they don't need to adjust for\nthis), but if you think this is a better way, I am fine with it.\n\n-- \nY.\n\nNathan,On Mon, Jul 3, 2023 at 3:08 PM Nathan Bossart <[email protected]> wrote:On Sun, Jul 02, 2023 at 04:37:52PM -0700, Yurii Rashkovskii wrote:\n> Thank you for revising the patch. While this is relatively minor, I think\n> it should be set to MAXPGPATH directly to clarify their relationship.\n\nCommitted. I set the size to MAXPGPATH directly instead of inventing a new\nmacro with the same value.Great, thank you! The reason I was leaving the other constant in place to make upgrading extensions trivial (so that they don't need to adjust for this), but if you think this is a better way, I am fine with it. -- Y.",
"msg_date": "Mon, 3 Jul 2023 18:00:12 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 06:00:12PM -0700, Yurii Rashkovskii wrote:\n> Great, thank you! The reason I was leaving the other constant in place to\n> make upgrading extensions trivial (so that they don't need to adjust for\n> this), but if you think this is a better way, I am fine with it.\n\nSorry, I'm not following. Which constant are you referring to?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 3 Jul 2023 20:12:26 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
},
{
"msg_contents": "Nathan,\n\nOn Mon, Jul 3, 2023 at 8:12 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Mon, Jul 03, 2023 at 06:00:12PM -0700, Yurii Rashkovskii wrote:\n> > Great, thank you! The reason I was leaving the other constant in place to\n> > make upgrading extensions trivial (so that they don't need to adjust for\n> > this), but if you think this is a better way, I am fine with it.\n>\n> Sorry, I'm not following. Which constant are you referring to?\n>\n\nApologies, I misread the final patch. All good!\n\n-- \nY.\n\nNathan,On Mon, Jul 3, 2023 at 8:12 PM Nathan Bossart <[email protected]> wrote:On Mon, Jul 03, 2023 at 06:00:12PM -0700, Yurii Rashkovskii wrote:\n> Great, thank you! The reason I was leaving the other constant in place to\n> make upgrading extensions trivial (so that they don't need to adjust for\n> this), but if you think this is a better way, I am fine with it.\n\nSorry, I'm not following. Which constant are you referring to?Apologies, I misread the final patch. All good! -- Y.",
"msg_date": "Tue, 4 Jul 2023 07:40:14 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Extend the length of BackgroundWorker.bgw_library_name"
}
]
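For extension authors, the practical effect of the change committed in the thread above is that bgw_library_name can now hold a full MAXPGPATH-sized absolute path. The sketch below shows how a _PG_init() might register such a worker; the library path and entry-point name are made up for the example, and sizing the copy with sizeof(worker.bgw_library_name) keeps the code correct whether the field is BGW_MAXLEN or MAXPGPATH bytes long.

```c
#include "postgres.h"

#include "fmgr.h"
#include "postmaster/bgworker.h"

PG_MODULE_MAGIC;

void		_PG_init(void);

void
_PG_init(void)
{
	BackgroundWorker worker;

	memset(&worker, 0, sizeof(worker));
	worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
	worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
	worker.bgw_restart_time = BGW_NEVER_RESTART;
	snprintf(worker.bgw_name, BGW_MAXLEN, "example worker");
	snprintf(worker.bgw_type, BGW_MAXLEN, "example worker");
	/* Size the copy from the field itself, not a hard-coded BGW_MAXLEN. */
	snprintf(worker.bgw_library_name, sizeof(worker.bgw_library_name),
			 "/home/ci/build/my_extension.so");	/* hypothetical absolute path */
	snprintf(worker.bgw_function_name, BGW_MAXLEN, "example_worker_main");
	RegisterBackgroundWorker(&worker);
}
```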
[
{
"msg_contents": "\nIn ICU 54 and earlier, if ucol_open() is unable to find a matching\nlocale, it will fall back to the *environment*.\n\nUsing ICU 54:\n\n initdb -D data -N --locale=\"en_US.UTF-8\"\n pg_ctl -D data -l logfile start\n psql postgres -c \"create collation asdf(provider=icu, locale='asdf')\"\n # returns true\n psql postgres -c \"select 'abc' collate asdf < 'ABC' collate asdf\"\n psql postgres -c \"alter system set lc_messages='C'\"\n pg_ctl -D data -l logfile restart\n # returns false and warns about collation version mismatch\n psql postgres -c \"select 'abc' collate asdf < 'ABC' collate asdf\"\n\nThis was fixed in ICU 55 to fall back to the root locale instead[1],\nwhich is stable, has a collator version, and is not dependent on the\nenvironment. As far as I can tell, 55 and later never fall back to the\nenvironment when opening a collator (unless you explicitly pass NULL to\nucol_open(), which is documented).\n\nIt would be nice if we could detect when this fallback-to-environment\nhappens, so that we could just refuse to create the bogus collation.\nBut I didn't find a good way. There are non-error return codes from\nucol_open() that seem promising[2], but they aren't actually very\nuseful to distinguish the fallback-to-environment case as far as I can\ntell.\n\nUnless someone has a better idea, I think we need to bump the minimum\nrequired ICU version to 55. That would solve the issue in v16 and\nlater, but those using old versions of ICU and old versions of postgres\nwould still be vulnerable to these kinds of typos.\n\nRegards,\n\tJeff Davis\n\n\n[1] https://icu.unicode.org/download/55m1\n[2]\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/utypes_8h.html#a3343c1c8a8377277046774691c98d78c\n\n\n",
"msg_date": "Mon, 13 Mar 2023 16:39:04 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "ICU 54 and earlier are too dangerous"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> In ICU 54 and earlier, if ucol_open() is unable to find a matching\n> locale, it will fall back to the *environment*.\n\nThat's not great, but ...\n\n> Unless someone has a better idea, I think we need to bump the minimum\n> required ICU version to 55. That would solve the issue in v16 and\n> later, but those using old versions of ICU and old versions of postgres\n> would still be vulnerable to these kinds of typos.\n\n... that seems like an overreaction. We know from the buildfarm\nthat there's still a lot of old ICU out there. Is it really improving\nanybody's life to try to forbid them from using such a version?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Mar 2023 20:26:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ICU 54 and earlier are too dangerous"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-13 16:39:04 -0700, Jeff Davis wrote:\n> In ICU 54 and earlier, if ucol_open() is unable to find a matching\n> locale, it will fall back to the *environment*.\n> \n> Using ICU 54:\n> \n> initdb -D data -N --locale=\"en_US.UTF-8\"\n> pg_ctl -D data -l logfile start\n> psql postgres -c \"create collation asdf(provider=icu, locale='asdf')\"\n> # returns true\n> psql postgres -c \"select 'abc' collate asdf < 'ABC' collate asdf\"\n> psql postgres -c \"alter system set lc_messages='C'\"\n> pg_ctl -D data -l logfile restart\n> # returns false and warns about collation version mismatch\n> psql postgres -c \"select 'abc' collate asdf < 'ABC' collate asdf\"\n> \n> This was fixed in ICU 55 to fall back to the root locale instead[1],\n> which is stable, has a collator version, and is not dependent on the\n> environment. As far as I can tell, 55 and later never fall back to the\n> environment when opening a collator (unless you explicitly pass NULL to\n> ucol_open(), which is documented).\n\n> It would be nice if we could detect when this fallback-to-environment\n> happens, so that we could just refuse to create the bogus collation.\n> But I didn't find a good way. There are non-error return codes from\n> ucol_open() that seem promising[2], but they aren't actually very\n> useful to distinguish the fallback-to-environment case as far as I can\n> tell.\n\nWhat non-error code is returned in the above example?\n\nCan we query the returned collator and see if it matches what we were looking\nfor?\n\n\n> Unless someone has a better idea, I think we need to bump the minimum\n> required ICU version to 55. That would solve the issue in v16 and\n> later, but those using old versions of ICU and old versions of postgres\n> would still be vulnerable to these kinds of typos.\n\nI'm a bit confused by the dates. https://icu.unicode.org/download/55m1 says\nthat version was released 2014-12-17, but the linked issue around root locales\nis from 2018: https://unicode-org.atlassian.net/browse/ICU-10823 - I guess\nthe issue tracker was migrated at some point or such...\n\nIf indeed 2014 is the correct year of release, then it might be ok to increase\nthe minimum version...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Mar 2023 18:13:19 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ICU 54 and earlier are too dangerous"
},
{
"msg_contents": "On 14.03.23 01:26, Tom Lane wrote:\n>> Unless someone has a better idea, I think we need to bump the minimum\n>> required ICU version to 55. That would solve the issue in v16 and\n>> later, but those using old versions of ICU and old versions of postgres\n>> would still be vulnerable to these kinds of typos.\n> ... that seems like an overreaction. We know from the buildfarm\n> that there's still a lot of old ICU out there. Is it really improving\n> anybody's life to try to forbid them from using such a version?\n\nIf I'm getting the dates right, the 10-year support of RHEL 7 will \nexpire in June 2024. So if we follow past practices, we could drop \nsupport for RHEL 7 in PG17. This would allow us to drop support for old \nlibicu, and also old openssl, zlib, maybe more.\n\nSo if we don't feel like we need to do an emergency change here, there \nis a path to do this in a principled way in the near future.\n\n\n\n",
"msg_date": "Tue, 14 Mar 2023 08:25:55 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ICU 54 and earlier are too dangerous"
},
{
"msg_contents": "On Mon, 2023-03-13 at 18:13 -0700, Andres Freund wrote:\n> What non-error code is returned in the above example?\n\nWhen the collator for locale \"asdf\" is opened, the status is set to\nU_USING_DEFAULT_WARNING.\n\nThat seemed very promising at first, but it's the same thing returned\nafter opening most valid locales, including \"en\" and \"en-US\". It seems\nto only return U_ZERO_ERROR on an exact hit, like \"fr-CA\" or \"root\".\n\nThere's also U_USING_FALLBACK_WARNING, which also seemed promising, but\nit's returned when opening \"fr-FR\" or \"ja-JP\" (falls back to \"fr\" and\n\"ja\" respectively).\n\n> Can we query the returned collator and see if it matches what we were\n> looking\n> for?\n\nI tried a few variations of that in my canonicalization / validation\npatch, which I called \"validation\". The closest thing I found is:\n\n ucol_getLocaleByType(collator, ULOC_VALID_LOCALE, &status)\n\nWe could strip away the attributes and compare to the result of that,\nand it mostly works. There are a few complications, like I think we\nneed to preserve the \"collation\" attribute for things like\n\"de@collation=phonebook\".\n\nAnother thing to consider is that the environment might happen to open\nthe collation you intend at the time the collation is created, but then\nlater of course the environment can change, so we'd have to check every\ntime it's opened. And getting an error when the collation is opened is\nnot great, so it might need to be a WARNING or something, and it starts\nto get less useful.\n\nWhat would be *really* nice is if there was some kind of way to tell if\nthere was no real match to a known locale, either during open or via\nsome other API. I wasn't able to find one, though.\n\nActually, now that I think about it, we could just search all known\nlocales using either ucol_getAvailable() or uloc_getAvailable(), and\nsee if there's a match. Not very clean, but it should catch most\nproblems. I'll look into whether there's a reasonable way to match or\nnot.\n\n> \n> I'm a bit confused by the dates.\n> https://icu.unicode.org/download/55m1 says\n> that version was released 2014-12-17, but the linked issue around\n> root locales\n> is from 2018: https://unicode-org.atlassian.net/browse/ICU-10823 - I\n> guess\n> the issue tracker was migrated at some point or such...\n\nThe dates are misleading in both git (migrated from SVN circa 2016) and\nJIRA (migrated circa 2018, see\nhttps://unicode-org.atlassian.net/browse/ICU-1 ). It seems 55.1 was\nreleased in either 2014 or 2015.\n\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 14 Mar 2023 08:48:01 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ICU 54 and earlier are too dangerous"
},
{
"msg_contents": "On Tue, 2023-03-14 at 08:48 -0700, Jeff Davis wrote:\n> Actually, now that I think about it, we could just search all known\n> locales using either ucol_getAvailable() or uloc_getAvailable(), and\n> see if there's a match. Not very clean, but it should catch most\n> problems. I'll look into whether there's a reasonable way to match or\n> not.\n\nI posted a patch to do this as 0006 in the series here:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 17 Mar 2023 11:10:13 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ICU 54 and earlier are too dangerous"
}
] |
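The ICU thread above proposes searching ucol_getAvailable() for a real match instead of trusting ucol_open()'s status code. Below is a minimal standalone C sketch of that idea, for illustration only: it is not the patch posted to the list, the helper name icu_locale_has_collator() is invented, and a production check would also have to canonicalize the requested locale (hyphen vs. underscore, attributes such as @collation=phonebook) before comparing, as discussed in the thread.

#include <stdio.h>
#include <string.h>
#include <unicode/ucol.h>
#include <unicode/uloc.h>

/* Return 1 if "wanted" exactly matches a locale ICU has a collator for. */
static int
icu_locale_has_collator(const char *wanted)
{
    int32_t n = ucol_countAvailable();

    for (int32_t i = 0; i < n; i++)
    {
        if (strcmp(ucol_getAvailable(i), wanted) == 0)
            return 1;
    }
    return 0;
}

int
main(void)
{
    UErrorCode  status = U_ZERO_ERROR;
    const char *loc = "asdf";       /* deliberate typo, as in the repro above */
    UCollator  *coll = ucol_open(loc, &status);

    if (U_FAILURE(status))
    {
        fprintf(stderr, "ucol_open: %s\n", u_errorName(status));
        return 1;
    }

    /*
     * status is typically just U_USING_DEFAULT_WARNING here, which is also
     * what valid locales like "en" produce, so it cannot distinguish the
     * bogus request; hence the explicit search over known collator locales.
     */
    printf("ICU reports valid locale: %s\n",
           ucol_getLocaleByType(coll, ULOC_VALID_LOCALE, &status));
    printf("exact collator match for \"%s\"? %s\n",
           loc, icu_locale_has_collator(loc) ? "yes" : "no");

    ucol_close(coll);
    return 0;
}

Building it requires the ICU headers and linking against the ICU libraries, e.g. via pkg-config icu-i18n.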
[
{
"msg_contents": "Hello all,\n\nAs highlighted in [1] fseek() might fail to error even when accessing\nunseekable streams.\n\nPFA a patch that checks the file type before the actual fseek(), so only\nsupported calls are made.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 14 Mar 2023 13:26:27 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix fseek() detection of unseekable files on WIN32"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 01:26:27PM +0100, Juan José Santamaría Flecha wrote:\n> As highlighted in [1] fseek() might fail to error even when accessing\n> unseekable streams.\n> \n> PFA a patch that checks the file type before the actual fseek(), so only\n> supported calls are made.\n\n+ * streams, so harden that funcion with our version.\ns/funcion/function/.\n\n+extern int pgfseek64(FILE *stream, pgoff_t offset, int origin);\n+extern pgoff_t pgftell64(FILE *stream);\n+#define fseeko(stream, offset, origin) pgfseek64(stream, offset, origin)\n+#define ftello(stream) pgftell64(stream)\n\nWhat about naming the internal wrappers _pgfseeko64() and\n_pgftello64(), located in a new file named win32fseek.c? It may be\npossible that we would need a similar treatment for fseek(), in the\nfuture, though I don't see an issue why this would be needed now.\n\n+ if (GetFileType((HANDLE) _get_osfhandle(_fileno(stream))) != FILE_TYPE_DISK)\n+ {\n+ errno = ESPIPE;\n+ return -1;\n+ }\nShouldn't there be cases where we should return EINVAL for some of the\nother types, like FILE_TYPE_REMOTE or FILE_TYPE_UNKNOWN? We should\nreturn ESPIPE only for FILE_TYPE_PIPE and FILE_TYPE_CHAR, then?\n--\nMichael",
"msg_date": "Wed, 15 Mar 2023 13:57:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix fseek() detection of unseekable files on WIN32"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 5:57 AM Michael Paquier <[email protected]> wrote:\n\n> On Tue, Mar 14, 2023 at 01:26:27PM +0100, Juan José Santamaría Flecha\n> wrote:\n> > As highlighted in [1] fseek() might fail to error even when accessing\n> > unseekable streams.\n> >\n> > PFA a patch that checks the file type before the actual fseek(), so only\n> > supported calls are made.\n>\n> + * streams, so harden that funcion with our version.\n> s/funcion/function/.\n>\n\nDone.\n\n+extern int pgfseek64(FILE *stream, pgoff_t offset, int origin);\n> +extern pgoff_t pgftell64(FILE *stream);\n> +#define fseeko(stream, offset, origin) pgfseek64(stream, offset, origin)\n> +#define ftello(stream) pgftell64(stream)\n>\n> What about naming the internal wrappers _pgfseeko64() and\n> _pgftello64(), located in a new file named win32fseek.c? It may be\n> possible that we would need a similar treatment for fseek(), in the\n> future, though I don't see an issue why this would be needed now.\n>\n\nDone.\n\n\n> + if (GetFileType((HANDLE) _get_osfhandle(_fileno(stream))) !=\n> FILE_TYPE_DISK)\n> + {\n> + errno = ESPIPE;\n> + return -1;\n> + }\n> Shouldn't there be cases where we should return EINVAL for some of the\n> other types, like FILE_TYPE_REMOTE or FILE_TYPE_UNKNOWN? We should\n> return ESPIPE only for FILE_TYPE_PIPE and FILE_TYPE_CHAR, then?\n>\n\nDone.\n\nPFA a new version of the patch.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Wed, 15 Mar 2023 12:18:25 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix fseek() detection of unseekable files on WIN32"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 12:18:25PM +0100, Juan José Santamaría Flecha wrote:\n> PFA a new version of the patch.\n\n+_pgftello64(FILE *stream)\n+{\n+ DWORD fileType;\n+\n+ fileType = GetFileType((HANDLE) _get_osfhandle(_fileno(stream)));\n\nHmm. I am a bit surprised here.. It seems to me that we should make\nsure that:\n- We exist quickly if _get_osfhandle() returns -2 or\nINVALID_HANDLE_VALUE, returning EINVAL?\n- After GetFileType(), check for GetLastError() and the\nFILE_TYPE_UNKNOWN case?\n\nDo you think that these would be improvements?\n--\nMichael",
"msg_date": "Thu, 16 Mar 2023 10:05:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix fseek() detection of unseekable files on WIN32"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 2:05 AM Michael Paquier <[email protected]> wrote:\n\n> On Wed, Mar 15, 2023 at 12:18:25PM +0100, Juan José Santamaría Flecha\n> wrote:\n> > PFA a new version of the patch.\n>\n> +_pgftello64(FILE *stream)\n> +{\n> + DWORD fileType;\n> +\n> + fileType = GetFileType((HANDLE) _get_osfhandle(_fileno(stream)));\n>\n> Hmm. I am a bit surprised here.. It seems to me that we should make\n> sure that:\n> - We exist quickly if _get_osfhandle() returns -2 or\n> INVALID_HANDLE_VALUE, returning EINVAL?\n> - After GetFileType(), check for GetLastError() and the\n> FILE_TYPE_UNKNOWN case?\n>\n> Do you think that these would be improvements?\n>\n\nIDK, this is just looking for the good case, anything else we'll fail with\nESPIPE or EINVAL anyway. If we want to get the proper file type we can call\nfstat(), which has the full logic.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Thu, Mar 16, 2023 at 2:05 AM Michael Paquier <[email protected]> wrote:On Wed, Mar 15, 2023 at 12:18:25PM +0100, Juan José Santamaría Flecha wrote:\n> PFA a new version of the patch.\n\n+_pgftello64(FILE *stream)\n+{\n+ DWORD fileType;\n+\n+ fileType = GetFileType((HANDLE) _get_osfhandle(_fileno(stream)));\n\nHmm. I am a bit surprised here.. It seems to me that we should make\nsure that:\n- We exist quickly if _get_osfhandle() returns -2 or\nINVALID_HANDLE_VALUE, returning EINVAL?\n- After GetFileType(), check for GetLastError() and the\nFILE_TYPE_UNKNOWN case?\n\nDo you think that these would be improvements?IDK, this is just looking for the good case, anything else we'll fail with ESPIPE or EINVAL anyway. If we want to get the proper file type we can call fstat(), which has the full logic.Regards,Juan José Santamaría Flecha",
"msg_date": "Thu, 16 Mar 2023 10:08:44 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix fseek() detection of unseekable files on WIN32"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 10:08:44AM +0100, Juan José Santamaría Flecha wrote:\n> IDK, this is just looking for the good case, anything else we'll fail with\n> ESPIPE or EINVAL anyway. If we want to get the proper file type we can call\n> fstat(), which has the full logic.\n\nI am not sure, TBH. As presented, the two GetFileType() calls in\n_pgftello64() and _pgfseeko64() ignore the case where it returns\nFILE_TYPE_UNKNOWN and GetLastError() has something else than\nNO_ERROR. The code would return EINVAL for all the errors happening.\nPerhaps that's fine, though I am wondering if we should report\nsomething more exact, based on _dosmaperr(GetLastError())?\n--\nMichael",
"msg_date": "Sun, 19 Mar 2023 20:20:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix fseek() detection of unseekable files on WIN32"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 08:20:27PM +0900, Michael Paquier wrote:\n> I am not sure, TBH. As presented, the two GetFileType() calls in\n> _pgftello64() and _pgfseeko64() ignore the case where it returns\n> FILE_TYPE_UNKNOWN and GetLastError() has something else than\n> NO_ERROR. The code would return EINVAL for all the errors happening.\n> Perhaps that's fine, though I am wondering if we should report\n> something more exact, based on _dosmaperr(GetLastError())?\n\nIn short, I was thinking among the lines of something like the\nattached, where I have invented a pgwin32_get_file_type() that acts as\na wrapper of GetFileType() in a new file called win32common.c, with\nall the error handling we would use between fstat(), fseeko() and\nftello() centralized in a single code path.\n\nThe refactoring with win32common.c had better be separated into its\nown patch, at the end, if using an approach like that.\n--\nMichael",
"msg_date": "Sun, 19 Mar 2023 20:45:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix fseek() detection of unseekable files on WIN32"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 12:45 PM Michael Paquier <[email protected]>\nwrote:\n\n>\n> In short, I was thinking among the lines of something like the\n> attached, where I have invented a pgwin32_get_file_type() that acts as\n> a wrapper of GetFileType() in a new file called win32common.c, with\n> all the error handling we would use between fstat(), fseeko() and\n> ftello() centralized in a single code path.\n>\n> The refactoring with win32common.c had better be separated into its\n> own patch, at the end, if using an approach like that.\n>\n\nMy approach was trying to make something minimal so it could be\nbackpatchable. This looks fine for HEAD, but are you planning on something\nsimilar for the other branches?\n\nDoesn't pgwin32_get_file_type() fit in dirmod.c? Might be a question of\npersonal taste, I don't really have strong feelings against win32common.c.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Sun, Mar 19, 2023 at 12:45 PM Michael Paquier <[email protected]> wrote:\nIn short, I was thinking among the lines of something like the\nattached, where I have invented a pgwin32_get_file_type() that acts as\na wrapper of GetFileType() in a new file called win32common.c, with\nall the error handling we would use between fstat(), fseeko() and\nftello() centralized in a single code path.\n\nThe refactoring with win32common.c had better be separated into its\nown patch, at the end, if using an approach like that.My approach was trying to make something minimal so it could be backpatchable. This looks fine for HEAD, but are you planning on something similar for the other branches?Doesn't pgwin32_get_file_type() fit in dirmod.c? Might be a question of personal taste, I don't really have strong feelings against win32common.c.Regards,Juan José Santamaría Flecha",
"msg_date": "Sun, 19 Mar 2023 20:10:10 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix fseek() detection of unseekable files on WIN32"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 08:10:10PM +0100, Juan José Santamaría Flecha wrote:\n> My approach was trying to make something minimal so it could be\n> backpatchable. This looks fine for HEAD, but are you planning on something\n> similar for the other branches?\n\nYes. This is actually not invasive down to 14 as the code is\nconsistent for these branches.\n\n> Doesn't pgwin32_get_file_type() fit in dirmod.c? Might be a question of\n> personal taste, I don't really have strong feelings against win32common.c.\n\nNot sure about this one. I have considered it and dirmod.c includes\nalso bits for cygwin, while being aimed for higher-level routines like\nrename(), unlink() or symlink(). This patch is only for WIN32, and\naimed for common parts in win32*.c code, so a separate file seemed a\nbit cleaner to me at the end.\n--\nMichael",
"msg_date": "Mon, 20 Mar 2023 07:06:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix fseek() detection of unseekable files on WIN32"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 07:06:22AM +0900, Michael Paquier wrote:\n> Not sure about this one. I have considered it and dirmod.c includes\n> also bits for cygwin, while being aimed for higher-level routines like\n> rename(), unlink() or symlink(). This patch is only for WIN32, and\n> aimed for common parts in win32*.c code, so a separate file seemed a\n> bit cleaner to me at the end.\n\nBy the way, do you think that we could be able to get a TAP test for\nthat? It does not seem that it needs to be that complicated, as long\nas we use a pg_dump command that pipes its output to a pg_restore\ncommand launched by system()?\n--\nMichael",
"msg_date": "Wed, 22 Mar 2023 09:12:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix fseek() detection of unseekable files on WIN32"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 07:06:22AM +0900, Michael Paquier wrote:\n> Not sure about this one. I have considered it and dirmod.c includes\n> also bits for cygwin, while being aimed for higher-level routines like\n> rename(), unlink() or symlink(). This patch is only for WIN32, and\n> aimed for common parts in win32*.c code, so a separate file seemed a\n> bit cleaner to me at the end.\n\nAfter going through the installation of a Windows setup with meson and\nninja under VS, I have checked that this is working correctly by\nmyself, so I am going to apply that. One of the tests I have done\ninvolved feeding a dump of the regression data through a pipe to\npg_restore, and the whole was able to work fine, while head broke when\nusing a pipe.\n\nDigressing a bit, while I don't forget..\n\nSpoiler 1: I don't think that recommending ActivePerl in the\ndocumentation is a good idea these days. They do not provide anymore\na standalone installer that deploys the binaries you can use, and\nthey've made it really difficult to even access a \"perl\" command as it\nhas become necessary to use an extra command \"state activate\n--default\" to link with a project registered in their stuff, meaning a\nconnection to their project. Once this command is launched, the\nterminal links to a cached state in AppData. This is very unfriendly.\nIn comparison, relying on StrawberryPerl and Chocolatey feels like a\nbreeze..\n\nSpoiler 2: mingw.org seems to be dead, and we have two links in the\ndocs referring to it.\n--\nMichael",
"msg_date": "Tue, 11 Apr 2023 14:43:25 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix fseek() detection of unseekable files on WIN32"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 02:43:25PM +0900, Michael Paquier wrote:\n> After going through the installation of a Windows setup with meson and\n> ninja under VS, I have checked that this is working correctly by\n> myself, so I am going to apply that. One of the tests I have done\n> involved feeding a dump of the regression data through a pipe to\n> pg_restore, and the whole was able to work fine, while head broke when\n> using a pipe.\n\nApplied this one down to 14. The first responses from the buildfarm\nare good, I'll keep an eye on all that.\n--\nMichael",
"msg_date": "Wed, 12 Apr 2023 11:19:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix fseek() detection of unseekable files on WIN32"
}
] |
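For context on the wrapper shape discussed in the thread above, here is a simplified, Windows-only C sketch written for illustration. The function name my_fseeko64() is invented; the real code described in the thread (_pgfseeko64() in a new win32fseek.c, with the GetFileType() handling later centralized in pgwin32_get_file_type()) differs in its details and error reporting.

#include <windows.h>
#include <io.h>
#include <stdio.h>
#include <stdint.h>
#include <errno.h>

static int
my_fseeko64(FILE *stream, long long offset, int origin)
{
    intptr_t rawhandle = _get_osfhandle(_fileno(stream));

    /* -2 means "no associated stream"; -1 is INVALID_HANDLE_VALUE */
    if (rawhandle == -2 || rawhandle == (intptr_t) INVALID_HANDLE_VALUE)
    {
        errno = EINVAL;
        return -1;
    }

    switch (GetFileType((HANDLE) rawhandle))
    {
        case FILE_TYPE_DISK:
            return _fseeki64(stream, offset, origin);

        case FILE_TYPE_PIPE:
        case FILE_TYPE_CHAR:
            /* unseekable streams, where the native fseek() may "succeed" */
            errno = ESPIPE;
            return -1;

        default:
            /* FILE_TYPE_REMOTE, FILE_TYPE_UNKNOWN, or an error */
            errno = EINVAL;
            return -1;
    }
}

int
main(void)
{
    /* With stdin coming from a pipe or console this reports ESPIPE. */
    if (my_fseeko64(stdin, 0, SEEK_SET) != 0)
        perror("my_fseeko64(stdin)");
    return 0;
}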
[
{
"msg_contents": "Hi,\n\n\nUnfortunately DROP DATABASE does not hold interrupt over its crucial steps. If\nyou e.g. set a breakpoint on DropDatabaseBuffers() and then do a signal\nSIGINT, we'll process that interrupt before the transaction commits.\n\nA later connect to that database ends with:\n2023-03-14 10:22:24.443 PDT [3439153][client backend][3/2:0][[unknown]] PANIC: could not open critical system index 2662\n\n\nIt's not entirely obvious how to fix this. We can't just hold interrupts for\nthe whole transaction - for one, we hang if we do so, because it prevents\nourselves from absorbing our own barrier:\n\t/* Close all smgr fds in all backends. */\n\tWaitForProcSignalBarrier(EmitProcSignalBarrier(PROCSIGNAL_BARRIER_SMGRRELEASE));\n\n\nISTM that at the very least dropdb() needs to internally commit *before*\ndropping buffers - after that point the database is corrupt.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Mar 2023 10:45:21 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "DROP DATABASE is interruptible"
},
{
"msg_contents": "I tried out the patch you posted over at [1]. For those wanting an\neasy way to test it, or test the buggy behaviour in master without\nthis patch, you can simply kill -STOP the checkpointer, so that DROP\nDATABASE hangs in RequestCheckpoint() (or you could SIGSTOP any other\nbackend so it hangs in the barrier thing instead), and then you can\njust press ^C like this:\n\npostgres=# create database db2;\nCREATE DATABASE\npostgres=# drop database db2;\n^CCancel request sent\nERROR: canceling statement due to user request\n\nAfter that you get:\n\n$ psql db2\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\"\nfailed: FATAL: database \"db2\" is invalid\nDETAIL: Use DROP DATABASE to drop invalid databases\n\nI suppose it should be a HINT?\n\n+# FIXME: It'd be good to test the actual interruption path. But it's not\n+# immediately obvious how.\n\nI wonder if there is some way to incorporate something based on\nSIGSTOP signals into the test, but I don't know how to do it on\nWindows and maybe that's a bit weird anyway. For a non-OS-specific\nway to do it, I was wondering about having a test module function that\nhas a wait loop that accepts ^C but deliberately ignores\nProcSignalBarrier, and leaving that running in a background psql for a\nsimilar effect?\n\nNot sure why the test is under src/test/recovery.\n\nWhile a database exists in this state, we get periodic autovacuum\nnoise, which I guess we should actually skip? I suppose someone might\neventually wonder if autovacuum could complete the drop, but it seems\na bit of a sudden weird leap in duties and might be confusing (perhaps\nit'd make more sense if 'invalid because creating' and 'invalid\nbecause dropping' were distinguished).\n\n2023-05-09 15:24:10.860 NZST [523191] FATAL: database \"db2\" is invalid\n2023-05-09 15:24:10.860 NZST [523191] DETAIL: Use DROP DATABASE to\ndrop invalid databases\n2023-05-09 15:25:10.883 NZST [523279] FATAL: database \"db2\" is invalid\n2023-05-09 15:25:10.883 NZST [523279] DETAIL: Use DROP DATABASE to\ndrop invalid databases\n2023-05-09 15:26:10.899 NZST [523361] FATAL: database \"db2\" is invalid\n2023-05-09 15:26:10.899 NZST [523361] DETAIL: Use DROP DATABASE to\ndrop invalid databases\n2023-05-09 15:27:10.919 NZST [523408] FATAL: database \"db2\" is invalid\n2023-05-09 15:27:10.919 NZST [523408] DETAIL: Use DROP DATABASE to\ndrop invalid databases\n2023-05-09 15:28:10.938 NZST [523456] FATAL: database \"db2\" is invalid\n2023-05-09 15:28:10.938 NZST [523456] DETAIL: Use DROP DATABASE to\ndrop invalid databases\n\n[1] https://www.postgresql.org/message-id/20230509013255.fjrlpitnj3ltur76%40awork3.anarazel.de\n\n\n",
"msg_date": "Tue, 9 May 2023 15:41:36 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "On Tue, May 9, 2023 at 3:41 PM Thomas Munro <[email protected]> wrote:\n> I tried out the patch you posted over at [1].\n\nI forgot to add, +1, I think this is a good approach.\n\n(I'm still a little embarrassed at how long we spent trying to debug\nthis in the other thread from the supplied clues, when you'd already\npointed this failure mechanism out including the exact error message a\ncouple of months ago. One thing I've noticed is that new threads\nposted in the middle of commitfests are hard to see :-D We were\ngetting pretty close, though.)\n\n\n",
"msg_date": "Tue, 9 May 2023 15:50:01 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-09 15:41:36 +1200, Thomas Munro wrote:\n> I tried out the patch you posted over at [1].\n\nThanks!\n\n\n> $ psql db2\n> psql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\"\n> failed: FATAL: database \"db2\" is invalid\n> DETAIL: Use DROP DATABASE to drop invalid databases\n> \n> I suppose it should be a HINT?\n\nYup.\n\n\n> +# FIXME: It'd be good to test the actual interruption path. But it's not\n> +# immediately obvious how.\n> \n> I wonder if there is some way to incorporate something based on\n> SIGSTOP signals into the test, but I don't know how to do it on\n> Windows and maybe that's a bit weird anyway. For a non-OS-specific\n> way to do it, I was wondering about having a test module function that\n> has a wait loop that accepts ^C but deliberately ignores\n> ProcSignalBarrier, and leaving that running in a background psql for a\n> similar effect?\n\nSeems a bit too complicated.\n\nWe really need to work at a framework for this kind of thing.\n\n\n> Not sure why the test is under src/test/recovery.\n\nWhere else? We don't really have a place to put backend specific tests that\naren't about logical replication or recovery right now...\n\nIt partially is about dealing with crashes etc in the middle of DROP DATABASE,\nso it doesn't seem unreasonble to me anyway.\n\n\n> While a database exists in this state, we get periodic autovacuum\n> noise, which I guess we should actually skip?\n\nYes, good catch.\n\nAlso should either reset datfrozenxid et al when invalidating, or ignore it\nwhen computing horizons.\n\n\n> I suppose someone might eventually wonder if autovacuum could complete the\n> drop, but it seems a bit of a sudden weird leap in duties and might be\n> confusing (perhaps it'd make more sense if 'invalid because creating' and\n> 'invalid because dropping' were distinguished).\n\nI'm bit hesitant to do so for now. Once it's a bit more settled, maybe?\nAlthough I wonder if there's something better suited to the task than\nautovacuum.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 May 2023 21:02:03 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "Hi,\n\nI'm hacking on this bugfix again, thanks to Evgeny's reminder on the other\nthread [1].\n\n\nI've been adding checks for partiall-dropped databases to the following places\nso far:\n- vac_truncate_clog(), as autovacuum can't process it anymore. Otherwise a\n partially dropped database could easily lead to shutdown-due-to-wraparound.\n- get_database_list() - so autovacuum workers don't error out when connecting\n- template database used by CREATE DATABASE\n- pg_dumpall, so we don't try to connect to the database\n- vacuumdb, clusterdb, reindexdb, same\n\nIt's somewhat annoying that there is no shared place for the relevant query\nfor the client-side cases.\n\n\nI haven't yet added checks to pg_upgrade, even though that's clearly\nneeded. I'm waffling a bit between erroring out and just ignoring the\ndatabase? pg_upgrade already fails when datallowconn is set \"wrongly\", see\ncheck_proper_datallowconn(). Any opinions?\n\n\nI'm not sure what should be done for psql. It's probably not a good idea to\nchange tab completion, that'd just make it appear the database is gone. But \\l\ncould probably show dropped databases more prominently?\n\n\nWe don't really have a good place to for database specific\ncode. dbcommands.[ch] are for commands (duh), but already contain a bunch of\nfunctions that don't really belong there. Seems we should add a\ncatalog/pg_database.c or catalog/database.c (tbh, I don't really know which we\nuse for what). But that's probably for HEAD only.\n\n\ndbcommands.c's get_db_info() seems to have gone completely off the deep\nend. It returns information in 14 separate out parameters, and the variables\nfor that need to all exhaustively be declared. And of course that differs\nheavily between releases, making it a pain to backpatch any change. ISTM we\nshould just define a struct for the parameters - alternatively we could just\nreturn a copy of the pg_database tuple, but it looks like the variable-width\nattributes would make that *just* about a loss.\n\nI guess that's once more something better dealt with on HEAD, but damn, I'm\nnot relishing having to deal with backpatching anything touching it - I think\nit might be reasonable to just open-code fetching datconnlimit :/.\n\n\nThis patch is starting to be a bit big, particularly once adding tests for all\nthe checks mentioned above - but I haven't heard of or thought of a better\nproposal :(.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/01020188d31d0a86-16af92c0-4466-4cb6-a2e8-0e5898aab800-000000%40eu-west-1.amazonses.com\n\n\n",
"msg_date": "Wed, 21 Jun 2023 12:02:04 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-09 15:41:36 +1200, Thomas Munro wrote:\n> +# FIXME: It'd be good to test the actual interruption path. But it's not\n> +# immediately obvious how.\n> \n> I wonder if there is some way to incorporate something based on\n> SIGSTOP signals into the test, but I don't know how to do it on\n> Windows and maybe that's a bit weird anyway. For a non-OS-specific\n> way to do it, I was wondering about having a test module function that\n> has a wait loop that accepts ^C but deliberately ignores\n> ProcSignalBarrier, and leaving that running in a background psql for a\n> similar effect?\n\nI found a way to test it reliably, albeit partially. However, I'm not sure\nwhere to do so / if it's worth doing so.\n\nThe problem occurs once remove_dbtablespaces() starts working. The fix does a\nheap_inplace_update() before that. So to reproduce the problem one session can\nlock pg_tablespace, another can drop a database. Then the second session can\nbe cancelled by the first.\n\nWaiting for locks to be acquired etc is somewhat cumbersome in a tap\ntest. It'd be easier in an isolation test. But I don't think we want to do\nthis as part of the normal isolation schedule?\n\nSo just open coding it in a tap test seems to be the best way?\n\nIs it worth doing?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 21 Jun 2023 19:38:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-21 12:02:04 -0700, Andres Freund wrote:\n> I'm hacking on this bugfix again, thanks to Evgeny's reminder on the other\n> thread [1].\n> \n> \n> I've been adding checks for partiall-dropped databases to the following places\n> so far:\n> - vac_truncate_clog(), as autovacuum can't process it anymore. Otherwise a\n> partially dropped database could easily lead to shutdown-due-to-wraparound.\n> - get_database_list() - so autovacuum workers don't error out when connecting\n> - template database used by CREATE DATABASE\n> - pg_dumpall, so we don't try to connect to the database\n> - vacuumdb, clusterdb, reindexdb, same\n\nAlso pg_amcheck.\n\n\n> It's somewhat annoying that there is no shared place for the relevant query\n> for the client-side cases.\n\nStill the case, I looked around, and it doesn't look we do anything smart\nanywhere :/\n\n\n> I haven't yet added checks to pg_upgrade, even though that's clearly\n> needed. I'm waffling a bit between erroring out and just ignoring the\n> database? pg_upgrade already fails when datallowconn is set \"wrongly\", see\n> check_proper_datallowconn(). Any opinions?\n\nThere don't need to be explict checks, because pg_upgrade will fail, because\nit connects to every database. Obviously the error could be nicer, but it\nseems ok for something hopefully very rare. I did add a test ensuring that the\nbehaviour is caught.\n\nIt's somewhat odd that pg_upgrade prints errors on stdout...\n\n\n> I'm not sure what should be done for psql. It's probably not a good idea to\n> change tab completion, that'd just make it appear the database is gone. But \\l\n> could probably show dropped databases more prominently?\n\nI have not done that. I wonder if this is something that should be done in the\nback branches?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 25 Jun 2023 10:03:37 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "> On 25 Jun 2023, at 19:03, Andres Freund <[email protected]> wrote:\n> On 2023-06-21 12:02:04 -0700, Andres Freund wrote:\n>> I'm hacking on this bugfix again, \n\nThis patch LGTM from reading through and testing (manually and with your\nsupplied tests in the patch), I think this is a sound approach to deal with\nthis.\n\n>> I've been adding checks for partiall-dropped databases to the following places\n>> so far:\n>> - vac_truncate_clog(), as autovacuum can't process it anymore. Otherwise a\n>> partially dropped database could easily lead to shutdown-due-to-wraparound.\n>> - get_database_list() - so autovacuum workers don't error out when connecting\n>> - template database used by CREATE DATABASE\n>> - pg_dumpall, so we don't try to connect to the database\n>> - vacuumdb, clusterdb, reindexdb, same\n> \n> Also pg_amcheck.\n\nThat seems like an exhaustive list to me, I was unable to think of any other\nplace which would need the same treatment. pg_checksums does come to mind but\nit can clearly not see the required info so there doesn't seem like theres a\nlot to do about that.\n\n>> I haven't yet added checks to pg_upgrade, even though that's clearly\n>> needed. I'm waffling a bit between erroring out and just ignoring the\n>> database? pg_upgrade already fails when datallowconn is set \"wrongly\", see\n>> check_proper_datallowconn(). Any opinions?\n> \n> There don't need to be explict checks, because pg_upgrade will fail, because\n> it connects to every database. Obviously the error could be nicer, but it\n> seems ok for something hopefully very rare. I did add a test ensuring that the\n> behaviour is caught.\n\nI don't see any pg_upgrade test in the patch?\n\n> It's somewhat odd that pg_upgrade prints errors on stdout...\n\nThere are many odd things about pg_upgrade logging, updating it to use the\ncommon logging framework of other utils would be nice.\n\n>> I'm not sure what should be done for psql. It's probably not a good idea to\n>> change tab completion, that'd just make it appear the database is gone. But \\l\n>> could probably show dropped databases more prominently?\n> \n> I have not done that. I wonder if this is something that should be done in the\n> back branches?\n\nPossibly, I'm not sure where we usually stand on changing the output format of\n\\ commands in psql in minor revisions.\n\nA few small comments on the patch:\n\n+ * Max connections allowed (-1=no limit, -2=invalid database). A database\n+ * is set to invalid partway through eing dropped. Using datconnlimit=-2\n+ * for this purpose isn't particularly clean, but is backpatchable.\nTypo: s/eing/being/. A limit of -1 makes sense, but the meaning of -2 is less\nintuitive IMO. Would it make sense to add a #define with a more descriptive\nname to save folks reading this having to grep around and figure out what -2\nmeans?\n\n+\terrhint(\"Use DROP DATABASE to drop invalid databases\"));\nShould end with a period as a complete sentence?\n\n+\terrmsg(\"cannot alter invalid database \\\"%s\\\"\", stmt->dbname),\n+\terrdetail(\"Use DROP DATABASE to drop invalid databases\"));\nShouldn't this be an errhint() instead? Also ending with a period.\n\n+\tif (database_is_invalid_form((Form_pg_database) dbform))\n+\t\tcontinue;\nWould it make sense to stick a DEBUG2 log entry in there to signal that such a\ndatabase exist? (The same would apply for the similar hunk in autovacuum.c.)\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 7 Jul 2023 14:09:08 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-07 14:09:08 +0200, Daniel Gustafsson wrote:\n> > On 25 Jun 2023, at 19:03, Andres Freund <[email protected]> wrote:\n> > On 2023-06-21 12:02:04 -0700, Andres Freund wrote:\n> >> I'm hacking on this bugfix again,\n>\n> This patch LGTM from reading through and testing (manually and with your\n> supplied tests in the patch), I think this is a sound approach to deal with\n> this.\n\nThanks!\n\n\n> >> I haven't yet added checks to pg_upgrade, even though that's clearly\n> >> needed. I'm waffling a bit between erroring out and just ignoring the\n> >> database? pg_upgrade already fails when datallowconn is set \"wrongly\", see\n> >> check_proper_datallowconn(). Any opinions?\n> >\n> > There don't need to be explict checks, because pg_upgrade will fail, because\n> > it connects to every database. Obviously the error could be nicer, but it\n> > seems ok for something hopefully very rare. I did add a test ensuring that the\n> > behaviour is caught.\n>\n> I don't see any pg_upgrade test in the patch?\n\nOops, I stashed them alongside some unrelated changes... Included this time.\n\n\n\n> > It's somewhat odd that pg_upgrade prints errors on stdout...\n>\n> There are many odd things about pg_upgrade logging, updating it to use the\n> common logging framework of other utils would be nice.\n\nIndeed.\n\n\n> >> I'm not sure what should be done for psql. It's probably not a good idea to\n> >> change tab completion, that'd just make it appear the database is gone. But \\l\n> >> could probably show dropped databases more prominently?\n> >\n> > I have not done that. I wonder if this is something that should be done in the\n> > back branches?\n>\n> Possibly, I'm not sure where we usually stand on changing the output format of\n> \\ commands in psql in minor revisions.\n\nI'd normally be quite careful, people do script psql.\n\nWhile breaking things when encountering an invalid database doesn't actually\nsound like a bad thing, I don't think it fits into any of the existing column\noutput by psql for \\l.\n\n\n> A few small comments on the patch:\n>\n> + * Max connections allowed (-1=no limit, -2=invalid database). A database\n> + * is set to invalid partway through eing dropped. Using datconnlimit=-2\n> + * for this purpose isn't particularly clean, but is backpatchable.\n> Typo: s/eing/being/.\n\nFixed.\n\n\n> A limit of -1 makes sense, but the meaning of -2 is less intuitive IMO.\n> Would it make sense to add a #define with a more descriptive name to save\n> folks reading this having to grep around and figure out what -2 means?\n\nI went back and forth about this one. We don't use defines for such things in\nall the frontend code today, so the majority of places won't be improved by\nadding this. I added them now, which required touching a few otherwise\nuntouched places, but not too bad.\n\n\n\n\n> +\terrhint(\"Use DROP DATABASE to drop invalid databases\"));\n> Should end with a period as a complete sentence?\n\nI get confused about this every time. It's not helped by this example in\nsources.sgml:\n\n<programlisting>\nPrimary: could not create shared memory segment: %m\nDetail: Failed syscall was shmget(key=%d, size=%u, 0%o).\nHint: the addendum\n</programlisting>\n\nWhich notably does not use punctuation for the hint. But indeed, later we say:\n <para>\n Detail and hint messages: Use complete sentences, and end each with\n a period. Capitalize the first word of sentences. 
Put two spaces after\n the period if another sentence follows (for English text; might be\n inappropriate in other languages).\n </para>\n\n\n> +\terrmsg(\"cannot alter invalid database \\\"%s\\\"\", stmt->dbname),\n> +\terrdetail(\"Use DROP DATABASE to drop invalid databases\"));\n> Shouldn't this be an errhint() instead? Also ending with a period.\n\nYep.\n\n\n> +\tif (database_is_invalid_form((Form_pg_database) dbform))\n> +\t\tcontinue;\n> Would it make sense to stick a DEBUG2 log entry in there to signal that such a\n> database exist? (The same would apply for the similar hunk in autovacuum.c.)\n\nI don't really have an opinion on it. Added.\n\n\t\t\telog(DEBUG2,\n\t\t\t\t \"skipping invalid database \\\"%s\\\" while computing relfrozenxid\",\n\t\t\t\t NameStr(dbform->datname));\nand\n\t\t\telog(DEBUG2,\n\t\t\t\t \"autovacuum: skipping invalid database \\\"%s\\\"\",\n\t\t\t\t NameStr(pgdatabase->datname));\n\n\nUpdated patches attached.\n\n\nNot looking forward to fixing all the conflicts.\n\n\nDoes anybody have an opinion about whether we should add a dedicated field to\npg_database for representing invalid databases in HEAD? I'm inclined to think\nthat it's not really worth the cross-version complexity at this point, and\nit's not that bad a misuse to use pg_database.datconnlimit.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 11 Jul 2023 18:59:48 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "> On 12 Jul 2023, at 03:59, Andres Freund <[email protected]> wrote:\n> On 2023-07-07 14:09:08 +0200, Daniel Gustafsson wrote:\n>>> On 25 Jun 2023, at 19:03, Andres Freund <[email protected]> wrote:\n>>> On 2023-06-21 12:02:04 -0700, Andres Freund wrote:\n\n>>> There don't need to be explict checks, because pg_upgrade will fail, because\n>>> it connects to every database. Obviously the error could be nicer, but it\n>>> seems ok for something hopefully very rare. I did add a test ensuring that the\n>>> behaviour is caught.\n>> \n>> I don't see any pg_upgrade test in the patch?\n> \n> Oops, I stashed them alongside some unrelated changes... Included this time.\n\nLooking more at this I wonder if we in HEAD should make this a bit nicer by\nextending the --check phase to catch this? I did a quick hack along these\nlines in the 0003 commit attached here (0001 and 0002 are your unchanged\npatches, just added for consistency and to be CFBot compatible). If done it\ncould be a separate commit to make the 0002 patch backport cleaner of course.\n\n>>>> I'm not sure what should be done for psql. It's probably not a good idea to\n>>>> change tab completion, that'd just make it appear the database is gone. But \\l\n>>>> could probably show dropped databases more prominently?\n>>> \n>>> I have not done that. I wonder if this is something that should be done in the\n>>> back branches?\n>> \n>> Possibly, I'm not sure where we usually stand on changing the output format of\n>> \\ commands in psql in minor revisions.\n> \n> I'd normally be quite careful, people do script psql.\n> \n> While breaking things when encountering an invalid database doesn't actually\n> sound like a bad thing, I don't think it fits into any of the existing column\n> output by psql for \\l.\n\nAgreed, it doesn't, it would have to be a new column.\n\n>> +\terrhint(\"Use DROP DATABASE to drop invalid databases\"));\n>> Should end with a period as a complete sentence?\n> \n> I get confused about this every time. It's not helped by this example in\n> sources.sgml:\n> \n> <programlisting>\n> Primary: could not create shared memory segment: %m\n> Detail: Failed syscall was shmget(key=%d, size=%u, 0%o).\n> Hint: the addendum\n> </programlisting>\n> \n> Which notably does not use punctuation for the hint. But indeed, later we say:\n> <para>\n> Detail and hint messages: Use complete sentences, and end each with\n> a period. Capitalize the first word of sentences. Put two spaces after\n> the period if another sentence follows (for English text; might be\n> inappropriate in other languages).\n> </para>\n\nThat's not a very helpful example, and one which may give the wrong impression\nunless the entire page is read. I've raised this with a small diff to improve\nit on -docs.\n\n> Updated patches attached.\n\nThis version of the patchset LGTM.\n\n> Does anybody have an opinion about whether we should add a dedicated field to\n> pg_database for representing invalid databases in HEAD? I'm inclined to think\n> that it's not really worth the cross-version complexity at this point, and\n> it's not that bad a misuse to use pg_database.datconnlimit.\n\nFWIW I think we should use pg_database.datconnlimit for this, it doesn't seem\nlike a common enough problem to warrant the added complexity and cost.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 12 Jul 2023 11:54:18 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-12 11:54:18 +0200, Daniel Gustafsson wrote:\n> > On 12 Jul 2023, at 03:59, Andres Freund <[email protected]> wrote:\n> > On 2023-07-07 14:09:08 +0200, Daniel Gustafsson wrote:\n> >>> On 25 Jun 2023, at 19:03, Andres Freund <[email protected]> wrote:\n> >>> On 2023-06-21 12:02:04 -0700, Andres Freund wrote:\n> \n> >>> There don't need to be explict checks, because pg_upgrade will fail, because\n> >>> it connects to every database. Obviously the error could be nicer, but it\n> >>> seems ok for something hopefully very rare. I did add a test ensuring that the\n> >>> behaviour is caught.\n> >> \n> >> I don't see any pg_upgrade test in the patch?\n> > \n> > Oops, I stashed them alongside some unrelated changes... Included this time.\n> \n> Looking more at this I wonder if we in HEAD should make this a bit nicer by\n> extending the --check phase to catch this? I did a quick hack along these\n> lines in the 0003 commit attached here (0001 and 0002 are your unchanged\n> patches, just added for consistency and to be CFBot compatible). If done it\n> could be a separate commit to make the 0002 patch backport cleaner of course.\n\nI don't really have an opinion on that, tbh...\n\n> >> +\terrhint(\"Use DROP DATABASE to drop invalid databases\"));\n> >> Should end with a period as a complete sentence?\n> > \n> > I get confused about this every time. It's not helped by this example in\n> > sources.sgml:\n> > \n> > <programlisting>\n> > Primary: could not create shared memory segment: %m\n> > Detail: Failed syscall was shmget(key=%d, size=%u, 0%o).\n> > Hint: the addendum\n> > </programlisting>\n> > \n> > Which notably does not use punctuation for the hint. But indeed, later we say:\n> > <para>\n> > Detail and hint messages: Use complete sentences, and end each with\n> > a period. Capitalize the first word of sentences. Put two spaces after\n> > the period if another sentence follows (for English text; might be\n> > inappropriate in other languages).\n> > </para>\n> \n> That's not a very helpful example, and one which may give the wrong impression\n> unless the entire page is read. I've raised this with a small diff to improve\n> it on -docs.\n\nThanks for doing that!\n\n\n> > Updated patches attached.\n> \n> This version of the patchset LGTM.\n\nBackpatching indeed was no fun. Not having BackgroundPsql.pm was the worst\npart. But also a lot of other conflicts in tests... Took me 5-6 hours or\nso.\nBut I now finally pushed the fixes. Hope the buildfarm agrees with it...\n\nThanks for the review!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 13 Jul 2023 13:52:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "> On 13 Jul 2023, at 22:52, Andres Freund <[email protected]> wrote:\n> On 2023-07-12 11:54:18 +0200, Daniel Gustafsson wrote:\n\n>> Looking more at this I wonder if we in HEAD should make this a bit nicer by\n>> extending the --check phase to catch this? I did a quick hack along these\n>> lines in the 0003 commit attached here (0001 and 0002 are your unchanged\n>> patches, just added for consistency and to be CFBot compatible). If done it\n>> could be a separate commit to make the 0002 patch backport cleaner of course.\n> \n> I don't really have an opinion on that, tbh...\n\nFair enough. Thinking more on it I think it has merits, so I will submit that\npatch in its own thread on -hackers.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 14 Jul 2023 10:35:42 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "I noticed that this patch set introduced this pg_dump test:\n\nOn 12.07.23 03:59, Andres Freund wrote:\n> +\t'CREATE DATABASE invalid...' => {\n> +\t\tcreate_order => 1,\n> +\t\tcreate_sql => q(CREATE DATABASE invalid; UPDATE pg_database SET datconnlimit = -2 WHERE datname = 'invalid'),\n> +\t\tregexp => qr/^CREATE DATABASE invalid/m,\n> +\t\tnot_like => {\n> +\t\t\tpg_dumpall_dbprivs => 1,\n> +\t\t},\n> +\t},\n\nBut the key \"not_like\" isn't used for anything by that test suite. \nMaybe \"unlike\" was meant? But even then it would be useless because the \n\"like\" key is empty, so there is nothing that \"unlike\" can subtract \nfrom. Was there something expected from the mention of \n\"pg_dumpall_dbprivs\"?\n\nPerhaps it would be better to write out\n\n like => {},\n\nexplicitly, with a comment, like some other tests are doing.\n\n\n",
"msg_date": "Mon, 25 Sep 2023 01:48:31 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-25 01:48:31 +0100, Peter Eisentraut wrote:\n> I noticed that this patch set introduced this pg_dump test:\n> \n> On 12.07.23 03:59, Andres Freund wrote:\n> > +\t'CREATE DATABASE invalid...' => {\n> > +\t\tcreate_order => 1,\n> > +\t\tcreate_sql => q(CREATE DATABASE invalid; UPDATE pg_database SET datconnlimit = -2 WHERE datname = 'invalid'),\n> > +\t\tregexp => qr/^CREATE DATABASE invalid/m,\n> > +\t\tnot_like => {\n> > +\t\t\tpg_dumpall_dbprivs => 1,\n> > +\t\t},\n> > +\t},\n> \n> But the key \"not_like\" isn't used for anything by that test suite. Maybe\n> \"unlike\" was meant?\n\nIt's not clear to me either. Invalid databases shouldn't *ever* be dumped, so\nexplicitly listing pg_dumpall_dbprivs is odd.\n\nTBH, I find this testsuite the most opaque in postgres...\n\n\n> But even then it would be useless because the \"like\" key is empty, so there\n> is nothing that \"unlike\" can subtract from. Was there something expected\n> from the mention of \"pg_dumpall_dbprivs\"?\n\nNot that I can figure out...\n\n\n> Perhaps it would be better to write out\n> \n> like => {},\n> \n> explicitly, with a comment, like some other tests are doing.\n\nYea, that looks like the right direction.\n\nI'll go and backpatch the adjustment.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 25 Sep 2023 11:52:23 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "Hi,\n\n13.07.2023 23:52, Andres Freund wrote:\n>\n> Backpatching indeed was no fun. Not having BackgroundPsql.pm was the worst\n> part. But also a lot of other conflicts in tests... Took me 5-6 hours or\n> so.\n> But I now finally pushed the fixes. Hope the buildfarm agrees with it...\n>\n> Thanks for the review!\n\nI've discovered that the test 037_invalid_database, introduced with\nc66a7d75e, hangs when a server built with -DCLOBBER_CACHE_ALWAYS or with\ndebug_discard_caches = 1 set via TEMP_CONFIG:\necho \"debug_discard_caches = 1\" >/tmp/extra.config\nTEMP_CONFIG=/tmp/extra.config make -s check -C src/test/recovery/ PROVE_TESTS=\"t/037*\"\n# +++ tap check in src/test/recovery +++\n[09:05:48] t/037_invalid_database.pl .. 6/?\n\nregress_log_037_invalid_database ends with:\n[09:05:51.622](0.021s) # issuing query via background psql:\n# CREATE DATABASE regression_invalid_interrupt;\n# BEGIN;\n# LOCK pg_tablespace;\n# PREPARE TRANSACTION 'lock_tblspc';\n[09:05:51.684](0.062s) ok 8 - blocked DROP DATABASE completion\n\nI see two backends waiting:\nlaw 2420132 2420108 0 09:05 ? 00:00:00 postgres: node: law postgres [local] DROP DATABASE waiting\nlaw 2420135 2420108 0 09:05 ? 00:00:00 postgres: node: law postgres [local] startup waiting\n\nand the latter's stack trace:\n#0 0x00007f65c8fd3f9a in epoll_wait (epfd=9, events=0x563c40e15478, maxevents=1, timeout=-1) at \n../sysdeps/unix/sysv/linux/epoll_wait.c:30\n#1 0x0000563c3fa9a9fa in WaitEventSetWaitBlock (set=0x563c40e15410, cur_timeout=-1, occurred_events=0x7fff579dda80, \nnevents=1) at latch.c:1570\n#2 0x0000563c3fa9a8e4 in WaitEventSetWait (set=0x563c40e15410, timeout=-1, occurred_events=0x7fff579dda80, nevents=1, \nwait_event_info=50331648) at latch.c:1516\n#3 0x0000563c3fa99b14 in WaitLatch (latch=0x7f65c5e112e4, wakeEvents=33, timeout=0, wait_event_info=50331648) at \nlatch.c:538\n#4 0x0000563c3fac7dee in ProcSleep (locallock=0x563c40e41e80, lockMethodTable=0x563c4007cba0 <default_lockmethod>) at \nproc.c:1339\n#5 0x0000563c3fab4160 in WaitOnLock (locallock=0x563c40e41e80, owner=0x563c40ea5af8) at lock.c:1816\n#6 0x0000563c3fab2c80 in LockAcquireExtended (locktag=0x7fff579dde30, lockmode=1, sessionLock=false, dontWait=false, \nreportMemoryError=true, locallockp=0x7fff579dde28) at lock.c:1080\n#7 0x0000563c3faaf86d in LockRelationOid (relid=1213, lockmode=1) at lmgr.c:116\n#8 0x0000563c3f537aff in relation_open (relationId=1213, lockmode=1) at relation.c:55\n#9 0x0000563c3f5efde9 in table_open (relationId=1213, lockmode=1) at table.c:44\n#10 0x0000563c3fca2227 in CatalogCacheInitializeCache (cache=0x563c40e8fe80) at catcache.c:980\n#11 0x0000563c3fca255e in InitCatCachePhase2 (cache=0x563c40e8fe80, touch_index=true) at catcache.c:1083\n#12 0x0000563c3fcc0556 in InitCatalogCachePhase2 () at syscache.c:184\n#13 0x0000563c3fcb7db3 in RelationCacheInitializePhase3 () at relcache.c:4317\n#14 0x0000563c3fce2748 in InitPostgres (in_dbname=0x563c40e54000 \"postgres\", dboid=5, username=0x563c40e53fe8 \"law\", \nuseroid=0, flags=1, out_dbname=0x0) at postinit.c:1177\n#15 0x0000563c3fad90a7 in PostgresMain (dbname=0x563c40e54000 \"postgres\", username=0x563c40e53fe8 \"law\") at postgres.c:4229\n#16 0x0000563c3f9f01e4 in BackendRun (port=0x563c40e45360) at postmaster.c:4475\n\nIt looks like no new backend can be started due to the pg_tablespace lock,\nwhen a new relcache file is needed during the backend initialization.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 12 Mar 2024 11:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DROP DATABASE is interruptible"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 9:00 PM Alexander Lakhin <[email protected]> wrote:\n> I see two backends waiting:\n> law 2420132 2420108 0 09:05 ? 00:00:00 postgres: node: law postgres [local] DROP DATABASE waiting\n> law 2420135 2420108 0 09:05 ? 00:00:00 postgres: node: law postgres [local] startup waiting\n\nUgh.\n\n\n",
"msg_date": "Thu, 11 Apr 2024 16:07:15 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DROP DATABASE is interruptible"
}
] |
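The DROP DATABASE thread above repurposes pg_database.datconnlimit = -2 to mark a database whose removal was interrupted, and client-side tools (pg_dumpall, vacuumdb, clusterdb, reindexdb, pg_amcheck) are taught to skip such entries. The libpq sketch below illustrates the kind of query involved; it is an editor's illustration, not code from the posted patches, and the exact query text used by those tools may differ.

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return EXIT_FAILURE;
    }

    /* datconnlimit = -2 marks a partially dropped ("invalid") database */
    res = PQexec(conn,
                 "SELECT datname FROM pg_database "
                 "WHERE datallowconn AND datconnlimit <> -2 "
                 "ORDER BY datname");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return EXIT_FAILURE;
    }

    for (int i = 0; i < PQntuples(res); i++)
        printf("connectable database: %s\n", PQgetvalue(res, i, 0));

    PQclear(res);
    PQfinish(conn);
    return EXIT_SUCCESS;
}

Compile with something like cc -o listdbs listdbs.c $(pkg-config --cflags --libs libpq).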
[
{
"msg_contents": "I have identified several open issues with the documentation build under \nMeson (approximately in priority order):\n\n1. Image files are not handled at all, so they don't show up in the \nfinal product.\n\n2. Defaults to website stylesheet, no way to configure. This should be \nadjusted to match the make build.\n\n3. The various build targets and their combinations are mismatching and \nincomplete. For example:\n\nTop-level GNUmakefile has these targets:\n\n- docs (builds html and man)\n- html\n- man\n\n(Those are the formats that are part of a distribution build.)\n\ndoc/src/sgml/Makefile has these documented targets:\n\n- default target is html\n- all (builds html and man, maps to top-level \"docs\")\n- html\n- man\n- postgres-A4.pdf\n- postgres-US.pdf\n- check\n\nas well as (undocumented):\n\n- htmlhelp\n- postgres.html\n- postgres.txt\n- epub\n- postgres.epub\n- postgres.info\n\nmeson has the following documented targets:\n\n- docs (builds only html)\n- alldocs (builds all formats, including obscure ones)\n\nas well as the following undocumented targets:\n\n- html\n- man\n- html_help [sic]\n- postgres-A4.pdf\n- postgres-US.pdf\n- postgres.epub\n\n- [info is not implemented at all]\n- [didn't find an equivalent of check]\n\nAs you can see, this is all over the place. I'd like to arrive at some \nconsistency across all build systems for handling each tier of \ndocumentation formats, in terms of what is documented, what the targets \nare named, and how they are grouped.\n\n4. There doesn't appear to be a way to install the documentation.\n(There are also some open questions in the top-level meson.build about\nthe installation directories, but I suppose if we can't install them\nthen exactly where to install them hasn't been thought about too\nmuch.)\n\n5. There doesn't appear to be an equivalent of \"make world\" and \"make\ninstall-world\" that includes documentation builds.\n\n\n",
"msg_date": "Wed, 15 Mar 2023 08:14:09 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-15 08:14:09 +0100, Peter Eisentraut wrote:\n> I have identified several open issues with the documentation build under\n> Meson (approximately in priority order):\n>\n> 1. Image files are not handled at all, so they don't show up in the final\n> product.\n\nHm. Somehow I thought I'd tackled that at some point. Ah. I got there for the\nPDF output, but didn't realize it's also an issue for the html output.\n\nFor FO it sufficed to set the img.src.path param. For HTML that's not enough,\nbecause that just adjusts the link to the file - but we don't want to link to\nthe source file. We actually solved this for the single-page html version - we\njust embed the svg. I wonder if we should just do that as well.\n\nAnother way would be to emit the files into the desired place as part of the\nstylesheet. While it requires touching xslt, it does seems somewhat more\nelegant than just copying files around. I did implement that, curious what you\nthink.\n\n\n> 2. Defaults to website stylesheet, no way to configure. This should be\n> adjusted to match the make build.\n\nShould we add a meson option?\n\n\n> 3. The various build targets and their combinations are mismatching and\n> incomplete. For example:\n>\n> Top-level GNUmakefile has these targets:\n>\n> - docs (builds html and man)\n> - html\n> - man\n>\n> (Those are the formats that are part of a distribution build.)\n>\n> doc/src/sgml/Makefile has these documented targets:\n>\n> - default target is html\n> - all (builds html and man, maps to top-level \"docs\")\n> - html\n> - man\n> - postgres-A4.pdf\n> - postgres-US.pdf\n> - check\n>\n> as well as (undocumented):\n>\n> - htmlhelp\n> - postgres.html\n> - postgres.txt\n> - epub\n> - postgres.epub\n> - postgres.info\n>\n> meson has the following documented targets:\n>\n> - docs (builds only html)\n> - alldocs (builds all formats, including obscure ones)\n>\n> as well as the following undocumented targets:\n>\n> - html\n> - man\n> - html_help [sic]\n\nrenamed in the attached patch.\n\n\n> - postgres-A4.pdf\n> - postgres-US.pdf\n> - postgres.epub\n\nNote that these are actually named doc/src/sgml/{html,man,...}, not top-level\ntargets.\n\n\n> - [info is not implemented at all]\n\nWould be easy to implement, but not sure it's worth doing.\n\n\n> - [didn't find an equivalent of check]\n\nThat's probably worth doing - should it be run as an actual test, or be a\ntarget?\n\n\n> 4. There doesn't appear to be a way to install the documentation.\n> (There are also some open questions in the top-level meson.build about\n> the installation directories, but I suppose if we can't install them\n> then exactly where to install them hasn't been thought about too\n> much.)\n\nWIP patch for that attached. There's now\n install-doc-man\n install-doc-html\nrun targets and a\n install-docs\nalias target.\n\n\nI did end up getting stuck when hacking on this, and ended up adding css\nsupport for nochunk and support for the website style for htmlhelp and\nnochunk, as well as obsoleting the need for copying the css files... But\nperhaps that's a bit too much.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 15 Mar 2023 20:55:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
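A minimal sketch of the img.src.path approach mentioned above, assuming xsltproc and the DocBook XSL stylesheets; the stylesheet and directory names are placeholders rather than the actual build rules:

    # Point image references at the directory holding the SVG sources when
    # producing the FO output; the HTML case needs more, as discussed above.
    xsltproc --nonet --stringparam img.src.path 'images/' \
        -o postgres.fo stylesheet-fo.xsl postgres.sgml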
{
"msg_contents": "Hi,\n\nOn 2023-03-15 20:55:33 -0700, Andres Freund wrote:\n> WIP patch for that attached. There's now\n> install-doc-man\n> install-doc-html\n> run targets and a\n> install-docs\n> alias target.\n> \n> \n> I did end up getting stuck when hacking on this, and ended up adding css\n> support for nochunk and support for the website style for htmlhelp and\n> nochunk, as well as obsoleting the need for copying the css files... But\n> perhaps that's a bit too much.\n\nUpdated set of patches attached. This one works in older meson versions too\nand adds install-world and install-quiet targets.\n\n\nI also ended up getting so frustrated at the docs build speed that I started\nto hack a bit on that. I attached a patch shaving a few seconds off the\nbuildtime.\n\n\nI think we can make the docs build in parallel and incrementally, by building\nthe different parts of the docs in parallel, using --stringparam rootid,\ne.g. building each 'part' separately.\n\nA very very rough draft attached:\n\nparallel with parts:\nreal\t0m10.831s\nuser\t0m58.295s\nsys\t0m1.402s\n\nnormal:\nreal\t0m32.215s\nuser\t0m31.876s\nsys\t0m0.328s\n\n1/3 of the build time at 2x the cost is nothing to sneeze at.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 19 Mar 2023 19:33:38 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
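To make the rootid idea concrete, a rough sketch of "one xsltproc per part"; the part ids and output directories here are illustrative only, not the actual draft patch:

    # Render each top-level part into its own output directory, in parallel.
    for part in admin reference internals; do
        xsltproc --stringparam rootid "$part" \
            --stringparam base.dir "html-$part/" \
            stylesheet.xsl postgres.sgml &
    done
    wait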
{
"msg_contents": "On 20.03.23 03:33, Andres Freund wrote:\n>> I did end up getting stuck when hacking on this, and ended up adding css\n>> support for nochunk and support for the website style for htmlhelp and\n>> nochunk, as well as obsoleting the need for copying the css files... But\n>> perhaps that's a bit too much.\n> Updated set of patches attached. This one works in older meson versions too\n> and adds install-world and install-quiet targets.\n\nOh, this patch set grew quite quickly. ;-)\n\n[PATCH v2 1/8] meson: rename html_help target to htmlhelp\n\nThis is obvious.\n\n\n[PATCH v2 5/8] docs: html: copy images to output as part of xslt build\n\nMaking the XSLT stylesheets do the copying has some appeal. I think it \nwould only work for SVG (or other XML) files, which I guess is okay, but \nmaybe the templates should have a filter on format=\"SVG\" or something. \nAlso, this copying actually modifies the files in some XML-equivalent \nway. Also okay, I think, but worth noting.\n\nNote sure why you removed this comment\n\n-<!-- strip directory name from image filerefs -->\n\nsince the code still exists.\n\n\n[PATCH v2 6/8] wip: docs: copy or inline css\n\nThis seems pretty complicated compared to just copying a file?\n\n\n\n",
"msg_date": "Mon, 20 Mar 2023 11:58:08 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-20 11:58:08 +0100, Peter Eisentraut wrote:\n> Oh, this patch set grew quite quickly. ;-)\n\nYep :)\n\n\n> [PATCH v2 5/8] docs: html: copy images to output as part of xslt build\n> \n> Making the XSLT stylesheets do the copying has some appeal. I think it\n> would only work for SVG (or other XML) files, which I guess is okay, but\n> maybe the templates should have a filter on format=\"SVG\" or something. Also,\n> this copying actually modifies the files in some XML-equivalent way. Also\n> okay, I think, but worth noting.\n\nI think it can be made work for non-xml files with xinclude too. But the\nrestriction around only working in top-level stylesheets (vs everywhere for\ndocuments) is quite annoying.\n\n\n> [PATCH v2 6/8] wip: docs: copy or inline css\n> \n> This seems pretty complicated compared to just copying a file?\n\nMainly that it works correctly for the standalone file.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Mar 2023 10:32:49 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-19 19:33:38 -0700, Andres Freund wrote:\n> I think we can make the docs build in parallel and incrementally, by building\n> the different parts of the docs in parallel, using --stringparam rootid,\n> e.g. building each 'part' separately.\n> \n> A very very rough draft attached:\n> \n> parallel with parts:\n> real\t0m10.831s\n> user\t0m58.295s\n> sys\t0m1.402s\n> \n> normal:\n> real\t0m32.215s\n> user\t0m31.876s\n> sys\t0m0.328s\n> \n> 1/3 of the build time at 2x the cost is nothing to sneeze at.\n\nI could not make myself stop trying to figure out where the big constant time\nfactor comes from. Every invocation costs about 2s, even if not much is\nrendered. Turns out, that's solely spent building all the <xsl:key>s. The\nfirst time *any* key() is invoked for a document, all the keys are computed in\na single pass over the document.\n\nA single reasonable key doesn't take that much time, even for the size of our\ndocs. But there are several redundant keys being built. Some of them somewhat\nexpensive. E.g. each\n<xsl:key name=\"genid\" match=\"*\" use=\"generate-id()\"/>\ntakes about 300ms. There's one in chunk-common and one in\ndocbook-no-doctype.xsl.\n\nI'm going to cry now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Mar 2023 12:16:39 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
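The fixed per-invocation cost described above is easy to reproduce by rendering a deliberately tiny subtree; the id passed to rootid below is only an example:

    # Even a near-empty rendering pays the cost of building all <xsl:key>s.
    time xsltproc --stringparam rootid datatype \
        stylesheet.xsl postgres.sgml >/dev/null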
{
"msg_contents": "Hi,\n\nOn 2023-03-20 10:32:49 -0700, Andres Freund wrote:\n> On 2023-03-20 11:58:08 +0100, Peter Eisentraut wrote:\n> > Oh, this patch set grew quite quickly. ;-)\n> \n> Yep :)\n\nUnless somebody sees a reason to wait, I am planning to commit:\n meson: add install-{quiet, world} targets\n meson: add install-{docs,doc-html,doc-man} targets\n meson: make install_test_files more generic, rename to install_files\n\nWhile I don't think we have necessarily the path forward around .css and .svg,\nthe above are independent of that.\n\n\nFor the .svg: I wonder if we should just inline the images in the chunked\nhtml, just like we do in the single page one. It's not like we reuse one image\nacross a lot of pages, so there's no bandwidth saved from having the images\nseparate...\n\nFor the .css: docbook-xsl actually has support for writing the .css: [1] - but\nit requires the .css file be valid xml. I wonder if the cleanest approch would\nbe to have a build step to create .css.xml - then the non-chunked build's\ngenerate.css.header would do the right thing.\n\n\nI'll start a new thread for\n docs: speed up docs build by special-casing the gentext.template\n VERY WIP: parallel doc generation\nafter the feature freeze.\n\n\nAfter looking into it a tiny bit more, it seems we should use neither pandoc\nnor dbtoepub for epub generation.\n\nAll the dbtoepub does is to invoke the docbook-xsl support for epubs and zip\nthe result - except it doesn't use our stylesheets, so it looks randomly\ndifferent and doesn't use our speedups. At the very least we should use our\ncustomizations, if we want epub support. Or we should just remove it.\n\nPandoc unfortunately doesn't do docbook well enough to be usable for now to\ndirectly parse our docbook.\n\nRegards,\n\nAndres\n\n[1] https://docbook.sourceforge.net/release/xsl/current/doc/html/custom.css.source.html\n\n\n",
"msg_date": "Wed, 22 Mar 2023 11:59:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-22 11:59:17 -0700, Andres Freund wrote:\n> Unless somebody sees a reason to wait, I am planning to commit:\n> meson: add install-{quiet, world} targets\n> meson: add install-{docs,doc-html,doc-man} targets\n> meson: make install_test_files more generic, rename to install_files\n\nI've done that now.\n\n\n> For the .css: docbook-xsl actually has support for writing the .css: [1] - but\n> it requires the .css file be valid xml. I wonder if the cleanest approch would\n> be to have a build step to create .css.xml - then the non-chunked build's\n> generate.css.header would do the right thing.\n\nWe don't even need to do that! The attached patch just creates a wrapper\ncss.xml that loads the .css via an entity reference.\n\nI think this looks reasonably complicated, given that it gives us a working\nstylesheet for the non-chunked output?\n\nI don't know if my hack of putting the paramters in stylesheet-common.xsl is\nreasonable. Perhaps we should just include stylesheet-html-common.xsl in\nstylesheet-hh.xsl, then this uglyness wouldn't be required.\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 24 Mar 2023 00:26:19 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
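A sketch of the wrapper idea described in this message (file names are illustrative, the actual patch may differ): the plain .css is exposed to the stylesheets as XML by pulling it in through an external entity:

    cat > stylesheet.css.xml <<'EOF'
    <!DOCTYPE style [ <!ENTITY css SYSTEM "stylesheet.css"> ]>
    <style>&css;</style>
    EOF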
{
"msg_contents": "On 24.03.23 08:26, Andres Freund wrote:\n>> For the .css: docbook-xsl actually has support for writing the .css: [1] - but\n>> it requires the .css file be valid xml. I wonder if the cleanest approch would\n>> be to have a build step to create .css.xml - then the non-chunked build's\n>> generate.css.header would do the right thing.\n> \n> We don't even need to do that! The attached patch just creates a wrapper\n> css.xml that loads the .css via an entity reference.\n\nThat looks like a better solution.\n\n> I don't know if my hack of putting the paramters in stylesheet-common.xsl is\n> reasonable. Perhaps we should just include stylesheet-html-common.xsl in\n> stylesheet-hh.xsl, then this uglyness wouldn't be required.\n\nMaybe, but it's not clear whether all the customizations in there are \napplicable to htmlhelp.\n\nAnother option here is to remove support for htmlhelp.\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 11:59:23 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-24 11:59:23 +0100, Peter Eisentraut wrote:\n> Another option here is to remove support for htmlhelp.\n\nThat might actually be the best path - it certainly doesn't look like anybody\nhas been actively using it. Or otherwise somebody would have complained about\nthere not being any instructions on how to actually compile a .chm file. And\nperhaps complained that it takes next to forever to build.\n\nI also have the impression that people don't use the .chm stuff much anymore,\nbut that might just be me not using windows.\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Fri, 24 Mar 2023 09:58:22 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Remove 'htmlhelp' documentat format (was meson documentation build\n open issues)"
},
{
"msg_contents": "> On 24 Mar 2023, at 17:58, Andres Freund <[email protected]> wrote:\n> On 2023-03-24 11:59:23 +0100, Peter Eisentraut wrote:\n>> Another option here is to remove support for htmlhelp.\n> \n> That might actually be the best path - it certainly doesn't look like anybody\n> has been actively using it.\n\nI had no idea we had support for building a .chm until reading this, but I've\nalso never come across anyone asking for such a docset. FWIW, no objections to\nit going.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 22:00:56 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove 'htmlhelp' documentat format (was meson documentation\n build open issues)"
},
{
"msg_contents": "On 24.03.23 17:58, Andres Freund wrote:\n> On 2023-03-24 11:59:23 +0100, Peter Eisentraut wrote:\n>> Another option here is to remove support for htmlhelp.\n> \n> That might actually be the best path - it certainly doesn't look like anybody\n> has been actively using it. Or otherwise somebody would have complained about\n> there not being any instructions on how to actually compile a .chm file. And\n> perhaps complained that it takes next to forever to build.\n> \n> I also have the impression that people don't use the .chm stuff much anymore,\n> but that might just be me not using windows.\n\nI think in ancient times, pgadmin used it for its internal help.\n\nBut I have heard less about htmlhelp over the years than about the info \nformat.\n\n\n",
"msg_date": "Tue, 28 Mar 2023 11:46:41 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove 'htmlhelp' documentat format (was meson documentation\n build open issues)"
},
{
"msg_contents": "On Tue, 28 Mar 2023 at 10:46, Peter Eisentraut <\[email protected]> wrote:\n\n> On 24.03.23 17:58, Andres Freund wrote:\n> > On 2023-03-24 11:59:23 +0100, Peter Eisentraut wrote:\n> >> Another option here is to remove support for htmlhelp.\n> >\n> > That might actually be the best path - it certainly doesn't look like\n> anybody\n> > has been actively using it. Or otherwise somebody would have complained\n> about\n> > there not being any instructions on how to actually compile a .chm file.\n> And\n> > perhaps complained that it takes next to forever to build.\n> >\n> > I also have the impression that people don't use the .chm stuff much\n> anymore,\n> > but that might just be me not using windows.\n>\n> I think in ancient times, pgadmin used it for its internal help.\n>\n\nYes, very ancient :-). We use Sphinx now.\n\n\n>\n> But I have heard less about htmlhelp over the years than about the info\n> format.\n>\n>\n>\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nOn Tue, 28 Mar 2023 at 10:46, Peter Eisentraut <[email protected]> wrote:On 24.03.23 17:58, Andres Freund wrote:\n> On 2023-03-24 11:59:23 +0100, Peter Eisentraut wrote:\n>> Another option here is to remove support for htmlhelp.\n> \n> That might actually be the best path - it certainly doesn't look like anybody\n> has been actively using it. Or otherwise somebody would have complained about\n> there not being any instructions on how to actually compile a .chm file. And\n> perhaps complained that it takes next to forever to build.\n> \n> I also have the impression that people don't use the .chm stuff much anymore,\n> but that might just be me not using windows.\n\nI think in ancient times, pgadmin used it for its internal help.Yes, very ancient :-). We use Sphinx now. \n\nBut I have heard less about htmlhelp over the years than about the info \nformat.\n\n\n-- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 28 Mar 2023 09:50:07 +0000",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove 'htmlhelp' documentat format (was meson documentation\n build open issues)"
},
{
"msg_contents": "On 15.03.23 08:14, Peter Eisentraut wrote:\n> I have identified several open issues with the documentation build under \n> Meson (approximately in priority order):\n\nSome work has been done on this. Here is my current assessment.\n\n> 1. Image files are not handled at all, so they don't show up in the \n> final product.\n\nThis is fixed.\n\n> 2. Defaults to website stylesheet, no way to configure. This should be \n> adjusted to match the make build.\n\nThis is fixed.\n\n> 3. The various build targets and their combinations are mismatching and \n> incomplete.\n\nThis has been improved, and I see there is documentation.\n\nI think it's still an issue that \"make docs\" builds html and man but \n\"ninja docs\" only builds html. For some reason the wiki page actually \nclaims that ninja docs builds both, but this does not happen for me.\n\n> 4. There doesn't appear to be a way to install the documentation.\n\nThis has been addressed.\n\n> 5. There doesn't appear to be an equivalent of \"make world\" and \"make\n> install-world\" that includes documentation builds.\n\nThis has been addressed with the additional meson auto options. But it \nseems that these options only control building, not installing, so there \nis no \"install-world\" aspect yet.\n\n\n",
"msg_date": "Wed, 5 Apr 2023 12:24:04 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-05 12:24:04 +0200, Peter Eisentraut wrote:\n> On 15.03.23 08:14, Peter Eisentraut wrote:\n> > 3. The various build targets and their combinations are mismatching and\n> > incomplete.\n> \n> This has been improved, and I see there is documentation.\n> \n> I think it's still an issue that \"make docs\" builds html and man but \"ninja\n> docs\" only builds html. For some reason the wiki page actually claims that\n> ninja docs builds both, but this does not happen for me.\n\nIt used to, but Tom insisted that it should not. I'm afraid that it's not\nquite possible to emulate make here. 'make docs' at the toplevel builds both\nHTML and manpages. But 'make -C doc/src/sgml', only builds HTML.\n\n\n> > 5. There doesn't appear to be an equivalent of \"make world\" and \"make\n> > install-world\" that includes documentation builds.\n> \n> This has been addressed with the additional meson auto options. But it\n> seems that these options only control building, not installing, so there is\n> no \"install-world\" aspect yet.\n\nI'm not following - install-world install docs if the docs feature is\navailable, and not if not?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Apr 2023 07:45:12 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 05.04.23 16:45, Andres Freund wrote:\n>> I think it's still an issue that \"make docs\" builds html and man but \"ninja\n>> docs\" only builds html. For some reason the wiki page actually claims that\n>> ninja docs builds both, but this does not happen for me.\n> \n> It used to, but Tom insisted that it should not. I'm afraid that it's not\n> quite possible to emulate make here. 'make docs' at the toplevel builds both\n> HTML and manpages. But 'make -C doc/src/sgml', only builds HTML.\n\nOk, not a topic for this thread then.\n\n>>> 5. There doesn't appear to be an equivalent of \"make world\" and \"make\n>>> install-world\" that includes documentation builds.\n>>\n>> This has been addressed with the additional meson auto options. But it\n>> seems that these options only control building, not installing, so there is\n>> no \"install-world\" aspect yet.\n> \n> I'm not following - install-world install docs if the docs feature is\n> available, and not if not?\n\nI had expected that if meson setup enables the 'docs' feature, then \nmeson compile will build the documentation, which happens, and meson \ninstall will install it, which does not happen.\n\n\n\n",
"msg_date": "Thu, 6 Apr 2023 11:11:57 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 2023-04-06 Th 05:11, Peter Eisentraut wrote:\n> On 05.04.23 16:45, Andres Freund wrote:\n>>> I think it's still an issue that \"make docs\" builds html and man but \n>>> \"ninja\n>>> docs\" only builds html. For some reason the wiki page actually \n>>> claims that\n>>> ninja docs builds both, but this does not happen for me.\n>>\n>> It used to, but Tom insisted that it should not. I'm afraid that it's \n>> not\n>> quite possible to emulate make here. 'make docs' at the toplevel \n>> builds both\n>> HTML and manpages. But 'make -C doc/src/sgml', only builds HTML.\n>\n> Ok, not a topic for this thread then.\n>\n>>>> 5. There doesn't appear to be an equivalent of \"make world\" and \"make\n>>>> install-world\" that includes documentation builds.\n>>>\n>>> This has been addressed with the additional meson auto options. But it\n>>> seems that these options only control building, not installing, so \n>>> there is\n>>> no \"install-world\" aspect yet.\n>>\n>> I'm not following - install-world install docs if the docs feature is\n>> available, and not if not?\n>\n> I had expected that if meson setup enables the 'docs' feature, then \n> meson compile will build the documentation, which happens, and meson \n> install will install it, which does not happen.\n>\n>\n>\n\n\"meson compile\" doesn't seem to build the docs by default ( see \n<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-04-06%2018%3A17%3A04&stg=build>), \nand I'd rather it didn't, building the docs is a separate and optional \nstep for the buildfarm.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-06 Th 05:11, Peter\n Eisentraut wrote:\n\nOn\n 05.04.23 16:45, Andres Freund wrote:\n \n\nI think it's still an issue that \"make\n docs\" builds html and man but \"ninja\n \n docs\" only builds html. For some reason the wiki page\n actually claims that\n \n ninja docs builds both, but this does not happen for me.\n \n\n\n It used to, but Tom insisted that it should not. I'm afraid that\n it's not\n \n quite possible to emulate make here. 'make docs' at the toplevel\n builds both\n \n HTML and manpages. But 'make -C doc/src/sgml', only builds HTML.\n \n\n\n Ok, not a topic for this thread then.\n \n\n\n\n5. There doesn't appear to be an\n equivalent of \"make world\" and \"make\n \n install-world\" that includes documentation builds.\n \n\n\n This has been addressed with the additional meson auto\n options. But it\n \n seems that these options only control building, not\n installing, so there is\n \n no \"install-world\" aspect yet.\n \n\n\n I'm not following - install-world install docs if the docs\n feature is\n \n available, and not if not?\n \n\n\n I had expected that if meson setup enables the 'docs' feature,\n then meson compile will build the documentation, which happens,\n and meson install will install it, which does not happen.\n \n\n\n\n\n\n\n\"meson compile\" doesn't seem to build the docs by default ( see\n<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-04-06%2018%3A17%3A04&stg=build>),\n and I'd rather it didn't, building the docs is a separate and\n optional step for the buildfarm.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 7 Apr 2023 10:39:56 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 07.04.23 16:39, Andrew Dunstan wrote:\n>>>>> 5. There doesn't appear to be an equivalent of \"make world\" and \"make\n>>>>> install-world\" that includes documentation builds.\n>>>>\n>>>> This has been addressed with the additional meson auto options. But it\n>>>> seems that these options only control building, not installing, so \n>>>> there is\n>>>> no \"install-world\" aspect yet.\n>>>\n>>> I'm not following - install-world install docs if the docs feature is\n>>> available, and not if not?\n>>\n>> I had expected that if meson setup enables the 'docs' feature, then \n>> meson compile will build the documentation, which happens, and meson \n>> install will install it, which does not happen.\n> \n> \"meson compile\" doesn't seem to build the docs by default ( see \n> <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-04-06%2018%3A17%3A04&stg=build>), and I'd rather it didn't, building the docs is a separate and optional step for the buildfarm.\n\nYou can control this with the \"docs\" option for meson, as of recently.\n\n\n",
"msg_date": "Wed, 12 Apr 2023 17:30:28 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Re: Peter Eisentraut\n> > \"meson compile\" doesn't seem to build the docs by default ( see <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-04-06%2018%3A17%3A04&stg=build>),\n> > and I'd rather it didn't, building the docs is a separate and optional\n> > step for the buildfarm.\n> \n> You can control this with the \"docs\" option for meson, as of recently.\n\nI've been looking into switching the Debian PG 17 build to meson, but\nI'm running into several problems.\n\n* The docs are still not built by default, and -Ddocs=enabled doesn't\n change that\n\n* None of the \"build docs\" targets are documented in install-meson.html\n\n* \"ninja -C build alldocs\" works, but it's impossible to see what\n flavors it's actually building. Everything is autodetected, and\n perhaps I would like to no build the .txt/something variants,\n but I have no idea what switch that is, or what package I have to\n uninstall so it's not autodetected (only html and pdf are\n documented.)\n\n Are there any other targets for the individual formats? (I could\n probably use one for the manpages only, without the html.)\n\nNon-doc issues:\n\n* LLVM is off by default (ok), when I enable it with -Dllvm=auto, it\n gets detected, but no .bc files are built, nor installed\n\n* selinux is not autodetected. It needs -Dselinux=auto, but that's not\n documented in install-meson.html\n\n* There is no split between libdir and pkglibdir. We had used that in\n the past for libpq -> /usr/lib/x86_64-linux-gnu and PG stuff ->\n /usr/lib/postgresql/17/lib.\n\nChristoph\n\n\n",
"msg_date": "Fri, 3 Nov 2023 15:26:05 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-03 15:26:05 +0100, Christoph Berg wrote:\n> Re: Peter Eisentraut\n> > > \"meson compile\" doesn't seem to build the docs by default ( see <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2023-04-06%2018%3A17%3A04&stg=build>),\n> > > and I'd rather it didn't, building the docs is a separate and optional\n> > > step for the buildfarm.\n> >\n> > You can control this with the \"docs\" option for meson, as of recently.\n>\n> I've been looking into switching the Debian PG 17 build to meson, but\n> I'm running into several problems.\n>\n> * The docs are still not built by default, and -Ddocs=enabled doesn't\n> change that\n\nMaybe I am missing something - they aren't built by default in autoconf\neither?\n\n\n> * None of the \"build docs\" targets are documented in install-meson.html\n\nHm, odd, I thought they were, but you are right. There were some docs patches\nthat we never really could find agreement upon :/\n\n\n> * \"ninja -C build alldocs\" works, but it's impossible to see what\n> flavors it's actually building. Everything is autodetected, and\n> perhaps I would like to no build the .txt/something variants,\n> but I have no idea what switch that is, or what package I have to\n> uninstall so it's not autodetected (only html and pdf are\n> documented.)\n\nI think a package build should probably turn off auto-detection (\n meson setup --auto-features=disabled) and enable specific features that are\ndesired - in which case you get errors if they are not available. Which\npresumably is the behaviour you'd like?\n\n\n\n> Are there any other targets for the individual formats? (I could\n> probably use one for the manpages only, without the html.)\n\nYes, there are.\nninja doc/src/sgml/{postgres-A4.pdf,html,postgres.html,man1}\n\nPerhaps more interesting for your purposes, there are the\ninstall-doc-{html,man} targets.\n\nI remember discussing adding doc-{html,man} targets alongside\ninstall-doc-{html,man}, not sure why we ended up not doing that. I'd be in\nfavor of adding them.\n\nI've also been wondering about a 'help' target that documents important\ntargets in a interactively usable way.\n\n\n> Non-doc issues:\n>\n> * LLVM is off by default (ok), when I enable it with -Dllvm=auto, it\n> gets detected, but no .bc files are built, nor installed\n\nSupport for that has not yet been merged.\n\n\n> * selinux is not autodetected. It needs -Dselinux=auto, but that's not\n> documented in install-meson.html\n\nUh, huh. There's no documentation for --with-selinux in the installation.sgml\neither, just in sepgsql.sgml. So when the relevant docs got translated to\nmeson, -Dselinux= wasn't documented either.\n\n\n> * There is no split between libdir and pkglibdir. We had used that in\n> the past for libpq -> /usr/lib/x86_64-linux-gnu and PG stuff ->\n> /usr/lib/postgresql/17/lib.\n\nI don't think the autoconf build currently exposes separately configuring\npkglibdir either, I think that's a debian patch? I'm entirely open to adding\nan explicit configuration option for this though.\n\n\nThanks for looking at this, it's quite helpful!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 Nov 2023 09:38:48 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
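Putting the suggestions above together, a package build could pin its feature set explicitly and inspect the available documentation targets like this (only options mentioned in the thread are shown):

    meson setup build --auto-features=disabled \
        -Ddocs=enabled -Dselinux=enabled
    ninja -C build -t targets all | grep doc/src/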
{
"msg_contents": "Re: Andres Freund\n> > > You can control this with the \"docs\" option for meson, as of recently.\n> >\n> > I've been looking into switching the Debian PG 17 build to meson, but\n> > I'm running into several problems.\n> >\n> > * The docs are still not built by default, and -Ddocs=enabled doesn't\n> > change that\n> \n> Maybe I am missing something - they aren't built by default in autoconf\n> either?\n\nTrue, but the documentation (and this thread) reads like it should. Or\nat least it should, when I explicitly say -Ddocs=enabled.\n\nWhat would also help is when the tail of the meson output had a list\nof features that are enabled. There's the list of \"External libraries\"\nwhich is quite helpful at figuring out what's still missing, but\nperhaps this could be extended:\n\n Features\n LLVM : YES (/usr/bin/llvm-config-16)\n DOCS : YES (html pdf texinfo)\n\nAtm it's hidden in the long initial blurb of \"Checking for..\" and the\n\"NO\" in there don't really stand out as much, since some of them are\nnormal.\n\n> > * \"ninja -C build alldocs\" works, but it's impossible to see what\n> > flavors it's actually building. Everything is autodetected, and\n> > perhaps I would like to no build the .txt/something variants,\n> > but I have no idea what switch that is, or what package I have to\n> > uninstall so it's not autodetected (only html and pdf are\n> > documented.)\n> \n> I think a package build should probably turn off auto-detection (\n> meson setup --auto-features=disabled) and enable specific features that are\n> desired - in which case you get errors if they are not available. Which\n> presumably is the behaviour you'd like?\n\nI'm still trying to figure out the best spot in that space of options.\nCurrently I'm still in the phase of getting it to work at all; the end\nresult might well use that option.\n\n> > Are there any other targets for the individual formats? (I could\n> > probably use one for the manpages only, without the html.)\n> \n> Yes, there are.\n> ninja doc/src/sgml/{postgres-A4.pdf,html,postgres.html,man1}\n\nOh, that was not obvious to me that this \"make $some_file\" style\ncommand would work. (But it still leaves the problem of knowing which\ntargets there are.)\n\n> Perhaps more interesting for your purposes, there are the\n> install-doc-{html,man} targets.\n\nHmm, I thought I had tried these, but apparently managed to miss them.\nThanks.\n\ninstall-doc-man seems to install \"man1\" only, though?\n(It seems to compile man5/man7, but not install them.)\n\n> I remember discussing adding doc-{html,man} targets alongside\n> install-doc-{html,man}, not sure why we ended up not doing that. I'd be in\n> favor of adding them.\n> \n> I've also been wondering about a 'help' target that documents important\n> targets in a interactively usable way.\n\nThat is definitely missing, yes. I found out about \"alldocs\" only\nafter reading the meson files, and that took more than it should have.\n\n> > Non-doc issues:\n> >\n> > * LLVM is off by default (ok), when I enable it with -Dllvm=auto, it\n> > gets detected, but no .bc files are built, nor installed\n> \n> Support for that has not yet been merged.\n\nOh, that's a showstopper. I thought meson would already be ready for\nproduction use. There is indeed an \"experimental\" note in\ninstall-requirements.html, but not in install-meson.html\n\n> > * selinux is not autodetected. It needs -Dselinux=auto, but that's not\n> > documented in install-meson.html\n> \n> Uh, huh. 
There's no documentation for --with-selinux in the installation.sgml\n> either, just in sepgsql.sgml. So when the relevant docs got translated to\n> meson, -Dselinux= wasn't documented either.\n\nOk. It does show up in \"External libraries\" and was enabled in the\nDebian packages before.\n\nWhy isn't it \"auto\" like the others?\n\n> > * There is no split between libdir and pkglibdir. We had used that in\n> > the past for libpq -> /usr/lib/x86_64-linux-gnu and PG stuff ->\n> > /usr/lib/postgresql/17/lib.\n> \n> I don't think the autoconf build currently exposes separately configuring\n> pkglibdir either, I think that's a debian patch? I'm entirely open to adding\n> an explicit configuration option for this though.\n\nThat would definitely be helpful.\n\n> Thanks for looking at this, it's quite helpful!\n\nThanks for the feedback!\nChristoph\n\n\n",
"msg_date": "Fri, 3 Nov 2023 19:19:17 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-03 19:19:17 +0100, Christoph Berg wrote:\n> Re: Andres Freund\n> > > > You can control this with the \"docs\" option for meson, as of recently.\n> > >\n> > > I've been looking into switching the Debian PG 17 build to meson, but\n> > > I'm running into several problems.\n> > >\n> > > * The docs are still not built by default, and -Ddocs=enabled doesn't\n> > > change that\n> >\n> > Maybe I am missing something - they aren't built by default in autoconf\n> > either?\n>\n> True, but the documentation (and this thread) reads like it should. Or\n> at least it should, when I explicitly say -Ddocs=enabled.\n\nMy understanding of the intent of the options is to make meson error out if\nthe required dependencies are not available, not that it controls when the\nbuild targets are built.\n\nThe reason for that is simply that the docs take too long to build.\n\n\n> What would also help is when the tail of the meson output had a list\n> of features that are enabled. There's the list of \"External libraries\"\n> which is quite helpful at figuring out what's still missing, but\n> perhaps this could be extended:\n>\n> Features\n> LLVM : YES (/usr/bin/llvm-config-16)\n> DOCS : YES (html pdf texinfo)\n>\n> Atm it's hidden in the long initial blurb of \"Checking for..\" and the\n> \"NO\" in there don't really stand out as much, since some of them are\n> normal.\n\nThe summary does include both. LLVM is 'llvm', man/html docs is 'docs' and pdf\ndocs as 'docs_pdf'.\n\n\n> > > Are there any other targets for the individual formats? (I could\n> > > probably use one for the manpages only, without the html.)\n> >\n> > Yes, there are.\n> > ninja doc/src/sgml/{postgres-A4.pdf,html,postgres.html,man1}\n>\n> Oh, that was not obvious to me that this \"make $some_file\" style\n> command would work. (But it still leaves the problem of knowing which\n> targets there are.)\n\nYes, you can trigger building any file that way.\n\nThe following is *not* an argument the docs targets shouldn't be documented\n(working on a patch), just something that might be helpful until then /\nseparately. You can see which targets are built with\n\nninja -t targets all|grep doc/src/\n\n\n> > Perhaps more interesting for your purposes, there are the\n> > install-doc-{html,man} targets.\n>\n> Hmm, I thought I had tried these, but apparently managed to miss them.\n> Thanks.\n>\n> install-doc-man seems to install \"man1\" only, though?\n> (It seems to compile man5/man7, but not install them.)\n\nUgh, that's obviously a bug. I'll fix it.\n\n\n> > > Non-doc issues:\n> > >\n> > > * LLVM is off by default (ok), when I enable it with -Dllvm=auto, it\n> > > gets detected, but no .bc files are built, nor installed\n> >\n> > Support for that has not yet been merged.\n>\n> Oh, that's a showstopper. I thought meson would already be ready for\n> production use. There is indeed an \"experimental\" note in\n> install-requirements.html, but not in install-meson.html\n\nI'm working on merging it. Having it for core PG isn't a huge difficulty, the\nextension story is what's been holding me back...\n\n\n> > > * selinux is not autodetected. It needs -Dselinux=auto, but that's not\n> > > documented in install-meson.html\n> >\n> > Uh, huh. There's no documentation for --with-selinux in the installation.sgml\n> > either, just in sepgsql.sgml. So when the relevant docs got translated to\n> > meson, -Dselinux= wasn't documented either.\n>\n> Ok. 
It does show up in \"External libraries\" and was enabled in the\n> Debian packages before.\n>\n> Why isn't it \"auto\" like the others?\n\nI don't really remember why I did that, but it's platform specific, maybe\nthat's why I did it that way?\n\n\n> > > * There is no split between libdir and pkglibdir. We had used that in\n> > > the past for libpq -> /usr/lib/x86_64-linux-gnu and PG stuff ->\n> > > /usr/lib/postgresql/17/lib.\n> >\n> > I don't think the autoconf build currently exposes separately configuring\n> > pkglibdir either, I think that's a debian patch? I'm entirely open to adding\n> > an explicit configuration option for this though.\n>\n> That would definitely be helpful.\n\nI have a patch locally, will send it together with a few others in a bit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 Nov 2023 11:53:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Re: Andres Freund\n> The reason for that is simply that the docs take too long to build.\n\nThat why I'd prefer to be able to separate arch:all and arch:any\nbuilds, yes.\n\n> The summary does include both. LLVM is 'llvm', man/html docs is 'docs' and pdf\n> docs as 'docs_pdf'.\n\nSorry, I should have looked closer. :(\n\n> The following is *not* an argument the docs targets shouldn't be documented\n> (working on a patch), just something that might be helpful until then /\n> separately. You can see which targets are built with\n> \n> ninja -t targets all|grep doc/src/\n\nThanks.\n\n> > Oh, that's a showstopper. I thought meson would already be ready for\n> > production use. There is indeed an \"experimental\" note in\n> > install-requirements.html, but not in install-meson.html\n> \n> I'm working on merging it. Having it for core PG isn't a huge difficulty, the\n> extension story is what's been holding me back...\n\nIn-core extensions or external ones?\n\n> > Why isn't it \"auto\" like the others?\n> \n> I don't really remember why I did that, but it's platform specific, maybe\n> that's why I did it that way?\n\nIsn't that kind the point of autodetecting things? Aren't bonjour and\nbsd_auth autodetected as well?\n\n> > > I don't think the autoconf build currently exposes separately configuring\n> > > pkglibdir either, I think that's a debian patch? I'm entirely open to adding\n> > > an explicit configuration option for this though.\n> >\n> > That would definitely be helpful.\n> \n> I have a patch locally, will send it together with a few others in a bit.\n\nThanks!\n\nChristoph\n\n\n",
"msg_date": "Fri, 3 Nov 2023 20:19:18 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\n\nOn 2023-11-03 20:19:18 +0100, Christoph Berg wrote:\n> Re: Andres Freund\n> > The reason for that is simply that the docs take too long to build.\n>\n> That why I'd prefer to be able to separate arch:all and arch:any\n> builds, yes.\n\nWhat's stopping you from doing that? I think the only arch:any content we\nhave is the docs, and those you can build separately? Doc builds do trigger\ngeneration of a handful of files besides the docs, but not more.\n\n\n> > > Oh, that's a showstopper. I thought meson would already be ready for\n> > > production use. There is indeed an \"experimental\" note in\n> > > install-requirements.html, but not in install-meson.html\n> >\n> > I'm working on merging it. Having it for core PG isn't a huge difficulty, the\n> > extension story is what's been holding me back...\n>\n> In-core extensions or external ones?\n\nBoth, although the difficulty of doing it is somewhat separate for each.\n\n\n> > > Why isn't it \"auto\" like the others?\n> >\n> > I don't really remember why I did that, but it's platform specific, maybe\n> > that's why I did it that way?\n>\n> Isn't that kind the point of autodetecting things? Aren't bonjour and\n> bsd_auth autodetected as well?\n\nI'd be happy to change it, unless somebody objects?\n\n\n> > > > I don't think the autoconf build currently exposes separately configuring\n> > > > pkglibdir either, I think that's a debian patch? I'm entirely open to adding\n> > > > an explicit configuration option for this though.\n> > >\n> > > That would definitely be helpful.\n> >\n> > I have a patch locally, will send it together with a few others in a bit.\n>\n> Thanks!\n\nAttached.\n\n0001 - the bugfix for install-man only installing man1, I'll push that soon\n0002 - Document --with-selinux/-Dselinux options centrally\n0003 - Add doc-{html,man} targets\n\n I'm not quite sure it's worth it, but it's basically free, so ...\n\n0004 - Documentation for important build targets\n\n I'm not entirely happy with the formatting, but it looks like that's\n mostly a CSS issue. I started a thread on fixing that on -www.\n\n0005 - Add -Dpkglibdir option\n\n I guess we might want to do the same for configure if we decide to do\n this?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 3 Nov 2023 14:16:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
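Assuming the patches above land as posted, the workflow discussed earlier in the thread would look roughly like this (target and option names are taken from the patch subjects and descriptions):

    ninja -C build doc-html doc-man       # 0003: build docs without installing
    ninja -C build install-doc-man        # 0001: installs man1, man5 and man7
    meson setup build -Dpkglibdir=/usr/lib/postgresql/17/lib   # 0005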
{
"msg_contents": "Re: Andres Freund\n> > > The reason for that is simply that the docs take too long to build.\n> >\n> > That why I'd prefer to be able to separate arch:all and arch:any\n> > builds, yes.\n> \n> What's stopping you from doing that? I think the only arch:any content we\n> have is the docs, and those you can build separately? Doc builds do trigger\n> generation of a handful of files besides the docs, but not more.\n\nHistorically, .deb files have been required to contain the manpages\nfor all executables even when there's a separate -doc package. This\nmeans we'd need a separate (hopefully fast) manpage build even when\nthe arch:any binaries are built. We might get around that by\nintroducing a new postgresql-manpages-XX arch:all package, but that\nmight be too much micropackaging.\n\nThe install-doc-man target will likely fix it, will play with it a bit\nmore, thanks.\n\n> > > I'm working on merging it. Having it for core PG isn't a huge difficulty, the\n> > > extension story is what's been holding me back...\n> >\n> > In-core extensions or external ones?\n> \n> Both, although the difficulty of doing it is somewhat separate for each.\n\nI'd think most external extensions could stay with pgxs.mk for the\ntime being.\n\n\n> + <varlistentry id=\"configure-with-sepgsql-meson\">\n> + <term><option>-Dselinux={ disabled | auto | enabled }</option></term>\n> + <listitem>\n> + <para>\n> + Build with selinux support, enabling the <xref linkend=\"sepgsql\"/>\n> + extension.\n\nThis option defaults to ... auto?\n\n\n> index 90e2c062fa8..003b57498bb 100644\n> --- a/doc/src/sgml/meson.build\n> +++ b/doc/src/sgml/meson.build\n> @@ -142,6 +142,7 @@ if docs_dep.found()\n> '--install-dir-contents', dir_doc_html, html],\n> build_always_stale: true, build_by_default: false,\n> )\n> + alias_target('doc-html', install_doc_html)\n> alias_target('install-doc-html', install_doc_html)\n\nShouldn't this just build the html docs, without installing?\n\n> + alias_target('doc-man', install_doc_html)\n> alias_target('install-doc-man', install_doc_man)\n\n... same\n\n\n> + <varlistentry id=\"meson-target-install-world\">\n> + <term><option>install-install-world</option></term>\n\ninstall-world\n\n> + <varlistentry id=\"meson-target-install-doc-man\">\n> + <term><option>install-doc-html</option></term>\n> + <listitem>\n> + <para>\n> + Install documentation in man page format.\n\ninstall-doc-man\n\n> + <sect3 id=\"meson-targets-docs\">\n> + <title>Documentation Targets</title>\n\n> + <varlistentry id=\"meson-target-docs\">\n> + <term><option>docs</option></term>\n> + <term><option>doc-html</option></term>\n> + <listitem>\n> + <para>\n> + Build documentation in multi-page HTML format. Note that\n> + <option>docs</option> does <emphasis>not</emphasis> include building\n> + man page documentation, as man page generation seldom fails when\n> + building HTML documentation succeeds.\n\nWhy is that a reason for not building the manpages?\n\n> + <sect3 id=\"meson-targets-code\">\n> + <title>Code Targets</title>\n\nI would have expected the sections to be in the order\nbuild-docs-install. 
Having install first seems weird to me.\n\n> + <sect3 id=\"meson-targets-other\">\n> + <title>Other Targets</title>\n> +\n> + <variablelist>\n> +\n> + <varlistentry id=\"meson-target-clean\">\n> + <term><option>clean</option></term>\n> + <listitem>\n> + <para>\n> + Remove all build products\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry id=\"meson-target-test\">\n> + <term><option>test</option></term>\n> + <listitem>\n> + <para>\n> + Remove all enabled tests. Support for some classes of tests can be\n> + enabled / disabled with <xref linkend=\"configure-tap-tests-meson\"/>\n> + and <xref linkend=\"configure-pg-test-extra-meson\"/>.\n\nThis should explicitly say if contrib tests are included (or there\nneeds to be a separate test-world target.)\n\n\n> Subject: [PATCH v1 5/5] meson: Add -Dpkglibdir option\n\nWill give that a try, thanks!\n\nChristoph\n\n\n",
"msg_date": "Mon, 6 Nov 2023 10:45:27 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 03.11.23 22:16, Andres Freund wrote:\n[selinux option]\n>>>> Why isn't it \"auto\" like the others?\n>>> I don't really remember why I did that, but it's platform specific, maybe\n>>> that's why I did it that way?\n>> Isn't that kind the point of autodetecting things? Aren't bonjour and\n>> bsd_auth autodetected as well?\n> I'd be happy to change it, unless somebody objects?\n\nMakes sense to me to change it to auto.\n\n\n\n",
"msg_date": "Tue, 7 Nov 2023 16:44:42 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 03.11.23 19:19, Christoph Berg wrote:\n>>>> You can control this with the \"docs\" option for meson, as of recently.\n>>> I've been looking into switching the Debian PG 17 build to meson, but\n>>> I'm running into several problems.\n>>>\n>>> * The docs are still not built by default, and -Ddocs=enabled doesn't\n>>> change that\n>> Maybe I am missing something - they aren't built by default in autoconf\n>> either?\n> True, but the documentation (and this thread) reads like it should. Or\n> at least it should, when I explicitly say -Ddocs=enabled.\n> \n> What would also help is when the tail of the meson output had a list\n> of features that are enabled. There's the list of \"External libraries\"\n> which is quite helpful at figuring out what's still missing, but\n> perhaps this could be extended:\n> \n> Features\n> LLVM : YES (/usr/bin/llvm-config-16)\n> DOCS : YES (html pdf texinfo)\n> \n> Atm it's hidden in the long initial blurb of \"Checking for..\" and the\n> \"NO\" in there don't really stand out as much, since some of them are\n> normal.\n\nI don't feel like we have fully worked out how the docs options should \nfit together.\n\nWith the make build system, there is a canonical sequence of\n\nmake world\nmake check-world\nmake install-world\n\nthat encompasses everything.\n\nNow with meson to handle the documentation one needs to remember a \nvariety of additional targets. (There is a risk that once this gets \nmore widespread, more people will submit broken documentation.)\n\nI would like to have some set of options that enables it so that the \nstandard documentation targets become part of \"meson compile\" and \"meson \ninstall\".\n\n\n\n",
"msg_date": "Tue, 7 Nov 2023 16:55:37 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi, \n\nOn November 7, 2023 7:55:37 AM PST, Peter Eisentraut <[email protected]> wrote:\n>On 03.11.23 19:19, Christoph Berg wrote:\n>>>>> You can control this with the \"docs\" option for meson, as of recently.\n>>>> I've been looking into switching the Debian PG 17 build to meson, but\n>>>> I'm running into several problems.\n>>>> \n>>>> * The docs are still not built by default, and -Ddocs=enabled doesn't\n>>>> change that\n>>> Maybe I am missing something - they aren't built by default in autoconf\n>>> either?\n>> True, but the documentation (and this thread) reads like it should. Or\n>> at least it should, when I explicitly say -Ddocs=enabled.\n>> \n>> What would also help is when the tail of the meson output had a list\n>> of features that are enabled. There's the list of \"External libraries\"\n>> which is quite helpful at figuring out what's still missing, but\n>> perhaps this could be extended:\n>> \n>> Features\n>> LLVM : YES (/usr/bin/llvm-config-16)\n>> DOCS : YES (html pdf texinfo)\n>> \n>> Atm it's hidden in the long initial blurb of \"Checking for..\" and the\n>> \"NO\" in there don't really stand out as much, since some of them are\n>> normal.\n>\n>I don't feel like we have fully worked out how the docs options should fit together.\n>\n>With the make build system, there is a canonical sequence of\n>\n>make world\n>make check-world\n>make install-world\n>\n>that encompasses everything.\n>\n>Now with meson to handle the documentation one needs to remember a variety of additional targets. (There is a risk that once this gets more widespread, more people will submit broken documentation.)\n\ninstall-world with meson also installs docs.\n\n\n>I would like to have some set of options that enables it so that the standard documentation targets become part of \"meson compile\" and \"meson install\".\n\n-0.5 - it's just too painfully slow. For all scripted uses you can just as well use install-world...\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Tue, 07 Nov 2023 08:08:29 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On November 7, 2023 7:55:37 AM PST, Peter Eisentraut <[email protected]> wrote:\n>> I would like to have some set of options that enables it so that the standard documentation targets become part of \"meson compile\" and \"meson install\".\n\n> -0.5 - it's just too painfully slow. For all scripted uses you can just as well use install-world...\n\nI think we should set up the meson stuff so that \"install\" and\n\"install-world\" cover exactly what they did with \"make\". Otherwise\nthere will be too much confusion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Nov 2023 11:34:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 2023-Nov-07, Andres Freund wrote:\n\n> >I would like to have some set of options that enables it so that the\n> >standard documentation targets become part of \"meson compile\" and\n> >\"meson install\".\n> \n> -0.5 - it's just too painfully slow. For all scripted uses you can just as well use install-world...\n\nIf the problem is broken doc patches, then maybe a solution is to\ninclude the `xmllint --noout --valid` target in whatever the check-world\nequivalent is for meson. Looking at doc/src/sgml/meson.build, we don't\nseem to do that anywhere. Doing the no-output lint run is very fast\n(375ms real time in my machine, whereas \"make html\" takes 27s).\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"¿Cómo puedes confiar en algo que pagas y que no ves,\ny no confiar en algo que te dan y te lo muestran?\" (Germán Poo)\n\n\n",
"msg_date": "Tue, 7 Nov 2023 17:40:40 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
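The quick check Alvaro times above can be run by hand; depending on the tree, generated entity files may be needed first, so treat this as the general shape rather than an exact recipe:

    cd doc/src/sgml
    xmllint --noout --valid postgres.sgml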
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> If the problem is broken doc patches, then maybe a solution is to\n> include the `xmllint --noout --valid` target in whatever the check-world\n> equivalent is for meson.\n\n+1, but let's do that for the \"make\" build too. I see that\ndoc/src/sgml/Makefile has a \"check\" target, but AFAICS it's not\nwired up to the top-level check-world.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Nov 2023 11:49:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-06 10:45:27 +0100, Christoph Berg wrote:\n> Re: Andres Freund\n> > > > The reason for that is simply that the docs take too long to build.\n> > >\n> > > That why I'd prefer to be able to separate arch:all and arch:any\n> > > builds, yes.\n> > \n> > What's stopping you from doing that? I think the only arch:any content we\n> > have is the docs, and those you can build separately? Doc builds do trigger\n> > generation of a handful of files besides the docs, but not more.\n> \n> Historically, .deb files have been required to contain the manpages\n> for all executables even when there's a separate -doc package. This\n> means we'd need a separate (hopefully fast) manpage build even when\n> the arch:any binaries are built.\n\nManpages are a bit faster to build than html, but not a whole lot. Both are a\nlot faster than PDF.\n\n\n> We might get around that by introducing a new postgresql-manpages-XX\n> arch:all package, but that might be too much micropackaging.\n\nI've not done packaging in, uh, a fair while, but isn't the common solution to\nthat a -common package? There might be a few more files we could put itno one.\n\n\n> > + <varlistentry id=\"configure-with-sepgsql-meson\">\n> > + <term><option>-Dselinux={ disabled | auto | enabled }</option></term>\n> > + <listitem>\n> > + <para>\n> > + Build with selinux support, enabling the <xref linkend=\"sepgsql\"/>\n> > + extension.\n> \n> This option defaults to ... auto?\n\nNot quite sure what you mean? Today it defaults to disabled, a patch changing\nthat should also change the docs?\n\n\n> > index 90e2c062fa8..003b57498bb 100644\n> > --- a/doc/src/sgml/meson.build\n> > +++ b/doc/src/sgml/meson.build\n> > @@ -142,6 +142,7 @@ if docs_dep.found()\n> > '--install-dir-contents', dir_doc_html, html],\n> > build_always_stale: true, build_by_default: false,\n> > )\n> > + alias_target('doc-html', install_doc_html)\n> > alias_target('install-doc-html', install_doc_html)\n> \n> Shouldn't this just build the html docs, without installing?\n> \n> > + alias_target('doc-man', install_doc_html)\n> > alias_target('install-doc-man', install_doc_man)\n> \n> ... same\n> \n> \n> > + <varlistentry id=\"meson-target-install-world\">\n> > + <term><option>install-install-world</option></term>\n> \n> install-world\n> \n> > + <varlistentry id=\"meson-target-install-doc-man\">\n> > + <term><option>install-doc-html</option></term>\n> > + <listitem>\n> > + <para>\n> > + Install documentation in man page format.\n> \n> install-doc-man\n\nOops.\n\n\n> > + <sect3 id=\"meson-targets-docs\">\n> > + <title>Documentation Targets</title>\n> \n> > + <varlistentry id=\"meson-target-docs\">\n> > + <term><option>docs</option></term>\n> > + <term><option>doc-html</option></term>\n> > + <listitem>\n> > + <para>\n> > + Build documentation in multi-page HTML format. Note that\n> > + <option>docs</option> does <emphasis>not</emphasis> include building\n> > + man page documentation, as man page generation seldom fails when\n> > + building HTML documentation succeeds.\n> \n> Why is that a reason for not building the manpages?\n\nI didn't have it that way, and Tom argued strongly for maintaining that\nbehaviour from the make build. Personally I wouldn't.\n\n\n\n> > + <sect3 id=\"meson-targets-code\">\n> > + <title>Code Targets</title>\n> \n> I would have expected the sections to be in the order\n> build-docs-install. Having install first seems weird to me.\n\nMakes sense to me. 
I just had the install first because I wrote it first\nbecause of our conversation...\n\n\n> > + <sect3 id=\"meson-targets-other\">\n> > + <title>Other Targets</title>\n> > +\n> > + <variablelist>\n> > +\n> > + <varlistentry id=\"meson-target-clean\">\n> > + <term><option>clean</option></term>\n> > + <listitem>\n> > + <para>\n> > + Remove all build products\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> > +\n> > + <varlistentry id=\"meson-target-test\">\n> > + <term><option>test</option></term>\n> > + <listitem>\n> > + <para>\n> > + Remove all enabled tests. Support for some classes of tests can be\n> > + enabled / disabled with <xref linkend=\"configure-tap-tests-meson\"/>\n> > + and <xref linkend=\"configure-pg-test-extra-meson\"/>.\n> \n> This should explicitly say if contrib tests are included (or there\n> needs to be a separate test-world target.)\n\nThey are included, will state that. And also s/Remove/Run/\n\n\n> > Subject: [PATCH v1 5/5] meson: Add -Dpkglibdir option\n> \n> Will give that a try, thanks!\n\nThanks for the review!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Nov 2023 09:00:11 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Re: Andres Freund\n> > We might get around that by introducing a new postgresql-manpages-XX\n> > arch:all package, but that might be too much micropackaging.\n> \n> I've not done packaging in, uh, a fair while, but isn't the common solution to\n> that a -common package? There might be a few more files we could put itno one.\n\nTrue. /usr/share/postgresql/17/ is 4.2MB here, with 1.5MB manpages,\n1.1MB /extensions/ and some other bits. Will consider, thanks.\n\n> > > + <varlistentry id=\"configure-with-sepgsql-meson\">\n> > > + <term><option>-Dselinux={ disabled | auto | enabled }</option></term>\n> > > + <listitem>\n> > > + <para>\n> > > + Build with selinux support, enabling the <xref linkend=\"sepgsql\"/>\n> > > + extension.\n> > \n> > This option defaults to ... auto?\n> \n> Not quite sure what you mean? Today it defaults to disabled, a patch changing\n> that should also change the docs?\n\nWhat I failed to say is that the other options document what the\ndefault it, this one doesn't yet.\n\nChristoph\n\n\n",
"msg_date": "Tue, 7 Nov 2023 18:07:11 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 07.11.23 17:08, Andres Freund wrote:\n>> make world\n>> make check-world\n>> make install-world\n>>\n>> that encompasses everything.\n>>\n>> Now with meson to handle the documentation one needs to remember a variety of additional targets. (There is a risk that once this gets more widespread, more people will submit broken documentation.)\n> install-world with meson also installs docs.\n\nOk, I didn't know about ninja install-world. That works for me. Maybe \na \"world\" target would also be good.\n\nI played around with this a bit and noticed some files missing or in the \nwrong place. See two attached patches (plus e9f075f9a1 already committed).",
"msg_date": "Wed, 8 Nov 2023 12:04:30 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 07.11.23 17:40, Alvaro Herrera wrote:\n> If the problem is broken doc patches, then maybe a solution is to\n> include the `xmllint --noout --valid` target in whatever the check-world\n> equivalent is for meson. Looking at doc/src/sgml/meson.build, we don't\n> seem to do that anywhere. Doing the no-output lint run is very fast\n> (375ms real time in my machine, whereas \"make html\" takes 27s).\n\nThis would be a start, but it wouldn't cover everything. Lately, we \nrequire id attributes on certain elements, which is checked on the XSLT \nlevel.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 12:05:52 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Re: Peter Eisentraut\n> > If the problem is broken doc patches, then maybe a solution is to\n> > include the `xmllint --noout --valid` target in whatever the check-world\n> > equivalent is for meson. Looking at doc/src/sgml/meson.build, we don't\n> > seem to do that anywhere. Doing the no-output lint run is very fast\n> > (375ms real time in my machine, whereas \"make html\" takes 27s).\n> \n> This would be a start, but it wouldn't cover everything. Lately, we require\n> id attributes on certain elements, which is checked on the XSLT level.\n\nI'd think there should be a catchy \"make check-world\"-equivalent that\ndoes run all reasonable check that we can tell people to run by\ndefault. Then if that takes too long, we could still offer\nalternatives that exclude some areas. If it's the other way round,\nsome areas will never be checked widely.\n\nChristoph\n\n\n",
"msg_date": "Wed, 8 Nov 2023 13:55:02 +0100",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 08.11.23 13:55, Christoph Berg wrote:\n> Re: Peter Eisentraut\n>>> If the problem is broken doc patches, then maybe a solution is to\n>>> include the `xmllint --noout --valid` target in whatever the check-world\n>>> equivalent is for meson. Looking at doc/src/sgml/meson.build, we don't\n>>> seem to do that anywhere. Doing the no-output lint run is very fast\n>>> (375ms real time in my machine, whereas \"make html\" takes 27s).\n>>\n>> This would be a start, but it wouldn't cover everything. Lately, we require\n>> id attributes on certain elements, which is checked on the XSLT level.\n> \n> I'd think there should be a catchy \"make check-world\"-equivalent that\n> does run all reasonable check that we can tell people to run by\n> default. Then if that takes too long, we could still offer\n> alternatives that exclude some areas. If it's the other way round,\n> some areas will never be checked widely.\n\nI think we could build doc/src/sgml/postgres-full.xml by default. That \ntakes less than 0.5 seconds here and it's an intermediate target for \nhtml and man.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 16:19:51 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 2023-Nov-08, Peter Eisentraut wrote:\n\n> I think we could build doc/src/sgml/postgres-full.xml by default. That\n> takes less than 0.5 seconds here and it's an intermediate target for html\n> and man.\n\nIf that detects problems like the id attributes you mentioned, apart\nfrom the other checks in the `xmllint --noout`, then that WFM.\n\nAt least with the makefile the command to produce postgres-full.xml\nincludes --valid, so I think we're covered.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Having your biases confirmed independently is how scientific progress is\nmade, and hence made our great society what it is today\" (Mary Gardiner)\n\n\n",
"msg_date": "Wed, 8 Nov 2023 16:55:07 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Looks good to me. Thanks for finding this.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 08 Nov 2023 09:56:08 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-08 12:04:30 +0100, Peter Eisentraut wrote:\n> Ok, I didn't know about ninja install-world. That works for me. Maybe a\n> \"world\" target would also be good.\n\nYea, I thought so as well. I'll send out a patch shortly. Kinda wondering if\nits worth backpatching to 16. Uniformity seems useful and it's low risk.\n\n\n> I played around with this a bit and noticed some files missing or in the\n> wrong place. See two attached patches (plus e9f075f9a1 already committed).\n\nMake sense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Nov 2023 09:32:08 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nI really like the idea of an 'help' target that prints the targets. It seemed\nannoying to document such targets in both the sgml docs and the input for a\nthe help target. Particularly due to the redundancies between id attributes,\nthe target name etc.\n\nFirst I generated the list of targets from within meson.build, only to later\nrealize that that would not work when building the docs via make. So I instead\nadded doc/src/sgml/meson-targets.txt which is lightly postprocessed for the\n'help' target, and slightly more processed when building the docs.\n\nThat does have some downsides, e.g. it'd be more complicated to only print\ntargets if a relevant option is enabled. But I think it's acceptable that way.\n\n\nExample output:\n\n$ ninja help\n[0/1 1 0%] Running external command help (wrapped by meson to set env)\nCode Targets:\n all Build everything other than documentation\n backend Build backend and related modules\n bin Build frontend binaries\n contrib Build contrib modules\n pl Build procedual languages\n\nDocumentation Targets:\n docs Build documentation in multi-page HTML format\n doc-html Build documentation in multi-page HTML format\n doc-man Build documentation in man page format\n doc/src/sgml/postgres-A4.pdf Build documentation in PDF format, with A4 pages\n doc/src/sgml/postgres-US.pdf Build documentation in PDF format, with US letter pages\n doc/src/sgml/postgres.html Build documentation in single-page HTML format\n alldocs Build documentation in all supported formats\n\nInstallation Targets:\n install Install postgres, excluding documentation\n install-doc-html Install documentation in multi-page HTML format\n install-doc-man Install documentation in man page format\n install-docs Install documentation in multi-page HTML and man page formats\n install-quiet Like \"install\", but installed files are not displayed\n install-world Install postgres, including multi-page HTML and man page documentation\n uninstall Remove installed files\n\nOther Targets:\n clean Remove all build products\n test Run all enabled tests (including contrib)\n world Build everything, including documentation\n help List important targets\n\n\nBecause of the common source, some of the descriptions in the state of this\npatch are a bit shorter than in the preceding commit. But I don't think that\nhurts much.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 8 Nov 2023 15:21:21 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-08 13:55:02 +0100, Christoph Berg wrote:\n> Re: Peter Eisentraut\n> > > If the problem is broken doc patches, then maybe a solution is to\n> > > include the `xmllint --noout --valid` target in whatever the check-world\n> > > equivalent is for meson. Looking at doc/src/sgml/meson.build, we don't\n> > > seem to do that anywhere. Doing the no-output lint run is very fast\n> > > (375ms real time in my machine, whereas \"make html\" takes 27s).\n> > \n> > This would be a start, but it wouldn't cover everything. Lately, we require\n> > id attributes on certain elements, which is checked on the XSLT level.\n> \n> I'd think there should be a catchy \"make check-world\"-equivalent that\n> does run all reasonable check that we can tell people to run by\n> default. Then if that takes too long, we could still offer\n> alternatives that exclude some areas. If it's the other way round,\n> some areas will never be checked widely.\n\nThe 'test' target (generated by meson, otherwise I'd have named it check),\nruns all enabled tests. You obviously can run a subset if you so desire.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Nov 2023 16:43:25 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-08 16:19:51 +0100, Peter Eisentraut wrote:\n> On 08.11.23 13:55, Christoph Berg wrote:\n> > Re: Peter Eisentraut\n> > > > If the problem is broken doc patches, then maybe a solution is to\n> > > > include the `xmllint --noout --valid` target in whatever the check-world\n> > > > equivalent is for meson. Looking at doc/src/sgml/meson.build, we don't\n> > > > seem to do that anywhere. Doing the no-output lint run is very fast\n> > > > (375ms real time in my machine, whereas \"make html\" takes 27s).\n> > >\n> > > This would be a start, but it wouldn't cover everything. Lately, we require\n> > > id attributes on certain elements, which is checked on the XSLT level.\n> >\n> > I'd think there should be a catchy \"make check-world\"-equivalent that\n> > does run all reasonable check that we can tell people to run by\n> > default. Then if that takes too long, we could still offer\n> > alternatives that exclude some areas. If it's the other way round,\n> > some areas will never be checked widely.\n>\n> I think we could build doc/src/sgml/postgres-full.xml by default. That\n> takes less than 0.5 seconds here and it's an intermediate target for html\n> and man.\n\nThat does require the docbook dtd to be installed, afaict. I think we would\nneed a configure test for that to be present if we want to build it by\ndefault, otherwise we'll cause errors on plenty systems that don't get them\ntoday. The docbook dts aren't a huge dependency, but still. Some OSs might\nnot have a particularly install source for them, e.g. windows.\n\nI don't think that'd detect the missing ids?\n\nGreetings,\n\nAndres Freund\n\n\n\n",
"msg_date": "Wed, 8 Nov 2023 16:59:09 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 09.11.23 01:59, Andres Freund wrote:\n>> I think we could build doc/src/sgml/postgres-full.xml by default. That\n>> takes less than 0.5 seconds here and it's an intermediate target for html\n>> and man.\n> That does require the docbook dtd to be installed, afaict. I think we would\n> need a configure test for that to be present if we want to build it by\n> default, otherwise we'll cause errors on plenty systems that don't get them\n> today. The docbook dts aren't a huge dependency, but still. Some OSs might\n> not have a particularly install source for them, e.g. windows.\n\nI was thinking we would do it only if the required tools are found. \nBasically like\n\n postgres_full_xml = custom_target('postgres-full.xml',\n input: 'postgres.sgml',\n output: 'postgres-full.xml',\n depfile: 'postgres-full.xml.d',\n command: [xmllint, '--nonet', '--noent', '--valid',\n '--path', '@OUTDIR@', '-o', '@OUTPUT@', '@INPUT@'],\n depends: doc_generated,\n- build_by_default: false,\n+ build_by_default: xmllint_bin.found(),\n )\n\nBesides giving you a quick validity check of the XML, this also builds \nthe doc_generated, which draw from non-doc source files, so this would \nalso serve to check that those are sound and didn't mess up the docs.\n\n> I don't think that'd detect the missing ids?\n\nRight, it wouldn't do that.\n\n\n\n",
"msg_date": "Thu, 9 Nov 2023 15:32:39 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 09.11.23 00:21, Andres Freund wrote:\n> Example output:\n\nThis is very nice!\n\n> $ ninja help\n> [0/1 1 0%] Running external command help (wrapped by meson to set env)\n> Code Targets:\n> all Build everything other than documentation\n> backend Build backend and related modules\n> bin Build frontend binaries\n> contrib Build contrib modules\n> pl Build procedual languages\n\nok\n\n> Documentation Targets:\n> docs Build documentation in multi-page HTML format\n> doc-html Build documentation in multi-page HTML format\n> doc-man Build documentation in man page format\n> doc/src/sgml/postgres-A4.pdf Build documentation in PDF format, with A4 pages\n> doc/src/sgml/postgres-US.pdf Build documentation in PDF format, with US letter pages\n> doc/src/sgml/postgres.html Build documentation in single-page HTML format\n> alldocs Build documentation in all supported formats\n> \n> Installation Targets:\n> install Install postgres, excluding documentation\n\nThis should probably read \"Install everything other than documentation\", \nto mirror \"all\" above. (Otherwise one might think it installs just the \nbackend.)\n\n> install-doc-html Install documentation in multi-page HTML format\n> install-doc-man Install documentation in man page format\n> install-docs Install documentation in multi-page HTML and man page formats\n\nThere is a mismatch between \"docs\" and \"install-docs\". (As was \npreviously discussed, I'm in the camp that \"docs\" should be html + man.)\n\n> install-quiet Like \"install\", but installed files are not displayed\n> install-world Install postgres, including multi-page HTML and man page documentation\n\nSuggest \"Install everything, including documentation\" (matches \"world\").\n\n> uninstall Remove installed files\n> \n> Other Targets:\n> clean Remove all build products\n> test Run all enabled tests (including contrib)\n> world Build everything, including documentation\n\nShouldn't that be under \"Code Targets\"?\n\n> help List important targets\n\n\n\n",
"msg_date": "Thu, 9 Nov 2023 15:39:32 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-09 15:32:39 +0100, Peter Eisentraut wrote:\n> On 09.11.23 01:59, Andres Freund wrote:\n> > > I think we could build doc/src/sgml/postgres-full.xml by default. That\n> > > takes less than 0.5 seconds here and it's an intermediate target for html\n> > > and man.\n> > That does require the docbook dtd to be installed, afaict. I think we would\n> > need a configure test for that to be present if we want to build it by\n> > default, otherwise we'll cause errors on plenty systems that don't get them\n> > today. The docbook dts aren't a huge dependency, but still. Some OSs might\n> > not have a particularly install source for them, e.g. windows.\n> \n> I was thinking we would do it only if the required tools are found.\n> Basically like\n> \n> postgres_full_xml = custom_target('postgres-full.xml',\n> input: 'postgres.sgml',\n> output: 'postgres-full.xml',\n> depfile: 'postgres-full.xml.d',\n> command: [xmllint, '--nonet', '--noent', '--valid',\n> '--path', '@OUTDIR@', '-o', '@OUTPUT@', '@INPUT@'],\n> depends: doc_generated,\n> - build_by_default: false,\n> + build_by_default: xmllint_bin.found(),\n> )\n\nWe don't get to that point if xmllint isn't found...\n\n\n> Besides giving you a quick validity check of the XML, this also builds the\n> doc_generated, which draw from non-doc source files, so this would also\n> serve to check that those are sound and didn't mess up the docs.\n\nUnfortunately presence of xmllint doesn't guarantee presence of the relevant\nDTDs. Without docbook-xml installed, you'll get something like\n\n../../../../../home/andres/src/postgresql/doc/src/sgml/postgres.sgml:21: warning: failed to load external entity \"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"\n]>\n ^\n\nand a bunch of other subsequent errors.\n\n\nI think if we want to do this, we'd need a configure time check for being able\nto validate a document with\n<!DOCTYPE book PUBLIC \"-//OASIS//DTD DocBook XML V4.5//EN\" \"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"...\n\nThat's certainly doable. If we go there, we imo also should check if the\nrelevant xslt stylesheets are installed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Nov 2023 09:52:40 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 09.11.23 18:52, Andres Freund wrote:\n>> Besides giving you a quick validity check of the XML, this also builds the\n>> doc_generated, which draw from non-doc source files, so this would also\n>> serve to check that those are sound and didn't mess up the docs.\n> Unfortunately presence of xmllint doesn't guarantee presence of the relevant\n> DTDs. Without docbook-xml installed, you'll get something like\n> \n> ../../../../../home/andres/src/postgresql/doc/src/sgml/postgres.sgml:21: warning: failed to load external entity\"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"\n> ]>\n> ^\n> \n> and a bunch of other subsequent errors.\n> \n> \n> I think if we want to do this, we'd need a configure time check for being able\n> to validate a document with\n> <!DOCTYPE book PUBLIC \"-//OASIS//DTD DocBook XML V4.5//EN\"\"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"...\n\nWe used to have exactly such a check in configure, but it was removed in \n4823c4f6ac. I suppose we could look into reviving that.\n\n\n\n",
"msg_date": "Tue, 14 Nov 2023 11:58:53 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
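For illustration of the configure-time check discussed in the two messages above, here is a rough, untested sketch of such a probe. Only the DOCTYPE line and the xmllint options come from the quoted discussion; the sample document and the temporary conftest file are assumptions. Because of --nonet, the command can only succeed if the DocBook 4.5 DTD is resolvable through the local XML catalogs (i.e. docbook-xml is installed).

```sh
# Hypothetical probe, in the spirit of the configure check removed in 4823c4f6ac (untested sketch).
cat >conftest.sgml <<'EOF'
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
<book><title>t</title><chapter><title>c</title><para>probe</para></chapter></book>
EOF
# Exit status is 0 only if the DTD could be loaded locally and the document validated.
xmllint --nonet --noout --valid conftest.sgml
```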
{
"msg_contents": "Some comments on your patches:\n\nv2-0001-meson-Change-default-of-selinux-feature-option-to.patch\n\nOk\n\nv2-0002-docs-Document-with-selinux-Dselinux-options-centr.patch\n\nOk, but \"selinux\" should be \"SELinux\" when referring to the product.\n\nv2-0003-meson-docs-Add-doc-html-man-targets.patch\n\nWe have make targets \"html\" and \"man\", so I suggest we make the meson \ntargets the same.\n\nv2-0004-meson-Add-world-target.patch\n\nAFAICT, this world target doesn't include the man target. (Again, this \nwould all work better if we added \"man\" to \"docs\".)\n\nv2-0005-docs-meson-Add-documentation-for-important-build-.patch\n\nIt's nice to document this, but it's weird that we only document the \nmeson targets, not the make targets.\n\nv2-0006-meson-Add-help-target-build-docs-from-a-common-so.patch\n\nHere also, making this consistent and uniform with make would be useful.\n\nv2-0007-meson-Add-Dpkglibdir-option.patch\n\nNormally, the pkgFOOdir variables are just FOOdir plus package name. I \ndon't feel comfortable allowing those to be separately set. We don't \nallow that with configure; this just arose from a Debian patch.\n\nThe description \"location to dynamically loadable modules\" is too \nnarrow. Consider for example, another proposed patch, where we are \ndoing some preprocessing on postgres.bki at build time. Since that \nmakes postgres.bki platform dependent, it should really be moved from \nshare (datadir in configure parlance) to pkglibdir. So then we have \nthings in there that are not loadable modules. I don't know how that \naffects Debian packaging, but this patch might not be the right one.\n\nI suggest we leave this patch for a separate discussion.\n\n\n\n",
"msg_date": "Tue, 14 Nov 2023 21:16:13 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-14 21:16:13 +0100, Peter Eisentraut wrote:\n> Some comments on your patches:\n> \n> v2-0001-meson-Change-default-of-selinux-feature-option-to.patch\n> \n> Ok\n> \n> v2-0002-docs-Document-with-selinux-Dselinux-options-centr.patch\n> \n> Ok, but \"selinux\" should be \"SELinux\" when referring to the product.\n\nWill apply with that fix.\n\n\n> v2-0003-meson-docs-Add-doc-html-man-targets.patch\n> \n> We have make targets \"html\" and \"man\", so I suggest we make the meson\n> targets the same.\n\nHm, ok.\n\n\n> v2-0004-meson-Add-world-target.patch\n> \n> AFAICT, this world target doesn't include the man target. (Again, this\n> would all work better if we added \"man\" to \"docs\".)\n\nI agree with that sentiment - I only moved to the current arrangement after\nTom argued forcefully against building both.\n\nThe situation in the make world is weird:\n\"make docs\" in the toplevel builds both, because it's defined as\n\ndocs:\n\t$(MAKE) -C doc all\n\nBuf if you \"make -C doc/src/sgml\" (or are in doc/src/sgml), we only build\nhtml, as the default target is explicitly just html:\n\n# Make \"html\" the default target, since that is what most people tend\n# to want to use.\nhtml:\n\n\nThere's no real way of making the recursive-make and non-recursive ninja\ncoherent. There's no equivalent to default target in a sudirectory with ninja\n(or non-recursive make).\n\n\n> v2-0005-docs-meson-Add-documentation-for-important-build-.patch\n> \n> It's nice to document this, but it's weird that we only document the meson\n> targets, not the make targets.\n\nI think it'd have been good if we had documented the important targets with\nmake. But I don't think documenting them as a prerequisite to documenting the\nmeson targets makes much sense.\n\n\n> v2-0006-meson-Add-help-target-build-docs-from-a-common-so.patch\n> \n> Here also, making this consistent and uniform with make would be useful.\n\nWhat precisely are you referring to here? Also adding a help target? Or just\nconsistency between what the \"docs\" target does?\n\n\n> v2-0007-meson-Add-Dpkglibdir-option.patch\n> \n> Normally, the pkgFOOdir variables are just FOOdir plus package name. I\n> don't feel comfortable allowing those to be separately set. We don't allow\n> that with configure; this just arose from a Debian patch.\n\nRight - but Debian's desire seems quite sensible. The need to have multiple\npostgres versions installed in parallel is quite widespread.\n\n\n> The description \"location to dynamically loadable modules\" is too narrow.\n> Consider for example, another proposed patch, where we are doing some\n> preprocessing on postgres.bki at build time. Since that makes postgres.bki\n> platform dependent, it should really be moved from share (datadir in\n> configure parlance) to pkglibdir.\n\nI think I cannot be faulted for documenting the current use of the directory\n:).\n\nSeparately, I'm not really convinced that moving some build time values into\npostgres.bki is useful, but that's a matter for a different thread.\n\n\n> So then we have things in there that are not loadable modules. I don't know\n> how that affects Debian packaging, but this patch might not be the right\n> one.\n\nI'm not really seeing why that'd affect pkglibdir being adjustable, besides\nneeding to tweak the description of pkglibdir?\n\n\n> I suggest we leave this patch for a separate discussion.\n\nFair enough.\n\nThanks for the review,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Nov 2023 16:22:31 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-14 11:58:53 +0100, Peter Eisentraut wrote:\n> On 09.11.23 18:52, Andres Freund wrote:\n> > I think if we want to do this, we'd need a configure time check for being able\n> > to validate a document with\n> > <!DOCTYPE book PUBLIC \"-//OASIS//DTD DocBook XML V4.5//EN\"\"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"...\n> \n> We used to have exactly such a check in configure, but it was removed in\n> 4823c4f6ac. I suppose we could look into reviving that.\n\nYea, that change was obsoleted by xmllint/xsltproc not being able to fetch the\ndtd over network anymore... And the performance issue 4823c4f6ac notes also\ndoesn't apply anymore, as we use -nonet since 969509c3f2e.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Nov 2023 16:26:25 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-14 16:22:31 -0800, Andres Freund wrote:\n> > v2-0004-meson-Add-world-target.patch\n> > \n> > AFAICT, this world target doesn't include the man target. (Again, this\n> > would all work better if we added \"man\" to \"docs\".)\n> \n> I agree with that sentiment - I only moved to the current arrangement after\n> Tom argued forcefully against building both.\n\nAnother message in this thread made me realize that I actually hadn't\nimplemented it at all - it was Tom in 969509c3f2e\n\n In HEAD, also document how to build docs using Meson, and adjust\n \"ninja docs\" to just build the HTML docs, for consistency with the\n default behavior of doc/src/sgml/Makefile.\n\n\nI think that change was just ill-advised, given that the top-level make target\nactually *does* build both html and man:\n\n> The situation in the make world is weird:\n> \"make docs\" in the toplevel builds both, because it's defined as\n> \n> docs:\n> \t$(MAKE) -C doc all\n\nNotwithstanding this:\n\n> Buf if you \"make -C doc/src/sgml\" (or are in doc/src/sgml), we only build\n> html, as the default target is explicitly just html:\n\nAs the obvious thing for people that really just want to build html with ninja\nwould be to just use the doc-html (to-be-renamed to \"html\") target.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Nov 2023 16:30:24 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-14 16:30:24 -0800, Andres Freund wrote:\n> On 2023-11-14 16:22:31 -0800, Andres Freund wrote:\n> > > v2-0004-meson-Add-world-target.patch\n> > > \n> > > AFAICT, this world target doesn't include the man target. (Again, this\n> > > would all work better if we added \"man\" to \"docs\".)\n> > \n> > I agree with that sentiment - I only moved to the current arrangement after\n> > Tom argued forcefully against building both.\n> \n> Another message in this thread made me realize that I actually hadn't\n> implemented it at all - it was Tom in 969509c3f2e\n> \n> In HEAD, also document how to build docs using Meson, and adjust\n> \"ninja docs\" to just build the HTML docs, for consistency with the\n> default behavior of doc/src/sgml/Makefile.\n> \n> \n> I think that change was just ill-advised, given that the top-level make target\n> actually *does* build both html and man:\n> \n> > The situation in the make world is weird:\n> > \"make docs\" in the toplevel builds both, because it's defined as\n> > \n> > docs:\n> > \t$(MAKE) -C doc all\n> \n> Notwithstanding this:\n> \n> > Buf if you \"make -C doc/src/sgml\" (or are in doc/src/sgml), we only build\n> > html, as the default target is explicitly just html:\n> \n> As the obvious thing for people that really just want to build html with ninja\n> would be to just use the doc-html (to-be-renamed to \"html\") target.\n\nI pushed the first two commits (the selinux stuff) and worked a bit more on\nthe subsequent ones.\n\n- As requested, I've renamed the 'doc-html' and 'doc-man' targets to just 'html'\n and 'man'. Which then seems to also necessitates renaming the existing\n install-doc-{html,man}. I'm not so sure about this change, likely because I\n use autocomplete to remember the spelling of ninja (or make) targets, which\n is easier with [install-]doc-{html,man} than with [install-]{html,man}.\n\n- I added a commit to change what 'docs' builds, undoing that part of\n 969509c3f2e. I also moved the 'all' target in doc/src/sgml/Makefile up to\n the 'html' target to make things less confusing there, as discussed in the\n thread referenced in the commit message.\n\n Because of the 'html' target, Tom can still just build html easily.\n\n- I renamed 'meson-targets.txt' to 'targets-meson.txt' and renamed other files\n to match. One reason is that meson tries to prevent conflict between its\n internal targets by prefixing them with 'meson-', and the old names\n conflicted with that rule. If we ever wanted to add something similar for\n make, the new naming also seems better.\n\n- I added documentation for some developer targets (reformat-dat-files,\n expand-dat-files, update-unicode)\n\nI didn't move 'world' in the docs, as it doesn't quite seem right in the \"code\ntargets\" section?\n\n\nI attached the pkglibdir thing again, even though I don't plan to push it or\nreally review it further. Thought it might still be interesting for Christoph.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 17 Nov 2023 10:53:06 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 17.11.23 19:53, Andres Freund wrote:\n> I pushed the first two commits (the selinux stuff) and worked a bit more on\n> the subsequent ones.\n\nPatches 0001 through 0004 look good to me.\n\nSome possible small tweaks in 0004:\n\n+ perl, '-ne', 'next if /^#/; print',\n\nIf you're going for super-brief mode, you could also use \"perl -p\" and \ndrop the \"print\".\n\nPut at least two spaces between the \"columns\" in targets-meson.txt:\n\n+ doc/src/sgml/postgres-A4.pdf Build documentation in PDF format, with\n ^^\n\n\n\n",
"msg_date": "Mon, 20 Nov 2023 08:27:48 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-20 08:27:48 +0100, Peter Eisentraut wrote:\n> On 17.11.23 19:53, Andres Freund wrote:\n> > I pushed the first two commits (the selinux stuff) and worked a bit more on\n> > the subsequent ones.\n> \n> Patches 0001 through 0004 look good to me.\n\nCool, I pushed them now.\n\n\n> Some possible small tweaks in 0004:\n> \n> + perl, '-ne', 'next if /^#/; print',\n> \n> If you're going for super-brief mode, you could also use \"perl -p\" and drop\n> the \"print\".\n\nI thought this didn't add much, so I didn't go there.\n\n\n> Put at least two spaces between the \"columns\" in targets-meson.txt:\n> \n> + doc/src/sgml/postgres-A4.pdf Build documentation in PDF format, with\n> ^^\n\nI did adopt this.\n\n\nOne remaining question is whether we should adjust install-doc-{html,man} to\nbe install-{html,man}, to match the docs targets.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Nov 2023 17:56:13 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 21.11.23 02:56, Andres Freund wrote:\n> One remaining question is whether we should adjust install-doc-{html,man} to\n> be install-{html,man}, to match the docs targets.\n\nAh didn't notice that one; yes please.\n\n\n",
"msg_date": "Tue, 21 Nov 2023 14:23:39 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
},
{
"msg_contents": "On 21.11.23 14:23, Peter Eisentraut wrote:\n> On 21.11.23 02:56, Andres Freund wrote:\n>> One remaining question is whether we should adjust \n>> install-doc-{html,man} to\n>> be install-{html,man}, to match the docs targets.\n> \n> Ah didn't notice that one; yes please.\n\nI think this was done?\n\n\n",
"msg_date": "Tue, 21 Nov 2023 15:54:10 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson documentation build open issues"
}
] |
[
{
"msg_contents": "While comparing the .pc (pkg-config) files generated by the make and \nmeson builds, I noticed that the Requires.private entries use different \ndelimiters. The make build uses spaces, the meson build uses commas. \nThe pkg-config documentation says that it should be comma-separated, but \napparently about half the .pc in the wild use just spaces.\n\nThe pkg-config source code acknowledges that both commas and spaces work:\n\nhttps://github.com/freedesktop/pkg-config/blob/master/parse.c#L273\nhttps://github.com/pkgconf/pkgconf/blob/master/libpkgconf/dependency.c#L286\n\nI think for consistency we should change the make build to use commas \nanyway. See attached patch.",
"msg_date": "Wed, 15 Mar 2023 08:51:04 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "pkg-config Requires.private entries should be comma-separated"
},
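For illustration only (this fragment is not part of the message above): on an OpenSSL-enabled build, the dependency entry in libpq.pc would look like this in the comma-separated form that the pkg-config documentation describes:

```
Requires.private: libssl, libcrypto
```

The make-generated file previously emitted the same entries separated only by spaces (Requires.private: libssl libcrypto), which, as noted above, both pkg-config and pkgconf happen to accept.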
{
"msg_contents": "Hi,\n\nOn 2023-03-15 08:51:04 +0100, Peter Eisentraut wrote:\n> While comparing the .pc (pkg-config) files generated by the make and meson\n> builds, I noticed that the Requires.private entries use different\n> delimiters. The make build uses spaces, the meson build uses commas. The\n> pkg-config documentation says that it should be comma-separated, but\n> apparently about half the .pc in the wild use just spaces.\n> \n> The pkg-config source code acknowledges that both commas and spaces work:\n> \n> https://github.com/freedesktop/pkg-config/blob/master/parse.c#L273\n> https://github.com/pkgconf/pkgconf/blob/master/libpkgconf/dependency.c#L286\n> \n> I think for consistency we should change the make build to use commas\n> anyway. See attached patch.\n\nMakes sense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 Mar 2023 09:10:14 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pkg-config Requires.private entries should be comma-separated"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that there are some duplicated codes in pgoutput_change() function\nwhich can be simplified, and here is an attempt to do that.\n\nBest Regards,\nHou Zhijie",
"msg_date": "Wed, 15 Mar 2023 08:29:54 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simplify some codes in pgoutput"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 2:00 PM [email protected]\n<[email protected]> wrote:\n>\n> I noticed that there are some duplicated codes in pgoutput_change() function\n> which can be simplified, and here is an attempt to do that.\n>\n\nFor REORDER_BUFFER_CHANGE_DELETE, when the old tuple is missing, after\nthis patch, we will still send BEGIN and do OutputPluginWrite, etc.\nAlso, it will try to perform row_filter when none of old_slot or\nnew_slot is set. I don't know for which particular case we have s\nhandling missing old tuples for deletes but that might require changes\nin your proposed patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Mar 2023 10:00:24 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify some codes in pgoutput"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 7:30 PM [email protected]\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I noticed that there are some duplicated codes in pgoutput_change() function\n> which can be simplified, and here is an attempt to do that.\n>\n> Best Regards,\n> Hou Zhijie\n\nHi Hou-san.\n\nI had a quick look at the 0001 patch.\n\nHere are some first comments.\n\n======\n\n1.\n+ if (relentry->attrmap)\n+ old_slot = execute_attr_map_slot(relentry->attrmap, old_slot,\n+ MakeTupleTableSlot(RelationGetDescr(targetrel),\n+ &TTSOpsVirtual));\n\n1a.\nIMO maybe it was more readable before when there was a separate\n'tupdesc' variable, instead of trying to squeeze too much into one\nstatement.\n\n1b.\nShould you retain the old comments that said \"/* Convert tuple if needed. */\"\n\n~~~\n\n2.\n- if (old_slot)\n- old_slot = execute_attr_map_slot(relentry->attrmap,\n- old_slot,\n- MakeTupleTableSlot(tupdesc, &TTSOpsVirtual));\n\nThe original code for REORDER_BUFFER_CHANGE_UPDATE was checking \"if\n(old_slot)\" but that check seems no longer present. Is it OK?\n\n~~~\n\n3.\n- /*\n- * Send BEGIN if we haven't yet.\n- *\n- * We send the BEGIN message after ensuring that we will actually\n- * send the change. This avoids sending a pair of BEGIN/COMMIT\n- * messages for empty transactions.\n- */\n\nThat original longer comment has been replaced with just \"/* Send\nBEGIN if we haven't yet */\". Won't it be better to retain the more\ninformative longer comment?\n\n~~~\n\n4.\n+\n+cleanup:\n if (RelationIsValid(ancestor))\n {\n RelationClose(ancestor);\n\n~\n\nSince you've introduced a new label 'cleanup:' then IMO you can remove\nthat old comment \"/* Cleanup */\".\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 17 Mar 2023 14:49:14 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify some codes in pgoutput"
},
{
"msg_contents": "On Thursday, March 16, 2023 12:30 PM Amit Kapila <[email protected]> wrote:\r\n\r\n> \r\n> On Wed, Mar 15, 2023 at 2:00 PM [email protected]\r\n> <[email protected]> wrote:\r\n> >\r\n> > I noticed that there are some duplicated codes in pgoutput_change()\r\n> function\r\n> > which can be simplified, and here is an attempt to do that.\r\n> >\r\n> \r\n> For REORDER_BUFFER_CHANGE_DELETE, when the old tuple is missing, after\r\n> this patch, we will still send BEGIN and do OutputPluginWrite, etc.\r\n> Also, it will try to perform row_filter when none of old_slot or\r\n> new_slot is set. I don't know for which particular case we have s\r\n> handling missing old tuples for deletes but that might require changes\r\n> in your proposed patch.\r\n\r\nI researched this a bit. I think the old tuple will be null only if the\r\nmodified table doesn't have PK or RI when the DELETE happens (referred to\r\nthe heap_delete()), but in that case the DELETE won't be allowed to be\r\nreplicated(e.g. the DELETE will either error out or be filtered by table level\r\nfilter in pgoutput_change).\r\n\r\nI also checked this for system table and in that case it is null but\r\nreorderbuffer doesn't forward it. For user_catalog_table, similarily, the\r\nDELETE should be filtered by table filter in pgoutput_change as well.\r\n\r\nSo, I think we can remove this check and log.\r\nAnd here is the new version patch which removes that for now.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Mon, 20 Mar 2023 09:19:57 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Simplify some codes in pgoutput"
},
{
"msg_contents": "On Friday, March 17, 2023 11:49 AM Peter Smith <[email protected]> wrote:\r\n> \r\n> On Wed, Mar 15, 2023 at 7:30 PM [email protected]\r\n> <[email protected]> wrote:\r\n> >\r\n> > Hi,\r\n> >\r\n> > I noticed that there are some duplicated codes in pgoutput_change()\r\n> > function which can be simplified, and here is an attempt to do that.\r\n> \r\n> Hi Hou-san.\r\n> \r\n> I had a quick look at the 0001 patch.\r\n> \r\n> Here are some first comments.\r\n\r\nThanks for the comments.\r\n\r\n> \r\n> ======\r\n> \r\n> 1.\r\n> + if (relentry->attrmap)\r\n> + old_slot = execute_attr_map_slot(relentry->attrmap, old_slot,\r\n> + MakeTupleTableSlot(RelationGetDescr(targetrel),\r\n> + &TTSOpsVirtual));\r\n> \r\n> 1a.\r\n> IMO maybe it was more readable before when there was a separate 'tupdesc'\r\n> variable, instead of trying to squeeze too much into one statement.\r\n> \r\n> 1b.\r\n> Should you retain the old comments that said \"/* Convert tuple if needed. */\"\r\n\r\nAdded.\r\n\r\n> ~~~\r\n> \r\n> 2.\r\n> - if (old_slot)\r\n> - old_slot = execute_attr_map_slot(relentry->attrmap,\r\n> - old_slot,\r\n> - MakeTupleTableSlot(tupdesc, &TTSOpsVirtual));\r\n> \r\n> The original code for REORDER_BUFFER_CHANGE_UPDATE was checking \"if\r\n> (old_slot)\" but that check seems no longer present. Is it OK?\r\n\r\nI think the logic is the same.\r\n\r\n> \r\n> ~~~\r\n> \r\n> 3.\r\n> - /*\r\n> - * Send BEGIN if we haven't yet.\r\n> - *\r\n> - * We send the BEGIN message after ensuring that we will actually\r\n> - * send the change. This avoids sending a pair of BEGIN/COMMIT\r\n> - * messages for empty transactions.\r\n> - */\r\n> \r\n> That original longer comment has been replaced with just \"/* Send BEGIN if we\r\n> haven't yet */\". Won't it be better to retain the more informative longer\r\n> comment?\r\n\r\nAdded.\r\n\r\n> ~~~\r\n> \r\n> 4.\r\n> +\r\n> +cleanup:\r\n> if (RelationIsValid(ancestor))\r\n> {\r\n> RelationClose(ancestor);\r\n> \r\n> ~\r\n> \r\n> Since you've introduced a new label 'cleanup:' then IMO you can remove that\r\n> old comment \"/* Cleanup */\".\r\n> \r\nRemoved.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Mon, 20 Mar 2023 09:20:40 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Simplify some codes in pgoutput"
},
{
"msg_contents": "On Monday, March 20, 2023 5:20 [email protected] wrote:\r\n> \r\n> On Thursday, March 16, 2023 12:30 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> \r\n> >\r\n> > On Wed, Mar 15, 2023 at 2:00 PM [email protected]\r\n> > <[email protected]> wrote:\r\n> > >\r\n> > > I noticed that there are some duplicated codes in pgoutput_change()\r\n> > function\r\n> > > which can be simplified, and here is an attempt to do that.\r\n> > >\r\n> >\r\n> > For REORDER_BUFFER_CHANGE_DELETE, when the old tuple is missing, after\r\n> > this patch, we will still send BEGIN and do OutputPluginWrite, etc.\r\n> > Also, it will try to perform row_filter when none of old_slot or\r\n> > new_slot is set. I don't know for which particular case we have s\r\n> > handling missing old tuples for deletes but that might require changes\r\n> > in your proposed patch.\r\n> \r\n> I researched this a bit. I think the old tuple will be null only if the modified table\r\n> doesn't have PK or RI when the DELETE happens (referred to the heap_delete()),\r\n> but in that case the DELETE won't be allowed to be replicated(e.g. the DELETE\r\n> will either error out or be filtered by table level filter in pgoutput_change).\r\n> \r\n> I also checked this for system table and in that case it is null but reorderbuffer\r\n> doesn't forward it. For user_catalog_table, similarily, the DELETE should be\r\n> filtered by table filter in pgoutput_change as well.\r\n> \r\n> So, I think we can remove this check and log.\r\n> And here is the new version patch which removes that for now.\r\n\r\nAfter rethinking about this, it seems better leave this check for now. Although\r\nit may be unnecessary, but we can remove that later as a separate patch when we\r\nare sure about this. So, here is a patch that add this check back.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Thu, 23 Mar 2023 01:26:58 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Simplify some codes in pgoutput"
},
{
"msg_contents": "Hi Hou-san,\n\nI tried to compare the logic of patch v3-0001 versus the original HEAD code.\n\nIMO this patch logic is not exactly the same as before -- there are\nsome subtle differences. I am not sure if these differences represent\nreal problems or not.\n\nBelow are all my review comments:\n\n======\n\n1.\n/* Switch relation if publishing via root. */\nif (relentry->publish_as_relid != RelationGetRelid(relation))\n{\n Assert(relation->rd_rel->relispartition);\n ancestor = RelationIdGetRelation(relentry->publish_as_relid);\n targetrel = ancestor;\n}\n\n~\n\nThe \"switch relation if publishing via root\" logic is now happening\nfirst, whereas the original code was doing this after the slot\nassignments. AFAIK it does not matter, it's just a small point of\ndifference.\n\n======\n\n2.\n/* Convert tuple if needed. */\nif (relentry->attrmap)\n{\n ...\n}\n\nThe \"Convert tuple if needed.\" logic looks the same, but when it is\nexecuted is NOT the same. It could be a problem.\n\nPreviously, the conversion would only happen within the \"Switch\nrelation if publishing via root.\" condition. But the patch no longer\nhas that extra condition -- now I think it attempts conversion every\ntime regardless of \"publishing via root\".\n\nI would expect the \"publish via root\" case to be less common, so even\nif the current code works, by omitting that check won't this patch\nhave an unnecessary performance hit due to the extra conversions?\n\n~~\n\n3.\nif (old_slot)\n old_slot = execute_attr_map_slot(relentry->attrmap,old_slot,MakeTupleTableSlot(tupdesc,\n&TTSOpsVirtual));\n\n~\n\nThe previous conversion code for UPDATE (shown above) was checking\n\"if (old_slot)\". Actually, I don't know why that check was even\nnecessary before but it seems to have been accounting for a\npossibility that UPDATE might not have \"oldtuple\".\n\nBut this combination (if indeed it was possible) is not handled\nanymore with the patch code because the old_slot is unconditionally\nassigned in the same block doing this conversion. Perhaps that\noriginal HEAD extra check was just overkill? TAP tests obviously\nstill are passing with the patch, but anyway, this is yet another\nsmall point of difference for the refactored patch code.\n\n======\n\n4.\nAFAIK, the \"if (change->data.tp.newtuple)\" can only be true for INSERT\nor UPDATE, so the code would be better to include a sanity Assert.\n\nSUGGESTION\nif (change->data.tp.newtuple)\n{\n Assert(action == REORDER_BUFFER_CHANGE_INSERT || action ==\nREORDER_BUFFER_CHANGE_UPDATE);\n...\n}\n\n======\n\n5.\nAFAIK, the \"if (change->data.tp.oldtuple)\" can only be true for UPDATE\nor DELETE, so the code would be better to include a sanity Assert.\n\nSUGGESTION\nif (change->data.tp.oldtuple)\n{\n Assert(action == REORDER_BUFFER_CHANGE_UPDATE || action ==\nREORDER_BUFFER_CHANGE_DELETE);\n...\n}\n\n======\n\n6.\nI suggest moving the \"change->data.tp.oldtuple\" check before the\n\"change->data.tp.newtuple\" check. I don't think it makes any\ndifference, but it seems more natural IMO to have old before new.\n\n\n------\nKind Regards,\nPeter Smith\n\n\n",
"msg_date": "Thu, 30 Mar 2023 12:15:17 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify some codes in pgoutput"
},
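To make the shape under discussion easier to follow, here is a rough sketch of the consolidated tuple handling, assembled from the fragments quoted in the review above together with the sanity Asserts suggested in points 4 and 5. It is only an illustrative fragment, not the submitted patch; in particular the ExecStoreHeapTuple() calls and the relentry->old_slot / relentry->new_slot fields are paraphrased from the existing pgoutput code and may not match the patch exactly.

```c
/* Illustrative fragment of pgoutput_change(), not the actual patch. */
if (change->data.tp.oldtuple)
{
    /* Suggested sanity check: only UPDATE and DELETE carry an old tuple. */
    Assert(action == REORDER_BUFFER_CHANGE_UPDATE ||
           action == REORDER_BUFFER_CHANGE_DELETE);

    old_slot = relentry->old_slot;
    ExecStoreHeapTuple(&change->data.tp.oldtuple->tuple, old_slot, false);

    /* Convert tuple if needed, i.e. when publishing via an ancestor. */
    if (relentry->attrmap)
        old_slot = execute_attr_map_slot(relentry->attrmap, old_slot,
                                         MakeTupleTableSlot(RelationGetDescr(targetrel),
                                                            &TTSOpsVirtual));
}

if (change->data.tp.newtuple)
{
    /* Suggested sanity check: only INSERT and UPDATE carry a new tuple. */
    Assert(action == REORDER_BUFFER_CHANGE_INSERT ||
           action == REORDER_BUFFER_CHANGE_UPDATE);

    new_slot = relentry->new_slot;
    ExecStoreHeapTuple(&change->data.tp.newtuple->tuple, new_slot, false);

    if (relentry->attrmap)
        new_slot = execute_attr_map_slot(relentry->attrmap, new_slot,
                                         MakeTupleTableSlot(RelationGetDescr(targetrel),
                                                            &TTSOpsVirtual));
}
```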
{
"msg_contents": "On Thursday, March 30, 2023 9:15 AM Peter Smith <[email protected]> wrote:\r\n> \r\n> Hi Hou-san,\r\n> \r\n> I tried to compare the logic of patch v3-0001 versus the original HEAD code.\r\n> \r\n> IMO this patch logic is not exactly the same as before -- there are\r\n> some subtle differences. I am not sure if these differences represent\r\n> real problems or not.\r\n> \r\n> Below are all my review comments:\r\n\r\nThanks for the check and comments.\r\n\r\n> \r\n> ======\r\n> \r\n> 1.\r\n> /* Switch relation if publishing via root. */\r\n> if (relentry->publish_as_relid != RelationGetRelid(relation))\r\n> {\r\n> Assert(relation->rd_rel->relispartition);\r\n> ancestor = RelationIdGetRelation(relentry->publish_as_relid);\r\n> targetrel = ancestor;\r\n> }\r\n> \r\n> ~\r\n> \r\n> The \"switch relation if publishing via root\" logic is now happening\r\n> first, whereas the original code was doing this after the slot\r\n> assignments. AFAIK it does not matter, it's just a small point of\r\n> difference.\r\n\r\nI also think it doesn't matter.\r\n\r\n> ======\r\n> \r\n> 2.\r\n> /* Convert tuple if needed. */\r\n> if (relentry-> attrmap)\r\n> {\r\n> ...\r\n> }\r\n> \r\n> The \"Convert tuple if needed.\" logic looks the same, but when it is\r\n> executed is NOT the same. It could be a problem.\r\n> \r\n> Previously, the conversion would only happen within the \"Switch\r\n> relation if publishing via root.\" condition. But the patch no longer\r\n> has that extra condition -- now I think it attempts conversion every\r\n> time regardless of \"publishing via root\".\r\n> \r\n> I would expect the \"publish via root\" case to be less common, so even\r\n> if the current code works, by omitting that check won't this patch\r\n> have an unnecessary performance hit due to the extra conversions?\r\n\r\nNo, the conversions won't happen in normal cases because \"if (relentry-> attrmap)\"\r\nwill pass only if we need to switch relation(publish via root).\r\n\r\n> ~~\r\n> \r\n> 3.\r\n> if (old_slot)\r\n> old_slot =\r\n> execute_attr_map_slot(relentry->attrmap,old_slot,MakeTupleTableSlot(tupde\r\n> sc,\r\n> &TTSOpsVirtual));\r\n> \r\n> ~\r\n> \r\n> The previous conversion code for UPDATE (shown above) was checking\r\n> \"if (old_slot)\". 
Actually, I don't know why that check was even\r\n> necessary before but it seems to have been accounting for a\r\n> possibility that UPDATE might not have \"oldtuple\".\r\n\r\nIf the RI key wasn't updated, then it's possible the old tuple is null.\r\n\r\n> \r\n> But this combination (if indeed it was possible) is not handled\r\n> anymore with the patch code because the old_slot is unconditionally\r\n> assigned in the same block doing this conversion.\r\n\r\nI think this case is handled by the generic old slot conversion in the patch.\r\n\r\n> ======\r\n> \r\n> 4.\r\n> AFAIK, the \"if (change->data.tp.newtuple)\" can only be true for INSERT\r\n> or UPDATE, so the code would be better to include a sanity Assert.\r\n> \r\n> SUGGESTION\r\n> if (change->data.tp.newtuple)\r\n> {\r\n> Assert(action == REORDER_BUFFER_CHANGE_INSERT || action ==\r\n> REORDER_BUFFER_CHANGE_UPDATE);\r\n> ...\r\n> }\r\n> \r\n> ======\r\n> \r\n> 5.\r\n> AFAIK, the \"if (change->data.tp.oldtuple)\" can only be true for UPDATE\r\n> or DELETE, so the code would be better to include a sanity Assert.\r\n> \r\n> SUGGESTION\r\n> if (change->data.tp.oldtuple)\r\n> {\r\n> Assert(action == REORDER_BUFFER_CHANGE_UPDATE || action ==\r\n> REORDER_BUFFER_CHANGE_DELETE);\r\n> ...\r\n> \r\n\r\nIt might be fine but I am not sure if it's necessary to add this in this\r\npatch as we don't have such assertion before.\r\n\r\n> \r\n> ======\r\n> \r\n> 6.\r\n> I suggest moving the \"change->data.tp.oldtuple\" check before the\r\n> \"change->data.tp.newtuple\" check. I don't think it makes any\r\n> difference, but it seems more natural IMO to have old before new.\r\n\r\nChanged.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Thu, 30 Mar 2023 03:01:17 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Simplify some codes in pgoutput"
},
{
"msg_contents": "Hi Hou-san,\n\nI looked again at v4-0001.\n\nOn Thu, Mar 30, 2023 at 2:01 PM [email protected]\n<[email protected]> wrote:\n>\n> On Thursday, March 30, 2023 9:15 AM Peter Smith <[email protected]> wrote:\n> >\n...\n> >\n> > 2.\n> > /* Convert tuple if needed. */\n> > if (relentry-> attrmap)\n> > {\n> > ...\n> > }\n> >\n> > The \"Convert tuple if needed.\" logic looks the same, but when it is\n> > executed is NOT the same. It could be a problem.\n> >\n> > Previously, the conversion would only happen within the \"Switch\n> > relation if publishing via root.\" condition. But the patch no longer\n> > has that extra condition -- now I think it attempts conversion every\n> > time regardless of \"publishing via root\".\n> >\n> > I would expect the \"publish via root\" case to be less common, so even\n> > if the current code works, by omitting that check won't this patch\n> > have an unnecessary performance hit due to the extra conversions?\n>\n> No, the conversions won't happen in normal cases because \"if (relentry-> attrmap)\"\n> will pass only if we need to switch relation(publish via root).\n>\n\nOK.\n\n> > ~~\n> >\n> > 3.\n> > if (old_slot)\n> > old_slot =\n> > execute_attr_map_slot(relentry->attrmap,old_slot,MakeTupleTableSlot(tupde\n> > sc,\n> > &TTSOpsVirtual));\n> >\n> > ~\n> >\n> > The previous conversion code for UPDATE (shown above) was checking\n> > \"if (old_slot)\". Actually, I don't know why that check was even\n> > necessary before but it seems to have been accounting for a\n> > possibility that UPDATE might not have \"oldtuple\".\n>\n> If the RI key wasn't updated, then it's possible the old tuple is null.\n>\n> >\n> > But this combination (if indeed it was possible) is not handled\n> > anymore with the patch code because the old_slot is unconditionally\n> > assigned in the same block doing this conversion.\n>\n> I think this case is handled by the generic old slot conversion in the patch.\n\nYeah, I think you are right. Sorry, this was my mistake when reading v3.\n\n>\n> > ======\n> >\n> > 4.\n> > AFAIK, the \"if (change->data.tp.newtuple)\" can only be true for INSERT\n> > or UPDATE, so the code would be better to include a sanity Assert.\n> >\n> > SUGGESTION\n> > if (change->data.tp.newtuple)\n> > {\n> > Assert(action == REORDER_BUFFER_CHANGE_INSERT || action ==\n> > REORDER_BUFFER_CHANGE_UPDATE);\n> > ...\n> > }\n> >\n> > ======\n> >\n> > 5.\n> > AFAIK, the \"if (change->data.tp.oldtuple)\" can only be true for UPDATE\n> > or DELETE, so the code would be better to include a sanity Assert.\n> >\n> > SUGGESTION\n> > if (change->data.tp.oldtuple)\n> > {\n> > Assert(action == REORDER_BUFFER_CHANGE_UPDATE || action ==\n> > REORDER_BUFFER_CHANGE_DELETE);\n> > ...\n> >\n>\n> It might be fine but I am not sure if it's necessary to add this in this\n> patch as we don't have such assertion before.\n\nThe Asserts are just for sanity and self-documentation regarding what\nactions can get into this logic. 
IMO including them does no harm,\nrather it does some small amount of good, so why not do it?\n\nYou can't really use the fact they were not there before as a reason\nto not add them now -- There were no Asserts in the original code\nbecause this same logic was duplicated multiple times and was always\nwithin obvious scope of a particular switch (action) case:\n\n~\n\nApart from the question of the Asserts, I have no more review comments\nfor this patch.\n\n(FYI - patch v4 applied cleanly and the regression tests and TAP\nsubscription tests all pass OK)\n\n------\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Thu, 30 Mar 2023 16:41:33 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify some codes in pgoutput"
},
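For context, the sanity checks being debated here would sit on top of the consolidated old/new tuple handling from the v4 patch. A rough sketch of that shape follows; the variable names (relentry, old_slot, new_slot, tupdesc, attrmap) are taken from the snippets quoted in this thread rather than from the patch itself, so treat it as an approximation, not the committed code.

/*
 * Approximate sketch of the consolidated pgoutput_change() handling with the
 * proposed Asserts added; not the v4 patch verbatim.
 */
if (change->data.tp.oldtuple)
{
	Assert(action == REORDER_BUFFER_CHANGE_UPDATE ||
		   action == REORDER_BUFFER_CHANGE_DELETE);

	old_slot = relentry->old_slot;
	ExecStoreHeapTuple(&change->data.tp.oldtuple->tuple, old_slot, false);

	/* Convert tuple if needed, i.e. when publishing via the root table. */
	if (relentry->attrmap)
		old_slot = execute_attr_map_slot(relentry->attrmap, old_slot,
										 MakeTupleTableSlot(tupdesc,
															&TTSOpsVirtual));
}

if (change->data.tp.newtuple)
{
	Assert(action == REORDER_BUFFER_CHANGE_INSERT ||
		   action == REORDER_BUFFER_CHANGE_UPDATE);

	new_slot = relentry->new_slot;
	ExecStoreHeapTuple(&change->data.tp.newtuple->tuple, new_slot, false);

	/* Convert tuple if needed, i.e. when publishing via the root table. */
	if (relentry->attrmap)
		new_slot = execute_attr_map_slot(relentry->attrmap, new_slot,
										 MakeTupleTableSlot(tupdesc,
															&TTSOpsVirtual));
}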
{
"msg_contents": "On Thu, Mar 30, 2023 at 11:12 AM Peter Smith <[email protected]> wrote:\n>\n> > >\n> > > 5.\n> > > AFAIK, the \"if (change->data.tp.oldtuple)\" can only be true for UPDATE\n> > > or DELETE, so the code would be better to include a sanity Assert.\n> > >\n> > > SUGGESTION\n> > > if (change->data.tp.oldtuple)\n> > > {\n> > > Assert(action == REORDER_BUFFER_CHANGE_UPDATE || action ==\n> > > REORDER_BUFFER_CHANGE_DELETE);\n> > > ...\n> > >\n> >\n> > It might be fine but I am not sure if it's necessary to add this in this\n> > patch as we don't have such assertion before.\n>\n> The Asserts are just for sanity and self-documentation regarding what\n> actions can get into this logic. IMO including them does no harm,\n> rather it does some small amount of good, so why not do it?\n>\n> You can't really use the fact they were not there before as a reason\n> to not add them now -- There were no Asserts in the original code\n> because this same logic was duplicated multiple times and was always\n> within obvious scope of a particular switch (action) case:\n>\n\nI see your point but like Hou-San I am also not really sure if these\nnew Asserts will be better. The patch looks good to me, so will push\nin some time.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Mar 2023 11:21:41 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify some codes in pgoutput"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on something else, I noticed that the “if (entry->conn\n== NULL)” test after doing disconnect_pg_server() when re-establishing\na given connection in GetConnection() is pointless, because the former\nfunction ensures that entry->conn is NULL. So I removed the if-test.\nAttached is a patch for that. I think we could instead add an\nassertion, but I did not, because we already have it in\nmake_new_connection().\n\nThis would be harmless, so I am planning to apply the patch to HEAD only.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Wed, 15 Mar 2023 19:18:41 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres_fdw: Useless if-test in GetConnection()"
},
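For readers without the sources at hand, the redundancy looks roughly like this. disconnect_pg_server() is quoted from memory and the GetConnection() fragment is only schematic, so treat both as a sketch rather than the exact contrib/postgres_fdw code.

/* Simplified sketch of contrib/postgres_fdw/connection.c; not verbatim. */
static void
disconnect_pg_server(ConnCacheEntry *entry)
{
	if (entry->conn != NULL)
	{
		PQfinish(entry->conn);
		entry->conn = NULL;		/* entry->conn is always NULL afterwards */
	}
}

	/* ... later, in GetConnection(), when re-establishing a connection ... */
	disconnect_pg_server(entry);

	/*
	 * The "if (entry->conn == NULL)" test that used to follow here can never
	 * be false, which is why the patch simply drops it.
	 */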
{
"msg_contents": "On Wed, Mar 15, 2023 at 6:18 PM Etsuro Fujita <[email protected]>\nwrote:\n\n> While working on something else, I noticed that the “if (entry->conn\n> == NULL)” test after doing disconnect_pg_server() when re-establishing\n> a given connection in GetConnection() is pointless, because the former\n> function ensures that entry->conn is NULL. So I removed the if-test.\n> Attached is a patch for that. I think we could instead add an\n> assertion, but I did not, because we already have it in\n> make_new_connection().\n\n\n+1. Good catch.\n\nThanks\nRichard\n\nOn Wed, Mar 15, 2023 at 6:18 PM Etsuro Fujita <[email protected]> wrote:\nWhile working on something else, I noticed that the “if (entry->conn\n== NULL)” test after doing disconnect_pg_server() when re-establishing\na given connection in GetConnection() is pointless, because the former\nfunction ensures that entry->conn is NULL. So I removed the if-test.\nAttached is a patch for that. I think we could instead add an\nassertion, but I did not, because we already have it in\nmake_new_connection().+1. Good catch.ThanksRichard",
"msg_date": "Wed, 15 Mar 2023 18:40:00 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: Useless if-test in GetConnection()"
},
{
"msg_contents": "> On 15 Mar 2023, at 11:18, Etsuro Fujita <[email protected]> wrote:\n\n> While working on something else, I noticed that the “if (entry->conn\n> == NULL)” test after doing disconnect_pg_server() when re-establishing\n> a given connection in GetConnection() is pointless, because the former\n> function ensures that entry->conn is NULL. So I removed the if-test.\n> Attached is a patch for that.\n\nLGTM, nice catch.\n\n> I think we could instead add an assertion, but I did not, because we already\n> have it in make_new_connection().\n\nAgreed, the assertion in make_new_connection is enough (and is needed there).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 15 Mar 2023 11:58:33 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: Useless if-test in GetConnection()"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 7:40 PM Richard Guo <[email protected]> wrote:\n> On Wed, Mar 15, 2023 at 6:18 PM Etsuro Fujita <[email protected]> wrote:\n>> While working on something else, I noticed that the “if (entry->conn\n>> == NULL)” test after doing disconnect_pg_server() when re-establishing\n>> a given connection in GetConnection() is pointless, because the former\n>> function ensures that entry->conn is NULL. So I removed the if-test.\n>> Attached is a patch for that. I think we could instead add an\n>> assertion, but I did not, because we already have it in\n>> make_new_connection().\n\n> +1. Good catch.\n\nCool! Thanks for looking!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 16 Mar 2023 18:25:46 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: Useless if-test in GetConnection()"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 7:58 PM Daniel Gustafsson <[email protected]> wrote:\n> > On 15 Mar 2023, at 11:18, Etsuro Fujita <[email protected]> wrote:\n> > While working on something else, I noticed that the “if (entry->conn\n> > == NULL)” test after doing disconnect_pg_server() when re-establishing\n> > a given connection in GetConnection() is pointless, because the former\n> > function ensures that entry->conn is NULL. So I removed the if-test.\n> > Attached is a patch for that.\n>\n> LGTM, nice catch.\n>\n> > I think we could instead add an assertion, but I did not, because we already\n> > have it in make_new_connection().\n>\n> Agreed, the assertion in make_new_connection is enough (and is needed there).\n\nGreat! Thanks for looking!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 16 Mar 2023 18:28:41 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: Useless if-test in GetConnection()"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 7:18 PM Etsuro Fujita <[email protected]> wrote:\n> This would be harmless, so I am planning to apply the patch to HEAD only.\n\nI forgot to mention that this was added in v14. Done that way.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 17 Mar 2023 18:28:20 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: Useless if-test in GetConnection()"
}
] |
[
{
"msg_contents": "Hi Everyone,\r\n\r\nI am working on the initial schema sync for Logical replication. Currently, user have to\r\nmanually create a schema on subscriber side. Aim of this feature is to add an option in\r\ncreate subscription, so that schema sync can be automatic. I am sharing Design Doc below,\r\nbut there are some corner cases where the design does not work. Please share your opinion\r\nif design can be improved and we can get rid of corner cases. This design is loosely based\r\non Pglogical.\r\nDDL replication is required for this feature.\r\n(https://www.postgresql.org/message-id/flat/CAAD30U%2BpVmfKwUKy8cbZOnUXyguJ-uBNejwD75Kyo%3DOjdQGJ9g%40mail.gmail.com)\r\n\r\nSQL Changes:-\r\nCREATE SUBSCRIPTION subscription_name\r\nCONNECTION 'conninfo'\r\nPUBLICATION publication_name [, ...]\r\n[ WITH ( subscription_parameter [= value] [, ... ] ) ]\r\nsync_initial_schema (enum) will be added to subscription_parameter.\r\nIt can have 3 values:-\r\nTABLES, ALL , NONE (Default)\r\nIn ALL everything will be synced including global objects too.\r\n\r\nRestrictions :- sync_initial_schema=ALL can only be used for publication with FOR ALL TABLES\r\n\r\nDesign:-\r\n\r\nPublisher :-\r\nPublisher have to implement `SHOW CREATE TABLE_NAME`, this table definition will be used by\r\nsubscriber to create exact schema of a table on the subscriber. One alternative to this can\r\nbe doing it on the subscriber side itself, we can create a function similar to\r\ndescribeOneTableDetails and call it on the subscriber. We also need maintain same ownership\r\nas of publisher.\r\n\r\nIt should also have turned on publication of DDL commands.\r\n\r\nSubscriber :-\r\n\r\n1. In CreateSubscription() when we create replication slot(walrcv_create_slot()), should\r\nuse CRS_EXPORT_SNAPSHOT, So that we can use this snapshot later in the pg_dump.\r\n\r\n2. Now we can call pg_dump with above snapshot from CreateSubscription. This is inside\r\nopts.connect && opts.create_slot if statement. If we fail in this step we have to drop\r\nthe replication slot and create a new one again. Because we need snapshot and creating a\r\nreplication slot is a way to get snapshot. The reason for running pg_dump with above\r\nsnapshot is that we don't want execute DDLs in wal_logs to 2 times. With above snapshot we\r\nget a state of database which is before the replication slot origin and any changes after\r\nthe snapshot will be in wal_logs.\r\n\r\nWe will save the pg_dump into a file (custom archive format). So pg_dump will be similar to\r\npg_dump --connection_string --schema_only --snapshot=xyz -Fc --file initSchema\r\n\r\nIf sync_initial_schema=TABLES we dont have to call pg_dump/restore at all. TableSync process\r\nwill take care of it.\r\n\r\n3. If we have to sync global objects we need to call pg_dumpall --globals-only also. But pg_dumpall\r\ndoes not support --snapshot option, So if user creates a new global object between creation\r\nof replication slot and running pg_dumpall, that above global object will be created 2\r\ntimes on subscriber , which will error out the Applier process.\r\n\r\n4. walrcv_disconnect should be called after pg_dump is finished, otherwise snapshot will\r\nnot be valid.\r\n\r\n5. Users will replication role cant not call pg_dump , So the replication user have to\r\nsuperuser. 
This is a a major problem.\r\npostgres=# create role s4 WITH LOGIN Replication;\r\nCREATE ROLE\r\n╭─sachin@DUB-1800550165 ~\r\n╰─$ pg_dump postgres -s -U s4 1 ↵\r\npg_dump: error: query failed: ERROR: permission denied for table t1\r\npg_dump: detail: Query was: LOCK TABLE public.t1, public.t2 IN ACCESS SHARE MODE\r\n\r\n6. pg_subscription_rel table column srsubstate will have one more state\r\nSUBREL_STATE_CREATE 'c'. if sync_initial_schema is enabled we will set table_state to 'c'.\r\nAbove 6 steps will be done even if subscription is not enabled, but connect is true.\r\n\r\n7. Leader Applier process should check if initSync file exist , if true then it should\r\ncall pg_restore. We are not using —pre-data and —post-data segment as it is used in\r\nPglogical, Because post_data works on table having data , but we will fill the data into\r\ntable on later stages. pg_restore can be called like this\r\n\r\npg_restore --connection_string -1 file_name\r\n-1 option will execute every command inside of one transaction. If there is any error\r\neverything will be rollbacked.\r\npg_restore should be called quite early in the Applier process code, before any tablesync\r\nprocess can be created.\r\nInstead of checking if file exist maybe pg_subscription table can be extended with column\r\nSyncInitialSchema and applier process will check SyncInitialSchema == SYNC_PENDING\r\n\r\n8. TableSync process should check the state of table , if it is SUBREL_STATE_CREATE it should\r\nget the latest definition from the publisher and recreate the table. (We have to recreate\r\nthe table even if there are no changes). Then it should go into copy table mode as usual.\r\n\r\nIt might seem that TableSync is doing duplicate work already done by pg_restore. We are doing\r\nit in this way because of concurrent DDLs and refresh publication command.\r\n\r\nConcurrent DDL :-\r\nUser can execute a DDL command to table t1 at the same time when subscriber is trying to sync\r\nit. pictorial representation https://imgur.com/a/ivrIEv8 [1]\r\n\r\nIn tablesync process, it makes a connection to the publisher and it sees the\r\ntable state which can be in future wrt to the publisher, which can introduce conflicts.\r\nFor example:-\r\n\r\nCASE 1:- { Publisher removed the column b from the table t1 when subscriber was doing pg_restore\r\n(or any point in concurrent DDL window described in picture [1] ), when tableSync\r\nprocess will start transaction on the publisher it will see request data of table t1\r\nincluding column b, which does not exist on the publisher.} So that is why tableSync process\r\nasks for the latest definition.\r\nIf we say that we will delay tableSync worker till all the DDL related to table t1 is\r\napplied by the applier process , we can still have a window when publisher issues a DDL\r\ncommand just before tableSync starts its transaction, and therefore making tableSync and\r\npublisher table definition incompatible (Thanks to Masahiko for pointing out this race\r\ncondition).\r\n\r\nApplier process will skip all DDL/DMLs related to the table t1 and tableSync will apply those\r\nin Catchup phase.\r\nAlthough there is one issue what will happen to views/ or functions which depend on the table\r\n. 
I think they should wait till table_state is > SUBREL_STATE_CREATE (means we have the latest\r\nschema definition from the publisher).\r\nThere might be corner cases to this approach or maybe a better way to handle concurrent DDL\r\nOne simple solution might be to disallow DDLs on the publisher till all the schema is\r\nsynced and all tables have state >= SUBREL_STATE_DATASYNC (We can have CASE 1: issue ,\r\neven with DDL replication, so we have to wait till all the tables have table_state\r\n> SUBREL_STATE_DATASYNC). Which might be a big window for big databases.\r\n\r\n\r\nRefresh publication :-\r\nIn refresh publication, subscriber does create a new replication slot hence , we can’t run\r\npg_dump with a snapshot which starts from origin(maybe this is not an issue at all). In this case\r\nit makes more sense for tableSync worker to do schema sync.\r\n\r\n\r\nIf community is happy with above design, I can start working on prototype.\r\n\r\nCredits :- This design is inspired by Pglogical. Also thanks to Zane, Masahiko, Amit for reviewing earlier designs\r\n\r\nRegards\r\nSachin Kumar\r\nAmazon Web Services",
"msg_date": "Wed, 15 Mar 2023 17:42:32 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Initial Schema Sync for Logical Replication"
},
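To make steps 1-2 and 4 of the design above a bit more concrete, here is a hedged sketch of how CreateSubscription() might be changed. walrcv_create_slot() and CRS_EXPORT_SNAPSHOT exist today, but the surrounding variable names are approximations and run_schema_dump() is a purely hypothetical helper standing in for the pg_dump invocation described in the message.

/*
 * Hedged sketch of steps 1-2 of the proposed design, loosely based on
 * CreateSubscription() in src/backend/commands/subscriptioncmds.c.
 * run_schema_dump() is hypothetical.
 */
char	   *snapshot_name;
XLogRecPtr	lsn;

/* Step 1: create the slot, exporting its snapshot instead of discarding it. */
snapshot_name = walrcv_create_slot(wrconn, opts.slot_name,
								   false,	/* permanent slot */
								   twophase_enabled,
								   CRS_EXPORT_SNAPSHOT, &lsn);

/*
 * Step 2: dump the schema as of that snapshot, i.e. something like
 *   pg_dump "<conninfo>" --schema-only --snapshot=<snapshot_name> -Fc -f initSchema
 *
 * Step 4: the exported snapshot is only valid while the walsender session
 * that created the slot is still around, so walrcv_disconnect() has to be
 * delayed until the dump has finished.
 */
run_schema_dump(conninfo, snapshot_name);	/* hypothetical helper */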
{
"msg_contents": "Hi,\n\nI have a couple of questions.\n\nQ1.\n\nWhat happens if the subscriber already has some tables present? For\nexample, I did not see the post saying anything like \"Only if the\ntable does not already exist then it will be created\".\n\nOn the contrary, the post seemed to say SUBREL_STATE_CREATE 'c' would\n*always* be set when this subscriber mode is enabled. And then it\nseemed to say the table would *always* get re-created by the tablesync\nin this new mode.\n\nWon't this cause problems\n- if the user wanted a slightly different subscriber-side table? (eg\nsome extra columns on the subscriber-side table)\n- if there was some pre-existing table data on the subscriber-side\ntable that you now are about to re-create and clobber?\n\nOr does the idea intend that the CREATE TABLE DDL that will be\nexecuted is like \"CREATE TABLE ... IF NOT EXISTS\"?\n\n~~~\n\nQ2.\n\nThe post says. \"DDL replication is required for this feature\". And \"It\nshould also have turned on publication of DDL commands.\"\n\nIt wasn't entirely clear to me why those must be a requirement. Is\nthat just to make implementation easier?\n\nSure, I see that the idea might have some (or maybe a lot?) of common\ninternal code with the table DDL replication work, but OTOH an\nauto-create feature for subscriber tables seems like it might be a\nuseful feature to have regardless of the value of the publication\n'ddl' parameter.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Thu, 16 Mar 2023 15:18:07 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "Hi Peter,\r\n\r\n> Hi,\r\n> \r\n> I have a couple of questions.\r\n> \r\n> Q1.\r\n> \r\n> What happens if the subscriber already has some tables present? For\r\n> example, I did not see the post saying anything like \"Only if the table does\r\n> not already exist then it will be created\".\r\n> \r\nMy assumption was the if subscriber is doing initial schema sync , It does not have\r\nany conflicting database objects.\r\n> On the contrary, the post seemed to say SUBREL_STATE_CREATE 'c' would\r\n> *always* be set when this subscriber mode is enabled. And then it seemed\r\n> to say the table would *always* get re-created by the tablesync in this new\r\n> mode.\r\nRight\r\n> \r\n> Won't this cause problems\r\n> - if the user wanted a slightly different subscriber-side table? (eg some extra\r\n> columns on the subscriber-side table)\r\n> - if there was some pre-existing table data on the subscriber-side table that\r\n> you now are about to re-create and clobber?\r\n> \r\n> Or does the idea intend that the CREATE TABLE DDL that will be executed is\r\n> like \"CREATE TABLE ... IF NOT EXISTS\"?\r\n> \r\npg_dump does not support --if-not-exists , But I think it can be added and we get a\r\ndump with IF NOT EXISTS.\r\nOn subscriber side we get table OID list, we can use this change table_state\r\n= SUBREL_STATE_INIT so that it won't be recreated. \r\n> ~~~\r\n> \r\n> Q2.\r\n> \r\n> The post says. \"DDL replication is required for this feature\". And \"It should\r\n> also have turned on publication of DDL commands.\"\r\n> \r\n> It wasn't entirely clear to me why those must be a requirement. Is that just to\r\n> make implementation easier?\r\nDDL replication is needed to facilitate concurrent DDL, so that we don’t have to\r\nworry about schema change at the same time when subscriber is initializing.\r\nif we can block publisher so that it does not do DDLs or subscriber can simple\r\nerror out if it sees conflicting table information , then we don’t need to use DDL\r\nreplication. \r\nRegards\r\nSachin\r\n> \r\n> Sure, I see that the idea might have some (or maybe a lot?) of common\r\n> internal code with the table DDL replication work, but OTOH an auto-create\r\n> feature for subscriber tables seems like it might be a useful feature to have\r\n> regardless of the value of the publication 'ddl' parameter.\r\n> \r\n> ------\r\n> Kind Regards,\r\n> Peter Smith.\r\n> Fujitsu Australia.\r\n",
"msg_date": "Thu, 16 Mar 2023 16:56:38 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On 2023-Mar-15, Kumar, Sachin wrote:\n\n> 1. In CreateSubscription() when we create replication slot(walrcv_create_slot()), should\n> use CRS_EXPORT_SNAPSHOT, So that we can use this snapshot later in the pg_dump.\n> \n> 2. Now we can call pg_dump with above snapshot from CreateSubscription.\n\nOverall I'm not on board with the idea that logical replication would\ndepend on pg_dump; that seems like it could run into all sorts of\ntrouble (what if calling external binaries requires additional security\nsetup? what about pg_hba connection requirements? what about\nmax_connections in tight circumstances?).\n\nIt would be much better, I think, to handle this internally in the\npublisher instead: similar to how DDL sync would work, except it'd\nsomehow generate the CREATE statements from the existing tables instead\nof waiting for DDL events to occur. I grant that this does require\nwriting a bunch of new code for each object type, a lot of which would\nduplicate the pg_dump logic, but it would probably be a lot more robust.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sat, 18 Mar 2023 13:06:38 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 10:27 PM Kumar, Sachin <[email protected]> wrote:\n>\n> > Hi,\n> >\n> > I have a couple of questions.\n> >\n> > Q1.\n> >\n> > What happens if the subscriber already has some tables present? For\n> > example, I did not see the post saying anything like \"Only if the table does\n> > not already exist then it will be created\".\n> >\n> My assumption was the if subscriber is doing initial schema sync , It does not have\n> any conflicting database objects.\n>\n\nCan't we simply error out in such a case with \"obj already exists\"?\nThis would be similar to how we deal with conflicting rows with\nunique/primary keys.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Mar 2023 09:51:38 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "Hi Amit,\r\n\r\n> From: Amit Kapila <[email protected]>\r\n> > > Hi,\r\n> > >\r\n> > > I have a couple of questions.\r\n> > >\r\n> > > Q1.\r\n> > >\r\n> > > What happens if the subscriber already has some tables present? For\r\n> > > example, I did not see the post saying anything like \"Only if the\r\n> > > table does not already exist then it will be created\".\r\n> > >\r\n> > My assumption was the if subscriber is doing initial schema sync , It\r\n> > does not have any conflicting database objects.\r\n> >\r\n> \r\n> Can't we simply error out in such a case with \"obj already exists\"?\r\n> This would be similar to how we deal with conflicting rows with unique/primary\r\n> keys.\r\nRight this is the default behaviour , We will run pg_restore with --single_transaction,\r\nSo if we get error while executing a create table the whole pg_restore will fail and \r\nuser will notified. \r\nRegards\r\nSachin\r\n",
"msg_date": "Mon, 20 Mar 2023 12:46:54 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "Hi Alvaro,\r\n\r\n> From: Alvaro Herrera <[email protected]>\r\n> Subject: RE: [EXTERNAL]Initial Schema Sync for Logical Replication\r\n> On 2023-Mar-15, Kumar, Sachin wrote:\r\n> \r\n> > 1. In CreateSubscription() when we create replication\r\n> > slot(walrcv_create_slot()), should use CRS_EXPORT_SNAPSHOT, So that we\r\n> can use this snapshot later in the pg_dump.\r\n> >\r\n> > 2. Now we can call pg_dump with above snapshot from CreateSubscription.\r\n> \r\n> Overall I'm not on board with the idea that logical replication would depend on\r\n> pg_dump; that seems like it could run into all sorts of trouble (what if calling\r\n> external binaries requires additional security setup? what about pg_hba\r\n> connection requirements? what about max_connections in tight\r\n> circumstances?).\r\n> what if calling external binaries requires additional security setup\r\nI am not sure what kind of security restriction would apply in this case, maybe pg_dump\r\nbinary can be changed ? \r\n> what about pg_hba connection requirements?\r\nWe will use the same connection string which subscriber process uses to connect to\r\nthe publisher.\r\n>what about max_connections in tight circumstances?\r\nRight that might be a issue, but I don’t think it will be a big issue, We will create dump\r\nof database in CreateSubscription() function itself , So before tableSync process even starts\r\nif we have reached max_connections while calling pg_dump itself , tableSync wont be successful.\r\n> It would be much better, I think, to handle this internally in the publisher instead:\r\n> similar to how DDL sync would work, except it'd somehow generate the CREATE\r\n> statements from the existing tables instead of waiting for DDL events to occur. I\r\n> grant that this does require writing a bunch of new code for each object type, a\r\n> lot of which would duplicate the pg_dump logic, but it would probably be a lot\r\n> more robust.\r\nAgree , But we might have a lots of code duplication essentially almost all of pg_dump\r\nCode needs to be duplicated, which might cause issue when modifying/adding new\r\nDDLs. \r\nI am not sure but if it's possible to move dependent code of pg_dump to common/ folder\r\n, to avoid duplication.\r\n\r\nRegards\r\nSachin\r\n",
"msg_date": "Tue, 21 Mar 2023 01:10:06 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Mon, Mar 20, 2023, at 10:10 PM, Kumar, Sachin wrote:\n> > From: Alvaro Herrera <[email protected]>\n> > Subject: RE: [EXTERNAL]Initial Schema Sync for Logical Replication\n> > On 2023-Mar-15, Kumar, Sachin wrote:\n> > \n> > > 1. In CreateSubscription() when we create replication\n> > > slot(walrcv_create_slot()), should use CRS_EXPORT_SNAPSHOT, So that we\n> > can use this snapshot later in the pg_dump.\n> > >\n> > > 2. Now we can call pg_dump with above snapshot from CreateSubscription.\n> > \n> > Overall I'm not on board with the idea that logical replication would depend on\n> > pg_dump; that seems like it could run into all sorts of trouble (what if calling\n> > external binaries requires additional security setup? what about pg_hba\n> > connection requirements? what about max_connections in tight\n> > circumstances?).\n> > what if calling external binaries requires additional security setup\n> I am not sure what kind of security restriction would apply in this case, maybe pg_dump\n> binary can be changed ? \nUsing pg_dump as part of this implementation is not acceptable because we\nexpect the backend to be decoupled from the client. Besides that, pg_dump\nprovides all table dependencies (such as tablespaces, privileges, security\nlabels, comments); not all dependencies shouldn't be replicated. You should\nexclude them removing these objects from the TOC before running pg_restore or\nadding a few pg_dump options to exclude these objects. Another issue is related\nto different version. Let's say the publisher has a version ahead of the\nsubscriber version, a new table syntax can easily break your logical\nreplication setup. IMO pg_dump doesn't seem like a good solution for initial\nsynchronization.\n\nInstead, the backend should provide infrastructure to obtain the required DDL\ncommands for the specific (set of) tables. This can work around the issues from\nthe previous paragraph:\n\n* you can selectively choose dependencies;\n* don't require additional client packages;\n* don't need to worry about different versions.\n\nThis infrastructure can also be useful for other use cases such as:\n\n* client tools that provide create commands (such as psql, pgAdmin);\n* other logical replication solutions;\n* other logical backup solutions.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Mon, Mar 20, 2023, at 10:10 PM, Kumar, Sachin wrote:> From: Alvaro Herrera <[email protected]>> Subject: RE: [EXTERNAL]Initial Schema Sync for Logical Replication> On 2023-Mar-15, Kumar, Sachin wrote:> > > 1. In CreateSubscription() when we create replication> > slot(walrcv_create_slot()), should use CRS_EXPORT_SNAPSHOT, So that we> can use this snapshot later in the pg_dump.> >> > 2. Now we can call pg_dump with above snapshot from CreateSubscription.> > Overall I'm not on board with the idea that logical replication would depend on> pg_dump; that seems like it could run into all sorts of trouble (what if calling> external binaries requires additional security setup? what about pg_hba> connection requirements? what about max_connections in tight> circumstances?).> what if calling external binaries requires additional security setupI am not sure what kind of security restriction would apply in this case, maybe pg_dumpbinary can be changed ? Using pg_dump as part of this implementation is not acceptable because weexpect the backend to be decoupled from the client. 
Besides that, pg_dumpprovides all table dependencies (such as tablespaces, privileges, securitylabels, comments); not all dependencies shouldn't be replicated. You shouldexclude them removing these objects from the TOC before running pg_restore oradding a few pg_dump options to exclude these objects. Another issue is relatedto different version. Let's say the publisher has a version ahead of thesubscriber version, a new table syntax can easily break your logicalreplication setup. IMO pg_dump doesn't seem like a good solution for initialsynchronization.Instead, the backend should provide infrastructure to obtain the required DDLcommands for the specific (set of) tables. This can work around the issues fromthe previous paragraph:* you can selectively choose dependencies;* don't require additional client packages;* don't need to worry about different versions.This infrastructure can also be useful for other use cases such as:* client tools that provide create commands (such as psql, pgAdmin);* other logical replication solutions;* other logical backup solutions.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 20 Mar 2023 23:01:32 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
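As a purely illustrative sketch of the "backend infrastructure" idea above, a SQL-callable function that deparses a relation into its CREATE command might look something like the following. Nothing named pg_get_tabledef() exists in core today; the name, signature and deparsing step are assumptions used only to show the shape of the proposal.

/*
 * Hypothetical sketch only: pg_get_tabledef() is not a core function.
 * The real work would be deparsing pg_class/pg_attribute (and related
 * catalogs) into a version-appropriate CREATE TABLE command.
 */
Datum
pg_get_tabledef(PG_FUNCTION_ARGS)
{
	Oid			relid = PG_GETARG_OID(0);
	StringInfoData buf;

	initStringInfo(&buf);

	/* Deparse the catalog state for relid into a CREATE TABLE command. */
	appendStringInfo(&buf, "CREATE TABLE %s (/* deparsed column list */);",
					 get_rel_name(relid));

	PG_RETURN_TEXT_P(cstring_to_text(buf.data));
}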
{
"msg_contents": "On Tue, Mar 21, 2023 at 7:32 AM Euler Taveira <[email protected]> wrote:\n>\n> On Mon, Mar 20, 2023, at 10:10 PM, Kumar, Sachin wrote:\n>\n> > From: Alvaro Herrera <[email protected]>\n> > Subject: RE: [EXTERNAL]Initial Schema Sync for Logical Replication\n> > On 2023-Mar-15, Kumar, Sachin wrote:\n> >\n> > > 1. In CreateSubscription() when we create replication\n> > > slot(walrcv_create_slot()), should use CRS_EXPORT_SNAPSHOT, So that we\n> > can use this snapshot later in the pg_dump.\n> > >\n> > > 2. Now we can call pg_dump with above snapshot from CreateSubscription.\n> >\n> > Overall I'm not on board with the idea that logical replication would depend on\n> > pg_dump; that seems like it could run into all sorts of trouble (what if calling\n> > external binaries requires additional security setup? what about pg_hba\n> > connection requirements? what about max_connections in tight\n> > circumstances?).\n> > what if calling external binaries requires additional security setup\n> I am not sure what kind of security restriction would apply in this case, maybe pg_dump\n> binary can be changed ?\n>\n> Using pg_dump as part of this implementation is not acceptable because we\n> expect the backend to be decoupled from the client. Besides that, pg_dump\n> provides all table dependencies (such as tablespaces, privileges, security\n> labels, comments); not all dependencies shouldn't be replicated.\n>\n\nI agree that in the initial version we may not support sync of all\nobjects but why that shouldn't be possible in the later versions?\n\n> You should\n> exclude them removing these objects from the TOC before running pg_restore or\n> adding a few pg_dump options to exclude these objects. Another issue is related\n> to different version. Let's say the publisher has a version ahead of the\n> subscriber version, a new table syntax can easily break your logical\n> replication setup. IMO pg_dump doesn't seem like a good solution for initial\n> synchronization.\n>\n> Instead, the backend should provide infrastructure to obtain the required DDL\n> commands for the specific (set of) tables. This can work around the issues from\n> the previous paragraph:\n>\n...\n> * don't need to worry about different versions.\n>\n\nAFAICU some of the reasons why pg_dump is not allowed to dump from the\nnewer version are as follows: (a) there could be more columns in the\nnewer version of the system catalog and then Select * type of stuff\nwon't work because the client won't have knowledge of additional\ncolumns. (b) the newer version could have new features (represented by\nsay new columns in existing catalogs or new catalogs) that the older\nversion of pg_dump has no knowledge of and will fail to get that data\nand hence an inconsistent dump. The subscriber will easily be not in\nsync due to that.\n\nNow, how do we avoid these problems even if we have our own version of\nfunctionality similar to pg_dump for selected objects? I guess we will\nface similar problems. If so, we may need to deny schema sync in any\nsuch case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Mar 2023 16:48:30 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 8:18 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Mar 21, 2023 at 7:32 AM Euler Taveira <[email protected]> wrote:\n> >\n> > On Mon, Mar 20, 2023, at 10:10 PM, Kumar, Sachin wrote:\n> >\n> > > From: Alvaro Herrera <[email protected]>\n> > > Subject: RE: [EXTERNAL]Initial Schema Sync for Logical Replication\n> > > On 2023-Mar-15, Kumar, Sachin wrote:\n> > >\n> > > > 1. In CreateSubscription() when we create replication\n> > > > slot(walrcv_create_slot()), should use CRS_EXPORT_SNAPSHOT, So that we\n> > > can use this snapshot later in the pg_dump.\n> > > >\n> > > > 2. Now we can call pg_dump with above snapshot from CreateSubscription.\n> > >\n> > > Overall I'm not on board with the idea that logical replication would depend on\n> > > pg_dump; that seems like it could run into all sorts of trouble (what if calling\n> > > external binaries requires additional security setup? what about pg_hba\n> > > connection requirements? what about max_connections in tight\n> > > circumstances?).\n> > > what if calling external binaries requires additional security setup\n> > I am not sure what kind of security restriction would apply in this case, maybe pg_dump\n> > binary can be changed ?\n> >\n> > Using pg_dump as part of this implementation is not acceptable because we\n> > expect the backend to be decoupled from the client. Besides that, pg_dump\n> > provides all table dependencies (such as tablespaces, privileges, security\n> > labels, comments); not all dependencies shouldn't be replicated.\n> >\n>\n> I agree that in the initial version we may not support sync of all\n> objects but why that shouldn't be possible in the later versions?\n>\n> > You should\n> > exclude them removing these objects from the TOC before running pg_restore or\n> > adding a few pg_dump options to exclude these objects. Another issue is related\n> > to different version. Let's say the publisher has a version ahead of the\n> > subscriber version, a new table syntax can easily break your logical\n> > replication setup. IMO pg_dump doesn't seem like a good solution for initial\n> > synchronization.\n> >\n> > Instead, the backend should provide infrastructure to obtain the required DDL\n> > commands for the specific (set of) tables. This can work around the issues from\n> > the previous paragraph:\n> >\n> ...\n> > * don't need to worry about different versions.\n> >\n>\n> AFAICU some of the reasons why pg_dump is not allowed to dump from the\n> newer version are as follows: (a) there could be more columns in the\n> newer version of the system catalog and then Select * type of stuff\n> won't work because the client won't have knowledge of additional\n> columns. (b) the newer version could have new features (represented by\n> say new columns in existing catalogs or new catalogs) that the older\n> version of pg_dump has no knowledge of and will fail to get that data\n> and hence an inconsistent dump. The subscriber will easily be not in\n> sync due to that.\n>\n> Now, how do we avoid these problems even if we have our own version of\n> functionality similar to pg_dump for selected objects? I guess we will\n> face similar problems.\n\nRight. I think that such functionality needs to return DDL commands\nthat can be executed on the requested version.\n\n> If so, we may need to deny schema sync in any such case.\n\nYes. 
Do we have any concrete use case where the subscriber is an older\nversion, in the first place?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 22 Mar 2023 11:58:54 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 8:29 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Mar 21, 2023 at 8:18 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Mar 21, 2023 at 7:32 AM Euler Taveira <[email protected]> wrote:\n> >\n> > > You should\n> > > exclude them removing these objects from the TOC before running pg_restore or\n> > > adding a few pg_dump options to exclude these objects. Another issue is related\n> > > to different version. Let's say the publisher has a version ahead of the\n> > > subscriber version, a new table syntax can easily break your logical\n> > > replication setup. IMO pg_dump doesn't seem like a good solution for initial\n> > > synchronization.\n> > >\n> > > Instead, the backend should provide infrastructure to obtain the required DDL\n> > > commands for the specific (set of) tables. This can work around the issues from\n> > > the previous paragraph:\n> > >\n> > ...\n> > > * don't need to worry about different versions.\n> > >\n> >\n> > AFAICU some of the reasons why pg_dump is not allowed to dump from the\n> > newer version are as follows: (a) there could be more columns in the\n> > newer version of the system catalog and then Select * type of stuff\n> > won't work because the client won't have knowledge of additional\n> > columns. (b) the newer version could have new features (represented by\n> > say new columns in existing catalogs or new catalogs) that the older\n> > version of pg_dump has no knowledge of and will fail to get that data\n> > and hence an inconsistent dump. The subscriber will easily be not in\n> > sync due to that.\n> >\n> > Now, how do we avoid these problems even if we have our own version of\n> > functionality similar to pg_dump for selected objects? I guess we will\n> > face similar problems.\n>\n> Right. I think that such functionality needs to return DDL commands\n> that can be executed on the requested version.\n>\n> > If so, we may need to deny schema sync in any such case.\n>\n> Yes. Do we have any concrete use case where the subscriber is an older\n> version, in the first place?\n>\n\nAs per my understanding, it is mostly due to the reason that it can\nwork today. Today, during an off-list discussion with Jonathan on this\npoint, he pointed me to a similar incompatibility in MySQL\nreplication. See the \"SQL incompatibilities\" section in doc[1]. Also,\nplease note that this applies not only to initial sync but also to\nschema sync during replication. I don't think it would be feasible to\nkeep such cross-version compatibility for DDL replication.\n\nHaving said above, I don't intend that we must use pg_dump from the\nsubscriber for the purpose of initial sync. I think the idea at this\nstage is to primarily write a POC patch to see what difficulties we\nmay face. The other options that we could try out are (a) try to\nduplicate parts of pg_dump code in some way (by extracting required\ncode) for the subscription's initial sync, or (b) have a common code\n(probably as a library or some other way) for the required\nfunctionality. There could be more possibilities that we may not have\nthought of yet. But the main point is that for approaches other than\nusing pg_dump, we should consider ways to avoid duplicity of various\nparts of its code. 
Due to this, I think before ruling out using\npg_dump, we should be clear about its risks and limitations.\n\nThoughts?\n\n[1] - https://dev.mysql.com/doc/refman/8.0/en/replication-compatibility.html\n[2] - https://www.postgresql.org/message-id/CAAD30U%2BpVmfKwUKy8cbZOnUXyguJ-uBNejwD75Kyo%3DOjdQGJ9g%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:46:28 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Wednesday, March 22, 2023 1:16 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Wed, Mar 22, 2023 at 8:29 AM Masahiko Sawada\r\n> <[email protected]> wrote:\r\n> >\r\n> > On Tue, Mar 21, 2023 at 8:18 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > On Tue, Mar 21, 2023 at 7:32 AM Euler Taveira <[email protected]> wrote:\r\n> > >\r\n> > > > You should\r\n> > > > exclude them removing these objects from the TOC before running\r\n> > > > pg_restore or adding a few pg_dump options to exclude these\r\n> > > > objects. Another issue is related to different version. Let's say\r\n> > > > the publisher has a version ahead of the subscriber version, a new\r\n> > > > table syntax can easily break your logical replication setup. IMO\r\n> > > > pg_dump doesn't seem like a good solution for initial synchronization.\r\n> > > >\r\n> > > > Instead, the backend should provide infrastructure to obtain the\r\n> > > > required DDL commands for the specific (set of) tables. This can\r\n> > > > work around the issues from the previous paragraph:\r\n> > > >\r\n> > > ...\r\n> > > > * don't need to worry about different versions.\r\n> > > >\r\n> > >\r\n> > > AFAICU some of the reasons why pg_dump is not allowed to dump from\r\n> > > the newer version are as follows: (a) there could be more columns in\r\n> > > the newer version of the system catalog and then Select * type of\r\n> > > stuff won't work because the client won't have knowledge of\r\n> > > additional columns. (b) the newer version could have new features\r\n> > > (represented by say new columns in existing catalogs or new\r\n> > > catalogs) that the older version of pg_dump has no knowledge of and\r\n> > > will fail to get that data and hence an inconsistent dump. The\r\n> > > subscriber will easily be not in sync due to that.\r\n> > >\r\n> > > Now, how do we avoid these problems even if we have our own version\r\n> > > of functionality similar to pg_dump for selected objects? I guess we\r\n> > > will face similar problems.\r\n> >\r\n> > Right. I think that such functionality needs to return DDL commands\r\n> > that can be executed on the requested version.\r\n> >\r\n> > > If so, we may need to deny schema sync in any such case.\r\n> >\r\n> > Yes. Do we have any concrete use case where the subscriber is an older\r\n> > version, in the first place?\r\n> >\r\n> \r\n> As per my understanding, it is mostly due to the reason that it can work today.\r\n> Today, during an off-list discussion with Jonathan on this point, he pointed me\r\n> to a similar incompatibility in MySQL replication. See the \"SQL\r\n> incompatibilities\" section in doc[1]. Also, please note that this applies not only\r\n> to initial sync but also to schema sync during replication. I don't think it would\r\n> be feasible to keep such cross-version compatibility for DDL replication.\r\n> \r\n> Having said above, I don't intend that we must use pg_dump from the\r\n> subscriber for the purpose of initial sync. I think the idea at this stage is to\r\n> primarily write a POC patch to see what difficulties we may face. The other\r\n> options that we could try out are (a) try to duplicate parts of pg_dump code in\r\n> some way (by extracting required\r\n> code) for the subscription's initial sync, or (b) have a common code (probably\r\n> as a library or some other way) for the required functionality. There could be\r\n> more possibilities that we may not have thought of yet. 
But the main point is\r\n> that for approaches other than using pg_dump, we should consider ways to\r\n> avoid duplicity of various parts of its code. Due to this, I think before ruling out\r\n> using pg_dump, we should be clear about its risks and limitations.\r\n\r\nI thought about some possible problems about the design of using pg_dump.\r\n\r\n1) According to the design, it will internally call pg_dump when creating\r\nsubscription, but it requires to use a powerful user when calling pg_dump.\r\nCurrently, it may not be a problem because create subscription also requires\r\nsuperuser. But people have recently discussed about allowing non-superuser to\r\ncreate the subscription[1], if that is accepted, then it seems not great to\r\ninternally use superuser to call pg_dump while the user creating the\r\nsubscription is a non-super user.\r\n\r\n2) I think it's possible that some cloud DB service doesn't allow user to use\r\nthe client commands(pg_dump ,..) directly, and the user that login in the\r\ndatabase may not have the permission to execute the client commands.\r\n\r\n[1] https://www.postgresql.org/message-id/flat/20230308194743.23rmgjgwahh4i4rg%40awork3.anarazel.de\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n",
"msg_date": "Wed, 22 Mar 2023 07:52:32 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "> From: Amit Kapila <[email protected]>\r\n> Sent: Wednesday, March 22, 2023 5:16 AM\r\n> To: Masahiko Sawada <[email protected]>\r\n> Cc: Euler Taveira <[email protected]>; Kumar, Sachin\r\n> <[email protected]>; Alvaro Herrera <[email protected]>; pgsql-\r\n> [email protected]; Jonathan S. Katz <[email protected]>\r\n> Subject: RE: [EXTERNAL]Initial Schema Sync for Logical Replication\r\n> \r\n> CAUTION: This email originated from outside of the organization. Do not click\r\n> links or open attachments unless you can confirm the sender and know the\r\n> content is safe.\r\n> \r\n> \r\n> \r\n> On Wed, Mar 22, 2023 at 8:29 AM Masahiko Sawada\r\n> <[email protected]> wrote:\r\n> >\r\n> > On Tue, Mar 21, 2023 at 8:18 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > On Tue, Mar 21, 2023 at 7:32 AM Euler Taveira <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > > You should\r\n> > > > exclude them removing these objects from the TOC before running\r\n> > > > pg_restore or adding a few pg_dump options to exclude these\r\n> > > > objects. Another issue is related to different version. Let's say\r\n> > > > the publisher has a version ahead of the subscriber version, a new\r\n> > > > table syntax can easily break your logical replication setup. IMO\r\n> > > > pg_dump doesn't seem like a good solution for initial synchronization.\r\n> > > >\r\n> > > > Instead, the backend should provide infrastructure to obtain the\r\n> > > > required DDL commands for the specific (set of) tables. This can\r\n> > > > work around the issues from the previous paragraph:\r\n> > > >\r\n> > > ...\r\n> > > > * don't need to worry about different versions.\r\n> > > >\r\n> > >\r\n> > > AFAICU some of the reasons why pg_dump is not allowed to dump from\r\n> > > the newer version are as follows: (a) there could be more columns in\r\n> > > the newer version of the system catalog and then Select * type of\r\n> > > stuff won't work because the client won't have knowledge of\r\n> > > additional columns. (b) the newer version could have new features\r\n> > > (represented by say new columns in existing catalogs or new\r\n> > > catalogs) that the older version of pg_dump has no knowledge of and\r\n> > > will fail to get that data and hence an inconsistent dump. The\r\n> > > subscriber will easily be not in sync due to that.\r\n> > >\r\n> > > Now, how do we avoid these problems even if we have our own version\r\n> > > of functionality similar to pg_dump for selected objects? I guess we\r\n> > > will face similar problems.\r\n> >\r\n> > Right. I think that such functionality needs to return DDL commands\r\n> > that can be executed on the requested version.\r\n> >\r\n> > > If so, we may need to deny schema sync in any such case.\r\n> >\r\n> > Yes. Do we have any concrete use case where the subscriber is an older\r\n> > version, in the first place?\r\n> >\r\n> \r\n> As per my understanding, it is mostly due to the reason that it can work\r\n> today. Today, during an off-list discussion with Jonathan on this point, he\r\n> pointed me to a similar incompatibility in MySQL replication. See the \"SQL\r\n> incompatibilities\" section in doc[1]. Also, please note that this applies not\r\n> only to initial sync but also to schema sync during replication. I don't think it\r\n> would be feasible to keep such cross-version compatibility for DDL\r\n> replication.\r\n> \r\n> Having said above, I don't intend that we must use pg_dump from the\r\n> subscriber for the purpose of initial sync. 
I think the idea at this stage is to\r\n> primarily write a POC patch to see what difficulties we may face. The other\r\n> options that we could try out are (a) try to duplicate parts of pg_dump code\r\n> in some way (by extracting required\r\n> code) for the subscription's initial sync, or (b) have a common code (probably\r\n> as a library or some other way) for the required functionality. There could be\r\n> more possibilities that we may not have thought of yet. But the main point is\r\n> that for approaches other than using pg_dump, we should consider ways to\r\n> avoid duplicity of various parts of its code. Due to this, I think before ruling\r\n> out using pg_dump, we should be clear about its risks and limitations.\r\n> \r\n> Thoughts?\r\nThere is one more thing which needs to be consider even if we use pg_dump/pg_restore\r\nWe still need to have a way to get the create table for tables , if we want to support\r\nconcurrent DDLs on the publisher.\r\n>8. TableSync process should check the state of table , if it is SUBREL_STATE_CREATE it should\r\n>get the latest definition from the publisher and recreate the table. (We have to recreate\r\n>the table even if there are no changes). Then it should go into copy table mode as usual.\r\nUnless there is different way to support concurrent DDLs or we going for blocking publisher\r\ntill initial sync is completed.\r\nRegards\r\nSachin\r\n> \r\n> [1] - https://dev.mysql.com/doc/refman/8.0/en/replication-\r\n> compatibility.html\r\n> [2] - https://www.postgresql.org/message-\r\n> id/CAAD30U%2BpVmfKwUKy8cbZOnUXyguJ-\r\n> uBNejwD75Kyo%3DOjdQGJ9g%40mail.gmail.com\r\n> \r\n> --\r\n> With Regards,\r\n> Amit Kapila.\r\n",
"msg_date": "Wed, 22 Mar 2023 10:55:33 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 2:16 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Mar 22, 2023 at 8:29 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Mar 21, 2023 at 8:18 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Tue, Mar 21, 2023 at 7:32 AM Euler Taveira <[email protected]> wrote:\n> > >\n> > > > You should\n> > > > exclude them removing these objects from the TOC before running pg_restore or\n> > > > adding a few pg_dump options to exclude these objects. Another issue is related\n> > > > to different version. Let's say the publisher has a version ahead of the\n> > > > subscriber version, a new table syntax can easily break your logical\n> > > > replication setup. IMO pg_dump doesn't seem like a good solution for initial\n> > > > synchronization.\n> > > >\n> > > > Instead, the backend should provide infrastructure to obtain the required DDL\n> > > > commands for the specific (set of) tables. This can work around the issues from\n> > > > the previous paragraph:\n> > > >\n> > > ...\n> > > > * don't need to worry about different versions.\n> > > >\n> > >\n> > > AFAICU some of the reasons why pg_dump is not allowed to dump from the\n> > > newer version are as follows: (a) there could be more columns in the\n> > > newer version of the system catalog and then Select * type of stuff\n> > > won't work because the client won't have knowledge of additional\n> > > columns. (b) the newer version could have new features (represented by\n> > > say new columns in existing catalogs or new catalogs) that the older\n> > > version of pg_dump has no knowledge of and will fail to get that data\n> > > and hence an inconsistent dump. The subscriber will easily be not in\n> > > sync due to that.\n> > >\n> > > Now, how do we avoid these problems even if we have our own version of\n> > > functionality similar to pg_dump for selected objects? I guess we will\n> > > face similar problems.\n> >\n> > Right. I think that such functionality needs to return DDL commands\n> > that can be executed on the requested version.\n> >\n> > > If so, we may need to deny schema sync in any such case.\n> >\n> > Yes. Do we have any concrete use case where the subscriber is an older\n> > version, in the first place?\n> >\n>\n> As per my understanding, it is mostly due to the reason that it can\n> work today. Today, during an off-list discussion with Jonathan on this\n> point, he pointed me to a similar incompatibility in MySQL\n> replication. See the \"SQL incompatibilities\" section in doc[1]. Also,\n> please note that this applies not only to initial sync but also to\n> schema sync during replication. I don't think it would be feasible to\n> keep such cross-version compatibility for DDL replication.\n\nMakes sense to me.\n\n> Having said above, I don't intend that we must use pg_dump from the\n> subscriber for the purpose of initial sync. I think the idea at this\n> stage is to primarily write a POC patch to see what difficulties we\n> may face. The other options that we could try out are (a) try to\n> duplicate parts of pg_dump code in some way (by extracting required\n> code) for the subscription's initial sync, or (b) have a common code\n> (probably as a library or some other way) for the required\n> functionality. There could be more possibilities that we may not have\n> thought of yet. But the main point is that for approaches other than\n> using pg_dump, we should consider ways to avoid duplicity of various\n> parts of its code. 
Due to this, I think before ruling out using\n> pg_dump, we should be clear about its risks and limitations.\n>\n> Thoughts?\n>\n\nAgreed. My biggest concern about approaches other than using pg_dump\nis the same; the code duplication that could increase the maintenance\ncosts. We should clarify what points of using pg_dump is not a good\nidea, and also analyze alternative ideas in depth.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 22 Mar 2023 22:47:10 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "> > Yes. Do we have any concrete use case where the subscriber is an older\n> > version, in the first place?\n> >\n>\n> As per my understanding, it is mostly due to the reason that it can\n> work today. Today, during an off-list discussion with Jonathan on this\n> point, he pointed me to a similar incompatibility in MySQL\n> replication. See the \"SQL incompatibilities\" section in doc[1]. Also,\n> please note that this applies not only to initial sync but also to\n> schema sync during replication. I don't think it would be feasible to\n> keep such cross-version compatibility for DDL replication.\n\nI think it's possible to make DDL replication cross-version\ncompatible, by making the DDL deparser version-aware: the deparsed\nJSON blob can have a PG version in it, and the destination server can\nprocess the versioned JSON blob by transforming anything incompatible\naccording to the original version and its own version.\n\nRegards,\nZane\n\n\n",
"msg_date": "Wed, 22 Mar 2023 14:04:15 -0400",
"msg_from": "Zheng Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Tue, Mar 21, 2023, at 8:18 AM, Amit Kapila wrote:\n> Now, how do we avoid these problems even if we have our own version of\n> functionality similar to pg_dump for selected objects? I guess we will\n> face similar problems. If so, we may need to deny schema sync in any\n> such case.\nThere are 2 approaches for initial DDL synchronization:\n\n1) generate the DDL command on the publisher, stream it and apply it as-is on\nthe subscriber;\n2) generate a DDL representation (JSON, for example) on the publisher, stream\nit, transform it into a DDL command on subscriber and apply it.\n\nThe option (1) is simpler and faster than option (2) because it does not\nrequire an additional step (transformation). However, option (2) is more\nflexible than option (1) because it allow you to create a DDL command even if a\nfeature was removed from the subscriber and the publisher version is less than\nthe subscriber version or a feature was added to the publisher and the\npublisher version is greater than the subscriber version. Of course there are\nexceptions and it should forbid the transformation (in this case, it can be\ncontrolled by the protocol version -- LOGICALREP_PROTO_FOOBAR_VERSION_NUM). A\ndecision must be made: simple/restrict vs complex/flexible.\n\nOne of the main use cases for logical replication is migration (X -> Y where X\n< Y). Postgres generally does not remove features but it might happen (such as\nWITH OIDS syntax) and it would break the DDL replication (option 1). In the\ndowngrade case (X -> Y where X > Y), it might break the DDL replication if a\nnew syntax is introduced in X. Having said that, IMO option (1) is fragile if\nwe want to support DDL replication between different Postgres versions. It\nmight eventually work but there is no guarantee.\n\nPer discussion [1], I think if we agree that the Alvaro's DDL deparse patch is\nthe way to go with DDL replication, it seems wise that it should be used for\ninitial DDL synchronization as well.\n\n\n[1] https://www.postgresql.org/message-id/CAA4eK1%2Bw_dFytBiv3RxbOL76_noMzmX0QGTc8uS%3Dbc2WaPVoow%40mail.gmail.com\n\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Tue, Mar 21, 2023, at 8:18 AM, Amit Kapila wrote:Now, how do we avoid these problems even if we have our own version offunctionality similar to pg_dump for selected objects? I guess we willface similar problems. If so, we may need to deny schema sync in anysuch case.There are 2 approaches for initial DDL synchronization:1) generate the DDL command on the publisher, stream it and apply it as-is onthe subscriber;2) generate a DDL representation (JSON, for example) on the publisher, streamit, transform it into a DDL command on subscriber and apply it.The option (1) is simpler and faster than option (2) because it does notrequire an additional step (transformation). However, option (2) is moreflexible than option (1) because it allow you to create a DDL command even if afeature was removed from the subscriber and the publisher version is less thanthe subscriber version or a feature was added to the publisher and thepublisher version is greater than the subscriber version. Of course there areexceptions and it should forbid the transformation (in this case, it can becontrolled by the protocol version -- LOGICALREP_PROTO_FOOBAR_VERSION_NUM). Adecision must be made: simple/restrict vs complex/flexible.One of the main use cases for logical replication is migration (X -> Y where X< Y). 
Postgres generally does not remove features but it might happen (such asWITH OIDS syntax) and it would break the DDL replication (option 1). In thedowngrade case (X -> Y where X > Y), it might break the DDL replication if anew syntax is introduced in X. Having said that, IMO option (1) is fragile ifwe want to support DDL replication between different Postgres versions. Itmight eventually work but there is no guarantee.Per discussion [1], I think if we agree that the Alvaro's DDL deparse patch isthe way to go with DDL replication, it seems wise that it should be used forinitial DDL synchronization as well.[1] https://www.postgresql.org/message-id/CAA4eK1%2Bw_dFytBiv3RxbOL76_noMzmX0QGTc8uS%3Dbc2WaPVoow%40mail.gmail.com--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 22 Mar 2023 18:17:49 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 11:12 PM Kumar, Sachin <[email protected]> wrote:\n>\n>\n> Concurrent DDL :-\n>\n> User can execute a DDL command to table t1 at the same time when subscriber is trying to sync\n>\n> it. pictorial representation https://imgur.com/a/ivrIEv8 [1]\n>\n>\n>\n> In tablesync process, it makes a connection to the publisher and it sees the\n>\n> table state which can be in future wrt to the publisher, which can introduce conflicts.\n>\n> For example:-\n>\n>\n>\n> CASE 1:- { Publisher removed the column b from the table t1 when subscriber was doing pg_restore\n>\n> (or any point in concurrent DDL window described in picture [1] ), when tableSync\n>\n> process will start transaction on the publisher it will see request data of table t1\n>\n> including column b, which does not exist on the publisher.} So that is why tableSync process\n>\n> asks for the latest definition.\n>\n> If we say that we will delay tableSync worker till all the DDL related to table t1 is\n>\n> applied by the applier process , we can still have a window when publisher issues a DDL\n>\n> command just before tableSync starts its transaction, and therefore making tableSync and\n>\n> publisher table definition incompatible (Thanks to Masahiko for pointing out this race\n>\n> condition).\n>\n\nIIUC, this is possible only if tablesync process uses a snapshot\ndifferent than the snapshot we have used to perform the initial schema\nsync, otherwise, this shouldn't be a problem. Let me try to explain my\nunderstanding with an example (the LSNs used are just explain the\nproblem):\n\n1. Create Table t1(c1, c2); --LSN: 90\n2. Insert t1 (1, 1); --LSN 100\n3. Insert t1 (2, 2); --LSN 110\n4. Alter t1 Add Column c3; --LSN 120\n5. Insert t1 (3, 3, 3); --LSN 130\n\nNow, say before starting tablesync worker, apply process performs\ninitial schema sync and uses a snapshot corresponding to LSN 100. Then\nit starts tablesync process to allow the initial copy of data in t1.\nHere, if the table sync process tries to establish a new snapshot, it\nmay get data till LSN 130 and when it will try to copy the same in\nsubscriber it will fail. Is my understanding correct about the problem\nyou described? If so, can't we allow tablesync process to use the same\nexported snapshot as we used for the initial schema sync and won't\nthat solve the problem you described?\n\n>\n>\n> Applier process will skip all DDL/DMLs related to the table t1 and tableSync will apply those\n>\n> in Catchup phase.\n>\n> Although there is one issue what will happen to views/ or functions which depend on the table\n>\n> . I think they should wait till table_state is > SUBREL_STATE_CREATE (means we have the latest\n>\n> schema definition from the publisher).\n>\n> There might be corner cases to this approach or maybe a better way to handle concurrent DDL\n>\n> One simple solution might be to disallow DDLs on the publisher till all the schema is\n>\n> synced and all tables have state >= SUBREL_STATE_DATASYNC (We can have CASE 1: issue ,\n>\n> even with DDL replication, so we have to wait till all the tables have table_state\n>\n> > SUBREL_STATE_DATASYNC). Which might be a big window for big databases.\n>\n>\n>\n>\n>\n> Refresh publication :-\n>\n> In refresh publication, subscriber does create a new replication slot hence , we can’t run\n>\n> pg_dump with a snapshot which starts from origin(maybe this is not an issue at all). 
In this case\n>\n> it makes more sense for tableSync worker to do schema sync.\n>\n\nCan you please explain this problem with some examples?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 23 Mar 2023 16:45:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 2:48 AM Euler Taveira <[email protected]> wrote:\n>\n> On Tue, Mar 21, 2023, at 8:18 AM, Amit Kapila wrote:\n>\n> Now, how do we avoid these problems even if we have our own version of\n> functionality similar to pg_dump for selected objects? I guess we will\n> face similar problems. If so, we may need to deny schema sync in any\n> such case.\n>\n> There are 2 approaches for initial DDL synchronization:\n>\n> 1) generate the DDL command on the publisher, stream it and apply it as-is on\n> the subscriber;\n> 2) generate a DDL representation (JSON, for example) on the publisher, stream\n> it, transform it into a DDL command on subscriber and apply it.\n>\n> The option (1) is simpler and faster than option (2) because it does not\n> require an additional step (transformation). However, option (2) is more\n> flexible than option (1) because it allow you to create a DDL command even if a\n> feature was removed from the subscriber and the publisher version is less than\n> the subscriber version or a feature was added to the publisher and the\n> publisher version is greater than the subscriber version.\n>\n\nIs this practically possible? Say the publisher has a higher version\nthat has introduced a new object type corresponding to which it has\neither a new catalog or some new columns in the existing catalog. Now,\nI don't think the older version of the subscriber can modify the\ncommand received from the publisher so that the same can be applied to\nthe subscriber because it won't have any knowledge of the new feature.\nIn the other case where the subscriber is of a newer version, we\nanyway should be able to support it with pg_dump as there doesn't\nappear to be any restriction with that, am, I missing something?\n\n> One of the main use cases for logical replication is migration (X -> Y where X\n> < Y).\n>\n\nI don't think we need to restrict this case even if we decide to use pg_dump.\n\n>\n> Per discussion [1], I think if we agree that the Alvaro's DDL deparse patch is\n> the way to go with DDL replication, it seems wise that it should be used for\n> initial DDL synchronization as well.\n>\n\nEven if we decide to use deparse approach, it would still need to\nmimic stuff from pg_dump to construct commands based on only catalog\ncontents. I am not against using this approach but we shouldn't ignore\nthe duplicity required in this approach.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 23 Mar 2023 17:14:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "> From: Amit Kapila <[email protected]>\r\n> IIUC, this is possible only if tablesync process uses a snapshot different than the\r\n> snapshot we have used to perform the initial schema sync, otherwise, this\r\n> shouldn't be a problem. Let me try to explain my understanding with an example\r\n> (the LSNs used are just explain the\r\n> problem):\r\n> \r\n> 1. Create Table t1(c1, c2); --LSN: 90\r\n> 2. Insert t1 (1, 1); --LSN 100\r\n> 3. Insert t1 (2, 2); --LSN 110\r\n> 4. Alter t1 Add Column c3; --LSN 120\r\n> 5. Insert t1 (3, 3, 3); --LSN 130\r\n> \r\n> Now, say before starting tablesync worker, apply process performs initial\r\n> schema sync and uses a snapshot corresponding to LSN 100. Then it starts\r\n> tablesync process to allow the initial copy of data in t1.\r\n> Here, if the table sync process tries to establish a new snapshot, it may get data\r\n> till LSN 130 and when it will try to copy the same in subscriber it will fail. Is my\r\n> understanding correct about the problem you described?\r\nRight\r\n> If so, can't we allow\r\n> tablesync process to use the same exported snapshot as we used for the initial\r\n> schema sync and won't that solve the problem you described?\r\nI think we won't be able to use same snapshot because the transaction will be committed.\r\nIn CreateSubscription() we can use the transaction snapshot from walrcv_create_slot()\r\ntill walrcv_disconnect() is called.(I am not sure about this part maybe walrcv_disconnect() calls\r\nthe commits internally ?). \r\nSo somehow we need to keep this snapshot alive, even after transaction is committed(or delay committing\r\nthe transaction , but we can have CREATE SUBSCRIPTION with ENABLED=FALSE, so we can have a restart before \r\ntableSync is able to use the same snapshot.) \r\n> > Refresh publication :-\r\n> >\r\n> > In refresh publication, subscriber does create a new replication slot\r\nTypo-> subscriber does not\r\n> > hence , we can’t run\r\n> >\r\n> > pg_dump with a snapshot which starts from origin(maybe this is not an\r\n> > issue at all). In this case\r\n> >\r\n> > it makes more sense for tableSync worker to do schema sync.\r\n> >\r\n> \r\n> Can you please explain this problem with some examples?\r\nI think we can have same issues as you mentioned\r\nNew table t1 is added to the publication , User does a refresh publication.\r\npg_dump / pg_restore restores the table definition. But before tableSync\r\ncan start, steps from 2 to 5 happen on the publisher.\r\n> 1. Create Table t1(c1, c2); --LSN: 90\r\n> 2. Insert t1 (1, 1); --LSN 100\r\n> 3. Insert t1 (2, 2); --LSN 110\r\n> 4. Alter t1 Add Column c3; --LSN 120\r\n> 5. Insert t1 (3, 3, 3); --LSN 130\r\nAnd table sync errors out\r\nThere can be one more issue , since we took the pg_dump without snapshot (wrt to replication slot).\r\n(I am not 100 percent sure about this).\r\nLets imagine applier process is lagging behind publisher. \r\nEvents on publisher\r\n1. alter t1 drop column c; LSN 100 <-- applier process tries to execute this DDL\r\n2. alter t1 drop column d; LSN 110\r\n3. insert into t1 values(..); LSN 120 <-- (Refresh publication called )pg_dump/restore restores this version\r\nApplier process executing 1 will fail because t1 does not have column c. \r\nRegards\r\nSachin\r\n",
"msg_date": "Thu, 23 Mar 2023 15:54:49 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Thu, Mar 23, 2023, at 8:44 AM, Amit Kapila wrote:\n> On Thu, Mar 23, 2023 at 2:48 AM Euler Taveira <[email protected]> wrote:\n> >\n> > On Tue, Mar 21, 2023, at 8:18 AM, Amit Kapila wrote:\n> >\n> > Now, how do we avoid these problems even if we have our own version of\n> > functionality similar to pg_dump for selected objects? I guess we will\n> > face similar problems. If so, we may need to deny schema sync in any\n> > such case.\n> >\n> > There are 2 approaches for initial DDL synchronization:\n> >\n> > 1) generate the DDL command on the publisher, stream it and apply it as-is on\n> > the subscriber;\n> > 2) generate a DDL representation (JSON, for example) on the publisher, stream\n> > it, transform it into a DDL command on subscriber and apply it.\n> >\n> > The option (1) is simpler and faster than option (2) because it does not\n> > require an additional step (transformation). However, option (2) is more\n> > flexible than option (1) because it allow you to create a DDL command even if a\n> > feature was removed from the subscriber and the publisher version is less than\n> > the subscriber version or a feature was added to the publisher and the\n> > publisher version is greater than the subscriber version.\n> >\n> \n> Is this practically possible? Say the publisher has a higher version\n> that has introduced a new object type corresponding to which it has\n> either a new catalog or some new columns in the existing catalog. Now,\n> I don't think the older version of the subscriber can modify the\n> command received from the publisher so that the same can be applied to\n> the subscriber because it won't have any knowledge of the new feature.\n> In the other case where the subscriber is of a newer version, we\n> anyway should be able to support it with pg_dump as there doesn't\n> appear to be any restriction with that, am, I missing something?\nI think so (with some limitations). Since the publisher knows the subscriber\nversion, publisher knows that the subscriber does not contain the new object\ntype then publisher can decide if this case is critical (and reject the\nreplication) or optional (and silently not include the feature X -- because it\nis not essential for logical replication). If required, the transformation\nshould be done on the publisher.\n\n> Even if we decide to use deparse approach, it would still need to\n> mimic stuff from pg_dump to construct commands based on only catalog\n> contents. I am not against using this approach but we shouldn't ignore\n> the duplicity required in this approach.\nIt is fine to share code between pg_dump and this new infrastructure. However,\nthe old code should coexist to support older versions because the new set of\nfunctions don't exist in older server versions. Hence, duplicity should exist\nfor a long time (if you consider that the current policy is to allow dump from\n9.2, we are talking about 10 years or so). There are some threads [1][2] that\ndiscussed this topic: provide a SQL command based on the catalog\nrepresentation. 
You can probably find other discussions searching for \"pg_dump\nlibrary\" or \"getddl\".\n\n\n[1] https://www.postgresql.org/message-id/flat/82EFF560-2A09-4C3D-81CC-A2A5EC438CE5%40eggerapps.at\n[2] https://www.postgresql.org/message-id/[email protected]\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 23 Mar 2023 13:02:02 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 9:24 PM Kumar, Sachin <[email protected]> wrote:\n>\n> > From: Amit Kapila <[email protected]>\n> > IIUC, this is possible only if tablesync process uses a snapshot different than the\n> > snapshot we have used to perform the initial schema sync, otherwise, this\n> > shouldn't be a problem. Let me try to explain my understanding with an example\n> > (the LSNs used are just explain the\n> > problem):\n> >\n> > 1. Create Table t1(c1, c2); --LSN: 90\n> > 2. Insert t1 (1, 1); --LSN 100\n> > 3. Insert t1 (2, 2); --LSN 110\n> > 4. Alter t1 Add Column c3; --LSN 120\n> > 5. Insert t1 (3, 3, 3); --LSN 130\n> >\n> > Now, say before starting tablesync worker, apply process performs initial\n> > schema sync and uses a snapshot corresponding to LSN 100. Then it starts\n> > tablesync process to allow the initial copy of data in t1.\n> > Here, if the table sync process tries to establish a new snapshot, it may get data\n> > till LSN 130 and when it will try to copy the same in subscriber it will fail. Is my\n> > understanding correct about the problem you described?\n> Right\n> > If so, can't we allow\n> > tablesync process to use the same exported snapshot as we used for the initial\n> > schema sync and won't that solve the problem you described?\n> I think we won't be able to use same snapshot because the transaction will be committed.\n> In CreateSubscription() we can use the transaction snapshot from walrcv_create_slot()\n> till walrcv_disconnect() is called.(I am not sure about this part maybe walrcv_disconnect() calls\n> the commits internally ?).\n> So somehow we need to keep this snapshot alive, even after transaction is committed(or delay committing\n> the transaction , but we can have CREATE SUBSCRIPTION with ENABLED=FALSE, so we can have a restart before\n> tableSync is able to use the same snapshot.)\n>\n\nCan we think of getting the table data as well along with schema via\npg_dump? Won't then both schema and initial data will correspond to\nthe same snapshot?\n\n> > > Refresh publication :-\n> > >\n> > > In refresh publication, subscriber does create a new replication slot\n> Typo-> subscriber does not\n> > > hence , we can’t run\n> > >\n> > > pg_dump with a snapshot which starts from origin(maybe this is not an\n> > > issue at all). In this case\n> > >\n> > > it makes more sense for tableSync worker to do schema sync.\n> > >\n> >\n> > Can you please explain this problem with some examples?\n> I think we can have same issues as you mentioned\n> New table t1 is added to the publication , User does a refresh publication.\n> pg_dump / pg_restore restores the table definition. But before tableSync\n> can start, steps from 2 to 5 happen on the publisher.\n> > 1. Create Table t1(c1, c2); --LSN: 90\n> > 2. Insert t1 (1, 1); --LSN 100\n> > 3. Insert t1 (2, 2); --LSN 110\n> > 4. Alter t1 Add Column c3; --LSN 120\n> > 5. 
Insert t1 (3, 3, 3); --LSN 130\n> And table sync errors out\n> There can be one more issue , since we took the pg_dump without snapshot (wrt to replication slot).\n>\n\nTo avoid both the problems mentioned for Refresh Publication, we can\ndo one of the following: (a) create a new slot along with a snapshot\nfor this operation and drop it afterward; or (b) using the existing\nslot, establish a new snapshot using a technique proposed in email\n[1].\n\nNote - Please keep one empty line before and after your inline\nresponses, otherwise, it is slightly difficult to understand your\nresponse.\n\n[1] - https://www.postgresql.org/message-id/CAGPVpCRWEVhXa7ovrhuSQofx4to7o22oU9iKtrOgAOtz_%3DY6vg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 24 Mar 2023 09:30:03 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Friday, March 24, 2023 12:02 AM Euler Taveira <[email protected]> wrote:\r\n> \r\n> On Thu, Mar 23, 2023, at 8:44 AM, Amit Kapila wrote:\r\n> > On Thu, Mar 23, 2023 at 2:48 AM Euler Taveira <mailto:[email protected]> wrote:\r\n> > >\r\n> > > On Tue, Mar 21, 2023, at 8:18 AM, Amit Kapila wrote:\r\n> > >\r\n> > > Now, how do we avoid these problems even if we have our own version of\r\n> > > functionality similar to pg_dump for selected objects? I guess we will\r\n> > > face similar problems. If so, we may need to deny schema sync in any\r\n> > > such case.\r\n> > >\r\n> > > There are 2 approaches for initial DDL synchronization:\r\n> > >\r\n> > > 1) generate the DDL command on the publisher, stream it and apply it as-is on\r\n> > > the subscriber;\r\n> > > 2) generate a DDL representation (JSON, for example) on the publisher, stream\r\n> > > it, transform it into a DDL command on subscriber and apply it.\r\n> > >\r\n> > > The option (1) is simpler and faster than option (2) because it does not\r\n> > > require an additional step (transformation). However, option (2) is more\r\n> > > flexible than option (1) because it allow you to create a DDL command even if a\r\n> > > feature was removed from the subscriber and the publisher version is less than\r\n> > > the subscriber version or a feature was added to the publisher and the\r\n> > > publisher version is greater than the subscriber version.\r\n> > >\r\n> > \r\n> > Is this practically possible? Say the publisher has a higher version\r\n> > that has introduced a new object type corresponding to which it has\r\n> > either a new catalog or some new columns in the existing catalog. Now,\r\n> > I don't think the older version of the subscriber can modify the\r\n> > command received from the publisher so that the same can be applied to\r\n> > the subscriber because it won't have any knowledge of the new feature.\r\n> > In the other case where the subscriber is of a newer version, we\r\n> > anyway should be able to support it with pg_dump as there doesn't\r\n> > appear to be any restriction with that, am, I missing something?\r\n> I think so (with some limitations). Since the publisher knows the subscriber\r\n> version, publisher knows that the subscriber does not contain the new object\r\n> type then publisher can decide if this case is critical (and reject the\r\n> replication) or optional (and silently not include the feature X -- because it\r\n> is not essential for logical replication). If required, the transformation\r\n> should be done on the publisher.\r\n\r\nI am not if it's feasible to support the use case the replicate DDL to old\r\nsubscriber.\r\n\r\nFirst, I think the current publisher doesn't know the version number of\r\nclient(subscriber) so we need to check the feasibility of same. Also, having\r\nclient's version number checks doesn't seem to be a good idea.\r\n\r\nBesides, I thought about the problems that will happen if we try to support\r\nreplicating New PG to older PG. The following examples assume that we support the\r\nDDL replication in the mentioned PG.\r\n\r\n1) Assume we want to replicate from a newer PG to a older PG where partition\r\n table has not been introduced. I think even if the publisher is aware of\r\n that, it doesn't have a good way to transform the partition related command,\r\n maybe one could say we can transform that to inherit table, but I feel that\r\n introduces too much complexity.\r\n\r\n2) Another example is generated column. 
To replicate the newer PG which has\r\n this feature to a older PG without this. I am concerned that is there a way\r\n to transform this without causing inconsistent behavior.\r\n\r\nEven if we decide to simply skip sending such unsupported commands or skip\r\napplying them, then it's likely that the following dml replication will cause\r\ndata inconsistency.\r\n\r\nSo, it seems we cannot completely support this use case, there would be some\r\nlimitations. Personally, I am not sure if it's worth introducing complexity to\r\nsupport it partially.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Fri, 24 Mar 2023 11:57:00 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "> From: Amit Kapila <[email protected]>\r\n> > I think we won't be able to use same snapshot because the transaction will\r\n> > be committed.\r\n> > In CreateSubscription() we can use the transaction snapshot from\r\n> > walrcv_create_slot() till walrcv_disconnect() is called.(I am not sure\r\n> > about this part maybe walrcv_disconnect() calls the commits internally ?).\r\n> > So somehow we need to keep this snapshot alive, even after transaction\r\n> > is committed(or delay committing the transaction , but we can have\r\n> > CREATE SUBSCRIPTION with ENABLED=FALSE, so we can have a restart\r\n> > before tableSync is able to use the same snapshot.)\r\n> >\r\n> \r\n> Can we think of getting the table data as well along with schema via\r\n> pg_dump? Won't then both schema and initial data will correspond to the\r\n> same snapshot?\r\n\r\nRight , that will work, Thanks!\r\n\r\n> > I think we can have same issues as you mentioned New table t1 is added\r\n> > to the publication , User does a refresh publication.\r\n> > pg_dump / pg_restore restores the table definition. But before\r\n> > tableSync can start, steps from 2 to 5 happen on the publisher.\r\n> > > 1. Create Table t1(c1, c2); --LSN: 90 2. Insert t1 (1, 1); --LSN 100\r\n> > > 3. Insert t1 (2, 2); --LSN 110 4. Alter t1 Add Column c3; --LSN 120\r\n> > > 5. Insert t1 (3, 3, 3); --LSN 130\r\n> > And table sync errors out\r\n> > There can be one more issue , since we took the pg_dump without\r\n> snapshot (wrt to replication slot).\r\n> >\r\n> \r\n> To avoid both the problems mentioned for Refresh Publication, we can do\r\n> one of the following: (a) create a new slot along with a snapshot for this\r\n> operation and drop it afterward; or (b) using the existing slot, establish a\r\n> new snapshot using a technique proposed in email [1].\r\n> \r\n\r\nThanks, I think option (b) will be perfect, since we don’t have to create a new slot.\r\n\r\nRegards\r\nSachin\r\n",
"msg_date": "Fri, 24 Mar 2023 14:51:25 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Fri, Mar 24, 2023, at 8:57 AM, [email protected] wrote:\n> First, I think the current publisher doesn't know the version number of\n> client(subscriber) so we need to check the feasibility of same. Also, having\n> client's version number checks doesn't seem to be a good idea.\n\nwalrcv_server_version().\n\n> Besides, I thought about the problems that will happen if we try to support\n> replicating New PG to older PG. The following examples assume that we support the\n> DDL replication in the mentioned PG.\n> \n> 1) Assume we want to replicate from a newer PG to a older PG where partition\n> table has not been introduced. I think even if the publisher is aware of\n> that, it doesn't have a good way to transform the partition related command,\n> maybe one could say we can transform that to inherit table, but I feel that\n> introduces too much complexity.\n> \n> 2) Another example is generated column. To replicate the newer PG which has\n> this feature to a older PG without this. I am concerned that is there a way\n> to transform this without causing inconsistent behavior.\n> \n> Even if we decide to simply skip sending such unsupported commands or skip\n> applying them, then it's likely that the following dml replication will cause\n> data inconsistency.\n\nAs I mentioned in a previous email [1], the publisher can contain code to\ndecide if it can proceed or not, in case you are doing a downgrade. I said\ndowngrade but it can also happen if we decide to deprecate a syntax. For\nexample, when WITH OIDS was deprecated, pg_dump treats it as an acceptable\nremoval. The transformation can be (dis)allowed by the protocol version or\nanother constant [2].\n\n> So, it seems we cannot completely support this use case, there would be some\n> limitations. Personally, I am not sure if it's worth introducing complexity to\n> support it partially.\n\nLimitations are fine; they have different versions. I wouldn't like to forbid\ndowngrade just because I don't want to maintain compatibility with previous\nversions. IMO it is important to be able to downgrade in case of any\nincompatibility with an application. You might argue that this isn't possible\ndue to time or patch size and that there is a workaround for it but I wouldn't\nwant to close the door for downgrade in the future.\n\n[1] https://www.postgresql.org/message-id/fb7894e4-b44e-4ae3-a74d-7c5650f69f1a%40app.fastmail.com\n[2] https://www.postgresql.org/message-id/78149fa6-4c77-4128-8518-197a631c29c3%40app.fastmail.com\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Fri, Mar 24, 2023, at 8:57 AM, [email protected] wrote:First, I think the current publisher doesn't know the version number ofclient(subscriber) so we need to check the feasibility of same. Also, havingclient's version number checks doesn't seem to be a good idea.walrcv_server_version().Besides, I thought about the problems that will happen if we try to supportreplicating New PG to older PG. The following examples assume that we support theDDL replication in the mentioned PG.1) Assume we want to replicate from a newer PG to a older PG where partition table has not been introduced. I think even if the publisher is aware of that, it doesn't have a good way to transform the partition related command, maybe one could say we can transform that to inherit table, but I feel that introduces too much complexity.2) Another example is generated column. To replicate the newer PG which has this feature to a older PG without this. 
I am concerned that is there a way to transform this without causing inconsistent behavior.Even if we decide to simply skip sending such unsupported commands or skipapplying them, then it's likely that the following dml replication will causedata inconsistency.As I mentioned in a previous email [1], the publisher can contain code todecide if it can proceed or not, in case you are doing a downgrade. I saiddowngrade but it can also happen if we decide to deprecate a syntax. Forexample, when WITH OIDS was deprecated, pg_dump treats it as an acceptableremoval. The transformation can be (dis)allowed by the protocol version oranother constant [2].So, it seems we cannot completely support this use case, there would be somelimitations. Personally, I am not sure if it's worth introducing complexity tosupport it partially.Limitations are fine; they have different versions. I wouldn't like to forbiddowngrade just because I don't want to maintain compatibility with previousversions. IMO it is important to be able to downgrade in case of anyincompatibility with an application. You might argue that this isn't possibledue to time or patch size and that there is a workaround for it but I wouldn'twant to close the door for downgrade in the future.[1] https://www.postgresql.org/message-id/fb7894e4-b44e-4ae3-a74d-7c5650f69f1a%40app.fastmail.com[2] https://www.postgresql.org/message-id/78149fa6-4c77-4128-8518-197a631c29c3%40app.fastmail.com--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 24 Mar 2023 12:01:17 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "\r\n> I am not if it's feasible to support the use case the replicate DDL to old\r\n> subscriber.\r\n>\r\n\r\n+1\r\n \r\n> First, I think the current publisher doesn't know the version number of\r\n> client(subscriber) so we need to check the feasibility of same. Also, having\r\n> client's version number checks doesn't seem to be a good idea.\r\n> \r\n> Besides, I thought about the problems that will happen if we try to support\r\n> replicating New PG to older PG. The following examples assume that we\r\n> support the DDL replication in the mentioned PG.\r\n> \r\n> 1) Assume we want to replicate from a newer PG to a older PG where\r\n> partition\r\n> table has not been introduced. I think even if the publisher is aware of\r\n> that, it doesn't have a good way to transform the partition related\r\n> command,\r\n> maybe one could say we can transform that to inherit table, but I feel that\r\n> introduces too much complexity.\r\n> \r\n> 2) Another example is generated column. To replicate the newer PG which\r\n> has\r\n> this feature to a older PG without this. I am concerned that is there a way\r\n> to transform this without causing inconsistent behavior.\r\n> \r\n> Even if we decide to simply skip sending such unsupported commands or\r\n> skip applying them, then it's likely that the following dml replication will\r\n> cause data inconsistency.\r\n> \r\n> So, it seems we cannot completely support this use case, there would be\r\n> some limitations. Personally, I am not sure if it's worth introducing\r\n> complexity to support it partially.\r\n> \r\n\r\n+1\r\n\r\nRegards\r\nSachin\r\n\r\n\r\n",
"msg_date": "Fri, 24 Mar 2023 15:04:01 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Friday, March 24, 2023 11:01 PM Euler Taveira <[email protected]> wrote:\r\n\r\nHi,\r\n\r\n> On Fri, Mar 24, 2023, at 8:57 AM, mailto:[email protected] wrote:\r\n> > First, I think the current publisher doesn't know the version number of\r\n> > client(subscriber) so we need to check the feasibility of same. Also, having\r\n> > client's version number checks doesn't seem to be a good idea.\r\n> \r\n> walrcv_server_version().\r\n\r\nI don't think this function works, as it only shows the server version (e.g.\r\npublisher/walsender).\r\n\r\n> > Besides, I thought about the problems that will happen if we try to support\r\n> > replicating New PG to older PG. The following examples assume that we support the\r\n> > DDL replication in the mentioned PG.\r\n> > \r\n> > 1) Assume we want to replicate from a newer PG to a older PG where partition\r\n> > table has not been introduced. I think even if the publisher is aware of\r\n> > that, it doesn't have a good way to transform the partition related command,\r\n> > maybe one could say we can transform that to inherit table, but I feel that\r\n> > introduces too much complexity.\r\n> > \r\n> > 2) Another example is generated column. To replicate the newer PG which has\r\n> > this feature to a older PG without this. I am concerned that is there a way\r\n> > to transform this without causing inconsistent behavior.\r\n> > \r\n> > Even if we decide to simply skip sending such unsupported commands or skip\r\n> > applying them, then it's likely that the following dml replication will cause\r\n> > data inconsistency.\r\n>\r\n> As I mentioned in a previous email [1], the publisher can contain code to\r\n> decide if it can proceed or not, in case you are doing a downgrade. I said\r\n> downgrade but it can also happen if we decide to deprecate a syntax. For\r\n> example, when WITH OIDS was deprecated, pg_dump treats it as an acceptable\r\n> removal. The transformation can be (dis)allowed by the protocol version or\r\n> another constant [2].\r\n\r\nIf most of the new DDL related features won't be supported to be transformed to\r\nold subscriber, I don't see a point in supporting this use case.\r\n\r\nI think cases like the removal of WITH OIDS are rare enough that we don't need\r\nto worry about and it doesn't affect the data consistency. But new DDL features\r\nare different.\r\n\r\nNot only the features like partition or generated column, features like\r\nnulls_not_distinct are also tricky to be transformed without causing\r\ninconsistent behavior.\r\n\r\n> > So, it seems we cannot completely support this use case, there would be some\r\n> > limitations. Personally, I am not sure if it's worth introducing complexity to\r\n> > support it partially.\r\n> \r\n> Limitations are fine; they have different versions. I wouldn't like to forbid\r\n> downgrade just because I don't want to maintain compatibility with previous\r\n> versions. IMO it is important to be able to downgrade in case of any\r\n> incompatibility with an application. You might argue that this isn't possible\r\n> due to time or patch size and that there is a workaround for it but I wouldn't\r\n> want to close the door for downgrade in the future.\r\n\r\nThe biggest problem is the data inconsistency that it would cause. I am not\r\naware of a generic solution to replicate new introduced DDLs to old subscriber.\r\nwhich wouldn't cause data inconsistency. And apart from that, IMO the\r\ncomplexity and maintainability of the feature also matters.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Sat, 25 Mar 2023 05:50:08 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 11:51 PM Kumar, Sachin <[email protected]> wrote:\n>\n> > From: Amit Kapila <[email protected]>\n> > > I think we won't be able to use same snapshot because the transaction will\n> > > be committed.\n> > > In CreateSubscription() we can use the transaction snapshot from\n> > > walrcv_create_slot() till walrcv_disconnect() is called.(I am not sure\n> > > about this part maybe walrcv_disconnect() calls the commits internally ?).\n> > > So somehow we need to keep this snapshot alive, even after transaction\n> > > is committed(or delay committing the transaction , but we can have\n> > > CREATE SUBSCRIPTION with ENABLED=FALSE, so we can have a restart\n> > > before tableSync is able to use the same snapshot.)\n> > >\n> >\n> > Can we think of getting the table data as well along with schema via\n> > pg_dump? Won't then both schema and initial data will correspond to the\n> > same snapshot?\n>\n> Right , that will work, Thanks!\n\nWhile it works, we cannot get the initial data in parallel, no?\n\n>\n> > > I think we can have same issues as you mentioned New table t1 is added\n> > > to the publication , User does a refresh publication.\n> > > pg_dump / pg_restore restores the table definition. But before\n> > > tableSync can start, steps from 2 to 5 happen on the publisher.\n> > > > 1. Create Table t1(c1, c2); --LSN: 90 2. Insert t1 (1, 1); --LSN 100\n> > > > 3. Insert t1 (2, 2); --LSN 110 4. Alter t1 Add Column c3; --LSN 120\n> > > > 5. Insert t1 (3, 3, 3); --LSN 130\n> > > And table sync errors out\n> > > There can be one more issue , since we took the pg_dump without\n> > snapshot (wrt to replication slot).\n> > >\n> >\n> > To avoid both the problems mentioned for Refresh Publication, we can do\n> > one of the following: (a) create a new slot along with a snapshot for this\n> > operation and drop it afterward; or (b) using the existing slot, establish a\n> > new snapshot using a technique proposed in email [1].\n> >\n>\n> Thanks, I think option (b) will be perfect, since we don’t have to create a new slot.\n\nRegarding (b), does it mean that apply worker stops streaming,\nrequests to create a snapshot, and then resumes the streaming?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Mar 2023 11:47:01 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 8:17 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Mar 24, 2023 at 11:51 PM Kumar, Sachin <[email protected]> wrote:\n> >\n> > > From: Amit Kapila <[email protected]>\n> > > > I think we won't be able to use same snapshot because the transaction will\n> > > > be committed.\n> > > > In CreateSubscription() we can use the transaction snapshot from\n> > > > walrcv_create_slot() till walrcv_disconnect() is called.(I am not sure\n> > > > about this part maybe walrcv_disconnect() calls the commits internally ?).\n> > > > So somehow we need to keep this snapshot alive, even after transaction\n> > > > is committed(or delay committing the transaction , but we can have\n> > > > CREATE SUBSCRIPTION with ENABLED=FALSE, so we can have a restart\n> > > > before tableSync is able to use the same snapshot.)\n> > > >\n> > >\n> > > Can we think of getting the table data as well along with schema via\n> > > pg_dump? Won't then both schema and initial data will correspond to the\n> > > same snapshot?\n> >\n> > Right , that will work, Thanks!\n>\n> While it works, we cannot get the initial data in parallel, no?\n>\n\nAnother possibility is that we dump/restore the schema of each table\nalong with its data. One thing we can explore is whether the parallel\noption of dump can be useful here. Do you have any other ideas?\n\nOne related idea is that currently, we fetch the table list\ncorresponding to publications in subscription and create the entries\nfor those in pg_subscription_rel during Create Subscription, can we\nthink of postponing that work till after the initial schema sync? We\nseem to be already storing publications list in pg_subscription, so it\nappears possible if we somehow remember the value of copy_data. If\nthis is feasible then I think that may give us the flexibility to\nperform the initial sync at a later point by the background worker.\n\n> >\n> > > > I think we can have same issues as you mentioned New table t1 is added\n> > > > to the publication , User does a refresh publication.\n> > > > pg_dump / pg_restore restores the table definition. But before\n> > > > tableSync can start, steps from 2 to 5 happen on the publisher.\n> > > > > 1. Create Table t1(c1, c2); --LSN: 90 2. Insert t1 (1, 1); --LSN 100\n> > > > > 3. Insert t1 (2, 2); --LSN 110 4. Alter t1 Add Column c3; --LSN 120\n> > > > > 5. Insert t1 (3, 3, 3); --LSN 130\n> > > > And table sync errors out\n> > > > There can be one more issue , since we took the pg_dump without\n> > > snapshot (wrt to replication slot).\n> > > >\n> > >\n> > > To avoid both the problems mentioned for Refresh Publication, we can do\n> > > one of the following: (a) create a new slot along with a snapshot for this\n> > > operation and drop it afterward; or (b) using the existing slot, establish a\n> > > new snapshot using a technique proposed in email [1].\n> > >\n> >\n> > Thanks, I think option (b) will be perfect, since we don’t have to create a new slot.\n>\n> Regarding (b), does it mean that apply worker stops streaming,\n> requests to create a snapshot, and then resumes the streaming?\n>\n\nShouldn't this be done by the backend performing a REFRESH publication?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 28 Mar 2023 15:17:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 6:47 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Mar 27, 2023 at 8:17 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Fri, Mar 24, 2023 at 11:51 PM Kumar, Sachin <[email protected]> wrote:\n> > >\n> > > > From: Amit Kapila <[email protected]>\n> > > > > I think we won't be able to use same snapshot because the transaction will\n> > > > > be committed.\n> > > > > In CreateSubscription() we can use the transaction snapshot from\n> > > > > walrcv_create_slot() till walrcv_disconnect() is called.(I am not sure\n> > > > > about this part maybe walrcv_disconnect() calls the commits internally ?).\n> > > > > So somehow we need to keep this snapshot alive, even after transaction\n> > > > > is committed(or delay committing the transaction , but we can have\n> > > > > CREATE SUBSCRIPTION with ENABLED=FALSE, so we can have a restart\n> > > > > before tableSync is able to use the same snapshot.)\n> > > > >\n> > > >\n> > > > Can we think of getting the table data as well along with schema via\n> > > > pg_dump? Won't then both schema and initial data will correspond to the\n> > > > same snapshot?\n> > >\n> > > Right , that will work, Thanks!\n> >\n> > While it works, we cannot get the initial data in parallel, no?\n> >\n>\n> Another possibility is that we dump/restore the schema of each table\n> along with its data. One thing we can explore is whether the parallel\n> option of dump can be useful here. Do you have any other ideas?\n\nA downside of the idea of dumping both table schema and table data\nwould be that we need to temporarily store data twice the size of the\ntable (the dump file and the table itself) during the load. One might\nthink that we can redirect the pg_dump output into the backend so that\nit can load it via SPI, but it doesn't work since \"COPY tbl FROM\nstdin;\" doesn't work via SPI. The --inserts option of pg_dump could\nhelp it out but it makes restoration very slow.\n\n>\n> One related idea is that currently, we fetch the table list\n> corresponding to publications in subscription and create the entries\n> for those in pg_subscription_rel during Create Subscription, can we\n> think of postponing that work till after the initial schema sync? We\n> seem to be already storing publications list in pg_subscription, so it\n> appears possible if we somehow remember the value of copy_data. If\n> this is feasible then I think that may give us the flexibility to\n> perform the initial sync at a later point by the background worker.\n\nIt sounds possible. With this idea, we will be able to have the apply\nworker restore the table schemas (and create pg_subscription_rel\nentries) as the first thing. Another point we might need to consider\nis that the initial schema sync (i.e. creating tables) and creating\npg_subscription_rel entries need to be done in the same transaction.\nOtherwise, we could end up committing either one change. I think it\ndepends on how we restore the schema data.\n\n>\n> > >\n> > > > > I think we can have same issues as you mentioned New table t1 is added\n> > > > > to the publication , User does a refresh publication.\n> > > > > pg_dump / pg_restore restores the table definition. But before\n> > > > > tableSync can start, steps from 2 to 5 happen on the publisher.\n> > > > > > 1. Create Table t1(c1, c2); --LSN: 90 2. Insert t1 (1, 1); --LSN 100\n> > > > > > 3. Insert t1 (2, 2); --LSN 110 4. Alter t1 Add Column c3; --LSN 120\n> > > > > > 5. 
Insert t1 (3, 3, 3); --LSN 130\n> > > > > And table sync errors out\n> > > > > There can be one more issue , since we took the pg_dump without\n> > > > snapshot (wrt to replication slot).\n> > > > >\n> > > >\n> > > > To avoid both the problems mentioned for Refresh Publication, we can do\n> > > > one of the following: (a) create a new slot along with a snapshot for this\n> > > > operation and drop it afterward; or (b) using the existing slot, establish a\n> > > > new snapshot using a technique proposed in email [1].\n> > > >\n> > >\n> > > Thanks, I think option (b) will be perfect, since we don’t have to create a new slot.\n> >\n> > Regarding (b), does it mean that apply worker stops streaming,\n> > requests to create a snapshot, and then resumes the streaming?\n> >\n>\n> Shouldn't this be done by the backend performing a REFRESH publication?\n\nHmm, I might be missing something but the idea (b) uses the existing\nslot to establish a new snapshot, right? What existing replication\nslot do we use for that? I thought it was the one used by the apply\nworker.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Mar 2023 23:59:40 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "> > > > From: Amit Kapila <[email protected]>\r\n> > > > > I think we won't be able to use same snapshot because the\r\n> > > > > transaction will be committed.\r\n> > > > > In CreateSubscription() we can use the transaction snapshot from\r\n> > > > > walrcv_create_slot() till walrcv_disconnect() is called.(I am\r\n> > > > > not sure about this part maybe walrcv_disconnect() calls the commits\r\n> internally ?).\r\n> > > > > So somehow we need to keep this snapshot alive, even after\r\n> > > > > transaction is committed(or delay committing the transaction ,\r\n> > > > > but we can have CREATE SUBSCRIPTION with ENABLED=FALSE, so we\r\n> > > > > can have a restart before tableSync is able to use the same\r\n> > > > > snapshot.)\r\n> > > > >\r\n> > > >\r\n> > > > Can we think of getting the table data as well along with schema\r\n> > > > via pg_dump? Won't then both schema and initial data will\r\n> > > > correspond to the same snapshot?\r\n> > >\r\n> > > Right , that will work, Thanks!\r\n> >\r\n> > While it works, we cannot get the initial data in parallel, no?\r\n> >\r\n\r\nI was thinking each TableSync process will call pg_dump --table, This way if we have N\r\ntableSync process, we can have N pg_dump --table=table_name called in parallel.\r\nIn fact we can use --schema-only to get schema and then let COPY take care of data\r\nsyncing . We will use same snapshot for pg_dump as well as COPY table. \r\n\r\nRegards\r\nSachin\r\n",
"msg_date": "Wed, 29 Mar 2023 10:57:49 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 8:30 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Mar 28, 2023 at 6:47 PM Amit Kapila <[email protected]> wrote:\n> >\n> > > >\n> > > > > > I think we can have same issues as you mentioned New table t1 is added\n> > > > > > to the publication , User does a refresh publication.\n> > > > > > pg_dump / pg_restore restores the table definition. But before\n> > > > > > tableSync can start, steps from 2 to 5 happen on the publisher.\n> > > > > > > 1. Create Table t1(c1, c2); --LSN: 90 2. Insert t1 (1, 1); --LSN 100\n> > > > > > > 3. Insert t1 (2, 2); --LSN 110 4. Alter t1 Add Column c3; --LSN 120\n> > > > > > > 5. Insert t1 (3, 3, 3); --LSN 130\n> > > > > > And table sync errors out\n> > > > > > There can be one more issue , since we took the pg_dump without\n> > > > > snapshot (wrt to replication slot).\n> > > > > >\n> > > > >\n> > > > > To avoid both the problems mentioned for Refresh Publication, we can do\n> > > > > one of the following: (a) create a new slot along with a snapshot for this\n> > > > > operation and drop it afterward; or (b) using the existing slot, establish a\n> > > > > new snapshot using a technique proposed in email [1].\n> > > > >\n> > > >\n> > > > Thanks, I think option (b) will be perfect, since we don’t have to create a new slot.\n> > >\n> > > Regarding (b), does it mean that apply worker stops streaming,\n> > > requests to create a snapshot, and then resumes the streaming?\n> > >\n> >\n> > Shouldn't this be done by the backend performing a REFRESH publication?\n>\n> Hmm, I might be missing something but the idea (b) uses the existing\n> slot to establish a new snapshot, right? What existing replication\n> slot do we use for that? I thought it was the one used by the apply\n> worker.\n>\n\nRight, it will be the same as the one for apply worker. I think if we\ndecide to do initial sync via apply worker then in this case also, we\nneed to let apply worker restart and perform initial sync as the first\nthing.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 29 Mar 2023 16:37:43 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 7:57 PM Kumar, Sachin <[email protected]> wrote:\n>\n> > > > > From: Amit Kapila <[email protected]>\n> > > > > > I think we won't be able to use same snapshot because the\n> > > > > > transaction will be committed.\n> > > > > > In CreateSubscription() we can use the transaction snapshot from\n> > > > > > walrcv_create_slot() till walrcv_disconnect() is called.(I am\n> > > > > > not sure about this part maybe walrcv_disconnect() calls the commits\n> > internally ?).\n> > > > > > So somehow we need to keep this snapshot alive, even after\n> > > > > > transaction is committed(or delay committing the transaction ,\n> > > > > > but we can have CREATE SUBSCRIPTION with ENABLED=FALSE, so we\n> > > > > > can have a restart before tableSync is able to use the same\n> > > > > > snapshot.)\n> > > > > >\n> > > > >\n> > > > > Can we think of getting the table data as well along with schema\n> > > > > via pg_dump? Won't then both schema and initial data will\n> > > > > correspond to the same snapshot?\n> > > >\n> > > > Right , that will work, Thanks!\n> > >\n> > > While it works, we cannot get the initial data in parallel, no?\n> > >\n>\n> I was thinking each TableSync process will call pg_dump --table, This way if we have N\n> tableSync process, we can have N pg_dump --table=table_name called in parallel.\n> In fact we can use --schema-only to get schema and then let COPY take care of data\n> syncing . We will use same snapshot for pg_dump as well as COPY table.\n\nHow can we postpone creating the pg_subscription_rel entries until the\ntablesync worker starts and does the schema sync? I think that since\npg_subscription_rel entry needs the table OID, we need either to do\nthe schema sync before creating the entry (i.e, during CREATE\nSUBSCRIPTION) or to postpone creating entries as Amit proposed[1]. The\napply worker needs the information of tables to sync in order to\nlaunch the tablesync workers, but it needs to create the table schema\nto get that information.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAA4eK1Ld9-5ueomE_J5CA6LfRo%3DwemdTrUp5qdBhRFwGT%2BdOUw%40mail.gmail.com\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 30 Mar 2023 00:18:04 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "> From: Masahiko Sawada <[email protected]>\r\n> > \r\n> > One related idea is that currently, we fetch the table list\r\n> > corresponding to publications in subscription and create the entries\r\n> > for those in pg_subscription_rel during Create Subscription, can we\r\n> > think of postponing that work till after the initial schema sync? We\r\n> > seem to be already storing publications list in pg_subscription, so it\r\n> > appears possible if we somehow remember the value of copy_data. If\r\n> > this is feasible then I think that may give us the flexibility to\r\n> > perform the initial sync at a later point by the background worker.\r\n\r\nMaybe we need to add column to pg_subscription to store copy_data state ?\r\n\r\n> \r\n> It sounds possible. With this idea, we will be able to have the apply worker\r\n> restore the table schemas (and create pg_subscription_rel\r\n> entries) as the first thing. Another point we might need to consider is that the\r\n> initial schema sync (i.e. creating tables) and creating pg_subscription_rel entries\r\n> need to be done in the same transaction.\r\n> Otherwise, we could end up committing either one change. I think it depends on\r\n> how we restore the schema data.\r\n\r\nI think we have to add one more column to pg_subscription to track the initial sync\r\nstate {OFF, SCHEMA_DUMPED, SCHEMA_RESTORED, COMPLETED} (COMPLETED will\r\nshow that pg_subscription_rel is filled) . I don’t think we won't be able\r\nto do pg_restore and pg_subscription_rel in single transaction, but we can use \r\ninitial_sync_state to start from where we left after a abrupt crash/shutdown.\r\n\r\nRegards\r\nSachin\r\n\r\n\r\n\r\n",
"msg_date": "Thu, 30 Mar 2023 01:39:39 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 12:18 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Mar 29, 2023 at 7:57 PM Kumar, Sachin <[email protected]> wrote:\n> >\n> > > > > > From: Amit Kapila <[email protected]>\n> > > > > > > I think we won't be able to use same snapshot because the\n> > > > > > > transaction will be committed.\n> > > > > > > In CreateSubscription() we can use the transaction snapshot from\n> > > > > > > walrcv_create_slot() till walrcv_disconnect() is called.(I am\n> > > > > > > not sure about this part maybe walrcv_disconnect() calls the commits\n> > > internally ?).\n> > > > > > > So somehow we need to keep this snapshot alive, even after\n> > > > > > > transaction is committed(or delay committing the transaction ,\n> > > > > > > but we can have CREATE SUBSCRIPTION with ENABLED=FALSE, so we\n> > > > > > > can have a restart before tableSync is able to use the same\n> > > > > > > snapshot.)\n> > > > > > >\n> > > > > >\n> > > > > > Can we think of getting the table data as well along with schema\n> > > > > > via pg_dump? Won't then both schema and initial data will\n> > > > > > correspond to the same snapshot?\n> > > > >\n> > > > > Right , that will work, Thanks!\n> > > >\n> > > > While it works, we cannot get the initial data in parallel, no?\n> > > >\n> >\n> > I was thinking each TableSync process will call pg_dump --table, This way if we have N\n> > tableSync process, we can have N pg_dump --table=table_name called in parallel.\n> > In fact we can use --schema-only to get schema and then let COPY take care of data\n> > syncing . We will use same snapshot for pg_dump as well as COPY table.\n>\n> How can we postpone creating the pg_subscription_rel entries until the\n> tablesync worker starts and does the schema sync? I think that since\n> pg_subscription_rel entry needs the table OID, we need either to do\n> the schema sync before creating the entry (i.e, during CREATE\n> SUBSCRIPTION) or to postpone creating entries as Amit proposed[1]. The\n> apply worker needs the information of tables to sync in order to\n> launch the tablesync workers, but it needs to create the table schema\n> to get that information.\n\nFor the above reason, I think that step 6 of the initial proposal won't work.\n\nIf we can have the tablesync worker create an entry of\npg_subscription_rel after creating the table, it may give us the\nflexibility to perform the initial sync. One idea is that we add a\nrelname field to pg_subscription_rel so that we can create entries\nwith relname instead of OID if the table is not created yet. Once the\ntable is created, we clear the relname field and set the OID of the\ntable instead. It's not an ideal solution but we might make it simpler\nlater.\n\nAssuming that it's feasible, I'm considering another approach for the\ninitial sync in order to address the concurrent DDLs.\n\nThe basic idea is to somewhat follow how pg_dump/restore to\ndump/restore the database data. We divide the synchronization phase\n(including both schema and data) up into three phases: pre-data,\ntable-data, post-data. These mostly follow the --section option of\npg_dump.\n\n1. The backend process performing CREATE SUBSCRIPTION creates the\nsubscription but doesn't create pg_subscription_rel entries yet.\n\n2. 
Before starting the streaming, the apply worker fetches the table\nlist from the publisher, create pg_subscription_rel entries for them,\nand dumps+restores database objects that tables could depend on but\ndon't depend on tables such as TYPE, OPERATOR, ACCESS METHOD etc (i.e.\npre-data).\n\n3. The apply worker launches the tablesync workers for tables that\nneed to be synchronized.\n\nThere might be DDLs executed on the publisher for tables before the\ntablesync worker starts. But the apply worker needs to apply DDLs for\npre-data database objects. OTOH, it can ignore DDLs for not-synced-yet\ntables and other database objects such as INDEX, TRIGGER, RULE, etc\n(i.e. post-data).\n\n4. The tablesync worker creates its replication slot, dumps+restores\nthe table schema, update the pg_subscription_rel, and perform COPY.\n\nThese operations should be done in the same transaction.\n\n5. After finishing COPY, the tablesync worker dumps indexes (and\nperhaps constraints) of the table and creates them (which possibly\ntakes a long time). Then it starts to catch up, same as today. The\napply worker needs to wait for the tablesync worker to catch up.\n\nWe need to repeat these steps until we complete the initial data copy\nand create indexes for all tables, IOW until all pg_subscription_rel\nstatus becomes READY.\n\n6. If the apply worker confirms all tables are READY, it starts\nanother sync worker who is responsible for the post-data database\nobjects such as TRIGGER, RULE, POLICY etc (i.e. post-data).\n\nWhile the sync worker is starting up or working, the apply worker\napplies changes for pre-data database objects as well as READY tables.\n\n7. Similar to the tablesync worker, this sync worker creates its\nreplication slot and sets the returned LSN somewhere, say\npg_subscription.\n\n8. The sync worker dumps and restores these objects. Which could take\na time since it would need to create FK constraints. Then it starts to\ncatch up if the apply worker is ahead. The apply worker waits for the\nsync worker to catch up.\n\n9. Once the sync worker catches up, the apply worker starts applying\nchanges for all database objects.\n\nIIUC with this approach, we can resolve the concurrent DDL problem\nSachin mentioned, and indexes (and constraints) are created after the\ninitial data copy.\n\nThe procedures are still very complex and not fully considered yet but\nI hope there are some useful things at least for discussion.\n\nProbably we can start with supporting only tables. In this case, we\nwould not need the post-data phase (i.e. step 6-9). It seems to me\nthat we need to have the subscription state somewhere (maybe\npg_subscription) so that the apply worker figure out the next step.\nSince we need to dump and restore different objects on different\ntimings, we probably cannot directly use pg_dump/pg_restore. I've not\nconsidered how the concurrent REFRESH PUBLICATION works.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 30 Mar 2023 22:11:50 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "\r\n\r\n> -----Original Message-----\r\n> From: Masahiko Sawada <[email protected]>\r\n> > > I was thinking each TableSync process will call pg_dump --table,\r\n> > > This way if we have N tableSync process, we can have N pg_dump --\r\n> table=table_name called in parallel.\r\n> > > In fact we can use --schema-only to get schema and then let COPY\r\n> > > take care of data syncing . We will use same snapshot for pg_dump as well\r\n> as COPY table.\r\n> >\r\n> > How can we postpone creating the pg_subscription_rel entries until the\r\n> > tablesync worker starts and does the schema sync? I think that since\r\n> > pg_subscription_rel entry needs the table OID, we need either to do\r\n> > the schema sync before creating the entry (i.e, during CREATE\r\n> > SUBSCRIPTION) or to postpone creating entries as Amit proposed[1]. The\r\n> > apply worker needs the information of tables to sync in order to\r\n> > launch the tablesync workers, but it needs to create the table schema\r\n> > to get that information.\r\n> \r\n> For the above reason, I think that step 6 of the initial proposal won't work.\r\n> \r\n> If we can have the tablesync worker create an entry of pg_subscription_rel after\r\n> creating the table, it may give us the flexibility to perform the initial sync. One\r\n> idea is that we add a relname field to pg_subscription_rel so that we can create\r\n> entries with relname instead of OID if the table is not created yet. Once the\r\n> table is created, we clear the relname field and set the OID of the table instead.\r\n> It's not an ideal solution but we might make it simpler later.\r\n> \r\n> Assuming that it's feasible, I'm considering another approach for the initial sync\r\n> in order to address the concurrent DDLs.\r\n> \r\n> The basic idea is to somewhat follow how pg_dump/restore to dump/restore\r\n> the database data. We divide the synchronization phase (including both schema\r\n> and data) up into three phases: pre-data, table-data, post-data. These mostly\r\n> follow the --section option of pg_dump.\r\n> \r\n> 1. The backend process performing CREATE SUBSCRIPTION creates the\r\n> subscription but doesn't create pg_subscription_rel entries yet.\r\n> \r\n> 2. Before starting the streaming, the apply worker fetches the table list from the\r\n> publisher, create pg_subscription_rel entries for them, and dumps+restores\r\n> database objects that tables could depend on but don't depend on tables such as\r\n> TYPE, OPERATOR, ACCESS METHOD etc (i.e.\r\n> pre-data).\r\n\r\nWe will not have slot starting snapshot, So somehow we have to get a new snapshot\r\nAnd skip all the wal_log between starting of slot and snapshot creation lsn ? .\r\n\r\n> \r\n> 3. The apply worker launches the tablesync workers for tables that need to be\r\n> synchronized.\r\n> \r\n> There might be DDLs executed on the publisher for tables before the tablesync\r\n> worker starts. But the apply worker needs to apply DDLs for pre-data database\r\n> objects. OTOH, it can ignore DDLs for not-synced-yet tables and other database\r\n> objects such as INDEX, TRIGGER, RULE, etc (i.e. post-data).\r\n> \r\n> 4. The tablesync worker creates its replication slot, dumps+restores the table\r\n> schema, update the pg_subscription_rel, and perform COPY.\r\n> \r\n> These operations should be done in the same transaction.\r\n\r\npg_restore wont be rollbackable, So we need to maintain states in pg_subscription_rel.\r\n\r\n> \r\n> 5. 
After finishing COPY, the tablesync worker dumps indexes (and perhaps\r\n> constraints) of the table and creates them (which possibly takes a long time).\r\n> Then it starts to catch up, same as today. The apply worker needs to wait for the\r\n> tablesync worker to catch up.\r\n\r\nI don’t think we can have CATCHUP stage. We can have a DDL on publisher which\r\ncan add a new column (And this DDL will be executed by applier later). Then we get a INSERT\r\n because we have old definition of table, insert will fail.\r\n\r\n> \r\n> We need to repeat these steps until we complete the initial data copy and create\r\n> indexes for all tables, IOW until all pg_subscription_rel status becomes READY.\r\n> \r\n> 6. If the apply worker confirms all tables are READY, it starts another sync\r\n> worker who is responsible for the post-data database objects such as TRIGGER,\r\n> RULE, POLICY etc (i.e. post-data).\r\n> \r\n> While the sync worker is starting up or working, the apply worker applies\r\n> changes for pre-data database objects as well as READY tables.\r\nWe might have some issue if we have create table like\r\nCreate table_name as select * from materialized_view.\r\n> \r\n> 7. Similar to the tablesync worker, this sync worker creates its replication slot\r\n> and sets the returned LSN somewhere, say pg_subscription.\r\n> \r\n> 8. The sync worker dumps and restores these objects. Which could take a time\r\n> since it would need to create FK constraints. Then it starts to catch up if the\r\n> apply worker is ahead. The apply worker waits for the sync worker to catch up.\r\n> \r\n> 9. Once the sync worker catches up, the apply worker starts applying changes\r\n> for all database objects.\r\n> \r\n> IIUC with this approach, we can resolve the concurrent DDL problem Sachin\r\n> mentioned, and indexes (and constraints) are created after the initial data copy.\r\n> \r\n> The procedures are still very complex and not fully considered yet but I hope\r\n> there are some useful things at least for discussion.\r\n> \r\n> Probably we can start with supporting only tables. In this case, we would not\r\n> need the post-data phase (i.e. step 6-9). It seems to me that we need to have\r\n> the subscription state somewhere (maybe\r\n> pg_subscription) so that the apply worker figure out the next step.\r\n> Since we need to dump and restore different objects on different timings, we\r\n> probably cannot directly use pg_dump/pg_restore. I've not considered how the\r\n> concurrent REFRESH PUBLICATION works.\r\n\r\nI think above prototype will work and will have least amount of side effects, but\r\nIt might be too complex to implement and I am not sure about corner cases.\r\n\r\nI was thinking of other ways of doing Initial Sync , which are less complex but each\r\nwith separate set of bottlenecks\r\n\r\nOn Publisher Side:- \r\n1) Locking the publisher:- Easiest one to implement, applier process will get Access Shared\r\nlock on the all the published tables. (We don't have to worry newly created concurrent table)\r\nAs tableSync will finish syncing the table, it will release table lock, So we will release\r\ntable locks in steps. 
Users can still perform DML on tables, but DDLs wont be allowed.\r\n\r\nOn Subscriber Side:-\r\nSo the main issue is tableSync process can see the future table data/version wrt to the\r\napplier process, So we have to find a way to ensure that tableSync/applier process sees\r\nsame table version.\r\n\r\n2) Using pg_dump/pg_restore for schema and data:- As Amit mentioned we can use pg_dump/\r\npg_restore [1], Although it might have side effect of using double storage , we can\r\ntable pg_dump of each table separately and delete the dump as soon as table is synced.\r\ntableSync process will read the dump and call pg_restore on the table.\r\nIf we crash in middle of restoring the tables we can start pg_dump(--clean)/restore again\r\nwith left out tables.\r\nWith this we can reduce space usage but we might create too many files.\r\n\r\n3) Using publisher snapshot:- Applier process will do pg_dump/pg_restore as usual,\r\nThen applier process will start a new process P1 which will connect to\r\npublisher and start a transaction , it will call pg_export_snapshot() to export the\r\nsnapshot.Then applier process will take snapshot string and pass it to the tableSync process\r\nas a argument. tableSync will use this snapshot for COPY TABLE. tableSync should only\r\ndo COPY TABLE and then will exit , So we wont do any catchup phase in tableSync. After\r\nall tables finish COPY table transaction will be committed by P1 process and it will exit.\r\nIn the case of crash/restart we can simple start from beginning since nothing is committed\r\ntill every table is synced. There are 2 main issues with this approach\r\n1. I am not sure what side-effects we might have on publisher since we might have to keep\r\nthe transaction open for long time.\r\n2. Applier process will simple wait till all tables are synced.\r\nsince applier process wont be able to apply any wal_logs till all tables are synced\r\nmaybe instead of creating new process Applier process itself can start transaction/\r\nexport snapshot and tableSync process will use that snapshot. After all tables are synced\r\nit can start wal_streaming.\r\n\r\nI think approach no 3 might be the best way.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1Ld9-5ueomE_J5CA6LfRo%3DwemdTrUp5qdBhRFwGT%2BdOUw%40mail.gmail.com\r\n\r\nRegards\r\nSachin\r\n",
"msg_date": "Mon, 3 Apr 2023 06:53:56 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 3:54 PM Kumar, Sachin <[email protected]> wrote:\n>\n>\n>\n> > -----Original Message-----\n> > From: Masahiko Sawada <[email protected]>\n> > > > I was thinking each TableSync process will call pg_dump --table,\n> > > > This way if we have N tableSync process, we can have N pg_dump --\n> > table=table_name called in parallel.\n> > > > In fact we can use --schema-only to get schema and then let COPY\n> > > > take care of data syncing . We will use same snapshot for pg_dump as well\n> > as COPY table.\n> > >\n> > > How can we postpone creating the pg_subscription_rel entries until the\n> > > tablesync worker starts and does the schema sync? I think that since\n> > > pg_subscription_rel entry needs the table OID, we need either to do\n> > > the schema sync before creating the entry (i.e, during CREATE\n> > > SUBSCRIPTION) or to postpone creating entries as Amit proposed[1]. The\n> > > apply worker needs the information of tables to sync in order to\n> > > launch the tablesync workers, but it needs to create the table schema\n> > > to get that information.\n> >\n> > For the above reason, I think that step 6 of the initial proposal won't work.\n> >\n> > If we can have the tablesync worker create an entry of pg_subscription_rel after\n> > creating the table, it may give us the flexibility to perform the initial sync. One\n> > idea is that we add a relname field to pg_subscription_rel so that we can create\n> > entries with relname instead of OID if the table is not created yet. Once the\n> > table is created, we clear the relname field and set the OID of the table instead.\n> > It's not an ideal solution but we might make it simpler later.\n> >\n> > Assuming that it's feasible, I'm considering another approach for the initial sync\n> > in order to address the concurrent DDLs.\n> >\n> > The basic idea is to somewhat follow how pg_dump/restore to dump/restore\n> > the database data. We divide the synchronization phase (including both schema\n> > and data) up into three phases: pre-data, table-data, post-data. These mostly\n> > follow the --section option of pg_dump.\n> >\n> > 1. The backend process performing CREATE SUBSCRIPTION creates the\n> > subscription but doesn't create pg_subscription_rel entries yet.\n> >\n> > 2. Before starting the streaming, the apply worker fetches the table list from the\n> > publisher, create pg_subscription_rel entries for them, and dumps+restores\n> > database objects that tables could depend on but don't depend on tables such as\n> > TYPE, OPERATOR, ACCESS METHOD etc (i.e.\n> > pre-data).\n>\n> We will not have slot starting snapshot, So somehow we have to get a new snapshot\n> And skip all the wal_log between starting of slot and snapshot creation lsn ? .\n\nYes. Or we can somehow postpone creating pg_subscription_rel entries\nuntil the tablesync workers create tables, or we request walsender to\nestablish a new snapshot using a technique proposed in email[1].\n\n>\n> >\n> > 3. The apply worker launches the tablesync workers for tables that need to be\n> > synchronized.\n> >\n> > There might be DDLs executed on the publisher for tables before the tablesync\n> > worker starts. But the apply worker needs to apply DDLs for pre-data database\n> > objects. OTOH, it can ignore DDLs for not-synced-yet tables and other database\n> > objects such as INDEX, TRIGGER, RULE, etc (i.e. post-data).\n> >\n> > 4. 
The tablesync worker creates its replication slot, dumps+restores the table\n> > schema, update the pg_subscription_rel, and perform COPY.\n> >\n> > These operations should be done in the same transaction.\n>\n> pg_restore wont be rollbackable, So we need to maintain states in pg_subscription_rel.\n\nYes. But I think it depends on how we restore them. For example, if we\nhave the tablesync worker somethow restore the table using a new SQL\nfunction returning the table schema as we discussed or executing the\ndump file via SPI, we can do that in the same transaction.\n\n>\n> >\n> > 5. After finishing COPY, the tablesync worker dumps indexes (and perhaps\n> > constraints) of the table and creates them (which possibly takes a long time).\n> > Then it starts to catch up, same as today. The apply worker needs to wait for the\n> > tablesync worker to catch up.\n>\n> I don’t think we can have CATCHUP stage. We can have a DDL on publisher which\n> can add a new column (And this DDL will be executed by applier later). Then we get a INSERT\n> because we have old definition of table, insert will fail.\n\nAll DMLs and DDLs associated with the table being synchronized are\napplied by the tablesync worker until it catches up with the apply\nworker.\n\n>\n> >\n> > We need to repeat these steps until we complete the initial data copy and create\n> > indexes for all tables, IOW until all pg_subscription_rel status becomes READY.\n> >\n> > 6. If the apply worker confirms all tables are READY, it starts another sync\n> > worker who is responsible for the post-data database objects such as TRIGGER,\n> > RULE, POLICY etc (i.e. post-data).\n> >\n> > While the sync worker is starting up or working, the apply worker applies\n> > changes for pre-data database objects as well as READY tables.\n> We might have some issue if we have create table like\n> Create table_name as select * from materialized_view.\n\nCould you elaborate on the scenario where we could have an issue with such DDL?\n\n> >\n> > 7. Similar to the tablesync worker, this sync worker creates its replication slot\n> > and sets the returned LSN somewhere, say pg_subscription.\n> >\n> > 8. The sync worker dumps and restores these objects. Which could take a time\n> > since it would need to create FK constraints. Then it starts to catch up if the\n> > apply worker is ahead. The apply worker waits for the sync worker to catch up.\n> >\n> > 9. Once the sync worker catches up, the apply worker starts applying changes\n> > for all database objects.\n> >\n> > IIUC with this approach, we can resolve the concurrent DDL problem Sachin\n> > mentioned, and indexes (and constraints) are created after the initial data copy.\n> >\n> > The procedures are still very complex and not fully considered yet but I hope\n> > there are some useful things at least for discussion.\n> >\n> > Probably we can start with supporting only tables. In this case, we would not\n> > need the post-data phase (i.e. step 6-9). It seems to me that we need to have\n> > the subscription state somewhere (maybe\n> > pg_subscription) so that the apply worker figure out the next step.\n> > Since we need to dump and restore different objects on different timings, we\n> > probably cannot directly use pg_dump/pg_restore. 
I've not considered how the\n> > concurrent REFRESH PUBLICATION works.\n>\n> I think above prototype will work and will have least amount of side effects, but\n> It might be too complex to implement and I am not sure about corner cases.\n>\n> I was thinking of other ways of doing Initial Sync , which are less complex but each\n> with separate set of bottlenecks\n>\n> On Publisher Side:-\n> 1) Locking the publisher:- Easiest one to implement, applier process will get Access Shared\n> lock on the all the published tables. (We don't have to worry newly created concurrent table)\n> As tableSync will finish syncing the table, it will release table lock, So we will release\n> table locks in steps. Users can still perform DML on tables, but DDLs wont be allowed.\n\nDo you mean that the apply worker acquires table locks and the\ntablesync workers release them? If so, how can we implement it?\n\n>\n> 2) Using pg_dump/pg_restore for schema and data:- As Amit mentioned we can use pg_dump/\n> pg_restore [1], Although it might have side effect of using double storage , we can\n> table pg_dump of each table separately and delete the dump as soon as table is synced.\n> tableSync process will read the dump and call pg_restore on the table.\n> If we crash in middle of restoring the tables we can start pg_dump(--clean)/restore again\n> with left out tables.\n> With this we can reduce space usage but we might create too many files.\n\nWith this idea, who does pg_dump and pg_restore? and when do we create\npg_subscription_rel entries?\n\n>\n> 3) Using publisher snapshot:- Applier process will do pg_dump/pg_restore as usual,\n> Then applier process will start a new process P1 which will connect to\n> publisher and start a transaction , it will call pg_export_snapshot() to export the\n> snapshot.Then applier process will take snapshot string and pass it to the tableSync process\n> as a argument. tableSync will use this snapshot for COPY TABLE. tableSync should only\n> do COPY TABLE and then will exit , So we wont do any catchup phase in tableSync. After\n> all tables finish COPY table transaction will be committed by P1 process and it will exit.\n> In the case of crash/restart we can simple start from beginning since nothing is committed\n> till every table is synced. There are 2 main issues with this approach\n> 1. I am not sure what side-effects we might have on publisher since we might have to keep\n> the transaction open for long time.\n\nI'm concerned that it would not be an acceptable downside that we keep\na transaction open until all tables are synchronized.\n\n> 2. Applier process will simple wait till all tables are synced.\n> since applier process wont be able to apply any wal_logs till all tables are synced\n> maybe instead of creating new process Applier process itself can start transaction/\n> export snapshot and tableSync process will use that snapshot. After all tables are synced\n> it can start wal_streaming.\n\nI think that after users execute REFRESH PUBLICATION, there are mixed\nnon-ready and ready tables in the subscription. In this case, it's a\nhuge restriction for users that logical replication for the ready\ntables stops until all newly-subscribed tables are synchronized.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAGPVpCRWEVhXa7ovrhuSQofx4to7o22oU9iKtrOgAOtz_%3DY6vg%40mail.gmail.com\n\n\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 4 Apr 2023 00:15:29 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "> From: Masahiko Sawada <[email protected]>\r\n> > >\r\n> > > 3. The apply worker launches the tablesync workers for tables that\r\n> > > need to be synchronized.\r\n> > >\r\n> > > There might be DDLs executed on the publisher for tables before the\r\n> > > tablesync worker starts. But the apply worker needs to apply DDLs\r\n> > > for pre-data database objects. OTOH, it can ignore DDLs for\r\n> > > not-synced-yet tables and other database objects such as INDEX,\r\n> TRIGGER, RULE, etc (i.e. post-data).\r\n> > >\r\n> > > 4. The tablesync worker creates its replication slot, dumps+restores\r\n> > > the table schema, update the pg_subscription_rel, and perform COPY.\r\n> > >\r\n> > > These operations should be done in the same transaction.\r\n> >\r\n> > pg_restore wont be rollbackable, So we need to maintain states in\r\n> pg_subscription_rel.\r\n> \r\n> Yes. But I think it depends on how we restore them. For example, if we have\r\n> the tablesync worker somethow restore the table using a new SQL function\r\n> returning the table schema as we discussed or executing the dump file via\r\n> SPI, we can do that in the same transaction.\r\n\r\nokay\r\n\r\n> \r\n> >\r\n> > >\r\n> > > 5. After finishing COPY, the tablesync worker dumps indexes (and\r\n> > > perhaps\r\n> > > constraints) of the table and creates them (which possibly takes a long\r\n> time).\r\n> > > Then it starts to catch up, same as today. The apply worker needs to\r\n> > > wait for the tablesync worker to catch up.\r\n> >\r\n> > I don’t think we can have CATCHUP stage. We can have a DDL on\r\n> > publisher which can add a new column (And this DDL will be executed by\r\n> > applier later). Then we get a INSERT because we have old definition of\r\n> table, insert will fail.\r\n> \r\n> All DMLs and DDLs associated with the table being synchronized are applied\r\n> by the tablesync worker until it catches up with the apply worker.\r\n\r\nRight, Sorry I forgot that in above case if definition on publisher changes we will also have a \r\ncorresponding DDLs.\r\n\r\n> \r\n> >\r\n> > >\r\n> > > We need to repeat these steps until we complete the initial data\r\n> > > copy and create indexes for all tables, IOW until all pg_subscription_rel\r\n> status becomes READY.\r\n> > >\r\n> > > 6. If the apply worker confirms all tables are READY, it starts\r\n> > > another sync worker who is responsible for the post-data database\r\n> > > objects such as TRIGGER, RULE, POLICY etc (i.e. post-data).\r\n> > >\r\n> > > While the sync worker is starting up or working, the apply worker\r\n> > > applies changes for pre-data database objects as well as READY tables.\r\n> > We might have some issue if we have create table like Create\r\n> > table_name as select * from materialized_view.\r\n> \r\n> Could you elaborate on the scenario where we could have an issue with such\r\n> DDL?\r\n\r\nSince materialized view of publisher has not been created by subscriber yet\r\nSo if we have a DDL which does a create table using a materialized view\r\nit will fail. I am not sure how DDL patch is handling create table as statements.\r\nIf it is modified to become like a normal CREATE TABLE then we wont have any issues. \r\n\r\n> \r\n> > >\r\n> > > 7. Similar to the tablesync worker, this sync worker creates its\r\n> > > replication slot and sets the returned LSN somewhere, say\r\n> pg_subscription.\r\n> > >\r\n> > > 8. The sync worker dumps and restores these objects. Which could\r\n> > > take a time since it would need to create FK constraints. 
Then it\r\n> > > starts to catch up if the apply worker is ahead. The apply worker waits for\r\n> the sync worker to catch up.\r\n> > >\r\n> > > 9. Once the sync worker catches up, the apply worker starts applying\r\n> > > changes for all database objects.\r\n> > >\r\n> > > IIUC with this approach, we can resolve the concurrent DDL problem\r\n> > > Sachin mentioned, and indexes (and constraints) are created after the\r\n> initial data copy.\r\n> > >\r\n> > > The procedures are still very complex and not fully considered yet\r\n> > > but I hope there are some useful things at least for discussion.\r\n> > >\r\n> > > Probably we can start with supporting only tables. In this case, we\r\n> > > would not need the post-data phase (i.e. step 6-9). It seems to me\r\n> > > that we need to have the subscription state somewhere (maybe\r\n> > > pg_subscription) so that the apply worker figure out the next step.\r\n> > > Since we need to dump and restore different objects on different\r\n> > > timings, we probably cannot directly use pg_dump/pg_restore. I've\r\n> > > not considered how the concurrent REFRESH PUBLICATION works.\r\n> >\r\n> > I think above prototype will work and will have least amount of side\r\n> > effects, but It might be too complex to implement and I am not sure about\r\n> corner cases.\r\n> >\r\n> > I was thinking of other ways of doing Initial Sync , which are less\r\n> > complex but each with separate set of bottlenecks\r\n> >\r\n> > On Publisher Side:-\r\n> > 1) Locking the publisher:- Easiest one to implement, applier process\r\n> > will get Access Shared lock on the all the published tables. (We don't\r\n> > have to worry newly created concurrent table) As tableSync will finish\r\n> > syncing the table, it will release table lock, So we will release table locks in\r\n> steps. Users can still perform DML on tables, but DDLs wont be allowed.\r\n> \r\n> Do you mean that the apply worker acquires table locks and the tablesync\r\n> workers release them? If so, how can we implement it?\r\n> \r\n\r\nI think releasing lock in steps would be impossible (given postgres lock implementations)\r\nSo applier process has to create a new transaction and lock all the published tables in \r\naccess shared mode. And after tableSync is completed transaction will be committed to release\r\nlocks. So 1 and 3 are similar we have to keep one transaction open till table are synced.\r\n\r\n> >\r\n> > 2) Using pg_dump/pg_restore for schema and data:- As Amit mentioned\r\n> we\r\n> > can use pg_dump/ pg_restore [1], Although it might have side effect of\r\n> > using double storage , we can table pg_dump of each table separately and\r\n> delete the dump as soon as table is synced.\r\n> > tableSync process will read the dump and call pg_restore on the table.\r\n> > If we crash in middle of restoring the tables we can start\r\n> > pg_dump(--clean)/restore again with left out tables.\r\n> > With this we can reduce space usage but we might create too many files.\r\n> \r\n> With this idea, who does pg_dump and pg_restore? and when do we create\r\n> pg_subscription_rel entries?\r\n\r\nApplier process will do pg_dump/pg_restore . 
pg_subscription_rel entries can be created\r\nafter pg_restore, We can create a new column with rel_nam and keep oid empty As you have\r\nsuggested earlier.\r\n\r\n> \r\n> >\r\n> > 3) Using publisher snapshot:- Applier process will do\r\n> > pg_dump/pg_restore as usual, Then applier process will start a new\r\n> > process P1 which will connect to publisher and start a transaction ,\r\n> > it will call pg_export_snapshot() to export the snapshot.Then applier\r\n> > process will take snapshot string and pass it to the tableSync process\r\n> > as a argument. tableSync will use this snapshot for COPY TABLE.\r\n> > tableSync should only do COPY TABLE and then will exit , So we wont do\r\n> any catchup phase in tableSync. After all tables finish COPY table transaction\r\n> will be committed by P1 process and it will exit.\r\n> > In the case of crash/restart we can simple start from beginning since\r\n> > nothing is committed till every table is synced. There are 2 main\r\n> > issues with this approach 1. I am not sure what side-effects we might\r\n> > have on publisher since we might have to keep the transaction open for\r\n> long time.\r\n> \r\n> I'm concerned that it would not be an acceptable downside that we keep a\r\n> transaction open until all tables are synchronized.\r\n> \r\n\r\nOkay, There is one more issue just using same snapshot will not stop table DDL\r\nmodifications, we need to have atleast access share lock on each tables.\r\nSo this will make tables locked on publisher, So this is essentially same as 1.\r\n\r\nRegards\r\nSachin\r\n",
"msg_date": "Wed, 5 Apr 2023 13:25:12 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 10:11 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Mar 30, 2023 at 12:18 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Mar 29, 2023 at 7:57 PM Kumar, Sachin <[email protected]> wrote:\n> > >\n> > > > > > > From: Amit Kapila <[email protected]>\n> > > > > > > > I think we won't be able to use same snapshot because the\n> > > > > > > > transaction will be committed.\n> > > > > > > > In CreateSubscription() we can use the transaction snapshot from\n> > > > > > > > walrcv_create_slot() till walrcv_disconnect() is called.(I am\n> > > > > > > > not sure about this part maybe walrcv_disconnect() calls the commits\n> > > > internally ?).\n> > > > > > > > So somehow we need to keep this snapshot alive, even after\n> > > > > > > > transaction is committed(or delay committing the transaction ,\n> > > > > > > > but we can have CREATE SUBSCRIPTION with ENABLED=FALSE, so we\n> > > > > > > > can have a restart before tableSync is able to use the same\n> > > > > > > > snapshot.)\n> > > > > > > >\n> > > > > > >\n> > > > > > > Can we think of getting the table data as well along with schema\n> > > > > > > via pg_dump? Won't then both schema and initial data will\n> > > > > > > correspond to the same snapshot?\n> > > > > >\n> > > > > > Right , that will work, Thanks!\n> > > > >\n> > > > > While it works, we cannot get the initial data in parallel, no?\n> > > > >\n> > >\n> > > I was thinking each TableSync process will call pg_dump --table, This way if we have N\n> > > tableSync process, we can have N pg_dump --table=table_name called in parallel.\n> > > In fact we can use --schema-only to get schema and then let COPY take care of data\n> > > syncing . We will use same snapshot for pg_dump as well as COPY table.\n> >\n> > How can we postpone creating the pg_subscription_rel entries until the\n> > tablesync worker starts and does the schema sync? I think that since\n> > pg_subscription_rel entry needs the table OID, we need either to do\n> > the schema sync before creating the entry (i.e, during CREATE\n> > SUBSCRIPTION) or to postpone creating entries as Amit proposed[1]. The\n> > apply worker needs the information of tables to sync in order to\n> > launch the tablesync workers, but it needs to create the table schema\n> > to get that information.\n>\n> For the above reason, I think that step 6 of the initial proposal won't work.\n>\n> If we can have the tablesync worker create an entry of\n> pg_subscription_rel after creating the table, it may give us the\n> flexibility to perform the initial sync. One idea is that we add a\n> relname field to pg_subscription_rel so that we can create entries\n> with relname instead of OID if the table is not created yet. Once the\n> table is created, we clear the relname field and set the OID of the\n> table instead. It's not an ideal solution but we might make it simpler\n> later.\n\nWhile writing a PoC patch, I found some difficulties in this idea.\nFirst, I tried to add schemaname+relname to pg_subscription_rel but I\ncould not define the primary key of pg_subscription_rel. The primary\nkey on (srsubid, srrelid) doesn't work since srrelid could be NULL.\nSimilarly, the primary key on (srsubid, srrelid, schemaname, relname)\nalso doesn't work. So I tried another idea: that we generate a new OID\nfor srrelid and the tablesync worker will replace it with the new\ntable's OID once it creates the table. 
However, since we use srrelid\nin replication slot names, changing srrelid during the initial\nschema+data sync is not straightforward (please note that the slot is\ncreated by the tablesync worker but is removed by the apply worker).\nUsing relname in slot name instead of srrelid is not a good idea since\nit requires all pg_subscription_rel entries have relname, and slot\nnames could be duplicated, for example, when the relname is very long\nand we cut it.\n\nI'm trying to consider the idea from another angle: the apply worker\nfetches the table list and passes the relname to the tablesync worker.\nBut a problem of this approach is that the table list is not\npersisted. If the apply worker restarts during the initial table sync,\nit could not get the same list as before.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 6 Apr 2023 22:26:33 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Thu, Apr 6, 2023 at 6:57 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Mar 30, 2023 at 10:11 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Thu, Mar 30, 2023 at 12:18 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > >\n> > > How can we postpone creating the pg_subscription_rel entries until the\n> > > tablesync worker starts and does the schema sync? I think that since\n> > > pg_subscription_rel entry needs the table OID, we need either to do\n> > > the schema sync before creating the entry (i.e, during CREATE\n> > > SUBSCRIPTION) or to postpone creating entries as Amit proposed[1]. The\n> > > apply worker needs the information of tables to sync in order to\n> > > launch the tablesync workers, but it needs to create the table schema\n> > > to get that information.\n> >\n> > For the above reason, I think that step 6 of the initial proposal won't work.\n> >\n> > If we can have the tablesync worker create an entry of\n> > pg_subscription_rel after creating the table, it may give us the\n> > flexibility to perform the initial sync. One idea is that we add a\n> > relname field to pg_subscription_rel so that we can create entries\n> > with relname instead of OID if the table is not created yet. Once the\n> > table is created, we clear the relname field and set the OID of the\n> > table instead. It's not an ideal solution but we might make it simpler\n> > later.\n>\n> While writing a PoC patch, I found some difficulties in this idea.\n> First, I tried to add schemaname+relname to pg_subscription_rel but I\n> could not define the primary key of pg_subscription_rel. The primary\n> key on (srsubid, srrelid) doesn't work since srrelid could be NULL.\n> Similarly, the primary key on (srsubid, srrelid, schemaname, relname)\n> also doesn't work.\n>\n\nCan we think of having a separate catalog table say\npg_subscription_remote_rel for this? You can have srsubid,\nremote_schema_name, remote_rel_name, etc. We may need some other state\nto be maintained during the initial schema sync where this table can\nbe used. Basically, this can be used to maintain the state till the\ninitial schema sync is complete because we can create a relation entry\nin pg_subscritption_rel only after the initial schema sync is\ncomplete.\n\n> So I tried another idea: that we generate a new OID\n> for srrelid and the tablesync worker will replace it with the new\n> table's OID once it creates the table. However, since we use srrelid\n> in replication slot names, changing srrelid during the initial\n> schema+data sync is not straightforward (please note that the slot is\n> created by the tablesync worker but is removed by the apply worker).\n> Using relname in slot name instead of srrelid is not a good idea since\n> it requires all pg_subscription_rel entries have relname, and slot\n> names could be duplicated, for example, when the relname is very long\n> and we cut it.\n>\n> I'm trying to consider the idea from another angle: the apply worker\n> fetches the table list and passes the relname to the tablesync worker.\n> But a problem of this approach is that the table list is not\n> persisted. If the apply worker restarts during the initial table sync,\n> it could not get the same list as before.\n>\n\nAgreed, this has some drawbacks. We can try to explore this if the\nabove idea of the new catalog table doesn't solve this problem.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 7 Apr 2023 15:07:38 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 6:37 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Apr 6, 2023 at 6:57 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Thu, Mar 30, 2023 at 10:11 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Thu, Mar 30, 2023 at 12:18 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > >\n> > > > How can we postpone creating the pg_subscription_rel entries until the\n> > > > tablesync worker starts and does the schema sync? I think that since\n> > > > pg_subscription_rel entry needs the table OID, we need either to do\n> > > > the schema sync before creating the entry (i.e, during CREATE\n> > > > SUBSCRIPTION) or to postpone creating entries as Amit proposed[1]. The\n> > > > apply worker needs the information of tables to sync in order to\n> > > > launch the tablesync workers, but it needs to create the table schema\n> > > > to get that information.\n> > >\n> > > For the above reason, I think that step 6 of the initial proposal won't work.\n> > >\n> > > If we can have the tablesync worker create an entry of\n> > > pg_subscription_rel after creating the table, it may give us the\n> > > flexibility to perform the initial sync. One idea is that we add a\n> > > relname field to pg_subscription_rel so that we can create entries\n> > > with relname instead of OID if the table is not created yet. Once the\n> > > table is created, we clear the relname field and set the OID of the\n> > > table instead. It's not an ideal solution but we might make it simpler\n> > > later.\n> >\n> > While writing a PoC patch, I found some difficulties in this idea.\n> > First, I tried to add schemaname+relname to pg_subscription_rel but I\n> > could not define the primary key of pg_subscription_rel. The primary\n> > key on (srsubid, srrelid) doesn't work since srrelid could be NULL.\n> > Similarly, the primary key on (srsubid, srrelid, schemaname, relname)\n> > also doesn't work.\n> >\n>\n> Can we think of having a separate catalog table say\n> pg_subscription_remote_rel for this? You can have srsubid,\n> remote_schema_name, remote_rel_name, etc. We may need some other state\n> to be maintained during the initial schema sync where this table can\n> be used. Basically, this can be used to maintain the state till the\n> initial schema sync is complete because we can create a relation entry\n> in pg_subscritption_rel only after the initial schema sync is\n> complete.\n\nIt might not be ideal but I guess it works. But I think we need to\nmodify the name of replication slot for initial sync as it currently\nincludes OID of the table:\n\nvoid\nReplicationSlotNameForTablesync(Oid suboid, Oid relid,\n char *syncslotname, Size szslot)\n{\n snprintf(syncslotname, szslot, \"pg_%u_sync_%u_\" UINT64_FORMAT, suboid,\n relid, GetSystemIdentifier());\n}\n\nIf we use both schema name and table name, it's possible that slot\nnames are duplicated if schema and/or table names are long. Another\nidea is to use the hash value of schema+table names, but it cannot\ncompletely eliminate that possibility, and probably would make\ninvestigation and debugging hard in case of any failure. Probably we\ncan use the OID of each entry in pg_subscription_remote_rel instead,\nbut I'm not sure it's a good idea, mainly because we will end up using\ntwice as many OIDs as before.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 17 Apr 2023 12:41:29 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "> From: Masahiko Sawada <[email protected]>\r\n> > > While writing a PoC patch, I found some difficulties in this idea.\r\n> > > First, I tried to add schemaname+relname to pg_subscription_rel but\r\n> > > I could not define the primary key of pg_subscription_rel. The\r\n> > > primary key on (srsubid, srrelid) doesn't work since srrelid could be NULL.\r\n> > > Similarly, the primary key on (srsubid, srrelid, schemaname,\r\n> > > relname) also doesn't work.\r\n> > >\r\n> >\r\n> > Can we think of having a separate catalog table say\r\n> > pg_subscription_remote_rel for this? You can have srsubid,\r\n> > remote_schema_name, remote_rel_name, etc. We may need some other\r\n> state\r\n> > to be maintained during the initial schema sync where this table can\r\n> > be used. Basically, this can be used to maintain the state till the\r\n> > initial schema sync is complete because we can create a relation entry\r\n> > in pg_subscritption_rel only after the initial schema sync is\r\n> > complete.\r\n> \r\n> It might not be ideal but I guess it works. But I think we need to modify the name\r\n> of replication slot for initial sync as it currently includes OID of the table:\r\n> \r\n> void\r\n> ReplicationSlotNameForTablesync(Oid suboid, Oid relid,\r\n> char *syncslotname, Size szslot) {\r\n> snprintf(syncslotname, szslot, \"pg_%u_sync_%u_\" UINT64_FORMAT, suboid,\r\n> relid, GetSystemIdentifier()); }\r\n> \r\n> If we use both schema name and table name, it's possible that slot names are\r\n> duplicated if schema and/or table names are long. Another idea is to use the\r\n> hash value of schema+table names, but it cannot completely eliminate that\r\n> possibility, and probably would make investigation and debugging hard in case\r\n> of any failure. Probably we can use the OID of each entry in\r\n> pg_subscription_remote_rel instead, but I'm not sure it's a good idea, mainly\r\n> because we will end up using twice as many OIDs as before.\r\n\r\nMaybe we can create serial primary key for pg_subscription_remote_rel table \r\nAnd use this key for creating replication slot ?\r\n\r\nRegards\r\nSachin\r\n",
"msg_date": "Wed, 19 Apr 2023 06:21:58 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "I am working on a prototype with above Idea , and will send it for review by Sunday/Monday\r\n\r\nRegards\r\nSachin\r\n",
"msg_date": "Thu, 20 Apr 2023 10:07:47 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 9:12 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Apr 7, 2023 at 6:37 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Apr 6, 2023 at 6:57 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > >\n> > > While writing a PoC patch, I found some difficulties in this idea.\n> > > First, I tried to add schemaname+relname to pg_subscription_rel but I\n> > > could not define the primary key of pg_subscription_rel. The primary\n> > > key on (srsubid, srrelid) doesn't work since srrelid could be NULL.\n> > > Similarly, the primary key on (srsubid, srrelid, schemaname, relname)\n> > > also doesn't work.\n> > >\n> >\n> > Can we think of having a separate catalog table say\n> > pg_subscription_remote_rel for this? You can have srsubid,\n> > remote_schema_name, remote_rel_name, etc. We may need some other state\n> > to be maintained during the initial schema sync where this table can\n> > be used. Basically, this can be used to maintain the state till the\n> > initial schema sync is complete because we can create a relation entry\n> > in pg_subscritption_rel only after the initial schema sync is\n> > complete.\n>\n> It might not be ideal but I guess it works. But I think we need to\n> modify the name of replication slot for initial sync as it currently\n> includes OID of the table:\n>\n> void\n> ReplicationSlotNameForTablesync(Oid suboid, Oid relid,\n> char *syncslotname, Size szslot)\n> {\n> snprintf(syncslotname, szslot, \"pg_%u_sync_%u_\" UINT64_FORMAT, suboid,\n> relid, GetSystemIdentifier());\n> }\n>\n> If we use both schema name and table name, it's possible that slot\n> names are duplicated if schema and/or table names are long. Another\n> idea is to use the hash value of schema+table names, but it cannot\n> completely eliminate that possibility, and probably would make\n> investigation and debugging hard in case of any failure. Probably we\n> can use the OID of each entry in pg_subscription_remote_rel instead,\n> but I'm not sure it's a good idea, mainly because we will end up using\n> twice as many OIDs as before.\n>\n\nThe other possibility is to use worker_pid. To make debugging easier,\nwe may want to LOG schema_name+rel_name vs slot_name mapping at DEBUG1\nlog level.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 20 Apr 2023 16:46:38 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
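A rough sketch of the DEBUG1 mapping log suggested above; the variable names (syncslotname, nspname, relname) are assumptions for illustration only:

/* Log which remote relation a tablesync slot belongs to, to aid debugging. */
elog(DEBUG1,
     "tablesync slot \"%s\" is used for remote relation \"%s.%s\"",
     syncslotname, nspname, relname);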
{
"msg_contents": "I am working on a prototype with above discussed idea, I think I will send it for initial review by Monday.\n\nRegards\nSachin\n\n",
"msg_date": "Thu, 20 Apr 2023 12:40:59 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 9:41 PM Kumar, Sachin <[email protected]> wrote:\n>\n> I am working on a prototype with above discussed idea, I think I will send it for initial review by Monday.\n>\n\nOkay, but which idea are you referring to? pg_subscription_remote_rel\n+ worker_pid idea Amit proposed?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Apr 2023 17:47:23 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 8:16 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Apr 17, 2023 at 9:12 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Fri, Apr 7, 2023 at 6:37 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Thu, Apr 6, 2023 at 6:57 PM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > >\n> > > > While writing a PoC patch, I found some difficulties in this idea.\n> > > > First, I tried to add schemaname+relname to pg_subscription_rel but I\n> > > > could not define the primary key of pg_subscription_rel. The primary\n> > > > key on (srsubid, srrelid) doesn't work since srrelid could be NULL.\n> > > > Similarly, the primary key on (srsubid, srrelid, schemaname, relname)\n> > > > also doesn't work.\n> > > >\n> > >\n> > > Can we think of having a separate catalog table say\n> > > pg_subscription_remote_rel for this? You can have srsubid,\n> > > remote_schema_name, remote_rel_name, etc. We may need some other state\n> > > to be maintained during the initial schema sync where this table can\n> > > be used. Basically, this can be used to maintain the state till the\n> > > initial schema sync is complete because we can create a relation entry\n> > > in pg_subscritption_rel only after the initial schema sync is\n> > > complete.\n> >\n> > It might not be ideal but I guess it works. But I think we need to\n> > modify the name of replication slot for initial sync as it currently\n> > includes OID of the table:\n> >\n> > void\n> > ReplicationSlotNameForTablesync(Oid suboid, Oid relid,\n> > char *syncslotname, Size szslot)\n> > {\n> > snprintf(syncslotname, szslot, \"pg_%u_sync_%u_\" UINT64_FORMAT, suboid,\n> > relid, GetSystemIdentifier());\n> > }\n> >\n> > If we use both schema name and table name, it's possible that slot\n> > names are duplicated if schema and/or table names are long. Another\n> > idea is to use the hash value of schema+table names, but it cannot\n> > completely eliminate that possibility, and probably would make\n> > investigation and debugging hard in case of any failure. Probably we\n> > can use the OID of each entry in pg_subscription_remote_rel instead,\n> > but I'm not sure it's a good idea, mainly because we will end up using\n> > twice as many OIDs as before.\n> >\n>\n> The other possibility is to use worker_pid. To make debugging easier,\n> we may want to LOG schema_name+rel_name vs slot_name mapping at DEBUG1\n> log level.\n\nSince worker_pid changes after the worker restarts, a new worker\ncannot find the slot that had been used, no?\n\nAfter thinking it over, a better solution would be that we add an oid\ncolumn, nspname column, and relname column to pg_subscription_rel and\nthe primary key on the oid. If the table is not present on the\nsubscriber we store the schema name and table name to the catalog, and\notherwise we store the local table oid same as today. The local table\noid will be filled after the schema sync. The names of origin and\nreplication slot the tablesync worker uses use the oid instead of the\ntable oid.\n\nI've attached a PoC patch of this idea (very rough patch and has many\nTODO comments). It mixes the following changes:\n\n1. Add oid column to the pg_subscription_rel. The oid is used as the\nprimary key and in the names of origin and slot the tablesync workers\nuse.\n\n2. Add copy_schema = on/off option to CREATE SUBSCRIPTION (not yet\nsupport for ALTER SUBSCRIPTION).\n\n3. 
Add CRS_EXPORT_USE_SNAPSHOT new action in order to use the same\nsnapshot by both walsender and other processes (e.g. pg_dump). In this\npatch, the snapshot is exported for pg_dump and is used by the\nwalsender for COPY.\n\nIt seems to work well but there might be a pitfall as I've not fully\nimplemented it.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 21 Apr 2023 17:47:31 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
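A rough sketch of what the extended pg_subscription_rel row described in this PoC could look like; the field names srnspname, srrelname, srsyncschema and srsyncdata follow the PoC, but the exact types and layout shown here are assumptions:

typedef struct FormData_pg_subscription_rel
{
    Oid         oid;            /* row identity; also used in tablesync
                                 * slot and origin names */
    Oid         srsubid;        /* owning subscription */
    Oid         srrelid;        /* local table oid; InvalidOid until the
                                 * schema has been synchronized */
    NameData    srnspname;      /* remote schema name, if not yet created */
    NameData    srrelname;      /* remote table name, if not yet created */
    bool        srsyncschema;   /* synchronize the table schema? */
    bool        srsyncdata;     /* copy the initial data? */
    char        srsubstate;     /* sync state, as today */
    XLogRecPtr  srsublsn;       /* remote LSN, as today */
} FormData_pg_subscription_rel;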
{
"msg_contents": "On Fri, Apr 21, 2023 at 16:48 PM Masahiko Sawada <[email protected]> wrote:\r\n> On Thu, Apr 20, 2023 at 8:16 PM Amit Kapila <[email protected]> wrote:\r\n> >\r\n> > On Mon, Apr 17, 2023 at 9:12 AM Masahiko Sawada\r\n> <[email protected]> wrote:\r\n> > >\r\n> > > On Fri, Apr 7, 2023 at 6:37 PM Amit Kapila <[email protected]> wrote:\r\n> > > >\r\n> > > > On Thu, Apr 6, 2023 at 6:57 PM Masahiko Sawada\r\n> <[email protected]> wrote:\r\n> > > > >\r\n> > > > >\r\n> > > > > While writing a PoC patch, I found some difficulties in this idea.\r\n> > > > > First, I tried to add schemaname+relname to pg_subscription_rel but I\r\n> > > > > could not define the primary key of pg_subscription_rel. The primary\r\n> > > > > key on (srsubid, srrelid) doesn't work since srrelid could be NULL.\r\n> > > > > Similarly, the primary key on (srsubid, srrelid, schemaname, relname)\r\n> > > > > also doesn't work.\r\n> > > > >\r\n> > > >\r\n> > > > Can we think of having a separate catalog table say\r\n> > > > pg_subscription_remote_rel for this? You can have srsubid,\r\n> > > > remote_schema_name, remote_rel_name, etc. We may need some other\r\n> state\r\n> > > > to be maintained during the initial schema sync where this table can\r\n> > > > be used. Basically, this can be used to maintain the state till the\r\n> > > > initial schema sync is complete because we can create a relation entry\r\n> > > > in pg_subscritption_rel only after the initial schema sync is\r\n> > > > complete.\r\n> > >\r\n> > > It might not be ideal but I guess it works. But I think we need to\r\n> > > modify the name of replication slot for initial sync as it currently\r\n> > > includes OID of the table:\r\n> > >\r\n> > > void\r\n> > > ReplicationSlotNameForTablesync(Oid suboid, Oid relid,\r\n> > > char *syncslotname, Size szslot)\r\n> > > {\r\n> > > snprintf(syncslotname, szslot, \"pg_%u_sync_%u_\" UINT64_FORMAT,\r\n> suboid,\r\n> > > relid, GetSystemIdentifier());\r\n> > > }\r\n> > >\r\n> > > If we use both schema name and table name, it's possible that slot\r\n> > > names are duplicated if schema and/or table names are long. Another\r\n> > > idea is to use the hash value of schema+table names, but it cannot\r\n> > > completely eliminate that possibility, and probably would make\r\n> > > investigation and debugging hard in case of any failure. Probably we\r\n> > > can use the OID of each entry in pg_subscription_remote_rel instead,\r\n> > > but I'm not sure it's a good idea, mainly because we will end up using\r\n> > > twice as many OIDs as before.\r\n> > >\r\n> >\r\n> > The other possibility is to use worker_pid. To make debugging easier,\r\n> > we may want to LOG schema_name+rel_name vs slot_name mapping at\r\n> DEBUG1\r\n> > log level.\r\n> \r\n> Since worker_pid changes after the worker restarts, a new worker\r\n> cannot find the slot that had been used, no?\r\n> \r\n> After thinking it over, a better solution would be that we add an oid\r\n> column, nspname column, and relname column to pg_subscription_rel and\r\n> the primary key on the oid. If the table is not present on the\r\n> subscriber we store the schema name and table name to the catalog, and\r\n> otherwise we store the local table oid same as today. The local table\r\n> oid will be filled after the schema sync. The names of origin and\r\n> replication slot the tablesync worker uses use the oid instead of the\r\n> table oid.\r\n> \r\n> I've attached a PoC patch of this idea (very rough patch and has many\r\n> TODO comments). 
It mixes the following changes:\r\n> \r\n> 1. Add oid column to the pg_subscription_rel. The oid is used as the\r\n> primary key and in the names of origin and slot the tablesync workers\r\n> use.\r\n> \r\n> 2. Add copy_schema = on/off option to CREATE SUBSCRIPTION (not yet\r\n> support for ALTER SUBSCRIPTION).\r\n> \r\n> 3. Add CRS_EXPORT_USE_SNAPSHOT new action in order to use the same\r\n> snapshot by both walsender and other processes (e.g. pg_dump). In this\r\n> patch, the snapshot is exported for pg_dump and is used by the\r\n> walsender for COPY.\r\n> \r\n> It seems to work well but there might be a pitfall as I've not fully\r\n> implemented it.\r\n\r\nThanks for your POC patch.\r\nAfter reviewing this patch, I have a question below that want to confirm:\r\n\r\n1. In the function synchronize_table_schema.\r\nI think some changes to GUC and table-related object SQLs are included in the\r\npg_dump result. And in this POC, these SQLs will be executed. Do we need to\r\nalter the pg_dump results to only execute the table schema related SQLs?\r\nFor example, if we have below table schema in the publisher-side:\r\n```\r\ncreate table tbl(a int, b int);\r\ncreate index idx_t on tbl (a);\r\nCREATE FUNCTION trigger_func() RETURNS TRIGGER LANGUAGE PLPGSQL AS $$ BEGIN INSERT INTO public.tbl VALUES (NEW.*); RETURN NEW; END; $$;\r\nCREATE TRIGGER tri_tbl BEFORE INSERT ON public.tbl FOR EACH ROW EXECUTE PROCEDURE trigger_func();\r\n```\r\nThe result of pg_dump executed on the subscriber-side:\r\n```\r\nSET statement_timeout = 0;\r\nSET lock_timeout = 0;\r\nSET idle_in_transaction_session_timeout = 0;\r\nSET client_encoding = 'UTF8';\r\nSET standard_conforming_strings = on;\r\nSELECT pg_catalog.set_config('search_path', '', false);\r\nSET check_function_bodies = false;\r\nSET xmloption = content;\r\nSET client_min_messages = warning;\r\nSET row_security = off;\r\nSET default_tablespace = '';\r\nSET default_table_access_method = heap;\r\n\r\nCREATE TABLE public.tbl (\r\n a integer,\r\n b integer\r\n);\r\n\r\nALTER TABLE public.tbl OWNER TO postgres;\r\n\r\nCREATE INDEX idx_t ON public.tbl USING btree (a);\r\n\r\nCREATE TRIGGER tri_tbl BEFORE INSERT ON public.tbl FOR EACH ROW EXECUTE FUNCTION public.trigger_func();\r\n```\r\nAnd this will cause an error when `CREATE TRIGGER` because we did not dump the\r\nfunction trigger_func.\r\n\r\nRegards,\r\nWang Wei\r\n",
"msg_date": "Thu, 27 Apr 2023 03:02:29 +0000",
"msg_from": "\"Wei Wang (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Thu, Apr 27, 2023 at 12:02 PM Wei Wang (Fujitsu)\n<[email protected]> wrote:\n>\n> On Fri, Apr 21, 2023 at 16:48 PM Masahiko Sawada <[email protected]> wrote:\n> > On Thu, Apr 20, 2023 at 8:16 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Mon, Apr 17, 2023 at 9:12 AM Masahiko Sawada\n> > <[email protected]> wrote:\n> > > >\n> > > > On Fri, Apr 7, 2023 at 6:37 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > On Thu, Apr 6, 2023 at 6:57 PM Masahiko Sawada\n> > <[email protected]> wrote:\n> > > > > >\n> > > > > >\n> > > > > > While writing a PoC patch, I found some difficulties in this idea.\n> > > > > > First, I tried to add schemaname+relname to pg_subscription_rel but I\n> > > > > > could not define the primary key of pg_subscription_rel. The primary\n> > > > > > key on (srsubid, srrelid) doesn't work since srrelid could be NULL.\n> > > > > > Similarly, the primary key on (srsubid, srrelid, schemaname, relname)\n> > > > > > also doesn't work.\n> > > > > >\n> > > > >\n> > > > > Can we think of having a separate catalog table say\n> > > > > pg_subscription_remote_rel for this? You can have srsubid,\n> > > > > remote_schema_name, remote_rel_name, etc. We may need some other\n> > state\n> > > > > to be maintained during the initial schema sync where this table can\n> > > > > be used. Basically, this can be used to maintain the state till the\n> > > > > initial schema sync is complete because we can create a relation entry\n> > > > > in pg_subscritption_rel only after the initial schema sync is\n> > > > > complete.\n> > > >\n> > > > It might not be ideal but I guess it works. But I think we need to\n> > > > modify the name of replication slot for initial sync as it currently\n> > > > includes OID of the table:\n> > > >\n> > > > void\n> > > > ReplicationSlotNameForTablesync(Oid suboid, Oid relid,\n> > > > char *syncslotname, Size szslot)\n> > > > {\n> > > > snprintf(syncslotname, szslot, \"pg_%u_sync_%u_\" UINT64_FORMAT,\n> > suboid,\n> > > > relid, GetSystemIdentifier());\n> > > > }\n> > > >\n> > > > If we use both schema name and table name, it's possible that slot\n> > > > names are duplicated if schema and/or table names are long. Another\n> > > > idea is to use the hash value of schema+table names, but it cannot\n> > > > completely eliminate that possibility, and probably would make\n> > > > investigation and debugging hard in case of any failure. Probably we\n> > > > can use the OID of each entry in pg_subscription_remote_rel instead,\n> > > > but I'm not sure it's a good idea, mainly because we will end up using\n> > > > twice as many OIDs as before.\n> > > >\n> > >\n> > > The other possibility is to use worker_pid. To make debugging easier,\n> > > we may want to LOG schema_name+rel_name vs slot_name mapping at\n> > DEBUG1\n> > > log level.\n> >\n> > Since worker_pid changes after the worker restarts, a new worker\n> > cannot find the slot that had been used, no?\n> >\n> > After thinking it over, a better solution would be that we add an oid\n> > column, nspname column, and relname column to pg_subscription_rel and\n> > the primary key on the oid. If the table is not present on the\n> > subscriber we store the schema name and table name to the catalog, and\n> > otherwise we store the local table oid same as today. The local table\n> > oid will be filled after the schema sync. 
The names of origin and\n> > replication slot the tablesync worker uses use the oid instead of the\n> > table oid.\n> >\n> > I've attached a PoC patch of this idea (very rough patch and has many\n> > TODO comments). It mixes the following changes:\n> >\n> > 1. Add oid column to the pg_subscription_rel. The oid is used as the\n> > primary key and in the names of origin and slot the tablesync workers\n> > use.\n> >\n> > 2. Add copy_schema = on/off option to CREATE SUBSCRIPTION (not yet\n> > support for ALTER SUBSCRIPTION).\n> >\n> > 3. Add CRS_EXPORT_USE_SNAPSHOT new action in order to use the same\n> > snapshot by both walsender and other processes (e.g. pg_dump). In this\n> > patch, the snapshot is exported for pg_dump and is used by the\n> > walsender for COPY.\n> >\n> > It seems to work well but there might be a pitfall as I've not fully\n> > implemented it.\n>\n> Thanks for your POC patch.\n> After reviewing this patch, I have a question below that want to confirm:\n>\n> 1. In the function synchronize_table_schema.\n> I think some changes to GUC and table-related object SQLs are included in the\n> pg_dump result. And in this POC, these SQLs will be executed. Do we need to\n> alter the pg_dump results to only execute the table schema related SQLs?\n\nYes, in this approach, we need to dump/restore objects while\nspecifying with fine granularity. Ideally, the table sync worker dumps\nand restores the table schema, does copy the initial data, and then\ncreates indexes, and triggers and table-related objects are created\nafter that. So if we go with the pg_dump approach to copy the schema\nof individual tables, we need to change pg_dump (or libpgdump needs to\nbe able to do) to support it.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 28 Apr 2023 16:16:14 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
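To make the above concrete, a very rough sketch of how a tablesync worker could shell out to pg_dump for a single table, in the spirit of the PoC's synchronize_table_schema(); nspname, relname and conninfo are assumed variables, shell quoting is glossed over, and the usual backend headers (lib/stringinfo.h, port.h, utils/ruleutils.h) are assumed:

StringInfoData cmd;
char        pg_dump_path[MAXPGPATH];

if (find_my_exec("pg_dump", pg_dump_path) < 0)
    elog(ERROR, "\"%s\" was not found", "pg_dump");

initStringInfo(&cmd);
appendStringInfo(&cmd,
                 "\"%s\" --schema-only --table=%s.%s \"%s\"",
                 pg_dump_path,
                 quote_identifier(nspname),
                 quote_identifier(relname),
                 conninfo);

/*
 * The command output would then be replayed on the subscriber; indexes,
 * triggers and other table-dependent objects would have to be filtered
 * out or created in a later step, as discussed above.
 */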
{
"msg_contents": "On Fri, Apr 28, 2023 at 4:16 PM Masahiko Sawada <[email protected]> wrote:\n> Yes, in this approach, we need to dump/restore objects while\n> specifying with fine granularity. Ideally, the table sync worker dumps\n> and restores the table schema, does copy the initial data, and then\n> creates indexes, and triggers and table-related objects are created\n> after that. So if we go with the pg_dump approach to copy the schema\n> of individual tables, we need to change pg_dump (or libpgdump needs to\n> be able to do) to support it.\n\nWe have been discussing how to sync schema but I'd like to step back a\nbit and discuss use cases and requirements of this feature.\n\nSuppose that a table belongs to a publication, what objects related to\nthe table we want to sync by the initial schema sync features? IOW, do\nwe want to sync table's ACLs, tablespace settings, triggers, and\nsecurity labels too?\n\nIf we want to replicate the whole database, e.g. when using logical\nreplication for major version upgrade, it would be convenient if it\nsynchronizes all table-related objects. However, if we have only this\noption, it could be useless in some cases. For example, in a case\nwhere users have different database users on the subscriber than the\npublisher, they might want to sync only CREATE TABLE, and set ACL etc\nby themselves. In this case, it would not be necessary to sync ACL and\nsecurity labels.\n\nWhat use case do we want to support by this feature? I think the\nimplementation could be varied depending on how to select what objects\nto sync.\n\nOne possible idea is to select objects to sync depending on how DDL\nreplication is set in the publisher. It's straightforward but I'm not\nsure the design of DDL replication syntax has been decided. Also, even\nif we create a publication with ddl = 'table' option, it's not clear\nto me that we want to sync table-dependent triggers, indexes, and\nrules too by the initial sync feature.\n\nSecond idea is to make it configurable by users so that they can\nspecify what objects to sync. But it would make the feature complex\nand I'm not sure users can use it properly.\n\nThird idea is that since the use case of synchronizing the whole\ndatabase can be achievable even by pg_dump(all), we support\nsynchronizing only tables (+ indexes) in the initial sync feature,\nwhich can not be achievable by pg_dump.\n\nFeedback is very welcome.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 22 May 2023 10:06:44 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Mon, May 22, 2023 at 6:37 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Apr 28, 2023 at 4:16 PM Masahiko Sawada <[email protected]> wrote:\n> > Yes, in this approach, we need to dump/restore objects while\n> > specifying with fine granularity. Ideally, the table sync worker dumps\n> > and restores the table schema, does copy the initial data, and then\n> > creates indexes, and triggers and table-related objects are created\n> > after that. So if we go with the pg_dump approach to copy the schema\n> > of individual tables, we need to change pg_dump (or libpgdump needs to\n> > be able to do) to support it.\n>\n> We have been discussing how to sync schema but I'd like to step back a\n> bit and discuss use cases and requirements of this feature.\n>\n> Suppose that a table belongs to a publication, what objects related to\n> the table we want to sync by the initial schema sync features? IOW, do\n> we want to sync table's ACLs, tablespace settings, triggers, and\n> security labels too?\n>\n> If we want to replicate the whole database, e.g. when using logical\n> replication for major version upgrade, it would be convenient if it\n> synchronizes all table-related objects. However, if we have only this\n> option, it could be useless in some cases. For example, in a case\n> where users have different database users on the subscriber than the\n> publisher, they might want to sync only CREATE TABLE, and set ACL etc\n> by themselves. In this case, it would not be necessary to sync ACL and\n> security labels.\n>\n> What use case do we want to support by this feature? I think the\n> implementation could be varied depending on how to select what objects\n> to sync.\n>\n> One possible idea is to select objects to sync depending on how DDL\n> replication is set in the publisher. It's straightforward but I'm not\n> sure the design of DDL replication syntax has been decided. Also, even\n> if we create a publication with ddl = 'table' option, it's not clear\n> to me that we want to sync table-dependent triggers, indexes, and\n> rules too by the initial sync feature.\n>\n\nI think it is better to keep the initial sync the same as the\nreplication. So, if the publication specifies 'table' then we should\njust synchronize tables. Otherwise, it will look odd that the initial\nsync has synchronized say index-related DDLs but then later\nreplication didn't replicate it. OTOH, if we want to do initial sync\nof table-dependent objects like triggers, indexes, rules, etc. when\nthe user has specified ddl = 'table' then the replication should also\nfollow the same. The main reason to exclude the other objects during\nreplication is to reduce the scope of deparsing patch but if we have a\nfinite set of objects (say all dependent on the table) then we can\nprobably try to address those.\n\n> Second idea is to make it configurable by users so that they can\n> specify what objects to sync. But it would make the feature complex\n> and I'm not sure users can use it properly.\n>\n> Third idea is that since the use case of synchronizing the whole\n> database can be achievable even by pg_dump(all), we support\n> synchronizing only tables (+ indexes) in the initial sync feature,\n> which can not be achievable by pg_dump.\n>\n\nCan't we add some switch to dump only the table and not its dependents\nif we want to go with that approach?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 23 May 2023 11:01:42 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Tue, May 23, 2023 at 2:31 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, May 22, 2023 at 6:37 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Fri, Apr 28, 2023 at 4:16 PM Masahiko Sawada <[email protected]> wrote:\n> > > Yes, in this approach, we need to dump/restore objects while\n> > > specifying with fine granularity. Ideally, the table sync worker dumps\n> > > and restores the table schema, does copy the initial data, and then\n> > > creates indexes, and triggers and table-related objects are created\n> > > after that. So if we go with the pg_dump approach to copy the schema\n> > > of individual tables, we need to change pg_dump (or libpgdump needs to\n> > > be able to do) to support it.\n> >\n> > We have been discussing how to sync schema but I'd like to step back a\n> > bit and discuss use cases and requirements of this feature.\n> >\n> > Suppose that a table belongs to a publication, what objects related to\n> > the table we want to sync by the initial schema sync features? IOW, do\n> > we want to sync table's ACLs, tablespace settings, triggers, and\n> > security labels too?\n> >\n> > If we want to replicate the whole database, e.g. when using logical\n> > replication for major version upgrade, it would be convenient if it\n> > synchronizes all table-related objects. However, if we have only this\n> > option, it could be useless in some cases. For example, in a case\n> > where users have different database users on the subscriber than the\n> > publisher, they might want to sync only CREATE TABLE, and set ACL etc\n> > by themselves. In this case, it would not be necessary to sync ACL and\n> > security labels.\n> >\n> > What use case do we want to support by this feature? I think the\n> > implementation could be varied depending on how to select what objects\n> > to sync.\n> >\n> > One possible idea is to select objects to sync depending on how DDL\n> > replication is set in the publisher. It's straightforward but I'm not\n> > sure the design of DDL replication syntax has been decided. Also, even\n> > if we create a publication with ddl = 'table' option, it's not clear\n> > to me that we want to sync table-dependent triggers, indexes, and\n> > rules too by the initial sync feature.\n> >\n>\n> I think it is better to keep the initial sync the same as the\n> replication. So, if the publication specifies 'table' then we should\n> just synchronize tables. Otherwise, it will look odd that the initial\n> sync has synchronized say index-related DDLs but then later\n> replication didn't replicate it. OTOH, if we want to do initial sync\n> of table-dependent objects like triggers, indexes, rules, etc. when\n> the user has specified ddl = 'table' then the replication should also\n> follow the same. The main reason to exclude the other objects during\n> replication is to reduce the scope of deparsing patch but if we have a\n> finite set of objects (say all dependent on the table) then we can\n> probably try to address those.\n>\n\nWe have discussed several ideas of how to synchronize schemas between\npublisher and subscribers, and the points are summarized in Wiki\npage[1]. As for the idea of using pg_dump, we were concerned that\npg_dump needs to be present along with the server binary if the user\nneeds to use the initial schema synchronization feature. Since these\nbinaries are typically included in different packages, they need to\ninstall both. During PGCon we've discussed with some senior hackers\nthat it would be an acceptable limitation for users. 
When executing\nCREATE/ALTER SUBSCRIPTION, we check if pg_dump is available and raise\nan error if not. We've also discussed the idea of using\npg_dump_library but no one preferred this idea because of its\nimplementation costs. Therefore, I'm going to do further evaluation\nfor the pg_dump idea.\n\nI agree with Amit that the initial schema synchronization should\nprocess the same as the DDL replication. We can support only table\nschemas as the first step. To do that, we need a new switch, say\n--exclude-table-dependents, in pg_dump to dump only table schemas\nexcluding table-related objects such as triggers and indexes. Then, we\ncan support synchronizing tables and table-related objects such as\ntriggers, indexes, and rules, as the second step, which can be done\nwith the --schema and --table option. Finally, we can synchronize the\nwhole database by using the --schema option.\n\nWe also need to research how to integrate the initial schema\nsynchronization with tablesync workers. We have a PoC patch[2].\n\nRegards,\n\n[1] https://wiki.postgresql.org/wiki/Logical_replication_of_DDLs#Initial_Schema_Sync\n[2] https://www.postgresql.org/message-id/CAD21AoCdfg506__qKz%2BHX8vqfdyKgQ5qeehgqq9bi1L-6p5Pwg%40mail.gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 12:23:13 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
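A minimal sketch of the pg_dump availability check mentioned above, following the PoC's use of find_my_exec(); the function name and error wording are assumptions:

static void
check_pg_dump_available(void)
{
    char        path[MAXPGPATH];

    /* Raise an error at CREATE/ALTER SUBSCRIPTION time if pg_dump is missing. */
    if (find_my_exec("pg_dump", path) < 0)
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("could not find \"pg_dump\", which is required for initial schema synchronization")));
}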
{
"msg_contents": "On Thu, Jun 8, 2023 at 1:24 PM Masahiko Sawada <[email protected]> wrote:\n>\n...\n\n> We also need to research how to integrate the initial schema\n> synchronization with tablesync workers. We have a PoC patch[2].\n>\n> Regards,\n>\n> [1] https://wiki.postgresql.org/wiki/Logical_replication_of_DDLs#Initial_Schema_Sync\n> [2] https://www.postgresql.org/message-id/CAD21AoCdfg506__qKz%2BHX8vqfdyKgQ5qeehgqq9bi1L-6p5Pwg%40mail.gmail.com\n>\n\nFYI -- the PoC patch fails to apply using HEAD fetched today.\n\ngit apply ../patches_misc/0001-Poc-initial-table-structure-synchronization-in-logic.patch\nerror: patch failed: src/backend/replication/logical/tablesync.c:1245\nerror: src/backend/replication/logical/tablesync.c: patch does not apply\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 15 Jun 2023 16:14:05 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 4:14 PM Peter Smith <[email protected]> wrote:\n>\n> On Thu, Jun 8, 2023 at 1:24 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> ...\n>\n> > We also need to research how to integrate the initial schema\n> > synchronization with tablesync workers. We have a PoC patch[2].\n> >\n> > Regards,\n> >\n> > [1] https://wiki.postgresql.org/wiki/Logical_replication_of_DDLs#Initial_Schema_Sync\n> > [2] https://www.postgresql.org/message-id/CAD21AoCdfg506__qKz%2BHX8vqfdyKgQ5qeehgqq9bi1L-6p5Pwg%40mail.gmail.com\n> >\n>\n> FYI -- the PoC patch fails to apply using HEAD fetched today.\n>\n> git apply ../patches_misc/0001-Poc-initial-table-structure-synchronization-in-logic.patch\n> error: patch failed: src/backend/replication/logical/tablesync.c:1245\n> error: src/backend/replication/logical/tablesync.c: patch does not apply\n>\n\nAfter rebasing the PoC patch locally, I found the 'make check' still\ndid not pass 100%.\n\n# 2 of 215 tests failed.\n\nHere are the differences:\n\ndiff -U3 /home/postgres/oss_postgres_misc/src/test/regress/expected/rules.out\n/home/postgres/oss_postgres_misc/src/test/regress/results/rules.out\n--- /home/postgres/oss_postgres_misc/src/test/regress/expected/rules.out\n 2023-06-02 23:12:32.073864475 +1000\n+++ /home/postgres/oss_postgres_misc/src/test/regress/results/rules.out\n2023-06-15 16:53:29.352622676 +1000\n@@ -2118,14 +2118,14 @@\n su.subname,\n st.pid,\n st.leader_pid,\n- st.relid,\n+ st.subrelid,\n st.received_lsn,\n st.last_msg_send_time,\n st.last_msg_receipt_time,\n st.latest_end_lsn,\n st.latest_end_time\n FROM (pg_subscription su\n- LEFT JOIN pg_stat_get_subscription(NULL::oid) st(subid, relid,\npid, leader_pid, received_lsn, last_msg_send_time,\nlast_msg_receipt_time, latest_end_lsn, latest_end_time) ON ((st.subid\n= su.oid)));\n+ LEFT JOIN pg_stat_get_subscription(NULL::oid) st(subid,\nsubrelid, pid, leader_pid, received_lsn, last_msg_send_time,\nlast_msg_receipt_time, latest_end_lsn, latest_end_time) ON ((st.subid\n= su.oid)));\n pg_stat_subscription_stats| SELECT ss.subid,\n s.subname,\n ss.apply_error_count,\ndiff -U3 /home/postgres/oss_postgres_misc/src/test/regress/expected/oidjoins.out\n/home/postgres/oss_postgres_misc/src/test/regress/results/oidjoins.out\n--- /home/postgres/oss_postgres_misc/src/test/regress/expected/oidjoins.out\n2022-10-04 15:11:32.457834981 +1100\n+++ /home/postgres/oss_postgres_misc/src/test/regress/results/oidjoins.out\n 2023-06-15 16:54:07.159839010 +1000\n@@ -265,4 +265,3 @@\n NOTICE: checking pg_subscription {subdbid} => pg_database {oid}\n NOTICE: checking pg_subscription {subowner} => pg_authid {oid}\n NOTICE: checking pg_subscription_rel {srsubid} => pg_subscription {oid}\n-NOTICE: checking pg_subscription_rel {srrelid} => pg_class {oid}\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 15 Jun 2023 17:29:16 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "Hi,\n\nBelow are my review comments for the PoC patch 0001.\n\nIn addition, the patch needed rebasing, and, after I rebased it\nlocally in my private environment there were still test failures:\na) The 'make check' tests fail but only in a minor way due to changes colname\nb) the subscription TAP test did not work at all for me -- many errors.\n\n======\nCommit message.\n\n1.\n- Add oid column to the pg_subscription_rel.\n - use it as the primary key.\n - use it in the names of origin and slot the tablesync workers use.\n\n~\n\nIIUC, I think there were lots of variables called 'subrelid' referring\nto this new 'oid' field. But, somehow I found that very confusing with\nthe other similarly named 'relid'. I wonder if all those can be named\nlike 'sroid' or 'srid' to reduce the confusion of such similar names?\n\n\n======\nsrc/backend/catalog/pg_subscription.c\n\n2. AddSubscriptionRelState\n\nI felt should be some sanity check Asserts for the args here. E.g.\nCannot have valid relid when copy_schema == true, etc.\n\n~~~\n\n3.\n+ if (nspname)\n+ values[Anum_pg_subscription_rel_srnspname - 1] = CStringGetDatum(nspname);\n+ else\n+ nulls[Anum_pg_subscription_rel_srnspname - 1] = true;\n+\n+ if (relname)\n+ values[Anum_pg_subscription_rel_srrelname - 1] = CStringGetDatum(relname);\n+ else\n+ nulls[Anum_pg_subscription_rel_srrelname - 1] = true;\n\nHere is where I was wondering why not pass the nspname and relname all\nthe time, even for valid 'relid' (when copy_schema is false). It\nshould simplify some code, as well as putting more useful/readable\ninformation into the catalog.\n\n~~~\n\n4. UpdateSubscriptionRelRelid\n\n+ /* XXX: need to distinguish from message in UpdateSubscriptionRelState() */\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"subscription table %u in subscription %u does not exist\",\n+ subrelid, subid);\n\nIs that ERROR msg correct? IIUC the 'subrelid' is the Oid of the row\nin the catalog -- it is not the \"subscription table\" Oid.\n\n~~~\n\n5. UpdateSubscriptionRelState\n\n if (!HeapTupleIsValid(tup))\n elog(ERROR, \"subscription table %u in subscription %u does not exist\",\n- relid, subid);\n+ subrelid, subid);\n\n\n(ditto previous review comment)\n\nIs that ERROR msg correct? IIUC the subrelid is the Oid of the row in\nthe catalog -- it is not the \"subscription table\" Oid.\n\n~~~\n\n6. GetSubscriptoinRelStateByRelid\n\nThere is a spelling mistake in this function name\n\n/Subscriptoin/Subscription/\n\n~~~\n\n7.\n+ ScanKeyInit(&skey[0],\n+ Anum_pg_subscription_rel_srrelid,\n+ BTEqualStrategyNumber, F_OIDEQ,\n+ ObjectIdGetDatum(relid));\n+ ScanKeyInit(&skey[1],\n+ Anum_pg_subscription_rel_srsubid,\n+ BTEqualStrategyNumber, F_OIDEQ,\n+ ObjectIdGetDatum(subid));\n\nWon't it be better to swap the order of these so it matches the\nfunction comment \"(srsubid, srrelid)\".\n\n~~~\n\n8.\n+ tup = systable_getnext(scan);\n+\n+\n+ if (!HeapTupleIsValid(tup))\n\nDouble blank lines\n\n~~~\n\n9.\n/* Get palloc'ed SubscriptionRelState of the given subrelid */\nSubscriptionRelState *\nGetSubscriptionRelByOid(Oid subrelid)\n\n~\n\nThere seems some function name confusion because the struct is called\nSubscriptionRelState and it also has a 'state' field.\n\ne.g. The functions named GetSubscriptionRelStateXXX return only the\nstate field of the struct. OTOH, this function returns the\nSubscriptionRelState* but it is NOT called\nGetSubscriptionRelStateByOid (??).\n\n~~~\n\n10. 
deconstruct_subrelstate\n\n+ /* syncflags */\n+ relstate->syncflags =\n+ (((subrel_form->srsyncschema) ? SUBREL_SYNC_KIND_SCHEMA : 0) |\n+ ((subrel_form->srsyncdata) ? SUBREL_SYNC_KIND_DATA : 0));\n\nSeems excessive parens.\n\n~~~\n\n11.\n+ return relstate;\n+}\n /*\n * Drop subscription relation mapping. These can be for a particular\n * subscription, or for a particular relation, or both.\n */\n void\n-RemoveSubscriptionRel(Oid subid, Oid relid)\n+RemoveSubscriptionRel(Oid subid, Oid relid, Oid subrelid)\n\n~\n\nThere is no blank line before this function\n\n~~~\n\n12. RemoveSubscriptionRel\n\n-RemoveSubscriptionRel(Oid subid, Oid relid)\n+RemoveSubscriptionRel(Oid subid, Oid relid, Oid subrelid)\n {\n\n~\n\nIIUC what you called 'subrelid' is the PK, so would it make more sense\nfor that to be the 1st parameter for this function?\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n13. struct SubOpts\n\n bool copy_data;\n+ /* XXX: want to choose synchronizing only tables or all objects? */\n+ bool copy_schema;\n\nI wonder if it would be more natural to put the 'copy_schema' field\nbefore the 'copy_data' field?\n\n~~~\n\n14. parse_subscription_options\n\n if (IsSet(supported_opts, SUBOPT_COPY_DATA))\n opts->copy_data = true;\n+ if (IsSet(supported_opts, SUBOPT_COPY_SCHEMA))\n+ opts->copy_data = true;\n\n14a.\nI wonder if it would be more natural to put the COPY_SCHEMA logic\nbefore the COPY_DATA logic?\n\n~\n\n14b.\nIs this a bug? Why is this assigning copy_data = true, instead of\ncopy_schema = true?\n\n~~~\n\n15.\n opts->specified_opts |= SUBOPT_COPY_DATA;\n opts->copy_data = defGetBoolean(defel);\n }\n+ else if (IsSet(supported_opts, SUBOPT_COPY_SCHEMA) &&\n+ strcmp(defel->defname, \"copy_schema\") == 0)\n+ {\n+ if (IsSet(opts->specified_opts, SUBOPT_COPY_SCHEMA))\n+ errorConflictingDefElem(defel, pstate);\n+\n+ opts->specified_opts |= SUBOPT_COPY_SCHEMA;\n+ opts->copy_schema = defGetBoolean(defel);\n+ }\n\nI wonder if it would be more natural to put the COPY_SCHEMA logic\nbefore the COPY_DATA logic?\n\n~~~\n\n16.\n+ if (opts->copy_schema &&\n+ IsSet(opts->specified_opts, SUBOPT_COPY_SCHEMA))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"%s and %s are mutually exclusive options\",\n+ \"connect = false\", \"copy_schema = true\")));\n+\n\nI wonder if it would be more natural to put the COPY_SCHEMA logic\nbefore the COPY_DATA logic?\n\n~~~\n\n17. CreateSubscription\n\n * Set sync state based on if we were asked to do data copy or\n * not.\n */\n- table_state = opts.copy_data ? SUBREL_STATE_INIT : SUBREL_STATE_READY;\n+ if (opts.copy_data || opts.copy_schema)\n+ table_state = SUBREL_STATE_INIT;\n+ else\n+ table_state = SUBREL_STATE_READY;\n\nThe comment prior to this code needs updating, it still only mentions\n\"data copy\".\n\n~~~\n\n18. 
AlterSubscription_refresh\n\n+ sub_remove_rels[remove_rel_len].relid = subrelid;\n sub_remove_rels[remove_rel_len++].state = state;\n~\n\nIs that right?\n\nIIUC that 'subrelid' is the OID PK of the row in pg_subscription_rel,\nwhich is not the same as the 'relid'.\n\nShouldn't this be sub_remove_rels[remove_rel_len].relid = relstate->relid;\n\n~~~\n\n19.\n+ if (OidIsValid(relstate->relid))\n+ ereport(DEBUG1,\n+ (errmsg_internal(\"table \\\"%s.%s\\\" removed from subscription \\\"%s\\\"\",\n+ get_namespace_name(get_rel_namespace(relstate->relid)),\n+ get_rel_name(relstate->relid),\n+ sub->name)));\n+ else\n+ ereport(DEBUG1,\n+ (errmsg_internal(\"table \\\"%s.%s\\\" removed from subscription \\\"%s\\\"\",\n+ relstate->nspname, relstate->relname,\n+ sub->name)));\n\nI wondered why can't we just always store nspname and relname even for\nthe valid 'relid' when there is no copy_schema? Won't that simplify\ncode such as this?\n\n======\nsrc/backend/replication/logical/launcher.c\n\n20. logicalrep_worker_find\n\n- if (w->in_use && w->subid == subid && w->relid == relid &&\n+ if (w->in_use && w->subid == subid && w->subrelid == subrelid &&\n (!only_running || w->proc))\n {\n\n~\n\nMaybe I misunderstand something, but somehow it seems strange to be\nchecking both the 'subid' and the the Oid PK ('subrelid') here. Isn't\nit that when subrelid is valid you need to test only 'subrelid' (aka\ntablesync) for equality? But when subrelid is InvalidOid (aka not a\ntablesync worker) you only need to test subid for equality?\n\n~~~\n\n21. logicalrep_worker_launch\n\n bool is_parallel_apply_worker = (subworker_dsm != DSM_HANDLE_INVALID);\n\n /* Sanity check - tablesync worker cannot be a subworker */\n- Assert(!(is_parallel_apply_worker && OidIsValid(relid)));\n+ Assert(!(is_parallel_apply_worker && OidIsValid(subrelid)));\n\nIIUC I thought this code might be easier to understand if you\nintroduced another variable\n\nbool is_tabslync_worker = OidIsValid(subrelid);\n\n~~~\n\n22.\n+ if (OidIsValid(subrelid) && nsyncworkers >= max_sync_workers_per_subscription)\n\n(ditto previous comment)\n\n~~~\n\n23.\n- if (OidIsValid(relid))\n+ if (OidIsValid(subrelid))\n snprintf(bgw.bgw_name, BGW_MAXLEN,\n- \"logical replication worker for subscription %u sync %u\", subid, relid);\n+ \"logical replication worker for subscription %u sync %u\", subid, subrelid);\n\nThis name seems somehow less useful to the user now. IIUC 'subrelid'\nis just the PK of the pg_subscription_rel_catalog instead of the\nrelid. Does this require changes to the documentation that might have\nbeen saying this is the relid?\n\n~~~\n\n24. logicalrep_worker_stop\n\n * Stop the logical replication worker for subid/relid, if any.\n */\n void\n-logicalrep_worker_stop(Oid subid, Oid relid)\n+logicalrep_worker_stop(Oid subid, Oid subrelid)\n\nThe function comment still is talking about relid.\n\n======\nsrc/backend/replication/logical/snapbuild.c\n\n25. SnapBuildExportSnapshot\n\n-SnapBuildExportSnapshot(SnapBuild *builder)\n+SnapBuildExportSnapshot(SnapBuild *builder, bool use_it)\n\n'use_it' does not see a good parameter name. 
At least, maybe the\nfunction comment can describe the meaning of use_it.\n\n~~~\n\n26.\n- /* There doesn't seem to a nice API to set these */\n- XactIsoLevel = XACT_REPEATABLE_READ;\n- XactReadOnly = true;\n+ /* There doesn't seem to a nice API to set these */\n+ XactIsoLevel = XACT_REPEATABLE_READ;\n+ XactReadOnly = true;\n+ }\n+ else\n+ Assert(IsTransactionBlock());\n\nAlthough it is not introduced by this patch, since you change the\nindent on this line you might as well at the same time fix the typo on\nthis line.\n\n/seem to be nice/seem to be a nice/\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n27. process_syncing_tables_for_sync\n\n UpdateSubscriptionRelState(MyLogicalRepWorker->subid,\n- MyLogicalRepWorker->relid,\n+ MyLogicalRepWorker->subrelid,\n MyLogicalRepWorker->relstate,\n MyLogicalRepWorker->relstate_lsn);\n\nIIUC the 'subrelid' is now the PK. Isn't it better for that to be the 1st param?\n\n~~~\n\n28.\n\n+ if ((syncflags & SUBREL_SYNC_KIND_SCHEMA) != 0)\n\nThere are several checks like the code shown above. Would it be better\nto have some macro for that expression? Or maybe simply assign this\nresult to a local variable instead of testing the same thing multiple\ntimes.\n\n~~~\n\n29. synchronize_table_schema\n\nFILE *handle;\nOid relid;\nOid nspoid;\nStringInfoData command;\nStringInfoData querybuf;\nchar full_path[MAXPGPATH];\nchar buf[1024];\nint ret;\n\n if (find_my_exec(\"pg_dump\", full_path) < 0)\n elog(ERROR, \"\\\"%s\\\" was not found\", \"pg_dump\")\n\n~\n\nSomething is not quite right with the indentation in this new function.\n\n~~~\n\n30.\n+ * XXX what if the table already doesn't exist?\n\nI didn't understand the meaning of the comment. Is it supposed to say\n\"What if the table already exists?\" (??)\n\n======\nsrc/backend/replication/logical/worker.c\n\n31. InitializeApplyWorker\n\n+ {\n+ if (OidIsValid(MyLogicalRepWorker->relid))\n+ ereport(LOG,\n+ (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\", table \\\"%s\\\" has started\",\n+ MySubscription->name,\n+ get_rel_name(MyLogicalRepWorker->relid))));\n+ else\n+ ereport(LOG,\n+ (errmsg(\"logical replication table synchronization worker for\nsubscription \\\"%s\\\", relid %u has started\",\n+ MySubscription->name,\n+ MyLogicalRepWorker->subrelid)));\n+ }\n\n~\n\nIIUC it doesn't seem right to say \"relid %u has started\". Because\nthat's not really a relid is it? I thought it is just a PK Oid of the\nrow in the catalog.\n\n======\nsrc/include/catalog/pg_subscription_rel.h\n\n32. pg_subscription_rel\n\n+ /* What part do we need to synchronize? */\n+ bool srsyncschema;\n+ bool srsyncdata;\n\nThese aren't really \"parts\".\n\nSUGGESTION\n/* What to synchronize? */\n\n~~~\n\n33.\n typedef struct SubscriptionRelState\n {\n+ Oid oid;\n\nIs that the pg_subscription_rel's oid? Maybe it would be better to\ncall this field 'sroid'? (see the general comment in the commit\nmessage)\n\n======\nsrc/include/replication/walsender.h\n\n34. CRSSnapshotAction\n\n CRS_EXPORT_SNAPSHOT,\n CRS_NOEXPORT_SNAPSHOT,\n- CRS_USE_SNAPSHOT\n+ CRS_USE_SNAPSHOT,\n+ CRS_EXPORT_USE_SNAPSHOT\n } CRSSnapshotAction;\n\n~\n\nShould the CRS_USE_SNAPSHOT be renamed to CRS_NOEXOPRT_USE_SNAPSHOT to\nhave a more consistent naming pattern?\n\n======\nsrc/include/replication/worker_internal.h\n\n35.\n- /* Used for initial table synchronization. 
*/\n+ /*\n+ * Used for initial table synchronization.\n+ *\n+ * relid is an invalid oid if the table is not created on the subscriber\n+ * yet.\n+ */\n+ Oid subrelid;\n Oid relid;\nIt would be good to have more explanation what is the different\nmeaning of 'subrelid' versus 'relid' (see also the general comment\nsuggesting to rename this)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 19 Jun 2023 18:29:17 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Mon, Jun 19, 2023 at 5:29 PM Peter Smith <[email protected]> wrote:\n>\n> Hi,\n>\n> Below are my review comments for the PoC patch 0001.\n>\n> In addition, the patch needed rebasing, and, after I rebased it\n> locally in my private environment there were still test failures:\n> a) The 'make check' tests fail but only in a minor way due to changes colname\n> b) the subscription TAP test did not work at all for me -- many errors.\n\nThank you for reviewing the patch.\n\nWhile updating the patch, I realized that the current approach won't\nwork well or at least has the problem with partition tables. If a\npublication has a partitioned table with publish_via_root = false, the\nsubscriber launches tablesync workers for its partitions so that each\ntablesync worker copies data of each partition. Similarly, if it has a\npartition table with publish_via_root = true, the subscriber launches\na tablesync worker for the parent table. With the current design,\nsince the tablesync worker is responsible for both schema and data\nsynchronization for the target table, it won't be possible to\nsynchronize both the parent table's schema and partitions' schema. For\nexample, there is no pg_subscription_rel entry for the parent table if\nthe publication has publish_via_root = false. In addition to that, we\nneed to be careful about the order of synchronization of the parent\ntable and its partitions. We cannot start schema synchronization for\npartitions before its parent table. So it seems to me that we need to\nconsider another approach.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 5 Jul 2023 11:14:52 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 11:14 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, Jun 19, 2023 at 5:29 PM Peter Smith <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Below are my review comments for the PoC patch 0001.\n> >\n> > In addition, the patch needed rebasing, and, after I rebased it\n> > locally in my private environment there were still test failures:\n> > a) The 'make check' tests fail but only in a minor way due to changes colname\n> > b) the subscription TAP test did not work at all for me -- many errors.\n>\n> Thank you for reviewing the patch.\n>\n> While updating the patch, I realized that the current approach won't\n> work well or at least has the problem with partition tables. If a\n> publication has a partitioned table with publish_via_root = false, the\n> subscriber launches tablesync workers for its partitions so that each\n> tablesync worker copies data of each partition. Similarly, if it has a\n> partition table with publish_via_root = true, the subscriber launches\n> a tablesync worker for the parent table. With the current design,\n> since the tablesync worker is responsible for both schema and data\n> synchronization for the target table, it won't be possible to\n> synchronize both the parent table's schema and partitions' schema. For\n> example, there is no pg_subscription_rel entry for the parent table if\n> the publication has publish_via_root = false. In addition to that, we\n> need to be careful about the order of synchronization of the parent\n> table and its partitions. We cannot start schema synchronization for\n> partitions before its parent table. So it seems to me that we need to\n> consider another approach.\n\nSo I've implemented a different approach; doing schema synchronization\nat a CREATE SUBSCRIPTION time. The backend executing CREATE\nSUBSCRIPTION uses pg_dump and restores the table schemas including\nboth partitioned tables and their partitions regardless of\npublish_via_partition_root option, and then creates\npg_subscription_rel entries for tables while respecting\npublish_via_partition_root option.\n\nThere is a window between table creations and the tablesync workers\nstarting to process the tables. If DDLs are executed in this window,\nthe tablesync worker might fail because the table schema might have\nalready been changed. We need to mention this note in the\ndocumentation. BTW, I think we will be able to get rid of this\ndownside if we support DDL replication. DDLs executed in the window\nare applied by the apply worker and it takes over the data copy to the\ntablesync worker at a certain LSN.\n\nI've attached PoC patches. It has regression tests but doesn't have\nthe documentation yet.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 7 Jul 2023 16:11:13 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
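An outline, in rough C-level pseudocode, of the CREATE SUBSCRIPTION flow described above; synchronize_table_schemas() is an assumed helper name and the AddSubscriptionRelState() call is simplified:

if (opts.copy_schema)
{
    /*
     * Dump table definitions from the publisher and restore them locally,
     * including partitioned tables and all of their partitions.
     */
    synchronize_table_schemas(sub->conninfo);
}

/*
 * The tables now exist locally, so pg_subscription_rel entries can be
 * created with real local OIDs while respecting publish_via_partition_root.
 */
foreach(lc, tables)
{
    RangeVar   *rv = (RangeVar *) lfirst(lc);
    Oid         relid = RangeVarGetRelid(rv, AccessShareLock, false);

    AddSubscriptionRelState(subid, relid, table_state, InvalidXLogRecPtr);
}

The remaining race, DDL executed between this step and the tablesync workers starting, is the window mentioned above.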
{
"msg_contents": "> From: Masahiko Sawada <[email protected]>\r\n> So I've implemented a different approach; doing schema synchronization at a\r\n> CREATE SUBSCRIPTION time. The backend executing CREATE SUBSCRIPTION\r\n> uses pg_dump and restores the table schemas including both partitioned tables\r\n> and their partitions regardless of publish_via_partition_root option, and then\r\n> creates pg_subscription_rel entries for tables while respecting\r\n> publish_via_partition_root option.\r\n> \r\n> There is a window between table creations and the tablesync workers starting to\r\n> process the tables. If DDLs are executed in this window, the tablesync worker\r\n> might fail because the table schema might have already been changed. We need\r\n> to mention this note in the documentation. BTW, I think we will be able to get\r\n> rid of this downside if we support DDL replication. DDLs executed in the window\r\n> are applied by the apply worker and it takes over the data copy to the tablesync\r\n> worker at a certain LSN.\r\n\r\nI don’t think even with DDL replication we will be able to get rid of this window. \r\nThere are some issues\r\n1. Even with tablesync worker taking over at certain LSN, publisher can make more changes till\r\nTable sync acquires lock on publisher table via copy table.\r\n2. how we will make sure that applier worker has caught up will all the changes from publisher\r\nBefore it starts tableSync worker. It can be lag behind publisher.\r\n\r\nI think the easiest option would be to just recreate the table , this way we don’t have to worry about \r\ncomplex race conditions, tablesync already makes a slot for copy data we can use same slot for \r\ngetting upto date table definition, dropping the table won't be much expensive since there won't be any data\r\nin it.Apply worker will skip all the DDLs/DMLs till table is synced.\r\n\r\nAlthough for partitioned tables we will be able to keep with published table schema changes only when \r\npublish_by_partition_root is true.\r\n\r\nRegards\r\nSachin\r\nAmazon Web Services: https://aws.amazon.com\r\n",
"msg_date": "Fri, 7 Jul 2023 09:16:01 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 7:45 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, Jun 19, 2023 at 5:29 PM Peter Smith <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Below are my review comments for the PoC patch 0001.\n> >\n> > In addition, the patch needed rebasing, and, after I rebased it\n> > locally in my private environment there were still test failures:\n> > a) The 'make check' tests fail but only in a minor way due to changes colname\n> > b) the subscription TAP test did not work at all for me -- many errors.\n>\n> Thank you for reviewing the patch.\n>\n> While updating the patch, I realized that the current approach won't\n> work well or at least has the problem with partition tables. If a\n> publication has a partitioned table with publish_via_root = false, the\n> subscriber launches tablesync workers for its partitions so that each\n> tablesync worker copies data of each partition. Similarly, if it has a\n> partition table with publish_via_root = true, the subscriber launches\n> a tablesync worker for the parent table. With the current design,\n> since the tablesync worker is responsible for both schema and data\n> synchronization for the target table, it won't be possible to\n> synchronize both the parent table's schema and partitions' schema.\n>\n\nI think one possibility to make this design work is that when\npublish_via_root is false, then we assume that subscriber already has\nparent table and then the individual tablesync workers can sync the\nschema of partitions and their data. And when publish_via_root is\ntrue, then the table sync worker is responsible to sync parent and\nchild tables along with data. Do you think such a mechanism can\naddress the partition table related cases?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 9 Jul 2023 09:29:04 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "\r\n\r\n> From: Amit Kapila <[email protected]>\r\n> On Wed, Jul 5, 2023 at 7:45 AM Masahiko Sawada\r\n> <[email protected]> wrote:\r\n> >\r\n> > On Mon, Jun 19, 2023 at 5:29 PM Peter Smith <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > Hi,\r\n> > >\r\n> > > Below are my review comments for the PoC patch 0001.\r\n> > >\r\n> > > In addition, the patch needed rebasing, and, after I rebased it\r\n> > > locally in my private environment there were still test failures:\r\n> > > a) The 'make check' tests fail but only in a minor way due to\r\n> > > changes colname\r\n> > > b) the subscription TAP test did not work at all for me -- many errors.\r\n> >\r\n> > Thank you for reviewing the patch.\r\n> >\r\n> > While updating the patch, I realized that the current approach won't\r\n> > work well or at least has the problem with partition tables. If a\r\n> > publication has a partitioned table with publish_via_root = false, the\r\n> > subscriber launches tablesync workers for its partitions so that each\r\n> > tablesync worker copies data of each partition. Similarly, if it has a\r\n> > partition table with publish_via_root = true, the subscriber launches\r\n> > a tablesync worker for the parent table. With the current design,\r\n> > since the tablesync worker is responsible for both schema and data\r\n> > synchronization for the target table, it won't be possible to\r\n> > synchronize both the parent table's schema and partitions' schema.\r\n> >\r\n> \r\n> I think one possibility to make this design work is that when publish_via_root\r\n> is false, then we assume that subscriber already has parent table and then\r\n> the individual tablesync workers can sync the schema of partitions and their\r\n> data.\r\n\r\nSince publish_via_partition_root is false by default users have to create parent table by themselves\r\nwhich I think is not a good user experience.\r\n\r\n> And when publish_via_root is true, then the table sync worker is\r\n> responsible to sync parent and child tables along with data. Do you think\r\n> such a mechanism can address the partition table related cases?\r\n> \r\n",
"msg_date": "Mon, 10 Jul 2023 11:06:04 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 8:06 PM Kumar, Sachin <[email protected]> wrote:\n>\n>\n>\n> > From: Amit Kapila <[email protected]>\n> > On Wed, Jul 5, 2023 at 7:45 AM Masahiko Sawada\n> > <[email protected]> wrote:\n> > >\n> > > On Mon, Jun 19, 2023 at 5:29 PM Peter Smith <[email protected]>\n> > wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > Below are my review comments for the PoC patch 0001.\n> > > >\n> > > > In addition, the patch needed rebasing, and, after I rebased it\n> > > > locally in my private environment there were still test failures:\n> > > > a) The 'make check' tests fail but only in a minor way due to\n> > > > changes colname\n> > > > b) the subscription TAP test did not work at all for me -- many errors.\n> > >\n> > > Thank you for reviewing the patch.\n> > >\n> > > While updating the patch, I realized that the current approach won't\n> > > work well or at least has the problem with partition tables. If a\n> > > publication has a partitioned table with publish_via_root = false, the\n> > > subscriber launches tablesync workers for its partitions so that each\n> > > tablesync worker copies data of each partition. Similarly, if it has a\n> > > partition table with publish_via_root = true, the subscriber launches\n> > > a tablesync worker for the parent table. With the current design,\n> > > since the tablesync worker is responsible for both schema and data\n> > > synchronization for the target table, it won't be possible to\n> > > synchronize both the parent table's schema and partitions' schema.\n> > >\n> >\n> > I think one possibility to make this design work is that when publish_via_root\n> > is false, then we assume that subscriber already has parent table and then\n> > the individual tablesync workers can sync the schema of partitions and their\n> > data.\n>\n> Since publish_via_partition_root is false by default users have to create parent table by themselves\n> which I think is not a good user experience.\n\nI have the same concern. I think that users normally use\npublish_via_partiiton_root = false if the partitioned table on the\nsubscriber consists of the same set of partitions as the publisher's\nones. And such users would expect the both partitioned table and its\npartitions to be synchronized.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 11 Jul 2023 15:21:21 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
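To make the two modes being discussed concrete, here is a minimal SQL sketch; the table, publication, subscription, and connection names are hypothetical, and copy_schema is only the option proposed in this thread, not an existing CREATE SUBSCRIPTION parameter:

-- Publisher: a partitioned table "parted" published either as its leaf partitions
-- or via the partition root.
CREATE PUBLICATION pub_parts FOR TABLE parted WITH (publish_via_partition_root = false);
CREATE PUBLICATION pub_root  FOR TABLE parted WITH (publish_via_partition_root = true);

-- Subscriber: with pub_parts the subscriber launches a tablesync worker per partition;
-- with pub_root a single tablesync worker handles the parent table.
CREATE SUBSCRIPTION sub CONNECTION 'host=pub dbname=src' PUBLICATION pub_root;
-- The thread proposes something like WITH (copy_schema = true) on top of this.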
{
"msg_contents": "Hi Everyone, based on internal discussion with Masahiko\r\nI have implemented concurrent DDL support for initial schema sync.\r\n\r\nConcurrent Patch workflow\r\n\r\n1. When TableSync worker creates a replicaton slot, It will\r\nsave the slot lsn into pg_subscription_rel with\r\nSUBREL_SYNC_SCHEMA_DATA_SYNC state, and it will wait for\r\nits state to be SUBREL_STATE_DATASYNC.\r\n\r\n2. Applier process will apply DDLs till tablesync lsn, and then\r\nit will change pg_subscription_rel state to SUBREL_STATE_DATASYNC.\r\n\r\n3. TableSync will continue applying pending DML/DDls till it catch up.\r\n\r\nThis patch needs DDL replication to apply concurrent DDLs, I have cherry-\r\npicked this DDL patch [0]\r\n\r\nIssues\r\n1) needs testing for concurrent DDLs, Not sure how to make tablesync process wait so that\r\nconcurrent DDLs can be issued on publisher.\r\n2) In my testing created table does not appear on the same conenction on subscriber,\r\nI have to reconnect to see table.\r\n3) maybe different chars for SUBREL_SYNC_SCHEMA_DATA_INIT and SUBREL_SYNC_SCHEMA_DATA_SYNC,\r\ncurrently they are 'x' and 'y'.\r\n4) I need to add SUBREL_SYNC_SCHEMA_DATA_INIT and SUBREL_SYNC_SCHEMA_DATA_SYNC to\r\npg_subscription_rel_d.h to make it compile succesfully.\r\n5) It only implement concurrent alter as of now\r\n\r\n[0] = https://www.postgresql.org/message-id/OS0PR01MB57163E6487EFF7378CB8E17C9438A%40OS0PR01MB5716.jpnprd01.prod.outlook.com",
"msg_date": "Thu, 31 Aug 2023 10:48:06 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
},
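For reference, the per-table state machine described above can be watched from the subscriber with a query along these lines; this is only a sketch, 'mysub' is a hypothetical subscription name, and the schema-sync states ('x' and 'y') exist only with the proposed patch applied:

-- Subscriber side: sync progress of each table in a subscription.
-- Stock srsubstate values are i/d/f/s/r; 'x'/'y' are the proposed schema-sync states.
SELECT sr.srrelid::regclass AS tab,
       sr.srsubstate        AS state,
       sr.srsublsn          AS lsn
FROM pg_subscription_rel sr
JOIN pg_subscription s ON s.oid = sr.srsubid
WHERE s.subname = 'mysub'
ORDER BY 1;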
{
"msg_contents": "On Thu, 31 Aug 2023 at 17:18, Kumar, Sachin <[email protected]> wrote:\n>\n> Hi Everyone, based on internal discussion with Masahiko\n> I have implemented concurrent DDL support for initial schema sync.\n>\n> Concurrent Patch workflow\n>\n> 1. When TableSync worker creates a replicaton slot, It will\n> save the slot lsn into pg_subscription_rel with\n> SUBREL_SYNC_SCHEMA_DATA_SYNC state, and it will wait for\n> its state to be SUBREL_STATE_DATASYNC.\n>\n> 2. Applier process will apply DDLs till tablesync lsn, and then\n> it will change pg_subscription_rel state to SUBREL_STATE_DATASYNC.\n>\n> 3. TableSync will continue applying pending DML/DDls till it catch up.\n>\n> This patch needs DDL replication to apply concurrent DDLs, I have cherry-\n> picked this DDL patch [0]\n\nCan you rebase the patch and post the complete set of required changes\nfor the concurrent DDL, I will have a look at them.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 Oct 2023 15:10:33 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "On Fri, 7 Jul 2023 at 12:41, Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jul 5, 2023 at 11:14 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Mon, Jun 19, 2023 at 5:29 PM Peter Smith <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > Below are my review comments for the PoC patch 0001.\n> > >\n> > > In addition, the patch needed rebasing, and, after I rebased it\n> > > locally in my private environment there were still test failures:\n> > > a) The 'make check' tests fail but only in a minor way due to changes colname\n> > > b) the subscription TAP test did not work at all for me -- many errors.\n> >\n> > Thank you for reviewing the patch.\n> >\n> > While updating the patch, I realized that the current approach won't\n> > work well or at least has the problem with partition tables. If a\n> > publication has a partitioned table with publish_via_root = false, the\n> > subscriber launches tablesync workers for its partitions so that each\n> > tablesync worker copies data of each partition. Similarly, if it has a\n> > partition table with publish_via_root = true, the subscriber launches\n> > a tablesync worker for the parent table. With the current design,\n> > since the tablesync worker is responsible for both schema and data\n> > synchronization for the target table, it won't be possible to\n> > synchronize both the parent table's schema and partitions' schema. For\n> > example, there is no pg_subscription_rel entry for the parent table if\n> > the publication has publish_via_root = false. In addition to that, we\n> > need to be careful about the order of synchronization of the parent\n> > table and its partitions. We cannot start schema synchronization for\n> > partitions before its parent table. So it seems to me that we need to\n> > consider another approach.\n>\n> So I've implemented a different approach; doing schema synchronization\n> at a CREATE SUBSCRIPTION time. The backend executing CREATE\n> SUBSCRIPTION uses pg_dump and restores the table schemas including\n> both partitioned tables and their partitions regardless of\n> publish_via_partition_root option, and then creates\n> pg_subscription_rel entries for tables while respecting\n> publish_via_partition_root option.\n>\n> There is a window between table creations and the tablesync workers\n> starting to process the tables. If DDLs are executed in this window,\n> the tablesync worker might fail because the table schema might have\n> already been changed. We need to mention this note in the\n> documentation. BTW, I think we will be able to get rid of this\n> downside if we support DDL replication. DDLs executed in the window\n> are applied by the apply worker and it takes over the data copy to the\n> tablesync worker at a certain LSN.\n>\n> I've attached PoC patches. It has regression tests but doesn't have\n> the documentation yet.\n\nFew thoughts:\n1) There might be a scenario where we will create multiple\nsubscriptions with the tables overlapping across the subscription, in\nthat case, the table will be present when the 2nd subscription is\nbeing created, can we do something in this case:\n+ /*\n+ * Error if the table is already present on the\nsubscriber. 
Please note\n+ * that concurrent DDLs can create the table as we\ndon't acquire any lock\n+ * on the table.\n+ *\n+ * XXX: do we want to overwrite it (or optionally)?\n+ */\n+ if (OidIsValid(RangeVarGetRelid(rv, AccessShareLock, true)))\n+ ereport(ERROR,\n+ (errmsg(\"existing table %s\ncannot synchronize table schema\",\n+ rv->relname)));\n\n2) Should we clean the replication slot in case of failures, currently\nthe replication slot is left over.\n\n3) Is it expected that all of the dependencies like type/domain etc\nshould be created by the user before creating a subscription with\ncopy_schema, currently we are taking care of creating the sequences\nfor tables, is this an exception?\n\n4) If a column list publication is created, currently we are getting\nall of the columns, should we get only the specified columns in this\ncase?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 Oct 2023 15:11:59 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial Schema Sync for Logical Replication"
},
{
"msg_contents": "\r\n> From: vignesh C <[email protected]>\r\n> Sent: Thursday, October 19, 2023 10:41 AM\r\n> Can you rebase the patch and post the complete set of required changes for\r\n> the concurrent DDL, I will have a look at them.\r\n\r\nSure , I will try to send the complete rebased patch within a week.\r\n\r\nRegards\r\nSachin\r\n\r\n",
"msg_date": "Fri, 20 Oct 2023 11:14:32 +0000",
"msg_from": "\"Kumar, Sachin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Initial Schema Sync for Logical Replication"
}
] |
[
{
"msg_contents": "I've downloaded the PostgreSQL 14.7 source and building it on Windows 64bit and 32bit.\n\nI'm using the Visual Studio tools in the src/tools/msvc folder.\n\nI'm trying to build with the uuid extension but it looks like I need uuid-ossp installed in order\nto get it to work.\n\nThe source download referenced in the Postgresql doc here, https://www.postgresql.org/docs/current/uuid-ossp.html#id-1.11.7.58.6\nthis source download, ftp://ftp.ossp.org/pkg/lib/uuid/uuid-1.6.2.tar.gz, is Unix-specific as far as I can tell.\n\nWhere can I find uuid-ossp for Windows, 32 and 64 bit, either the source so I can build it or\nprebuilt libraries?\n\nThanks, Mark\n\n\n\n\n\n\n\n\n\n\n\n\nI’ve downloaded the PostgreSQL 14.7 source and building it on Windows 64bit and 32bit.\n\nI’m using the Visual Studio tools in the src/tools/msvc folder.\n\nI’m trying to build with the uuid extension but it looks like I need uuid-ossp installed in order\nto get it to work.\n\nThe source download referenced in the Postgresql doc here, \nhttps://www.postgresql.org/docs/current/uuid-ossp.html#id-1.11.7.58.6\nthis source download, ftp://ftp.ossp.org/pkg/lib/uuid/uuid-1.6.2.tar.gz, is Unix-specific as far as I can tell.\n\n\nWhere can I find uuid-ossp for Windows, 32 and 64 bit, either the source so I can build it or\nprebuilt libraries?\n\nThanks, Mark",
"msg_date": "Wed, 15 Mar 2023 18:31:27 +0000",
"msg_from": "Mark Hill <[email protected]>",
"msg_from_op": true,
"msg_subject": "uuid-ossp source or binaries for Windows"
},
{
"msg_contents": "> On 15 Mar 2023, at 19:31, Mark Hill <[email protected]> wrote:\n> \n> I’ve downloaded the PostgreSQL 14.7 source and building it on Windows 64bit and 32bit.\n> \n> I’m using the Visual Studio tools in the src/tools/msvc folder.\n> \n> I’m trying to build with the uuid extension but it looks like I need uuid-ossp installed in order\n> to get it to work.\n\nDo you need the extension specifically or does the built-in generator function\ndo what you need?\n\n> The source download referenced in the Postgresql doc here, https://www.postgresql.org/docs/current/uuid-ossp.html#id-1.11.7.58.6\n> this source download, ftp://ftp.ossp.org/pkg/lib/uuid/uuid-1.6.2.tar.gz, is Unix-specific as far as I can tell.\n> \n> Where can I find uuid-ossp for Windows, 32 and 64 bit, either the source so I can build it or\n> prebuilt libraries?\n\nI don't know windows at all, but uuid-ossp.dll is provided in the EDB packages\n(looking at the binary zip bundle) so it's clearly available to be built.\nMaybe someone from EDB can chime in with pointers for building on Windows so we\ncan update the docs accordingly?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 15 Mar 2023 20:15:36 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: uuid-ossp source or binaries for Windows"
},
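For what it's worth, a quick way to check whether the built-in generator Daniel refers to is sufficient before going through a Windows build of the extension; gen_random_uuid() has been in core since PostgreSQL 13, so the extension only matters if the uuid-ossp functions themselves are needed:

-- Built in, no extension or external uuid library required:
SELECT gen_random_uuid();

-- Only needed for the uuid-ossp functions (uuid_generate_v1/v3/v4/v5, ...):
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
SELECT uuid_generate_v4();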
{
"msg_contents": "Hey Daniel,\n\nThanks for getting back to me.\n\nI think the issue I'm having is that my build of Postgres is missing uuid pieces needed by our users.\n\nThey're executing the command: CREATE EXTENSION \"uuid-ossp\"\n\nand getting the error\n\nERROR: could not open extension control file \"<Postgres-Install-Home>/share/extension/uuid-ossp.control\"\n\nThe only file matching \"*uuid*\" in my build of Postgres is: <Postgres-Install-Home>/include/server/utils/uuid.h\n\nI should have in addition: \n<Postgres-Install-Home>/include/uuid.h\n<Postgres-Install-Home>/lib/uuid-ossp.dll\n<Postgres-Install-Home>/share/extension/uuid-ossp--1.1.sql\n<Postgres-Install-Home>/share/extension/uuid-ossp.control\n<Postgres-Install-Home>/share/extension/uuid-ossp--unpackaged--1.0.sql\n<Postgres-Install-Home>/share/extension/uuid-ossp--1.0--1.1.sql\n\nI need a Windows-specific install of uuid-ossp for the Postgres build to use, for both 32bit and 64bit Windows.\n\nThanks, Mark\n\n-----Original Message-----\nFrom: Daniel Gustafsson <[email protected]> \nSent: Wednesday, March 15, 2023 3:16 PM\nTo: Mark Hill <[email protected]>\nCc: [email protected]; Ken Peressini <[email protected]>; Michael King <[email protected]>\nSubject: Re: uuid-ossp source or binaries for Windows\n\nEXTERNAL\n\n> On 15 Mar 2023, at 19:31, Mark Hill <[email protected]> wrote:\n>\n> I've downloaded the PostgreSQL 14.7 source and building it on Windows 64bit and 32bit.\n>\n> I'm using the Visual Studio tools in the src/tools/msvc folder.\n>\n> I'm trying to build with the uuid extension but it looks like I need \n> uuid-ossp installed in order to get it to work.\n\nDo you need the extension specifically or does the built-in generator function do what you need?\n\n> The source download referenced in the Postgresql doc here, \n> https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.\n> postgresql.org%2Fdocs%2Fcurrent%2Fuuid-ossp.html%23id-1.11.7.58.6&data\n> =05%7C01%7CMark.Hill%40sas.com%7C5acf51786dd5440ea0ed08db2589a9fd%7Cb1\n> c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638145045990073139%7CUnknown%\n> 7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJX\n> VCI6Mn0%3D%7C3000%7C%7C%7C&sdata=TSRqdrvImMLf6Pr8XWqRSUkCWUDaAjFtziykz\n> Czt5Sc%3D&reserved=0 this source download, \n> https://nam02.safelinks.protection.outlook.com/?url=ftp%3A%2F%2Fftp.ossp.org%2Fpkg%2Flib%2Fuuid%2Fuuid-1.6.2.tar.gz&data=05%7C01%7CMark.Hill%40sas.com%7C5acf51786dd5440ea0ed08db2589a9fd%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638145045990073139%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=ry3iJshaFPSegaIrmaJzA0%2BIKgEfXbJwmasBA8ZdWQ8%3D&reserved=0, is Unix-specific as far as I can tell.\n>\n> Where can I find uuid-ossp for Windows, 32 and 64 bit, either the \n> source so I can build it or prebuilt libraries?\n\nI don't know windows at all, but uuid-ossp.dll is provided in the EDB packages (looking at the binary zip bundle) so it's clearly available to be built.\nMaybe someone from EDB can chime in with pointers for building on Windows so we can update the docs accordingly?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 16 Mar 2023 03:14:46 +0000",
"msg_from": "Mark Hill <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: uuid-ossp source or binaries for Windows"
},
{
"msg_contents": "I posted this to pgsql-general but I think that's more for Postgres users. I'm trying to build Postgres \nwith the uuid-ossp extension on Windows using the msvc toolset provided with the Postgres source\nin <postgresSourceHome>/src/tools/msvc, e.g. postgresql-14.7/src/tools/msvc.\n\nI think I need uuid-ossp installed. The uuid-ossp source located here: ftp://ftp.ossp.org/pkg/lib/uuid/uuid-1.6.2.tar.gz\nis Unix-specific.\n\nIs there uuid-ossp source download for Windows or are uuid-ossp prebuilt binaries for Windows available?\n\nThanks, Mark\n\n-----Original Message-----\nFrom: Mark Hill <[email protected]> \nSent: Wednesday, March 15, 2023 11:15 PM\nTo: 'Daniel Gustafsson' <[email protected]>\nCc: [email protected]; Ken Peressini <[email protected]>; Michael King <[email protected]>\nSubject: RE: uuid-ossp source or binaries for Windows\n\nEXTERNAL\n\nHey Daniel,\n\nThanks for getting back to me.\n\nI think the issue I'm having is that my build of Postgres is missing uuid pieces needed by our users.\n\nThey're executing the command: CREATE EXTENSION \"uuid-ossp\"\n\nand getting the error\n\nERROR: could not open extension control file \"<Postgres-Install-Home>/share/extension/uuid-ossp.control\"\n\nThe only file matching \"*uuid*\" in my build of Postgres is: <Postgres-Install-Home>/include/server/utils/uuid.h\n\nI should have in addition:\n<Postgres-Install-Home>/include/uuid.h\n<Postgres-Install-Home>/lib/uuid-ossp.dll\n<Postgres-Install-Home>/share/extension/uuid-ossp--1.1.sql\n<Postgres-Install-Home>/share/extension/uuid-ossp.control\n<Postgres-Install-Home>/share/extension/uuid-ossp--unpackaged--1.0.sql\n<Postgres-Install-Home>/share/extension/uuid-ossp--1.0--1.1.sql\n\nI need a Windows-specific install of uuid-ossp for the Postgres build to use, for both 32bit and 64bit Windows.\n\nThanks, Mark\n\n-----Original Message-----\nFrom: Daniel Gustafsson <[email protected]>\nSent: Wednesday, March 15, 2023 3:16 PM\nTo: Mark Hill <[email protected]>\nCc: [email protected]; Ken Peressini <[email protected]>; Michael King <[email protected]>\nSubject: Re: uuid-ossp source or binaries for Windows\n\nEXTERNAL\n\n> On 15 Mar 2023, at 19:31, Mark Hill <[email protected]> wrote:\n>\n> I've downloaded the PostgreSQL 14.7 source and building it on Windows 64bit and 32bit.\n>\n> I'm using the Visual Studio tools in the src/tools/msvc folder.\n>\n> I'm trying to build with the uuid extension but it looks like I need \n> uuid-ossp installed in order to get it to work.\n\nDo you need the extension specifically or does the built-in generator function do what you need?\n\n> The source download referenced in the Postgresql doc here, \n> https://nam02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww%2F&data=05%7C01%7Cmark.hill%40sas.com%7C2fe3e6f033eb4de4506708db25cca633%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638145333114215621%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=4dfMYruL3rZjCY8ScPUM70xOk%2FM2WJIs8FPw4xXrUI0%3D&reserved=0.\n> postgresql.org%2Fdocs%2Fcurrent%2Fuuid-ossp.html%23id-1.11.7.58.6&data\n> =05%7C01%7CMark.Hill%40sas.com%7C5acf51786dd5440ea0ed08db2589a9fd%7Cb1\n> c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638145045990073139%7CUnknown%\n> 7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJX\n> VCI6Mn0%3D%7C3000%7C%7C%7C&sdata=TSRqdrvImMLf6Pr8XWqRSUkCWUDaAjFtziykz\n> Czt5Sc%3D&reserved=0 this source download, \n> 
https://nam02.safelinks.protection.outlook.com/?url=ftp%3A%2F%2Fftp.ossp.org%2Fpkg%2Flib%2Fuuid%2Fuuid-1.6.2.tar.gz&data=05%7C01%7Cmark.hill%40sas.com%7C2fe3e6f033eb4de4506708db25cca633%7Cb1c14d5c362545b3a4309552373a0c2f%7C0%7C0%7C638145333114215621%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=t9AVwe32CRgcW9oH%2Fmjj0lC8SSIAw0cBrQXmH1GKcNc%3D&reserved=0, is Unix-specific as far as I can tell.\n>\n> Where can I find uuid-ossp for Windows, 32 and 64 bit, either the \n> source so I can build it or prebuilt libraries?\n\nI don't know windows at all, but uuid-ossp.dll is provided in the EDB packages (looking at the binary zip bundle) so it's clearly available to be built.\nMaybe someone from EDB can chime in with pointers for building on Windows so we can update the docs accordingly?\n\n--\nDaniel Gustafsson\n\n\n\n\n\n",
"msg_date": "Thu, 16 Mar 2023 13:31:03 +0000",
"msg_from": "Mark Hill <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: uuid-ossp source or binaries for Windows"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI'm Tej, a grad student poking around postgres for a project.\n\nFor my use case, I'm trying to ascertain if there are any in-flight\ntransactions that are yet to be replicated to synchronous standbys (in a\nsynchronous streaming replication setting)\n\nThe first way to do this would be to check the WalSndCtl->lsn[] array to\nsee if the current max lsn of the system has replicated or not. This works\nwell when postgres is running and being actively used. However, when\npostgres has just started up, WalSndCtl->lsn[] values could be 0, but there\ncould still be transactions waiting to replicate.\n\nThe second way to do it would be to scan ProcGlobal to check for active\nxids. However, the issue is that I'm calling ProcArrayEndTransaction()\nbefore calling SyncRepWaitForLSN() to ensure that the transaction becomes\nvisible to other transactions before it begins to wait in the SyncRep\nqueue.\n\nSo, with this change, if I scan ProcGlobal, I would not see transactions\nthat have been committed locally but are yet to be replicated to\nsynchronous standbys because ProcArrayEndTransaction() would have marked\nthe transaction as completed.\n\nI've been looking at sent_lsn, write_lsn, flush_lsn etc., of the\nwalsender, but with no success. Considering the visibility change added\nabove, is there a way for me to check for transactions that have been\ncommitted locally but are waiting for replication?\n\nI would appreciate it if someone could point me in the right direction!\n\nSincerely,\n\nTej Kashi\nMMath CS, University of Waterloo\nWaterloo, ON, CA\n\nHi everyone,I'm Tej, a grad student poking around postgres for a project.For my use case, I'm trying to ascertain if there are any in-flight transactions that are yet to be replicated to synchronous standbys (in a synchronous streaming replication setting)The first way to do this would be to check the WalSndCtl->lsn[] array to see if the current max lsn of the system has replicated or not. This works well when postgres is running and being actively used. However, when postgres has just started up, WalSndCtl->lsn[] values could be 0, but there could still be transactions waiting to replicate.The second way to do it would be to scan ProcGlobal to check for active xids. However, the issue is that I'm calling ProcArrayEndTransaction() before calling SyncRepWaitForLSN() to ensure that the transaction becomes visible to other transactions before it begins to wait in the SyncRep queue. So, with this change, if I scan ProcGlobal, I would not see transactions that have been committed locally but are yet to be replicated to synchronous standbys because ProcArrayEndTransaction() would have marked the transaction as completed.I've been looking at sent_lsn, write_lsn, flush_lsn etc., of the walsender, but with no success. Considering the visibility change added above, is there a way for me to check for transactions that have been committed locally but are waiting for replication?I would appreciate it if someone could point me in the right direction!Sincerely,Tej KashiMMath CS, University of WaterlooWaterloo, ON, CA",
"msg_date": "Wed, 15 Mar 2023 15:48:26 -0400",
"msg_from": "Tejasvi Kashi <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to check for in-progress transactions"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 1:18 AM Tejasvi Kashi <[email protected]> wrote:\n>\n> For my use case, I'm trying to ascertain if there are any in-flight transactions that are yet to be replicated to synchronous standbys (in a synchronous streaming replication setting)\n>\n> I've been looking at sent_lsn, write_lsn, flush_lsn etc., of the walsender, but with no success. Considering the visibility change added above, is there a way for me to check for transactions that have been committed locally but are waiting for replication?\n\nI think you can look for SyncRep wait_event from pg_stat_activity,\nsomething like [1]. The backends will wait indefinitely until latch is\nset (postmaster death or an ack is received from sync standbys) in\nSyncRepWaitForLSN(). backend_xid is your\nlocally-committed-but-not-yet-replicated txn id. Will this help?\n\nWell, if you're planning to know all\nlocally-committed-but-not-yet-replicated txns from an extension or any\nother source code, you may run the full query [1] or if running a\nquery seems costly, you can look at what pg_stat_get_activity() does\nto get each backend's wait_event_info and have your code do that.\n\nBTW, what exactly is the use-case that'd want\nlocally-committed-but-not-yet-replicated txns info?\n\n[1]\npostgres=# select * from pg_stat_activity where backend_type = 'client\nbackend' and wait_event = 'SyncRep';\n-[ RECORD 1 ]----+------------------------------\ndatid | 5\ndatname | postgres\npid | 4187907\nleader_pid |\nusesysid | 10\nusename | ubuntu\napplication_name | psql\nclient_addr |\nclient_hostname |\nclient_port | -1\nbackend_start | 2023-03-16 05:16:56.917124+00\nxact_start | 2023-03-16 05:17:09.472092+00\nquery_start | 2023-03-16 05:17:09.472092+00\nstate_change | 2023-03-16 05:17:09.472095+00\nwait_event_type | IPC\nwait_event | SyncRep\nstate | active\nbackend_xid | 731\nbackend_xmin | 731\nquery_id |\nquery | create table foo(col1 int);\nbackend_type | client backend\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Mar 2023 11:06:37 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to check for in-progress transactions"
},
{
"msg_contents": "Hi Bharath,\n\nThanks a lot for your reply. It looks like this is exactly what I need. For\nmy use case, I'm trying to get read-only transactions to wait for the\nreplication of prior writes.\n\nSincerely,\n\nTej Kashi\nMMath CS, University of Waterloo\nWaterloo, ON, CA\n\nOn Thu, 16 Mar 2023 at 01:36, Bharath Rupireddy <\[email protected]> wrote:\n\n> On Thu, Mar 16, 2023 at 1:18 AM Tejasvi Kashi <[email protected]> wrote:\n> >\n> > For my use case, I'm trying to ascertain if there are any in-flight\n> transactions that are yet to be replicated to synchronous standbys (in a\n> synchronous streaming replication setting)\n> >\n> > I've been looking at sent_lsn, write_lsn, flush_lsn etc., of the\n> walsender, but with no success. Considering the visibility change added\n> above, is there a way for me to check for transactions that have been\n> committed locally but are waiting for replication?\n>\n> I think you can look for SyncRep wait_event from pg_stat_activity,\n> something like [1]. The backends will wait indefinitely until latch is\n> set (postmaster death or an ack is received from sync standbys) in\n> SyncRepWaitForLSN(). backend_xid is your\n> locally-committed-but-not-yet-replicated txn id. Will this help?\n>\n> Well, if you're planning to know all\n> locally-committed-but-not-yet-replicated txns from an extension or any\n> other source code, you may run the full query [1] or if running a\n> query seems costly, you can look at what pg_stat_get_activity() does\n> to get each backend's wait_event_info and have your code do that.\n>\n> BTW, what exactly is the use-case that'd want\n> locally-committed-but-not-yet-replicated txns info?\n>\n> [1]\n> postgres=# select * from pg_stat_activity where backend_type = 'client\n> backend' and wait_event = 'SyncRep';\n> -[ RECORD 1 ]----+------------------------------\n> datid | 5\n> datname | postgres\n> pid | 4187907\n> leader_pid |\n> usesysid | 10\n> usename | ubuntu\n> application_name | psql\n> client_addr |\n> client_hostname |\n> client_port | -1\n> backend_start | 2023-03-16 05:16:56.917124+00\n> xact_start | 2023-03-16 05:17:09.472092+00\n> query_start | 2023-03-16 05:17:09.472092+00\n> state_change | 2023-03-16 05:17:09.472095+00\n> wait_event_type | IPC\n> wait_event | SyncRep\n> state | active\n> backend_xid | 731\n> backend_xmin | 731\n> query_id |\n> query | create table foo(col1 int);\n> backend_type | client backend\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n\nHi Bharath,Thanks a lot for your reply. It looks like this is exactly what I need. For my use case, I'm trying to get read-only transactions to wait for the replication of prior writes.Sincerely,Tej KashiMMath CS, University of WaterlooWaterloo, ON, CAOn Thu, 16 Mar 2023 at 01:36, Bharath Rupireddy <[email protected]> wrote:On Thu, Mar 16, 2023 at 1:18 AM Tejasvi Kashi <[email protected]> wrote:\n>\n> For my use case, I'm trying to ascertain if there are any in-flight transactions that are yet to be replicated to synchronous standbys (in a synchronous streaming replication setting)\n>\n> I've been looking at sent_lsn, write_lsn, flush_lsn etc., of the walsender, but with no success. Considering the visibility change added above, is there a way for me to check for transactions that have been committed locally but are waiting for replication?\n\nI think you can look for SyncRep wait_event from pg_stat_activity,\nsomething like [1]. 
The backends will wait indefinitely until latch is\nset (postmaster death or an ack is received from sync standbys) in\nSyncRepWaitForLSN(). backend_xid is your\nlocally-committed-but-not-yet-replicated txn id. Will this help?\n\nWell, if you're planning to know all\nlocally-committed-but-not-yet-replicated txns from an extension or any\nother source code, you may run the full query [1] or if running a\nquery seems costly, you can look at what pg_stat_get_activity() does\nto get each backend's wait_event_info and have your code do that.\n\nBTW, what exactly is the use-case that'd want\nlocally-committed-but-not-yet-replicated txns info?\n\n[1]\npostgres=# select * from pg_stat_activity where backend_type = 'client\nbackend' and wait_event = 'SyncRep';\n-[ RECORD 1 ]----+------------------------------\ndatid | 5\ndatname | postgres\npid | 4187907\nleader_pid |\nusesysid | 10\nusename | ubuntu\napplication_name | psql\nclient_addr |\nclient_hostname |\nclient_port | -1\nbackend_start | 2023-03-16 05:16:56.917124+00\nxact_start | 2023-03-16 05:17:09.472092+00\nquery_start | 2023-03-16 05:17:09.472092+00\nstate_change | 2023-03-16 05:17:09.472095+00\nwait_event_type | IPC\nwait_event | SyncRep\nstate | active\nbackend_xid | 731\nbackend_xmin | 731\nquery_id |\nquery | create table foo(col1 int);\nbackend_type | client backend\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 16 Mar 2023 16:43:31 -0400",
"msg_from": "Tejasvi Kashi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to check for in-progress transactions"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 4:43 PM Tejasvi Kashi <[email protected]> wrote:\n> Thanks a lot for your reply. It looks like this is exactly what I need. For my use case, I'm trying to get read-only transactions to wait for the replication of prior writes.\n\ncan't you use remote_apply?\n\nhttps://www.postgresql.org/docs/15/runtime-config-wal.html\n\n\n",
"msg_date": "Thu, 16 Mar 2023 17:01:23 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to check for in-progress transactions"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 17:01, Melanie Plageman <[email protected]>\nwrote:\n\n> On Thu, Mar 16, 2023 at 4:43 PM Tejasvi Kashi <[email protected]> wrote:\n> > Thanks a lot for your reply. It looks like this is exactly what I need.\n> For my use case, I'm trying to get read-only transactions to wait for the\n> replication of prior writes.\n>\n> can't you use remote_apply?\n>\n> https://www.postgresql.org/docs/15/runtime-config-wal.html\n\n\nThat will ensure that the writes are acknowledged only after remote\napplication. But, in my case, I’m trying to get read transactions to wait\nif they have seen a write that is yet to be replicated.\n\n<https://www.postgresql.org/docs/15/runtime-config-wal.html>\n>\n\nOn Thu, Mar 16, 2023 at 17:01, Melanie Plageman <[email protected]> wrote:On Thu, Mar 16, 2023 at 4:43 PM Tejasvi Kashi <[email protected]> wrote:\n> Thanks a lot for your reply. It looks like this is exactly what I need. For my use case, I'm trying to get read-only transactions to wait for the replication of prior writes.\n\ncan't you use remote_apply?\n\nhttps://www.postgresql.org/docs/15/runtime-config-wal.htmlThat will ensure that the writes are acknowledged only after remote application. But, in my case, I’m trying to get read transactions to wait if they have seen a write that is yet to be replicated.",
"msg_date": "Thu, 16 Mar 2023 17:08:17 -0400",
"msg_from": "Tejasvi Kashi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to check for in-progress transactions"
}
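A rough sketch of one way to implement that wait on top of the walsender statistics mentioned earlier (exposed through pg_stat_replication); the LSN literal is a placeholder the client would capture right after the write it observed, and only synchronous standbys are considered:

-- 1. Right after the write of interest, remember the WAL position:
SELECT pg_current_wal_lsn() AS seen_lsn;

-- 2. Before serving a read that depends on it, check whether every sync
--    standby has replayed past that position ('0/3000060' is a placeholder):
SELECT coalesce(bool_and(replay_lsn >= '0/3000060'::pg_lsn), false) AS replicated
FROM pg_stat_replication
WHERE sync_state = 'sync';

Whether a disconnected synchronous standby should make this check fail or simply be ignored is a policy choice the sketch does not settle.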
] |
[
{
"msg_contents": "Hi hackers,\n\nIt is well known fact that queries using sequential scan can not be used \nto prewarm cache, because them are using ring buffer\neven if shared buffers are almost empty.\nI have searched hackers archive but failed to find any discussion about it.\nWhat are the drawbacks of using free buffers even with BAM_BULKREAD \nstrategy?\nI mean the following trivial patch:\n\ndiff --git a/src/backend/storage/buffer/freelist.c \nb/src/backend/storage/buffer/freelist.c\nindex 6be80476db..243335d0e4 100644\n--- a/src/backend/storage/buffer/freelist.c\n+++ b/src/backend/storage/buffer/freelist.c\n@@ -208,8 +208,15 @@ StrategyGetBuffer(BufferAccessStrategy strategy, \nuint32 *buf_state)\n /*\n * If given a strategy object, see whether it can select a \nbuffer. We\n * assume strategy objects don't need buffer_strategy_lock.\n */\n- if (strategy != NULL)\n+ if (strategy != NULL && StrategyControl->firstFreeBuffer < 0)\n {\n buf = GetBufferFromRing(strategy, buf_state);\n if (buf != NULL)\n\nSo if there are free buffers, then use normal buffer allocation instead \nof GetBufferFromRing.\n\nRight now it is necessary to use pg_prewarm extension in order to \nprewarm buffers.\nBut it is not convenient (you need to manually locate and prewarm all \nindexes and TOAST relation) and not always possible\n(client may just not notice that server is restarted).\n\nOne potential problem which I can imagine is sync scan: when several \nseqscans of the same table are using the same pages from ring buffer.\nBut synchronization of concurrent sync scans is naturally achieved: \nbacked which is moving first is moving slowly than catching up backends\nwhich do not need to read something from the disk. It seems to me that \nif we allow to use all shared buffers instead of small ring buffer,\nthen concurrent seqscans will have more chances to reuse cached pages. I \nhave performed multiple tests with spawning multiple parallel seqscans\nafter postgres restart and didn't observe any problems or degradation of \nperformance comparing with master.\n\nAlso ring buffer is used not only for seqscan. There are several places \nin Postgres core and extension (for example pgvector) where BAM_BULKREAD \nstrategy is used\nalso for index scan.\n\nCertainly OS file cache should prevent redundant disk reads.\nBut it seems to be better in any case to use free memory inside Postgres \nprocess rather than rely on OS cache and perform syscalls to copy data \nfrom this cache.\n\nDefinitely it is possible that seqscan limited by ring buffer will be \ncompleted faster than seqscan filling all shared buffers especially if\nsize of shared buffers is large enough. OS will need some extra time to \ncommit memory and may be swap out other regions to find enough physical\nmemory for shared buffers. But if data set fits in memory, then \nsubsequent queries will be much faster. 
And it is quite common for \nmodern servers\nthat size of shared buffers is comparable with database size.\n\nI will be pleased you point me at some drawbacks of such approach.\nOtherwise I can propose patch for commitfest.\n\n\n\n\n\n Hi hackers,\n\n It is well known fact that queries using sequential scan can not be\n used to prewarm cache, because them are using ring buffer \n even if shared buffers are almost empty.\n I have searched hackers archive but failed to find any discussion\n about it.\n What are the drawbacks of using free buffers even with BAM_BULKREAD\n strategy?\n I mean the following trivial patch:\n\n diff --git a/src/backend/storage/buffer/freelist.c\n b/src/backend/storage/buffer/freelist.c\n index 6be80476db..243335d0e4 100644\n --- a/src/backend/storage/buffer/freelist.c\n +++ b/src/backend/storage/buffer/freelist.c\n @@ -208,8 +208,15 @@ StrategyGetBuffer(BufferAccessStrategy\n strategy, uint32 *buf_state)\n /*\n * If given a strategy object, see whether it can select a\n buffer. We\n * assume strategy objects don't need buffer_strategy_lock.\n */\n - if (strategy != NULL)\n + if (strategy != NULL &&\n StrategyControl->firstFreeBuffer < 0)\n {\n buf = GetBufferFromRing(strategy, buf_state);\n if (buf != NULL)\n\n So if there are free buffers, then use normal buffer allocation\n instead of GetBufferFromRing.\n\n Right now it is necessary to use pg_prewarm extension in order to\n prewarm buffers.\n But it is not convenient (you need to manually locate and prewarm\n all indexes and TOAST relation) and not always possible\n (client may just not notice that server is restarted).\n\n One potential problem which I can imagine is sync scan: when several\n seqscans of the same table are using the same pages from ring\n buffer.\n But synchronization of concurrent sync scans is naturally achieved:\n backed which is moving first is moving slowly than catching up backends\n which do not need to read something from the disk. It seems to me\n that if we allow to use all shared buffers instead of small ring\n buffer, \n then concurrent seqscans will have more chances to reuse cached\n pages. I have performed multiple tests with spawning multiple\n parallel seqscans\n after postgres restart and didn't observe any problems or\n degradation of performance comparing with master. \n\n Also ring buffer is used not only for seqscan. There are several\n places in Postgres core and extension (for example pgvector) where\n BAM_BULKREAD strategy is used\n also for index scan.\n\n Certainly OS file cache should prevent redundant disk reads.\n But it seems to be better in any case to use free memory inside\n Postgres process rather than rely on OS cache and perform syscalls\n to copy data from this cache.\n\n Definitely it is possible that seqscan limited by ring buffer will\n be completed faster than seqscan filling all shared buffers\n especially if\n size of shared buffers is large enough. OS will need some extra\n time to commit memory and may be swap out other regions to find\n enough physical\n memory for shared buffers. But if data set fits in memory, then\n subsequent queries will be much faster. And it is quite common\n for modern servers\n that size of shared buffers is comparable with database size.\n\n I will be pleased you point me at some drawbacks of such approach.\n Otherwise I can propose patch for commitfest.",
"msg_date": "Wed, 15 Mar 2023 22:38:06 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speed-up shared buffers prewarming"
},
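For context, the manual chore referred to above looks roughly like this; a sketch assuming the pg_prewarm extension is installed and a hypothetical table named big_table, where the heap, every index, and the TOAST relation each have to be prewarmed separately:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;

SELECT pg_prewarm('big_table');                          -- heap
SELECT pg_prewarm(indexrelid::regclass)                  -- each index
FROM pg_index WHERE indrelid = 'big_table'::regclass;
SELECT pg_prewarm(reltoastrelid::regclass)               -- TOAST table, if any
FROM pg_class WHERE oid = 'big_table'::regclass AND reltoastrelid <> 0;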
{
"msg_contents": "On Wed, 15 Mar 2023 at 21:38, Konstantin Knizhnik <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> It is well known fact that queries using sequential scan can not be used to prewarm cache, because them are using ring buffer\n> even if shared buffers are almost empty.\n> I have searched hackers archive but failed to find any discussion about it.\n> What are the drawbacks of using free buffers even with BAM_BULKREAD strategy?\n> I mean the following trivial patch:\n>\n> diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c\n> index 6be80476db..243335d0e4 100644\n> --- a/src/backend/storage/buffer/freelist.c\n> +++ b/src/backend/storage/buffer/freelist.c\n> @@ -208,8 +208,15 @@ StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state)\n> /*\n> * If given a strategy object, see whether it can select a buffer. We\n> * assume strategy objects don't need buffer_strategy_lock.\n> */\n> - if (strategy != NULL)\n> + if (strategy != NULL && StrategyControl->firstFreeBuffer < 0)\n> {\n> buf = GetBufferFromRing(strategy, buf_state);\n> if (buf != NULL)\n>\n> So if there are free buffers, then use normal buffer allocation instead of GetBufferFromRing.\n\nYes. As seen in [1], ring buffers aren't all that great in some cases,\nand I think this is one. Buffer allocation should always make use of\nthe available resources, so that it doesn't take O(N/ring_size) scans\non a table to fill the buffers if that seqscan is the only workload of\nthe system.\n\n> Definitely it is possible that seqscan limited by ring buffer will be completed faster than seqscan filling all shared buffers especially if\n> size of shared buffers is large enough. OS will need some extra time to commit memory and may be swap out other regions to find enough physical\n> memory for shared buffers.\n\nNot just that, but it is also possible that by ignoring the ring we're\ngoing to hit pages that aren't yet in the CPU caches and we would thus\nneed to fetch the data from RAM (or from another NUMA node), which\ncould be more expensive than reading it from a local kernel's file\ncache and writing it to the local cache lines.\n\nAnyway, I'm all for this change - I don't think we need to be careful\nabout trashing other workload's buffers if the buffer is useless for\nliterally every workload.\n\n\nKind regards,\n\nMatthias van de Meent\n\n[1] https://www.postgresql.org/message-id/flat/20230111182720.ejifsclfwymw2reb%40awork3.anarazel.de\n\n\n",
"msg_date": "Wed, 15 Mar 2023 22:40:32 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed-up shared buffers prewarming"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 4:38 PM Konstantin Knizhnik <[email protected]> wrote:\n> It is well known fact that queries using sequential scan can not be used to prewarm cache, because them are using ring buffer\n> even if shared buffers are almost empty.\n> I have searched hackers archive but failed to find any discussion about it.\n> What are the drawbacks of using free buffers even with BAM_BULKREAD strategy?\n> I mean the following trivial patch:\n\nIt has been brought up at least in 2014 [1] and 2020 [2]\nThe part relevant to your patch is in the thread from 2020 here [3].\nThis quote in particular:\n\n> a) Don't evict buffers when falling off the ringbuffer as long as\n> there unused buffers on the freelist. Possibly just set their\n> usagecount to zero as long that is the case.\n\n> diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c\n> index 6be80476db..243335d0e4 100644\n> --- a/src/backend/storage/buffer/freelist.c\n> +++ b/src/backend/storage/buffer/freelist.c\n> @@ -208,8 +208,15 @@ StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state)\n> /*\n> * If given a strategy object, see whether it can select a buffer. We\n> * assume strategy objects don't need buffer_strategy_lock.\n> */\n> - if (strategy != NULL)\n> + if (strategy != NULL && StrategyControl->firstFreeBuffer < 0)\n> {\n> buf = GetBufferFromRing(strategy, buf_state);\n> if (buf != NULL)\n>\n> So if there are free buffers, then use normal buffer allocation instead of GetBufferFromRing.\n\nSimilar to what you did.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/flat/CAJRYxuL98fE_QN7McnCM5HUo8p9ceNJw%3D20GoN5NVdZdueJFqg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/20200206040026.trjzsmdsbl4gu2b6%40alap3.anarazel.de\n[5] https://www.postgresql.org/message-id/20200206040026.trjzsmdsbl4gu2b6%40alap3.anarazel.de\n\n\n",
"msg_date": "Wed, 15 Mar 2023 18:07:36 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed-up shared buffers prewarming"
},
{
"msg_contents": "Hi,\n\nOn 3/15/23 10:40 PM, Matthias van de Meent wrote:\n> On Wed, 15 Mar 2023 at 21:38, Konstantin Knizhnik <[email protected]> wrote:\n>>\n>> Hi hackers,\n>>\n>> It is well known fact that queries using sequential scan can not be used to prewarm cache, because them are using ring buffer\n>> even if shared buffers are almost empty.\n>> I have searched hackers archive but failed to find any discussion about it.\n>> What are the drawbacks of using free buffers even with BAM_BULKREAD strategy?\n>> I mean the following trivial patch:\n>>\n>> diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c\n>> index 6be80476db..243335d0e4 100644\n>> --- a/src/backend/storage/buffer/freelist.c\n>> +++ b/src/backend/storage/buffer/freelist.c\n>> @@ -208,8 +208,15 @@ StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state)\n>> /*\n>> * If given a strategy object, see whether it can select a buffer. We\n>> * assume strategy objects don't need buffer_strategy_lock.\n>> */\n>> - if (strategy != NULL)\n>> + if (strategy != NULL && StrategyControl->firstFreeBuffer < 0)\n>> {\n>> buf = GetBufferFromRing(strategy, buf_state);\n>> if (buf != NULL)\n>>\n>> So if there are free buffers, then use normal buffer allocation instead of GetBufferFromRing.\n> \n> Yes. As seen in [1], ring buffers aren't all that great in some cases,\n> and I think this is one. Buffer allocation should always make use of\n> the available resources, so that it doesn't take O(N/ring_size) scans\n> on a table to fill the buffers if that seqscan is the only workload of\n> the system.\n\nAgree but then what do you think about also paying special attention to those buffers when eviction needs to be done?\n\nThose buffers are \"usually\" needed briefly, so something like being able to distinguish them and be more aggressive regarding their eviction.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Mar 2023 12:50:35 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed-up shared buffers prewarming"
}
] |
[
{
"msg_contents": "Hello.\n\nWhen I ran pg_ls_dir('..'), the error message I received was somewhat\ndifficult to understand.\n\npostgres=> select * from pg_ls_dir('..');\nERROR: path must be in or below the current directory\n\nAs far as I know the concept of a \"current directory\" doesn't apply to\nthe server side. In fact, the function comment for\nconvert_and_check_filename explicitly states that:\n\n> * Filename may be absolute or relative to the DataDir\n\nThus I think that the message should read \"path must be in or below\nthe data directory\" instead.\n\nWhat do you think about making this change?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 16 Mar 2023 11:16:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"current directory\" in a server error message"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 7:47 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> Hello.\n>\n> When I ran pg_ls_dir('..'), the error message I received was somewhat\n> difficult to understand.\n>\n> postgres=> select * from pg_ls_dir('..');\n> ERROR: path must be in or below the current directory\n>\n> As far as I know the concept of a \"current directory\" doesn't apply to\n> the server side. In fact, the function comment for\n> convert_and_check_filename explicitly states that:\n>\n> > * Filename may be absolute or relative to the DataDir\n>\n> Thus I think that the message should read \"path must be in or below\n> the data directory\" instead.\n>\n> What do you think about making this change?\n\nWell yes. As far as postgres processes are concerned their working\ndirectory is set to data directory by the postmaster in\nChangeToDataDir() and all the children will inherit that setting. So,\nI see nothing wrong in being explicit about it in the error messages.\n\nBTW, adminpack too has the same error message.\n\nFWIW, here are the steps to generate the error:\ncreate role foo with nosuperuser;\ngrant execute on function pg_ls_dir(text) to foo;\nset role foo;\nselect * from pg_ls_dir('..');\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Mar 2023 09:32:05 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"current directory\" in a server error message"
},
{
"msg_contents": "At Thu, 16 Mar 2023 09:32:05 +0530, Bharath Rupireddy <[email protected]> wrote in \r\n> On Thu, Mar 16, 2023 at 7:47 AM Kyotaro Horiguchi\r\n> <[email protected]> wrote:\r\n> > Thus I think that the message should read \"path must be in or below\r\n> > the data directory\" instead.\r\n> >\r\n> > What do you think about making this change?\r\n> \r\n> Well yes. As far as postgres processes are concerned their working\r\n> directory is set to data directory by the postmaster in\r\n> ChangeToDataDir() and all the children will inherit that setting. So,\r\n> I see nothing wrong in being explicit about it in the error messages.\r\n\r\nYeah, you're right.\r\n\r\n> BTW, adminpack too has the same error message.\r\n\r\nI somehow dropped them. Thanks for pointing.\r\n\r\n> FWIW, here are the steps to generate the error:\r\n> create role foo with nosuperuser;\r\n> grant execute on function pg_ls_dir(text) to foo;\r\n> set role foo;\r\n> select * from pg_ls_dir('..');\r\n\r\nOh, thank you for the clarification about the reproduction method.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Thu, 16 Mar 2023 17:10:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"current directory\" in a server error message"
},
{
"msg_contents": "Kyotaro Horiguchi <[email protected]> writes:\n> At Thu, 16 Mar 2023 09:32:05 +0530, Bharath Rupireddy <[email protected]> wrote in \n>> On Thu, Mar 16, 2023 at 7:47 AM Kyotaro Horiguchi\n>> <[email protected]> wrote:\n>>> Thus I think that the message should read \"path must be in or below\n>>> the data directory\" instead.\n\n>> BTW, adminpack too has the same error message.\n\n> I somehow dropped them. Thanks for pointing.\n\nAgreed, this is an improvement. I fixed adminpack too and pushed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 12:05:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"current directory\" in a server error message"
},
{
"msg_contents": "At Thu, 16 Mar 2023 12:05:32 -0400, Tom Lane <[email protected]> wrote in \r\n> Kyotaro Horiguchi <[email protected]> writes:\r\n> > At Thu, 16 Mar 2023 09:32:05 +0530, Bharath Rupireddy <[email protected]> wrote in \r\n> >> On Thu, Mar 16, 2023 at 7:47 AM Kyotaro Horiguchi\r\n> >> <[email protected]> wrote:\r\n> >>> Thus I think that the message should read \"path must be in or below\r\n> >>> the data directory\" instead.\r\n> \r\n> >> BTW, adminpack too has the same error message.\r\n> \r\n> > I somehow dropped them. Thanks for pointing.\r\n> \r\n> Agreed, this is an improvement. I fixed adminpack too and pushed it.\r\n\r\nOh, thanks for committing this.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Fri, 17 Mar 2023 10:32:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"current directory\" in a server error message"
}
] |
[
{
"msg_contents": "Hi all,\n\nlibpq has kept some code related to the support of authentication with\nSCM credentials for some time now, code dead in the backend since\n9.1. Wouldn't it be time to let it go and remove this code entirely,\nerroring in libpq if attempting to connect to a server that supports\nthat?\n\nHard to say if this is actually working these days.\n\nOpinions or thoughts?\n--\nMichael",
"msg_date": "Thu, 16 Mar 2023 16:40:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove last traces of SCM credential auth from libpq?"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> libpq has kept some code related to the support of authentication with\n> SCM credentials for some time now, code dead in the backend since\n> 9.1. Wouldn't it be time to let it go and remove this code entirely,\n> erroring in libpq if attempting to connect to a server that supports\n> that?\n\n+1. Since that's only used on Unix-domain sockets, it could only be\nuseful if you were using current libpq while talking to a pre-9.1\nserver on the same machine. That seems fairly unlikely --- and if\nyou did have to do that, you could still connect, just not with peer\nauth. You'd be suffering other quality-of-life problems too,\nbecause we removed support for such old servers from psql and pg_dump\nawhile ago.\n\n> Hard to say if this is actually working these days.\n\nI didn't trace the old discussions, but the commit that removed the\nserver-side support (be4585b1c) mentions something about portability\nissues with that code ... so it's rather likely that it didn't work\nanyway.\n\nIn addition to the changes here, it looks like you could drop the\nconfigure/meson probes that set HAVE_STRUCT_CMSGCRED.\n\nAlso, in pg_fe_sendauth, couldn't you just let the default: case\nhandle it instead of adding a bespoke error message? We're not\nreally expecting that anyone is ever going to hit this, so I'm\nnot convinced it's worth the translation burden.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 10:49:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove last traces of SCM credential auth from libpq?"
},
{
"msg_contents": "On 3/16/23 10:49 AM, Tom Lane wrote:\r\n> Michael Paquier <[email protected]> writes:\r\n>> libpq has kept some code related to the support of authentication with\r\n>> SCM credentials for some time now, code dead in the backend since\r\n>> 9.1. Wouldn't it be time to let it go and remove this code entirely,\r\n>> erroring in libpq if attempting to connect to a server that supports\r\n>> that?\r\n> \r\n> +1. Since that's only used on Unix-domain sockets, it could only be\r\n> useful if you were using current libpq while talking to a pre-9.1\r\n> server on the same machine.\r\n\r\n+1.\r\n\r\n> Also, in pg_fe_sendauth, couldn't you just let the default: case\r\n> handle it instead of adding a bespoke error message? We're not\r\n> really expecting that anyone is ever going to hit this, so I'm\r\n> not convinced it's worth the translation burden.\r\n\r\n+1 to this, that was my thought as well. That would let us remove the \r\n\"AUTH_REQ_SCM_CREDS\" constant too.\r\n\r\nIt looks like in the po files there are a bunch of \"SCM_CRED \r\nauthentication method not supported\" messages that can also be removed.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 16 Mar 2023 13:28:51 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove last traces of SCM credential auth from libpq?"
},
{
"msg_contents": "\"Jonathan S. Katz\" <[email protected]> writes:\n> It looks like in the po files there are a bunch of \"SCM_CRED \n> authentication method not supported\" messages that can also be removed.\n\nThose will go away in the normal course of translation maintenance,\nthere's no need to remove them by hand. (Generally speaking, there\nis no need to ever touch the .po files except when new versions get\nimported from the translation repo.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 13:50:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove last traces of SCM credential auth from libpq?"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 10:49:45AM -0400, Tom Lane wrote:\n> In addition to the changes here, it looks like you could drop the\n> configure/meson probes that set HAVE_STRUCT_CMSGCRED.\n\nRight, done.\n\n> Also, in pg_fe_sendauth, couldn't you just let the default: case\n> handle it instead of adding a bespoke error message? We're not\n> really expecting that anyone is ever going to hit this, so I'm\n> not convinced it's worth the translation burden.\n\nYes, I was wondering if that's worth keeping or not, so I chose\nconsistency with AUTH_REQ_KRB4 and AUTH_REQ_KRB5.\n\nWould it be better to hold on this patch for 17~? I have just noticed\nthat while looking at Jacob's patch for require_auth, so the timing is\nnot good. Honestly, I don't see a reason to wait a few extra month to\nremove that, particularly now that pg_dump and pg_upgrade go down to\n9.2..\n--\nMichael",
"msg_date": "Fri, 17 Mar 2023 09:04:49 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove last traces of SCM credential auth from libpq?"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Mar 16, 2023 at 10:49:45AM -0400, Tom Lane wrote:\n>> Also, in pg_fe_sendauth, couldn't you just let the default: case\n>> handle it instead of adding a bespoke error message? We're not\n>> really expecting that anyone is ever going to hit this, so I'm\n>> not convinced it's worth the translation burden.\n\n> Yes, I was wondering if that's worth keeping or not, so I chose\n> consistency with AUTH_REQ_KRB4 and AUTH_REQ_KRB5.\n\nMaybe flush those special messages too? I'm not sure how long\nthey've been obsolete, though.\n\n> Would it be better to hold on this patch for 17~?\n\nNah, I see no reason to wait. We already dropped the higher-level\nclient support (psql/pg_dump) for these server versions in v15.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 20:10:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove last traces of SCM credential auth from libpq?"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 08:10:12PM -0400, Tom Lane wrote:\n> Maybe flush those special messages too? I'm not sure how long\n> they've been obsolete, though.\n\nKRB4 was switched in a159ad3 back in 2005, and KRB5 in 98de86e back in\n2014 (deprecated in 8.3, so that's even older than creds). So yes,\nthat could be removed as well, I guess, falling back to the default\nerror message.\n\n> Nah, I see no reason to wait. We already dropped the higher-level\n> client support (psql/pg_dump) for these server versions in v15.\n\nOkay. I'll clean up this part today, then.\n--\nMichael",
"msg_date": "Fri, 17 Mar 2023 09:30:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove last traces of SCM credential auth from libpq?"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 09:30:32AM +0900, Michael Paquier wrote:\n> KRB4 was switched in a159ad3 back in 2005, and KRB5 in 98de86e back in\n> 2014 (deprecated in 8.3, so that's even older than creds). So yes,\n> that could be removed as well, I guess, falling back to the default\n> error message.\n\nThis seems like something worth a thread of its own, will send a\npatch.\n\n>> Nah, I see no reason to wait. We already dropped the higher-level\n>> client support (psql/pg_dump) for these server versions in v15.\n> \n> Okay. I'll clean up this part today, then.\n\nI got around to do that with 98ae2c8.\n--\nMichael",
"msg_date": "Sat, 18 Mar 2023 08:13:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove last traces of SCM credential auth from libpq?"
}
] |
[
{
"msg_contents": "When looking at the report in [0] an API choice in the relevant pg_upgrade code\npath stood out as curious. check_is_install_user() runs this query to ensure\nthat only the install user is present in the cluster:\n\n res = executeQueryOrDie(conn,\n \"SELECT COUNT(*) \"\n \"FROM pg_catalog.pg_roles \"\n \"WHERE rolname !~ '^pg_'\");\n\nThe result is then verified with the following:\n\n if (cluster == &new_cluster && atooid(PQgetvalue(res, 0, 0)) != 1)\n pg_fatal(\"Only the install user can be defined in the new cluster.\");\n\nThis was changed from atoi() in ee646df59 with no specific comment on why.\nThis is not a bug, since atooid() will do the right thing here, but it threw me\noff reading the code and might well confuse others. Is there a reason not to\nchange this back to atoi() for code clarity as we're not reading an Oid here?\n\n--\nDaniel Gustafsson\n\n[0] VE1P191MB1118E9752D4EAD45205E995CD6BF9@VE1P191MB1118.EURP191.PROD.OUTLOOK.COM\n\n",
"msg_date": "Thu, 16 Mar 2023 11:20:24 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "The use of atooid() on non-Oid results"
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> When looking at the report in [0] an API choice in the relevant pg_upgrade code\n> path stood out as curious. check_is_install_user() runs this query to ensure\n> that only the install user is present in the cluster:\n\n> res = executeQueryOrDie(conn,\n> \"SELECT COUNT(*) \"\n> \"FROM pg_catalog.pg_roles \"\n> \"WHERE rolname !~ '^pg_'\");\n\n> The result is then verified with the following:\n\n> if (cluster == &new_cluster && atooid(PQgetvalue(res, 0, 0)) != 1)\n> pg_fatal(\"Only the install user can be defined in the new cluster.\");\n\n> This was changed from atoi() in ee646df59 with no specific comment on why.\n> This is not a bug, since atooid() will do the right thing here, but it threw me\n> off reading the code and might well confuse others. Is there a reason not to\n> change this back to atoi() for code clarity as we're not reading an Oid here?\n\nHmm ... in principle, you could have more than 2^31 entries in pg_roles,\nbut not more than 2^32 since they all have to have distinct OIDs. So\nI can see the point of avoiding that theoretical overflow hazard. But\npersonally I'd probably avoid assuming anything about how wide the COUNT()\nresult could be, and instead writing\n\n\t... && strcmp(PQgetvalue(res, 0, 0), \"1\") != 0)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 10:58:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The use of atooid() on non-Oid results"
},
{
"msg_contents": "> On 16 Mar 2023, at 15:58, Tom Lane <[email protected]> wrote:\n> \n> Daniel Gustafsson <[email protected]> writes:\n>> When looking at the report in [0] an API choice in the relevant pg_upgrade code\n>> path stood out as curious. check_is_install_user() runs this query to ensure\n>> that only the install user is present in the cluster:\n> \n>> res = executeQueryOrDie(conn,\n>> \"SELECT COUNT(*) \"\n>> \"FROM pg_catalog.pg_roles \"\n>> \"WHERE rolname !~ '^pg_'\");\n> \n>> The result is then verified with the following:\n> \n>> if (cluster == &new_cluster && atooid(PQgetvalue(res, 0, 0)) != 1)\n>> pg_fatal(\"Only the install user can be defined in the new cluster.\");\n> \n>> This was changed from atoi() in ee646df59 with no specific comment on why.\n>> This is not a bug, since atooid() will do the right thing here, but it threw me\n>> off reading the code and might well confuse others. Is there a reason not to\n>> change this back to atoi() for code clarity as we're not reading an Oid here?\n> \n> Hmm ... in principle, you could have more than 2^31 entries in pg_roles,\n> but not more than 2^32 since they all have to have distinct OIDs. So\n> I can see the point of avoiding that theoretical overflow hazard. But\n> personally I'd probably avoid assuming anything about how wide the COUNT()\n> result could be, and instead writing\n> \n> \t... && strcmp(PQgetvalue(res, 0, 0), \"1\") != 0)\n\nYeah, that makes sense. I'll go ahead with that solution instead and possibly\na brief addition to the comment.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 16 Mar 2023 20:17:15 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The use of atooid() on non-Oid results"
}
] |
[
{
"msg_contents": "Hi\n\nsee\n\n[504/2287] Compiling C object\nsrc/backend/postgres_lib.a.p/access_transam_xlogrecovery.c.o\nIn function ‘recoveryStopsAfter’,\n inlined from ‘PerformWalRecovery’ at\n../src/backend/access/transam/xlogrecovery.c:1749:8:\n../src/backend/access/transam/xlogrecovery.c:2737:42: warning:\n‘recordXtime’ may be used uninitialized [-Wmaybe-uninitialized]\n 2737 | recoveryStopTime = recordXtime;\n | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~\n../src/backend/access/transam/xlogrecovery.c: In function\n‘PerformWalRecovery’:\n../src/backend/access/transam/xlogrecovery.c:2628:21: note: ‘recordXtime’\nwas declared here\n 2628 | TimestampTz recordXtime;\n | ^~~~~~~~~~~\n[1642/2287] Compiling C object src/bin/pgbench/pgbench.p/pgbench.c.o\nIn function ‘coerceToInt’,\n inlined from ‘evalStandardFunc’ at ../src/bin/pgbench/pgbench.c:2617:11:\n../src/bin/pgbench/pgbench.c:2042:17: warning: ‘vargs[0].type’ may be used\nuninitialized [-Wmaybe-uninitialized]\n 2042 | if (pval->type == PGBT_INT)\n | ~~~~^~~~~~\n../src/bin/pgbench/pgbench.c: In function ‘evalStandardFunc’:\n../src/bin/pgbench/pgbench.c:2250:22: note: ‘vargs’ declared here\n 2250 | PgBenchValue vargs[MAX_FARGS];\n | ^~~~~\nIn function ‘coerceToInt’,\n inlined from ‘evalStandardFunc’ at ../src/bin/pgbench/pgbench.c:2617:11:\n../src/bin/pgbench/pgbench.c:2044:32: warning: ‘vargs[0].u.ival’ may be\nused uninitialized [-Wmaybe-uninitialized]\n 2044 | *ival = pval->u.ival;\n | ~~~~~~~^~~~~\n../src/bin/pgbench/pgbench.c: In function ‘evalStandardFunc’:\n../src/bin/pgbench/pgbench.c:2250:22: note: ‘vargs’ declared here\n 2250 | PgBenchValue vargs[MAX_FARGS];\n | ^~~~~\nIn function ‘coerceToInt’,\n inlined from ‘evalStandardFunc’ at ../src/bin/pgbench/pgbench.c:2617:11:\n../src/bin/pgbench/pgbench.c:2049:40: warning: ‘vargs[0].u.dval’ may be\nused uninitialized [-Wmaybe-uninitialized]\n 2049 | double dval = rint(pval->u.dval);\n | ^~~~~~~~~~~~~~~~~~\n../src/bin/pgbench/pgbench.c: In function ‘evalStandardFunc’:\n../src/bin/pgbench/pgbench.c:2250:22: note: ‘vargs’ declared here\n 2250 | PgBenchValue vargs[MAX_FARGS];\n | ^~~~~\n[1700/2287] Compiling C object src/pl/plpgsql/src/plpgsql.so.p/pl_exec.c.o\nIn file included from ../src/include/access/htup_details.h:22,\n from ../src/pl/plpgsql/src/pl_exec.c:21:\nIn function ‘assign_simple_var’,\n inlined from ‘exec_set_found’ at ../src/pl/plpgsql/src/pl_exec.c:8307:2:\n../src/include/varatt.h:230:36: warning: array subscript 0 is outside array\nbounds of ‘char[0]’ [-Warray-bounds=]\n 230 | (((varattrib_1b_e *) (PTR))->va_tag)\n | ^\n../src/include/varatt.h:94:12: note: in definition of macro\n‘VARTAG_IS_EXPANDED’\n 94 | (((tag) & ~1) == VARTAG_EXPANDED_RO)\n | ^~~\n../src/include/varatt.h:284:57: note: in expansion of macro ‘VARTAG_1B_E’\n 284 | #define VARTAG_EXTERNAL(PTR)\n VARTAG_1B_E(PTR)\n | ^~~~~~~~~~~\n../src/include/varatt.h:301:57: note: in expansion of macro\n‘VARTAG_EXTERNAL’\n 301 | (VARATT_IS_EXTERNAL(PTR) &&\n!VARTAG_IS_EXPANDED(VARTAG_EXTERNAL(PTR)))\n |\n^~~~~~~~~~~~~~~\n../src/pl/plpgsql/src/pl_exec.c:8495:17: note: in expansion of macro\n‘VARATT_IS_EXTERNAL_NON_EXPANDED’\n 8495 |\nVARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue)))\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIn function ‘exec_set_found’:\ncc1: note: source object is likely at address zero\n\nRegards\n\nPavel\n\nHisee[504/2287] Compiling C object src/backend/postgres_lib.a.p/access_transam_xlogrecovery.c.oIn function ‘recoveryStopsAfter’, inlined from ‘PerformWalRecovery’ at 
../src/backend/access/transam/xlogrecovery.c:1749:8:../src/backend/access/transam/xlogrecovery.c:2737:42: warning: ‘recordXtime’ may be used uninitialized [-Wmaybe-uninitialized] 2737 | recoveryStopTime = recordXtime; | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~../src/backend/access/transam/xlogrecovery.c: In function ‘PerformWalRecovery’:../src/backend/access/transam/xlogrecovery.c:2628:21: note: ‘recordXtime’ was declared here 2628 | TimestampTz recordXtime; | ^~~~~~~~~~~[1642/2287] Compiling C object src/bin/pgbench/pgbench.p/pgbench.c.oIn function ‘coerceToInt’, inlined from ‘evalStandardFunc’ at ../src/bin/pgbench/pgbench.c:2617:11:../src/bin/pgbench/pgbench.c:2042:17: warning: ‘vargs[0].type’ may be used uninitialized [-Wmaybe-uninitialized] 2042 | if (pval->type == PGBT_INT) | ~~~~^~~~~~../src/bin/pgbench/pgbench.c: In function ‘evalStandardFunc’:../src/bin/pgbench/pgbench.c:2250:22: note: ‘vargs’ declared here 2250 | PgBenchValue vargs[MAX_FARGS]; | ^~~~~In function ‘coerceToInt’, inlined from ‘evalStandardFunc’ at ../src/bin/pgbench/pgbench.c:2617:11:../src/bin/pgbench/pgbench.c:2044:32: warning: ‘vargs[0].u.ival’ may be used uninitialized [-Wmaybe-uninitialized] 2044 | *ival = pval->u.ival; | ~~~~~~~^~~~~../src/bin/pgbench/pgbench.c: In function ‘evalStandardFunc’:../src/bin/pgbench/pgbench.c:2250:22: note: ‘vargs’ declared here 2250 | PgBenchValue vargs[MAX_FARGS]; | ^~~~~In function ‘coerceToInt’, inlined from ‘evalStandardFunc’ at ../src/bin/pgbench/pgbench.c:2617:11:../src/bin/pgbench/pgbench.c:2049:40: warning: ‘vargs[0].u.dval’ may be used uninitialized [-Wmaybe-uninitialized] 2049 | double dval = rint(pval->u.dval); | ^~~~~~~~~~~~~~~~~~../src/bin/pgbench/pgbench.c: In function ‘evalStandardFunc’:../src/bin/pgbench/pgbench.c:2250:22: note: ‘vargs’ declared here 2250 | PgBenchValue vargs[MAX_FARGS]; | ^~~~~[1700/2287] Compiling C object src/pl/plpgsql/src/plpgsql.so.p/pl_exec.c.oIn file included from ../src/include/access/htup_details.h:22, from ../src/pl/plpgsql/src/pl_exec.c:21:In function ‘assign_simple_var’, inlined from ‘exec_set_found’ at ../src/pl/plpgsql/src/pl_exec.c:8307:2:../src/include/varatt.h:230:36: warning: array subscript 0 is outside array bounds of ‘char[0]’ [-Warray-bounds=] 230 | (((varattrib_1b_e *) (PTR))->va_tag) | ^../src/include/varatt.h:94:12: note: in definition of macro ‘VARTAG_IS_EXPANDED’ 94 | (((tag) & ~1) == VARTAG_EXPANDED_RO) | ^~~../src/include/varatt.h:284:57: note: in expansion of macro ‘VARTAG_1B_E’ 284 | #define VARTAG_EXTERNAL(PTR) VARTAG_1B_E(PTR) | ^~~~~~~~~~~../src/include/varatt.h:301:57: note: in expansion of macro ‘VARTAG_EXTERNAL’ 301 | (VARATT_IS_EXTERNAL(PTR) && !VARTAG_IS_EXPANDED(VARTAG_EXTERNAL(PTR))) | ^~~~~~~~~~~~~~~../src/pl/plpgsql/src/pl_exec.c:8495:17: note: in expansion of macro ‘VARATT_IS_EXTERNAL_NON_EXPANDED’ 8495 | VARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue))) | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~In function ‘exec_set_found’:cc1: note: source object is likely at address zeroRegardsPavel",
"msg_date": "Thu, 16 Mar 2023 14:40:04 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "gcc 13 warnings"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 9:40 AM Pavel Stehule <[email protected]> wrote:\n> [1700/2287] Compiling C object src/pl/plpgsql/src/plpgsql.so.p/pl_exec.c.o\n> In file included from ../src/include/access/htup_details.h:22,\n> from ../src/pl/plpgsql/src/pl_exec.c:21:\n> In function ‘assign_simple_var’,\n> inlined from ‘exec_set_found’ at ../src/pl/plpgsql/src/pl_exec.c:8307:2:\n> ../src/include/varatt.h:230:36: warning: array subscript 0 is outside array bounds of ‘char[0]’ [-Warray-bounds=]\n> 230 | (((varattrib_1b_e *) (PTR))->va_tag)\n> | ^\n> ../src/include/varatt.h:94:12: note: in definition of macro ‘VARTAG_IS_EXPANDED’\n> 94 | (((tag) & ~1) == VARTAG_EXPANDED_RO)\n> | ^~~\n> ../src/include/varatt.h:284:57: note: in expansion of macro ‘VARTAG_1B_E’\n> 284 | #define VARTAG_EXTERNAL(PTR) VARTAG_1B_E(PTR)\n> | ^~~~~~~~~~~\n> ../src/include/varatt.h:301:57: note: in expansion of macro ‘VARTAG_EXTERNAL’\n> 301 | (VARATT_IS_EXTERNAL(PTR) && !VARTAG_IS_EXPANDED(VARTAG_EXTERNAL(PTR)))\n> | ^~~~~~~~~~~~~~~\n> ../src/pl/plpgsql/src/pl_exec.c:8495:17: note: in expansion of macro ‘VARATT_IS_EXTERNAL_NON_EXPANDED’\n> 8495 | VARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue)))\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> In function ‘exec_set_found’:\n> cc1: note: source object is likely at address zero\n\nI see these with gcc 12.2.0 also.\n\n\n",
"msg_date": "Thu, 16 Mar 2023 10:21:23 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "Melanie Plageman <[email protected]> writes:\n> On Thu, Mar 16, 2023 at 9:40 AM Pavel Stehule <[email protected]> wrote:\n>> ../src/include/varatt.h:230:36: warning: array subscript 0 is outside array bounds of ‘char[0]’ [-Warray-bounds=]\n\n> I see these with gcc 12.2.0 also.\n\nHmm, I do not see any warnings on HEAD with Fedora 37's gcc 12.2.1.\nWhat non-default configure switches, CFLAGS, etc are you using?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 11:43:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "čt 16. 3. 2023 v 16:43 odesílatel Tom Lane <[email protected]> napsal:\n\n> Melanie Plageman <[email protected]> writes:\n> > On Thu, Mar 16, 2023 at 9:40 AM Pavel Stehule <[email protected]>\n> wrote:\n> >> ../src/include/varatt.h:230:36: warning: array subscript 0 is outside\n> array bounds of ‘char[0]’ [-Warray-bounds=]\n>\n> > I see these with gcc 12.2.0 also.\n>\n> Hmm, I do not see any warnings on HEAD with Fedora 37's gcc 12.2.1.\n> What non-default configure switches, CFLAGS, etc are you using?\n>\n\nmeson build without any settings\n\nI think so it is related to meson build, I didn't see these warnings with\nautoconf\n\nregards\n\nPavel\n\n\n\n>\n> regards, tom lane\n>\n\nčt 16. 3. 2023 v 16:43 odesílatel Tom Lane <[email protected]> napsal:Melanie Plageman <[email protected]> writes:\n> On Thu, Mar 16, 2023 at 9:40 AM Pavel Stehule <[email protected]> wrote:\n>> ../src/include/varatt.h:230:36: warning: array subscript 0 is outside array bounds of ‘char[0]’ [-Warray-bounds=]\n\n> I see these with gcc 12.2.0 also.\n\nHmm, I do not see any warnings on HEAD with Fedora 37's gcc 12.2.1.\nWhat non-default configure switches, CFLAGS, etc are you using?meson build without any settingsI think so it is related to meson build, I didn't see these warnings with autoconfregardsPavel \n\n regards, tom lane",
"msg_date": "Thu, 16 Mar 2023 17:00:47 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "Pavel Stehule <[email protected]> writes:\n> čt 16. 3. 2023 v 16:43 odesílatel Tom Lane <[email protected]> napsal:\n>> Hmm, I do not see any warnings on HEAD with Fedora 37's gcc 12.2.1.\n>> What non-default configure switches, CFLAGS, etc are you using?\n\n> meson build without any settings\n> I think so it is related to meson build, I didn't see these warnings with\n> autoconf\n\nIt wouldn't be entirely surprising if meson is selecting some -W\nswitches that the configure script doesn't ... but I don't know\nwhere to check or change that.\n\nIf that is the case, do we want to beat meson over the head till\nit stops doing that, or try to silence the warnings? The ones\nyou show here don't look terribly helpful ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 12:10:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-16 12:10:27 -0400, Tom Lane wrote:\n> Pavel Stehule <[email protected]> writes:\n> > čt 16. 3. 2023 v 16:43 odesílatel Tom Lane <[email protected]> napsal:\n> >> Hmm, I do not see any warnings on HEAD with Fedora 37's gcc 12.2.1.\n> >> What non-default configure switches, CFLAGS, etc are you using?\n> \n> > meson build without any settings\n> > I think so it is related to meson build, I didn't see these warnings with\n> > autoconf\n> \n> It wouldn't be entirely surprising if meson is selecting some -W\n> switches that the configure script doesn't ... but I don't know\n> where to check or change that.\n\nI think it's just that meson defaults to -O3 (fwiw, I see substantial gains of\nthat over -O2). I see such warnings with autoconf as well if I make it use\n-O3.\n\nI think some of these are stemming from\nhttps://postgr.es/m/20230204130708.pta7pjc4dvu225ey%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Mar 2023 10:05:06 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-16 10:05:06 -0700, Andres Freund wrote:\n> I think it's just that meson defaults to -O3 (fwiw, I see substantial gains of\n> that over -O2). I see such warnings with autoconf as well if I make it use\n> -O3.\n\nWRT:\n\nIn file included from /home/andres/src/postgresql/src/include/access/htup_details.h:22,\n from /home/andres/src/postgresql/src/pl/plpgsql/src/pl_exec.c:21:\nIn function ‘assign_simple_var’,\n inlined from ‘exec_set_found’ at /home/andres/src/postgresql/src/pl/plpgsql/src/pl_exec.c:8307:2:\n/home/andres/src/postgresql/src/include/varatt.h:230:36: warning: array subscript 0 is outside array bounds of ‘char[0]’ [-Warray-bounds]\n 230 | (((varattrib_1b_e *) (PTR))->va_tag)\n | ^\n/home/andres/src/postgresql/src/include/varatt.h:94:12: note: in definition of macro ‘VARTAG_IS_EXPANDED’\n 94 | (((tag) & ~1) == VARTAG_EXPANDED_RO)\n | ^~~\n/home/andres/src/postgresql/src/include/varatt.h:284:57: note: in expansion of macro ‘VARTAG_1B_E’\n 284 | #define VARTAG_EXTERNAL(PTR) VARTAG_1B_E(PTR)\n | ^~~~~~~~~~~\n/home/andres/src/postgresql/src/include/varatt.h:301:57: note: in expansion of macro ‘VARTAG_EXTERNAL’\n 301 | (VARATT_IS_EXTERNAL(PTR) && !VARTAG_IS_EXPANDED(VARTAG_EXTERNAL(PTR)))\n | ^~~~~~~~~~~~~~~\n/home/andres/src/postgresql/src/pl/plpgsql/src/pl_exec.c:8495:17: note: in expansion of macro ‘VARATT_IS_EXTERNAL_NON_EXPANDED’\n 8495 | VARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue)))\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nI think that's basically because gcc does realize that the datum is just an 8\nbyte by-value datum:\n\tassign_simple_var(estate, var, BoolGetDatum(state), false, false);\n\nbut doesn't (and probably can't, with the available information) grok that\nthat means we don't even get to the VARATT_IS_EXTERNAL_NON_EXPANDED() in\nassign_simple_var().\n\nIf I add\n\tif (var->datatype->typlen == -1)\n\t\tpg_unreachable();\nto exec_set_found(), the warning indeed goes away.\n\n\nI've wondered before if we should make at least some Asserts() into something\nlike the above (if we have something better backing it than abort()), so the\ncompiler can understand unreachable code paths even when building without\ncassert.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Mar 2023 10:28:18 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-03-16 12:10:27 -0400, Tom Lane wrote:\n>> It wouldn't be entirely surprising if meson is selecting some -W\n>> switches that the configure script doesn't ... but I don't know\n>> where to check or change that.\n\n> I think it's just that meson defaults to -O3 (fwiw, I see substantial gains of\n> that over -O2). I see such warnings with autoconf as well if I make it use\n> -O3.\n\nOh, interesting. Should we try to standardize the two build systems\non the same -O level, and if so which one?\n\nTo my mind, you should ideally get the identical built bits out of\neither system, so defaulting to a different -O level seems bad.\nI'm not sure if we're prepared to go to -O3 by default though,\nespecially for some of the older buildfarm critters where that\nmight be buggy. (I'd imagine you take a hit in gdb-ability too.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 13:54:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-16 13:54:29 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-03-16 12:10:27 -0400, Tom Lane wrote:\n> >> It wouldn't be entirely surprising if meson is selecting some -W\n> >> switches that the configure script doesn't ... but I don't know\n> >> where to check or change that.\n>\n> > I think it's just that meson defaults to -O3 (fwiw, I see substantial gains of\n> > that over -O2). I see such warnings with autoconf as well if I make it use\n> > -O3.\n>\n> Oh, interesting. Should we try to standardize the two build systems\n> on the same -O level, and if so which one?\n\nI'm on the fence on this one (and posed it as a question before). O3 does\nresult in higher performance for me, but it also does take longer to build,\nand increases the numbers of warnings.\n\nSo I just elected to leave it at the default for meson.\n\n\n> To my mind, you should ideally get the identical built bits out of\n> either system, so defaulting to a different -O level seems bad.\n\nI doubt that is attainable, unfortunately. My experience is that even trivial\nchanges can lead to substantial changes in output. Even just being in a\ndifferent directory (the root build directory for meson vs the subdirectory in\nmake builds) apparently sometimes leads to different compiler output.\n\n\n> I'm not sure if we're prepared to go to -O3 by default though,\n> especially for some of the older buildfarm critters where that\n> might be buggy. (I'd imagine you take a hit in gdb-ability too.)\n\nMy experience is that debuggability is already bad enough at O2 that the\ndifference to O3 is pretty marginal. But it certainly depends a bit on the\ncompiler version and what level of debug information one enables.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Mar 2023 11:11:00 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 1:11 AM Andres Freund <[email protected]> wrote:\n\n> On 2023-03-16 13:54:29 -0400, Tom Lane wrote:\n\n> So I just elected to leave it at the default for meson.\n\nIn my build scripts I've been setting it to -O2, because that seemed the\nobvious thing to do, and assumed some later commit would get rid of the\nneed to do it manually. (if it was discussed before, I missed that)\n\n> > I'm not sure if we're prepared to go to -O3 by default though,\n> > especially for some of the older buildfarm critters where that\n> > might be buggy. (I'd imagine you take a hit in gdb-ability too.)\n\nNewer platforms could be buggy enough. A while back, IIUC gcc moved an\noptimization pass from O3 to O2, which resulted in obviously bad code\ngeneration, which I know because of a bug report filed by one Andres Freund:\n\nhttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=101481\n\n...which was never properly addressed as far as I know.\n\nI'm a bit surprised we would even consider changing optimization level\nbased on a build tool default.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Fri, Mar 17, 2023 at 1:11 AM Andres Freund <[email protected]> wrote:> On 2023-03-16 13:54:29 -0400, Tom Lane wrote:> So I just elected to leave it at the default for meson.In my build scripts I've been setting it to -O2, because that seemed the obvious thing to do, and assumed some later commit would get rid of the need to do it manually. (if it was discussed before, I missed that)> > I'm not sure if we're prepared to go to -O3 by default though,> > especially for some of the older buildfarm critters where that> > might be buggy. (I'd imagine you take a hit in gdb-ability too.)Newer platforms could be buggy enough. A while back, IIUC gcc moved an optimization pass from O3 to O2, which resulted in obviously bad code generation, which I know because of a bug report filed by one Andres Freund:https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101481...which was never properly addressed as far as I know.I'm a bit surprised we would even consider changing optimization level based on a build tool default.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 17 Mar 2023 10:14:56 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "On 16.03.23 19:11, Andres Freund wrote:\n> On 2023-03-16 13:54:29 -0400, Tom Lane wrote:\n>> Andres Freund <[email protected]> writes:\n>>> On 2023-03-16 12:10:27 -0400, Tom Lane wrote:\n>>>> It wouldn't be entirely surprising if meson is selecting some -W\n>>>> switches that the configure script doesn't ... but I don't know\n>>>> where to check or change that.\n>>\n>>> I think it's just that meson defaults to -O3 (fwiw, I see substantial gains of\n>>> that over -O2). I see such warnings with autoconf as well if I make it use\n>>> -O3.\n>>\n>> Oh, interesting. Should we try to standardize the two build systems\n>> on the same -O level, and if so which one?\n> \n> I'm on the fence on this one (and posed it as a question before). O3 does\n> result in higher performance for me, but it also does take longer to build,\n> and increases the numbers of warnings.\n> \n> So I just elected to leave it at the default for meson.\n\nAFAICT, the default for meson is buildtype=debug, which is -O0. The -O3 \ncomes from meson.build setting buildtype=release.\n\nI think a good compromise would be buildtype=debugoptimized, which is \n-O2 with debug symbols, which also sort of matches the default in the \nautoconf world.\n\n(https://mesonbuild.com/Builtin-options.html#details-for-buildtype)\n\nAt least during the transition phase I would prefer having the same \ndefault optimization level in both build systems, mainly because of how \nthis affects warnings.\n\n\n\n",
"msg_date": "Fri, 17 Mar 2023 09:06:05 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 16.03.23 19:11, Andres Freund wrote:\n>> So I just elected to leave it at the default for meson.\n\n> AFAICT, the default for meson is buildtype=debug, which is -O0. The -O3 \n> comes from meson.build setting buildtype=release.\n\n> I think a good compromise would be buildtype=debugoptimized, which is \n> -O2 with debug symbols, which also sort of matches the default in the \n> autoconf world.\n\nThat sounds promising.\n\n> At least during the transition phase I would prefer having the same \n> default optimization level in both build systems, mainly because of how \n> this affects warnings.\n\nI'd prefer sticking to -O2 mainly because of the risk of new bugs.\nThe meson conversion is a big enough job without adding \"harden\nPostgres against -O3\" to the list of tasks that must be accomplished.\nWe can take that on in due time, but let's keep it separate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Mar 2023 10:26:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-17 09:06:05 +0100, Peter Eisentraut wrote:\n> AFAICT, the default for meson is buildtype=debug, which is -O0. The -O3\n> comes from meson.build setting buildtype=release.\n\nRight - my point about -O3 was just that buildtype=release defaults to it.\n\n\n> I think a good compromise would be buildtype=debugoptimized, which is -O2\n> with debug symbols, which also sort of matches the default in the autoconf\n> world.\n\nLooks like that'd result in a slightly worse build with msvc, as afaict we\nwouldn't end up with /OPT:REF doesn't get specified, which automatically gets\ndisabled if /DEBUG is specified. I guess we can live with that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Mar 2023 16:54:27 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "On 18.03.23 00:54, Andres Freund wrote:\n>> I think a good compromise would be buildtype=debugoptimized, which is -O2\n>> with debug symbols, which also sort of matches the default in the autoconf\n>> world.\n> \n> Looks like that'd result in a slightly worse build with msvc, as afaict we\n> wouldn't end up with /OPT:REF doesn't get specified, which automatically gets\n> disabled if /DEBUG is specified. I guess we can live with that.\n\nI looked up what /OPT:REF does \n(https://learn.microsoft.com/en-us/cpp/build/reference/opt-optimizations?view=msvc-170), \nand it seems pretty obscure to me, at least for development builds.\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:45:59 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "On 22.03.23 10:45, Peter Eisentraut wrote:\n> On 18.03.23 00:54, Andres Freund wrote:\n>>> I think a good compromise would be buildtype=debugoptimized, which is \n>>> -O2\n>>> with debug symbols, which also sort of matches the default in the \n>>> autoconf\n>>> world.\n>>\n>> Looks like that'd result in a slightly worse build with msvc, as \n>> afaict we\n>> wouldn't end up with /OPT:REF doesn't get specified, which \n>> automatically gets\n>> disabled if /DEBUG is specified. I guess we can live with that.\n> \n> I looked up what /OPT:REF does \n> (https://learn.microsoft.com/en-us/cpp/build/reference/opt-optimizations?view=msvc-170), and it seems pretty obscure to me, at least for development builds.\n\nI have committed the change of buildtype to debugoptimized.\n\n\n",
"msg_date": "Wed, 29 Mar 2023 09:51:19 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "Hi,\n\n> I have committed the change of buildtype to debugoptimized.\n\nThere is still a warning previously reported by Melanie:\n\n```\n[1391/1944] Compiling C object src/pl/plpgsql/src/plpgsql.so.p/pl_exec.c.o\nIn file included from ../src/include/access/htup_details.h:22,\n from ../src/pl/plpgsql/src/pl_exec.c:21:\nIn function ‘assign_simple_var’,\n inlined from ‘exec_set_found’ at ../src/pl/plpgsql/src/pl_exec.c:8382:2:\n../src/include/varatt.h:230:36: warning: array subscript 0 is outside\narray bounds of ‘char[0]’ [-Warray-bounds]\n 230 | (((varattrib_1b_e *) (PTR))->va_tag)\n | ^\n../src/include/varatt.h:94:12: note: in definition of macro ‘VARTAG_IS_EXPANDED’\n 94 | (((tag) & ~1) == VARTAG_EXPANDED_RO)\n | ^~~\n../src/include/varatt.h:284:57: note: in expansion of macro ‘VARTAG_1B_E’\n 284 | #define VARTAG_EXTERNAL(PTR) VARTAG_1B_E(PTR)\n | ^~~~~~~~~~~\n../src/include/varatt.h:301:57: note: in expansion of macro ‘VARTAG_EXTERNAL’\n 301 | (VARATT_IS_EXTERNAL(PTR) &&\n!VARTAG_IS_EXPANDED(VARTAG_EXTERNAL(PTR)))\n | ^~~~~~~~~~~~~~~\n../src/pl/plpgsql/src/pl_exec.c:8570:17: note: in expansion of macro\n‘VARATT_IS_EXTERNAL_NON_EXPANDED’\n 8570 |\nVARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue)))\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n[1687/1944] Compiling C object\nsrc/test/modules/test_dsa/test_dsa.so.p/test_dsa.c.o^C\nninja: build stopped: interrupted by user.\n``\n\nDisplayed only for the release builds, e.g.:\n\n```\ngit clean -dfx\nmeson setup --buildtype release -DPG_TEST_EXTRA=\"kerberos ldap ssl\"\n-Dldap=disabled -Dssl=openssl -Dcassert=true -Dtap_tests=enabled\n-Dprefix=/home/eax/pginstall build\nninja -C build\n```\n\nCompiler version is:\n\n```\ngcc (Debian 12.2.0-14) 12.2.0\n```\n\nThe overall environment is Raspberry Pi 5 with pretty much default\nconfiguration - Raspbian etc.\n\nHow to fix it? Absolutely no idea :)\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 5 Jul 2024 14:19:12 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-05 14:19:12 +0300, Aleksander Alekseev wrote:\n> There is still a warning previously reported by Melanie:\n>\n> ```\n> [1391/1944] Compiling C object src/pl/plpgsql/src/plpgsql.so.p/pl_exec.c.o\n> In file included from ../src/include/access/htup_details.h:22,\n> from ../src/pl/plpgsql/src/pl_exec.c:21:\n> In function ‘assign_simple_var’,\n> inlined from ‘exec_set_found’ at ../src/pl/plpgsql/src/pl_exec.c:8382:2:\n> ../src/include/varatt.h:230:36: warning: array subscript 0 is outside\n> array bounds of ‘char[0]’ [-Warray-bounds]\n> 230 | (((varattrib_1b_e *) (PTR))->va_tag)\n> | ^\n> ../src/include/varatt.h:94:12: note: in definition of macro ‘VARTAG_IS_EXPANDED’\n> 94 | (((tag) & ~1) == VARTAG_EXPANDED_RO)\n> | ^~~\n> ../src/include/varatt.h:284:57: note: in expansion of macro ‘VARTAG_1B_E’\n> 284 | #define VARTAG_EXTERNAL(PTR) VARTAG_1B_E(PTR)\n> | ^~~~~~~~~~~\n> ../src/include/varatt.h:301:57: note: in expansion of macro ‘VARTAG_EXTERNAL’\n> 301 | (VARATT_IS_EXTERNAL(PTR) &&\n> !VARTAG_IS_EXPANDED(VARTAG_EXTERNAL(PTR)))\n> | ^~~~~~~~~~~~~~~\n> ../src/pl/plpgsql/src/pl_exec.c:8570:17: note: in expansion of macro\n> ‘VARATT_IS_EXTERNAL_NON_EXPANDED’\n> 8570 |\n> VARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue)))\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> [1687/1944] Compiling C object\n> src/test/modules/test_dsa/test_dsa.so.p/test_dsa.c.o^C\n> ninja: build stopped: interrupted by user.\n> ``\n\n> The overall environment is Raspberry Pi 5 with pretty much default\n> configuration - Raspbian etc.\n>\n> How to fix it? Absolutely no idea :)\n\nI think it's actually a somewhat reasonable warning - the compiler can't know\nthat in exec_set_found() we'll always deal with typlen == 1 and thus can't\never reach the inside of the branch it warns about.\n\nOnce the compiler knows about that \"restriction\", the warning vanishes. Try\nadding the following to exec_set_found():\n\n\t/*\n\t * Prevent spurious warning due to compiler not realizing\n\t * VARATT_IS_EXTERNAL_NON_EXPANDED() branch in assign_simple_var() isn't\n\t * reachable due to \"found\" being byvalue.\n\t */\n\tif (var->datatype->typlen != 1)\n\t\tpg_unreachable();\n\nI'm somewhat inclined to think it'd be worth adding something along those\nlines to avoid this warning ([1]).\n\nGreetings,\n\nAndres Freund\n\n\n[1]\n\nIn general we're actually hiding a lot of useful information from the compiler\nin release builds, due to asserts not being enabled. I've been wondering about\na version of Assert() that isn't completely removed in release builds but\ninstead transform into the pg_unreachable() form for compilers with an\n\"efficient\" pg_unreachable() (i.e. not using abort()).\n\nWe can't just do that for all asserts though, it only makes sense for ones\nthat are \"trivial\" in some form (i.e. the compiler can realize it doesn't\nhave side effects and doesn't need be generated).\n\nThat's about generating more optimized code though.\n\n\n",
"msg_date": "Fri, 12 Jul 2024 10:45:32 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
},
{
"msg_contents": "Hi,\n\n> /*\n> * Prevent spurious warning due to compiler not realizing\n> * VARATT_IS_EXTERNAL_NON_EXPANDED() branch in assign_simple_var() isn't\n> * reachable due to \"found\" being byvalue.\n> */\n> if (var->datatype->typlen != 1)\n> pg_unreachable();\n>\n> I'm somewhat inclined to think it'd be worth adding something along those\n> lines to avoid this warning ([1]).\n\nIMO we shouldn't allow warnings to appear in release builds, even\nharmless ones. Otherwise we start ignoring them and will skip\nsomething important one day. So I think we should do this.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 15 Jul 2024 11:38:42 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc 13 warnings"
}
] |
[
{
"msg_contents": "I notice a number of places in fe-connect.c have copied this idiom\nwhere if an option is present they validate the legal options and\notherwise they strdup a default value. This strdup of the default\noption I think is being copied from sslmode's validation which is a\nbit special but afaics the following still applies to it.\n\n /*\n * validate channel_binding option\n */\n if (conn->channel_binding)\n {\n if (strcmp(conn->channel_binding, \"disable\") != 0\n && strcmp(conn->channel_binding, \"prefer\") != 0\n && strcmp(conn->channel_binding, \"require\") != 0)\n {\n conn->status = CONNECTION_BAD;\n libpq_append_conn_error(conn, \"invalid %s value: \\\"%s\\\"\",\n\n\"channel_binding\", conn->channel_binding);\n return false;\n }\n }\n else\n {\n conn->channel_binding = strdup(DefaultChannelBinding);\n if (!conn->channel_binding)\n goto oom_error;\n }\n\nAFAICS the else branch of this is entirely dead code. These options\ncannot be NULL because the default option is present in the\nPQconninfoOptions array as the \"compiled in default value\" which\nconninfo_add_defaults() will strdup in for us.\n\nUnless..... conninfo_array_parse() is passed use_defaults=false in\nwhich case no... But why does this parameter exist? This is a static\nfunction with one call site and this parameter is passed as true at\nthat call site.\n\nSo I think this is just dead from some no longer extant code path\nthat's being copied by new parameters that are added.\n\nAs an aside conninfo_add_defaults doesn't put the default value there\nif the option is an empty string. I think we should make it do that,\neffectively forcing all options to treat empty strings as missing\noptions. Otherwise it's annoying to use environment variables when you\nwant to explicitly set a parameter to a default value since it's much\nless convenient to \"remove\" an environment variable in a shell than\npass it as an empty string. And it would just be confusing to have\nempty strings behave differently from omitted parameters.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Thu, 16 Mar 2023 14:03:49 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": true,
"msg_subject": "Default libpq connection parameter handling and copy-paste of\n apparently dead code for it?"
}
] |
[
{
"msg_contents": "Hi\n\nand queryjumblefuncs.switch.c files.\n\nRegards\n\nPavel\n\nHiand queryjumblefuncs.switch.c files.RegardsPavel",
"msg_date": "Fri, 17 Mar 2023 21:11:53 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "meson issue? ninja clean doesn't drop queryjumblefuncs.funcs.c"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 09:11:53PM +0100, Pavel Stehule wrote:\n> and queryjumblefuncs.switch.c files.\n\nLet me see.. It looks like src/include/nodes/meson.build is just\nmissing a refresh. Will check and fix, thanks for the report!\n--\nMichael",
"msg_date": "Sat, 18 Mar 2023 09:58:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson issue? ninja clean doesn't drop queryjumblefuncs.funcs.c"
},
{
"msg_contents": "so 18. 3. 2023 v 1:58 odesílatel Michael Paquier <[email protected]>\nnapsal:\n\n> On Fri, Mar 17, 2023 at 09:11:53PM +0100, Pavel Stehule wrote:\n> > and queryjumblefuncs.switch.c files.\n>\n> Let me see.. It looks like src/include/nodes/meson.build is just\n> missing a refresh. Will check and fix, thanks for the report!\n>\n\nthank you\n\nPavel\n\n\n> --\n> Michael\n>\n\nso 18. 3. 2023 v 1:58 odesílatel Michael Paquier <[email protected]> napsal:On Fri, Mar 17, 2023 at 09:11:53PM +0100, Pavel Stehule wrote:\n> and queryjumblefuncs.switch.c files.\n\nLet me see.. It looks like src/include/nodes/meson.build is just\nmissing a refresh. Will check and fix, thanks for the report!thank youPavel \n--\nMichael",
"msg_date": "Sat, 18 Mar 2023 06:22:06 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson issue? ninja clean doesn't drop queryjumblefuncs.funcs.c"
}
] |
[
{
"msg_contents": "Hi,\n\nAs evidenced by the bug fixed in be504a3e974, vacuum_defer_cleanup_age is not\nheavily used - the bug was trivial to hit as soon as vacuum_defer_cleanup_age\nis set to a non-toy value. It complicates thinking about visibility horizons\nsubstantially, as vacuum_defer_cleanup_age can make them go backward\nsubstantially. Obviously it's also severely undertested.\n\nI started writing a test for vacuum_defer_cleanup_age while working on the fix\nreferenced above, but now I am wondering if said energy would be better spent\nremoving vacuum_defer_cleanup_age alltogether.\n\nvacuum_defer_cleanup_age was added as part of hot standby. Back then we did\nnot yet have hot_standby_feedback. Now that that exists,\nvacuum_defer_cleanup_age doesn't seem like a good idea anymore. It's\npessimisistic, i.e. always retains rows, even if none of the standbys has an\nold enough snapshot.\n\nThe only benefit of vacuum_defer_cleanup_age over hot_standby_feedback is that\nit provides a limit of some sort. But transactionids aren't producing dead\nrows in a uniform manner, so limiting via xid isn't particularly useful. And\neven if there are use cases, it seems those would be served better by\nintroducing a cap on how much hot_standby_feedback can hold the horizon back.\n\nI don't think I have the cycles to push this through in the next weeks, but if\nwe agree removing vacuum_defer_cleanup_age is a good idea, it seems like a\ngood idea to mark it as deprecated in 16?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Mar 2023 16:09:30 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 12:09 AM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> As evidenced by the bug fixed in be504a3e974, vacuum_defer_cleanup_age is\n> not\n> heavily used - the bug was trivial to hit as soon as\n> vacuum_defer_cleanup_age\n> is set to a non-toy value. It complicates thinking about visibility\n> horizons\n> substantially, as vacuum_defer_cleanup_age can make them go backward\n> substantially. Obviously it's also severely undertested.\n>\n> I started writing a test for vacuum_defer_cleanup_age while working on the\n> fix\n> referenced above, but now I am wondering if said energy would be better\n> spent\n> removing vacuum_defer_cleanup_age alltogether.\n>\n> vacuum_defer_cleanup_age was added as part of hot standby. Back then we did\n> not yet have hot_standby_feedback. Now that that exists,\n> vacuum_defer_cleanup_age doesn't seem like a good idea anymore. It's\n> pessimisistic, i.e. always retains rows, even if none of the standbys has\n> an\n> old enough snapshot.\n>\n> The only benefit of vacuum_defer_cleanup_age over hot_standby_feedback is\n> that\n> it provides a limit of some sort. But transactionids aren't producing dead\n> rows in a uniform manner, so limiting via xid isn't particularly useful.\n> And\n> even if there are use cases, it seems those would be served better by\n> introducing a cap on how much hot_standby_feedback can hold the horizon\n> back.\n>\n> I don't think I have the cycles to push this through in the next weeks,\n> but if\n> we agree removing vacuum_defer_cleanup_age is a good idea, it seems like a\n> good idea to mark it as deprecated in 16?\n>\n\n+1. I haven't seen any (correct) use of this in many many years on my end\nat least.\n\nAnd yes, having a cap on hot_standby_feedback would also be great.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sat, Mar 18, 2023 at 12:09 AM Andres Freund <[email protected]> wrote:Hi,\n\nAs evidenced by the bug fixed in be504a3e974, vacuum_defer_cleanup_age is not\nheavily used - the bug was trivial to hit as soon as vacuum_defer_cleanup_age\nis set to a non-toy value. It complicates thinking about visibility horizons\nsubstantially, as vacuum_defer_cleanup_age can make them go backward\nsubstantially. Obviously it's also severely undertested.\n\nI started writing a test for vacuum_defer_cleanup_age while working on the fix\nreferenced above, but now I am wondering if said energy would be better spent\nremoving vacuum_defer_cleanup_age alltogether.\n\nvacuum_defer_cleanup_age was added as part of hot standby. Back then we did\nnot yet have hot_standby_feedback. Now that that exists,\nvacuum_defer_cleanup_age doesn't seem like a good idea anymore. It's\npessimisistic, i.e. always retains rows, even if none of the standbys has an\nold enough snapshot.\n\nThe only benefit of vacuum_defer_cleanup_age over hot_standby_feedback is that\nit provides a limit of some sort. But transactionids aren't producing dead\nrows in a uniform manner, so limiting via xid isn't particularly useful. And\neven if there are use cases, it seems those would be served better by\nintroducing a cap on how much hot_standby_feedback can hold the horizon back.\n\nI don't think I have the cycles to push this through in the next weeks, but if\nwe agree removing vacuum_defer_cleanup_age is a good idea, it seems like a\ngood idea to mark it as deprecated in 16?+1. 
I haven't seen any (correct) use of this in many many years on my end at least.And yes, having a cap on hot_standby_feedback would also be great. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 18 Mar 2023 00:17:46 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On 2023-Mar-17, Andres Freund wrote:\n\n> I started writing a test for vacuum_defer_cleanup_age while working on the fix\n> referenced above, but now I am wondering if said energy would be better spent\n> removing vacuum_defer_cleanup_age alltogether.\n\n+1 I agree it's not useful anymore.\n\n> I don't think I have the cycles to push this through in the next weeks, but if\n> we agree removing vacuum_defer_cleanup_age is a good idea, it seems like a\n> good idea to mark it as deprecated in 16?\n\nHmm, for the time being, can we just \"disable\" it by disallowing to set\nthe GUC to a value different from 0? Then we can remove the code later\nin the cycle at leisure.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La gente vulgar sólo piensa en pasar el tiempo;\nel que tiene talento, en aprovecharlo\"\n\n\n",
"msg_date": "Sat, 18 Mar 2023 10:33:57 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 10:33:57AM +0100, Alvaro Herrera wrote:\n> On 2023-Mar-17, Andres Freund wrote:\n> \n> > I started writing a test for vacuum_defer_cleanup_age while working on the fix\n> > referenced above, but now I am wondering if said energy would be better spent\n> > removing vacuum_defer_cleanup_age alltogether.\n> \n> +1 I agree it's not useful anymore.\n> \n> > I don't think I have the cycles to push this through in the next weeks, but if\n> > we agree removing vacuum_defer_cleanup_age is a good idea, it seems like a\n> > good idea to mark it as deprecated in 16?\n> \n> Hmm, for the time being, can we just \"disable\" it by disallowing to set\n> the GUC to a value different from 0? Then we can remove the code later\n> in the cycle at leisure.\n\nIt can be useful to do a \"rolling transition\", and it's something I do\noften.\n\nBut I can't see why that would be useful here? It seems like something\nthat could be done after the feature freeze. It's removing a feature,\nnot adding one.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 22 Mar 2023 11:44:20 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-22 11:44:20 -0500, Justin Pryzby wrote:\n> On Sat, Mar 18, 2023 at 10:33:57AM +0100, Alvaro Herrera wrote:\n> > On 2023-Mar-17, Andres Freund wrote:\n> > \n> > > I started writing a test for vacuum_defer_cleanup_age while working on the fix\n> > > referenced above, but now I am wondering if said energy would be better spent\n> > > removing vacuum_defer_cleanup_age alltogether.\n> > \n> > +1 I agree it's not useful anymore.\n> > \n> > > I don't think I have the cycles to push this through in the next weeks, but if\n> > > we agree removing vacuum_defer_cleanup_age is a good idea, it seems like a\n> > > good idea to mark it as deprecated in 16?\n> > \n> > Hmm, for the time being, can we just \"disable\" it by disallowing to set\n> > the GUC to a value different from 0? Then we can remove the code later\n> > in the cycle at leisure.\n> \n> It can be useful to do a \"rolling transition\", and it's something I do\n> often.\n> \n> But I can't see why that would be useful here? It seems like something\n> that could be done after the feature freeze. It's removing a feature,\n> not adding one.\n\nIt wasn't actually that much work to write a patch to remove\nvacuum_defer_cleanup_age, see the attached.\n\nI don't know whether others think we should apply it this release, given the\n\"late submission\", but I tend to think it's not worth caring the complication\nof vacuum_defer_cleanup_age forward.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 22 Mar 2023 10:00:48 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "> On 22 Mar 2023, at 18:00, Andres Freund <[email protected]> wrote:\n\n> It wasn't actually that much work to write a patch to remove\n> vacuum_defer_cleanup_age, see the attached.\n\n- and <xref linkend=\"guc-vacuum-defer-cleanup-age\"/> provide protection against\n+ provides protection against\n relevant rows being removed by vacuum, but the former provides no\n protection during any time period when the standby is not connected,\n and the latter often needs to be set to a high value to provide adequate\n\nIsn't \"the latter\" in the kept part of the sentence referring to the guc we're\nremoving here?\n\n-\t * It's possible that slots / vacuum_defer_cleanup_age backed up the\n-\t * horizons further than oldest_considered_running. Fix.\n+\t * It's possible that slots backed up the horizons further than\n+\t * oldest_considered_running. Fix.\n\nWhile not the fault of this patch, can't we use the opportunity to expand\n\"Fix.\" to a short \"Fix this by ...\" sentence? Or remove \"Fix.\" perhaps, either\nof those would improve the comment IMHO.\n\n> I don't know whether others think we should apply it this release, given the\n> \"late submission\", but I tend to think it's not worth caring the complication\n> of vacuum_defer_cleanup_age forward.\n\nIt might be late in the cycle, but as it's not adding something that might\nbreak but rather removing something that's known for being problematic (and not\nuseful) I think it's Ok.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 23 Mar 2023 10:18:35 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-23 10:18:35 +0100, Daniel Gustafsson wrote:\n> > On 22 Mar 2023, at 18:00, Andres Freund <[email protected]> wrote:\n> \n> > It wasn't actually that much work to write a patch to remove\n> > vacuum_defer_cleanup_age, see the attached.\n> \n> - and <xref linkend=\"guc-vacuum-defer-cleanup-age\"/> provide protection against\n> + provides protection against\n> relevant rows being removed by vacuum, but the former provides no\n> protection during any time period when the standby is not connected,\n> and the latter often needs to be set to a high value to provide adequate\n> \n> Isn't \"the latter\" in the kept part of the sentence referring to the guc we're\n> removing here?\n\nYou're right. That paragraph generally seems a bit off. In HEAD:\n\n <para>\n In lieu of using replication slots, it is possible to prevent the removal\n of old WAL segments using <xref linkend=\"guc-wal-keep-size\"/>, or by\n storing the segments in an archive using\n <xref linkend=\"guc-archive-command\"/> or <xref linkend=\"guc-archive-library\"/>.\n However, these methods often result in retaining more WAL segments than\n required, whereas replication slots retain only the number of segments\n known to be needed. On the other hand, replication slots can retain so\n many WAL segments that they fill up the space allocated\n for <literal>pg_wal</literal>;\n <xref linkend=\"guc-max-slot-wal-keep-size\"/> limits the size of WAL files\n retained by replication slots.\n </para>\n <para>\n Similarly, <xref linkend=\"guc-hot-standby-feedback\"/>\n and <xref linkend=\"guc-vacuum-defer-cleanup-age\"/> provide protection against\n relevant rows being removed by vacuum, but the former provides no\n protection during any time period when the standby is not connected,\n and the latter often needs to be set to a high value to provide adequate\n protection. Replication slots overcome these disadvantages.\n </para>\n\nReplication slots alone don't prevent row removal, that requires\nhot_standby_feedback to be used as well.\n\nA minimal rephrasing would be:\n <para>\n Similarly, <xref linkend=\"guc-hot-standby-feedback\"/> on its own, without\n also using a replication slot, provides protection against relevant rows\n being removed by vacuum, but provides no protection during any time period\n when the standby is not connected. Replication slots overcome these\n disadvantages.\n </para>\n\n\n\n> -\t * It's possible that slots / vacuum_defer_cleanup_age backed up the\n> -\t * horizons further than oldest_considered_running. Fix.\n> +\t * It's possible that slots backed up the horizons further than\n> +\t * oldest_considered_running. Fix.\n> \n> While not the fault of this patch, can't we use the opportunity to expand\n> \"Fix.\" to a short \"Fix this by ...\" sentence? Or remove \"Fix.\" perhaps, either\n> of those would improve the comment IMHO.\n\nPerhaps unsurprisingly, given that I wrote that comment, I don't see a problem\nwith the \"Fix.\"...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 24 Mar 2023 13:27:42 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "> On 24 Mar 2023, at 21:27, Andres Freund <[email protected]> wrote:\n> On 2023-03-23 10:18:35 +0100, Daniel Gustafsson wrote:\n>>> On 22 Mar 2023, at 18:00, Andres Freund <[email protected]> wrote:\n>> \n>>> It wasn't actually that much work to write a patch to remove\n>>> vacuum_defer_cleanup_age, see the attached.\n>> \n>> - and <xref linkend=\"guc-vacuum-defer-cleanup-age\"/> provide protection against\n>> + provides protection against\n>> relevant rows being removed by vacuum, but the former provides no\n>> protection during any time period when the standby is not connected,\n>> and the latter often needs to be set to a high value to provide adequate\n>> \n>> Isn't \"the latter\" in the kept part of the sentence referring to the guc we're\n>> removing here?\n> \n> You're right. That paragraph generally seems a bit off. In HEAD:\n> \n> ...\n> \n> Replication slots alone don't prevent row removal, that requires\n> hot_standby_feedback to be used as well.\n> \n> A minimal rephrasing would be:\n> <para>\n> Similarly, <xref linkend=\"guc-hot-standby-feedback\"/> on its own, without\n> also using a replication slot, provides protection against relevant rows\n> being removed by vacuum, but provides no protection during any time period\n> when the standby is not connected. Replication slots overcome these\n> disadvantages.\n> </para>\n\n+1, that's definitely an improvement.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 21:45:06 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 2:34 AM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Mar-17, Andres Freund wrote:\n>\n> > I started writing a test for vacuum_defer_cleanup_age while working on the fix\n> > referenced above, but now I am wondering if said energy would be better spent\n> > removing vacuum_defer_cleanup_age alltogether.\n>\n> +1 I agree it's not useful anymore.\n\n+1.\n\nI am suspicious of most of the GUCs whose value is an XID age. It\nstrikes me as something that is convenient to the implementation, but\nnot to the user, since there are so many ways that XID age might be a\npoor proxy for whatever it is that you really care about in each case.\n\nA theoretical advantage of vacuum_defer_cleanup_age is that it allows\nthe user to control things in terms of the impact on the primary --\nwhereas hot_standby_feedback is a mechanism that controls things in\nterms of the needs of the standby. In practice this is pretty useless,\nbut it seems like it might be possible to come up with some other new\nmechanism that somehow does this in a way that's truly useful.\nSomething that allows the user to constrain how far we hold back\nconflicts/vacuuming in terms of the *impact* on the primary.\n\nIt might be helpful to permit opportunistic cleanup by pruning and\nindex deletion at some point, but to throttle it when we know it would\nviolate some soft limit related to hot_standby_feedback. Maybe the\nsystem could prevent the first few attempts at pruning when it\nviolates the soft limit, or make pruning prune somewhat less\naggressively where there is little advantage to it in terms of\nspace/tuples freed -- decide on what to do at the very last minute,\nbased on all available information at that late stage, with the full\ncontext available. The system could be taught to be very patient at\nfirst, when relatively few pruning operations have been attempted,\nwhen the cost is basically still acceptable. But as more pruning\noperations ran and clearly didn't free space that really should be\nfreed, we'd quickly lose patience.\n\nThe big idea here is to delay committing to any course of action for\nas long as possible, so we wouldn't kill queries on standbys for very\nlittle benefit on the primary, while at the same time avoiding ever\nreally failing to kill queries on standbys when the cost proved too\nhigh on the primary. For this to have any chance of working it needs\nto focus on the actual costs on the primary, and not some extremely\nnoisy proxy for that cost. The standby will have its query killed by\njust one prune record affecting just one heap page, and delaying that\nspecific prune record is likely no big deal. It's preventing pruning\nof tens of thousands of heap pages that we need to worry about.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 24 Mar 2023 14:27:53 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 10:00:48AM -0700, Andres Freund wrote:\n> I don't know whether others think we should apply it this release, given the\n> \"late submission\", but I tend to think it's not worth caring the complication\n> of vacuum_defer_cleanup_age forward.\n\nI don't see any utility in waiting; it just makes the process of\nremoving it take longer for no reason.\n\nAs long as it's done before the betas, it seems completely reasonable to\nremove it for v16. \n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 11 Apr 2023 11:33:01 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 11:33:01 -0500, Justin Pryzby wrote:\n> On Wed, Mar 22, 2023 at 10:00:48AM -0700, Andres Freund wrote:\n> > I don't know whether others think we should apply it this release, given the\n> > \"late submission\", but I tend to think it's not worth caring the complication\n> > of vacuum_defer_cleanup_age forward.\n>\n> I don't see any utility in waiting; it just makes the process of\n> removing it take longer for no reason.\n>\n> As long as it's done before the betas, it seems completely reasonable to\n> remove it for v16.\n\nAdded the RMT.\n\nWe really should have a [email protected] alias...\n\nUpdated patch attached. I think we should either apply something like that\npatch, or at least add a <warning/> to the docs.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 11 Apr 2023 11:20:10 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 11:50 PM Andres Freund <[email protected]> wrote:\n>\n> On 2023-04-11 11:33:01 -0500, Justin Pryzby wrote:\n> > On Wed, Mar 22, 2023 at 10:00:48AM -0700, Andres Freund wrote:\n> > > I don't know whether others think we should apply it this release, given the\n> > > \"late submission\", but I tend to think it's not worth caring the complication\n> > > of vacuum_defer_cleanup_age forward.\n> >\n> > I don't see any utility in waiting; it just makes the process of\n> > removing it take longer for no reason.\n> >\n> > As long as it's done before the betas, it seems completely reasonable to\n> > remove it for v16.\n>\n> Added the RMT.\n>\n> We really should have a [email protected] alias...\n>\n> Updated patch attached. I think we should either apply something like that\n> patch, or at least add a <warning/> to the docs.\n>\n\n+1 to do one of the above. I think there is a good chance that\nsomebody might be doing more harm by using it so removing this\nshouldn't be a problem. Personally, I have not heard of people using\nit but OTOH it is difficult to predict so giving some time is also not\na bad idea.\n\nDo others have any opinion/suggestion on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 13 Apr 2023 09:04:24 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On 2023-Apr-11, Andres Freund wrote:\n\n> Updated patch attached. I think we should either apply something like that\n> patch, or at least add a <warning/> to the docs.\n\nI gave this patch a look. The only code change is that\nComputeXidHorizons() and GetSnapshotData() no longer handle the case\nwhere vacuum_defer_cleanup_age is different from zero. It looks good.\nThe TransactionIdRetreatSafely() code being removed is pretty weird (I\nspent a good dozen minutes writing a complaint that your rewrite was\nfaulty, but it turns out I had misunderstood the function), so I'm glad\nit's being retired.\n\n\n> <para>\n> - Similarly, <xref linkend=\"guc-hot-standby-feedback\"/>\n> - and <xref linkend=\"guc-vacuum-defer-cleanup-age\"/> provide protection against\n> - relevant rows being removed by vacuum, but the former provides no\n> - protection during any time period when the standby is not connected,\n> - and the latter often needs to be set to a high value to provide adequate\n> - protection. Replication slots overcome these disadvantages.\n> + Similarly, <xref linkend=\"guc-hot-standby-feedback\"/> on its own, without\n> + also using a replication slot, provides protection against relevant rows\n> + being removed by vacuum, but provides no protection during any time period\n> + when the standby is not connected. Replication slots overcome these\n> + disadvantages.\n\nI think it made sense to have this paragraph be separate from the\nprevious one when it was talking about two separate variables, but now\nthat it's just one, it looks a bit isolated. I would merge it with the\none above, which is talking about pretty much the same thing, and\nreorder the whole thing approximately like this\n\n <para>\n In lieu of using replication slots, it is possible to prevent the removal\n of old WAL segments using <xref linkend=\"guc-wal-keep-size\"/>, or by\n storing the segments in an archive using\n <xref linkend=\"guc-archive-command\"/> or <xref linkend=\"guc-archive-library\"/>.\n However, these methods often result in retaining more WAL segments than\n required.\n Similarly, <xref linkend=\"guc-hot-standby-feedback\"/> without\n a replication slot provides protection against relevant rows\n being removed by vacuum, but provides no protection during any time period\n when the standby is not connected.\n </para>\n <para>\n Replication slots overcome these disadvantages by retaining only the number\n of segments known to be needed.\n On the other hand, replication slots can retain so\n many WAL segments that they fill up the space allocated\n for <literal>pg_wal</literal>;\n <xref linkend=\"guc-max-slot-wal-keep-size\"/> limits the size of WAL files\n retained by replication slots.\n </para>\n\nThough the \"However,\" looks a poor fit; I would do this:\n\n <para>\n In lieu of using replication slots, it is possible to prevent the removal\n of old WAL segments using <xref linkend=\"guc-wal-keep-size\"/>, or by\n storing the segments in an archive using\n <xref linkend=\"guc-archive-command\"/> or <xref linkend=\"guc-archive-library\"/>.\n A disadvantage of these methods is that they often result in retaining\n more WAL segments than required.\n Similarly, <xref linkend=\"guc-hot-standby-feedback\"/> without\n a replication slot provides protection against relevant rows\n being removed by vacuum, but provides no protection during any time period\n when the standby is not connected.\n </para>\n <para>\n Replication slots overcome these disadvantages by retaining only the number\n of 
segments known to be needed.\n On the other hand, replication slots can retain so\n many WAL segments that they fill up the space allocated\n for <literal>pg_wal</literal>;\n <xref linkend=\"guc-max-slot-wal-keep-size\"/> limits the size of WAL files\n retained by replication slots.\n </para>\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... Fixed.\n http://postgr.es/m/[email protected]\n\n\n",
"msg_date": "Thu, 13 Apr 2023 13:18:38 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On 4/12/23 11:34 PM, Amit Kapila wrote:\r\n> On Tue, Apr 11, 2023 at 11:50 PM Andres Freund <[email protected]> wrote:\r\n>>\r\n>> On 2023-04-11 11:33:01 -0500, Justin Pryzby wrote:\r\n>>> On Wed, Mar 22, 2023 at 10:00:48AM -0700, Andres Freund wrote:\r\n>>>> I don't know whether others think we should apply it this release, given the\r\n>>>> \"late submission\", but I tend to think it's not worth caring the complication\r\n>>>> of vacuum_defer_cleanup_age forward.\r\n>>>\r\n>>> I don't see any utility in waiting; it just makes the process of\r\n>>> removing it take longer for no reason.\r\n>>>\r\n>>> As long as it's done before the betas, it seems completely reasonable to\r\n>>> remove it for v16.\r\n>>\r\n>> Added the RMT.\r\n>>\r\n>> We really should have a [email protected] alias...\r\n\r\n(I had thought something as much -- will reach out to pginfra about options)\r\n\r\n>> Updated patch attached. I think we should either apply something like that\r\n>> patch, or at least add a <warning/> to the docs.\r\n>>\r\n\r\n> +1 to do one of the above. I think there is a good chance that\r\n> somebody might be doing more harm by using it so removing this\r\n> shouldn't be a problem. Personally, I have not heard of people using\r\n> it but OTOH it is difficult to predict so giving some time is also not\r\n> a bad idea.\r\n> \r\n> Do others have any opinion/suggestion on this matter?\r\n\r\nI need a bit more time to study this before formulating an opinion on \r\nwhether we should remove it for v16. In any case, I'm not against \r\ndocumentation.\r\n\r\nJonathan",
"msg_date": "Thu, 13 Apr 2023 11:32:50 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On 4/13/23 11:32 AM, Jonathan S. Katz wrote:\r\n> On 4/12/23 11:34 PM, Amit Kapila wrote:\r\n>> On Tue, Apr 11, 2023 at 11:50 PM Andres Freund <[email protected]> \r\n\r\n>> +1 to do one of the above. I think there is a good chance that\r\n>> somebody might be doing more harm by using it so removing this\r\n>> shouldn't be a problem. Personally, I have not heard of people using\r\n>> it but OTOH it is difficult to predict so giving some time is also not\r\n>> a bad idea.\r\n>>\r\n>> Do others have any opinion/suggestion on this matter?\r\n> \r\n> I need a bit more time to study this before formulating an opinion on \r\n> whether we should remove it for v16. In any case, I'm not against \r\n> documentation.\r\n\r\n(didn't need too much more time).\r\n\r\n[RMT hat]\r\n\r\n+1 for removing.\r\n\r\nI looked at some data and it doesn't seem like vacuum_defer_cleanup_age \r\nis used in any significant way, whereas hot_standby_feedback is much \r\nmore widely used. Given this, and all the problems + arguments made in \r\nthe thread, we should just get rid of it for v16.\r\n\r\nThere are cases where we should deprecate before removing, but I don't \r\nthink this one based upon usage and having a better alternative.\r\n\r\nPer [1] it does sound like we can make some improvements to \r\nhot_standby_feedback, but those can wait to v17.\r\n\r\nWe should probably set $DATE to finish this, too. I don't think it's a \r\nrush, but we should give enough time before Beta 1.\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/20230317230930.nhsgk3qfk7f4axls%40awork3.anarazel.de",
"msg_date": "Thu, 13 Apr 2023 12:16:38 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On Thu, 2023-04-13 at 12:16 -0400, Jonathan S. Katz wrote:\n> On 4/13/23 11:32 AM, Jonathan S. Katz wrote:\n> > On 4/12/23 11:34 PM, Amit Kapila wrote:\n> > > On Tue, Apr 11, 2023 at 11:50 PM Andres Freund <[email protected]> \n> \n> > > +1 to do one of the above. I think there is a good chance that\n> > > somebody might be doing more harm by using it so removing this\n> > > shouldn't be a problem. Personally, I have not heard of people using\n> > > it but OTOH it is difficult to predict so giving some time is also not\n> > > a bad idea.\n> > > \n> > > Do others have any opinion/suggestion on this matter?\n> > \n> > I need a bit more time to study this before formulating an opinion on \n> > whether we should remove it for v16. In any case, I'm not against \n> > documentation.\n> \n> [RMT hat]\n> \n> +1 for removing.\n\nI am not against this in principle, but I know that there are people using\nthis parameter; see the discussion linked in\n\nhttps://postgr.es/m/[email protected]\n\nI can't say if they have a good use case for that parameter or not.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 14 Apr 2023 05:06:46 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 11:06 PM Laurenz Albe <[email protected]> wrote:\n> I am not against this in principle, but I know that there are people using\n> this parameter; see the discussion linked in\n>\n> https://postgr.es/m/[email protected]\n>\n> I can't say if they have a good use case for that parameter or not.\n\nYeah, I feel similarly. Actually, personally I have no evidence, not\neven an anecdote, suggesting that this parameter is in use, but I'm a\nbit skeptical of any consensus of the form \"no one is using X,\"\nbecause there sure are a lot of people running PostgreSQL and they\nsure do a lot of things. Some more justifiably than others, but often\npeople have surprisingly good excuses for doing stuff that sounds\nbizarre when you first hear about it, and it doesn't seem totally\nimpossible that somebody could have found a way to get value out of\nthis.\n\nHowever, I suspect that there aren't many such people, and I think the\nsetting is a kludge, so I support removing it. Maybe we'll find out\nthat we ought to add something else instead, like a limited delimited\nin time rather than in XIDs. Or maybe the existing facilities are good\nenough. But as Peter rightly says, XID age is likely a poor proxy for\nwhatever people really care about, so I don't think continuing to have\na setting that works like that is a good plan.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 14 Apr 2023 08:30:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "> On 14 Apr 2023, at 14:30, Robert Haas <[email protected]> wrote:\n\n> ..as Peter rightly says, XID age is likely a poor proxy for\n> whatever people really care about, so I don't think continuing to have\n> a setting that works like that is a good plan.\n\nAgreed, and removing it is likely to be a good vehicle for figuring out what we\nshould have instead (if anything).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 14 Apr 2023 14:33:44 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On 4/14/23 8:30 AM, Robert Haas wrote:\r\n> On Thu, Apr 13, 2023 at 11:06 PM Laurenz Albe <[email protected]> wrote:\r\n>> I am not against this in principle, but I know that there are people using\r\n>> this parameter; see the discussion linked in\r\n>>\r\n>> https://postgr.es/m/[email protected]\r\n>>\r\n>> I can't say if they have a good use case for that parameter or not.\r\n> \r\n> Yeah, I feel similarly. Actually, personally I have no evidence, not\r\n> even an anecdote, suggesting that this parameter is in use, but I'm a\r\n> bit skeptical of any consensus of the form \"no one is using X,\"\r\n> because there sure are a lot of people running PostgreSQL and they\r\n> sure do a lot of things. Some more justifiably than others, but often\r\n> people have surprisingly good excuses for doing stuff that sounds\r\n> bizarre when you first hear about it, and it doesn't seem totally\r\n> impossible that somebody could have found a way to get value out of\r\n> this.\r\n\r\nLet me restate [1] in a different way.\r\n\r\nUsing a large enough dataset, I did qualitatively look at overall usage \r\nof both \"vacuum_defer_cleanup_age\" and compared to \r\n\"hot_standby_feedback\", given you can use both to accomplish similar \r\noutcomes. The usage of \"vacuum_defer_cleanup_age\" was really minimal, in \r\nfact approaching \"0\", whereas \"hot_standby_feedback\" had significant \r\nadoption.\r\n\r\nI'm not saying that \"we should remove a setting just because it's not \r\nused\" or that it may not have utility -- I'm saying that we can remove \r\nthe setting given:\r\n\r\n1. We know that using this setting incorrectly (which can be done fairly \r\neasily) can cause significant issues\r\n2. There's another setting that can accomplish similar goals that's much \r\nsafer\r\n3. The setting itself is not widely used\r\n\r\nIt's the combination of all 3 that led to my conclusion. If it were just \r\n(1), I'd lean more strongly towards trying to fix it first.\r\n\r\n> However, I suspect that there aren't many such people, and I think the\r\n> setting is a kludge, so I support removing it. Maybe we'll find out\r\n> that we ought to add something else instead, like a limited delimited\r\n> in time rather than in XIDs. Or maybe the existing facilities are good\r\n> enough. But as Peter rightly says, XID age is likely a poor proxy for\r\n> whatever people really care about, so I don't think continuing to have\r\n> a setting that works like that is a good plan.\r\n\r\nThat seems like a good eventual outcome.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/bf42784f-4d57-0a3d-1a06-ffac1a09318c%40postgresql.org",
"msg_date": "Fri, 14 Apr 2023 09:46:59 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On Fri, 14 Apr 2023 at 09:47, Jonathan S. Katz <[email protected]> wrote:\n>\n> Let me restate [1] in a different way.\n>\n> Using a large enough dataset, I did qualitatively look at overall usage\n> of both \"vacuum_defer_cleanup_age\" and compared to\n> \"hot_standby_feedback\", given you can use both to accomplish similar\n> outcomes.\n\nI assume people would use hot_standby_feedback if they have streaming\nreplication. The main use cases for vacuum_defer_cleanup_age would be\nif you're replaying WAL files. That may sound archaic but there are\nplenty of circumstances where your standby may not have network access\nto your primary at all or not want to be replaying continuously.\n\nI wonder whether your dataset is self-selecting sites that have\nstreaming replication. That does seem like the more common usage\npattern.\n\nSystems using wal files are more likely to be things like data\nwarehouses, offline analytics systems, etc. They may not even be well\nknown in the same organization that runs the online operations -- in\nmy experience they're often run by marketing or sales organizations or\nin some cases infosec teams and consume data from lots of sources. The\nmain reason to use wal archive replay is often to provide the\nisolation so that the operations team don't need to worry about the\nimpact on production and that makes it easy to forget these even\nexist.\n\n-- \ngreg\n\n\n",
"msg_date": "Fri, 14 Apr 2023 11:25:34 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On 2023-Apr-14, Greg Stark wrote:\n\n> On Fri, 14 Apr 2023 at 09:47, Jonathan S. Katz <[email protected]> wrote:\n> >\n> > Let me restate [1] in a different way.\n> >\n> > Using a large enough dataset, I did qualitatively look at overall usage\n> > of both \"vacuum_defer_cleanup_age\" and compared to\n> > \"hot_standby_feedback\", given you can use both to accomplish similar\n> > outcomes.\n> \n> I assume people would use hot_standby_feedback if they have streaming\n> replication. \n\nYes, either that or a replication slot.\n\nvacuum_defer_cleanup_age was added in commit efc16ea52067 (2009-12-19),\nfor Postgres 9.0. hot_standby_feedback is a bit newer\n(bca8b7f16a3e, 2011-02-16), and replication slots are newer still\n(858ec11858a9, 2014-01-31).\n\n> The main use cases for vacuum_defer_cleanup_age would be if you're\n> replaying WAL files. That may sound archaic but there are plenty of\n> circumstances where your standby may not have network access to your\n> primary at all or not want to be replaying continuously.\n\nTrue, those cases exist. However, it sounds to me like in those cases\nvacuum_defer_cleanup_age doesn't really help you either; you'd just want\nto pause WAL replay depending on your queries or whatever. After all,\nyou'd have to feed the WAL files \"manually\" to replay, so you're in\ncontrol anyway without having to touch the primary.\n\nI find it very hard to believe that people are doing stuff with\nvacuum_defer_cleanup_age that cannot be done with either of the other\nnewer mechanisms, which have also seen much wider usage and so bugs\nfixed, etc.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Oh, great altar of passive entertainment, bestow upon me thy discordant images\nat such speed as to render linear thought impossible\" (Calvin a la TV)\n\n\n",
"msg_date": "Fri, 14 Apr 2023 18:43:29 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
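For the archive-replaying standbys discussed above, the "pause WAL replay" approach Alvaro mentions is available through built-in functions. A minimal sketch, run on the standby (illustrative only):

-- On the standby, before starting the long report/analytics queries:
SELECT pg_wal_replay_pause();

-- pg_is_wal_replay_paused() reports whether replay is currently paused:
SELECT pg_is_wal_replay_paused();

-- Afterwards, let the standby resume applying WAL and catch up:
SELECT pg_wal_replay_resume();

While replay is paused no new snapshot conflicts arrive from recovery, which is the isolation such offline/analytics standbys usually want.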
{
"msg_contents": "On Fri, 2023-04-14 at 18:43 +0200, Alvaro Herrera wrote:\n> On 2023-Apr-14, Greg Stark wrote:\n> > I assume people would use hot_standby_feedback if they have streaming\n> > replication. \n> \n> Yes, either that or a replication slot.\n\nA replication slot doesn't do anything against snapshot conflicts,\nwhich is what we are discussing here. Or are we not?\n\n> \n> I find it very hard to believe that people are doing stuff with\n> vacuum_defer_cleanup_age that cannot be done with either of the other\n> newer mechanisms, which have also seen much wider usage and so bugs\n> fixed, etc.\n\nvacuum_defer_cleanup_age offers a more fine-grained approach.\nWith hot_standby_feedback you can only say \"don't ever remove any dead\ntuples that the standby still needs\".\n\nBut perhaps you'd prefer \"don't remove dead tuples unless they are\nquite old\", so that you can get your shorter queries on the standby\nto complete, without delaying replay and without the danger that a\nlong running query on the standby bloats your primary.\n\nHow about this:\nLet's remove vacuum_defer_cleanup_age, and put a note in the release notes\nthat recommends using statement_timeout and hot_standby_feedback = on\non the standby instead.\nThat should have pretty much the same effect, and it is measured in\ntime and not in the number of transactions.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 14 Apr 2023 19:15:04 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
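A minimal sketch of the standby-side configuration Laurenz suggests above; the timeout value is a placeholder for illustration, not a recommendation:

# postgresql.conf on the standby (illustrative values only)
hot_standby_feedback = on        # report the standby's oldest snapshot to the primary
statement_timeout = '5min'       # bound how long a standby query can hold back cleanup, by time

The combination bounds the feedback-induced hold-back by elapsed time rather than by an XID count, which is the point of the suggestion.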
{
"msg_contents": "On Fri, 14 Apr 2023 at 13:15, Laurenz Albe <[email protected]> wrote:\n>\n> On Fri, 2023-04-14 at 18:43 +0200, Alvaro Herrera wrote:\n> > On 2023-Apr-14, Greg Stark wrote:\n> > > I assume people would use hot_standby_feedback if they have streaming\n> > > replication.\n> >\n> > Yes, either that or a replication slot.\n>\n> A replication slot doesn't do anything against snapshot conflicts,\n> which is what we are discussing here. Or are we not?\n\nThey're related -- the replication slot holds the feedback xmin so\nthat if your standby disconnects it can reconnect later and not have\nlost data in the meantime. At least I think that's what I think it\ndoes -- I don't know if I'm just assuming that, but xmin is indeed in\npg_replication_slots.\n\n-- \ngreg\n\n\n",
"msg_date": "Fri, 14 Apr 2023 14:08:56 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
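Greg's reading matches what is visible on the primary; the xmin being held back on behalf of a feedback-sending standby can be inspected like this (illustrative queries against the standard views):

-- Feedback xmin retained by a physical replication slot, surviving disconnects:
SELECT slot_name, slot_type, active, xmin, catalog_xmin
FROM pg_replication_slots;

-- Without a slot, the feedback xmin only exists while the walsender is connected:
SELECT application_name, backend_xmin
FROM pg_stat_replication;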
{
"msg_contents": "On 4/14/23 1:15 PM, Laurenz Albe wrote:\r\n\r\n> Let's remove vacuum_defer_cleanup_age, and put a note in the release notes\r\n> that recommends using statement_timeout and hot_standby_feedback = on\r\n> on the standby instead.\r\n> That should have pretty much the same effect, and it is measured in\r\n> time and not in the number of transactions.\r\n\r\n+1.\r\n\r\nJonathan",
"msg_date": "Fri, 14 Apr 2023 15:07:37 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 03:07:37PM -0400, Jonathan S. Katz wrote:\n> +1.\n\n+1. I agree with the upthread discussion and support removing\nvacuum_defer_cleanup_age.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 20 Apr 2023 15:51:54 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-13 13:18:38 +0200, Alvaro Herrera wrote:\n> On 2023-Apr-11, Andres Freund wrote:\n> \n> > Updated patch attached. I think we should either apply something like that\n> > patch, or at least add a <warning/> to the docs.\n> \n> I gave this patch a look. The only code change is that\n> ComputeXidHorizons() and GetSnapshotData() no longer handle the case\n> where vacuum_defer_cleanup_age is different from zero. It looks good.\n> The TransactionIdRetreatSafely() code being removed is pretty weird (I\n> spent a good dozen minutes writing a complaint that your rewrite was\n> faulty, but it turns out I had misunderstood the function), so I'm glad\n> it's being retired.\n\nMy rewrite of what? The creation of TransactionIdRetreatSafely() in\nbe504a3e974?\n\nI'm afraid we'll need TransactionIdRetreatSafely() again, when we convert more\nthings to 64bit xids (lest they end up with the same bug as fixed by\nbe504a3e974), so it's perhaps worth thinking about how to make it less\nconfusing.\n\n\n> > <para>\n> > - Similarly, <xref linkend=\"guc-hot-standby-feedback\"/>\n> > - and <xref linkend=\"guc-vacuum-defer-cleanup-age\"/> provide protection against\n> > - relevant rows being removed by vacuum, but the former provides no\n> > - protection during any time period when the standby is not connected,\n> > - and the latter often needs to be set to a high value to provide adequate\n> > - protection. Replication slots overcome these disadvantages.\n> > + Similarly, <xref linkend=\"guc-hot-standby-feedback\"/> on its own, without\n> > + also using a replication slot, provides protection against relevant rows\n> > + being removed by vacuum, but provides no protection during any time period\n> > + when the standby is not connected. Replication slots overcome these\n> > + disadvantages.\n> \n> I think it made sense to have this paragraph be separate from the\n> previous one when it was talking about two separate variables, but now\n> that it's just one, it looks a bit isolated. I would merge it with the\n> one above, which is talking about pretty much the same thing, and\n> reorder the whole thing approximately like this\n> \n> <para>\n> In lieu of using replication slots, it is possible to prevent the removal\n> of old WAL segments using <xref linkend=\"guc-wal-keep-size\"/>, or by\n> storing the segments in an archive using\n> <xref linkend=\"guc-archive-command\"/> or <xref linkend=\"guc-archive-library\"/>.\n> However, these methods often result in retaining more WAL segments than\n> required.\n> Similarly, <xref linkend=\"guc-hot-standby-feedback\"/> without\n> a replication slot provides protection against relevant rows\n> being removed by vacuum, but provides no protection during any time period\n> when the standby is not connected.\n> </para>\n> <para>\n> Replication slots overcome these disadvantages by retaining only the number\n> of segments known to be needed.\n> On the other hand, replication slots can retain so\n> many WAL segments that they fill up the space allocated\n> for <literal>pg_wal</literal>;\n> <xref linkend=\"guc-max-slot-wal-keep-size\"/> limits the size of WAL files\n> retained by replication slots.\n> </para>\n\nIt seems a bit confusing now, because \"by retaining only the number of\nsegments ...\" now also should cover hs_feedback (due to merging), but doesn't.\n\n\n> Though the \"However,\" looks a poor fit; I would do this:\n\nI agree, I don't like the however.\n\n\nI think I'll push the version I had. 
Then we can separately word-smith the\nsection? Unless somebody protests I'm gonna do that soon.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 22 Apr 2023 15:47:21 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On 2023-Apr-22, Andres Freund wrote:\n\n> On 2023-04-13 13:18:38 +0200, Alvaro Herrera wrote:\n> > \n> > > Updated patch attached. I think we should either apply something like that\n> > > patch, or at least add a <warning/> to the docs.\n> > \n> > I gave this patch a look. The only code change is that\n> > ComputeXidHorizons() and GetSnapshotData() no longer handle the case\n> > where vacuum_defer_cleanup_age is different from zero. It looks good.\n> > The TransactionIdRetreatSafely() code being removed is pretty weird (I\n> > spent a good dozen minutes writing a complaint that your rewrite was\n> > faulty, but it turns out I had misunderstood the function), so I'm glad\n> > it's being retired.\n> \n> My rewrite of what? The creation of TransactionIdRetreatSafely() in\n> be504a3e974?\n\nI meant the code that used to call TransactionIdRetreatSafely() and that\nyou're changing in the proposed patch.\n\n> I'm afraid we'll need TransactionIdRetreatSafely() again, when we convert more\n> things to 64bit xids (lest they end up with the same bug as fixed by\n> be504a3e974), so it's perhaps worth thinking about how to make it less\n> confusing.\n\nThe one thing that IMO makes it less confusing is to have it return the\nvalue rather than modifying it in place.\n\n> > <para>\n> > Replication slots overcome these disadvantages by retaining only the number\n> > of segments known to be needed.\n> > On the other hand, replication slots can retain so\n> > many WAL segments that they fill up the space allocated\n> > for <literal>pg_wal</literal>;\n> > <xref linkend=\"guc-max-slot-wal-keep-size\"/> limits the size of WAL files\n> > retained by replication slots.\n> > </para>\n> \n> It seems a bit confusing now, because \"by retaining only the number of\n> segments ...\" now also should cover hs_feedback (due to merging), but doesn't.\n\nHah, ok.\n\n> I think I'll push the version I had. Then we can separately word-smith the\n> section? Unless somebody protests I'm gonna do that soon.\n\nNo objection.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 24 Apr 2023 14:36:36 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 8:36 AM Alvaro Herrera <[email protected]> wrote:\n> The one thing that IMO makes it less confusing is to have it return the\n> value rather than modifying it in place.\n\nYeah, I don't understand why we have these functions that modify the\nvalue in place. Those are probably convenient here and there, but\noverall they seem to make things more confusing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 Apr 2023 12:09:45 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
},
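Purely as an illustration of the style question Robert raises (these are made-up helpers, not the actual TransactionIdRetreatSafely() definition), the two idioms look like this:

#include <stdint.h>
#include <stdio.h>

typedef uint32_t DemoXid;                /* stand-in type for the sketch */

/* In-place style: the caller's variable is silently rewritten. */
static void
demo_retreat_in_place(DemoXid *xid, uint32_t by)
{
    if (*xid > by)
        *xid -= by;
}

/* Value-returning style: the data flow is explicit at every call site. */
static DemoXid
demo_retreat_returning(DemoXid xid, uint32_t by)
{
    return (xid > by) ? (DemoXid) (xid - by) : xid;
}

int
main(void)
{
    DemoXid a = 1000;
    demo_retreat_in_place(&a, 10);                  /* a is now 990 */
    DemoXid b = demo_retreat_returning(1000, 10);   /* b is 990, input untouched */
    printf("%u %u\n", (unsigned) a, (unsigned) b);
    return 0;
}

The value-returning variant avoids repeating long variable names less, but makes it obvious at the call site which variable is being changed, which is the trade-off being discussed.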
{
"msg_contents": "Hi,\nNot very convenient but if autovacuum is enabled isn't vacuum_defer_cleanup_age the way to make extensions like pg_dirtyread more effective for temporal queries to quickly correct human DML mistakes without the need of a complete restore, even if no standby is in use ? vacuum_defer_cleanup_age+pg_dirtyread give PostgreSQL something like \"flashback query\" in Oracle.\nBest regards,\nPhil\n\n________________________________\nDe : Andres Freund <[email protected]>\nEnvoyé : dimanche 23 avril 2023 00:47\nÀ : Alvaro Herrera <[email protected]>\nCc : Justin Pryzby <[email protected]>; [email protected] <[email protected]>; Amit Kapila <[email protected]>\nObjet : Re: Should we remove vacuum_defer_cleanup_age?\n\nHi,\n\nOn 2023-04-13 13:18:38 +0200, Alvaro Herrera wrote:\n> On 2023-Apr-11, Andres Freund wrote:\n>\n> > Updated patch attached. I think we should either apply something like that\n> > patch, or at least add a <warning/> to the docs.\n>\n> I gave this patch a look. The only code change is that\n> ComputeXidHorizons() and GetSnapshotData() no longer handle the case\n> where vacuum_defer_cleanup_age is different from zero. It looks good.\n> The TransactionIdRetreatSafely() code being removed is pretty weird (I\n> spent a good dozen minutes writing a complaint that your rewrite was\n> faulty, but it turns out I had misunderstood the function), so I'm glad\n> it's being retired.\n\nMy rewrite of what? The creation of TransactionIdRetreatSafely() in\nbe504a3e974?\n\nI'm afraid we'll need TransactionIdRetreatSafely() again, when we convert more\nthings to 64bit xids (lest they end up with the same bug as fixed by\nbe504a3e974), so it's perhaps worth thinking about how to make it less\nconfusing.\n\n\n> > <para>\n> > - Similarly, <xref linkend=\"guc-hot-standby-feedback\"/>\n> > - and <xref linkend=\"guc-vacuum-defer-cleanup-age\"/> provide protection against\n> > - relevant rows being removed by vacuum, but the former provides no\n> > - protection during any time period when the standby is not connected,\n> > - and the latter often needs to be set to a high value to provide adequate\n> > - protection. Replication slots overcome these disadvantages.\n> > + Similarly, <xref linkend=\"guc-hot-standby-feedback\"/> on its own, without\n> > + also using a replication slot, provides protection against relevant rows\n> > + being removed by vacuum, but provides no protection during any time period\n> > + when the standby is not connected. Replication slots overcome these\n> > + disadvantages.\n>\n> I think it made sense to have this paragraph be separate from the\n> previous one when it was talking about two separate variables, but now\n> that it's just one, it looks a bit isolated. 
I would merge it with the\n> one above, which is talking about pretty much the same thing, and\n> reorder the whole thing approximately like this\n>\n> <para>\n> In lieu of using replication slots, it is possible to prevent the removal\n> of old WAL segments using <xref linkend=\"guc-wal-keep-size\"/>, or by\n> storing the segments in an archive using\n> <xref linkend=\"guc-archive-command\"/> or <xref linkend=\"guc-archive-library\"/>.\n> However, these methods often result in retaining more WAL segments than\n> required.\n> Similarly, <xref linkend=\"guc-hot-standby-feedback\"/> without\n> a replication slot provides protection against relevant rows\n> being removed by vacuum, but provides no protection during any time period\n> when the standby is not connected.\n> </para>\n> <para>\n> Replication slots overcome these disadvantages by retaining only the number\n> of segments known to be needed.\n> On the other hand, replication slots can retain so\n> many WAL segments that they fill up the space allocated\n> for <literal>pg_wal</literal>;\n> <xref linkend=\"guc-max-slot-wal-keep-size\"/> limits the size of WAL files\n> retained by replication slots.\n> </para>\n\nIt seems a bit confusing now, because \"by retaining only the number of\nsegments ...\" now also should cover hs_feedback (due to merging), but doesn't.\n\n\n> Though the \"However,\" looks a poor fit; I would do this:\n\nI agree, I don't like the however.\n\n\nI think I'll push the version I had. Then we can separately word-smith the\nsection? Unless somebody protests I'm gonna do that soon.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Apr 2023 18:37:30 +0000",
"msg_from": "Phil Florent <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Should we remove vacuum_defer_cleanup_age?"
},
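For reference, a sketch of how the pg_dirtyread extension Phil mentions is usually invoked; the table and column list below are hypothetical, and the exact signature should be checked against the extension's documentation:

CREATE EXTENSION pg_dirtyread;

-- Read rows from a hypothetical table "orders", including dead (deleted or
-- updated-away) versions that vacuum has not yet removed:
SELECT * FROM pg_dirtyread('orders'::regclass)
         AS t(id integer, customer text, amount numeric);

The recovery window only lasts until vacuum removes the dead tuples, which is why a long vacuum deferral (or disabling autovacuum on the table) was attractive for this use case.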
{
"msg_contents": "Hi,\n\nOn 2023-04-24 14:36:36 +0200, Alvaro Herrera wrote:\n> On 2023-Apr-22, Andres Freund wrote:\n> > I'm afraid we'll need TransactionIdRetreatSafely() again, when we convert more\n> > things to 64bit xids (lest they end up with the same bug as fixed by\n> > be504a3e974), so it's perhaps worth thinking about how to make it less\n> > confusing.\n> \n> The one thing that IMO makes it less confusing is to have it return the\n> value rather than modifying it in place.\n\nPartially I made it that way because you otherwise end up repeating long\nvariable names multiple times within one statement, yielding long repetitive\nlines... Not sure that's a good enough reason, but ...\n\n\n\n> > > <para>\n> > > Replication slots overcome these disadvantages by retaining only the number\n> > > of segments known to be needed.\n> > > On the other hand, replication slots can retain so\n> > > many WAL segments that they fill up the space allocated\n> > > for <literal>pg_wal</literal>;\n> > > <xref linkend=\"guc-max-slot-wal-keep-size\"/> limits the size of WAL files\n> > > retained by replication slots.\n> > > </para>\n> > \n> > It seems a bit confusing now, because \"by retaining only the number of\n> > segments ...\" now also should cover hs_feedback (due to merging), but doesn't.\n> \n> Hah, ok.\n> \n\n> > I think I'll push the version I had. Then we can separately word-smith the\n> > section? Unless somebody protests I'm gonna do that soon.\n> \n> No objection.\n\nCool. Pushed now.\n\n\n",
"msg_date": "Mon, 24 Apr 2023 13:04:34 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove vacuum_defer_cleanup_age?"
}
] |
[
{
"msg_contents": "Hi,\n\nPeter Smith has recently reported a BF failure [1]. AFAICS, the call\nstack of failure [2] is as follows:\n\n0x1e66644 <ExceptionalCondition+0x8c> at postgres\n0x1d0143c <pgstat_release_entry_ref+0x4c0> at postgres\n0x1d02534 <pgstat_get_entry_ref+0x780> at postgres\n0x1cfb120 <pgstat_prep_pending_entry+0x8c> at postgres\n0x1cfd590 <pgstat_report_disconnect+0x34> at postgres\n0x1cfbfc0 <pgstat_shutdown_hook+0xd4> at postgres\n0x1ca7b08 <shmem_exit+0x7c> at postgres\n0x1ca7c74 <proc_exit_prepare+0x70> at postgres\n0x1ca7d2c <proc_exit+0x18> at postgres\n0x1cdf060 <PostgresMain+0x584> at postgres\n0x1c203f4 <ServerLoop.isra.0+0x1e88> at postgres\n0x1c2161c <PostmasterMain+0xfa4> at postgres\n0x1edcf94 <main+0x254> at postgres\n\nI couldn't correlate it to the recent commits. Any thoughts?\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPsHdWFjU43VEX%2BR-8de6dFQ-_JWrsqs%3DvWek1hULexP4Q%40mail.gmail.com\n[2] -\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-03-17%2005%3A36%3A10\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 18 Mar 2023 09:41:59 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": true,
"msg_subject": "BF mamba failure"
},
{
"msg_contents": "Amit Kapila <[email protected]> writes:\n> Peter Smith has recently reported a BF failure [1]. AFAICS, the call\n> stack of failure [2] is as follows:\n\nNote the assertion report a few lines further up:\n\nTRAP: failed Assert(\"pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0\"), File: \"pgstat_shmem.c\", Line: 560, PID: 25004\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 Mar 2023 00:26:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BF mamba failure"
},
{
"msg_contents": "Hi,\n\n18.03.2023 07:26, Tom Lane wrote:\n> Amit Kapila<[email protected]> writes:\n>> Peter Smith has recently reported a BF failure [1]. AFAICS, the call\n>> stack of failure [2] is as follows:\n> Note the assertion report a few lines further up:\n>\n> TRAP: failed Assert(\"pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0\"), File: \"pgstat_shmem.c\", Line: 560, PID: 25004\n\nThis assertion failure can be reproduced easily with the attached patch:\n============== running regression test queries ==============\ntest oldest_xmin ... ok 55 ms\ntest oldest_xmin ... FAILED (test process exited with exit code 1) 107 ms\ntest oldest_xmin ... FAILED (test process exited with exit code 1) 8 ms\n============== shutting down postmaster ==============\n\ncontrib/test_decoding/output_iso/log/postmaster.log contains:\nTRAP: failed Assert(\"pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0\"), File: \"pgstat_shmem.c\", Line: 561, \nPID: 456844\n\nWith the sleep placed above Assert(entry_ref->shared_entry->dropped) this Assert fails too.\n\nBest regards,\nAlexander",
"msg_date": "Sat, 18 Mar 2023 18:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BF mamba failure"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 2:00 AM Alexander Lakhin <[email protected]> wrote:\n>\n> Hi,\n>\n> 18.03.2023 07:26, Tom Lane wrote:\n>\n> Amit Kapila <[email protected]> writes:\n>\n> Peter Smith has recently reported a BF failure [1]. AFAICS, the call\n> stack of failure [2] is as follows:\n>\n> Note the assertion report a few lines further up:\n>\n> TRAP: failed Assert(\"pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0\"), File: \"pgstat_shmem.c\", Line: 560, PID: 25004\n>\n>\n> This assertion failure can be reproduced easily with the attached patch:\n> ============== running regression test queries ==============\n> test oldest_xmin ... ok 55 ms\n> test oldest_xmin ... FAILED (test process exited with exit code 1) 107 ms\n> test oldest_xmin ... FAILED (test process exited with exit code 1) 8 ms\n> ============== shutting down postmaster ==============\n>\n> contrib/test_decoding/output_iso/log/postmaster.log contains:\n> TRAP: failed Assert(\"pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0\"), File: \"pgstat_shmem.c\", Line: 561, PID: 456844\n>\n> With the sleep placed above Assert(entry_ref->shared_entry->dropped) this Assert fails too.\n>\n> Best regards,\n> Alexander\n\nI used a slightly modified* patch of Alexander's [1] applied to the\nlatest HEAD code (but with my \"toptxn\" patch reverted).\n--- the patch was modified in that I injected 'sleep' both above and\nbelow the Assert(entry_ref->shared_entry->dropped).\n\nUsing this I was also able to reproduce the problem. But test failures\nwere rare. The make check-world seemed OK, and indeed the\ntest_decoding tests would also appear to PASS around 14 out of 15\ntimes.\n\n============== running regression test queries ==============\ntest oldest_xmin ... ok 342 ms\ntest oldest_xmin ... ok 121 ms\ntest oldest_xmin ... ok 283 ms\n============== shutting down postmaster ==============\n============== removing temporary instance ==============\n\n=====================\n All 3 tests passed.\n=====================\n\n~~\n\nOften (but not always) depite the test_decoding reported PASS all 3\ntests as \"ok\", I still observed there was a TRAP in the logfile\n(contrib/test_decoding/output_iso/log/postmaster.log).\nTRAP: failed Assert(\"entry_ref->shared_entry->dropped\")\n\n~~\n\nOccasionally (about 1 in 15 test runs) the test would fail the same\nway as described by Alexander [1], with the accompanying TRAP.\nTRAP: failed Assert(\"pg_atomic_read_u32(&entry_ref->shared_entry->refcount)\n== 0\"), File: \"pgstat_shmem.c\", Line: 562, PID: 32013\n\n============== running regression test queries ==============\ntest oldest_xmin ... ok 331 ms\ntest oldest_xmin ... ok 91 ms\ntest oldest_xmin ... FAILED 702 ms\n============== shutting down postmaster ==============\n\n======================\n 1 of 3 tests failed.\n======================\n\n\n~~\n\n\nFWIW, the \"toptxn\" patch. whose push coincided with the build-farm\nerror I first reported [2], turns out to be an innocent party in this\nTRAP. We know this because all of the above results were running using\nHEAD code but with that \"toptxn\" patch reverted.\n\n------\n[1] https://www.postgresql.org/message-id/1941b7e2-be7c-9c4c-8505-c0fd05910e9a%40gmail.com\n[2] https://www.postgresql.org/message-id/CAHut%2BPsHdWFjU43VEX%2BR-8de6dFQ-_JWrsqs%3DvWek1hULexP4Q%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 20 Mar 2023 17:10:46 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BF mamba failure"
},
{
"msg_contents": "Hello hackers,\n\n20.03.2023 09:10, Peter Smith wrote:\n>\n> Using this I was also able to reproduce the problem. But test failures\n> were rare. The make check-world seemed OK, and indeed the\n> test_decoding tests would also appear to PASS around 14 out of 15\n> times.\n\nI've stumbled upon this assertion failure again during testing following cd312adc5.\n\nThis time I've simplified the reproducer to the attached modification.\nWith this patch applied, `make -s check -C contrib/test_decoding` fails on master as below:\nok 1 - pgstat_rc_1 14 ms\nnot ok 2 - pgstat_rc_2 1351 ms\n\n\ncontrib/test_decoding/output_iso/log/postmaster.log contains:\nTRAP: failed Assert(\"pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0\"), File: \"pgstat_shmem.c\", Line: 562, \nPID: 1130928\n\nWith extra logging added, I see the following events happening:\n1) pgstat_rc_1.setup calls pgstat_create_replslot(), gets\n ReplicationSlotIndex(slot) = 0 and calls\n pgstat_get_entry_ref_locked(PGSTAT_KIND_REPLSLOT, InvalidOid, 0, 0).\n\n2) pgstat_rc_1.s0_get_changes executes pg_logical_slot_get_changes(...)\n and then calls pgstat_gc_entry_refs on shmem_exit() ->\n pgstat_shutdown_hook() ...;\n with the sleep added inside pgstat_release_entry_ref, this backend waits\n after decreasing entry_ref->shared_entry->refcount to 0.\n\n3) pgstat_rc_1.stop removes the replication slot.\n\n4) pgstat_rc_2.setup calls pgstat_create_replslot(), gets\n ReplicationSlotIndex(slot) = 0 and calls\n pgstat_get_entry_ref_locked(PGSTAT_KIND_REPLSLOT, InvalidOid, 0, 0),\n which leads to the call pgstat_reinit_entry(), which increases refcount\n for the same shared_entry as in (1) and (2), and then to the call\n pgstat_acquire_entry_ref(), which increases refcount once more.\n\n5) the backend 2 reaches\nAssert(pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0),\n which fails due to refcount = 2.\n\nBest regards,\nAlexander",
"msg_date": "Wed, 12 Jun 2024 20:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BF mamba failure"
},
{
"msg_contents": "On Wed, Jun 12, 2024 at 08:00:00PM +0300, Alexander Lakhin wrote:\n> With extra logging added, I see the following events happening:\n> 1) pgstat_rc_1.setup calls pgstat_create_replslot(), gets\n> ReplicationSlotIndex(slot) = 0 and calls\n> pgstat_get_entry_ref_locked(PGSTAT_KIND_REPLSLOT, InvalidOid, 0, 0).\n> \n> 2) pgstat_rc_1.s0_get_changes executes pg_logical_slot_get_changes(...)\n> and then calls pgstat_gc_entry_refs on shmem_exit() ->\n> pgstat_shutdown_hook() ...;\n> with the sleep added inside pgstat_release_entry_ref, this backend waits\n> after decreasing entry_ref->shared_entry->refcount to 0.\n> \n> 3) pgstat_rc_1.stop removes the replication slot.\n> \n> 4) pgstat_rc_2.setup calls pgstat_create_replslot(), gets\n> ReplicationSlotIndex(slot) = 0 and calls\n> pgstat_get_entry_ref_locked(PGSTAT_KIND_REPLSLOT, InvalidOid, 0, 0),\n> which leads to the call pgstat_reinit_entry(), which increases refcount\n> for the same shared_entry as in (1) and (2), and then to the call\n> pgstat_acquire_entry_ref(), which increases refcount once more.\n> \n> 5) the backend 2 reaches\n> Assert(pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0),\n> which fails due to refcount = 2.\n\nThanks for the details.\n\nSo this comes down to the point that we offer no guarantee that the\nstats entry a backend sees at shutdown is the same as the one it wants\nto clean up. That's the same problem as what Floris has reported\nhere, with an OID wraparound and tables to get the same hash key.\nThat can happen for all stats kinds:\nhttps://www.postgresql.org/message-id/[email protected]\n\nI don't think that this is going to fly far except if we introduce a\nconcept of \"generation\" or \"age\" in the stats entries. The idea is\nsimple: when a stats entry is reinitialized because of a drop&create,\nincrement a counter to tell that this is a new generation, and keep\ntrack of it in *both* PgStat_EntryRef (local backend reference to the\nshmem stats entry) *and* PgStatShared_HashEntry (the central one).\nWhen releasing an entry, if we know that the shared entry we are\npointing at is not of the same generation as the local reference, it\nmeans that the entry has been reused for something else with the same\nhash key, so give up. It should not be that invasive, still it means\nABI breakage in the two pgstats internal structures I am mentioning,\nwhich is OK for a backpatch as this stuff is internal. On top of\nthat, this problem means that we can silently and randomly lose stats,\nwhich is not cool.\n\nNote that Noah has been working on a better integration of injection\npoints with isolation tests. We could reuse that here to have a test\ncase, with an injection point waiting around the pg_usleep() you have\nhardcoded:\nhttps://www.postgresql.org/message-id/[email protected]\n\nI'll try to give it a go on Monday.\n--\nMichael",
"msg_date": "Fri, 14 Jun 2024 14:31:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BF mamba failure"
},
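A minimal stand-alone model of the "generation" idea described in the message above; every type, field, and function name here is invented for illustration, and the actual patch (described in the next message) modifies PgStatShared_HashEntry and PgStat_EntryRef instead:

#include <stdbool.h>
#include <stdint.h>

typedef struct SharedEntry
{
    uint32_t    refcount;       /* backends currently holding a reference */
    uint32_t    generation;     /* bumped each time the slot is reused */
    bool        dropped;        /* entry pending removal */
} SharedEntry;

typedef struct LocalRef
{
    SharedEntry *shared;        /* shared slot this backend points at */
    uint32_t    generation;     /* generation seen when the ref was taken */
} LocalRef;

/* A drop&create with the same hash key reuses the slot: new generation. */
static void
reinit_entry(SharedEntry *entry)
{
    entry->generation++;
    entry->dropped = false;
    entry->refcount = 0;
}

/* At release time, only finish the drop if the slot still matches our ref. */
static bool
release_may_finish_drop(const LocalRef *ref)
{
    /*
     * A generation mismatch means another backend recreated an entry under
     * the same hash key after we saw it dropped; it is not ours to clean up,
     * and asserting refcount == 0 on it would be wrong.
     */
    return ref->generation == ref->shared->generation;
}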
{
"msg_contents": "On Fri, Jun 14, 2024 at 02:31:37PM +0900, Michael Paquier wrote:\n> I don't think that this is going to fly far except if we introduce a\n> concept of \"generation\" or \"age\" in the stats entries. The idea is\n> simple: when a stats entry is reinitialized because of a drop&create,\n> increment a counter to tell that this is a new generation, and keep\n> track of it in *both* PgStat_EntryRef (local backend reference to the\n> shmem stats entry) *and* PgStatShared_HashEntry (the central one).\n> When releasing an entry, if we know that the shared entry we are\n> pointing at is not of the same generation as the local reference, it\n> means that the entry has been reused for something else with the same\n> hash key, so give up. It should not be that invasive, still it means\n> ABI breakage in the two pgstats internal structures I am mentioning,\n> which is OK for a backpatch as this stuff is internal. On top of\n> that, this problem means that we can silently and randomly lose stats,\n> which is not cool.\n> \n> I'll try to give it a go on Monday.\n\nHere you go, the patch introduces what I've named an \"age\" counter\nattached to the shared entry references, and copied over to the local\nreferences. The countner is initialized at 0 and incremented each\ntime an entry is reused, then when attempting to drop an entry we\ncross-check the version hold locally with the shared one.\n\nWhile looking at the whole, this is close to a concept patch sent\npreviously, where a counter is used in the shared entry with a\ncross-check done with the local reference, that was posted here\n(noticed that today):\nhttps://www.postgresql.org/message-id/[email protected]\n\nThe logic is different though, as we don't need to care about the\ncontents of the local cache when cross-checking the \"age\" count when\nretrieving the contents, just the case where a backend would attempt\nto drop an entry it thinks is OK to operate on, that got reused\nbecause of the effect of other backends doing creates and drops with\nthe same hash key.\n\nThis idea needs more eyes, so I am adding that to the next CF for now.\nI've played with it for a few hours and concurrent replication slot\ndrops/creates, without breaking it. I have not implemented an\nisolation test for this case, as it depends on where we are going with\ntheir integration with injection points.\n--\nMichael",
"msg_date": "Mon, 17 Jun 2024 13:32:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BF mamba failure"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm attaching a patch to do $subject in autoprewarm.c and worker_spi\nextensions. The way it is right now doesn't hurt anyone, but why to\nfail after defining custom GUCs if we aren't loading them via\nshared_preload_libraries.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 18 Mar 2023 10:26:42 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix misplaced shared_preload_libraries_in_progress check in few\n extensions"
},
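The ordering argued for above would look roughly like the sketch below in a module's _PG_init(); the GUC name is made up for illustration, and whether this ordering is appropriate for pg_prewarm is questioned in the reply that follows:

#include "postgres.h"

#include <limits.h>

#include "fmgr.h"
#include "miscadmin.h"
#include "utils/guc.h"

PG_MODULE_MAGIC;

static int  my_module_naptime = 10;     /* fictional GUC for illustration */

void
_PG_init(void)
{
    /* Bail out before defining anything if we are not being preloaded. */
    if (!process_shared_preload_libraries_in_progress)
        return;

    DefineCustomIntVariable("my_module.naptime",
                            "Duration between activity rounds.",
                            NULL,
                            &my_module_naptime,
                            10, 1, INT_MAX,
                            PGC_SIGHUP,
                            GUC_UNIT_S,
                            NULL, NULL, NULL);
}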
{
"msg_contents": "At Sat, 18 Mar 2023 10:26:42 +0530, Bharath Rupireddy <[email protected]> wrote in \n> Hi,\n> \n> I'm attaching a patch to do $subject in autoprewarm.c and worker_spi\n> extensions. The way it is right now doesn't hurt anyone, but why to\n> fail after defining custom GUCs if we aren't loading them via\n> shared_preload_libraries.\n> \n> Thoughts?\n\nI don't think they're misplaced at least for pg_prewram. pg_prewarm\nworker allows to be executed after startup and the variable is still\nused by such prewarm worker processes, but the patch make the variable\nstop working in that case.\n\nI didn't look at the part for worker_spi.c.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 20 Mar 2023 18:05:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix misplaced shared_preload_libraries_in_progress check in\n few extensions"
}
] |
[
{
"msg_contents": "Hi hackers,\n\n In heap_create_with_catalog, the Relation new_rel_desc is created\nby RelationBuildLocalRelation, not table_open. So it's better to\ncall RelationClose to release it.\n\nWhat's more, the comment for it seems useless, just delete it.\n\nThanks!",
"msg_date": "Sat, 18 Mar 2023 07:04:45 +0000",
"msg_from": "Xiaoran Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Use RelationClose rather than table_close in\n heap_create_with_catalog"
},
{
"msg_contents": "Xiaoran Wang <[email protected]> 于2023年3月18日周六 15:04写道:\n\n> Hi hackers,\n>\n> In heap_create_with_catalog, the Relation new_rel_desc is created\n> by RelationBuildLocalRelation, not table_open. So it's better to\n> call RelationClose to release it.\n>\nWhy it's better to call RelationClose? Is there a problem if using\ntable_close()?\n\n> What's more, the comment for it seems useless, just delete it.\n>\n> Thanks!\n>\n\nregard, tender wang\n\nXiaoran Wang <[email protected]> 于2023年3月18日周六 15:04写道:\n\n\nHi hackers,\n\n\n\n\n\n In heap_create_with_catalog, the Relation new_rel_desc is created \nby RelationBuildLocalRelation, not table_open. So it's better to \ncall RelationClose to release it.Why it's better to call RelationClose? Is there a problem if using table_close()? \nWhat's more, the comment for it seems useless, just delete it.\n\n\n\n\n\nThanks!regard, tender wang",
"msg_date": "Wed, 10 May 2023 10:57:46 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use RelationClose rather than table_close in\n heap_create_with_catalog"
},
{
"msg_contents": "The routine table_close takes 2 params: Relation and LOCKMODE, it first calls RelationClose to decrease the relation cache reference count, then deals with the lock on the table based on LOCKMOD param.\r\n\r\nIn heap_create_with_catalog, the Relation new_rel_desc is only a local relation cache, created by RelationBuildLocalRelation. No other processes can see this relation, as the transaction is not committed, so there is no lock on it.\r\n\r\nThere is no problem to release the relation cache by table_close(new_rel_desc, NoLock) here. However, from my point of view, table_close(new_rel_desc, NoLock); /* do not unlock till end of xact */\r\nthis line is a little confusing since there is no lock on the relation at all. So I think it's better to use RelationColse here.\r\n________________________________\r\nFrom: tender wang <[email protected]>\r\nSent: Wednesday, May 10, 2023 10:57 AM\r\nTo: Xiaoran Wang <[email protected]>\r\nCc: PostgreSQL-development <[email protected]>\r\nSubject: Re: [PATCH] Use RelationClose rather than table_close in heap_create_with_catalog\r\n\r\n!! External Email\r\n\r\n\r\nXiaoran Wang <[email protected]<mailto:[email protected]>> 于2023年3月18日周六 15:04写道:\r\nHi hackers,\r\n\r\n In heap_create_with_catalog, the Relation new_rel_desc is created\r\nby RelationBuildLocalRelation, not table_open. So it's better to\r\ncall RelationClose to release it.\r\nWhy it's better to call RelationClose? Is there a problem if using table_close()?\r\nWhat's more, the comment for it seems useless, just delete it.\r\n\r\nThanks!\r\n\r\nregard, tender wang\r\n\r\n!! External Email: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender.\r\n\n\n\n\n\n\n\n\nThe\r\n routine table_close takes 2 params: Relation and \r\nLOCKMODE, it first calls RelationClose to decrease the relation cache reference count, then deals with the lock on the table based on\r\nLOCKMOD param. \n\n\n\n\nIn\r\n heap_create_with_catalog, the Relation new_rel_desc is only a local relation cache, created by RelationBuildLocalRelation. No other processes can see this relation, as the transaction is not committed, so there is no lock on it. \n\n\nThere is no problem to release the relation cache by table_close(new_rel_desc, NoLock) here. However, from my point of view,\r\ntable_close(new_rel_desc, NoLock); /* do not unlock till end of xact */ \nthis line is a little confusing since there is no lock on the relation at all. So I think it's better to use RelationColse here.\n\n\nFrom: tender wang <[email protected]>\nSent: Wednesday, May 10, 2023 10:57 AM\nTo: Xiaoran Wang <[email protected]>\nCc: PostgreSQL-development <[email protected]>\nSubject: Re: [PATCH] Use RelationClose rather than table_close in heap_create_with_catalog\n \n\n\n\n\n\n\n\n\n\n\n!! External Email\n\n\n\n\n\n\n\n\n\n\n\nXiaoran Wang <[email protected]> 于2023年3月18日周六 15:04写道:\n\n\n\n\n\nHi hackers,\n\n\n\n\n\n In heap_create_with_catalog, the Relation new_rel_desc is created \nby RelationBuildLocalRelation, not table_open. So it's better to \ncall RelationClose to release it.\n\n\n\n\nWhy it's better to call RelationClose? Is there a problem if using table_close()? \n\n\n\n\nWhat's more, the comment for it seems useless, just delete it.\n\n\n\n\nThanks!\n\n\n\n\n\nregard, tender wang \n\n\n\n\n\n\n\n\n\n\n\n!! External Email: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender.",
"msg_date": "Wed, 10 May 2023 13:48:36 +0000",
"msg_from": "Xiaoran Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use RelationClose rather than table_close in\n heap_create_with_catalog"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 12:34 PM Xiaoran Wang <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> In heap_create_with_catalog, the Relation new_rel_desc is created\n> by RelationBuildLocalRelation, not table_open. So it's better to\n> call RelationClose to release it.\n>\n> What's more, the comment for it seems useless, just delete it.\n\nEssentially, all the close functions are the same with NoLock, IOW,\ntable_close(relation, NoLock) = relation_closerelation, NoLock) =\nRelationClose(relation). Therefore, table_close(new_rel_desc, NoLock);\nlooks fine to me.\n\nAnd, the /* do not unlock till end of xact */, it looks like it's been\nthere from day 1. It may be indicating that the ref count fo the new\nrelation created in heap_create_with_catalog() will be decremented at\nthe end of xact, but I'm not sure what it means.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 10 May 2023 19:47:24 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use RelationClose rather than table_close in\n heap_create_with_catalog"
},
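To illustrate the equivalence claimed above: with NoLock, closing a relation only drops the relcache reference count, since the lock manager is consulted only when a real lock mode is passed. A paraphrased sketch of what the close path boils down to (not the verbatim backend source, and assuming the usual backend headers):

#include "postgres.h"

#include "storage/lmgr.h"
#include "utils/rel.h"
#include "utils/relcache.h"

static void
relation_close_sketch(Relation relation, LOCKMODE lockmode)
{
    LockRelId   relid = relation->rd_lockInfo.lockRelId;

    /* Decrement the relcache reference count... */
    RelationClose(relation);

    /* ...and touch the lock manager only if a lock was actually taken. */
    if (lockmode != NoLock)
        UnlockRelationId(&relid, lockmode);
}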
{
"msg_contents": "Bharath Rupireddy <[email protected]> 于2023年5月10日周三\n22:17写道:\n\n> On Sat, Mar 18, 2023 at 12:34 PM Xiaoran Wang <[email protected]> wrote:\n> >\n> > Hi hackers,\n> >\n> > In heap_create_with_catalog, the Relation new_rel_desc is created\n> > by RelationBuildLocalRelation, not table_open. So it's better to\n> > call RelationClose to release it.\n> >\n> > What's more, the comment for it seems useless, just delete it.\n>\n> Essentially, all the close functions are the same with NoLock, IOW,\n> table_close(relation, NoLock) = relation_closerelation, NoLock) =\n> RelationClose(relation). Therefore, table_close(new_rel_desc, NoLock);\n> looks fine to me.\n\n Agreed.\n\nAnd, the /* do not unlock till end of xact */, it looks like it's been\n> there from day 1. It may be indicating that the ref count fo the new\n> relation created in heap_create_with_catalog() will be decremented at\n> the end of xact, but I'm not sure what it means.\n>\n Me too\n\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n>\n\nBharath Rupireddy <[email protected]> 于2023年5月10日周三 22:17写道:On Sat, Mar 18, 2023 at 12:34 PM Xiaoran Wang <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> In heap_create_with_catalog, the Relation new_rel_desc is created\n> by RelationBuildLocalRelation, not table_open. So it's better to\n> call RelationClose to release it.\n>\n> What's more, the comment for it seems useless, just delete it.\n\nEssentially, all the close functions are the same with NoLock, IOW,\ntable_close(relation, NoLock) = relation_closerelation, NoLock) =\nRelationClose(relation). Therefore, table_close(new_rel_desc, NoLock);\nlooks fine to me. Agreed. \nAnd, the /* do not unlock till end of xact */, it looks like it's been\nthere from day 1. It may be indicating that the ref count fo the new\nrelation created in heap_create_with_catalog() will be decremented at\nthe end of xact, but I'm not sure what it means. Me too\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 10 May 2023 22:38:27 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use RelationClose rather than table_close in\n heap_create_with_catalog"
},
{
"msg_contents": "Bharath Rupireddy <[email protected]> writes:\n> And, the /* do not unlock till end of xact */, it looks like it's been\n> there from day 1. It may be indicating that the ref count fo the new\n> relation created in heap_create_with_catalog() will be decremented at\n> the end of xact, but I'm not sure what it means.\n\nHmm, I think it's been copied-and-pasted from somewhere. It's quite\ncommon for us to not release locks on modified tables until end of\ntransaction. However, that's not what's happening here, because we\nactually *don't have any such lock* at this point, as you can easily\nprove by stepping through this code and watching the contents of\npg_locks from another session. We do acquire AccessExclusiveLock\non the new table eventually, but not till control returns to\nDefineRelation.\n\nI'm not real sure that I like the proposed code change: it's unclear\nto me whether it's an unwise piercing of a couple of abstraction\nlayers or an okay change because those abstraction layers haven't\nyet been applied to the new relation at all. However, I think the\nexisting comment is actively misleading and needs to be changed.\nMaybe something like\n\n /*\n * Close the relcache entry, since we return only an OID not a\n * relcache reference. Note that we do not yet hold any lockmanager\n * lock on the new rel, so there's nothing to release.\n */\n table_close(new_rel_desc, NoLock);\n\n /*\n * ok, the relation has been cataloged, so close catalogs and return\n * the OID of the newly created relation.\n */\n table_close(pg_class_desc, RowExclusiveLock);\n\nGiven these comments, maybe changing the first call to RelationClose\nwould be sensible, but I'm still not quite convinced.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 May 2023 12:32:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use RelationClose rather than table_close in\n heap_create_with_catalog"
},
{
"msg_contents": "Tom Lane <[email protected]> 于2023年5月11日周四 00:32写道:\n\n> Bharath Rupireddy <[email protected]> writes:\n> > And, the /* do not unlock till end of xact */, it looks like it's been\n> > there from day 1. It may be indicating that the ref count fo the new\n> > relation created in heap_create_with_catalog() will be decremented at\n> > the end of xact, but I'm not sure what it means.\n>\n> Hmm, I think it's been copied-and-pasted from somewhere. It's quite\n> common for us to not release locks on modified tables until end of\n> transaction. However, that's not what's happening here, because we\n> actually *don't have any such lock* at this point, as you can easily\n> prove by stepping through this code and watching the contents of\n> pg_locks from another session. We do acquire AccessExclusiveLock\n> on the new table eventually, but not till control returns to\n> DefineRelation.\n>\n> I'm not real sure that I like the proposed code change: it's unclear\n> to me whether it's an unwise piercing of a couple of abstraction\n> layers or an okay change because those abstraction layers haven't\n> yet been applied to the new relation at all. However, I think the\n> existing comment is actively misleading and needs to be changed.\n> Maybe something like\n>\n> /*\n> * Close the relcache entry, since we return only an OID not a\n> * relcache reference. Note that we do not yet hold any lockmanager\n> * lock on the new rel, so there's nothing to release.\n> */\n> table_close(new_rel_desc, NoLock);\n>\n> /*\n> * ok, the relation has been cataloged, so close catalogs and return\n> * the OID of the newly created relation.\n> */\n> table_close(pg_class_desc, RowExclusiveLock);\n>\n+1\n Personally, I prefer above code.\n\nGiven these comments, maybe changing the first call to RelationClose\n> would be sensible, but I'm still not quite convinced.\n>\n> regards, tom lane\n>\n>\n>\n\nTom Lane <[email protected]> 于2023年5月11日周四 00:32写道:Bharath Rupireddy <[email protected]> writes:\n> And, the /* do not unlock till end of xact */, it looks like it's been\n> there from day 1. It may be indicating that the ref count fo the new\n> relation created in heap_create_with_catalog() will be decremented at\n> the end of xact, but I'm not sure what it means.\n\nHmm, I think it's been copied-and-pasted from somewhere. It's quite\ncommon for us to not release locks on modified tables until end of\ntransaction. However, that's not what's happening here, because we\nactually *don't have any such lock* at this point, as you can easily\nprove by stepping through this code and watching the contents of\npg_locks from another session. We do acquire AccessExclusiveLock\non the new table eventually, but not till control returns to\nDefineRelation.\n\nI'm not real sure that I like the proposed code change: it's unclear\nto me whether it's an unwise piercing of a couple of abstraction\nlayers or an okay change because those abstraction layers haven't\nyet been applied to the new relation at all. However, I think the\nexisting comment is actively misleading and needs to be changed.\nMaybe something like\n\n /*\n * Close the relcache entry, since we return only an OID not a\n * relcache reference. 
Note that we do not yet hold any lockmanager\n * lock on the new rel, so there's nothing to release.\n */\n table_close(new_rel_desc, NoLock);\n\n /*\n * ok, the relation has been cataloged, so close catalogs and return\n * the OID of the newly created relation.\n */\n table_close(pg_class_desc, RowExclusiveLock);+1 Personally, I prefer above code.\nGiven these comments, maybe changing the first call to RelationClose\nwould be sensible, but I'm still not quite convinced.\n\n regards, tom lane",
"msg_date": "Thu, 11 May 2023 15:26:07 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use RelationClose rather than table_close in\n heap_create_with_catalog"
},
{
"msg_contents": "Thanks for all your responses. It seems better to change the comments on the code\nrather than call RelationClose here.\n\n table_close(new_rel_desc, NoLock); /* do not unlock till end of xact */\n\nDo I need to create another patch to fix the comments?\n\nBest regards, xiaoran\n________________________________\nFrom: tender wang <[email protected]>\nSent: Thursday, May 11, 2023 3:26 PM\nTo: Tom Lane <[email protected]>\nCc: Bharath Rupireddy <[email protected]>; Xiaoran Wang <[email protected]>; [email protected] <[email protected]>\nSubject: Re: [PATCH] Use RelationClose rather than table_close in heap_create_with_catalog\n\n!! External Email\n\n\nTom Lane <[email protected]<mailto:[email protected]>> 于2023年5月11日周四 00:32写道:\nBharath Rupireddy <[email protected]<mailto:[email protected]>> writes:\n> And, the /* do not unlock till end of xact */, it looks like it's been\n> there from day 1. It may be indicating that the ref count fo the new\n> relation created in heap_create_with_catalog() will be decremented at\n> the end of xact, but I'm not sure what it means.\n\nHmm, I think it's been copied-and-pasted from somewhere. It's quite\ncommon for us to not release locks on modified tables until end of\ntransaction. However, that's not what's happening here, because we\nactually *don't have any such lock* at this point, as you can easily\nprove by stepping through this code and watching the contents of\npg_locks from another session. We do acquire AccessExclusiveLock\non the new table eventually, but not till control returns to\nDefineRelation.\n\nI'm not real sure that I like the proposed code change: it's unclear\nto me whether it's an unwise piercing of a couple of abstraction\nlayers or an okay change because those abstraction layers haven't\nyet been applied to the new relation at all. However, I think the\nexisting comment is actively misleading and needs to be changed.\nMaybe something like\n\n /*\n * Close the relcache entry, since we return only an OID not a\n * relcache reference. Note that we do not yet hold any lockmanager\n * lock on the new rel, so there's nothing to release.\n */\n table_close(new_rel_desc, NoLock);\n\n /*\n * ok, the relation has been cataloged, so close catalogs and return\n * the OID of the newly created relation.\n */\n table_close(pg_class_desc, RowExclusiveLock);\n+1\n Personally, I prefer above code.\n\nGiven these comments, maybe changing the first call to RelationClose\nwould be sensible, but I'm still not quite convinced.\n\n regards, tom lane\n\n\n\n!! External Email: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender.\n\n\n\n\n\n\n\n\nThanks for all your responses. It seems better to change the comments on the code\n\n\n\nrather than call RelationClose here.\n\n\n\n\n table_close(new_rel_desc, NoLock); /* do not unlock till end of xact */\n\n\n\n\n\n\n\nDo I need to create another patch to fix the comments?\n\n\n\n\n\n\n\nBest regards, xiaoran\n\n\n\n\n\nFrom: tender wang <[email protected]>\nSent: Thursday, May 11, 2023 3:26 PM\nTo: Tom Lane <[email protected]>\nCc: Bharath Rupireddy <[email protected]>; Xiaoran Wang <[email protected]>; [email protected] <[email protected]>\nSubject: Re: [PATCH] Use RelationClose rather than table_close in heap_create_with_catalog\n \n\n\n\n\n\n\n\n\n\n\n!! 
External Email\n\n\n\n\n\n\n\n\n\n\n\nTom Lane <[email protected]> 于2023年5月11日周四 00:32写道:\n\n\nBharath Rupireddy <[email protected]> writes:\n> And, the /* do not unlock till end of xact */, it looks like it's been\n> there from day 1. It may be indicating that the ref count fo the new\n> relation created in heap_create_with_catalog() will be decremented at\n> the end of xact, but I'm not sure what it means.\n\nHmm, I think it's been copied-and-pasted from somewhere. It's quite\ncommon for us to not release locks on modified tables until end of\ntransaction. However, that's not what's happening here, because we\nactually *don't have any such lock* at this point, as you can easily\nprove by stepping through this code and watching the contents of\npg_locks from another session. We do acquire AccessExclusiveLock\non the new table eventually, but not till control returns to\nDefineRelation.\n\nI'm not real sure that I like the proposed code change: it's unclear\nto me whether it's an unwise piercing of a couple of abstraction\nlayers or an okay change because those abstraction layers haven't\nyet been applied to the new relation at all. However, I think the\nexisting comment is actively misleading and needs to be changed.\nMaybe something like\n\n /*\n * Close the relcache entry, since we return only an OID not a\n * relcache reference. Note that we do not yet hold any lockmanager\n * lock on the new rel, so there's nothing to release.\n */\n table_close(new_rel_desc, NoLock);\n\n /*\n * ok, the relation has been cataloged, so close catalogs and return\n * the OID of the newly created relation.\n */\n table_close(pg_class_desc, RowExclusiveLock);\n\n+1\n Personally, I prefer above code.\n\n\n\nGiven these comments, maybe changing the first call to RelationClose\nwould be sensible, but I'm still not quite convinced.\n\n regards, tom lane\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!! External Email: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender.",
"msg_date": "Sat, 13 May 2023 04:03:53 +0000",
"msg_from": "Xiaoran Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use RelationClose rather than table_close in\n heap_create_with_catalog"
}
] |
[
{
"msg_contents": "Hi,\nPostgreSQL passes bytea arguments to PL/Perl functions as hexadecimal strings, which is not only inconvenient, but also memory and time consuming.\nSo I decided to propose a simple transform extension to pass bytea as native Perl octet strings.\nPlease find the patch attached.\n \nRegards,\nIvan Panchenko",
"msg_date": "Sun, 19 Mar 2023 01:25:27 +0300",
"msg_from": "=?UTF-8?B?0JjQstCw0L0g0J/QsNC90YfQtdC90LrQvg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?Qnl0ZWEgUEwvUGVybCB0cmFuc2Zvcm0=?="
},
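The attached patch is not reproduced here, but the input direction of such a transform could look roughly like the sketch below (function name and layout are illustrative only): the raw bytea payload is handed to Perl as an octet string instead of a hex-encoded text value.

#include "postgres.h"

#include "fmgr.h"
#include "plperl.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(bytea_to_plperl_sketch);

Datum
bytea_to_plperl_sketch(PG_FUNCTION_ARGS)
{
    dTHX;
    bytea      *in = PG_GETARG_BYTEA_PP(0);

    /* Build a Perl scalar holding the raw bytes; no hex encoding involved. */
    return PointerGetDatum(newSVpvn(VARDATA_ANY(in), VARSIZE_ANY_EXHDR(in)));
}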
{
"msg_contents": "> On 18 Mar 2023, at 23:25, Иван Панченко <[email protected]> wrote:\n> \n> Hi,\n> PostgreSQL passes bytea arguments to PL/Perl functions as hexadecimal strings, which is not only inconvenient, but also memory and time consuming.\n> So I decided to propose a simple transform extension to pass bytea as native Perl octet strings.\n> Please find the patch attached.\n\nThanks for the patch, I recommend registering this in the currently open\nCommitfest to make sure it's kept track of:\n\n\thttps://commitfest.postgresql.org/43/\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:44:55 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": ">Среда, 22 марта 2023, 12:45 +03:00 от Daniel Gustafsson <[email protected]>:\n> \n>> On 18 Mar 2023, at 23:25, Иван Панченко < [email protected] > wrote:\n>>\n>> Hi,\n>> PostgreSQL passes bytea arguments to PL/Perl functions as hexadecimal strings, which is not only inconvenient, but also memory and time consuming.\n>> So I decided to propose a simple transform extension to pass bytea as native Perl octet strings.\n>> Please find the patch attached.\n>Thanks for the patch, I recommend registering this in the currently open\n>Commitfest to make sure it's kept track of:\n>\n>https://commitfest.postgresql.org/43/\nThanks, done:\nhttps://commitfest.postgresql.org/43/4252/\n>\n>--\n>Daniel Gustafsson\n> \n--\nIvan\n \n Среда, 22 марта 2023, 12:45 +03:00 от Daniel Gustafsson <[email protected]>: > On 18 Mar 2023, at 23:25, Иван Панченко <[email protected]> wrote:>> Hi,> PostgreSQL passes bytea arguments to PL/Perl functions as hexadecimal strings, which is not only inconvenient, but also memory and time consuming.> So I decided to propose a simple transform extension to pass bytea as native Perl octet strings.> Please find the patch attached.Thanks for the patch, I recommend registering this in the currently openCommitfest to make sure it's kept track of:https://commitfest.postgresql.org/43/Thanks, done:https://commitfest.postgresql.org/43/4252/--Daniel Gustafsson --Ivan",
"msg_date": "Wed, 22 Mar 2023 19:17:24 +0300",
"msg_from": "=?UTF-8?B?0JjQstCw0L0g0J/QsNC90YfQtdC90LrQvg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?UmU6IEJ5dGVhIFBML1BlcmwgdHJhbnNmb3Jt?="
},
{
"msg_contents": ">\n> So I decided to propose a simple transform extension to pass bytea as\n> native Perl octet strings.\n\n\nQuick review, mostly housekeeping things:\n\n* Needs a rebase, minor failure on Mkvcbuild.pm\n* Code needs standardized formatting, esp. bytea_plperl.c\n* Needs to be meson-i-fied (i.e. add a \"meson.build\" file)\n* Do all of these transforms need to be their own contrib modules? So much\nduplicated code across contrib/*_plperl already (and *plpython too for that\nmatter) ...\n\nCheers,\nGreg\n\nSo I decided to propose a simple transform extension to pass bytea as native Perl octet strings.Quick review, mostly housekeeping things:* Needs a rebase, minor failure on Mkvcbuild.pm* Code needs standardized formatting, esp. bytea_plperl.c* Needs to be meson-i-fied (i.e. add a \"meson.build\" file)* Do all of these transforms need to be their own contrib modules? So much duplicated code across contrib/*_plperl already (and *plpython too for that matter) ...Cheers,Greg",
"msg_date": "Thu, 22 Jun 2023 16:56:43 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "On 2023-06-22 Th 16:56, Greg Sabino Mullane wrote:\n>\n> * Do all of these transforms need to be their own contrib modules? So \n> much duplicated code across contrib/*_plperl already (and *plpython \n> too for that matter) ...\n>\n>\n\nYeah, that's a bit of a mess. Not sure what we can do about it now.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-22 Th 16:56, Greg Sabino\n Mullane wrote:\n\n\n\n\n\n\n* Do all of these transforms need to be their own\n contrib modules? So much duplicated code across\n contrib/*_plperl already (and *plpython too for that\n matter) ...\n\n\n\n\n\n\n\n\n\nYeah, that's a bit of a mess. Not sure what we can do about it\n now. \n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 23 Jun 2023 09:06:43 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n\n> On 2023-06-22 Th 16:56, Greg Sabino Mullane wrote:\n>>\n>> * Do all of these transforms need to be their own contrib modules? So\n>> much duplicated code across contrib/*_plperl already (and *plpython \n>> too for that matter) ...\n>>\n>>\n>\n> Yeah, that's a bit of a mess. Not sure what we can do about it now.\n\nWould it be possible to move the functions and other objects to a new\ncombined extension, and make the existing ones depend on that?\n\nI see ALTER EXTENSION has both ADD and DROP subcommands which don't\naffect the object itself, only the extension membership. The challenge\nwould be getting the ordering right between the upgrade/install scripts\ndropping the objects from the existing extension and adding them to the\nnew extension.\n\n- ilmari\n\n\n",
"msg_date": "Fri, 23 Jun 2023 16:14:34 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]> writes:\n> Andrew Dunstan <[email protected]> writes:\n>> On 2023-06-22 Th 16:56, Greg Sabino Mullane wrote:\n>>> * Do all of these transforms need to be their own contrib modules? So\n>>> much duplicated code across contrib/*_plperl already (and *plpython \n>>> too for that matter) ...\n\n>> Yeah, that's a bit of a mess. Not sure what we can do about it now.\n\n> Would it be possible to move the functions and other objects to a new\n> combined extension, and make the existing ones depend on that?\n\nPerhaps another way could be to accept that the packaging is what it\nis, but look for ways to share the repetitive source code. The .so's\nwouldn't get any smaller, but they're not that big anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jun 2023 12:34:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "On 22.06.23 22:56, Greg Sabino Mullane wrote:\n> * Do all of these transforms need to be their own contrib modules? So \n> much duplicated code across contrib/*_plperl already (and *plpython too \n> for that matter) ...\n\nThe reason the first transform modules were separate extensions is that \nthey interfaced between one extension (plpython, plperl) and another \nextension (ltree, hstore), so it wasn't clear where to put them without \ncreating an additional dependency for one of them.\n\nIf the transform deals with a built-in type, then they should just be \nadded to the respective pl extension directly.\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 13:47:51 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": ">Четверг, 6 июля 2023, 14:48 +03:00 от Peter Eisentraut < [email protected] >:\n> \n>On 22.06.23 22:56, Greg Sabino Mullane wrote:\n>> * Do all of these transforms need to be their own contrib modules? So\n>> much duplicated code across contrib/*_plperl already (and *plpython too\n>> for that matter) ...\n>The reason the first transform modules were separate extensions is that\n>they interfaced between one extension (plpython, plperl) and another\n>extension (ltree, hstore), so it wasn't clear where to put them without\n>creating an additional dependency for one of them.\n>\n>If the transform deals with a built-in type, then they should just be\n>added to the respective pl extension directly.\nLooks reasonable. \nThe new extension bytea_plperl can be easily moved into plperl now, but what should be do with the existing ones, namely jsonb_plperl and bool_plperl ?\nIf we leave them where they are, it would be hard to explain why some transforms are inside plperl while other ones live separately. If we move them into plperl also, wouldn’t it break some compatibility?\n>\n> \n \nЧетверг, 6 июля 2023, 14:48 +03:00 от Peter Eisentraut <[email protected]>: On 22.06.23 22:56, Greg Sabino Mullane wrote:> * Do all of these transforms need to be their own contrib modules? So> much duplicated code across contrib/*_plperl already (and *plpython too> for that matter) ...The reason the first transform modules were separate extensions is thatthey interfaced between one extension (plpython, plperl) and anotherextension (ltree, hstore), so it wasn't clear where to put them withoutcreating an additional dependency for one of them.If the transform deals with a built-in type, then they should just beadded to the respective pl extension directly.Looks reasonable. The new extension bytea_plperl can be easily moved into plperl now, but what should be do with the existing ones, namely jsonb_plperl and bool_plperl ?If we leave them where they are, it would be hard to explain why some transforms are inside plperl while other ones live separately. If we move them into plperl also, wouldn’t it break some compatibility?",
"msg_date": "Fri, 14 Jul 2023 18:37:53 +0300",
"msg_from": "=?UTF-8?B?SXZhbiBQYW5jaGVua28=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?UmU6IEJ5dGVhIFBML1BlcmwgdHJhbnNmb3Jt?="
},
{
"msg_contents": "=?UTF-8?B?SXZhbiBQYW5jaGVua28=?= <[email protected]> writes:\n> Четверг, 6 июля 2023, 14:48 +03:00 от Peter Eisentraut < [email protected] >:\n>> If the transform deals with a built-in type, then they should just be\n>> added to the respective pl extension directly.\n\n> The new extension bytea_plperl can be easily moved into plperl now, but what should be do with the existing ones, namely jsonb_plperl and bool_plperl ?\n> If we leave them where they are, it would be hard to explain why some transforms are inside plperl while other ones live separately. If we move them into plperl also, wouldn’t it break some compatibility?\n\nIt's kind of a mess, indeed. But I think we could make plperl 1.1\ncontain the additional transforms and just tell people they have\nto drop the obsolete extensions before they upgrade to 1.1.\nFortunately, it doesn't look like functions using a transform\nhave any hard dependency on the transform, so \"drop extension\njsonb_plperl\" followed by \"alter extension plperl update\" should\nwork without cascading to all your plperl functions.\n\nHaving said that, we'd still be in the position of having to\nexplain why some transforms are packaged with plperl and others\nnot. The distinction between built-in and contrib types might\nbe obvious to us hackers, but I bet a lot of users see it as\npretty artificial. So maybe the existing packaging design is\nfine and we should just look for a way to reduce the code\nduplication.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jul 2023 16:27:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": ">Friday, 14 July 2023, 23:27 +03:00 от Tom Lane <[email protected]>:\n> \n>=?UTF-8?B?SXZhbiBQYW5jaGVua28=?= < [email protected] > writes:\n>> Четверг, 6 июля 2023, 14:48 +03:00 от Peter Eisentraut < [email protected] >:\n>>> If the transform deals with a built-in type, then they should just be\n>>> added to the respective pl extension directly.\n>\n>> The new extension bytea_plperl can be easily moved into plperl now, but what should be do with the existing ones, namely jsonb_plperl and bool_plperl ?\n>> If we leave them where they are, it would be hard to explain why some transforms are inside plperl while other ones live separately. If we move them into plperl also, wouldn’t it break some compatibility?\n>\n>It's kind of a mess, indeed. But I think we could make plperl 1.1\n>contain the additional transforms and just tell people they have\n>to drop the obsolete extensions before they upgrade to 1.1.\n>Fortunately, it doesn't look like functions using a transform\n>have any hard dependency on the transform, so \"drop extension\n>jsonb_plperl\" followed by \"alter extension plperl update\" should\n>work without cascading to all your plperl functions.\n>\n>Having said that, we'd still be in the position of having to\n>explain why some transforms are packaged with plperl and others\n>not. The distinction between built-in and contrib types might\n>be obvious to us hackers, but I bet a lot of users see it as\n>pretty artificial. So maybe the existing packaging design is\n>fine and we should just look for a way to reduce the code\n>duplication.\nThe code duplication between different transforms is not in C code, but mostly in copy-pasted Makefile, *.control, *.sql and meson-build. These files could be generated from some universal templates. But, keeping in mind Windows compatibility and presence of three build system, this hardly looks like a simplification.\nProbably at present time it would be better to keep the existing code duplication until plperl 1.1.\nNevertheless, dealing with code duplication is a wider task than the bytea transform, so let me suggest to keep this extension in the present form. If we decide what to do with the duplication, it would be another patch.\n\nThe mesonified and rebased version of the transform patch is attached.\n>\n>regards, tom lane\n> \nRegards, Ivan",
"msg_date": "Fri, 21 Jul 2023 00:29:12 +0300",
"msg_from": "=?UTF-8?B?SXZhbiBQYW5jaGVua28=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?UmU6IEJ5dGVhIFBML1BlcmwgdHJhbnNmb3Jt?="
},
{
"msg_contents": "On Fri, 21 Jul 2023 at 02:59, Ivan Panchenko <[email protected]> wrote:\n>\n> Friday, 14 July 2023, 23:27 +03:00 от Tom Lane <[email protected]>:\n>\n> =?UTF-8?B?SXZhbiBQYW5jaGVua28=?= <[email protected]> writes:\n> > Четверг, 6 июля 2023, 14:48 +03:00 от Peter Eisentraut < [email protected] >:\n> >> If the transform deals with a built-in type, then they should just be\n> >> added to the respective pl extension directly.\n>\n> > The new extension bytea_plperl can be easily moved into plperl now, but what should be do with the existing ones, namely jsonb_plperl and bool_plperl ?\n> > If we leave them where they are, it would be hard to explain why some transforms are inside plperl while other ones live separately. If we move them into plperl also, wouldn’t it break some compatibility?\n>\n> It's kind of a mess, indeed. But I think we could make plperl 1.1\n> contain the additional transforms and just tell people they have\n> to drop the obsolete extensions before they upgrade to 1.1.\n> Fortunately, it doesn't look like functions using a transform\n> have any hard dependency on the transform, so \"drop extension\n> jsonb_plperl\" followed by \"alter extension plperl update\" should\n> work without cascading to all your plperl functions.\n>\n> Having said that, we'd still be in the position of having to\n> explain why some transforms are packaged with plperl and others\n> not. The distinction between built-in and contrib types might\n> be obvious to us hackers, but I bet a lot of users see it as\n> pretty artificial. So maybe the existing packaging design is\n> fine and we should just look for a way to reduce the code\n> duplication.\n>\n> The code duplication between different transforms is not in C code, but mostly in copy-pasted Makefile, *.control, *.sql and meson-build. These files could be generated from some universal templates. But, keeping in mind Windows compatibility and presence of three build system, this hardly looks like a simplification.\n> Probably at present time it would be better to keep the existing code duplication until plperl 1.1.\n> Nevertheless, dealing with code duplication is a wider task than the bytea transform, so let me suggest to keep this extension in the present form. If we decide what to do with the duplication, it would be another patch.\n>\n> The mesonified and rebased version of the transform patch is attached.\n\nThe patch needs to be rebased as these changes are not required anymore:\ndiff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm\nindex 9e05eb91b1..ec0a3f8097 100644\n--- a/src/tools/msvc/Mkvcbuild.pm\n+++ b/src/tools/msvc/Mkvcbuild.pm\n@@ -43,7 +43,7 @@ my $contrib_extralibs = { 'libpq_pipeline' =>\n['ws2_32.lib'] };\n my $contrib_extraincludes = {};\n my $contrib_extrasource = {};\n my @contrib_excludes = (\n- 'bool_plperl', 'commit_ts',\n+ 'bool_plperl', 'bytea_plperl', 'commit_ts',\n 'hstore_plperl', 'hstore_plpython',\n 'intagg', 'jsonb_plperl',\n 'jsonb_plpython', 'ltree_plpython',\n@@ -791,6 +791,9 @@ sub mkvcbuild\n my $bool_plperl = AddTransformModule(\n 'bool_plperl', 'contrib/bool_plperl',\n 'plperl', 'src/pl/plperl');\n+ my $bytea_plperl = AddTransformModule(\n+ 'bytea_plperl', 'contrib/bytea_plperl',\n+ 'plperl', 'src/pl/plperl');\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 6 Jan 2024 21:21:35 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "Hi\n\nso 6. 1. 2024 v 16:51 odesílatel vignesh C <[email protected]> napsal:\n\n> On Fri, 21 Jul 2023 at 02:59, Ivan Panchenko <[email protected]> wrote:\n> >\n> > Friday, 14 July 2023, 23:27 +03:00 от Tom Lane <[email protected]>:\n> >\n> > =?UTF-8?B?SXZhbiBQYW5jaGVua28=?= <[email protected]> writes:\n> > > Четверг, 6 июля 2023, 14:48 +03:00 от Peter Eisentraut <\n> [email protected] >:\n> > >> If the transform deals with a built-in type, then they should just be\n> > >> added to the respective pl extension directly.\n> >\n> > > The new extension bytea_plperl can be easily moved into plperl now,\n> but what should be do with the existing ones, namely jsonb_plperl and\n> bool_plperl ?\n> > > If we leave them where they are, it would be hard to explain why some\n> transforms are inside plperl while other ones live separately. If we move\n> them into plperl also, wouldn’t it break some compatibility?\n> >\n> > It's kind of a mess, indeed. But I think we could make plperl 1.1\n> > contain the additional transforms and just tell people they have\n> > to drop the obsolete extensions before they upgrade to 1.1.\n> > Fortunately, it doesn't look like functions using a transform\n> > have any hard dependency on the transform, so \"drop extension\n> > jsonb_plperl\" followed by \"alter extension plperl update\" should\n> > work without cascading to all your plperl functions.\n> >\n> > Having said that, we'd still be in the position of having to\n> > explain why some transforms are packaged with plperl and others\n> > not. The distinction between built-in and contrib types might\n> > be obvious to us hackers, but I bet a lot of users see it as\n> > pretty artificial. So maybe the existing packaging design is\n> > fine and we should just look for a way to reduce the code\n> > duplication.\n> >\n> > The code duplication between different transforms is not in C code, but\n> mostly in copy-pasted Makefile, *.control, *.sql and meson-build. These\n> files could be generated from some universal templates. But, keeping in\n> mind Windows compatibility and presence of three build system, this hardly\n> looks like a simplification.\n> > Probably at present time it would be better to keep the existing code\n> duplication until plperl 1.1.\n> > Nevertheless, dealing with code duplication is a wider task than the\n> bytea transform, so let me suggest to keep this extension in the present\n> form. 
If we decide what to do with the duplication, it would be another\n> patch.\n> >\n> > The mesonified and rebased version of the transform patch is attached.\n>\n> The patch needs to be rebased as these changes are not required anymore:\n> diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm\n> index 9e05eb91b1..ec0a3f8097 100644\n> --- a/src/tools/msvc/Mkvcbuild.pm\n> +++ b/src/tools/msvc/Mkvcbuild.pm\n> @@ -43,7 +43,7 @@ my $contrib_extralibs = { 'libpq_pipeline' =>\n> ['ws2_32.lib'] };\n> my $contrib_extraincludes = {};\n> my $contrib_extrasource = {};\n> my @contrib_excludes = (\n> - 'bool_plperl', 'commit_ts',\n> + 'bool_plperl', 'bytea_plperl', 'commit_ts',\n> 'hstore_plperl', 'hstore_plpython',\n> 'intagg', 'jsonb_plperl',\n> 'jsonb_plpython', 'ltree_plpython',\n> @@ -791,6 +791,9 @@ sub mkvcbuild\n> my $bool_plperl = AddTransformModule(\n> 'bool_plperl', 'contrib/bool_plperl',\n> 'plperl', 'src/pl/plperl');\n> + my $bytea_plperl = AddTransformModule(\n> + 'bytea_plperl', 'contrib/bytea_plperl',\n> + 'plperl', 'src/pl/plperl');\n>\n> Regards,\n> Vignesh\n>\n>\n>\nI am checking this patch, it looks well. All tests passed. I am sending a\ncleaned patch.\n\nI did minor formatting cleaning.\n\nI inserted perl reference support - hstore_plperl and json_plperl does it.\n\n+<->/* Dereference references recursively. */\n+<->while (SvROK(in))\n+<-><-->in = SvRV(in);\n\nRegards\n\nPavel",
"msg_date": "Tue, 30 Jan 2024 13:51:17 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "Pavel Stehule <[email protected]> writes:\n\n> I inserted perl reference support - hstore_plperl and json_plperl does it.\n>\n> +<->/* Dereference references recursively. */\n> +<->while (SvROK(in))\n> +<-><-->in = SvRV(in);\n\nThat code in hstore_plperl and json_plperl is only relevant because they\ndeal with non-scalar values (hashes for hstore, and also arrays for\njson) which must be passed as references. The recursive nature of the\ndereferencing is questionable, and masked the bug fixed by commit\n1731e3741cbbf8e0b4481665d7d523bc55117f63.\n\nbytea_plperl only deals with scalars (specifically strings), so should\nnot concern itself with references. In fact, this code breaks returning\nobjects with overloaded stringification, for example:\n\nCREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n TRANSFORM FOR TYPE bytea\n AS $$\n package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n return bless {}, \"StringOverload\";\n $$;\n\nThis makes the server crash with an assertion failure from Perl because\nSvPVbyte() was passed a non-scalar value:\n\npostgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865: Perl_sv_2pv_flags:\nAssertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV && SvTYPE(sv) != SVt_PVFM' failed.\n\nIf I remove the dereferincing loop it succeeds:\n\nSELECT encode(plperlu_overload(), 'escape') AS string;\n string\n--------\n stuff\n(1 row)\n\nAttached is a v2 patch which removes the dereferencing and includes the\nabove example as a test.\n\n- ilmari",
"msg_date": "Tue, 30 Jan 2024 15:43:51 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
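For the output direction discussed above, a hedged sketch (not the actual v2 patch, and complementing the input-direction sketch earlier in this thread): SvPVbyte() is applied directly to whatever SV the function returned, so overloaded stringification keeps working and no dereferencing loop is involved.

#include "postgres.h"

#include "fmgr.h"
#include "plperl.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(plperl_to_bytea_sketch);

Datum
plperl_to_bytea_sketch(PG_FUNCTION_ARGS)
{
    dTHX;
    SV         *in = (SV *) PG_GETARG_POINTER(0);
    STRLEN      len;
    char       *ptr = SvPVbyte(in, len);
    bytea      *result = (bytea *) palloc(len + VARHDRSZ);

    SET_VARSIZE(result, len + VARHDRSZ);
    memcpy(VARDATA(result), ptr, len);

    PG_RETURN_BYTEA_P(result);
}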
{
"msg_contents": "út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n>\n> > I inserted perl reference support - hstore_plperl and json_plperl does\n> it.\n> >\n> > +<->/* Dereference references recursively. */\n> > +<->while (SvROK(in))\n> > +<-><-->in = SvRV(in);\n>\n> That code in hstore_plperl and json_plperl is only relevant because they\n> deal with non-scalar values (hashes for hstore, and also arrays for\n> json) which must be passed as references. The recursive nature of the\n> dereferencing is questionable, and masked the bug fixed by commit\n> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n>\n> bytea_plperl only deals with scalars (specifically strings), so should\n> not concern itself with references. In fact, this code breaks returning\n> objects with overloaded stringification, for example:\n>\n> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n> TRANSFORM FOR TYPE bytea\n> AS $$\n> package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n> return bless {}, \"StringOverload\";\n> $$;\n>\n> This makes the server crash with an assertion failure from Perl because\n> SvPVbyte() was passed a non-scalar value:\n>\n> postgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865:\n> Perl_sv_2pv_flags:\n> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV && SvTYPE(sv)\n> != SVt_PVFM' failed.\n>\n> If I remove the dereferincing loop it succeeds:\n>\n> SELECT encode(plperlu_overload(), 'escape') AS string;\n> string\n> --------\n> stuff\n> (1 row)\n>\n> Attached is a v2 patch which removes the dereferencing and includes the\n> above example as a test.\n>\n\nBut without dereference it returns bad value.\n\nMaybe there should be a check so references cannot be returned? Probably is\nnot safe pass pointers between Perl and Postgres.\n\n\n\n>\n> - ilmari\n>\n>\n\nút 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n\n> I inserted perl reference support - hstore_plperl and json_plperl does it.\n>\n> +<->/* Dereference references recursively. */\n> +<->while (SvROK(in))\n> +<-><-->in = SvRV(in);\n\nThat code in hstore_plperl and json_plperl is only relevant because they\ndeal with non-scalar values (hashes for hstore, and also arrays for\njson) which must be passed as references. The recursive nature of the\ndereferencing is questionable, and masked the bug fixed by commit\n1731e3741cbbf8e0b4481665d7d523bc55117f63.\n\nbytea_plperl only deals with scalars (specifically strings), so should\nnot concern itself with references. 
In fact, this code breaks returning\nobjects with overloaded stringification, for example:\n\nCREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n TRANSFORM FOR TYPE bytea\n AS $$\n package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n return bless {}, \"StringOverload\";\n $$;\n\nThis makes the server crash with an assertion failure from Perl because\nSvPVbyte() was passed a non-scalar value:\n\npostgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865: Perl_sv_2pv_flags:\nAssertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV && SvTYPE(sv) != SVt_PVFM' failed.\n\nIf I remove the dereferincing loop it succeeds:\n\nSELECT encode(plperlu_overload(), 'escape') AS string;\n string\n--------\n stuff\n(1 row)\n\nAttached is a v2 patch which removes the dereferencing and includes the\nabove example as a test.But without dereference it returns bad value.Maybe there should be a check so references cannot be returned? Probably is not safe pass pointers between Perl and Postgres. \n\n- ilmari",
"msg_date": "Tue, 30 Jan 2024 17:05:53 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "Pavel Stehule <[email protected]> writes:\n\n> út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\n> [email protected]> napsal:\n>\n>> Pavel Stehule <[email protected]> writes:\n>>\n>> > I inserted perl reference support - hstore_plperl and json_plperl does\n>> it.\n>> >\n>> > +<->/* Dereference references recursively. */\n>> > +<->while (SvROK(in))\n>> > +<-><-->in = SvRV(in);\n>>\n>> That code in hstore_plperl and json_plperl is only relevant because they\n>> deal with non-scalar values (hashes for hstore, and also arrays for\n>> json) which must be passed as references. The recursive nature of the\n>> dereferencing is questionable, and masked the bug fixed by commit\n>> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n>>\n>> bytea_plperl only deals with scalars (specifically strings), so should\n>> not concern itself with references. In fact, this code breaks returning\n>> objects with overloaded stringification, for example:\n>>\n>> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n>> TRANSFORM FOR TYPE bytea\n>> AS $$\n>> package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n>> return bless {}, \"StringOverload\";\n>> $$;\n>>\n>> This makes the server crash with an assertion failure from Perl because\n>> SvPVbyte() was passed a non-scalar value:\n>>\n>> postgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865:\n>> Perl_sv_2pv_flags:\n>> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV && SvTYPE(sv)\n>> != SVt_PVFM' failed.\n>>\n>> If I remove the dereferincing loop it succeeds:\n>>\n>> SELECT encode(plperlu_overload(), 'escape') AS string;\n>> string\n>> --------\n>> stuff\n>> (1 row)\n>>\n>> Attached is a v2 patch which removes the dereferencing and includes the\n>> above example as a test.\n>>\n>\n> But without dereference it returns bad value.\n\nWhere exactly does it return a bad value? The existing tests pass, and\nthe one I included shows that it does the right thing in that case too.\nIf you pass it an unblessed reference it returns the stringified version\nof that, as expected.\n\nCREATE FUNCTION plperl_reference() RETURNS bytea LANGUAGE plperl\n TRANSFORM FOR TYPE bytea\n AS $$ return []; $$;\n\nSELECT encode(plperl_reference(), 'escape') string;\n string\n-----------------------\n ARRAY(0x559a3109f0a8)\n(1 row)\n\nThis would also crash if the dereferencing loop was left in place.\n\n> Maybe there should be a check so references cannot be returned? Probably is\n> not safe pass pointers between Perl and Postgres.\n\nThere's no reason to ban references, that would break every Perl\nprogrammer's expectations. And there are no pointers being passed,\nSvPVbyte() returns the stringified form of whatever's passed in, which\nis well-behaved for both blessed and unblessed references.\n\nIf we really want to be strict, we should at least allow references to\nobjects that overload stringification, as they are explicitly designed\nto be well-behaved as strings. But that would be a lot of extra code\nfor very little benefit over just letting Perl stringify everything.\n\n- ilmari\n\n\n",
"msg_date": "Tue, 30 Jan 2024 16:18:21 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
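As a concrete illustration of the approach described in the message above, here is a minimal sketch of a bytea output transform that passes the Perl value straight to SvPVbyte(), with no SvROK()/SvRV() dereferencing loop, so overloaded stringification keeps working. The function name plperl_sv_to_bytea and the bare-bones structure are assumptions for illustration only, not the code from the posted patch.

#include "postgres.h"

#include "fmgr.h"
#include "plperl.h"

PG_FUNCTION_INFO_V1(plperl_sv_to_bytea);

Datum
plperl_sv_to_bytea(PG_FUNCTION_ARGS)
{
    SV         *in = (SV *) PG_GETARG_POINTER(0);
    STRLEN      len;
    char       *ptr;
    bytea      *result;

    /*
     * SvPVbyte() yields the raw byte-string form of any scalar, invoking
     * overloaded stringification for blessed references, so no
     * dereferencing loop is needed here.
     */
    ptr = SvPVbyte(in, len);

    result = (bytea *) palloc(len + VARHDRSZ);
    SET_VARSIZE(result, len + VARHDRSZ);
    memcpy(VARDATA(result), ptr, len);

    PG_RETURN_BYTEA_P(result);
}

With a transform shaped like this, the plperlu_overload() example above returns the bytes of "stuff" instead of tripping the Perl assertion.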
{
"msg_contents": "út 30. 1. 2024 v 17:18 odesílatel Dagfinn Ilmari Mannsåker <\[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n>\n> > út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\n> > [email protected]> napsal:\n> >\n> >> Pavel Stehule <[email protected]> writes:\n> >>\n> >> > I inserted perl reference support - hstore_plperl and json_plperl does\n> >> it.\n> >> >\n> >> > +<->/* Dereference references recursively. */\n> >> > +<->while (SvROK(in))\n> >> > +<-><-->in = SvRV(in);\n> >>\n> >> That code in hstore_plperl and json_plperl is only relevant because they\n> >> deal with non-scalar values (hashes for hstore, and also arrays for\n> >> json) which must be passed as references. The recursive nature of the\n> >> dereferencing is questionable, and masked the bug fixed by commit\n> >> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n> >>\n> >> bytea_plperl only deals with scalars (specifically strings), so should\n> >> not concern itself with references. In fact, this code breaks returning\n> >> objects with overloaded stringification, for example:\n> >>\n> >> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n> >> TRANSFORM FOR TYPE bytea\n> >> AS $$\n> >> package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n> >> return bless {}, \"StringOverload\";\n> >> $$;\n> >>\n> >> This makes the server crash with an assertion failure from Perl because\n> >> SvPVbyte() was passed a non-scalar value:\n> >>\n> >> postgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865:\n> >> Perl_sv_2pv_flags:\n> >> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV &&\n> SvTYPE(sv)\n> >> != SVt_PVFM' failed.\n> >>\n> >> If I remove the dereferincing loop it succeeds:\n> >>\n> >> SELECT encode(plperlu_overload(), 'escape') AS string;\n> >> string\n> >> --------\n> >> stuff\n> >> (1 row)\n> >>\n> >> Attached is a v2 patch which removes the dereferencing and includes the\n> >> above example as a test.\n> >>\n> >\n> > But without dereference it returns bad value.\n>\n> Where exactly does it return a bad value? The existing tests pass, and\n> the one I included shows that it does the right thing in that case too.\n> If you pass it an unblessed reference it returns the stringified version\n> of that, as expected.\n>\n\nugly test code\n\n(2024-01-30 13:44:28) postgres=# CREATE or replace FUNCTION\nperl_inverse_bytes(bytea) RETURNS bytea\nTRANSFORM FOR TYPE bytea\nAS $$ my $bytes = pack 'H*', '0123'; my $ref = \\$bytes;\nreturn $ref;\n$$ LANGUAGE plperlu;\nCREATE FUNCTION\n(2024-01-30 13:44:33) postgres=# select perl_inverse_bytes(''), ' '::bytea;\n┌──────────────────────────────────────┬───────┐\n│ perl_inverse_bytes │ bytea │\n╞══════════════════════════════════════╪═══════╡\n│ \\x5343414c41522830783130656134333829 │ \\x20 │\n└──────────────────────────────────────┴───────┘\n(1 row)\n\nexpected\n\n(2024-01-30 13:46:58) postgres=# select perl_inverse_bytes(''), ' '::bytea;\n┌────────────────────┬───────┐\n│ perl_inverse_bytes │ bytea │\n╞════════════════════╪═══════╡\n│ \\x0123 │ \\x20 │\n└────────────────────┴───────┘\n(1 row)\n\n\n>\n> CREATE FUNCTION plperl_reference() RETURNS bytea LANGUAGE plperl\n> TRANSFORM FOR TYPE bytea\n> AS $$ return []; $$;\n>\n> SELECT encode(plperl_reference(), 'escape') string;\n> string\n> -----------------------\n> ARRAY(0x559a3109f0a8)\n> (1 row)\n>\n> This would also crash if the dereferencing loop was left in place.\n>\n> > Maybe there should be a check so references cannot be returned? 
Probably\n> is\n> > not safe pass pointers between Perl and Postgres.\n>\n> There's no reason to ban references, that would break every Perl\n> programmer's expectations. And there are no pointers being passed,\n> SvPVbyte() returns the stringified form of whatever's passed in, which\n> is well-behaved for both blessed and unblessed references.\n>\n> If we really want to be strict, we should at least allow references to\n> objects that overload stringification, as they are explicitly designed\n> to be well-behaved as strings. But that would be a lot of extra code\n> for very little benefit over just letting Perl stringify everything.\n>\n> - ilmari\n>\n\nút 30. 1. 2024 v 17:18 odesílatel Dagfinn Ilmari Mannsåker <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n\n> út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\n> [email protected]> napsal:\n>\n>> Pavel Stehule <[email protected]> writes:\n>>\n>> > I inserted perl reference support - hstore_plperl and json_plperl does\n>> it.\n>> >\n>> > +<->/* Dereference references recursively. */\n>> > +<->while (SvROK(in))\n>> > +<-><-->in = SvRV(in);\n>>\n>> That code in hstore_plperl and json_plperl is only relevant because they\n>> deal with non-scalar values (hashes for hstore, and also arrays for\n>> json) which must be passed as references. The recursive nature of the\n>> dereferencing is questionable, and masked the bug fixed by commit\n>> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n>>\n>> bytea_plperl only deals with scalars (specifically strings), so should\n>> not concern itself with references. In fact, this code breaks returning\n>> objects with overloaded stringification, for example:\n>>\n>> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n>> TRANSFORM FOR TYPE bytea\n>> AS $$\n>> package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n>> return bless {}, \"StringOverload\";\n>> $$;\n>>\n>> This makes the server crash with an assertion failure from Perl because\n>> SvPVbyte() was passed a non-scalar value:\n>>\n>> postgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865:\n>> Perl_sv_2pv_flags:\n>> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV && SvTYPE(sv)\n>> != SVt_PVFM' failed.\n>>\n>> If I remove the dereferincing loop it succeeds:\n>>\n>> SELECT encode(plperlu_overload(), 'escape') AS string;\n>> string\n>> --------\n>> stuff\n>> (1 row)\n>>\n>> Attached is a v2 patch which removes the dereferencing and includes the\n>> above example as a test.\n>>\n>\n> But without dereference it returns bad value.\n\nWhere exactly does it return a bad value? 
The existing tests pass, and\nthe one I included shows that it does the right thing in that case too.\nIf you pass it an unblessed reference it returns the stringified version\nof that, as expected.ugly test code(2024-01-30 13:44:28) postgres=# CREATE or replace FUNCTION perl_inverse_bytes(bytea) RETURNS byteaTRANSFORM FOR TYPE byteaAS $$ my $bytes = pack 'H*', '0123'; my $ref = \\$bytes;return $ref;$$ LANGUAGE plperlu;CREATE FUNCTION(2024-01-30 13:44:33) postgres=# select perl_inverse_bytes(''), ' '::bytea;┌──────────────────────────────────────┬───────┐│ perl_inverse_bytes │ bytea │╞══════════════════════════════════════╪═══════╡│ \\x5343414c41522830783130656134333829 │ \\x20 │└──────────────────────────────────────┴───────┘(1 row)expected(2024-01-30 13:46:58) postgres=# select perl_inverse_bytes(''), ' '::bytea;┌────────────────────┬───────┐│ perl_inverse_bytes │ bytea │╞════════════════════╪═══════╡│ \\x0123 │ \\x20 │└────────────────────┴───────┘(1 row) \n\nCREATE FUNCTION plperl_reference() RETURNS bytea LANGUAGE plperl\n TRANSFORM FOR TYPE bytea\n AS $$ return []; $$;\n\nSELECT encode(plperl_reference(), 'escape') string;\n string\n-----------------------\n ARRAY(0x559a3109f0a8)\n(1 row)\n\nThis would also crash if the dereferencing loop was left in place.\n\n> Maybe there should be a check so references cannot be returned? Probably is\n> not safe pass pointers between Perl and Postgres.\n\nThere's no reason to ban references, that would break every Perl\nprogrammer's expectations. And there are no pointers being passed,\nSvPVbyte() returns the stringified form of whatever's passed in, which\nis well-behaved for both blessed and unblessed references.\n\nIf we really want to be strict, we should at least allow references to\nobjects that overload stringification, as they are explicitly designed\nto be well-behaved as strings. But that would be a lot of extra code\nfor very little benefit over just letting Perl stringify everything.\n\n- ilmari",
"msg_date": "Tue, 30 Jan 2024 17:24:21 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "Pavel Stehule <[email protected]> writes:\n\n> út 30. 1. 2024 v 17:18 odesílatel Dagfinn Ilmari Mannsåker <\n> [email protected]> napsal:\n>\n>> Pavel Stehule <[email protected]> writes:\n>>\n>> > út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\n>> > [email protected]> napsal:\n>> >\n>> >> Pavel Stehule <[email protected]> writes:\n>> >>\n>> >> > I inserted perl reference support - hstore_plperl and json_plperl does\n>> >> it.\n>> >> >\n>> >> > +<->/* Dereference references recursively. */\n>> >> > +<->while (SvROK(in))\n>> >> > +<-><-->in = SvRV(in);\n>> >>\n>> >> That code in hstore_plperl and json_plperl is only relevant because they\n>> >> deal with non-scalar values (hashes for hstore, and also arrays for\n>> >> json) which must be passed as references. The recursive nature of the\n>> >> dereferencing is questionable, and masked the bug fixed by commit\n>> >> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n>> >>\n>> >> bytea_plperl only deals with scalars (specifically strings), so should\n>> >> not concern itself with references. In fact, this code breaks returning\n>> >> objects with overloaded stringification, for example:\n>> >>\n>> >> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n>> >> TRANSFORM FOR TYPE bytea\n>> >> AS $$\n>> >> package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n>> >> return bless {}, \"StringOverload\";\n>> >> $$;\n>> >>\n>> >> This makes the server crash with an assertion failure from Perl because\n>> >> SvPVbyte() was passed a non-scalar value:\n>> >>\n>> >> postgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865:\n>> >> Perl_sv_2pv_flags:\n>> >> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV &&\n>> SvTYPE(sv)\n>> >> != SVt_PVFM' failed.\n>> >>\n>> >> If I remove the dereferincing loop it succeeds:\n>> >>\n>> >> SELECT encode(plperlu_overload(), 'escape') AS string;\n>> >> string\n>> >> --------\n>> >> stuff\n>> >> (1 row)\n>> >>\n>> >> Attached is a v2 patch which removes the dereferencing and includes the\n>> >> above example as a test.\n>> >>\n>> >\n>> > But without dereference it returns bad value.\n>>\n>> Where exactly does it return a bad value? 
The existing tests pass, and\n>> the one I included shows that it does the right thing in that case too.\n>> If you pass it an unblessed reference it returns the stringified version\n>> of that, as expected.\n>>\n>\n> ugly test code\n>\n> (2024-01-30 13:44:28) postgres=# CREATE or replace FUNCTION\n> perl_inverse_bytes(bytea) RETURNS bytea\n> TRANSFORM FOR TYPE bytea\n> AS $$ my $bytes = pack 'H*', '0123'; my $ref = \\$bytes;\n\nYou are returning a reference, not a string.\n\n> return $ref;\n> $$ LANGUAGE plperlu;\n> CREATE FUNCTION\n> (2024-01-30 13:44:33) postgres=# select perl_inverse_bytes(''), ' '::bytea;\n> ┌──────────────────────────────────────┬───────┐\n> │ perl_inverse_bytes │ bytea │\n> ╞══════════════════════════════════════╪═══════╡\n> │ \\x5343414c41522830783130656134333829 │ \\x20 │\n> └──────────────────────────────────────┴───────┘\n> (1 row)\n\n~=# select encode('\\x5343414c41522830783130656134333829', 'escape');\n┌───────────────────┐\n│ encode │\n├───────────────────┤\n│ SCALAR(0x10ea438) │\n└───────────────────┘\n\nThis is how Perl stringifies references in the absence of overloading.\nReturn the byte string directly from your function and it will do the\nright thing:\n\nCREATE FUNCTION plperlu_bytes() RETURNS bytea LANGUAGE plperlu\n TRANSFORM FOR TYPE bytea\n AS $$ return pack 'H*', '0123'; $$;\n\nSELECT plperlu_bytes();\n plperlu_bytes\n---------------\n \\x0123\n(1 row)\n \n\n- ilmari\n\n\n",
"msg_date": "Tue, 30 Jan 2024 16:46:10 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "út 30. 1. 2024 v 17:46 odesílatel Dagfinn Ilmari Mannsåker <\[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n>\n> > út 30. 1. 2024 v 17:18 odesílatel Dagfinn Ilmari Mannsåker <\n> > [email protected]> napsal:\n> >\n> >> Pavel Stehule <[email protected]> writes:\n> >>\n> >> > út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\n> >> > [email protected]> napsal:\n> >> >\n> >> >> Pavel Stehule <[email protected]> writes:\n> >> >>\n> >> >> > I inserted perl reference support - hstore_plperl and json_plperl\n> does\n> >> >> it.\n> >> >> >\n> >> >> > +<->/* Dereference references recursively. */\n> >> >> > +<->while (SvROK(in))\n> >> >> > +<-><-->in = SvRV(in);\n> >> >>\n> >> >> That code in hstore_plperl and json_plperl is only relevant because\n> they\n> >> >> deal with non-scalar values (hashes for hstore, and also arrays for\n> >> >> json) which must be passed as references. The recursive nature of\n> the\n> >> >> dereferencing is questionable, and masked the bug fixed by commit\n> >> >> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n> >> >>\n> >> >> bytea_plperl only deals with scalars (specifically strings), so\n> should\n> >> >> not concern itself with references. In fact, this code breaks\n> returning\n> >> >> objects with overloaded stringification, for example:\n> >> >>\n> >> >> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n> >> >> TRANSFORM FOR TYPE bytea\n> >> >> AS $$\n> >> >> package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n> >> >> return bless {}, \"StringOverload\";\n> >> >> $$;\n> >> >>\n> >> >> This makes the server crash with an assertion failure from Perl\n> because\n> >> >> SvPVbyte() was passed a non-scalar value:\n> >> >>\n> >> >> postgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865:\n> >> >> Perl_sv_2pv_flags:\n> >> >> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV &&\n> >> SvTYPE(sv)\n> >> >> != SVt_PVFM' failed.\n> >> >>\n> >> >> If I remove the dereferincing loop it succeeds:\n> >> >>\n> >> >> SELECT encode(plperlu_overload(), 'escape') AS string;\n> >> >> string\n> >> >> --------\n> >> >> stuff\n> >> >> (1 row)\n> >> >>\n> >> >> Attached is a v2 patch which removes the dereferencing and includes\n> the\n> >> >> above example as a test.\n> >> >>\n> >> >\n> >> > But without dereference it returns bad value.\n> >>\n> >> Where exactly does it return a bad value? 
The existing tests pass, and\n> >> the one I included shows that it does the right thing in that case too.\n> >> If you pass it an unblessed reference it returns the stringified version\n> >> of that, as expected.\n> >>\n> >\n> > ugly test code\n> >\n> > (2024-01-30 13:44:28) postgres=# CREATE or replace FUNCTION\n> > perl_inverse_bytes(bytea) RETURNS bytea\n> > TRANSFORM FOR TYPE bytea\n> > AS $$ my $bytes = pack 'H*', '0123'; my $ref = \\$bytes;\n>\n> You are returning a reference, not a string.\n>\n\nI know, but for this case, should not be raised an error?\n\n\n>\n> > return $ref;\n> > $$ LANGUAGE plperlu;\n> > CREATE FUNCTION\n> > (2024-01-30 13:44:33) postgres=# select perl_inverse_bytes(''), '\n> '::bytea;\n> > ┌──────────────────────────────────────┬───────┐\n> > │ perl_inverse_bytes │ bytea │\n> > ╞══════════════════════════════════════╪═══════╡\n> > │ \\x5343414c41522830783130656134333829 │ \\x20 │\n> > └──────────────────────────────────────┴───────┘\n> > (1 row)\n>\n> ~=# select encode('\\x5343414c41522830783130656134333829', 'escape');\n> ┌───────────────────┐\n> │ encode │\n> ├───────────────────┤\n> │ SCALAR(0x10ea438) │\n> └───────────────────┘\n>\n> This is how Perl stringifies references in the absence of overloading.\n> Return the byte string directly from your function and it will do the\n> right thing:\n>\n> CREATE FUNCTION plperlu_bytes() RETURNS bytea LANGUAGE plperlu\n> TRANSFORM FOR TYPE bytea\n> AS $$ return pack 'H*', '0123'; $$;\n>\n> SELECT plperlu_bytes();\n> plperlu_bytes\n> ---------------\n> \\x0123\n> (1 row)\n>\n>\n> - ilmari\n>\n\nút 30. 1. 2024 v 17:46 odesílatel Dagfinn Ilmari Mannsåker <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n\n> út 30. 1. 2024 v 17:18 odesílatel Dagfinn Ilmari Mannsåker <\n> [email protected]> napsal:\n>\n>> Pavel Stehule <[email protected]> writes:\n>>\n>> > út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\n>> > [email protected]> napsal:\n>> >\n>> >> Pavel Stehule <[email protected]> writes:\n>> >>\n>> >> > I inserted perl reference support - hstore_plperl and json_plperl does\n>> >> it.\n>> >> >\n>> >> > +<->/* Dereference references recursively. */\n>> >> > +<->while (SvROK(in))\n>> >> > +<-><-->in = SvRV(in);\n>> >>\n>> >> That code in hstore_plperl and json_plperl is only relevant because they\n>> >> deal with non-scalar values (hashes for hstore, and also arrays for\n>> >> json) which must be passed as references. The recursive nature of the\n>> >> dereferencing is questionable, and masked the bug fixed by commit\n>> >> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n>> >>\n>> >> bytea_plperl only deals with scalars (specifically strings), so should\n>> >> not concern itself with references. 
In fact, this code breaks returning\n>> >> objects with overloaded stringification, for example:\n>> >>\n>> >> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n>> >> TRANSFORM FOR TYPE bytea\n>> >> AS $$\n>> >> package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n>> >> return bless {}, \"StringOverload\";\n>> >> $$;\n>> >>\n>> >> This makes the server crash with an assertion failure from Perl because\n>> >> SvPVbyte() was passed a non-scalar value:\n>> >>\n>> >> postgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865:\n>> >> Perl_sv_2pv_flags:\n>> >> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV &&\n>> SvTYPE(sv)\n>> >> != SVt_PVFM' failed.\n>> >>\n>> >> If I remove the dereferincing loop it succeeds:\n>> >>\n>> >> SELECT encode(plperlu_overload(), 'escape') AS string;\n>> >> string\n>> >> --------\n>> >> stuff\n>> >> (1 row)\n>> >>\n>> >> Attached is a v2 patch which removes the dereferencing and includes the\n>> >> above example as a test.\n>> >>\n>> >\n>> > But without dereference it returns bad value.\n>>\n>> Where exactly does it return a bad value? The existing tests pass, and\n>> the one I included shows that it does the right thing in that case too.\n>> If you pass it an unblessed reference it returns the stringified version\n>> of that, as expected.\n>>\n>\n> ugly test code\n>\n> (2024-01-30 13:44:28) postgres=# CREATE or replace FUNCTION\n> perl_inverse_bytes(bytea) RETURNS bytea\n> TRANSFORM FOR TYPE bytea\n> AS $$ my $bytes = pack 'H*', '0123'; my $ref = \\$bytes;\n\nYou are returning a reference, not a string.I know, but for this case, should not be raised an error? \n\n> return $ref;\n> $$ LANGUAGE plperlu;\n> CREATE FUNCTION\n> (2024-01-30 13:44:33) postgres=# select perl_inverse_bytes(''), ' '::bytea;\n> ┌──────────────────────────────────────┬───────┐\n> │ perl_inverse_bytes │ bytea │\n> ╞══════════════════════════════════════╪═══════╡\n> │ \\x5343414c41522830783130656134333829 │ \\x20 │\n> └──────────────────────────────────────┴───────┘\n> (1 row)\n\n~=# select encode('\\x5343414c41522830783130656134333829', 'escape');\n┌───────────────────┐\n│ encode │\n├───────────────────┤\n│ SCALAR(0x10ea438) │\n└───────────────────┘\n\nThis is how Perl stringifies references in the absence of overloading.\nReturn the byte string directly from your function and it will do the\nright thing:\n\nCREATE FUNCTION plperlu_bytes() RETURNS bytea LANGUAGE plperlu\n TRANSFORM FOR TYPE bytea\n AS $$ return pack 'H*', '0123'; $$;\n\nSELECT plperlu_bytes();\n plperlu_bytes\n---------------\n \\x0123\n(1 row)\n\n\n- ilmari",
"msg_date": "Tue, 30 Jan 2024 17:49:59 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "Pavel Stehule <[email protected]> writes:\n\n> út 30. 1. 2024 v 17:46 odesílatel Dagfinn Ilmari Mannsåker <\n> [email protected]> napsal:\n>\n>> Pavel Stehule <[email protected]> writes:\n>>\n>> > út 30. 1. 2024 v 17:18 odesílatel Dagfinn Ilmari Mannsåker <\n>> > [email protected]> napsal:\n>> >\n>> >> Pavel Stehule <[email protected]> writes:\n>> >>\n>> >> > út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\n>> >> > [email protected]> napsal:\n>> >> >\n>> >> >> Pavel Stehule <[email protected]> writes:\n>> >> >>\n>> >> >> > I inserted perl reference support - hstore_plperl and json_plperl\n>> does\n>> >> >> it.\n>> >> >> >\n>> >> >> > +<->/* Dereference references recursively. */\n>> >> >> > +<->while (SvROK(in))\n>> >> >> > +<-><-->in = SvRV(in);\n>> >> >>\n>> >> >> That code in hstore_plperl and json_plperl is only relevant because\n>> they\n>> >> >> deal with non-scalar values (hashes for hstore, and also arrays for\n>> >> >> json) which must be passed as references. The recursive nature of\n>> the\n>> >> >> dereferencing is questionable, and masked the bug fixed by commit\n>> >> >> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n>> >> >>\n>> >> >> bytea_plperl only deals with scalars (specifically strings), so\n>> should\n>> >> >> not concern itself with references. In fact, this code breaks\n>> returning\n>> >> >> objects with overloaded stringification, for example:\n>> >> >>\n>> >> >> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n>> >> >> TRANSFORM FOR TYPE bytea\n>> >> >> AS $$\n>> >> >> package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n>> >> >> return bless {}, \"StringOverload\";\n>> >> >> $$;\n>> >> >>\n>> >> >> This makes the server crash with an assertion failure from Perl\n>> because\n>> >> >> SvPVbyte() was passed a non-scalar value:\n>> >> >>\n>> >> >> postgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865:\n>> >> >> Perl_sv_2pv_flags:\n>> >> >> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV &&\n>> >> SvTYPE(sv)\n>> >> >> != SVt_PVFM' failed.\n>> >> >>\n>> >> >> If I remove the dereferincing loop it succeeds:\n>> >> >>\n>> >> >> SELECT encode(plperlu_overload(), 'escape') AS string;\n>> >> >> string\n>> >> >> --------\n>> >> >> stuff\n>> >> >> (1 row)\n>> >> >>\n>> >> >> Attached is a v2 patch which removes the dereferencing and includes\n>> the\n>> >> >> above example as a test.\n>> >> >>\n>> >> >\n>> >> > But without dereference it returns bad value.\n>> >>\n>> >> Where exactly does it return a bad value? 
The existing tests pass, and\n>> >> the one I included shows that it does the right thing in that case too.\n>> >> If you pass it an unblessed reference it returns the stringified version\n>> >> of that, as expected.\n>> >>\n>> >\n>> > ugly test code\n>> >\n>> > (2024-01-30 13:44:28) postgres=# CREATE or replace FUNCTION\n>> > perl_inverse_bytes(bytea) RETURNS bytea\n>> > TRANSFORM FOR TYPE bytea\n>> > AS $$ my $bytes = pack 'H*', '0123'; my $ref = \\$bytes;\n>>\n>> You are returning a reference, not a string.\n>>\n>\n> I know, but for this case, should not be raised an error?\n\nI don't think so, as I explained in my previous reply:\n\n> There's no reason to ban references, that would break every Perl\n> programmer's expectations.\n\nTo elaborate on this: when a function is defined to return a string\n(which bytea effectively is, as far as Perl is converned), I as a Perl\nprogrammer would expect PL/Perl to just stringify whatever value I\nreturned, according to the usual Perl rules.\n\nI also said:\n\n> If we really want to be strict, we should at least allow references to\n> objects that overload stringification, as they are explicitly designed\n> to be well-behaved as strings. But that would be a lot of extra code\n> for very little benefit over just letting Perl stringify everything.\n\nBy \"a lot of code\", I mean everything `string_amg`-related in the\namagic_applies() function\n(https://github.com/Perl/perl5/blob/v5.38.0/gv.c#L3401-L3545). We can't\njust call it: it's only available since Perl 5.38 (released last year),\nand we support Perl versions all the way back to 5.14.\n\n- ilmari\n\n\n",
"msg_date": "Tue, 30 Jan 2024 17:26:45 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "út 30. 1. 2024 v 18:26 odesílatel Dagfinn Ilmari Mannsåker <\[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n>\n> > út 30. 1. 2024 v 17:46 odesílatel Dagfinn Ilmari Mannsåker <\n> > [email protected]> napsal:\n> >\n> >> Pavel Stehule <[email protected]> writes:\n> >>\n> >> > út 30. 1. 2024 v 17:18 odesílatel Dagfinn Ilmari Mannsåker <\n> >> > [email protected]> napsal:\n> >> >\n> >> >> Pavel Stehule <[email protected]> writes:\n> >> >>\n> >> >> > út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\n> >> >> > [email protected]> napsal:\n> >> >> >\n> >> >> >> Pavel Stehule <[email protected]> writes:\n> >> >> >>\n> >> >> >> > I inserted perl reference support - hstore_plperl and\n> json_plperl\n> >> does\n> >> >> >> it.\n> >> >> >> >\n> >> >> >> > +<->/* Dereference references recursively. */\n> >> >> >> > +<->while (SvROK(in))\n> >> >> >> > +<-><-->in = SvRV(in);\n> >> >> >>\n> >> >> >> That code in hstore_plperl and json_plperl is only relevant\n> because\n> >> they\n> >> >> >> deal with non-scalar values (hashes for hstore, and also arrays\n> for\n> >> >> >> json) which must be passed as references. The recursive nature of\n> >> the\n> >> >> >> dereferencing is questionable, and masked the bug fixed by commit\n> >> >> >> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n> >> >> >>\n> >> >> >> bytea_plperl only deals with scalars (specifically strings), so\n> >> should\n> >> >> >> not concern itself with references. In fact, this code breaks\n> >> returning\n> >> >> >> objects with overloaded stringification, for example:\n> >> >> >>\n> >> >> >> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n> >> >> >> TRANSFORM FOR TYPE bytea\n> >> >> >> AS $$\n> >> >> >> package StringOverload { use overload '\"\"' => sub { \"stuff\"\n> }; }\n> >> >> >> return bless {}, \"StringOverload\";\n> >> >> >> $$;\n> >> >> >>\n> >> >> >> This makes the server crash with an assertion failure from Perl\n> >> because\n> >> >> >> SvPVbyte() was passed a non-scalar value:\n> >> >> >>\n> >> >> >> postgres: ilmari regression_bytea_plperl [local] SELECT:\n> sv.c:2865:\n> >> >> >> Perl_sv_2pv_flags:\n> >> >> >> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV &&\n> >> >> SvTYPE(sv)\n> >> >> >> != SVt_PVFM' failed.\n> >> >> >>\n> >> >> >> If I remove the dereferincing loop it succeeds:\n> >> >> >>\n> >> >> >> SELECT encode(plperlu_overload(), 'escape') AS string;\n> >> >> >> string\n> >> >> >> --------\n> >> >> >> stuff\n> >> >> >> (1 row)\n> >> >> >>\n> >> >> >> Attached is a v2 patch which removes the dereferencing and\n> includes\n> >> the\n> >> >> >> above example as a test.\n> >> >> >>\n> >> >> >\n> >> >> > But without dereference it returns bad value.\n> >> >>\n> >> >> Where exactly does it return a bad value? 
The existing tests pass,\n> and\n> >> >> the one I included shows that it does the right thing in that case\n> too.\n> >> >> If you pass it an unblessed reference it returns the stringified\n> version\n> >> >> of that, as expected.\n> >> >>\n> >> >\n> >> > ugly test code\n> >> >\n> >> > (2024-01-30 13:44:28) postgres=# CREATE or replace FUNCTION\n> >> > perl_inverse_bytes(bytea) RETURNS bytea\n> >> > TRANSFORM FOR TYPE bytea\n> >> > AS $$ my $bytes = pack 'H*', '0123'; my $ref = \\$bytes;\n> >>\n> >> You are returning a reference, not a string.\n> >>\n> >\n> > I know, but for this case, should not be raised an error?\n>\n> I don't think so, as I explained in my previous reply:\n>\n> > There's no reason to ban references, that would break every Perl\n> > programmer's expectations.\n>\n> To elaborate on this: when a function is defined to return a string\n> (which bytea effectively is, as far as Perl is converned), I as a Perl\n> programmer would expect PL/Perl to just stringify whatever value I\n> returned, according to the usual Perl rules.\n>\n\nok\n\nPavel\n\n\n>\n> I also said:\n>\n> > If we really want to be strict, we should at least allow references to\n> > objects that overload stringification, as they are explicitly designed\n> > to be well-behaved as strings. But that would be a lot of extra code\n> > for very little benefit over just letting Perl stringify everything.\n>\n\n> By \"a lot of code\", I mean everything `string_amg`-related in the\n> amagic_applies() function\n> (https://github.com/Perl/perl5/blob/v5.38.0/gv.c#L3401-L3545). We can't\n> just call it: it's only available since Perl 5.38 (released last year),\n> and we support Perl versions all the way back to 5.14.\n>\n> - ilmari\n>\n\nút 30. 1. 2024 v 18:26 odesílatel Dagfinn Ilmari Mannsåker <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n\n> út 30. 1. 2024 v 17:46 odesílatel Dagfinn Ilmari Mannsåker <\n> [email protected]> napsal:\n>\n>> Pavel Stehule <[email protected]> writes:\n>>\n>> > út 30. 1. 2024 v 17:18 odesílatel Dagfinn Ilmari Mannsåker <\n>> > [email protected]> napsal:\n>> >\n>> >> Pavel Stehule <[email protected]> writes:\n>> >>\n>> >> > út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\n>> >> > [email protected]> napsal:\n>> >> >\n>> >> >> Pavel Stehule <[email protected]> writes:\n>> >> >>\n>> >> >> > I inserted perl reference support - hstore_plperl and json_plperl\n>> does\n>> >> >> it.\n>> >> >> >\n>> >> >> > +<->/* Dereference references recursively. */\n>> >> >> > +<->while (SvROK(in))\n>> >> >> > +<-><-->in = SvRV(in);\n>> >> >>\n>> >> >> That code in hstore_plperl and json_plperl is only relevant because\n>> they\n>> >> >> deal with non-scalar values (hashes for hstore, and also arrays for\n>> >> >> json) which must be passed as references. The recursive nature of\n>> the\n>> >> >> dereferencing is questionable, and masked the bug fixed by commit\n>> >> >> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n>> >> >>\n>> >> >> bytea_plperl only deals with scalars (specifically strings), so\n>> should\n>> >> >> not concern itself with references. 
In fact, this code breaks\n>> returning\n>> >> >> objects with overloaded stringification, for example:\n>> >> >>\n>> >> >> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n>> >> >> TRANSFORM FOR TYPE bytea\n>> >> >> AS $$\n>> >> >> package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n>> >> >> return bless {}, \"StringOverload\";\n>> >> >> $$;\n>> >> >>\n>> >> >> This makes the server crash with an assertion failure from Perl\n>> because\n>> >> >> SvPVbyte() was passed a non-scalar value:\n>> >> >>\n>> >> >> postgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865:\n>> >> >> Perl_sv_2pv_flags:\n>> >> >> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV &&\n>> >> SvTYPE(sv)\n>> >> >> != SVt_PVFM' failed.\n>> >> >>\n>> >> >> If I remove the dereferincing loop it succeeds:\n>> >> >>\n>> >> >> SELECT encode(plperlu_overload(), 'escape') AS string;\n>> >> >> string\n>> >> >> --------\n>> >> >> stuff\n>> >> >> (1 row)\n>> >> >>\n>> >> >> Attached is a v2 patch which removes the dereferencing and includes\n>> the\n>> >> >> above example as a test.\n>> >> >>\n>> >> >\n>> >> > But without dereference it returns bad value.\n>> >>\n>> >> Where exactly does it return a bad value? The existing tests pass, and\n>> >> the one I included shows that it does the right thing in that case too.\n>> >> If you pass it an unblessed reference it returns the stringified version\n>> >> of that, as expected.\n>> >>\n>> >\n>> > ugly test code\n>> >\n>> > (2024-01-30 13:44:28) postgres=# CREATE or replace FUNCTION\n>> > perl_inverse_bytes(bytea) RETURNS bytea\n>> > TRANSFORM FOR TYPE bytea\n>> > AS $$ my $bytes = pack 'H*', '0123'; my $ref = \\$bytes;\n>>\n>> You are returning a reference, not a string.\n>>\n>\n> I know, but for this case, should not be raised an error?\n\nI don't think so, as I explained in my previous reply:\n\n> There's no reason to ban references, that would break every Perl\n> programmer's expectations.\n\nTo elaborate on this: when a function is defined to return a string\n(which bytea effectively is, as far as Perl is converned), I as a Perl\nprogrammer would expect PL/Perl to just stringify whatever value I\nreturned, according to the usual Perl rules.okPavel \n\nI also said:\n\n> If we really want to be strict, we should at least allow references to\n> objects that overload stringification, as they are explicitly designed\n> to be well-behaved as strings. But that would be a lot of extra code\n> for very little benefit over just letting Perl stringify everything. \n\nBy \"a lot of code\", I mean everything `string_amg`-related in the\namagic_applies() function\n(https://github.com/Perl/perl5/blob/v5.38.0/gv.c#L3401-L3545). We can't\njust call it: it's only available since Perl 5.38 (released last year),\nand we support Perl versions all the way back to 5.14.\n\n- ilmari",
"msg_date": "Tue, 30 Jan 2024 18:35:55 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "Hi\n\nút 30. 1. 2024 v 18:35 odesílatel Pavel Stehule <[email protected]>\nnapsal:\n\n>\n>\n> út 30. 1. 2024 v 18:26 odesílatel Dagfinn Ilmari Mannsåker <\n> [email protected]> napsal:\n>\n>> Pavel Stehule <[email protected]> writes:\n>>\n>> > út 30. 1. 2024 v 17:46 odesílatel Dagfinn Ilmari Mannsåker <\n>> > [email protected]> napsal:\n>> >\n>> >> Pavel Stehule <[email protected]> writes:\n>> >>\n>> >> > út 30. 1. 2024 v 17:18 odesílatel Dagfinn Ilmari Mannsåker <\n>> >> > [email protected]> napsal:\n>> >> >\n>> >> >> Pavel Stehule <[email protected]> writes:\n>> >> >>\n>> >> >> > út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\n>> >> >> > [email protected]> napsal:\n>> >> >> >\n>> >> >> >> Pavel Stehule <[email protected]> writes:\n>> >> >> >>\n>> >> >> >> > I inserted perl reference support - hstore_plperl and\n>> json_plperl\n>> >> does\n>> >> >> >> it.\n>> >> >> >> >\n>> >> >> >> > +<->/* Dereference references recursively. */\n>> >> >> >> > +<->while (SvROK(in))\n>> >> >> >> > +<-><-->in = SvRV(in);\n>> >> >> >>\n>> >> >> >> That code in hstore_plperl and json_plperl is only relevant\n>> because\n>> >> they\n>> >> >> >> deal with non-scalar values (hashes for hstore, and also arrays\n>> for\n>> >> >> >> json) which must be passed as references. The recursive nature\n>> of\n>> >> the\n>> >> >> >> dereferencing is questionable, and masked the bug fixed by commit\n>> >> >> >> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n>> >> >> >>\n>> >> >> >> bytea_plperl only deals with scalars (specifically strings), so\n>> >> should\n>> >> >> >> not concern itself with references. In fact, this code breaks\n>> >> returning\n>> >> >> >> objects with overloaded stringification, for example:\n>> >> >> >>\n>> >> >> >> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n>> >> >> >> TRANSFORM FOR TYPE bytea\n>> >> >> >> AS $$\n>> >> >> >> package StringOverload { use overload '\"\"' => sub { \"stuff\"\n>> }; }\n>> >> >> >> return bless {}, \"StringOverload\";\n>> >> >> >> $$;\n>> >> >> >>\n>> >> >> >> This makes the server crash with an assertion failure from Perl\n>> >> because\n>> >> >> >> SvPVbyte() was passed a non-scalar value:\n>> >> >> >>\n>> >> >> >> postgres: ilmari regression_bytea_plperl [local] SELECT:\n>> sv.c:2865:\n>> >> >> >> Perl_sv_2pv_flags:\n>> >> >> >> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV &&\n>> >> >> SvTYPE(sv)\n>> >> >> >> != SVt_PVFM' failed.\n>> >> >> >>\n>> >> >> >> If I remove the dereferincing loop it succeeds:\n>> >> >> >>\n>> >> >> >> SELECT encode(plperlu_overload(), 'escape') AS string;\n>> >> >> >> string\n>> >> >> >> --------\n>> >> >> >> stuff\n>> >> >> >> (1 row)\n>> >> >> >>\n>> >> >> >> Attached is a v2 patch which removes the dereferencing and\n>> includes\n>> >> the\n>> >> >> >> above example as a test.\n>> >> >> >>\n>> >> >> >\n>> >> >> > But without dereference it returns bad value.\n>> >> >>\n>> >> >> Where exactly does it return a bad value? 
The existing tests pass,\n>> and\n>> >> >> the one I included shows that it does the right thing in that case\n>> too.\n>> >> >> If you pass it an unblessed reference it returns the stringified\n>> version\n>> >> >> of that, as expected.\n>> >> >>\n>> >> >\n>> >> > ugly test code\n>> >> >\n>> >> > (2024-01-30 13:44:28) postgres=# CREATE or replace FUNCTION\n>> >> > perl_inverse_bytes(bytea) RETURNS bytea\n>> >> > TRANSFORM FOR TYPE bytea\n>> >> > AS $$ my $bytes = pack 'H*', '0123'; my $ref = \\$bytes;\n>> >>\n>> >> You are returning a reference, not a string.\n>> >>\n>> >\n>> > I know, but for this case, should not be raised an error?\n>>\n>> I don't think so, as I explained in my previous reply:\n>>\n>> > There's no reason to ban references, that would break every Perl\n>> > programmer's expectations.\n>>\n>> To elaborate on this: when a function is defined to return a string\n>> (which bytea effectively is, as far as Perl is converned), I as a Perl\n>> programmer would expect PL/Perl to just stringify whatever value I\n>> returned, according to the usual Perl rules.\n>>\n>\n> ok\n>\n> Pavel\n>\n>\n>>\n>> I also said:\n>>\n>> > If we really want to be strict, we should at least allow references to\n>> > objects that overload stringification, as they are explicitly designed\n>> > to be well-behaved as strings. But that would be a lot of extra code\n>> > for very little benefit over just letting Perl stringify everything.\n>>\n>\n>> By \"a lot of code\", I mean everything `string_amg`-related in the\n>> amagic_applies() function\n>> (https://github.com/Perl/perl5/blob/v5.38.0/gv.c#L3401-L3545). We can't\n>> just call it: it's only available since Perl 5.38 (released last year),\n>> and we support Perl versions all the way back to 5.14.\n>>\n>> - ilmari\n>>\n>\nI marked this patch as ready for committer.\n\nIt is almost trivial, make check-world, make doc passed\n\nRegards\n\nPavel\n\nHiút 30. 1. 2024 v 18:35 odesílatel Pavel Stehule <[email protected]> napsal:út 30. 1. 2024 v 18:26 odesílatel Dagfinn Ilmari Mannsåker <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n\n> út 30. 1. 2024 v 17:46 odesílatel Dagfinn Ilmari Mannsåker <\n> [email protected]> napsal:\n>\n>> Pavel Stehule <[email protected]> writes:\n>>\n>> > út 30. 1. 2024 v 17:18 odesílatel Dagfinn Ilmari Mannsåker <\n>> > [email protected]> napsal:\n>> >\n>> >> Pavel Stehule <[email protected]> writes:\n>> >>\n>> >> > út 30. 1. 2024 v 16:43 odesílatel Dagfinn Ilmari Mannsåker <\n>> >> > [email protected]> napsal:\n>> >> >\n>> >> >> Pavel Stehule <[email protected]> writes:\n>> >> >>\n>> >> >> > I inserted perl reference support - hstore_plperl and json_plperl\n>> does\n>> >> >> it.\n>> >> >> >\n>> >> >> > +<->/* Dereference references recursively. */\n>> >> >> > +<->while (SvROK(in))\n>> >> >> > +<-><-->in = SvRV(in);\n>> >> >>\n>> >> >> That code in hstore_plperl and json_plperl is only relevant because\n>> they\n>> >> >> deal with non-scalar values (hashes for hstore, and also arrays for\n>> >> >> json) which must be passed as references. The recursive nature of\n>> the\n>> >> >> dereferencing is questionable, and masked the bug fixed by commit\n>> >> >> 1731e3741cbbf8e0b4481665d7d523bc55117f63.\n>> >> >>\n>> >> >> bytea_plperl only deals with scalars (specifically strings), so\n>> should\n>> >> >> not concern itself with references. 
In fact, this code breaks\n>> returning\n>> >> >> objects with overloaded stringification, for example:\n>> >> >>\n>> >> >> CREATE FUNCTION plperlu_overload() RETURNS bytea LANGUAGE plperlu\n>> >> >> TRANSFORM FOR TYPE bytea\n>> >> >> AS $$\n>> >> >> package StringOverload { use overload '\"\"' => sub { \"stuff\" }; }\n>> >> >> return bless {}, \"StringOverload\";\n>> >> >> $$;\n>> >> >>\n>> >> >> This makes the server crash with an assertion failure from Perl\n>> because\n>> >> >> SvPVbyte() was passed a non-scalar value:\n>> >> >>\n>> >> >> postgres: ilmari regression_bytea_plperl [local] SELECT: sv.c:2865:\n>> >> >> Perl_sv_2pv_flags:\n>> >> >> Assertion `SvTYPE(sv) != SVt_PVAV && SvTYPE(sv) != SVt_PVHV &&\n>> >> SvTYPE(sv)\n>> >> >> != SVt_PVFM' failed.\n>> >> >>\n>> >> >> If I remove the dereferincing loop it succeeds:\n>> >> >>\n>> >> >> SELECT encode(plperlu_overload(), 'escape') AS string;\n>> >> >> string\n>> >> >> --------\n>> >> >> stuff\n>> >> >> (1 row)\n>> >> >>\n>> >> >> Attached is a v2 patch which removes the dereferencing and includes\n>> the\n>> >> >> above example as a test.\n>> >> >>\n>> >> >\n>> >> > But without dereference it returns bad value.\n>> >>\n>> >> Where exactly does it return a bad value? The existing tests pass, and\n>> >> the one I included shows that it does the right thing in that case too.\n>> >> If you pass it an unblessed reference it returns the stringified version\n>> >> of that, as expected.\n>> >>\n>> >\n>> > ugly test code\n>> >\n>> > (2024-01-30 13:44:28) postgres=# CREATE or replace FUNCTION\n>> > perl_inverse_bytes(bytea) RETURNS bytea\n>> > TRANSFORM FOR TYPE bytea\n>> > AS $$ my $bytes = pack 'H*', '0123'; my $ref = \\$bytes;\n>>\n>> You are returning a reference, not a string.\n>>\n>\n> I know, but for this case, should not be raised an error?\n\nI don't think so, as I explained in my previous reply:\n\n> There's no reason to ban references, that would break every Perl\n> programmer's expectations.\n\nTo elaborate on this: when a function is defined to return a string\n(which bytea effectively is, as far as Perl is converned), I as a Perl\nprogrammer would expect PL/Perl to just stringify whatever value I\nreturned, according to the usual Perl rules.okPavel \n\nI also said:\n\n> If we really want to be strict, we should at least allow references to\n> objects that overload stringification, as they are explicitly designed\n> to be well-behaved as strings. But that would be a lot of extra code\n> for very little benefit over just letting Perl stringify everything. \n\nBy \"a lot of code\", I mean everything `string_amg`-related in the\namagic_applies() function\n(https://github.com/Perl/perl5/blob/v5.38.0/gv.c#L3401-L3545). We can't\njust call it: it's only available since Perl 5.38 (released last year),\nand we support Perl versions all the way back to 5.14.\n\n- ilmariI marked this patch as ready for committer. It is almost trivial, make check-world, make doc passedRegardsPavel",
"msg_date": "Tue, 30 Jan 2024 19:45:09 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "Hi!\n\nOn Tue, Jan 30, 2024 at 8:46 PM Pavel Stehule <[email protected]> wrote:\n> I marked this patch as ready for committer.\n\nThe last version of the patch still provides transform for builtin\ntype in a separate extension. As discussed upthread such transforms\ndon't need separate extensions, and could be provided as part of\nupgrades of existing extensions. There is probably no consensus yet\non what to do with existing extensions like jsonb_plperl and\njsonb_plpython, but we clearly shouldn't spread such cases.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 27 Feb 2024 22:03:33 +0200",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "út 27. 2. 2024 v 21:03 odesílatel Alexander Korotkov <[email protected]>\nnapsal:\n\n> Hi!\n>\n> On Tue, Jan 30, 2024 at 8:46 PM Pavel Stehule <[email protected]>\n> wrote:\n> > I marked this patch as ready for committer.\n>\n> The last version of the patch still provides transform for builtin\n> type in a separate extension. As discussed upthread such transforms\n> don't need separate extensions, and could be provided as part of\n> upgrades of existing extensions. There is probably no consensus yet\n> on what to do with existing extensions like jsonb_plperl and\n> jsonb_plpython, but we clearly shouldn't spread such cases.\n>\n\n+1\n\nPavel\n\n\n> ------\n> Regards,\n> Alexander Korotkov\n>\n\nút 27. 2. 2024 v 21:03 odesílatel Alexander Korotkov <[email protected]> napsal:Hi!\n\nOn Tue, Jan 30, 2024 at 8:46 PM Pavel Stehule <[email protected]> wrote:\n> I marked this patch as ready for committer.\n\nThe last version of the patch still provides transform for builtin\ntype in a separate extension. As discussed upthread such transforms\ndon't need separate extensions, and could be provided as part of\nupgrades of existing extensions. There is probably no consensus yet\non what to do with existing extensions like jsonb_plperl and\njsonb_plpython, but we clearly shouldn't spread such cases.+1Pavel\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 28 Feb 2024 05:02:40 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
},
{
"msg_contents": "Alexander Korotkov <[email protected]> writes:\n> On Tue, Jan 30, 2024 at 8:46 PM Pavel Stehule <[email protected]> wrote:\n>> I marked this patch as ready for committer.\n\n> The last version of the patch still provides transform for builtin\n> type in a separate extension. As discussed upthread such transforms\n> don't need separate extensions, and could be provided as part of\n> upgrades of existing extensions. There is probably no consensus yet\n> on what to do with existing extensions like jsonb_plperl and\n> jsonb_plpython, but we clearly shouldn't spread such cases.\n\nYeah, I think including this as part of \"plperl[u] 1.1\" is probably\nthe best way forward. The patch of record doesn't do that, so\nI've set the CF entry back to Waiting On Author.\n\nTaking a quick look at the rest of the patch ... I think the\ndocumentation is pretty inadequate, as it just says that the transform\ncauses byteas to be \"passed and returned as native Perl octet\nstrings\", a term that it doesn't define, and googling doesn't exactly\nclarify either. The \"example\" is no example at all, as it does not\nshow what happens or how the results are different. (Admittedly\nthe nearby example for the bool transform is nearly as bad, but we\ncould improve that too while we're at it.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Mar 2024 17:30:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bytea PL/Perl transform"
}
] |
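To make Tom's documentation point a little more concrete, the sketch below shows the other direction of such a transform, i.e. what passing a bytea into PL/Perl "as a native Perl octet string" could mean in practice: the raw bytes of the datum become an ordinary Perl byte string rather than the escaped \x... text form the function would otherwise receive. The function name and the absence of anything beyond PG_GETARG_BYTEA_PP() detoasting are assumptions made for illustration, not the patch's actual code.

#include "postgres.h"

#include "fmgr.h"
#include "plperl.h"

PG_FUNCTION_INFO_V1(plperl_bytea_to_sv);

Datum
plperl_bytea_to_sv(PG_FUNCTION_ARGS)
{
    bytea      *in = PG_GETARG_BYTEA_PP(0);

    /*
     * Hand the raw bytes to Perl as a plain scalar; no encoding or escaping
     * is applied, which is what "native Perl octet string" is intended to
     * convey.
     */
    PG_RETURN_POINTER(newSVpvn(VARDATA_ANY(in), VARSIZE_ANY_EXHDR(in)));
}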
[
{
"msg_contents": "Hello everyone and Tom.\n\nTom, this is about your idea (1) from 2010 to replace spinlock with a\nmemory barrier in a known assignment xids machinery.\n\nIt was mentioned by you again in (2) and in (3) we have decided to\nextract this change into a separate commitfest entry.\n\nSo, creating it here with a rebased version of (4).\n\nIn a nutshell: KnownAssignedXids as well as the head/tail pointers are\nmodified only by the startup process, so spinlock is used to ensure\nthat updates of the array and head/tail pointers are seen in a correct\norder. It is enough to pass the barrier after writing to the array\n(but before updating the pointers) to achieve the same result.\n\nBest regards.\n\n[1]: https://github.com/postgres/postgres/commit/2871b4618af1acc85665eec0912c48f8341504c4#diff-8879f0173be303070ab7931db7c757c96796d84402640b9e386a4150ed97b179R2408-R2412\n\n[2]: https://www.postgresql.org/message-id/flat/1249332.1668553589%40sss.pgh.pa.us#19d00eb435340f5c5455e3bf259eccc8\n\n[3]: https://www.postgresql.org/message-id/flat/1225350.1669757944%40sss.pgh.pa.us#23ca1956e694910fd7795a514a3bc79f\n\n[4]: https://www.postgresql.org/message-id/flat/CANtu0oiPoSdQsjRd6Red5WMHi1E83d2%2B-bM9J6dtWR3c5Tap9g%40mail.gmail.com#cc4827dee902978f93278732435e8521",
"msg_date": "Sun, 19 Mar 2023 12:43:43 +0300",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replace known_assigned_xids_lck by memory barrier"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 12:43:43PM +0300, Michail Nikolaev wrote:\n> In a nutshell: KnownAssignedXids as well as the head/tail pointers are\n> modified only by the startup process, so spinlock is used to ensure\n> that updates of the array and head/tail pointers are seen in a correct\n> order. It is enough to pass the barrier after writing to the array\n> (but before updating the pointers) to achieve the same result.\n\nWhat sort of benefits do you see from this patch? It might be worthwhile\nin itself to remove spinlocks when possible, but IME it's much easier to\njustify such changes when there is a tangible benefit we can point to.\n\n \t/*\n-\t * Now update the head pointer. We use a spinlock to protect this\n+\t * Now update the head pointer. We use a memory barrier to protect this\n \t * pointer, not because the update is likely to be non-atomic, but to\n \t * ensure that other processors see the above array updates before they\n \t * see the head pointer change.\n \t *\n \t * If we're holding ProcArrayLock exclusively, there's no need to take the\n-\t * spinlock.\n+\t * barrier.\n \t */\n\nAre the assignments in question guaranteed to be atomic? IIUC we assume\nthat aligned 4-byte loads/stores are atomic, so we should be okay as long\nas we aren't handling anything larger.\n\n-\t\tSpinLockAcquire(&pArray->known_assigned_xids_lck);\n+\t\tpg_write_barrier();\n \t\tpArray->headKnownAssignedXids = head;\n-\t\tSpinLockRelease(&pArray->known_assigned_xids_lck);\n\nThis use of pg_write_barrier() looks correct to me, but don't we need\ncorresponding read barriers wherever we obtain the pointers? FWIW I tend\nto review src/backend/storage/lmgr/README.barrier in its entirety whenever\nI deal with this stuff.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 14 Aug 2023 08:36:34 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace known_assigned_xids_lck by memory barrier"
},
{
"msg_contents": "Hello, Nathan.\n\n> What sort of benefits do you see from this patch? It might be worthwhile\n> in itself to remove spinlocks when possible, but IME it's much easier to\n> justify such changes when there is a tangible benefit we can point to.\n\nOh, it is not an easy question :)\n\nThe answer, probably, looks like this:\n1) performance benefits of spin lock acquire removing in\nKnownAssignedXidsGetOldestXmin and KnownAssignedXidsSearch\n2) it is closing 13-year-old tech depth\n\nBut in reality, it is not easy to measure performance improvement\nconsistently for this change.\n\n> Are the assignments in question guaranteed to be atomic? IIUC we assume\n> that aligned 4-byte loads/stores are atomic, so we should be okay as long\n> as we aren't handling anything larger.\n\nYes, 4-bytes assignment are atomic, locking is used to ensure memory\nwrite ordering in this place.\n\n> This use of pg_write_barrier() looks correct to me, but don't we need\n> corresponding read barriers wherever we obtain the pointers? FWIW I tend\n> to review src/backend/storage/lmgr/README.barrier in its entirety whenever\n> I deal with this stuff.\n\nOh, yeah, you're right! (1)\nI'll prepare an updated version of the patch soon. I don't why I was\nassuming pg_write_barrier is enough (⊙_⊙')\n\n\n[1]: https://github.com/postgres/postgres/blob/master/src/backend/storage/lmgr/README.barrier#L125\n\n\n",
"msg_date": "Tue, 15 Aug 2023 12:29:24 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replace known_assigned_xids_lck by memory barrier"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 12:29:24PM +0200, Michail Nikolaev wrote:\n>> What sort of benefits do you see from this patch? It might be worthwhile\n>> in itself to remove spinlocks when possible, but IME it's much easier to\n>> justify such changes when there is a tangible benefit we can point to.\n> \n> Oh, it is not an easy question :)\n> \n> The answer, probably, looks like this:\n> 1) performance benefits of spin lock acquire removing in\n> KnownAssignedXidsGetOldestXmin and KnownAssignedXidsSearch\n> 2) it is closing 13-year-old tech depth\n> \n> But in reality, it is not easy to measure performance improvement\n> consistently for this change.\n\nOkay. Elsewhere, it seems like folks are fine with patches that reduce\nshared memory space via atomics or barriers even if there's no immediate\nbenefit [0], so I think it's fine to proceed.\n\n>> Are the assignments in question guaranteed to be atomic? IIUC we assume\n>> that aligned 4-byte loads/stores are atomic, so we should be okay as long\n>> as we aren't handling anything larger.\n> \n> Yes, 4-bytes assignment are atomic, locking is used to ensure memory\n> write ordering in this place.\n\nYeah, it looks like both the values that are protected by\nknown_assigned_xids_lck are integers, so this should be okay. One\nremaining question I have is whether it is okay if we see an updated value\nfor one of the head/tail variables but not the other. It looks like the\ntail variable is only updated with ProcArrayLock held exclusively, which\nIIUC wouldn't prevent such mismatches even today, since we use a separate\nspinlock for reading them in some cases.\n\n[0] https://postgr.es/m/20230524214958.mt6f5xokpumvnrio%40awork3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 15 Aug 2023 08:22:24 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace known_assigned_xids_lck by memory barrier"
},
{
"msg_contents": "Hello!\n\nUpdated version (with read barriers is attached).\n\n> One remaining question I have is whether it is okay if we see an updated value\n> for one of the head/tail variables but not the other. It looks like the\n> tail variable is only updated with ProcArrayLock held exclusively, which\n> IIUC wouldn't prevent such mismatches even today, since we use a separate\n> spinlock for reading them in some cases.\n\n1) \"The convention is that backends must hold shared ProcArrayLock to\nexamine the array\", it is applied to pointers as well\n2) Also, \"only the startup process modifies the head/tail pointers.\"\n\nSo, the \"tail\" variable is updated by the startup process with\nProcArrayLock held in exclusive-only mode - so, no issues here.\n\nRegarding \"head\" variable - updates by the startup processes are\npossible in next cases:\n* ProcArrayLock in exclusive mode (like KnownAssignedXidsCompress or\nKnownAssignedXidsSearch(remove=true)), no issues here\n* ProcArrayLock not taken at all (like\nKnownAssignedXidsAdd(exclusive_lock=false)) in such case we rely on\nmemory barrier machinery\n\nBoth head and tail variables are changed only with exclusive lock held.\n\nI'll think more, but can't find something wrong here so far.",
"msg_date": "Wed, 16 Aug 2023 17:30:59 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replace known_assigned_xids_lck by memory barrier"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 05:30:59PM +0200, Michail Nikolaev wrote:\n> Updated version (with read barriers is attached).\n\nThanks for the updated patch. I've attached v4 in which I've made a number\nof cosmetic edits.\n\n> I'll think more, but can't find something wrong here so far.\n\nIIUC this memory barrier stuff is only applicable to KnownAssignedXidsAdd()\n(without an exclusive lock) when we add entries to the end of the array and\nthen update the head pointer. Otherwise, appropriate locks are taken when\nreading/writing the array. For example, say we have the following array:\n\n head\n |\n v\n [ 0, 1, 2, 3 ]\n\nWhen adding elements, we keep the head pointer where it is:\n\n head\n |\n v\n [ 0, 1, 2, 3, 4, 5 ]\n\nIf another processor sees this intermediate state, it's okay because it\nwill only inspect elements 0 through 3. Only at the end do we update the\nhead pointer:\n\n head\n |\n v\n [ 0, 1, 2, 3, 4, 5 ]\n\nWith weak memory ordering and no barriers, another process may see this\n(which is obviously no good):\n\n head\n |\n v\n [ 0, 1, 2, 3 ]\n\nOne thing that I'm still trying to understand is this code in\nKnownAssignedXidsSearch():\n\n\t\t/* we hold ProcArrayLock exclusively, so no need for spinlock */\n\t\ttail = pArray->tailKnownAssignedXids;\n\t\thead = pArray->headKnownAssignedXids;\n\nIt's not clear to me why holding ProcArrayLock exclusively means we don't\nneed to worry about the spinlock/barriers. If KnownAssignedXidsAdd() adds\nentries without a lock, holding ProcArrayLock won't protect you, and I\ndon't see anything else that acts as a read barrier before the array\nentries are inspected.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 16 Aug 2023 11:32:36 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace known_assigned_xids_lck by memory barrier"
},
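A minimal sketch of the publish/consume ordering described in the message above. It assumes PostgreSQL's pg_write_barrier()/pg_read_barrier() macros and the KnownAssignedXids / headKnownAssignedXids / tailKnownAssignedXids names from procarray.c; the surrounding declarations and error handling are omitted, so this is an illustration of the idea rather than the actual patch text:

    /* writer (startup process): KnownAssignedXidsAdd() without the exclusive lock */
    for (i = 0; i < nxids; i++)
        KnownAssignedXids[head + i] = xids[i];
    pg_write_barrier();                 /* make the new entries visible before the new head */
    pArray->headKnownAssignedXids = head + nxids;

    /* reader (backend holding shared ProcArrayLock) */
    head = pArray->headKnownAssignedXids;
    pg_read_barrier();                  /* load head before inspecting entries */
    for (i = pArray->tailKnownAssignedXids; i < head; i++)
        xid = KnownAssignedXids[i];     /* entries below head are fully written */

The pairing is the point: the write barrier on its own guarantees nothing unless each reader that races with the lock-free writer issues the matching read barrier between loading the head pointer and dereferencing the array.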
{
"msg_contents": "Hello, good question!\n\nThanks for your edits.\n\nAs answer: probably we need to change\n\"If we know that we're holding ProcArrayLock exclusively, we don't\nneed the read barrier.\"\nto\n\"If we're removing xid, we don't need the read barrier because only\nthe startup process can remove and add xids to KnownAssignedXids\"\n\nBest regards,\nMikhail.\n\n\n",
"msg_date": "Wed, 16 Aug 2023 21:29:10 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replace known_assigned_xids_lck by memory barrier"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 09:29:10PM +0200, Michail Nikolaev wrote:\n> As answer: probably we need to change\n> \"If we know that we're holding ProcArrayLock exclusively, we don't\n> need the read barrier.\"\n> to\n> \"If we're removing xid, we don't need the read barrier because only\n> the startup process can remove and add xids to KnownAssignedXids\"\n\nAh, that explains it. v5 of the patch is attached.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 16 Aug 2023 13:07:15 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace known_assigned_xids_lck by memory barrier"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 01:07:15PM -0700, Nathan Bossart wrote:\n> Ah, that explains it. v5 of the patch is attached.\n\nBarring additional feedback, I plan to commit this patch in the current\ncommitfest.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 11:40:17 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace known_assigned_xids_lck by memory barrier"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 3:41 PM Nathan Bossart <[email protected]> wrote:\n> On Wed, Aug 16, 2023 at 01:07:15PM -0700, Nathan Bossart wrote:\n> > Ah, that explains it. v5 of the patch is attached.\n>\n> Barring additional feedback, I plan to commit this patch in the current\n> commitfest.\n\nI'm not an expert on this code but I looked at this patch briefly and\nit seems OK to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 15:53:54 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace known_assigned_xids_lck by memory barrier"
},
{
"msg_contents": "On Fri, Sep 01, 2023 at 03:53:54PM -0400, Robert Haas wrote:\n> I'm not an expert on this code but I looked at this patch briefly and\n> it seems OK to me.\n\nThanks for taking a look. Committed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 14:08:29 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replace known_assigned_xids_lck by memory barrier"
},
{
"msg_contents": "Thanks everyone for help!\n\nThanks everyone for help!",
"msg_date": "Wed, 6 Sep 2023 01:56:16 +0200",
"msg_from": "Michail Nikolaev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replace known_assigned_xids_lck by memory barrier"
}
] |
[
{
"msg_contents": "Hi,\n\nI observed absurd behaviour while using pg_logical_slot_peek_changes()\nand pg_logical_slot_get_changes(). Whenever any of these two functions\nare called to read the changes using a decoder plugin, the following\nmessages are printed in the log for every single such call.\n\n2023-03-19 16:36:06.040 IST [30099] LOG: starting logical decoding for\nslot \"test_slot1\"\n2023-03-19 16:36:06.040 IST [30099] DETAIL: Streaming transactions\ncommitting after 0/851DFD8, reading WAL from 0/851DFA0.\n2023-03-19 16:36:06.040 IST [30099] STATEMENT: SELECT data FROM\npg_logical_slot_get_changes('test_slot1', NULL, NULL, 'format-version',\n'2');\n2023-03-19 16:36:06.040 IST [30099] LOG: logical decoding found consistent\npoint at 0/851DFA0\n2023-03-19 16:36:06.040 IST [30099] DETAIL: There are no running\ntransactions.\n2023-03-19 16:36:06.040 IST [30099] STATEMENT: SELECT data FROM\npg_logical_slot_get_changes('test_slot1', NULL, NULL, 'format-version',\n'2');\n\nThis log is printed on every single call to peek/get functions and bloats\nthe server log file by a huge amount when called in the loop for reading\nthe changes.\n\nIMHO, printing the message every time we create the context for\ndecoding a slot using pg_logical_slot_get_changes() seems over-burn.\nWondering if instead of LOG messages, should we mark these as\nDEBUG1 in SnapBuildFindSnapshot() and CreateDecodingContext()\nrespectively? I can produce a patch for the same if we agree.\n\nRegards,\nJeevan Ladhe\n\nHi, I observed absurd behaviour while using pg_logical_slot_peek_changes()and pg_logical_slot_get_changes(). Whenever any of these two functionsare called to read the changes using a decoder plugin, the followingmessages are printed in the log for every single such call.2023-03-19 16:36:06.040 IST [30099] LOG: starting logical decoding for slot \"test_slot1\"2023-03-19 16:36:06.040 IST [30099] DETAIL: Streaming transactions committing after 0/851DFD8, reading WAL from 0/851DFA0.2023-03-19 16:36:06.040 IST [30099] STATEMENT: SELECT data FROM pg_logical_slot_get_changes('test_slot1', NULL, NULL, 'format-version', '2');2023-03-19 16:36:06.040 IST [30099] LOG: logical decoding found consistent point at 0/851DFA02023-03-19 16:36:06.040 IST [30099] DETAIL: There are no running transactions.2023-03-19 16:36:06.040 IST [30099] STATEMENT: SELECT data FROM pg_logical_slot_get_changes('test_slot1', NULL, NULL, 'format-version', '2');This log is printed on every single call to peek/get functions and bloatsthe server log file by a huge amount when called in the loop for readingthe changes.IMHO, printing the message every time we create the context fordecoding a slot using pg_logical_slot_get_changes() seems over-burn.Wondering if instead of LOG messages, should we mark these asDEBUG1 in SnapBuildFindSnapshot() and CreateDecodingContext()respectively? I can produce a patch for the same if we agree.Regards,Jeevan Ladhe",
"msg_date": "Sun, 19 Mar 2023 16:59:17 +0530",
"msg_from": "Jeevan Ladhe <[email protected]>",
"msg_from_op": true,
"msg_subject": "server log inflates due to\n pg_logical_slot_peek_changes/pg_logical_slot_get_changes\n calls"
},
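A sketch of what the demotion proposed above would amount to, assuming the first log line comes from an ereport() call in CreateDecodingContext(); the variable name and message text here are illustrative, and the only point is the elevel:

    /* today: emitted at LOG on every pg_logical_slot_get/peek_changes() call */
    ereport(LOG,
            (errmsg("starting logical decoding for slot \"%s\"",
                    NameStr(slot->data.name))));

    /* proposed: only shown when log_min_messages is debug1 or lower */
    ereport(DEBUG1,
            (errmsg("starting logical decoding for slot \"%s\"",
                    NameStr(slot->data.name))));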
{
"msg_contents": "On Sun, Mar 19, 2023 at 4:59 PM Jeevan Ladhe <[email protected]> wrote:\n>\n> Hi,\n>\n> I observed absurd behaviour while using pg_logical_slot_peek_changes()\n> and pg_logical_slot_get_changes(). Whenever any of these two functions\n> are called to read the changes using a decoder plugin, the following\n> messages are printed in the log for every single such call.\n>\n> 2023-03-19 16:36:06.040 IST [30099] LOG: starting logical decoding for slot \"test_slot1\"\n> 2023-03-19 16:36:06.040 IST [30099] DETAIL: Streaming transactions committing after 0/851DFD8, reading WAL from 0/851DFA0.\n> 2023-03-19 16:36:06.040 IST [30099] STATEMENT: SELECT data FROM pg_logical_slot_get_changes('test_slot1', NULL, NULL, 'format-version', '2');\n> 2023-03-19 16:36:06.040 IST [30099] LOG: logical decoding found consistent point at 0/851DFA0\n> 2023-03-19 16:36:06.040 IST [30099] DETAIL: There are no running transactions.\n> 2023-03-19 16:36:06.040 IST [30099] STATEMENT: SELECT data FROM pg_logical_slot_get_changes('test_slot1', NULL, NULL, 'format-version', '2');\n>\n> This log is printed on every single call to peek/get functions and bloats\n> the server log file by a huge amount when called in the loop for reading\n> the changes.\n>\n> IMHO, printing the message every time we create the context for\n> decoding a slot using pg_logical_slot_get_changes() seems over-burn.\n> Wondering if instead of LOG messages, should we mark these as\n> DEBUG1 in SnapBuildFindSnapshot() and CreateDecodingContext()\n> respectively? I can produce a patch for the same if we agree.\n>\n\nI think those messages are useful when debugging logical replication\nproblems (imagine missing transaction or inconsistent data between\npublisher and subscriber). I don't think pg_logical_slot_get_changes()\nor pg_logical_slot_peek_changes() are expected to be called frequently\nin a loop. Instead you should open a replication connection to\ncontinue to receive logical changes ... forever.\n\nWhy do you need to call pg_logical_slot_peek_changes() and\npg_logical_slot_get_changes() frequently?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 20 Mar 2023 16:06:27 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: server log inflates due to\n pg_logical_slot_peek_changes/pg_logical_slot_get_changes\n calls"
},
{
"msg_contents": "Thanks, Ashutosh for the reply.\n\nI think those messages are useful when debugging logical replication\n> problems (imagine missing transaction or inconsistent data between\n> publisher and subscriber). I don't think pg_logical_slot_get_changes()\n> or pg_logical_slot_peek_changes() are expected to be called frequently\n> in a loop.\n\n\nYeah right. But can you please shed some light on when these functions\nshould be called, or are they used only for testing purposes?\n\nInstead you should open a replication connection to\n> continue to receive logical changes ... forever.\n>\n\nYes, this is what I have decided to resort to now.\n\nWhy do you need to call pg_logical_slot_peek_changes() and\n> pg_logical_slot_get_changes() frequently?\n>\n\nI was just playing around to do something for logical replication and\nthought\nof doing this quick test where every time interval I read using\npg_logical_slot_peek_changes(), make sure to consume them to a consistent\nstate, and only then use pg_logical_slot_get_changes() to advance the slot.\n\nRegards,\nJeevan Ladhe\n\nThanks, Ashutosh for the reply.\nI think those messages are useful when debugging logical replication\nproblems (imagine missing transaction or inconsistent data between\npublisher and subscriber). I don't think pg_logical_slot_get_changes()\nor pg_logical_slot_peek_changes() are expected to be called frequently\nin a loop.Yeah right. But can you please shed some light on when these functionsshould be called, or are they used only for testing purposes? Instead you should open a replication connection to\ncontinue to receive logical changes ... forever.Yes, this is what I have decided to resort to now.\nWhy do you need to call pg_logical_slot_peek_changes() and\npg_logical_slot_get_changes() frequently?I was just playing around to do something for logical replication and thoughtof doing this quick test where every time interval I read usingpg_logical_slot_peek_changes(), make sure to consume them to a consistentstate, and only then use pg_logical_slot_get_changes() to advance the slot.Regards,Jeevan Ladhe",
"msg_date": "Mon, 20 Mar 2023 16:48:20 +0530",
"msg_from": "Jeevan Ladhe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: server log inflates due to\n pg_logical_slot_peek_changes/pg_logical_slot_get_changes\n calls"
}
] |
[
{
"msg_contents": "Hi all,\n\n$subject has been discussed here, still seems worth its own thread for\nclarity:\nhttps://www.postgresql.org/message-id/[email protected]\n\nSupport for Kerberos v4 has been removed in a159ad3 (2005) and the\nsame happened for v5 in 98de86e (2014, meaning that this is still\npossible with 9.2 and 9.3 backends). Anyway, the attached seems worth\nthe simplifications now? This includes a cleanup of protocol.sgml.\n\nThoughts?\n--\nMichael",
"msg_date": "Mon, 20 Mar 2023 07:20:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove AUTH_REQ_KRB4 and AUTH_REQ_KRB5 in libpq code"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> $subject has been discussed here, still seems worth its own thread for\n> clarity:\n> https://www.postgresql.org/message-id/[email protected]\n\n> Support for Kerberos v4 has been removed in a159ad3 (2005) and the\n> same happened for v5 in 98de86e (2014, meaning that this is still\n> possible with 9.2 and 9.3 backends). Anyway, the attached seems worth\n> the simplifications now? This includes a cleanup of protocol.sgml.\n\n9.2 is still within our \"supported old versions\" window, so it's\nat least plausible that somebody would hit this for KRB5. Still,\nthe net effect would be that they'd get \"authentication method 2\nnot supported\" instead of \"Kerberos 5 authentication not supported\".\nI lean (weakly) to the idea that it's no longer worth the translation\nmaintenance effort to keep the special message.\n\nA compromise could be to drop KRB4 but keep the KRB5 case for\nawhile yet.\n\nOne other thought is that I don't really like these comments\nimplying that recycling these AUTH_REQ codes might be a good\nthing to do:\n\n+/* 1 is available. It was used for Kerberos V4, not supported any more */\n\nI think we'd be better off treating them as permanently retired.\nIt's not like there's any shortage of code space to worry about.\nMore, there might be other implementations of our wire protocol\nthat still have support for these codes, so that re-using them\ncould cause compatibility issues. So maybe write \"reserved\"\ninstead of \"available\"?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 Mar 2023 18:53:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove AUTH_REQ_KRB4 and AUTH_REQ_KRB5 in libpq code"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 06:53:28PM -0400, Tom Lane wrote:\n> 9.2 is still within our \"supported old versions\" window, so it's\n> at least plausible that somebody would hit this for KRB5. Still,\n> the net effect would be that they'd get \"authentication method 2\n> not supported\" instead of \"Kerberos 5 authentication not supported\".\n> I lean (weakly) to the idea that it's no longer worth the translation\n> maintenance effort to keep the special message.\n> \n> A compromise could be to drop KRB4 but keep the KRB5 case for\n> awhile yet.\n\nHmm. I think that I would still drop both of them at the end, even in\nv16 but I won't fight hard on that, either. The only difference is\nthe verbosity of the error string generated, and there is still a\ntrace of what the code numbers were in pqcomm.h.\n\n> One other thought is that I don't really like these comments\n> implying that recycling these AUTH_REQ codes might be a good\n> thing to do:\n> \n> +/* 1 is available. It was used for Kerberos V4, not supported any more */\n> \n> I think we'd be better off treating them as permanently retired.\n> It's not like there's any shortage of code space to worry about.\n> More, there might be other implementations of our wire protocol\n> that still have support for these codes, so that re-using them\n> could cause compatibility issues. So maybe write \"reserved\"\n> instead of \"available\"?\n\nOkay, fine by me.\n--\nMichael",
"msg_date": "Mon, 20 Mar 2023 08:48:50 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove AUTH_REQ_KRB4 and AUTH_REQ_KRB5 in libpq code"
}
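For illustration only, the end state in pqcomm.h that Tom suggests earlier in this thread could look roughly like the sketch below; the exact wording here is an assumption, not the committed text:

    #define AUTH_REQ_OK         0   /* User is authenticated  */
    /* 1 is reserved; it was AUTH_REQ_KRB4 (Kerberos V4).  Do not reuse. */
    /* 2 is reserved; it was AUTH_REQ_KRB5 (Kerberos V5).  Do not reuse. */
    #define AUTH_REQ_PASSWORD   3   /* Password */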
] |
[
{
"msg_contents": "Hello all,\nThis PostgreSQL version is 11.9.\nIn LockAcquireExtended(), why if lock requested conflicts with locks requested by waiters, must join\nwait queue. Why does the lock still check for conflict with the lock requested, \nrather than check for directly with conflict with the already-held lock?\nI think lock requested only check for conflict with already-held lock, if there is no conflict, the lock should be granted.\nBest regards\n\nHello all,This PostgreSQL version is 11.9.In LockAcquireExtended(), why if lock requested conflicts with locks requested by waiters, must joinwait queue. Why does the lock still check for conflict with the lock requested, rather than check for directly with conflict with the already-held lock?I think lock requested only check for conflict with already-held lock, if there is no conflict, the lock should be granted.Best regards",
"msg_date": "Mon, 20 Mar 2023 09:58:01 +0800",
"msg_from": "\"=?UTF-8?B?5bit5YayKOWunOephik=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?TG9jayBjb25mbGljdA==?="
},
{
"msg_contents": "On Mon, 20 Mar 2023 at 14:58, 席冲(宜穆) <[email protected]> wrote:\n> I think lock requested only check for conflict with already-held lock, if there is no conflict, the lock should be granted.\n\nThat would mean that stronger locks such as AEL might never be granted\nif there was never any moment when no other conflicting locks existed\n(which is very likely to happen on busy OLTP-type workloads). The way\nit works now makes it fair so that weaker locks don't jump the queue.\n\nDavid\n\n\n",
"msg_date": "Mon, 20 Mar 2023 15:12:23 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lock conflict"
}
] |
[
{
"msg_contents": "Hi all,\n\nNathan has reported to me offlist that maintainer-clean was not doing\nits job for the files generated by gen_node_support.pl in\nsrc/backend/nodes/ for the query jumbling. Attached is a patch to\ntake care of this issue.\n\nWhile on it, I have found a comment in the related README that was\nmissing a refresh.\n\nAny objections or comments?\n--\nMichael",
"msg_date": "Mon, 20 Mar 2023 15:43:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Missing rules for queryjumblefuncs.{funcs,switch}.c for\n maintainer-clean"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 2:43 PM Michael Paquier <[email protected]> wrote:\n\n> Nathan has reported to me offlist that maintainer-clean was not doing\n> its job for the files generated by gen_node_support.pl in\n> src/backend/nodes/ for the query jumbling. Attached is a patch to\n> take care of this issue.\n>\n> While on it, I have found a comment in the related README that was\n> missing a refresh.\n>\n> Any objections or comments?\n\n\nA minor comment for the README is that now we have five support\nfunctions not four.\n\n- outcome. (For some classes of node types, you don't need all four support\n+ outcome. (For some classes of node types, you don't need all five support\n\nThanks\nRichard\n\nOn Mon, Mar 20, 2023 at 2:43 PM Michael Paquier <[email protected]> wrote:\nNathan has reported to me offlist that maintainer-clean was not doing\nits job for the files generated by gen_node_support.pl in\nsrc/backend/nodes/ for the query jumbling. Attached is a patch to\ntake care of this issue.\n\nWhile on it, I have found a comment in the related README that was\nmissing a refresh.\n\nAny objections or comments?A minor comment for the README is that now we have five supportfunctions not four.- outcome. (For some classes of node types, you don't need all four support+ outcome. (For some classes of node types, you don't need all five supportThanksRichard",
"msg_date": "Mon, 20 Mar 2023 15:18:17 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing rules for queryjumblefuncs.{funcs,switch}.c for\n maintainer-clean"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 03:18:17PM +0800, Richard Guo wrote:\n> A minor comment for the README is that now we have five support\n> functions not four.\n> \n> - outcome. (For some classes of node types, you don't need all four support\n> + outcome. (For some classes of node types, you don't need all five support\n\nRight, missed that. How about removing the \"fout/five\" entirely here\nand make that simpler? I would propose:\n\"For some classes of node types, you don't need all the support\nfunctions.\"\n--\nMichael",
"msg_date": "Mon, 20 Mar 2023 16:46:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Missing rules for queryjumblefuncs.{funcs,switch}.c for\n maintainer-clean"
},
{
"msg_contents": "> On 20 Mar 2023, at 08:46, Michael Paquier <[email protected]> wrote:\n\n> How about removing the \"fout/five\" entirely here\n> and make that simpler? I would propose:\n> \"For some classes of node types, you don't need all the support\n> functions.\"\n\nYes please, keeping such counts in sync is always error-prone.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 20 Mar 2023 08:49:25 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing rules for queryjumblefuncs.{funcs,switch}.c for\n maintainer-clean"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 3:49 PM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 20 Mar 2023, at 08:46, Michael Paquier <[email protected]> wrote:\n> > How about removing the \"fout/five\" entirely here\n> > and make that simpler? I would propose:\n> > \"For some classes of node types, you don't need all the support\n> > functions.\"\n>\n> Yes please, keeping such counts in sync is always error-prone.\n\n\nAgreed. +1 to remove the counts.\n\nThanks\nRichard\n\nOn Mon, Mar 20, 2023 at 3:49 PM Daniel Gustafsson <[email protected]> wrote:> On 20 Mar 2023, at 08:46, Michael Paquier <[email protected]> wrote:\n> How about removing the \"fout/five\" entirely here\n> and make that simpler? I would propose:\n> \"For some classes of node types, you don't need all the support\n> functions.\"\n\nYes please, keeping such counts in sync is always error-prone.Agreed. +1 to remove the counts.ThanksRichard",
"msg_date": "Mon, 20 Mar 2023 16:04:31 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing rules for queryjumblefuncs.{funcs,switch}.c for\n maintainer-clean"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> Nathan has reported to me offlist that maintainer-clean was not doing\n> its job for the files generated by gen_node_support.pl in\n> src/backend/nodes/ for the query jumbling. Attached is a patch to\n> take care of this issue.\n> While on it, I have found a comment in the related README that was\n> missing a refresh.\n> Any objections or comments?\n\nIs similar knowledge missing in the meson build files?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Mar 2023 10:21:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Missing rules for queryjumblefuncs.{funcs,switch}.c for\n maintainer-clean"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 10:21:28AM -0400, Tom Lane wrote:\n> Is similar knowledge missing in the meson build files?\n\nsrc/backend/nodes/meson.build and src/include/nodes/meson.build are\nthe two meson files that have the knowledge about the files generated\nby gen_node_support.pl, and the query jumbling files are consistent\nwith that since 0e681cf. Perhaps I've missed an extra spot?\n--\nMichael",
"msg_date": "Tue, 21 Mar 2023 09:17:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Missing rules for queryjumblefuncs.{funcs,switch}.c for\n maintainer-clean"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 04:04:31PM +0800, Richard Guo wrote:\n> Agreed. +1 to remove the counts.\n\nThanks. Adjusted this way, then.\n--\nMichael",
"msg_date": "Wed, 22 Mar 2023 08:42:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Missing rules for queryjumblefuncs.{funcs,switch}.c for\n maintainer-clean"
}
] |
[
{
"msg_contents": "I found an error message added by de4d456b406bf502341ef526710d3f764b41e2c8.\n\nWhen I incorrectly configured the primary_conninfo with the wrong\nuser, I received the following message on the server logs of both\nservers involved in a physical replcation set.\n\n[27022:walsender] FATAL: permission denied to start WAL sender\n[27022:walsender] DETAIL: Only roles with the REPLICATION attribute may start a WAL sender process.\n\nI'm not sure if adding the user name in the log prefix is a common\npractice, but without it, the log line might not have enough\ninformation. Unlike other permission-related messages, this message is\nnot the something human operators receive in response to their\nactions. It seems similar to connection authorization logs where the\nuser name is important. So, I'd like to propose the following\nalternative.\n\n[27022:walsender] DETAIL: The connection user \"r1\" requires the REPLICATION attribute.\n\nWhat do you think about this change?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 20 Mar 2023 17:05:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "About a recently-added permission-related error message "
},
{
"msg_contents": "On Mon, 20 Mar 2023 17:05:41 +0900 (JST)\nKyotaro Horiguchi <[email protected]> wrote:\n\n> I found an error message added by de4d456b406bf502341ef526710d3f764b41e2c8.\n> \n> When I incorrectly configured the primary_conninfo with the wrong\n> user, I received the following message on the server logs of both\n> servers involved in a physical replcation set.\n> \n> [27022:walsender] FATAL: permission denied to start WAL sender\n> [27022:walsender] DETAIL: Only roles with the REPLICATION attribute may start a WAL sender process.\n> \n> I'm not sure if adding the user name in the log prefix is a common\n> practice, but without it, the log line might not have enough\n> information. Unlike other permission-related messages, this message is\n> not the something human operators receive in response to their\n> actions. It seems similar to connection authorization logs where the\n> user name is important. So, I'd like to propose the following\n> alternative.\n\nI am not sure whether this change is necessary because the error message\nwill appear in the log of the standby server and users can easily know\nthe connection user just by checking primary_conninfo.\n\n> [27022:walsender] DETAIL: The connection user \"r1\" requires the REPLICATION attribute.\n\nHowever, if we need this change, how about using\n\"DETAIL: The connection user \"r1\" must have the REPLICATION attribute.\"\nThis pattern is used in other part like check_object_ownership() and\nAlterRole(). The user name is not included there, though.\n\nRegards,\nYugo Nagata\n\n> What do you think about this change?\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Wed, 22 Mar 2023 19:17:17 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About a recently-added permission-related error message"
}
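For concreteness, a hedged sketch of how the wording proposed above could be attached to the existing FATAL; the call site and the use of GetUserNameFromId() are assumptions made for illustration, not the text of an actual patch:

    ereport(FATAL,
            (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
             errmsg("permission denied to start WAL sender"),
             errdetail("The connection user \"%s\" must have the REPLICATION attribute.",
                       GetUserNameFromId(GetUserId(), false))));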
] |
[
{
"msg_contents": "After the discussion in [0] ff., I was looking around in pg_attribute \nand noticed that we could possibly save 4 bytes. We could change both \nattstattarget and attinhcount from int4 to int2, which together with \nsome reordering would save 4 bytes from the fixed portion.\n\nattstattarget is already limited to 10000, so this wouldn't lose \nanything. For attinhcount, I don't see any documented limits. But it \nseems unlikely to me that someone would need more than 32k immediate \ninheritance parents on a column. (Maybe an overflow check would be \nuseful, though, to prevent shenanigans.)\n\nThe attached patch seems to work. Thoughts?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/20230313204119.4mkepdvixcxrwpsc%40awork3.anarazel.de",
"msg_date": "Mon, 20 Mar 2023 11:00:12 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Save a few bytes in pg_attribute"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> After the discussion in [0] ff., I was looking around in pg_attribute \n> and noticed that we could possibly save 4 bytes. We could change both \n> attstattarget and attinhcount from int4 to int2, which together with \n> some reordering would save 4 bytes from the fixed portion.\n\n> attstattarget is already limited to 10000, so this wouldn't lose \n> anything. For attinhcount, I don't see any documented limits. But it \n> seems unlikely to me that someone would need more than 32k immediate \n> inheritance parents on a column. (Maybe an overflow check would be \n> useful, though, to prevent shenanigans.)\n\n> The attached patch seems to work. Thoughts?\n\nI agree that attinhcount could be narrowed, but I have some concern\nabout attstattarget. IIRC, the limit on attstattarget was once 1000\nand then we raised it to 10000. Is it inconceivable that we might\nwant to raise it to 100000 someday?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Mar 2023 10:37:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "\n\nOn 3/20/23 15:37, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> After the discussion in [0] ff., I was looking around in pg_attribute \n>> and noticed that we could possibly save 4 bytes. We could change both \n>> attstattarget and attinhcount from int4 to int2, which together with \n>> some reordering would save 4 bytes from the fixed portion.\n> \n>> attstattarget is already limited to 10000, so this wouldn't lose \n>> anything. For attinhcount, I don't see any documented limits. But it \n>> seems unlikely to me that someone would need more than 32k immediate \n>> inheritance parents on a column. (Maybe an overflow check would be \n>> useful, though, to prevent shenanigans.)\n> \n>> The attached patch seems to work. Thoughts?\n> \n> I agree that attinhcount could be narrowed, but I have some concern\n> about attstattarget. IIRC, the limit on attstattarget was once 1000\n> and then we raised it to 10000. Is it inconceivable that we might\n> want to raise it to 100000 someday?\n> \n\nYeah, I don't think it'd be wise to make it harder to increase the\nstatistics target limit.\n\nIMHO it'd be much better to just not store the statistics target for\nattributes that have it default (which we now identify by -1), or for\nsystem attributes (where we store 0). I'd bet vast majority of systems\nwill just use the default / GUC value. So if we're interested in saving\nthese bytes, we could just store NULL in these cases, no?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 20 Mar 2023 18:46:06 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> IMHO it'd be much better to just not store the statistics target for\n> attributes that have it default (which we now identify by -1), or for\n> system attributes (where we store 0). I'd bet vast majority of systems\n> will just use the default / GUC value. So if we're interested in saving\n> these bytes, we could just store NULL in these cases, no?\n\nHmm, we'd have to move it to the nullable part of the row and expend\nmore code to fetch it; but I don't think it's touched in many places,\nso that might be a good tradeoff. Couple of notes:\n\n* As things stand I think we have a null bitmap in every row of\npg_attribute already (surely attfdwoptions and attmissingval are never\nboth filled), so there's no extra cost there.\n\n* Putting it in the variable part of the row means it wouldn't appear\nin tuple descriptors, but that seems fine.\n\nI wonder if the same is true of attinhcount. Since it's nonzero for\npartitions' attributes, it might be non-null in a fairly large fraction\nof pg_attribute rows in some use-cases, but it still seems like we'd not\nbe losing anything. It wouldn't need to be touched in any\nhigh-performance code paths AFAICS.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Mar 2023 14:13:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
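To make the "expend more code to fetch it" point above concrete: if attstattarget moved out of the fixed-width part of the row, callers could no longer read it straight from the Form_pg_attribute struct and would fetch it roughly as sketched below. The catalog and syscache identifiers are real, but the surrounding code and the NULL-means-default convention follow the discussion in this thread rather than any committed patch:

    bool    isnull;
    Datum   datum;
    int     stattarget;

    datum = SysCacheGetAttr(ATTNUM, atttuple,
                            Anum_pg_attribute_attstattarget, &isnull);
    stattarget = isnull ? -1 : DatumGetInt32(datum);    /* NULL: fall back to the default */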
{
"msg_contents": "On Mon, 20 Mar 2023, 11:00 pm Peter Eisentraut,\n<[email protected]> wrote:\n> After the discussion in [0] ff., I was looking around in pg_attribute\n> and noticed that we could possibly save 4 bytes. We could change both\n> attstattarget and attinhcount from int4 to int2, which together with\n> some reordering would save 4 bytes from the fixed portion.\n\nI just want to highlight 1ef61ddce9, which fixed a very long-standing\nbug that meant that pg_inherits.inhseqno was effectively just 16-bit.\nPerhaps because nobody seemed to report that as a limitation 16 bits\nis enough space. I only noticed that as a bug from code reading.\n\nDavid\n\n\n",
"msg_date": "Tue, 21 Mar 2023 08:51:15 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-20 10:37:36 -0400, Tom Lane wrote:\n> I agree that attinhcount could be narrowed, but I have some concern\n> about attstattarget. IIRC, the limit on attstattarget was once 1000\n> and then we raised it to 10000. Is it inconceivable that we might\n> want to raise it to 100000 someday?\n\nHard to believe that'd happen in a minor version - and I don't think there'd\nan issue with widening it again in a major version?\n\nI doubt we'll ever go to 100k without a major redesign of stats storage/access\n- the size of the stats datums would make that pretty impractical right now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Mar 2023 16:44:57 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-03-20 10:37:36 -0400, Tom Lane wrote:\n>> I agree that attinhcount could be narrowed, but I have some concern\n>> about attstattarget. IIRC, the limit on attstattarget was once 1000\n>> and then we raised it to 10000. Is it inconceivable that we might\n>> want to raise it to 100000 someday?\n\n> Hard to believe that'd happen in a minor version - and I don't think there'd\n> an issue with widening it again in a major version?\n\nTrue. However, I think Tomas' idea of making these columns nullable\nis even better than narrowing them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Mar 2023 19:51:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-20 11:00:12 +0100, Peter Eisentraut wrote:\n> After the discussion in [0] ff., I was looking around in pg_attribute and\n> noticed that we could possibly save 4 bytes. We could change both\n> attstattarget and attinhcount from int4 to int2, which together with some\n> reordering would save 4 bytes from the fixed portion.\n\nattndims seems like another good candidate to shrink.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Mar 2023 17:49:03 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "On 21.03.23 00:51, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n>> On 2023-03-20 10:37:36 -0400, Tom Lane wrote:\n>>> I agree that attinhcount could be narrowed, but I have some concern\n>>> about attstattarget. IIRC, the limit on attstattarget was once 1000\n>>> and then we raised it to 10000. Is it inconceivable that we might\n>>> want to raise it to 100000 someday?\n> \n>> Hard to believe that'd happen in a minor version - and I don't think there'd\n>> an issue with widening it again in a major version?\n> \n> True. However, I think Tomas' idea of making these columns nullable\n> is even better than narrowing them.\n\nThe context of my message was to do the proposed change for PG16 to buy \nback a few bytes that are being added by another feature, and then \nconsider doing a larger detangling of pg_attribute and tuple descriptors \nin PG17, which might well involve taking the attstattarget out of the \nhot path. Making attstattarget nullable (i.e., not part of the fixed \npart of pg_attribute) would require fairly significant surgery, so I \nthink it would be better done as part of a more comprehensive change \nthat would allow the same treatment for other columns as well.\n\n\n\n\n",
"msg_date": "Tue, 21 Mar 2023 17:36:48 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-21 17:36:48 +0100, Peter Eisentraut wrote:\n> On 21.03.23 00:51, Tom Lane wrote:\n> > Andres Freund <[email protected]> writes:\n> > > On 2023-03-20 10:37:36 -0400, Tom Lane wrote:\n> > > > I agree that attinhcount could be narrowed, but I have some concern\n> > > > about attstattarget. IIRC, the limit on attstattarget was once 1000\n> > > > and then we raised it to 10000. Is it inconceivable that we might\n> > > > want to raise it to 100000 someday?\n> > \n> > > Hard to believe that'd happen in a minor version - and I don't think there'd\n> > > an issue with widening it again in a major version?\n> > \n> > True. However, I think Tomas' idea of making these columns nullable\n> > is even better than narrowing them.\n\nWhy not do both?\n\n\n> The context of my message was to do the proposed change for PG16 to buy back\n> a few bytes that are being added by another feature\n\nHow much would you need to buy back to \"reach parity\"?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Mar 2023 09:43:23 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "On 21.03.23 17:43, Andres Freund wrote:\n>> The context of my message was to do the proposed change for PG16 to buy back\n>> a few bytes that are being added by another feature\n> How much would you need to buy back to \"reach parity\"?\n\nI don't think we can find enough to make the impact zero bytes. It's \nalso not clear exactly what the impact of each byte would be (compared \nto possible complications in other parts of the code, for example). But \nif there are a few low-hanging fruit, it seems like we could pick them, \nto old us over until we have a better solution to the underlying issue.\n\n\n\n",
"msg_date": "Tue, 21 Mar 2023 18:15:40 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-21 18:15:40 +0100, Peter Eisentraut wrote:\n> On 21.03.23 17:43, Andres Freund wrote:\n> > > The context of my message was to do the proposed change for PG16 to buy back\n> > > a few bytes that are being added by another feature\n> > How much would you need to buy back to \"reach parity\"?\n> \n> I don't think we can find enough to make the impact zero bytes. It's also\n> not clear exactly what the impact of each byte would be (compared to\n> possible complications in other parts of the code, for example). But if\n> there are a few low-hanging fruit, it seems like we could pick them, to old\n> us over until we have a better solution to the underlying issue.\n\nattndims 4->2\nattstattarget 4->2\nattinhcount 4->2\n\n+ some reordering only gets you from 112->108 unfortunately, due to a 1 byte\nalignment hole and 2 bytes of trailing padding.\n\nbefore:\n /* size: 112, cachelines: 2, members: 22 */\n /* sum members: 111, holes: 1, sum holes: 1 */\n /* last cacheline: 48 bytes */\n\nafter:\n /* size: 108, cachelines: 2, members: 22 */\n /* sum members: 105, holes: 1, sum holes: 1 */\n /* padding: 2 */\n /* last cacheline: 44 bytes */\n\nYou might be able to fill the hole + padding with your data - but IIRC that\nwas 3 4byte integers?\n\n\nFWIW, I think we should consider getting rid of attcacheoff. I doubt it's\nworth its weight these days, because deforming via slots starts at the\nbeginning anyway. The overhead of maintaining it is not insubstantial, and\nit's just architecturally ugly to to update tupledescs continually.\n\n\nNot for your current goal, but I do wonder how hard it'd be to make it work to\nstore multiple booleans as bitmasks. Probably ties into the discussion around\nnot relying on struct \"mapping\" for catalog tables (which we IIRC decided is\nthe sensible way the NAMEDATALEN restriction).\n\nE.g. pg_attribute has 6 booleans, and attgenerated effectively is a boolean\ntoo, and attidentity could easily be modeled as such as well.\n\nIf were to not rely on struct mapping anymore, we could possibly transparently\ndo this as part of forming/deforming heap tuples. Using something like\nTYPALIGN_BIT. The question is whether it'd be too expensive to decode...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Mar 2023 10:46:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> FWIW, I think we should consider getting rid of attcacheoff. I doubt it's\n> worth its weight these days, because deforming via slots starts at the\n> beginning anyway. The overhead of maintaining it is not insubstantial, and\n> it's just architecturally ugly to to update tupledescs continually.\n\nI'd be for that if we can convince ourselves there's not a material\nspeed penalty. As you say, it's quite ugly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Mar 2023 14:55:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "On Tue, 21 Mar 2023 at 19:55, Tom Lane <[email protected]> wrote:\n>\n> Andres Freund <[email protected]> writes:\n> > FWIW, I think we should consider getting rid of attcacheoff. I doubt it's\n> > worth its weight these days, because deforming via slots starts at the\n> > beginning anyway. The overhead of maintaining it is not insubstantial, and\n> > it's just architecturally ugly to to update tupledescs continually.\n>\n> I'd be for that if we can convince ourselves there's not a material\n> speed penalty. As you say, it's quite ugly.\n\nYes, attcacheoff is a tremendous performance boon in many cases. But\nall is not lost:\n\nWhen I was working on other improvements I experimented with storing\nthe attributes used in (de)serializing tuples to disk in a separate\nstructured array in the TupleDesc, a prototype patch of which I shared\nhere [0]. I didn't see a speed difference back then so I didn't\nfurther venture into that path (as it adds complexity without\nperformance benefits), but I think it can be relevant to this thread\nbecause with that patch we actually don't need the attcacheoff in the\npg_atttribute struct: it only needs to be present in the derived\n\"TupleAttrAlignData\" structs which carry the\nlength/alignment/storage/byval info.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CAEze2Wh8-metSryZX_Ubj-uv6kb%2B2YnzHAejmEdubjhmGusBAg%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 21 Mar 2023 20:20:40 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "Matthias van de Meent <[email protected]> writes:\n> ... with that patch we actually don't need the attcacheoff in the\n> pg_atttribute struct: it only needs to be present in the derived\n> \"TupleAttrAlignData\" structs which carry the\n> length/alignment/storage/byval info.\n\nYeah, I was wondering about that too: keeping attcacheoff as local\nstate in slots might get us all its win without so much conceptual\ndirtiness.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Mar 2023 15:26:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-21 20:20:40 +0100, Matthias van de Meent wrote:\n> On Tue, 21 Mar 2023 at 19:55, Tom Lane <[email protected]> wrote:\n> >\n> > Andres Freund <[email protected]> writes:\n> > > FWIW, I think we should consider getting rid of attcacheoff. I doubt it's\n> > > worth its weight these days, because deforming via slots starts at the\n> > > beginning anyway. The overhead of maintaining it is not insubstantial, and\n> > > it's just architecturally ugly to to update tupledescs continually.\n> >\n> > I'd be for that if we can convince ourselves there's not a material\n> > speed penalty. As you say, it's quite ugly.\n> \n> Yes, attcacheoff is a tremendous performance boon in many cases.\n\nWhich? We don't use fastgetattr() in many places these days. And in some quick\nmeasurements it's a wash or small loss when deforming slot tuples, even when\nthe attcacheoff optimization would apply, because the branches for managing it\nadd more overhead than they safe.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Mar 2023 12:58:05 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "On Tue, 21 Mar 2023 at 20:58, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-03-21 20:20:40 +0100, Matthias van de Meent wrote:\n> > On Tue, 21 Mar 2023 at 19:55, Tom Lane <[email protected]> wrote:\n> > >\n> > > Andres Freund <[email protected]> writes:\n> > > > FWIW, I think we should consider getting rid of attcacheoff. I doubt it's\n> > > > worth its weight these days, because deforming via slots starts at the\n> > > > beginning anyway. The overhead of maintaining it is not insubstantial, and\n> > > > it's just architecturally ugly to to update tupledescs continually.\n> > >\n> > > I'd be for that if we can convince ourselves there's not a material\n> > > speed penalty. As you say, it's quite ugly.\n> >\n> > Yes, attcacheoff is a tremendous performance boon in many cases.\n>\n> Which? We don't use fastgetattr() in many places these days. And in some quick\n> measurements it's a wash or small loss when deforming slot tuples, even when\n> the attcacheoff optimization would apply, because the branches for managing it\n> add more overhead than they safe.\n\nMy experience with attcacheoff performance is in indexes, specifically\nindex_getattr(). Sure, multi-column indexes are uncommon, but the\ndifference between have and have-not for cached attribute offsets is\nseveral %.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 21 Mar 2023 21:02:08 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-21 15:26:38 -0400, Tom Lane wrote:\n> Matthias van de Meent <[email protected]> writes:\n> > ... with that patch we actually don't need the attcacheoff in the\n> > pg_atttribute struct: it only needs to be present in the derived\n> > \"TupleAttrAlignData\" structs which carry the\n> > length/alignment/storage/byval info.\n> \n> Yeah, I was wondering about that too: keeping attcacheoff as local\n> state in slots might get us all its win without so much conceptual\n> dirtiness.\n\nIt's also the place where it's the least likely to help - afaict attcacheoff\nis only really beneficial for fastgetattr(). Which conditions it's use more\nstrictly - not only can there not be any NULLs before the accessed column,\nthere may not be any NULLs in the tuple at all.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Mar 2023 13:11:46 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-21 21:02:08 +0100, Matthias van de Meent wrote:\n> On Tue, 21 Mar 2023 at 20:58, Andres Freund <[email protected]> wrote:\n> > On 2023-03-21 20:20:40 +0100, Matthias van de Meent wrote:\n> > > Yes, attcacheoff is a tremendous performance boon in many cases.\n> >\n> > Which? We don't use fastgetattr() in many places these days. And in some quick\n> > measurements it's a wash or small loss when deforming slot tuples, even when\n> > the attcacheoff optimization would apply, because the branches for managing it\n> > add more overhead than they safe.\n> \n> My experience with attcacheoff performance is in indexes, specifically\n> index_getattr(). Sure, multi-column indexes are uncommon, but the\n> difference between have and have-not for cached attribute offsets is\n> several %.\n\nI did indeed not think of index_getattr(), just heap related things.\n\nDo you have a good test workload handy - I'm kinda curious to compare the cost\nof removing attcacheoff vs the gain of not maintaining it for index workloads.\n\nIt looks like many of the index_getattr() cases could be made faster without\nattcacheoff. A lot of places seem to loop over all attributes, and the key to\naccelerating that is to keep state between the iterations. Attcacheoff is\nthat, but quite stunted, because it only works if there aren't any NULLs (even\nif the NULL is in a later column).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Mar 2023 15:05:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "On Tue, 21 Mar 2023 at 23:05, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-03-21 21:02:08 +0100, Matthias van de Meent wrote:\n> > On Tue, 21 Mar 2023 at 20:58, Andres Freund <[email protected]> wrote:\n> > > On 2023-03-21 20:20:40 +0100, Matthias van de Meent wrote:\n> > > > Yes, attcacheoff is a tremendous performance boon in many cases.\n> > >\n> > > Which? We don't use fastgetattr() in many places these days. And in some quick\n> > > measurements it's a wash or small loss when deforming slot tuples, even when\n> > > the attcacheoff optimization would apply, because the branches for managing it\n> > > add more overhead than they safe.\n> >\n> > My experience with attcacheoff performance is in indexes, specifically\n> > index_getattr(). Sure, multi-column indexes are uncommon, but the\n> > difference between have and have-not for cached attribute offsets is\n> > several %.\n>\n> I did indeed not think of index_getattr(), just heap related things.\n>\n> Do you have a good test workload handy - I'm kinda curious to compare the cost\n> of removing attcacheoff vs the gain of not maintaining it for index workloads.\n\nRebuilding indexes has been my go-to workload for comparing\nattribute-related btree performance optimizations in [0] and [1].\nResults of tests from '21 in which we're always calculating offsets\nfrom 0 show a slowdown of 4-18% in attcacheoff-enabled workloads if\nwe're calculating offsets dynamically.\n\n> It looks like many of the index_getattr() cases could be made faster without\n> attcacheoff. A lot of places seem to loop over all attributes, and the key to\n> accelerating that is to keep state between the iterations.\n\nIndeed, it's not great. You can take a look at [1], which is where I'm\ntrying to optimize btree's handling of comparing tuples; which\nincludes work on reducing overhead for attribute accesses.\n\nNote that each btree page should be able to do with comparing at most\n2*log(ntups) columns, where this is currently natts * log(ntups).\n\n> Attcacheoff is\n> that, but quite stunted, because it only works if there aren't any NULLs (even\n> if the NULL is in a later column).\n\nYes, that isn't great either, but most indexes I've seen have tuples\nthat are either all NULL, or have no nulls; only seldom I see indexes\nthat have mixed NULL/not-null index tuple attributes.\n\n\nKind regards,\n\nMatthias van de Meent.\n\n\n[0] https://www.postgresql.org/message-id/flat/CAEze2WhyBT2bKZRdj_U0KS2Sbewa1XoO_BzgpzLC09sa5LUROg%40mail.gmail.com#fe3369c4e202a7ed468e47bf5420f530\n[1] https://www.postgresql.org/message-id/flat/CAEze2Wg52tsSWA9Fy7OCXx-K7pPLMNxA_fmQ6-+_pzR-AoODDA@mail.gmail.com\n\n\n",
"msg_date": "Tue, 21 Mar 2023 23:22:40 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "On 21.03.23 18:46, Andres Freund wrote:\n> FWIW, I think we should consider getting rid of attcacheoff. I doubt it's\n> worth its weight these days, because deforming via slots starts at the\n> beginning anyway. The overhead of maintaining it is not insubstantial, and\n> it's just architecturally ugly to to update tupledescs continually.\n\nBtw., could attcacheoff be int16?\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:42:00 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "On Wed, 22 Mar 2023 at 10:42, Peter Eisentraut\n<[email protected]> wrote:\n>\n> On 21.03.23 18:46, Andres Freund wrote:\n> > FWIW, I think we should consider getting rid of attcacheoff. I doubt it's\n> > worth its weight these days, because deforming via slots starts at the\n> > beginning anyway. The overhead of maintaining it is not insubstantial, and\n> > it's just architecturally ugly to to update tupledescs continually.\n>\n> Btw., could attcacheoff be int16?\n\nI had the same thought in '21, and in the patch linked upthread[0] I\nadded an extra comment on the field:\n\n> + Note: Although the maximum offset encountered in stored tuples is\n> + limited to the max BLCKSZ (2**15), FormData_pg_attribute is used for\n> + all internal tuples as well, so attcacheoff may be larger for those\n> + tuples, and it is therefore not safe to use int16.\n\nSo, we can't reduce its size while we use attcacheoff for\n(de)serialization of tuples with up to MaxAttributeNumber (=INT16_MAX)\nof attributes which each can be larger than one byte (such as in\ntuplestore, tuplesort, spilling hash aggregates, ...)\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CAEze2Wh8-metSryZX_Ubj-uv6kb+2YnzHAejmEdubjhmGusBAg@mail.gmail.com\n\n\n",
"msg_date": "Wed, 22 Mar 2023 12:44:09 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
{
"msg_contents": "On 21.03.23 18:46, Andres Freund wrote:\n>> I don't think we can find enough to make the impact zero bytes. It's also\n>> not clear exactly what the impact of each byte would be (compared to\n>> possible complications in other parts of the code, for example). But if\n>> there are a few low-hanging fruit, it seems like we could pick them, to old\n>> us over until we have a better solution to the underlying issue.\n> \n> attndims 4->2\n> attstattarget 4->2\n> attinhcount 4->2\n> \n> + some reordering only gets you from 112->108 unfortunately, due to a 1 byte\n> alignment hole and 2 bytes of trailing padding.\n> \n> before:\n> /* size: 112, cachelines: 2, members: 22 */\n> /* sum members: 111, holes: 1, sum holes: 1 */\n> /* last cacheline: 48 bytes */\n> \n> after:\n> /* size: 108, cachelines: 2, members: 22 */\n> /* sum members: 105, holes: 1, sum holes: 1 */\n> /* padding: 2 */\n> /* last cacheline: 44 bytes */\n> \n> You might be able to fill the hole + padding with your data - but IIRC that\n> was 3 4byte integers?\n\nHere is an updated patch that handles those three fields, including some \noverflow checks. I also changed coninhcount to match attinhcount.\n\nI structured the inhcount overflow checks to be independent of the \ninteger size, but maybe others find this approach weird.\n\nGiven the calculation shown, there is no value in reducing all three \nfields versus just two, but I don't find compelling reasons to leave out \none or the other field. (attstattarget got the most discussion, but \nthat one is actually the easiest part of the patch.)\n\nI took another hard look at some of the other proposals, including \nmoving some fields to the variable length part or combining some bool or \nchar fields. Those changes all appear to have a really long tail of \nissues all over the code that I wouldn't want to attack them now in an \nad hoc way.\n\nMy suggestion is to use this patch and then consider the column \nencryption patch as it stands now.\n\nThe discussion about attcacheoff seems to be still ongoing. But it \nseems whatever the outcome would be independent of this patch: Either we \nkeep it or we remove it; there is no proposal to resize it.",
"msg_date": "Thu, 23 Mar 2023 13:45:15 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Save a few bytes in pg_attribute"
},
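A rough illustration of the sort of width-independent inhcount overflow check mentioned above; "childatt" is a placeholder for the Form_pg_attribute being updated, and this is a sketch of the idea rather than the text of the committed patch:

    /* the same check works whether attinhcount is int16 or int32: detect the wrap */
    childatt->attinhcount++;
    if (childatt->attinhcount < 0)
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("too many inheritance parents")));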
{
"msg_contents": "On 23.03.23 13:45, Peter Eisentraut wrote:\n> My suggestion is to use this patch and then consider the column \n> encryption patch as it stands now.\n\nI have committed this.\n\n\n",
"msg_date": "Tue, 28 Mar 2023 11:25:29 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Save a few bytes in pg_attribute"
}
] |
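A side note, not part of the messages above: the fields discussed (attndims, attstattarget, attinhcount) are fixed-width columns of pg_attribute, and their declared widths can be inspected directly from the catalog. The query below is only an illustration of that, run against a stock installation; nothing in it is taken from the patch itself.

SELECT attname, attlen, attalign
FROM pg_attribute
WHERE attrelid = 'pg_attribute'::regclass
  AND attnum > 0
ORDER BY attnum;
-- In releases before the change discussed above, attndims, attstattarget and
-- attinhcount report attlen = 4 (int4); after it they report 2 (int2), which is
-- where the per-row savings come from.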
[
{
"msg_contents": "Dear hackers,\n\nWhile checking documentations, I found that one line notes our product as\n\"<productname>PostgreSQL</productname>\", whereas another line notes as just\n\"PostgreSQL\". For example, in bgworker.sgml:\n\n```\nPostgreSQL can be extended to run user-supplied code in separate processes.\n...\nThese processes are attached to <productname>PostgreSQL</productname>'s\nshared memory area and have the option to connect to databases internally; ...\n```\n\nIt seems that <productname> tag is not used when the string is used as link or\ntitle, but I cannot find other rule to use them. Could you please tell me discussions\nor wikipage about it?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Mon, 20 Mar 2023 12:13:35 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question: Do we have a rule to use \"PostgreSQL\" and\n \"<productname>PostgreSQL</productname>\" separately?"
},
{
"msg_contents": "\"Hayato Kuroda (Fujitsu)\" <[email protected]> writes:\n> While checking documentations, I found that one line notes our product as\n> \"<productname>PostgreSQL</productname>\", whereas another line notes as just\n> \"PostgreSQL\".\n\nIMO the convention is to use the <productname> tag everywhere that we\nspell out \"PostgreSQL\". I don't think it's actually rendered differently\nwith our current stylesheets, but maybe someday it will be.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Mar 2023 10:31:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question: Do we have a rule to use \"PostgreSQL\" and\n \"<productname>PostgreSQL</productname>\" separately?"
},
{
"msg_contents": "> On 20 Mar 2023, at 15:31, Tom Lane <[email protected]> wrote:\n> \n> \"Hayato Kuroda (Fujitsu)\" <[email protected]> writes:\n>> While checking documentations, I found that one line notes our product as\n>> \"<productname>PostgreSQL</productname>\", whereas another line notes as just\n>> \"PostgreSQL\".\n> \n> IMO the convention is to use the <productname> tag everywhere that we\n> spell out \"PostgreSQL\". I don't think it's actually rendered differently\n> with our current stylesheets, but maybe someday it will be.\n\nIIRC the main use in DocBook is for automatically decorating productnames with\ntrademark signs etc, and to generate lists of trademarks, but also that they\ncan be rendered differently.\n\nThis reminded me that I was planning to apply the below to make the markup of\nPostgreSQL consistent:\n\ndiff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml\nindex 4df8bd1b64..9ff6b08f5a 100644\n--- a/doc/src/sgml/datatype.sgml\n+++ b/doc/src/sgml/datatype.sgml\n@@ -2667,7 +2667,7 @@ TIMESTAMP WITH TIME ZONE '2004-10-19 10:23:54+02'\n To complicate matters, some jurisdictions have used the same timezone\n abbreviation to mean different UTC offsets at different times; for\n example, in Moscow <literal>MSK</literal> has meant UTC+3 in some years and\n- UTC+4 in others. <application>PostgreSQL</application> interprets such\n+ UTC+4 in others. <productname>PostgreSQL</productname> interprets such\n abbreviations according to whatever they meant (or had most recently\n meant) on the specified date; but, as with the <literal>EST</literal> example\n above, this is not necessarily the same as local civil time on that date.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 20 Mar 2023 15:44:43 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question: Do we have a rule to use \"PostgreSQL\" and\n \"<productname>PostgreSQL</productname>\" separately?"
},
{
"msg_contents": "Dear Daniel, Tom,\n\n> > On 20 Mar 2023, at 15:31, Tom Lane <[email protected]> wrote:\n> >\n> > \"Hayato Kuroda (Fujitsu)\" <[email protected]> writes:\n> >> While checking documentations, I found that one line notes our product as\n> >> \"<productname>PostgreSQL</productname>\", whereas another line notes\n> as just\n> >> \"PostgreSQL\".\n> >\n> > IMO the convention is to use the <productname> tag everywhere that we\n> > spell out \"PostgreSQL\". I don't think it's actually rendered differently\n> > with our current stylesheets, but maybe someday it will be.\n> \n> IIRC the main use in DocBook is for automatically decorating productnames with\n> trademark signs etc, and to generate lists of trademarks, but also that they\n> can be rendered differently.\n\nOK, I understood that even if the string is not rendered, that should be tagged as <productname>.\n\n> IIRC the main use in DocBook is for automatically decorating productnames with\n> trademark signs etc, and to generate lists of trademarks, but also that they\n> can be rendered differently.\n> \n> This reminded me that I was planning to apply the below to make the markup of\n> PostgreSQL consistent:\n\nI have also grepped to detect another wrong markups, and I think at least\n\"<entry>PostgreSQL</entry>\" should be changed. PSA the patch.\n\n```\n$ grep -rI \\>PostgreSQL\\< | grep -v productname\nconfig.sgml: the log. The default is <literal>PostgreSQL</literal>.\nfunc.sgml: <returnvalue>PostgreSQL</returnvalue>\nfunc.sgml: <entry>PostgreSQL</entry>\nruntime.sgml: event source named <literal>PostgreSQL</literal>.\nproblems.sgml: The software package in total is called <quote>PostgreSQL</quote>,\nref/pg_ctl-ref.sgml: default is <literal>PostgreSQL</literal>. Note that this only controls\nref/pg_ctl-ref.sgml: source name <literal>PostgreSQL</literal>.\nref/pg_ctl-ref.sgml: The default is <literal>PostgreSQL</literal>.\n```\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 22 Mar 2023 03:19:11 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Question: Do we have a rule to use \"PostgreSQL\" and\n \"<productname>PostgreSQL</productname>\" separately?"
},
{
"msg_contents": "> On 22 Mar 2023, at 04:19, Hayato Kuroda (Fujitsu) <[email protected]> wrote:\n\n> I have also grepped to detect another wrong markups, and I think at least\n> \"<entry>PostgreSQL</entry>\" should be changed. PSA the patch.\n\nI agree with that analysis, this instance should be marked up with\n<productname> but not the other ones. I'll go ahead with your patch after some\ntesting.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 12:48:18 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question: Do we have a rule to use \"PostgreSQL\" and\n \"<productname>PostgreSQL</productname>\" separately?"
}
] |
[
{
"msg_contents": "Hi, all\n\nFound several typos like plgsql, I think it should be plpgsql.\n\n\nRegards,\nZhang Mingli",
"msg_date": "Tue, 21 Mar 2023 00:26:17 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix typo plgsql to plpgsql."
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 12:26 AM Zhang Mingli <[email protected]> wrote:\n\n> Found several typos like plgsql, I think it should be plpgsql.\n>\n\n+1. I believe these are typos. And a grep search shows that all typos of\nthis kind are addressed by the patch.\n\nThanks\nRichard\n\nOn Tue, Mar 21, 2023 at 12:26 AM Zhang Mingli <[email protected]> wrote:\nFound several typos like plgsql, I think it should be plpgsql.+1. I believe these are typos. And a grep search shows that all typos ofthis kind are addressed by the patch.ThanksRichard",
"msg_date": "Tue, 21 Mar 2023 09:04:32 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo plgsql to plpgsql."
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 09:04:32AM +0800, Richard Guo wrote:\n> +1. I believe these are typos. And a grep search shows that all typos of\n> this kind are addressed by the patch.\n\nYes, you are right. The comments fixed here are related to plpgsql.\nWill fix..\n--\nMichael",
"msg_date": "Tue, 21 Mar 2023 10:15:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo plgsql to plpgsql."
}
] |
[
{
"msg_contents": "Hi all,\n\nWe have a situation where we need to revoke SELECT on a table that\nbelongs to our extension, and we also need to let less privileged users\ndump the extension's external config tables. (The restricted table's\ncontents are exposed through a security_barrier view, and it's a cloud\nenvironment where \"admin\" users don't necessarily have true superuser\naccess.)\n\nSince the restricted table is internal, its contents aren't included in\ndumps anyway, so we expected to be able to meet both use cases at once.\nUnfortunately:\n\n $ pg_dump -U unprivileged_user -d postgres\n pg_dump: error: query failed: ERROR: permission denied for relation\next_table\n pg_dump: error: query was: LOCK TABLE public.ext_table IN ACCESS\nSHARE MODE\n\n...and there appears to be no way to work around this with\n--exclude-table, since the table is part of the extension.\n\nIt looks like the only reason pg_dump locks this particular table is\nbecause it's been marked with DUMP_COMPONENT_POLICY, which needs a lock\nto ensure the consistency of later pg_get_expr() calls. That stings for\ntwo reasons: 1) it doesn't seem like you need SELECT access on a table\nto see its policies, and 2) we have no policies on the table anyway;\nthere are no pg_get_expr() calls to protect.\n\nSo I've attached the simplest backportable workaround I could think of:\nunmark DUMP_COMPONENT_POLICY for a table that has no policies at the\ntime of the getTables() query. This is similar to the ACL optimization\nthat back branches do; it should ensure that there will be no\npg_get_expr() calls on pg_policy for that table later, due to\nrepeatable-read, and it omits the lock when there's no reason to grab\nit. It won't fix the problem for tables that have do policies, but I\ndon't have any ideas for how to do that safely, unless there's some lock\nmode that uses fewer privileges.\n\nI also attached a speculative backport to 11 to illustrate what that\nmight look like, but first I have to convince you it's a bug. :)\n\nWDYT?\n\nThanks,\n--Jacob",
"msg_date": "Mon, 20 Mar 2023 09:44:24 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "Jacob Champion <[email protected]> writes:\n> We have a situation where we need to revoke SELECT on a table that\n> belongs to our extension, and we also need to let less privileged users\n> dump the extension's external config tables.\n\nIn general, we don't expect that random minimum-privilege users can do\na database-wide pg_dump, so I'm not entirely sure that I buy that this\nis a case we should cater to. Why shouldn't your dump user have enough\nprivilege to take this lock?\n\nI'd be more willing to consider the proposed patch if it weren't such\na hack --- as you say, it doesn't fix the problem when the table has\npolicies, so it's hardly a general-purpose solution. I fear that it's\nalso fairly expensive: adding sub-selects to the query we must do\nbefore we can lock any tables is not appetizing, because making that\nwindow wider adds to the risk of deadlocks, dump failures, etc.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Mar 2023 13:43:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 10:43 AM Tom Lane <[email protected]> wrote:\n> In general, we don't expect that random minimum-privilege users can do\n> a database-wide pg_dump, so I'm not entirely sure that I buy that this\n> is a case we should cater to.\n\nThey're neither random nor minimum-privilege -- it's the role with the\nmost privileges available to our end users. They just can't see the\ncontents of this table.\n\n> Why shouldn't your dump user have enough\n> privilege to take this lock?\n\nThe table contains information that's confidential to the superuser.\nOther users access it through a view.\n\n> I'd be more willing to consider the proposed patch if it weren't such\n> a hack --- as you say, it doesn't fix the problem when the table has\n> policies, so it's hardly a general-purpose solution.\n\nRight. Does a more general fix exist?\n\n> I fear that it's\n> also fairly expensive: adding sub-selects to the query we must do\n> before we can lock any tables is not appetizing, because making that\n> window wider adds to the risk of deadlocks, dump failures, etc.\n\nI was hoping an EXISTS subselect would be cheap enough, but maybe I\ndon't have enough entries in pg_policy to see a slowdown. Any\nsuggestions on an order of magnitude so I can characterize it? Or\nwould you just like to know at what point I start seeing slower\nbehavior? (Alternatively: are there cheaper ways to write this query?)\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Mon, 20 Mar 2023 11:23:54 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 11:23 AM Jacob Champion <[email protected]> wrote:\n> On Mon, Mar 20, 2023 at 10:43 AM Tom Lane <[email protected]> wrote:\n> > I fear that it's\n> > also fairly expensive: adding sub-selects to the query we must do\n> > before we can lock any tables is not appetizing, because making that\n> > window wider adds to the risk of deadlocks, dump failures, etc.\n>\n> I was hoping an EXISTS subselect would be cheap enough, but maybe I\n> don't have enough entries in pg_policy to see a slowdown. Any\n> suggestions on an order of magnitude so I can characterize it? Or\n> would you just like to know at what point I start seeing slower\n> behavior? (Alternatively: are there cheaper ways to write this query?)\n\nAs a smoke test, I have 10M policies spread across 100k tables on my\nlaptop (that is, 100 policies each). I also have 100k more empty\ntables with no policies on them, to try to stress both sides of the\nEXISTS. On PG11, the baseline query duration is roughly 20s; with the\npatch, it increases to roughly 22s (~10% slowdown). Setup SQL\nattached.\n\nThis appears to be tied to the number of policies more than the number\nof tables; if I reduce it to \"only\" 1M policies, the slowdown drops to\n~400ms (2%), and at 10k policies any difference is lost in noise. That\ndoesn't seem unreasonable to me, but I don't know what a worst-case\npg_policy catalog looks like.\n\n--Jacob",
"msg_date": "Mon, 20 Mar 2023 15:51:28 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "On 3/20/23 10:43, Tom Lane wrote:\n> I'd be more willing to consider the proposed patch if it weren't such\n> a hack --- as you say, it doesn't fix the problem when the table has\n> policies, so it's hardly a general-purpose solution. I fear that it's\n> also fairly expensive: adding sub-selects to the query we must do\n> before we can lock any tables is not appetizing, because making that\n> window wider adds to the risk of deadlocks, dump failures, etc.\n(moving to -hackers and July CF)\n\n= Recap for Potential Reviewers =\n\nThe timescaledb extension has an internal table that's owned by the\nsuperuser. It's not dumpable, and other users can only access its\ncontents through a filtered view. For our cloud deployments, we\nadditionally have that common trope where the most privileged users\naren't actually superusers, but we still want them to be able to perform\na subset of maintenance tasks, including pg_dumping their data.\n\nWhen cloud users try to dump that data, pg_dump sees that this internal\ntable is an extension member and plans to dump ACLs, security labels,\nand RLS policies for it. (This behavior cannot be overridden with\n--exclude-table. pg_dump ignores that flag for extension members.)\nDumping policies requires the use of pg_get_expr() on the backend; this,\nin turn, requires a lock on the table with ACCESS SHARE.\n\nSo pg_dump tries to lock a table, with no policies, that it's not going\nto dump the schema or data for anyway, and it fails because our users\ndon't have (and shouldn't need) SELECT access to it. For an example of\nthis in action, I've attached a test case in v2-0001.\n\n= Proposal =\n\nSince this is affecting users on released Postgres versions, my end goal\nis to find a fix that's backportable.\n\nThis situation looks very similar to [1], where non-superusers couldn't\nperform a dump because we were eagerly grabbing table locks to read\n(non-existent) ACLs. But that was solved with the realization that ACLs\ndon't need locks anyway, which is unfortunately not applicable to policies.\n\nMy initial patch to -bugs was a riff on a related performance fix [2],\nwhich figured out which tables had interesting ACLs and skipped that\npart if nothing was found. I added the same kind of subselect for RLS\npolicies as well, but that had nasty corner cases where it would perform\nterribly, as Tom alluded to above. (In a cluster of 200k tables, where\none single table had 10M policies, the query ground to a halt.)\n\nSo v2-0002 is instead inspired by Tom's rewrite of that ACL dump logic\n[3]. It scans pg_policy separately, stores the tables it finds into the\ncatalog map on the client side, and then again skips the policy dump\n(and therefore the lock) if no policies exist. The performance hit now\nscales with the size of pg_policy alone.\n\nThis is a bit more invasive than the subselect, but hopefully still\nstraightforward enough to be applicable to the back branches' old\ncatalog map strategy. It's still not a general-purpose fix, as Tom\npointed out above, but that was true of the discussion in [1] as well,\nso I'm optimistic.\n\nWDYT?\n\n--Jacob\n\n[1]\nhttps://postgr.es/m/CAGPqQf3Uzo-yU1suYyoZR83h6QTxXxkGTtEyeMV7EAVBqn%3DPcQ%40mail.gmail.com\n[2] https://git.postgresql.org/cgit/postgresql.git/commit/?id=5d589993\n[3] https://git.postgresql.org/cgit/postgresql.git/commit/?id=0c9d8442",
"msg_date": "Thu, 29 Jun 2023 09:24:32 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nPasses the default cases; Does not make any trivial changes to the codebase",
"msg_date": "Fri, 14 Jul 2023 07:45:53 +0000",
"msg_from": "Akshat Jaimini <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "On 2023-06-29 Th 12:24, Jacob Champion wrote:\n> On 3/20/23 10:43, Tom Lane wrote:\n>> I'd be more willing to consider the proposed patch if it weren't such\n>> a hack --- as you say, it doesn't fix the problem when the table has\n>> policies, so it's hardly a general-purpose solution. I fear that it's\n>> also fairly expensive: adding sub-selects to the query we must do\n>> before we can lock any tables is not appetizing, because making that\n>> window wider adds to the risk of deadlocks, dump failures, etc.\n> (moving to -hackers and July CF)\n>\n> = Recap for Potential Reviewers =\n>\n> The timescaledb extension has an internal table that's owned by the\n> superuser. It's not dumpable, and other users can only access its\n> contents through a filtered view. For our cloud deployments, we\n> additionally have that common trope where the most privileged users\n> aren't actually superusers, but we still want them to be able to perform\n> a subset of maintenance tasks, including pg_dumping their data.\n>\n> When cloud users try to dump that data, pg_dump sees that this internal\n> table is an extension member and plans to dump ACLs, security labels,\n> and RLS policies for it. (This behavior cannot be overridden with\n> --exclude-table. pg_dump ignores that flag for extension members.)\n> Dumping policies requires the use of pg_get_expr() on the backend; this,\n> in turn, requires a lock on the table with ACCESS SHARE.\n>\n> So pg_dump tries to lock a table, with no policies, that it's not going\n> to dump the schema or data for anyway, and it fails because our users\n> don't have (and shouldn't need) SELECT access to it. For an example of\n> this in action, I've attached a test case in v2-0001.\n>\n> = Proposal =\n>\n> Since this is affecting users on released Postgres versions, my end goal\n> is to find a fix that's backportable.\n>\n> This situation looks very similar to [1], where non-superusers couldn't\n> perform a dump because we were eagerly grabbing table locks to read\n> (non-existent) ACLs. But that was solved with the realization that ACLs\n> don't need locks anyway, which is unfortunately not applicable to policies.\n>\n> My initial patch to -bugs was a riff on a related performance fix [2],\n> which figured out which tables had interesting ACLs and skipped that\n> part if nothing was found. I added the same kind of subselect for RLS\n> policies as well, but that had nasty corner cases where it would perform\n> terribly, as Tom alluded to above. (In a cluster of 200k tables, where\n> one single table had 10M policies, the query ground to a halt.)\n>\n> So v2-0002 is instead inspired by Tom's rewrite of that ACL dump logic\n> [3]. It scans pg_policy separately, stores the tables it finds into the\n> catalog map on the client side, and then again skips the policy dump\n> (and therefore the lock) if no policies exist. The performance hit now\n> scales with the size of pg_policy alone.\n>\n> This is a bit more invasive than the subselect, but hopefully still\n> straightforward enough to be applicable to the back branches' old\n> catalog map strategy. 
It's still not a general-purpose fix, as Tom\n> pointed out above, but that was true of the discussion in [1] as well,\n> so I'm optimistic.\n>\n> WDYT?\n>\n> --Jacob\n>\n> [1]\n> https://postgr.es/m/CAGPqQf3Uzo-yU1suYyoZR83h6QTxXxkGTtEyeMV7EAVBqn%3DPcQ%40mail.gmail.com\n> [2]https://git.postgresql.org/cgit/postgresql.git/commit/?id=5d589993\n> [3]https://git.postgresql.org/cgit/postgresql.git/commit/?id=0c9d8442\n\n\nSeems reasonable at first glance. Isn't it going to save some work\nanyway later on, so the performance hit could end up negative?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Fri, 14 Jul 2023 08:04:18 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "Thanks for the reviews, both of you!\n\nOn Fri, Jul 14, 2023 at 5:04 AM Andrew Dunstan <[email protected]> wrote:\n> Seems reasonable at first glance. Isn't it going to save some work anyway later on, so the performance hit could end up negative?\n\nTheoretically it could, if the OID list sent during getPolicies()\nshrinks enough. I tried a quick test against the regression database,\nbut it's too noisy on my machine to know whether the difference is\nreally meaningful.\n\n--Jacob\n\n\n",
"msg_date": "Fri, 14 Jul 2023 15:20:39 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "Hi all,\n\nv3 fixes a doc comment I forgot to fill in; there are no other code\nchanges. To try to further reduce the activation energy, I've also\nattached an attempt at a backport to 11. The main difference is the\nabsence of catalogIdHash, which showed up in 15, so we don't get the\nbenefit of that deduplication.\n\nThanks,\n--Jacob",
"msg_date": "Wed, 9 Aug 2023 16:10:39 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "Jacob Champion <[email protected]> writes:\n> v3 fixes a doc comment I forgot to fill in; there are no other code\n> changes. To try to further reduce the activation energy, I've also\n> attached an attempt at a backport to 11. The main difference is the\n> absence of catalogIdHash, which showed up in 15, so we don't get the\n> benefit of that deduplication.\n\nSo ... I still do not like anything about this patch. Putting\nhas_policies into CatalogIdMapEntry isn't a wart, it's more\nnearly a tumor. Running getTablesWithPolicies before we can\nacquire locks is horrid from the standpoint of minimizing the\nwindow between our transaction snapshot and successful acquisition\nof all needed locks. (It might be all right in databases with\nfew pg_policy entries, but I don't think we can assume that that\nholds everywhere.) And the whole thing is just ugly and solves\nthe problem only partially.\n\nWhat I am wondering about is whether we shouldn't just undo what\ncheckExtensionMembership does, specifically:\n\n /*\n * In 9.6 and above, mark the member object to have any non-initial ACL,\n * policies, and security labels dumped.\n *\n * Note that any initial ACLs (see pg_init_privs) will be removed when we\n * extract the information about the object. We don't provide support for\n * initial policies and security labels and it seems unlikely for those to\n * ever exist, but we may have to revisit this later.\n *\n * ...\n */\n\n dobj->dump = ext->dobj.dump_contains & (DUMP_COMPONENT_ACL |\n DUMP_COMPONENT_SECLABEL |\n DUMP_COMPONENT_POLICY);\n\nWhy are we marking extension member objects as being subject to SECLABEL\nor POLICY dumping? As the comment notes, that isn't really sensible\nunless what we are dumping is a delta from the extension's initial\nassignments. But we have no infrastructure for that, and none seems\nlikely to appear in the near future.\n\nCould we not fix this by just reducing the above to\n\n dobj->dump = ext->dobj.dump_contains & (DUMP_COMPONENT_ACL);\n\nWhen and if someone comes along and implements storage of extensions'\ninitial policies, they can figure out how to avoid fetching policies for\nnot-to-be-dumped tables. (My thoughts would run towards adding a column\nto pg_class to help detect whether such work is needed without doing\nexpensive additional queries.) But I don't see why we require a solution\nto that problem as things stand.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 16:12:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "I wrote:\n> Why are we marking extension member objects as being subject to SECLABEL\n> or POLICY dumping? As the comment notes, that isn't really sensible\n> unless what we are dumping is a delta from the extension's initial\n> assignments. But we have no infrastructure for that, and none seems\n> likely to appear in the near future.\n\nHere's a quick patch that does it that way. The test changes\nare identical to Jacob's v3-0001.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 17 Oct 2023 17:11:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> I wrote:\n> > Why are we marking extension member objects as being subject to SECLABEL\n> > or POLICY dumping? As the comment notes, that isn't really sensible\n> > unless what we are dumping is a delta from the extension's initial\n> > assignments. But we have no infrastructure for that, and none seems\n> > likely to appear in the near future.\n> \n> Here's a quick patch that does it that way. The test changes\n> are identical to Jacob's v3-0001.\n\nWhat the comment is talking about is that we don't support initial\npolicies, not that we don't support policies on extension tables at all.\nThat said ... even the claim that we don't support such policies isn't\nsupported by code and there are people out there doing it, which creates\nits own set of problems (ones we should really try to find solutions to\nthough..).\n\nThis change would mean that policies added by a user after the extension\nis created would just be lost by a pg_dump/reload, doesn't it?\n\nThanks,\n\nStephen",
"msg_date": "Wed, 18 Oct 2023 16:11:59 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> Greetings,\n> * Tom Lane ([email protected]) wrote:\n>> I wrote:\n>>> Why are we marking extension member objects as being subject to SECLABEL\n>>> or POLICY dumping?\n\n> This change would mean that policies added by a user after the extension\n> is created would just be lost by a pg_dump/reload, doesn't it?\n\nYes. But I'd say that's unsupported, just like making other ad-hoc\nchanges to extension objects is unsupported (and the effects will be\nlost on dump/reload). We specifically have support for user-added\nACLs, and that's good, but don't claim that we have support for\ndoing the same with policies.\n\nAs far as I can see, the current behavior is that we'll dump and\ntry to reload policies (and seclabels) on extension objects even\nif those properties were set by the extension creation script.\nThat has many more problems than just the one Jacob is moaning\nabout: you'll see failures at reload if you're not superuser,\nand if the destination installation has a newer version of the\nextension than what was dumped, the old properties might be\ncompletely inappropriate. So IMO there's basically nothing\nthat works properly about this. To make it work, we'd need\ninfrastructure comparable to the pg_init_privs infrastructure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Oct 2023 16:25:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 1:25 PM Tom Lane <[email protected]> wrote:\n> Stephen Frost <[email protected]> writes:\n> > This change would mean that policies added by a user after the extension\n> > is created would just be lost by a pg_dump/reload, doesn't it?\n>\n> Yes. But I'd say that's unsupported, just like making other ad-hoc\n> changes to extension objects is unsupported (and the effects will be\n> lost on dump/reload). We specifically have support for user-added\n> ACLs, and that's good, but don't claim that we have support for\n> doing the same with policies.\n\nIs this approach backportable?\n\n(Adding Aleks to CC -- Timescale may want to double-check that the new\nproposal still works for them.)\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Mon, 23 Oct 2023 11:21:30 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "Jacob Champion <[email protected]> writes:\n> Is this approach backportable?\n\nThe code fix would surely work in the back branches. Whether the\nbehavioral change is too big to be acceptable in minor releases\nis something I don't have a strong opinion on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Oct 2023 14:42:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "I wrote:\n> Jacob Champion <[email protected]> writes:\n>> Is this approach backportable?\n\n> The code fix would surely work in the back branches. Whether the\n> behavioral change is too big to be acceptable in minor releases\n> is something I don't have a strong opinion on.\n\nI'm hearing nothing but crickets :-(\n\nIf nobody objects by say Monday, I'm going to go ahead and\ncommit (and backpatch) the patch I posted at [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2984517.1697577076%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 09 Nov 2023 14:02:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "On Thu, Nov 9, 2023 at 11:02 AM Tom Lane <[email protected]> wrote:\n> I'm hearing nothing but crickets :-(\n\nYeah :/\n\nBased on your arguments above, it sounds like your patch may improve\nseveral other corner cases when backported, so that sounds good\noverall to me. My best guess is that Timescale will be happy with this\npatch's approach. But I can't speak with any authority.\n\nAleks -- anything to add?\n\n--Jacob\n\n\n",
"msg_date": "Fri, 10 Nov 2023 12:48:04 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
},
{
"msg_contents": "> commit a70f2a57f233244c0a780829baf48c624187d456\n> Author: Tom Lane <[email protected]>\n> Date: Mon Nov 13 17:04:10 2023 -0500\n>\n> Don't try to dump RLS policies or security labels for extension objects.\n\n(Thanks Tom!)\n\n--Jacob\n\n\n",
"msg_date": "Wed, 15 Nov 2023 11:59:25 -0800",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump needs SELECT privileges on irrelevant extension table"
}
] |
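A hedged sketch, not taken from the thread above (all object and role names below are invented), of the kind of setup it describes: a table readable only by the superuser, exposed through a security_barrier view. In the real report the table is additionally a member of an extension, which is what made pg_dump consider it at all; that part is elided here for brevity.

-- Assumes a pre-existing, non-superuser role app_admin.
CREATE TABLE ext_internal (secret text);            -- readable only by its owner
REVOKE ALL ON ext_internal FROM PUBLIC;
CREATE VIEW ext_filtered WITH (security_barrier) AS
    SELECT secret FROM ext_internal WHERE secret NOT LIKE 'private:%';
GRANT SELECT ON ext_filtered TO app_admin;
-- In the extension-owned version of this setup, pg_dump run as app_admin failed
-- with "permission denied" before commit a70f2a57f2, because it took ACCESS SHARE
-- on the table merely to dump its (non-existent) RLS policies.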
[
{
"msg_contents": "Yesterday, in 785f70957, I adjusted the Memoize costing code to\naccount for the size of the cache key when estimating how many cache\nentries can exist at once in the cache. That effectively makes\nMemoize a less likely choice as fewer entries will be expected to fit\nin work_mem now.\n\nBecause that's being changed in v16, I think it might also be a good\nidea to fix the hit_ratio calculation problem reported by David\nJohnston in [1]. In the attached, I've adjusted David's calculation\nslightly so that we divide by Max(ndistinct, est_cache_entries)\ninstead of ndistinct. This saves from overestimating when ndistinct\nis smaller than est_cache_entries. I'd rather fix this now for v16\nthan wait until v17 and further adjust the Memoize costing.\n\nI've attached a spreadsheet showing the new and old hit_ration\ncalculations. Cells C1 - C3 can be adjusted to show what the hit ratio\nis for both the old and new method.\n\nAny objections?\n\nDavid\n\n[1] https://postgr.es/m/CAKFQuwZEmcNk3YQo2Xj4EDUOdY6qakad31rOD1Vc4q1_s68-Ew@mail.gmail.com",
"msg_date": "Tue, 21 Mar 2023 09:41:36 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Adjust Memoize hit_ratio calculation"
},
{
"msg_contents": "On Tue, 21 Mar 2023 at 09:41, David Rowley <[email protected]> wrote:\n> Because that's being changed in v16, I think it might also be a good\n> idea to fix the hit_ratio calculation problem reported by David\n> Johnston in [1]. In the attached, I've adjusted David's calculation\n> slightly so that we divide by Max(ndistinct, est_cache_entries)\n> instead of ndistinct. This saves from overestimating when ndistinct\n> is smaller than est_cache_entries. I'd rather fix this now for v16\n> than wait until v17 and further adjust the Memoize costing.\n\nI've now pushed this change.\n\nDavid\n\n\n",
"msg_date": "Wed, 22 Mar 2023 08:48:16 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adjust Memoize hit_ratio calculation"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile hacking on my relation extension patch I found two issues with WAL_LOG:\n\n1) RelationCopyStorageUsingBuffer() doesn't free the used strategies. This\n means we'll use #relations * ~10k memory\n\n2) RelationCopyStorageUsingBuffer() gets the buffer for the target relation\n with RBM_NORMAL, therefore requiring a read of a block guaranteed to be\n zero\n\nEasy enough to fix and shows clear improvement. One thing I wonder is if it's\nworth moving the strategies up one level? Probaly not, but ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Mar 2023 00:01:13 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "CREATE DATABASE ... STRATEGY WAL_LOG issues"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 3:01 AM Andres Freund <[email protected]> wrote:\n> While hacking on my relation extension patch I found two issues with WAL_LOG:\n>\n> 1) RelationCopyStorageUsingBuffer() doesn't free the used strategies. This\n> means we'll use #relations * ~10k memory\n\nWoops.\n\n> 2) RelationCopyStorageUsingBuffer() gets the buffer for the target relation\n> with RBM_NORMAL, therefore requiring a read of a block guaranteed to be\n> zero\n\nWoops.\n\n> Easy enough to fix and shows clear improvement. One thing I wonder is if it's\n> worth moving the strategies up one level? Probaly not, but ...\n\nHmm, so share a strategy across all relation forks? You could even\npush it up a level beyond that and share it across all relations being\ncopied. That feels like it would be slightly more rational behavior,\nbut I'm not smart enough to guess whether anyone would actually be\nhappier (or less happy) after such a change than they are now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Mar 2023 11:33:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE DATABASE ... STRATEGY WAL_LOG issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-21 11:33:59 -0400, Robert Haas wrote:\n> On Tue, Mar 21, 2023 at 3:01 AM Andres Freund <[email protected]> wrote:\n> > Easy enough to fix and shows clear improvement. One thing I wonder is if it's\n> > worth moving the strategies up one level? Probaly not, but ...\n> \n> Hmm, so share a strategy across all relation forks? You could even\n> push it up a level beyond that and share it across all relations being\n> copied.\n\nThe latter is what I was wondering about.\n\n\n> That feels like it would be slightly more rational behavior,\n> but I'm not smart enough to guess whether anyone would actually be\n> happier (or less happy) after such a change than they are now.\n\nYea, I'm not either. The current behaviour does have the feature that it will\nread in some data for each table, but limits trashing of shared buffers for\nhuge tables. That's good if your small to medium sized source database isn't\nin s_b, because the next CREATE DATABASE has a change to not need to read the\ndata again. But if you have a source database with lots of small relations, it\ncan easily lead to swamping s_b.\n\nMore generally, I still think we need logic to use unused buffers even when\nstrategies are in use (my current theory is that we wouldn't increase the\nusagecount when strategies use unused buffers, so they can be replaced more\neasily).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Mar 2023 09:34:14 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE DATABASE ... STRATEGY WAL_LOG issues"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 12:34 PM Andres Freund <[email protected]> wrote:\n> More generally, I still think we need logic to use unused buffers even when\n> strategies are in use\n\nYep.\n\n> (my current theory is that we wouldn't increase the\n> usagecount when strategies use unused buffers, so they can be replaced more\n> easily).\n\nDon't know about this part.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Mar 2023 13:12:39 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE DATABASE ... STRATEGY WAL_LOG issues"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-21 09:34:14 -0700, Andres Freund wrote:\n> On 2023-03-21 11:33:59 -0400, Robert Haas wrote:\n> > That feels like it would be slightly more rational behavior,\n> > but I'm not smart enough to guess whether anyone would actually be\n> > happier (or less happy) after such a change than they are now.\n> \n> Yea, I'm not either. The current behaviour does have the feature that it will\n> read in some data for each table, but limits trashing of shared buffers for\n> huge tables. That's good if your small to medium sized source database isn't\n> in s_b, because the next CREATE DATABASE has a change to not need to read the\n> data again. But if you have a source database with lots of small relations, it\n> can easily lead to swamping s_b.\n\nPatch with the two minimal fixes attached. As we don't know whether it's worth\nchanging the strategy, the more minimal fixes seem more appropriate.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 21 Mar 2023 22:11:58 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE DATABASE ... STRATEGY WAL_LOG issues"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 1:12 AM Andres Freund <[email protected]> wrote:\n> Patch with the two minimal fixes attached. As we don't know whether it's worth\n> changing the strategy, the more minimal fixes seem more appropriate.\n\nLGTM.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Mar 2023 09:58:58 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE DATABASE ... STRATEGY WAL_LOG issues"
},
{
"msg_contents": "On 2023-03-22 09:58:58 -0400, Robert Haas wrote:\n> On Wed, Mar 22, 2023 at 1:12 AM Andres Freund <[email protected]> wrote:\n> > Patch with the two minimal fixes attached. As we don't know whether it's worth\n> > changing the strategy, the more minimal fixes seem more appropriate.\n> \n> LGTM.\n\nThanks for checking. Pushed.\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:04:19 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE DATABASE ... STRATEGY WAL_LOG issues"
}
] |
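For readers outside the thread above: the strategy being discussed is chosen per CREATE DATABASE statement. A minimal illustration of the syntax, valid on PostgreSQL 15 and later (database names invented):

-- WAL_LOG, the default since v15, copies the template block by block through
-- shared buffers and WAL-logs the blocks; FILE_COPY copies the files directly
-- and forces checkpoints instead.
CREATE DATABASE db_wal_log   TEMPLATE template0 STRATEGY WAL_LOG;
CREATE DATABASE db_file_copy TEMPLATE template0 STRATEGY FILE_COPY;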
[
{
"msg_contents": "Hi,\nHash table scans (seq_scan_table/level) are cleaned up at the end of a\ntransaction in AtEOXact_HashTables(). If a hash seq scan continues\nbeyond transaction end it will meet \"ERROR: no hash_seq_search scan\nfor hash table\" in deregister_seq_scan(). That seems like a limiting\nthe hash table usage.\n\nOur use case is\n1. Add/update/remove entries in hash table\n2. Scan the existing entries and perform one transaction per entry\n3. Close scan\n\nrepeat above steps in an infinite loop. Note that we do not\nadd/modify/delete entries in step 2. We can't use linked lists since\nthe entries need to be updated or deleted using hash keys. Because the\nhash seq scan is cleaned up at the end of the transaction, we\nencounter error in the 3rd step. I don't see that the actual hash\ntable scan depends upon the seq_scan_table/level[] which is cleaned up\nat the end of the transaction.\n\nI have following questions\n1. Is there a way to avoid cleaning up seq_scan_table/level() when the\ntransaction ends?\n2. Is there a way that we can use hash table implementation in\nPostgreSQL code for our purpose?\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 21 Mar 2023 12:51:36 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hash table scans outside transactions"
},
{
"msg_contents": "Bumping it to attract some attention.\n\nOn Tue, Mar 21, 2023 at 12:51 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Hi,\n> Hash table scans (seq_scan_table/level) are cleaned up at the end of a\n> transaction in AtEOXact_HashTables(). If a hash seq scan continues\n> beyond transaction end it will meet \"ERROR: no hash_seq_search scan\n> for hash table\" in deregister_seq_scan(). That seems like a limiting\n> the hash table usage.\n>\n> Our use case is\n> 1. Add/update/remove entries in hash table\n> 2. Scan the existing entries and perform one transaction per entry\n> 3. Close scan\n>\n> repeat above steps in an infinite loop. Note that we do not\n> add/modify/delete entries in step 2. We can't use linked lists since\n> the entries need to be updated or deleted using hash keys. Because the\n> hash seq scan is cleaned up at the end of the transaction, we\n> encounter error in the 3rd step. I don't see that the actual hash\n> table scan depends upon the seq_scan_table/level[] which is cleaned up\n> at the end of the transaction.\n>\n> I have following questions\n> 1. Is there a way to avoid cleaning up seq_scan_table/level() when the\n> transaction ends?\n> 2. Is there a way that we can use hash table implementation in\n> PostgreSQL code for our purpose?\n>\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 28 Mar 2023 18:28:23 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash table scans outside transactions"
}
] |
[
{
"msg_contents": "Hi,\n\nI have noticed a rather odd behaviour that is not strictly a bug, but is\nunexpected.\n\nIt is when a immutable (or stable) PG function is returning results in a\nrecord structure a select on these calls the function repeatedly for each\nelement in the output record.\n\nSee below for an example.\n\nSure I can work around this by returning in an array, or materialised as a\nwhole by e.g. a materialised CTE, but what I'm looking for is *materialising\nof just the individual row *during processing, if the function is to be\ncalled on many rows.\n\nObviously in theory the returned record could be very complex, so we might\nnot want it materialised in general, but an option to do so would be nice.\nI would suggest that a WITH could be marked with a new \"MATERIALIZED *ROW*\"\noption (reusing already reserved keywords).\n\nNote how I below have set the cost extreme, in this test, the value does\nnot affect the behaviour..\n\nThe result set here have five elements, if i change the type to VOLATILE,\nthe execution time is reduced by a factor of five (see the difference\nbetween the stamp of line one and two). It is directly proportional to the\nnumber of elements requested from the record (here I requested all)\n\n(The real life scenario is a function that by a list of reg_ex expessions,\nsplits up the input in numerous fields, And I noticed the behaviour as a\nraise function added for debug, put out the same repeatedly.)\n\n-----------------\n\nDROP TYPE IF EXISTS septima.foo_type CASCADE;\nCREATE TYPE septima.foo_type AS (a text, b text, c text, d text, e text);\nDROP FUNCTION IF EXISTS septima.foo(text);\nCREATE OR REPLACE FUNCTION septima.foo(inp text) RETURNS septima.foo_type\nAS\n$BODY$\nDECLARE\n result_record septima.foo_type;\n i BIGINT :=12345678;\nBEGIN\n WHILE 0<i LOOP\n i=i-1;\n END LOOP;\n RETURN result_record;\nEND\n$BODY$\n LANGUAGE plpgsql IMMUTABLE\n COST 1234567890;\n;\nWITH x AS (\n SELECT * FROM (\n SELECT clock_timestamp() rowstart, (g).*, clock_timestamp() rowend FROM\n(\n SELECT septima.foo(inp) g FROM (\n SELECT '1' inp UNION\n SELECT '2')\n y) x\n ) x\n)\nSELECT * FROM x;\nDROP TYPE IF EXISTS septima.foo_type CASCADE;\n\nMed venlig hilsen\n*Eske Rahn*\nSeniorkonsulent\n+45 93 87 96 30\[email protected]\n--------------------------\nSeptima P/S\nFrederiksberggade 19, 2. sal\n1459 København K\n+45 72 30 06 72\nhttps://septima.dk\n\nHi,I have noticed a rather odd behaviour that is not strictly a bug, but is unexpected.It is when a immutable (or stable) PG function is returning results in a record structure a select on these calls the function repeatedly for each element in the output record.See below for an example.Sure I can work around this by returning in an array, or materialised as a whole by e.g. a materialised CTE, but what I'm looking for is materialising of just the individual row during processing, if the function is to be called on many rows.Obviously in theory the returned record could be very complex, so we might not want it materialised in general, but an option to do so would be nice. I would suggest that a WITH could be marked with a new \"MATERIALIZED ROW\" option (reusing already reserved keywords).Note how I below have set the cost extreme, in this test, the value does not affect the behaviour..The result set here have five elements, if i change the type to VOLATILE, the execution time is reduced by a factor of five (see the difference between the stamp of line one and two). 
It is directly proportional to the number of elements requested from the record (here I requested all) (The real life scenario is a function that by a list of reg_ex expessions, splits up the input in numerous fields, And I noticed the behaviour as a raise function added for debug, put out the same repeatedly.)-----------------DROP TYPE IF EXISTS septima.foo_type CASCADE;CREATE TYPE septima.foo_type AS (a text, b text, c text, d text, e text);DROP FUNCTION IF EXISTS septima.foo(text);CREATE OR REPLACE FUNCTION septima.foo(inp text) RETURNS septima.foo_typeAS$BODY$DECLARE result_record septima.foo_type; i BIGINT :=12345678; BEGIN WHILE 0<i LOOP i=i-1; END LOOP; RETURN result_record;END$BODY$ LANGUAGE plpgsql IMMUTABLE COST 1234567890;;WITH x AS ( SELECT * FROM ( SELECT clock_timestamp() rowstart, (g).*, clock_timestamp() rowend FROM ( SELECT septima.foo(inp) g FROM ( SELECT '1' inp UNION SELECT '2') y) x ) x)SELECT * FROM x;DROP TYPE IF EXISTS septima.foo_type CASCADE;Med venlig hilsenEske RahnSeniorkonsulent+45 93 87 96 30 [email protected] P/SFrederiksberggade 19, 2. sal1459 København K+45 72 30 06 72https://septima.dk",
"msg_date": "Tue, 21 Mar 2023 08:38:12 +0100",
"msg_from": "Eske Rahn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Options to rowwise persist result of stable/immutable function with\n RECORD result"
},
{
"msg_contents": "On Tuesday, March 21, 2023, Eske Rahn <[email protected]> wrote:\n\n> Hi,\n>\n> I have noticed a rather odd behaviour that is not strictly a bug, but is\n> unexpected.\n>\n> It is when a immutable (or stable) PG function is returning results in a\n> record structure a select on these calls the function repeatedly for each\n> element in the output record.\n>\n\nThe LATERAL join modifier exists to handle this kind of situation.\n\nDavid J.\n\nOn Tuesday, March 21, 2023, Eske Rahn <[email protected]> wrote:Hi,I have noticed a rather odd behaviour that is not strictly a bug, but is unexpected.It is when a immutable (or stable) PG function is returning results in a record structure a select on these calls the function repeatedly for each element in the output record.The LATERAL join modifier exists to handle this kind of situation.David J.",
"msg_date": "Wed, 22 Mar 2023 14:50:07 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Options to rowwise persist result of stable/immutable function\n with RECORD result"
},
{
"msg_contents": "Hi,\n\nThanks for the quick answer *:-D*\n\nThat was a nice sideeffect of lateral.\n\nIn the example, the calling code also gets simplified:\n\nWITH x AS (\n SELECT clock_timestamp() rowstart, *, clock_timestamp() rowend FROM (\n SELECT '1' inp UNION\n SELECT '2'\n ) y, LATERAL septima.foo(inp) g\n)\nSELECT * FROM x;\n\n\nThat solved the issue at hand, in a much better way. Thanks\n\nThough I still fail to see *why* the other way should generally call the\nfunction for every column in the *result* record - if the function is\nSTABLE or IMMUTABLE.\n\nBUT as I can not think up a sensible example where LATERAL will *not* do\nthe trick, so the oddity becomes academic.\nSo just a thing to remember: *always use lateral with functions with record\nresult types* - unless they are volatile)\n\n\n\n\nMed venlig hilsen\n*Eske Rahn*\nSeniorkonsulent\n+45 93 87 96 30\[email protected]\n--------------------------\nSeptima P/S\nFrederiksberggade 19, 2. sal\n1459 København K\n+45 72 30 06 72\nhttps://septima.dk\n\n\nOn Wed, Mar 22, 2023 at 10:50 PM David G. Johnston <\[email protected]> wrote:\n\n> On Tuesday, March 21, 2023, Eske Rahn <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> I have noticed a rather odd behaviour that is not strictly a bug, but is\n>> unexpected.\n>>\n>> It is when a immutable (or stable) PG function is returning results in a\n>> record structure a select on these calls the function repeatedly for each\n>> element in the output record.\n>>\n>\n> The LATERAL join modifier exists to handle this kind of situation.\n>\n> David J.\n>\n>\n\nHi,Thanks for the quick answer :-DThat was a nice sideeffect of lateral.In the example, the calling code also gets simplified:WITH x AS ( SELECT clock_timestamp() rowstart, *, clock_timestamp() rowend FROM ( SELECT '1' inp UNION SELECT '2' ) y, LATERAL septima.foo(inp) g)SELECT * FROM x;That solved the issue at hand, in a much better way. ThanksThough I still fail to see why the other way should generally call the function for every column in the result record - if the function is STABLE or IMMUTABLE.BUT as I can not think up a sensible example where LATERAL will not do the trick, so the oddity becomes academic.So just a thing to remember: always use lateral with functions with record result types - unless they are volatile)Med venlig hilsenEske RahnSeniorkonsulent+45 93 87 96 30 [email protected] P/SFrederiksberggade 19, 2. sal1459 København K+45 72 30 06 72https://septima.dkOn Wed, Mar 22, 2023 at 10:50 PM David G. Johnston <[email protected]> wrote:On Tuesday, March 21, 2023, Eske Rahn <[email protected]> wrote:Hi,I have noticed a rather odd behaviour that is not strictly a bug, but is unexpected.It is when a immutable (or stable) PG function is returning results in a record structure a select on these calls the function repeatedly for each element in the output record.The LATERAL join modifier exists to handle this kind of situation.David J.",
"msg_date": "Thu, 23 Mar 2023 00:32:42 +0100",
"msg_from": "Eske Rahn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Options to rowwise persist result of stable/immutable function\n with RECORD result"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 4:32 PM Eske Rahn <[email protected]> wrote:\n\n> Hi,\n>\n> Thanks for the quick answer *:-D*\n>\n> That was a nice sideeffect of lateral.\n>\n> In the example, the calling code also gets simplified:\n>\n> WITH x AS (\n> SELECT clock_timestamp() rowstart, *, clock_timestamp() rowend FROM (\n> SELECT '1' inp UNION\n> SELECT '2'\n> ) y, LATERAL septima.foo(inp) g\n> )\n> SELECT * FROM x;\n>\n>\n> That solved the issue at hand, in a much better way. Thanks\n>\n> Though I still fail to see *why* the other way should generally call the\n> function for every column in the *result* record - if the function is\n> STABLE or IMMUTABLE.\n>\n\nIt gets rewritten to be effectively:\n\nselect func_call(...).col1, func_call(...).col2, func_call(...).col3\n\nunder the assumption that repeating the function call will be cheap and\nside-effect free. It was never ideal but fixing that form of optimization\nwas harder than implementing LATERAL where the multi-column result has a\nnatural output in the form of a multi-column table. A normal function call\nin the target list really means \"return a single value\" which is at odds\nwith writing .* after it.\n\nDavid J.\n\nOn Wed, Mar 22, 2023 at 4:32 PM Eske Rahn <[email protected]> wrote:Hi,Thanks for the quick answer :-DThat was a nice sideeffect of lateral.In the example, the calling code also gets simplified:WITH x AS ( SELECT clock_timestamp() rowstart, *, clock_timestamp() rowend FROM ( SELECT '1' inp UNION SELECT '2' ) y, LATERAL septima.foo(inp) g)SELECT * FROM x;That solved the issue at hand, in a much better way. ThanksThough I still fail to see why the other way should generally call the function for every column in the result record - if the function is STABLE or IMMUTABLE.It gets rewritten to be effectively:select func_call(...).col1, func_call(...).col2, func_call(...).col3under the assumption that repeating the function call will be cheap and side-effect free. It was never ideal but fixing that form of optimization was harder than implementing LATERAL where the multi-column result has a natural output in the form of a multi-column table. A normal function call in the target list really means \"return a single value\" which is at odds with writing .* after it.David J.",
"msg_date": "Wed, 22 Mar 2023 16:46:09 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Options to rowwise persist result of stable/immutable function\n with RECORD result"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 4:46 PM David G. Johnston <\[email protected]> wrote:\n\n> On Wed, Mar 22, 2023 at 4:32 PM Eske Rahn <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> Thanks for the quick answer *:-D*\n>>\n>> That was a nice sideeffect of lateral.\n>>\n>> In the example, the calling code also gets simplified:\n>>\n>> WITH x AS (\n>> SELECT clock_timestamp() rowstart, *, clock_timestamp() rowend FROM (\n>> SELECT '1' inp UNION\n>> SELECT '2'\n>> ) y, LATERAL septima.foo(inp) g\n>> )\n>> SELECT * FROM x;\n>>\n>>\n>> That solved the issue at hand, in a much better way. Thanks\n>>\n>> Though I still fail to see *why* the other way should generally call the\n>> function for every column in the *result* record - if the function is\n>> STABLE or IMMUTABLE.\n>>\n>\n> It gets rewritten to be effectively:\n>\n> select func_call(...).col1, func_call(...).col2, func_call(...).col3\n>\n> under the assumption that repeating the function call will be cheap and\n> side-effect free. It was never ideal but fixing that form of optimization\n> was harder than implementing LATERAL where the multi-column result has a\n> natural output in the form of a multi-column table. A normal function call\n> in the target list really means \"return a single value\" which is at odds\n> with writing .* after it.\n>\n>\nActually, it is less \"optimization\" and more \"SQL is strongly typed and all\ncolumns must be defined during query compilation\".\n\nDavid J.\n\nOn Wed, Mar 22, 2023 at 4:46 PM David G. Johnston <[email protected]> wrote:On Wed, Mar 22, 2023 at 4:32 PM Eske Rahn <[email protected]> wrote:Hi,Thanks for the quick answer :-DThat was a nice sideeffect of lateral.In the example, the calling code also gets simplified:WITH x AS ( SELECT clock_timestamp() rowstart, *, clock_timestamp() rowend FROM ( SELECT '1' inp UNION SELECT '2' ) y, LATERAL septima.foo(inp) g)SELECT * FROM x;That solved the issue at hand, in a much better way. ThanksThough I still fail to see why the other way should generally call the function for every column in the result record - if the function is STABLE or IMMUTABLE.It gets rewritten to be effectively:select func_call(...).col1, func_call(...).col2, func_call(...).col3under the assumption that repeating the function call will be cheap and side-effect free. It was never ideal but fixing that form of optimization was harder than implementing LATERAL where the multi-column result has a natural output in the form of a multi-column table. A normal function call in the target list really means \"return a single value\" which is at odds with writing .* after it.Actually, it is less \"optimization\" and more \"SQL is strongly typed and all columns must be defined during query compilation\".David J.",
"msg_date": "Wed, 22 Mar 2023 16:51:30 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Options to rowwise persist result of stable/immutable function\n with RECORD result"
}
] |
[
{
"msg_contents": "While working on something else, I noticed $SUBJECT added by commit 86dc90056:\n\n * For UPDATE and DELETE queries, the targetlist must also contain \"junk\"\n * tlist entries needed to allow the executor to identify the rows to be\n * updated or deleted; for example, the ctid of a heap row. (The planner\n * adds these; they're not in what we receive from the planner/rewriter.)\n\nI think that “planner/rewriter” should be parser/rewriter. Attached\nis a patch for that.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Tue, 21 Mar 2023 18:41:35 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Comment in preptlist.c"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 5:41 PM Etsuro Fujita <[email protected]>\nwrote:\n\n> While working on something else, I noticed $SUBJECT added by commit\n> 86dc90056:\n>\n> * For UPDATE and DELETE queries, the targetlist must also contain \"junk\"\n> * tlist entries needed to allow the executor to identify the rows to be\n> * updated or deleted; for example, the ctid of a heap row. (The planner\n> * adds these; they're not in what we receive from the planner/rewriter.)\n>\n> I think that “planner/rewriter” should be parser/rewriter. Attached\n> is a patch for that.\n\n\nYes of course. It should be parser/rewriter here.\n\nThanks\nRichard\n\nOn Tue, Mar 21, 2023 at 5:41 PM Etsuro Fujita <[email protected]> wrote:While working on something else, I noticed $SUBJECT added by commit 86dc90056:\n\n * For UPDATE and DELETE queries, the targetlist must also contain \"junk\"\n * tlist entries needed to allow the executor to identify the rows to be\n * updated or deleted; for example, the ctid of a heap row. (The planner\n * adds these; they're not in what we receive from the planner/rewriter.)\n\nI think that “planner/rewriter” should be parser/rewriter. Attached\nis a patch for that.Yes of course. It should be parser/rewriter here.ThanksRichard",
"msg_date": "Tue, 21 Mar 2023 18:02:39 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comment in preptlist.c"
},
{
"msg_contents": "Etsuro Fujita <[email protected]> writes:\n> While working on something else, I noticed $SUBJECT added by commit 86dc90056:\n> * For UPDATE and DELETE queries, the targetlist must also contain \"junk\"\n> * tlist entries needed to allow the executor to identify the rows to be\n> * updated or deleted; for example, the ctid of a heap row. (The planner\n> * adds these; they're not in what we receive from the planner/rewriter.)\n\n> I think that “planner/rewriter” should be parser/rewriter. Attached\n> is a patch for that.\n\nAgreed, obviously a thinko :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Mar 2023 10:01:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comment in preptlist.c"
},
{
"msg_contents": "On Tue, 21 Mar 2023 at 22:41, Etsuro Fujita <[email protected]> wrote:\n> I think that “planner/rewriter” should be parser/rewriter. Attached\n> is a patch for that.\n\nPushed.\n\nDavid\n\n\n",
"msg_date": "Wed, 22 Mar 2023 08:59:29 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comment in preptlist.c"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 4:59 AM David Rowley <[email protected]> wrote:\n> On Tue, 21 Mar 2023 at 22:41, Etsuro Fujita <[email protected]> wrote:\n> > I think that “planner/rewriter” should be parser/rewriter. Attached\n> > is a patch for that.\n>\n> Pushed.\n\nThanks for picking this up, David! Thanks for looking, Tom and Richard!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 22 Mar 2023 16:39:52 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comment in preptlist.c"
},
{
"msg_contents": "On Wed, 22 Mar 2023 at 20:40, Etsuro Fujita <[email protected]> wrote:\n> Thanks for picking this up, David! Thanks for looking, Tom and Richard!\n\nAnd now it just clicked with me why Tom left this. Sorry for stepping\non your toes here.\n\nDavid\n\n\n",
"msg_date": "Wed, 22 Mar 2023 20:50:28 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comment in preptlist.c"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 4:50 PM David Rowley <[email protected]> wrote:\n> And now it just clicked with me why Tom left this. Sorry for stepping\n> on your toes here.\n\nNo problem at all.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 22 Mar 2023 20:50:56 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comment in preptlist.c"
}
] |
[
{
"msg_contents": "Hi all,\n\nI started this new thread from another thread[1] where we're\ndiscussing a new storage for TIDs, TidStore, since we found a\ndifficulty about the memory usage limit for TidStores on DSA.\n\nTidStore is a new data structure to efficiently store TIDs, backed by\na radix tree. In the patch series proposed on that thread, in addition\nto radix tree and TidStore, there is another patch for lazy (parallel)\nvacuum to replace the array of dead tuple TIDs with a TidStore. To\nsupport parallel vacuum, radix tree (and TidStore) can be created on a\nlocal memory as well as on DSA. Also, it has memory usage limit\nfunctionality; we can specify the memory limit (e.g.,\nmaintenance_work_mem) to TidStoreCreate() function. Once the total DSA\nsegment size (area->control->total_segment_size) exceeds the limit,\nTidStoreIsFull() returns true. The lazy vacuum can continue scanning\nheap blocks to collect dead tuple TIDs until TidStoreIsFull() returns\ntrue. Currently lazy vacuum is the sole user of TidStore but maybe it\ncan be used by other codes such as tidbitmap.c where will be limited\nby work_mem.\n\nDuring the development, we found out that DSA memory growth is\nunpredictable, leading to inefficient memory limitation.\n\nDSA is built on top of DSM segments and it manages a set of DSM\nsegments, adding new segments as required and detaching them when they\nare no longer needed. The DSA segment starts with 1MB in size and a\nnew segment size is at least big enough to follow a geometric series\nthat approximately doubles the total storage each time we create a new\nsegment. Because of this fact, it's not efficient to simply compare\nthe memory limit to the total segment size. For example, if\nmaintenance_work_mem is 512MB, the total segment size will be like:\n\n2 * (1 + 2 + 4 + 8 + 16 + 32 + 64 + 128) = 510MB -> less than the\nlimit, continue heap scan.\n\n2 * (1 + 2 + 4 + 8 + 16 + 32 + 64 + 128) + 256 = 766MB -> stop (exceed 254MB).\n\nOne might think we can use dsa_set_size_limit() but it cannot; lazy\nvacuum ends up with an error. If we set DSA_ALLOC_NO_OOM, we might end\nup stopping the insertion halfway.\n\nBesides excessively allocating memory, since the initial DSM segment\nsize is fixed 1MB, memory usage of a shared TidStore will start from\n1MB+. This is higher than the minimum values of both work_mem and\nmaintenance_work_mem, 64kB and 1MB respectively. Increasing the\nminimum m_w_m to 2MB might be acceptable but not for work_mem.\n\nResearching possible solutions, we found that aset.c also has a\nsimilar characteristic; allocates an 8K block (by default) upon the\nfirst allocation in a context, and doubles that size for each\nsuccessive block request. But we can specify the initial block size\nand max blocksize. This made me think of an idea to specify both to\nDSA and both values are calculated based on m_w_m. I've attached the\npatch for this idea. The changes to dsa.c are straightforward since\ndsa.c already uses macros DSA_INITIAL_SEGMENT_SIZE and\nDSA_MAX_SEGMENT_SIZE. I just made these values configurable.\n\nFYI with this patch, we can create a DSA in parallel_vacuum_init()\nwith initial and maximum block sizes as follows:\n\ninitial block size = min(m_w_m / 4, 1MB)\nmax block size = max(m_w_m / 8, 8MB)\n\nIn most cases, we can start with a 1MB initial segment, the same as\nbefore. For larger memory, the heap scan stops after DSA allocates\n1.25 times more memory than m_w_m. 
For example, if m_w_m = 512MB, the\nboth initial and maximum segment sizes are 1MB and 64MB respectively,\nand then DSA allocates the segments as follows until heap scanning\nstops:\n\n2 * (1 + 2 + 4 + 8 + 16 + 32 + 64) + (64 * 4) = 510MB -> less than the\nlimit, continue heap scan.\n\n2 * (1 + 2 + 4 + 8 + 16 + 32 + 64) + (64 * 5) = 574MB -> stop\n(allocated additional 62MB).\n\nIt also works with smaller memory; If the limit is 1MB, we start with\na 256KB initial segment and heap scanning stops after DSA allocated\n1.5MB (= 256kB + 256kB + 512kB + 512kB).\n\nThere is room for considering better formulas for initial and maximum\nblock sizes but making both values configurable is a promising idea.\nAnd the analogous behavior to aset could be a good thing for\nreadability and maintainability. There is another test result where I\nused this idea on top of a radix tree[2].\n\nWe need to consider the total number of allocated DSA segments as the\ntotal number of DSM segments available on the system is fixed[3]. But\nit seems not problematic even with this patch since we allocate only a\nfew additional segments (in above examples 17 segs vs. 19 segs). There\nwas no big difference also in performance[2].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoDBmD5q%3DeO%2BK%3DgyuVt53XvwpJ2dgxPwrtZ-eVOjVmtJjg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAD21AoDKr%3D4YHphy6cRojE5eyT6E2ao8xb44E309eTrUEOC6xw%40mail.gmail.com\n[3] from dsm.c, the total number of DSM segments available on the\nsystem is calculated by:\n#define PG_DYNSHMEM_FIXED_SLOTS 64\n#define PG_DYNSHMEM_SLOTS_PER_BACKEND 5\nmaxitems = PG_DYNSHMEM_FIXED_SLOTS\n + PG_DYNSHMEM_SLOTS_PER_BACKEND * MaxBackends;\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 22 Mar 2023 00:15:47 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Making the initial and maximum DSA segment sizes configurable"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 22, 2023 at 12:15 AM Masahiko Sawada <[email protected]> wrote:\n>\n> Hi all,\n>\n> I started this new thread from another thread[1] where we're\n> discussing a new storage for TIDs, TidStore, since we found a\n> difficulty about the memory usage limit for TidStores on DSA.\n>\n> TidStore is a new data structure to efficiently store TIDs, backed by\n> a radix tree. In the patch series proposed on that thread, in addition\n> to radix tree and TidStore, there is another patch for lazy (parallel)\n> vacuum to replace the array of dead tuple TIDs with a TidStore. To\n> support parallel vacuum, radix tree (and TidStore) can be created on a\n> local memory as well as on DSA. Also, it has memory usage limit\n> functionality; we can specify the memory limit (e.g.,\n> maintenance_work_mem) to TidStoreCreate() function. Once the total DSA\n> segment size (area->control->total_segment_size) exceeds the limit,\n> TidStoreIsFull() returns true. The lazy vacuum can continue scanning\n> heap blocks to collect dead tuple TIDs until TidStoreIsFull() returns\n> true. Currently lazy vacuum is the sole user of TidStore but maybe it\n> can be used by other codes such as tidbitmap.c where will be limited\n> by work_mem.\n>\n> During the development, we found out that DSA memory growth is\n> unpredictable, leading to inefficient memory limitation.\n>\n> DSA is built on top of DSM segments and it manages a set of DSM\n> segments, adding new segments as required and detaching them when they\n> are no longer needed. The DSA segment starts with 1MB in size and a\n> new segment size is at least big enough to follow a geometric series\n> that approximately doubles the total storage each time we create a new\n> segment. Because of this fact, it's not efficient to simply compare\n> the memory limit to the total segment size. For example, if\n> maintenance_work_mem is 512MB, the total segment size will be like:\n>\n> 2 * (1 + 2 + 4 + 8 + 16 + 32 + 64 + 128) = 510MB -> less than the\n> limit, continue heap scan.\n>\n> 2 * (1 + 2 + 4 + 8 + 16 + 32 + 64 + 128) + 256 = 766MB -> stop (exceed 254MB).\n>\n> One might think we can use dsa_set_size_limit() but it cannot; lazy\n> vacuum ends up with an error. If we set DSA_ALLOC_NO_OOM, we might end\n> up stopping the insertion halfway.\n>\n> Besides excessively allocating memory, since the initial DSM segment\n> size is fixed 1MB, memory usage of a shared TidStore will start from\n> 1MB+. This is higher than the minimum values of both work_mem and\n> maintenance_work_mem, 64kB and 1MB respectively. Increasing the\n> minimum m_w_m to 2MB might be acceptable but not for work_mem.\n>\n> Researching possible solutions, we found that aset.c also has a\n> similar characteristic; allocates an 8K block (by default) upon the\n> first allocation in a context, and doubles that size for each\n> successive block request. But we can specify the initial block size\n> and max blocksize. This made me think of an idea to specify both to\n> DSA and both values are calculated based on m_w_m. I've attached the\n> patch for this idea. The changes to dsa.c are straightforward since\n> dsa.c already uses macros DSA_INITIAL_SEGMENT_SIZE and\n> DSA_MAX_SEGMENT_SIZE. 
I just made these values configurable.\n>\n> FYI with this patch, we can create a DSA in parallel_vacuum_init()\n> with initial and maximum block sizes as follows:\n>\n> initial block size = min(m_w_m / 4, 1MB)\n> max block size = max(m_w_m / 8, 8MB)\n>\n> In most cases, we can start with a 1MB initial segment, the same as\n> before. For larger memory, the heap scan stops after DSA allocates\n> 1.25 times more memory than m_w_m. For example, if m_w_m = 512MB, the\n> both initial and maximum segment sizes are 1MB and 64MB respectively,\n> and then DSA allocates the segments as follows until heap scanning\n> stops:\n>\n> 2 * (1 + 2 + 4 + 8 + 16 + 32 + 64) + (64 * 4) = 510MB -> less than the\n> limit, continue heap scan.\n>\n> 2 * (1 + 2 + 4 + 8 + 16 + 32 + 64) + (64 * 5) = 574MB -> stop\n> (allocated additional 62MB).\n>\n> It also works with smaller memory; If the limit is 1MB, we start with\n> a 256KB initial segment and heap scanning stops after DSA allocated\n> 1.5MB (= 256kB + 256kB + 512kB + 512kB).\n>\n> There is room for considering better formulas for initial and maximum\n> block sizes but making both values configurable is a promising idea.\n> And the analogous behavior to aset could be a good thing for\n> readability and maintainability. There is another test result where I\n> used this idea on top of a radix tree[2].\n>\n> We need to consider the total number of allocated DSA segments as the\n> total number of DSM segments available on the system is fixed[3]. But\n> it seems not problematic even with this patch since we allocate only a\n> few additional segments (in above examples 17 segs vs. 19 segs). There\n> was no big difference also in performance[2].\n>\n\nThe last time I posted this email seemed not good timing since it was\nclose to the feature freeze, and the email was very long. The tidstore\nand radix tree developments are still in-progress[1] and this change\nis still necessary. I'd like to summarize the problem and proposal:\n\n* Both the initial DSA segment size and the maximum DSA segment size\nare fixed values: 1MB and 1TB respectively.\n* The total allocated DSA segments follows a geometric series.\n* The patch makes both the initial and maximum DSA segment sizes configurable.\n* Which helps:\n * minimize wasting memory when the total DSA segment size reaches\nthe limit set by caller.\n * create a data structure with a small memory, for example 64kB,\nthe minimum value of work_mem.\n\nAccording to the recent discussion, it might be sufficient to make\nonly the maximum DSA segment size configurable.\n\nI'll register this item for the next commit fest.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoDjTbp2SHn4hRzAxWNeYArn4Yd4UdH9XRoNzdrYWNgExw%40mail.gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Dec 2023 15:03:26 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Making the initial and maximum DSA segment sizes configurable"
},
{
"msg_contents": "Hello Masahiko-san,\n\nI'm not super-familiar with the DSA/DSM stuff, but I think your proposal\nmakes sense.\n\nI agree with your observation that DSA is a bit like AllocSet, so if\nthat allows specifying min/max block size, maybe DSA should allow the\nsame thing for segments ...\n\nHowever, does it actually address the problem you've described? If I\nunderstand it correctly, the problem is that with the doubling logic,\nwe're likely to overshoot the limit. For example with m_w_m=512MB we\nfirst undershoot it a bit (510MB), and then overshoot it a lot (766MB).\nWhich is not great, I agree (especially the overshooting).\n\nBut is the modification you propose much better? I mean, we still\novershoot the limit, right? By a smaller amount (just 62MB instead of\n254MB), but it's still more than the limit ...\n\nMoreover, this really depend on the caller using lower init/max segment\nsize, right? I'd bet most places would just hard-code something, which\nmeans it won't respond to changes in the m_w_m value.\n\n\nCould instead allow specifying the expected size / memory limit,\ncalculate the maximum segment size in DSA code, and also modify how the\nsegment size evolves over time to decrease as we get closer to the\nexpected size / limit?\n\nFor example, let's say we'd create DSA with 512MB limit. Then we could\ndo this:\n\n1MB, 2MB, 4MB, ..., 128MB, 256MB, 1MB, 1MB, ...\n\nbecause after 256MB we have 511MB of segments (in total), and we have to\ngo all the way back to the smallest segment to not exceed the limit (or\nto minimize how much we exceed it). If the limit was set to 600MB, we'd\ngo back to 64MB, then 16MB, etc.\n\nOr maybe we could be smarter and calculate an \"optimal point\" at which\npoint to start decreasing the segment size, roughly half-way through. So\nwe'd end up with something like\n\n1MB, 2MB, 4MB, ..., 128MB, 128MB, 64MB, 32MB, 16MB, ..., 1MB\n\nBut maybe that's unnecessarily complicated ... or maybe I'm missing some\ndetails that make this impossible for the DSA/DSM code.\n\nFWIW the aset.c code has the same problem - it's not aware of limits\nlike work_mem / maintenance_work_mem, and with hard-coded limits we may\neasily hit exceed those (if we get to sufficiently large blocks,\nalthough in most cases the max block is 8MB, which limits how much we\novershoot the limit). Not sure if that's an issue in practice, maybe the\nvirtual memory thing deals with this for us.\n\n\nIf you choose to go with passing the min/max segment size to DSA, maybe\nthis should do a similar thing to aset.c and define a couple \"good\"\nvalues (like ALLOCSET_DEFAULT_SIZES, ALLOCSET_SMALL_SIZES, ...) and/or a\nmacro to calculate good segment sizes for a given limit.\n\nAlso, there's a comment:\n\n * See dsa_create() for a note about the tranche arguments.\n\nwhich should probably reference dsa_create_extended() instead.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 25 Feb 2024 22:58:52 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Making the initial and maximum DSA segment sizes configurable"
},
{
"msg_contents": "Hi,\n\nThank you for the comments!\n\nOn Mon, Feb 26, 2024 at 6:58 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> Hello Masahiko-san,\n>\n> I'm not super-familiar with the DSA/DSM stuff, but I think your proposal\n> makes sense.\n>\n> I agree with your observation that DSA is a bit like AllocSet, so if\n> that allows specifying min/max block size, maybe DSA should allow the\n> same thing for segments ...\n>\n> However, does it actually address the problem you've described? If I\n> understand it correctly, the problem is that with the doubling logic,\n> we're likely to overshoot the limit. For example with m_w_m=512MB we\n> first undershoot it a bit (510MB), and then overshoot it a lot (766MB).\n> Which is not great, I agree (especially the overshooting).\n>\n> But is the modification you propose much better? I mean, we still\n> overshoot the limit, right? By a smaller amount (just 62MB instead of\n> 254MB), but it's still more than the limit ...\n>\n> Moreover, this really depend on the caller using lower init/max segment\n> size, right? I'd bet most places would just hard-code something, which\n> means it won't respond to changes in the m_w_m value.\n>\n>\n> Could instead allow specifying the expected size / memory limit,\n> calculate the maximum segment size in DSA code, and also modify how the\n> segment size evolves over time to decrease as we get closer to the\n> expected size / limit?\n>\n> For example, let's say we'd create DSA with 512MB limit. Then we could\n> do this:\n>\n> 1MB, 2MB, 4MB, ..., 128MB, 256MB, 1MB, 1MB, ...\n>\n> because after 256MB we have 511MB of segments (in total), and we have to\n> go all the way back to the smallest segment to not exceed the limit (or\n> to minimize how much we exceed it). If the limit was set to 600MB, we'd\n> go back to 64MB, then 16MB, etc.\n\nInteresting idea. In fact, since we use each segment size two times,\nwe would do like:\n\n1MB, 1MB, 2MB, 2MB, ... 128MB, 128MB = 510MB (continue)\n\nthen, back to the smallest segment:\n\n2MB, 1MB = 513MB (stop)\n\nWith 600MB limit, we would do like:\n\n1MB, 1MB, 2MB, 2MB, ... 128MB, 128MB = 510MB (continue)\n64MB + 16MB + 8MB + 2MB + 1MB = 601MB (stop)\n\n>\n> Or maybe we could be smarter and calculate an \"optimal point\" at which\n> point to start decreasing the segment size, roughly half-way through. So\n> we'd end up with something like\n>\n> 1MB, 2MB, 4MB, ..., 128MB, 128MB, 64MB, 32MB, 16MB, ..., 1MB\n\nI remember John proposed a similar idea[1]. Quoting from the email:\n\nm_w_m = 1GB, so calculate the soft limit to be 512MB and pass it to\nthe DSA area.\n\n2*(1+2+4+8+16+32+64+128) + 256 = 766MB (74.8% of 1GB) -> hit soft limit, so\n\"stairstep down\" the new segment sizes:\n\n766 + 2*(128) + 64 = 1086MB -> stop\n\n\nBoth are interesting ideas. The reason why I proposed the idea is the\nsimplicity; it is simple and a similar usage as aset.c.\n\nI guess the latter idea (a soft limit idea) might also be implemented\nin a simple way. It's worth trying it.\n\n>\n> But maybe that's unnecessarily complicated ... or maybe I'm missing some\n> details that make this impossible for the DSA/DSM code.\n>\n> FWIW the aset.c code has the same problem - it's not aware of limits\n> like work_mem / maintenance_work_mem, and with hard-coded limits we may\n> easily hit exceed those (if we get to sufficiently large blocks,\n> although in most cases the max block is 8MB, which limits how much we\n> overshoot the limit). 
Not sure if that's an issue in practice, maybe the\n> virtual memory thing deals with this for us.\n\nRight, we use 8MB max block size in most cases and it works in aset.c.\nOn the other hand, since dsm.c has the limits on total number of\nsegments, we cannot use unnecessarily small segments.\n\n>\n> If you choose to go with passing the min/max segment size to DSA, maybe\n> this should do a similar thing to aset.c and define a couple \"good\"\n> values (like ALLOCSET_DEFAULT_SIZES, ALLOCSET_SMALL_SIZES, ...) and/or a\n> macro to calculate good segment sizes for a given limit.\n\nAgreed.\n\n>\n> Also, there's a comment:\n>\n> * See dsa_create() for a note about the tranche arguments.\n>\n> which should probably reference dsa_create_extended() instead.\n\nThanks, will fix it.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAFBsxsGiiyY%2BwykVLBbN9hFUMiNHqEr_Kqg9Mpc%3Duv4sg8eagQ%40mail.gmail.com\n\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Feb 2024 11:47:51 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Making the initial and maximum DSA segment sizes configurable"
}
] |
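A minimal C sketch of the sizing idea from the thread above, for illustration only. The dsa_create_extended() call and its argument list are assumptions taken from the proposed patch (released PostgreSQL only offers dsa_create(tranche_id) with fixed segment sizes), and the size formulas are the ones floated in the opening message (initial = min(m_w_m / 4, 1MB), maximum = max(m_w_m / 8, 8MB)); error handling and the surrounding parallel-vacuum plumbing are omitted.

#include "postgres.h"
#include "miscadmin.h"
#include "utils/dsa.h"

static dsa_area *
create_size_limited_dsa(int tranche_id)
{
    /* maintenance_work_mem is expressed in kilobytes */
    Size        limit = (Size) maintenance_work_mem * 1024;

    /* Formulas quoted in the thread above */
    Size        init_seg = Min(limit / 4, 1024 * 1024);        /* at most 1MB */
    Size        max_seg = Max(limit / 8, 8 * 1024 * 1024);     /* at least 8MB */

    /*
     * Assumed signature of the proposed dsa_create_extended(); this is the
     * patch's API, not something present in released PostgreSQL.
     */
    return dsa_create_extended(tranche_id, init_seg, max_seg);
}

With maintenance_work_mem = 512MB this yields the 1MB/64MB pair used in the example above, so the area stops close to the limit instead of doubling far past it.
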
[
{
"msg_contents": "Hi,\r\n\r\nWe are pleased to announce the Release Management Team (RMT) (cc'd) for\r\nthe PostgreSQL 16 release:\r\n\r\n - Alvaro Herrera\r\n - Amit Kapila\r\n - Jonathan Katz\r\n\r\nYou can find information about the responsibilities of the RMT here:\r\n\r\n https://wiki.postgresql.org/wiki/Release_Management_Team\r\n\r\nAdditionally, the RMT has set the feature freeze to be **April 8, 2023 \r\nat 0:00 AoE**[1]. This is the last time to commit features for \r\nPostgreSQL 16. In other words, no new PostgreSQL 16 feature can be \r\ncommitted after April 8, 2023 at 0:00 AoE.\r\n\r\nYou can track open items for the PostgreSQL 16 release here:\r\n\r\n https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\r\n\r\nFinally, the Release Team is considering making April 8 0:00 AoE the \r\nstandard feature freeze date/time for future years. This has been the \r\ncutoff over the past several years, and standardizing will give \r\npredictability to when the feature development cycle ends. If you have \r\nreasons why this should not be the case, please voice your concerns.\r\n\r\nPlease let us know if you have any questions.\r\n\r\nOn behalf of the PG16 RMT,\r\n\r\nJonathan\r\n\r\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth",
"msg_date": "Tue, 21 Mar 2023 11:35:16 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 16 Release Management Team & Feature Freeze"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 9:35 AM Jonathan S. Katz <[email protected]> wrote:\n>\n> You can track open items for the PostgreSQL 16 release here:\n>\n> https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\n\nThe wiki page references April 8th, 2022, btw.\n\nRoberto\n\n\n",
"msg_date": "Tue, 21 Mar 2023 11:17:41 -0600",
"msg_from": "Roberto Mello <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 Release Management Team & Feature Freeze"
},
{
"msg_contents": "On 3/21/23 1:17 PM, Roberto Mello wrote:\r\n> On Tue, Mar 21, 2023 at 9:35 AM Jonathan S. Katz <[email protected]> wrote:\r\n>>\r\n>> You can track open items for the PostgreSQL 16 release here:\r\n>>\r\n>> https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\r\n> \r\n> The wiki page references April 8th, 2022, btw.\r\n\r\nFixed :) Thanks!\r\n\r\nJonathan",
"msg_date": "Tue, 21 Mar 2023 13:54:01 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 Release Management Team & Feature Freeze"
},
{
"msg_contents": "On 3/21/23 11:35 AM, Jonathan S. Katz wrote:\r\n\r\n> Additionally, the RMT has set the feature freeze to be **April 8, 2023 \r\n> at 0:00 AoE**[1]. This is the last time to commit features for \r\n> PostgreSQL 16. In other words, no new PostgreSQL 16 feature can be \r\n> committed after April 8, 2023 at 0:00 AoE.\r\n\r\nThis is a reminder that feature freeze is rapidly approach. The freeze \r\nbegins at April 8, 2023 at 0:00 AoE. No new PostgreSQL 16 features can \r\nbe committed after this time.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 6 Apr 2023 17:37:39 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 Release Management Team & Feature Freeze"
},
{
"msg_contents": "On 4/6/23 5:37 PM, Jonathan S. Katz wrote:\r\n> On 3/21/23 11:35 AM, Jonathan S. Katz wrote:\r\n> \r\n>> Additionally, the RMT has set the feature freeze to be **April 8, 2023 \r\n>> at 0:00 AoE**[1]. This is the last time to commit features for \r\n>> PostgreSQL 16. In other words, no new PostgreSQL 16 feature can be \r\n>> committed after April 8, 2023 at 0:00 AoE.\r\n> \r\n> This is a reminder that feature freeze is rapidly approach. The freeze \r\n> begins at April 8, 2023 at 0:00 AoE. No new PostgreSQL 16 features can \r\n> be committed after this time.\r\n\r\nThe feature freeze for PostgreSQL 16 has begun.\r\n\r\nThank you everyone for all of your hard work and contributions towards \r\nthis release! Now we can begin to prepare for the beta period.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sat, 8 Apr 2023 08:29:42 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 16 Release Management Team & Feature Freeze"
}
] |
[
{
"msg_contents": "So back in 2002 in 7.3 there was a commit 2c6b34d9598 which added a\nGUC db_user_namespace which is stored in a variable Db_user_namespace.\nAll that seems fine except...\n\nThe variable this GUC is stored in is Db_user_namespace which... is\nactually declared in pqcomm.h which is intended to be \"Definitions\ncommon to frontends and backends\".\n\nAfaics it's never actually defined in any FE code, neither libpq nor\nany clients. I was a bit surprised this isn't producing a warning\nabout an extern declaration that's never defined but I guess that's\nnot actually that unusual.\n\nThe actual variable is defined in the backend in postmaster.c. I'm\nguessing this declaration can just move to libpq/libpq.h which\n(counterintuitively) is for the backend iiuc.\n\nI don't think this causes any actual problems aside from namespace\npollution but it confused me. I found this because I was looking for\nwhere to put the ALPN protocol version which (at least at the moment)\nwould be the same for the server and client. But as far as I can tell\nit would be the only variable (other than the above) declared in both\nand that means there's no particularly handy place to put the\ndefinition.\n\n--\ngreg\n\n\n",
"msg_date": "Tue, 21 Mar 2023 22:43:05 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": true,
"msg_subject": "misplaced GUC in pqcomm.h -- where to put actual common variable\n though...?"
}
] |
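A small sketch of the relocation suggested above, purely for illustration: the extern declaration would leave pqcomm.h (which is shared by frontend and backend) and live in a backend-only header, while the definition stays put. The exact type and the PGDLLIMPORT marking are assumptions based on how similar backend GUC variables are usually declared.

/* src/include/libpq/libpq.h -- backend-only header */
extern PGDLLIMPORT bool Db_user_namespace;

/* src/backend/postmaster/postmaster.c -- definition unchanged */
bool        Db_user_namespace = false;
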
[
{
"msg_contents": "The -isysroot options should only be added if the sysroot resolved to a \nnonempty string. This matches the behavior in src/template/darwin (also \ndocumented in installation.sgml).",
"msg_date": "Wed, 22 Mar 2023 08:34:00 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "meson: Fix support for empty darwin sysroot"
}
] |
[
{
"msg_contents": "Hi!\n\nComments in src/backend/libpq/auth.c [1] say:\n(after successfully finding the final DN to check the user-supplied\npassword against)\n/* Unbind and disconnect from the LDAP server */\nand later\n/*\n* Need to re-initialize the LDAP connection, so that we can bind to\n* it with a different username.\n*/\n\nBut the protocol actually permits multiple subsequent authentications\n(\"binds\" in LDAP parlance) over a single connection [2].\nMoreover, inspection of the code revision history of mod_authnz_ldap,\npam_ldap, Bugzilla, and MediaWiki LDAP authentication plugin, shows that\nthey've been doing this bind-after-search over the same LDAP connection for\n~20 years without any evidence of interoperability troubles.\n\n(mod_authnz_ldap and pam_ldap are listed in the PostgreSQL documentation as\nexamples of other software implementing this scheme. Bugzilla and MediaWiki\nare the original patch author's motivating examples [3])\n\nAlso it might be interesting to consider this note from the current\nrevision of the protocol RFC [4]:\n\"The Unbind operation is not the antithesis of the Bind operation as the\nname implies. The naming of these operations are historical. The Unbind\noperation should be thought of as the \"quit\" operation.\"\n\nSo, it seems like the whole connection re-initialization thing was just a\nconfusion caused by this very unfortunate \"historical\" naming, and can be\nsafely removed, thus saving quite a few network round-trips, especially for\nthe case of ldaps/starttls.\n\n[1]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/libpq/auth.c;h=bc0cf26b122a1b28c20fe037ec851c0e99b1ffb6;hb=HEAD#l2603\n[2] https://www.rfc-editor.org/rfc/rfc4511#section-4.2.1\n[3]\nhttps://www.postgresql.org/message-id/4c0112730909141334n201cadf3x2e288528a97883ca%40mail.gmail.com\n[4] https://www.rfc-editor.org/rfc/rfc4511#section-4.3\n-- \nBest regards,\nAnatoly Zaretsky",
"msg_date": "Thu, 23 Mar 2023 03:45:17 +0200",
"msg_from": "Anatoly Zaretsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Remove unnecessary unbind in LDAP search+bind mode"
},
{
"msg_contents": "On 23.03.23 02:45, Anatoly Zaretsky wrote:\n> Comments in src/backend/libpq/auth.c [1] say:\n> (after successfully finding the final DN to check the user-supplied \n> password against)\n> /* Unbind and disconnect from the LDAP server */\n> and later\n> /*\n> * Need to re-initialize the LDAP connection, so that we can bind to\n> * it with a different username.\n> */\n> \n> But the protocol actually permits multiple subsequent authentications \n> (\"binds\" in LDAP parlance) over a single connection [2].\n> Moreover, inspection of the code revision history of mod_authnz_ldap, \n> pam_ldap, Bugzilla, and MediaWiki LDAP authentication plugin, shows that \n> they've been doing this bind-after-search over the same LDAP connection \n> for ~20 years without any evidence of interoperability troubles.\n\n> So, it seems like the whole connection re-initialization thing was just \n> a confusion caused by this very unfortunate \"historical\" naming, and can \n> be safely removed, thus saving quite a few network round-trips, \n> especially for the case of ldaps/starttls.\n\nYour reasoning and your patch look correct to me.\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 11:53:03 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove unnecessary unbind in LDAP search+bind mode"
},
{
"msg_contents": "On 03.07.23 11:53, Peter Eisentraut wrote:\n> On 23.03.23 02:45, Anatoly Zaretsky wrote:\n>> Comments in src/backend/libpq/auth.c [1] say:\n>> (after successfully finding the final DN to check the user-supplied \n>> password against)\n>> /* Unbind and disconnect from the LDAP server */\n>> and later\n>> /*\n>> * Need to re-initialize the LDAP connection, so that we can bind to\n>> * it with a different username.\n>> */\n>>\n>> But the protocol actually permits multiple subsequent authentications \n>> (\"binds\" in LDAP parlance) over a single connection [2].\n>> Moreover, inspection of the code revision history of mod_authnz_ldap, \n>> pam_ldap, Bugzilla, and MediaWiki LDAP authentication plugin, shows \n>> that they've been doing this bind-after-search over the same LDAP \n>> connection for ~20 years without any evidence of interoperability \n>> troubles.\n> \n>> So, it seems like the whole connection re-initialization thing was \n>> just a confusion caused by this very unfortunate \"historical\" \n>> naming, and can be safely removed, thus saving quite a few \n>> network round-trips, especially for the case of ldaps/starttls.\n> \n> Your reasoning and your patch look correct to me.\n\ncommitted\n\n\n\n",
"msg_date": "Sun, 9 Jul 2023 08:57:37 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove unnecessary unbind in LDAP search+bind mode"
},
{
"msg_contents": "On Sun, Jul 9, 2023 at 9:57 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> committed\n>\nThanks!\n\n-- \nBest regards,\nAnatoly Zaretsky\n\nOn Sun, Jul 9, 2023 at 9:57 AM Peter Eisentraut <[email protected]> wrote:committed\nThanks!-- Best regards,Anatoly Zaretsky",
"msg_date": "Sun, 9 Jul 2023 21:11:53 +0300",
"msg_from": "Anatoly Zaretsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Remove unnecessary unbind in LDAP search+bind mode"
}
] |
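A minimal C sketch of the search+bind flow over a single LDAP connection, i.e. the behaviour the patch above enables. It uses the classic synchronous OpenLDAP calls for brevity (hence the LDAP_DEPRECATED define); the URI, base DN and filter are placeholders supplied by the caller, and error handling, attribute lists and cleanup are abbreviated — this is a sketch of the idea, not the actual auth.c code.

#define LDAP_DEPRECATED 1       /* expose the classic synchronous prototypes */
#include <ldap.h>

static int
search_then_bind(const char *uri, const char *binddn, const char *bindpw,
                 const char *basedn, const char *filter, const char *userpw)
{
    LDAP       *ld;
    LDAPMessage *res;
    LDAPMessage *entry;
    char       *userdn;
    int         rc;

    if (ldap_initialize(&ld, uri) != LDAP_SUCCESS)
        return -1;

    /* 1. Bind as the search user and look up the login role's DN. */
    if (ldap_simple_bind_s(ld, binddn, bindpw) != LDAP_SUCCESS ||
        ldap_search_s(ld, basedn, LDAP_SCOPE_SUBTREE, filter,
                      NULL, 0, &res) != LDAP_SUCCESS)
        return -1;

    entry = ldap_first_entry(ld, res);
    if (entry == NULL)
        return -1;
    userdn = ldap_get_dn(ld, entry);

    /* 2. Re-bind as the found DN on the SAME connection: no unbind, no reconnect. */
    rc = ldap_simple_bind_s(ld, userdn, userpw);

    ldap_memfree(userdn);
    ldap_msgfree(res);
    ldap_unbind_ext(ld, NULL, NULL);    /* "quit", only once we are done */
    return (rc == LDAP_SUCCESS) ? 0 : -1;
}
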
[
{
"msg_contents": "Hi,\n\nCommit dee663f7 made WAIT_EVENT_SLRU_FLUSH_SYNC redundant, so here's a\npatch to remove it.\n\nIn case it's useful again, here's how I noticed:\n\nfor X in ` grep WAIT_EVENT_ src/include/utils/wait_event.h |\n sed '/^#/d;s/,//;s/ = .*//' `\ndo\n if ! ( git grep $X |\n grep -v src/include/utils/wait_event.h |\n grep -v src/backend/utils/activity/wait_event.c |\n grep $X > /dev/null )\n then\n echo \"$X is not used\"\n fi\ndone",
"msg_date": "Thu, 23 Mar 2023 15:12:21 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Orphaned wait event"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 7:43 AM Thomas Munro <[email protected]> wrote:\n>\n> Hi,\n>\n> Commit dee663f7 made WAIT_EVENT_SLRU_FLUSH_SYNC redundant, so here's a\n> patch to remove it.\n\nYeah, commit [1] removed the last trace of it. I wonder if we can add\na WAIT_EVENT_SLRU_FLUSH_SYNC wait event in SlruSyncFileTag(), similar\nto mdsyncfiletag. This way, we would have covered all sync_syncfiletag\nfsyncs with wait events.\n\n> In case it's useful again, here's how I noticed:\n>\n> for X in ` grep WAIT_EVENT_ src/include/utils/wait_event.h |\n> sed '/^#/d;s/,//;s/ = .*//' `\n> do\n> if ! ( git grep $X |\n> grep -v src/include/utils/wait_event.h |\n> grep -v src/backend/utils/activity/wait_event.c |\n> grep $X > /dev/null )\n> then\n> echo \"$X is not used\"\n> fi\n> done\n\nInteresting. It might be an overkill to think of placing it as a\ncompile-time script to catch similar miss-outs in future.\n\n[1]\ncommit dee663f7843902535a15ae366cede8b4089f1144\nAuthor: Thomas Munro <[email protected]>\nDate: Fri Sep 25 18:49:43 2020 +1200\n\n Defer flushing of SLRU files.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 23 Mar 2023 12:40:34 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Orphaned wait event"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 8:10 PM Bharath Rupireddy\n<[email protected]> wrote:\n> Yeah, commit [1] removed the last trace of it. I wonder if we can add\n> a WAIT_EVENT_SLRU_FLUSH_SYNC wait event in SlruSyncFileTag(), similar\n> to mdsyncfiletag. This way, we would have covered all sync_syncfiletag\n> fsyncs with wait events.\n\nAhh, right. Thanks. The mistake was indeed that SlruSyncFileTag\nfailed to report it while running pg_fsync().\n\n> > In case it's useful again, here's how I noticed:\n> >\n> > for X in ` grep WAIT_EVENT_ src/include/utils/wait_event.h |\n> > sed '/^#/d;s/,//;s/ = .*//' `\n> > do\n> > if ! ( git grep $X |\n> > grep -v src/include/utils/wait_event.h |\n> > grep -v src/backend/utils/activity/wait_event.c |\n> > grep $X > /dev/null )\n> > then\n> > echo \"$X is not used\"\n> > fi\n> > done\n>\n> Interesting. It might be an overkill to think of placing it as a\n> compile-time script to catch similar miss-outs in future.\n\nMeh. Parsing C programs from shell scripts is fun for one-off\nthrow-away usage, but I think if we want proper automation here we\nshould look into a way to define wait events in a central file similar\nto what we do for src/backend/storage/lmgr/lwlocknames.txt. It could\ngive the enum name, the display name, and the documentation sentence\non one tab-separated line, and we could generate all the rest from\nthat, or something like that? I suspect that downstream/monitoring\ntools might appreciate the existence of such a file too.",
"msg_date": "Fri, 24 Mar 2023 11:00:40 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Orphaned wait event"
},
{
"msg_contents": "Hi,\n\nOn 3/23/23 11:00 PM, Thomas Munro wrote:\n> I think if we want proper automation here we\n> should look into a way to define wait events in a central file similar\n> to what we do for src/backend/storage/lmgr/lwlocknames.txt. It could\n> give the enum name, the display name, and the documentation sentence\n> on one tab-separated line, and we could generate all the rest from\n> that, or something like that? I suspect that downstream/monitoring\n> tools might appreciate the existence of such a file too.\n\nYeah, I think that makes sense. I'll look at this and start a new\nthread once I've a patch to share. FWIW, I'm also working on wait event \"improvements\"\n(mainly adding extra info per wait event) that 1) I'll share once ready 2) could also probably\nbenefit from your proposal here.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 24 Mar 2023 07:23:20 +0100",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Orphaned wait event"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 3:31 AM Thomas Munro <[email protected]> wrote:\n>\n> On Thu, Mar 23, 2023 at 8:10 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> > Yeah, commit [1] removed the last trace of it. I wonder if we can add\n> > a WAIT_EVENT_SLRU_FLUSH_SYNC wait event in SlruSyncFileTag(), similar\n> > to mdsyncfiletag. This way, we would have covered all sync_syncfiletag\n> > fsyncs with wait events.\n>\n> Ahh, right. Thanks. The mistake was indeed that SlruSyncFileTag\n> failed to report it while running pg_fsync().\n\nThanks. The attached patch looks good to me.\n\n> > > In case it's useful again, here's how I noticed:\n> > >\n> > > for X in ` grep WAIT_EVENT_ src/include/utils/wait_event.h |\n> > > sed '/^#/d;s/,//;s/ = .*//' `\n> > > do\n> > > if ! ( git grep $X |\n> > > grep -v src/include/utils/wait_event.h |\n> > > grep -v src/backend/utils/activity/wait_event.c |\n> > > grep $X > /dev/null )\n> > > then\n> > > echo \"$X is not used\"\n> > > fi\n> > > done\n> >\n> > Interesting. It might be an overkill to think of placing it as a\n> > compile-time script to catch similar miss-outs in future.\n>\n> Meh. Parsing C programs from shell scripts is fun for one-off\n> throw-away usage, but I think if we want proper automation here we\n> should look into a way to define wait events in a central file similar\n> to what we do for src/backend/storage/lmgr/lwlocknames.txt. It could\n> give the enum name, the display name, and the documentation sentence\n> on one tab-separated line, and we could generate all the rest from\n> that, or something like that? I suspect that downstream/monitoring\n> tools might appreciate the existence of such a file too.\n\n+1. So, with that approach, both wait_event.h and wait_event.c will be\nauto-generated I believe.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 24 Mar 2023 12:00:13 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Orphaned wait event"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 12:00 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Mar 24, 2023 at 3:31 AM Thomas Munro <[email protected]> wrote:\n> >\n> > On Thu, Mar 23, 2023 at 8:10 PM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > > Yeah, commit [1] removed the last trace of it. I wonder if we can add\n> > > a WAIT_EVENT_SLRU_FLUSH_SYNC wait event in SlruSyncFileTag(), similar\n> > > to mdsyncfiletag. This way, we would have covered all sync_syncfiletag\n> > > fsyncs with wait events.\n> >\n> > Ahh, right. Thanks. The mistake was indeed that SlruSyncFileTag\n> > failed to report it while running pg_fsync().\n>\n> Thanks. The attached patch looks good to me.\n\nIt looks like this patch attached upthread at [1] isn't in yet,\nmeaning WAIT_EVENT_SLRU_FLUSH_SYNC stays unused. IMO, it's worth\npushing it to the PG16 branch. It will help add a wait event for SLRU\npage flushes. Thoughts?\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2BewEpxm%3DhPNXyupRUB_SKGh-6tO86viaco0g-P_pm_Cw%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 25 Apr 2023 21:24:58 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Orphaned wait event"
},
{
"msg_contents": "On Tue, Apr 25, 2023 at 09:24:58PM +0530, Bharath Rupireddy wrote:\n> It looks like this patch attached upthread at [1] isn't in yet,\n> meaning WAIT_EVENT_SLRU_FLUSH_SYNC stays unused. IMO, it's worth\n> pushing it to the PG16 branch. It will help add a wait event for SLRU\n> page flushes. Thoughts?\n> \n> [1] https://www.postgresql.org/message-id/CA%2BhUKG%2BewEpxm%3DhPNXyupRUB_SKGh-6tO86viaco0g-P_pm_Cw%40mail.gmail.com\n\nThere could be the argument that some external code could abuse of\nthis value for its own needs, but I don't really buy that. I'll go\nclean up that..\n--\nMichael",
"msg_date": "Wed, 26 Apr 2023 06:34:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Orphaned wait event"
}
] |
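Roughly the shape of the small fix discussed above — reporting the wait event around the fsync call in the SLRU sync callback — written out here only as an illustration, not as the committed patch. The helper name is made up; pgstat_report_wait_start()/pgstat_report_wait_end() and pg_fsync() are existing backend facilities, and WAIT_EVENT_SLRU_FLUSH_SYNC is the previously-orphaned enum value from the thread.

#include "postgres.h"
#include "storage/fd.h"
#include "utils/wait_event.h"

/* Illustrative only: wrap the SLRU fsync in the previously-orphaned event. */
static int
slru_fsync_reporting_wait_event(int fd)
{
    int         result;

    pgstat_report_wait_start(WAIT_EVENT_SLRU_FLUSH_SYNC);
    result = pg_fsync(fd);
    pgstat_report_wait_end();

    return result;
}

Once reported this way, the event becomes visible in pg_stat_activity's wait_event column while a backend is flushing SLRU data, similar to what mdsyncfiletag() already provides for data files.
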
[
{
"msg_contents": "Dear hackers, (CC: reviewers of copy-binary patch)\n\nThis is an follow-up thread of [1]. PSA patch that adds an attributes.\n\nBy ecb6965, an XML ID attribute is added only one varlistentry in create_subscription.sgml.\nBut according to the commit 78ee60 and related discussions [2], [3], it is worth\nadding ID attribute to other entries. This patch adds them.\n\nMoreover, I have added some references to parameters from pre-existing documents.\nOnly entries that are referred from other files have XREFLABEL attribute.\n\nBasically I detected the to-be-added position by:\n\n1. Grepped subscription options, e.g. <literal>two_phase</literal>\n2. Found a first place of above detection in each sgml files.\n3. Replaced them link, e.g. <xref linkend=\"sql-createsubscription-with-two-phase\"/>.\n\n\"XXX = YYY\" style was not replaced because there are few links of its style for now.\n\n[1]: https://www.postgresql.org/message-id/flat/CAGPVpCQvAziCLknEnygY0v1-KBtg+Om-9JHJYZOnNPKFJPompw@mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CAB8KJ=jpuQU9QJe4+RgWENrK5g9jhoysMw2nvTN_esoOU0=a_w@mail.gmail.com\n[3]: https://www.postgresql.org/message-id/[email protected]\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Thu, 23 Mar 2023 06:22:03 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Firstly, +1 for this patch. Directly jumping to the subscription\noptions makes it much easier to navigate in the documentation instead\nof scrolling up and done in CREATE SUBSCRIPTION page looking for each\nparameter. Already (just experimenting with this patch) it is\nnoticeably better.\n\n~~\n\nAnyway, here are my review comments for patch 0001\n\n======\nGeneral\n\n1.\nIt will be better if all the references follow a consistent pattern:\n\nRule 1 - IMO it is quite important/necessary for these option name\n“XXX” (see below) to be rendered using <literal> markup rather than\njust plain text font. Unfortunately, I don't know how to do that using\nxref labels. If you can figure out some way to do it then great,\notherwise I feel it is better just remove all those xreflabels and\ninstead create the links like this:\n\n<link linkend=\"sql-createsubscription-with-XXX\"><literal>XXX</literal></link>\noption\n\nRule 2 – Try to keep consistent phrasing like \"XXX option\" or \"XXX\nparameter\" (whatever is appropriate for the neighbouring text)\n\n~~~\n\n2.\nI think you can extend this patch similarly to add IDs for the WITH\nparameters of CREATE PUBLICATION. For example, I saw a couple of\nplaces where referencing the 'publish' parameter might be useful.\n\n======\nCommit message\n\n3.\nCurrently, there is nothing.\n\n======\ndoc/src/sgml/config.sgml\n\n4. (Section 20.17 Developer Options -- logical_replication_mode)\n\n- <literal>streaming</literal> option (see optional parameters set by\n- <link linkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>)\n+ <xref linkend=\"sql-createsubscription-with-streaming\"/> option\n+ (see optional parameters set by <link\nlinkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>)\n\nSince we now have a direct link to the option, I think the rest of\nthat sentence can now be a bit simpler. YMMV.\n\nSUGGESTION (per my general comment about links/fonts)\n... if the <link\nlinkend=\"sql-createsubscription-with-streaming\"><literal>streaming</literal></link>\noption of <link linkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link> is enabled, otherwise, serialize each\nchange.\n\n======\ndoc/src/sgml/logical-replication.\n\n5. (Section 31.2 Subscription)\n\n- <literal>streaming</literal> option (see optional parameters set by\n- <link linkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>)\n+ <xref linkend=\"sql-createsubscription-with-streaming\"/> option\n+ (see optional parameters set by <link\nlinkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>)\n\nFor consistency with everything else, I think only the word “binary\nshould be the link.\n\nSUGGESTION\nSee the <link linkend=\"sql-createsubscription-with-binary\"><literal>binary</literal></link>\noption ...\n\n~~~\n\n\n6. (Section 31.2.3 Examples)\n\n- restrictive. See the <link\nlinkend=\"sql-createsubscription-binary\"><literal>binary</literal>\n+ restrictive. See the <link\nlinkend=\"sql-createsubscription-with-binary\"><literal>binary</literal>\n\nSUGGESTION (per my general comment about links/fonts, and also added\nword \"option\")\n<link linkend=\"sql-createsubscription-with-slot-name\"><literal>slot_name</literal></link>\noption.\n\n~~~\n\n7. 
(Section 31.5 Conflicts)\n\n- subscription can be used with the\n<literal>disable_on_error</literal> option.\n- Then, you can use\n<function>pg_replication_origin_advance()</function> function\n- with the <parameter>node_name</parameter> (i.e.,\n<literal>pg_16395</literal>)\n+ subscription can be used with the <xref\nlinkend=\"sql-createsubscription-with-disable-on-error\"/>\n+ option. Then, you can use\n<function>pg_replication_origin_advance()</function>\n+ function with the <parameter>node_name</parameter> (i.e.,\n<literal>pg_16395</literal>)\n\nSUGGESTION (per my general comment about links/fonts)\n<link linkend=\"sql-createsubscription-with-disable-on-error\"><literal>disable_on_error</literal></link>\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\n\n8. (Description)\n\n- <literal>two_phase</literal> commit enabled,\n- unless <literal>copy_data</literal> is <literal>false</literal>.\n+ <link linkend=\"sql-createsubscription-with-two-phase\"> commit\nenabled</link>,\n+ unless <xref linkend=\"sql-createsubscription-with-copy-data\"/> is\n<literal>false</literal>.\n\nI think the \"two_phase\" was rendering wrongly because there was a\nmixup of link/xref. Suggest fix it like below:\n\nSUGGESTION (per my general comment about links/fonts)\n<link linkend=\"sql-createsubscription-with-two-phase\"><literal>two_phase</literal></link>\ncommit enabled, unless <link\nlinkend=\"sql-createsubscription-with-copy-data\"><literal>copy_data</literal></link>\nis <literal>false</literal>.\n\n~~~\n\n9. (copy_data)\n\n- <literal>origin</literal> parameter.\n+ <xref linkend=\"sql-createsubscription-with-origin\"/> parameter.\n\nSUGGESTION (per my general comment about links/fonts)\n<link linkend=\"sql-createsubscription-with-origin\"><literal>origin</literal></link>\nparameter.\n\n~\n\n10.\n <para>\n- See the <link\nlinkend=\"sql-createsubscription-binary\"><literal>binary</literal>\n+ See the <link\nlinkend=\"sql-createsubscription-with-binary\"><literal>binary</literal>\n\nEverything nearby was called a \"parameter\" so I recommend to change\n\"binary option\" to \"binary parameter\" here too and move that word\noutside the link.\n\nSUGGESTION (per my general comment about links/fonts)\nSee the <link linkend=\"sql-createsubscription-with-binary\"><literal>binary</literal></link>\nparameter of ...\n\n~~~\n\n11 (SET)\n\n- are <literal>slot_name</literal>,\n- <literal>synchronous_commit</literal>,\n- <literal>binary</literal>, <literal>streaming</literal>,\n- <literal>disable_on_error</literal>, and\n+ are <xref linkend=\"sql-createsubscription-with-slot-name\"/>,\n+ <xref linkend=\"sql-createsubscription-with-synchronous-commit\"/>,\n+ <literal>binary</literal>, <xref\nlinkend=\"sql-createsubscription-with-streaming\"/>,\n+ <xref linkend=\"sql-createsubscription-with-disable-on-error\"/>, and\n\nModify so all the fonts are <literal>. Also, the binary link and\norigin links were added. 
I know you said you chose to do that because\nthey are already linked previously on this page, but in practice, it\nlooked strange when rendered where only those ones were missing as\nlinks from this long list.\n\nSUGGESTION (per my general comment about links/fonts)\nThe parameters that can be altered are\n<link linkend=\"sql-createsubscription-with-slot-name\"><literal>slot_name</literal></link>,\n<link linkend=\"sql-createsubscription-with-synchronous-commit\"><literal>synchronous_commit</literal></link>,\n<link linkend=\"sql-createsubscription-with-binary\"><literal>binary</literal></link>,\n<link linkend=\"sql-createsubscription-with-streaming\"><literal>streaming</literal></link>,\n<link linkend=\"sql-createsubscription-with-disable-on-error\"><literal>disable_on_error</literal></link>,\nand\n<link linkend=\"sql-createsubscription-with-origin\"><literal>origin</literal></link>.\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n12.\nI think all those xreflabels can be removed. As per my general\ncomment, the references to the WITH option should use a <literal> font\nfor the option name, but then I was unable to get that working using\nxreflabels. So AFAIK those xreflabels are unused (unless they have\nsome other purpose that I don't know about).\n\n~~~\n\n13.\nSometimes the WITH parameters reference to each other on this page. I\nwasn’t sure if we should cross-reference within the same page. What do\nyou think? It might be useful, or OTOH it might be overkill to have\ntoo many links.\n\ne.g. connect refers to -- create_slot, enabled, copy_data\n\ne.g. a lot_name refers to -- create_slot, enabled\n\ne.g. binary refers to -- copy_data\n\ne.g. copy_data refers to -- origin\n\ne.g. origin refers to -- copy_data\n\n======\ndoc/src/sgml/ref/pg_dump.sgml\n\n14. (Section II. PG client applications -- pg_dump)\n\n- <literal>two_phase</literal> option will be automatically enabled by the\n- subscriber if the subscription had been originally created with\n- <literal>two_phase = true</literal> option.\n+ <xref linkend=\"sql-createsubscription-with-two-phase\"/> option will be\n+ automatically enabled by the subscriber if the subscription had been\n+ originally created with <literal>two_phase = true</literal> option.\n\nSUGGESTION (per my general comment about links/fonts)\n<link linkend=\"sql-createsubscription-with-two-phase\"><literal>two_phase</literal></link>\noption\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 24 Mar 2023 20:18:16 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new patch set.\r\n\r\n> 1.\r\n> It will be better if all the references follow a consistent pattern:\r\n> \r\n> Rule 1 - IMO it is quite important/necessary for these option name\r\n> “XXX” (see below) to be rendered using <literal> markup rather than\r\n> just plain text font. Unfortunately, I don't know how to do that using\r\n> xref labels. If you can figure out some way to do it then great,\r\n> otherwise I feel it is better just remove all those xreflabels and\r\n> instead create the links like this:\r\n> \r\n> <link\r\n> linkend=\"sql-createsubscription-with-XXX\"><literal>XXX</literal></link>\r\n> option\r\n\r\nI have not known the better way, so I followed that.\r\n\r\n> Rule 2 – Try to keep consistent phrasing like \"XXX option\" or \"XXX\r\n> parameter\" (whatever is appropriate for the neighbouring text)\r\n\r\nBetter suggestion.\r\n\r\n> 2.\r\n> I think you can extend this patch similarly to add IDs for the WITH\r\n> parameters of CREATE PUBLICATION. For example, I saw a couple of\r\n> places where referencing the 'publish' parameter might be useful.\r\n\r\nThis suggestions exceeds initial motivation, but I made a patch. See 0002.\r\n\r\n> ======\r\n> Commit message\r\n> \r\n> 3.\r\n> Currently, there is nothing.\r\n\r\nAdded.\r\n\r\n> ======\r\n> doc/src/sgml/config.sgml\r\n> \r\n> 4. (Section 20.17 Developer Options -- logical_replication_mode)\r\n> \r\n> - <literal>streaming</literal> option (see optional parameters set by\r\n> - <link linkend=\"sql-createsubscription\"><command>CREATE\r\n> SUBSCRIPTION</command></link>)\r\n> + <xref linkend=\"sql-createsubscription-with-streaming\"/> option\r\n> + (see optional parameters set by <link\r\n> linkend=\"sql-createsubscription\"><command>CREATE\r\n> SUBSCRIPTION</command></link>)\r\n> \r\n> Since we now have a direct link to the option, I think the rest of\r\n> that sentence can now be a bit simpler. YMMV.\r\n> \r\n> SUGGESTION (per my general comment about links/fonts)\r\n> ... if the <link\r\n> linkend=\"sql-createsubscription-with-streaming\"><literal>streaming</literal>\r\n> </link>\r\n> option of <link linkend=\"sql-createsubscription\"><command>CREATE\r\n> SUBSCRIPTION</command></link> is enabled, otherwise, serialize each\r\n> change.\r\n\r\nChanged. Moreover, I reworded from \"option\" to \"parameter\" because\r\nIt has already been used in the file.\r\n\r\n> ======\r\n> doc/src/sgml/logical-replication.\r\n> \r\n> 5. (Section 31.2 Subscription)\r\n> \r\n> - <literal>streaming</literal> option (see optional parameters set by\r\n> - <link linkend=\"sql-createsubscription\"><command>CREATE\r\n> SUBSCRIPTION</command></link>)\r\n> + <xref linkend=\"sql-createsubscription-with-streaming\"/> option\r\n> + (see optional parameters set by <link\r\n> linkend=\"sql-createsubscription\"><command>CREATE\r\n> SUBSCRIPTION</command></link>)\r\n> \r\n> For consistency with everything else, I think only the word “binary\r\n> should be the link.\r\n> \r\n> SUGGESTION\r\n> See the <link\r\n> linkend=\"sql-createsubscription-with-binary\"><literal>binary</literal></link>\r\n> option ...\r\n\r\nYou seemed to copy wrong diffs, but your point was right, fixed.\r\n\r\n> 6. (Section 31.2.3 Examples)\r\n> \r\n> - restrictive. See the <link\r\n> linkend=\"sql-createsubscription-binary\"><literal>binary</literal>\r\n> + restrictive. 
See the <link\r\n> linkend=\"sql-createsubscription-with-binary\"><literal>binary</literal>\r\n> \r\n> SUGGESTION (per my general comment about links/fonts, and also added\r\n> word \"option\")\r\n> <link\r\n> linkend=\"sql-createsubscription-with-slot-name\"><literal>slot_name</literal>\r\n> </link>\r\n> option.\r\n\r\nYou seemed to copy wrong diffs, but I fixed.\r\n\r\n> 7. (Section 31.5 Conflicts)\r\n> \r\n> - subscription can be used with the\r\n> <literal>disable_on_error</literal> option.\r\n> - Then, you can use\r\n> <function>pg_replication_origin_advance()</function> function\r\n> - with the <parameter>node_name</parameter> (i.e.,\r\n> <literal>pg_16395</literal>)\r\n> + subscription can be used with the <xref\r\n> linkend=\"sql-createsubscription-with-disable-on-error\"/>\r\n> + option. Then, you can use\r\n> <function>pg_replication_origin_advance()</function>\r\n> + function with the <parameter>node_name</parameter> (i.e.,\r\n> <literal>pg_16395</literal>)\r\n> \r\n> SUGGESTION (per my general comment about links/fonts)\r\n> <link\r\n> linkend=\"sql-createsubscription-with-disable-on-error\"><literal>disable_on_er\r\n> ror</literal></link>\r\n\r\nFixed.\r\n\r\n> doc/src/sgml/ref/alter_subscription.sgml\r\n> \r\n> \r\n> 8. (Description)\r\n> \r\n> - <literal>two_phase</literal> commit enabled,\r\n> - unless <literal>copy_data</literal> is <literal>false</literal>.\r\n> + <link linkend=\"sql-createsubscription-with-two-phase\"> commit\r\n> enabled</link>,\r\n> + unless <xref linkend=\"sql-createsubscription-with-copy-data\"/> is\r\n> <literal>false</literal>.\r\n> \r\n> I think the \"two_phase\" was rendering wrongly because there was a\r\n> mixup of link/xref. Suggest fix it like below:\r\n> \r\n> SUGGESTION (per my general comment about links/fonts)\r\n> <link\r\n> linkend=\"sql-createsubscription-with-two-phase\"><literal>two_phase</literal\r\n> ></link>\r\n> commit enabled, unless <link\r\n> linkend=\"sql-createsubscription-with-copy-data\"><literal>copy_data</literal>\r\n> </link>\r\n> is <literal>false</literal>.\r\n\r\nGood detection. Fixed.\r\n\r\n> 9. 
(copy_data)\r\n> \r\n> - <literal>origin</literal> parameter.\r\n> + <xref linkend=\"sql-createsubscription-with-origin\"/> parameter.\r\n> \r\n> SUGGESTION (per my general comment about links/fonts)\r\n> <link\r\n> linkend=\"sql-createsubscription-with-origin\"><literal>origin</literal></link>\r\n> parameter.\r\n\r\nFixed.\r\n\r\n> 10.\r\n> <para>\r\n> - See the <link\r\n> linkend=\"sql-createsubscription-binary\"><literal>binary</literal>\r\n> + See the <link\r\n> linkend=\"sql-createsubscription-with-binary\"><literal>binary</literal>\r\n> \r\n> Everything nearby was called a \"parameter\" so I recommend to change\r\n> \"binary option\" to \"binary parameter\" here too and move that word\r\n> outside the link.\r\n> \r\n> SUGGESTION (per my general comment about links/fonts)\r\n> See the <link\r\n> linkend=\"sql-createsubscription-with-binary\"><literal>binary</literal></link>\r\n> parameter of ...\r\n\r\nFixed.\r\n\r\n> 11 (SET)\r\n> \r\n> - are <literal>slot_name</literal>,\r\n> - <literal>synchronous_commit</literal>,\r\n> - <literal>binary</literal>, <literal>streaming</literal>,\r\n> - <literal>disable_on_error</literal>, and\r\n> + are <xref linkend=\"sql-createsubscription-with-slot-name\"/>,\r\n> + <xref linkend=\"sql-createsubscription-with-synchronous-commit\"/>,\r\n> + <literal>binary</literal>, <xref\r\n> linkend=\"sql-createsubscription-with-streaming\"/>,\r\n> + <xref linkend=\"sql-createsubscription-with-disable-on-error\"/>, and\r\n> \r\n> Modify so all the fonts are <literal>. Also, the binary link and\r\n> origin links were added. I know you said you chose to do that because\r\n> they are already linked previously on this page, but in practice, it\r\n> looked strange when rendered where only those ones were missing as\r\n> links from this long list.\r\n> \r\n> SUGGESTION (per my general comment about links/fonts)\r\n> The parameters that can be altered are\r\n> <link\r\n> linkend=\"sql-createsubscription-with-slot-name\"><literal>slot_name</literal>\r\n> </link>,\r\n> <link\r\n> linkend=\"sql-createsubscription-with-synchronous-commit\"><literal>synchron\r\n> ous_commit</literal></link>,\r\n> <link\r\n> linkend=\"sql-createsubscription-with-binary\"><literal>binary</literal></link>\r\n> ,\r\n> <link\r\n> linkend=\"sql-createsubscription-with-streaming\"><literal>streaming</literal>\r\n> </link>,\r\n> <link\r\n> linkend=\"sql-createsubscription-with-disable-on-error\"><literal>disable_on_er\r\n> ror</literal></link>,\r\n> and\r\n> <link\r\n> linkend=\"sql-createsubscription-with-origin\"><literal>origin</literal></link>.\r\n\r\nFixed.\r\n\r\n> doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> 12.\r\n> I think all those xreflabels can be removed. As per my general\r\n> comment, the references to the WITH option should use a <literal> font\r\n> for the option name, but then I was unable to get that working using\r\n> xreflabels. So AFAIK those xreflabels are unused (unless they have\r\n> some other purpose that I don't know about).\r\n\r\nThey are no longer used, so removed.\r\n\r\n> 13.\r\n> Sometimes the WITH parameters reference to each other on this page. I\r\n> wasn’t sure if we should cross-reference within the same page. What do\r\n> you think? It might be useful, or OTOH it might be overkill to have\r\n> too many links.\r\n> \r\n> e.g. connect refers to -- create_slot, enabled, copy_data\r\n> \r\n> e.g. a lot_name refers to -- create_slot, enabled\r\n> \r\n> e.g. binary refers to -- copy_data\r\n> \r\n> e.g. copy_data refers to -- origin\r\n> \r\n> e.g. 
origin refers to -- copy_data\r\n\r\nI have not added links because it was in the same page and I thought\r\nit was overkill. I checked a few reference pages, e.g. create_table.sgml and\r\ncreate_type.sgml, but I could not find any links that refer varlistentry\r\nin the same page (except links for <sectN>). So I kept them.\r\n\r\n> doc/src/sgml/ref/pg_dump.sgml\r\n> \r\n> 14. (Section II. PG client applications -- pg_dump)\r\n> \r\n> - <literal>two_phase</literal> option will be automatically enabled by the\r\n> - subscriber if the subscription had been originally created with\r\n> - <literal>two_phase = true</literal> option.\r\n> + <xref linkend=\"sql-createsubscription-with-two-phase\"/> option will be\r\n> + automatically enabled by the subscriber if the subscription had been\r\n> + originally created with <literal>two_phase = true</literal> option.\r\n> \r\n> SUGGESTION (per my general comment about links/fonts)\r\n> <link\r\n> linkend=\"sql-createsubscription-with-two-phase\"><literal>two_phase</literal\r\n> ></link>\r\n> option\r\n\r\nFixed.\r\n\r\nBesides, I have added a missing reference related with \"CONNECTION\".\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 24 Mar 2023 11:27:48 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Here are review comments for v2-0001\n\n======\nCommit Message\n\n1.\nIn commit ecb696, an XML ID attribute was added to only one varlistentry,\ncreating inconsistency with the commit 78ee60. This commit adds XML ID\nattributes to all varlistentries in create_subscritpion.sgml for consistency.\nAdditionally, links are added to refer subscription options, enhancing the\nreadability of documents.\n\n~\n\n1a.\nTypo: create_subscritpion.sgml\n\n~\n\n1b.\n\"to refer subscription options\" --> \"to refer to the subscription options\"\n\n======\ndoc/src/sgml/config.sgml\n\n2.\n- <literal>streaming</literal> option (see optional parameters set by\n- <link linkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>)\n+ <link linkend=\"sql-createsubscription-with-streaming\"><literal>streaming</literal></link>\n+ parameter of <link\nlinkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>\n\nNow, this link says \"streaming parameter\", but the very next paragraph\nrefers to \"streaming option\". I think it is better to keep them the\nsame (e.g. both say \"streaming option\").\n\n======\ndoc/src/sgml/ref/alter_subscription.sgml\n\nThe SKIP part says \"... enabling two_phase on subscriber.\". I thought\nthere could be a link for \"two_phase\" here (also \"on subscriber\" -->\n\"on the subscriber\").\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 27 Mar 2023 17:56:53 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Here are review comments for v2-0002\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n1.\nI am not sure your convention to only give the link to the FIRST\nreference on a page is good in all case. Maybe that rule is OK for\nmultiple references all in the same sub-section but when they are in\ndifferent sub-sections (even on one page) I think it would be better\nto include the extra links.\n\n1a.\nFor example, Section 33.3 (Row Filter) refers to\n\"publish_via_partition_root\" lots of times across multiple subsections\n– So it is not convenient to have to scroll around looking in\ndifferent sections for the topmost reference which has the link.\n\n1b.\nAlso in Section 33.3 (Row Filter), there are a couple of places you\ncould link to \"publish\" parameter on this page.\n\n~~~\n\n2.\nI thought was a missing link in 31.7.1 (Architecture/Initial Snapshot)\nwhich could've linked to the \"publish\" parameter.\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 27 Mar 2023 17:58:25 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! New patch set will be attached in later mail.\r\n\r\n> ======\r\n> Commit Message\r\n> \r\n> 1.\r\n> In commit ecb696, an XML ID attribute was added to only one varlistentry,\r\n> creating inconsistency with the commit 78ee60. This commit adds XML ID\r\n> attributes to all varlistentries in create_subscritpion.sgml for consistency.\r\n> Additionally, links are added to refer subscription options, enhancing the\r\n> readability of documents.\r\n> \r\n> ~\r\n> \r\n> 1a.\r\n> Typo: create_subscritpion.sgml\r\n\r\nFixed.\r\n\r\n> 1b.\r\n> \"to refer subscription options\" --> \"to refer to the subscription options\"\r\n\r\nFixed.\r\n\r\n> 2.\r\n> - <literal>streaming</literal> option (see optional parameters set by\r\n> - <link linkend=\"sql-createsubscription\"><command>CREATE\r\n> SUBSCRIPTION</command></link>)\r\n> + <link\r\n> linkend=\"sql-createsubscription-with-streaming\"><literal>streaming</literal>\r\n> </link>\r\n> + parameter of <link\r\n> linkend=\"sql-createsubscription\"><command>CREATE\r\n> SUBSCRIPTION</command></link>\r\n> \r\n> Now, this link says \"streaming parameter\", but the very next paragraph\r\n> refers to \"streaming option\". I think it is better to keep them the\r\n> same (e.g. both say \"streaming option\").\r\n\r\nI missed just next paragraph, I thought :-(.\r\nReverted the change, now it is called as \"streaming option\"\r\n\r\n> doc/src/sgml/ref/alter_subscription.sgml\r\n> \r\n> The SKIP part says \"... enabling two_phase on subscriber.\". I thought\r\n> there could be a link for \"two_phase\" here (also \"on subscriber\" -->\r\n> \"on the subscriber\").\r\n\r\nAdded.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n\r\n",
"msg_date": "Mon, 27 Mar 2023 09:17:50 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> doc/src/sgml/logical-replication.sgml\r\n> \r\n> 1.\r\n> I am not sure your convention to only give the link to the FIRST\r\n> reference on a page is good in all case. Maybe that rule is OK for\r\n> multiple references all in the same sub-section but when they are in\r\n> different sub-sections (even on one page) I think it would be better\r\n> to include the extra links.\r\n\r\nSounds better for readability.\r\n\r\n> 1a.\r\n> For example, Section 33.3 (Row Filter) refers to\r\n> \"publish_via_partition_root\" lots of times across multiple subsections\r\n> – So it is not convenient to have to scroll around looking in\r\n> different sections for the topmost reference which has the link.\r\n\r\nAdded only two links because almost lines were in the same sub-section(Examples).\r\nDid it match with your expectation?\r\n\r\n> 1b.\r\n> Also in Section 33.3 (Row Filter), there are a couple of places you\r\n> could link to \"publish\" parameter on this page.\r\n\r\nIIUC there was only one point to add the link, but added.\r\n\r\nAlso, I have added further links for \"FOR ALL TABLES\" and \"FOR TABLES IN SCHEMA\" clauses.\r\n\r\n> 2.\r\n> I thought was a missing link in 31.7.1 (Architecture/Initial Snapshot)\r\n> which could've linked to the \"publish\" parameter.\r\n>\r\n\r\nAdded.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 27 Mar 2023 09:21:01 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Hi Kuroda-san. Here are my review comments for both patches v3-0001 and v3-0002.\n\n\n////////\nv3-0001\n////////\n\nThis patch looks good, but I think there are a couple of other places\nwhere you could add links:\n\n~~~\n\n1.1 doc/src/sgml/logical-replication.sgml (31.5 Conflicts)\n\n\"When the streaming mode is parallel, the finish LSN ...\"\n\nMaybe you can add a \"streaming\" link there.\n\n~~~\n\n1.2. doc/src/sgml/logical-replication.sgml (31.5 31.8. Monitoring)\n\n\"Moreover, if the streaming transaction is applied in parallel, there\nmay be additional parallel apply workers.\"\n\nMaybe you can add a \"streaming\" link there.\n\n\n////////\nv3-0002\n////////\n\nThere is one bug, and I think there are a couple of other places where\nyou could add links:\n\n~~~\n\n2.1 doc/src/sgml/logical-replication.sgml (31.4. Column Lists blurb)\n\nFor partitioned tables, the publication parameter\npublish_via_partition_root determines which column list is used.\n\n~\n\nMaybe you can add a \"publish_via_partition_root\" link there.\n\n~~~\n\n2.2 doc/src/sgml/logical-replication.sgml (31.6. Restrictions)\n\nPublications can also specify that changes are to be replicated using\nthe identity and schema of the partitioned root table instead of that\nof the individual leaf partitions in which the changes actually\noriginate (see CREATE PUBLICATION).\n\n~\n\nMaybe that text can be changed now to say something like \"(see\npublish_via_partition_root parameter of CREATE PUBLICATION)” -- so\nonly the parameter part has the link, not the CREATE PUBLICATION part.\n\n~~~\n\n2.3 doc/src/sgml/logical-replication.sgml (31.9. Security)\n\n+ subscription <link\nlinkend=\"sql-createpublication-for-all-tables\"><literal>FOR ALL\nTABLES</literal></link>\n+ or <link linkend=\"sql-createpublication-for-tables-in-schema\"><literal>FOR\nTABLES IN SCHEMA</literal></link><literal>FOR TABLES IN\nSCHEMA</literal>\n+ only when superusers trust every user permitted to create a non-temp table\n+ on the publisher or the subscriber.\n\nThere is a cut/paste typo here -- it renders badly with \"FOR TABLES IN\nSCHEMA\" appearing 2x.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 28 Mar 2023 12:08:18 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing. PSA new version.\r\n\r\n> ////////\r\n> v3-0001\r\n> ////////\r\n> \r\n> This patch looks good, but I think there are a couple of other places\r\n> where you could add links:\r\n> \r\n> ~~~\r\n> \r\n> 1.1 doc/src/sgml/logical-replication.sgml (31.5 Conflicts)\r\n> \r\n> \"When the streaming mode is parallel, the finish LSN ...\"\r\n> \r\n> Maybe you can add a \"streaming\" link there.\r\n\r\nAdded. It could not be detected because this is not tagged as <literal>.\r\n\r\n> 1.2. doc/src/sgml/logical-replication.sgml (31.5 31.8. Monitoring)\r\n> \r\n> \"Moreover, if the streaming transaction is applied in parallel, there\r\n> may be additional parallel apply workers.\"\r\n> \r\n> Maybe you can add a \"streaming\" link there.\r\n\r\nAdded.\r\n\r\n> ////////\r\n> v3-0002\r\n> ////////\r\n> \r\n> There is one bug, and I think there are a couple of other places where\r\n> you could add links:\r\n> \r\n> ~~~\r\n> \r\n> 2.1 doc/src/sgml/logical-replication.sgml (31.4. Column Lists blurb)\r\n> \r\n> For partitioned tables, the publication parameter\r\n> publish_via_partition_root determines which column list is used.\r\n> \r\n> ~\r\n> \r\n> Maybe you can add a \"publish_via_partition_root\" link there.\r\n\r\nAdded. I'm not sure why I missed it...\r\n\r\n> 2.2 doc/src/sgml/logical-replication.sgml (31.6. Restrictions)\r\n> \r\n> Publications can also specify that changes are to be replicated using\r\n> the identity and schema of the partitioned root table instead of that\r\n> of the individual leaf partitions in which the changes actually\r\n> originate (see CREATE PUBLICATION).\r\n> \r\n> ~\r\n> \r\n> Maybe that text can be changed now to say something like \"(see\r\n> publish_via_partition_root parameter of CREATE PUBLICATION)” -- so\r\n> only the parameter part has the link, not the CREATE PUBLICATION part.\r\n\r\nSeems better, added.\r\n\r\n> 2.3 doc/src/sgml/logical-replication.sgml (31.9. Security)\r\n> \r\n> + subscription <link\r\n> linkend=\"sql-createpublication-for-all-tables\"><literal>FOR ALL\r\n> TABLES</literal></link>\r\n> + or <link\r\n> linkend=\"sql-createpublication-for-tables-in-schema\"><literal>FOR\r\n> TABLES IN SCHEMA</literal></link><literal>FOR TABLES IN\r\n> SCHEMA</literal>\r\n> + only when superusers trust every user permitted to create a non-temp table\r\n> + on the publisher or the subscriber.\r\n> \r\n> There is a cut/paste typo here -- it renders badly with \"FOR TABLES IN\r\n> SCHEMA\" appearing 2x.\r\n>\r\n\r\nThat's my fault, fixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 28 Mar 2023 03:04:47 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 2:04 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Peter,\n>\n> Thank you for reviewing. PSA new version.\n>\n\nv4-0001 LGTM\n\n>\n> > ////////\n> > v3-0002\n> > ////////\n> >\n>\n> > 2.2 doc/src/sgml/logical-replication.sgml (31.6. Restrictions)\n> >\n> > Publications can also specify that changes are to be replicated using\n> > the identity and schema of the partitioned root table instead of that\n> > of the individual leaf partitions in which the changes actually\n> > originate (see CREATE PUBLICATION).\n> >\n> > ~\n> >\n> > Maybe that text can be changed now to say something like \"(see\n> > publish_via_partition_root parameter of CREATE PUBLICATION)” -- so\n> > only the parameter part has the link, not the CREATE PUBLICATION part.\n>\n> Seems better, added.\n>\n\n- originate (see <link\nlinkend=\"sql-createpublication\"><command>CREATE\nPUBLICATION</command></link>).\n+ originate (see <link\nlinkend=\"sql-createpublication-with-publish-via-partition-root\"><literal>publish_via_partition_root</literal></link>\n+ of <command>CREATE PUBLICATION</command>).\n\nHmm, my above-suggested wording was “publish_via_partition_root\nparameter “ but it seems you (accidentally?) omitted the word\n“parameter”.\n\nOtherwise, the patch v4-0002 also LGTM\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 28 Mar 2023 14:50:06 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for prompt reply!\r\n\r\n> Hmm, my above-suggested wording was “publish_via_partition_root\r\n> parameter “ but it seems you (accidentally?) omitted the word\r\n> “parameter”.\r\n\r\nIt is my carelessness, sorry for inconvenience. PSA new ones.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 28 Mar 2023 04:19:04 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Thanks for this patch.\n\nv5-0001 looks good to me.\n\nv5-0002 looks good to me.\n\nI've marked the CF entry [1] as \"ready for committer\".\n\n------\n[1] https://commitfest.postgresql.org/43/4256/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 28 Mar 2023 15:59:03 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 9:49 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Thank you for prompt reply!\n>\n> > Hmm, my above-suggested wording was “publish_via_partition_root\n> > parameter “ but it seems you (accidentally?) omitted the word\n> > “parameter”.\n>\n> It is my carelessness, sorry for inconvenience. PSA new ones.\n>\n\nIn 0001, patch, I see a lot of long lines like below:\n- subscription can be used with the\n<literal>disable_on_error</literal> option.\n- Then, you can use\n<function>pg_replication_origin_advance()</function> function\n- with the <parameter>node_name</parameter> (i.e.,\n<literal>pg_16395</literal>)\n+ subscription can be used with the <link\nlinkend=\"sql-createsubscription-with-disable-on-error\"><literal>disable_on_error</literal></link>\n\nIsn't it better to move the link-related part to the next line\nwherever possible? Currently, it looks bit odd.\n\nWhy 0002 patch is part of this thread? I thought here we want to add\n'ids' to entries corresponding to Create Subscription as we have added\nthe one in commit ecb696.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 28 Mar 2023 12:03:23 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> Isn't it better to move the link-related part to the next line\r\n> wherever possible? Currently, it looks bit odd.\r\n\r\nPreviously I preferred not to add a new line inside the <link> tag, but it caused\r\nlong-line. So I adjusted them not to be too short/long length.\r\n\r\n> Why 0002 patch is part of this thread? I thought here we want to add\r\n> 'ids' to entries corresponding to Create Subscription as we have added\r\n> the one in commit ecb696.\r\n>\r\n\r\n0002 was motivated by Peter's comment [1]. This exceeds the initial intention of\r\nthe patch, so I removed once.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAHut%2BPu%2B-OocYYhW9E0gxxqgfUb1yJ8jVQ4AZ0v-ud00s7TxEA%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 28 Mar 2023 07:30:15 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 1:00 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Amit,\n>\n> Thank you for reviewing! PSA new version.\n>\n> > Isn't it better to move the link-related part to the next line\n> > wherever possible? Currently, it looks bit odd.\n>\n> Previously I preferred not to add a new line inside the <link> tag, but it caused\n> long-line. So I adjusted them not to be too short/long length.\n>\n\nThere is no need to break the link line. See attached.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 28 Mar 2023 17:53:09 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "Dear Amit-san,\r\n\r\n> There is no need to break the link line. See attached.\r\n\r\nI understood your saying. I think it's better.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 29 Mar 2023 01:01:51 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PGdoc: add missing ID attribute to create_subscription.sgml"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 6:31 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Amit-san,\n>\n> > There is no need to break the link line. See attached.\n>\n> I understood your saying. I think it's better.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 29 Mar 2023 11:32:16 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add missing ID attribute to create_subscription.sgml"
}
] |
[
{
"msg_contents": "Hi, hackers\n\nI propose to slightly improve the performance of nested loop join in the\ncase of partitioned inner table.\nAs I see in the code, the backend looks for the partition of the inner\ntable each time after fetch a new row from the outer table.\nThese searches can take a significant amount of time.\nBut we can skip this step if the nested loop parameter(s) was(re) not\nchanged since the previous row fetched from the outer table\n\nThe draft patch is attached.",
"msg_date": "Thu, 23 Mar 2023 13:46:32 +0700",
"msg_from": "Alexandr Nikulin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improve the performance of nested loop join in the case of\n partitioned inner table"
},
{
"msg_contents": "On Thu, 23 Mar 2023 at 19:46, Alexandr Nikulin\n<[email protected]> wrote:\n> I propose to slightly improve the performance of nested loop join in the case of partitioned inner table.\n> As I see in the code, the backend looks for the partition of the inner table each time after fetch a new row from the outer table.\n> These searches can take a significant amount of time.\n> But we can skip this step if the nested loop parameter(s) was(re) not changed since the previous row fetched from the outer table\n\nI think if we were to do something like that, then it should be done\nin nodeAppend.c and nodeMergeAppend.c. That way you can limit it to\nonly checking parameters that partition pruning needs to care about.\nThat does mean you'd need to find somewhere to cache the parameter\nvalues, however. Doing it in nodeNestloop.c means you're doing it when\nthe inner subplan is something that does not suffer from the\nadditional overhead you want to avoid, e.g an Index Scan.\n\nAlso, generally, if you want to get anywhere with a performance patch,\nyou should post performance results from before and after your change.\nAlso include your benchmark setup and relevant settings for how you\ngot those results. For this case, you'll want a best case (parameter\nvalue stays the same) and a worst case, where the parameter value\nchanges on each outer row. I expect you're going to add overhead to\nthis case as your additional checks will always detect the parameter\nhas changed as that'll always require partition pruning to be executed\nagain. We'll want to know if that overhead is high enough for us not\nto want to do this.\n\nI'll be interested to see a test that as some varlena parameter of say\na few hundred bytes to see how much overhead testing if that parameter\nhas changed when the pruning is being done on a HASH partitioned\ntable. HASH partitioning should prune quite a bit faster than both\nLIST and RANGE as the hashing is effectively O(1) vs O(log2 N) (N\nbeing the number of Datums in the partition bounds). I'd expect a\nmeasurable additional overhead with the patch when the parameter\nchanges on each outer row.\n\nDavid\n\n\n",
"msg_date": "Thu, 23 Mar 2023 23:05:41 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the performance of nested loop join in the case of\n partitioned inner table"
},
{
"msg_contents": "The following tests demonstrate the speedup which may be achieved with my\npatch:\n\n1. Add to postgresql.conf\nenable_hashjoin = OFF\nenable_mergejoin = OFF\nenable_material = OFF\nenable_bitmapscan = OFF\nenable_nestloop = ON\nmax_parallel_workers_per_gather = 0\nenable_memoize = OFF\n\n2. create test tables:\n\ncreate table test_part(id int primary key) partition by range(id);\ncreate table test_part0 partition of test_part for values from (0) to\n(1000000);\ncreate table test_part1 partition of test_part for values from (1000000) to\n(2000000);\ninsert into test_part select id from generate_series(0, 1000000-1) as id;\n\ncreate table ids(id int, name varchar); create index on ids(ascii(name));\ninsert into ids select id, 'worst case' as name from generate_series(0,\n1000000-1) as id;\ninsert into ids select 123456, 'best case' as name from generate_series(0,\n1000000-1) as id;\n\n3. run tests:\n\nexplain analyze select * from ids join test_part on ids.id=test_part.id\nwhere ascii(ids.name)=ascii('best case');\nexplain analyze select * from ids join test_part on ids.id=test_part.id\nwhere ascii(ids.name)=ascii('worst case');\n\nThe average results on my machine are as follows:\n\n | vanila postgres | patched postgres\nbest case | 2286 ms | 1924 ms\nworst case | 2278 ms | 2360 ms\n\nSo far I haven't refactored the patch as per David's advice. I just want to\nunderstand if we need such an optimization?\n\n\nчт, 23 мар. 2023 г. в 17:05, David Rowley <[email protected]>:\n\n> On Thu, 23 Mar 2023 at 19:46, Alexandr Nikulin\n> <[email protected]> wrote:\n> > I propose to slightly improve the performance of nested loop join in the\n> case of partitioned inner table.\n> > As I see in the code, the backend looks for the partition of the inner\n> table each time after fetch a new row from the outer table.\n> > These searches can take a significant amount of time.\n> > But we can skip this step if the nested loop parameter(s) was(re) not\n> changed since the previous row fetched from the outer table\n>\n> I think if we were to do something like that, then it should be done\n> in nodeAppend.c and nodeMergeAppend.c. That way you can limit it to\n> only checking parameters that partition pruning needs to care about.\n> That does mean you'd need to find somewhere to cache the parameter\n> values, however. Doing it in nodeNestloop.c means you're doing it when\n> the inner subplan is something that does not suffer from the\n> additional overhead you want to avoid, e.g an Index Scan.\n>\n> Also, generally, if you want to get anywhere with a performance patch,\n> you should post performance results from before and after your change.\n> Also include your benchmark setup and relevant settings for how you\n> got those results. For this case, you'll want a best case (parameter\n> value stays the same) and a worst case, where the parameter value\n> changes on each outer row. I expect you're going to add overhead to\n> this case as your additional checks will always detect the parameter\n> has changed as that'll always require partition pruning to be executed\n> again. We'll want to know if that overhead is high enough for us not\n> to want to do this.\n>\n> I'll be interested to see a test that as some varlena parameter of say\n> a few hundred bytes to see how much overhead testing if that parameter\n> has changed when the pruning is being done on a HASH partitioned\n> table. 
HASH partitioning should prune quite a bit faster than both\n> LIST and RANGE as the hashing is effectively O(1) vs O(log2 N) (N\n> being the number of Datums in the partition bounds). I'd expect a\n> measurable additional overhead with the patch when the parameter\n> changes on each outer row.\n>\n> David\n>",
"msg_date": "Wed, 12 Apr 2023 22:00:07 +0700",
"msg_from": "Alexandr Nikulin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve the performance of nested loop join in the case of\n partitioned inner table"
},
{
"msg_contents": "On Thu, 13 Apr 2023 at 03:00, Alexandr Nikulin\n<[email protected]> wrote:\n> explain analyze select * from ids join test_part on ids.id=test_part.id where ascii(ids.name)=ascii('best case');\n> explain analyze select * from ids join test_part on ids.id=test_part.id where ascii(ids.name)=ascii('worst case');\n>\n> The average results on my machine are as follows:\n>\n> | vanila postgres | patched postgres\n> best case | 2286 ms | 1924 ms\n> worst case | 2278 ms | 2360 ms\n>\n> So far I haven't refactored the patch as per David's advice. I just want to understand if we need such an optimization?\n\nMy thoughts are that the worst-case numbers are not exactly great. I\nvery much imagine that with average cases, it's much more likely than\nnot that the parameter values from the nested loop's next outer row\nwill be different than it is that it'll be the same.\n\nLet's say, roughly your numbers show a 20% speedup for the best case,\nand a 4% slowdown for the worst case, for us to break even with this\npatch as it is, the parameter value would have to be the same around 1\nout of 5 times. That does not seem like good odds to bet on given\nwe're likely working with data types that allow billions of distinct\nvalues.\n\nI think if you really wanted to make this work, then you'd need to get\nthe planner on board with making the decision on if this should be\ndone or not based on the n_distinct estimates from the outer side of\nthe join. Either that or some heuristic in the executor that tries\nfor a while and gives up if the parameter value changes too often.\nSome code was added in 3592e0ff9 that uses a heuristics approach to\nsolving this problem by only enabling the optimisation if we hit the\nsame partition at least 16 times and switches it off again as soon as\nthe datum no longer matches the cached partition. I'm not quite sure\nhow the same could be made to work here as with 3592e0ff9. A tuple\nonly belongs to a single partition and we can very cheaply check if\nthis partition is the same as the last one by checking if the\npartition index matches. With this case, since we're running a query,\nmany partitions can remain after partition pruning runs, and checking\nthat some large number of partitions match some other large number of\npartitions is not going to be as cheap as checking just two partitions\nmatch. Bitmapsets can help here, but they'll just never be as fast as\nchecking two ints match.\n\nIn short, I think you're going to have to come up with something very\ncrafty here to reduce the overhead worst case. Whatever it is will\nneed to be neat and self-contained, perhaps in execPartition.c. It\njust does not seem good to have logic related to partition pruning\ninside nodeNestloop.c.\n\nI'm going to mark this as waiting on author in the CF app. It might be\nbetter if you withdraw it and resubmit when you have a patch that\naddresses the worst-case regression issue.\n\nDavid\n\n\n",
"msg_date": "Wed, 5 Jul 2023 00:02:06 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the performance of nested loop join in the case of\n partitioned inner table"
},
{
"msg_contents": "> On 4 Jul 2023, at 14:02, David Rowley <[email protected]> wrote:\n\n> I'm going to mark this as waiting on author in the CF app. It might be\n> better if you withdraw it and resubmit when you have a patch that\n> addresses the worst-case regression issue.\n\nSince there hasn't been any updates to this thread I am marking this returned\nwith feedback. Please feel free to resubmit to the next CF when the comments\nfrom David have been addressed.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 1 Aug 2023 20:15:41 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the performance of nested loop join in the case of\n partitioned inner table"
}
] |
[
{
"msg_contents": "Looks like we need a little magic to allow pg_bsd_indent to be part of a \nvpath build:\n\n\n<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=fairywren&dt=2023-03-23%2014%3A21%3A08&stg=module-pg_bsd_indent-check>\n\n\nIt succeeded if I added this to the Makefile:\n\n\nCFLAGS += -I$(srcdir)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nLooks like we need a little magic to allow\n pg_bsd_indent to be part of a vpath build:\n\n\n<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=fairywren&dt=2023-03-23%2014%3A21%3A08&stg=module-pg_bsd_indent-check>\n\n\nIt succeeded if I added this to the\n Makefile:\n\n\nCFLAGS += -I$(srcdir)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 23 Mar 2023 16:06:46 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_bsd_indent vs vpath"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> Looks like we need a little magic to allow pg_bsd_indent to be part of a \n> vpath build:\n\nYeah, I think I fixed that at dccef0f2f, but fairywren hasn't run since.\n(I'd thought that it was some weird Msys-ism, but if you say it's VPATH\nthen all is clear, other than why we didn't notice already.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Mar 2023 16:46:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_bsd_indent vs vpath"
},
{
"msg_contents": "On 2023-03-23 Th 16:46, Tom Lane wrote:\n> Andrew Dunstan<[email protected]> writes:\n>> Looks like we need a little magic to allow pg_bsd_indent to be part of a\n>> vpath build:\n> Yeah, I think I fixed that at dccef0f2f, but fairywren hasn't run since.\n> (I'd thought that it was some weird Msys-ism, but if you say it's VPATH\n> then all is clear, other than why we didn't notice already.)\n>\n> \t\t\t\n\n\n\nThe code to pick up src/tools isn't in released code yet, and I just \nupdated fairywren from git to test some other code.\n\n\nAnyway, it's all good, fairywren is now building this happily.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-23 Th 16:46, Tom Lane wrote:\n\n\nAndrew Dunstan <[email protected]> writes:\n\n\nLooks like we need a little magic to allow pg_bsd_indent to be part of a \nvpath build:\n\n\n\nYeah, I think I fixed that at dccef0f2f, but fairywren hasn't run since.\n(I'd thought that it was some weird Msys-ism, but if you say it's VPATH\nthen all is clear, other than why we didn't notice already.)\n\n\t\t\t\n\n\n\n\n\nThe code to pick up src/tools isn't in released code yet, and I\n just updated fairywren from git to test some other code.\n\n\nAnyway, it's all good, fairywren is now building this happily.\n\n\ncheers\n\n\nandrew\n \n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 24 Mar 2023 08:08:13 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_bsd_indent vs vpath"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile reviewing another thread [1] I could not find the function\n'pg_get_publication_tables' described anywhere in the PG\ndocumentation.\n\nShould it be mentioned somewhere like the \"System Catalog Information\nFunctions\" table [2], or was this one deliberately omitted for some\nreason?\n\nThanks.\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1KrzTOYsuCzz6fxRed37C6MfHE1t9kyrM5B4m9ToqKWrQ%40mail.gmail.com\n[2] https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-CATALOG-TABLE\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 24 Mar 2023 09:11:21 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "PGDOCS - function pg_get_publication_tables is not documented?"
},
{
"msg_contents": "Peter Smith <[email protected]> writes:\n> While reviewing another thread [1] I could not find the function\n> 'pg_get_publication_tables' described anywhere in the PG\n> documentation.\n> Should it be mentioned somewhere like the \"System Catalog Information\n> Functions\" table [2], or was this one deliberately omitted for some\n> reason?\n\nIt's not documented because it's intended only as infrastructure\nfor the pg_publication_tables view. (There are some other functions\nin the same category.)\n\nI do see a docs change that I think would be worth making: get\nrid of the explicit mention of it in create_subscription.sgml\nin favor of using that view.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Mar 2023 18:26:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - function pg_get_publication_tables is not documented?"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 9:26 AM Tom Lane <[email protected]> wrote:\n>\n> Peter Smith <[email protected]> writes:\n> > While reviewing another thread [1] I could not find the function\n> > 'pg_get_publication_tables' described anywhere in the PG\n> > documentation.\n> > Should it be mentioned somewhere like the \"System Catalog Information\n> > Functions\" table [2], or was this one deliberately omitted for some\n> > reason?\n>\n> It's not documented because it's intended only as infrastructure\n> for the pg_publication_tables view. (There are some other functions\n> in the same category.)\n>\n> I do see a docs change that I think would be worth making: get\n> rid of the explicit mention of it in create_subscription.sgml\n> in favor of using that view.\n>\n\nOK. Thanks very much for the information.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 24 Mar 2023 10:22:08 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - function pg_get_publication_tables is not documented?"
},
{
"msg_contents": "On Fri, Mar 24, 2023 6:26 AM Tom Lane <[email protected]> wrote:\r\n> \r\n> I do see a docs change that I think would be worth making: get\r\n> rid of the explicit mention of it in create_subscription.sgml\r\n> in favor of using that view.\r\n> \r\n\r\nI agree and I tried to modify the query to use the view.\r\nPlease see the attached patch.\r\n\r\nRegards,\r\nShi Yu",
"msg_date": "Sun, 9 Apr 2023 02:04:46 +0000",
"msg_from": "\"Yu Shi (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: PGDOCS - function pg_get_publication_tables is not documented?"
},
{
"msg_contents": "On Sun, Apr 9, 2023 at 7:35 AM Yu Shi (Fujitsu) <[email protected]> wrote:\n>\n> On Fri, Mar 24, 2023 6:26 AM Tom Lane <[email protected]> wrote:\n> >\n> > I do see a docs change that I think would be worth making: get\n> > rid of the explicit mention of it in create_subscription.sgml\n> > in favor of using that view.\n> >\n>\n> I agree and I tried to modify the query to use the view.\n> Please see the attached patch.\n>\n\nI am wondering whether we need to take the publication name as input\nto find tables that can include non-local origins. I think anyway\nusers need to separately query publication names to give that input.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 10 Apr 2023 09:38:13 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - function pg_get_publication_tables is not documented?"
},
{
"msg_contents": "\"Yu Shi (Fujitsu)\" <[email protected]> writes:\n> On Fri, Mar 24, 2023 6:26 AM Tom Lane <[email protected]> wrote:\n>> I do see a docs change that I think would be worth making: get\n>> rid of the explicit mention of it in create_subscription.sgml\n>> in favor of using that view.\n\n> I agree and I tried to modify the query to use the view.\n> Please see the attached patch.\n\nAh, now I see why it was written like that: it's kind of annoying\nto join to pg_subscription_rel without having access to the relation\nOID. Still, this is more pedagogically correct, so pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Apr 2023 12:24:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - function pg_get_publication_tables is not documented?"
}
] |
[
{
"msg_contents": "%s/pg_current_xact/pg_current_xact_id\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Fri, 24 Mar 2023 15:16:01 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "fix a typo in file src/backend/utils/adt/xid8funcs.c comment"
},
{
"msg_contents": "> On 24 Mar 2023, at 08:16, Junwang Zhao <[email protected]> wrote:\n> \n> %s/pg_current_xact/pg_current_xact_id\n\nPushed, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 09:07:02 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fix a typo in file src/backend/utils/adt/xid8funcs.c comment"
}
] |
[
{
"msg_contents": "> But why are there no anchors next to> <h3> items on that page? For example,> how do I get the link for the> \"Meta Commands\" subsection?I can't look at the code right now, but I suspect the headers are refsections (not sections) which this patch does not add links for yet.I already have plans to add this in a follow-up patch at some point, but while I had already added ids to all section elements in the previous patch that added ids, this has yet to be done for all refsect elements (wich is not a small effort again).Regards,Brar\n\n> But why are there no anchors next to> <h3> items on that page? For example,> how do I get the link for the> \"Meta Commands\" subsection?I can't look at the code right now, but I suspect the headers are refsections (not sections) which this patch does not add links for yet.I already have plans to add this in a follow-up patch at some point, but while I had already added ids to all section elements in the previous patch that added ids, this has yet to be done for all refsect elements (wich is not a small effort again).Regards,Brar",
"msg_date": "Fri, 24 Mar 2023 11:37:48 +0100",
"msg_from": "brar <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?US-ASCII?Q?Re:_doc:_add_missing_\"id\"_attri?=\n =?US-ASCII?Q?butes_to_extension_packaging_page?="
},
{
"msg_contents": "On 2023-Mar-24, brar wrote:\n\n> Alvaro wrote:\n\n> > But why are there no anchors next to <h3> items on that page? For\n> > example, how do I get the link for the \"Meta Commands\" subsection?\n\n> I can't look at the code right now, but I suspect the headers are\n> refsections (not sections) which this patch does not add links for\n> yet. I already have plans to add this in a follow-up patch at some\n> point, but while I had already added ids to all section elements in\n> the previous patch that added ids, this has yet to be done for all\n> refsect elements (wich is not a small effort again).\n\nYou are right, those are <refsect2>. Understood, thanks.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n",
"msg_date": "Fri, 24 Mar 2023 11:46:48 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
}
] |
[
{
"msg_contents": "amcheck: Fix verify_heapam for tuples where xmin or xmax is 0.\n\nIn such cases, get_xid_status() doesn't set its output parameter (the\nthird argument), so we shouldn't fall through to code which will test\nthe value of that parameter. There are five existing calls to\nget_xid_status(), three of which seem to already handle this case\nproperly. This commit tries to fix the other two.\n\nIf we're checking xmin and find that it is invalid (i.e. 0) just\nreport that as corruption, similar to what's already done in the\nthree cases that seem correct. If we're checking xmax and find\nthat's invalid, that's fine: it just means that the tuple hasn't\nbeen updated or deleted.\n\nThanks to Andres Freund and valgrind for finding this problem, and\nalso to Andres for having a look at the patch. This bug seems to go\nall the way back to where verify_heapam was first introduced, but\nwasn't detected until recently, possibly because of the new test cases\nadded for update chain verification. Back-patch to v14, where this\ncode showed up.\n\nDiscussion: http://postgr.es/m/CA+TgmoZAYzQZqyUparXy_ks3OEOfLD9-bEXt8N-2tS1qghX9gQ@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/e88754a1965c0f40a723e6e46d670cacda9e19bd\n\nModified Files\n--------------\ncontrib/amcheck/verify_heapam.c | 8 ++++++--\n1 file changed, 6 insertions(+), 2 deletions(-)",
"msg_date": "Fri, 24 Mar 2023 15:13:52 +0000",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: amcheck: Fix verify_heapam for tuples where xmin or xmax is 0."
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 8:13 AM Robert Haas <[email protected]> wrote:\n> If we're checking xmin and find that it is invalid (i.e. 0) just\n> report that as corruption, similar to what's already done in the\n> three cases that seem correct. If we're checking xmax and find\n> that's invalid, that's fine: it just means that the tuple hasn't\n> been updated or deleted.\n\nWhat about aborted speculative insertions? See\nheap_abort_speculative(), which directly sets the speculatively\ninserted heap tuple's xmin to InvalidTransactionId/zero.\n\nIt probably does make sense to keep something close to this check --\nit just needs to account for speculative insertions to avoid false\npositive reports of corruption. We could perform cross-checks against\na tuple whose xmin is InvalidTransactionId/zero to verify that it\nreally is from an aborted speculative insertion, to the extent that\nthat's possible. For example, such a tuple can't be a heap-only tuple,\nand it can't have any xmax value other than InvalidTransactionId/zero.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 25 Mar 2023 15:24:34 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: amcheck: Fix verify_heapam for tuples where xmin or xmax\n is 0."
},
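For illustration, the cross-checks described above could look roughly like this in verify_heapam-style code (a sketch only, not the committed fix; tuphdr, ctx and report_corruption() are stand-ins for the module's actual context and reporting helpers):

/* Sketch: treat xmin == InvalidTransactionId as a possible aborted
 * speculative insertion rather than reporting corruption outright. */
TransactionId xmin = HeapTupleHeaderGetXmin(tuphdr);

if (!TransactionIdIsValid(xmin))
{
    /*
     * An aborted speculative insertion cannot be a heap-only tuple and
     * cannot carry any xmax, so anything else with a zeroed xmin is
     * reported as corruption.
     */
    if (HeapTupleHeaderIsHeapOnly(tuphdr) ||
        TransactionIdIsValid(HeapTupleHeaderGetRawXmax(tuphdr)))
        report_corruption(ctx,
                          "tuple with invalid xmin is inconsistent with an aborted speculative insertion");
    /* Nothing further can be validated against an invalid xmin. */
}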
{
"msg_contents": "On Sat, Mar 25, 2023 at 6:25 PM Peter Geoghegan <[email protected]> wrote:\n> On Fri, Mar 24, 2023 at 8:13 AM Robert Haas <[email protected]> wrote:\n> > If we're checking xmin and find that it is invalid (i.e. 0) just\n> > report that as corruption, similar to what's already done in the\n> > three cases that seem correct. If we're checking xmax and find\n> > that's invalid, that's fine: it just means that the tuple hasn't\n> > been updated or deleted.\n>\n> What about aborted speculative insertions? See\n> heap_abort_speculative(), which directly sets the speculatively\n> inserted heap tuple's xmin to InvalidTransactionId/zero.\n\nOh, dear. I didn't know about that case.\n\n> It probably does make sense to keep something close to this check --\n> it just needs to account for speculative insertions to avoid false\n> positive reports of corruption. We could perform cross-checks against\n> a tuple whose xmin is InvalidTransactionId/zero to verify that it\n> really is from an aborted speculative insertion, to the extent that\n> that's possible. For example, such a tuple can't be a heap-only tuple,\n> and it can't have any xmax value other than InvalidTransactionId/zero.\n\nSince this was back-patched, I think it's probably better to just\nremove the error. We can introduce new validation if we want, but that\nshould probably be master-only.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 Mar 2023 13:17:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: amcheck: Fix verify_heapam for tuples where xmin or xmax\n is 0."
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 10:17 AM Robert Haas <[email protected]> wrote:\n> > What about aborted speculative insertions? See\n> > heap_abort_speculative(), which directly sets the speculatively\n> > inserted heap tuple's xmin to InvalidTransactionId/zero.\n>\n> Oh, dear. I didn't know about that case.\n\nA big benefit of having extensive amcheck coverage is that it\neffectively centralizes information about the on-disk format, in an\neasy to understand way, and (over time) puts things on a more rigorous\nfooting. Now it'll be a lot harder for somebody else to overlook that\ncase in the future, which is good. Things are trending in the right\ndirection.\n\n> > It probably does make sense to keep something close to this check --\n> > it just needs to account for speculative insertions to avoid false\n> > positive reports of corruption. We could perform cross-checks against\n> > a tuple whose xmin is InvalidTransactionId/zero to verify that it\n> > really is from an aborted speculative insertion, to the extent that\n> > that's possible. For example, such a tuple can't be a heap-only tuple,\n> > and it can't have any xmax value other than InvalidTransactionId/zero.\n>\n> Since this was back-patched, I think it's probably better to just\n> remove the error. We can introduce new validation if we want, but that\n> should probably be master-only.\n\nThat makes sense.\n\nI don't think that it's particularly likely that having refined\naborted speculative insertion amcheck coverage will make a critical\ndifference to any user, at any time. But \"amcheck as documentation of\nthe on-disk format\" is reason enough to have it.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 27 Mar 2023 11:34:16 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: amcheck: Fix verify_heapam for tuples where xmin or xmax\n is 0."
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 2:34 PM Peter Geoghegan <[email protected]> wrote:\n> > Since this was back-patched, I think it's probably better to just\n> > remove the error. We can introduce new validation if we want, but that\n> > should probably be master-only.\n>\n> That makes sense.\n\nPatch attached.\n\n> I don't think that it's particularly likely that having refined\n> aborted speculative insertion amcheck coverage will make a critical\n> difference to any user, at any time. But \"amcheck as documentation of\n> the on-disk format\" is reason enough to have it.\n\nSure, if someone feels like writing the code. I have to admit that I\nhave mixed feelings about this whole direction. In concept, I agree\nwith you entirely: a fringe benefit of having checks that tell us\nwhether or not a page is valid is that it helps to make clear what\npage states we think are valid. In practice, however, the point you\nraise in your first sentence weighs awfully heavily with me. Spending\na lot of energy on checks that are unlikely to catch practical\nproblems feels like it may not be the best use of time. I'm not sure\nexactly where to draw the line, but it seems highly likely to be that\nthere are things we could deduce about the page that wouldn't be worth\nthe effort. For example, would we bother checking that a tuple with an\nin-progress xmin does not have a smaller natts value than a tuple with\na committed xmin? Or that natts values are non-decreasing across a HOT\nchain? I suspect there are even more obscure examples of things that\nshould be true but might not really be worth worrying about in the\ncode.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 27 Mar 2023 16:17:32 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: amcheck: Fix verify_heapam for tuples where xmin or xmax\n is 0."
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 1:17 PM Robert Haas <[email protected]> wrote:\n> Patch attached.\n\nThis is fine, as far as it goes. Obviously it fixes the immediate problem.\n\n> > I don't think that it's particularly likely that having refined\n> > aborted speculative insertion amcheck coverage will make a critical\n> > difference to any user, at any time. But \"amcheck as documentation of\n> > the on-disk format\" is reason enough to have it.\n>\n> Sure, if someone feels like writing the code. I have to admit that I\n> have mixed feelings about this whole direction. In concept, I agree\n> with you entirely: a fringe benefit of having checks that tell us\n> whether or not a page is valid is that it helps to make clear what\n> page states we think are valid.\n\nI don't think that it's a fringe benefit; it's just not necessarily of\ndirect benefit to amcheck users.\n\nBefore the HOT chain validation patch went in, it was unclear whether\ncertain conceivable on-disk states should constitute corruption. In\nparticular, it wasn't clear to anybody whether or not it was okay for\nan LP_REDIRECT to point to an LP_DEAD until recently (and probably\nother things besides that). I don't think that we should assume that\nthe easy part is abstractly defining corruption, while the hard part\nis writing the tool to check for the corruption. Sometimes it is, but\nI think that it's often the other way around.\n\n> In practice, however, the point you\n> raise in your first sentence weighs awfully heavily with me. Spending\n> a lot of energy on checks that are unlikely to catch practical\n> problems feels like it may not be the best use of time.\n\nThat definitely could be true, but I don't think that it's terribly\nmuch extra effort in most cases.\n\n> I'm not sure\n> exactly where to draw the line, but it seems highly likely to be that\n> there are things we could deduce about the page that wouldn't be worth\n> the effort. For example, would we bother checking that a tuple with an\n> in-progress xmin does not have a smaller natts value than a tuple with\n> a committed xmin? Or that natts values are non-decreasing across a HOT\n> chain? I suspect there are even more obscure examples of things that\n> should be true but might not really be worth worrying about in the\n> code.\n\nA related way of looking at it (that I also find appealing) is that\nit's often easier (far easier) to just have the check, and be done\nwith it. Of course there is bound to be uncertainty about how useful\nany given check might be; we're looking for something that is\ntheoretically never supposed to happen. Why not just assume that it\nmight matter if it's not costing very much to check for it?\n\nThis is quite a different mentality than the one we bring to core\nheapam code, where it's quite natural to just avoid strange corner\ncases in the on-disk format like the plague. The risk profile is\ntotally different for amcheck code. Within amcheck, I'd rather go too\nfar than not go far enough.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 27 Mar 2023 13:51:38 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: amcheck: Fix verify_heapam for tuples where xmin or xmax\n is 0."
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 4:52 PM Peter Geoghegan <[email protected]> wrote:\n> This is fine, as far as it goes. Obviously it fixes the immediate problem.\n\nOK, I've committed and back-patched this fix to v14, just like the\nerroneous commit that created the issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 28 Mar 2023 16:28:23 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: amcheck: Fix verify_heapam for tuples where xmin or xmax\n is 0."
}
] |
[
{
"msg_contents": "It looks like cfbot is stuck since 13h ago.\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql\n\n\n",
"msg_date": "Fri, 24 Mar 2023 14:23:20 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "cfbot stuck"
},
{
"msg_contents": "On Sat, Mar 25, 2023 at 8:23 AM Justin Pryzby <[email protected]> wrote:\n> It looks like cfbot is stuck since 13h ago.\n>\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql\n\nhttps://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/ broke\nits ability to push branches. It's back in business now.\n\n\n",
"msg_date": "Sat, 25 Mar 2023 09:00:26 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cfbot stuck"
}
] |
[
{
"msg_contents": "Ever since 27b62377b47f9e7bf58613, I have been getting \"ERROR: mergejoin\ninput data is out of order\" for the attached reproducer.\n\nI get this on Ubuntu 20.04 and 22.04, whether initdb was run under LC_ALL=C\nor under LANG=en_US.UTF-8.\n\nIt is not my query, I don't really know what its point is. I just got this\nerror while looking into the performance of it and accidentally running it\nagainst 16dev.\n\nCheers,\n\nJeff",
"msg_date": "Fri, 24 Mar 2023 15:45:36 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bug with ICU for merge join"
},
{
"msg_contents": "On Fri, 2023-03-24 at 15:45 -0400, Jeff Janes wrote:\n> Ever since 27b62377b47f9e7bf58613, I have been getting \"ERROR:\n> mergejoin input data is out of order\" for the attached reproducer.\n\nThank you for the report! And the simple repro.\n\nFixed in 81a6d57e33.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 25 Mar 2023 11:16:28 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bug with ICU for merge join"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen building the pdf docs, fop emits a line for each page of the docs:\n> ...\n> [INFO] FOUserAgent - Rendered page #2931.\n\nwhich, given the length of our docs, makes the output pretty pointless. Even\nif there are warnings, one likely won't notice them.\n\nI just figured out that one can hide those. Unfortunately not at the\ncommandline, but in \"$HOME/.foprc\" or /etc.\n\n$ cat ~/.foprc\nLOGLEVEL=-Dorg.apache.commons.logging.simplelog.defaultlog=WARN\n\nmakes it a lot less annoying. And one can see that we currently are getting\nwarnings:\n\n[warning] /usr/bin/fop: JVM flavor 'sun' not understood\n[WARN] FOUserAgent - Font \"Symbol,normal,700\" not found. Substituting with \"Symbol,normal,400\".\n[WARN] FOUserAgent - Font \"ZapfDingbats,normal,700\" not found. Substituting with \"ZapfDingbats,normal,400\".\n[WARN] FOUserAgent - The contents of fo:block line 2 exceed the available area in the inline-progression direction by more than 50 points. (See position 30429:383)\n[WARN] PropertyMaker - span=\"inherit\" on fo:block, but no explicit value found on the parent FO.\n\nThe first is a debianism, the next two are possibly spurious [1]. But the next\ntwo might be relevant?\n\n\nI don't immediately see a way that's not too gross (like redefining HOME when\ninvoking fop) to set LOGLEVEL without editing .foprc. Perhaps we should add\nadvice to do so to docguide.sgml?\n\nGreetings,\n\nAndres Freund\n\n[1] https://lists.apache.org/thread/yqkjzow3y8fpo9fc3hlbqb9fk49fonlf\n\n\n",
"msg_date": "Fri, 24 Mar 2023 12:47:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Make fop less verbose when building PDF"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> I just figured out that one can hide those. Unfortunately not at the\n> commandline, but in \"$HOME/.foprc\" or /etc.\n\n> $ cat ~/.foprc\n> LOGLEVEL=-Dorg.apache.commons.logging.simplelog.defaultlog=WARN\n\nYeah. I've done it locally by modifying the \"fop\" script ;-)\n... but probably ~/.foprc would be neater. I see that I also\nchanged the default logger:\n\nLOGCHOICE=-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog\n\nbecause at least in the version I have, that isn't the default.\n\n> [warning] /usr/bin/fop: JVM flavor 'sun' not understood\n> [WARN] FOUserAgent - Font \"Symbol,normal,700\" not found. Substituting with \"Symbol,normal,400\".\n> [WARN] FOUserAgent - Font \"ZapfDingbats,normal,700\" not found. Substituting with \"ZapfDingbats,normal,400\".\n> [WARN] FOUserAgent - The contents of fo:block line 2 exceed the available area in the inline-progression direction by more than 50 points. (See position 30429:383)\n> [WARN] PropertyMaker - span=\"inherit\" on fo:block, but no explicit value found on the parent FO.\n\n> The first is a debianism, the next two are possibly spurious [1]. But the next\n> two might be relevant?\n\nThe one about \"exceed the available area\" has been on my radar to fix;\nit's a consequence of an overly-wide example somebody added recently.\nThe other ones have been there all along and I don't know of a way to\nget rid of them.\n\n> I don't immediately see a way that's not too gross (like redefining HOME when\n> invoking fop) to set LOGLEVEL without editing .foprc. Perhaps we should add\n> advice to do so to docguide.sgml?\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Mar 2023 16:19:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make fop less verbose when building PDF"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-24 16:19:57 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I just figured out that one can hide those. Unfortunately not at the\n> > commandline, but in \"$HOME/.foprc\" or /etc.\n> \n> > $ cat ~/.foprc\n> > LOGLEVEL=-Dorg.apache.commons.logging.simplelog.defaultlog=WARN\n> \n> Yeah. I've done it locally by modifying the \"fop\" script ;-)\n> ... but probably ~/.foprc would be neater. I see that I also\n> changed the default logger:\n> \n> LOGCHOICE=-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog\n> \n> because at least in the version I have, that isn't the default.\n\nIt might be a debian patch setting it as the default.\n\n\nHow about:\n\n <para>\n In its default configuration <productname>FOP</productname> will emit an\n <literal>INFO</literal> message for each page. The log level can be\n changed via <filename>~/.foprc</filename>:\n<programlisting>\nLOGCHOICE=-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog\nLOGLEVEL=-Dorg.apache.commons.logging.simplelog.defaultlog=WARN\n</programlisting>\n </para>\n\n\n> > [warning] /usr/bin/fop: JVM flavor 'sun' not understood\n> > [WARN] FOUserAgent - Font \"Symbol,normal,700\" not found. Substituting with \"Symbol,normal,400\".\n> > [WARN] FOUserAgent - Font \"ZapfDingbats,normal,700\" not found. Substituting with \"ZapfDingbats,normal,400\".\n> > [WARN] FOUserAgent - The contents of fo:block line 2 exceed the available area in the inline-progression direction by more than 50 points. (See position 30429:383)\n> > [WARN] PropertyMaker - span=\"inherit\" on fo:block, but no explicit value found on the parent FO.\n> \n> > The first is a debianism, the next two are possibly spurious [1]. But the next\n> > two might be relevant?\n> \n> The one about \"exceed the available area\" has been on my radar to fix;\n> it's a consequence of an overly-wide example somebody added recently.\n\nAh, good.\n\n\n> The other ones have been there all along and I don't know of a way to\n> get rid of them.\n\nYea, looks like the span=\"inherit\" one is harmless and known:\n\nhttps://issues.apache.org/jira/browse/FOP-1534\n\nWe could silence it in our stylesheet, but it's probably not worth bothering.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 24 Mar 2023 14:02:55 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make fop less verbose when building PDF"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> How about:\n\n> <para>\n> In its default configuration <productname>FOP</productname> will emit an\n> <literal>INFO</literal> message for each page. The log level can be\n> changed via <filename>~/.foprc</filename>:\n> <programlisting>\n> LOGCHOICE=-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog\n> LOGLEVEL=-Dorg.apache.commons.logging.simplelog.defaultlog=WARN\n> </programlisting>\n> </para>\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Mar 2023 17:05:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make fop less verbose when building PDF"
}
] |
[
{
"msg_contents": "Hello,\n\nRecently I have been trying to use libpq's pipeline mode in a project,\nand in the process I have noticed that the PQpipelineSync() function\nhas a deficiency (which, to be fair, could be an advantage in other\nsituations): It combines the establishment of a synchronization point\nin a pipeline with a send buffer flush, i.e. a system call. In my use\ncase I build up a pipeline of several completely independent queries,\nso a synchronization point is required between each of them, but\nperforming a system call for each is just unnecessary overhead,\nespecially if the system is severely affected by any mitigations for\nSpectre or other security vulnerabilities. That's why I propose to add\nan interface to libpq to establish a synchronization point in a\npipeline without performing any further actions.\n\nI have attached a patch that introduces PQsendSyncMessage(), a\nfunction that is equivalent to PQpipelineSync(), except that it does\nnot flush anything to the server; the user must subsequently call\nPQflush() instead. Alternatively, the new function is equivalent to\nPQsendFlushRequest(), except that it sends a sync message instead of a\nflush request. In addition to reducing the system call overhead of\nlibpq's pipeline mode, it also makes it easier for the operating\nsystem to send as much of the pipeline as possible in a single TCP (or\nlower level protocol) packet when the database is running remotely.\n\nI would appeciate your thoughts on my proposal.\n\nBest wishes,\nAnton Kirilov",
"msg_date": "Fri, 24 Mar 2023 22:38:48 +0000",
"msg_from": "Anton Kirilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add PQsendSyncMessage() to libpq"
},
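As a rough usage illustration, a batch of independent queries could be queued with a sync point after each one and pushed to the server with a single flush at the end (a sketch assuming the proposed PQsendSyncMessage(); error handling and result consumption are abbreviated):

/* Sketch only: PQsendSyncMessage() is the function proposed above. */
PQenterPipelineMode(conn);

PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0);
PQsendSyncMessage(conn);        /* sync point, nothing written yet */

PQsendQueryParams(conn, "SELECT 2", 0, NULL, NULL, NULL, NULL, 0);
PQsendSyncMessage(conn);        /* second sync point, still buffered */

PQflush(conn);                  /* one system call for the whole batch */

/* ... then consume results with PQgetResult() as usual ... */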
{
"msg_contents": "Anton Kirilov wrote:\n> I would appeciate your thoughts on my proposal.\n\nThis sounds like a useful addition to me. I've played a bit with it in \nPsycopg and it works fine.\n\n\ndiff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c\nindex a16bbf32ef..e2b32c1379 100644\n--- a/src/interfaces/libpq/fe-exec.c\n+++ b/src/interfaces/libpq/fe-exec.c\n@@ -82,6 +82,7 @@ static int PQsendDescribe(PGconn *conn, char desc_type,\n static int check_field_number(const PGresult *res, int field_num);\n static void pqPipelineProcessQueue(PGconn *conn);\n static int pqPipelineFlush(PGconn *conn);\n+static int send_sync_message(PGconn *conn, int flush);\n\nCould (should?) be:\nstatic int send_sync_message(PGconn *conn, bool flush);\n\n\ndiff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c \nb/src/test/modules/libpq_pipeline/libpq_pipeline.c\nindex f48da7d963..829907957a 100644\n--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c\n+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c\n@@ -244,6 +244,104 @@ test_multi_pipelines(PGconn *conn)\n fprintf(stderr, \"ok\\n\");\n }\n\n+static void\n+test_multi_pipelines_noflush(PGconn *conn)\n+{\n\nMaybe test_multi_pipelines() could be extended with an additional \nPQsendQueryParams()+PQsendSyncMessage() step instead of adding this \nextra test case?\n\n\n",
"msg_date": "Tue, 25 Apr 2023 16:23:32 +0200",
"msg_from": "Denis Laxalde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "Hello,\n\nOn 25/04/2023 15:23, Denis Laxalde wrote:\n> This sounds like a useful addition to me. I've played a bit with it in\n> Psycopg and it works fine.\n\nThank you very much for reviewing my patch! I have attached a new\nversion of it that addresses your comments and that has been rebased on\ntop of the current tip of the master branch (by fixing a merge\nconflict), i.e. commit 7b7fa85130330128b404eddebd4f33c6739454b0.\n\nFor the sake of others who might read this e-mail thread, I would like\nto mention that my patch is complete (including documentation and tests,\nbut modulo review comments, of course), and that it passes the tests, i.e.:\n\nmake check\nmake -C src/test/modules/libpq_pipeline check\n\nBest wishes,\nAnton Kirilov",
"msg_date": "Wed, 26 Apr 2023 23:56:49 +0100",
"msg_from": "Anton Kirilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "Hello,\n\nAnton Kirilov a écrit :\n> On 25/04/2023 15:23, Denis Laxalde wrote:\n>> This sounds like a useful addition to me. I've played a bit with it in\n>> Psycopg and it works fine.\n> \n> Thank you very much for reviewing my patch! I have attached a new\n> version of it that addresses your comments and that has been rebased on\n> top of the current tip of the master branch (by fixing a merge\n> conflict), i.e. commit 7b7fa85130330128b404eddebd4f33c6739454b0.\n> \n> For the sake of others who might read this e-mail thread, I would like\n> to mention that my patch is complete (including documentation and tests,\n> but modulo review comments, of course), and that it passes the tests, i.e.:\n> \n> make check\n> make -C src/test/modules/libpq_pipeline check\n\nThank you; this V2 looks good to me.\nMarking as ready for committer.\n\n\n\n",
"msg_date": "Thu, 27 Apr 2023 13:06:27 +0200",
"msg_from": "Denis Laxalde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Thu, Apr 27, 2023 at 01:06:27PM +0200, Denis Laxalde wrote:\n> Thank you; this V2 looks good to me.\n> Marking as ready for committer.\n\nPlease note that we are in a stabilization period for v16 and that the\nfirst commit fest of v17 should start in July, so it will perhaps take\nsome time before this is looked at by a committer.\n\nSpeaking of which, what was the performance impact of your application\nonce PQflush() was moved out of the pipeline sync? Just asking for\ncuriosity..\n--\nMichael",
"msg_date": "Fri, 28 Apr 2023 16:10:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "Michael Paquier a écrit :\n> On Thu, Apr 27, 2023 at 01:06:27PM +0200, Denis Laxalde wrote:\n>> Thank you; this V2 looks good to me.\n>> Marking as ready for committer.\n> \n> Please note that we are in a stabilization period for v16 and that the\n> first commit fest of v17 should start in July, so it will perhaps take\n> some time before this is looked at by a committer.\n\nYes, I am aware; totally fine by me.\n\n> Speaking of which, what was the performance impact of your application\n> once PQflush() was moved out of the pipeline sync? Just asking for\n> curiosity..\n\nI have no metrics for that; but maybe Anton has some?\n(In Psycopg, we generally do not expect users to handle the sync \noperation themselves, it's done under the hood; and I only found one \nsituation where the flush could be avoided, but that's largely because \nour design, there can be more in general I think.)\n\n\n\n",
"msg_date": "Fri, 28 Apr 2023 10:08:15 +0200",
"msg_from": "Denis Laxalde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 6:39 PM Anton Kirilov <[email protected]> wrote:\n> I have attached a patch that introduces PQsendSyncMessage(), a\n> function that is equivalent to PQpipelineSync(), except that it does\n> not flush anything to the server; the user must subsequently call\n> PQflush() instead. Alternatively, the new function is equivalent to\n> PQsendFlushRequest(), except that it sends a sync message instead of a\n> flush request. In addition to reducing the system call overhead of\n> libpq's pipeline mode, it also makes it easier for the operating\n> system to send as much of the pipeline as possible in a single TCP (or\n> lower level protocol) packet when the database is running remotely.\n\nI wonder whether this is the naming that we want. The two names are\nsignificantly different. Something like PQpipelineSendSync() would be\nmore similar.\n\nI also wonder, really even more, whether it would be better to do\nsomething like PQpipelinePutSync(PGconn *conn, bool flush) with\nPQpipelineSync(conn) just meaning PQpipelinePutSync(conn, true). We're\nbasically using the function name as a Boolean parameter to select the\nbehavior, which is fine if you only have one parameter and it's a\nBoolean, but it's obviously unworkable if you have say 3 Boolean\nparameters because you don't want 8 different functions, and what if\nyou need an integer parameter for some reason?\n\nSo I'd favor exposing a function that is effectively an extended\nversion of PQpipelineSendSync() with an additional Boolean parameter,\nand that way if for some reason somebody needs to extend it again,\nthey can just make an even more extended version with yet another\nparameter. That way, all the functionality is always available by\ncalling the newest function, and older ones are still there for older\napplications.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 28 Apr 2023 08:06:47 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
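Put differently, the two shapes being weighed are roughly the following (declarations are illustrative only, mirroring the names floated above):

/* Behaviour encoded in the function name: */
int PQpipelineSendSync(PGconn *conn);

/* Behaviour selected by a parameter, keeping the old call as a special case,
 * i.e. PQpipelineSync(conn) would mean PQpipelinePutSync(conn, true): */
int PQpipelinePutSync(PGconn *conn, bool flush);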
{
"msg_contents": "Hello,\n\nOn 28/04/2023 13:06, Robert Haas wrote:\n> On Fri, Mar 24, 2023 at 6:39 PM Anton Kirilov <[email protected]> wrote:\n>> I have attached a patch that introduces PQsendSyncMessage()...\n> \n> I wonder whether this is the naming that we want. The two names are\n> significantly different. Something like PQpipelineSendSync() would be\n> more similar.\n\nThe reason is that the function is modeled after PQsendFlushRequest(), \nsince it felt closer to what I was trying to achieve, i.e. appending a \nprotocol message to the output buffer without doing any actual I/O \noperations.\n\n> I also wonder, really even more, whether it would be better to do\n> something like PQpipelinePutSync(PGconn *conn, bool flush) with\n> PQpipelineSync(conn) just meaning PQpipelinePutSync(conn, true).\n\nActually I believe that there is another issue with PQpipelineSync() \nthat has to do with ergonomics - according to a comment inside its body \n( \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/interfaces/libpq/fe-exec.c;h=a16bbf32ef5c0043eee9c92ab82bf4f11386ee47;hb=HEAD#l3189 \n) it could fail silently to send all the buffered data, which seems to \nbe problematic when operating in non-blocking mode. In practice, this \nmeans that all calls to PQpipelineSync() must be followed by execution \nof PQflush() to check whether the application should poll for write \nreadiness. I suppose that that was the reason why I was going for a \nsolution that did not combine changing the connection state with doing \nI/O operations.\n\nIn any case I am not particularly attached to any naming or the exact \nshape of the new API, as long as it achieves the same goal (reducing the \nnumber of system calls), but before I make any substantial changes to my \npatch, I would like to hear your thoughts on the matter.\n\nBest wishes,\nAnton Kirilov\n\n\n",
"msg_date": "Sat, 29 Apr 2023 17:06:03 +0100",
"msg_from": "Anton Kirilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
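For context, this is the flush-and-wait dance a non-blocking caller already has to perform after PQpipelineSync(); the sketch below waits for write-readiness with select(2), though any readiness API would do (error handling abbreviated; assumes <sys/select.h> and a connection switched to non-blocking mode with PQsetnonblocking()):

/* Sketch: drain libpq's output buffer on a non-blocking connection. */
int rc;

while ((rc = PQflush(conn)) == 1)
{
    fd_set  wfds;
    int     sock = PQsocket(conn);

    FD_ZERO(&wfds);
    FD_SET(sock, &wfds);
    if (select(sock + 1, NULL, &wfds, NULL, NULL) < 0)
        break;                  /* handle select() failure */
}
if (rc == -1)
{
    /* handle the write error reported by PQflush() */
}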
{
"msg_contents": "Hello,\n\nOn 28/04/2023 09:08, Denis Laxalde wrote:\n> Michael Paquier a écrit :\n>> Speaking of which, what was the performance impact of your application\n>> once PQflush() was moved out of the pipeline sync? Just asking for\n>> curiosity..\n> \n> I have no metrics for that; but maybe Anton has some?\nI did a quick check using the TechEmpower Framework Benchmarks ( \nhttps://www.techempower.com/benchmarks/ ) - they define 4 Web \napplication tests that are database-bound. Everything was running on a \nsingle machine, and 3 of the tests had an improvement of 29.16%, 32.30%, \nand 41.78% respectively in the number of requests per second (Web \napplication requests, not database queries), while the last test \nregressed by 0.66% (which I would say is practically no difference, \ngiven that there is always some measurement noise). I will try to get \nthe changes from my patch tested in the project's continuous \nbenchmarking environment, which has a proper set up with 3 servers \n(client, application server, and database) connected by a 10GbE link.\n\nBest wishes,\nAnton Kirilov\n\n\n",
"msg_date": "Sun, 30 Apr 2023 01:59:17 +0100",
"msg_from": "Anton Kirilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Sun, Apr 30, 2023 at 01:59:17AM +0100, Anton Kirilov wrote:\n> I did a quick check using the TechEmpower Framework Benchmarks (\n> https://www.techempower.com/benchmarks/ ) - they define 4 Web application\n> tests that are database-bound. Everything was running on a single machine,\n> and 3 of the tests had an improvement of 29.16%, 32.30%, and 41.78%\n> respectively in the number of requests per second (Web application requests,\n> not database queries), while the last test regressed by 0.66% (which I would\n> say is practically no difference, given that there is always some\n> measurement noise). I will try to get the changes from my patch tested in\n> the project's continuous benchmarking environment, which has a proper set up\n> with 3 servers (client, application server, and database) connected by a\n> 10GbE link.\n\nWell, these are nice numbers. At ~1% I am ready to buy the noise\nargument, but what would the range of the usual noise when it comes to\nmultiple runs under the same conditions?\n\nLet's make sure that the API interface is the most intuitive (Robert\nhas commented about that a few days ago, still need to follow up on\nthat).\n--\nMichael",
"msg_date": "Tue, 2 May 2023 08:55:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Sat, Apr 29, 2023 at 05:06:03PM +0100, Anton Kirilov wrote:\n> In any case I am not particularly attached to any naming or the exact shape\n> of the new API, as long as it achieves the same goal (reducing the number of\n> system calls), but before I make any substantial changes to my patch, I\n> would like to hear your thoughts on the matter.\n\nAnother thing that may matter in terms of extensibility? Would a\nboolean argument really be the best design? Could it be better to\nhave instead one API with a bits32 and some flags controlling its\ninternals?\n--\nMichael",
"msg_date": "Tue, 2 May 2023 09:42:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Mon, May 1, 2023 at 8:42 PM Michael Paquier <[email protected]> wrote:\n> Another thing that may matter in terms of extensibility? Would a\n> boolean argument really be the best design? Could it be better to\n> have instead one API with a bits32 and some flags controlling its\n> internals?\n\nI wondered that, too. If we never add any more Boolean parameters to\nthis function then that would end up a waste, but maybe we will and\nthen it will be genius. Not sure what's best.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 May 2023 10:02:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On 2023-May-02, Robert Haas wrote:\n\n> On Mon, May 1, 2023 at 8:42 PM Michael Paquier <[email protected]> wrote:\n> > Another thing that may matter in terms of extensibility? Would a\n> > boolean argument really be the best design? Could it be better to\n> > have instead one API with a bits32 and some flags controlling its\n> > internals?\n> \n> I wondered that, too. If we never add any more Boolean parameters to\n> this function then that would end up a waste, but maybe we will and\n> then it will be genius. Not sure what's best.\n\nI agree that adding a flag is the way to go, since it improve chances\nthat we won't end up with ten different functions in case we decide to\nhave eight other behaviors. One more function and we're done. And\nwhile I can't think of any use for a future flag, we (I) already didn't\nof this one either, so let's not make the same mistake.\n\nWe already have 'int' flag masks in PQcopyResult() and\nPQsetTraceFlags(). We were using bits32 initially for flag stuff in the\nPQtrace facilities, until [1] reminded us that we shouldn't let c.h\ncreep into app-land, so that was turned into plain 'int'.\n\n[1] https://www.postgresql.org/message-id/TYAPR01MB2990B6C6A32ACF15D97AE94AFEBD0%40TYAPR01MB2990.jpnprd01.prod.outlook.com\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No nos atrevemos a muchas cosas porque son difíciles,\npero son difíciles porque no nos atrevemos a hacerlas\" (Séneca)\n\n\n",
"msg_date": "Wed, 3 May 2023 12:03:57 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "Hello,\n\n> On 3 May 2023, at 11:03, Alvaro Herrera <[email protected]> wrote:\n> \n> On 2023-May-02, Robert Haas wrote:\n> \n>> On Mon, May 1, 2023 at 8:42 PM Michael Paquier <[email protected]> wrote:\n>>> Another thing that may matter in terms of extensibility? Would a\n>>> boolean argument really be the best design? Could it be better to\n>>> have instead one API with a bits32 and some flags controlling its\n>>> internals?\n>> \n>> I wondered that, too. If we never add any more Boolean parameters to\n>> this function then that would end up a waste, but maybe we will and\n>> then it will be genius. Not sure what's best.\n> \n> I agree that adding a flag is the way to go, since it improve chances\n> that we won't end up with ten different functions in case we decide to\n> have eight other behaviors. One more function and we're done. And\n> while I can't think of any use for a future flag, we (I) already didn't\n> of this one either, so let's not make the same mistake.\n\nThank you all for the feedback! Do you have any thoughts on the other issue with PQpipelineSync() I have mentioned in my previous message? Am I just misunderstanding what the code comment means and how the API is supposed to be used by any chance?\n\nBest wishes,\nAnton Kirilov\n\n\n",
"msg_date": "Thu, 4 May 2023 10:21:56 +0100",
"msg_from": "Anton Kirilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On 2023-May-04, Anton Kirilov wrote:\n\n> Thank you all for the feedback! Do you have any thoughts on the other\n> issue with PQpipelineSync() I have mentioned in my previous message?\n\nEh, I hadn't seen that one.\n\n> Am I just misunderstanding what the code comment means and how the API\n> is supposed to be used by any chance?\n\nI think you have it right: it is possible that the buffer has not been\nfully flushed by the time PQpipelineSync returns.\n\nIf you want to make sure it's fully flushed, your only option is to have\nthe call block. That would make it no longer non-blocking, so it has to\nbe explicitly requested behavior.\nI think this means to add yet another behavior flag for the new\nfunction: have it block, waiting for the buffer to be flushed.\n\nSo your application can put several sync points in the queue, with no\nflushing (and of course no blocking), and have it flush+block only on\nthe \"last\" one. Of course, for other users, you want the current\nbehavior: have it flush opportunistically but not block. So you can't\nmake it a single flag.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 4 May 2023 12:36:42 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "Hello,\n\nOn Thu, 4 May 2023, 11:36 Alvaro Herrera, <[email protected] <mailto:[email protected]>> wrote:\n> On 2023-May-04, Anton Kirilov wrote:\n> If you want to make sure it's fully flushed, your only option is to have\n> the call block.\n\n\nSurely PQflush() returning 0 would signify that the output buffer has been fully flushed? Which means that there is another, IMHO simpler option than introducing an extra flag - make the new function return the same values as PQflush(), i.e. 0 for no error and fully flushed output, -1 for error, and 1 for partial flush (so that the user may start polling for write readiness). Of course, the function would never return 1 (but would block instead) unless the user has called PQsetnonblocking() beforehand.\n\nBest wishes,\nAnton Kirilov\n\nHello,On Thu, 4 May 2023, 11:36 Alvaro Herrera, <[email protected]> wrote:On 2023-May-04, Anton Kirilov wrote:\nIf you want to make sure it's fully flushed, your only option is to have\nthe call block.Surely PQflush() returning 0 would signify that the output buffer has been fully flushed? Which means that there is another, IMHO simpler option than introducing an extra flag - make the new function return the same values as PQflush(), i.e. 0 for no error and fully flushed output, -1 for error, and 1 for partial flush (so that the user may start polling for write readiness). Of course, the function would never return 1 (but would block instead) unless the user has called PQsetnonblocking() beforehand.Best wishes,Anton Kirilov",
"msg_date": "Fri, 5 May 2023 16:02:24 +0100",
"msg_from": "Anton Kirilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Wed, May 03, 2023 at 12:03:57PM +0200, Alvaro Herrera wrote:\n> We already have 'int' flag masks in PQcopyResult() and\n> PQsetTraceFlags(). We were using bits32 initially for flag stuff in the\n> PQtrace facilities, until [1] reminded us that we shouldn't let c.h\n> creep into app-land, so that was turned into plain 'int'.\n> \n> [1] https://www.postgresql.org/message-id/TYAPR01MB2990B6C6A32ACF15D97AE94AFEBD0%40TYAPR01MB2990.jpnprd01.prod.outlook.com\n\nIndeed. Good point!\n--\nMichael",
"msg_date": "Mon, 8 May 2023 12:14:58 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "Hello,\n\nOn 05/05/2023 16:02, Anton Kirilov wrote:\n> On Thu, 4 May 2023, 11:36 Alvaro Herrera, <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 2023-May-04, Anton Kirilov wrote:\n> If you want to make sure it's fully flushed, your only option is to have\n> the call block.\n> \n> \n> Surely PQflush() returning 0 would signify that the output buffer has \n> been fully flushed?\nSince I haven't got any further comments, I assume that I am correct, so \nhere is an updated version of the patch that should address all feedback \nthat I have received so far and all issues that I have identified.\n\nThanks,\nAnton Kirilov",
"msg_date": "Sun, 21 May 2023 18:17:18 +0100",
"msg_from": "Anton Kirilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "Hello,\n\nOn 02/05/2023 00:55, Michael Paquier wrote:\n > Well, these are nice numbers. At ~1% I am ready to buy the noise\n > argument, but what would the range of the usual noise when it comes to\n > multiple runs under the same conditions>\n\nI managed to get my patch tested in the TechEmpower Framework Benchmarks \ncontinuous benchmarking environment, and even though it takes roughly a \nweek to get a new set of results, now there had been a couple of runs \nboth with and without my changes. All 4 database-bound Web application \ntests (single query, multiple queries, fortunes, and data updates) saw \nimprovements, by approximately 8.94%, 0.64%, 9.54%, and 2.78% \nrespectively. The standard errors were 0.65% or less, so there was \npractically no change in the second test. However, I have seen another \nimplementation experience a much larger improvement (~6.69%) in that \ntest from essentially the same optimization, so I think that my own code \nhas another bottleneck. Note that these test runs were not in the same \nbenchmarking environment as the one I used previously for a quick check, \nso the values differ. Also, another set of results should become \navailable in a week or so (and would be based on my optimization).\n\nLinks to the test runs:\nhttps://www.techempower.com/benchmarks/#section=test&runid=1ecf679a-9686-4de7-a3b7-de16a1a84bb6&l=zik0zi-35r&w=zhb2tb-zik0zj-zik0zj-sf&test=db\nhttps://www.techempower.com/benchmarks/#section=test&runid=aab00736-445c-4b7f-83b5-451c47c83395&l=zik0zi-35r&w=zhb2tb-zik0zj-zik0zj-sf&test=db\nhttps://www.techempower.com/benchmarks/#section=test&runid=bc7f7570-a88e-48e3-9874-06d7dc0a0f74&l=zik0zi-35r&w=zhb2tb-zik0zj-zik0zj-sf&test=db\nhttps://www.techempower.com/benchmarks/#section=test&runid=e6dd1abd-7aa2-4846-9b44-d8fd8a23d385&l=zik0zi-35r&w=zhb2tb-zik0zj-zik0zj-sf&test=db\n(ordered chronologically; the first 2 did not include my optimization)\n\nBest wishes,\nAnton Kirilov\n\n\n",
"msg_date": "Mon, 22 May 2023 01:18:09 +0100",
"msg_from": "Anton Kirilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "> On 21 May 2023, at 19:17, Anton Kirilov <[email protected]> wrote:\n\n> .. here is an updated version of the patch \n\nThis hunk here:\n\n-\tif (PQflush(conn) < 0)\n+\tconst int ret = flags & PG_PIPELINEPUTSYNC_FLUSH ? PQflush(conn) : 0;\n+\n+\tif (ret < 0)\n\n..is causing this compiler warning:\n\nfe-exec.c: In function ‘PQpipelinePutSync’:\nfe-exec.c:3203:2: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]\n3203 | const int ret = flags & PG_PIPELINEPUTSYNC_FLUSH ? PQflush(conn) : 0;\n | ^~~~~\ncc1: all warnings being treated as errors\n\nAlso, the patch no longer applies. Please rebase and send an updated version.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Jul 2023 22:45:26 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "Hello,\n\nOn 05/07/2023 21:45, Daniel Gustafsson wrote:\n> Please rebase and send an updated version.\nHere it is (including the warning fix).\n\nThanks,\nAnton Kirilov",
"msg_date": "Thu, 6 Jul 2023 01:42:59 +0100",
"msg_from": "Anton Kirilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Fri, 28 Apr 2023 at 14:07, Robert Haas <[email protected]> wrote:\n> I wonder whether this is the naming that we want. The two names are\n> significantly different. Something like PQpipelineSendSync() would be\n> more similar.\n>\n> I also wonder, really even more, whether it would be better to do\n> something like PQpipelinePutSync(PGconn *conn, bool flush) with\n> PQpipelineSync(conn) just meaning PQpipelinePutSync(conn, true). We're\n> basically using the function name as a Boolean parameter to select the\n> behavior, which is fine if you only have one parameter and it's a\n> Boolean, but it's obviously unworkable if you have say 3 Boolean\n> parameters because you don't want 8 different functions, and what if\n> you need an integer parameter for some reason?\n\nOn Wed, 3 May 2023 at 12:04, Alvaro Herrera <[email protected]> wrote:\n> I agree that adding a flag is the way to go, since it improve chances\n> that we won't end up with ten different functions in case we decide to\n> have eight other behaviors. One more function and we're done. And\n> while I can't think of any use for a future flag, we (I) already didn't\n> of this one either, so let's not make the same mistake.\n\nOn Sat, 29 Apr 2023 at 18:07, Anton Kirilov <[email protected]> wrote:\n> The reason is that the function is modeled after PQsendFlushRequest(),\n> since it felt closer to what I was trying to achieve, i.e. appending a\n> protocol message to the output buffer without doing any actual I/O\n> operations.\n\nSorry for being late to the party, but I think the current API naming\nand the flag argument don't fit well with the current libpq API that\nwe have. I much prefer something similar to the original version of\nthe patch.\n\nI think this function should be named something with the \"PQsend\"\nprefix since that's the way we name all our public async message\nsending functions in libpq. The \"Put\" word we only use in internal\nlibpq functions, so I feel it has no place in the external API\nsurface. My proposal would be to call the function PQsendPipelineSync\n(i.e. having the PQsend prefix while still looking similar to the\nexisting PQpipelineSync).\n\nAlso I think the flag argument is completely unnecessary. I understand\nthe argument that we didn't foresee the need for this non-flushing\nbehaviour either, and the follow up reasoning that we thus should add\na flag for future things we didn't forsee. But I think it's looking at\nthe situation from the wrong direction. Instead of looking at it as\nadding another version of our current PQpipelineSync API, we should\nlook at it as an addition to our current list of PQsend functions for\na new packet type. And none of those PQsend functions ever needed a\nflag. Which makes sense, because they are the lowest level building\nblocks that make sense from a user perspective: They send a message\ntype over the socket and don't do anything else. And if the assumption\nthat this is the lowest level building block is wrong, then it will\nalmost certainly be wrong for all other PQsend functions too. And thus\nwe'll need a solution that fits for all of them.\n\nFinally, I have one suggestion for a behavioural change: I think the\nfunction should still call pqPipelineFlush, just like all of our other\nPQsend functions (except PQsendFlushRequest, but that seems like an\noversight there too).\n\n\n",
"msg_date": "Tue, 7 Nov 2023 10:23:12 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On 2023-Nov-07, Jelte Fennema-Nio wrote:\n\n> I think this function should be named something with the \"PQsend\"\n> prefix since that's the way we name all our public async message\n> sending functions in libpq. The \"Put\" word we only use in internal\n> libpq functions, so I feel it has no place in the external API\n> surface. My proposal would be to call the function PQsendPipelineSync\n> (i.e. having the PQsend prefix while still looking similar to the\n> existing PQpipelineSync).\n\nArgued that way, it makes sense to me.\n\n> Also I think the flag argument is completely unnecessary. [...]\n> Instead of looking at it as adding another version of our current\n> PQpipelineSync API, we should look at it as an addition to our current\n> list of PQsend functions for a new packet type. And none of those\n> PQsend functions ever needed a flag.\n\nTrue.\n\n> Finally, I have one suggestion for a behavioural change: I think the\n> function should still call pqPipelineFlush, just like all of our other\n> PQsend functions (except PQsendFlushRequest, but that seems like an\n> oversight there too).\n\nI agree.\n\nSo, yeah, it looks like this will be pretty similar to Anton's original\npatch, with PQpipelineSync() being just PQsendPipelineSync() + PQflush().\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"In fact, the basic problem with Perl 5's subroutines is that they're not\ncrufty enough, so the cruft leaks out into user-defined code instead, by\nthe Conservation of Cruft Principle.\" (Larry Wall, Apocalypse 6)\n\n\n",
"msg_date": "Wed, 8 Nov 2023 17:20:54 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
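In code, the relationship sketched above comes down to roughly this (illustrative only; the actual patch uses libpq's internal flush helper rather than the public PQflush()):

/* Sketch: the existing entry point becomes a thin wrapper over the new,
 * non-flushing one. */
int
PQpipelineSync(PGconn *conn)
{
    return PQsendPipelineSync(conn) && PQflush(conn) >= 0;
}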
{
"msg_contents": "Hello,\n\nThanks for the feedback!\n\nOn 07/11/2023 09:23, Jelte Fennema-Nio wrote:\n > But I think it's looking at the situation from the wrong direction. \n[...] we should look at it as an addition to our current list of PQsend \nfunctions for a new packet type. And none of those PQsend functions ever \nneeded a flag. Which makes sense, because they are the lowest level \nbuilding blocks that make sense from a user perspective: They send a \nmessage type over the socket and don't do anything else.\n\nYes, I think that this is quite close to my thinking when I created the \noriginal version of the patch. Also, the protocol specification states \nthat the Sync message lacks parameters.\n\nSince there haven't been any comments from the other people who have \nchimed in on this e-mail thread, I will assume that there is consensus \n(we are doing a U-turn with the implementation approach after all), so \nhere is the updated version of the patch.\n\nBest wishes,\nAnton Kirilov",
"msg_date": "Sun, 12 Nov 2023 13:37:16 +0000",
"msg_from": "Anton Kirilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "Hi,\n\nI've played a bit with the patch on my side. One thing that would be\ngreat would be to make this\navailable in pgbench through a \\syncpipeline meta command. That would\nmake it easier for users\nto test whether there's a positive impact with their queries or not.\n\nI've wrote a patch to add it to pgbench (don't want to mess with the\nthread's attachment so here's a GH link\nhttps://github.com/bonnefoa/postgres/commit/047b5b05169e36361fe29fef9f430da045ef012d).\nHere's some quick results:\n\necho \"\\set aid1 random(1, 100000 * :scale)\n\\set aid2 random(1, 100000 * :scale)\n\\startpipeline\nselect 1;\nselect * from pgbench_accounts where aid=:aid1;\nselect 2;\n\\syncpipeline\nselect 1;\nselect * from pgbench_accounts where aid=:aid2;\nselect 2;\n\\endpipeline\" > /tmp/pipeline_without_flush.sql\npgbench -T30 -Mextended -f /tmp/pipeline_without_flush.sql -h127.0.0.1\nlatency average = 0.383 ms\ninitial connection time = 2.810 ms\ntps = 2607.587877 (without initial connection time)\n\necho \"\\set aid1 random(1, 100000 * :scale)\n\\set aid2 random(1, 100000 * :scale)\n\\startpipeline\nselect 1;\nselect * from pgbench_accounts where aid=:aid1;\nselect 2;\n\\endpipeline\n\\startpipeline\nselect 1;\nselect * from pgbench_accounts where aid=:aid2;\nselect 2;\n\\endpipeline\" > /tmp/pipeline_with_flush.sql\npgbench -T30 -Mextended -f /tmp/pipeline_with_flush.sql -h127.0.0.1\nlatency average = 0.437 ms\ninitial connection time = 2.602 ms\ntps = 2290.527462 (without initial connection time)\n\nI took some perfs and the main change is from the server spending less time in\nReadCommand which makes sense since the commands are sent in a single tcp\nframe with the \\syncpipeline version.\n\nRegards,\nAnthonin\n\nOn Sun, Nov 12, 2023 at 2:37 PM Anton Kirilov <[email protected]> wrote:\n>\n> Hello,\n>\n> Thanks for the feedback!\n>\n> On 07/11/2023 09:23, Jelte Fennema-Nio wrote:\n> > But I think it's looking at the situation from the wrong direction.\n> [...] we should look at it as an addition to our current list of PQsend\n> functions for a new packet type. And none of those PQsend functions ever\n> needed a flag. Which makes sense, because they are the lowest level\n> building blocks that make sense from a user perspective: They send a\n> message type over the socket and don't do anything else.\n>\n> Yes, I think that this is quite close to my thinking when I created the\n> original version of the patch. Also, the protocol specification states\n> that the Sync message lacks parameters.\n>\n> Since there haven't been any comments from the other people who have\n> chimed in on this e-mail thread, I will assume that there is consensus\n> (we are doing a U-turn with the implementation approach after all), so\n> here is the updated version of the patch.\n>\n> Best wishes,\n> Anton Kirilov\n\n\n",
"msg_date": "Mon, 13 Nov 2023 09:19:52 +0100",
"msg_from": "Anthonin Bonnefoy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Sun, 12 Nov 2023 at 14:37, Anton Kirilov <[email protected]> wrote:\n> Since there haven't been any comments from the other people who have\n> chimed in on this e-mail thread, I will assume that there is consensus\n> (we are doing a U-turn with the implementation approach after all), so\n> here is the updated version of the patch.\n\nThe new patch looks great to me. And indeed consensus seems to have\nbeen reached on the approach and that this patch is useful. So I'm\ntaking the liberty of marking this patch as Ready for Committer.\n\n\n",
"msg_date": "Fri, 29 Dec 2023 12:49:29 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Mon, 13 Nov 2023 at 09:20, Anthonin Bonnefoy\n<[email protected]> wrote:\n> \\syncpipeline\n> tps = 2607.587877 (without initial connection time)\n> ...\n> \\endpipeline\n> \\startpipeline\n> tps = 2290.527462 (without initial connection time)\n\nThose are some nice improvements. And I think once this patch is in,\nit would make sense to add the pgbench feature you're suggesting.\nBecause indeed that makes it see what perf improvements can be gained\nfor your workload.\n\n\n",
"msg_date": "Fri, 29 Dec 2023 12:52:30 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 12:52:30PM +0100, Jelte Fennema-Nio wrote:\n> On Mon, 13 Nov 2023 at 09:20, Anthonin Bonnefoy\n> <[email protected]> wrote:\n> > \\syncpipeline\n> > tps = 2607.587877 (without initial connection time)\n> > ...\n> > \\endpipeline\n> > \\startpipeline\n> > tps = 2290.527462 (without initial connection time)\n> \n> Those are some nice improvements. And I think once this patch is in,\n> it would make sense to add the pgbench feature you're suggesting.\n> Because indeed that makes it see what perf improvements can be gained\n> for your workload.\n\nYeah, that sounds like a good idea seen from here. (Still need to\nlook at the core patch.)\n--\nMichael",
"msg_date": "Sun, 31 Dec 2023 09:37:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Sun, Dec 31, 2023 at 09:37:31AM +0900, Michael Paquier wrote:\n> On Fri, Dec 29, 2023 at 12:52:30PM +0100, Jelte Fennema-Nio wrote:\n>> Those are some nice improvements. And I think once this patch is in,\n>> it would make sense to add the pgbench feature you're suggesting.\n>> Because indeed that makes it see what perf improvements can be gained\n>> for your workload.\n> \n> Yeah, that sounds like a good idea seen from here. (Still need to\n> look at the core patch.)\n\n PQpipelineSync(PGconn *conn)\n+{\n+\treturn PQsendPipelineSync(conn) && pqFlush(conn) >= 0;\n+}\n[...]\n+\t * Give the data a push if we're past the size threshold. In nonblock\n+\t * mode, don't complain if we're unable to send it all; the caller is\n+\t * expected to execute PQflush() at some point anyway.\n \t */\n-\tif (PQflush(conn) < 0)\n+\tif (pqPipelineFlush(conn) < 0)\n \t\tgoto sendFailed;\n\nI was looking at this patch, and calling PQpipelineSync() would now\ncause two calls of PQflush() to be issued when the output buffer\nthreshold has been reached. Could that lead to regressions?\n\nA second thing I find disturbing is that pqAppendCmdQueueEntry() would\nbe called before the final pqFlush(), which could cause the commands\nto be listed in a queue even if the flush fails when calling\nPQpipelineSync().\n\nHence, as a whole, wouldn't it be more consistent if the new\nPQsendPipelineSync() and the existing PQpipelineSync() call an\ninternal static routine (PQPipelineSyncInternal?) that can switch\nbetween both modes? Let's just make the extra argument a boolean.\n--\nMichael",
"msg_date": "Wed, 10 Jan 2024 15:40:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
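The internal routine suggested above would leave the two entry points identical on the wire except for the flush behaviour. A rough sketch of that shape, assuming a name such as pqPipelineSyncInternal() and omitting the message-construction and error-handling details (the real code also has to queue a command entry for the Sync):

/*
 * Workhorse shared by PQpipelineSync and PQsendPipelineSync.  When
 * immediate_flush is true, the Sync message is followed by a flush of the
 * output buffer; otherwise only the size-threshold flush applies and the
 * caller is expected to flush (or send more work) later.
 */
static int
pqPipelineSyncInternal(PGconn *conn, bool immediate_flush)
{
    /* ... build and append the Sync message, queue the command entry ... */

    if (immediate_flush)
        return pqFlush(conn) >= 0;

    return pqPipelineFlush(conn) >= 0;  /* flushes only past the size threshold */
}

int
PQpipelineSync(PGconn *conn)
{
    return pqPipelineSyncInternal(conn, true);
}

int
PQsendPipelineSync(PGconn *conn)
{
    return pqPipelineSyncInternal(conn, false);
}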
{
"msg_contents": "On Wed, Jan 10, 2024 at 03:40:36PM +0900, Michael Paquier wrote:\n> Hence, as a whole, wouldn't it be more consistent if the new\n> PQsendPipelineSync() and the existing PQpipelineSync() call an\n> internal static routine (PQPipelineSyncInternal?) that can switch\n> between both modes? Let's just make the extra argument a boolean.\n\nYeah, I'll go with that after a second look. Attached is what I am\nfinishing with, and I have reproduced some numbers with the pgbench\nmetacommand mentioned upthread, which is reeeaaally nice.\n\nI have also made a few edits to the tests.\n--\nMichael",
"msg_date": "Mon, 15 Jan 2024 16:50:07 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Mon, 15 Jan 2024 at 08:50, Michael Paquier <[email protected]> wrote:\n> Yeah, I'll go with that after a second look. Attached is what I am\n> finishing with, and I have reproduced some numbers with the pgbench\n> metacommand mentioned upthread, which is reeeaaally nice.\n\nCode looks good to me. But one small notes on the test.\n\n+ /* second pipeline */\n+ if (PQsendQueryParams(conn, \"SELECT $1\", 1, dummy_param_oids,\n+ dummy_params, NULL, NULL, 0) != 1)\n+ pg_fatal(\"dispatching first SELECT failed: %s\",\nPQerrorMessage(conn));\n\nError message should be \"second SELECT\" not \"first SELECT\". Same note\nfor the error message in the third pipeline, where it still says\n\"second SELECT\".\n\n\n+ res = PQgetResult(conn);\n+ if (res == NULL)\n+ pg_fatal(\"PQgetResult returned null when there's a\npipeline item: %s\",\n+ PQerrorMessage(conn));\n+\n+ if (PQresultStatus(res) != PGRES_TUPLES_OK)\n+ pg_fatal(\"Unexpected result code %s from first pipeline item\",\n+ PQresStatus(PQresultStatus(res)));\n+ PQclear(res);\n+ res = NULL;\n+\n+ if (PQgetResult(conn) != NULL)\n+ pg_fatal(\"PQgetResult returned something extra after first result\");\n\nsame issue: s/first/second/g (and s/second/third/g for the existing\npart of the test).\n\n\n",
"msg_date": "Mon, 15 Jan 2024 10:01:59 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On 2024-Jan-15, Michael Paquier wrote:\n\nLooks good! Just some small notes,\n\n> +/*\n> + * Wrapper for PQpipelineSync and PQsendPipelineSync.\n> *\n> * It's legal to start submitting more commands in the pipeline immediately,\n> * without waiting for the results of the current pipeline. There's no need to\n\nthe new function pqPipelineSyncInternal is not a wrapper for these other\ntwo functions -- the opposite is true actually. We tend to use the term\n\"workhorse\" or \"internal workhorse\" for this kind of thing.\n\nIn the docs, after this patch we have\n\n- PQpipelineSync\n- PQsendFlushRequest\n- PQsendPipelineSync\n\nWouldn't it make more sense to add the new function in the middle of the\ntwo existing ones instead?\n\n\nLooking again at the largish comment that's now atop\npqPipelineSyncInternal(), I think most of it should be removed -- these\nthings should be explained in the SGML docs, and I think they are, in\nthe \"Using Pipeline Mode\" section. We can just have the lines this\npatch is adding.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I can't go to a restaurant and order food because I keep looking at the\nfonts on the menu. Five minutes later I realize that it's also talking\nabout food\" (Donald Knuth)\n\n\n",
"msg_date": "Mon, 15 Jan 2024 10:49:56 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Mon, Jan 15, 2024 at 10:01:59AM +0100, Jelte Fennema-Nio wrote:\n> Error message should be \"second SELECT\" not \"first SELECT\". Same note\n> for the error message in the third pipeline, where it still says\n> \"second SELECT\".\n>\n> same issue: s/first/second/g (and s/second/third/g for the existing\n> part of the test).\n\nUgh, yes. The note in the test was wrong. Thanks for\ndouble-checking.\n--\nMichael",
"msg_date": "Tue, 16 Jan 2024 08:28:25 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Mon, Jan 15, 2024 at 10:49:56AM +0100, Alvaro Herrera wrote:\n> the new function pqPipelineSyncInternal is not a wrapper for these other\n> two functions -- the opposite is true actually. We tend to use the term\n> \"workhorse\" or \"internal workhorse\" for this kind of thing.\n\nIndeed, makes sense.\n\n> In the docs, after this patch we have\n> \n> - PQpipelineSync\n> - PQsendFlushRequest\n> - PQsendPipelineSync\n> \n> Wouldn't it make more sense to add the new function in the middle of the\n> two existing ones instead?\n\nOrdering PQsendPipelineSync just after PQpipelineSync is OK by me.\nI've applied the patch with all these modifications to move on with\nthe subject.\n\n> Looking again at the largish comment that's now atop\n> pqPipelineSyncInternal(), I think most of it should be removed -- these\n> things should be explained in the SGML docs, and I think they are, in\n> the \"Using Pipeline Mode\" section. We can just have the lines this\n> patch is adding.\n\nHmm. The first two sentences about being able to submit more commands\nto the pipeline are documented in the subsection \"Issuing Queries\".\nThe third sentence is implied in the second paragraph of this\nsubsection. The 4th paragraph of the comment where sync commands\ncannot be issued until all the results from the pipeline have been\nconsumed is mentioned in the first paragraph in \"Using Pipeline Mode\".\nSo you are right that this could be entirely removed.\n\nHow about the attached to remove all that, then?\n--\nMichael",
"msg_date": "Tue, 16 Jan 2024 12:32:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On 2024-Jan-16, Michael Paquier wrote:\n\n> I've applied the patch with all these modifications to move on with\n> the subject.\n\nThanks!\n\n> On Mon, Jan 15, 2024 at 10:49:56AM +0100, Alvaro Herrera wrote:\n\n> > Looking again at the largish comment that's now atop\n> > pqPipelineSyncInternal(), I think most of it should be removed -- these\n> > things should be explained in the SGML docs, and I think they are, in\n> > the \"Using Pipeline Mode\" section. We can just have the lines this\n> > patch is adding.\n> \n> Hmm. The first two sentences about being able to submit more commands\n> to the pipeline are documented in the subsection \"Issuing Queries\".\n> The third sentence is implied in the second paragraph of this\n> subsection. The 4th paragraph of the comment where sync commands\n> cannot be issued until all the results from the pipeline have been\n> consumed is mentioned in the first paragraph in \"Using Pipeline Mode\".\n> So you are right that this could be entirely removed.\n\n(I'm pretty sure that the history of this comment is that Craig Ringer\nwrote it for his prototype patch, and then I took the various parts and\nstruggled to add them as SGML docs as it made logical sense. So if\nthere's anything in the comment that's important and not covered by the\ndocs, that would be a docs bug.) I agree with your findings.\n\n> How about the attached to remove all that, then?\n\nLooks good, thank you.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)\n\n\n",
"msg_date": "Tue, 16 Jan 2024 14:55:12 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 02:55:12PM +0100, Alvaro Herrera wrote:\n> On 2024-Jan-16, Michael Paquier wrote:\n>> How about the attached to remove all that, then?\n> \n> Looks good, thank you.\n\nThanks for double-checking. Done.\n--\nMichael",
"msg_date": "Wed, 17 Jan 2024 16:30:34 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
},
{
"msg_contents": "Hello,\n\nOn 17/01/2024 07:30, Michael Paquier wrote:\n> Thanks for double-checking. Done.\nThank you very much for taking care of my patch!\n\nOne thing that I noticed is that the TODO list on the PostgreSQL Wiki \nstill contained an entry ( https://wiki.postgresql.org/wiki/Todo#libpq ) \nabout adding pipelining support to libpq - perhaps it ought to be updated?\n\nBest wishes,\nAnton Kirilov\n\n\n",
"msg_date": "Thu, 18 Jan 2024 23:11:22 +0000",
"msg_from": "Anton Kirilov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add PQsendSyncMessage() to libpq"
}
] |
[
{
"msg_contents": "Hi,\n\nStarting with\n\ncommit 7db0cd2145f2bce84cac92402e205e4d2b045bf2\nAuthor: Tomas Vondra <[email protected]>\nDate: 2021-01-17 22:11:39 +0100\n\n Set PD_ALL_VISIBLE and visibility map bits in COPY FREEZE\n\nRelationGetBufferForTuple does\n\n\t/*\n\t * The page is empty, pin vmbuffer to set all_frozen bit.\n\t */\n\tif (options & HEAP_INSERT_FROZEN)\n\t{\n\t\tAssert(PageGetMaxOffsetNumber(BufferGetPage(buffer)) == 0);\n\t\tvisibilitymap_pin(relation, BufferGetBlockNumber(buffer), vmbuffer);\n\t}\n\nwhile holding a buffer lock. visibilitymap_pin() reads pages, if vmbuffer\ndoesn't already point to the right block.\n\n\nThe lock ordering rules are to lock VM pages *before* locking heap pages.\n\n\nI think the reason this hasn't yet bitten us badly, is that INSERT_FROZEN\neffectively requires that the relation is access exclusive locked. There\nshouldn't be other backends locking multiple buffers in the relation (bgwriter\n/ checkpointer can lock a single buffer at a time, but that's it).\n\n\nI see roughly two ways forward:\n\n1) We add a comment explaining why it's safe to violate lock ordering rules in\n this one situation\n\n2) Change relevant code so that we only return a valid vmbuffer if we could do\n so without blocking / IO and, obviously, skip updating the VM if we\n couldn't get the buffer.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 24 Mar 2023 19:57:40 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "hio.c does visibilitymap_pin()/IO while holding buffer lock"
},
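For readers following along, the ordering rule being violated is roughly "pin (and, if needed, read) the VM page before taking the content lock on the heap page". An illustrative contrast of the two orderings using the functions quoted above; note that with P_NEW the block number is only known after the buffer is already locked, which is exactly what makes the rule hard to follow here:

/* what the HEAP_INSERT_FROZEN path quoted above effectively does */
buffer = ReadBufferBI(relation, P_NEW, RBM_ZERO_AND_LOCK, bistate);
visibilitymap_pin(relation, BufferGetBlockNumber(buffer), vmbuffer);
/* may read a VM page from disk while the heap page is exclusively locked */

/* the usual ordering, possible only when the target block is already known */
visibilitymap_pin(relation, targetBlock, vmbuffer);     /* any I/O happens here */
buffer = ReadBufferExtended(relation, MAIN_FORKNUM, targetBlock, RBM_NORMAL,
                            bistate ? bistate->strategy : NULL);
LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);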
{
"msg_contents": "On 3/25/23 03:57, Andres Freund wrote:\n> Hi,\n> \n> Starting with\n> \n> commit 7db0cd2145f2bce84cac92402e205e4d2b045bf2\n> Author: Tomas Vondra <[email protected]>\n> Date: 2021-01-17 22:11:39 +0100\n> \n> Set PD_ALL_VISIBLE and visibility map bits in COPY FREEZE\n> \n\nThat's a bummer :-(\n\n> RelationGetBufferForTuple does\n> \n> \t/*\n> \t * The page is empty, pin vmbuffer to set all_frozen bit.\n> \t */\n> \tif (options & HEAP_INSERT_FROZEN)\n> \t{\n> \t\tAssert(PageGetMaxOffsetNumber(BufferGetPage(buffer)) == 0);\n> \t\tvisibilitymap_pin(relation, BufferGetBlockNumber(buffer), vmbuffer);\n> \t}\n> \n> while holding a buffer lock. visibilitymap_pin() reads pages, if vmbuffer\n> doesn't already point to the right block.\n> \n> \n> The lock ordering rules are to lock VM pages *before* locking heap pages.\n> \n> \n> I think the reason this hasn't yet bitten us badly, is that INSERT_FROZEN\n> effectively requires that the relation is access exclusive locked. There\n> shouldn't be other backends locking multiple buffers in the relation (bgwriter\n> / checkpointer can lock a single buffer at a time, but that's it).\n>\n\nRight. Still, it seems a bit fragile ...\n\n> \n> I see roughly two ways forward:\n> \n> 1) We add a comment explaining why it's safe to violate lock ordering rules in\n> this one situation\n> \n\nPossible, although I feel uneasy about just documenting a broken rule.\nWould be better to maintain the locking order.\n\n> 2) Change relevant code so that we only return a valid vmbuffer if we could do\n> so without blocking / IO and, obviously, skip updating the VM if we\n> couldn't get the buffer.\n> \n\nI don't recall the exact details about the vm locking/pinning, but can't\nwe just ensure we actually follow the proper locking order? I mean, this\nonly deals with new pages, requested at line ~624:\n\n buffer = ReadBufferBI(relation, P_NEW, RBM_ZERO_AND_LOCK, bistate);\n\nCan't we ensure we actually lock the vm buffer too in ReadBufferBI,\nbefore calling ReadBufferExtended? Or am I confused and that's not\npossible for some reason?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 25 Mar 2023 14:34:25 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hio.c does visibilitymap_pin()/IO while holding buffer lock"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-25 14:34:25 +0100, Tomas Vondra wrote:\n> On 3/25/23 03:57, Andres Freund wrote:\n> > 2) Change relevant code so that we only return a valid vmbuffer if we could do\n> > so without blocking / IO and, obviously, skip updating the VM if we\n> > couldn't get the buffer.\n> > \n> \n> I don't recall the exact details about the vm locking/pinning, but can't\n> we just ensure we actually follow the proper locking order? I mean, this\n> only deals with new pages, requested at line ~624:\n> \n> buffer = ReadBufferBI(relation, P_NEW, RBM_ZERO_AND_LOCK, bistate);\n> \n> Can't we ensure we actually lock the vm buffer too in ReadBufferBI,\n> before calling ReadBufferExtended? Or am I confused and that's not\n> possible for some reason?\n\nNote that this is using P_NEW. I.e. we don't know the buffer location yet.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 25 Mar 2023 09:39:03 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hio.c does visibilitymap_pin()/IO while holding buffer lock"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-03-25 14:34:25 +0100, Tomas Vondra wrote:\n>> Can't we ensure we actually lock the vm buffer too in ReadBufferBI,\n>> before calling ReadBufferExtended? Or am I confused and that's not\n>> possible for some reason?\n\n> Note that this is using P_NEW. I.e. we don't know the buffer location yet.\n\nMaybe the relation-extension logic needs to include the ability to get\nthe relevant vm page?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Mar 2023 12:57:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hio.c does visibilitymap_pin()/IO while holding buffer lock"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-25 12:57:17 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-03-25 14:34:25 +0100, Tomas Vondra wrote:\n> >> Can't we ensure we actually lock the vm buffer too in ReadBufferBI,\n> >> before calling ReadBufferExtended? Or am I confused and that's not\n> >> possible for some reason?\n> \n> > Note that this is using P_NEW. I.e. we don't know the buffer location yet.\n> \n> Maybe the relation-extension logic needs to include the ability to get\n> the relevant vm page?\n\nI don't see how that's easily possible with the current lock ordering\nrules. At least without giving up using RBM_ZERO_AND_LOCK for extending or\nstuffing even more things to happen with the the extension lock held, which I\ndon't think we want to. I don't think INSERT_FROZEN is worth that price.\n\nPerhaps we should just try to heuristically pin the right VM buffer before\ntrying to extend?\n\nThinking more about this, I think there's no inherent deadlock danger with\nreading the VM while holding a buffer lock, \"just\" an efficiency issue. If we\navoid needing to do IO nearly all the time, by trying to pin the right page\nbefore extending, it's probably good enough.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 25 Mar 2023 11:17:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hio.c does visibilitymap_pin()/IO while holding buffer lock"
},
{
"msg_contents": "On Sat, Mar 25, 2023 at 11:17 AM Andres Freund <[email protected]> wrote:\n> Thinking more about this, I think there's no inherent deadlock danger with\n> reading the VM while holding a buffer lock, \"just\" an efficiency issue. If we\n> avoid needing to do IO nearly all the time, by trying to pin the right page\n> before extending, it's probably good enough.\n\nUh, it was quite possible for lazy_vacuum_heap_page() to do that up\nuntil very recently (it was fixed by my commit 980ae17310). Since it\nwould call visibilitymap_get_status() with an exclusive buffer lock on\nthe heap page, which sometimes had to change the VM page. It\npotentially did an IO at that point, to read in a later VM page to the\ncaller's initially-pinned one.\n\nIn other words, up until recently there was a strange idiom used by\nlazy_vacuum_heap_page/lazy_vacuum_heap_rel, where we'd abuse\nvisibilitymap_get_status() as a replacement for calling\nvisibilitymap_pin() right before acquire a heap page buffer lock. But\nnow the second heap pass does it the same way as the first heap pass.\n(Even still, I have no reason to believe that the previous approach\nwas all that bad; it was just a bit ugly.)\n\nThere are still a few visibilitymap_get_status()-with-buffer-lock\ncalls in vacuumlazy.c, FWIW, but they don't run the risk of needing to\nchange the vmbuffer we have pinned with the heap page buffer lock\nheld.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 25 Mar 2023 11:28:34 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hio.c does visibilitymap_pin()/IO while holding buffer lock"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-25 11:17:07 -0700, Andres Freund wrote:\n> I don't see how that's easily possible with the current lock ordering\n> rules. At least without giving up using RBM_ZERO_AND_LOCK for extending or\n> stuffing even more things to happen with the the extension lock held, which I\n> don't think we want to. I don't think INSERT_FROZEN is worth that price.\n\nI think I might have been thinking of this too narrowly. It's extremely\nunlikely that another backend would discover the page. And we can use\nvisibilitymap_pin_ok() to amortize the cost to almost nothing - there's a lot\nof bits in an 8k block...\n\n\nHere's a draft patch.\n\n\nThe bulk relation patch I am polishing has a similar issue, except that there\nthe problem is inserting into the FSM, instead of pinning a VM pageabout the\nFSM. Hence the patch above makes the infrastructure a bit more general than\nrequired for the HEAP_INSERT_FROZEN case alone (where we currently shouldn't\never have a valid otherBuffer).\n\n\nThe way the parameter ordering for GetVisibilityMapPins() works make it\nsomewhat unwieldy - see e.g the existing\n\t\tif (otherBuffer == InvalidBuffer || targetBlock <= otherBlock)\n\t\t\tGetVisibilityMapPins(relation, buffer, otherBuffer,\n\t\t\t\t\t\t\t\t targetBlock, otherBlock, vmbuffer,\n\t\t\t\t\t\t\t\t vmbuffer_other);\n\t\telse\n\t\t\tGetVisibilityMapPins(relation, otherBuffer, buffer,\n\t\t\t\t\t\t\t\t otherBlock, targetBlock, vmbuffer_other,\n\t\t\t\t\t\t\t\t vmbuffer);\n\nWhich I now duplicated in yet another place.\n\nPerhaps we just ought to switch buffer1/block1 with buffer2/block2 inside\nGetVisibilityMapPins(), to avoid duplicating that code elsewhere?\n\n\nBecause we now track whether the *targetBuffer* was ever unlocked, we can be a\nbit more narrow about the possibility of there not being sufficient space.\n\n\nThe patch could be narrowed for backpatching. But as there's likely no\npractical problem at this point, I wouldn't want to backpatch anyway?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 28 Mar 2023 18:21:02 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hio.c does visibilitymap_pin()/IO while holding buffer lock"
},
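A rough sketch of the pattern the draft patch builds on: only give up the heap-page lock when the currently pinned VM buffer does not already cover the target block, which visibilitymap_pin_ok() makes cheap to detect. Names follow the hio.c code quoted in this thread; the actual patch wraps this in GetVisibilityMapPins() and handles two heap buffers at once:

if ((options & HEAP_INSERT_FROZEN) &&
    !visibilitymap_pin_ok(targetBlock, *vmbuffer))
{
    /* wrong (or no) VM page pinned: release the lock, do the I/O, relock */
    LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
    visibilitymap_pin(relation, targetBlock, vmbuffer);
    LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);

    /*
     * While the page was unlocked, another backend may have used space on
     * it, so the caller must recheck free space before inserting.
     */
    unlockedTargetBuffer = true;
}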
{
"msg_contents": "Hi,\n\nOn 2023-03-28 18:21:02 -0700, Andres Freund wrote:\n> Here's a draft patch.\n\nAttached is v2, with a stupid bug fixed and a bit of comment / pgindent\npolish.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 28 Mar 2023 19:17:21 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hio.c does visibilitymap_pin()/IO while holding buffer lock"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-28 19:17:21 -0700, Andres Freund wrote:\n> On 2023-03-28 18:21:02 -0700, Andres Freund wrote:\n> > Here's a draft patch.\n> \n> Attached is v2, with a stupid bug fixed and a bit of comment / pgindent\n> polish.\n\nI'd welcome some review (Tomas?), but otherwise I'm planning to push ahead\nwith this.\n\nI'm still debating with myself whether this commit (or a prerequisite commit)\nshould move logic dealing with the buffer ordering into\nGetVisibilityMapPins(), so we don't need two blocks like this:\n\n\n\t\tif (otherBuffer == InvalidBuffer || targetBlock <= otherBlock)\n\t\t\tGetVisibilityMapPins(relation, buffer, otherBuffer,\n\t\t\t\t\t\t\t\t targetBlock, otherBlock, vmbuffer,\n\t\t\t\t\t\t\t\t vmbuffer_other);\n\t\telse\n\t\t\tGetVisibilityMapPins(relation, otherBuffer, buffer,\n\t\t\t\t\t\t\t\t otherBlock, targetBlock, vmbuffer_other,\n\t\t\t\t\t\t\t\t vmbuffer);\n...\n\n\t\tif (otherBuffer != InvalidBuffer)\n\t\t{\n\t\t\tif (GetVisibilityMapPins(relation, otherBuffer, buffer,\n\t\t\t\t\t\t\t\t\t otherBlock, targetBlock, vmbuffer_other,\n\t\t\t\t\t\t\t\t\t vmbuffer))\n\t\t\t\tunlockedTargetBuffer = true;\n\t\t}\n\t\telse\n\t\t{\n\t\t\tif (GetVisibilityMapPins(relation, buffer, InvalidBuffer,\n\t\t\t\t\t\t\t\t\t targetBlock, InvalidBlockNumber,\n\t\t\t\t\t\t\t\t\t vmbuffer, InvalidBuffer))\n\t\t\t\tunlockedTargetBuffer = true;\n\t\t}\n\t}\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Apr 2023 15:40:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hio.c does visibilitymap_pin()/IO while holding buffer lock"
},
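One way to fold the ordering decision into GetVisibilityMapPins() itself, as floated above, is to normalize the argument order at the top of the function so every caller can pass (target, other) directly. A hypothetical sketch; the body of the real function (the unlock/pin/relock loop) is elided:

static bool
GetVisibilityMapPins(Relation relation, Buffer buffer1, Buffer buffer2,
                     BlockNumber block1, BlockNumber block2,
                     Buffer *vmbuffer1, Buffer *vmbuffer2)
{
    /*
     * Normalize so that buffer1/block1 refers to the lower-numbered block
     * (or the only valid buffer), matching the locking order used below.
     */
    if (BufferIsValid(buffer2) && block2 < block1)
    {
        Buffer      tmpbuf = buffer1;
        BlockNumber tmpblk = block1;
        Buffer     *tmpvm = vmbuffer1;

        buffer1 = buffer2;
        block1 = block2;
        vmbuffer1 = vmbuffer2;
        buffer2 = tmpbuf;
        block2 = tmpblk;
        vmbuffer2 = tmpvm;
    }

    /* ... existing pin/unlock/relock logic, unchanged ... */
    return false;               /* placeholder return for this sketch */
}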
{
"msg_contents": "On 4/3/23 00:40, Andres Freund wrote:\n> Hi,\n> \n> On 2023-03-28 19:17:21 -0700, Andres Freund wrote:\n>> On 2023-03-28 18:21:02 -0700, Andres Freund wrote:\n>>> Here's a draft patch.\n>>\n>> Attached is v2, with a stupid bug fixed and a bit of comment / pgindent\n>> polish.\n> \n> I'd welcome some review (Tomas?), but otherwise I'm planning to push ahead\n> with this.\n> \n\nI guess the 0001 part was already pushed, so I should be looking only at\n0002, correct?\n\nI think 0002 makes RelationGetBufferForTuple() harder to understand. I'm\nnot saying it's incorrect, but I find it hard to reason about the new\ncombinations of conditions :-(\n\nI mean, it only had this condition:\n\n if (otherBuffer != InvalidBuffer)\n {\n ...\n }\n\nbut now it have\n\n if (unlockedTargetBuffer)\n {\n ...\n }\n else if (otherBuffer != InvalidBuffer)\n {\n ...\n }\n\n if (unlockedTargetBuffer || otherBuffer != InvalidBuffer)\n {\n ...\n }\n\nNot sure how to improve that :-/ but not exactly trivial to figure out\nwhat's going to happen.\n\nMaybe this\n\n * If we unlocked the target buffer above, it's unlikely, but possible,\n * that another backend used space on this page.\n\nmight say what we're going to do in this case. I mean - I understand\nsome backend may use space in unlocked page, but what does that mean for\nthis code? What is it going to do? (The same comment talks about the\nnext condition in much more detail, for example.)\n\nAlso, isn't it a bit strange the free space check now happens outside\nany if condition? It used to happen in the\n\n if (otherBuffer != InvalidBuffer)\n {\n ...\n }\n\nblock, but now it happens outside.\n\n> I'm still debating with myself whether this commit (or a prerequisite commit)\n> should move logic dealing with the buffer ordering into\n> GetVisibilityMapPins(), so we don't need two blocks like this:\n> \n> \n> \t\tif (otherBuffer == InvalidBuffer || targetBlock <= otherBlock)\n> \t\t\tGetVisibilityMapPins(relation, buffer, otherBuffer,\n> \t\t\t\t\t\t\t\t targetBlock, otherBlock, vmbuffer,\n> \t\t\t\t\t\t\t\t vmbuffer_other);\n> \t\telse\n> \t\t\tGetVisibilityMapPins(relation, otherBuffer, buffer,\n> \t\t\t\t\t\t\t\t otherBlock, targetBlock, vmbuffer_other,\n> \t\t\t\t\t\t\t\t vmbuffer);\n> ...\n> \n> \t\tif (otherBuffer != InvalidBuffer)\n> \t\t{\n> \t\t\tif (GetVisibilityMapPins(relation, otherBuffer, buffer,\n> \t\t\t\t\t\t\t\t\t otherBlock, targetBlock, vmbuffer_other,\n> \t\t\t\t\t\t\t\t\t vmbuffer))\n> \t\t\t\tunlockedTargetBuffer = true;\n> \t\t}\n> \t\telse\n> \t\t{\n> \t\t\tif (GetVisibilityMapPins(relation, buffer, InvalidBuffer,\n> \t\t\t\t\t\t\t\t\t targetBlock, InvalidBlockNumber,\n> \t\t\t\t\t\t\t\t\t vmbuffer, InvalidBuffer))\n> \t\t\t\tunlockedTargetBuffer = true;\n> \t\t}\n> \t}\n> \n\nYeah. I haven't tried, but I imagine it'd make RelationGetBufferForTuple\na little bit.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 3 Apr 2023 14:25:59 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hio.c does visibilitymap_pin()/IO while holding buffer lock"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-03 14:25:59 +0200, Tomas Vondra wrote:\n> On 4/3/23 00:40, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2023-03-28 19:17:21 -0700, Andres Freund wrote:\n> >> On 2023-03-28 18:21:02 -0700, Andres Freund wrote:\n> >>> Here's a draft patch.\n> >>\n> >> Attached is v2, with a stupid bug fixed and a bit of comment / pgindent\n> >> polish.\n> >\n> > I'd welcome some review (Tomas?), but otherwise I'm planning to push ahead\n> > with this.\n> >\n>\n> I guess the 0001 part was already pushed, so I should be looking only at\n> 0002, correct?\n\nYes.\n\n\n> I think 0002 makes RelationGetBufferForTuple() harder to understand. I'm\n> not saying it's incorrect, but I find it hard to reason about the new\n> combinations of conditions :-(\n\n> I mean, it only had this condition:\n>\n> if (otherBuffer != InvalidBuffer)\n> {\n> ...\n> }\n>\n> but now it have\n>\n> if (unlockedTargetBuffer)\n> {\n> ...\n> }\n> else if (otherBuffer != InvalidBuffer)\n> {\n> ...\n> }\n>\n> if (unlockedTargetBuffer || otherBuffer != InvalidBuffer)\n> {\n> ...\n> }\n>\n> Not sure how to improve that :-/ but not exactly trivial to figure out\n> what's going to happen.\n\nIt's not great, I agree. I tried to make it easier to read in this version by\na) changing GetVisibilityMapPins() as I proposed\nb) added a new variable \"recheckVmPins\", that gets set in\n if (unlockedTargetBuffer)\n and\n if (otherBuffer != InvalidBuffer)\nc) reformulated comments\n\n\n> Maybe this\n>\n> * If we unlocked the target buffer above, it's unlikely, but possible,\n> * that another backend used space on this page.\n>\n> might say what we're going to do in this case. I mean - I understand\n> some backend may use space in unlocked page, but what does that mean for\n> this code? What is it going to do? (The same comment talks about the\n> next condition in much more detail, for example.)\n\nThere's a comment about that detail further below. But you're right, it wasn't\nclear as-is. How about now?\n\n\n> Also, isn't it a bit strange the free space check now happens outside\n> any if condition? It used to happen in the\n>\n> if (otherBuffer != InvalidBuffer)\n> {\n> ...\n> }\n>\n> block, but now it happens outside.\n\nWell, the alternative is to repeat it in the two branches, which doesn't seem\ngreat either. Particularly because there'll be a third branch after the bulk\nextension patch.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 3 Apr 2023 12:00:30 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hio.c does visibilitymap_pin()/IO while holding buffer lock"
},
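The recheckVmPins flag described above can be sketched roughly like this (variable and function names as used earlier in the thread; buffer ordering and error handling are omitted):

bool        recheckVmPins = false;

if (unlockedTargetBuffer)
{
    /* the target buffer was unlocked at some point: VM pins may be stale */
    recheckVmPins = true;
}
else if (otherBuffer != InvalidBuffer)
{
    /* we (re)locked a second buffer; its VM page may still be unpinned */
    recheckVmPins = true;
}

if (recheckVmPins)
{
    if (GetVisibilityMapPins(relation, otherBuffer, buffer,
                             otherBlock, targetBlock,
                             vmbuffer_other, vmbuffer))
        unlockedTargetBuffer = true;
}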
{
"msg_contents": "Hi,\n\nOn 2023-04-03 12:00:30 -0700, Andres Freund wrote:\n> It's not great, I agree. I tried to make it easier to read in this version by\n> a) changing GetVisibilityMapPins() as I proposed\n> b) added a new variable \"recheckVmPins\", that gets set in\n> if (unlockedTargetBuffer)\n> and\n> if (otherBuffer != InvalidBuffer)\n> c) reformulated comments\n\nI pushed this version a couple hours ago, after a bit more polishing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Apr 2023 18:03:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hio.c does visibilitymap_pin()/IO while holding buffer lock"
}
] |
[
{
"msg_contents": "I happened to notice a constant-TRUE clause with is_pushed_down being\ntrue while its required_relids not including the OJ being formed, which\nseems abnormal to me. It turns out that this clause comes from\nreconsider_outer_join_clauses(), as a dummy replacement if we've\ngenerated a derived clause. The comment explains this as\n\n* If we do generate a derived clause,\n* however, the outer-join clause is redundant. We must still put some\n* clause into the regular processing, because otherwise the join will be\n* seen as a clauseless join and avoided during join order searching.\n* We handle this by generating a constant-TRUE clause that is marked with\n* required_relids that make it a join between the correct relations.\n\nShould we instead mark the constant-TRUE clause with required_relids\nplus the OJ relid?\n\nBesides, I think 'otherwise the join will be seen as a clauseless join'\nis not necessarily true, because the join may have other join clauses\nthat do not have any match. As an example, consider\n\n select * from a left join b on a.i = b.i and a.j = b.j where a.i = 2;\n\nSo should we use 'may' rather than 'will' here?\n\nEven if the join does become clauseless, it will end up being an\nunqualified nestloop. I think the join ordering algorithm will force\nthis join to be formed when necessary. So I begin to wonder if it's\nreally necessary to generate this dummy constant-TRUE clause.\n\nThanks\nRichard\n\nI happened to notice a constant-TRUE clause with is_pushed_down beingtrue while its required_relids not including the OJ being formed, whichseems abnormal to me. It turns out that this clause comes fromreconsider_outer_join_clauses(), as a dummy replacement if we'vegenerated a derived clause. The comment explains this as* If we do generate a derived clause,* however, the outer-join clause is redundant. We must still put some* clause into the regular processing, because otherwise the join will be* seen as a clauseless join and avoided during join order searching.* We handle this by generating a constant-TRUE clause that is marked with* required_relids that make it a join between the correct relations.Should we instead mark the constant-TRUE clause with required_relidsplus the OJ relid?Besides, I think 'otherwise the join will be seen as a clauseless join'is not necessarily true, because the join may have other join clausesthat do not have any match. As an example, consider select * from a left join b on a.i = b.i and a.j = b.j where a.i = 2;So should we use 'may' rather than 'will' here?Even if the join does become clauseless, it will end up being anunqualified nestloop. I think the join ordering algorithm will forcethis join to be formed when necessary. So I begin to wonder if it'sreally necessary to generate this dummy constant-TRUE clause.ThanksRichard",
"msg_date": "Sat, 25 Mar 2023 16:13:48 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "About the constant-TRUE clause in reconsider_outer_join_clauses"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> Should we instead mark the constant-TRUE clause with required_relids\n> plus the OJ relid?\n\nI do not think it matters.\n\n> Even if the join does become clauseless, it will end up being an\n> unqualified nestloop. I think the join ordering algorithm will force\n> this join to be formed when necessary.\n\nWe would find *some* valid plan, but not necessarily a *good* plan.\nThe point of the dummy clause is to ensure that the join is considered\nas soon as possible. That might not be the ideal join order of course,\nbut we'll consider it among other join orders and arrive at a cost-based\ndecision. With no dummy clause, the join order heuristics would always\ndelay this join as long as possible; so even if another ordering is\nbetter, we'd not find it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Mar 2023 11:41:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About the constant-TRUE clause in reconsider_outer_join_clauses"
},
{
"msg_contents": "On Sat, Mar 25, 2023 at 11:41 PM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > Should we instead mark the constant-TRUE clause with required_relids\n> > plus the OJ relid?\n>\n> I do not think it matters.\n\n\nYeah, I agree that it makes no difference currently. One day if we want\nto replace the is_pushed_down flag with checking to see if a clause's\nrequired_relids includes the OJ being formed in order to tell whether\nit's a filter or join clause, I think we'd need to make this change.\n\n\n>\n> > Even if the join does become clauseless, it will end up being an\n> > unqualified nestloop. I think the join ordering algorithm will force\n> > this join to be formed when necessary.\n>\n> We would find *some* valid plan, but not necessarily a *good* plan.\n> The point of the dummy clause is to ensure that the join is considered\n> as soon as possible. That might not be the ideal join order of course,\n> but we'll consider it among other join orders and arrive at a cost-based\n> decision. With no dummy clause, the join order heuristics would always\n> delay this join as long as possible; so even if another ordering is\n> better, we'd not find it.\n\n\nI understand it now. Thanks for the explanation.\n\nThanks\nRichard\n\nOn Sat, Mar 25, 2023 at 11:41 PM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> Should we instead mark the constant-TRUE clause with required_relids\n> plus the OJ relid?\n\nI do not think it matters.Yeah, I agree that it makes no difference currently. One day if we wantto replace the is_pushed_down flag with checking to see if a clause'srequired_relids includes the OJ being formed in order to tell whetherit's a filter or join clause, I think we'd need to make this change. \n\n> Even if the join does become clauseless, it will end up being an\n> unqualified nestloop. I think the join ordering algorithm will force\n> this join to be formed when necessary.\n\nWe would find *some* valid plan, but not necessarily a *good* plan.\nThe point of the dummy clause is to ensure that the join is considered\nas soon as possible. That might not be the ideal join order of course,\nbut we'll consider it among other join orders and arrive at a cost-based\ndecision. With no dummy clause, the join order heuristics would always\ndelay this join as long as possible; so even if another ordering is\nbetter, we'd not find it.I understand it now. Thanks for the explanation.ThanksRichard",
"msg_date": "Mon, 27 Mar 2023 10:57:59 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: About the constant-TRUE clause in reconsider_outer_join_clauses"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> On Sat, Mar 25, 2023 at 11:41 PM Tom Lane <[email protected]> wrote:\n>> Richard Guo <[email protected]> writes:\n>>> Should we instead mark the constant-TRUE clause with required_relids\n>>> plus the OJ relid?\n\n>> I do not think it matters.\n\n> Yeah, I agree that it makes no difference currently. One day if we want\n> to replace the is_pushed_down flag with checking to see if a clause's\n> required_relids includes the OJ being formed in order to tell whether\n> it's a filter or join clause, I think we'd need to make this change.\n\nI did think about that ... but a constant-TRUE clause is going to be a\nno-op no matter which classification you give it. We do have some work to\ndo in that area, but I think it's not an issue for this particular case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Mar 2023 23:15:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About the constant-TRUE clause in reconsider_outer_join_clauses"
}
] |
[
{
"msg_contents": "Hi,\n\nThis small patch proposes the implementation of the standard SQL/XML \nfunction XMLText (X038). It basically converts a text parameter into an \nxml text node. It uses the libxml2 function xmlEncodeSpecialChars[1] to \nescape possible predefined entities.\n\nThis patch also contains documentation and regression tests.\n\nAny thoughts?\n\nBest, Jim\n\n1 - \nhttps://gnome.pages.gitlab.gnome.org/libxml2/devhelp/libxml2-entities.html#xmlEncodeSpecialChars",
"msg_date": "Sat, 25 Mar 2023 12:49:33 +0100",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add XMLText function (SQL/XML X038)"
},
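For readers who want to see the shape of such a function, here is a rough sketch of how the escaping could be wired up with xmlEncodeSpecialChars(); the helper names (xml_text2xmlChar, stringinfo_to_xmltype) are assumed from the existing code in src/backend/utils/adt/xml.c, and the actual patch may differ in detail:

Datum
xmltext(PG_FUNCTION_ARGS)
{
#ifdef USE_LIBXML
    text       *arg = PG_GETARG_TEXT_PP(0);
    StringInfoData buf;
    xmlChar    *escaped;

    /* escape the predefined entities (& < > and friends) */
    escaped = xmlEncodeSpecialChars(NULL, xml_text2xmlChar(arg));

    initStringInfo(&buf);
    appendStringInfoString(&buf, (char *) escaped);
    xmlFree(escaped);

    PG_RETURN_XML_P(stringinfo_to_xmltype(&buf));
#else
    NO_XML_SUPPORT();
    return 0;
#endif                          /* not USE_LIBXML */
}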
{
"msg_contents": "so 25. 3. 2023 v 12:49 odesílatel Jim Jones <[email protected]>\nnapsal:\n\n> Hi,\n>\n> This small patch proposes the implementation of the standard SQL/XML\n> function XMLText (X038). It basically converts a text parameter into an\n> xml text node. It uses the libxml2 function xmlEncodeSpecialChars[1] to\n> escape possible predefined entities.\n>\n> This patch also contains documentation and regression tests.\n>\n> Any thoughts?\n>\n\n+1\n\nPavel\n\n\n> Best, Jim\n>\n> 1 -\n>\n> https://gnome.pages.gitlab.gnome.org/libxml2/devhelp/libxml2-entities.html#xmlEncodeSpecialChars\n>\n\nso 25. 3. 2023 v 12:49 odesílatel Jim Jones <[email protected]> napsal:Hi,\n\nThis small patch proposes the implementation of the standard SQL/XML \nfunction XMLText (X038). It basically converts a text parameter into an \nxml text node. It uses the libxml2 function xmlEncodeSpecialChars[1] to \nescape possible predefined entities.\n\nThis patch also contains documentation and regression tests.\n\nAny thoughts?+1Pavel\n\nBest, Jim\n\n1 - \nhttps://gnome.pages.gitlab.gnome.org/libxml2/devhelp/libxml2-entities.html#xmlEncodeSpecialChars",
"msg_date": "Sat, 25 Mar 2023 12:53:10 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 25.03.23 12:53, Pavel Stehule wrote:\n>\n> so 25. 3. 2023 v 12:49 odesílatel Jim Jones \n> <[email protected]> napsal:\n>\n> Hi,\n>\n> This small patch proposes the implementation of the standard SQL/XML\n> function XMLText (X038). It basically converts a text parameter\n> into an\n> xml text node. It uses the libxml2 function\n> xmlEncodeSpecialChars[1] to\n> escape possible predefined entities.\n>\n> This patch also contains documentation and regression tests.\n>\n> Any thoughts?\n>\n>\n> +1\n>\n> Pavel\n\n\nThanks!\n\nI just realized that I forgot to add a few examples to my last message :D\n\npostgres=# SELECT xmltext('foo ´/[({bar?})]\\`');\n xmltext\n--------------------\n foo ´/[({bar?})]\\`\n(1 row)\n\npostgres=# SELECT xmltext('foo & <bar>');\n xmltext\n-----------------------\n foo & <bar>\n(1 row)\n\n\n\n\n\n\nOn 25.03.23\n 12:53, Pavel Stehule wrote:\n\n\n\n\n\n\nso\n 25. 3. 2023 v 12:49 odesílatel Jim Jones <[email protected]>\n napsal:\n\nHi,\n\n\n This small patch proposes the implementation of the\n standard SQL/XML \n function XMLText (X038). It basically converts a text\n parameter into an \n xml text node. It uses the libxml2 function\n xmlEncodeSpecialChars[1] to \n escape possible predefined entities.\n\n\n This patch also contains documentation and regression\n tests.\n\n\n Any thoughts?\n\n\n\n+1\n\n\nPavel\n\n\n\n\n\nThanks!\nI just realized that I forgot to add a few\n examples to my last message :D\npostgres=# SELECT xmltext('foo\n ´/[({bar?})]\\`');\n xmltext \n --------------------\n foo ´/[({bar?})]\\`\n (1 row)\n\n postgres=# SELECT xmltext('foo & <bar>');\n xmltext \n -----------------------\n foo & <bar>\n (1 row)",
"msg_date": "Sat, 25 Mar 2023 13:25:23 +0100",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 25.03.23 13:25, I wrote:\n> I just realized that I forgot to add a few examples to my last message :D\n>\n> postgres=# SELECT xmltext('foo ´/[({bar?})]\\`');\n> xmltext\n> --------------------\n> foo ´/[({bar?})]\\`\n> (1 row)\n>\n> postgres=# SELECT xmltext('foo & <bar>');\n> xmltext\n> -----------------------\n> foo & <bar>\n> (1 row)\n>\nIt seems that an encoding issue appears in the regression tests on \nDebian + Meson, 32 bit.\n\n´ > ´\n° > °\n\nv2 attached updates the regression tests to fix it.\n\nJim",
"msg_date": "Fri, 25 Aug 2023 10:27:08 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 3/25/23 12:49, Jim Jones wrote:\n> Hi,\n> \n> This small patch proposes the implementation of the standard SQL/XML \n> function XMLText (X038). It basically converts a text parameter into an \n> xml text node. It uses the libxml2 function xmlEncodeSpecialChars[1] to \n> escape possible predefined entities.\n> \n> This patch also contains documentation and regression tests.\n> \n> Any thoughts?\n\nI am replying to this email, but my comments are based on the v2 patch.\n\nThank you for working on this, and I think this is a valuable addition. \nHowever, I have two issues with it.\n\n1) There seems to be several spurious blank lines added that I do not \nthink are warranted.\n\n2) This patch does nothing to address the <XML returning clause> so we \ncan't claim to implement X038 without a disclaimer. Upon further \nreview, the same is true of XMLCOMMENT() so maybe that is okay for this \npatch, and a more comprehensive patch for our xml features is necessary.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 25 Aug 2023 12:05:15 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "Hi Vik\n\nThanks for reviewing my patch!\n\nOn 25.08.23 12:05, Vik Fearing wrote:\n> I am replying to this email, but my comments are based on the v2 patch.\n>\n> Thank you for working on this, and I think this is a valuable \n> addition. However, I have two issues with it.\n>\n> 1) There seems to be several spurious blank lines added that I do not \n> think are warranted.\n\nI tried to copy the aesthetics of other functions, but it seems I failed \n:) I removed a few blank lines. I hope it's fine now.\n\nIs there any tool like pgindent to take care of it automatically?\n\n>\n> 2) This patch does nothing to address the <XML returning clause> so we \n> can't claim to implement X038 without a disclaimer. Upon further \n> review, the same is true of XMLCOMMENT() so maybe that is okay for \n> this patch, and a more comprehensive patch for our xml features is \n> necessary.\n\nIf we decide to not address this point here, I can take a look at it and \nwork in a separated patch.\n\nv3 attached.\n\nThanks\n\nJim",
"msg_date": "Fri, 25 Aug 2023 14:42:35 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "> On 25 Aug 2023, at 14:42, Jim Jones <[email protected]> wrote:\n\n> Is there any tool like pgindent to take care of it automatically?\n\nNo, pgindent doesn't address whitespace, only indentation of non-whitespace.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 25 Aug 2023 14:44:50 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 8/25/23 14:42, Jim Jones wrote:\n> Hi Vik\n> \n> Thanks for reviewing my patch!\n\nThank you for writing it!\n\n> On 25.08.23 12:05, Vik Fearing wrote:\n>> I am replying to this email, but my comments are based on the v2 patch.\n>>\n>> Thank you for working on this, and I think this is a valuable \n>> addition. However, I have two issues with it.\n>>\n>> 1) There seems to be several spurious blank lines added that I do not \n>> think are warranted.\n> \n> I tried to copy the aesthetics of other functions, but it seems I failed \n> :) I removed a few blank lines. I hope it's fine now.\n\nI am talking specifically about this:\n\n@@ -505,6 +506,10 @@ xmlcomment(PG_FUNCTION_ARGS)\n \tappendStringInfoText(&buf, arg);\n \tappendStringInfoString(&buf, \"-->\");\n\n+\n+\n+\n+\n \tPG_RETURN_XML_P(stringinfo_to_xmltype(&buf));\n #else\n \tNO_XML_SUPPORT();\n\n\n>> 2) This patch does nothing to address the <XML returning clause> so we \n>> can't claim to implement X038 without a disclaimer. Upon further \n>> review, the same is true of XMLCOMMENT() so maybe that is okay for \n>> this patch, and a more comprehensive patch for our xml features is \n>> necessary.\n> \n> If we decide to not address this point here, I can take a look at it and \n> work in a separated patch.\n\nI do not think this should be addressed in this patch because there are \nquite a lot of functions that need to handle this.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 25 Aug 2023 16:49:50 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 25.08.23 16:49, Vik Fearing wrote:\n>\n> I am talking specifically about this:\n>\n> @@ -505,6 +506,10 @@ xmlcomment(PG_FUNCTION_ARGS)\n> appendStringInfoText(&buf, arg);\n> appendStringInfoString(&buf, \"-->\");\n>\n> +\n> +\n> +\n> +\n> PG_RETURN_XML_P(stringinfo_to_xmltype(&buf));\n> #else\n> NO_XML_SUPPORT();\n\nI have no idea how xmlcomment() got changed in this patch :D nice catch!\n\n>\n> I do not think this should be addressed in this patch because there \n> are quite a lot of functions that need to handle this.\n\nv4 attached.\n\nJim",
"msg_date": "Fri, 25 Aug 2023 17:40:28 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 2023-08-25 10:49, Vik Fearing wrote:\n> I do not think this should be addressed in this patch because\n> there are quite a lot of functions that need to handle this.\n\nIndeed, as described in [0], we still largely provide the SQL/XML:2003\nnotion of a single XML datatype, not the distinguishable XML(DOCUMENT),\nXML(CONTENT), XML(SEQUENCE) types from :2006 and later, which has a\nnumber of adverse consequences for developers[1], and that wiki page\nproposed a couple possible ways forward[2].\n\nRegards,\n-Chap\n\n\n[0] https://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL/XML_Standards\n[1] \nhttps://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL/XML_Standards#Obstacles_to_improving_conformance\n[2] \nhttps://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL/XML_Standards#Possible_ways_forward\n\n\n",
"msg_date": "Fri, 25 Aug 2023 11:56:40 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 8/25/23 17:56, Chapman Flack wrote:\n> [0] https://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL/XML_Standards\n\nI was not aware of this page. What a wealth of information!\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 25 Aug 2023 18:40:43 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 2023-Aug-25, Chapman Flack wrote:\n\n> On 2023-08-25 10:49, Vik Fearing wrote:\n> > I do not think this should be addressed in this patch because\n> > there are quite a lot of functions that need to handle this.\n> \n> Indeed, as described in [0], we still largely provide the SQL/XML:2003\n> notion of a single XML datatype, not the distinguishable XML(DOCUMENT),\n> XML(CONTENT), XML(SEQUENCE) types from :2006 and later, which has a\n> number of adverse consequences for developers[1], and that wiki page\n> proposed a couple possible ways forward[2].\n\nSadly, all the projects seem to have been pretty much abandoned in the\nmeantime. Zorba has been dead for 9 years, xqilla for 6. Even XQC, the\nAPI they claim to implement, is dead.\n\nIt sounds unlikely that there is *any* way forward here.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)\n\n\n",
"msg_date": "Sat, 26 Aug 2023 19:02:02 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 2023-08-26 13:02, Alvaro Herrera wrote:\n> Sadly, all the projects seem to have been pretty much abandoned in the\n> meantime. Zorba has been dead for 9 years, xqilla for 6. Even XQC, \n> the\n> API they claim to implement, is dead.\n\nSounds like bad news for the \"XQC as integration point\" proposal, \nanyway.\n\nSaxon 11.6 came out two days ago[0], supporting XPath/XQuery 3.1 etc.\n(12.3 came out last month, but 12 isn't considered the 'stable'\nrelease yet. It's working toward XSLT/XPath/XQuery 4.0.)\n\nRegards,\n-Chap\n\n[0] https://blog.saxonica.com/announcements/2023/08/saxon-11.6.html\n\n\n",
"msg_date": "Sat, 26 Aug 2023 15:23:17 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "Hi\n\nso 26. 8. 2023 v 21:23 odesílatel Chapman Flack <[email protected]>\nnapsal:\n\n> On 2023-08-26 13:02, Alvaro Herrera wrote:\n> > Sadly, all the projects seem to have been pretty much abandoned in the\n> > meantime. Zorba has been dead for 9 years, xqilla for 6. Even XQC,\n> > the\n> > API they claim to implement, is dead.\n>\n> Sounds like bad news for the \"XQC as integration point\" proposal,\n> anyway.\n>\n> Saxon 11.6 came out two days ago[0], supporting XPath/XQuery 3.1 etc.\n> (12.3 came out last month, but 12 isn't considered the 'stable'\n> release yet. It's working toward XSLT/XPath/XQuery 4.0.)\n>\n\nSaxon can be an interesting library, but nobody knows if integration with\nPostgres is possible. Their C implementation is Java compiled/executed\nby GraalV.\n\nRegards\n\nPavel\n\n\n> Regards,\n> -Chap\n>\n> [0] https://blog.saxonica.com/announcements/2023/08/saxon-11.6.html\n>\n>\n>\n\nHiso 26. 8. 2023 v 21:23 odesílatel Chapman Flack <[email protected]> napsal:On 2023-08-26 13:02, Alvaro Herrera wrote:\n> Sadly, all the projects seem to have been pretty much abandoned in the\n> meantime. Zorba has been dead for 9 years, xqilla for 6. Even XQC, \n> the\n> API they claim to implement, is dead.\n\nSounds like bad news for the \"XQC as integration point\" proposal, \nanyway.\n\nSaxon 11.6 came out two days ago[0], supporting XPath/XQuery 3.1 etc.\n(12.3 came out last month, but 12 isn't considered the 'stable'\nrelease yet. It's working toward XSLT/XPath/XQuery 4.0.)Saxon can be an interesting library, but nobody knows if integration with Postgres is possible. Their C implementation is Java compiled/executed by GraalV.RegardsPavel \n\nRegards,\n-Chap\n\n[0] https://blog.saxonica.com/announcements/2023/08/saxon-11.6.html",
"msg_date": "Sat, 26 Aug 2023 22:00:49 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 2023-08-26 16:00, Pavel Stehule wrote:\n> Saxon can be an interesting library, but nobody knows if integration \n> with\n> Postgres is possible. Their C implementation is Java compiled/executed\n> by GraalV.\n\nIndeed, such an integration would probably not be in core.\n\nOf the two possible-ways-forward described on that wiki page, the one\nthat didn't rely on the defunct XQC was one involving query rewriting.\nHave the parser understand the SQL/XML customized syntax, and define\na set of ordinary functions it will be rewritten into. (This idea is\nbolstered somewhat by the fact that many things in SQL/XML, XMLTABLE\nfor example, are /defined in the standard/ in terms of query rewriting\ninto calls on simpler functions.)\n\nThen let there be an extension, or ideally someday a choice of\nextensions, supplying those functions.\n\nAs to whether running Saxon in a Postgres extension is possible, that's\nbeen an example that ships with PL/Java since 1.5.1 five years ago.\n\nIt's too bad the other projects have stalled; it's good to have more\nthan one ready option. But Saxon shows no sign of going away.\n\nPerhaps the act of devising a standardized rewriting of queries\nonto a standardized set of loadable functions could be of interest\nto other DBMS projects as well. It's hard to imagine another DBMS\nnot being in the same boat (if it isn't from a rich commercial firm\nthat happens to have a modern XQuery implementation in-house).\n\nMaybe having that set of functions specified, with the prospect\nthat more than one DBMS might be interested in a project\nimplementing them, even inspires someone to go look at the\nxqilla or zorba repos to see how far they got, and pick up\nthe baton, and then there could be more than one option.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Sat, 26 Aug 2023 16:47:39 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "so 26. 8. 2023 v 22:47 odesílatel Chapman Flack <[email protected]>\nnapsal:\n\n> On 2023-08-26 16:00, Pavel Stehule wrote:\n> > Saxon can be an interesting library, but nobody knows if integration\n> > with\n> > Postgres is possible. Their C implementation is Java compiled/executed\n> > by GraalV.\n>\n> Indeed, such an integration would probably not be in core.\n>\n> Of the two possible-ways-forward described on that wiki page, the one\n> that didn't rely on the defunct XQC was one involving query rewriting.\n> Have the parser understand the SQL/XML customized syntax, and define\n> a set of ordinary functions it will be rewritten into. (This idea is\n> bolstered somewhat by the fact that many things in SQL/XML, XMLTABLE\n> for example, are /defined in the standard/ in terms of query rewriting\n> into calls on simpler functions.)\n>\n> Then let there be an extension, or ideally someday a choice of\n> extensions, supplying those functions.\n>\n> As to whether running Saxon in a Postgres extension is possible, that's\n> been an example that ships with PL/Java since 1.5.1 five years ago.\n>\n\nThe most simple \"solution\" can be the introduction of some new hooks there.\nThen you can write an extension that will call PL/Java functions\n\n\n>\n> It's too bad the other projects have stalled; it's good to have more\n> than one ready option. But Saxon shows no sign of going away.\n>\n> Perhaps the act of devising a standardized rewriting of queries\n> onto a standardized set of loadable functions could be of interest\n> to other DBMS projects as well. It's hard to imagine another DBMS\n> not being in the same boat (if it isn't from a rich commercial firm\n> that happens to have a modern XQuery implementation in-house).\n>\n> Maybe having that set of functions specified, with the prospect\n> that more than one DBMS might be interested in a project\n> implementing them, even inspires someone to go look at the\n> xqilla or zorba repos to see how far they got, and pick up\n> the baton, and then there could be more than one option.\n>\n\nAnother possibility is revitalization of libxml2.\n\nThere was an extension http://www.explain.com.au/libx/ But the code is not\navailable to download too, but extending libxml2 is feasible.\n\nI am not sure how valuable this work can be. Probably whoever really needs\nit uses some Java based solution already.\n\nRegards\n\nPavel\n\n\n\n\n> Regards,\n> -Chap\n>\n\nso 26. 8. 2023 v 22:47 odesílatel Chapman Flack <[email protected]> napsal:On 2023-08-26 16:00, Pavel Stehule wrote:\n> Saxon can be an interesting library, but nobody knows if integration \n> with\n> Postgres is possible. Their C implementation is Java compiled/executed\n> by GraalV.\n\nIndeed, such an integration would probably not be in core.\n\nOf the two possible-ways-forward described on that wiki page, the one\nthat didn't rely on the defunct XQC was one involving query rewriting.\nHave the parser understand the SQL/XML customized syntax, and define\na set of ordinary functions it will be rewritten into. 
(This idea is\nbolstered somewhat by the fact that many things in SQL/XML, XMLTABLE\nfor example, are /defined in the standard/ in terms of query rewriting\ninto calls on simpler functions.)\n\nThen let there be an extension, or ideally someday a choice of\nextensions, supplying those functions.\n\nAs to whether running Saxon in a Postgres extension is possible, that's\nbeen an example that ships with PL/Java since 1.5.1 five years ago.The most simple \"solution\" can be the introduction of some new hooks there. Then you can write an extension that will call PL/Java functions \n\nIt's too bad the other projects have stalled; it's good to have more\nthan one ready option. But Saxon shows no sign of going away.\n\nPerhaps the act of devising a standardized rewriting of queries\nonto a standardized set of loadable functions could be of interest\nto other DBMS projects as well. It's hard to imagine another DBMS\nnot being in the same boat (if it isn't from a rich commercial firm\nthat happens to have a modern XQuery implementation in-house).\n\nMaybe having that set of functions specified, with the prospect\nthat more than one DBMS might be interested in a project\nimplementing them, even inspires someone to go look at the\nxqilla or zorba repos to see how far they got, and pick up\nthe baton, and then there could be more than one option.Another possibility is revitalization of libxml2.There was an extension http://www.explain.com.au/libx/ But the code is not available to download too, but extending libxml2 is feasible. I am not sure how valuable this work can be. Probably whoever really needs it uses some Java based solution already. RegardsPavel \n\nRegards,\n-Chap",
"msg_date": "Sun, 27 Aug 2023 06:40:55 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "> On 25 Aug 2023, at 17:40, Jim Jones <[email protected]> wrote:\n> On 25.08.23 16:49, Vik Fearing wrote:\n\n>> I do not think this should be addressed in this patch because there are quite a lot of functions that need to handle this.\n> \n> v4 attached.\n\nI had a look at v4 of this patch and apart from pgindenting and moving the\nfunction within xml.c next to xmlcomment() I think this is ready.\n\nJust like Vik says upthread we can't really claim X038 conformance without a\ndisclaimer, so I've added a 0002 which adds this to the XML spec conformance\npage in the docs.\n\nThe attached v5 contains the above mentioned changes. I've marked this ready\nfor committer in the CF app as well.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 3 Nov 2023 16:30:17 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 11/3/23 16:30, Daniel Gustafsson wrote:\n>> On 25 Aug 2023, at 17:40, Jim Jones <[email protected]> wrote:\n> \n> Just like Vik says upthread we can't really claim X038 conformance without a\n> disclaimer, so I've added a 0002 which adds this to the XML spec conformance\n> page in the docs.\n\n\nWe should put a short version of the disclaimer in sql_features.txt as well.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 3 Nov 2023 16:45:03 +0100",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "Hi Daniel, hi Vik,\n\nThanks a lot for the review!\n\nOn 03.11.23 16:45, Vik Fearing wrote:\n> We should put a short version of the disclaimer in sql_features.txt as\n> well.\nYou mean to add a disclaimer in the X038 entry? Something along these\nlines perhaps?\n\nX038 XMLText YES It does not address the <literal><XML\nreturning clause></literal>, as it is not supported in\n<productname>PostgreSQL</productname>.\n\nJim\n\n\n",
"msg_date": "Fri, 3 Nov 2023 17:14:47 +0100",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 11/3/23 17:14, Jim Jones wrote:\n> Hi Daniel, hi Vik,\n> \n> Thanks a lot for the review!\n> \n> On 03.11.23 16:45, Vik Fearing wrote:\n>> We should put a short version of the disclaimer in sql_features.txt as\n>> well.\n> You mean to add a disclaimer in the X038 entry? Something along these\n> lines perhaps?\n> \n> X038 XMLText YES It does not address the <literal><XML\n> returning clause></literal>, as it is not supported in\n> <productname>PostgreSQL</productname>.\n\nI was thinking of something much shorter than that. Such as\n\n X038 XMLText YES supported except for RETURNING\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 3 Nov 2023 19:05:22 +0100",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 03.11.23 19:05, Vik Fearing wrote:\n> I was thinking of something much shorter than that. Such as\n>\n> X038 XMLText YES supported except for RETURNING\n\nv6 attached includes this change and the doc addition from Daniel.\n\nThanks!\n\n--\nJim",
"msg_date": "Fri, 3 Nov 2023 21:28:21 +0100",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "On 11/3/23 21:28, Jim Jones wrote:\n> \n> On 03.11.23 19:05, Vik Fearing wrote:\n>> I was thinking of something much shorter than that. Such as\n>>\n>> X038 XMLText YES supported except for RETURNING\n> \n> v6 attached includes this change and the doc addition from Daniel.\n\nThere are some typos in the commit message, but otherwise this looks \ncommitable to me.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sat, 4 Nov 2023 15:01:34 +0100",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "> On 4 Nov 2023, at 15:01, Vik Fearing <[email protected]> wrote:\n> \n> On 11/3/23 21:28, Jim Jones wrote:\n>> On 03.11.23 19:05, Vik Fearing wrote:\n>>> I was thinking of something much shorter than that. Such as\n>>> \n>>> X038 XMLText YES supported except for RETURNING\n>> v6 attached includes this change and the doc addition from Daniel.\n> \n> There are some typos in the commit message, but otherwise this looks commitable to me.\n\nI took another look at this today, fixes the above mentioned typos and some\ntiny cosmetic things and pushed it.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 6 Nov 2023 11:49:28 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
},
{
"msg_contents": "\nOn 06.11.23 11:49, Daniel Gustafsson wrote:\n> I took another look at this today, fixes the above mentioned typos and some\n> tiny cosmetic things and pushed it.\n>\n> --\n> Daniel Gustafsson\n>\nAwesome! Thanks Daniel and Vik for reviewing and pushing this patch :)\n\n-- \nJim\n\n\n\n",
"msg_date": "Mon, 6 Nov 2023 12:15:58 +0100",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add XMLText function (SQL/XML X038)"
}
] |
[
{
"msg_contents": "Hi,\n\n\nconfig/perl.m4 contains this:\n\n\n AC_MSG_CHECKING(for flags to link embedded Perl)\n if test \"$PORTNAME\" = \"win32\" ; then\n perl_lib=`basename $perl_archlibexp/CORE/perl[[5-9]]*.lib .lib`\n if test -e \"$perl_archlibexp/CORE/$perl_lib.lib\"; then\n perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n else\n perl_lib=`basename $perl_archlibexp/CORE/libperl[[5-9]]*.a .a | sed 's/^lib//'`\n if test -e \"$perl_archlibexp/CORE/lib$perl_lib.a\"; then\n perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n fi\n fi\n else\n pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts`\n pgac_tmp2=`$PERL -MConfig -e 'print \"$Config{ccdlflags} $Config{ldflags}\"'`\n perl_embed_ldflags=`echo X\"$pgac_tmp1\" | sed -e \"s/^X//\" -e \"s%$pgac_tmp2%%\"`\n fi\n AC_SUBST(perl_embed_ldflags)dnl\n\nI don't see any equivalent in meson.build of the win32 logic, and thus I \nam getting a setup failure on fairywren when trying to move it to meson, \nwhile it will happily build with autoconf.\n\nI would expect the ld flags to be \"-LC:/STRAWB~1/perl/lib/CORE -lperl532\"\n\n(Off topic peeve - one of the things I dislike about meson is that the \nmeson.build files are written in YA bespoke language).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\nHi, \n\n\n\nconfig/perl.m4 contains this:\n\n\n\nAC_MSG_CHECKING(for flags to link embedded Perl)\nif test \"$PORTNAME\" = \"win32\" ; then\n perl_lib=`basename $perl_archlibexp/CORE/perl[[5-9]]*.lib .lib`\n if test -e \"$perl_archlibexp/CORE/$perl_lib.lib\"; then\n perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n else\n perl_lib=`basename $perl_archlibexp/CORE/libperl[[5-9]]*.a .a | sed 's/^lib//'`\n if test -e \"$perl_archlibexp/CORE/lib$perl_lib.a\"; then\n perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n fi\n fi\nelse\n pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts`\n pgac_tmp2=`$PERL -MConfig -e 'print \"$Config{ccdlflags} $Config{ldflags}\"'`\n perl_embed_ldflags=`echo X\"$pgac_tmp1\" | sed -e \"s/^X//\" -e \"s%$pgac_tmp2%%\"`\nfi\nAC_SUBST(perl_embed_ldflags)dnl\n\n\n\nI don't see any equivalent in meson.build of the win32 logic, and\n thus I am getting a setup failure on fairywren when trying to move\n it to meson, while it will happily build with autoconf.\nI would expect the ld flags to be \"-LC:/STRAWB~1/perl/lib/CORE\n -lperl532\"\n\n(Off topic peeve - one of the things I dislike about meson is\n that the meson.build files are written in YA bespoke language).\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 25 Mar 2023 08:46:42 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "meson/msys2 fails with plperl/Strawberry"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-25 08:46:42 -0400, Andrew Dunstan wrote:\n> config/perl.m4 contains this:\n>\n>\n> AC_MSG_CHECKING(for flags to link embedded Perl)\n> if test \"$PORTNAME\" = \"win32\" ; then\n> perl_lib=`basename $perl_archlibexp/CORE/perl[[5-9]]*.lib .lib`\n> if test -e \"$perl_archlibexp/CORE/$perl_lib.lib\"; then\n> perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n> else\n> perl_lib=`basename $perl_archlibexp/CORE/libperl[[5-9]]*.a .a | sed 's/^lib//'`\n> if test -e \"$perl_archlibexp/CORE/lib$perl_lib.a\"; then\n> perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n> fi\n> fi\n> else\n> pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts`\n> pgac_tmp2=`$PERL -MConfig -e 'print \"$Config{ccdlflags} $Config{ldflags}\"'`\n> perl_embed_ldflags=`echo X\"$pgac_tmp1\" | sed -e \"s/^X//\" -e \"s%$pgac_tmp2%%\"`\n> fi\n> AC_SUBST(perl_embed_ldflags)dnl\n>\n> I don't see any equivalent in meson.build of the win32 logic, and thus I am\n> getting a setup failure on fairywren when trying to move it to meson, while\n> it will happily build with autoconf.\n\nI did not try to build with strawberry perl using mingw - it doesn't seem like\na very interesting thing, given that mingw has a much more reasonable perl\nthan strawberry - but with the mingw perl it works well.\n\n\nThe above logic actually did *not* work well with mingw for me, because the\nnames are not actually what configure expects, and it seems like a seriously\nbad idea to encode that much knowledge about library naming and locations.\n\nhttps://cirrus-ci.com/task/6421536551206912\n\n[16:32:28.997] Has header \"perl.h\" : YES\n[16:32:28.997] Message: CCFLAGS recommended by perl: -DWIN32 -DWIN64 -DPERL_TEXTMODE_SCRIPTS -DPERL_IMPLICIT_CONTEXT -DPERL_IMPLICIT_SYS -DUSE_PERLIO -D__USE_MINGW_ANSI_STDIO -fno-strict-aliasing -mms-bitfields\n[16:32:28.997] Message: CCFLAGS for embedding perl: -IC:\\msys64\\ucrt64\\lib\\perl5\\core_perl/CORE -DWIN32 -DWIN64 -DPERL_TEXTMODE_SCRIPTS -DPERL_IMPLICIT_CONTEXT -DPERL_IMPLICIT_SYS -DUSE_PERLIO -DPLPERL_HAVE_UID_GID\n[16:32:28.997] Message: LDFLAGS recommended by perl: \"-s -L\"C:\\msys64\\ucrt64\\lib\\perl5\\core_perl\\CORE\" -L\"C:\\msys64\\ucrt64\\lib\" \"C:\\msys64\\ucrt64\\lib\\perl5\\core_perl\\CORE\\libperl532.a\" \"C:\\msys64\\ucrt64\\lib\\libmoldname.a\" \"C:\\msys64\\ucrt64\\lib\\libkernel32.a\" \"C:\\msys64\\ucrt64\\lib\\libuser32.a\" \"C:\\msys64\\ucrt64\\lib\\libgdi32.a\" \"C:\\msys64\\ucrt64\\lib\\libwinspool.a\" \"C:\\msys64\\ucrt64\\lib\\libcomdlg32.a\" \"C:\\msys64\\ucrt64\\lib\\libadvapi32.a\" \"C:\\msys64\\ucrt64\\lib\\libshell32.a\" \"C:\\msys64\\ucrt64\\lib\\libole32.a\" \"C:\\msys64\\ucrt64\\lib\\liboleaut32.a\" \"C:\\msys64\\ucrt64\\lib\\libnetapi32.a\" \"C:\\msys64\\ucrt64\\lib\\libuuid.a\" \"C:\\msys64\\ucrt64\\lib\\libws2_32.a\" \"C:\\msys64\\ucrt64\\lib\\libmpr.a\" \"C:\\msys64\\ucrt64\\lib\\libwinmm.a\" \"C:\\msys64\\ucrt64\\lib\\libversion.a\" \"C:\\msys64\\ucrt64\\lib\\libodbc32.a\" \"C:\\msys64\\ucrt64\\lib\\libodbccp32.a\" \"C:\\msys64\\ucrt64\\lib\\libcomctl32.a\"\"\n[16:32:28.997] Message: LDFLAGS for embedding perl: \"C:\\msys64\\ucrt64\\lib\\perl5\\core_perl\\CORE\\libperl532.a C:\\msys64\\ucrt64\\lib\\libmoldname.a C:\\msys64\\ucrt64\\lib\\libkernel32.a C:\\msys64\\ucrt64\\lib\\libuser32.a C:\\msys64\\ucrt64\\lib\\libgdi32.a C:\\msys64\\ucrt64\\lib\\libwinspool.a C:\\msys64\\ucrt64\\lib\\libcomdlg32.a C:\\msys64\\ucrt64\\lib\\libadvapi32.a C:\\msys64\\ucrt64\\lib\\libshell32.a C:\\msys64\\ucrt64\\lib\\libole32.a 
C:\\msys64\\ucrt64\\lib\\liboleaut32.a C:\\msys64\\ucrt64\\lib\\libnetapi32.a C:\\msys64\\ucrt64\\lib\\libuuid.a C:\\msys64\\ucrt64\\lib\\libws2_32.a C:\\msys64\\ucrt64\\lib\\libmpr.a C:\\msys64\\ucrt64\\lib\\libwinmm.a C:\\msys64\\ucrt64\\lib\\libversion.a C:\\msys64\\ucrt64\\lib\\libodbc32.a C:\\msys64\\ucrt64\\lib\\libodbccp32.a C:\\msys64\\ucrt64\\lib\\libcomctl32.a\"\n[16:32:28.997] Checking if \"libperl\" : links: YES\n\n\n> I would expect the ld flags to be \"-LC:/STRAWB~1/perl/lib/CORE -lperl532\"\n\nYou didn't say what they ended up as?\n\n\n> (Off topic peeve - one of the things I dislike about meson is that the\n> meson.build files are written in YA bespoke language).\n\nI don't really disagree. However, all the general purpose language using build\netools I found were awful. And meson's language is a heck of a lot nicer than\ne.g. cmake's...\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 25 Mar 2023 09:38:18 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson/msys2 fails with plperl/Strawberry"
},
{
"msg_contents": "On 2023-03-25 Sa 12:38, Andres Freund wrote:\n> Hi,\n>\n> On 2023-03-25 08:46:42 -0400, Andrew Dunstan wrote:\n>> config/perl.m4 contains this:\n>>\n>>\n>> AC_MSG_CHECKING(for flags to link embedded Perl)\n>> if test \"$PORTNAME\" = \"win32\" ; then\n>> perl_lib=`basename $perl_archlibexp/CORE/perl[[5-9]]*.lib .lib`\n>> if test -e \"$perl_archlibexp/CORE/$perl_lib.lib\"; then\n>> perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n>> else\n>> perl_lib=`basename $perl_archlibexp/CORE/libperl[[5-9]]*.a .a | sed 's/^lib//'`\n>> if test -e \"$perl_archlibexp/CORE/lib$perl_lib.a\"; then\n>> perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n>> fi\n>> fi\n>> else\n>> pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts`\n>> pgac_tmp2=`$PERL -MConfig -e 'print \"$Config{ccdlflags} $Config{ldflags}\"'`\n>> perl_embed_ldflags=`echo X\"$pgac_tmp1\" | sed -e \"s/^X//\" -e \"s%$pgac_tmp2%%\"`\n>> fi\n>> AC_SUBST(perl_embed_ldflags)dnl\n>>\n>> I don't see any equivalent in meson.build of the win32 logic, and thus I am\n>> getting a setup failure on fairywren when trying to move it to meson, while\n>> it will happily build with autoconf.\n> I did not try to build with strawberry perl using mingw - it doesn't seem like\n> a very interesting thing, given that mingw has a much more reasonable perl\n> than strawberry - but with the mingw perl it works well.\n\n\nStrawberry is a recommended perl installation for Windows \n(<https://www.perl.org/get.html>) and is widely used AFAICT.\n\nIn general my approach has been to build as independently as possible \nfrom msys2 infrastructure, in particular a) not to rely on it at all for \nMSVC builds and b) to use independent third party installations for \nthings like openssl and PLs.\n\nIn any case, I don't think we should be choosing gratuitously to break \nthings that hitherto worked, however uninteresting you personally might \nfind them.\n\n\n> The above logic actually did *not* work well with mingw for me, because the\n> names are not actually what configure expects, and it seems like a seriously\n> bad idea to encode that much knowledge about library naming and locations.\n\n\nDidn't work well how? It just worked perfectly for me with ucrt perl \n(setup, built and tested) using configure:\n\n$ grep perl532 config.log\nconfigure:10482: result: \n-LC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE -lperl532\nconfigure:18820: gcc -o conftest.exe -Wall -Wmissing-prototypes \n-Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels \n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type \n-Wshadow=compatible-local -Wformat-security -fno-strict-aliasing -fwrapv \n-fexcess-precision=standard -Wno-format-truncation \n-Wno-stringop-truncation -O2 -I./src/include/port/win32 \n-IC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE \n-Wl,--allow-multiple-definition -Wl,--disable-auto-import conftest.c \n-LC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE -lperl532 >&5\nperl_embed_ldflags='-LC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE \n-lperl532'\n\n\n>> I would expect the ld flags to be \"-LC:/STRAWB~1/perl/lib/CORE -lperl532\"\n> You didn't say what they ended up as?\n\n\nI think you misunderstand me. 
This is what they should end up as.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-25 Sa 12:38, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-03-25 08:46:42 -0400, Andrew Dunstan wrote:\n\n\nconfig/perl.m4 contains this:\n\n\n AC_MSG_CHECKING(for flags to link embedded Perl)\n if test \"$PORTNAME\" = \"win32\" ; then\n perl_lib=`basename $perl_archlibexp/CORE/perl[[5-9]]*.lib .lib`\n if test -e \"$perl_archlibexp/CORE/$perl_lib.lib\"; then\n perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n else\n perl_lib=`basename $perl_archlibexp/CORE/libperl[[5-9]]*.a .a | sed 's/^lib//'`\n if test -e \"$perl_archlibexp/CORE/lib$perl_lib.a\"; then\n perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n fi\n fi\n else\n pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts`\n pgac_tmp2=`$PERL -MConfig -e 'print \"$Config{ccdlflags} $Config{ldflags}\"'`\n perl_embed_ldflags=`echo X\"$pgac_tmp1\" | sed -e \"s/^X//\" -e \"s%$pgac_tmp2%%\"`\n fi\n AC_SUBST(perl_embed_ldflags)dnl\n\nI don't see any equivalent in meson.build of the win32 logic, and thus I am\ngetting a setup failure on fairywren when trying to move it to meson, while\nit will happily build with autoconf.\n\n\n\nI did not try to build with strawberry perl using mingw - it doesn't seem like\na very interesting thing, given that mingw has a much more reasonable perl\nthan strawberry - but with the mingw perl it works well.\n\n\n\nStrawberry is a recommended perl installation for Windows\n (<https://www.perl.org/get.html>) and is widely used AFAICT.\nIn general my approach has been to build as independently as\n possible from msys2 infrastructure, in particular a) not to rely\n on it at all for MSVC builds and b) to use independent third party\n installations for things like openssl and PLs.\nIn any case, I don't think we should be choosing gratuitously to\n break things that hitherto worked, however uninteresting you\n personally might find them.\n\n\n\n\n\n\n\nThe above logic actually did *not* work well with mingw for me, because the\nnames are not actually what configure expects, and it seems like a seriously\nbad idea to encode that much knowledge about library naming and locations.\n\n\n\nDidn't work well how? It just worked perfectly for me with ucrt\n perl (setup, built and tested) using configure:\n\n$ grep perl532 config.log\n configure:10482: result:\n -LC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE -lperl532\n configure:18820: gcc -o conftest.exe -Wall -Wmissing-prototypes\n -Wpointer-arith -Wdeclaration-after-statement -Werror=vla\n -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3\n -Wcast-function-type -Wshadow=compatible-local -Wformat-security\n -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n -Wno-format-truncation -Wno-stringop-truncation -O2 \n -I./src/include/port/win32 \n -IC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE \n -Wl,--allow-multiple-definition -Wl,--disable-auto-import \n conftest.c -LC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE\n -lperl532 >&5\nperl_embed_ldflags='-LC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE\n -lperl532'\n\n\n\n\n\n\n\nI would expect the ld flags to be \"-LC:/STRAWB~1/perl/lib/CORE -lperl532\"\n\n\n\nYou didn't say what they ended up as?\n\n\n\nI think you misunderstand me. This is what they should end up as.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 26 Mar 2023 07:57:59 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson/msys2 fails with plperl/Strawberry"
},
{
"msg_contents": "On 2023-03-26 07:57:59 -0400, Andrew Dunstan wrote:\n> \n> On 2023-03-25 Sa 12:38, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-03-25 08:46:42 -0400, Andrew Dunstan wrote:\n> > > config/perl.m4 contains this:\n> > > \n> > > \n> > > AC_MSG_CHECKING(for flags to link embedded Perl)\n> > > if test \"$PORTNAME\" = \"win32\" ; then\n> > > perl_lib=`basename $perl_archlibexp/CORE/perl[[5-9]]*.lib .lib`\n> > > if test -e \"$perl_archlibexp/CORE/$perl_lib.lib\"; then\n> > > perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n> > > else\n> > > perl_lib=`basename $perl_archlibexp/CORE/libperl[[5-9]]*.a .a | sed 's/^lib//'`\n> > > if test -e \"$perl_archlibexp/CORE/lib$perl_lib.a\"; then\n> > > perl_embed_ldflags=\"-L$perl_archlibexp/CORE -l$perl_lib\"\n> > > fi\n> > > fi\n> > > else\n> > > pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts`\n> > > pgac_tmp2=`$PERL -MConfig -e 'print \"$Config{ccdlflags} $Config{ldflags}\"'`\n> > > perl_embed_ldflags=`echo X\"$pgac_tmp1\" | sed -e \"s/^X//\" -e \"s%$pgac_tmp2%%\"`\n> > > fi\n> > > AC_SUBST(perl_embed_ldflags)dnl\n> > > \n> > > I don't see any equivalent in meson.build of the win32 logic, and thus I am\n> > > getting a setup failure on fairywren when trying to move it to meson, while\n> > > it will happily build with autoconf.\n> > I did not try to build with strawberry perl using mingw - it doesn't seem like\n> > a very interesting thing, given that mingw has a much more reasonable perl\n> > than strawberry - but with the mingw perl it works well.\n> \n> \n> Strawberry is a recommended perl installation for Windows\n> (<https://www.perl.org/get.html>) and is widely used AFAICT.\n\nIt also hasn't released anything in years, including security fixes, dumps\nbroken binaries alongside the directory containing perl.\n\n\n> In general my approach has been to build as independently as possible from\n> msys2 infrastructure, in particular a) not to rely on it at all for MSVC\n> builds and b) to use independent third party installations for things like\n> openssl and PLs.\n\nNote that the msvc CI build *does* use strawberry perl.\n\nFirst: I am *not* arguing we shouldn't repair building against strawberry perl\nwith mingw.\n\nBut I fail to see what we gain by using builds of openssl etc from random\nplaces - all that achieves is making it very hard to reproduce problems. Given\nhow few users mingw built windows has, that's the opposite of what we should\ndo.\n\n\n> In any case, I don't think we should be choosing gratuitously to break\n> things that hitherto worked, however uninteresting you personally might find\n> them.\n\nI didn't gratuitously do so. I didn't even know it was broken - as I said\nabove, CI tests build with strawberry perl many times a day. I spent plenty\ntime figuring out why newer perl versions were broken on windows.\n\n\n> > The above logic actually did *not* work well with mingw for me, because the\n> > names are not actually what configure expects, and it seems like a seriously\n> > bad idea to encode that much knowledge about library naming and locations.\n> \n> \n> Didn't work well how? 
It just worked perfectly for me with ucrt perl (setup,\n> built and tested) using configure:\n> \n> $ grep perl532 config.log\n> configure:10482: result: -LC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE\n> -lperl532\n> configure:18820: gcc -o conftest.exe -Wall -Wmissing-prototypes\n> -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> -Wshadow=compatible-local -Wformat-security -fno-strict-aliasing -fwrapv\n> -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation\n> -O2 -I./src/include/port/win32\n> -IC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE\n> -Wl,--allow-multiple-definition -Wl,--disable-auto-import conftest.c\n> -LC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE -lperl532 >&5\n> perl_embed_ldflags='-LC:/tools/nmsys64/ucrt64/lib/perl5/core_perl/CORE\n> -lperl532'\n\nI got mismatches around library names, because some of the win32 specific\npattern matches didn't apply or applied over broadly. I don't have a windows\nsystem running right now, I'll try to reproduce in the next few days.\n\n\n> > > I would expect the ld flags to be \"-LC:/STRAWB~1/perl/lib/CORE -lperl532\"\n> > You didn't say what they ended up as?\n> \n> \n> I think you misunderstand me. This is what they should end up as.\n\nI know. Without knowing what they *did* end up as, it's hard to compare, no?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Mar 2023 12:39:08 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson/msys2 fails with plperl/Strawberry"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-26 12:39:08 -0700, Andres Freund wrote:\n> First: I am *not* arguing we shouldn't repair building against strawberry perl\n> with mingw.\n\nHm - can you describe the failure more - I just tried, and it worked to build\nagainst strawberry perl on mingw, without any issues. All I did was set\n-DPERL=\"c:/strawberrly/perl/bin/perl.exe\".\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Mar 2023 14:28:30 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson/msys2 fails with plperl/Strawberry"
},
{
"msg_contents": "\n\n> On Mar 26, 2023, at 5:28 PM, Andres Freund <[email protected]> wrote:\n> \n> Hi,\n> \n>> On 2023-03-26 12:39:08 -0700, Andres Freund wrote:\n>> First: I am *not* arguing we shouldn't repair building against strawberry perl\n>> with mingw.\n> \n> Hm - can you describe the failure more - I just tried, and it worked to build\n> against strawberry perl on mingw, without any issues. All I did was set\n> -DPERL=\"c:/strawberrly/perl/bin/perl.exe\".\n> \n> \n\nThat might be the secret sauce I’m missing. I will be offline for a day or three, will test when I’m back.\n\nCheers \n\nAndrew \n\n",
"msg_date": "Sun, 26 Mar 2023 21:13:41 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson/msys2 fails with plperl/Strawberry"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-26 21:13:41 -0400, Andrew Dunstan wrote:\n> > On Mar 26, 2023, at 5:28 PM, Andres Freund <[email protected]> wrote:\n> >> On 2023-03-26 12:39:08 -0700, Andres Freund wrote:\n> >> First: I am *not* arguing we shouldn't repair building against strawberry perl\n> >> with mingw.\n> > \n> > Hm - can you describe the failure more - I just tried, and it worked to build\n> > against strawberry perl on mingw, without any issues. All I did was set\n> > -DPERL=\"c:/strawberrly/perl/bin/perl.exe\".\n\n> That might be the secret sauce I’m missing. I will be offline for a day or three, will test when I’m back.\n\nIt should suffice to put strawberry perl first in PATH. All that the -DPERL\ndoes is to use that, instead of 'perl' from PATH. If putting strawberry perl\nahead in PATH failed, something else must have been going on...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 27 Mar 2023 10:18:50 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: meson/msys2 fails with plperl/Strawberry"
},
{
"msg_contents": "On 2023-03-27 Mo 13:18, Andres Freund wrote:\n> Hi,\n>\n> On 2023-03-26 21:13:41 -0400, Andrew Dunstan wrote:\n>>> On Mar 26, 2023, at 5:28 PM, Andres Freund<[email protected]> wrote:\n>>>> On 2023-03-26 12:39:08 -0700, Andres Freund wrote:\n>>>> First: I am *not* arguing we shouldn't repair building against strawberry perl\n>>>> with mingw.\n>>> Hm - can you describe the failure more - I just tried, and it worked to build\n>>> against strawberry perl on mingw, without any issues. All I did was set\n>>> -DPERL=\"c:/strawberrly/perl/bin/perl.exe\".\n>> That might be the secret sauce I’m missing. I will be offline for a day or three, will test when I’m back.\n> It should suffice to put strawberry perl first in PATH. All that the -DPERL\n> does is to use that, instead of 'perl' from PATH. If putting strawberry perl\n> ahead in PATH failed, something else must have been going on...\n\n\n\nYeah, What it actually needed was a system upgrade. Sorry for the noise.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-27 Mo 13:18, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-03-26 21:13:41 -0400, Andrew Dunstan wrote:\n\n\n\nOn Mar 26, 2023, at 5:28 PM, Andres Freund <[email protected]> wrote:\n\n\nOn 2023-03-26 12:39:08 -0700, Andres Freund wrote:\nFirst: I am *not* arguing we shouldn't repair building against strawberry perl\nwith mingw.\n\n\n\nHm - can you describe the failure more - I just tried, and it worked to build\nagainst strawberry perl on mingw, without any issues. All I did was set\n-DPERL=\"c:/strawberrly/perl/bin/perl.exe\".\n\n\n\n\n\n\nThat might be the secret sauce I’m missing. I will be offline for a day or three, will test when I’m back.\n\n\n\nIt should suffice to put strawberry perl first in PATH. All that the -DPERL\ndoes is to use that, instead of 'perl' from PATH. If putting strawberry perl\nahead in PATH failed, something else must have been going on...\n\n\n\n\n\n\nYeah, What it actually needed was a system upgrade. Sorry for the\n noise.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 30 Mar 2023 11:00:01 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: meson/msys2 fails with plperl/Strawberry"
}
] |
[
{
"msg_contents": "Hi,\n\nA question by Justin made me wonder what the right behaviour for world,\ninstall-world should be when the docs tools aren't available. I'm wondering\nfrom the angle of meson, but it also seems something we possibly should think\nabout for autoconf.\n\nRight now if one does install-world with autoconf, without having xmllint or\nxsltproc available, one gets an error:\nERROR: `xmllint' is missing on your system.\n\nIs that good? Should meson behave the same?\n\nI wonder if, for meson, the best behaviour would be to make 'docs' a feature\nset to auto. If docs set to enabled, and the necessary tools are not\navailable, fail at that time, instead of doing so while building.\n\nIf that's what we decide to do, perhaps \"docs\" should be split further? The\ndependencies for pdf generation are a lot more heavyweight.\n\n\nWe should probably also generate a useful error when the stylesheets aren't\navailable. Right now we just generate a long error:\n\n/usr/bin/xsltproc --nonet --path . --stringparam pg.version '16devel' /home/andres/src/postgresql/doc/src/sgml/stylesheet.xsl postgres-full.xml\nI/O error : Attempt to load network entity http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\nwarning: failed to load external entity \"http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\"\ncompilation error: file /home/andres/src/postgresql/doc/src/sgml/stylesheet.xsl line 6 element import\nxsl:import : unable to load http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\nI/O error : Attempt to load network entity http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\n/home/andres/src/postgresql/doc/src/sgml/stylesheet-html-common.xsl:4: warning: failed to load external entity \"http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\"\n%common.entities;\n ^\n/home/andres/src/postgresql/doc/src/sgml/stylesheet-html-common.xsl:116: parser error : Entity 'primary' not defined\n translate(substring(&primary;, 1, 1),\n...\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20230325180310.o6drykb3uz4u4x4r%40awork3.anarazel.de\n\n\n",
"msg_date": "Sat, 25 Mar 2023 13:14:14 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "what should install-world do when docs are not available?"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> Right now if one does install-world with autoconf, without having xmllint or\n> xsltproc available, one gets an error:\n> ERROR: `xmllint' is missing on your system.\n\n> Is that good? Should meson behave the same?\n\nSince install-world is defined to install documentation, it should\ndo so or fail trying. Maybe we could skip the xmllint step, but you'd\nstill need xsltproc so I'm not sure that that moves the bar very far.\n\n> If that's what we decide to do, perhaps \"docs\" should be split further? The\n> dependencies for pdf generation are a lot more heavyweight.\n\nYeah. Personally I think \"docs\" should just build/install the HTML\ndocs, but maybe I'm too narrow-minded.\n\n> We should probably also generate a useful error when the stylesheets aren't\n> available.\n\nMaybe, but we haven't had that in the autoconf case either, and there\nhave not been too many complaints. Not sure it's worth putting extra\neffort into.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Mar 2023 16:40:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: what should install-world do when docs are not available?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-25 16:40:03 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Right now if one does install-world with autoconf, without having xmllint or\n> > xsltproc available, one gets an error:\n> > ERROR: `xmllint' is missing on your system.\n>\n> > Is that good? Should meson behave the same?\n>\n> Since install-world is defined to install documentation, it should\n> do so or fail trying. Maybe we could skip the xmllint step, but you'd\n> still need xsltproc so I'm not sure that that moves the bar very far.\n\nxmllint is the more commonly installed tool (it's part of libxml, which\nlibxslt depends on), so that wouldn't help much - and we now depend on xmllint\nto build the input to xsltproc anyway...\n\n\n> > If that's what we decide to do, perhaps \"docs\" should be split further? The\n> > dependencies for pdf generation are a lot more heavyweight.\n>\n> Yeah. Personally I think \"docs\" should just build/install the HTML\n> docs, but maybe I'm too narrow-minded.\n\nSorry, I meant docs as a meson option, not as a build target. The 'docs'\ntarget builds just the html doc (as with autoconf), and install-doc installs\nboth html and manpages (also as with autoconf).\n\nI am basically wondering if we should make it so that if you say\n-Ddocs=enabled and xmllint or xsltproc aren't available, you get an error at\nconfigure time. And if -Ddocs=auto, the summary at the end of configure will\ntell you if the necessary tools to build the docs are available, but not error\nout.\n\nThe extension to that could be to have a separate -Ddoc_pdf option, which'd\nmirror the above.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 25 Mar 2023 14:08:52 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: what should install-world do when docs are not available?"
},
{
"msg_contents": "On 25.03.23 21:14, Andres Freund wrote:\n> I wonder if, for meson, the best behaviour would be to make 'docs' a feature\n> set to auto. If docs set to enabled, and the necessary tools are not\n> available, fail at that time, instead of doing so while building.\n\nMakes sense to me.\n\n> If that's what we decide to do, perhaps \"docs\" should be split further? The\n> dependencies for pdf generation are a lot more heavyweight.\n\nI think \"docs\" should be html and man, because that's what gets installed.\n\npdf and other things can just be an ad hoc build target and doesn't need \ninstall support.\n\n\n\n",
"msg_date": "Wed, 29 Mar 2023 18:15:02 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: what should install-world do when docs are not available?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-29 18:15:02 +0200, Peter Eisentraut wrote:\n> On 25.03.23 21:14, Andres Freund wrote:\n> > I wonder if, for meson, the best behaviour would be to make 'docs' a feature\n> > set to auto. If docs set to enabled, and the necessary tools are not\n> > available, fail at that time, instead of doing so while building.\n> \n> Makes sense to me.\n> \n> > If that's what we decide to do, perhaps \"docs\" should be split further? The\n> > dependencies for pdf generation are a lot more heavyweight.\n> \n> I think \"docs\" should be html and man, because that's what gets installed.\n> \n> pdf and other things can just be an ad hoc build target and doesn't need\n> install support.\n\nI just meant for feature detection.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 29 Mar 2023 09:25:03 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: what should install-world do when docs are not available?"
},
{
"msg_contents": "On 29.03.23 18:25, Andres Freund wrote:\n> On 2023-03-29 18:15:02 +0200, Peter Eisentraut wrote:\n>> On 25.03.23 21:14, Andres Freund wrote:\n>>> I wonder if, for meson, the best behaviour would be to make 'docs' a feature\n>>> set to auto. If docs set to enabled, and the necessary tools are not\n>>> available, fail at that time, instead of doing so while building.\n>>\n>> Makes sense to me.\n>>\n>>> If that's what we decide to do, perhaps \"docs\" should be split further? The\n>>> dependencies for pdf generation are a lot more heavyweight.\n>>\n>> I think \"docs\" should be html and man, because that's what gets installed.\n>>\n>> pdf and other things can just be an ad hoc build target and doesn't need\n>> install support.\n> \n> I just meant for feature detection.\n\nAh yes, then things like fop should either be a separate feature or just \ndo something light weight, like failing the target if fop isn't there.\n\n\n\n",
"msg_date": "Wed, 29 Mar 2023 18:39:27 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: what should install-world do when docs are not available?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-29 18:39:27 +0200, Peter Eisentraut wrote:\n> On 29.03.23 18:25, Andres Freund wrote:\n> > On 2023-03-29 18:15:02 +0200, Peter Eisentraut wrote:\n> > > On 25.03.23 21:14, Andres Freund wrote:\n> > > > I wonder if, for meson, the best behaviour would be to make 'docs' a feature\n> > > > set to auto. If docs set to enabled, and the necessary tools are not\n> > > > available, fail at that time, instead of doing so while building.\n> > > \n> > > Makes sense to me.\n> > > \n> > > > If that's what we decide to do, perhaps \"docs\" should be split further? The\n> > > > dependencies for pdf generation are a lot more heavyweight.\n> > > \n> > > I think \"docs\" should be html and man, because that's what gets installed.\n> > > \n> > > pdf and other things can just be an ad hoc build target and doesn't need\n> > > install support.\n> > \n> > I just meant for feature detection.\n> \n> Ah yes, then things like fop should either be a separate feature or just do\n> something light weight, like failing the target if fop isn't there.\n\nAttached is an implementation of this approach. This includes some lightly\npolished patches from [1] and a new patch to remove htmlhelp.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/3fc3bb9b-f7f8-d442-35c1-ec82280c564a%40enterprisedb.com",
"msg_date": "Wed, 29 Mar 2023 15:41:32 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: what should install-world do when docs are not available?"
},
{
"msg_contents": "> + Enables the building of documentation in <acronym>HTML</acronym> and\n> + <acronym>man</acronym> format. It defaults to auto.\n> +\n> + Enables the building of documentation in <acronym>PDF</acronym>\n> + format. It defaults to auto.\n\nThese sound awkward. Recommend:\n\nEnables building the documentation in <acronym>PDF</acronym>\nformat. It defaults to auto.\n\n> + <varlistentry id=\"configure-docs-html-style\">\n> + <term><option>-Ddocs_html_style={ simple | website }</option></term>\n> + <listitem>\n> + <para>\n> + Influences which <acronym>CSS</acronym> stylesheet is used. If\n> + <literal>website</literal>, instead of the default\n> + <literal>simple</literal>, is used, HTML documentation will use the\n> + stylesheet used on <ulink\n> + url=\"https://www.postgresql.org/docs/current/\">postgresql.org</ulink>.\n\ns/Influences/Controls/\n\nI think the default should be given separately from the description of\nthe other option.\n\nControls which <acronym>CSS</acronym> stylesheet is used.\nThe default is <literal>simple</literal>.\nIf set to <literal>website</literal>, the HTML documentation will use the\nsame stylesheet used on <ulink\nurl=\"https://www.postgresql.org/docs/current/\">postgresql.org</ulink>.\n\n\n",
"msg_date": "Wed, 29 Mar 2023 17:51:01 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: what should install-world do when docs are not available?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-29 17:51:01 -0500, Justin Pryzby wrote:\n> > + Enables the building of documentation in <acronym>HTML</acronym> and\n> > + <acronym>man</acronym> format. It defaults to auto.\n> > +\n> > + Enables the building of documentation in <acronym>PDF</acronym>\n> > + format. It defaults to auto.\n> \n> These sound awkward. Recommend:\n> \n> Enables building the documentation in <acronym>PDF</acronym>\n> format. It defaults to auto.\n> \n> > + <varlistentry id=\"configure-docs-html-style\">\n> > + <term><option>-Ddocs_html_style={ simple | website }</option></term>\n> > + <listitem>\n> > + <para>\n> > + Influences which <acronym>CSS</acronym> stylesheet is used. If\n> > + <literal>website</literal>, instead of the default\n> > + <literal>simple</literal>, is used, HTML documentation will use the\n> > + stylesheet used on <ulink\n> > + url=\"https://www.postgresql.org/docs/current/\">postgresql.org</ulink>.\n> \n> s/Influences/Controls/\n> \n> I think the default should be given separately from the description of\n> the other option.\n> \n> Controls which <acronym>CSS</acronym> stylesheet is used.\n> The default is <literal>simple</literal>.\n> If set to <literal>website</literal>, the HTML documentation will use the\n> same stylesheet used on <ulink\n> url=\"https://www.postgresql.org/docs/current/\">postgresql.org</ulink>.\n\nYour alternatives are indeed better. Except that \"the same\" seems a bit\nmisleading to me, sounding like it could just be a copy. I changed to \"will\nreference the stylesheet for ...\".\n\nPushed the changes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Apr 2023 21:46:11 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: what should install-world do when docs are not available?"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-04 21:46:11 -0700, Andres Freund wrote:\n> Pushed the changes.\n\nThis failed on crake - afaict because the meson buildfarm code disables all\nfeatures. Because 'docs' is a feature now, the BF code building\ndoc/src/sgml/html fails.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Apr 2023 21:57:32 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: what should install-world do when docs are not available?"
},
{
"msg_contents": "On 2023-04-05 We 00:57, Andres Freund wrote:\n> Hi,\n>\n> On 2023-04-04 21:46:11 -0700, Andres Freund wrote:\n>> Pushed the changes.\n> This failed on crake - afaict because the meson buildfarm code disables all\n> features. Because 'docs' is a feature now, the BF code building\n> doc/src/sgml/html fails.\n\n\nI changed it so that if the config mandates building docs we add \n-Ddocs=enabled and if it mandates building a pdf we also add \n-Ddocs_pdf=enabled. See\n\n<https://github.com/PGBuildFarm/client-code/commit/b18a129f91352f77e67084a758462b92ac1abaf7>\n\nIt's a slight pity that you have to pick this at setup time, but I guess \nthe upside is that we don't spend time looking for stuff we're not \nactually going to use.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-05 We 00:57, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-04-04 21:46:11 -0700, Andres Freund wrote:\n\n\nPushed the changes.\n\n\n\nThis failed on crake - afaict because the meson buildfarm code disables all\nfeatures. Because 'docs' is a feature now, the BF code building\ndoc/src/sgml/html fails.\n\n\n\nI changed it so that if the config mandates building docs we add\n -Ddocs=enabled and if it mandates building a pdf we also add\n -Ddocs_pdf=enabled. See\n<https://github.com/PGBuildFarm/client-code/commit/b18a129f91352f77e67084a758462b92ac1abaf7>\nIt's a slight pity that you have to pick this at setup time, but\n I guess the upside is that we don't spend time looking for stuff\n we're not actually going to use.\n\n\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 6 Apr 2023 14:52:51 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: what should install-world do when docs are not available?"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-06 14:52:51 -0400, Andrew Dunstan wrote:\n> On 2023-04-05 We 00:57, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-04-04 21:46:11 -0700, Andres Freund wrote:\n> > > Pushed the changes.\n> > This failed on crake - afaict because the meson buildfarm code disables all\n> > features. Because 'docs' is a feature now, the BF code building\n> > doc/src/sgml/html fails.\n> \n> \n> I changed it so that if the config mandates building docs we add\n> -Ddocs=enabled and if it mandates building a pdf we also add\n> -Ddocs_pdf=enabled. See\n\nSounds good, thanks!\n\n\n> <https://github.com/PGBuildFarm/client-code/commit/b18a129f91352f77e67084a758462b92ac1abaf7>\n> \n> It's a slight pity that you have to pick this at setup time,\n\nFWIW, you can change options with meson configure -Ddocs=enabled (or whatnot),\nin an existing buildtree. It'll rerun configure (with caching). For options\nlike docs, it won't lead to rebuilding binaries.\n\n\n> but I guess the upside is that we don't spend time looking for stuff we're\n> not actually going to use.\n\nAnd that you'll learn that tools are missing before you get through most of\nthe build...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Apr 2023 17:50:45 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: what should install-world do when docs are not available?"
}
] |
[
{
"msg_contents": "Currently pg_test_timing utility measures its timing overhead in\nmicroseconds, giving results like this\n\n~$ /usr/lib/postgresql/15/bin/pg_test_timing\nTesting timing overhead for 3 seconds.\nPer loop time including overhead: 18.97 ns\nHistogram of timing durations:\n < us % of total count\n 1 98.11132 155154419\n 2 1.88756 2985010\n 4 0.00040 630\n 8 0.00012 184\n 16 0.00058 919\n 32 0.00003 40\n 64 0.00000 6\n\nI got curious and wanted to see how the 98.1% timings are distributed\n(raw uncleaned patch attached)\nAnd this is what I got when I increased the measuring resolution to nanoseconds\n\nhannuk@hannuk1:~/work/postgres15_uni_dist_on/src/bin/pg_test_timing$\n./pg_test_timing\nTesting timing overhead for 3 seconds.\nPer loop time including overhead: 17.34 ns, min: 15, same: 0\nHistogram of timing durations:\n < ns % of total count\n 1 0.00000 0\n 2 0.00000 0\n 4 0.00000 0\n 8 0.00000 0\n 16 1.14387 1979085\n 32 98.47924 170385392\n 64 0.21666 374859\n 128 0.15654 270843\n 256 0.00297 5139\n 512 0.00016 272\n 1024 0.00004 73\n 2048 0.00018 306\n 4096 0.00022 375\n 8192 0.00006 99\n 16384 0.00005 80\n 32768 0.00001 20\n 65536 0.00000 6\n 131072 0.00000 2\n\nAs most of the samples seems to be in ranges 8..15 and 16..32\nnanoseconds the current way of measuring at microsecond resolution is\nclearly inadequate.\n\nThe attached patch is not meant to be applied as-is but is rather\nthere as a helper to easily verify the above numbers.\n\n\nQUESTIONS\n\n1. Do you think it is ok to just change pg_test_timing to return the\nresult in nanoseconds or should there be a flag that asks for\nnanosecond resolution ?\n\n2. Should the output be changed to give ranges instead of `<ns`\nnumbers for better clarity, and leave out the \"too small numbers\" from\nthe beginning as well ?\n\n So the first few lines would look like\n 8 .. 15 ....\n 16 .. 32 .....\n....\n\n\n---\nBest Regards,\nHannu",
"msg_date": "Sun, 26 Mar 2023 16:43:21 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Time to move pg_test_timing to measure in nanoseconds"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-26 16:43:21 +0200, Hannu Krosing wrote:\n> Currently pg_test_timing utility measures its timing overhead in\n> microseconds, giving results like this\n\nI have a patch that does that and a bit more that's included in a larger\npatchset by David Geier:\nhttps://postgr.es/m/198ef658-a5b7-9862-2017-faf85d59e3a8%40gmail.com\n\nCould you review that part of the patchset?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Mar 2023 14:40:20 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Time to move pg_test_timing to measure in nanoseconds"
},
{
"msg_contents": "Sure, will do.\n\nOn Sun, Mar 26, 2023 at 11:40 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-03-26 16:43:21 +0200, Hannu Krosing wrote:\n> > Currently pg_test_timing utility measures its timing overhead in\n> > microseconds, giving results like this\n>\n> I have a patch that does that and a bit more that's included in a larger\n> patchset by David Geier:\n> https://postgr.es/m/198ef658-a5b7-9862-2017-faf85d59e3a8%40gmail.com\n>\n> Could you review that part of the patchset?\n>\n> Greetings,\n>\n> Andres Freund\n\n\n",
"msg_date": "Mon, 27 Mar 2023 00:05:44 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Time to move pg_test_timing to measure in nanoseconds"
}
] |
[
{
"msg_contents": "When a parallel query gets cancelled on a standby due to\nmax_standby_streaming_delay, it happens rather awkwardly. I get two errors\nstacked up, a query cancellation followed by a connection termination.\n\nI use `pgbench -R 1 -T3600 -P5` on the master to generate a light but\nsteady stream of HOT pruning records, and then run `select\nsum(a.abalance*b.abalance) from pgbench_accounts a join pgbench_accounts b\nusing (bid);` on the standby not in a transaction block to be a\nlong-running parallel query (scale factor of 20)\n\nI also set max_standby_streaming_delay = 0. That isn't necessary, but it\nsaves wear and tear on my patience.\n\nERROR: canceling statement due to conflict with recovery\nDETAIL: User query might have needed to see row versions that must be\nremoved.\nFATAL: terminating connection due to conflict with recovery\nDETAIL: User query might have needed to see row versions that must be\nremoved.\n\nThis happens quite reliably. In psql, these sometimes both show up\nimmediately, and sometimes only the first one shows up immediately and then\nthe second one appears upon the next communication to the backend.\n\nI don't know if this is actually a problem. It isn't for me as I don't do\nthis kind of thing outside of testing, but it seems untidy and I can see it\nbeing frustrating from a catch-and-retry perspective and from a log-spam\nperspective.\n\nIt looks like the backend gets signalled by the startup process, and then\nit signals the postmaster to signal the parallel workers, and then they\nignore it for a quite long time (tens to hundreds of ms). By the time they\nget around responding, someone has decided to escalate things. Which\ndoesn't seem to be useful, because no one can do anything until the workers\nrespond anyway.\n\nThis behavior seems to go back a long way, but the propensity for both\nmessages to show up at the same time vs. in different round-trips changes\nfrom version to version.\n\nIs this something we should do something about?\n\nCheers,\n\nJeff\n\nWhen a parallel query gets cancelled on a standby due to max_standby_streaming_delay, it happens rather awkwardly. I get two errors stacked up, a query cancellation followed by a connection termination.I use `pgbench -R 1 -T3600 -P5` on the master to generate a light but steady stream of HOT pruning records, and then run `select sum(a.abalance*b.abalance) from pgbench_accounts a join pgbench_accounts b using (bid);` on the standby not in a transaction block to be a long-running parallel query (scale factor of 20)I also set max_standby_streaming_delay = 0. That isn't necessary, but it saves wear and tear on my patience.ERROR: canceling statement due to conflict with recoveryDETAIL: User query might have needed to see row versions that must be removed.FATAL: terminating connection due to conflict with recoveryDETAIL: User query might have needed to see row versions that must be removed.This happens quite reliably. In psql, these sometimes both show up immediately, and sometimes only the first one shows up immediately and then the second one appears upon the next communication to the backend.I don't know if this is actually a problem. 
It isn't for me as I don't do this kind of thing outside of testing, but it seems untidy and I can see it being frustrating from a catch-and-retry perspective and from a log-spam perspective.It looks like the backend gets signalled by the startup process, and then it signals the postmaster to signal the parallel workers, and then they ignore it for a quite long time (tens to hundreds of ms). By the time they get around responding, someone has decided to escalate things. Which doesn't seem to be useful, because no one can do anything until the workers respond anyway.This behavior seems to go back a long way, but the propensity for both messages to show up at the same time vs. in different round-trips changes from version to version.Is this something we should do something about?Cheers,Jeff",
"msg_date": "Sun, 26 Mar 2023 11:12:48 -0400",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "awkward cancellation of parallel queries on standby."
},
{
"msg_contents": "At Sun, 26 Mar 2023 11:12:48 -0400, Jeff Janes <[email protected]> wrote in \n> I don't know if this is actually a problem. It isn't for me as I don't do\n> this kind of thing outside of testing, but it seems untidy and I can see it\n> being frustrating from a catch-and-retry perspective and from a log-spam\n> perspective.\n> \n> It looks like the backend gets signalled by the startup process, and then\n> it signals the postmaster to signal the parallel workers, and then they\n> ignore it for a quite long time (tens to hundreds of ms). By the time they\n> get around responding, someone has decided to escalate things. Which\n> doesn't seem to be useful, because no one can do anything until the workers\n> respond anyway.\n\nI believe you are seeing autovacuum_naptime as the latency since the\nkilled backend is running a busy query. It seems to me that the\nsignals are get processed pretty much instantly in most cases. There's\na situation where detection takes longer if a session is sitting idle\nin a transaction, but that's just how we deal with that\nsituation. There could be a delay when the system load is pretty high,\nbut it's not really our concern unless some messages start going\nmissing irregularly.\n\n> This behavior seems to go back a long way, but the propensity for both\n> messages to show up at the same time vs. in different round-trips changes\n> from version to version.\n> \n> Is this something we should do something about?\n\nI can't say for certain about the version dependency, but the latency\nyou mentioned doesn't really seem to be an issue, so we don't need to\nworry about it. Regarding session cancellation, taking action might be\nan option. However, even if we detect transaction status in\nPostgresMain, there's still a possibility of the cancellation if a\nconflicting process tries to read a command right before ending the\nongoing transaction. Although we might prevent cancellations in those\nfinal moments, it seems like things could get complicated.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 27 Mar 2023 18:42:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: awkward cancellation of parallel queries on standby."
}
] |
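A minimal client-side sketch of the catch-and-retry loop mentioned in the thread above, written against libpq. It assumes that the standby's "canceling statement due to conflict with recovery" error surfaces to the client as SQLSTATE 40001 and that the follow-up FATAL shows up as a dead connection; the function name, the retry policy, and that SQLSTATE assumption are illustrative rather than taken from the thread.

    #include <stdbool.h>
    #include <string.h>
    #include <libpq-fe.h>

    /*
     * Run a query, retrying when it fails due to a recovery conflict on a
     * standby.  A dead connection (the stacked-up FATAL described above) is
     * reset and the query retried as well.
     */
    static PGresult *
    exec_with_recovery_retry(PGconn *conn, const char *sql, int max_retries)
    {
        for (int attempt = 0; attempt <= max_retries; attempt++)
        {
            PGresult   *res = PQexec(conn, sql);
            ExecStatusType status = PQresultStatus(res);

            if (status == PGRES_TUPLES_OK || status == PGRES_COMMAND_OK)
                return res;         /* caller frees the result with PQclear() */

            const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
            bool        conflict = (sqlstate != NULL &&
                                    strcmp(sqlstate, "40001") == 0);

            PQclear(res);

            if (PQstatus(conn) == CONNECTION_BAD)
                PQreset(conn);      /* connection was terminated; reconnect */
            else if (!conflict)
                return NULL;        /* unrelated error: do not retry */
        }
        return NULL;
    }

From a retry perspective, the awkwardness described above is that a single conflict can cost the client two round trips: one to see the ERROR and another to discover that the connection is gone.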
[
{
"msg_contents": "Andres,\n Apologies to pick on you directly.\n But it appears that sites are refusing HTTP requests,\nand it's affecting compilation of docs in a new configuration.\n\n I was surprised to see NON-HTTPS references in 2023, tbh...\nI cannot even curl these references.\n\n Maybe I am missing a simple flag...\n\n Or should I offer to search/replace to fix everything to HTTPS,\nand submit a patch?\n\nRegards, Kirk\n\nAndres, Apologies to pick on you directly. But it appears that sites are refusing HTTP requests,and it's affecting compilation of docs in a new configuration. I was surprised to see NON-HTTPS references in 2023, tbh...I cannot even curl these references. Maybe I am missing a simple flag... Or should I offer to search/replace to fix everything to HTTPS,and submit a patch?Regards, Kirk",
"msg_date": "Sun, 26 Mar 2023 21:12:35 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Documentation Not Compiling (http://docbook... not https:.//...)"
},
{
"msg_contents": "On Sun, Mar 26, 2023 at 09:12:35PM -0400, Kirk Wolak wrote:\n> Andres,\n> Apologies to pick on you directly.\n> But it appears that sites are refusing HTTP requests,\n> and it's affecting compilation of docs in a new configuration.\n> \n> I was surprised to see NON-HTTPS references in 2023, tbh...\n> I cannot even curl these references.\n> \n> Maybe I am missing a simple flag...\n> \n> Or should I offer to search/replace to fix everything to HTTPS,\n> and submit a patch?\n\nSee 969509c3f2e3b4c32dcf264f9d642b5ef01319f3\n\nDo you have the necessary packages installed for your platform (which\nplatform?).\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 26 Mar 2023 20:22:00 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Documentation Not Compiling (http://docbook... not https:.//...)"
},
{
"msg_contents": "On Sun, Mar 26, 2023 at 9:12 PM Kirk Wolak <[email protected]> wrote:\n\n> Andres,\n> Apologies to pick on you directly.\n> But it appears that sites are refusing HTTP requests,\n> and it's affecting compilation of docs in a new configuration.\n>\n> I was surprised to see NON-HTTPS references in 2023, tbh...\n> I cannot even curl these references.\n>\n> Maybe I am missing a simple flag...\n>\n> Or should I offer to search/replace to fix everything to HTTPS,\n> and submit a patch?\n>\n> Regards, Kirk\n>\n\nOkay, for future reference I had to install a few things (fop, dbtoepub,\ndocbook-xsl)\n\nNot sure why the original ./configure did not bring those in...\n\nRegards, Kirk\n\nOn Sun, Mar 26, 2023 at 9:12 PM Kirk Wolak <[email protected]> wrote:Andres, Apologies to pick on you directly. But it appears that sites are refusing HTTP requests,and it's affecting compilation of docs in a new configuration. I was surprised to see NON-HTTPS references in 2023, tbh...I cannot even curl these references. Maybe I am missing a simple flag... Or should I offer to search/replace to fix everything to HTTPS,and submit a patch?Regards, KirkOkay, for future reference I had to install a few things (fop, dbtoepub, docbook-xsl)Not sure why the original ./configure did not bring those in...Regards, Kirk",
"msg_date": "Sun, 26 Mar 2023 22:24:42 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Documentation Not Compiling (http://docbook... not https:.//...)"
}
] |
[
{
"msg_contents": "Hi,\n\nI recently observed an assertion failure [1] a few times on my dev\nsetup during initdb. The code was built with --enable-debug\n--enable-cassert CFLAGS=\"-ggdb3 -O0\". The assertion was gone after I\ndid make distclean and built the source code again. It looks like the\nsame relation (pg_type [2]) is linked to multiple relcache entries.\nI'm not sure if anyone else has seen this, but thought of reporting it\nhere. Note that I'm not seeing this issue any more.\n\n[1]\nrunning bootstrap script ... TRAP: failed\nAssert(\"rel->pgstat_info->relation == NULL\"), File:\n\"pgstat_relation.c\", Line: 143, PID: 837245\n/home/ubuntu/postgres/inst/bin/postgres(ExceptionalCondition+0xbb)[0x55d98ff6abc4]\n/home/ubuntu/postgres/inst/bin/postgres(pgstat_assoc_relation+0xcd)[0x55d98fdb3db7]\n/home/ubuntu/postgres/inst/bin/postgres(+0x1326f5)[0x55d98f8576f5]\n/home/ubuntu/postgres/inst/bin/postgres(heap_beginscan+0x17a)[0x55d98f8586b5]\n/home/ubuntu/postgres/inst/bin/postgres(table_beginscan_catalog+0x6e)[0x55d98f8c4cf3]\n/home/ubuntu/postgres/inst/bin/postgres(+0x1f3d29)[0x55d98f918d29]\n/home/ubuntu/postgres/inst/bin/postgres(+0x1f4031)[0x55d98f919031]\n/home/ubuntu/postgres/inst/bin/postgres(DefineAttr+0x216)[0x55d98f918375]\n/home/ubuntu/postgres/inst/bin/postgres(boot_yyparse+0x115c)[0x55d98f91499c]\n/home/ubuntu/postgres/inst/bin/postgres(BootstrapModeMain+0x5cb)[0x55d98f917cda]\n/home/ubuntu/postgres/inst/bin/postgres(main+0x2f3)[0x55d98fb63738]\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7faac24d8d90]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7faac24d8e40]\n/home/ubuntu/postgres/inst/bin/postgres(_start+0x25)[0x55d98f7f2045]\nAborted (core dumped)\nchild process exited with exit code 134\ninitdb: removing data directory \"data\"\n\n#3 0x0000562d92043da9 in pgstat_assoc_relation (rel=0x562d93dd9bc8)\nat pgstat_relation.c:148\n#4 0x0000562d91ae76f5 in initscan (scan=0x562d93e047f8, key=0x0,\nkeep_startblock=false) at heapam.c:344\n#5 0x0000562d91ae86b5 in heap_beginscan (relation=0x562d93dd9bc8,\nsnapshot=0x562d93dfe3f8, nkeys=0, key=0x0,\n parallel_scan=0x0, flags=961) at heapam.c:1017\n#6 0x0000562d91b54cf3 in table_beginscan_catalog\n(relation=0x562d93dd9bc8, nkeys=0, key=0x0) at tableam.c:119\n#7 0x0000562d91ba8d29 in populate_typ_list () at bootstrap.c:719\n#8 0x0000562d91ba9031 in gettype (type=0x562d93e047d8 \"anyarray\") at\nbootstrap.c:801\n#9 0x0000562d91ba8375 in DefineAttr (name=0x562d93e047b8\n\"attmissingval\", type=0x562d93e047d8 \"anyarray\", attnum=25,\n nullness=1) at bootstrap.c:521\n#10 0x0000562d91ba499c in boot_yyparse () at\n/home/ubuntu/postgres/src/backend/bootstrap/bootparse.y:438\n#11 0x0000562d91ba7cda in BootstrapModeMain (argc=6,\nargv=0x562d93d4a1d8, check_only=false) at bootstrap.c:370\n#12 0x0000562d91df3738 in main (argc=7, argv=0x562d93d4a1d0) at main.c:189\n\n[2]\n(gdb) p *relation\n$2 = {rd_locator = {spcOid = 1663, dbOid = 1, relNumber = 1247},\nrd_smgr = 0x562d93e090a8, rd_refcnt = 3, rd_backend = -1,\nrd_islocaltemp = false,\n rd_isnailed = true, rd_isvalid = true, rd_indexvalid = false,\nrd_statvalid = false, rd_createSubid = 1, rd_newRelfilelocatorSubid =\n0,\n rd_firstRelfilelocatorSubid = 0, rd_droppedSubid = 0, rd_rel =\n0x562d93ddae18, rd_att = 0x562d93dd9dd8, rd_id = 1247, rd_lockInfo =\n{lockRelId = {\n relId = 1247, dbId = 1}}, rd_rules = 0x0, rd_rulescxt = 0x0,\ntrigdesc = 0x0, rd_rsdesc = 0x0, rd_fkeylist = 0x0, rd_fkeyvalid =\nfalse, rd_partkey = 0x0,\n rd_partkeycxt = 0x0, rd_partdesc = 0x0, rd_pdcxt = 
0x0,\nrd_partdesc_nodetached = 0x0, rd_pddcxt = 0x0,\nrd_partdesc_nodetached_xmin = 0, rd_partcheck = 0x0,\n rd_partcheckvalid = false, rd_partcheckcxt = 0x0, rd_indexlist =\n0x0, rd_pkindex = 0, rd_replidindex = 0, rd_statlist = 0x0,\nrd_attrsvalid = false,\n rd_keyattr = 0x0, rd_pkattr = 0x0, rd_idattr = 0x0,\nrd_hotblockingattr = 0x0, rd_summarizedattr = 0x0, rd_pubdesc = 0x0,\nrd_options = 0x0, rd_amhandler = 3,\n rd_tableam = 0x562d92582040 <heapam_methods>, rd_index = 0x0,\nrd_indextuple = 0x0, rd_indexcxt = 0x0, rd_indam = 0x0, rd_opfamily =\n0x0, rd_opcintype = 0x0,\n rd_support = 0x0, rd_supportinfo = 0x0, rd_indoption = 0x0,\nrd_indexprs = 0x0, rd_indpred = 0x0, rd_exclops = 0x0, rd_exclprocs =\n0x0, rd_exclstrats = 0x0,\n rd_indcollation = 0x0, rd_opcoptions = 0x0, rd_amcache = 0x0,\nrd_fdwroutine = 0x0, rd_toastoid = 0, pgstat_enabled = true,\npgstat_info = 0x562d93d79fb8}\n(gdb) p *relation->rd_rel\n$3 = {oid = 0, relname = {data = \"pg_type\", '\\000' <repeats 56\ntimes>}, relnamespace = 11, reltype = 0, reloftype = 0, relowner = 10,\nrelam = 2,\n relfilenode = 0, reltablespace = 0, relpages = 0, reltuples = 0,\nrelallvisible = 0, reltoastrelid = 0, relhasindex = false, relisshared\n= false,\n relpersistence = 112 'p', relkind = 114 'r', relnatts = 32,\nrelchecks = 0, relhasrules = false, relhastriggers = false,\nrelhassubclass = false,\n relrowsecurity = false, relforcerowsecurity = false, relispopulated\n= true, relreplident = 110 'n', relispartition = false, relrewrite =\n0, relfrozenxid = 0,\n relminmxid = 0}\n(gdb)\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Mar 2023 11:46:08 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Assertion in pgstat_assoc_relation() fails intermittently"
},
{
"msg_contents": "At Mon, 27 Mar 2023 11:46:08 +0530, Bharath Rupireddy <[email protected]> wrote in \n> I recently observed an assertion failure [1] a few times on my dev\n> setup during initdb. The code was built with --enable-debug\n> --enable-cassert CFLAGS=\"-ggdb3 -O0\". The assertion was gone after I\n> did make distclean and built the source code again. It looks like the\n> same relation (pg_type [2]) is linked to multiple relcache entries.\n> I'm not sure if anyone else has seen this, but thought of reporting it\n> here. Note that I'm not seeing this issue any more.\n\nThis seems like the same issue with [a] and it was fixed by cb2e7ddfe5\non Dec 2, 2022.\n\n[a] https://www.postgresql.org/message-id/CALDaNm2yXz%2BzOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA%40mail.gmail.com\n\na> #5 0x00005590bf283139 in ExceptionalCondition\na> (conditionName=0x5590bf468170 \"rel->pgstat_info->relation == NULL\",\na> fileName=0x5590bf46812b \"pgstat_relation.c\", lineNumber=143) at\na> assert.c:66\na> #6 0x00005590bf0ce5f8 in pgstat_assoc_relation (rel=0x7efcce996a48)\na> at pgstat_relation.c:143\na> #7 0x00005590beb83046 in initscan (scan=0x5590bfbf4af8, key=0x0,\na> keep_startblock=false) at heapam.c:343\na> #8 0x00005590beb8466f in heap_beginscan (relation=0x7efcce996a48,\nsnapshot=0x5590bfb5a520, nkeys=0, key=0x0, parallel_scan=0x0,\nflags=449) at heapam.c:1223\n\n\n> [1]\n> running bootstrap script ... TRAP: failed\n> Assert(\"rel->pgstat_info->relation == NULL\"), File:\n> \"pgstat_relation.c\", Line: 143, PID: 837245\n> /home/ubuntu/postgres/inst/bin/postgres(ExceptionalCondition+0xbb)[0x55d98ff6abc4]\n> /home/ubuntu/postgres/inst/bin/postgres(pgstat_assoc_relation+0xcd)[0x55d98fdb3db7]\n> /home/ubuntu/postgres/inst/bin/postgres(+0x1326f5)[0x55d98f8576f5]\n> /home/ubuntu/postgres/inst/bin/postgres(heap_beginscan+0x17a)[0x55d98f8586b5]\n> /home/ubuntu/postgres/inst/bin/postgres(table_beginscan_catalog+0x6e)[0x55d98f8c4cf3]\n\nregareds.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 28 Mar 2023 10:45:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assertion in pgstat_assoc_relation() fails intermittently"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI wonder why while calculating cost of parallel scan we divide by \nparallel_divisor only CPU run cost,\nbut not storage access cost? So we do not take in account that reading \npages is also performed in parallel.\nActually I observed strange behavior when increasing work_mem disables \nparallel plan even with parallel-friendly tuning:\n\nset parallel_tuple_cost = 0;\nset parallel_setup_cost = 0;\nset max_parallel_workers = 16;\nset max_parallel_workers_per_gather = 16;\nset min_parallel_table_scan_size = '16kB';\n\npostgres=# set work_mem = '32MB';\nSET\npostgres=# explain select sum(payload) from sp where p <@ '((0.2,0.2),(0.3,0.3))'::box;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=3427210.71..3427210.72 rows=1 width=8)\n -> Gather (cost=3427210.67..3427210.68 rows=12 width=8)\n Workers Planned: 12\n -> Partial Aggregate (cost=3427210.67..3427210.68 rows=1 width=8)\n -> Parallel Bitmap Heap Scan on sp (cost=31994.55..3427002.34 rows=83333 width=4)\n Recheck Cond: (p <@ '(0.3,0.3),(0.2,0.2)'::box)\n -> Bitmap Index Scan on sp_p_idx (cost=0.00..31744.55 rows=1000000 width=0)\n Index Cond: (p <@ '(0.3,0.3),(0.2,0.2)'::box)\n(8 rows)\n\npostgres=# set work_mem = '64MB';\nSET\npostgres=# explain select sum(payload) from sp where p <@ '((0.2,0.2),(0.3,0.3))'::box;\n QUERY PLAN\n---------------------------------------------------------------------------------------\n Aggregate (cost=2694543.52..2694543.53 rows=1 width=8)\n -> Bitmap Heap Scan on sp (cost=31994.55..2692043.52 rows=1000000 width=4)\n Recheck Cond: (p <@ '(0.3,0.3),(0.2,0.2)'::box)\n -> Bitmap Index Scan on sp_p_idx (cost=0.00..31744.55 rows=1000000 width=0)\n Index Cond: (p <@ '(0.3,0.3),(0.2,0.2)'::box)\n(5 rows)\n\n\nIn theory, with zero parallel setup cost we should always prefer \nparallel plan with maximal possible number of workers.\nBut right now it is not true.\n\n\n\n\n\n Hi hackers,\n\n I wonder why while calculating cost of parallel scan we divide by\n parallel_divisor only CPU run cost, \n but not storage access cost? 
So we do not take in account that\n reading pages is also performed in parallel.\n Actually I observed strange behavior when increasing work_mem\n disables parallel plan even with parallel-friendly tuning:\nset parallel_tuple_cost = 0;\nset parallel_setup_cost = 0;\nset max_parallel_workers = 16;\nset max_parallel_workers_per_gather = 16;\nset min_parallel_table_scan_size = '16kB';\npostgres=# set work_mem = '32MB';\nSET\npostgres=# explain select sum(payload) from sp where p <@ '((0.2,0.2),(0.3,0.3))'::box;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=3427210.71..3427210.72 rows=1 width=8)\n -> Gather (cost=3427210.67..3427210.68 rows=12 width=8)\n Workers Planned: 12\n -> Partial Aggregate (cost=3427210.67..3427210.68 rows=1 width=8)\n -> Parallel Bitmap Heap Scan on sp (cost=31994.55..3427002.34 rows=83333 width=4)\n Recheck Cond: (p <@ '(0.3,0.3),(0.2,0.2)'::box)\n -> Bitmap Index Scan on sp_p_idx (cost=0.00..31744.55 rows=1000000 width=0)\n Index Cond: (p <@ '(0.3,0.3),(0.2,0.2)'::box)\n(8 rows)\n\npostgres=# set work_mem = '64MB';\nSET\npostgres=# explain select sum(payload) from sp where p <@ '((0.2,0.2),(0.3,0.3))'::box;\n QUERY PLAN \n---------------------------------------------------------------------------------------\n Aggregate (cost=2694543.52..2694543.53 rows=1 width=8)\n -> Bitmap Heap Scan on sp (cost=31994.55..2692043.52 rows=1000000 width=4)\n Recheck Cond: (p <@ '(0.3,0.3),(0.2,0.2)'::box)\n -> Bitmap Index Scan on sp_p_idx (cost=0.00..31744.55 rows=1000000 width=0)\n Index Cond: (p <@ '(0.3,0.3),(0.2,0.2)'::box)\n(5 rows)\n\n In theory, with zero parallel setup cost we should always prefer\n parallel plan with maximal possible number of workers.\n But right now it is not true.",
"msg_date": "Mon, 27 Mar 2023 11:16:36 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parallel plan cost"
}
] |
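The asymmetry asked about above can be seen in a boiled-down sketch of how the planner charges a parallel scan; this is a paraphrase of the costsize.c logic, not a copy of it. The parallel divisor is applied to the CPU component of the run cost (and to the row estimate), while the disk component is charged in full to the parallel path as well.

    /*
     * Simplified sketch of parallel scan run-cost accounting, paraphrasing
     * costsize.c.  Only the CPU part is divided among the workers; the I/O
     * part is not, so plans whose cost is dominated by page fetches gain
     * little from additional workers under this model.
     */
    static double
    parallel_scan_run_cost(double disk_run_cost, double cpu_run_cost,
                           double parallel_divisor)
    {
        if (parallel_divisor > 1.0)
            cpu_run_cost /= parallel_divisor;

        return disk_run_cost + cpu_run_cost;
    }

That undivided disk term is why, with setup and tuple costs zeroed out, the parallel plan can still come out more expensive than the serial one once the CPU share of the estimate shrinks.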
[
{
"msg_contents": "Hi hackers!\n\nI've got a question on the JsonPath header - currently the header size\nis 4 bytes, where there are version and mode bits. Is there somewhere\na defined size of the version part? There are some extensions working\nwith JsonPath, and we have some too, thus it is important how many\nbits is it possible to use to store a version value?\n\nThanks!\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi hackers!I've got a question on the JsonPath header - currently the header sizeis 4 bytes, where there are version and mode bits. Is there somewherea defined size of the version part? There are some extensions workingwith JsonPath, and we have some too, thus it is important how manybits is it possible to use to store a version value?Thanks!--Regards,Nikita MalakhovPostgres Professionalhttps://postgrespro.ru/",
"msg_date": "Mon, 27 Mar 2023 12:54:57 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "JsonPath version bits"
},
{
"msg_contents": "Hi hackers!\n\nCould the 1 byte from the JsonPath header be used to store version?\nOr how many bits from the header could be used for the version value?\n\nOn Mon, Mar 27, 2023 at 12:54 PM Nikita Malakhov <[email protected]> wrote:\n\n> Hi hackers!\n>\n> I've got a question on the JsonPath header - currently the header size\n> is 4 bytes, where there are version and mode bits. Is there somewhere\n> a defined size of the version part? There are some extensions working\n> with JsonPath, and we have some too, thus it is important how many\n> bits is it possible to use to store a version value?\n>\n>\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi hackers!Could the 1 byte from the JsonPath header be used to store version?Or how many bits from the header could be used for the version value?On Mon, Mar 27, 2023 at 12:54 PM Nikita Malakhov <[email protected]> wrote:Hi hackers!I've got a question on the JsonPath header - currently the header sizeis 4 bytes, where there are version and mode bits. Is there somewherea defined size of the version part? There are some extensions workingwith JsonPath, and we have some too, thus it is important how manybits is it possible to use to store a version value?-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Thu, 30 Mar 2023 15:16:39 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JsonPath version bits"
}
] |
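For readers not familiar with the header in question, the sketch below shows the split being assumed in the thread above: the 4-byte JsonPath header keeps its top bit for the lax/strict mode flag and leaves the remaining 31 bits for the version. The masks and helper names here are illustrative; the authoritative definitions live in src/include/utils/jsonpath.h and should be checked there before relying on them.

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed layout of the 4-byte header: top bit = mode flag, low 31 bits
     * = version.  Verify against jsonpath.h before relying on these masks. */
    #define JSONPATH_MODE_LAX_BIT   UINT32_C(0x80000000)
    #define JSONPATH_VERSION_MASK   UINT32_C(0x7FFFFFFF)

    static inline uint32_t
    jsonpath_header_version(uint32_t header)
    {
        return header & JSONPATH_VERSION_MASK;
    }

    static inline bool
    jsonpath_header_is_lax(uint32_t header)
    {
        return (header & JSONPATH_MODE_LAX_BIT) != 0;
    }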
[
{
"msg_contents": "Hey Everyone,\n\nI am Parth Ratra from India majoring in CSE. I have *hands-on experience\nwith HTML/Vanilla CSS/ JS, Reactjs and CI/CD pipelines through various\nprojects. I am proficient in C/C++, Python, and Django* as well.\n\n\n*I am super excited to work with PostgreSQL and would like to contribute to\nthese projects: **“**GUI representation of monitoring System Activity with\nthe system stats Extension in pgAdmin 4”** as they match my skills and\ninterests. While Setting up I ran into some issues configuring the venv for\nfaulty packages.*\n\n\nRegards,\n\n\nParth Ratra\n[image: image.png]",
"msg_date": "Mon, 27 Mar 2023 19:12:42 +0530",
"msg_from": "parth ratra <[email protected]>",
"msg_from_op": true,
"msg_subject": "facing issues in downloading of packages in pgadmin4"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 7:13 PM parth ratra <[email protected]> wrote:\n>\n> Hey Everyone,\n>\n> I am Parth Ratra from India majoring in CSE. I have hands-on experience with HTML/Vanilla CSS/ JS, Reactjs and CI/CD pipelines through various projects. I am proficient in C/C++, Python, and Django as well.\n>\n> I am super excited to work with PostgreSQL and would like to contribute to these projects: “GUI representation of monitoring System Activity with the system stats Extension in pgAdmin 4” as they match my skills and interests. While Setting up I ran into some issues configuring the venv for faulty packages.\n\nHi Parth, we appreciate your interest. There's a separate mailing list\nfor pgAdmin Support <[email protected]>, you may want to\nreach out to them for quicker help. FYI - you can find all postgres\nmailing lists here https://www.postgresql.org/list/.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Mar 2023 19:34:34 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: facing issues in downloading of packages in pgadmin4"
},
{
"msg_contents": "Thank you, I will do that.\n\nOn Mon, Mar 27, 2023 at 7:34 PM Bharath Rupireddy <\[email protected]> wrote:\n\n> On Mon, Mar 27, 2023 at 7:13 PM parth ratra <[email protected]>\n> wrote:\n> >\n> > Hey Everyone,\n> >\n> > I am Parth Ratra from India majoring in CSE. I have hands-on experience\n> with HTML/Vanilla CSS/ JS, Reactjs and CI/CD pipelines through various\n> projects. I am proficient in C/C++, Python, and Django as well.\n> >\n> > I am super excited to work with PostgreSQL and would like to contribute\n> to these projects: “GUI representation of monitoring System Activity with\n> the system stats Extension in pgAdmin 4” as they match my skills and\n> interests. While Setting up I ran into some issues configuring the venv for\n> faulty packages.\n>\n> Hi Parth, we appreciate your interest. There's a separate mailing list\n> for pgAdmin Support <[email protected]>, you may want to\n> reach out to them for quicker help. FYI - you can find all postgres\n> mailing lists here https://www.postgresql.org/list/.\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n\nThank you, I will do that. On Mon, Mar 27, 2023 at 7:34 PM Bharath Rupireddy <[email protected]> wrote:On Mon, Mar 27, 2023 at 7:13 PM parth ratra <[email protected]> wrote:\n>\n> Hey Everyone,\n>\n> I am Parth Ratra from India majoring in CSE. I have hands-on experience with HTML/Vanilla CSS/ JS, Reactjs and CI/CD pipelines through various projects. I am proficient in C/C++, Python, and Django as well.\n>\n> I am super excited to work with PostgreSQL and would like to contribute to these projects: “GUI representation of monitoring System Activity with the system stats Extension in pgAdmin 4” as they match my skills and interests. While Setting up I ran into some issues configuring the venv for faulty packages.\n\nHi Parth, we appreciate your interest. There's a separate mailing list\nfor pgAdmin Support <[email protected]>, you may want to\nreach out to them for quicker help. FYI - you can find all postgres\nmailing lists here https://www.postgresql.org/list/.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 27 Mar 2023 19:36:09 +0530",
"msg_from": "parth ratra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: facing issues in downloading of packages in pgadmin4"
}
] |
[
{
"msg_contents": "Hi,\n\nvisibilitymap.c currently marks empty pages as all visible, including WAL\nlogging them:\n\n if (PageIsEmpty(page))\n...\n /*\n * Empty pages are always all-visible and all-frozen (note that\n * the same is currently not true for new pages, see above).\n */\n if (!PageIsAllVisible(page))\n...\n visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,\n vmbuffer, InvalidTransactionId,\n VISIBILITYMAP_ALL_VISIBLE | VISIBILITYMAP_ALL_FROZEN);\n\n\nIt seems odd that we enter the page into the VM at this point. That means that\nuse of that page will now require a bit more work (including\nRelationGetBufferForTuple() pinning it).\n\nNote that we do *not* do so for new pages:\n\n\tif (PageIsNew(page))\n\t{\n\t\t/*\n\t\t * All-zeroes pages can be left over if either a backend extends the\n\t\t * relation by a single page, but crashes before the newly initialized\n\t\t * page has been written out, or when bulk-extending the relation\n\t\t * (which creates a number of empty pages at the tail end of the\n\t\t * relation), and then enters them into the FSM.\n\t\t *\n\t\t * Note we do not enter the page into the visibilitymap. That has the\n\t\t * downside that we repeatedly visit this page in subsequent vacuums,\n\t\t * but otherwise we'll never discover the space on a promoted standby.\n\t\t * The harm of repeated checking ought to normally not be too bad. The\n\t\t * space usually should be used at some point, otherwise there\n\t\t * wouldn't be any regular vacuums.\n...\n\t\treturn true;\n\t}\n\n\nThe standby.c reasoning seems to hold just as well for empty pages? In fact,\nthere might very well be many more empty pages than new pages.\n\nWhich of course also is also the only argument for putting empty pages into\nthe VM - there could be many of them, so we might not want to rescan them on a\nregular basis. But there's actually also no real bound on the number of new\npages, so I'm not sure that argument goes all that far?\n\nThe standby argument also doesn't just seem to apply to the standby, but also\nto crashes on the primary, as the FSM is not crashsafe...\n\n\nI traced that through the versions - that behaviour originates in the original\ncommit adding the visibilitymap (608195a3a365). There's no comments back then\nexplaining why this behaviour was chosen.\n\n\nThis got a bit stranger with 44fa84881fff, because now we add the page into\nthe VM even if it currently is pinned:\n\n\t\tif (!ConditionalLockBufferForCleanup(buf))\n...\n\t\t\t/* Check for new or empty pages before lazy_scan_noprune call */\n\t\t\tif (lazy_scan_new_or_empty(vacrel, buf, blkno, page, true,\n\t\t\t\t\t\t\t\t\t vmbuffer))\n...\n\n\nIt seems quite odd to set a page to all visible that we could not currently\nget a cleanup lock on - obviously evidence of another backend trying to to do\nsomething with the page.\n\nThe main way to encounter this situation, afaict, is when\nRelationGetTargetBufferForTuple() briefly releases the lock on a newly\nextended page, to acquire the lock on the source page. 
The buffer is pinned,\nbut not locked in that situation.\n\n\nI started to look into this in the context of\nhttps://postgr.es/m/20230325025740.wzvchp2kromw4zqz%40awork3.anarazel.de and\nhttps://postgr.es/m/20221029025420.eplyow6k7tgu6he3%40awork3.anarazel.de\n\nwhich both would ever so slightly extend the window in which we don't hold a\nlock on the page (to do a visibilitymap_pin() and RecordPageWithFreeSpace()\nrespectively).\n\n\nIt seems pretty clear that we shouldn't enter a currently-in-use page into the\nVM or freespacemap. All that's going to do is to \"disturb\" the backend trying\nto use that page (by directing other backends to it) and to make its job more\nexpensive.\n\n\nIt's less clear, but imo worth discussing, whether we should continue to set\nempty pages to all-visible.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 27 Mar 2023 18:48:06 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why mark empty pages all visible?"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 6:48 PM Andres Freund <[email protected]> wrote:\n> It seems odd that we enter the page into the VM at this point. That means that\n> use of that page will now require a bit more work (including\n> RelationGetBufferForTuple() pinning it).\n\nI think that it's fairly obvious that it's *not* odd at all. If it\ndidn't do this then the pages would have to be scanned by VACUUM.\n\nYou haven't said anything about the leading cause of marking empty\npages all-frozen, by far: lazy_vacuum_heap_page(). Should it also stop\nmarking empty pages all-frozen?\n\nActually it isn't technically an empty page according to\nPageIsEmpty(), since I wrote PageTruncateLinePointerArray() in a way\nthat made it leave a heap page with at least a single LP_UNUSED item.\nBut it'll essentially leave behind an empty page in many cases. The\nregression tests mark pages all-frozen in this path quite a bit more\noften than any other path according to gcov.\n\n> This got a bit stranger with 44fa84881fff, because now we add the page into\n> the VM even if it currently is pinned:\n\n> It seems quite odd to set a page to all visible that we could not currently\n> get a cleanup lock on - obviously evidence of another backend trying to to do\n> something with the page.\n\nYou can say the same thing about lazy_vacuum_heap_page(), too.\nIncluding the part about cleanup locking.\n\n> It seems pretty clear that we shouldn't enter a currently-in-use page into the\n> VM or freespacemap. All that's going to do is to \"disturb\" the backend trying\n> to use that page (by directing other backends to it) and to make its job more\n> expensive.\n\nI don't think that it's clear. What about the case where there is only\none tuple, on a page that we cannot cleanup lock? Where do you draw\nthe line?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 27 Mar 2023 20:12:11 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why mark empty pages all visible?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-27 20:12:11 -0700, Peter Geoghegan wrote:\n> On Mon, Mar 27, 2023 at 6:48 PM Andres Freund <[email protected]> wrote:\n> > It seems odd that we enter the page into the VM at this point. That means that\n> > use of that page will now require a bit more work (including\n> > RelationGetBufferForTuple() pinning it).\n> \n> I think that it's fairly obvious that it's *not* odd at all. If it\n> didn't do this then the pages would have to be scanned by VACUUM.\n\nYes - just like in the case of new pages.\n\n\n> You haven't said anything about the leading cause of marking empty\n> pages all-frozen, by far: lazy_vacuum_heap_page(). Should it also stop\n> marking empty pages all-frozen?\n\nIt's not obvious that it should - but it's not as clear a case as the\nConditionalLockBufferForCleanup() -> lazy_scan_new_or_empty() one. In the\nlatter, we know\na) that we don't have to do any work to be able to advance the horizon\nb) we know that somebody else has the page pinned\n\nWhat's the point in marking it all-visible at that point? In quite likely be\nfrom RelationGetBufferForTuple() having extended the relation and then briefly\nneeded to release the lock (to acquire the lock on otherBuffer or in\nGetVisibilityMapPins()).\n\n\n> > This got a bit stranger with 44fa84881fff, because now we add the page into\n> > the VM even if it currently is pinned:\n> \n> > It seems quite odd to set a page to all visible that we could not currently\n> > get a cleanup lock on - obviously evidence of another backend trying to to do\n> > something with the page.\n> \n> You can say the same thing about lazy_vacuum_heap_page(), too.\n> Including the part about cleanup locking.\n\nI don't follow. In the ConditionalLockBufferForCleanup() ->\nlazy_scan_new_or_empty() case we are dealing with an new or empty\npage. Whereas lazy_vacuum_heap_page() deals with a page that definitely has\ndead tuples on it. How are those two cases comparable?\n\n\n> > It seems pretty clear that we shouldn't enter a currently-in-use page into the\n> > VM or freespacemap. All that's going to do is to \"disturb\" the backend trying\n> > to use that page (by directing other backends to it) and to make its job more\n> > expensive.\n> \n> I don't think that it's clear. What about the case where there is only\n> one tuple, on a page that we cannot cleanup lock? Where do you draw\n> the line?\n\nI don't see how that's comparable? For one, we might need to clean up that\ntuple for vacuum to be able to advance the horizon. And as far as the\nnon-cleanup lock path goes, it actually can perform work there. And it doesn't\neven need to acquire an exclusive lock.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 27 Mar 2023 21:32:16 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why mark empty pages all visible?"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 9:32 PM Andres Freund <[email protected]> wrote:\n> On 2023-03-27 20:12:11 -0700, Peter Geoghegan wrote:\n> > On Mon, Mar 27, 2023 at 6:48 PM Andres Freund <[email protected]> wrote:\n> > > It seems odd that we enter the page into the VM at this point. That means that\n> > > use of that page will now require a bit more work (including\n> > > RelationGetBufferForTuple() pinning it).\n> >\n> > I think that it's fairly obvious that it's *not* odd at all. If it\n> > didn't do this then the pages would have to be scanned by VACUUM.\n>\n> Yes - just like in the case of new pages.\n\nI'm not saying that the status quo is free of contradictions. Only\nthat there seem to be contradictions in what you're saying now.\n\n> > You haven't said anything about the leading cause of marking empty\n> > pages all-frozen, by far: lazy_vacuum_heap_page(). Should it also stop\n> > marking empty pages all-frozen?\n>\n> It's not obvious that it should - but it's not as clear a case as the\n> ConditionalLockBufferForCleanup() -> lazy_scan_new_or_empty() one. In the\n> latter, we know\n> a) that we don't have to do any work to be able to advance the horizon\n> b) we know that somebody else has the page pinned\n>\n> What's the point in marking it all-visible at that point? In quite likely be\n> from RelationGetBufferForTuple() having extended the relation and then briefly\n> needed to release the lock (to acquire the lock on otherBuffer or in\n> GetVisibilityMapPins()).\n\nI think that there is significant value in avoiding special cases, on\ngeneral principle. If we stopped doing this in\nlazy_scan_new_or_empty() we'd be inventing a new thing that cleanup\nlocks are supposed to protect against. Maybe something like that would\nmake sense, but if so then make that argument, and make it explicitly\nrepresented in the code.\n\n> I don't follow. In the ConditionalLockBufferForCleanup() ->\n> lazy_scan_new_or_empty() case we are dealing with an new or empty\n> page. Whereas lazy_vacuum_heap_page() deals with a page that definitely has\n> dead tuples on it. How are those two cases comparable?\n\nIt doesn't have dead tuples anymore, though.\n\nISTM that there is an issue here with the definition of an empty page.\nYou're concerned about PageIsEmpty() pages. Which actually aren't\nquite the same thing as an empty page left behind by\nlazy_vacuum_heap_page(). It's just that this distinction isn't quite\nacknowledged anywhere, and probably didn't exist at all at some point.\nMaybe that should be addressed.\n\n> > > It seems pretty clear that we shouldn't enter a currently-in-use page into the\n> > > VM or freespacemap. All that's going to do is to \"disturb\" the backend trying\n> > > to use that page (by directing other backends to it) and to make its job more\n> > > expensive.\n> >\n> > I don't think that it's clear. What about the case where there is only\n> > one tuple, on a page that we cannot cleanup lock? Where do you draw\n> > the line?\n>\n> I don't see how that's comparable? For one, we might need to clean up that\n> tuple for vacuum to be able to advance the horizon. And as far as the\n> non-cleanup lock path goes, it actually can perform work there. And it doesn't\n> even need to acquire an exclusive lock.\n\nSo we should put space in the FSM if it has one tuple, but not if it\nhas zero tuples? Though not if it has zero tuples following processing\nby lazy_vacuum_heap_page()?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 27 Mar 2023 21:51:09 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why mark empty pages all visible?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-27 21:51:09 -0700, Peter Geoghegan wrote:\n> On Mon, Mar 27, 2023 at 9:32 PM Andres Freund <[email protected]> wrote:\n> > > You haven't said anything about the leading cause of marking empty\n> > > pages all-frozen, by far: lazy_vacuum_heap_page(). Should it also stop\n> > > marking empty pages all-frozen?\n> >\n> > It's not obvious that it should - but it's not as clear a case as the\n> > ConditionalLockBufferForCleanup() -> lazy_scan_new_or_empty() one. In the\n> > latter, we know\n> > a) that we don't have to do any work to be able to advance the horizon\n> > b) we know that somebody else has the page pinned\n> >\n> > What's the point in marking it all-visible at that point? In quite likely be\n> > from RelationGetBufferForTuple() having extended the relation and then briefly\n> > needed to release the lock (to acquire the lock on otherBuffer or in\n> > GetVisibilityMapPins()).\n> \n> I think that there is significant value in avoiding special cases, on\n> general principle. If we stopped doing this in\n> lazy_scan_new_or_empty() we'd be inventing a new thing that cleanup\n> locks are supposed to protect against. Maybe something like that would\n> make sense, but if so then make that argument, and make it explicitly\n> represented in the code.\n\nI will probably make that argument - so far I was just trying to understand\nthe intent of the current code. There aren't really comments explaining why we\nwant to mark currently-pinned empty/new pages all-visible and enter them into\nthe FSM.\n\nHistorically we did *not* enter currently pinned empty/new pages into the FSM\n/ VM. Afaics that's new as of 44fa84881fff.\n\n\nThe reason I'm looking at this is that there's a lot of complexity at the\nbottom of RelationGetBufferForTuple(), related to needing to release the lock\non the newly extended page and then needing to recheck whether there still is\nfree space on the page. And that it's not complicated enough\n(c.f. INSERT_FROZEN doing visibilitymap_pin() with the page locked).\n\nAs far as I can tell, if we went back to not entering new/empty pages into\neither VM or FSM, we could rely on the page not getting filled while just\npinning, not locking it.\n\n\n> > I don't follow. In the ConditionalLockBufferForCleanup() ->\n> > lazy_scan_new_or_empty() case we are dealing with an new or empty\n> > page. Whereas lazy_vacuum_heap_page() deals with a page that definitely\n> > has dead tuples on it. How are those two cases comparable?\n> \n> It doesn't have dead tuples anymore, though.\n> \n> ISTM that there is an issue here with the definition of an empty page.\n> You're concerned about PageIsEmpty() pages.\n\nAnd not just any PageIsEmpty() page, ones that are currently pinned.\n\nI also do wonder whether the different behaviour of PageIsEmpty() and\nPageIsNew() pages makes sense. The justification for not marking PageIsNew()\npages as all-visible holds just as true for empty pages. There's just as much\nfree space there.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 28 Mar 2023 12:01:38 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why mark empty pages all visible?"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 12:01 PM Andres Freund <[email protected]> wrote:\n> I will probably make that argument - so far I was just trying to understand\n> the intent of the current code. There aren't really comments explaining why we\n> want to mark currently-pinned empty/new pages all-visible and enter them into\n> the FSM.\n\nI don't think that not being able to immediately get a cleanup lock on\na page signifies much of anything. I never have.\n\n> Historically we did *not* enter currently pinned empty/new pages into the FSM\n> / VM. Afaics that's new as of 44fa84881fff.\n\nOf course that's true, but I don't know why that history is\nparticularly important. Either way, holding a pin was never supposed\nto work as an interlock against a page being concurrently set\nall-visible, or having its space recorded in the FSM. It's true that\nmy work on VACUUM has shaken out a couple of bugs where we\naccidentally relied on that being true. But those were all due to the\nchange in lazy_vacuum_heap_page() made in Postgres 14 -- not the\naddition of lazy_scan_new_or_empty() in Postgres 15.\n\nI actually think that I might agree with the substance of much of what\nyou're saying, but at the same time I don't think that you're defining\nthe problem in a way that's particularly helpful. I gather that you\n*don't* want to do anything about the lazy_vacuum_heap_page behavior\nwith setting empty pages all-visible (just the lazy_scan_new_or_empty\nbehavior). So clearly this isn't really about marking empty pages\nall-visible, with or without a cleanup lock. It's actually about\nsomething rather more specific: the interactions with\nRelationGetBufferForTuple.\n\nI actually agree that VACUUM is way too unconcerned about interfering\nwith concurrent activity in terms of how it manages free space in the\nFSM. But this seems like just about the least important example of\nthat (outside the context of your RelationGetBufferForTuple work). The\nreally important case (that VACUUM gets wrong) all involve recently\ndead tuples. But I don't think that you want to talk about that right\nnow. You want to talk about RelationGetBufferForTuple.\n\n> The reason I'm looking at this is that there's a lot of complexity at the\n> bottom of RelationGetBufferForTuple(), related to needing to release the lock\n> on the newly extended page and then needing to recheck whether there still is\n> free space on the page. And that it's not complicated enough\n> (c.f. INSERT_FROZEN doing visibilitymap_pin() with the page locked).\n>\n> As far as I can tell, if we went back to not entering new/empty pages into\n> either VM or FSM, we could rely on the page not getting filled while just\n> pinning, not locking it.\n\nWhat you're essentially arguing for is inventing a new rule that makes\nthe early lifetime of a page (what we currently call a PageIsEmpty()\npage, and new pages) special, to avoid interference from VACUUM. I\nhave made similar arguments myself on quite a few occasions, so I'm\nactually sympathetic. I just think that you should own it. And no, I'm\nnot just reflexively defending my work in 44fa84881fff; I actually\nthink that framing the problem as a case of restoring a previous\nbehavior is confusing and ahistorical. If there was a useful behavior\nthat was lost, then it was quite an accidental behavior all along. 
The\ndifference matters because now you have to reconcile what you're\nsaying with the lazy_vacuum_heap_page no-cleanup-lock behavior added\nin 14.\n\nI think that you must be arguing for making the early lifetime of a\nheap page special to VACUUM, since AFAICT you want to change VACUUM's\nbehavior with strictly PageIsEmpty/PageIsNew pages only -- and *not*\nwith pages that have one remaining LP_UNUSED item, but are otherwise\nempty (which could be set all-visible/all-frozen in either the first\nor second heap pass, even if we disabled the lazy_scan_new_or_empty()\nbehavior you're complaining about). You seem to want to distinguish\nbetween very new pages (that also happen to be empty), and old pages\nthat happen to be empty. Right?\n\n> I also do wonder whether the different behaviour of PageIsEmpty() and\n> PageIsNew() pages makes sense. The justification for not marking PageIsNew()\n> pages as all-visible holds just as true for empty pages. There's just as much\n> free space there.\n\nWhat you say here makes a lot of sense to me. I'm just not sure what\nI'd prefer to do about it.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 28 Mar 2023 13:05:19 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why mark empty pages all visible?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-28 13:05:19 -0700, Peter Geoghegan wrote:\n> On Tue, Mar 28, 2023 at 12:01 PM Andres Freund <[email protected]> wrote:\n> > I will probably make that argument - so far I was just trying to understand\n> > the intent of the current code. There aren't really comments explaining why we\n> > want to mark currently-pinned empty/new pages all-visible and enter them into\n> > the FSM.\n>\n> I don't think that not being able to immediately get a cleanup lock on\n> a page signifies much of anything. I never have.\n\nWhy is that? It's as clear a signal of concurrent activity on the buffer\nyou're going to get.\n\n\n> > Historically we did *not* enter currently pinned empty/new pages into the FSM\n> > / VM. Afaics that's new as of 44fa84881fff.\n>\n> Of course that's true, but I don't know why that history is\n> particularly important.\n\nIt's interesting to understand *why* we are doing what we are. I think it'd\nmake sense to propose changing how things work around this, but I just don't\nfeel like I have a good enough grasp for why we do some of the things we\ndo. And given there's not a lot of comments around it and some of the comments\nthat do exist are inconsistent with themselves, looking at the history seems\nlike the next best thing?\n\n\n> I actually think that I might agree with the substance of much of what\n> you're saying, but at the same time I don't think that you're defining\n> the problem in a way that's particularly helpful.\n\nLikely because the goals of the existing code aren't clear to me. So I don't\nfeel like I have a firm grasp...\n\n\n> I gather that you *don't* want to do anything about the\n> lazy_vacuum_heap_page behavior with setting empty pages all-visible (just\n> the lazy_scan_new_or_empty behavior).\n\nNot in the short-medium term, at least. In the long term I do suspect we might\nwant to do something about it. We have a *crapton* of contention in the FSM\nand caused by the FSM in bulk workloads. With my relation extension patch\ndisabling the FSM nearly doubles concurrent load speed.\n\nAt the same time, the fact that we might loose knowledge about all the\nexisting free space in case of promotion or crash and never rediscover that\nspace (because the pages are frozen), seems decidedly not great.\n\nI don't know what the path forward is, but it seems somewhat clear that we\nought to do something. I suspect having a not crash-safe FSM isn't really\nacceptable anymore - it probably is fine to not persist a *reduction* in free\nspace, but we can't permanently loose track of free space, no matter how many\ncrashes.\n\nI know that you strongly dislike the way the FSM works, although I forgot some\nof the details.\n\nOne thing I am fairly certain about is that using the FSM to tell other\nbackends about newly bulk extended pages is not a good solution, even though\nwe're stuck with it for the moment.\n\n\n> I actually agree that VACUUM is way too unconcerned about interfering\n> with concurrent activity in terms of how it manages free space in the\n> FSM. But this seems like just about the least important example of\n> that (outside the context of your RelationGetBufferForTuple work). The\n> really important case (that VACUUM gets wrong) all involve recently\n> dead tuples. But I don't think that you want to talk about that right\n> now. You want to talk about RelationGetBufferForTuple.\n\nThat's indeed the background. 
By now I'd also like to add a few comments\nexplaining why we do what we currently do, because I don't find all of it\nobvious.\n\n\n> > The reason I'm looking at this is that there's a lot of complexity at the\n> > bottom of RelationGetBufferForTuple(), related to needing to release the lock\n> > on the newly extended page and then needing to recheck whether there still is\n> > free space on the page. And that it's not complicated enough\n> > (c.f. INSERT_FROZEN doing visibilitymap_pin() with the page locked).\n> >\n> > As far as I can tell, if we went back to not entering new/empty pages into\n> > either VM or FSM, we could rely on the page not getting filled while just\n> > pinning, not locking it.\n>\n> What you're essentially arguing for is inventing a new rule that makes\n> the early lifetime of a page (what we currently call a PageIsEmpty()\n> page, and new pages) special, to avoid interference from VACUUM. I\n> have made similar arguments myself on quite a few occasions, so I'm\n> actually sympathetic. I just think that you should own it. And no, I'm\n> not just reflexively defending my work in 44fa84881fff; I actually\n> think that framing the problem as a case of restoring a previous\n> behavior is confusing and ahistorical. If there was a useful behavior\n> that was lost, then it was quite an accidental behavior all along. The\n> difference matters because now you have to reconcile what you're\n> saying with the lazy_vacuum_heap_page no-cleanup-lock behavior added\n> in 14.\n\nI really don't have a position to own yet, not on firm enough ground.\n\n\n> I think that you must be arguing for making the early lifetime of a\n> heap page special to VACUUM, since AFAICT you want to change VACUUM's\n> behavior with strictly PageIsEmpty/PageIsNew pages only -- and *not*\n> with pages that have one remaining LP_UNUSED item, but are otherwise\n> empty (which could be set all-visible/all-frozen in either the first\n> or second heap pass, even if we disabled the lazy_scan_new_or_empty()\n> behavior you're complaining about). You seem to want to distinguish\n> between very new pages (that also happen to be empty), and old pages\n> that happen to be empty. Right?\n\nI think that might be worthwhile, yes. The retry code in\nRelationGetBufferForTuple() is quite hairy and almost impossible to test. If\nwe can avoid the complexity, at a fairly bound cost (vacuum needing to\nre-visit new/empty pages if they're currently pinned), it'd imo be more that\nworth the price.\n\n\nBut perhaps the better path forward is to just bite the bullet and introduce a\nshared memory table of open files, that contains \"content size\" and \"physical\nsize\" for each relation. We've had a lot of things over the years that'd have\nbenefitted from that.\n\nTo address the RelationGetBufferForTuple() case, vacuum would simply not scan\nbeyond the \"content size\", so it'd never encounter the page that\nRelationGetBufferForTuple() is currently dealing with. And we'd not need to\nadd bulk extended pages into the FSM.\n\nThis would also, I think, be the basis for teaching vacuum to truncate\nrelations without acquiring an AEL - which IME causes a lot of operational\nissues.\n\nIt'd not do anything about loosing track of free space though :/.\n\n\n> > I also do wonder whether the different behaviour of PageIsEmpty() and\n> > PageIsNew() pages makes sense. The justification for not marking PageIsNew()\n> > pages as all-visible holds just as true for empty pages. 
There's just as much\n> > free space there.\n>\n> What you say here makes a lot of sense to me. I'm just not sure what\n> I'd prefer to do about it.\n\nYou and me both...\n\n\nI wonder what it'd take to make the FSM \"more crashsafe\". Leaving aside the\ncases of \"just extended\" new/empty pages: We already call\nXLogRecordPageWithFreeSpace() for HEAP2_VACUUM, HEAP2_VISIBLE,\nXLOG_HEAP2_PRUNE as well as insert/multi_insert/update. However, we don't set\nthe LSN of FSM pages. Which means we'll potentially flush dirty FSM buffers to\ndisk, before the corresponding WAL records make it to disk.\n\nISTM that even if we just used the LSN of the last WAL record for\nRecordPageWithFreeSpace()/LogRecordPageWithFreeSpace(), and perhaps also\ncalled LogRecordPageWithFreeSpace() during XLOG_HEAP2_FREEZE replay, we'd\n*drastically* shrink the chance of loosing track of free space. Obviously not\nfree, but ISTM that it can't add a lot of overhead.\n\nI think we can loose the contents of individual leaf FSM pages, e.g. due to\nchecksum failures - but perhaps we could address that on access, e.g. by\nremoving the frozen bit for the corresponding heap pages, which'd lead us to\neventually rediscover the free space?\n\nThat'd still leve us with upper level corruption, but I guess we could just\nrecompute those in some circumstances?\n\n\n\nHm - it'd sure be nice if pg_buffercache would show the LSN of valid pages...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 28 Mar 2023 15:29:50 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why mark empty pages all visible?"
},
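The shared-memory table of open relations floated in the preceding message is only an idea, but a sketch of the kind of entry it implies may help: vacuum would stop scanning at content_nblocks, freshly bulk-extended pages would live only in the gap up to physical_nblocks (so they would not need to be advertised via the FSM), and truncation would shrink physical_nblocks rather than take an access exclusive lock. Every name and field below is an assumption for illustration, not an existing backend structure.

    #include <stdint.h>

    typedef uint32_t BlockNumberSim;    /* stand-in for the backend's BlockNumber */

    /*
     * Hypothetical per-relation entry in a shared-memory table of open
     * relations, illustrating the "content size" vs. "physical size" split
     * discussed above.
     */
    typedef struct RelSizeEntry
    {
        uint32_t        db_oid;             /* database the relation lives in */
        uint32_t        rel_number;         /* relation's file number */
        BlockNumberSim  content_nblocks;    /* blocks scans and vacuum should visit */
        BlockNumberSim  physical_nblocks;   /* blocks allocated on disk, including
                                             * bulk-extended, not-yet-used ones */
    } RelSizeEntry;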
{
"msg_contents": "On Tue, Mar 28, 2023 at 3:29 PM Andres Freund <[email protected]> wrote:\n> Why is that? It's as clear a signal of concurrent activity on the buffer\n> you're going to get.\n\nNot immediately getting a cleanup lock in VACUUM is informative to the\nextent that you only care about what is happening that very\nnanosecond. If you look at which pages it happens to in detail, what\nyou seem to end up with is a whole bunch of noise, which (on its own)\ntells you exactly nothing about what VACUUM really ought to be doing\nwith those pages. In almost all cases we could get a cleanup lock by\nwaiting for one millisecond and retrying.\n\nI suspect that the cleanup lock thing might be a noisy, unreliable\nproxy for the condition that you actually care about, in the context\nof your work on relation extension. I bet there is a better signal to\ngo on, if you look for one.\n\n> It's interesting to understand *why* we are doing what we are. I think it'd\n> make sense to propose changing how things work around this, but I just don't\n> feel like I have a good enough grasp for why we do some of the things we\n> do. And given there's not a lot of comments around it and some of the comments\n> that do exist are inconsistent with themselves, looking at the history seems\n> like the next best thing?\n\nI think that I know why Heikki had no comments about PageIsEmpty()\npages when the VM first went in, back in 2009: because it was just so\nobvious that you'd treat them the same as any other initialized page,\nit didn't seem to warrant a comment at all. The difference between a\npage with 0 tuples and 1 tuple is the same difference between a page\nwith 1 tuple and a page with 2 tuples. A tiny difference (one extra\ntuple), of no particular consequence.\n\nI think that you don't see it that way now because you're focussed on\nthe hio.c view of things. That code looked very different back in\n2009, and in any case is very far removed from vacuumlazy.c.\n\nI can tell you what I was thinking of with lazy_scan_new_or_empty: I\nhate special cases. I will admit to being a zealot about it.\n\n> > I gather that you *don't* want to do anything about the\n> > lazy_vacuum_heap_page behavior with setting empty pages all-visible (just\n> > the lazy_scan_new_or_empty behavior).\n>\n> Not in the short-medium term, at least. In the long term I do suspect we might\n> want to do something about it. We have a *crapton* of contention in the FSM\n> and caused by the FSM in bulk workloads. With my relation extension patch\n> disabling the FSM nearly doubles concurrent load speed.\n\nI've seen the same effect myself. There is no question that that's a\nbig problem.\n\nI think that the problem is that the system doesn't have any firm\nunderstanding of pages as things that are owned by particular\ntransactions and/or backends, at least to a limited, scoped extent.\nIt's all really low level, when it actually needs to be high level and\ntake lots of context that comes from the application into account.\n\n> At the same time, the fact that we might loose knowledge about all the\n> existing free space in case of promotion or crash and never rediscover that\n> space (because the pages are frozen), seems decidedly not great.\n\nUnquestionably.\n\n> I don't know what the path forward is, but it seems somewhat clear that we\n> ought to do something. 
I suspect having a not crash-safe FSM isn't really\n> acceptable anymore - it probably is fine to not persist a *reduction* in free\n> space, but we can't permanently loose track of free space, no matter how many\n> crashes.\n\nStrongly agreed. It's a terrible false economy. If we bit the bullet\nand made relation extension and the FSM crash safe, we'd gain so much\nmore than we'd lose.\n\n> One thing I am fairly certain about is that using the FSM to tell other\n> backends about newly bulk extended pages is not a good solution, even though\n> we're stuck with it for the moment.\n\nStrongly agreed.\n\n> > I think that you must be arguing for making the early lifetime of a\n> > heap page special to VACUUM, since AFAICT you want to change VACUUM's\n> > behavior with strictly PageIsEmpty/PageIsNew pages only -- and *not*\n> > with pages that have one remaining LP_UNUSED item, but are otherwise\n> > empty (which could be set all-visible/all-frozen in either the first\n> > or second heap pass, even if we disabled the lazy_scan_new_or_empty()\n> > behavior you're complaining about). You seem to want to distinguish\n> > between very new pages (that also happen to be empty), and old pages\n> > that happen to be empty. Right?\n>\n> I think that might be worthwhile, yes. The retry code in\n> RelationGetBufferForTuple() is quite hairy and almost impossible to test. If\n> we can avoid the complexity, at a fairly bound cost (vacuum needing to\n> re-visit new/empty pages if they're currently pinned), it'd imo be more that\n> worth the price.\n\nShort term, you could explicitly say that PageIsEmpty() means that the\npage is qualitatively different to other empty pages that were left\nbehind by VACUUM's second phase.\n\n> But perhaps the better path forward is to just bite the bullet and introduce a\n> shared memory table of open files, that contains \"content size\" and \"physical\n> size\" for each relation. We've had a lot of things over the years that'd have\n> benefitted from that.\n\nStrongly agreed on this.\n\n> To address the RelationGetBufferForTuple() case, vacuum would simply not scan\n> beyond the \"content size\", so it'd never encounter the page that\n> RelationGetBufferForTuple() is currently dealing with. And we'd not need to\n> add bulk extended pages into the FSM.\n>\n> This would also, I think, be the basis for teaching vacuum to truncate\n> relations without acquiring an AEL - which IME causes a lot of operational\n> issues.\n\nI have said the same exact thing myself at least once. Again, it's a\nquestion of marrying high level and low level information. That is key\nhere.\n\n> I wonder what it'd take to make the FSM \"more crashsafe\". Leaving aside the\n> cases of \"just extended\" new/empty pages: We already call\n> XLogRecordPageWithFreeSpace() for HEAP2_VACUUM, HEAP2_VISIBLE,\n> XLOG_HEAP2_PRUNE as well as insert/multi_insert/update. However, we don't set\n> the LSN of FSM pages. Which means we'll potentially flush dirty FSM buffers to\n> disk, before the corresponding WAL records make it to disk.\n\nPart of the problem is that we remember the amount of free space in\neach heap page with way too much granularity. That adds to the\ncontention problem, because backends fight it out in a mad dash to\nlocate miniscule amounts of free space. 
Moreover, If there were (say)\nonly 5 or 7 distinct increments of free space that the FSM could\nrepresent for each heap page, then true crash safety becomes a lot\ncheaper.\n\nI'll say it again: high level and low level information need to be combined.\n\n> That'd still leve us with upper level corruption, but I guess we could just\n> recompute those in some circumstances?\n\nI think that we should just bite the bullet and come up with a way to\nmake it fully crash safe. No its, no buts.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 28 Mar 2023 16:22:33 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why mark empty pages all visible?"
}
] |
[
{
"msg_contents": "While working on [0], I was wondering why the collations ucs_basic and \nunicode are not in pg_collation.dat. I traced this back through \nhistory, and I think this was just lost in a game of telephone.\n\nThe initial commit for pg_collation.h (414c5a2ea6) has only the default \ncollation in pg_collation.h (pre .dat), with initdb handling everything \nelse. Over time, additional collations \"C\" and \"POSIX\" were moved to \npg_collation.h, and other logic was moved from initdb to \npg_import_system_collations(). But ucs_basic was untouched. Commit \n0b13b2a771 rearranged the relative order of operations in initdb and \nadded the current comment \"We don't want to pin these\", but looking at \nthe email[1], I think this was more a guess about the previous intent.\n\nI suggest we fix this now; see attached patch.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/1293e382-2093-a2bf-a397-c04e8f83d3c2%40enterprisedb.com\n\n[1]: https://www.postgresql.org/message-id/28195.1498172402%40sss.pgh.pa.us",
"msg_date": "Tue, 28 Mar 2023 12:19:57 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Move definition of standard collations from initdb to\n pg_collation.dat"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> While working on [0], I was wondering why the collations ucs_basic and \n> unicode are not in pg_collation.dat. I traced this back through \n> history, and I think this was just lost in a game of telephone.\n> The initial commit for pg_collation.h (414c5a2ea6) has only the default \n> collation in pg_collation.h (pre .dat), with initdb handling everything \n> else. Over time, additional collations \"C\" and \"POSIX\" were moved to \n> pg_collation.h, and other logic was moved from initdb to \n> pg_import_system_collations(). But ucs_basic was untouched. Commit \n> 0b13b2a771 rearranged the relative order of operations in initdb and \n> added the current comment \"We don't want to pin these\", but looking at \n> the email[1], I think this was more a guess about the previous intent.\n\nYeah, I was just loath to change the previous behavior in that\npatch. I can't see any strong reason not to pin these entries.\n\n> I suggest we fix this now; see attached patch.\n\nWhile we're here, do we want to adopt some other spelling of \"the\nroot locale\" than \"und\", in view of recent discoveries about the\ninstability of that on old ICU versions?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Mar 2023 07:33:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Move definition of standard collations from initdb to\n pg_collation.dat"
},
{
"msg_contents": "On 28.03.23 13:33, Tom Lane wrote:\n> While we're here, do we want to adopt some other spelling of \"the\n> root locale\" than \"und\", in view of recent discoveries about the\n> instability of that on old ICU versions?\n\nThat issue was fixed by 3b50275b12, so we can keep using the \"und\" spelling.\n\n\n",
"msg_date": "Tue, 28 Mar 2023 16:16:25 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Move definition of standard collations from initdb to\n pg_collation.dat"
}
] |
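For context on the thread above: the visible effect of defining these collations in pg_collation.dat rather than in initdb is that they get fixed, hand-assigned OIDs and become pinned. A quick illustrative query (not part of the patch) to inspect the entries:

    SELECT oid, collname, collprovider
      FROM pg_collation
     WHERE collname IN ('default', 'C', 'POSIX', 'ucs_basic', 'unicode');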
[
{
"msg_contents": "I have to run now so can't dissect it, but while running sqlsmith on the\nSQL/JSON patch after Justin's report, I got $SUBJECT in this query:\n\nMERGE INTO public.target_parted as target_0\n USING (select \n subq_0.c5 as c0, \n subq_0.c0 as c1, \n ref_0.a as c2, \n subq_0.c1 as c3, \n subq_0.c9 as c4, \n (select c from public.prt2_m_p3 limit 1 offset 1)\n as c5, \n subq_0.c8 as c6, \n ref_0.a as c7, \n subq_0.c7 as c8, \n subq_0.c1 as c9, \n pg_catalog.system_user() as c10\n from \n public.itest1 as ref_0\n left join (select \n ref_1.matches as c0, \n ref_1.typ as c1, \n ref_1.colname as c2, \n (select slotname from public.iface limit 1 offset 44)\n as c3, \n ref_1.matches as c4, \n ref_1.op as c5, \n ref_1.matches as c6, \n ref_1.value as c7, \n ref_1.op as c8, \n ref_1.op as c9, \n ref_1.typ as c10\n from \n public.brinopers_multi as ref_1\n where cast(null as polygon) <@ (select polygon from public.tab_core_types limit 1 offset 22)\n ) as subq_0\n on (cast(null as macaddr8) >= cast(null as macaddr8))\n where subq_0.c10 > subq_0.c2\n limit 49) as subq_1\n ON target_0.b = subq_1.c2 \n WHEN MATCHED \n AND (cast(null as box) |>> cast(null as box)) \n or (cast(null as lseg) ?-| (select s from public.lseg_tbl limit 1 offset 6)\n )\n THEN DELETE\n WHEN NOT MATCHED AND (EXISTS (\n select \n 21 as c0, \n subq_2.c0 as c1\n from \n public.itest14 as sample_0 tablesample system (3.6) \n inner join public.num_exp_sqrt as sample_1 tablesample bernoulli (0.3) \n on (cast(null as \"char\") <= cast(null as \"char\")),\n lateral (select \n sample_1.id as c0\n from \n public.a as ref_2\n where (cast(null as lseg) <@ cast(null as line)) \n or ((select b3 from public.bit_defaults limit 1 offset 80)\n <> (select b3 from public.bit_defaults limit 1 offset 4)\n )\n limit 158) as subq_2\n where (cast(null as name) !~ (select t from public.test_tsvector limit 1 offset 5)\n ) \n and ((select bool from public.tab_core_types limit 1 offset 61)\n < (select pg_catalog.bool_or(v) from public.rtest_view1)\n ))) \n or (18 is NULL)\n THEN INSERT VALUES ( pg_catalog.int4um(\n cast(public.func_with_bad_set() as int4)), 13)\n WHEN MATCHED AND ((24 is not NULL) \n or (true)) \n or (cast(null as \"timestamp\") <= cast(null as timestamptz))\n THEN UPDATE set \n b = target_0.b\n\n\nUgh.\n\nI got no more SQL/JSON related crashes so far.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No hay ausente sin culpa ni presente sin disculpa\" (Prov. francés)\n\n\n",
"msg_date": "Tue, 28 Mar 2023 13:22:48 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"variable not found in subplan target list\""
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I have to run now so can't dissect it, but while running sqlsmith on the\n> SQL/JSON patch after Justin's report, I got $SUBJECT in this query:\n\nReproduces in HEAD and v15 too (once you replace pg_catalog.system_user\nwith some function that exists in v15). So it's not the fault of the\nJSON patch, nor of my outer-join hacking which had been my first thought.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Mar 2023 07:38:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"variable not found in subplan target list\""
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I have to run now so can't dissect it, but while running sqlsmith on the\n> SQL/JSON patch after Justin's report, I got $SUBJECT in this query:\n\nI reduced this down to\n\nMERGE INTO public.target_parted as target_0\n USING public.itest1 as ref_0\n ON target_0.b = ref_0.a\n WHEN NOT MATCHED\n THEN INSERT VALUES (42, 13);\n\nThe critical moving part seems to just be that the MERGE target\nis a partitioned table ... but surely somebody tested that before?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Mar 2023 08:56:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"variable not found in subplan target list\""
},
{
"msg_contents": "I wrote:\n> I reduced this down to\n\n> MERGE INTO public.target_parted as target_0\n> USING public.itest1 as ref_0\n> ON target_0.b = ref_0.a\n> WHEN NOT MATCHED\n> THEN INSERT VALUES (42, 13);\n\n> The critical moving part seems to just be that the MERGE target\n> is a partitioned table ... but surely somebody tested that before?\n\nOh, it's not just any partitioned table:\n\nregression=# \\d+ target_parted\n Partitioned table \"public.target_parted\"\n Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description \n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | integer | | | | plain | | | \n b | integer | | | | plain | | | \nPartition key: LIST (a)\nNumber of partitions: 0\n\nThe planner is reducing the scan of target_parted to\na dummy scan, as is reasonable, but it forgets to\nprovide ctid as an output from that scan; then the\nparent join node is unhappy because it does have\na ctid output. So it looks like the problem is some\nshortcut we take while creating the dummy scan.\n\nI suppose that without the planner bug, this'd fail at\nruntime for lack of a partition to put (42,13) into.\nBecause of that, the case isn't really interesting\nfor production, which may explain the lack of reports.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Mar 2023 09:17:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"variable not found in subplan target list\""
},
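For readers without the regression database at hand, a self-contained sketch of the scenario described above; the source table is a stand-in for itest1, and the essential ingredient is a partitioned MERGE target with zero partitions. On builds with the bug this reportedly fails at plan time with the error in the subject; with the fix it instead fails at run time for lack of a matching partition:

    CREATE TABLE target_parted (a int, b int) PARTITION BY LIST (a);  -- no partitions attached
    CREATE TABLE merge_src (a int);
    MERGE INTO target_parted t
      USING merge_src s ON t.b = s.a
      WHEN NOT MATCHED THEN INSERT VALUES (42, 13);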
{
"msg_contents": "I wrote:\n> The planner is reducing the scan of target_parted to\n> a dummy scan, as is reasonable, but it forgets to\n> provide ctid as an output from that scan; then the\n> parent join node is unhappy because it does have\n> a ctid output. So it looks like the problem is some\n> shortcut we take while creating the dummy scan.\n\nOh, actually the problem is in distribute_row_identity_vars,\nwhich is supposed to handle this case, but it thinks it doesn't\nhave to back-fill the rel's reltarget. Wrong. Now that I see\nthe problem, I wonder if we can't reproduce a similar symptom\nwithout MERGE, which would mean that v14 has the issue too.\n\nThe attached seems to fix it, but I'm going to look for a\nnon-MERGE test case before pushing.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 28 Mar 2023 10:46:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"variable not found in subplan target list\""
},
{
"msg_contents": "So I'm back home and found a couple more weird errors in the log:\n\nMERGE INTO public.idxpart2 as target_0\n USING (select 1 \n from \n public.xmltest2 as ref_0\n inner join public.prt1_l_p1 as sample_0\n inner join fkpart4.droppk as ref_1\n on (sample_0.a = ref_1.a )\n on (true)\n limit 50) as subq_0\n left join information_schema.transforms as ref_2\n left join public.transition_table_status as sample_1\n on (ref_2.transform_type is not NULL)\n on (true)\n ON target_0.a = sample_1.level \nWHEN MATCHED\n THEN UPDATE set a = target_0.a;\nERROR: mismatching PartitionPruneInfo found at part_prune_index 0\nDETALLE: plan node relids (b 1), pruneinfo relids (b 36)\n\nThis one is probably my fault, will look later.\n\n\nselect \n pg_catalog.pg_stat_get_buf_fsync_backend() as c9 \n from \n public.tenk2 as ref_0\n where (ref_0.stringu2 is NULL) \n and (EXISTS (\n select 1 from fkpart5.fk1 as ref_1\n where pg_catalog.current_date() < (select pg_catalog.max(filler3) from public.mcv_lists))) ;\n\nERROR: subplan \"InitPlan 1 (returns $1)\" was not initialized\nCONTEXTO: parallel worker\n\n\nselect 1 as c0\n from \n (select \n subq_0.c9 as c5, \n subq_0.c8 as c9\n from \n public.iso8859_5_inputs as ref_0,\n lateral (select \n ref_1.ident as c2, \n ref_0.description as c8, \n ref_1.used_bytes as c9\n from \n pg_catalog.pg_backend_memory_contexts as ref_1\n where true\n ) as subq_0\n where subq_0.c2 is not NULL) as subq_1\n inner join pg_catalog.pg_class as sample_0\n on (subq_1.c5 = public.int8alias1in(\n cast(case when subq_1.c9 is not NULL then null end\n as cstring)))\n where true;\nERROR: could not find commutator for operator 53286\n\n\nThere were quite a few of those \"variable not found\" ones, both\nmentioning singular \"targetlist\" and others that said \"targetlists\". I\nreran them with your patch and they no longer error out, so I guess it's\nall the same bug.\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I must say, I am absolutely impressed with what pgsql's implementation of\nVALUES allows me to do. It's kind of ridiculous how much \"work\" goes away in\nmy code. Too bad I can't do this at work (Oracle 8/9).\" (Tom Allison)\n http://archives.postgresql.org/pgsql-general/2007-06/msg00016.php\n\n\n",
"msg_date": "Tue, 28 Mar 2023 20:19:30 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"variable not found in subplan target list\""
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> So I'm back home and found a couple more weird errors in the log:\n\n> ERROR: mismatching PartitionPruneInfo found at part_prune_index 0\n> DETALLE: plan node relids (b 1), pruneinfo relids (b 36)\n\nThis one reproduces for me.\n\n> select \n> pg_catalog.pg_stat_get_buf_fsync_backend() as c9 \n> from \n> public.tenk2 as ref_0\n> where (ref_0.stringu2 is NULL) \n> and (EXISTS (\n> select 1 from fkpart5.fk1 as ref_1\n> where pg_catalog.current_date() < (select pg_catalog.max(filler3) from public.mcv_lists))) ;\n\n> ERROR: subplan \"InitPlan 1 (returns $1)\" was not initialized\n> CONTEXTO: parallel worker\n\nHmph, I couldn't reproduce that, not even with other settings of\ndebug_parallel_query. Are you running it with non-default\nplanner parameters?\n\n> select 1 as c0\n> ...\n> ERROR: could not find commutator for operator 53286\n\nI got a slightly different error:\n\nERROR: missing support function 1(195306,195306) in opfamily 1976\n\nwhere\n\nregression=# select 195306::regtype; \n regtype \n------------\n int8alias1\n(1 row)\n\nSo that one is related to the intentionally-somewhat-broken\nint8 opclass configuration that equivclass.sql leaves behind.\nI've always had mixed emotions about whether leaving that\nset up that way was a good idea or not. In principle nothing\nreally bad should happen, but it can lead to confusing errors\nlike this one. Maybe it'd be better to roll that back?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Mar 2023 14:39:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"variable not found in subplan target list\""
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 3:39 AM Tom Lane <[email protected]> wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > So I'm back home and found a couple more weird errors in the log:\n>\n> > ERROR: mismatching PartitionPruneInfo found at part_prune_index 0\n> > DETALLE: plan node relids (b 1), pruneinfo relids (b 36)\n>\n> This one reproduces for me.\n\nI've looked into this one and the attached patch fixes it for me.\nTurns out set_plan_refs()'s idea of when the entries from\nPlannerInfo.partPruneInfos are transferred into\nPlannerGlobal.partPruneInfo was wrong.\n\nThough, I wonder if we need to keep ec386948948 that introduced the\nnotion of part_prune_index around if the project that needed it [1]\nhas moved on to an entirely different approach altogether, one that\ndoesn't require hacking up the pruning code.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n [1] https://commitfest.postgresql.org/42/3478/",
"msg_date": "Wed, 29 Mar 2023 17:28:22 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"variable not found in subplan target list\""
},
{
"msg_contents": "On 2023-Mar-29, Amit Langote wrote:\n\n> On Wed, Mar 29, 2023 at 3:39 AM Tom Lane <[email protected]> wrote:\n> > Alvaro Herrera <[email protected]> writes:\n> > > So I'm back home and found a couple more weird errors in the log:\n> >\n> > > ERROR: mismatching PartitionPruneInfo found at part_prune_index 0\n> > > DETALLE: plan node relids (b 1), pruneinfo relids (b 36)\n> >\n> > This one reproduces for me.\n> \n> I've looked into this one and the attached patch fixes it for me.\n> Turns out set_plan_refs()'s idea of when the entries from\n> PlannerInfo.partPruneInfos are transferred into\n> PlannerGlobal.partPruneInfo was wrong.\n\nThanks for the patch. I've pushed it to github for CI testing, and if\nthere are no problems I'll put it in.\n\n> Though, I wonder if we need to keep ec386948948 that introduced the\n> notion of part_prune_index around if the project that needed it [1]\n> has moved on to an entirely different approach altogether, one that\n> doesn't require hacking up the pruning code.\n\nHmm, that's indeed tempting.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 30 Mar 2023 12:53:25 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"variable not found in subplan target list\""
},
{
"msg_contents": "Hi Amit,\n\nOn 2023-Mar-30, Alvaro Herrera wrote:\n\n> On 2023-Mar-29, Amit Langote wrote:\n\n> > Though, I wonder if we need to keep ec386948948 that introduced the\n> > notion of part_prune_index around if the project that needed it [1]\n> > has moved on to an entirely different approach altogether, one that\n> > doesn't require hacking up the pruning code.\n> \n> Hmm, that's indeed tempting.\n\nWe have an open item about this, and I see no reason not to do it. I\nchecked, and putting things back is just a matter of reverting\n589bb816499e and ec386948948, cleaning up some trivial pgindent-induced\nconflicts, and bumping catversion once more. Would you like to do that\nyourself, or do you prefer that I do it? Ideally, we'd do it before\nbeta1.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 2 May 2023 19:54:09 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"variable not found in subplan target list\""
},
{
"msg_contents": "On 2023-May-02, Alvaro Herrera wrote:\n\n> We have an open item about this, and I see no reason not to do it. I\n> checked, and putting things back is just a matter of reverting\n> 589bb816499e and ec386948948, cleaning up some trivial pgindent-induced\n> conflicts, and bumping catversion once more. Would you like to do that\n> yourself, or do you prefer that I do it? Ideally, we'd do it before\n> beta1.\n\nI have pushed the revert now.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 4 May 2023 12:44:01 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"variable not found in subplan target list\""
},
{
"msg_contents": "Hi Alvaro,\n\nOn Thu, May 4, 2023 at 19:44 Alvaro Herrera <[email protected]> wrote:\n\n> On 2023-May-02, Alvaro Herrera wrote:\n>\n> > We have an open item about this, and I see no reason not to do it. I\n> > checked, and putting things back is just a matter of reverting\n> > 589bb816499e and ec386948948, cleaning up some trivial pgindent-induced\n> > conflicts, and bumping catversion once more. Would you like to do that\n> > yourself, or do you prefer that I do it? Ideally, we'd do it before\n> > beta1.\n>\n> I have pushed the revert now.\n\n\nThanks for taking care of it.\n\n(Wouldn’t have been able to get to it till Monday myself.)\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nHi Alvaro,On Thu, May 4, 2023 at 19:44 Alvaro Herrera <[email protected]> wrote:On 2023-May-02, Alvaro Herrera wrote:\n\n> We have an open item about this, and I see no reason not to do it. I\n> checked, and putting things back is just a matter of reverting\n> 589bb816499e and ec386948948, cleaning up some trivial pgindent-induced\n> conflicts, and bumping catversion once more. Would you like to do that\n> yourself, or do you prefer that I do it? Ideally, we'd do it before\n> beta1.\n\nI have pushed the revert now.Thanks for taking care of it.(Wouldn’t have been able to get to it till Monday myself.)-- Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 4 May 2023 22:55:39 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"variable not found in subplan target list\""
}
] |
[
{
"msg_contents": "Hello,\n\nAttached patch introduces a function pg_column_toast_chunk_id\nthat returns a chunk ID of a TOASTed value.\n\nRecently, one of our clients needed a way to show which columns\nare actually TOASTed because they would like to know how much\nupdates on the original table affects to its toast table\nspecifically with regard to auto VACUUM. We could not find a\nfunction for this purpose in the current PostgreSQL, so I would\nlike propose pg_column_toast_chunk_id.\n\nThis function returns a chunk ID of a TOASTed value, or NULL\nif the value is not TOASTed. Here is an example;\n\npostgres=# \\d val\n Table \"public.val\"\n Column | Type | Collation | Nullable | Default \n--------+------+-----------+----------+---------\n t | text | | | \n\npostgres=# select length(t), pg_column_size(t), pg_column_compression(t), pg_column_toast_chunk_id(t), tableoid from val;\n length | pg_column_size | pg_column_compression | pg_column_toast_chunk_id | tableoid \n--------+----------------+-----------------------+--------------------------+----------\n 3 | 4 | | | 16388\n 3000 | 46 | pglz | | 16388\n 32000 | 413 | pglz | | 16388\n 305 | 309 | | | 16388\n 64000 | 64000 | | 16393 | 16388\n(5 rows)\n\npostgres=# select chunk_id, chunk_seq from pg_toast.pg_toast_16388;\n chunk_id | chunk_seq \n----------+-----------\n 16393 | 0\n 16393 | 1\n 16393 | 2\n (snip)\n 16393 | 30\n 16393 | 31\n 16393 | 32\n(33 rows)\n\nThis function is also useful to identify a problematic row when\nan error like \n \"ERROR: unexpected chunk number ... (expected ...) for toast value\"\noccurs.\n\nThe patch is a just a concept patch and not including documentation\nand tests.\n\nWhat do you think about this feature?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Wed, 29 Mar 2023 10:55:07 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_column_toast_chunk_id: a function to get a chunk ID of a TOASTed\n value"
},
{
"msg_contents": "Hi!\n\nI like the idea of having a standard function which shows a TOAST value ID\nfor a row. I've used my own to handle TOAST errors. Just, maybe, more\ncorrect\nname would be \"...value_id\", because you actually retrieve valueid field\nfrom the TOAST pointer, and chunk ID consists of valueid + chunk_seq.\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!I like the idea of having a standard function which shows a TOAST value IDfor a row. I've used my own to handle TOAST errors. Just, maybe, more correctname would be \"...value_id\", because you actually retrieve valueid fieldfrom the TOAST pointer, and chunk ID consists of valueid + chunk_seq.-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Wed, 5 Jul 2023 17:49:20 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "Hi Nikita,\n\nOn Wed, 5 Jul 2023 17:49:20 +0300\nNikita Malakhov <[email protected]> wrote:\n\n> Hi!\n> \n> I like the idea of having a standard function which shows a TOAST value ID\n> for a row. I've used my own to handle TOAST errors. Just, maybe, more\n> correct\n> name would be \"...value_id\", because you actually retrieve valueid field\n> from the TOAST pointer, and chunk ID consists of valueid + chunk_seq.\n\nThank you for your review!\n\nAlthough, the retrieved field is \"va_valueid\" and it is called \"value ID\" in the\ncode, I chose the name \"..._chunk_id\" because I found the description in the\ndocumentation as followings:\n\n-------------\nEvery TOAST table has the columns chunk_id (an OID identifying the particular TOASTed value), chunk_seq (a sequence number for the chunk within its value), and chunk_data (the actual data of the chunk). A unique index on chunk_id and chunk_seq provides fast retrieval of the values. A pointer datum representing an out-of-line on-disk TOASTed value therefore needs to store the OID of the TOAST table in which to look and the OID of the specific value (its chunk_id)\n-------------\nhttps://www.postgresql.org/docs/devel/storage-toast.html\n\nHere, chunk_id defined separately from chunk_seq. Therefore, I wonder \npg_column_toast_chunk_id would be ok. However, I don't insist on this\nand I would be happy to change it if the other name is more natural for users.\n\nRegards,\nYugo Nagata\n\n> \n> -- \n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> The Russian Postgres Company\n> https://postgrespro.ru/\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Fri, 7 Jul 2023 17:21:36 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Fri, 7 Jul 2023 17:21:36 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> Hi Nikita,\n> \n> On Wed, 5 Jul 2023 17:49:20 +0300\n> Nikita Malakhov <[email protected]> wrote:\n> \n> > Hi!\n> > \n> > I like the idea of having a standard function which shows a TOAST value ID\n> > for a row. I've used my own to handle TOAST errors. Just, maybe, more\n> > correct\n> > name would be \"...value_id\", because you actually retrieve valueid field\n> > from the TOAST pointer, and chunk ID consists of valueid + chunk_seq.\n> \n> Thank you for your review!\n> \n> Although, the retrieved field is \"va_valueid\" and it is called \"value ID\" in the\n> code, I chose the name \"..._chunk_id\" because I found the description in the\n> documentation as followings:\n> \n> -------------\n> Every TOAST table has the columns chunk_id (an OID identifying the particular TOASTed value), chunk_seq (a sequence number for the chunk within its value), and chunk_data (the actual data of the chunk). A unique index on chunk_id and chunk_seq provides fast retrieval of the values. A pointer datum representing an out-of-line on-disk TOASTed value therefore needs to store the OID of the TOAST table in which to look and the OID of the specific value (its chunk_id)\n> -------------\n> https://www.postgresql.org/docs/devel/storage-toast.html\n> \n> Here, chunk_id defined separately from chunk_seq. Therefore, I wonder \n> pg_column_toast_chunk_id would be ok. However, I don't insist on this\n> and I would be happy to change it if the other name is more natural for users.\n\nI attached v2 patch that contains the documentation fix.\n\nRegards,\nYugo Nagata\n\n> \n> Regards,\n> Yugo Nagata\n> \n> > \n> > -- \n> > Regards,\n> > Nikita Malakhov\n> > Postgres Professional\n> > The Russian Postgres Company\n> > https://postgrespro.ru/\n> \n> \n> -- \n> Yugo NAGATA <[email protected]>\n> \n> \n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Fri, 7 Jul 2023 17:30:15 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "Hello\n\nMy +1 to have such a function in core or in some contrib at least (pg_surgery? amcheck?).\n\nIn the past, more than once I needed to find a damaged tuple knowing only chunk id and toastrelid. This feature would help a lot.\n\nregards, Sergei\n\n\n",
"msg_date": "Mon, 10 Jul 2023 21:39:41 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
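As a concrete illustration of the use case Sergei mentions, the proposed function lets one map a chunk_id taken from an "unexpected chunk number ... for toast value" error back to the heap row that owns it. Table and column names below are hypothetical, and 16393 stands in for the value reported by the error:

    -- find the heap row whose TOASTed column points at the damaged value
    SELECT ctid
      FROM my_table
     WHERE pg_column_toast_chunk_id(big_col) = 16393;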
{
"msg_contents": "minor doc issues.\nReturns the chunk id of the TOASTed value, or NULL if the value is not TOASTed.\nShould it be \"chunk_id\"?\n\nyou may place it after pg_create_logical_replication_slot entry to\nmake it look like alphabetical order.\n\nThere is no test. maybe we can add following to src/test/regress/sql/misc.sql\ncreate table val(t text);\nINSERT into val(t) SELECT string_agg(\n chr((ascii('B') + round(random() * 25)) :: integer),'')\nFROM generate_series(1,2500);\nselect pg_column_toast_chunk_id(t) is not null from val;\ndrop table val;\n\n\n",
"msg_date": "Mon, 6 Nov 2023 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Mon, Nov 6, 2023 at 8:00 AM jian he <[email protected]> wrote:\n>\n> minor doc issues.\n> Returns the chunk id of the TOASTed value, or NULL if the value is not TOASTed.\n> Should it be \"chunk_id\"?\n>\n> you may place it after pg_create_logical_replication_slot entry to\n> make it look like alphabetical order.\n>\n> There is no test. maybe we can add following to src/test/regress/sql/misc.sql\n> create table val(t text);\n> INSERT into val(t) SELECT string_agg(\n> chr((ascii('B') + round(random() * 25)) :: integer),'')\n> FROM generate_series(1,2500);\n> select pg_column_toast_chunk_id(t) is not null from val;\n> drop table val;\n\nHi\nthe main C function (pg_column_toast_chunk_id) I didn't change.\nI added tests as mentioned above.\ntests put it on src/test/regress/sql/misc.sql, i hope that's fine.\nI placed pg_column_toast_chunk_id in \"Table 9.99. Database Object\nLocation Functions\" (below Table 9.98. Database Object Size\nFunctions).",
"msg_date": "Tue, 2 Jan 2024 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Tue, 2 Jan 2024 08:00:00 +0800\njian he <[email protected]> wrote:\n\n> On Mon, Nov 6, 2023 at 8:00 AM jian he <[email protected]> wrote:\n> >\n> > minor doc issues.\n> > Returns the chunk id of the TOASTed value, or NULL if the value is not TOASTed.\n> > Should it be \"chunk_id\"?\n\nThank you for your suggestion. As you pointed out, it is called \"chunk_id\" \nin the documentation, so I rewrote it and also added a link to the section\nwhere the TOAST table structure is explained.\n\n> > you may place it after pg_create_logical_replication_slot entry to\n> > make it look like alphabetical order.\n\nI've been thinking about where we should place the function in the doc,\nand I decided place it in the table of Database Object Size Functions\nbecause I think pg_column_toast_chunk_id also would assist understanding\nthe result of size functions as similar to pg_column_compression; that is,\nthose function can explain why a large value in size could be stored in\na column.\n\n> > There is no test. maybe we can add following to src/test/regress/sql/misc.sql\n> > create table val(t text);\n> > INSERT into val(t) SELECT string_agg(\n> > chr((ascii('B') + round(random() * 25)) :: integer),'')\n> > FROM generate_series(1,2500);\n> > select pg_column_toast_chunk_id(t) is not null from val;\n> > drop table val;\n\nThank you for the test proposal. However, if we add a test, I want\nto check that the chunk_id returned by the function exists in the\nTOAST table, and that it returns NULL if the values is not TOASTed.\nFor the purpose, I wrote a test using a dynamic SQL since the table\nname of the TOAST table have to be generated from the main table's OID.\n\n> Hi\n> the main C function (pg_column_toast_chunk_id) I didn't change.\n> I added tests as mentioned above.\n> tests put it on src/test/regress/sql/misc.sql, i hope that's fine.\n> I placed pg_column_toast_chunk_id in \"Table 9.99. Database Object\n> Location Functions\" (below Table 9.98. Database Object Size\n> Functions).\n\nI could not find any change in your patch from my previous patch.\nMaybe, you attached wrong file. I attached a patch updated based\non your review, including the documentation fixes and a test.\nWhat do you think about this it? \n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Fri, 26 Jan 2024 09:42:37 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 8:42 AM Yugo NAGATA <[email protected]> wrote:\n>\n> On Tue, 2 Jan 2024 08:00:00 +0800\n> jian he <[email protected]> wrote:\n>\n> > On Mon, Nov 6, 2023 at 8:00 AM jian he <[email protected]> wrote:\n> > >\n> > > minor doc issues.\n> > > Returns the chunk id of the TOASTed value, or NULL if the value is not TOASTed.\n> > > Should it be \"chunk_id\"?\n>\n> Thank you for your suggestion. As you pointed out, it is called \"chunk_id\"\n> in the documentation, so I rewrote it and also added a link to the section\n> where the TOAST table structure is explained.\n>\n> > > you may place it after pg_create_logical_replication_slot entry to\n> > > make it look like alphabetical order.\n>\n> I've been thinking about where we should place the function in the doc,\n> and I decided place it in the table of Database Object Size Functions\n> because I think pg_column_toast_chunk_id also would assist understanding\n> the result of size functions as similar to pg_column_compression; that is,\n> those function can explain why a large value in size could be stored in\n> a column.\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 210c7c0b02..2d82331323 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -28078,6 +28078,23 @@ postgres=# SELECT '0/0'::pg_lsn +\npd.segment_number * ps.setting::int + :offset\n </para></entry>\n </row>\n\n+ <row>\n+ <entry role=\"func_table_entry\"><para role=\"func_signature\">\n+ <indexterm>\n+ <primary>pg_column_toast_chunk_id</primary>\n+ </indexterm>\n+ <function>pg_column_toast_chunk_id</function> ( <type>\"any\"</type> )\n+ <returnvalue>oid</returnvalue>\n+ </para>\n+ <para>\n+ Shows the <structfield>chunk_id</structfield> of an on-disk\n+ <acronym>TOAST</acronym>ed value. Returns <literal>NULL</literal>\n+ if the value is un-<acronym>TOAST</acronym>ed or not on-disk.\n+ See <xref linkend=\"storage-toast-ondisk\"/> for details about\n+ <acronym>TOAST</acronym>.\n+ </para></entry>\n+ </row>\n\nv3 patch will place it on `Table 9.97. Replication Management Functions`\nI agree with you. it should be placed after pg_column_compression. but\napply your patch, it will be at\n\n\n> > > There is no test. maybe we can add following to src/test/regress/sql/misc.sql\n> > > create table val(t text);\n> > > INSERT into val(t) SELECT string_agg(\n> > > chr((ascii('B') + round(random() * 25)) :: integer),'')\n> > > FROM generate_series(1,2500);\n> > > select pg_column_toast_chunk_id(t) is not null from val;\n> > > drop table val;\n>\n> Thank you for the test proposal. However, if we add a test, I want\n> to check that the chunk_id returned by the function exists in the\n> TOAST table, and that it returns NULL if the values is not TOASTed.\n> For the purpose, I wrote a test using a dynamic SQL since the table\n> name of the TOAST table have to be generated from the main table's OID.\n>\n> > Hi\n> > the main C function (pg_column_toast_chunk_id) I didn't change.\n> > I added tests as mentioned above.\n> > tests put it on src/test/regress/sql/misc.sql, i hope that's fine.\n> > I placed pg_column_toast_chunk_id in \"Table 9.99. Database Object\n> > Location Functions\" (below Table 9.98. Database Object Size\n> > Functions).\n>\n> I could not find any change in your patch from my previous patch.\n> Maybe, you attached wrong file. 
I attached a patch updated based\n> on your review, including the documentation fixes and a test.\n> What do you think about this it?\n>\n\nsorry, I had attached the wrong file.\nbut your v3 also has no tests, documentation didn't fix.\nmaybe you also attached the wrong file too?\n\n\n",
"msg_date": "Tue, 30 Jan 2024 12:12:31 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Tue, 30 Jan 2024 12:12:31 +0800\njian he <[email protected]> wrote:\n\n> On Fri, Jan 26, 2024 at 8:42 AM Yugo NAGATA <[email protected]> wrote:\n> >\n> > On Tue, 2 Jan 2024 08:00:00 +0800\n> > jian he <[email protected]> wrote:\n> >\n> > > On Mon, Nov 6, 2023 at 8:00 AM jian he <[email protected]> wrote:\n> > > >\n> > > > minor doc issues.\n> > > > Returns the chunk id of the TOASTed value, or NULL if the value is not TOASTed.\n> > > > Should it be \"chunk_id\"?\n> >\n> > Thank you for your suggestion. As you pointed out, it is called \"chunk_id\"\n> > in the documentation, so I rewrote it and also added a link to the section\n> > where the TOAST table structure is explained.\n> >\n> > > > you may place it after pg_create_logical_replication_slot entry to\n> > > > make it look like alphabetical order.\n> >\n> > I've been thinking about where we should place the function in the doc,\n> > and I decided place it in the table of Database Object Size Functions\n> > because I think pg_column_toast_chunk_id also would assist understanding\n> > the result of size functions as similar to pg_column_compression; that is,\n> > those function can explain why a large value in size could be stored in\n> > a column.\n> \n> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> index 210c7c0b02..2d82331323 100644\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -28078,6 +28078,23 @@ postgres=# SELECT '0/0'::pg_lsn +\n> pd.segment_number * ps.setting::int + :offset\n> </para></entry>\n> </row>\n> \n> + <row>\n> + <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> + <indexterm>\n> + <primary>pg_column_toast_chunk_id</primary>\n> + </indexterm>\n> + <function>pg_column_toast_chunk_id</function> ( <type>\"any\"</type> )\n> + <returnvalue>oid</returnvalue>\n> + </para>\n> + <para>\n> + Shows the <structfield>chunk_id</structfield> of an on-disk\n> + <acronym>TOAST</acronym>ed value. Returns <literal>NULL</literal>\n> + if the value is un-<acronym>TOAST</acronym>ed or not on-disk.\n> + See <xref linkend=\"storage-toast-ondisk\"/> for details about\n> + <acronym>TOAST</acronym>.\n> + </para></entry>\n> + </row>\n> \n> v3 patch will place it on `Table 9.97. Replication Management Functions`\n> I agree with you. it should be placed after pg_column_compression. but\n> apply your patch, it will be at\n> \n> \n> > > > There is no test. maybe we can add following to src/test/regress/sql/misc.sql\n> > > > create table val(t text);\n> > > > INSERT into val(t) SELECT string_agg(\n> > > > chr((ascii('B') + round(random() * 25)) :: integer),'')\n> > > > FROM generate_series(1,2500);\n> > > > select pg_column_toast_chunk_id(t) is not null from val;\n> > > > drop table val;\n> >\n> > Thank you for the test proposal. However, if we add a test, I want\n> > to check that the chunk_id returned by the function exists in the\n> > TOAST table, and that it returns NULL if the values is not TOASTed.\n> > For the purpose, I wrote a test using a dynamic SQL since the table\n> > name of the TOAST table have to be generated from the main table's OID.\n> >\n> > > Hi\n> > > the main C function (pg_column_toast_chunk_id) I didn't change.\n> > > I added tests as mentioned above.\n> > > tests put it on src/test/regress/sql/misc.sql, i hope that's fine.\n> > > I placed pg_column_toast_chunk_id in \"Table 9.99. Database Object\n> > > Location Functions\" (below Table 9.98. 
Database Object Size\n> > > Functions).\n> >\n> > I could not find any change in your patch from my previous patch.\n> > Maybe, you attached wrong file. I attached a patch updated based\n> > on your review, including the documentation fixes and a test.\n> > What do you think about this it?\n> >\n> \n> sorry, I had attached the wrong file.\n> but your v3 also has no tests, documentation didn't fix.\n> maybe you also attached the wrong file too?\n> \n\nSorry, I also attached a wrong file.\nAttached is the correct one.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Tue, 30 Jan 2024 13:34:59 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 12:35 PM Yugo NAGATA <[email protected]> wrote:\n>\n>\n> Sorry, I also attached a wrong file.\n> Attached is the correct one.\nI think you attached the wrong file again. also please name it as v4.\n\n\n",
"msg_date": "Tue, 30 Jan 2024 13:47:45 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Tue, 30 Jan 2024 13:47:45 +0800\njian he <[email protected]> wrote:\n\n> On Tue, Jan 30, 2024 at 12:35 PM Yugo NAGATA <[email protected]> wrote:\n> >\n> >\n> > Sorry, I also attached a wrong file.\n> > Attached is the correct one.\n> I think you attached the wrong file again. also please name it as v4.\n\nOpps..sorry, again.\nI attached the correct one, v4.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Tue, 30 Jan 2024 14:56:32 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 1:56 PM Yugo NAGATA <[email protected]> wrote:\n>\n> I attached the correct one, v4.\n>\n\n+-- Test pg_column_toast_chunk_id:\n+-- Check if the returned chunk_id exists in the TOAST table\n+CREATE TABLE test_chunk_id (v1 text, v2 text);\n+INSERT INTO test_chunk_id VALUES (\n+ repeat('0123456789', 10), -- v1: small enough not to be TOASTed\n+ repeat('0123456789', 100000)); -- v2: large enough to be TOASTed\n\nselect pg_size_pretty(100000::bigint);\nreturn 98kb.\n\nI think this is just too much, maybe I didn't consider the\nimplications of compression.\nAnyway, I refactored the tests, making the toast value size be small.\n\nI aslo refactor the doc.\npg_column_toast_chunk_id entry will be right after pg_column_compression entry.\nYou can check the screenshot.",
"msg_date": "Tue, 30 Jan 2024 14:57:20 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Tue, 30 Jan 2024 14:57:20 +0800\njian he <[email protected]> wrote:\n\n> On Tue, Jan 30, 2024 at 1:56 PM Yugo NAGATA <[email protected]> wrote:\n> >\n> > I attached the correct one, v4.\n> >\n> \n> +-- Test pg_column_toast_chunk_id:\n> +-- Check if the returned chunk_id exists in the TOAST table\n> +CREATE TABLE test_chunk_id (v1 text, v2 text);\n> +INSERT INTO test_chunk_id VALUES (\n> + repeat('0123456789', 10), -- v1: small enough not to be TOASTed\n> + repeat('0123456789', 100000)); -- v2: large enough to be TOASTed\n> \n> select pg_size_pretty(100000::bigint);\n> return 98kb.\n> \n> I think this is just too much, maybe I didn't consider the\n> implications of compression.\n> Anyway, I refactored the tests, making the toast value size be small.\n\nActually the data is compressed and the size is much smaller,\nbut I agree with you it is better not to generate large data unnecessarily.\nI rewrote the test to disallow compression in the toast data using \n\"ALTER TABLE ... SET STORAGE EXTERNAL\". In this case, any text larger\nthan 2k will be TOASTed on disk without compression, and it makes the\ntest simple, not required to use string_agg.\n> \n> I aslo refactor the doc.\n> pg_column_toast_chunk_id entry will be right after pg_column_compression entry.\n> You can check the screenshot.\n\nI found the document order was not changed between my patch and yours.\nIn both, pg_column_toast_chunk_id entry is right after \npg_column_compression.\n\nHere is a updated patch, v6.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Thu, 1 Feb 2024 13:45:24 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Thu, Feb 1, 2024 at 12:45 PM Yugo NAGATA <[email protected]> wrote:\n>\n> Here is a updated patch, v6.\n\nv6 patch looks good.\n\n\n",
"msg_date": "Thu, 1 Feb 2024 17:59:56 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Thu, 1 Feb 2024 17:59:56 +0800\njian he <[email protected]> wrote:\n\n> On Thu, Feb 1, 2024 at 12:45 PM Yugo NAGATA <[email protected]> wrote:\n> >\n> > Here is a updated patch, v6.\n> \n> v6 patch looks good.\n\nThank you for your review and updating the status to RwC!\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Mon, 5 Feb 2024 16:28:23 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Mon, Feb 05, 2024 at 04:28:23PM +0900, Yugo NAGATA wrote:\n> On Thu, 1 Feb 2024 17:59:56 +0800\n> jian he <[email protected]> wrote:\n>> v6 patch looks good.\n> \n> Thank you for your review and updating the status to RwC!\n\nI think this one needs a (pretty trivial) rebase. I spent a few minutes\ntesting it out and looking at the code, and it seems generally reasonable\nto me. Do you think it's worth adding something like a\npg_column_toast_num_chunks() function that returns the number of chunks for\nthe TOASTed value, too?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Mar 2024 16:56:17 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Thu, 7 Mar 2024 16:56:17 -0600\nNathan Bossart <[email protected]> wrote:\n\n> On Mon, Feb 05, 2024 at 04:28:23PM +0900, Yugo NAGATA wrote:\n> > On Thu, 1 Feb 2024 17:59:56 +0800\n> > jian he <[email protected]> wrote:\n> >> v6 patch looks good.\n> > \n> > Thank you for your review and updating the status to RwC!\n> \n> I think this one needs a (pretty trivial) rebase. I spent a few minutes\n> testing it out and looking at the code, and it seems generally reasonable\n\nThank you for your review.\nI've attached a rebased patch.\n\n> to me. Do you think it's worth adding something like a\n> pg_column_toast_num_chunks() function that returns the number of chunks for\n> the TOASTed value, too?\n\nIf we want to know the number of chunks of a specified chunk_id,\nwe can get this by the following query.\n\npostgres=# SELECT id, (SELECT count(*) FROM pg_toast.pg_toast_16384 WHERE chunk_id = id) \n FROM (SELECT pg_column_toast_chunk_id(v) AS id FROM t);\n\n id | count \n-------+-------\n 16389 | 3\n 16390 | 287\n(2 rows)\n\n\nHowever, if there are needs for getting such information in a\nsimpler way, it might be worth making a new function.\n\nRegards,\nYugo Nagata\n\n> -- \n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Fri, 8 Mar 2024 15:31:55 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Fri, Mar 08, 2024 at 03:31:55PM +0900, Yugo NAGATA wrote:\n> On Thu, 7 Mar 2024 16:56:17 -0600\n> Nathan Bossart <[email protected]> wrote:\n>> to me. Do you think it's worth adding something like a\n>> pg_column_toast_num_chunks() function that returns the number of chunks for\n>> the TOASTed value, too?\n> \n> If we want to know the number of chunks of a specified chunk_id,\n> we can get this by the following query.\n> \n> postgres=# SELECT id, (SELECT count(*) FROM pg_toast.pg_toast_16384 WHERE chunk_id = id) \n> FROM (SELECT pg_column_toast_chunk_id(v) AS id FROM t);\n\nGood point. Overall, I think this patch is in decent shape, so I'll aim to\ncommit it sometime next week.\n\n> +{ oid => '8393', descr => 'chunk ID of on-disk TOASTed value',\n> + proname => 'pg_column_toast_chunk_id', provolatile => 's', prorettype => 'oid',\n> + proargtypes => 'any', prosrc => 'pg_column_toast_chunk_id' },\n\nNote to self: this change requires a catversion bump.\n\n> +INSERT INTO test_chunk_id(v1,v2)\n> + VALUES (repeat('x', 1), repeat('x', 2048));\n\nIs this guaranteed to be TOASTed for all possible page sizes?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 8 Mar 2024 16:17:58 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Fri, 8 Mar 2024 16:17:58 -0600\nNathan Bossart <[email protected]> wrote:\n\n> On Fri, Mar 08, 2024 at 03:31:55PM +0900, Yugo NAGATA wrote:\n> > On Thu, 7 Mar 2024 16:56:17 -0600\n> > Nathan Bossart <[email protected]> wrote:\n> >> to me. Do you think it's worth adding something like a\n> >> pg_column_toast_num_chunks() function that returns the number of chunks for\n> >> the TOASTed value, too?\n> > \n> > If we want to know the number of chunks of a specified chunk_id,\n> > we can get this by the following query.\n> > \n> > postgres=# SELECT id, (SELECT count(*) FROM pg_toast.pg_toast_16384 WHERE chunk_id = id) \n> > FROM (SELECT pg_column_toast_chunk_id(v) AS id FROM t);\n> \n> Good point. Overall, I think this patch is in decent shape, so I'll aim to\n> commit it sometime next week.\n\nThank you.\n\n> \n> > +{ oid => '8393', descr => 'chunk ID of on-disk TOASTed value',\n> > + proname => 'pg_column_toast_chunk_id', provolatile => 's', prorettype => 'oid',\n> > + proargtypes => 'any', prosrc => 'pg_column_toast_chunk_id' },\n> \n> Note to self: this change requires a catversion bump.\n> \n> > +INSERT INTO test_chunk_id(v1,v2)\n> > + VALUES (repeat('x', 1), repeat('x', 2048));\n> \n> Is this guaranteed to be TOASTed for all possible page sizes?\n\nShould we use block_size?\n\n SHOW block_size \\gset\n INSERT INTO test_chunk_id(v1,v2)\n VALUES (repeat('x', 1), repeat('x', (:block_size / 4)));\n\nI think this will work in various page sizes. \nI've attached a patch in which the test is updated.\n\nRegards,\nYugo Nagata\n\n> \n> -- \n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Sat, 9 Mar 2024 11:57:18 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Sat, Mar 09, 2024 at 11:57:18AM +0900, Yugo NAGATA wrote:\n> On Fri, 8 Mar 2024 16:17:58 -0600\n> Nathan Bossart <[email protected]> wrote:\n>> Is this guaranteed to be TOASTed for all possible page sizes?\n> \n> Should we use block_size?\n> \n> SHOW block_size \\gset\n> INSERT INTO test_chunk_id(v1,v2)\n> VALUES (repeat('x', 1), repeat('x', (:block_size / 4)));\n> \n> I think this will work in various page sizes. \n\nWFM\n\n> +SHOW block_size; \\gset\n> + block_size \n> +------------\n> + 8192\n> +(1 row)\n\nI think we need to remove the ';' so that the output of the query is not\nsaved in the \".out\" file. With that change, this test passes when Postgres\nis built with --with-blocksize=32. However, many other unrelated tests\nbegin failing, so I guess this fix isn't tremendously important.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 9 Mar 2024 08:50:28 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Sat, 9 Mar 2024 08:50:28 -0600\nNathan Bossart <[email protected]> wrote:\n\n> On Sat, Mar 09, 2024 at 11:57:18AM +0900, Yugo NAGATA wrote:\n> > On Fri, 8 Mar 2024 16:17:58 -0600\n> > Nathan Bossart <[email protected]> wrote:\n> >> Is this guaranteed to be TOASTed for all possible page sizes?\n> > \n> > Should we use block_size?\n> > \n> > SHOW block_size \\gset\n> > INSERT INTO test_chunk_id(v1,v2)\n> > VALUES (repeat('x', 1), repeat('x', (:block_size / 4)));\n> > \n> > I think this will work in various page sizes. \n> \n> WFM\n> \n> > +SHOW block_size; \\gset\n> > + block_size \n> > +------------\n> > + 8192\n> > +(1 row)\n> \n> I think we need to remove the ';' so that the output of the query is not\n> saved in the \".out\" file. With that change, this test passes when Postgres\n> is built with --with-blocksize=32. However, many other unrelated tests\n> begin failing, so I guess this fix isn't tremendously important.\n\nI rewrote the patch to use current_setting('block_size') instead of SHOW\nand \\gset as other tests do. Although some tests are failing with block_size=32,\nI wonder it is a bit better to use \"block_size\" instead of the constant\nto make the test more general to some extent.\n\nRegards,\nYugo Nagata\n\n> \n> -- \n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Tue, 12 Mar 2024 15:51:19 +0700",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "I did some light editing to prepare this for commit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 12 Mar 2024 22:07:17 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Tue, 12 Mar 2024 22:07:17 -0500\nNathan Bossart <[email protected]> wrote:\n\n> I did some light editing to prepare this for commit.\n\nThank you. I confirmed the test you improved and I am fine with that.\n\nRegards,\nYugo Nagata\n\n> \n> -- \n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Wed, 13 Mar 2024 13:09:18 +0700",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 01:09:18PM +0700, Yugo NAGATA wrote:\n> On Tue, 12 Mar 2024 22:07:17 -0500\n> Nathan Bossart <[email protected]> wrote:\n>> I did some light editing to prepare this for commit.\n> \n> Thank you. I confirmed the test you improved and I am fine with that.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 11:10:42 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
},
{
"msg_contents": "On Thu, 14 Mar 2024 11:10:42 -0500\nNathan Bossart <[email protected]> wrote:\n\n> On Wed, Mar 13, 2024 at 01:09:18PM +0700, Yugo NAGATA wrote:\n> > On Tue, 12 Mar 2024 22:07:17 -0500\n> > Nathan Bossart <[email protected]> wrote:\n> >> I did some light editing to prepare this for commit.\n> > \n> > Thank you. I confirmed the test you improved and I am fine with that.\n> \n> Committed.\n\nThank you!\n\n> \n> -- \n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Mon, 18 Mar 2024 16:44:29 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_column_toast_chunk_id: a function to get a chunk ID of a\n TOASTed value"
}
] |
[
{
"msg_contents": "Hello,\n\n\nTemporary tables are often used to store transient data in\nbatch processing and the contents can be accessed multiple\ntimes. However, frequent use of temporary tables has a problem\nthat the system catalog tends to bloat. I know there has been\nseveral proposals to attack this problem, but I would like to\npropose a new one.\n\nThe idea is to use Ephemeral Named Relation (ENR) like a\ntemporary table. ENR information is not stored into the system\ncatalog, but in QueryEnvironment, so it never bloat the system\ncatalog.\n\nAlthough we cannot perform insert, update or delete on ENR,\nI wonder it could be beneficial if we need to reference to a\nresult of a query multiple times in a batch processing.\n\nThe attached is a concept patch. This adds a new syntax\n\"OPEN cursor INTO TABLE tablename\" to pl/pgSQL, that stores\na result of the cursor query into a ENR with specified name. \nHowever, this is a tentative interface to demonstrate the\nconcept of feature.\n\nHere is an example;\n\npostgres=# \\sf fnc\nCREATE OR REPLACE FUNCTION public.fnc()\n RETURNS TABLE(sum1 integer, avg1 integer, sum2 integer, avg2 integer)\n LANGUAGE plpgsql\nAS $function$\nDECLARE\n sum1 integer;\n sum2 integer;\n avg1 integer;\n avg2 integer;\n curs CURSOR FOR SELECT aid, bid, abalance FROM pgbench_accounts\n WHERE abalance BETWEEN 100 AND 200;\nBEGIN\n OPEN curs INTO TABLE tmp_accounts;\n SELECT count(abalance) , avg(abalance) INTO sum1, avg1\n FROM tmp_accounts;\n SELECT count(bbalance), avg(bbalance) INTO sum2, avg2\n FROM tmp_accounts a, pgbench_branches b WHERE a.bid = b.bid;\n RETURN QUERY SELECT sum1,avg1,sum2,avg2;\nEND;\n$function$\n\npostgres=# select fnc();\n fnc \n--------------------\n (541,151,541,3937)\n(1 row)\n\nAs above, we can use the same query result for multiple\naggregations, and also join it with other tables.\n\nWhat do you think of using ENR for this way?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Wed, 29 Mar 2023 13:53:52 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using Ephemeral Named Relation like a temporary table"
},
{
"msg_contents": "Hi\n\nst 29. 3. 2023 v 6:54 odesílatel Yugo NAGATA <[email protected]> napsal:\n\n> Hello,\n>\n>\n> Temporary tables are often used to store transient data in\n> batch processing and the contents can be accessed multiple\n> times. However, frequent use of temporary tables has a problem\n> that the system catalog tends to bloat. I know there has been\n> several proposals to attack this problem, but I would like to\n> propose a new one.\n>\n> The idea is to use Ephemeral Named Relation (ENR) like a\n> temporary table. ENR information is not stored into the system\n> catalog, but in QueryEnvironment, so it never bloat the system\n> catalog.\n>\n> Although we cannot perform insert, update or delete on ENR,\n> I wonder it could be beneficial if we need to reference to a\n> result of a query multiple times in a batch processing.\n>\n> The attached is a concept patch. This adds a new syntax\n> \"OPEN cursor INTO TABLE tablename\" to pl/pgSQL, that stores\n> a result of the cursor query into a ENR with specified name.\n> However, this is a tentative interface to demonstrate the\n> concept of feature.\n>\n> Here is an example;\n>\n> postgres=# \\sf fnc\n> CREATE OR REPLACE FUNCTION public.fnc()\n> RETURNS TABLE(sum1 integer, avg1 integer, sum2 integer, avg2 integer)\n> LANGUAGE plpgsql\n> AS $function$\n> DECLARE\n> sum1 integer;\n> sum2 integer;\n> avg1 integer;\n> avg2 integer;\n> curs CURSOR FOR SELECT aid, bid, abalance FROM pgbench_accounts\n> WHERE abalance BETWEEN 100 AND 200;\n> BEGIN\n> OPEN curs INTO TABLE tmp_accounts;\n> SELECT count(abalance) , avg(abalance) INTO sum1, avg1\n> FROM tmp_accounts;\n> SELECT count(bbalance), avg(bbalance) INTO sum2, avg2\n> FROM tmp_accounts a, pgbench_branches b WHERE a.bid = b.bid;\n> RETURN QUERY SELECT sum1,avg1,sum2,avg2;\n> END;\n> $function$\n>\n> postgres=# select fnc();\n> fnc\n> --------------------\n> (541,151,541,3937)\n> (1 row)\n>\n> As above, we can use the same query result for multiple\n> aggregations, and also join it with other tables.\n>\n> What do you think of using ENR for this way?\n>\n\nThe idea looks pretty good. I think it can be very useful. I am not sure if\nthis design is intuitive. If I remember well, the Oracle's has similar\nfeatures, and can be nice if we use the same or more similar syntax\n(although I am not sure how it can be implementable)? I think so PL/SQL\ndesign has an advantage, because you don't need to solve the scope of the\ncursor's assigned table.\n\nOPEN curs INTO TABLE tmp_accounts; -- it looks little bit strange. I miss\ninfo, so tmp_accounts is not normal table\n\nwhat about\n\nOPEN curs INTO CURSOR TABLE xxx;\n\nor\n\nOPEN curs FOR CURSOR TABLE xxx\n\n\nRegards\n\nPavel\n\n\n\n\n\n\n\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo NAGATA <[email protected]>\n>\n\nHist 29. 3. 2023 v 6:54 odesílatel Yugo NAGATA <[email protected]> napsal:Hello,\n\n\nTemporary tables are often used to store transient data in\nbatch processing and the contents can be accessed multiple\ntimes. However, frequent use of temporary tables has a problem\nthat the system catalog tends to bloat. I know there has been\nseveral proposals to attack this problem, but I would like to\npropose a new one.\n\nThe idea is to use Ephemeral Named Relation (ENR) like a\ntemporary table. 
ENR information is not stored into the system\ncatalog, but in QueryEnvironment, so it never bloat the system\ncatalog.\n\nAlthough we cannot perform insert, update or delete on ENR,\nI wonder it could be beneficial if we need to reference to a\nresult of a query multiple times in a batch processing.\n\nThe attached is a concept patch. This adds a new syntax\n\"OPEN cursor INTO TABLE tablename\" to pl/pgSQL, that stores\na result of the cursor query into a ENR with specified name. \nHowever, this is a tentative interface to demonstrate the\nconcept of feature.\n\nHere is an example;\n\npostgres=# \\sf fnc\nCREATE OR REPLACE FUNCTION public.fnc()\n RETURNS TABLE(sum1 integer, avg1 integer, sum2 integer, avg2 integer)\n LANGUAGE plpgsql\nAS $function$\nDECLARE\n sum1 integer;\n sum2 integer;\n avg1 integer;\n avg2 integer;\n curs CURSOR FOR SELECT aid, bid, abalance FROM pgbench_accounts\n WHERE abalance BETWEEN 100 AND 200;\nBEGIN\n OPEN curs INTO TABLE tmp_accounts;\n SELECT count(abalance) , avg(abalance) INTO sum1, avg1\n FROM tmp_accounts;\n SELECT count(bbalance), avg(bbalance) INTO sum2, avg2\n FROM tmp_accounts a, pgbench_branches b WHERE a.bid = b.bid;\n RETURN QUERY SELECT sum1,avg1,sum2,avg2;\nEND;\n$function$\n\npostgres=# select fnc();\n fnc \n--------------------\n (541,151,541,3937)\n(1 row)\n\nAs above, we can use the same query result for multiple\naggregations, and also join it with other tables.\n\nWhat do you think of using ENR for this way?The idea looks pretty good. I think it can be very useful. I am not sure if this design is intuitive. If I remember well, the Oracle's has similar features, and can be nice if we use the same or more similar syntax (although I am not sure how it can be implementable)? I think so PL/SQL design has an advantage, because you don't need to solve the scope of the cursor's assigned table.OPEN curs INTO TABLE tmp_accounts; -- it looks little bit strange. I miss info, so tmp_accounts is not normal tablewhat aboutOPEN curs INTO CURSOR TABLE xxx;or OPEN curs FOR CURSOR TABLE xxxRegardsPavel \n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Wed, 29 Mar 2023 07:27:02 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using Ephemeral Named Relation like a temporary table"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 12:54 AM Yugo NAGATA <[email protected]> wrote:\n\n> Hello,\n>\n>\n> Temporary tables are often used to store transient data in\n> batch processing and the contents can be accessed multiple\n> times. However, frequent use of temporary tables has a problem\n> that the system catalog tends to bloat. I know there has been\n> several proposals to attack this problem, but I would like to\n> propose a new one.\n>\n> The idea is to use Ephemeral Named Relation (ENR) like a\n> temporary table. ENR information is not stored into the system\n> catalog, but in QueryEnvironment, so it never bloat the system\n> catalog.\n>\n> Although we cannot perform insert, update or delete on ENR,\n> I wonder it could be beneficial if we need to reference to a\n> result of a query multiple times in a batch processing.\n>\n> The attached is a concept patch. This adds a new syntax\n> \"OPEN cursor INTO TABLE tablename\" to pl/pgSQL, that stores\n> a result of the cursor query into a ENR with specified name.\n> However, this is a tentative interface to demonstrate the\n> concept of feature.\n>\n> Here is an example;\n>\n> postgres=# \\sf fnc\n> CREATE OR REPLACE FUNCTION public.fnc()\n> RETURNS TABLE(sum1 integer, avg1 integer, sum2 integer, avg2 integer)\n> LANGUAGE plpgsql\n> AS $function$\n> DECLARE\n> sum1 integer;\n> sum2 integer;\n> avg1 integer;\n> avg2 integer;\n> curs CURSOR FOR SELECT aid, bid, abalance FROM pgbench_accounts\n> WHERE abalance BETWEEN 100 AND 200;\n> BEGIN\n> OPEN curs INTO TABLE tmp_accounts;\n> SELECT count(abalance) , avg(abalance) INTO sum1, avg1\n> FROM tmp_accounts;\n> SELECT count(bbalance), avg(bbalance) INTO sum2, avg2\n> FROM tmp_accounts a, pgbench_branches b WHERE a.bid = b.bid;\n> RETURN QUERY SELECT sum1,avg1,sum2,avg2;\n> END;\n> $function$\n>\n> postgres=# select fnc();\n> fnc\n> --------------------\n> (541,151,541,3937)\n> (1 row)\n>\n> As above, we can use the same query result for multiple\n> aggregations, and also join it with other tables.\n>\n> What do you think of using ENR for this way?\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo NAGATA <[email protected]>\n>\n\nThis looks like a slightly more flexible version of the Oracle pl/sql table\ntype.\n\nFor those not familiar, PL/SQL can have record types, and in-memory\ncollections of records types, and you can either build up multiple records\nin a collection manually, or you can bulk-collect them from a query. Then,\nyou can later reference that collection in a regular SQL query with FROM\nTABLE(collection_name). It's a neat system for certain types of workloads.\n\nexample link, I'm sure there's better out there:\nhttps://oracle-base.com/articles/12c/using-the-table-operator-with-locally-defined-types-in-plsql-12cr1\n\nMy first take is there are likely customers out there that will want this.\nHowever, those customers will want to manually add/delete rows from the\nENR, so we'll want a way to do that.\n\nI haven't looked at ENRs in a while, when would the memory from that ENR\nget freed?\n\nOn Wed, Mar 29, 2023 at 12:54 AM Yugo NAGATA <[email protected]> wrote:Hello,\n\n\nTemporary tables are often used to store transient data in\nbatch processing and the contents can be accessed multiple\ntimes. However, frequent use of temporary tables has a problem\nthat the system catalog tends to bloat. I know there has been\nseveral proposals to attack this problem, but I would like to\npropose a new one.\n\nThe idea is to use Ephemeral Named Relation (ENR) like a\ntemporary table. 
ENR information is not stored into the system\ncatalog, but in QueryEnvironment, so it never bloat the system\ncatalog.\n\nAlthough we cannot perform insert, update or delete on ENR,\nI wonder it could be beneficial if we need to reference to a\nresult of a query multiple times in a batch processing.\n\nThe attached is a concept patch. This adds a new syntax\n\"OPEN cursor INTO TABLE tablename\" to pl/pgSQL, that stores\na result of the cursor query into a ENR with specified name. \nHowever, this is a tentative interface to demonstrate the\nconcept of feature.\n\nHere is an example;\n\npostgres=# \\sf fnc\nCREATE OR REPLACE FUNCTION public.fnc()\n RETURNS TABLE(sum1 integer, avg1 integer, sum2 integer, avg2 integer)\n LANGUAGE plpgsql\nAS $function$\nDECLARE\n sum1 integer;\n sum2 integer;\n avg1 integer;\n avg2 integer;\n curs CURSOR FOR SELECT aid, bid, abalance FROM pgbench_accounts\n WHERE abalance BETWEEN 100 AND 200;\nBEGIN\n OPEN curs INTO TABLE tmp_accounts;\n SELECT count(abalance) , avg(abalance) INTO sum1, avg1\n FROM tmp_accounts;\n SELECT count(bbalance), avg(bbalance) INTO sum2, avg2\n FROM tmp_accounts a, pgbench_branches b WHERE a.bid = b.bid;\n RETURN QUERY SELECT sum1,avg1,sum2,avg2;\nEND;\n$function$\n\npostgres=# select fnc();\n fnc \n--------------------\n (541,151,541,3937)\n(1 row)\n\nAs above, we can use the same query result for multiple\naggregations, and also join it with other tables.\n\nWhat do you think of using ENR for this way?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>This looks like a slightly more flexible version of the Oracle pl/sql table type.For those not familiar, PL/SQL can have record types, and in-memory collections of records types, and you can either build up multiple records in a collection manually, or you can bulk-collect them from a query. Then, you can later reference that collection in a regular SQL query with FROM TABLE(collection_name). It's a neat system for certain types of workloads.example link, I'm sure there's better out there:https://oracle-base.com/articles/12c/using-the-table-operator-with-locally-defined-types-in-plsql-12cr1My first take is there are likely customers out there that will want this. However, those customers will want to manually add/delete rows from the ENR, so we'll want a way to do that.I haven't looked at ENRs in a while, when would the memory from that ENR get freed?",
"msg_date": "Wed, 29 Mar 2023 01:42:59 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using Ephemeral Named Relation like a temporary table"
}
] |
[
{
"msg_contents": "Hi,\n\nI would like to suggest a patch against master (although it may be worth\nbackporting it) that makes it possible to listen on any unused port.\n\nThe main motivation is running colocated instances of Postgres (such as\ntest benches) without having to coordinate port allocation in an\nunnecessarily complicated way.\n\nInstead, with this patch, one can specify `port` as `0` (the \"wildcard\"\nport) and retrieve the assigned port from postmaster.pid\n\nI believe there is no significant performance or another impact as it is a\ntiny bit of conditional functionality executed during startup.\n\nThe patch builds and `make check` succeeds. The patch does not add a test;\nhowever, I am trying to figure out if this behaviour can be tested\nautomatically.\n\n\n-- \nhttp://omnigres.org\nY.",
"msg_date": "Wed, 29 Mar 2023 12:18:55 +0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Yurii Rashkovskii <[email protected]> writes:\n> I would like to suggest a patch against master (although it may be worth\n> backporting it) that makes it possible to listen on any unused port.\n\nI think this is a bad idea, mainly because this:\n\n> Instead, with this patch, one can specify `port` as `0` (the \"wildcard\"\n> port) and retrieve the assigned port from postmaster.pid\n\nis a horrid way to find out what was picked, and yet there could\nbe no other.\n\nOur existing design for this sort of thing is to let the testing\nframework choose the port, and I don't really see what's wrong\nwith that approach. Yes, I know it's theoretically subject to\nrace conditions, but that hasn't seemed to be a problem in\npractice. It's especially not a problem given that modern\ntesting practice tends to not open any TCP port at all, just\na Unix socket in a test-private directory, so that port\nconflicts are a non-issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Mar 2023 07:55:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Hi Tom,\n\nThank you for your feedback. Below are my comments.\n\nOn Wed, Mar 29, 2023 at 6:55 PM Tom Lane <[email protected]> wrote:\n\n> Yurii Rashkovskii <[email protected]> writes:\n> > I would like to suggest a patch against master (although it may be worth\n> > backporting it) that makes it possible to listen on any unused port.\n>\n> I think this is a bad idea, mainly because this:\n>\n> > Instead, with this patch, one can specify `port` as `0` (the \"wildcard\"\n> > port) and retrieve the assigned port from postmaster.pid\n>\n> is a horrid way to find out what was picked, and yet there could\n> be no other.\n>\n\nCan you elaborate on why reading postmaster.pid is a horrid way to discover\nthe port, given that it is a pretty simple format with a fixed line number\nfor the port?\n\n\n> Our existing design for this sort of thing is to let the testing\n> framework choose the port, and I don't really see what's wrong\n> with that approach. Yes, I know it's theoretically subject to\n> race conditions, but that hasn't seemed to be a problem in\n> practice. It's especially not a problem given that modern\n> testing practice tends to not open any TCP port at all, just\n> a Unix socket in a test-private directory, so that port\n> conflicts are a non-issue.\n>\n\nI keep running into this race condition nearly daily, which is why I\nproposed to address it with this patch. Yes, I know that one can get around\nthis with UNIX sockets,\nbut they have limited capabilities (not accessible outside of the local\nmachine, to begin with). Here's a real-world example of why I need to be\nable to use TCP ports:\n\nI have an extension that allows managing the lifecycle of [Docker/OCI]\ncontainers, and it passes Postgres connection details to these containers\nas environment variables.\nThese containers can now connect to Postgres using any program that can\ncommunicate using the wire protocol. I test this functionality in an\nautomated test that is executed\nconcurrently with others. Testing that the extension can supply the correct\nconnection information to the containers is important.\n\nIf we forget the importance of testing this specific part of the\nfunctionality for a bit, the rest of my issue can be _theoretically_\nresolved by passing the UNIX socket in `PGHOST` instead.\n\nHowever, it won't work in a typical Docker Desktop for macOS setup as it\nutilizes a virtual machine, and therefore, I can't simply use a UNIX socket\nbetween them.\n\nHere's an example:\n\n1. Create a UNIX socket listener:\n\n```\nsocat unix-l:test.sock,fork system:'echo hello'\n```\n\n2. Verify that it works locally:\n\n```\n$ socat test.sock -\nhello\n```\n\n3. Now, while being on macOS and using Docker Desktop, let Docker mount the\ndirectory with the socket and try to connect it from there:\n\n```\n$ docker run -v /path/to/sock/dir:/sock -ti ubuntu bash\n# apt update && apt install -y socat\n# socat /sock/test.sock -\n2023/03/29 23:34:48 socat[451] E connect(5, AF=1 \"/sock/test.sock\", 17):\nConnection refused\n```\n\nI get that the UNIX socket around works for many cases, but it does not\nwork for mine. Hence the proposal. Allowing a (fairly common) practice of a\nwildcard port with the discovery of it via\npostmaster.pid resolves all the above concerns without having to resort to\na rather race-condition-prone way to pick a port (or a complicated way to\ndo so with proper locking).\n\n-- \nhttp://omnigres.org\nYurii\n\nHi Tom,Thank you for your feedback. 
Below are my comments.On Wed, Mar 29, 2023 at 6:55 PM Tom Lane <[email protected]> wrote:Yurii Rashkovskii <[email protected]> writes:\n> I would like to suggest a patch against master (although it may be worth\n> backporting it) that makes it possible to listen on any unused port.\n\nI think this is a bad idea, mainly because this:\n\n> Instead, with this patch, one can specify `port` as `0` (the \"wildcard\"\n> port) and retrieve the assigned port from postmaster.pid\n\nis a horrid way to find out what was picked, and yet there could\nbe no other.Can you elaborate on why reading postmaster.pid is a horrid way to discover the port, given that it is a pretty simple format with a fixed line number for the port?\nOur existing design for this sort of thing is to let the testing\nframework choose the port, and I don't really see what's wrong\nwith that approach. Yes, I know it's theoretically subject to\nrace conditions, but that hasn't seemed to be a problem in\npractice. It's especially not a problem given that modern\ntesting practice tends to not open any TCP port at all, just\na Unix socket in a test-private directory, so that port\nconflicts are a non-issue.I keep running into this race condition nearly daily, which is why I proposed to address it with this patch. Yes, I know that one can get around this with UNIX sockets,but they have limited capabilities (not accessible outside of the local machine, to begin with). Here's a real-world example of why I need to be able to use TCP ports:I have an extension that allows managing the lifecycle of [Docker/OCI] containers, and it passes Postgres connection details to these containers as environment variables. These containers can now connect to Postgres using any program that can communicate using the wire protocol. I test this functionality in an automated test that is executedconcurrently with others. Testing that the extension can supply the correct connection information to the containers is important.If we forget the importance of testing this specific part of the functionality for a bit, the rest of my issue can be _theoretically_ resolved by passing the UNIX socket in `PGHOST` instead.However, it won't work in a typical Docker Desktop for macOS setup as it utilizes a virtual machine, and therefore, I can't simply use a UNIX socket between them.Here's an example:1. Create a UNIX socket listener:```socat unix-l:test.sock,fork system:'echo hello'```2. Verify that it works locally:```$ socat test.sock -hello```3. Now, while being on macOS and using Docker Desktop, let Docker mount the directory with the socket and try to connect it from there:``` $ docker run -v /path/to/sock/dir:/sock -ti ubuntu bash# apt update && apt install -y socat# socat /sock/test.sock -2023/03/29 23:34:48 socat[451] E connect(5, AF=1 \"/sock/test.sock\", 17): Connection refused```I get that the UNIX socket around works for many cases, but it does not work for mine. Hence the proposal. Allowing a (fairly common) practice of a wildcard port with the discovery of it viapostmaster.pid resolves all the above concerns without having to resort to a rather race-condition-prone way to pick a port (or a complicated way to do so with proper locking). -- http://omnigres.orgYurii",
"msg_date": "Thu, 30 Mar 2023 06:37:48 +0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Hi Tom,\n\nOn Wed, Mar 29, 2023 at 6:55 PM Tom Lane <[email protected]> wrote:\n\n> Yurii Rashkovskii <[email protected]> writes:\n> > I would like to suggest a patch against master (although it may be worth\n> > backporting it) that makes it possible to listen on any unused port.\n>\n> I think this is a bad idea, mainly because this:\n>\n> > Instead, with this patch, one can specify `port` as `0` (the \"wildcard\"\n> > port) and retrieve the assigned port from postmaster.pid\n>\n> is a horrid way to find out what was picked, and yet there could\n> be no other.\n>\n\nI answered you before (\nhttps://www.postgresql.org/message-id/CA+RLCQwYw-Er-E_RGNCDfA514w+1YL8HGhNstxX=y1gLAABFdA@mail.gmail.com),\nbut I am wondering whether you missed that response. I would really be\ninterested to learn why you think reading port from the pid file is a\n\"horrid way\" to find out what was picked.\n\nI've outlined my reasoning for this feature in detail in the referenced\nmessage. Hope you can consider it.\n\n-- \nhttp://omnigres.org\nYurii\n\nHi Tom,On Wed, Mar 29, 2023 at 6:55 PM Tom Lane <[email protected]> wrote:Yurii Rashkovskii <[email protected]> writes:\n> I would like to suggest a patch against master (although it may be worth\n> backporting it) that makes it possible to listen on any unused port.\n\nI think this is a bad idea, mainly because this:\n\n> Instead, with this patch, one can specify `port` as `0` (the \"wildcard\"\n> port) and retrieve the assigned port from postmaster.pid\n\nis a horrid way to find out what was picked, and yet there could\nbe no other.I answered you before (https://www.postgresql.org/message-id/CA+RLCQwYw-Er-E_RGNCDfA514w+1YL8HGhNstxX=y1gLAABFdA@mail.gmail.com), but I am wondering whether you missed that response. I would really be interested to learn why you think reading port from the pid file is a \"horrid way\" to find out what was picked.I've outlined my reasoning for this feature in detail in the referenced message. Hope you can consider it.-- http://omnigres.orgYurii",
"msg_date": "Fri, 7 Apr 2023 05:17:05 +0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On 2023-03-29 We 07:55, Tom Lane wrote:\n> Yurii Rashkovskii<[email protected]> writes:\n>> I would like to suggest a patch against master (although it may be worth\n>> backporting it) that makes it possible to listen on any unused port.\n> I think this is a bad idea, mainly because this:\n>\n>> Instead, with this patch, one can specify `port` as `0` (the \"wildcard\"\n>> port) and retrieve the assigned port from postmaster.pid\n> is a horrid way to find out what was picked, and yet there could\n> be no other.\n>\n> Our existing design for this sort of thing is to let the testing\n> framework choose the port, and I don't really see what's wrong\n> with that approach. Yes, I know it's theoretically subject to\n> race conditions, but that hasn't seemed to be a problem in\n> practice. It's especially not a problem given that modern\n> testing practice tends to not open any TCP port at all, just\n> a Unix socket in a test-private directory, so that port\n> conflicts are a non-issue.\n\n\nFor TAP tests we have pretty much resolved the port collisions issue for \nTCP ports too. See commit 9b4eafcaf4\n\nPerhaps the OP could adapt that logic to his use case.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-29 We 07:55, Tom Lane wrote:\n\n\nYurii Rashkovskii <[email protected]> writes:\n\n\nI would like to suggest a patch against master (although it may be worth\nbackporting it) that makes it possible to listen on any unused port.\n\n\n\nI think this is a bad idea, mainly because this:\n\n\n\nInstead, with this patch, one can specify `port` as `0` (the \"wildcard\"\nport) and retrieve the assigned port from postmaster.pid\n\n\n\nis a horrid way to find out what was picked, and yet there could\nbe no other.\n\nOur existing design for this sort of thing is to let the testing\nframework choose the port, and I don't really see what's wrong\nwith that approach. Yes, I know it's theoretically subject to\nrace conditions, but that hasn't seemed to be a problem in\npractice. It's especially not a problem given that modern\ntesting practice tends to not open any TCP port at all, just\na Unix socket in a test-private directory, so that port\nconflicts are a non-issue.\n\n\n\nFor TAP tests we have pretty much resolved the port collisions\n issue for TCP ports too. See commit 9b4eafcaf4\nPerhaps the OP could adapt that logic to his use case.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 7 Apr 2023 08:06:55 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Hi Andrew,\n\nOn Fri, Apr 7, 2023, 7:07 p.m. Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2023-03-29 We 07:55, Tom Lane wrote:\n>\n> Yurii Rashkovskii <[email protected]> <[email protected]> writes:\n>\n> I would like to suggest a patch against master (although it may be worth\n> backporting it) that makes it possible to listen on any unused port.\n>\n> I think this is a bad idea, mainly because this:\n>\n>\n> Instead, with this patch, one can specify `port` as `0` (the \"wildcard\"\n> port) and retrieve the assigned port from postmaster.pid\n>\n> is a horrid way to find out what was picked, and yet there could\n> be no other.\n>\n> Our existing design for this sort of thing is to let the testing\n> framework choose the port, and I don't really see what's wrong\n> with that approach. Yes, I know it's theoretically subject to\n> race conditions, but that hasn't seemed to be a problem in\n> practice. It's especially not a problem given that modern\n> testing practice tends to not open any TCP port at all, just\n> a Unix socket in a test-private directory, so that port\n> conflicts are a non-issue.\n>\n>\n> For TAP tests we have pretty much resolved the port collisions issue for\n> TCP ports too. See commit 9b4eafcaf4\n>\n> Perhaps the OP could adapt that logic to his use case.\n>\n\nThank you for referencing this commit. The point why I am suggesting my\npatch is that I believe that my solution is a much better way to avoid\ncollisions in the first place. Implementing an algorithm similar to the one\nin the referenced commit is error-pfone and can be difficult in\nenvironments like shell script.\n\nI'm trying to understand what's wrong with reading port from the pid file\n(if Postgres writes information there, it's surely so that somebody can\nread it, otherwise, why write it in the first placd)? The proposed solution\nuses operating system's functionality to achieve collision-free mechanics\nwith none of the complexity introduced in the commit.\n\nHi Andrew,On Fri, Apr 7, 2023, 7:07 p.m. Andrew Dunstan <[email protected]> wrote:\n\n\n\nOn 2023-03-29 We 07:55, Tom Lane wrote:\n\n\nYurii Rashkovskii <[email protected]> writes:\n\n\nI would like to suggest a patch against master (although it may be worth\nbackporting it) that makes it possible to listen on any unused port.\n\n\nI think this is a bad idea, mainly because this:\n\n\n\nInstead, with this patch, one can specify `port` as `0` (the \"wildcard\"\nport) and retrieve the assigned port from postmaster.pid\n\n\nis a horrid way to find out what was picked, and yet there could\nbe no other.\n\nOur existing design for this sort of thing is to let the testing\nframework choose the port, and I don't really see what's wrong\nwith that approach. Yes, I know it's theoretically subject to\nrace conditions, but that hasn't seemed to be a problem in\npractice. It's especially not a problem given that modern\ntesting practice tends to not open any TCP port at all, just\na Unix socket in a test-private directory, so that port\nconflicts are a non-issue.\n\n\n\nFor TAP tests we have pretty much resolved the port collisions\n issue for TCP ports too. See commit 9b4eafcaf4\nPerhaps the OP could adapt that logic to his use case.Thank you for referencing this commit. The point why I am suggesting my patch is that I believe that my solution is a much better way to avoid collisions in the first place. 
Implementing an algorithm similar to the one in the referenced commit is error-pfone and can be difficult in environments like shell script.I'm trying to understand what's wrong with reading port from the pid file (if Postgres writes information there, it's surely so that somebody can read it, otherwise, why write it in the first placd)? The proposed solution uses operating system's functionality to achieve collision-free mechanics with none of the complexity introduced in the commit.",
"msg_date": "Sat, 8 Apr 2023 04:33:42 +0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 5:34 PM Yurii Rashkovskii <[email protected]> wrote:\n> I'm trying to understand what's wrong with reading port from the pid file (if Postgres writes information there, it's surely so that somebody can read it, otherwise, why write it in the first placd)? The proposed solution uses operating system's functionality to achieve collision-free mechanics with none of the complexity introduced in the commit.\n\nI agree. We don't document the exact format of the postmaster.pid file\nto my knowledge, but storage.sgml lists all the things it contains,\nand runtime.sgml documents that the first line contains the postmaster\nPID, so this is clearly not some totally opaque file that nobody\nshould ever touch. Consequently, I don't agree with Tom's statement\nthat this would be a \"a horrid way to find out what was picked.\" There\nis some question in my mind about whether this is a feature that we\nwant PostgreSQL to have, and if we do want it, there may be some room\nfor debate about how it's implemented, but I reject the idea that\nextracting the port number from postmaster.pid is intrinsically a\nterrible plan. It seems like a perfectly reasonable plan.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Apr 2023 13:54:47 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Hi Robert,\n\nOn Tue, Apr 11, 2023 at 12:54 AM Robert Haas <[email protected]> wrote:\n\n> On Fri, Apr 7, 2023 at 5:34 PM Yurii Rashkovskii <[email protected]> wrote:\n> > I'm trying to understand what's wrong with reading port from the pid\n> file (if Postgres writes information there, it's surely so that somebody\n> can read it, otherwise, why write it in the first placd)? The proposed\n> solution uses operating system's functionality to achieve collision-free\n> mechanics with none of the complexity introduced in the commit.\n>\n> I agree. We don't document the exact format of the postmaster.pid file\n> to my knowledge, but storage.sgml lists all the things it contains,\n> and runtime.sgml documents that the first line contains the postmaster\n> PID, so this is clearly not some totally opaque file that nobody\n> should ever touch. Consequently, I don't agree with Tom's statement\n> that this would be a \"a horrid way to find out what was picked.\" There\n> is some question in my mind about whether this is a feature that we\n> want PostgreSQL to have, and if we do want it, there may be some room\n> for debate about how it's implemented, but I reject the idea that\n> extracting the port number from postmaster.pid is intrinsically a\n> terrible plan. It seems like a perfectly reasonable plan.\n>\n>\nI appreciate your support on the pid file concern. What questions do you\nhave about this feature with regard to its desirability and/or\nimplementation? I'd love to learn from your insight and address any of\nthose if I can.\n\n-- \nY.\n\nHi Robert,On Tue, Apr 11, 2023 at 12:54 AM Robert Haas <[email protected]> wrote:On Fri, Apr 7, 2023 at 5:34 PM Yurii Rashkovskii <[email protected]> wrote:\n> I'm trying to understand what's wrong with reading port from the pid file (if Postgres writes information there, it's surely so that somebody can read it, otherwise, why write it in the first placd)? The proposed solution uses operating system's functionality to achieve collision-free mechanics with none of the complexity introduced in the commit.\n\nI agree. We don't document the exact format of the postmaster.pid file\nto my knowledge, but storage.sgml lists all the things it contains,\nand runtime.sgml documents that the first line contains the postmaster\nPID, so this is clearly not some totally opaque file that nobody\nshould ever touch. Consequently, I don't agree with Tom's statement\nthat this would be a \"a horrid way to find out what was picked.\" There\nis some question in my mind about whether this is a feature that we\nwant PostgreSQL to have, and if we do want it, there may be some room\nfor debate about how it's implemented, but I reject the idea that\nextracting the port number from postmaster.pid is intrinsically a\nterrible plan. It seems like a perfectly reasonable plan.\nI appreciate your support on the pid file concern. What questions do you have about this feature with regard to its desirability and/or implementation? I'd love to learn from your insight and address any of those if I can.-- Y.",
"msg_date": "Wed, 12 Apr 2023 09:17:29 +0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 10:17 PM Yurii Rashkovskii <[email protected]> wrote:\n> I appreciate your support on the pid file concern. What questions do you have about this feature with regard to its desirability and/or implementation? I'd love to learn from your insight and address any of those if I can.\n\nI don't have any particularly specific concerns. But, you know, if a\nbunch of other people, especially people already known the community\nshowed up on this thread to say \"hey, I'd like that too\" or \"that\nwould be better than what we have now,\" well then that would make me\nthink \"hey, we should probably move forward with this thing.\" But so\nfar the only people to comment are Tom and Andrew. Tom, in addition to\ncomplaining about the PID file thing, also basically said that the\nfeature didn't seem necessary to him, and Andrew's comments seem to me\nto suggest the same thing. So it kind of seems like you've convinced\nzero people that this is a thing we should have, and that's not very\nmany.\n\nIt happens from time to time on this mailing list that somebody shows\nup to propose a feature where I say to myself \"hmm, that doesn't sound\nlike an intrinsically terrible idea, but it sounds like it might be\nspecific enough that only the person proposing it would ever use it.\"\nFor instance, someone might propose a new backslash command for psql\nthat runs an SQL query that produces some output which that person\nfinds useful. There is no big design problem there, but psql is\nalready pretty cluttered with commands that look like line noise, so\nwe shouldn't add a new one on the strength of one person wanting it.\nEach feature, even if it's minor, has some cost. New releases need to\nkeep it working, which may mean that it needs a test, and then the\ntest is another thing that you have to keep working, and it also takes\ntime to run every time anyone does make check-world. These aren't big\ncosts and don't set a high bar for adding new features, but they do\nmean, at least IMHO, that one person wanting a feature that isn't\nobviously of general utility is not good enough. I think all of that\nalso applies to this feature.\n\nI haven't reviewed the code in detail. It at least has some style\nissues, but worrying about that seems premature.\n\nI mostly just wanted to say that I disagreed with Tom about the\nparticular point on postmaster.pid, without really expressing an\nopinion about anything else.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Apr 2023 11:01:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On 2023-04-12 We 11:01, Robert Haas wrote:\n> On Tue, Apr 11, 2023 at 10:17 PM Yurii Rashkovskii<[email protected]> wrote:\n>> I appreciate your support on the pid file concern. What questions do you have about this feature with regard to its desirability and/or implementation? I'd love to learn from your insight and address any of those if I can.\n> I don't have any particularly specific concerns. But, you know, if a\n> bunch of other people, especially people already known the community\n> showed up on this thread to say \"hey, I'd like that too\" or \"that\n> would be better than what we have now,\" well then that would make me\n> think \"hey, we should probably move forward with this thing.\" But so\n> far the only people to comment are Tom and Andrew. Tom, in addition to\n> complaining about the PID file thing, also basically said that the\n> feature didn't seem necessary to him, and Andrew's comments seem to me\n> to suggest the same thing. So it kind of seems like you've convinced\n> zero people that this is a thing we should have, and that's not very\n> many.\n\n\nNot quite, I just suggested looking at a different approach. I'm not \nopposed to the idea in principle.\n\nI agree with you that parsing the pid file shouldn't be hard or \nunreasonable.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-12 We 11:01, Robert Haas\n wrote:\n\n\nOn Tue, Apr 11, 2023 at 10:17 PM Yurii Rashkovskii <[email protected]> wrote:\n\n\nI appreciate your support on the pid file concern. What questions do you have about this feature with regard to its desirability and/or implementation? I'd love to learn from your insight and address any of those if I can.\n\n\n\nI don't have any particularly specific concerns. But, you know, if a\nbunch of other people, especially people already known the community\nshowed up on this thread to say \"hey, I'd like that too\" or \"that\nwould be better than what we have now,\" well then that would make me\nthink \"hey, we should probably move forward with this thing.\" But so\nfar the only people to comment are Tom and Andrew. Tom, in addition to\ncomplaining about the PID file thing, also basically said that the\nfeature didn't seem necessary to him, and Andrew's comments seem to me\nto suggest the same thing. So it kind of seems like you've convinced\nzero people that this is a thing we should have, and that's not very\nmany.\n\n\n\nNot quite, I just suggested looking at a different approach. I'm\n not opposed to the idea in principle.\nI agree with you that parsing the pid file shouldn't be hard or\n unreasonable.\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 12 Apr 2023 12:16:30 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "\n\n> On Apr 12, 2023, at 8:01 AM, Robert Haas <[email protected]> wrote:\n> \n> \"hey, I'd like that too\"\n\nI like the idea in principle. I have been developing a testing infrastructure in my spare time and would rather not duplicate Andrew's TAP logic. If we get this pushed down into the server itself, all the test infrastructure can use a single, shared solution.\n\nAs for the implementation, I just briefly scanned the patch. \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 12 Apr 2023 09:31:39 -0700",
"msg_from": "Mark Dilger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 5:07 AM Andrew Dunstan <[email protected]> wrote:\n> For TAP tests we have pretty much resolved the port collisions issue for TCP ports too. See commit 9b4eafcaf4\n\nThe Cirrus config still has the following for the Windows tests:\n\n # Avoids port conflicts between concurrent tap test runs\n PG_TEST_USE_UNIX_SOCKETS: 1\n\nIs that comment out of date, or would this proposal improve the\nWindows situation too?\n\n--Jacob\n\n\n",
"msg_date": "Wed, 12 Apr 2023 09:45:01 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On Wed, 12 Apr 2023 at 11:02, Robert Haas <[email protected]> wrote:\n>\n> I mostly just wanted to say that I disagreed with Tom about the\n> particular point on postmaster.pid, without really expressing an\n> opinion about anything else.\n\nI don't object to using the pid file as the mechanism -- but it is a\nbit of an awkward UI for shell scripting. I imagine it would be handy\nif pg_ctl had an option to just print the port number so you could get\nit with a simple port=`pg_ctl -D <dir> status-port`\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 12 Apr 2023 13:31:09 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 1:31 PM Greg Stark <[email protected]> wrote:\n> I don't object to using the pid file as the mechanism -- but it is a\n> bit of an awkward UI for shell scripting. I imagine it would be handy\n> if pg_ctl had an option to just print the port number so you could get\n> it with a simple port=`pg_ctl -D <dir> status-port`\n\nThat's not a bad idea, and would provide some additional isolation to\nreduce direct dependency on the PID file format.\n\nHowever, awk 'NR==4' $PGDATA/postmaster.pid is hardly insanity. If it\ncan be done with a 5-character awk script, it's not too hard. The kind\nof thing you're talking about is much more important with things like\npg_control or postgresql.conf that have much more complicated formats.\nThe format of the PID file is intentionally simple. But that's not to\nsay that I'm objecting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Apr 2023 13:51:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Apr 12, 2023 at 1:31 PM Greg Stark <[email protected]> wrote:\n>> I don't object to using the pid file as the mechanism -- but it is a\n>> bit of an awkward UI for shell scripting. I imagine it would be handy\n>> if pg_ctl had an option to just print the port number so you could get\n>> it with a simple port=`pg_ctl -D <dir> status-port`\n\n> That's not a bad idea, and would provide some additional isolation to\n> reduce direct dependency on the PID file format.\n\nYeah. My main concern here is with limiting our ability to change\nthe pidfile format in future. If we can keep the dependencies on that\nlocalized to code we control, it'd be much better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Apr 2023 13:56:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 1:56 PM Tom Lane <[email protected]> wrote:\n> Yeah. My main concern here is with limiting our ability to change\n> the pidfile format in future. If we can keep the dependencies on that\n> localized to code we control, it'd be much better.\n\nI don't know if it's considered officially supported, but I often use\npg_ctl stop on a directory without worrying about whether I'm doing it\nwith the same server version that's running in that directory. I'd be\nreluctant to break that property. So I bet our ability to modify the\nfile format is already quite limited.\n\nBut again, no issue with having a way for pg_ctl to fish the\ninformation out of there.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Apr 2023 14:08:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Apr 12, 2023 at 1:56 PM Tom Lane <[email protected]> wrote:\n>> Yeah. My main concern here is with limiting our ability to change\n>> the pidfile format in future. If we can keep the dependencies on that\n>> localized to code we control, it'd be much better.\n\n> I don't know if it's considered officially supported, but I often use\n> pg_ctl stop on a directory without worrying about whether I'm doing it\n> with the same server version that's running in that directory. I'd be\n> reluctant to break that property. So I bet our ability to modify the\n> file format is already quite limited.\n\nIMO, the only aspect we consider \"officially supported\" for outside\nuse is that the first line contains the postmaster's PID. All the\nrest is private (and has changed as recently as v10). Without\nhaving actually checked the code, I think that \"pg_ctl stop\" relies\nonly on that aspect, or at least it could be made to do so at need.\nSo I think your example would survive other changes in the format.\n\nI don't really want external code knowing that line 4 is the port,\nbecause I foresee us breaking that someday --- what will happen\nwhen we want to allow one postmaster to support multiple ports?\nMaybe we'll decide that we don't have to reflect that in the\npidfile, but let's not constrain our decisions ahead of time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Apr 2023 14:24:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 02:24:30PM -0400, Tom Lane wrote:\n> I don't really want external code knowing that line 4 is the port,\n> because I foresee us breaking that someday --- what will happen\n> when we want to allow one postmaster to support multiple ports?\n> Maybe we'll decide that we don't have to reflect that in the\n> pidfile, but let's not constrain our decisions ahead of time.\n\nIn the same fashion as something mentioned upthread, the format\nportability would not matter much if all the information from the PID\nfile is wrapped around a pg_ctl command or something equivalent that\ncontrols its output format, say: \npg_ctl -D $PGDATA --format={json,what_you_want} postmaster_file\n\nTo be more precise, storage.sgml documents the format of the PID file\nin what seems like the correct order for each item, some of them being\nempty depending on the setup.\n--\nMichael",
"msg_date": "Thu, 13 Apr 2023 06:49:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Tom, Robert, Greg, Andrew,\n\nOn Thu, Apr 13, 2023 at 12:56 AM Tom Lane <[email protected]> wrote:\n\n> Robert Haas <[email protected]> writes:\n> > On Wed, Apr 12, 2023 at 1:31 PM Greg Stark <[email protected]> wrote:\n> >> I don't object to using the pid file as the mechanism -- but it is a\n> >> bit of an awkward UI for shell scripting. I imagine it would be handy\n> >> if pg_ctl had an option to just print the port number so you could get\n> >> it with a simple port=`pg_ctl -D <dir> status-port`\n>\n> > That's not a bad idea, and would provide some additional isolation to\n> > reduce direct dependency on the PID file format.\n>\n> Yeah. My main concern here is with limiting our ability to change\n> the pidfile format in future. If we can keep the dependencies on that\n> localized to code we control, it'd be much better.\n>\n>\nThank you all for the feedback. It's quite useful. I think it is important\nto separate this into two concerns:\n\n1. Letting Postgres pick an unused port.\n2. Retrieving the port it picked.\n\nIf I get this right, there's no significant opposition to (1) as this is\ncommon functionality we're relying on. The most contention is around (2)\nbecause I suggested using postmaster.pid\nfile, which may be considered private for the most part, at least for the\ntime being.\n\nWith this in mind, I still think that proceeding with (1) is a good idea,\nas retrieving the port being listened on is still much easier than\ninvolving a more complex lock file script. For example, on UNIX-like\nsystems, `lsof` can be typically used to do this:\n\n```\n# For IPv4\nlsof -a -w -FPn -p $(head -n 1 postmaster.pid) -i4TCP -sTCP:LISTEN -P -n |\ntail -n 1 | awk -F: '{print $NF}'\n# For IPv6\nlsof -a -w -FPn -p $(head -n 1postmaster.pid) -i6TCP -sTCP:LISTEN -P -n |\ntail -n 1 | awk -F: '{print $NF}'\n```\n\n(There are also other tools that can be used to achieve much of the same)\n\nOn Windows, this can be done using PowerShell (and perhaps netstat, too):\n\n```\n# IPv4\nPS> Get-NetTCPConnection -State Listen -OwningProcess (Get-Content\n\"postmaster.pid\" -First 1) | Where-Object { $_.LocalAddress -notmatch ':' }\n| Select-Object -ExpandProperty LocalPort\n5432\nPS> Get-NetTCPConnection -State Listen -OwningProcess (Get-Content\n\"postmaster.pid\" -First 1) | Where-Object { $_.LocalAddress -match ':' } |\nSelect-Object -ExpandProperty LocalPort\n5432\n```\n\nThe above commands can be worked on to extract multiple ports should that\never become a feature.\n\nThe bottom line is this decouples (1) from (2), and we can resolve them\nseparately if there's too much (understandable) hesitation to commit to a\nparticular approach to it (documenting postmaster.pid, changing its format,\namending pg_ctl functionality, etc.) I will be happy to participate in the\ndiscovery and resolution of (2) as well.\n\nThis would allow people like myself or Mark (above in the thread) to let\nPostgres pick the unused port and extract it using a oneliner for the time\nbeing. 
When a better approach for server introspection will be agreed on,\nwe can use that.\n\nI'll be happy to address any [styling or other] issues with the currently\nproposed patch.\n\n\n--\nhttp://omnigres.org\nYurii\n\nTom, Robert, Greg, Andrew,On Thu, Apr 13, 2023 at 12:56 AM Tom Lane <[email protected]> wrote:Robert Haas <[email protected]> writes:\n> On Wed, Apr 12, 2023 at 1:31 PM Greg Stark <[email protected]> wrote:\n>> I don't object to using the pid file as the mechanism -- but it is a\n>> bit of an awkward UI for shell scripting. I imagine it would be handy\n>> if pg_ctl had an option to just print the port number so you could get\n>> it with a simple port=`pg_ctl -D <dir> status-port`\n\n> That's not a bad idea, and would provide some additional isolation to\n> reduce direct dependency on the PID file format.\n\nYeah. My main concern here is with limiting our ability to change\nthe pidfile format in future. If we can keep the dependencies on that\nlocalized to code we control, it'd be much better.\nThank you all for the feedback. It's quite useful. I think it is important to separate this into two concerns:1. Letting Postgres pick an unused port.2. Retrieving the port it picked.If I get this right, there's no significant opposition to (1) as this is common functionality we're relying on. The most contention is around (2) because I suggested using postmaster.pidfile, which may be considered private for the most part, at least for the time being.With this in mind, I still think that proceeding with (1) is a good idea, as retrieving the port being listened on is still much easier than involving a more complex lock file script. For example, on UNIX-like systems, `lsof` can be typically used to do this:```# For IPv4lsof -a -w -FPn -p $(head -n 1 postmaster.pid) -i4TCP -sTCP:LISTEN -P -n | tail -n 1 | awk -F: '{print $NF}'# For IPv6lsof -a -w -FPn -p $(head -n 1postmaster.pid) -i6TCP -sTCP:LISTEN -P -n | tail -n 1 | awk -F: '{print $NF}'```(There are also other tools that can be used to achieve much of the same)On Windows, this can be done using PowerShell (and perhaps netstat, too):```# IPv4PS> Get-NetTCPConnection -State Listen -OwningProcess (Get-Content \"postmaster.pid\" -First 1) | Where-Object { $_.LocalAddress -notmatch ':' } | Select-Object -ExpandProperty LocalPort5432PS> Get-NetTCPConnection -State Listen -OwningProcess (Get-Content \"postmaster.pid\" -First 1) | Where-Object { $_.LocalAddress -match ':' } | Select-Object -ExpandProperty LocalPort5432```The above commands can be worked on to extract multiple ports should that ever become a feature.The bottom line is this decouples (1) from (2), and we can resolve them separately if there's too much (understandable) hesitation to commit to a particular approach to it (documenting postmaster.pid, changing its format, amending pg_ctl functionality, etc.) I will be happy to participate in the discovery and resolution of (2) as well.This would allow people like myself or Mark (above in the thread) to let Postgres pick the unused port and extract it using a oneliner for the time being. When a better approach for server introspection will be agreed on, we can use that.I'll be happy to address any [styling or other] issues with the currently proposed patch. --http://omnigres.orgYurii",
"msg_date": "Thu, 13 Apr 2023 06:18:12 +0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Yurii Rashkovskii <[email protected]> writes:\n> Thank you all for the feedback. It's quite useful. I think it is important\n> to separate this into two concerns:\n\n> 1. Letting Postgres pick an unused port.\n> 2. Retrieving the port it picked.\n\nYeah, those are distinguishable implementation concerns, but ...\n\n> The bottom line is this decouples (1) from (2), and we can resolve them\n> separately if there's too much (understandable) hesitation to commit to a\n> particular approach to it (documenting postmaster.pid, changing its format,\n> amending pg_ctl functionality, etc.)\n\n... AFAICS, there is exactly zero value in committing a solution for (1)\nwithout also committing a solution for (2). I don't think any of the\nalternative methods you proposed are attractive or things we should\nrecommend.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Apr 2023 22:17:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Hi Tom,\n\nOn Thu, Apr 13, 2023 at 9:17 AM Tom Lane <[email protected]> wrote:\n\n> Yurii Rashkovskii <[email protected]> writes:\n> > Thank you all for the feedback. It's quite useful. I think it is\n> important\n> > to separate this into two concerns:\n>\n> > 1. Letting Postgres pick an unused port.\n> > 2. Retrieving the port it picked.\n>\n> Yeah, those are distinguishable implementation concerns, but ...\n>\n> > The bottom line is this decouples (1) from (2), and we can resolve them\n> > separately if there's too much (understandable) hesitation to commit to a\n> > particular approach to it (documenting postmaster.pid, changing its\n> format,\n> > amending pg_ctl functionality, etc.)\n>\n> ... AFAICS, there is exactly zero value in committing a solution for (1)\n> without also committing a solution for (2). I don't think any of the\n> alternative methods you proposed are attractive or things we should\n> recommend.\n>\n\nI disagree that zero value exists in (1) without (2). As my examples show,\nthey make it possible to pick a port without synchronization scripting. Are\nthey perfect? Of course, not. But they are better than lock file-based\nscripts IMO. They are not exposed to race conditions.\n\nBut getting your agreement is important to get this in; I am willing to\nplay along and resolve both (1) and (2) in one go. As for the\nimplementation approach for (2), which of the following options would you\nprefer?\n\na) Document postmaster.pid as it stands today\nb) Expose the port number through pg_ctl (*my personal favorite)\nc) Redesign its content below line 1 to make it extensible (make unnamed\nlines named, for example)\n\nIf none of the above options suit you, do you have a strategy you'd prefer?\n\nHi Tom,On Thu, Apr 13, 2023 at 9:17 AM Tom Lane <[email protected]> wrote:Yurii Rashkovskii <[email protected]> writes:\n> Thank you all for the feedback. It's quite useful. I think it is important\n> to separate this into two concerns:\n\n> 1. Letting Postgres pick an unused port.\n> 2. Retrieving the port it picked.\n\nYeah, those are distinguishable implementation concerns, but ...\n\n> The bottom line is this decouples (1) from (2), and we can resolve them\n> separately if there's too much (understandable) hesitation to commit to a\n> particular approach to it (documenting postmaster.pid, changing its format,\n> amending pg_ctl functionality, etc.)\n\n... AFAICS, there is exactly zero value in committing a solution for (1)\nwithout also committing a solution for (2). I don't think any of the\nalternative methods you proposed are attractive or things we should\nrecommend.I disagree that zero value exists in (1) without (2). As my examples show, they make it possible to pick a port without synchronization scripting. Are they perfect? Of course, not. But they are better than lock file-based scripts IMO. They are not exposed to race conditions.But getting your agreement is important to get this in; I am willing to play along and resolve both (1) and (2) in one go. As for the implementation approach for (2), which of the following options would you prefer?a) Document postmaster.pid as it stands todayb) Expose the port number through pg_ctl (*my personal favorite)c) Redesign its content below line 1 to make it extensible (make unnamed lines named, for example)If none of the above options suit you, do you have a strategy you'd prefer?",
"msg_date": "Thu, 13 Apr 2023 09:45:09 +0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
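For reference, concern (1) above leans on a standard OS facility: binding a socket to port 0 and then asking the kernel which port it actually assigned via getsockname(). A minimal POSIX sketch of that mechanism (not the patch itself; error handling is trimmed and the loopback address is only an example) might look like this:

```c
/*
 * Minimal sketch of the OS mechanism behind "port = 0": bind to port 0
 * and ask the kernel which port it picked.  Illustrative only.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int
main(void)
{
    int                 fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in  addr;
    socklen_t           len = sizeof(addr);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(0);       /* 0 = let the kernel choose */

    if (fd < 0 ||
        bind(fd, (struct sockaddr *) &addr, sizeof(addr)) != 0 ||
        listen(fd, 5) != 0 ||
        getsockname(fd, (struct sockaddr *) &addr, &len) != 0)
    {
        perror("socket setup");
        return 1;
    }

    printf("kernel assigned port %u\n", (unsigned) ntohs(addr.sin_port));
    close(fd);
    return 0;
}
```

Inside the server the patch does the equivalent when port is set to 0; the open question in this thread is only where the resulting number gets reported, which is concern (2).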
{
"msg_contents": "On 13.04.23 04:45, Yurii Rashkovskii wrote:\n> But getting your agreement is important to get this in; I am willing to \n> play along and resolve both (1) and (2) in one go. As for the \n> implementation approach for (2), which of the following options would \n> you prefer?\n> \n> a) Document postmaster.pid as it stands today\n> b) Expose the port number through pg_ctl (*my personal favorite)\n> c) Redesign its content below line 1 to make it extensible (make unnamed \n> lines named, for example)\n> \n> If none of the above options suit you, do you have a strategy you'd prefer?\n\nYou could just drop another file into the data directory that just \ncontains the port number ($PGDATA/port). However, if we ever do \nmultiple ports, that would still require a change in the format of that \nfile, so I don't know if that's actually better than a).\n\nI don't think involving pg_ctl is necessary or desirable, since it would \nmake any future changes like that even more complicated.\n\n\n",
"msg_date": "Wed, 19 Apr 2023 06:10:27 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut ([email protected]) wrote:\n> On 13.04.23 04:45, Yurii Rashkovskii wrote:\n> > But getting your agreement is important to get this in; I am willing to\n> > play along and resolve both (1) and (2) in one go. As for the\n> > implementation approach for (2), which of the following options would\n> > you prefer?\n> > \n> > a) Document postmaster.pid as it stands today\n> > b) Expose the port number through pg_ctl (*my personal favorite)\n> > c) Redesign its content below line 1 to make it extensible (make unnamed\n> > lines named, for example)\n> > \n> > If none of the above options suit you, do you have a strategy you'd prefer?\n> \n> You could just drop another file into the data directory that just contains\n> the port number ($PGDATA/port). However, if we ever do multiple ports, that\n> would still require a change in the format of that file, so I don't know if\n> that's actually better than a).\n\nIf we did a port per line then it wouldn't be changing the format of the\nfirst line, so that might not be all that bad.\n\n> I don't think involving pg_ctl is necessary or desirable, since it would\n> make any future changes like that even more complicated.\n\nI'm a bit confused by this- if pg_ctl is invoked then we have\nmore-or-less full control over parsing and reporting out the answer, so\nwhile it might be a bit more complicated for us, it seems surely simpler\nfor the end user. Or maybe you're referring to something here that I'm\nnot thinking of?\n\nIndependent of the above though ... this hand-wringing about what we\nmight do in the relative near-term when we haven't done much in the past\nmany-many years regarding listen_addresses or port strikes me as\nunlikely to be necessary. Let's pick something and get it done and\naccept that we may have to change it at some point in the future, but\nthat's kinda what major releases are for, imv anyway.\n\nThanks!\n\nStpehen",
"msg_date": "Wed, 19 Apr 2023 00:21:39 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Stephen,\n\n> You could just drop another file into the data directory that just\n> contains\n> > the port number ($PGDATA/port). However, if we ever do multiple ports,\n> that\n> > would still require a change in the format of that file, so I don't know\n> if\n> > that's actually better than a).\n>\n\nI find it difficult to get anything done under the restriction of \"what if\nwe ever need to change X?\" as it is difficult to address something that\ndoesn't exist or hasn't been planned.\n\nA fine and delicate balance of anticipating what may happen theoretically\nand what's more likely happen is an art. It's also important to consider\nthe impact of a breaking change. It's one thing if we have to break, say,\nan SQL function signature or SQL syntax itself, and another if it is a\nrelatively small feature related to the administration of a server (in this\ncase, more like scripting a test bench).\n\n\n>\n> If we did a port per line then it wouldn't be changing the format of the\n> first line, so that might not be all that bad.\n>\n\nIf we consider this path, then (if we assume the format of the file is\nstill to be private), we can make the port line accept multiple ports using\na delimiter like `:` so that the next line still remains the same.\n\nThat being said, if the format is private to Postgres, it's all minor\nconsiderations.\n\n\n> > I don't think involving pg_ctl is necessary or desirable, since it would\n> > make any future changes like that even more complicated.\n>\n> I'm a bit confused by this- if pg_ctl is invoked then we have\n> more-or-less full control over parsing and reporting out the answer, so\n> while it might be a bit more complicated for us, it seems surely simpler\n> for the end user. Or maybe you're referring to something here that I'm\n> not thinking of?\n>\n\nI would love to learn about this as well.\n\n\n>\n> Independent of the above though ... this hand-wringing about what we\n> might do in the relative near-term when we haven't done much in the past\n> many-many years regarding listen_addresses or port strikes me as\n> unlikely to be necessary. Let's pick something and get it done and\n> accept that we may have to change it at some point in the future, but\n> that's kinda what major releases are for, imv anyway.\n>\n\nThat's how I see it, too. I tried to make this change as small as possible\nto appreciate the fact that all of this may change one day if or when that\nportion of Postgres will be due for a major redesign. I'd be happy to\ncontribute to that process, but in the meantime, I am looking for the\nsimplest reasonable way to achieve a relatively specific use case.\n\nPersonally, I am fine with reading the `.pid` file and accepting that it\n_may_ change in the future; I am also fine with amending the patch to add\nfunctionality to pg_ctl or adding a new file.\n\nTo keep everybody's cognitive load low, I'd rather not flood the thread\nwith multiple alternative implementations (unless that's desirable) and\njust go for something we can agree on.\n\n(I consider this feature so small that it doesn't deserve such a lengthy\ndiscussion. However, I also get Tom's point about how we document this\nfeature's use, which is very valid and valuable. If it was up to me\nentirely, I'd probably just document `postmaster.pid` and call it a day. If\nit ever breaks, that's a major release territory. 
Otherwise, amending\n`pg_ctl` to access information like this in a uniform way is also a good\napproach if we want to keep the format of the pid file private.)\n\n-- \nY.\n\nStephen,\n> You could just drop another file into the data directory that just contains\n> the port number ($PGDATA/port). However, if we ever do multiple ports, that\n> would still require a change in the format of that file, so I don't know if\n> that's actually better than a).I find it difficult to get anything done under the restriction of \"what if we ever need to change X?\" as it is difficult to address something that doesn't exist or hasn't been planned.A fine and delicate balance of anticipating what may happen theoretically and what's more likely happen is an art. It's also important to consider the impact of a breaking change. It's one thing if we have to break, say, an SQL function signature or SQL syntax itself, and another if it is a relatively small feature related to the administration of a server (in this case, more like scripting a test bench). \n\nIf we did a port per line then it wouldn't be changing the format of the\nfirst line, so that might not be all that bad.If we consider this path, then (if we assume the format of the file is still to be private), we can make the port line accept multiple ports using a delimiter like `:` so that the next line still remains the same. That being said, if the format is private to Postgres, it's all minor considerations.\n\n> I don't think involving pg_ctl is necessary or desirable, since it would\n> make any future changes like that even more complicated.\n\nI'm a bit confused by this- if pg_ctl is invoked then we have\nmore-or-less full control over parsing and reporting out the answer, so\nwhile it might be a bit more complicated for us, it seems surely simpler\nfor the end user. Or maybe you're referring to something here that I'm\nnot thinking of?I would love to learn about this as well. \n\nIndependent of the above though ... this hand-wringing about what we\nmight do in the relative near-term when we haven't done much in the past\nmany-many years regarding listen_addresses or port strikes me as\nunlikely to be necessary. Let's pick something and get it done and\naccept that we may have to change it at some point in the future, but\nthat's kinda what major releases are for, imv anyway.That's how I see it, too. I tried to make this change as small as possible to appreciate the fact that all of this may change one day if or when that portion of Postgres will be due for a major redesign. I'd be happy to contribute to that process, but in the meantime, I am looking for the simplest reasonable way to achieve a relatively specific use case. Personally, I am fine with reading the `.pid` file and accepting that it _may_ change in the future; I am also fine with amending the patch to add functionality to pg_ctl or adding a new file.To keep everybody's cognitive load low, I'd rather not flood the thread with multiple alternative implementations (unless that's desirable) and just go for something we can agree on.(I consider this feature so small that it doesn't deserve such a lengthy discussion. However, I also get Tom's point about how we document this feature's use, which is very valid and valuable. If it was up to me entirely, I'd probably just document `postmaster.pid` and call it a day. If it ever breaks, that's a major release territory. 
Otherwise, amending `pg_ctl` to access information like this in a uniform way is also a good approach if we want to keep the format of the pid file private.) -- Y.",
"msg_date": "Wed, 19 Apr 2023 06:56:46 +0200",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Hi,\n\nHere are my two cents.\n\n> > I would like to suggest a patch against master (although it may be worth\n> > backporting it) that makes it possible to listen on any unused port.\n>\n> I think this is a bad idea, mainly because this:\n>\n> > Instead, with this patch, one can specify `port` as `0` (the \"wildcard\"\n> > port) and retrieve the assigned port from postmaster.pid\n>\n> is a horrid way to find out what was picked, and yet there could\n> be no other.\n\nWhat personally I dislike about this approach is the fact that it is\nnot guaranteed to work in the general case.\n\nLet's say the test framework started Postgres on a random port. Then\nthe framework started to do something else, building a Docker\ncontainer for instance. While the framework is busy PostgreSQL crashes\n(crazy, I know, but not impossible). Both PID and the port will be\nreused eventually by another process. How soon is the implementation\ndetail of the given OS and its setting.\n\nA bullet-proof approach would be (approximately) for the test\nframework to lease the ports on the given machine, for instance by\nusing a KV value with CAS support like Consul or etcd (or another\nPostgreSQL instance), as this is done for leader election in\ndistributed systems (so called leader lease). After leasing the port\nthe framework knows no other testing process on the given machine will\nuse it (and also it keeps telling the KV storage that the port is\nstill leased) and specifies it in postgresql.conf as usual.\n\nI realize this is a complicated problem to solve in a general case,\nbut it doesn't look like the proposed patch is the right solution for\nthe named problem.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 20 Apr 2023 00:44:21 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Alexander,\n\nOn Wed, Apr 19, 2023 at 11:44 PM Aleksander Alekseev <\[email protected]> wrote:\n\n> Hi,\n>\n> Here are my two cents.\n>\n> > > I would like to suggest a patch against master (although it may be\n> worth\n> > > backporting it) that makes it possible to listen on any unused port.\n> >\n> > I think this is a bad idea, mainly because this:\n> >\n> > > Instead, with this patch, one can specify `port` as `0` (the \"wildcard\"\n> > > port) and retrieve the assigned port from postmaster.pid\n> >\n> > is a horrid way to find out what was picked, and yet there could\n> > be no other.\n>\n> What personally I dislike about this approach is the fact that it is\n> not guaranteed to work in the general case.\n>\n> Let's say the test framework started Postgres on a random port. Then\n> the framework started to do something else, building a Docker\n> container for instance. While the framework is busy PostgreSQL crashes\n> (crazy, I know, but not impossible). Both PID and the port will be\n> reused eventually by another process. How soon is the implementation\n\ndetail of the given OS and its setting.\n>\n\nLet's say Postgres crashed, and the port was not reused. In this case, the\nconnection will fail. The test bench script can then, at the very least,\ntry checking the log files to see if there's any indication of a crash\nthere and report if one occurred. If the port was reused by something other\nthan Postgres, the script should (ideally) fail to communicate with it\nusing Postgres protocol. If it was reused by another Postgres instance, it\ngets a bit tougher, but then the test bench can, upon connection, verify\nthat it is the same system by comparing the system identifier on the file\nsystem (retrieved using pg_controldata) and over the wire (retrieved\nusing `select system_identifier from pg_control_system()`)\n\nI also suspect that this problem has a bigger scope than port retrieval. If\none is to use postmaster.pid only for PID retrieval, then there's still no\nguarantee that between the time we retrieved the PID from the file and used\nit,\nPostgres didn't crash, and the PID was not re-used by a different process,\npotentially even another postgres process launched in parallel by the test\nbench.\n\nThere are tools mentioned previously by me in the thread that allow\ninspecting which ports are opened by a given PID, and one can use those to\nprovide an extra determination as to whether we're still on the right\ntrack. These tools\ncan also tell us what is the process name.\n\nUltimately, there's no transactionality in POSIX API, so we're always\nexposed to the chance of discrepancies between the inspection time and the\nnext step.\n\n>\n> A bullet-proof approach would be (approximately) for the test\n> framework to lease the ports on the given machine, for instance by\n> using a KV value with CAS support like Consul or etcd (or another\n> PostgreSQL instance), as this is done for leader election in\n> distributed systems (so called leader lease). After leasing the port\n> the framework knows no other testing process on the given machine will\n> use it (and also it keeps telling the KV storage that the port is\n> still leased) and specifies it in postgresql.conf as usual.\n>\n\nThe approach you suggest introduces a significant amount of complexity but\nseemingly fails to address one of the core issues: using a KV store to\nlease a port does not guarantee the port's availability. 
I don't believe\nthis is a sound way to address this issue, let alone a bulletproof one.\n\nAlso, I don't think there's a case for distributed systems here because\nwe're only managing a single computer's resource: the allocation of local\nports.\n\nIf I were to go for a more bulletproof approach, I would probably consider\na different technique that would not necessitate provisioning and running\nadditional software for port leasing.\n\nFor example, I'd suggest adding an option to Postgres to receive sockets it\nshould listen on from a UNIX socket (using SCM_RIGHTS message) and then\nhave another program acquire the sockets using whatever algorithm (picking\npre-set one, unused wildcard port, etc.) and start Postgres passing the\nsockets using the aforementioned UNIX socket. This program will be your\nleaseholder and can perhaps print out the PID so that the testing scripts\ncan immediately use it. The leaseholder should watch for the Postgres\nprocess to crash. This is still a fairly complicated solution that needs\nsome refining, but it does allocate ports flawlessly, relying on OS being\nthe actual leaseholder and not requiring fighting against race conditions.\nI didn't go for anything like this because of the sheer complexity of it.\n\nThe proposed solution is, I believe, a simple one that gets you there in an\nawful majority of cases. If one starts running out in the error cases like\nport reuse or listener disappearance, the logic I described above may get\nthem a step further.\n\nAlexander,On Wed, Apr 19, 2023 at 11:44 PM Aleksander Alekseev <[email protected]> wrote:Hi,\n\nHere are my two cents.\n\n> > I would like to suggest a patch against master (although it may be worth\n> > backporting it) that makes it possible to listen on any unused port.\n>\n> I think this is a bad idea, mainly because this:\n>\n> > Instead, with this patch, one can specify `port` as `0` (the \"wildcard\"\n> > port) and retrieve the assigned port from postmaster.pid\n>\n> is a horrid way to find out what was picked, and yet there could\n> be no other.\n\nWhat personally I dislike about this approach is the fact that it is\nnot guaranteed to work in the general case.\n\nLet's say the test framework started Postgres on a random port. Then\nthe framework started to do something else, building a Docker\ncontainer for instance. While the framework is busy PostgreSQL crashes\n(crazy, I know, but not impossible). Both PID and the port will be\nreused eventually by another process. How soon is the implementation \ndetail of the given OS and its setting.Let's say Postgres crashed, and the port was not reused. In this case, the connection will fail. The test bench script can then, at the very least, try checking the log files to see if there's any indication of a crash there and report if one occurred. If the port was reused by something other than Postgres, the script should (ideally) fail to communicate with it using Postgres protocol. If it was reused by another Postgres instance, it gets a bit tougher, but then the test bench can, upon connection, verify that it is the same system by comparing the system identifier on the file system (retrieved using pg_controldata) and over the wire (retrieved using `select system_identifier from pg_control_system()`)I also suspect that this problem has a bigger scope than port retrieval. 
If one is to use postmaster.pid only for PID retrieval, then there's still no guarantee that between the time we retrieved the PID from the file and used it,Postgres didn't crash, and the PID was not re-used by a different process, potentially even another postgres process launched in parallel by the test bench.There are tools mentioned previously by me in the thread that allow inspecting which ports are opened by a given PID, and one can use those to provide an extra determination as to whether we're still on the right track. These toolscan also tell us what is the process name.Ultimately, there's no transactionality in POSIX API, so we're always exposed to the chance of discrepancies between the inspection time and the next step.\nA bullet-proof approach would be (approximately) for the test\nframework to lease the ports on the given machine, for instance by\nusing a KV value with CAS support like Consul or etcd (or another\nPostgreSQL instance), as this is done for leader election in\ndistributed systems (so called leader lease). After leasing the port\nthe framework knows no other testing process on the given machine will\nuse it (and also it keeps telling the KV storage that the port is\nstill leased) and specifies it in postgresql.conf as usual.The approach you suggest introduces a significant amount of complexity but seemingly fails to address one of the core issues: using a KV store to lease a port does not guarantee the port's availability. I don't believe this is a sound way to address this issue, let alone a bulletproof one.Also, I don't think there's a case for distributed systems here because we're only managing a single computer's resource: the allocation of local ports.If I were to go for a more bulletproof approach, I would probably consider a different technique that would not necessitate provisioning and running additional software for port leasing. For example, I'd suggest adding an option to Postgres to receive sockets it should listen on from a UNIX socket (using SCM_RIGHTS message) and then have another program acquire the sockets using whatever algorithm (picking pre-set one, unused wildcard port, etc.) and start Postgres passing the sockets using the aforementioned UNIX socket. This program will be your leaseholder and can perhaps print out the PID so that the testing scripts can immediately use it. The leaseholder should watch for the Postgres process to crash. This is still a fairly complicated solution that needs some refining, but it does allocate ports flawlessly, relying on OS being the actual leaseholder and not requiring fighting against race conditions. I didn't go for anything like this because of the sheer complexity of it.The proposed solution is, I believe, a simple one that gets you there in an awful majority of cases. If one starts running out in the error cases like port reuse or listener disappearance, the logic I described above may get them a step further.",
"msg_date": "Thu, 20 Apr 2023 06:30:46 +0200",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
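The SCM_RIGHTS idea sketched in the message above — a separate "leaseholder" process binding the sockets and handing them to Postgres over a UNIX-domain socket — relies on plain POSIX ancillary data. A rough, hypothetical sketch of the sending side (nothing like this exists in Postgres today; it only illustrates the mechanism being discussed) could look like:

```c
/*
 * Hypothetical "leaseholder" side: pass an already-bound listening socket
 * (listen_fd) to another process over a connected UNIX-domain socket
 * (unix_fd) using an SCM_RIGHTS control message.
 */
#include <string.h>
#include <sys/uio.h>
#include <sys/socket.h>

static int
send_listen_socket(int unix_fd, int listen_fd)
{
    struct msghdr   msg;
    struct iovec    iov;
    char            dummy = 'S';
    char            cmsgbuf[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;

    memset(&msg, 0, sizeof(msg));
    memset(cmsgbuf, 0, sizeof(cmsgbuf));

    /* at least one byte of ordinary data must accompany the descriptor */
    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cmsgbuf;
    msg.msg_controllen = sizeof(cmsgbuf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &listen_fd, sizeof(int));

    return (sendmsg(unix_fd, &msg, 0) == 1) ? 0 : -1;
}
```

The receiving side would use recvmsg() with a matching control buffer; the point, as the message says, is that the kernel rather than an external lease registry remains the actual owner of the port.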
{
"msg_contents": "Hi,\n\nYurii Rashkovskii a écrit :\n> On Wed, Apr 19, 2023 at 11:44 PM Aleksander Alekseev <\n> [email protected]> wrote:\n>>>> I would like to suggest a patch against master (although it may be\n>> worth\n>>>> backporting it) that makes it possible to listen on any unused port.\n[...]\n>> A bullet-proof approach would be (approximately) for the test\n>> framework to lease the ports on the given machine, for instance by\n>> using a KV value with CAS support like Consul or etcd (or another\n>> PostgreSQL instance), as this is done for leader election in\n>> distributed systems (so called leader lease). After leasing the port\n>> the framework knows no other testing process on the given machine will\n>> use it (and also it keeps telling the KV storage that the port is\n>> still leased) and specifies it in postgresql.conf as usual.\n>>\n> \n> The approach you suggest introduces a significant amount of complexity but\n> seemingly fails to address one of the core issues: using a KV store to\n> lease a port does not guarantee the port's availability. I don't believe\n> this is a sound way to address this issue, let alone a bulletproof one.\n> \n> Also, I don't think there's a case for distributed systems here because\n> we're only managing a single computer's resource: the allocation of local\n> ports.\n\nFor this (local computer) use case, a tool such as \nhttps://github.com/kmike/port-for/ would do the job if I understand \ncorrectly (the lease thing, locally). And it would work for \"anything\", \nnot just Postgres.\n\nI am curious, Yurii, is Postgres the only service that need an unused \nport for listening in your testing/application environment? Otherwise, \nhow is this handled in other software?\n\nCheers,\nDenis\n\n\n",
"msg_date": "Thu, 20 Apr 2023 11:03:24 +0200",
"msg_from": "Denis Laxalde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Hi,\n\n> Also, I don't think there's a case for distributed systems here because we're only managing a single computer's resource: the allocation of local ports.\n\nI don't suggest building a distributed system but rather using\nwell-known solutions from this area. For the described case the\n\"resource manager\" will be as simple a single Consul instance (a\nsingle binary file, since Consul is written in Go) running locally.\nThe \"complexity\" would be for the test framework to use a few extra\nREST queries. Arguably not that complicated.\n\n> using a KV store to lease a port does not guarantee the port's availability\n\nI assume you don't have random processes doing random things (like\nlistening random ports) on a CI machine. You know that certain ports\nare reserved for the tests and are going to be used only for this\npurpose using the same leasing protocol.\n\nIf there are random things happening on CI you have no control of, you\nare having a problem with the CI infrastructure, not with Postgres.\n\n> For example, I'd suggest adding an option to Postgres to receive sockets it should listen [...]\n\nNot sure if I fully understood the idea, but it looks like you are\nsuggesting to build in certain rather complicated functionality for an\narguably rare use case so that a QA engineer didn't have one extra\nsmall dependency to worry about in this rare case. I'm quite skeptical\nthat this is going to happen.\n\n> I am curious, Yurii, is Postgres the only service that need an unused\n> port for listening in your testing/application environment? Otherwise,\n> how is this handled in other software?\n\nThat's a very good point.\n\nTo clarify, there is nothing wrong with the patch per se. It's merely\nan unreliable solution for a problem it is supposed to address. I\ndon't think we should encourage the users to build unreliable systems.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 20 Apr 2023 14:22:07 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Aleksander,\n\nOn Thu, Apr 20, 2023 at 1:22 PM Aleksander Alekseev <\[email protected]> wrote:\n\n> Hi,\n>\n> > Also, I don't think there's a case for distributed systems here because\n> we're only managing a single computer's resource: the allocation of local\n> ports.\n>\n> I don't suggest building a distributed system but rather using\n> well-known solutions from this area. For the described case the\n> \"resource manager\" will be as simple a single Consul instance (a\n> single binary file, since Consul is written in Go) running locally.\n> The \"complexity\" would be for the test framework to use a few extra\n> REST queries. Arguably not that complicated.\n>\n\nBringing in a process that works over REST API (requiring itself to have a\nport, by the way) and increasing the rollout of such an environment is\nantithetical to simplicity\nand, thus, will make it only worse. If this is the alternative, I'd rather\nhave a few retries or some other small hacks.\n\nBringing in a new dependency with Python is also a heavy solution I'd\nrather avoid. I find that this is rather a problem with a relatively small\nsurface. If the patch won't go through,\nI'll just find a workaround to live with, but I'd rather stay away from\nblowing the development environment out of proportion for something so\nminuscule.\n\n\n>\n> > using a KV store to lease a port does not guarantee the port's\n> availability\n>\n> I assume you don't have random processes doing random things (like\n> listening random ports) on a CI machine. You know that certain ports\n> are reserved for the tests and are going to be used only for this\n> purpose using the same leasing protocol.\n>\n\nThis is intended to be used by CI and development workstations, where all\nbets are kind of off.\n\n\n>\n> > For example, I'd suggest adding an option to Postgres to receive sockets\n> it should listen [...]\n>\n> Not sure if I fully understood the idea, but it looks like you are\n> suggesting to build in certain rather complicated functionality for an\n> arguably rare use case so that a QA engineer didn't have one extra\n> small dependency to worry about in this rare case. I'm quite skeptical\n> that this is going to happen.\n>\n\nMy suggestion was to simply allow listening for a wildcard port and be able\nto read it out in some way. Nothing particularly complicated. The fact that\nthe process may die before it is connected to is rather a strange argument\nas the same can happen outside of this use case.\n\n\n-- \nY.\n\nAleksander,On Thu, Apr 20, 2023 at 1:22 PM Aleksander Alekseev <[email protected]> wrote:Hi,\n\n> Also, I don't think there's a case for distributed systems here because we're only managing a single computer's resource: the allocation of local ports.\n\nI don't suggest building a distributed system but rather using\nwell-known solutions from this area. For the described case the\n\"resource manager\" will be as simple a single Consul instance (a\nsingle binary file, since Consul is written in Go) running locally.\nThe \"complexity\" would be for the test framework to use a few extra\nREST queries. Arguably not that complicated.Bringing in a process that works over REST API (requiring itself to have a port, by the way) and increasing the rollout of such an environment is antithetical to simplicityand, thus, will make it only worse. If this is the alternative, I'd rather have a few retries or some other small hacks.Bringing in a new dependency with Python is also a heavy solution I'd rather avoid. 
I find that this is rather a problem with a relatively small surface. If the patch won't go through,I'll just find a workaround to live with, but I'd rather stay away from blowing the development environment out of proportion for something so minuscule. \n\n> using a KV store to lease a port does not guarantee the port's availability\n\nI assume you don't have random processes doing random things (like\nlistening random ports) on a CI machine. You know that certain ports\nare reserved for the tests and are going to be used only for this\npurpose using the same leasing protocol.This is intended to be used by CI and development workstations, where all bets are kind of off. \n\n> For example, I'd suggest adding an option to Postgres to receive sockets it should listen [...]\n\nNot sure if I fully understood the idea, but it looks like you are\nsuggesting to build in certain rather complicated functionality for an\narguably rare use case so that a QA engineer didn't have one extra\nsmall dependency to worry about in this rare case. I'm quite skeptical\nthat this is going to happen.My suggestion was to simply allow listening for a wildcard port and be able to read it out in some way. Nothing particularly complicated. The fact that the process may die before it is connected to is rather a strange argument as the same can happen outside of this use case.-- Y.",
"msg_date": "Thu, 20 Apr 2023 13:58:01 +0200",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On 19.04.23 06:21, Stephen Frost wrote:\n>> I don't think involving pg_ctl is necessary or desirable, since it would\n>> make any future changes like that even more complicated.\n> I'm a bit confused by this- if pg_ctl is invoked then we have\n> more-or-less full control over parsing and reporting out the answer, so\n> while it might be a bit more complicated for us, it seems surely simpler\n> for the end user. Or maybe you're referring to something here that I'm\n> not thinking of?\n\nGetting pg_ctl involved just requires a lot more work. We need to write \nactual code, documentation, tests, help output, translations, etc. If \nwe ever change anything, then we need to transition the command-line \narguments somehow, add more documentation, etc.\n\nA file is a much simpler interface: You just write to it, write two \nsentences of documentation, that's all.\n\nOr to put it another way, if we don't think a file is an appropriate \ninterface, then why is a PID file appropriate?\n\n> Independent of the above though ... this hand-wringing about what we\n> might do in the relative near-term when we haven't done much in the past\n> many-many years regarding listen_addresses or port strikes me as\n> unlikely to be necessary. Let's pick something and get it done and\n> accept that we may have to change it at some point in the future, but\n> that's kinda what major releases are for, imv anyway.\n\nRight. I'm perfectly content with just allowing port number 0 and \nleaving it at that.\n\n\n\n",
"msg_date": "Mon, 24 Apr 2023 16:16:03 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, failed\nSpec compliant: not tested\nDocumentation: not tested\n\nHello\r\n\r\nThis is one of those features that is beneficial to a handful of people in specific test cases. It may not benefit the majority of the users but is certainly not useless either. As long as it can be disabled and enough tests have been run to ensure it won't have a significant impact on working components while disabled, it should be fine in my opinion. Regarding where the selected port shall be saved (postmaster.pid, read by pg_ctl or saved in a dedicated file), I see that postmaster.pid already contains a port number in line number 4, so adding a port number into there is nothing new; port number is already there and we can simply replace the port number with the one selected by the system. \r\n\r\nI applied and tested the patch and found that the system can indeed start when port is set to 0, but line 4 of postmaster.pid does not store the port number selected by the system, rather, it stored 0, which is the same as configured. So I am actually not able to find out the port number that my PG is running on, at least not in a straight-forward way. \r\n\r\nthank you\r\n==================\r\nCary Huang\r\nHighGo Software\r\nwww.highgo.ca",
"msg_date": "Fri, 05 May 2023 22:30:21 +0000",
"msg_from": "Cary Huang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
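As the review above notes, the port number currently sits on line 4 of postmaster.pid. A minimal sketch of how a test harness might pull it out is below (illustrative only; the pidfile format is not a documented interface, which is much of what this thread is debating):

```c
/*
 * Sketch: read the port number from line 4 of postmaster.pid.
 * The path is supplied by the caller; error handling is minimal.
 */
#include <stdio.h>
#include <stdlib.h>

static long
port_from_pidfile(const char *path)
{
    FILE   *f = fopen(path, "r");
    char    line[256];
    long    port;

    if (f == NULL)
        return -1;

    /* skip PID, data directory and start time; line 4 holds the port */
    for (int i = 1; i <= 4; i++)
    {
        if (fgets(line, sizeof(line), f) == NULL)
        {
            fclose(f);
            return -1;
        }
    }

    port = strtol(line, NULL, 10);
    fclose(f);
    return port;
}
```

Any such reader inherits the caveat raised earlier in the thread: the file layout is currently private to Postgres and could change in a future major release.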
{
"msg_contents": "Hi Cary,\n\nThank you so much for the review. It's very valuable, and you caught an\nimportant issue with it that I somehow missed (not updating the .pid file\nwith the selected port number). I'm not sure how it escaped me (perhaps I\nwas focusing too much on the log file to validate the behaviour).\n\nI've amended the patch to ensure the port number is in the lock file. I've\nattached V2.\n\nYurii\n\n\nOn Sat, May 6, 2023 at 12:31 AM Cary Huang <[email protected]> wrote:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, failed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> Hello\n>\n> This is one of those features that is beneficial to a handful of people in\n> specific test cases. It may not benefit the majority of the users but is\n> certainly not useless either. As long as it can be disabled and enough\n> tests have been run to ensure it won't have a significant impact on working\n> components while disabled, it should be fine in my opinion. Regarding where\n> the selected port shall be saved (postmaster.pid, read by pg_ctl or saved\n> in a dedicated file), I see that postmaster.pid already contains a port\n> number in line number 4, so adding a port number into there is nothing new;\n> port number is already there and we can simply replace the port number with\n> the one selected by the system.\n>\n> I applied and tested the patch and found that the system can indeed start\n> when port is set to 0, but line 4 of postmaster.pid does not store the port\n> number selected by the system, rather, it stored 0, which is the same as\n> configured. So I am actually not able to find out the port number that my\n> PG is running on, at least not in a straight-forward way.\n>\n> thank you\n> ==================\n> Cary Huang\n> HighGo Software\n> www.highgo.ca\n\n\n\n-- \nY.",
"msg_date": "Sun, 7 May 2023 10:07:52 +0200",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 10:16 AM Peter Eisentraut\n<[email protected]> wrote:\n> Right. I'm perfectly content with just allowing port number 0 and\n> leaving it at that.\n\nThat seems fine to me, too. If somebody wants to add a pg_ctl feature\nto extract this or any other information from the postmaster.pid file,\nthat can be a separate patch. But it's not necessarily the case that\nusers would even prefer that interface. Some might, some might not. Or\nso it seems to me, anyway.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 May 2023 08:46:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On 2023-Apr-19, Yurii Rashkovskii wrote:\n\n> If we consider this path, then (if we assume the format of the file is\n> still to be private), we can make the port line accept multiple ports using\n> a delimiter like `:` so that the next line still remains the same.\n\nThis made me wonder if storing the unadorned port number is really the\nbest way. Suppose we did extend things so that we listen on different\nports on different interfaces; how would this scheme work at all? I\nsuspect it would be simpler to store both the interface address and the\nport, perhaps separated by :. You would keep it to one pair per line,\nso you'd get the IPv6 address/port separately from the IPv4 address, for\nexample. And if you happen to have multiple addresses, you know exactly\nwhich ones you're listening on.\n\nTo match a problem that has been discussed in the lists previously,\nsuppose you have listen_addresses='localhost' and the resolver does\nfunny things with that name (say you mess up /etc/hosts after starting).\nThings would be much simpler if you knew exactly what the resolver did\nat postmaster start time.\n\n> (I consider this feature so small that it doesn't deserve such a lengthy\n> discussion. However, I also get Tom's point about how we document this\n\nYou should see the discussion that led to the addition of psql's 'exit'\ncommand sometime:\nhttps://www.postgresql.org/message-id/flat/CALVFHFb-C_5_94hueWg6Dd0zu7TfbpT7hzsh9Zf0DEDOSaAnfA%40mail.gmail.com#949321e44856b7fa295834d6a3997ab4\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Every machine is a smoke machine if you operate it wrong enough.\"\nhttps://twitter.com/libseybieda/status/1541673325781196801\n\n\n",
"msg_date": "Mon, 8 May 2023 16:43:01 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> This made me wonder if storing the unadorned port number is really the\n> best way. Suppose we did extend things so that we listen on different\n> ports on different interfaces; how would this scheme work at all?\n\nYeah, the probability that that will happen someday is one of the\nthings bothering me about this proposal. I'd rather change the\nfile format to support that first (it can be dummy for now, with\nall lines showing the same port), and then document it second.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 May 2023 10:49:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "The documentation fails to build for me:\n\n$ ninja docs\n[1/2] Generating doc/src/sgml/postgres-full.xml with a custom command\nFAILED: doc/src/sgml/postgres-full.xml\n/usr/bin/python3 ../postgresql/doc/src/sgml/xmltools_dep_wrapper \n--targetname doc/src/sgml/postgres-full.xml --depfile \ndoc/src/sgml/postgres-full.xml.d --tool /usr/bin/xmllint -- --nonet \n--noent --valid --path doc/src/sgml -o doc/src/sgml/postgres-full.xml \n../postgresql/doc/src/sgml/postgres.sgml\n../postgresql/doc/src/sgml/postgres.sgml:685: element para: validity \nerror : Element entry is not declared in para list of possible children\nninja: build stopped: subcommand failed.\n\n\nRemoving the <entry> tag resolves the issue:\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex cd07bad3b5..f71859f710 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -684,7 +684,7 @@ include_dir 'conf.d'\n </para>\n <para>\n The port can be set to 0 to make Postgres pick an unused port \nnumber.\n- The assigned port number can be then retrieved from \n<entry><filename>postmaster.pid</filename></entry>.\n+ The assigned port number can be then retrieved from \n<filename>postmaster.pid</filename>.\n </para>\n </listitem>\n </varlistentry>\n\n\n\n",
"msg_date": "Tue, 9 May 2023 14:25:42 +0200",
"msg_from": "Denis Laxalde <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Hi Denis,\n\nGreat catch. I've amended the patch to fix this issue with the\ndocumentation (V3).\n\n\n\nOn Tue, May 9, 2023 at 2:25 PM Denis Laxalde <[email protected]>\nwrote:\n\n> The documentation fails to build for me:\n>\n> $ ninja docs\n> [1/2] Generating doc/src/sgml/postgres-full.xml with a custom command\n> FAILED: doc/src/sgml/postgres-full.xml\n> /usr/bin/python3 ../postgresql/doc/src/sgml/xmltools_dep_wrapper\n> --targetname doc/src/sgml/postgres-full.xml --depfile\n> doc/src/sgml/postgres-full.xml.d --tool /usr/bin/xmllint -- --nonet\n> --noent --valid --path doc/src/sgml -o doc/src/sgml/postgres-full.xml\n> ../postgresql/doc/src/sgml/postgres.sgml\n> ../postgresql/doc/src/sgml/postgres.sgml:685: element para: validity\n> error : Element entry is not declared in para list of possible children\n> ninja: build stopped: subcommand failed.\n>\n>\n> Removing the <entry> tag resolves the issue:\n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index cd07bad3b5..f71859f710 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -684,7 +684,7 @@ include_dir 'conf.d'\n> </para>\n> <para>\n> The port can be set to 0 to make Postgres pick an unused port\n> number.\n> - The assigned port number can be then retrieved from\n> <entry><filename>postmaster.pid</filename></entry>.\n> + The assigned port number can be then retrieved from\n> <filename>postmaster.pid</filename>.\n> </para>\n> </listitem>\n> </varlistentry>\n>\n>\n\n-- \nY.",
"msg_date": "Thu, 11 May 2023 08:46:30 +0200",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Alvaro, Tom,\n\nOn Mon, May 8, 2023 at 4:49 PM Tom Lane <[email protected]> wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n> > This made me wonder if storing the unadorned port number is really the\n> > best way. Suppose we did extend things so that we listen on different\n> > ports on different interfaces; how would this scheme work at all?\n>\n> Yeah, the probability that that will happen someday is one of the\n> things bothering me about this proposal. I'd rather change the\n> file format to support that first (it can be dummy for now, with\n> all lines showing the same port), and then document it second.\n>\n\nHow soon do you think the change will occur that will allow for choosing\ndifferent ports on different interfaces? I am happy to help address this.\n\nRelying on a variable number of lines may be counter-productive here if we\nwant postmaster.pid to be easily readable by shell scripts. What if we\nimproved the port line to be something like this?\n\n```\n127.0.0.1=5432 ::1=54321\n```\n\nBasically, a space-delimited set of address/port pairs (delimited by `=` to\nallow IPv6 addresses to use a colon). If we allow the address side to be\ndropped, the current format (`5432`) will also be correct parsing-wise.\n\n-- \nY.\n\nAlvaro, Tom,On Mon, May 8, 2023 at 4:49 PM Tom Lane <[email protected]> wrote:Alvaro Herrera <[email protected]> writes:\n> This made me wonder if storing the unadorned port number is really the\n> best way. Suppose we did extend things so that we listen on different\n> ports on different interfaces; how would this scheme work at all?\n\nYeah, the probability that that will happen someday is one of the\nthings bothering me about this proposal. I'd rather change the\nfile format to support that first (it can be dummy for now, with\nall lines showing the same port), and then document it second.How soon do you think the change will occur that will allow for choosing different ports on different interfaces? I am happy to help address this.Relying on a variable number of lines may be counter-productive here if we want postmaster.pid to be easily readable by shell scripts. What if weimproved the port line to be something like this?```127.0.0.1=5432 ::1=54321```Basically, a space-delimited set of address/port pairs (delimited by `=` to allow IPv6 addresses to use a colon). If we allow the address side to be dropped, the current format (`5432`) will also be correct parsing-wise.-- Y.",
"msg_date": "Thu, 11 May 2023 09:01:37 +0200",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "On 2023-May-11, Yurii Rashkovskii wrote:\n\n> Relying on a variable number of lines may be counter-productive here if we\n> want postmaster.pid to be easily readable by shell scripts.\n\nOh, I was thinking in Peter E's proposal to list the interface/port\nnumber pairs in a separate file named 'ports' or something like that.\n\n> ```\n> 127.0.0.1=5432 ::1=54321\n> ```\n> \n> Basically, a space-delimited set of address/port pairs (delimited by `=` to\n> allow IPv6 addresses to use a colon).\n\nThis seems a bit too creative. I'd rather have the IPv6 address in\nsquare brackets, which clues the parser immediately as to the address\nfamily and use colons to separate the port number. If we do go with a\nseparate file, which to me sounds easier than cramming it into the PID\nfile, then one per line is likely better, if only because line-oriented\nUnix text tooling has an easier time that way.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Having your biases confirmed independently is how scientific progress is\nmade, and hence made our great society what it is today\" (Mary Gardiner)\n\n\n",
"msg_date": "Thu, 11 May 2023 10:36:30 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
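To make the parsing concern concrete, here is a purely hypothetical reader for the one-pair-per-line format suggested above (e.g. `127.0.0.1:5432` or `[::1]:5432`); no such file exists today, and the format is only a proposal in this thread:

```c
/*
 * Hypothetical parser for one "address:port" pair per line, with IPv6
 * addresses in square brackets.  Returns the port, copies the address
 * into addrbuf; returns -1 on parse failure.
 */
#include <stdlib.h>
#include <string.h>

static long
parse_listen_line(const char *line, char *addrbuf, size_t addrlen)
{
    const char *addr_start = line;
    const char *addr_end;
    const char *colon;

    if (line[0] == '[')
    {
        /* bracketed IPv6: address runs up to the closing ']' */
        addr_start = line + 1;
        addr_end = strchr(addr_start, ']');
        if (addr_end == NULL || addr_end[1] != ':')
            return -1;
        colon = addr_end + 1;
    }
    else
    {
        /* IPv4 or hostname: the last ':' separates the port */
        colon = strrchr(line, ':');
        if (colon == NULL)
            return -1;
        addr_end = colon;
    }

    if ((size_t) (addr_end - addr_start) >= addrlen)
        return -1;
    memcpy(addrbuf, addr_start, addr_end - addr_start);
    addrbuf[addr_end - addr_start] = '\0';

    return strtol(colon + 1, NULL, 10);
}
```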
{
"msg_contents": "On Thu, May 11, 2023 at 10:36 AM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2023-May-11, Yurii Rashkovskii wrote:\n>\n> > ```\n> > 127.0.0.1=5432 ::1=54321\n> > ```\n> >\n> > Basically, a space-delimited set of address/port pairs (delimited by `=`\n> to\n> > allow IPv6 addresses to use a colon).\n>\n> This seems a bit too creative. I'd rather have the IPv6 address in\n> square brackets, which clues the parser immediately as to the address\n> family and use colons to separate the port number. If we do go with a\n> separate file, which to me sounds easier than cramming it into the PID\n> file, then one per line is likely better, if only because line-oriented\n> Unix text tooling has an easier time that way.\n>\n\nJust a general caution here that using square brackets to denote IPv6\naddresses will make it (unnecessarily?) harder to process this with a shell\nscript.\n\n-- \nY.\n\nOn Thu, May 11, 2023 at 10:36 AM Alvaro Herrera <[email protected]> wrote:On 2023-May-11, Yurii Rashkovskii wrote:\n> ```\n> 127.0.0.1=5432 ::1=54321\n> ```\n> \n> Basically, a space-delimited set of address/port pairs (delimited by `=` to\n> allow IPv6 addresses to use a colon).\n\nThis seems a bit too creative. I'd rather have the IPv6 address in\nsquare brackets, which clues the parser immediately as to the address\nfamily and use colons to separate the port number. If we do go with a\nseparate file, which to me sounds easier than cramming it into the PID\nfile, then one per line is likely better, if only because line-oriented\nUnix text tooling has an easier time that way.Just a general caution here that using square brackets to denote IPv6 addresses will make it (unnecessarily?) harder to process this with a shell script.-- Y.",
"msg_date": "Thu, 11 May 2023 13:24:23 +0200",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nHello Yurii,\r\n\r\nI've retested your latest patch and tested building the documentation. \r\n\r\nI agree with the general sentiment that this is an interesting, albeit specific feature. Nevertheless, I would still like to see this integrated. My only concern, like many others have voiced, is in regard to the port number. When I was reviewing this patch I found it quite unintuitive to rummage through the postmaster.pid to find the correct port. I think either a specific pg_ctl command to return the port like Greg had initially mentioned or simply a separate file to store the port numbers would be ideal. The standalone file being the simpler option, this would free up postmaster.pid to allow any future alterations and still be able to reliably get the port number when using this wildcard. We can also build on this file later to allow for multiple ports to be listened on as previously suggested.\r\n\r\nKind regards,\r\nTristen",
"msg_date": "Wed, 17 May 2023 18:37:49 +0000",
"msg_from": "Tristen Raab <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "Hi,\n\n> I think either a specific pg_ctl command to return the port like Greg had initially mentioned or simply a separate file to store the port numbers would be ideal.\n\n+1, if we are going to do this we definitely need a pg_ctl command\nand/or a file.\n\n> The standalone file being the simpler option\n\nAgree. However, I think we will have to add the display of the port\nnumber to \"pg_ctl status\" too, for the sake of consistency [1].\n\n[1]: https://www.postgresql.org/docs/current/app-pg-ctl.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 18 May 2023 14:17:02 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "> On 11 May 2023, at 13:24, Yurii Rashkovskii <[email protected]> wrote:\n> \n> On Thu, May 11, 2023 at 10:36 AM Alvaro Herrera <[email protected] <mailto:[email protected]>> wrote:\n> On 2023-May-11, Yurii Rashkovskii wrote:\n> \n> > ```\n> > 127.0.0.1=5432 ::1=54321\n> > ```\n> > \n> > Basically, a space-delimited set of address/port pairs (delimited by `=` to\n> > allow IPv6 addresses to use a colon).\n> \n> This seems a bit too creative. I'd rather have the IPv6 address in\n> square brackets, which clues the parser immediately as to the address\n> family and use colons to separate the port number. If we do go with a\n> separate file, which to me sounds easier than cramming it into the PID\n> file, then one per line is likely better, if only because line-oriented\n> Unix text tooling has an easier time that way.\n> \n> Just a general caution here that using square brackets to denote IPv6 addresses will make it (unnecessarily?) harder to process this with a shell script.\n\nThis patch is Waiting on Author in the current commitfest with no new patch\npresented following the discussion here. Is there an update ready or should we\nclose it in this CF in favour of a future one?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 14:27:23 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
},
{
"msg_contents": "> On 10 Jul 2023, at 14:27, Daniel Gustafsson <[email protected]> wrote:\n\n> This patch is Waiting on Author in the current commitfest with no new patch\n> presented following the discussion here. Is there an update ready or should we\n> close it in this CF in favour of a future one?\n\nSince the thread stalled here with the patch waiting on author since May I will\ngo ahead and mark it returned with feedback in this CF. Please feel free to\nre-open a new entry in a future CF.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 2 Aug 2023 21:52:24 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow Postgres to pick an unused port to listen"
}
] |
[
{
"msg_contents": "Hi,\n\nPreviously, we read int this mailing list some controversial opinions on \nqueryid generation and Jumbling technique. Here we don't intend to solve \nthese problems but help an extension at least don't conflict with others \non the queryId value.\n\nExtensions could need to pass some query-related data through all stages \nof the query planning and execution. As a trivial example, \npg_stat_statements uses queryid at the end of execution to save some \nstatistics. One more reason - extensions now conflict on queryid value \nand the logic of its changing. With this patch, it can be managed.\n\nThis patch introduces the structure 'ExtensionData' which allows to \nmanage of a list of entries with a couple of interface functions \naddExtensionDataToNode() and GetExtensionData(). Keep in mind the \npossible future hiding of this structure from the public interface.\nAn extension should invent a symbolic key to identify its data. It may \ninvent as many additional keys as it wants but the best option here - is \nno more than one entry for each extension.\nUsage of this machinery is demonstrated by the pg_stat_statements \nexample - here we introduced Bigint node just for natively storing of \nqueryId value.\n\nRuthless pgbench benchmark shows that we got some overhead:\n1.6% - in default mode\n4% - in prepared mode\n~0.1% in extended mode.\n\nAn optimization that avoids copying of queryId by storing it into the \nnode pointer field directly allows to keep this overhead in a range of \n%0.5 for all these modes but increases complexity. So here we \ndemonstrate not optimized variant.\n\nSome questions still cause doubts:\n- QueryRewrite: should we copy extension fields from the parent \nparsetree to the rewritten ones?\n- Are we need to invent a registration procedure to do away with the \nnames of entries and use some compact integer IDs?\n- Do we need to optimize this structure to avoid a copy for simple data \ntypes, for example, inventing something like A_Const?\n\nAll in all, in our opinion, this issue is tend to grow with an \nincreasing number of extensions that utilize planner and executor hooks \nfor some purposes. So, any thoughts will be useful.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Wed, 29 Mar 2023 12:02:30 +0500",
"msg_from": "Andrey Lepikhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "[POC] Allow an extension to add data into Query and PlannedStmt nodes"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 29, 2023 at 12:02:30PM +0500, Andrey Lepikhov wrote:\n>\n> Previously, we read int this mailing list some controversial opinions on\n> queryid generation and Jumbling technique. Here we don't intend to solve\n> these problems but help an extension at least don't conflict with others on\n> the queryId value.\n>\n> Extensions could need to pass some query-related data through all stages of\n> the query planning and execution. As a trivial example, pg_stat_statements\n> uses queryid at the end of execution to save some statistics. One more\n> reason - extensions now conflict on queryid value and the logic of its\n> changing. With this patch, it can be managed.\n\nI just had a quick lookc at the patch, and IIUC it doesn't really help on that\nside, as there's still a single official \"queryid\" that's computed, stored\neverywhere and later used by pg_stat_statements (which does then store in\nadditionally to that official queryid).\n\nYou can currently change the main jumbling algorithm with a custom extension,\nand all extensions will then use it as the source of truth, but I guess that\nwhat you want is to e.g. have an additional and semantically different queryid,\nand create multiple ecosystem of extensions, each using one or the other source\nof queryid without changing the other ecosystem behavior.\n>\n> This patch introduces the structure 'ExtensionData' which allows to manage\n> of a list of entries with a couple of interface functions\n> addExtensionDataToNode() and GetExtensionData(). Keep in mind the possible\n> future hiding of this structure from the public interface.\n> An extension should invent a symbolic key to identify its data. It may\n> invent as many additional keys as it wants but the best option here - is no\n> more than one entry for each extension.\n> Usage of this machinery is demonstrated by the pg_stat_statements example -\n> here we introduced Bigint node just for natively storing of queryId value.\n>\n> Ruthless pgbench benchmark shows that we got some overhead:\n> 1.6% - in default mode\n> 4% - in prepared mode\n> ~0.1% in extended mode.\n\nThat's a quite significant overhead. But the only reason to accept such a\nchange is to actually use it to store additional data, so it would be\ninteresting to see what the overhead is like once you store at least 2\ndifferent values there.\n\n> - Are we need to invent a registration procedure to do away with the names\n> of entries and use some compact integer IDs?\n\nNote that the patch as proposed doesn't have any defense for two extensions\ntrying to register something with the same name, or update a stored value, as\nAddExtensionDataToNode() simply prepend the new value to the list. You can\nactually update the value by just storing the new value, but it will add a\nsignificant overhead to every other extension that want to read another value.\n\nThe API is also quite limited as each stored value has a single identifier.\nWhat if your extension needs to store multiple things? Since it's all based on\nNode you can't really store some custom struct, so you probably have to end up\nwith things like \"my_extension.my_val1\", \"my_extension.my_val2\" which isn't\ngreat.\n\n\n",
"msg_date": "Thu, 30 Mar 2023 15:57:22 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [POC] Allow an extension to add data into Query and PlannedStmt\n nodes"
},
{
"msg_contents": "On 30/3/2023 12:57, Julien Rouhaud wrote:\n>> Extensions could need to pass some query-related data through all stages of\n>> the query planning and execution. As a trivial example, pg_stat_statements\n>> uses queryid at the end of execution to save some statistics. One more\n>> reason - extensions now conflict on queryid value and the logic of its\n>> changing. With this patch, it can be managed.\n> \n> I just had a quick lookc at the patch, and IIUC it doesn't really help on that\n> side, as there's still a single official \"queryid\" that's computed, stored\n> everywhere and later used by pg_stat_statements (which does then store in\n> additionally to that official queryid).\nThank you for the attention!\nThis patch doesn't try to solve the problem of oneness of queryId. In \nthis patch we change pg_stat_statements and it doesn't set 0 into \nqueryId field according to its internal logic. And another extension \nshould do the same - use queryId on your own but not erase it - erase \nyour private copy in the ext_field.\n\n>> Ruthless pgbench benchmark shows that we got some overhead:\n>> 1.6% - in default mode\n>> 4% - in prepared mode\n>> ~0.1% in extended mode.\n> \n> That's a quite significant overhead. But the only reason to accept such a\n> change is to actually use it to store additional data, so it would be\n> interesting to see what the overhead is like once you store at least 2\n> different values there.\nYeah, but as I said earlier, it can be reduced to much smaller value \njust with simple optimization. Here I intentionally avoid it to discuss \nthe core concept.\n> \n>> - Are we need to invent a registration procedure to do away with the names\n>> of entries and use some compact integer IDs?\n> \n> Note that the patch as proposed doesn't have any defense for two extensions\n> trying to register something with the same name, or update a stored value, as\n> AddExtensionDataToNode() simply prepend the new value to the list. You can\n> actually update the value by just storing the new value, but it will add a\n> significant overhead to every other extension that want to read another value.\nThanks a lot! Patch in attachment implements such an idea - extension \ncan allocate some entries and use these private IDs to add entries. I \nhope, an extension would prefer to use only one entry for all the data \nto manage overhead, but still.\n> \n> The API is also quite limited as each stored value has a single identifier.\n> What if your extension needs to store multiple things? Since it's all based on\n> Node you can't really store some custom struct, so you probably have to end up\n> with things like \"my_extension.my_val1\", \"my_extension.my_val2\" which isn't\n> great.\nMain idea here - if an extension holds custom struct and want to pass it \nalong all planning and execution stages it should use extensible node \nwith custom read/write/copy routines.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Thu, 30 Mar 2023 22:20:19 +0500",
"msg_from": "Andrey Lepikhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [POC] Allow an extension to add data into Query and PlannedStmt\n nodes"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nIn another thread [1], Thomas had the idea to $SUBJECT in a similar way\nto what is currently done with src/backend/storage/lmgr/lwlocknames.txt.\n\nDoing so, like in the attached patch proposal, would help to avoid:\n\n- wait event without documentation like observed in [2]\n- orphaned wait event like observed in [3]\n\nThe patch relies on a new src/backend/utils/activity/waiteventnames.txt file that contains on row\nper wait event, with this format:\n\n<ENUM NAME> <WAIT EVENT ENUM> <WAIT EVENT NAME> <WAIT EVENT DOC SENTENCE>\n\nThen, a new perl script (src/backend/utils/activity/generate-waiteventnames.pl) generates the new:\n\n- waiteventnames.c\n- waiteventnames.h\n- waiteventnames.sgml\n\nfiles.\n\nRemarks:\n\n- The new src/backend/utils/activity/waiteventnames.txt file has been created with (a quickly written, non polished\nand not part of the patch) generate_waiteventnames_txt.sh script attached. Then, the proposal for the 2 wait events\nmissing documentation (non committed yet) done in [2] has been added manually to waiteventnames.txt.\n\n- The patch does take care of wait events that currently are linked to enums, means:\n \n - PG_WAIT_ACTIVITY\n - PG_WAIT_CLIENT\n - PG_WAIT_IPC\n - PG_WAIT_TIMEOUT\n - PG_WAIT_IO\n\n so that PG_WAIT_LWLOCK, PG_WAIT_LOCK, PG_WAIT_BUFFER_PIN and PG_WAIT_EXTENSION are not autogenerated.\n\nThis result to having the wait event part of the documentation \"monitoring-stats\" not ordered as compared to the \"Wait Event Types\" Table.\n\nThis is due to the fact that the new waiteventnames.sgml that contains the documentation for\nthe autogenerated ones listed above is \"included\" into doc/src/sgml/monitoring.sgml and then breaks the alphabetical ordering\nwith the ones not autogenerated.\n\nTo fix this I've in mind to also autogenerate enums for PG_WAIT_BUFFER_PIN and PG_WAIT_EXTENSION and\nsplit the current documentation \"Wait Event Types\" Table in 2 tables: one for the autogenerated ones and one (then for\nPG_WAIT_LWLOCK, PG_WAIT_LOCK) for the non autogenerated \"lock\" related ones.\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n[1]: https://www.postgresql.org/message-id/CA%2BhUKG%2BewEpxm%3DhPNXyupRUB_SKGh-6tO86viaco0g-P_pm_Cw%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CA%2BhUKGJixAHc860Ej9Qzd_z96Z6aoajAgJ18bYfV3Lfn6t9%3D%2BQ%40mail.gmail.com\n[3]: https://www.postgresql.org/message-id/CA%2BhUKGK6tqm59KuF1z%2Bh5Y8fsWcu5v8%2B84kduSHwRzwjB2aa_A%40mail.gmail.com",
"msg_date": "Wed, 29 Mar 2023 11:44:39 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 3/29/23 11:44 AM, Drouvot, Bertrand wrote:\n\n> \n> Looking forward to your feedback,\n\nJust realized that more polishing was needed.\n\nDone in V2 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 29 Mar 2023 14:51:27 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 8:51 AM Drouvot, Bertrand <\[email protected]> wrote:\n\n> Hi,\n>\n> On 3/29/23 11:44 AM, Drouvot, Bertrand wrote:\n>\n> >\n> > Looking forward to your feedback,\n>\n> Just realized that more polishing was needed.\n>\n> Done in V2 attached.\n>\n> Regards,\n>\n> --\n> Bertrand Drouvot\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n\n\nI think this is good work, but I can't help thinking it would be easier to\nunderstand and maintain if we used a template engine like Text::Template,\nand filled out the template with the variant bits. I'll ask that question\nin another thread for higher visibility.\n\nOn Wed, Mar 29, 2023 at 8:51 AM Drouvot, Bertrand <[email protected]> wrote:Hi,\n\nOn 3/29/23 11:44 AM, Drouvot, Bertrand wrote:\n\n> \n> Looking forward to your feedback,\n\nJust realized that more polishing was needed.\n\nDone in V2 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.comI think this is good work, but I can't help thinking it would be easier to understand and maintain if we used a template engine like Text::Template, and filled out the template with the variant bits. I'll ask that question in another thread for higher visibility.",
"msg_date": "Thu, 30 Mar 2023 12:41:27 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 12:41:27PM -0400, Corey Huinker wrote:\n> I think this is good work, but I can't help thinking it would be easier to\n> understand and maintain if we used a template engine like Text::Template,\n> and filled out the template with the variant bits. I'll ask that question\n> in another thread for higher visibility.\n\nHmm.. This is not part of the main perl distribution, is it? I am\nnot sure that it is a good idea to increase the requirement bar when\nit comes to build the code and documentation by depending more on\nexternal modules, and the minimum version of perl supported is very\nold^D^D^D ancient, making it harder to satisfy.\n--\nMichael",
"msg_date": "Thu, 20 Apr 2023 09:48:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 02:51:27PM +0200, Drouvot, Bertrand wrote:\n> Just realized that more polishing was needed.\n> \n> Done in V2 attached.\n\nThat would be pretty cool to get that done in an automated way, I've\nwanted that for a few years now. And I guess that a few others have\nthe same feeling after missing to update these docs when adding a new\nwait event, or just to enforce this alphabetically, so let's do\nsomething about it in v17.\n\nAbout the alphabetical order, could we have the script enforce a sort\nof the elements parsed from waiteventnames.txt, based on the second\ncolumn? This now relies on the order of the items in the file, but\nmy history with this stuff has proved that forcing an ordering rule\nwould be a very good thing long-term.\n\nSeeing waiteventnames.txt, I think that we should have something\ncloser to errcodes.txt. Well, seeing the patch, I assume that this is\ninspired by errcodes.txt, but this new file should be able to do more\nIMO:\n- Supporting the parsing of comments, by ignoring them in\ngenerate-waiteventnames.pl.\n- Ignore empty likes.\n- Add a proper header, copyright, the output generated from it, etc.\n- Document the format lines of the file.\n\nIt is clear that the format of the file is:\n- category\n- C symbol in enums.\n- Format in the system views.\n- Description in the docs.\nOr perhaps it would be better to divide this file by sections (like\nerrcodes.txt) for each category so as we eliminate entirely the first\ncolumn?\n\nThis number from v2 is nice to see:\n 17 files changed, 423 insertions(+), 955 deletions(-)\n\nPerhaps waiteventnames.c should be named pgstat_wait_event.c? The\nresult is simply the set of pgstat functions, included in\nwait_event.c (this inclusion is OK for me). Similarly,\nwait_event_types.h would be a better name for the set of enums? \n\n utils/adt/jsonpath_scan.c \\\n+ utils/activity/waiteventnames.c \\\n+ utils/activity/waiteventnames.h \\\n+ utils/adt/jsonpath_scan.c \\\n\nLooks like a copy-pasto.\n\nNote that the patch does not apply, there is a conflict in the docs.\n--\nMichael",
"msg_date": "Thu, 20 Apr 2023 10:09:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 4/20/23 3:09 AM, Michael Paquier wrote:\n> On Wed, Mar 29, 2023 at 02:51:27PM +0200, Drouvot, Bertrand wrote:\n>> Just realized that more polishing was needed.\n>>\n>> Done in V2 attached.\n> \n> That would be pretty cool to get that done in an automated way, I've\n> wanted that for a few years now. And I guess that a few others have\n> the same feeling after missing to update these docs when adding a new\n> wait event, or just to enforce this alphabetically, so let's do\n> something about it in v17.\n\nThanks for the feedback!\n\n> About the alphabetical order, could we have the script enforce a sort\n> of the elements parsed from waiteventnames.txt, based on the second\n> column? This now relies on the order of the items in the file, but\n> my history with this stuff has proved that forcing an ordering rule\n> would be a very good thing long-term.\n\nNot having the lines in order would not have been a problem for the perl script\n(as it populated the hash table based on the category column while reading the\ntext file).\n\nThat said I do agree that enforcing an order is a good idea, as it's \"easier\" to read\nthe generated output files (their content is now somehow \"ordered\").\n\nThis is done in V3 attached.\n\n> Seeing waiteventnames.txt, I think that we should have something\n> closer to errcodes.txt. Well, seeing the patch, I assume that this is\n> inspired by errcodes.txt, but this new file should be able to do more\n> IMO:\n> - Supporting the parsing of comments, by ignoring them in\n> generate-waiteventnames.pl.\n> - Ignore empty likes.\n> - Add a proper header, copyright, the output generated from it, etc.\n> - Document the format lines of the file.\n> \n\nFully agree, it's done in V3 attached.\n\n> It is clear that the format of the file is:\n> - category\n> - C symbol in enums.\n> - Format in the system views.\n> - Description in the docs.\n> Or perhaps it would be better to divide this file by sections (like\n> errcodes.txt) for each category so as we eliminate entirely the first\n> column?\n> \n\nYeah, that could be an option. V3 is still using the category as the first column\nbut I'm ok to change it by a section if you prefer (though I don't really see the need).\n\n> Perhaps waiteventnames.c should be named pgstat_wait_event.c? \n\nAgree, done.\n\n> Similarly,\n> wait_event_types.h would be a better name for the set of enums?\n> \n\nAlso agree, done.\n\n\n> utils/adt/jsonpath_scan.c \\\n> + utils/activity/waiteventnames.c \\\n> + utils/activity/waiteventnames.h \\\n> + utils/adt/jsonpath_scan.c \\\n> \n> Looks like a copy-pasto.\n\nWhy do you think so? both files have to be removed.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 22 Apr 2023 15:36:05 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Sat, Apr 22, 2023 at 03:36:05PM +0200, Drouvot, Bertrand wrote:\n> On 4/20/23 3:09 AM, Michael Paquier wrote:\n>> It is clear that the format of the file is:\n>> - category\n>> - C symbol in enums.\n>> - Format in the system views.\n>> - Description in the docs.\n>> Or perhaps it would be better to divide this file by sections (like\n>> errcodes.txt) for each category so as we eliminate entirely the first\n>> column?\n> \n> Yeah, that could be an option. V3 is still using the category as the first column\n> but I'm ok to change it by a section if you prefer (though I don't really see the need).\n\nIt can make the file width shorter, at least..\n\n[ .. thinks .. ]\n\n+my $waitclass;\n+my @wait_classes = (\"PG_WAIT_ACTIVITY\", \"PG_WAIT_CLIENT\", \"PG_WAIT_IPC\", \"PG_WAIT_TIMEOUT\", \"PG_WAIT_IO\");\n\nActually, having a \"Section\" in waiteventnames.txt would remove the\nneed to store this knowledge in generate-waiteventnames.pl, which is\na duplicate of the txt contents. If somebody adds a new class in the\nfuture, it would be necessary to update this path as well. Well, that\nwould not be a huge effort in itself, but IMO the script translating\nthe .txt to the docs and the code should have no need to know the\ntypes of classes. I guess that a format like that would make the most\nsense to me, then:\nSection: ClassName PG_WAIT_CLASS_NAME\n\n# ClassName would be \"IO\", \"IPC\", \"Timeout\", etc.\n\nWAIT_EVENT_NAME_1 \"WaitEventName1\" \"Description of wait event 1\"\nWAIT_EVENT_NAME_N \"WaitEventNameN\" \"Description of wait event N\"\n\n>> utils/adt/jsonpath_scan.c \\\n>> + utils/activity/waiteventnames.c \\\n>> + utils/activity/waiteventnames.h \\\n>> + utils/adt/jsonpath_scan.c \\\n>> \n>> Looks like a copy-pasto.\n> \n> Why do you think so? both files have to be removed.\n\njsonpath_scan.c is listed twice, and that's still the case in v3. The\nlist of files deleted for maintainer-clean in src/backend/Makefile\nshould be listed alphabetically (utils/activity/ before utils/adt/),\nbut that's a nit ;)\n--\nMichael",
"msg_date": "Mon, 24 Apr 2023 12:15:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 4/24/23 5:15 AM, Michael Paquier wrote:\n> On Sat, Apr 22, 2023 at 03:36:05PM +0200, Drouvot, Bertrand wrote:\n>> On 4/20/23 3:09 AM, Michael Paquier wrote:\n>>> It is clear that the format of the file is:\n>>> - category\n>>> - C symbol in enums.\n>>> - Format in the system views.\n>>> - Description in the docs.\n>>> Or perhaps it would be better to divide this file by sections (like\n>>> errcodes.txt) for each category so as we eliminate entirely the first\n>>> column?\n>>\n>> Yeah, that could be an option. V3 is still using the category as the first column\n>> but I'm ok to change it by a section if you prefer (though I don't really see the need).\n> \n> It can make the file width shorter, at least..\n\nRight.\n\n> \n> [ .. thinks .. ]\n> \n> +my $waitclass;\n> +my @wait_classes = (\"PG_WAIT_ACTIVITY\", \"PG_WAIT_CLIENT\", \"PG_WAIT_IPC\", \"PG_WAIT_TIMEOUT\", \"PG_WAIT_IO\");\n> \n> Actually, having a \"Section\" in waiteventnames.txt would remove the\n> need to store this knowledge in generate-waiteventnames.pl, which is\n> a duplicate of the txt contents. If somebody adds a new class in the\n> future, it would be necessary to update this path as well. Well, that\n> would not be a huge effort in itself, but IMO the script translating\n> the .txt to the docs and the code should have no need to know the\n> types of classes. I guess that a format like that would make the most\n> sense to me, then:\n> Section: ClassName PG_WAIT_CLASS_NAME\n> \n> # ClassName would be \"IO\", \"IPC\", \"Timeout\", etc.\n> \n> WAIT_EVENT_NAME_1 \"WaitEventName1\" \"Description of wait event 1\"\n> WAIT_EVENT_NAME_N \"WaitEventNameN\" \"Description of wait event N\"\n> \n\nI gave another thought on it, and do agree that's better to use sections\nin the .txt file. This is done in V4 attached.\n\n>>> utils/adt/jsonpath_scan.c \\\n>>> + utils/activity/waiteventnames.c \\\n>>> + utils/activity/waiteventnames.h \\\n>>> + utils/adt/jsonpath_scan.c \\\n>>>\n>>> Looks like a copy-pasto.\n>>\n>> Why do you think so? both files have to be removed.\n> \n> jsonpath_scan.c is listed twice, and that's still the case in v3.\n\nOh I see, I misunderstood what you thought the typo was.\nFixed in V4 thanks!\n\n> The\n> list of files deleted for maintainer-clean in src/backend/Makefile\n> should be listed alphabetically (utils/activity/ before utils/adt/),\n> but that's a nit ;)\n\nOh right, fixed.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 24 Apr 2023 09:03:53 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 09:03:53AM +0200, Drouvot, Bertrand wrote:\n> Oh right, fixed.\n\nI may tweak a few things if I put my hands on it, but that looks\npretty solid seen from here.. I have spotted a few extra issues.\n\nOne thing I have noticed with v4 is that the order of the tables\ngenerated in wait_event_types.h and the SGML docs is inconsistent with\nprevious versions, and these are not in an alphabetical order. HEAD\norders them as Activity, BufferPin, Client, Extension, IO, IPC, Lock,\nLWLock and Timeout. This patch switches the order to become\ndifferent, and note that the first table describing each of the wait\nevent type classes gets it right.\n\nIt seems to me that you should apply an extra ordering in\ngenerate-waiteventnames.pl to make sure that the tables are printed in \nthe same order as previously, around here:\n+# Generate the output files\n+foreach $waitclass (keys %hashwe) {\n\n(The table describing all the wait event types could be removed from\nthe SGML docs as well, at the cost of having their description in the\nnew .txt file. However, as these are long, it would make the .txt\nfile much messier, so not doing this extra part is OK for me.)\n\n- * Use this category when a process is waiting because it has no work to do,\n- * unless the \"Client\" or \"Timeout\" category describes the situation better.\n- * Typically, this should only be used for background processes\n\nwait_event.h includes a set of comments describing each category, that\nthis patch removes. Rather than removing this information, which is\nhelpful to have around, why not making them comments of\nwaiteventnames.txt instead? Losing this information would be sad.\n\n+# src/backend/utils/activity/pgstat_wait_event.c\n+# c functions to get the wait event name based on the enum\nNit. 'c' should be upper-case.\n\n }\n+\n if (IsNewer(\n 'src/include/storage/lwlocknames.h',\nNot wrong, but this is an unrelated diff.\n\n+if %DIST%==1 if exist src\\backend\\utils\\activity\\pgstat_wait_event.c del /q src\\backend\\utils\\activity\\pgstat_wait_event.c\n if %DIST%==1 if exist src\\backend\\storage\\lmgr\\lwlocknames.h del /q src\\backend\\storage\\lmgr\\lwlocknames.h\n+if %DIST%==1 if exist src\\backend\\utils\\activity\\wait_event_types.h del /q src\\backend\\utils\\activity\\wait_event_types.h\nThe order here is off a bit. Missed that previously..\n\nperltidy had better be run on generate-waiteventnames.pl (I can do\nthat myself, no worries), as a couple of lines' format don't seem\nquite in line.\n--\nMichael",
"msg_date": "Tue, 25 Apr 2023 14:15:07 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 4/25/23 7:15 AM, Michael Paquier wrote:\n> On Mon, Apr 24, 2023 at 09:03:53AM +0200, Drouvot, Bertrand wrote:\n>> Oh right, fixed.\n> \n> I may tweak a few things if I put my hands on it, but that looks\n> pretty solid seen from here.. \n\nGlad to hear! ;-)\n\n> I have spotted a few extra issues.\n> \n> One thing I have noticed with v4 is that the order of the tables\n> generated in wait_event_types.h and the SGML docs is inconsistent with\n> previous versions, and these are not in an alphabetical order. HEAD\n> orders them as Activity, BufferPin, Client, Extension, IO, IPC, Lock,\n> LWLock and Timeout. This patch switches the order to become\n> different, and note that the first table describing each of the wait\n> event type classes gets it right.\n> \n\nRight, ordering being somehow broken is also something I did mention initially when I first\npresented this patch up-thread. That's also due to the fact that this patch\ndoes not autogenerate PG_WAIT_LWLOCK, PG_WAIT_LOCK, PG_WAIT_BUFFER_PIN and PG_WAIT_EXTENSION.\n\n> It seems to me that you should apply an extra ordering in\n> generate-waiteventnames.pl to make sure that the tables are printed in\n> the same order as previously, around here:\n> +# Generate the output files\n> +foreach $waitclass (keys %hashwe) {\n> \n\nYeah but that would still affect only the auto-generated one and then\nresult to having the wait event part of the documentation \"monitoring-stats\"\nnot ordered as compared to the \"Wait Event Types\" Table.\n\nAnd as we have only one \"include\" in doc/src/sgml/monitoring.sgml for all the\nauto-generated one, the ordering would still be broken.\n\n> (The table describing all the wait event types could be removed from\n> the SGML docs as well, at the cost of having their description in the\n> new .txt file. However, as these are long, it would make the .txt\n> file much messier, so not doing this extra part is OK for me.)\n\nRight, but that might be a valuable option to also fix the ordering issue\nmentioned above (need to look deeper at this).\n\n>\n> - * Use this category when a process is waiting because it has no work to do,\n> - * unless the \"Client\" or \"Timeout\" category describes the situation better.\n> - * Typically, this should only be used for background processes\n> \n> wait_event.h includes a set of comments describing each category, that\n> this patch removes. Rather than removing this information, which is\n> helpful to have around, why not making them comments of\n> waiteventnames.txt instead? Losing this information would be sad.\n>\n\nYeah, good point, I'll look at this.\n\n> +# src/backend/utils/activity/pgstat_wait_event.c\n> +# c functions to get the wait event name based on the enum\n> Nit. 'c' should be upper-case.\n> \n> }\n> +\n> if (IsNewer(\n> 'src/include/storage/lwlocknames.h',\n> Not wrong, but this is an unrelated diff.\n> \n\nYeah, probably due to a pgindent run.\n\n> +if %DIST%==1 if exist src\\backend\\utils\\activity\\pgstat_wait_event.c del /q src\\backend\\utils\\activity\\pgstat_wait_event.c\n> if %DIST%==1 if exist src\\backend\\storage\\lmgr\\lwlocknames.h del /q src\\backend\\storage\\lmgr\\lwlocknames.h\n> +if %DIST%==1 if exist src\\backend\\utils\\activity\\wait_event_types.h del /q src\\backend\\utils\\activity\\wait_event_types.h\n> The order here is off a bit. 
Missed that previously..\n> \n> perltidy had better be run on generate-waiteventnames.pl (I can do\n> that myself, no worries), as a couple of lines' format don't seem\n> quite in line.\n\nWill do, no problem at all.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 26 Apr 2023 18:51:46 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 4/26/23 6:51 PM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 4/25/23 7:15 AM, Michael Paquier wrote:\n> \n> Will do, no problem at all.\n> \n\nPlease find attached V5 addressing the previous comments except\nthe \"ordering\" one (need to look deeper at this).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 26 Apr 2023 20:36:44 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Wed, Apr 26, 2023 at 08:36:44PM +0200, Drouvot, Bertrand wrote:\n> Please find attached V5 addressing the previous comments except\n> the \"ordering\" one (need to look deeper at this).\n\nI was putting my hands into that, and I see now what you mean here..\nAmong the nine types of wait events, Lock, LWLock, BufferPin and\nExtension don't get generated at all.\n\nGenerating the contents of Lock would mean to gather in a single file\nthe data for the generation of LockTagType in lock.h, the list of\nLockTagTypeNames in lockfuncs.c and the description of the docs. This\ndata being spread across three files is not really appealing to make\nthat generated.. LWLocks would mean to either extend lwlocknames.txt\nwith the description from the docs if we were to centralize the whole\nthing.\n\nBut do we need to merge more data than necessary? We could do things\nin the simplest fashion possible while making the docs and code\nuser-friendly in the ordering: just add a section for Lock and LWLocks\nin waiteventnames.txt with an extra comment in their headers and/or\ndata files to tell that waiteventnames.txt also needs a refresh. I\nwould be tempted to do that, actually, and force an ordering for all\nthe wait event categories in generate-waiteventnames.pl with something\nlike that:\n # Generate the output files\n-foreach $waitclass (keys %hashwe)\n+foreach $waitclass (sort keys %hashwe)\n\nBufferPin and Extension don't really imply an extra cost by the way:\nthey could just be added to the txt for the wait events even if they\nhave one single element for now.\n--\nMichael",
"msg_date": "Thu, 27 Apr 2023 15:13:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 4/27/23 8:13 AM, Michael Paquier wrote:\n> On Wed, Apr 26, 2023 at 08:36:44PM +0200, Drouvot, Bertrand wrote:\n>> Please find attached V5 addressing the previous comments except\n>> the \"ordering\" one (need to look deeper at this).\n> \n> I was putting my hands into that, and I see now what you mean here..\n> Among the nine types of wait events, Lock, LWLock, BufferPin and\n> Extension don't get generated at all.\n> \n\nRight.\n\n> Generating the contents of Lock would mean to gather in a single file\n> the data for the generation of LockTagType in lock.h, the list of\n> LockTagTypeNames in lockfuncs.c and the description of the docs. This\n> data being spread across three files is not really appealing to make\n> that generated.. LWLocks would mean to either extend lwlocknames.txt\n> with the description from the docs if we were to centralize the whole\n> thing.\n> \n> But do we need to merge more data than necessary? We could do things\n> in the simplest fashion possible while making the docs and code\n> user-friendly in the ordering: just add a section for Lock and LWLocks\n> in waiteventnames.txt with an extra comment in their headers and/or\n> data files to tell that waiteventnames.txt also needs a refresh. \n\nAgree that it would fix the doc ordering and that we could do that.\n\nIt's done that way in V6.\n\nThere is already comments about this in lockfuncs.c and lwlocknames.txt, so\nV6 updates those comments accordingly.\n\n> I would be tempted to do that, actually, and force an ordering for all\n> the wait event categories in generate-waiteventnames.pl with something\n> like that:\n> # Generate the output files\n> -foreach $waitclass (keys %hashwe)\n> +foreach $waitclass (sort keys %hashwe)\n> \n\nAgree, done in V6.\n\n> BufferPin and Extension don't really imply an extra cost by the way:\n> they could just be added to the txt for the wait events even if they\n> have one single element for now.\n\nRight, done that way in V6.\n\nPlease note that it creates 2 new \"wait events\": WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN.\n\nThen, they replace PG_WAIT_EXTENSION and PG_WAIT_BUFFER_PIN (resp.) where appropriate.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 28 Apr 2023 14:29:13 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Fri, Apr 28, 2023 at 02:29:13PM +0200, Drouvot, Bertrand wrote:\n> On 4/27/23 8:13 AM, Michael Paquier wrote:\n>> Generating the contents of Lock would mean to gather in a single file\n>> the data for the generation of LockTagType in lock.h, the list of\n>> LockTagTypeNames in lockfuncs.c and the description of the docs. This\n>> data being spread across three files is not really appealing to make\n>> that generated.. LWLocks would mean to either extend lwlocknames.txt\n>> with the description from the docs if we were to centralize the whole\n>> thing.\n>> \n>> But do we need to merge more data than necessary? We could do things\n>> in the simplest fashion possible while making the docs and code\n>> user-friendly in the ordering: just add a section for Lock and LWLocks\n>> in waiteventnames.txt with an extra comment in their headers and/or\n>> data files to tell that waiteventnames.txt also needs a refresh.\n> \n> Agree that it would fix the doc ordering and that we could do that.\n\nNot much a fan of the part where a full paragraph of the SGML docs is\nadded to the .txt, particularly with the new handling for \"Notes\".\nI'd rather shape the perl script to be minimalistic and simpler, even\nif it means moving this paragraph about LWLocks after all the tables\nare generated.\n\nDo we also need the comments in the generated header as well? My\ninitial impression was to just move these as comments of the .txt file\nbecause that's where the new events would be added, as the .txt is the\nmain point of reference.\n\n> It's done that way in V6.\n> \n> There is already comments about this in lockfuncs.c and lwlocknames.txt, so\n> V6 updates those comments accordingly.\n> \n> Right, done that way in V6.\n> \n> Please note that it creates 2 new \"wait events\":\n> WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN.\n\nNoted. Makes sense here.\n\n> Then, they replace PG_WAIT_EXTENSION and PG_WAIT_BUFFER_PIN (resp.) where appropriate.\n\nSo, the change here..\n+ # Exception here\n+ if ($last =~ /^BufferPin/)\n+ {\n+ $last = \"Buffer_Pin\";\n+ }\n\n.. Implies the two following changes:\ntypedef enum\n {\n-\tWAIT_EVENT_BUFFER_PIN = PG_WAIT_BUFFER_PIN\n+\tWAIT_EVENT_BUFFER_PIN = PG_WAIT_BUFFERPIN\n } WaitEventBufferPin;\n[...]\n static const char *\n-pgstat_get_wait_buffer_pin(WaitEventBufferPin w)\n+pgstat_get_wait_bufferpin(WaitEventBufferPin w)\n\nI would be OK to remove this exception in the script as it does not\nchange anything for the end user (the wait event string is still\nreported as \"BufferPin\"). This way, we keep things simpler in the\nscript. This has as extra consequence to require a change in\nwait_event.h so as PG_WAIT_BUFFER_PIN is renamed to PG_WAIT_BUFFERPIN,\nequally fine by me. 
Logically, this rename should be done in a patch\nof its own, for clarity.\n\n@@ -185,6 +193,7 @@ distprep:\n $(MAKE) -C utils distprep\n $(MAKE) -C utils/adt jsonpath_gram.c jsonpath_gram.h jsonpath_scan.c\n $(MAKE) -C utils/misc guc-file.c\n+ $(MAKE) -C utils/actvity wait_event_types.h pgstat_wait_event.c\nIncorrect order, and incorrect name (s/actvity/activity/, lacking an\n'i').\n\n+printf $h $header_comment, 'wait_event_types.h';\n+printf $h \"#ifndef WAITEVENTNAMES_H\\n\";\n+printf $h \"#define WAITEVENTNAMES_H\\n\\n\";\nInconsistency detected here.\n\nIt seems to me that we'd better have a .gitignore in utils/activity/\nfor the new files.\n\n@@ -237,7 +237,7 @@ autoprewarm_main(Datum main_arg)\n (void) WaitLatch(MyLatch,\n WL_LATCH_SET | WL_EXIT_ON_PM_DEATH,\n -1L,\n- PG_WAIT_EXTENSION);\n+ WAIT_EVENT_EXTENSION);\n\nPerhaps this should also be part of a first, separate patch, with the\nintroduction of the new pgstat_get_wait_extension/bufferpin()\nfunctions. Okay, it is not a big deal because the main patch\ngenerates the enum for extensions which would be used here, but for\nthe sake of history clarity I'd rather refactor and rename all that\nfirst.\n\nThe choices of LWLOCK and LOCK for the internal names was a bit\nsurprising, while we can be consistent with the rest and use \"LWLock\"\nand \"Lock\".\n\nAttached is a v7 with the portions I have adjusted, which is mostly OK\nby me at this point. We are still away from the next CF, but I'll\nlook at that again when the v17 branch opens.\n--\nMichael",
"msg_date": "Mon, 1 May 2023 08:59:44 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "> [patch]\n\nThis is not a review of the perl/make/meson glue/details, but I just\nwanted to say thanks for working on this Bertrand & Michael, at a\nquick glance that .txt file looks like it's going to be a lot more fun\nto maintain!\n\n\n",
"msg_date": "Tue, 2 May 2023 14:50:26 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 5/1/23 1:59 AM, Michael Paquier wrote:\n> On Fri, Apr 28, 2023 at 02:29:13PM +0200, Drouvot, Bertrand wrote:\n>> On 4/27/23 8:13 AM, Michael Paquier wrote:\n>>>\n>>> But do we need to merge more data than necessary? We could do things\n>>> in the simplest fashion possible while making the docs and code\n>>> user-friendly in the ordering: just add a section for Lock and LWLocks\n>>> in waiteventnames.txt with an extra comment in their headers and/or\n>>> data files to tell that waiteventnames.txt also needs a refresh.\n>>\n>> Agree that it would fix the doc ordering and that we could do that.\n> \n> Not much a fan of the part where a full paragraph of the SGML docs is\n> added to the .txt, particularly with the new handling for \"Notes\".\n\nI understand your concern.\n\n> I'd rather shape the perl script to be minimalistic and simpler, even\n> if it means moving this paragraph about LWLocks after all the tables\n> are generated.\n\nI'm not sure I like it. First, it does break the \"Note\" ordering as compare\nto the current documentation. That's not a big deal though.\n\nSecondly, what If we need to add some note(s) in the future for another wait class? Having all the notes\nafter all the tables are generated would sound weird to me.\n\nWe could discuss another approach for the \"Note\" part if there is a need to add one for an existing/new wait class\nthough.\n\n> \n> Do we also need the comments in the generated header as well? My\n> initial impression was to just move these as comments of the .txt file\n> because that's where the new events would be added, as the .txt is the\n> main point of reference.\n> \n\nOh I see. The advantage of the previous approach is to have them at both places (.txt and header).\nBut that said I understand your point about having the perl script minimalistic and simpler.\n\n>> Please note that it creates 2 new \"wait events\":\n>> WAIT_EVENT_EXTENSION and WAIT_EVENT_BUFFER_PIN.\n> \n> Noted. Makes sense here.\n\nYup and that may help to add \"custom\" wait event for extensions too (need to think about it once\nthis refactoring is done).\n\n> So, the change here..\n> + # Exception here\n> + if ($last =~ /^BufferPin/)\n> + {\n> + $last = \"Buffer_Pin\";\n> + }\n> \n> .. Implies the two following changes:\n> typedef enum\n> {\n> -\tWAIT_EVENT_BUFFER_PIN = PG_WAIT_BUFFER_PIN\n> +\tWAIT_EVENT_BUFFER_PIN = PG_WAIT_BUFFERPIN\n> } WaitEventBufferPin;\n> [...]\n> static const char *\n> -pgstat_get_wait_buffer_pin(WaitEventBufferPin w)\n> +pgstat_get_wait_bufferpin(WaitEventBufferPin w)\n> \n> I would be OK to remove this exception in the script as it does not\n> change anything for the end user (the wait event string is still\n> reported as \"BufferPin\"). This way, we keep things simpler in the\n> script. \n\nGood point, agree.\n\n> This has as extra consequence to require a change in\n> wait_event.h so as PG_WAIT_BUFFER_PIN is renamed to PG_WAIT_BUFFERPIN,\n> equally fine by me. 
Logically, this rename should be done in a patch\n> of its own, for clarity.\n\nYes, I can look at it.\n\n> \n> @@ -185,6 +193,7 @@ distprep:\n> $(MAKE) -C utils distprep\n> $(MAKE) -C utils/adt jsonpath_gram.c jsonpath_gram.h jsonpath_scan.c\n> $(MAKE) -C utils/misc guc-file.c\n> + $(MAKE) -C utils/actvity wait_event_types.h pgstat_wait_event.c\n> Incorrect order, and incorrect name (s/actvity/activity/, lacking an\n> 'i').\n> \n\nNice catch.\n\n> +printf $h $header_comment, 'wait_event_types.h';\n> +printf $h \"#ifndef WAITEVENTNAMES_H\\n\";\n> +printf $h \"#define WAITEVENTNAMES_H\\n\\n\";\n> Inconsistency detected here.\n> \n> It seems to me that we'd better have a .gitignore in utils/activity/\n> for the new files.\n> \n\nAgree.\n\n> @@ -237,7 +237,7 @@ autoprewarm_main(Datum main_arg)\n> (void) WaitLatch(MyLatch,\n> WL_LATCH_SET | WL_EXIT_ON_PM_DEATH,\n> -1L,\n> - PG_WAIT_EXTENSION);\n> + WAIT_EVENT_EXTENSION);\n> \n> Perhaps this should also be part of a first, separate patch, with the\n> introduction of the new pgstat_get_wait_extension/bufferpin()\n> functions. Okay, it is not a big deal because the main patch\n> generates the enum for extensions which would be used here, but for\n> the sake of history clarity I'd rather refactor and rename all that\n> first.\n> \n\nAgree, I'll look at this.\n\n> The choices of LWLOCK and LOCK for the internal names was a bit\n> surprising, while we can be consistent with the rest and use \"LWLock\"\n> and \"Lock\".\n> \n> Attached is a v7 with the portions I have adjusted, which is mostly OK\n> by me at this point. We are still away from the next CF, but I'll\n> look at that again when the v17 branch opens.\n\nThanks for the v7! I did not look at the details but just replied to this thread.\n\nI'll look at v7 when the v17 branch opens and propose the separate patch\nmentioned above at that time too.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 May 2023 08:39:49 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 5/2/23 4:50 AM, Thomas Munro wrote:\n>> [patch]\n> \n> This is not a review of the perl/make/meson glue/details, but I just\n> wanted to say thanks for working on this Bertrand & Michael, at a\n> quick glance that .txt file looks like it's going to be a lot more fun\n> to maintain!\n\nThanks Thomas! Yeah and probably less error prone too ;-)\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 May 2023 08:45:07 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Thu, May 04, 2023 at 08:39:49AM +0200, Drouvot, Bertrand wrote:\n> On 5/1/23 1:59 AM, Michael Paquier wrote:\n> I'm not sure I like it. First, it does break the \"Note\" ordering as compare\n> to the current documentation. That's not a big deal though.\n> \n> Secondly, what If we need to add some note(s) in the future for\n> another wait class? Having all the notes after all the tables are\n> generated would sound weird to me.\n\nAppending these notes at the end of all the tables does not strike me\nas a big dea, TBH. But, well, my sole opinion is not the final choice\neither. For now, I am mostly tempted to keep the generation script as\nminimalistic as possible.\n\n> We could discuss another approach for the \"Note\" part if there is a\n> need to add one for an existing/new wait class though.\n\nDocumenting what's expected from the wait event classes is critical in\nthe .txt file as that's what developers are going to look at when\nadding a new wait event. Adding them in the header is less appealing\nto me considering that is it now generated, and the docs provide a lot\nof explanation as well.\n\n>> This has as extra consequence to require a change in\n>> wait_event.h so as PG_WAIT_BUFFER_PIN is renamed to PG_WAIT_BUFFERPIN,\n>> equally fine by me. Logically, this rename should be done in a patch\n>> of its own, for clarity.\n> \n> Yes, I can look at it.\n> [...]\n> Agree, I'll look at this.\n\nThanks!\n\n> I'll look at v7 when the v17 branch opens and propose the separate patch\n> mentioned above at that time too.\n\nThanks, again.\n--\nMichael",
"msg_date": "Sat, 6 May 2023 11:23:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Sat, May 06, 2023 at 11:23:17AM +0900, Michael Paquier wrote:\n>> I'll look at v7 when the v17 branch opens and propose the separate patch\n>> mentioned above at that time too.\n> \n> Thanks, again.\n\nBy the way, while browsing through the patch, I have noticed two\nthings:\n- The ordering of the items for Lock and LWLock is incorrect.\n- We are missing some of the LWLock entries, like CommitTsBuffer,\nXactBuffer or WALInsert, as of all the entries in\nBuiltinTrancheNames.\n\nMy apologies for not noticing that earlier. This exists in v6 as much\nas v7.\n--\nMichael",
"msg_date": "Sat, 6 May 2023 11:37:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 5/6/23 4:23 AM, Michael Paquier wrote:\n> On Thu, May 04, 2023 at 08:39:49AM +0200, Drouvot, Bertrand wrote:\n>> On 5/1/23 1:59 AM, Michael Paquier wrote:\n>> I'm not sure I like it. First, it does break the \"Note\" ordering as compare\n>> to the current documentation. That's not a big deal though.\n>>\n>> Secondly, what If we need to add some note(s) in the future for\n>> another wait class? Having all the notes after all the tables are\n>> generated would sound weird to me.\n> \n> Appending these notes at the end of all the tables does not strike me\n> as a big dea, TBH. But, well, my sole opinion is not the final choice\n> either. For now, I am mostly tempted to keep the generation script as\n> minimalistic as possible.\n> \n\nI agree that's not a big deal and I'm not against having these notes at the end\nof all the tables.\n\n>> We could discuss another approach for the \"Note\" part if there is a\n>> need to add one for an existing/new wait class though.\n> \n\nmeans, that was more a NIT comment from my side.\n\n> Documenting what's expected from the wait event classes is critical in\n> the .txt file as that's what developers are going to look at when\n> adding a new wait event. Adding them in the header is less appealing\n> to me considering that is it now generated, and the docs provide a lot\n> of explanation as well.\n> \n\nYour argument that the header is now generated makes me change my mind: I\nknow think that having the comments in the .txt file is enough.\n\n>>> This has as extra consequence to require a change in\n>>> wait_event.h so as PG_WAIT_BUFFER_PIN is renamed to PG_WAIT_BUFFERPIN,\n>>> equally fine by me. Logically, this rename should be done in a patch\n>>> of its own, for clarity.\n>>\n>> Yes, I can look at it.\n>> [...]\n>> Agree, I'll look at this.\n> \n> Thanks!\n\nPlease find the dedicated patch proposal in [1].\n\n[1]: https://www.postgresql.org/message-id/c6f35117-4b20-4c78-1df5-d3056010dcf5%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 15 May 2023 10:18:42 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 5/6/23 4:37 AM, Michael Paquier wrote:\n> On Sat, May 06, 2023 at 11:23:17AM +0900, Michael Paquier wrote:\n>>> I'll look at v7 when the v17 branch opens and propose the separate patch\n>>> mentioned above at that time too.\n>>\n>> Thanks, again.\n> \n> By the way, while browsing through the patch, I have noticed two\n> things:\n> - The ordering of the items for Lock and LWLock is incorrect.\n\nOh right, fixed in V8 attached (moved the sort on the third column\ninstead of the second which has always the same content \"WAIT_EVENT_DOCONLY\"\nfor Lock and LWLock).\n\n> - We are missing some of the LWLock entries, like CommitTsBuffer,\n> XactBuffer or WALInsert, as of all the entries in\n> BuiltinTrancheNames.\n> \n\nYeah, my bad. Fixed in V8 attached.\n\n> My apologies for not noticing that earlier. This exists in v6 as much\n> as v7.\n\nNo problem at all and thanks for the call out!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 15 May 2023 18:45:23 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, May 15, 2023 at 06:45:23PM +0200, Drouvot, Bertrand wrote:\n> On 5/6/23 4:37 AM, Michael Paquier wrote:\n>> On Sat, May 06, 2023 at 11:23:17AM +0900, Michael Paquier wrote:\n>>>> I'll look at v7 when the v17 branch opens and propose the separate patch\n>>>> mentioned above at that time too.\n>>> \n>>> Thanks, again.\n>> \n>> By the way, while browsing through the patch, I have noticed two\n>> things:\n>> - The ordering of the items for Lock and LWLock is incorrect.\n> \n> Oh right, fixed in V8 attached (moved the sort on the third column\n> instead of the second which has always the same content \"WAIT_EVENT_DOCONLY\"\n> for Lock and LWLock).\n\nAh, I didn't notice that. Makes sense.\n\n>> - We are missing some of the LWLock entries, like CommitTsBuffer,\n>> XactBuffer or WALInsert, as of all the entries in\n>> BuiltinTrancheNames.\n> \n> Yeah, my bad. Fixed in V8 attached.\n\nBufFileTruncate and BufFileWrite have an incorrect order in HEAD's\nmonitoring.sgml (will fix in a minute for 16~). So your patch fixes\nthat.\n\nPgStatsDSA and PgStatsData are reversed in your patch compared to\nHEAD, actually, based on the way perl sees fit to do its ordering by\ngiving priority to upper-case characters. Same for RelCacheInit and\nRelationMapping, or even SInvalRead/SInvalWrite being now before the\n\"Serial\" family. Worse, the tables LWLock and Lock are in an\nincorrect order as well with the patch. We'd better be a bit more\nverbose with the sort step, I think.. perl does not handle well\nsorting with collations from what I recall, but we could use uc() with\na block sort to force the operation to be case-insensitive, like \"sort\n{uc($a) cmp uc($b)}\". That needs to be applied here, I guess: \n+# Sort the lines based on the third column\n+my @lines_sorted =\n+ sort { (split(/\\t/, $a))[2] cmp(split(/\\t/, $b))[2] } @lines;\n\nAnd it looks like you'd need to apply uc() on each [2] element. I\nwould add a comment about this detail, as well.\n\nNo entries are missing, after comparing what's generated by the patch\nwith the contents of HEAD.\n\nSmall nit-ish question: waiteventnames.sgml or wait_event_types.sgml?\nSame for generate-waiteventtypes.pl?\n\n>> My apologies for not noticing that earlier. This exists in v6 as much\n>> as v7.\n> \n> No problem at all and thanks for the call out!\n\nFWIW, I would have posted two patches, one with the refactoring of\ndone in [1], and a second that switches to the automation, to make\nclear the preparatory step.\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael",
"msg_date": "Tue, 16 May 2023 16:48:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
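For readers following the uc() suggestion above, here is a minimal self-contained Perl sketch of a case-insensitive block sort keyed on the third tab-separated column. The sample rows are invented for illustration and are not taken from wait_event_names.txt; only the sort expression mirrors what is being proposed.

use strict;
use warnings;

# Invented sample rows: class, enum marker, wait event name.
my @lines = (
    "LWLock\tWAIT_EVENT_DOCONLY\tPgStatsDSA",
    "LWLock\tWAIT_EVENT_DOCONLY\tPgStatsData",
    "LWLock\tWAIT_EVENT_DOCONLY\tPgStatsHash",
);

# Sort on the third column; uc() on both sides makes the comparison
# case-insensitive, so upper-case letters no longer sort first.
my @lines_sorted =
  sort { uc((split(/\t/, $a))[2]) cmp uc((split(/\t/, $b))[2]) } @lines;

print "$_\n" foreach @lines_sorted;

With a plain cmp, PgStatsDSA would come out before PgStatsData; with the uc() wrappers the output order is PgStatsData, PgStatsDSA, PgStatsHash, which is the ordering discussed above for the documentation tables.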
{
"msg_contents": "Hi,\n\nOn 5/16/23 9:48 AM, Michael Paquier wrote:\n> On Mon, May 15, 2023 at 06:45:23PM +0200, Drouvot, Bertrand wrote:\n>> On 5/6/23 4:37 AM, Michael Paquier wrote:\n>>> On Sat, May 06, 2023 at 11:23:17AM +0900, Michael Paquier wrote:\n\n> \n> BufFileTruncate and BufFileWrite have an incorrect order in HEAD's\n> monitoring.sgml (will fix in a minute for 16~). So your patch fixes\n> that.\n\nOh nice catch!\n\n> \n> PgStatsDSA and PgStatsData are reversed in your patch compared to\n> HEAD, actually, based on the way perl sees fit to do its ordering by\n> giving priority to upper-case characters. Same for RelCacheInit and\n> RelationMapping, or even SInvalRead/SInvalWrite being now before the\n> \"Serial\" family. Worse, the tables LWLock and Lock are in an\n> incorrect order as well with the patch. We'd better be a bit more\n> verbose with the sort step, I think.. perl does not handle well\n> sorting with collations from what I recall, but we could use uc() with\n> a block sort to force the operation to be case-insensitive, like \"sort\n> {uc($a) cmp uc($b)}\". That needs to be applied here, I guess:\n> +# Sort the lines based on the third column\n> +my @lines_sorted =\n> + sort { (split(/\\t/, $a))[2] cmp(split(/\\t/, $b))[2] } @lines;\n> \n\nOh right, nice catch.\n\n> And it looks like you'd need to apply uc() on each [2] element. I\n> would add a comment about this detail, as well.\n> \n\nDid it that way in V9 attached and the sorting does look like what we expect now.\n\n> No entries are missing, after comparing what's generated by the patch\n> with the contents of HEAD.\n> \n> Small nit-ish question: waiteventnames.sgml or wait_event_types.sgml?\n> Same for generate-waiteventtypes.pl?\n> \n\nAgree, it's more consistent. Done that way in V9.\n\n> \n> FWIW, I would have posted two patches, one with the refactoring of\n> done in [1], and a second that switches to the automation, to make\n> clear the preparatory step.\n> \n> [1]: https://www.postgresql.org/message-id/[email protected]\n> --\n\nAgree, V9 does now apply on top of v2-0001-Introducing-WAIT_EVENT_EXTENSION-and-WAIT_EVENT_B.patch\n(just shared in [1]).\n\n[1]: https://www.postgresql.org/message-id/a82c2660-64b4-1c59-3eef-bf82b86fb99a%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 17 May 2023 08:31:53 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Wed, May 17, 2023 at 08:31:53AM +0200, Drouvot, Bertrand wrote:\n> Did it that way in V9 attached and the sorting does look like what\n> we expect now.\n\nYes, the order of the items in the individual tables is fine, but this\nis still a bit incorrect for the classes? Note that the tables for\nthe LWLock and Lock are still in reverse order :)\n\n+foreach $waitclass (sort keys %hashwe)\n\nMeaning that you may want to add an extra case-insensitive rule for\nthe sorting on this line for the SGML docs (also the C part, I guess,\nbut we care less).\n\n> Agree, V9 does now apply on top of v2-0001-Introducing-WAIT_EVENT_EXTENSION-and-WAIT_EVENT_B.patch\n> (just shared in [1]).\n\nIf you don't send both patches in the same message the CF bot is going\nto complain as v9-0001 is not able to apply independently of the other\npatch v2-0001 on the other thread (you could do a git apply -2 -v2,\nfor example).\n--\nMichael",
"msg_date": "Wed, 17 May 2023 17:14:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 5/17/23 10:14 AM, Michael Paquier wrote:\n> On Wed, May 17, 2023 at 08:31:53AM +0200, Drouvot, Bertrand wrote:\n>> Did it that way in V9 attached and the sorting does look like what\n>> we expect now.\n> \n> Yes, the order of the items in the individual tables is fine, but this\n> is still a bit incorrect for the classes? Note that the tables for\n> the LWLock and Lock are still in reverse order :)\n\nSorry did not pay enough attention to it ;-(\n\n> +foreach $waitclass (sort keys %hashwe)\n> \n> Meaning that you may want to add an extra case-insensitive rule for\n> the sorting on this line for the SGML docs (also the C part, I guess,\n> but we care less).\n\nYeap, done in V10 for sgml and the C part too (for consistency).\n\n>> Agree, V9 does now apply on top of v2-0001-Introducing-WAIT_EVENT_EXTENSION-and-WAIT_EVENT_B.patch\n>> (just shared in [1]).\n> \n> If you don't send both patches in the same message the CF bot is going\n> to complain as v9-0001 is not able to apply independently of the other\n> patch v2-0001 on the other thread \n\nYeah, good point, attaching both here to keep the CF bot happy.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 17 May 2023 11:10:21 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Wed, May 17, 2023 at 11:10:21AM +0200, Drouvot, Bertrand wrote:\n> Sorry did not pay enough attention to it ;-(\n\nNo problem.\n\n> Yeap, done in V10 for sgml and the C part too (for consistency).\n\nThe order looks fine seen from here, thanks!\n--\nMichael",
"msg_date": "Thu, 18 May 2023 13:36:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Thu, May 18, 2023 at 01:36:30PM +0900, Michael Paquier wrote:\n> The order looks fine seen from here, thanks!\n\nNow that v17 is open for business, I have looked again at this patch.\n\nperlcritic is formulating three complaints:\n./src/backend/utils/activity/generate-waiteventtypes.pl: Loop iterator\nis not lexical at line 99, column 1. See page 108 of PBP.\n([Variables::RequireLexicalLoopIterators] Severity: 5)\n./src/backend/utils/activity/generate-waiteventtypes.pl: Loop iterator\nis not lexical at line 126, column 1. See page 108 of PBP.\n([Variables::RequireLexicalLoopIterators] Severity: 5)\n./src/backend/utils/activity/generate-waiteventtypes.pl: Loop iterator\nis not lexical at line 181, column 1. See page 108 of PBP.\n([Variables::RequireLexicalLoopIterators] Severity: 5)\n\nThese are caused by three foreach loops, where perl wants to use a\nlocal declaration for the iterators.\n\nThe indentation was a bit off, as well, perltidy v20230309 has\nreported a few diffs. Not a big deal.\n\nsrc/common/meson.build includes the following comment:\n# For the server build of pgcommon, depend on lwlocknames_h, because at least\n# cryptohash_openssl.c, hmac_openssl.c depend on it. That's arguably a\n# layering violation, but ...\n\nThe thing is that controldata_utils.c has a dependency to wait events\nso we should add wait_event_types_h to 'sources'.\n\nThe names chosen, as of wait_event_types.h, pgstat_wait_event.c,\nwaiteventnames.txt and generate-wait_event_types.pl are inconsistent,\ncomparing them for instance with the lwlock parts. I have renamed\nthese a bit, with more underscores.\n\nBuilding the documentation in a meson/ninja build can be done with the\nfollowing command run from the root of the build directory:\nninja alldocs\n\nHowever this command fails with v10. The step that fails is:\n[6/14] Generating doc/src/sgml/postgres-full.xml with a custom command\n\nIt seems to me that the correct thing to do is to add --outdir\n@OUTDIR@ to the command? However, I do see a few problems even after\nthat:\n- The C and H files are still generated in doc/src/sgml/, which is\nuseless.\n- The SGML file wait_event_types.sgml in doc/src/sgml/ seems to be\nempty, still to my surprise the HTML part was created correctly.\n- The SGML file is not needed for the C code.\n\nI think that we should add some options to the perl script to be more\nselective with the files generated. How about having two options\ncalled --docs and --code to select one or the other, then limit what\ngets generated in each path? I guess that it would be cleaner if we\nerror in the case where both options are defined, and just use some\ngotos to redirect to each foreach loop to limit extra indentations in\nthe script. This would avoid the need to remove the C and H files\nfrom the docs, additionally, which is what the Makefile in doc/ does.\n\nI have fixed all the issues I've found in v11 attached, except for the\nlast one (I have added the OUTDIR trick for reference, but that's\nincorrect and incomplete). Could you look at this part?\n--\nMichael",
"msg_date": "Mon, 3 Jul 2023 15:57:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
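Purely as an illustration of the --docs/--code idea sketched in the message above, here is a small Perl example of mutually exclusive switches built with Getopt::Long. The option names follow the suggestion; the print statements merely stand in for the real documentation and code generation steps, so this is an assumption about the shape of the change, not the actual script.

use strict;
use warnings;
use Getopt::Long;

my $gen_docs = 0;
my $gen_code = 0;

GetOptions(
    'docs' => \$gen_docs,
    'code' => \$gen_code) or die "wrong arguments";

# Exactly one of the two modes has to be selected.
die "use either --docs or --code, not both" if $gen_docs && $gen_code;
die "one of --docs or --code is required"   if !$gen_docs && !$gen_code;

print "would generate the SGML documentation\n" if $gen_docs;
print "would generate the C and header files\n" if $gen_code;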
{
"msg_contents": "On Mon, Jul 03, 2023 at 03:57:42PM +0900, Michael Paquier wrote:\n> I think that we should add some options to the perl script to be more\n> selective with the files generated. How about having two options\n> called --docs and --code to select one or the other, then limit what\n> gets generated in each path? I guess that it would be cleaner if we\n> error in the case where both options are defined, and just use some\n> gotos to redirect to each foreach loop to limit extra indentations in\n> the script. This would avoid the need to remove the C and H files\n> from the docs, additionally, which is what the Makefile in doc/ does.\n> \n> I have fixed all the issues I've found in v11 attached, except for the\n> last one (I have added the OUTDIR trick for reference, but that's\n> incorrect and incomplete). Could you look at this part?\n\nAh. It took me a few extra minutes, but I think that we should set\n\"capture\" to \"false\", no? It looks like meson is getting confused,\nexpecting something in stdout but the new script generates a few\nfiles, and does not output anything. That's different from the other\ndoc-related perl scripts.\n--\nMichael",
"msg_date": "Mon, 3 Jul 2023 16:11:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 7/3/23 9:11 AM, Michael Paquier wrote:\n> On Mon, Jul 03, 2023 at 03:57:42PM +0900, Michael Paquier wrote:\n\nThanks for looking at it and having fixed the issues that were present in\nv10.\n\n>> I think that we should add some options to the perl script to be more\n>> selective with the files generated. How about having two options\n>> called --docs and --code to select one or the other, then limit what\n>> gets generated in each path? I guess that it would be cleaner if we\n>> error in the case where both options are defined, and just use some\n>> gotos to redirect to each foreach loop to limit extra indentations in\n>> the script. This would avoid the need to remove the C and H files\n>> from the docs, additionally, which is what the Makefile in doc/ does.\n>>\n>> I have fixed all the issues I've found in v11 attached, except for the\n>> last one (I have added the OUTDIR trick for reference, but that's\n>> incorrect and incomplete). Could you look at this part?\n> \n> Ah. It took me a few extra minutes, but I think that we should set\n> \"capture\" to \"false\", no? It looks like meson is getting confused,\n> expecting something in stdout but the new script generates a few\n> files, and does not output anything. That's different from the other\n> doc-related perl scripts.\n> --\n\nYeah, with \"capture\" set to \"false\" then ninja alldocs does not error out\nand wait_event_types.sgml gets generated.\n\nI'll look at the extra options --code and --docs.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 4 Jul 2023 09:34:33 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 7/4/23 9:34 AM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 7/3/23 9:11 AM, Michael Paquier wrote:\n>> On Mon, Jul 03, 2023 at 03:57:42PM +0900, Michael Paquier wrote:\n> \n> Thanks for looking at it and having fixed the issues that were present in\n> v10.\n> \n>>> I think that we should add some options to the perl script to be more\n>>> selective with the files generated. How about having two options\n>>> called --docs and --code to select one or the other, then limit what\n>>> gets generated in each path? I guess that it would be cleaner if we\n>>> error in the case where both options are defined, and just use some\n>>> gotos to redirect to each foreach loop to limit extra indentations in\n>>> the script. This would avoid the need to remove the C and H files\n>>> from the docs, additionally, which is what the Makefile in doc/ does.\n>>>\n>>> I have fixed all the issues I've found in v11 attached, except for the\n>>> last one (I have added the OUTDIR trick for reference, but that's\n>>> incorrect and incomplete). Could you look at this part?\n>>\n>> Ah. It took me a few extra minutes, but I think that we should set\n>> \"capture\" to \"false\", no? It looks like meson is getting confused,\n>> expecting something in stdout but the new script generates a few\n>> files, and does not output anything. That's different from the other\n>> doc-related perl scripts.\n>> -- \n> \n> Yeah, with \"capture\" set to \"false\" then ninja alldocs does not error out\n> and wait_event_types.sgml gets generated.\n> \n> I'll look at the extra options --code and --docs.\n\nPlease find attached v12 that:\n\n- sets \"capture\" to \"false\"\n- adds the --code and --docs extra options to generate-wait_event_types.pl\n- makes sure at least one of those option is provided\n- makes sure that both options can't be provided simultaneously\n- update the related Makefile/meson.build files accordingly\n- fix a bug in generate-wait_event_types.pl (die on rename($stmp...) was not\nusing the right file (it was using ctmp). That was not visible before the docs/code\nsplit.\n\nNot related to this patch but I noticed that when building with meson some c files\nare duplicated a the end of the build.\n\nIndeed, they also appear in some include directories:\n\n./meson_build/src/include/storage/lwlocknames.c\n./meson_build/src/include/utils/pgstat_wait_event.c\n./meson_build/src/include/utils/fmgrtab.c\n./meson_build/src/include/nodes/queryjumblefuncs.funcs.c\n./meson_build/src/include/nodes/readfuncs.switch.c\n./meson_build/src/include/nodes/readfuncs.funcs.c\n./meson_build/src/include/nodes/copyfuncs.switch.c\n./meson_build/src/include/nodes/equalfuncs.funcs.c\n./meson_build/src/include/nodes/outfuncs.switch.c\n./meson_build/src/include/nodes/queryjumblefuncs.switch.c\n./meson_build/src/include/nodes/copyfuncs.funcs.c\n./meson_build/src/include/nodes/equalfuncs.switch.c\n./meson_build/src/include/nodes/outfuncs.funcs.c\n\nIs it expected? If not, I guess it's worth another patch.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 4 Jul 2023 12:04:50 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Tue, Jul 04, 2023 at 09:34:33AM +0200, Drouvot, Bertrand wrote:\n> Yeah, with \"capture\" set to \"false\" then ninja alldocs does not error out\n> and wait_event_types.sgml gets generated.\n> \n> I'll look at the extra options --code and --docs.\n\n+wait_event_types.sgml: $(top_srcdir)/src/backend/utils/activity/wait_event_names.txt $(top_srcdir)/src/backend/utils/activity/generate-wait_event_types.pl\n+ $(PERL) $(top_srcdir)/src/backend/utils/activity/generate-wait_event_types.pl --docs $< > $@\n\nThis is doing the same error as meson in v10, there is no need for\nthe last part doing the redirection because the script outputs\nnothing. Here is the command generated:\nmake -C doc/src/sgml/ wait_event_types.sgml\n'/usr/bin/perl'\n../../../src/backend/utils/activity/generate-wait_event_types.pl\n--docs ../../../src/backend/utils/activity/wait_event_names.txt >\nwait_event_types.sgml\n\n+wait_event_names = custom_target('wait_event_names',\n+ input: files('../../backend/utils/activity/wait_event_names.txt'),\n+ output: ['wait_event_types.h'],\nThis one was not completely correct (look at fmgrtab, for example), as\nit is missing pgstat_wait_event.c in the output generated. We could\nperhaps be more selective with all that, including fmgrtab, but I have\nleft that out for now. Note also the tweak with install_dir to not\ninstall the C file.\n\n+wait_event_names = custom_target('wait_event_names',\n+ input: files('./wait_event_names.txt'),\n+ output: ['pgstat_wait_event.c'],\n+ command: [\n+ perl, files('./generate-wait_event_types.pl'),\n+ '-o', '@OUTDIR@', '--code',\n+ '@INPUT@'\n+ ],\n+ install: true,\n+ install_dir: [false],\n+)\n[...]\n+# these include .c files generated in ../../../include/activity, seems nicer to not\n+# add that as an include path for the whole backend\n+waitevent_sources = files(\n 'wait_event.c',\n )\n+\n+backend_link_with += static_library('wait_event_names',\n+ waitevent_sources,\n+ dependencies: [backend_code],\n+ include_directories: include_directories('../../../include/utils'),\n+ kwargs: internal_lib_args,\n+)\n\n\"wait_event_names\" with the extra command should not be necessary\nhere, because we feed from the C file generated in src/include/utils/,\nincluded in wait_event.c. See src/backend/nodes/meson.build for a\nsimilar example\n\nTwo of the error messages after rename() in the script were\ninconsistent. So reworded these on the way.\n\nI have added a usage() to the script, while on it.\n\nThe VPATH build was broken, because the following line was missing\nfrom src/backend/utils/activity/Makefile to be able to detect\npgstat_wait_event.c from wait_event.c:\noverride CPPFLAGS := -I. -I$(srcdir) $(CPPFLAGS)\n\nWith all that in place, VPATH builds, the CI, meson, configure/make\nand the various cleanup targets were working fine, so I have applied\nit. Now let's see what the buildfarm tells.\n\nThe final --stat number is like that:\n 22 files changed, 757 insertions(+), 2111 deletions(-)\n--\nMichael",
"msg_date": "Wed, 5 Jul 2023 10:57:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-05 10:57:19 +0900, Michael Paquier wrote:\n> With all that in place, VPATH builds, the CI, meson, configure/make\n> and the various cleanup targets were working fine, so I have applied\n> it. Now let's see what the buildfarm tells.\n>\n> The final --stat number is like that:\n> 22 files changed, 757 insertions(+), 2111 deletions(-)\n\nThat's pretty nice!\n\nRebasing a patch over this I was a bit confused because I got a bunch of\n\"\"unable to parse wait_event_names.txt\" errors. Took me a while to figure out\nthat that was just because I didn't include a trailing . in the description.\nPerhaps that could be turned into a more meaningful error?\n\n\tdie \"unable to parse wait_event_names.txt\"\n\t unless $line =~ /^(\\w+)\\t+(\\w+)\\t+(\"\\w+\")\\t+(\"\\w.*\\.\")$/;\n\nIt's not helped by the fact that the regex in the error actually doesn't match\nany lines, because it's not operating on the input but on\n\tpush(@lines, $section_name . \"\\t\" . $_);\n\n\nI also do wonder if we should invest in generating the lwlock names as\nwell. Except for a few abbreviations, the second column is always the\ncamel-cased version of what follows WAIT_EVENT_. Feels pretty tedious to write\nthat out.\n\nPerhaps we should just change the case of the upper-cased names (DSM, SSL,\nWAL, ...) to follow the other names?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Jul 2023 14:59:39 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
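To make the suggested improvement concrete, here is a minimal Perl sketch that reports the line number and the offending line itself when the parse fails; the regular expression has the same shape as the one quoted above, and the file name is only illustrative.

use strict;
use warnings;

my $input = 'wait_event_names.txt';
open my $fh, '<', $input or die "could not open $input: $!";

while (defined(my $line = readline($fh)))
{
    chomp $line;
    next if $line =~ /^#/ or $line =~ /^\s*$/;

    # On a mismatch, show which line could not be parsed instead of
    # only saying that parsing failed.
    die "unable to parse $input at line $.: \"$line\"\n"
      unless $line =~ /^(\w+)\t+(\w+)\t+("\w+")\t+("\w.*\.")$/;
}
close $fh;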
{
"msg_contents": "On Wed, Jul 05, 2023 at 02:59:39PM -0700, Andres Freund wrote:\n> Rebasing a patch over this I was a bit confused because I got a bunch of\n> \"\"unable to parse wait_event_names.txt\" errors. Took me a while to figure out\n> that that was just because I didn't include a trailing . in the description.\n> Perhaps that could be turned into a more meaningful error?\n> \n> \tdie \"unable to parse wait_event_names.txt\"\n> \t unless $line =~ /^(\\w+)\\t+(\\w+)\\t+(\"\\w+\")\\t+(\"\\w.*\\.\")$/;\n\nAgreed that we could at least add the $line in the error message\ngenerated, at least, to help with debugging.\n\n> I also do wonder if we should invest in generating the lwlock names as\n> well. Except for a few abbreviations, the second column is always the\n> camel-cased version of what follows WAIT_EVENT_. Feels pretty tedious to write\n> that out.\n\nAnd you mean getting rid of lwlocknames.txt? The impact on dtrace or\nother similar tools is uncertain to me because we have free number on\nthis list, and stuff like GetLWLockIdentifier() rely on the input ID\nfrom lwlocknames.txt.\n\n> Perhaps we should just change the case of the upper-cased names (DSM, SSL,\n> WAL, ...) to follow the other names?\n\nSo you mean renaming the existing events like WalSenderWaitForWAL to\nWalSenderWaitForWal? The impact on existing monitoring queries is not\nzero because any changes would be silent, and that's the part that\nworried me the most even if it can remove one column in the txt file.\n--\nMichael",
"msg_date": "Thu, 6 Jul 2023 09:36:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-06 09:36:12 +0900, Michael Paquier wrote:\n> On Wed, Jul 05, 2023 at 02:59:39PM -0700, Andres Freund wrote:\n> > Rebasing a patch over this I was a bit confused because I got a bunch of\n> > \"\"unable to parse wait_event_names.txt\" errors. Took me a while to figure out\n> > that that was just because I didn't include a trailing . in the description.\n> > Perhaps that could be turned into a more meaningful error?\n> > \n> > \tdie \"unable to parse wait_event_names.txt\"\n> > \t unless $line =~ /^(\\w+)\\t+(\\w+)\\t+(\"\\w+\")\\t+(\"\\w.*\\.\")$/;\n> \n> Agreed that we could at least add the $line in the error message\n> generated, at least, to help with debugging.\n> \n> > I also do wonder if we should invest in generating the lwlock names as\n> > well. Except for a few abbreviations, the second column is always the\n> > camel-cased version of what follows WAIT_EVENT_. Feels pretty tedious to write\n> > that out.\n> \n> And you mean getting rid of lwlocknames.txt?\n\nNo, I meant the second column in wait_event_names.txt. If you look at stuff\nlike:\nWAIT_EVENT_ARCHIVER_MAIN\t\"ArchiverMain\"\t\"Waiting in main loop of archiver process.\"\n\nIt'd be pretty trivial to generate ArchiverMain from ARCHIVER_MAIN.\n\n\n\n> The impact on dtrace or other similar tools is uncertain to me because we\n> have free number on this list, and stuff like GetLWLockIdentifier() rely on\n> the input ID from lwlocknames.txt.\n\nI don't really care, tbh. If we wanted to keep the names the same in case of\nabbreviations, we could just make the name optional, and auto-generated if not\nexplicitly specified.\n\n\n\n> > Perhaps we should just change the case of the upper-cased names (DSM, SSL,\n> > WAL, ...) to follow the other names?\n> \n> So you mean renaming the existing events like WalSenderWaitForWAL to\n> WalSenderWaitForWal?\n\nYes.\n\n\n> The impact on existing monitoring queries is not zero because any changes\n> would be silent, and that's the part that worried me the most even if it can\n> remove one column in the txt file.\n\nThen let's just use - or so to indicate the inferred name, with a \"string\"\noverriding it?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Jul 2023 18:19:43 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Thu, Jul 06, 2023 at 06:19:43PM -0700, Andres Freund wrote:\n> On 2023-07-06 09:36:12 +0900, Michael Paquier wrote:\n>> So you mean renaming the existing events like WalSenderWaitForWAL to\n>> WalSenderWaitForWal?\n> \n> Yes.\n>\n>> The impact on existing monitoring queries is not zero because any changes\n>> would be silent, and that's the part that worried me the most even if it can\n>> remove one column in the txt file.\n> \n> Then let's just use - or so to indicate the inferred name, with a \"string\"\n> overriding it?\n\nHmm. If we go down this road I would make the choice of simplicity\nand remove entirely a column, then, generating the snakecase from the\ncamelcase or vice-versa (say like a $string =~ s/([a-z]+)/$1_/g;),\neven if it means having slightly incompatible strings showing to the\nusers. And I'd rather minimize the number of exceptions we need to\nhandle in this automation (aka no exception rules for some keywords\nlike \"SSL\" or \"WAL\", etc.).\n--\nMichael",
"msg_date": "Fri, 7 Jul 2023 13:49:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Fri, Jul 07, 2023 at 01:49:24PM +0900, Michael Paquier wrote:\n> Hmm. If we go down this road I would make the choice of simplicity\n> and remove entirely a column, then, generating the snakecase from the\n> camelcase or vice-versa (say like a $string =~ s/([a-z]+)/$1_/g;),\n> even if it means having slightly incompatible strings showing to the\n> users. And I'd rather minimize the number of exceptions we need to\n> handle in this automation (aka no exception rules for some keywords\n> like \"SSL\" or \"WAL\", etc.).\n\nAfter pondering more about that, the attached patch set does exactly\nthat. Patch 0001 includes an update of the wait event names so as\nthese are more consistent with the enum elements generated. With this\nchange, users can apply lower() or upper() across monitoring queries\nand still get the same results as before. An exception was the\nmessage queue events, which the enums used \"MQ\" but the event names\nused \"MessageQueue\", but this concerns only four lines of code in the\nbackend. The newly-generated enum elements match with the existing\nones, except for MQ.\n\nPatch 0002 introduces a set of simplifications for the format of\nwait_event_names.txt:\n- Removal of the first column for the enums.\n- Removal of the quotes for the event name. We have a single keyword\nfor these, so that's kind of annoying to cope with that for new\nentries.\n- Build of the enum elements using the event names, by applying a\nrebuild as simple as that:\n+ $waiteventenumname =~ s/([a-z])([A-Z])/$1_$2/g;\n+ $waiteventenumname = uc($waiteventenumname);\n\nThoughts?\n--\nMichael",
"msg_date": "Sun, 9 Jul 2023 13:32:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
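As a worked example of the substitution quoted for 0002, the following self-contained Perl snippet turns camel-case wait event names into the corresponding enum elements; the two input names are just samples, and exceptions such as the MQ events mentioned above are not handled here.

use strict;
use warnings;

foreach my $waiteventname ('ArchiverMain', 'WalSenderWaitForWal')
{
    my $waiteventenumname = $waiteventname;

    # Insert an underscore at each lower-to-upper case transition,
    # then upper-case the whole string and add the prefix.
    $waiteventenumname =~ s/([a-z])([A-Z])/$1_$2/g;
    $waiteventenumname = 'WAIT_EVENT_' . uc($waiteventenumname);

    print "$waiteventname -> $waiteventenumname\n";
}

This prints WAIT_EVENT_ARCHIVER_MAIN and WAIT_EVENT_WAL_SENDER_WAIT_FOR_WAL, which is the kind of name-to-enum consistency that the renaming in 0001 aims for.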
{
"msg_contents": "Hi,\n\nOn 7/9/23 6:32 AM, Michael Paquier wrote:\n> On Fri, Jul 07, 2023 at 01:49:24PM +0900, Michael Paquier wrote:\n>> Hmm. If we go down this road I would make the choice of simplicity\n>> and remove entirely a column, then, generating the snakecase from the\n>> camelcase or vice-versa (say like a $string =~ s/([a-z]+)/$1_/g;),\n>> even if it means having slightly incompatible strings showing to the\n>> users. And I'd rather minimize the number of exceptions we need to\n>> handle in this automation (aka no exception rules for some keywords\n>> like \"SSL\" or \"WAL\", etc.).\n> \n> After pondering more about that, the attached patch set does exactly\n> that.\n\nThanks!\n\n> Patch 0001 includes an update of the wait event names so as\n> these are more consistent with the enum elements generated. With this\n> change, users can apply lower() or upper() across monitoring queries\n> and still get the same results as before. An exception was the\n> message queue events, which the enums used \"MQ\" but the event names\n> used \"MessageQueue\", but this concerns only four lines of code in the\n> backend. The newly-generated enum elements match with the existing\n> ones, except for MQ.\n\n> \n> Patch 0002 introduces a set of simplifications for the format of\n> wait_event_names.txt:\n> - Removal of the first column for the enums.\n> - Removal of the quotes for the event name. We have a single keyword\n> for these, so that's kind of annoying to cope with that for new\n> entries.\n> - Build of the enum elements using the event names, by applying a\n> rebuild as simple as that:\n> + $waiteventenumname =~ s/([a-z])([A-Z])/$1_$2/g;\n> + $waiteventenumname = uc($waiteventenumname);\n> \n> Thoughts?\n\nThat's great and it does simplify the wait_event_names.txt format (and the\nimpact on \"MQ\" does not seem like a big deal).\n\nI also noticed that you now provide the culprit line in case of parsing\nfailure (thanks for that).\n\n #\n-# \"C symbol in enums\" \"format in the system views\" \"description in the docs\"\n+# \"format in the system views\" \"description in the docs\"\n\nShould we add a note here about the impact of the \"format in the system views\" on\nthe auto generated enum? (aka how it is generated based on its format)?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 9 Jul 2023 09:15:34 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Sun, Jul 09, 2023 at 09:15:34AM +0200, Drouvot, Bertrand wrote:\n> I also noticed that you now provide the culprit line in case of parsing\n> failure (thanks for that).\n\nYes, that's mentioned in the commit message I quickly wrote in 0002.\n\n> #\n> -# \"C symbol in enums\" \"format in the system views\" \"description in the docs\"\n> +# \"format in the system views\" \"description in the docs\"\n> \n> Should we add a note here about the impact of the \"format in the system views\" on\n> the auto generated enum? (aka how it is generated based on its format)?\n\nThere is one, but now that I look at it WAIT_EVENT repeated twice does\nnot look great, so this could use \"FooBarName\" or equivalent:\n+ # Generate the element name for the enums based on the\n+ # description. Camelcase strings like \"WaitEventName\"\n+ # are converted to WAIT_EVENT_WAIT_EVENT_NAME.\n--\nMichael",
"msg_date": "Sun, 9 Jul 2023 16:36:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 7/9/23 9:36 AM, Michael Paquier wrote:\n> On Sun, Jul 09, 2023 at 09:15:34AM +0200, Drouvot, Bertrand wrote:\n>> I also noticed that you now provide the culprit line in case of parsing\n>> failure (thanks for that).\n> \n> Yes, that's mentioned in the commit message I quickly wrote in 0002.\n> \n>> #\n>> -# \"C symbol in enums\" \"format in the system views\" \"description in the docs\"\n>> +# \"format in the system views\" \"description in the docs\"\n>>\n>> Should we add a note here about the impact of the \"format in the system views\" on\n>> the auto generated enum? (aka how it is generated based on its format)?\n> \n> There is one, \n\nYeah there is one in generate-wait_event_types.pl. I was wondering\nto add one in wait_event_names.txt too (as this is the place where\nno wait events would be added if any).\n\n> but now that I look at it WAIT_EVENT repeated twice does\n> not look great, so this could use \"FooBarName\" or equivalent:\n\n+1 for \"FooBarName\"\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Jul 2023 07:05:30 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 07:05:30AM +0200, Drouvot, Bertrand wrote:\n> Yeah there is one in generate-wait_event_types.pl. I was wondering\n> to add one in wait_event_names.txt too (as this is the place where\n> no wait events would be added if any).\n\nHmm. Something like that could be done, for instance:\n\n # src/backend/utils/activity/wait_event_types.h\n-# typedef enum definitions for wait events.\n+# typedef enum definitions for wait events, generated from the first\n+# field.\n--\nMichael",
"msg_date": "Mon, 10 Jul 2023 14:20:40 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "\n\nOn 7/10/23 7:20 AM, Michael Paquier wrote:\n> On Mon, Jul 10, 2023 at 07:05:30AM +0200, Drouvot, Bertrand wrote:\n>> Yeah there is one in generate-wait_event_types.pl. I was wondering\n>> to add one in wait_event_names.txt too (as this is the place where\n>> no wait events would be added if any).\n> \n> Hmm. Something like that could be done, for instance:\n> \n> # src/backend/utils/activity/wait_event_types.h\n> -# typedef enum definitions for wait events.\n> +# typedef enum definitions for wait events, generated from the first\n> +# field.\n\nYeah, it looks a good place for it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Jul 2023 07:52:23 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On 2023-Jul-09, Michael Paquier wrote:\n\n> Patch 0002 introduces a set of simplifications for the format of\n> wait_event_names.txt:\n> - Removal of the first column for the enums.\n\nI don't like this bit, because it means the .txt file is now ungreppable\nas source of the enum name. Things become mysterious and people have to\ntrack down the event name by reading the the Perl generating script.\nIt's annoying. I'd rather have the extra column, even if it means a\nlittle duplicity.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)\n\n\n",
"msg_date": "Mon, 10 Jul 2023 09:11:36 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 09:11:36AM +0200, Alvaro Herrera wrote:\n> I don't like this bit, because it means the .txt file is now ungreppable\n> as source of the enum name. Things become mysterious and people have to\n> track down the event name by reading the the Perl generating script.\n> It's annoying. I'd rather have the extra column, even if it means a\n> little duplicity.\n\nHmm. I can see your point that we'd lose the direct relationship\nbetween the enum and string when running a single `git grep` from the\ntree, still attempting to do that does not actually lead to much\ninformation gained? Personally, I usually grep for code when looking\nfor consistent information across various paths in the tree. Wait\nevents are very different: each enum is used in a single place in the\ntree making their grep search the equivalent of looking at\nwait_event_names.txt anyway?\n\nThe quotes in the second columns can be removed even with your\nargument in place. That improves a bit the format.\n--\nMichael",
"msg_date": "Tue, 11 Jul 2023 07:52:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "\n\nOn 7/11/23 12:52 AM, Michael Paquier wrote:\n> On Mon, Jul 10, 2023 at 09:11:36AM +0200, Alvaro Herrera wrote:\n>> I don't like this bit, because it means the .txt file is now ungreppable\n>> as source of the enum name. Things become mysterious and people have to\n>> track down the event name by reading the the Perl generating script.\n>> It's annoying. I'd rather have the extra column, even if it means a\n>> little duplicity.\n> \n> Hmm. I can see your point that we'd lose the direct relationship\n> between the enum and string when running a single `git grep` from the\n> tree, still attempting to do that does not actually lead to much\n> information gained? Personally, I usually grep for code when looking\n> for consistent information across various paths in the tree. Wait\n> events are very different: each enum is used in a single place in the\n> tree making their grep search the equivalent of looking at\n> wait_event_names.txt anyway?\n> \n\nBefore commit fa88928470 one could find the relationship between the enum and the name\nin wait_event.c (a simple git grep would provide it).\n\nWith commit fa88928470 in place, one could find the relationship between the enum and the name\nin wait_event_names.txt (a simple git grep would provide it).\n\nWith the proposal we are discussing here, once the build is done and so the pgstat_wait_event.c\nfile is generated then we have the same \"grep\" capability than pre commit fa88928470 (except that\n\"git grep\" can't be used and one would need things like\nfind . -name \"*.c\" -exec grep -il \"WAIT_EVENT_CHECKPOINTER_MAIN\" {} \\;)\n\nI agree that it is less \"obvious\" than pre fa88928470 but still doable though.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 11 Jul 2023 06:54:23 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 07:52:23AM +0200, Drouvot, Bertrand wrote:\n> On 7/10/23 7:20 AM, Michael Paquier wrote:\n>> Hmm. Something like that could be done, for instance:\n>> \n>> # src/backend/utils/activity/wait_event_types.h\n>> -# typedef enum definitions for wait events.\n>> +# typedef enum definitions for wait events, generated from the first\n>> +# field.\n> \n> Yeah, it looks a good place for it.\n\nI am not sure where we are on that based on the objection from Alvaro\nto not remove the first column in wait_event_names.txt about\ngreppability. Anyway, I am not seeing any objections behind my\nsuggestion to simplify the second column and remove the quotes from\nthe event names, either. Besides, the suggestion of Andres to improve\nthe error message on parsing and show the line information is\nsomething useful in itself.\n\nHence, attached is a rebased patch set that separates the work into\nmore patches:\n- 0001 removes the quotes from the second column, improving the\nreadability of the .txt file.\n- 0002 fixes the report from Andres to improve the error message on\nparsing.\n- 0003 is the rename of the wait events, in preparation for...\n- 0004 that removes entirely the first column (enum element names)\nfrom wait_event_names.txt.\n\nI would like to apply 0001 and 0002 to improve the format if there are\nno objections. 0003 and 0004 are still here for discussion, as it\ndoes not seem like a consensus has been reached for that yet. Getting\nmore opinions would be a good next step for the last two patches, I\nassume.\n\nSo, any comments?\n--\nMichael",
"msg_date": "Thu, 13 Jul 2023 10:26:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 10:26:54AM +0900, Michael Paquier wrote:\n> I would like to apply 0001 and 0002 to improve the format if there are\n> no objections. 0003 and 0004 are still here for discussion, as it\n> does not seem like a consensus has been reached for that yet. Getting\n> more opinions would be a good next step for the last two patches, I\n> assume.\n\nI have looked again at 0001 and 0002 and applied them to get them out\nof the way. 0003 and 0004 are rebased and attached. I'll add them to\nthe CF for later consideration. More opinions are welcome.\n--\nMichael",
"msg_date": "Fri, 14 Jul 2023 13:49:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-14 13:49:22 +0900, Michael Paquier wrote:\n> I have looked again at 0001 and 0002 and applied them to get them out\n> of the way. 0003 and 0004 are rebased and attached. I'll add them to\n> the CF for later consideration. More opinions are welcome.\n\n> From b6390183bdcc054df82279bb0b2991730f85a0a3 Mon Sep 17 00:00:00 2001\n> From: Michael Paquier <[email protected]>\n> Date: Thu, 13 Jul 2023 10:14:47 +0900\n> Subject: [PATCH v3 4/4] Remove column for enum elements in\n> wait_event_names.txt\n> \n> This file is now made of two columns, removing the column listing the\n> enum elements for each wait event class:\n> - Camelcase event name used in pg_stat_activity. There are now\n> unquoted.\n> - Description of the documentation.\n\n> The enum elements are generated from what is now the first column.\n\nI think the search issue is valid, so I do think going the other way is\npreferrable. I.e. use just the enum value in the .txt and generate the camel\ncase name from that. That allows you to search the define used in code and\nfind a hit in the file.\n\nI personally would still leave off the WAIT_EVENT prefix in the .txt, I think\nmost of us can remember to chop that off.\n\nI don't think we need to be particularly consistent with wait events across\nmajor versions. They're necessarily tied to how the code works, and we've\nyanked that around plenty.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 16 Jul 2023 12:21:20 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Sun, Jul 16, 2023 at 12:21:20PM -0700, Andres Freund wrote:\n> I think the search issue is valid, so I do think going the other way is\n> preferrable. I.e. use just the enum value in the .txt and generate the camel\n> case name from that. That allows you to search the define used in code and\n> find a hit in the file.\n> \n> I personally would still leave off the WAIT_EVENT prefix in the .txt, I think\n> most of us can remember to chop that off.\n\nSo you mean to switch a line that now looks like that:\nWAIT_EVENT_FOO_BAR FooBar \"Waiting on Foo Bar.\"\nTo that:\nFOO_BAR \"Waiting on Foo Bar.\"\nOr even that:\nWAIT_EVENT_FOO_BAR \"Waiting on Foo Bar.\"\n\nSure, it is an improvement for any wait events that use WAIT_EVENT_\nwhen searching them, but this adds more magic into the LWLock and Lock\nareas if the same conversion is applied there. Or am I right to\nassume that you'd mean to *not* do any of that for these two classes?\nThese can be treated as exceptions in the script when generating the\nwait event names from the enum elements, of course.\n\n> I don't think we need to be particularly consistent with wait events across\n> major versions. They're necessarily tied to how the code works, and we've\n> yanked that around plenty.\n\nIMO, it depends on the code path involved. For example, I know of\nsome code that relies on SyncRep to track backends waiting on a sync\nreply, and that's one sensible to keep compatible. I'd be sad if\nsomething like that breaks suddenly after a major release.\n--\nMichael",
"msg_date": "Mon, 17 Jul 2023 10:16:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 10:16:02AM +0900, Michael Paquier wrote:\n> So you mean to switch a line that now looks like that:\n> WAIT_EVENT_FOO_BAR FooBar \"Waiting on Foo Bar.\"\n> To that:\n> FOO_BAR \"Waiting on Foo Bar.\"\n> Or even that:\n> WAIT_EVENT_FOO_BAR \"Waiting on Foo Bar.\"\n> \n> Sure, it is an improvement for any wait events that use WAIT_EVENT_\n> when searching them, but this adds more magic into the LWLock and Lock\n> areas if the same conversion is applied there. Or am I right to\n> assume that you'd mean to *not* do any of that for these two classes?\n> These can be treated as exceptions in the script when generating the\n> wait event names from the enum elements, of course.\n\nI have looked again at that, and switching wait_event_names.txt to use\ntwo columns made of the typedef definitions and the docs like is not a\nproblem:\nFOO_BAR \"Waiting on Foo Bar.\"\n\nWAIT_EVENT_ is appended to the typedef definitions in the script. The\nwait event names like \"FooBar\" are generated from the enums by\nsplitting using their underscores and doing some lc(). Lock and\nLWLock don't need to change. This way, it is easy to grep the wait\nevents from the source code and match them with wait_event_names.txt.\n\nThoughts or comments?\n--\nMichael",
"msg_date": "Mon, 28 Aug 2023 17:04:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
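A hedged sketch of the reverse direction described just above, building the name shown in pg_stat_activity from the first column of wait_event_names.txt by splitting on underscores and lower-casing everything but the first letter of each word; this is only one way it could look, and the committed script may differ.

use strict;
use warnings;

foreach my $enumpart ('FOO_BAR', 'ARCHIVER_MAIN', 'WAL_SENDER_WAIT_FOR_WAL')
{
    # FOO_BAR becomes FooBar, ARCHIVER_MAIN becomes ArchiverMain, etc.
    my $waiteventname = join('', map { ucfirst(lc($_)) } split(/_/, $enumpart));

    print "WAIT_EVENT_$enumpart -> $waiteventname\n";
}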
{
"msg_contents": "Hi,\n\nOn 8/28/23 10:04 AM, Michael Paquier wrote:\n> On Mon, Jul 17, 2023 at 10:16:02AM +0900, Michael Paquier wrote:\n>> So you mean to switch a line that now looks like that:\n>> WAIT_EVENT_FOO_BAR FooBar \"Waiting on Foo Bar.\"\n>> To that:\n>> FOO_BAR \"Waiting on Foo Bar.\"\n>> Or even that:\n>> WAIT_EVENT_FOO_BAR \"Waiting on Foo Bar.\"\n>>\n>> Sure, it is an improvement for any wait events that use WAIT_EVENT_\n>> when searching them, but this adds more magic into the LWLock and Lock\n>> areas if the same conversion is applied there. Or am I right to\n>> assume that you'd mean to *not* do any of that for these two classes?\n>> These can be treated as exceptions in the script when generating the\n>> wait event names from the enum elements, of course.\n> \n> I have looked again at that, and switching wait_event_names.txt to use\n> two columns made of the typedef definitions and the docs like is not a\n> problem:\n> FOO_BAR \"Waiting on Foo Bar.\"\n> \n> WAIT_EVENT_ is appended to the typedef definitions in the script. The\n> wait event names like \"FooBar\" are generated from the enums by\n> splitting using their underscores and doing some lc(). Lock and\n> LWLock don't need to change. This way, it is easy to grep the wait\n> events from the source code and match them with wait_event_names.txt.\n> \n> Thoughts or comments?\n\nAgree that done that way one could easily grep the events from the source code and\nmatch them with wait_event_names.txt. Then I don't think the \"search\" issue in the code\nis still a concern with the current proposal.\n\nFWIW, I'm getting:\n\n$ git am v3-000*\nApplying: Rename wait events with more consistent camelcase style\nApplying: Remove column for wait event names in wait_event_names.txt\nerror: patch failed: src/backend/utils/activity/wait_event_names.txt:261\nerror: src/backend/utils/activity/wait_event_names.txt: patch does not apply\nPatch failed at 0002 Remove column for wait event names in wait_event_names.txt\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 29 Aug 2023 08:17:10 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 08:17:10AM +0200, Drouvot, Bertrand wrote:\n> Agree that done that way one could easily grep the events from the\n> source code and match them with wait_event_names.txt. Then I don't\n> think the \"search\" issue in the code is still a concern with the\n> current proposal.\n\nIt could still be able to append WAIT_EVENT_ to the first column of\nthe file. I'd just rather keep it shorter.\n\n> FWIW, I'm getting:\n> \n> $ git am v3-000*\n> Applying: Rename wait events with more consistent camelcase style\n> Applying: Remove column for wait event names in wait_event_names.txt\n> error: patch failed: src/backend/utils/activity/wait_event_names.txt:261\n> error: src/backend/utils/activity/wait_event_names.txt: patch does not apply\n> Patch failed at 0002 Remove column for wait event names in wait_event_names.txt\n\nThat may be a bug in the matrix because of bb90022, as git am can be\neasily pissed. I am attaching a new patch series, but it does not\nseem to matter here.\n\nI have double-checked the docs generated, while on it, and I am not\nseeing anything missing, particularly for the LWLock and Lock parts..\n--\nMichael",
"msg_date": "Tue, 29 Aug 2023 15:41:49 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On 2023-Aug-29, Michael Paquier wrote:\n\n> On Tue, Aug 29, 2023 at 08:17:10AM +0200, Drouvot, Bertrand wrote:\n> > Agree that done that way one could easily grep the events from the\n> > source code and match them with wait_event_names.txt. Then I don't\n> > think the \"search\" issue in the code is still a concern with the\n> > current proposal.\n> \n> It could still be able to append WAIT_EVENT_ to the first column of\n> the file. I'd just rather keep it shorter.\n\nYeah, I have a mild preference for keeping the prefix, but it's mild\nbecause I also imagine that if somebody doesn't see the full symbol name\nwhen grepping they will think to remove the prefix. So only -0.1.\n\nI think the DOCONLY stuff should be better documented; they make no\nsense without looking at the commit message for fa88928470b5.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 29 Aug 2023 14:21:48 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 02:21:48PM +0200, Alvaro Herrera wrote:\n> Yeah, I have a mild preference for keeping the prefix, but it's mild\n> because I also imagine that if somebody doesn't see the full symbol name\n> when grepping they will think to remove the prefix. So only -0.1.\n\nSo, are you fine with the patch as presented? Or are there other\nthings you'd like to see changed in the format?\n\n> I think the DOCONLY stuff should be better documented; they make no\n> sense without looking at the commit message for fa88928470b5.\n\nGood point. However, with 0002 in place these are gone.\n--\nMichael",
"msg_date": "Wed, 30 Aug 2023 07:55:49 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 8/29/23 8:41 AM, Michael Paquier wrote:\n> On Tue, Aug 29, 2023 at 08:17:10AM +0200, Drouvot, Bertrand wrote:\n> That may be a bug in the matrix because of bb90022, as git am can be\n> easily pissed. \n\ngit am does not complain anymore.\n\n\n+ # Generate the element name for the enums based on the\n+ # description. The C symbols are prefixed with \"WAIT_EVENT_\".\n\nNit: 2 whitespaces before \"The C\"\n\n # Build the descriptions. There are in camel-case.\n # LWLock and Lock classes do not need any modifications.\n\nNit: 2 whitespaces before \"There are in camel\"\n\n+ my $waiteventdescription = '';\n+ if ( $waitclassname eq 'WaitEventLWLock'\n\nNit: Too many whitespace after the \"if (\" ?? (I guess pgperltidy would\nfix it).\n\n> I have double-checked the docs generated, while on it, and I am not\n> seeing anything missing, particularly for the LWLock and Lock parts..\n\nI did compare the output of select * from pg_wait_events order by 1,2 and\nignored the case (with and without the patch series).\n\nThen, the only diff is:\n\n< Client,WalSenderWaitWal,Waiting for WAL to be flushed in WAL sender process\n---\n> Client,WalSenderWaitForWAL,Waiting for WAL to be flushed in WAL sender process\n\nThat said, it looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 4 Sep 2023 14:14:58 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, Sep 04, 2023 at 02:14:58PM +0200, Drouvot, Bertrand wrote:\n> # Build the descriptions. There are in camel-case.\n> # LWLock and Lock classes do not need any modifications.\n> \n> Nit: 2 whitespaces before \"There are in camel\"\n\nThe whitespaces are intentional, the typo in the first line is not.\n\n> + my $waiteventdescription = '';\n> + if ( $waitclassname eq 'WaitEventLWLock'\n> \n> Nit: Too many whitespace after the \"if (\" ?? (I guess pgperltidy would\n> fix it).\n\nHere, perltidy is indeed complaining, but it is adding a few\nwhitespaces.\n\n> Then, the only diff is:\n> \n> < Client,WalSenderWaitWal,Waiting for WAL to be flushed in WAL sender process\n> ---\n> > Client,WalSenderWaitForWAL,Waiting for WAL to be flushed in WAL sender process\n> \n> That said, it looks good to me.\n\nAh, good catch. I did not think about cross-checking the data in the\nnew view before and after the patch set. This rename needs to happen\nin 0001.\n\nPlease find v5 attached. How does that look?\n--\nMichael",
"msg_date": "Tue, 5 Sep 2023 14:44:50 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 9/5/23 7:44 AM, Michael Paquier wrote:\n> On Mon, Sep 04, 2023 at 02:14:58PM +0200, Drouvot, Bertrand wrote:\n>> # Build the descriptions. There are in camel-case.\n>> # LWLock and Lock classes do not need any modifications.\n>>\n>> Nit: 2 whitespaces before \"There are in camel\"\n> \n> The whitespaces are intentional, \n\nOh ok, out of curiosity, why are 2 whitespaces intentional?\n\n>> Then, the only diff is:\n>>\n>> < Client,WalSenderWaitWal,Waiting for WAL to be flushed in WAL sender process\n>> ---\n>>> Client,WalSenderWaitForWAL,Waiting for WAL to be flushed in WAL sender process\n>>\n>> That said, it looks good to me.\n> \n> Ah, good catch. I did not think about cross-checking the data in the\n> new view before and after the patch set. This rename needs to happen\n> in 0001.\n> \n> Please find v5 attached. How does that look?\n\nThanks!\n\nThat looks good. I just noticed that v5 did re-introduce the \"issue\" that\nwas fixed in 00e49233a9.\n\nAlso, v5 needs a rebase due to f691f5b80a.\n\nAttaching v6 taking care of the 2 points mentioned above.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 5 Sep 2023 11:06:36 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Tue, Sep 05, 2023 at 11:06:36AM +0200, Drouvot, Bertrand wrote:\n> Oh ok, out of curiosity, why are 2 whitespaces intentional?\n\nThat depends on the individual who write the code, but I recall that\nthis is some old-school style from the 70's and/or the 80's when\ntyping machines were still something. I'm just used to this style\nafter the end of a sentence in a comment.\n\n> That looks good. I just noticed that v5 did re-introduce the \"issue\" that\n> was fixed in 00e49233a9.\n> \n> Also, v5 needs a rebase due to f691f5b80a.\n> \n> Attaching v6 taking care of the 2 points mentioned above.\n\nDammit, thanks. These successive rebases are a bit annoying.. The\ndata produced is consistent, and the new contents can be grepped, so I\nthink that I am just going to apply both patches and move on to other\ntopics.\n--\nMichael",
"msg_date": "Tue, 5 Sep 2023 20:50:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On 05/09/2023 13:50 CEST Michael Paquier <[email protected]> wrote:\n\n> On Tue, Sep 05, 2023 at 11:06:36AM +0200, Drouvot, Bertrand wrote:\n> > Oh ok, out of curiosity, why are 2 whitespaces intentional?\n>\n> That depends on the individual who write the code, but I recall that\n> this is some old-school style from the 70's and/or the 80's when\n> typing machines were still something. I'm just used to this style\n> after the end of a sentence in a comment.\n\nFYI: https://en.wikipedia.org/wiki/Sentence_spacing\n\n--\nErik\n\n\n",
"msg_date": "Wed, 6 Sep 2023 04:20:23 +0200 (CEST)",
"msg_from": "Erik Wienhold <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Tue, Sep 05, 2023 at 11:06:36AM +0200, Drouvot, Bertrand wrote:\n> Also, v5 needs a rebase due to f691f5b80a.\n> \n> Attaching v6 taking care of the 2 points mentioned above.\n\nThanks. This time I have correctly checked the consistency of the\ndata produced across all these commits using pg_wait_events, and\nthat's OK. So applied both.\n--\nMichael",
"msg_date": "Wed, 6 Sep 2023 12:42:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Wed, Sep 06, 2023 at 04:20:23AM +0200, Erik Wienhold wrote:\n> FYI: https://en.wikipedia.org/wiki/Sentence_spacing\n\nThat was an interesting read. Thanks.\n--\nMichael",
"msg_date": "Wed, 6 Sep 2023 12:44:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn 9/6/23 5:44 AM, Michael Paquier wrote:\n> On Wed, Sep 06, 2023 at 04:20:23AM +0200, Erik Wienhold wrote:\n>> FYI: https://en.wikipedia.org/wiki/Sentence_spacing\n> \n> That was an interesting read. Thanks.\n\n+1, thanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 6 Sep 2023 06:43:24 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Wed, Jul 05, 2023 at 10:57:19AM +0900, Michael Paquier wrote:\n> I have applied it.\n\nI like the new developer experience of adding a wait event. After release of\nv17, how should we approach back-patching an event, like was done in commits\n8fa4a1a 1396b5c 78c0f85? Each of those commits put the new event at the end\nof its released-branch wait_event.h enum. In v17,\ngenerate-wait_event_types.pl sorts events to position them. Adding an event\nwill renumber others, which can make an extension report the wrong event until\nrecompiled. Extensions citus, pg_bulkload, and vector refer to static events.\nIf a back-patch added WAIT_EVENT_MESSAGE_QUEUE_SOMETHING_NEW, an old-build\npg_bulkload report of WAIT_EVENT_PARALLEL_CREATE_INDEX_SCAN would show up in\npg_stat_activity as WAIT_EVENT_PARALLEL_BITMAP_SCAN. (WAIT_EVENT_EXTENSION is\nnot part of a generated enum, fortunately.) Some options:\n\n1. Don't back-patch wait events to v17+. Use the closest existing event.\n2. Let wait_event_names.txt back-patches control the enum order. For example,\n a line could have an annotation that controls its position relative to the\n auto-sorted lines. For another example, the generator could stop sorting.\n3. Accept the renumbering, because the consequence isn't that horrible.\n\nOption (3) is worse than (1), but I don't have a recommendation between (1)\nand (2). I tend to like (1), a concern being the ease of accidental\nviolations. If we had the ABI compliance checking that\nhttps://postgr.es/m/flat/CAH2-Wzk7tvgLXzOZ8a22aF-gmO5gHojWTYRvAk5ZgOvTrcEQeg@mail.gmail.com\nexplored, (1) would be plenty safe. Should anything change here, or not?\n\n\n",
"msg_date": "Sun, 17 Mar 2024 11:31:14 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Sun, Mar 17, 2024 at 11:31:14AM -0700, Noah Misch wrote:\n> I like the new developer experience of adding a wait event. After release of\n> v17, how should we approach back-patching an event, like was done in commits\n> 8fa4a1a 1396b5c 78c0f85? Each of those commits put the new event at the end\n> of its released-branch wait_event.h enum. In v17,\n> generate-wait_event_types.pl sorts events to position them.\n\nIndeed, that would be a bad idea.\n\n> Adding an event\n> will renumber others, which can make an extension report the wrong event until\n> recompiled. Extensions citus, pg_bulkload, and vector refer to static events.\n> If a back-patch added WAIT_EVENT_MESSAGE_QUEUE_SOMETHING_NEW, an old-build\n> pg_bulkload report of WAIT_EVENT_PARALLEL_CREATE_INDEX_SCAN would show up in\n> pg_stat_activity as WAIT_EVENT_PARALLEL_BITMAP_SCAN. (WAIT_EVENT_EXTENSION is\n> not part of a generated enum, fortunately.) Some options:\n> \n> 1. Don't back-patch wait events to v17+. Use the closest existing event.\n> 2. Let wait_event_names.txt back-patches control the enum order. For example,\n> a line could have an annotation that controls its position relative to the\n> auto-sorted lines. For another example, the generator could stop sorting.\n> 3. Accept the renumbering, because the consequence isn't that horrible.\n> \n> Option (3) is worse than (1), but I don't have a recommendation between (1)\n> and (2). I tend to like (1), a concern being the ease of accidental\n> violations. If we had the ABI compliance checking that\n> https://postgr.es/m/flat/CAH2-Wzk7tvgLXzOZ8a22aF-gmO5gHojWTYRvAk5ZgOvTrcEQeg@mail.gmail.com\n> explored, (1) would be plenty safe. Should anything change here, or not?\n\n(1) would be annoying, we have backpatched scaling problems in the\npast even if that does not happen often. And in some cases I can\nunderstand why one would want to add a new wait event to track that\na patch does what is expected of it. (2) to stop the automated\nsorting would bring back the problems that this thread has spent time\nto solve: people tend to not add wait events correctly, so I would\nsuspect issues on HEAD. I've seen that too many times on older\nbranches.\n\nI see an option (4), similar to your (2) without the per-line\nannotation: add a new magic keyword like the existing \"Section\" that\nis used in the first lines of generate-wait_event_types.pl where we\ngenerate tab-separated lines with the section name as prefix of each\nline. So I can think of something like:\nSection: ClassName - WaitEventFoo\nFOO_1\t\"Waiting in foo1\"\nFOO_2\t\"Waiting in foo2\"\nBackpatch:\nBAR_1\t\"Waiting in bar1\"\nBAR_2\t\"Waiting in bar2\"\n\nThen force the ordering for the docs and keep the elements in the\nbackpatch section at the end of the enums in the order in the txt.\nOne thing that could make sense is to enforce that \"Backpatch\" is at\nthe end of a section, meaning that we would need a second keyword like\na \"Section: EndBackpatch\" or something like that. That's not strictly\nnecessary IMO as the format of the txt is easy to follow.\n--\nMichael",
"msg_date": "Mon, 18 Mar 2024 08:02:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 08:02:24AM +0900, Michael Paquier wrote:\n> > 1. Don't back-patch wait events to v17+. Use the closest existing event.\n> > 2. Let wait_event_names.txt back-patches control the enum order. For example,\n> > a line could have an annotation that controls its position relative to the\n> > auto-sorted lines. For another example, the generator could stop sorting.\n> > 3. Accept the renumbering, because the consequence isn't that horrible.\n\n> I see an option (4), similar to your (2) without the per-line\n> annotation: add a new magic keyword like the existing \"Section\" that\n> is used in the first lines of generate-wait_event_types.pl where we\n> generate tab-separated lines with the section name as prefix of each\n> line. So I can think of something like:\n> Section: ClassName - WaitEventFoo\n> FOO_1\t\"Waiting in foo1\"\n> FOO_2\t\"Waiting in foo2\"\n> Backpatch:\n> BAR_1\t\"Waiting in bar1\"\n> BAR_2\t\"Waiting in bar2\"\n> \n> Then force the ordering for the docs and keep the elements in the\n> backpatch section at the end of the enums in the order in the txt.\n> One thing that could make sense is to enforce that \"Backpatch\" is at\n> the end of a section, meaning that we would need a second keyword like\n> a \"Section: EndBackpatch\" or something like that. That's not strictly\n> necessary IMO as the format of the txt is easy to follow.\n\nWorks for me, with or without the trailing keyword line.\n\n\n",
"msg_date": "Sun, 17 Mar 2024 18:31:30 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn Mon, Mar 18, 2024 at 08:02:24AM +0900, Michael Paquier wrote:\n> On Sun, Mar 17, 2024 at 11:31:14AM -0700, Noah Misch wrote:\n> > Adding an event\n> > will renumber others, which can make an extension report the wrong event until\n> > recompiled. Extensions citus, pg_bulkload, and vector refer to static events.\n> > If a back-patch added WAIT_EVENT_MESSAGE_QUEUE_SOMETHING_NEW, an old-build\n> > pg_bulkload report of WAIT_EVENT_PARALLEL_CREATE_INDEX_SCAN would show up in\n> > pg_stat_activity as WAIT_EVENT_PARALLEL_BITMAP_SCAN. (WAIT_EVENT_EXTENSION is\n> > not part of a generated enum, fortunately.)\n\nNice catch, thanks!\n\n> I see an option (4), similar to your (2) without the per-line\n> annotation: add a new magic keyword like the existing \"Section\" that\n> is used in the first lines of generate-wait_event_types.pl where we\n> generate tab-separated lines with the section name as prefix of each\n> line. So I can think of something like:\n> Section: ClassName - WaitEventFoo\n> FOO_1\t\"Waiting in foo1\"\n> FOO_2\t\"Waiting in foo2\"\n> Backpatch:\n> BAR_1\t\"Waiting in bar1\"\n> BAR_2\t\"Waiting in bar2\"\n> \n> Then force the ordering for the docs and keep the elements in the\n> backpatch section at the end of the enums in the order in the txt.\n\nYeah I think that's a good idea.\n\n> One thing that could make sense is to enforce that \"Backpatch\" is at\n> the end of a section, meaning that we would need a second keyword like\n> a \"Section: EndBackpatch\" or something like that. That's not strictly\n> necessary IMO as the format of the txt is easy to follow.\n\nI gave it a try in the POC patch attached. I did not use a \"EndBackpatch\"\nsection to keep the perl script as simple a possible though (but documented the\nexpectation instead).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Mar 2024 09:04:44 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On 2023-Aug-28, Michael Paquier wrote:\n\n> I have looked again at that, and switching wait_event_names.txt to use\n> two columns made of the typedef definitions and the docs like is not a\n> problem:\n> FOO_BAR \"Waiting on Foo Bar.\"\n> \n> WAIT_EVENT_ is appended to the typedef definitions in the script. The\n> wait event names like \"FooBar\" are generated from the enums by\n> splitting using their underscores and doing some lc(). Lock and\n> LWLock don't need to change. This way, it is easy to grep the wait\n> events from the source code and match them with wait_event_names.txt.\n\nFTR I had a rather unpleasant time last week upon finding a wait event\nnamed PgSleep. If you grep for that, there are no matches at all; and I\nspent ten minutes (for real) trying to figure out where that was coming\nfrom, until I remembered this thread.\n\nNow you have to guess that not only random lowercasing is happening, but\nalso underscore removal. This is not a good developer experience and I\nthink we should rethink this choice. It would be infinitely more\nusable, and not one bit more difficult, to make these lines be\n\nWAIT_EVENT_FOO_BAR\tFooBar\t\"Waiting on Foo Bar.\"\n\nthen there is no guessing involved.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The important things in the world are problems with society that we don't\nunderstand at all. The machines will become more complicated but they won't\nbe more complicated than the societies that run them.\" (Freeman Dyson)\n\n\n",
"msg_date": "Mon, 18 Mar 2024 10:24:00 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 09:04:44AM +0000, Bertrand Drouvot wrote:\n> --- a/src/backend/utils/activity/wait_event_names.txt\n> +++ b/src/backend/utils/activity/wait_event_names.txt\n> @@ -24,7 +24,12 @@\n> # SGML tables of wait events for inclusion in the documentation.\n> #\n> # When adding a new wait event, make sure it is placed in the appropriate\n> -# ClassName section.\n> +# ClassName section. If the wait event is backpatched to a version < 17 then\n> +# put it under a \"Backpatch\" delimiter at the end of the related ClassName\n> +# section.\n\nBack-patch from v17 to pre-v17 won't use this, because v16 has hand-maintained\nenums. It's back-patch v18->v17 or v22->v17 where this will come up.\n\n> +# Ensure that the wait events under the \"Backpatch\" delimiter are placed in the\n> +# same order as in the pre 17 wait_event_types.h (see VERSION_FILE_SYNC as an\n> +# example).\n\nI expect the normal practice will be to put the entry in its natural position\nin git master, then put it in the backpatch section for any other branch. In\nother words, the backpatch regions are always empty in git master, and the\nnon-backpatch regions change in master only.\n\n\n",
"msg_date": "Mon, 18 Mar 2024 08:49:34 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn Mon, Mar 18, 2024 at 08:49:34AM -0700, Noah Misch wrote:\n> On Mon, Mar 18, 2024 at 09:04:44AM +0000, Bertrand Drouvot wrote:\n> > --- a/src/backend/utils/activity/wait_event_names.txt\n> > +++ b/src/backend/utils/activity/wait_event_names.txt\n> > @@ -24,7 +24,12 @@\n> > # SGML tables of wait events for inclusion in the documentation.\n> > #\n> > # When adding a new wait event, make sure it is placed in the appropriate\n> > -# ClassName section.\n> > +# ClassName section. If the wait event is backpatched to a version < 17 then\n> > +# put it under a \"Backpatch\" delimiter at the end of the related ClassName\n> > +# section.\n> \n> Back-patch from v17 to pre-v17 won't use this, because v16 has hand-maintained\n> enums. It's back-patch v18->v17 or v22->v17 where this will come up.\n\nThanks for looking at it!\nOh right, the comment is wrong, re-worded in v2 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Mar 2024 17:57:02 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 08:49:34AM -0700, Noah Misch wrote:\n> On Mon, Mar 18, 2024 at 09:04:44AM +0000, Bertrand Drouvot wrote:\n>> +# Ensure that the wait events under the \"Backpatch\" delimiter are placed in the\n>> +# same order as in the pre 17 wait_event_types.h (see VERSION_FILE_SYNC as an\n>> +# example).\n> \n> I expect the normal practice will be to put the entry in its natural position\n> in git master, then put it in the backpatch section for any other branch. In\n> other words, the backpatch regions are always empty in git master, and the\n> non-backpatch regions change in master only.\n\nYes, I'd expect the same experience. And it is very important to\ndocument that properly in the txt file. I don't see a need to specify\nany version numbers as well, that's less burden when bumping the major\nversion number on the master branch every year.\n--\nMichael",
"msg_date": "Tue, 19 Mar 2024 08:39:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 10:24:00AM +0100, Alvaro Herrera wrote:\n> FTR I had a rather unpleasant time last week upon finding a wait event\n> named PgSleep. If you grep for that, there are no matches at all; and I\n> spent ten minutes (for real) trying to figure out where that was coming\n> from, until I remembered this thread.\n> \n> Now you have to guess that not only random lowercasing is happening, but\n> also underscore removal. This is not a good developer experience and I\n> think we should rethink this choice. It would be infinitely more\n> usable, and not one bit more difficult, to make these lines be\n> \n> WAIT_EVENT_FOO_BAR\tFooBar\t\"Waiting on Foo Bar.\"\n> \n> then there is no guessing involved.\n\nThis has already gone through a couple of adjustments in 59cbf60c0f2b\nand 183a60a628fe. The latter has led to the elimination of one column\nin the txt file, as a reply to the same kind of comments about the\nformat of this file:\nhttps://www.postgresql.org/message-id/20230705215939.ulnfbr4zavb2x7ri%40awork3.anarazel.de\n\nFWIW, I still like better what we have currently, where it is possible\nto grep the enum values in the source code.\n--\nMichael",
"msg_date": "Tue, 19 Mar 2024 08:59:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 05:57:02PM +0000, Bertrand Drouvot wrote:\n> Thanks for looking at it!\n> Oh right, the comment is wrong, re-worded in v2 attached.\n\nI've added a couple of fake events in my txt file, and this results in\nan ordering of the wait events in the docs while the backpatched wait\nevents are added at the end of the enums, based on their order in the\ntxt file.\n\n # When adding a new wait event, make sure it is placed in the appropriate\n-# ClassName section.\n+# ClassName section. If the wait event is backpatched from master to a version\n+# >= 17 then put it under a \"Backpatch:\" delimiter at the end of the related\n+# ClassName section (on the non master branches) or at its natural position on\n+# the master branch.\n+# Ensure that the backpatch regions are always empty on the master branch.\n\nI'd recommend to not mention a version number at all, as this would\nneed a manual refresh each time a new stable branch is forked.\n\nYour solution is simpler than what I finished in mind when looking at\nthe code yesterday, with the addition of a second array that's pushed\nto be at the end of the \"sorted\" lines ordered by the second column.\nThat does the job.\n\n(Note that I'll go silent for some time; I'll handle this thread when\nI get back as this is not urgent.)\n--\nMichael",
"msg_date": "Tue, 19 Mar 2024 09:59:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn Tue, Mar 19, 2024 at 09:59:35AM +0900, Michael Paquier wrote:\n> On Mon, Mar 18, 2024 at 05:57:02PM +0000, Bertrand Drouvot wrote:\n> > Thanks for looking at it!\n> > Oh right, the comment is wrong, re-worded in v2 attached.\n> \n> I've added a couple of fake events in my txt file, and this results in\n> an ordering of the wait events in the docs while the backpatched wait\n> events are added at the end of the enums, based on their order in the\n> txt file.\n\nThanks for testing!\n\n> # When adding a new wait event, make sure it is placed in the appropriate\n> -# ClassName section.\n> +# ClassName section. If the wait event is backpatched from master to a version\n> +# >= 17 then put it under a \"Backpatch:\" delimiter at the end of the related\n> +# ClassName section (on the non master branches) or at its natural position on\n> +# the master branch.\n> +# Ensure that the backpatch regions are always empty on the master branch.\n> \n> I'd recommend to not mention a version number at all, as this would\n> need a manual refresh each time a new stable branch is forked.\n\nI'm not sure as v2 used the \"version >= 17\" wording which I think would not need\nmanual refresh each time a new stable branch is forked.\n\nBut to avoid any doubt, I'm following your recommendation in v3 attached (then\nonly mentioning the \"master branch\" and \"any other branch\").\n\n> Your solution is simpler than what I finished in mind when looking at\n> the code yesterday, with the addition of a second array that's pushed\n> to be at the end of the \"sorted\" lines ordered by the second column.\n> That does the job.\n\nYeah.\n\n> (Note that I'll go silent for some time; I'll handle this thread when\n> I get back as this is not urgent.)\n\nRight and enjoy!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 19 Mar 2024 07:34:09 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 07:34:09AM +0000, Bertrand Drouvot wrote:\n> I'm not sure as v2 used the \"version >= 17\" wording which I think would not need\n> manual refresh each time a new stable branch is forked.\n> \n> But to avoid any doubt, I'm following your recommendation in v3 attached (then\n> only mentioning the \"master branch\" and \"any other branch\").\n\nI don't see why we could not be more generic, TBH. Note that the\nBackpatch region should be empty not only the master branch but also\non stable and unreleased branches (aka REL_XX_STABLE branches from\ntheir fork from master to their .0 release). I have reworded the\nwhole, mentioning ABI compatibility, as well.\n\nThe position of the Backpatch regions were a bit incorrect (extra one\nin LWLock, and the one in Lock was not needed).\n\nWe could be stricter with the order of the elements in\npgstat_wait_event.c and wait_event_funcs_data.c, but there's no\nconsequence feature-wise and I cannot get excited about the extra\ncomplexity this creates in generate-wait_event_types.pl between the\nenum generation and the rest.\n\nIs \"Backpatch\" the best choice we have, though? It speaks by itself\nbut I was thinking about something different, like \"Stable\". Other\nideas or objections are welcome. My naming sense is usually not that\ngood, so there's that.\n\n0001 is the patch with my tweaks. 0002 includes some dummy test data\nI've used to validate the whole.\n--\nMichael",
"msg_date": "Thu, 4 Apr 2024 15:50:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn Thu, Apr 04, 2024 at 03:50:21PM +0900, Michael Paquier wrote:\n> On Tue, Mar 19, 2024 at 07:34:09AM +0000, Bertrand Drouvot wrote:\n> > I'm not sure as v2 used the \"version >= 17\" wording which I think would not need\n> > manual refresh each time a new stable branch is forked.\n> > \n> > But to avoid any doubt, I'm following your recommendation in v3 attached (then\n> > only mentioning the \"master branch\" and \"any other branch\").\n> \n> I don't see why we could not be more generic, TBH. Note that the\n> Backpatch region should be empty not only the master branch but also\n> on stable and unreleased branches (aka REL_XX_STABLE branches from\n> their fork from master to their .0 release). I have reworded the\n> whole, mentioning ABI compatibility, as well.\n\nYeah, agree. I do prefer your wording.\n\n> The position of the Backpatch regions were a bit incorrect (extra one\n> in LWLock, and the one in Lock was not needed).\n\noops, thanks for the fixes!\n\n> We could be stricter with the order of the elements in\n> pgstat_wait_event.c and wait_event_funcs_data.c, but there's no\n> consequence feature-wise and I cannot get excited about the extra\n> complexity this creates in generate-wait_event_types.pl between the\n> enum generation and the rest.\n\nYeah, and I think generate-wait_event_types.pl is already complex enough.\nSo better to add only the strict necessary in it IMHO.\n\n> Is \"Backpatch\" the best choice we have, though? It speaks by itself\n> but I was thinking about something different, like \"Stable\". Other\n> ideas or objections are welcome. My naming sense is usually not that\n> good, so there's that.\n\nI think \"Stable\" is more confusing because the section should also be empty until\nthe .0 is released.\n\nThat said, what about \"ABI_compatibility\"? (that would also match the comment \nadded in wait_event_names.txt). Attached v4 making use of the ABI_compatibility\nproposal.\n\n> 0001 is the patch with my tweaks.\n\nThanks! \n\n+# No \"Backpatch\" region here as code is generated automatically.\n\nWhat about \"....region here as has its own C code\" (that would be more consistent\nwith the comment in the \"header\" for the file). Done that way in v4.\n\nIt looks like WAL_SENDER_WRITE_ZZZ was also added in it (I guess for testing\npurpose, so I removed it in v4).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 4 Apr 2024 09:28:36 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Thu, Apr 04, 2024 at 09:28:36AM +0000, Bertrand Drouvot wrote:\n> On Thu, Apr 04, 2024 at 03:50:21PM +0900, Michael Paquier wrote:\n>> Is \"Backpatch\" the best choice we have, though? It speaks by itself\n>> but I was thinking about something different, like \"Stable\". Other\n>> ideas or objections are welcome. My naming sense is usually not that\n>> good, so there's that.\n> \n> I think \"Stable\" is more confusing because the section should also be empty until\n> the .0 is released.\n\nOkay.\n\n> That said, what about \"ABI_compatibility\"? (that would also match the comment \n> added in wait_event_names.txt). Attached v4 making use of the ABI_compatibility\n> proposal.\n\nI'm OK with that. If somebody comes up wiht a better name than that,\nthis could always be changed again.\n\n> +# No \"Backpatch\" region here as code is generated automatically.\n> \n> What about \"....region here as has its own C code\" (that would be more consistent\n> with the comment in the \"header\" for the file). Done that way in v4.\n\nI'd add a \"as -this section- has its own C code\", for clarity. This\njust looked a bit strange here.\n\n> It looks like WAL_SENDER_WRITE_ZZZ was also added in it (I guess for testing\n> purpose, so I removed it in v4).\n\nThat's a good brain fade. Thanks.\n--\nMichael",
"msg_date": "Thu, 4 Apr 2024 19:14:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "Hi,\n\nOn Thu, Apr 04, 2024 at 07:14:47PM +0900, Michael Paquier wrote:\n> On Thu, Apr 04, 2024 at 09:28:36AM +0000, Bertrand Drouvot wrote:\n> > +# No \"Backpatch\" region here as code is generated automatically.\n> > \n> > What about \"....region here as has its own C code\" (that would be more consistent\n> > with the comment in the \"header\" for the file). Done that way in v4.\n> \n> I'd add a \"as -this section- has its own C code\", for clarity. This\n> just looked a bit strange here.\n\nSounds good, done that way in v5 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 4 Apr 2024 11:56:36 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
},
{
"msg_contents": "On Thu, Apr 04, 2024 at 11:56:36AM +0000, Bertrand Drouvot wrote:\n> On Thu, Apr 04, 2024 at 07:14:47PM +0900, Michael Paquier wrote:\n>> I'd add a \"as -this section- has its own C code\", for clarity. This\n>> just looked a bit strange here.\n> \n> Sounds good, done that way in v5 attached.\n\nIf there's a better suggestion than \"ABI_compatibility\" as keyword for\nthis part of the file, this could always be changed later. For now,\nI've applied what you have here after tweaking a bit more the\ncomments.\n--\nMichael",
"msg_date": "Fri, 5 Apr 2024 09:06:44 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autogenerate some wait events code and documentation"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nA while back we added support for completing time zone names after SET\nTIMEZONE, but we failed to do the same for the AT TIME ZONE operator.\nHere's a trivial patch for that.\n\n- ilmari",
"msg_date": "Wed, 29 Mar 2023 11:28:00 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tab completion for AT TIME ZONE"
},
{
"msg_contents": "st 29. 3. 2023 v 12:28 odesílatel Dagfinn Ilmari Mannsåker <\[email protected]> napsal:\n\n> Hi hackers,\n>\n> A while back we added support for completing time zone names after SET\n> TIMEZONE, but we failed to do the same for the AT TIME ZONE operator.\n> Here's a trivial patch for that.\n>\n\n+1\n\nPavel\n\n\n> - ilmari\n>\n>\n\nst 29. 3. 2023 v 12:28 odesílatel Dagfinn Ilmari Mannsåker <[email protected]> napsal:Hi hackers,\n\nA while back we added support for completing time zone names after SET\nTIMEZONE, but we failed to do the same for the AT TIME ZONE operator.\nHere's a trivial patch for that.+1Pavel\n\n- ilmari",
"msg_date": "Wed, 29 Mar 2023 14:41:25 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for AT TIME ZONE"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <[email protected]> writes:\n\n> Hi hackers,\n>\n> A while back we added support for completing time zone names after SET\n> TIMEZONE, but we failed to do the same for the AT TIME ZONE operator.\n> Here's a trivial patch for that.\n\nAdded to the 2023-07 commitfest:\n\nhttps://commitfest.postgresql.org/43/4274/\n\n- ilmari\n\n\n",
"msg_date": "Wed, 12 Apr 2023 18:53:15 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for AT TIME ZONE"
},
{
"msg_contents": "Hi,\n\nIs this supposed to provide tab completion for the AT TIME ZONE operator \nlike in this query?\n\nSELECT '2023-04-14 08:00:00' AT TIME ZONE 'Europe/Lisbon';\n\nThe patch applied cleanly but I'm afraid I cannot reproduce the intended \nbehaviour:\n\npostgres=# SELECT '2023-04-14 08:00:00' AT<tab>\n\npostgres=# SELECT '2023-04-14 08:00:00' AT T<tab>\n\npostgres=# SELECT '2023-04-14 08:00:00' AT TIME Z<tab>\n\nPerhaps I'm testing it in the wrong place?\n\nBest, Jim\n\nOn 12.04.23 19:53, Dagfinn Ilmari Mannsåker wrote:\n> Dagfinn Ilmari Mannsåker <[email protected]> writes:\n>\n>> Hi hackers,\n>>\n>> A while back we added support for completing time zone names after SET\n>> TIMEZONE, but we failed to do the same for the AT TIME ZONE operator.\n>> Here's a trivial patch for that.\n> Added to the 2023-07 commitfest:\n>\n> https://commitfest.postgresql.org/43/4274/\n>\n> - ilmari\n>\n>",
"msg_date": "Fri, 14 Apr 2023 09:42:05 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for AT TIME ZONE"
},
{
"msg_contents": "Hi Jim,\n\nThanks for having a look at my patch, but please don't top post on\nPostgreSQL lists.\n\nJim Jones <[email protected]> writes:\n\n> Hi,\n>\n> On 12.04.23 19:53, Dagfinn Ilmari Mannsåker wrote:\n>> Dagfinn Ilmari Mannsåker <[email protected]> writes:\n>>\n>>> Hi hackers,\n>>>\n>>> A while back we added support for completing time zone names after SET\n>>> TIMEZONE, but we failed to do the same for the AT TIME ZONE operator.\n>>> Here's a trivial patch for that.\n>>\n>\n> Is this supposed to provide tab completion for the AT TIME ZONE operator\n> like in this query?\n>\n> SELECT '2023-04-14 08:00:00' AT TIME ZONE 'Europe/Lisbon';\n>\n> The patch applied cleanly but I'm afraid I cannot reproduce the intended\n> behaviour:\n>\n> postgres=# SELECT '2023-04-14 08:00:00' AT<tab>\n>\n> postgres=# SELECT '2023-04-14 08:00:00' AT T<tab>\n>\n> postgres=# SELECT '2023-04-14 08:00:00' AT TIME Z<tab>\n>\n> Perhaps I'm testing it in the wrong place?\n\nIt doesn't tab complete the AT TIME ZONE operator itself, just the\ntimezone name after it, so this sholud work:\n\n # SELECT now() AT TIME ZONE <tab><tab>\n\nor\n\n # SELECT now() AT TIME ZONE am<tab>\n\n\nHowever, looking more closely at the grammar, the word AT only occurs in\nAT TIME ZONE, so we could complete the operator itself as well. Updated\npatch attatched.\n\n> Best, Jim\n\n- ilmari",
"msg_date": "Fri, 14 Apr 2023 10:29:49 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for AT TIME ZONE"
},
{
"msg_contents": "On 14.04.23 11:29, Dagfinn Ilmari Mannsåker wrote:\n> It doesn't tab complete the AT TIME ZONE operator itself, just the\n> timezone name after it, so this sholud work:\n>\n> # SELECT now() AT TIME ZONE <tab><tab>\n>\n> or\n>\n> # SELECT now() AT TIME ZONE am<tab>\n>\n>\n> However, looking more closely at the grammar, the word AT only occurs in\n> AT TIME ZONE, so we could complete the operator itself as well. Updated\n> patch attatched.\n>\n>> Best, Jim\n> - ilmari\n\nGot it.\n\nIn that case, everything seems to work just fine:\n\npostgres=# SELECT now() AT <tab>\n\n.. autocompletes TIME ZONE :\n\npostgres=# SELECT now() AT TIME ZONE\n\n\npostgres=# SELECT now() AT TIME ZONE <tab><tab>\nDisplay all 598 possibilities? (y or n)\n\n\npostgres=# SELECT now() AT TIME ZONE 'Europe/Is<tab><tab>\nEurope/Isle_of_Man Europe/Istanbul\n\n\nalso neglecting the opening single quotes ...\n\npostgres=# SELECT now() AT TIME ZONE Europe/Is<tab>\n\n... autocompletes it after <tab>:\n\npostgres=# SELECT now() AT TIME ZONE 'Europe/Is\n\n\nThe patch applies cleanly and it does what it is proposing. - and it's \nIMHO a very nice addition.\n\nI've marked the CF entry as \"Ready for Committer\".\n\nJim\n\n\n\n",
"msg_date": "Fri, 14 Apr 2023 12:05:25 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for AT TIME ZONE"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 12:05:25PM +0200, Jim Jones wrote:\n> The patch applies cleanly and it does what it is proposing. - and it's IMHO\n> a very nice addition.\n> \n> I've marked the CF entry as \"Ready for Committer\".\n\n+/* ... AT TIME ZONE ... */\n+\telse if (TailMatches(\"AT\"))\n+\t\tCOMPLETE_WITH(\"TIME ZONE\");\n+\telse if (TailMatches(\"AT\", \"TIME\"))\n+\t\tCOMPLETE_WITH(\"ZONE\");\n+\telse if (TailMatches(\"AT\", \"TIME\", \"ZONE\"))\n+\t\tCOMPLETE_WITH_TIMEZONE_NAME();\n\nThis style will for the completion of timezone values even if \"AT\" is\nthe first word of a query. Shouldn't this be more selective by making\nsure that we are at least in the context of a SELECT query?\n--\nMichael",
"msg_date": "Thu, 12 Oct 2023 15:49:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for AT TIME ZONE"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n\n> On Fri, Apr 14, 2023 at 12:05:25PM +0200, Jim Jones wrote:\n>> The patch applies cleanly and it does what it is proposing. - and it's IMHO\n>> a very nice addition.\n>> \n>> I've marked the CF entry as \"Ready for Committer\".\n>\n> +/* ... AT TIME ZONE ... */\n> +\telse if (TailMatches(\"AT\"))\n> +\t\tCOMPLETE_WITH(\"TIME ZONE\");\n> +\telse if (TailMatches(\"AT\", \"TIME\"))\n> +\t\tCOMPLETE_WITH(\"ZONE\");\n> +\telse if (TailMatches(\"AT\", \"TIME\", \"ZONE\"))\n> +\t\tCOMPLETE_WITH_TIMEZONE_NAME();\n>\n> This style will for the completion of timezone values even if \"AT\" is\n> the first word of a query. Shouldn't this be more selective by making\n> sure that we are at least in the context of a SELECT query?\n\nIt's valid anywhere an expression is, which is a lot more places than\njust SELECT queries. Off the top of my head I can think of WITH,\nINSERT, UPDATE, VALUES, CALL, CREATE TABLE, CREATE INDEX.\n\nAs I mentioned upthread, the only place in the grammar where the word AT\noccurs is in AT TIME ZONE, so there's no ambiguity. Also, it doesn't\ncomplete time zone names after AT, it completes the literal words TIME\nZONE, and you have to then hit tab again to get a list of time zones.\nIf we (or the SQL committee) were to invent more operators that start\nwith the word AT, we can add those to the first if clause above and\ncomplete with the appropriate values after each one separately.\n\n- ilmari\n\n\n",
"msg_date": "Thu, 12 Oct 2023 10:27:34 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for AT TIME ZONE"
},
{
"msg_contents": "On 10/12/23 10:27, Dagfinn Ilmari Mannsåker wrote:\n> Michael Paquier <[email protected]> writes:\n> \n>> On Fri, Apr 14, 2023 at 12:05:25PM +0200, Jim Jones wrote:\n>>> The patch applies cleanly and it does what it is proposing. - and it's IMHO\n>>> a very nice addition.\n>>>\n>>> I've marked the CF entry as \"Ready for Committer\".\n>>\n>> +/* ... AT TIME ZONE ... */\n>> +\telse if (TailMatches(\"AT\"))\n>> +\t\tCOMPLETE_WITH(\"TIME ZONE\");\n>> +\telse if (TailMatches(\"AT\", \"TIME\"))\n>> +\t\tCOMPLETE_WITH(\"ZONE\");\n>> +\telse if (TailMatches(\"AT\", \"TIME\", \"ZONE\"))\n>> +\t\tCOMPLETE_WITH_TIMEZONE_NAME();\n>>\n>> This style will for the completion of timezone values even if \"AT\" is\n>> the first word of a query. Shouldn't this be more selective by making\n>> sure that we are at least in the context of a SELECT query?\n> \n> It's valid anywhere an expression is, which is a lot more places than\n> just SELECT queries. Off the top of my head I can think of WITH,\n> INSERT, UPDATE, VALUES, CALL, CREATE TABLE, CREATE INDEX.\n> \n> As I mentioned upthread, the only place in the grammar where the word AT\n> occurs is in AT TIME ZONE, so there's no ambiguity. Also, it doesn't\n> complete time zone names after AT, it completes the literal words TIME\n> ZONE, and you have to then hit tab again to get a list of time zones.\n> If we (or the SQL committee) were to invent more operators that start\n> with the word AT, we can add those to the first if clause above and\n> complete with the appropriate values after each one separately.\n\nSpeaking of this...\n\nThe SQL committee already has another operator starting with AT which is \nAT LOCAL. I am implementing it in \nhttps://commitfest.postgresql.org/45/4343/ where I humbly admit that I \ndid not think of psql tab completion at all.\n\nThese two patches are co-dependent and whichever goes in first the other \nwill need to be adjusted accordingly.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 13 Oct 2023 03:07:25 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for AT TIME ZONE"
},
{
"msg_contents": "On Fri, Oct 13, 2023 at 03:07:25AM +0200, Vik Fearing wrote:\n> The SQL committee already has another operator starting with AT which is AT\n> LOCAL.\n\nThe other patch was the reason why I looked at this one. At the end,\nI've made peace with Dagfinn's argument two messages ago, and applied\nthe patch after adding LOCAL to the keywords, but after also removing\nthe completion for \"ZONE\" after typing \"AT TIME\" because AT would be\ncompleted by \"TIME ZONE\".\n\n> I am implementing it in https://commitfest.postgresql.org/45/4343/\n> where I humbly admit that I did not think of psql tab completion at all.\n\npsql completion is always nice to have but not really mandatory IMO,\nso leaving it out if one does not want to implement it is fine by me\nto not complicate a patch. Completion could always be tackled on top\nof any feature related to it that got committed.\n--\nMichael",
"msg_date": "Fri, 13 Oct 2023 14:31:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for AT TIME ZONE"
},
{
"msg_contents": "On 10/13/23 06:31, Michael Paquier wrote:\n> On Fri, Oct 13, 2023 at 03:07:25AM +0200, Vik Fearing wrote:\n>> The SQL committee already has another operator starting with AT which is AT\n>> LOCAL.\n> \n> The other patch was the reason why I looked at this one. \n\n\nThank you for updating and committing this patch!\n\n\n> but after also removing\n> the completion for \"ZONE\" after typing \"AT TIME\" because AT would be\n> completed by \"TIME ZONE\".\n\n\nWhy? The user can tab at any point.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 13 Oct 2023 08:01:08 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for AT TIME ZONE"
},
{
"msg_contents": "On Fri, Oct 13, 2023 at 08:01:08AM +0200, Vik Fearing wrote:\n> On 10/13/23 06:31, Michael Paquier wrote:\n>> but after also removing\n>> the completion for \"ZONE\" after typing \"AT TIME\" because AT would be\n>> completed by \"TIME ZONE\".\n> \n> Why? The user can tab at any point.\n\nIMO this leads to unnecessary bloat in tab-complete.c because we\nfinish with the full completion as long as \"TIME\" is not fully typed.\n--\nMichael",
"msg_date": "Fri, 13 Oct 2023 16:32:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for AT TIME ZONE"
}
] |
[
{
"msg_contents": "Hi,\n\nWhilst reading up on the transaction commit code, I noticed the following lines:\n\n /* Tell bufmgr and smgr to prepare for commit */\n BufmgrCommit();\n\nBufmgrCommit does exactly nothing; it is an empty function and has\nbeen since commit 33960006 in late 2008 when it stopped calling\nsmgrcommit().\n\nAll two usages of the function (in our code base) seem to be in\nxact.c. Are we maintaining it for potential future use, or can the\nfunction be removed?\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 29 Mar 2023 13:51:36 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "BufmgrCommit no-op since 2008, remaining uses?"
},
{
"msg_contents": "Matthias van de Meent <[email protected]> writes:\n> BufmgrCommit does exactly nothing; it is an empty function and has\n> been since commit 33960006 in late 2008 when it stopped calling\n> smgrcommit().\n> All two usages of the function (in our code base) seem to be in\n> xact.c. Are we maintaining it for potential future use, or can the\n> function be removed?\n\nSeems reasonable. Even if bufmgr grew a new need to be called\nduring commit, it would quite possibly need to be called from\na different spot; so I doubt that the function is useful even\nas a placeholder.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Mar 2023 08:12:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BufmgrCommit no-op since 2008, remaining uses?"
},
{
"msg_contents": "On Wed, 29 Mar 2023 at 14:12, Tom Lane <[email protected]> wrote:\n>\n> Matthias van de Meent <[email protected]> writes:\n> > BufmgrCommit does exactly nothing; it is an empty function and has\n> > been since commit 33960006 in late 2008 when it stopped calling\n> > smgrcommit().\n> > All two usages of the function (in our code base) seem to be in\n> > xact.c. Are we maintaining it for potential future use, or can the\n> > function be removed?\n>\n> Seems reasonable. Even if bufmgr grew a new need to be called\n> during commit, it would quite possibly need to be called from\n> a different spot; so I doubt that the function is useful even\n> as a placeholder.\n\nThen, the attached trivial patch removes the function and all\nreferences I could find.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Wed, 29 Mar 2023 14:49:18 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BufmgrCommit no-op since 2008, remaining uses?"
},
{
"msg_contents": "Matthias van de Meent <[email protected]> writes:\n> Then, the attached trivial patch removes the function and all\n> references I could find.\n\nPushed after a bit of fooling with adjacent comments.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Mar 2023 09:17:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BufmgrCommit no-op since 2008, remaining uses?"
}
] |
[
{
"msg_contents": "Good evening,\nI'm a Master degree student at University of Padua in Italy and I'm\ndeveloping a web application as assignment for the Web application course.\n\nContext: the Web application that my group is developing would ideally be\nused to manage county side fairs where there would be foods and drinks,\nthese displayed into a digital menu.\nThe application uses postgre to implement a database where stores data,\nmostly strings as emails and orders but also some images (representing the\ndishes).\nThe web pages are created using java servlets and jbc\n\nQuestion: for better performance is it better to store images as BYTEA or\nconvert every image in base64 and store the generated string (so in html\nit's enough to insert the base64 string in the tag)?\nConverting an image in base64 would use a 30% more memory than storing\ndirectly the image's bytes, but I don't know if working with characters\nrather than bytes could have more prons than cons\n\nThank for the time you dedicated for the answer and I apologize both for\ndisturbing you and my English.\n\nBest regards, Riccardo.\n\nComputer Engineering\nMat. 2082156\n\nGood evening,I'm a Master degree student at University of Padua in Italy and I'm developing a web application as assignment for the Web application course. Context: the Web application that my group is developing would ideally be used to manage county side fairs where there would be foods and drinks, these displayed into a digital menu.The application uses postgre to implement a database where stores data, mostly strings as emails and orders but also some images (representing the dishes).The web pages are created using java servlets and jbcQuestion: for better performance is it better to store images as BYTEA or convert every image in base64 and store the generated string (so in html it's enough to insert the base64 string in the tag)?Converting an image in base64 would use a 30% more memory than storing directly the image's bytes, but I don't know if working with characters rather than bytes could have more prons than consThank for the time you dedicated for the answer and I apologize both for disturbing you and my English. Best regards, Riccardo.Computer EngineeringMat. 2082156",
"msg_date": "Wed, 29 Mar 2023 23:29:45 +0200",
"msg_from": "Riccardo Gobbo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Images storing techniques"
},
{
"msg_contents": "I would suggest to store your images in a file system and the store paths\nto those images.\n\nYou can keep files and entries in your database synced via triggers/stored\nprocedures (eventually written in python since pgsql doesn't allow you to\ninteract with the file system).\n\nOn Thu, Mar 30, 2023, 11:22 Riccardo Gobbo <\[email protected]> wrote:\n\n> Good evening,\n> I'm a Master degree student at University of Padua in Italy and I'm\n> developing a web application as assignment for the Web application course.\n>\n> Context: the Web application that my group is developing would ideally be\n> used to manage county side fairs where there would be foods and drinks,\n> these displayed into a digital menu.\n> The application uses postgre to implement a database where stores data,\n> mostly strings as emails and orders but also some images (representing the\n> dishes).\n> The web pages are created using java servlets and jbc\n>\n> Question: for better performance is it better to store images as BYTEA or\n> convert every image in base64 and store the generated string (so in html\n> it's enough to insert the base64 string in the tag)?\n> Converting an image in base64 would use a 30% more memory than storing\n> directly the image's bytes, but I don't know if working with characters\n> rather than bytes could have more prons than cons\n>\n> Thank for the time you dedicated for the answer and I apologize both for\n> disturbing you and my English.\n>\n> Best regards, Riccardo.\n>\n> Computer Engineering\n> Mat. 2082156\n>\n\nI would suggest to store your images in a file system and the store paths to those images.You can keep files and entries in your database synced via triggers/stored procedures (eventually written in python since pgsql doesn't allow you to interact with the file system). On Thu, Mar 30, 2023, 11:22 Riccardo Gobbo <[email protected]> wrote:Good evening,I'm a Master degree student at University of Padua in Italy and I'm developing a web application as assignment for the Web application course. Context: the Web application that my group is developing would ideally be used to manage county side fairs where there would be foods and drinks, these displayed into a digital menu.The application uses postgre to implement a database where stores data, mostly strings as emails and orders but also some images (representing the dishes).The web pages are created using java servlets and jbcQuestion: for better performance is it better to store images as BYTEA or convert every image in base64 and store the generated string (so in html it's enough to insert the base64 string in the tag)?Converting an image in base64 would use a 30% more memory than storing directly the image's bytes, but I don't know if working with characters rather than bytes could have more prons than consThank for the time you dedicated for the answer and I apologize both for disturbing you and my English. Best regards, Riccardo.Computer EngineeringMat. 2082156",
"msg_date": "Thu, 30 Mar 2023 11:30:37 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Images storing techniques"
},
{
"msg_contents": "On 29.03.23 23:29, Riccardo Gobbo wrote:\n> Question: for better performance is it better to store images as BYTEA \n> or convert every image in base64 and store the generated string (so in \n> html it's enough to insert the base64 string in the tag)?\n> Converting an image in base64 would use a 30% more memory than storing \n> directly the image's bytes, but I don't know if working with characters \n> rather than bytes could have more prons than cons\n\nStoring as bytea is better.\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 15:41:42 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Images storing techniques"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 11:30:37AM +0200, Gaetano Mendola wrote:\n> I would suggest to store your images in a file system and the store paths to\n> those images.\n> \n> You can keep files and entries in your database synced via triggers/stored\n> procedures (eventually written in python since pgsql doesn't allow you to\n> interact with the file system). \n\nYou might want to read this blog entry:\n\n\thttps://momjian.us/main/blogs/pgblog/2017.html#November_6_2017\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Thu, 30 Mar 2023 11:15:36 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Images storing techniques"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nThis thread is motivated from [1]. This patch adds some links that refer publication options.\n\nWhile adding missing ID attributes to create_subscription.sgml, we found that\nthis could extend to create_publication.sgml. In the file no entries have an XML\nID attribute, but I think that adding the attribute and links improves the readability.\n\nHow do you think?\n\n[1]: https://www.postgresql.org/message-id/flat/TYAPR01MB58667AE04D291924671E2051F5879@TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Thu, 30 Mar 2023 01:57:30 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PGdoc: add ID attribute to create_publication.sgml"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 12:57 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear hackers,\n>\n> This thread is motivated from [1]. This patch adds some links that refer publication options.\n>\n\nIs this the correct attachment?\n\n------\nKind Regards,\nPeter Smith,\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 30 Mar 2023 13:55:24 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add ID attribute to create_publication.sgml"
},
{
"msg_contents": "Dear Peter,\r\n\r\n> > This thread is motivated from [1]. This patch adds some links that refer\r\n> publication options.\r\n> >\r\n> \r\n> Is this the correct attachment?\r\n\r\nOpps, I attached wrong one. Here is currect.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 30 Mar 2023 02:58:46 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: PGdoc: add ID attribute to create_publication.sgml"
},
{
"msg_contents": "Hi Kuroda-san.\n\nThis patch had already gone through several review cycles when it was\nknown as v5-0002, in the previous thread [1], so it is already LGTM.\n\nJust to be sure, I applied it again, rebuilt the HTML docs, and\nre-checked all the rendering.\n\nI have marked the CF entry for this patch as \"ready for committer\"\n\n------\n[1] https://www.postgresql.org/message-id/TYAPR01MB5866879FFE5E0A2726244216F5889%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n[2] https://commitfest.postgresql.org/43/4260/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 30 Mar 2023 14:51:36 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add ID attribute to create_publication.sgml"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 9:22 AM Peter Smith <[email protected]> wrote:\n>\n> I have marked the CF entry for this patch as \"ready for committer\"\n>\n\nLGTM. I'll push this tomorrow unless there are more comments for it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Mar 2023 15:48:59 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add ID attribute to create_publication.sgml"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 3:48 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Mar 30, 2023 at 9:22 AM Peter Smith <[email protected]> wrote:\n> >\n> > I have marked the CF entry for this patch as \"ready for committer\"\n> >\n>\n> LGTM. I'll push this tomorrow unless there are more comments for it.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 31 Mar 2023 14:56:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGdoc: add ID attribute to create_publication.sgml"
}
] |
[
{
"msg_contents": "The pg_basebackup code has WalSegSz as uint32, whereas the rest of the \ncode has it as int. This seems confusing, and using the extra range \nwouldn't actually work. This was in the original commit (fc49e24fa6), \nbut I suppose it was an oversight.",
"msg_date": "Thu, 30 Mar 2023 08:24:36 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_basebackup: Correct type of WalSegSz"
},
{
"msg_contents": "> On 30 Mar 2023, at 08:24, Peter Eisentraut <[email protected]> wrote:\n> \n> The pg_basebackup code has WalSegSz as uint32, whereas the rest of the code has it as int. This seems confusing, and using the extra range wouldn't actually work. This was in the original commit (fc49e24fa6), but I suppose it was an oversight.\n\nLGTM, it indeed seems to be an accidental oversight.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 31 Mar 2023 10:31:04 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: Correct type of WalSegSz"
}
] |
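A side note for readers of the thread above: the extra range of uint32 is indeed unusable here, because a valid WAL segment size must be a power of two between 1 MB and 1 GB, which always fits in a signed int. The following is only an illustrative, self-contained sketch of that constraint (the helper name is made up for illustration; it is not the actual pg_basebackup or server code):

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Hypothetical stand-in for the server-side validity rule: a WAL segment
     * size is a power of two between 1 MB and 1 GB, so int is wide enough.
     */
    static bool
    wal_seg_size_is_valid(int size)
    {
        return size >= 1024 * 1024 &&
               size <= 1024 * 1024 * 1024 &&
               (size & (size - 1)) == 0;   /* power of two */
    }

    int
    main(void)
    {
        printf("16MB valid: %d\n", wal_seg_size_is_valid(16 * 1024 * 1024));
        printf("3MB valid:  %d\n", wal_seg_size_is_valid(3 * 1024 * 1024));
        return 0;
    }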
[
{
"msg_contents": "Hi Reid!\nSome thoughts\nI was looking at lmgr/proc.c, and I see a potential integer overflow - both max_total_bkend_mem and result are declared as “int”, so the expression “max_total_bkend_mem * 1024 * 1024 - result * 1024 * 1024” could have a problem when max_total_bkend_mem is set to 2G or more.\n /*\n * Account for shared memory size and initialize\n * max_total_bkend_mem_bytes.\n */\n pg_atomic_init_u64(&ProcGlobal->max_total_bkend_mem_bytes,\n max_total_bkend_mem * 1024 * 1024 - result * 1024 * 1024);\n\n\nAs more of a style thing (and definitely not an error), the calling code might look smoother if the memory check and error handling were moved into a helper function, say “backend_malloc”. For example, the following calling code\n\n /* Do not exceed maximum allowed memory allocation */\n if (exceeds_max_total_bkend_mem(Slab_CONTEXT_HDRSZ(chunksPerBlock)))\n {\n MemoryContextStats(TopMemoryContext);\n ereport(ERROR,\n (errcode(ERRCODE_OUT_OF_MEMORY),\n errmsg(\"out of memory - exceeds max_total_backend_memory\"),\n errdetail(\"Failed while creating memory context \\\"%s\\\".\",\n name)));\n }\n\n slab = (SlabContext *) malloc(Slab_CONTEXT_HDRSZ(chunksPerBlock));\n if (slab == NULL)\n ….\nCould become a single line:\n Slab = (SlabContext *) backend_malloc(Slab_CONTEXT_HDRSZ(chunksPerBlock);\n\nNote this is a change in error handling as backend_malloc() throws memory errors. I think this change is already happening, as the error throws you’ve already added are potentially visible to callers (to be verified). It also could result in less informative error messages without the specifics of “while creating memory context”. Still, it pulls repeated code into a single wrapper and might be worth considering.\n\nI do appreciate the changes in updating the global memory counter. My recollection is the original version updated stats with every allocation, and there was something that looked like a spinlock around the update. (My memory may be wrong …). The new approach eliminates contention, both by having fewer updates and by using atomic ops. Excellent.\n\n I also have some thoughts about simplifying the atomic update logic you are currently using. I need to think about it a bit more and will get back to you later on that.\n\n\n * John Morris\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Hi Reid!\nSome thoughts\nI was looking at lmgr/proc.c, and I see a potential integer overflow - both\nmax_total_bkend_mem and \nresult are declared as “int”, so the expression “max_total_bkend_mem * 1024 * 1024 - result * 1024 * 1024” could have a problem when\nmax_total_bkend_mem is set to 2G or more.\n\n /*\n * Account for shared memory size and initialize\n * max_total_bkend_mem_bytes.\n */\n pg_atomic_init_u64(&ProcGlobal->max_total_bkend_mem_bytes,\n max_total_bkend_mem * 1024 * 1024 - result * 1024 * 1024);\n\n\n\nAs more of a style thing (and definitely not an error), the calling code might look smoother if the memory check and error handling were moved into a helper function, say “backend_malloc”. 
For example, the following calling code\n \n /* Do not exceed maximum allowed memory allocation */\n if (exceeds_max_total_bkend_mem(Slab_CONTEXT_HDRSZ(chunksPerBlock)))\n {\n MemoryContextStats(TopMemoryContext);\n ereport(ERROR,\n (errcode(ERRCODE_OUT_OF_MEMORY),\n errmsg(\"out of memory - exceeds max_total_backend_memory\"),\n errdetail(\"Failed while creating memory context \\\"%s\\\".\",\n name)));\n }\n \n slab = (SlabContext *) malloc(Slab_CONTEXT_HDRSZ(chunksPerBlock));\n if (slab == NULL)\n ….\nCould become a single line:\n Slab = (SlabContext *) backend_malloc(Slab_CONTEXT_HDRSZ(chunksPerBlock);\n \nNote this is a change in error handling as backend_malloc() throws memory errors. I think this change is already happening, as the error throws you’ve already added are potentially visible to callers (to be verified). It also could result\n in less informative error messages without the specifics of “while creating memory context”. Still, it pulls repeated code into a single wrapper and might be worth considering.\n \nI do appreciate the changes in updating the global memory counter. My recollection is the original version updated stats with every allocation, and there was something that looked like a spinlock around the update. (My memory may be wrong\n …). The new approach eliminates contention, both by having fewer updates and by using atomic ops. Excellent.\n \n I also have some thoughts about simplifying the atomic update logic you are currently using. I need to think about it a bit more and will get back to you later on that.\n \n\n\nJohn Morris",
"msg_date": "Thu, 30 Mar 2023 16:11:08 +0000",
"msg_from": "John Morris <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Thu, 2023-03-30 at 16:11 +0000, John Morris wrote:\n> Hi Reid!\n> Some thoughts\n> I was looking at lmgr/proc.c, and I see a potential integer overflow\n> - bothmax_total_bkend_mem and result are declared as “int”, so the\n> expression “max_total_bkend_mem * 1024 * 1024 - result * 1024 * 1024”\n> could have a problem whenmax_total_bkend_mem is set to 2G or more.\n> /*\n> * Account for shared\n> memory size and initialize\n> *\n> max_total_bkend_mem_bytes.\n> */\n> \n> pg_atomic_init_u64(&ProcGlobal->max_total_bkend_mem_bytes,\n> \n> max_total_bkend_mem *\n> 1024 * 1024 - result * 1024 * 1024);\n> \n> \n> As more of a style thing (and definitely not an error), the calling\n> code might look smoother if the memory check and error handling were\n> moved into a helper function, say “backend_malloc”. For example, the\n> following calling code\n> \n> /* Do not exceed maximum allowed memory allocation */\n> if\n> (exceeds_max_total_bkend_mem(Slab_CONTEXT_HDRSZ(chunksPerBlock)))\n> {\n> MemoryContextStats(TopMemoryContext);\n> ereport(ERROR,\n> \n> (errcode(ERRCODE_OUT_OF_MEMORY),\n> \n> errmsg(\"out of memory - exceeds max_total_backend_memory\"),\n> \n> errdetail(\"Failed while creating memory context \\\"%s\\\".\",\n> \n> name)));\n> }\n> \n> slab = (SlabContext *)\n> malloc(Slab_CONTEXT_HDRSZ(chunksPerBlock));\n> if (slab == NULL)\n> ….\n> Could become a single line:\n> Slab = (SlabContext *)\n> backend_malloc(Slab_CONTEXT_HDRSZ(chunksPerBlock);\n> \n> Note this is a change in error handling as backend_malloc() throws\n> memory errors. I think this change is already happening, as the error\n> throws you’ve already added are potentially visible to callers (to be\n> verified). It also could result in less informative error messages\n> without the specifics of “while creating memory context”. Still, it\n> pulls repeated code into a single wrapper and might be worth\n> considering.\n> \n> I do appreciate the changes in updating the global memory counter. My\n> recollection is the original version updated stats with every\n> allocation, and there was something that looked like a spinlock\n> around the update. (My memory may be wrong …). The new approach\n> eliminates contention, both by having fewer updates and by using\n> atomic ops. Excellent.\n> \n> I also have some thoughts about simplifying the atomic update logic\n> you are currently using. I need to think about it a bit more and will\n> get back to you later on that.\n> \n> * John Morris\n> \n> \n> \n> \n\nJohn,\nThanks for looking this over and catching this. I appreciate the catch\nand the guidance. \n\nThanks,\nReid\n\n\n\n\n\nOn Thu, 2023-03-30 at 16:11 +0000, John Morris wrote: Hi Reid!Some thoughtsI was looking at lmgr/proc.c, and I see a potential integer overflow - bothmax_total_bkend_mem and result are declared as “int”, so the expression “max_total_bkend_mem * 1024 * 1024 - result * 1024 * 1024” could have a problem whenmax_total_bkend_mem is set to 2G or more. /* * Account for shared memory size and initialize * max_total_bkend_mem_bytes. */ pg_atomic_init_u64(&ProcGlobal->max_total_bkend_mem_bytes, max_total_bkend_mem * 1024 * 1024 - result * 1024 * 1024);As more of a style thing (and definitely not an error), the calling code might look smoother if the memory check and error handling were moved into a helper function, say “backend_malloc”. 
For example, the following calling code /* Do not exceed maximum allowed memory allocation */ if (exceeds_max_total_bkend_mem(Slab_CONTEXT_HDRSZ(chunksPerBlock))) { MemoryContextStats(TopMemoryContext); ereport(ERROR, (errcode(ERRCODE_OUT_OF_MEMORY), errmsg(\"out of memory - exceeds max_total_backend_memory\"), errdetail(\"Failed while creating memory context \\\"%s\\\".\", name))); } slab = (SlabContext *) malloc(Slab_CONTEXT_HDRSZ(chunksPerBlock)); if (slab == NULL) ….Could become a single line: Slab = (SlabContext *) backend_malloc(Slab_CONTEXT_HDRSZ(chunksPerBlock); Note this is a change in error handling as backend_malloc() throws memory errors. I think this change is already happening, as the error throws you’ve already added are potentially visible to callers (to be verified). It also could result in less informative error messages without the specifics of “while creating memory context”. Still, it pulls repeated code into a single wrapper and might be worth considering. I do appreciate the changes in updating the global memory counter. My recollection is the original version updated stats with every allocation, and there was something that looked like a spinlock around the update. (My memory may be wrong …). The new approach eliminates contention, both by having fewer updates and by using atomic ops. Excellent. I also have some thoughts about simplifying the atomic update logic you are currently using. I need to think about it a bit more and will get back to you later on that. John Morris John,Thanks for looking this over and catching this. I appreciate the catch and the guidance. Thanks,Reid",
"msg_date": "Fri, 31 Mar 2023 09:39:00 -0400",
"msg_from": "Reid Thompson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: Add the ability to limit the amount of memory that can be\n allocated to backends."
}
] |
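For context on the overflow discussed in the thread above: with both operands declared int, max_total_bkend_mem * 1024 * 1024 is evaluated in 32-bit arithmetic and wraps once the GUC reaches 2048 (2 GB). A minimal sketch of the kind of fix implied, widening before multiplying (variable names follow the quoted snippet; this is illustrative only, not the committed code, and the actual patch feeds the value into pg_atomic_init_u64):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int     max_total_bkend_mem = 4096; /* GUC value in MB, e.g. 4 GB */
        int     result = 1024;              /* shared memory size in MB */

        /*
         * Widening one operand to 64 bit before multiplying keeps the whole
         * expression out of 32-bit arithmetic; without the casts the
         * multiplication overflows for values of 2048 MB and above.
         */
        int64_t limit_bytes = (int64_t) max_total_bkend_mem * 1024 * 1024 -
                              (int64_t) result * 1024 * 1024;

        printf("limit_bytes = %lld\n", (long long) limit_bytes);
        return 0;
    }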
[
{
"msg_contents": "Is there a barrier to us using non-core perl modules, in this case\nText::Template?\n\nI think it would be a tremendous improvement in readability and\nmaintainability over our current series of print statements, some\nmultiline, some not.\n\nThe module itself works like this https://www.perlmonks.org/?node_id=33296\n\nSome other digging around shows that the module has been around since 1996\n(Perl5 was 1994) and hasn't had a feature update (or any update for that\nmatter) since 2003. So it should meet our baseline 5.14 requirement, which\ncame out in 2011.\n\nI'm happy to proceed with a proof-of-concept so that people can see the\ncosts/benefits, but wanted to first make sure it wasn't a total non-starter.\n\nIs there a barrier to us using non-core perl modules, in this case Text::Template?I think it would be a tremendous improvement in readability and maintainability over our current series of print statements, some multiline, some not.The module itself works like this https://www.perlmonks.org/?node_id=33296Some other digging around shows that the module has been around since 1996 (Perl5 was 1994) and hasn't had a feature update (or any update for that matter) since 2003. So it should meet our baseline 5.14 requirement, which came out in 2011.I'm happy to proceed with a proof-of-concept so that people can see the costs/benefits, but wanted to first make sure it wasn't a total non-starter.",
"msg_date": "Thu, 30 Mar 2023 13:06:46 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Thoughts on using Text::Template for our autogenerated code?"
},
{
"msg_contents": "Corey Huinker <[email protected]> writes:\n> Is there a barrier to us using non-core perl modules, in this case\n> Text::Template?\n\nUse for what exactly?\n\nI'd be hesitant to require such things to build from a tarball,\nor to run regression tests. If it's used to build a generated file\nthat we include in tarballs, that might be workable ... but I'd bet\na good fraction of the buildfarm falls over (looks like all four of\nmy animals would), and you might get push-back from developers too.\n\n> I think it would be a tremendous improvement in readability and\n> maintainability over our current series of print statements, some\n> multiline, some not.\n\nI suspect it'd have to be quite a remarkable improvement to justify\nadding a new required build tool ... but show us an example.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Mar 2023 14:04:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on using Text::Template for our autogenerated code?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-30 13:06:46 -0400, Corey Huinker wrote:\n> Is there a barrier to us using non-core perl modules, in this case\n> Text::Template?\n\nI don't think we should have a hard build-time dependency on non-core perl\nmodules. On some operating systems having to install such dependencies is\nquite painful (e.g. windows).\n\n\n> I think it would be a tremendous improvement in readability and\n> maintainability over our current series of print statements, some\n> multiline, some not.\n\nI think many of those could just be replaced by multi-line strings and/or here\ndocuments to get most of the win.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 Mar 2023 11:48:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on using Text::Template for our autogenerated code?"
},
{
"msg_contents": "> On 30 Mar 2023, at 19:06, Corey Huinker <[email protected]> wrote:\n\n> Some other digging around shows that the module has been around since 1996 (Perl5 was 1994) and hasn't had a feature update (or any update for that matter) since 2003. So it should meet our baseline 5.14 requirement, which came out in 2011.\n\nI assume you then mean tying this to 1.44 (or another version?), since AFAICT\nthere has been both features and bugfixes well after 2003?\n\n\thttps://metacpan.org/dist/Text-Template/changes\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 21:06:12 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on using Text::Template for our autogenerated code?"
},
{
"msg_contents": "On 2023-03-30 Th 15:06, Daniel Gustafsson wrote:\n>> On 30 Mar 2023, at 19:06, Corey Huinker<[email protected]> wrote:\n>> Some other digging around shows that the module has been around since 1996 (Perl5 was 1994) and hasn't had a feature update (or any update for that matter) since 2003. So it should meet our baseline 5.14 requirement, which came out in 2011.\n> I assume you then mean tying this to 1.44 (or another version?), since AFAICT\n> there has been both features and bugfixes well after 2003?\n>\n> \thttps://metacpan.org/dist/Text-Template/changes\n>\n\nI don't think that's remotely a starter. Asking people to install an old \nand possibly buggy version of a perl module is not something we should do.\n\nI think the barrier for this is pretty high. I try to keep module \nrequirements for the buildfarm client pretty minimal, and this could \naffect a much larger group of people.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-30 Th 15:06, Daniel\n Gustafsson wrote:\n\n\n\nOn 30 Mar 2023, at 19:06, Corey Huinker <[email protected]> wrote:\n\n\n\n\n\nSome other digging around shows that the module has been around since 1996 (Perl5 was 1994) and hasn't had a feature update (or any update for that matter) since 2003. So it should meet our baseline 5.14 requirement, which came out in 2011.\n\n\n\nI assume you then mean tying this to 1.44 (or another version?), since AFAICT\nthere has been both features and bugfixes well after 2003?\n\n\thttps://metacpan.org/dist/Text-Template/changes\n\n\n\n\n\nI don't think that's remotely a starter. Asking people to install\n an old and possibly buggy version of a perl module is not\n something we should do.\nI think the barrier for this is pretty high. I try to keep module\n requirements for the buildfarm client pretty minimal, and this\n could affect a much larger group of people.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 30 Mar 2023 16:09:02 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on using Text::Template for our autogenerated code?"
},
{
"msg_contents": ">\n> I think many of those could just be replaced by multi-line strings and/or\n> here\n> documents to get most of the win.\n>\n\nI agree that a pretty good chunk of it can be done with here-docs, but\ntemplate files do have attractive features (separation of concerns, syntax\nhighlighting, etc) that made it worth asking.\n\nI think many of those could just be replaced by multi-line strings and/or here\ndocuments to get most of the win.I agree that a pretty good chunk of it can be done with here-docs, but template files do have attractive features (separation of concerns, syntax highlighting, etc) that made it worth asking.",
"msg_date": "Thu, 30 Mar 2023 16:54:46 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on using Text::Template for our autogenerated code?"
},
{
"msg_contents": ">\n> I don't think that's remotely a starter. Asking people to install an old\n> and possibly buggy version of a perl module is not something we should do.\n>\n> I think the barrier for this is pretty high. I try to keep module\n> requirements for the buildfarm client pretty minimal, and this could affect\n> a much larger group of people.\n>\nThose are good reasons.\n\nFor those who already responded, are your concerns limited to the\ndependency issue? Would you have concerns about a templating library that\nwas developed by us and included in-tree? I'm not suggesting suggesting we\nwrite one at this time, at least not until after we make a here-doc-ing\npass first, but I want to understand the concerns before embarking on any\nrefactoring.\n\nI don't think that's remotely a starter. Asking people to install\n an old and possibly buggy version of a perl module is not\n something we should do.\nI think the barrier for this is pretty high. I try to keep module\n requirements for the buildfarm client pretty minimal, and this\n could affect a much larger group of people.Those are good reasons.For those who already responded, are your concerns limited to the dependency issue? Would you have concerns about a templating library that was developed by us and included in-tree? I'm not suggesting suggesting we write one at this time, at least not until after we make a here-doc-ing pass first, but I want to understand the concerns before embarking on any refactoring.",
"msg_date": "Thu, 30 Mar 2023 17:15:20 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on using Text::Template for our autogenerated code?"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-30 17:15:20 -0400, Corey Huinker wrote:\n> For those who already responded, are your concerns limited to the\n> dependency issue? Would you have concerns about a templating library that\n> was developed by us and included in-tree? I'm not suggesting suggesting we\n> write one at this time, at least not until after we make a here-doc-ing\n> pass first, but I want to understand the concerns before embarking on any\n> refactoring.\n\nThe dependency is/was my main issue. But I'm also somewhat doubtful that what\nwe do warrants the use of a template library in the first place, but I could\nbe convinced by a patch showing it to be a significant improvement.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 Mar 2023 14:33:48 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on using Text::Template for our autogenerated code?"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-03-30 17:15:20 -0400, Corey Huinker wrote:\n>> For those who already responded, are your concerns limited to the\n>> dependency issue? Would you have concerns about a templating library that\n>> was developed by us and included in-tree? I'm not suggesting suggesting we\n>> write one at this time, at least not until after we make a here-doc-ing\n>> pass first, but I want to understand the concerns before embarking on any\n>> refactoring.\n\n> The dependency is/was my main issue. But I'm also somewhat doubtful that what\n> we do warrants the use of a template library in the first place, but I could\n> be convinced by a patch showing it to be a significant improvement.\n\nYeah, it's somewhat hard to believe that the cost/benefit ratio would be\nattractive. But maybe you could mock up some examples of what the input\ncould look like, and get people on board (or not) before writing any\ncode.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Mar 2023 19:25:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on using Text::Template for our autogenerated code?"
},
{
"msg_contents": "On Fri, Mar 31, 2023 at 4:15 AM Corey Huinker <[email protected]>\nwrote:\n> For those who already responded, are your concerns limited to the\ndependency issue? Would you have concerns about a templating library that\nwas developed by us and included in-tree?\n\nLibraries (and abstractions in general) require some mental effort to\ninterface with them (that also means debugging when the output fails to\nmatch expectations), not to mention maintenance cost. There has to be a\ncompensating benefit in return. The cost-to-benefit ratio here seems\nunfavorable -- seems like inventing a machine that ties shoelaces, but we\nusually wear sandals.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Fri, Mar 31, 2023 at 4:15 AM Corey Huinker <[email protected]> wrote:> For those who already responded, are your concerns limited to the dependency issue? Would you have concerns about a templating library that was developed by us and included in-tree? Libraries (and abstractions in general) require some mental effort to interface with them (that also means debugging when the output fails to match expectations), not to mention maintenance cost. There has to be a compensating benefit in return. The cost-to-benefit ratio here seems unfavorable -- seems like inventing a machine that ties shoelaces, but we usually wear sandals.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 31 Mar 2023 10:32:04 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Thoughts on using Text::Template for our autogenerated code?"
},
{
"msg_contents": ">\n> Yeah, it's somewhat hard to believe that the cost/benefit ratio would be\n> attractive. But maybe you could mock up some examples of what the input\n> could look like, and get people on board (or not) before writing any\n> code.\n>\n>\ntl;dr - I tried a few things, nothing that persuades myself let alone the\ncommunity, but perhaps some ideas for the future.\n\nI borrowed Bertrand's ongoing work for waiteventnames.* because that is\nwhat got me thinking about this in the first place. I considered a few\ndifferent templating libraries:\n\nThere is no perl implementation of the golang template library (example of\nthat here: https://blog.gopheracademy.com/advent-2017/using-go-templates/ )\nthat I could find.\n\nText::Template does not support loops, and as such it is no better than\nhere-docs.\n\nTemplate Toolkit seems to do what we need, but it has a kitchen sink of\ndependencies that make it an unattractive option, so I didn't even attempt\nit.\n\nHTML::Template has looping and if/then/else constructs, and it is a single\nstandalone library. It also does a \"separation of concerns\" wherein you\npass in parameter names and values, and some parameters can be for loops,\nwhich means you pass an arrayref of hashrefs that the template engine loops\nover. That's where the advantages stop, however. It is fairly verbose, and\nbecause it is HTML-centric it isn't very good about controlling whitespace,\nwhich leads to piling template directives onto the same line in order to\navoid spurious newlines. As such I cannot recommend it.\n\nMy ideal template library would have text something like this:\n\n[% loop events %]\n[% $enum_value %]\n[% if __first__ +%] = [%+ $inital_value %][% endif %]\n[% if ! __last__ %],[% endif +%]\n[% end loop %]\n[% loop xml_blocks indent: relative,spaces,4 %]\n\n<row>\n\n <SomeElement attrib=[%attrib_val%]>[%element_body%]/>\n\n</row>\n\n[% end loop %]\n\n\n[%+ means \"leading whitespace matters\", +%] means \"trailing whitespace\nmatters\"\nThat pseudocode is a mix of ASP, HTML::Template. The special variables\n__first__ and __last__ refer to which iteration of the loop we are on. You\nwould pass it a data structure like this:\n\n{events: [ { enum_value: \"abc\", initial_value: \"def\"}, ... { enum_value:\n\"wuv\", initial_value: \"xyz\" } ],\n xml_block: [ {attrib_val: \"one\", element_body: \"two\"} ]\n }\n\n\nI did one initial pass with just converting printf statements to here-docs,\nand the results were pretty unsatisfying. It wasn't really possible to\n\"see\" the output files take shape.\n\nMy next attempt was to take the \"separation of concerns\" code from the\nHTML::Template version, constructing the nested data structure of resolved\noutput values, and then iterating over that once per output file. This\nresulted in something cleaner, partly because we're only writing one file\ntype at a time, partly because the interpolated variables have names much\ncloser to their output meaning.\n\nIn doing this, it occurred to me that a lot of this effort is in getting\nthe code to conform to our own style guide, at the cost of the generator\ncode being less readable. 
What if we wrote the generator and formatted the\ncode in a way that made sense for the generator, and then pgindented it?\nThat's not the workflow right now, but perhaps it could be.\n\nConclusions:\n- There is no \"good enough\" template engine that doesn't require big\nchanges in dependencies.\n- pgindent will not save you from a run-on sentence, like putting all of\na typedef's enum values on one line\n- There is some clarity value in either separating input processing from\nthe output processing, or making the input align more closely with the\noutputs\n- Fiddling with indentation and spacing detracts from legibility no matter\nwhat method is used.\n- here docs are basically ok but they necessarily confuse output\nindentation with code indentation. It is possible to de-indent them\nwith <<~ but that's a 5.25+ feature.\n- Any of these principles can be applied at any time, with no overhaul\nrequired.\n\n\n\"sorted-\" is the slightly modified version of Bertrand's code.\n\"eof-as-is-\" is a direct conversion of the original but using here-docs.\n\"heredoc-fone-file-at-a-time-\" first generates an output-friendly data\nstructure\n\"needs-pgindent-\" is what is possible if we format for our own readability\nand make pgindent fix the output, though it was not a perfect output match",
"msg_date": "Mon, 3 Apr 2023 15:07:15 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Thoughts on using Text::Template for our autogenerated code?"
}
] |
[
{
"msg_contents": "Clean up role created in new subscription test.\n\nThis oversight broke repeated runs of \"make installcheck\".\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/e9d202a1499d6a70e80d080fcdba07fe6707845d\n\nModified Files\n--------------\nsrc/test/regress/expected/subscription.out | 1 +\nsrc/test/regress/sql/subscription.sql | 1 +\n2 files changed, 2 insertions(+)",
"msg_date": "Thu, 30 Mar 2023 17:07:20 +0000",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 1:07 PM Tom Lane <[email protected]> wrote:\n> Clean up role created in new subscription test.\n>\n> This oversight broke repeated runs of \"make installcheck\".\n\nGAAAAH. You would think that I would have learned better by now, but\nevidently not. Is there some way we can add an automated guard against\nthis?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Mar 2023 14:19:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Mar 30, 2023 at 1:07 PM Tom Lane <[email protected]> wrote:\n>> This oversight broke repeated runs of \"make installcheck\".\n\n> GAAAAH. You would think that I would have learned better by now, but\n> evidently not. Is there some way we can add an automated guard against\n> this?\n\nHm. We could add a final test step that prints out all still-existing\nroles, but the trick is to have it not fail in a legitimate installcheck\ncontext (ie, when there are indeed some pre-existing roles).\n\nMaybe it'd be close enough to expect there to be no roles named\n\"regress_xxx\". In combination with\n-DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS, that would prevent us\nfrom accidentally leaving stuff behind, and we could hope that it doesn't\ncause false failures in real installations.\n\nAnother idea could be for pg_regress to enforce that \"select count(*)\nfrom pg_roles\" gives the same answer before and after the test run.\nThat would then be enforced against all pg_regress suites not just\nthe main one, but perhaps that's good.\n\nLikewise for tablespaces, subscriptions, and other globally-visible\nobjects, of course.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Mar 2023 14:44:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 30 Mar 2023, at 20:44, Tom Lane <[email protected]> wrote:\n\n> Maybe it'd be close enough to expect there to be no roles named\n> \"regress_xxx\". In combination with\n> -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS, that would prevent us\n> from accidentally leaving stuff behind, and we could hope that it doesn't\n> cause false failures in real installations.\n\nWould that check be always on or only when pg_regress is compiled with\n-DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS?\n\n> Another idea could be for pg_regress to enforce that \"select count(*)\n> from pg_roles\" gives the same answer before and after the test run.\n\nThat wouldn't prevent the contents of pg_roles to have changed though, so there\nis a (slim) false positive risk with that no?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 21:19:17 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n>> On 30 Mar 2023, at 20:44, Tom Lane <[email protected]> wrote:\n>> Maybe it'd be close enough to expect there to be no roles named\n>> \"regress_xxx\". In combination with\n>> -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS, that would prevent us\n>> from accidentally leaving stuff behind, and we could hope that it doesn't\n>> cause false failures in real installations.\n\n> Would that check be always on or only when pg_regress is compiled with\n> -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS?\n\nI envisioned it as being on all the time.\n\n>> Another idea could be for pg_regress to enforce that \"select count(*)\n>> from pg_roles\" gives the same answer before and after the test run.\n\n> That wouldn't prevent the contents of pg_roles to have changed though, so there\n> is a (slim) false positive risk with that no?\n\nWell, we could do \"select rolname from pg_roles order by 1\" and\nactually compare the results of the two selects. That might be\nadvisable anyway, in order to produce a complaint with useful\ndetail when there is something wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Mar 2023 16:29:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 30 Mar 2023, at 22:29, Tom Lane <[email protected]> wrote:\n> Daniel Gustafsson <[email protected]> writes:\n>>> On 30 Mar 2023, at 20:44, Tom Lane <[email protected]> wrote:\n\n>>> Another idea could be for pg_regress to enforce that \"select count(*)\n>>> from pg_roles\" gives the same answer before and after the test run.\n> \n>> That wouldn't prevent the contents of pg_roles to have changed though, so there\n>> is a (slim) false positive risk with that no?\n> \n> Well, we could do \"select rolname from pg_roles order by 1\" and\n> actually compare the results of the two selects. That might be\n> advisable anyway, in order to produce a complaint with useful\n> detail when there is something wrong.\n\nI can see the value in doing something like this to keep us honest.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 22:53:03 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 30 Mar 2023, at 22:29, Tom Lane <[email protected]> wrote:\n\n> Well, we could do \"select rolname from pg_roles order by 1\" and\n> actually compare the results of the two selects. That might be\n> advisable anyway, in order to produce a complaint with useful\n> detail when there is something wrong.\n\nI took a look at this and came up with the attached. This adds a new parameter\nto pg_regress for specifying a test which will be executed before and after the\nsuite, where the first invocation creates the expectfile for the second. For\nstoring the expecfile the temp dir creation is somewhat refactored. I've added\na sample test in the patch (to regress, not ECPG), but I'm sure it can be\nexpanded to be a bit more interesting. The comment which is now incorrectly\nformatted was left like that to make review easier, if this gets committed it\nwill be fixed then.\n\nI opted for this to use the machinery that pg_regress already has rather than\nadd a new mechanism (and dependency) for running and verifying queries. This\nalso avoids hardcoding the test making it easier to have custom queries during\nhacking etc.\n\nLooking at this I also found a bug introduced in the TAP format patch, which\nmade failed single run tests report as 0ms due to the parameters being mixed up\nin the report function call. This is in 0002, which I'll apply to HEAD\nregardless of 0001 as they are unrelated.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 15 May 2023 10:59:20 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 15 May 2023, at 10:59, Daniel Gustafsson <[email protected]> wrote:\n\n> Looking at this I also found a bug introduced in the TAP format patch, which\n> made failed single run tests report as 0ms due to the parameters being mixed up\n> in the report function call. This is in 0002, which I'll apply to HEAD\n> regardless of 0001 as they are unrelated.\n\nWith 0002 applied, attached is just the 0001 rebased to keep the CFBot from\nbeing angry when applying an already applied patch. Parked in the July CF for\nnow.\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 16 May 2023 11:17:17 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 16 May 2023, at 11:17, Daniel Gustafsson <[email protected]> wrote:\n\n> Parked in the July CF for now.\n\nRebased to fix a trivial conflict highlighted by the CFBot.\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 6 Jul 2023 00:00:41 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "On 06.07.23 00:00, Daniel Gustafsson wrote:\n>> On 16 May 2023, at 11:17, Daniel Gustafsson <[email protected]> wrote:\n> \n>> Parked in the July CF for now.\n> \n> Rebased to fix a trivial conflict highlighted by the CFBot.\n\nI think the problem with this approach is that one would need to reapply \nit to each regression test suite separately. For example, there are \nseveral tests under contrib/ that create roles. These would not be \ncovered by this automatically.\n\nI think the earlier idea of just counting roles, tablespaces, etc. \nbefore and after would be sufficient.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 08:55:32 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "On 2023-Nov-08, Peter Eisentraut wrote:\n\n> I think the earlier idea of just counting roles, tablespaces, etc. before\n> and after would be sufficient.\n\nMaybe record global objects in a permanent table in test_setup.sql\n\ncreate table global_objs as\nselect 'role', rolname from pg_roles\nunion all\nselect 'tablespace', spcname from pg_tablespace;\n\nand at the end (maybe in test tablespace, though it's unrelated but it's\nwhat runs last and drops regress_tablespace), have\n\n(select 'role', rolname from pg_roles\nunion all\nselect 'tablespace', spcname from pg_tablespace)\nexcept\nselect * from global_objs;\n\nand check the expected as empty.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La verdad no siempre es bonita, pero el hambre de ella sí\"\n\n\n",
"msg_date": "Wed, 8 Nov 2023 12:42:26 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 8 Nov 2023, at 08:55, Peter Eisentraut <[email protected]> wrote:\n> \n> On 06.07.23 00:00, Daniel Gustafsson wrote:\n>>> On 16 May 2023, at 11:17, Daniel Gustafsson <[email protected]> wrote:\n>>> Parked in the July CF for now.\n>> Rebased to fix a trivial conflict highlighted by the CFBot.\n> \n> I think the problem with this approach is that one would need to reapply it to each regression test suite separately. For example, there are several tests under contrib/ that create roles. These would not be covered by this automatically.\n> \n> I think the earlier idea of just counting roles, tablespaces, etc. before and after would be sufficient.\n\nIt's been a while but if memory serves me right, one of the reasons for this\napproach was that pg_regress didn't use libpq so running queries and storing\nresults for comparisons other than diffing .out files was painful at best.\n\nSince 66d6086cbc pg_regress does have a dependency on libpq so we can now\nperform that bookkeeping a bit easier. I still find it more elegant to at\nleast compare the contents and not just the count, but I can take a stab at a\nrevised patch since this approach doesn't seem to appeal to the thread.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 8 Nov 2023 13:15:00 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 8 Nov 2023, at 12:42, Alvaro Herrera <[email protected]> wrote:\n> On 2023-Nov-08, Peter Eisentraut wrote:\n\n>> I think the earlier idea of just counting roles, tablespaces, etc. before\n>> and after would be sufficient.\n> \n> Maybe record global objects in a permanent table in test_setup.sql\n\nSince test_setup.sql is part of the regress schedule and not pg_regress we\nwould have to implement this for each test run (regress, contribs etc), which\nis what Peter didn't like about the original suggestion.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 8 Nov 2023 13:24:41 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "On 2023-Nov-08, Daniel Gustafsson wrote:\n\n> Since test_setup.sql is part of the regress schedule and not pg_regress we\n> would have to implement this for each test run (regress, contribs etc), which\n> is what Peter didn't like about the original suggestion.\n\nOh, somehow that aspect of his reply failed to register with me. I\nagree with your approach of using libpq in pg_regress then.\n\nI suppose you're just thinking of using PQexec() or whatever, run one\nquery with sufficient ORDER BY, save the result, and at the end of the\ntest run just run the same query and compare that they are cell-by-cell\nidentical? This sounds a lot simpler than the patch you posted.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 8 Nov 2023 13:32:41 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 8 Nov 2023, at 13:32, Alvaro Herrera <[email protected]> wrote:\n> \n> On 2023-Nov-08, Daniel Gustafsson wrote:\n> \n>> Since test_setup.sql is part of the regress schedule and not pg_regress we\n>> would have to implement this for each test run (regress, contribs etc), which\n>> is what Peter didn't like about the original suggestion.\n> \n> Oh, somehow that aspect of his reply failed to register with me. I\n> agree with your approach of using libpq in pg_regress then.\n> \n> I suppose you're just thinking of using PQexec() or whatever, run one\n> query with sufficient ORDER BY, save the result, and at the end of the\n> test run just run the same query and compare that they are cell-by-cell\n> identical? This sounds a lot simpler than the patch you posted.\n\nCorrect, that's my plan. The rationale for the earlier patch was to avoid\nadding a dependency on libpq, but with that already discussed and done we can\nleverage the fact that we can run such queries easy.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 8 Nov 2023 14:06:09 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 8 Nov 2023, at 13:32, Alvaro Herrera <[email protected]> wrote:\n\n> I suppose you're just thinking of using PQexec() or whatever, run one\n> query with sufficient ORDER BY, save the result, and at the end of the\n> test run just run the same query and compare that they are cell-by-cell\n> identical? This sounds a lot simpler than the patch you posted.\n\nI found some spare cycles for this and came up with the attached. The idea was\nto keep it in-line with how pg_regress already today manipulate and traverse\n_stringlists for various things. With the addition of the 0001 patch to clean\nup global objects left in test_pg_dump it passes check-world.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 1 Dec 2023 12:22:50 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "On 01/12/2023 13:22, Daniel Gustafsson wrote:\n>> On 8 Nov 2023, at 13:32, Alvaro Herrera <[email protected]> wrote:\n> \n>> I suppose you're just thinking of using PQexec() or whatever, run one\n>> query with sufficient ORDER BY, save the result, and at the end of the\n>> test run just run the same query and compare that they are cell-by-cell\n>> identical? This sounds a lot simpler than the patch you posted.\n> \n> I found some spare cycles for this and came up with the attached. The idea was\n> to keep it in-line with how pg_regress already today manipulate and traverse\n> _stringlists for various things. With the addition of the 0001 patch to clean\n> up global objects left in test_pg_dump it passes check-world.\n\nDo we want to impose this policy to all extensions too?\n\n> +\t/*\n> +\t * Store the global objects before the test starts such that we can check\n> +\t * for any objects left behind after the tests finish.\n> +\t */\n> +\tquery_to_stringlist(\"postgres\",\n> +\t\t\t\t\t\t\"(SELECT rolname AS obj FROM pg_catalog.pg_roles ORDER BY 1) \"\n> +\t\t\t\t\t\t\"UNION ALL \"\n> +\t\t\t\t\t\t\"(SELECT spcname AS obj FROM pg_catalog.pg_tablespace ORDER BY 1) \"\n> +\t\t\t\t\t\t\"UNION ALL \"\n> +\t\t\t\t\t\t\"(SELECT subname AS obj FROM pg_catalog.pg_subscription ORDER BY 1)\",\n> +\t\t\t\t\t\t&globals_before);\n> +\n\nStrictly speaking, the order of this query isn't guaranteed to be \nstable, although in practice it probably is. Maybe something like this:\n\n(SELECT 'role', rolname AS obj FROM pg_catalog.pg_roles\nUNION ALL\nSELECT 'tablespace', spcname AS obj FROM pg_catalog.pg_tablespace\nUNION ALL\nSELECT 'subscription', subname AS obj FROM pg_catalog.pg_subscription\n) ORDER BY 1, 2\n\nIs it OK to leave behind extra databases?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 1 Dec 2023 13:37:02 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 1 Dec 2023, at 12:37, Heikki Linnakangas <[email protected]> wrote:\n> \n> On 01/12/2023 13:22, Daniel Gustafsson wrote:\n>>> On 8 Nov 2023, at 13:32, Alvaro Herrera <[email protected]> wrote:\n>>> I suppose you're just thinking of using PQexec() or whatever, run one\n>>> query with sufficient ORDER BY, save the result, and at the end of the\n>>> test run just run the same query and compare that they are cell-by-cell\n>>> identical? This sounds a lot simpler than the patch you posted.\n>> I found some spare cycles for this and came up with the attached. The idea was\n>> to keep it in-line with how pg_regress already today manipulate and traverse\n>> _stringlists for various things. With the addition of the 0001 patch to clean\n>> up global objects left in test_pg_dump it passes check-world.\n> \n> Do we want to impose this policy to all extensions too?\n\nI don't think it would be bad, and as of today the policy holds for all of\ncheck-world apart from this one test module.\n\n>> +\t/*\n>> +\t * Store the global objects before the test starts such that we can check\n>> +\t * for any objects left behind after the tests finish.\n>> +\t */\n>> +\tquery_to_stringlist(\"postgres\",\n>> +\t\t\t\t\t\t\"(SELECT rolname AS obj FROM pg_catalog.pg_roles ORDER BY 1) \"\n>> +\t\t\t\t\t\t\"UNION ALL \"\n>> +\t\t\t\t\t\t\"(SELECT spcname AS obj FROM pg_catalog.pg_tablespace ORDER BY 1) \"\n>> +\t\t\t\t\t\t\"UNION ALL \"\n>> +\t\t\t\t\t\t\"(SELECT subname AS obj FROM pg_catalog.pg_subscription ORDER BY 1)\",\n>> +\t\t\t\t\t\t&globals_before);\n>> +\n> \n> Strictly speaking, the order of this query isn't guaranteed to be stable, although in practice it probably is. \n\nOf course, will fix. I originally had three separate query_to_stringlist calls\nand had a brainfade when combining. It seemed like pointless use of cycles\nwhen we can get everything in one connection.\n\n> Is it OK to leave behind extra databases?\n\nThe test suite for pg_upgrade can make use of left behind databases to seed the\nold cluster, so I think that's allowed by design.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 1 Dec 2023 12:46:22 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "Isn't it simpler to use DROP OWNED BY in 0001?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Hay que recordar que la existencia en el cosmos, y particularmente la\nelaboración de civilizaciones dentro de él no son, por desgracia,\nnada idílicas\" (Ijon Tichy)\n\n\n",
"msg_date": "Fri, 1 Dec 2023 13:19:12 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 1 Dec 2023, at 13:19, Alvaro Herrera <[email protected]> wrote:\n> \n> Isn't it simpler to use DROP OWNED BY in 0001?\n\nI suppose it is, I kind of like the explicit drops but we do use OWNED BY quite\na lot in the regression tests so changed to that in the attached v5.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 1 Dec 2023 13:52:52 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "On Fri, 1 Dec 2023 at 18:23, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 1 Dec 2023, at 13:19, Alvaro Herrera <[email protected]> wrote:\n> >\n> > Isn't it simpler to use DROP OWNED BY in 0001?\n>\n> I suppose it is, I kind of like the explicit drops but we do use OWNED BY quite\n> a lot in the regression tests so changed to that in the attached v5.\n\nThere are a lot of failures in CFBot at [1] with:\n# test failed\n----------------------------------- stderr -----------------------------------\n# left over global object detected: regress_test_bypassrls\n# 1 of 2 tests failed.\n\nMore details of the same are available at [2].\nDo we need to clean up the objects leftover for the reported issues in the test?\n\n[1] - https://cirrus-ci.com/task/6222185975513088\n[2] - https://api.cirrus-ci.com/v1/artifact/task/6222185975513088/meson_log/build/meson-logs/testlog-running.txt\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 18 Jan 2024 06:27:47 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 18 Jan 2024, at 01:57, vignesh C <[email protected]> wrote:\n\n> There are a lot of failures in CFBot at [1] with:\n\n> More details of the same are available at [2].\n> Do we need to clean up the objects leftover for the reported issues in the test?\n\nNot really, these should not need cleaning up, and it's quite odd that it only\nhappens on FreeBSD. I need to investigate further so I'll mark this waiting on\nauthor in the meantime.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 19 Jan 2024 15:26:13 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "On 19.01.24 15:26, Daniel Gustafsson wrote:\n>> On 18 Jan 2024, at 01:57, vignesh C <[email protected]> wrote:\n> \n>> There are a lot of failures in CFBot at [1] with:\n> \n>> More details of the same are available at [2].\n>> Do we need to clean up the objects leftover for the reported issues in the test?\n> \n> Not really, these should not need cleaning up, and it's quite odd that it only\n> happens on FreeBSD. I need to investigate further so I'll mark this waiting on\n> author in the meantime\n\nMost likely because only the FreeBSD job uses \nENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS.\n\n\n\n",
"msg_date": "Fri, 19 Jan 2024 15:40:21 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "Hi,\n\nOn 2024-01-19 15:40:21 +0100, Peter Eisentraut wrote:\n> On 19.01.24 15:26, Daniel Gustafsson wrote:\n> > > On 18 Jan 2024, at 01:57, vignesh C <[email protected]> wrote:\n> > \n> > > There are a lot of failures in CFBot at [1] with:\n> > \n> > > More details of the same are available at [2].\n> > > Do we need to clean up the objects leftover for the reported issues in the test?\n> > \n> > Not really, these should not need cleaning up, and it's quite odd that it only\n> > happens on FreeBSD. I need to investigate further so I'll mark this waiting on\n> > author in the meantime\n> \n> Most likely because only the FreeBSD job uses\n> ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS.\n\nI don't think it's that, but that the freebsd task tests the installcheck\nequivalent in meson. I haven't checked what your patch is doing, but perhaps\nthe issue is that it's seeing global objects concurrently created by another\ntest?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 25 Mar 2024 11:48:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 25 Mar 2024, at 19:48, Andres Freund <[email protected]> wrote:\n\n> I don't think it's that, but that the freebsd task tests the installcheck\n> equivalent in meson. I haven't checked what your patch is doing, but perhaps\n> the issue is that it's seeing global objects concurrently created by another\n> test?\n\nSorry, I had a look when Peter replied a while back but forgot to update this\nthread. The reason for the failure is that when running multiple pg_regress in\nparallel against a single instance it is impossible to avoid global object\npollution from other tests concurrently executing. Since pg_regress has no\nidea about the contents of the tests it also cannot apply any smarts in\nfiltering out such objects. The CI failures comes from the contrib tests which\nrun in parallel.\n\nThe only option is to make the check opt-in via a command-line parameter for\nuse it in the main regress tests, and not use it for the contrib tests. The\nattached v7 does that and passes CI, but I wonder if it's worth it all with\nthat restriction?\n\nThe 0001 cleanup patch is still relevant (and was found by using this feature)\nthough but that might be all for this thread.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 27 Mar 2024 14:42:02 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> The only option is to make the check opt-in via a command-line parameter for\n> use it in the main regress tests, and not use it for the contrib tests. The\n> attached v7 does that and passes CI, but I wonder if it's worth it all with\n> that restriction?\n\nYeah, that seems hardly worth it --- and it's only an accident of\nimplementation that we don't run the main tests in parallel with\nsomething else, anyway. Too bad, but ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Mar 2024 11:26:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
},
{
"msg_contents": "> On 27 Mar 2024, at 16:26, Tom Lane <[email protected]> wrote:\n> \n> Daniel Gustafsson <[email protected]> writes:\n>> The only option is to make the check opt-in via a command-line parameter for\n>> use it in the main regress tests, and not use it for the contrib tests. The\n>> attached v7 does that and passes CI, but I wonder if it's worth it all with\n>> that restriction?\n> \n> Yeah, that seems hardly worth it --- and it's only an accident of\n> implementation that we don't run the main tests in parallel with\n> something else, anyway. Too bad, but ...\n\nAgreed. The excercise did catch the leftovers in 0001 so I'll go ahead with\nthose before closing the CF entry.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 27 Mar 2024 17:06:17 +0100",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Clean up role created in new subscription test."
}
] |
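To make the snapshot-and-compare idea from the thread above concrete: run one ordered query over the global objects before the suite, keep the raw result, run the same query afterwards, and complain if anything differs. The following libpq sketch is illustrative only (connection string, error handling and memory management are simplified, and pg_regress itself keeps the results in its own _stringlist structures); the query is the ordered form suggested in the thread:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* Global objects, with a forced ordering so two snapshots are comparable. */
    static const char *global_query =
        "(SELECT 'role', rolname FROM pg_catalog.pg_roles "
        " UNION ALL "
        " SELECT 'tablespace', spcname FROM pg_catalog.pg_tablespace "
        " UNION ALL "
        " SELECT 'subscription', subname FROM pg_catalog.pg_subscription) "
        "ORDER BY 1, 2";

    /* Run the query and flatten the result into one newline-separated string. */
    static char *
    snapshot_globals(PGconn *conn)
    {
        PGresult   *res = PQexec(conn, global_query);
        size_t      len = 1;
        char       *buf;

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
            exit(1);
        }

        for (int i = 0; i < PQntuples(res); i++)
            len += strlen(PQgetvalue(res, i, 0)) + strlen(PQgetvalue(res, i, 1)) + 2;

        buf = malloc(len);
        buf[0] = '\0';
        for (int i = 0; i < PQntuples(res); i++)
        {
            strcat(buf, PQgetvalue(res, i, 0));
            strcat(buf, " ");
            strcat(buf, PQgetvalue(res, i, 1));
            strcat(buf, "\n");
        }
        PQclear(res);
        return buf;
    }

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");
        char       *before;
        char       *after;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        before = snapshot_globals(conn);
        /* ... the test schedule would run here ... */
        after = snapshot_globals(conn);

        if (strcmp(before, after) != 0)
            fprintf(stderr, "left over or missing global objects detected\n");

        free(before);
        free(after);
        PQfinish(conn);
        return 0;
    }

As the thread notes, this only works reliably when nothing else runs in parallel against the same instance, which is why the check was ultimately not applied to the contrib suites.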
[
{
"msg_contents": "Hi,\n\nI was working on committing patch 0001 from [1] and was a bit confused about\nsome of the changes to the WAL format for gist and hash index vacuum. It\nlooked to me that the changed code just flat out would not work.\n\nTurns out the problem is that we don't reach deletion for hash and gist\nvacuum:\n\ngist:\n\n> Oh, I see. We apparently don't reach the gist deletion code in the tests:\n> https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#674\n> https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#174\n> \n> And indeed, if I add an abort() into , it's not reached.\n> \n> And it's not because tests use a temp table, the caller is also unreachable:\n> https://coverage.postgresql.org/src/backend/access/gist/gist.c.gcov.html#1643\n\n\nhash:\n> And there also are no tests:\n> https://coverage.postgresql.org/src/backend/access/hash/hashinsert.c.gcov.html#372\n\n\nI've since looked to other index AMs.\n\nspgist's XLOG_SPGIST_VACUUM_ROOT emitted, but not replayed:\nhttps://coverage.postgresql.org/src/backend/access/spgist/spgvacuum.c.gcov.html#474\nhttps://coverage.postgresql.org/src/backend/access/spgist/spgxlog.c.gcov.html#962\n\ngin's XLOG_GIN_VACUUM_DATA_LEAF_PAGE is not emitted, but only because of a\nRelationNeedsWAL() check:\nhttps://coverage.postgresql.org/src/backend/access/gin/gindatapage.c.gcov.html#852\n\n\nI also looked at heapam:\nXLOG_HEAP2_LOCK_UPDATED is not replayed, but emitted:\nhttps://coverage.postgresql.org/src/backend/access/heap/heapam.c.gcov.html#5487\nhttps://coverage.postgresql.org/src/backend/access/heap/heapam.c.gcov.html#9965\n\nsame for XLOG_HEAP2_REWRITE:\nhttps://coverage.postgresql.org/src/backend/access/heap/rewriteheap.c.gcov.html#928\nhttps://coverage.postgresql.org/src/backend/access/heap/heapam.c.gcov.html#9975\n\nand XLOG_HEAP_TRUNCATE (ugh, that record is quite the layering violation):\nhttps://coverage.postgresql.org/src/backend/commands/tablecmds.c.gcov.html#2128\nhttps://coverage.postgresql.org/src/backend/access/heap/heapam.c.gcov.html#9918\n\n\nThe fact that those cases aren't replayed isn't too surprising -\nXLOG_HEAP2_LOCK_UPDATED is exercised by isolationtester, XLOG_HEAP2_REWRITE,\nXLOG_HEAP_TRUNCATE by contrib/test_decoding. Neither is part of\n027_stream_regress.pl\n\n\nThe lack of any coverage of hash and gist deletion/vacuum seems quite\nconcerning to me.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/[email protected]\n\n\n",
"msg_date": "Thu, 30 Mar 2023 22:07:26 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "Hi!\n\nOn Fri, Mar 31, 2023 at 8:07 AM Andres Freund <[email protected]> wrote:\n>\n> I was working on committing patch 0001 from [1] and was a bit confused about\n> some of the changes to the WAL format for gist and hash index vacuum. It\n> looked to me that the changed code just flat out would not work.\n>\n> Turns out the problem is that we don't reach deletion for hash and gist\n> vacuum:\n>\n> gist:\n>\n> > Oh, I see. We apparently don't reach the gist deletion code in the tests:\n> > https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#674\n> > https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#174\n> >\n> > And indeed, if I add an abort() into , it's not reached.\n> >\n> > And it's not because tests use a temp table, the caller is also unreachable:\n> > https://coverage.postgresql.org/src/backend/access/gist/gist.c.gcov.html#1643\n>\n\nGiST logs deletions in gistXLogUpdate(), which is covered.\ngistXLogDelete() is only used for cleaning during page splits. I'd\npropose refactoring GiST WAL to remove gistXLogDelete() and using\ngistXLogUpdate() instead.\nHowever I see that gistXLogPageDelete() is not exercised, and is worth\nfixing IMO. Simply adding 10x more data in gist.sql helps, but I think\nwe can do something more clever...\n\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Fri, 31 Mar 2023 10:45:51 +0300",
"msg_from": "Andrey Borodin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "Andrey Borodin <[email protected]> writes:\n> On Fri, Mar 31, 2023 at 8:07 AM Andres Freund <[email protected]> wrote:\n>> Turns out the problem is that we don't reach deletion for hash and gist\n>> vacuum:\n\n> GiST logs deletions in gistXLogUpdate(), which is covered.\n> gistXLogDelete() is only used for cleaning during page splits. I'd\n> propose refactoring GiST WAL to remove gistXLogDelete() and using\n> gistXLogUpdate() instead.\n> However I see that gistXLogPageDelete() is not exercised, and is worth\n> fixing IMO. Simply adding 10x more data in gist.sql helps, but I think\n> we can do something more clever...\n\nSee also the thread about bug #16329 [1]. Alexander promised to look\ninto improving the test coverage in this area, maybe he can keep an\neye on the WAL logic coverage too.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16329-7a6aa9b6fa1118a1%40postgresql.org\n\n\n",
"msg_date": "Fri, 31 Mar 2023 08:55:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "31.03.2023 15:55, Tom Lane wrote:\n> See also the thread about bug #16329 [1]. Alexander promised to look\n> into improving the test coverage in this area, maybe he can keep an\n> eye on the WAL logic coverage too.\n\nYes, I'm going to analyze that area too. Maybe it'll take more time\n(a week or two) if I encounter some bugs there (for now I observe anomalies\nwith gist__int_ops), but I will definitely try to improve the gist testing.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 31 Mar 2023 17:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-31 10:45:51 +0300, Andrey Borodin wrote:\n> On Fri, Mar 31, 2023 at 8:07 AM Andres Freund <[email protected]> wrote:\n> >\n> > I was working on committing patch 0001 from [1] and was a bit confused about\n> > some of the changes to the WAL format for gist and hash index vacuum. It\n> > looked to me that the changed code just flat out would not work.\n> >\n> > Turns out the problem is that we don't reach deletion for hash and gist\n> > vacuum:\n> >\n> > gist:\n> >\n> > > Oh, I see. We apparently don't reach the gist deletion code in the tests:\n> > > https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#674\n> > > https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#174\n> > >\n> > > And indeed, if I add an abort() into , it's not reached.\n> > >\n> > > And it's not because tests use a temp table, the caller is also unreachable:\n> > > https://coverage.postgresql.org/src/backend/access/gist/gist.c.gcov.html#1643\n> >\n> \n> GiST logs deletions in gistXLogUpdate(), which is covered.\n> gistXLogDelete() is only used for cleaning during page splits.\n\nI am not sure what your point here is - deleting entries to prevent a page\nsplit is deleting entries. What am I missing?\n\n\n> propose refactoring GiST WAL to remove gistXLogDelete() and using\n> gistXLogUpdate() instead.\n\nI think we still would need to have coverage for gistprunepage(), even if\nso. So that seems a separate issue.\n\nI think what gist.sql is missing is a combination of delete, index scan (to\nmark entries dead), new insertions (to trigger pruning to prevent page\nsplits).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 31 Mar 2023 09:15:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-31 17:00:00 +0300, Alexander Lakhin wrote:\n> 31.03.2023 15:55, Tom Lane wrote:\n> > See also the thread about bug #16329 [1]. Alexander promised to look\n> > into improving the test coverage in this area, maybe he can keep an\n> > eye on the WAL logic coverage too.\n> \n> Yes, I'm going to analyze that area too. Maybe it'll take more time\n> (a week or two) if I encounter some bugs there (for now I observe anomalies\n> with gist__int_ops), but I will definitely try to improve the gist testing.\n\nBecause I needed it to verify the changes in the referenced patch, I wrote\ntests exercising killtuples based pruning for gist and hash.\n\nFor the former I first wrote it in contrib/btree_gist. But that would mean the\nrecovery side isn't exercised via 027_stream_regress.sql. So I rewrote it to\nuse point instead, which is a tad more awkward, but...\n\nFor now I left the new tests in their own files. But possibly they should be\nin gist.sql and hash_index.sql respectively?\n\n\nAs I also wrote at the top of the tests, we can't easily verify that\nkilltuples pruning has actually happened, nor guarantee that concurrent\nactivity doesn't prevent it (e.g. autovacuum having a snapshot open or\nsuch). At least not without loosing coverage of WAL logging / replay. To make\nit more likely I added them as their own test group.\n\n\nI don't know if we want the tests in this form, but I do find them useful for\nnow.\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 31 Mar 2023 16:13:00 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "Hi,\n\nOn 4/1/23 1:13 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-03-31 17:00:00 +0300, Alexander Lakhin wrote:\n>> 31.03.2023 15:55, Tom Lane wrote:\n>>> See also the thread about bug #16329 [1]. Alexander promised to look\n>>> into improving the test coverage in this area, maybe he can keep an\n>>> eye on the WAL logic coverage too.\n>>\n>> Yes, I'm going to analyze that area too. Maybe it'll take more time\n>> (a week or two) if I encounter some bugs there (for now I observe anomalies\n>> with gist__int_ops), but I will definitely try to improve the gist testing.\n> \n> Because I needed it to verify the changes in the referenced patch, I wrote\n> tests exercising killtuples based pruning for gist and hash.\n> \n\nThanks for the patch!\n\nI did not looked at the detail but \"just\" checked that the coverage is now done.\n\nAnd Indeed, when running \"make check\" + \"027_stream_regress.pl\":\n\nI can see it moving from (without the patch):\n\nfunction gistXLogDelete called 0 returned 0% blocks executed 0%\nfunction gistRedoDeleteRecord called 0 returned 0% blocks executed 0%\nfunction gistprunepage called 0 returned 0% blocks executed 0%\nfunction _hash_vacuum_one_page called 0 returned 0% blocks executed 0%\n\nto (with the patch):\n\nfunction gistXLogDelete called 9 returned 100% blocks executed 100%\nfunction gistRedoDeleteRecord called 5 returned 100% blocks executed 100% (thanks to 027_stream_regress.pl)\nfunction gistprunepage called 9 returned 100% blocks executed 79%\nfunction _hash_vacuum_one_page called 12 returned 100% blocks executed 94%\n\n> For now I left the new tests in their own files. But possibly they should be\n> in gist.sql and hash_index.sql respectively?\n\n+1 to put them in gist.sql and hash_index.sql.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 1 Apr 2023 06:02:47 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-01 06:02:47 +0200, Drouvot, Bertrand wrote:\n> On 4/1/23 1:13 AM, Andres Freund wrote:\n> > On 2023-03-31 17:00:00 +0300, Alexander Lakhin wrote:\n> > > 31.03.2023 15:55, Tom Lane wrote:\n> > > > See also the thread about bug #16329 [1]. Alexander promised to look\n> > > > into improving the test coverage in this area, maybe he can keep an\n> > > > eye on the WAL logic coverage too.\n> > > \n> > > Yes, I'm going to analyze that area too. Maybe it'll take more time\n> > > (a week or two) if I encounter some bugs there (for now I observe anomalies\n> > > with gist__int_ops), but I will definitely try to improve the gist testing.\n> > \n> > Because I needed it to verify the changes in the referenced patch, I wrote\n> > tests exercising killtuples based pruning for gist and hash.\n> > \n> \n> Thanks for the patch!\n> \n> I did not looked at the detail but \"just\" checked that the coverage is now done.\n> \n> And Indeed, when running \"make check\" + \"027_stream_regress.pl\":\n> \n> I can see it moving from (without the patch):\n> \n> function gistXLogDelete called 0 returned 0% blocks executed 0%\n> function gistRedoDeleteRecord called 0 returned 0% blocks executed 0%\n> function gistprunepage called 0 returned 0% blocks executed 0%\n> function _hash_vacuum_one_page called 0 returned 0% blocks executed 0%\n> \n> to (with the patch):\n> \n> function gistXLogDelete called 9 returned 100% blocks executed 100%\n> function gistRedoDeleteRecord called 5 returned 100% blocks executed 100% (thanks to 027_stream_regress.pl)\n> function gistprunepage called 9 returned 100% blocks executed 79%\n> function _hash_vacuum_one_page called 12 returned 100% blocks executed 94%\n> \n> > For now I left the new tests in their own files. But possibly they should be\n> > in gist.sql and hash_index.sql respectively?\n> \n> +1 to put them in gist.sql and hash_index.sql.\n\nUnfortunately it turns out that running them in a parallel group reliably\nprevents cleanup of the dead rows, at least on my machine. Thereby preventing\nany increase in coverage. As they need to run serially, I think it makes more\nsense to keep the tests focussed and leave gist.sql and hash_index.sql to run\nin parallel.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Apr 2023 09:24:34 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-04-01 06:02:47 +0200, Drouvot, Bertrand wrote:\n>> +1 to put them in gist.sql and hash_index.sql.\n\n> Unfortunately it turns out that running them in a parallel group reliably\n> prevents cleanup of the dead rows, at least on my machine. Thereby preventing\n> any increase in coverage. As they need to run serially, I think it makes more\n> sense to keep the tests focussed and leave gist.sql and hash_index.sql to run\n> in parallel.\n\nIf they have to run serially then that means that their runtime\ncontributes 1-for-1 to the total runtime of the core regression tests,\nwhich is not nice at all. Can we move them to some other portion\nof our test suite, preferably one that's not repeated four or more\ntimes in each buildfarm run?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Apr 2023 12:38:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-02 12:38:32 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-04-01 06:02:47 +0200, Drouvot, Bertrand wrote:\n> >> +1 to put them in gist.sql and hash_index.sql.\n> \n> > Unfortunately it turns out that running them in a parallel group reliably\n> > prevents cleanup of the dead rows, at least on my machine. Thereby preventing\n> > any increase in coverage. As they need to run serially, I think it makes more\n> > sense to keep the tests focussed and leave gist.sql and hash_index.sql to run\n> > in parallel.\n> \n> If they have to run serially then that means that their runtime\n> contributes 1-for-1 to the total runtime of the core regression tests,\n> which is not nice at all.\n\nAgreed, it's not nice. At least reasonably quick (74ms and 54ms on one run\nhere)...\n\n\n> Can we move them to some other portion of our test suite, preferably one\n> that's not repeated four or more times in each buildfarm run?\n\nNot trivially, at least. Right now 027_stream_regress.pl doesn't run other\ntests, so we'd not cover the replay portion if moved the tests to\ncontrib/btree_gist or such.\n\nI wonder if we should have a test infrastructure function for waiting for the\nvisibility horizon to have passed a certain point. I think that might be\nuseful for making a few other tests robust...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Apr 2023 09:54:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-04-02 12:38:32 -0400, Tom Lane wrote:\n>> If they have to run serially then that means that their runtime\n>> contributes 1-for-1 to the total runtime of the core regression tests,\n>> which is not nice at all.\n\n> Agreed, it's not nice. At least reasonably quick (74ms and 54ms on one run\n> here)...\n\nOh, that's less bad than I expected. The discussion in the other thread\nwas pointing in the direction of needing hundreds of ms to make indexes\nthat are big enough to hit all the code paths.\n\n>> Can we move them to some other portion of our test suite, preferably one\n>> that's not repeated four or more times in each buildfarm run?\n\n> Not trivially, at least. Right now 027_stream_regress.pl doesn't run other\n> tests, so we'd not cover the replay portion if moved the tests to\n> contrib/btree_gist or such.\n\nYeah, I was imagining teaching 027_stream_regress.pl to run additional\nscripts that aren't in the core tests. (I'm still quite unhappy that\n027_stream_regress.pl uses the core tests at all, really, as they were\nnever particularly designed to cover what it cares about. The whole\nthing is extremely inefficient and it's no surprise that its coverage\nis scattershot.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Apr 2023 13:03:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-02 13:03:51 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-04-02 12:38:32 -0400, Tom Lane wrote:\n> >> If they have to run serially then that means that their runtime\n> >> contributes 1-for-1 to the total runtime of the core regression tests,\n> >> which is not nice at all.\n>\n> > Agreed, it's not nice. At least reasonably quick (74ms and 54ms on one run\n> > here)...\n>\n> Oh, that's less bad than I expected. The discussion in the other thread\n> was pointing in the direction of needing hundreds of ms to make indexes\n> that are big enough to hit all the code paths.\n\nWell, the tests here really just try to hit the killtuples path, not some of\nthe paths discussed in [1]. It needs just enough index entries to encounter a\npage split (which then is avoided by pruning tuples).\n\nLooks like the test in [1] could be made a lot cheaper by changing effective_cache_size\nfor just that test:\n\n\t/*\n\t * In 'auto' mode, check if the index has grown too large to fit in cache,\n\t * and switch to buffering mode if it has.\n\t *\n\t * To avoid excessive calls to smgrnblocks(), only check this every\n\t * BUFFERING_MODE_SWITCH_CHECK_STEP index tuples.\n\t *\n\t * In 'stats' state, switch as soon as we have seen enough tuples to have\n\t * some idea of the average tuple size.\n\t */\n\tif ((buildstate->buildMode == GIST_BUFFERING_AUTO &&\n\t\t buildstate->indtuples % BUFFERING_MODE_SWITCH_CHECK_STEP == 0 &&\n\t\t effective_cache_size < smgrnblocks(RelationGetSmgr(index),\n\t\t\t\t\t\t\t\t\t\t\tMAIN_FORKNUM)) ||\n\t\t(buildstate->buildMode == GIST_BUFFERING_STATS &&\n\t\t buildstate->indtuples >= BUFFERING_MODE_TUPLE_SIZE_STATS_TARGET))\n\t{\n\t\t/*\n\t\t * Index doesn't fit in effective cache anymore. Try to switch to\n\t\t * buffering build mode.\n\t\t */\n\t\tgistInitBuffering(buildstate);\n\t}\n\n\n\n> >> Can we move them to some other portion of our test suite, preferably one\n> >> that's not repeated four or more times in each buildfarm run?\n>\n> > Not trivially, at least. Right now 027_stream_regress.pl doesn't run other\n> > tests, so we'd not cover the replay portion if moved the tests to\n> > contrib/btree_gist or such.\n>\n> Yeah, I was imagining teaching 027_stream_regress.pl to run additional\n> scripts that aren't in the core tests.\n\nNot opposed to that, but I'm not quite sure about what we'd use as\ninfrastructure. A second schedule?\n\nI think the tests patches I am proposing here are quite valuable to run\nwithout replication involved as well, the killtuples path isn't trivial, so\nhaving it be covered by the normal regression tests makes sense to me.\n\n\n> (I'm still quite unhappy that 027_stream_regress.pl uses the core tests at\n> all, really, as they were never particularly designed to cover what it cares\n> about. The whole thing is extremely inefficient and it's no surprise that\n> its coverage is scattershot.)\n\nI don't think anybody would claim it's great as-is. But I still think that\nhaving a meaningful coverage of replay is a heck of a lot better than not\nhaving any, even if it's not a pretty or all that fast design. 
And the fact\nthat 027_stream_regress.pl runs with a small shared_buffers actually shook out\na few bugs...\n\nI don't think we'd want to use a completely separate set of tests for\n027_stream_regress.pl, typical tests will provide coverage on both the primary\nand the standby, I think, and would just end up being duplicated between the\nmain regression test and something specific for 027_stream_regress.pl. But I\ncould imagine that it's worth maintaining a distinct version of\nparallel_schedule that removes a tests that aren't likely to provide benenfits\nfor 027_stream_regress.pl.\n\n\nBtw, from what I can tell, the main bottleneck for the main regression test\nright now is the granular use of groups. Because the parallel groups have\nfixed size limit, there's a stall waiting for the slowest test at the end of\neach group. If we instead limited group concurrency solely in pg_regress,\ninstead of the schedule, a quick experiment suggest we could get a good bit of\nspeedup. And remove some indecision paralysis about which group to add a new\ntest to, as well as removing the annoyance of having to count the number of\ntests in a group manually.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/flat/16329-7a6aa9b6fa1118a1%40postgresql.org\n\n\n",
"msg_date": "Sun, 2 Apr 2023 10:50:21 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
},
{
"msg_contents": "Hi,\n\n31.03.2023 17:00, Alexander Lakhin wrote:\n> 31.03.2023 15:55, Tom Lane wrote:\n>> See also the thread about bug #16329 [1]. Alexander promised to look\n>> into improving the test coverage in this area, maybe he can keep an\n>> eye on the WAL logic coverage too.\n>\n> Yes, I'm going to analyze that area too. Maybe it'll take more time\n> (a week or two) if I encounter some bugs there (for now I observe anomalies\n> with gist__int_ops), but I will definitely try to improve the gist testing.\n\nAfter 2+ weeks of researching I'd like to summarize my findings.\n1) The checking query proposed in [1] could be improved by adding\nthe restriction \"tgk.v = brute.v\" to the condition:\nWHERE tgk.k >> point(brute.min - 1, 0) AND tgk.k << point(brute.max + 1, 0)\nOtherwise that query gives a false positive after\ninsert into test_gist_killtuples values(point(505, 0));\n\nThe similar improved condition could be placed in hash_index_killtuples.sql.\n\nYet another improvement for the checking query could be done with the\nreplacement:\nmin(k <-> point(0, 0)), max(k <-> point(0, 0)) ->\nmin(k <-> point(0, k[1])), max(p <-> point(0, k[1])) ...\n\nIt doesn't change the query plan dramatically, but the query becomes more\nuniversal (it would work for points with any non-negative integer x).\n\n2) I've checked clang`s scan-build notices related to gist as I planned [2],\nnamely:\nLogic error Branch condition evaluates to a garbage value src/backend/access/gist/gistutil.c gistCompressValues 606\nLogic error Dereference of null pointer src/backend/access/gist/gist.c gistFindCorrectParent 1099\nLogic error Dereference of null pointer src/backend/access/gist/gist.c gistdoinsert 671\nLogic error Dereference of null pointer src/backend/access/gist/gist.c gistfinishsplit 1339\nLogic error Dereference of null pointer src/backend/access/gist/gist.c gistplacetopage 340\nLogic error Dereference of null pointer src/backend/access/gist/gistbuildbuffers.c gistPushItupToNodeBuffer 366\nLogic error Result of operation is garbage or undefined src/backend/access/gist/gistbuildbuffers.c \ngistRelocateBuildBuffersOnSplit 677\nLogic error Result of operation is garbage or undefined src/backend/access/gist/gistutil.c gistchoose 463\nUnused code Dead assignment src/backend/access/gist/gist.c gistdoinsert 843\n\nAnd found that all of them (except for the last one, that doesn't worth\nfixing, IMO) are false positives (I can present detailed explanations if it\ncould be of interest.) So I see no grounds here to build new tests on.\n\n3) To date I found other anomalies more or less related to gist:\nfillfactor is ignored for sorted index build mode, which is effectively default now [3]\namcheck verification for gist is not yet ready to use [4] (and the latest patch doesn't apply to the current HEAD)\nbug #17888: Incorrect memory access in gist__int_ops for an input array with many elements [5]\n\n4) I've constructed some tests, that provide full coverage for\ngistFindCorrectParent(), reach for \"very rare situation\", and for\ngistfixsplit(), but all these tests execute concurrent queries, so they\ncan't be implemented as simple regression tests. Moreover, I could not find\nany explicit problems when reaching those places (I used the checking query\nfrom [1] in absence of other means to check gist indexes), so I see no value\nin developing (not to speak of committing) these tests for now. 
I'm going to\nfurther explore the gist behavior in those dark corners, but it looks like\na long-term task, so I think it shouldn't delay the gist coverage improvement\nalready proposed.\n\n5)\n02.04.2023 20:50, Andres Freund wrote:\n> Looks like the test in [1] could be made a lot cheaper by changing effective_cache_size\n> for just that test:\nThe effective_cache_size is accounted only when buffering = auto, but in\nthat test we have buffering = on, so changing it wouldn't help there.\n\nWhile looking at gist-related tests, I've noticed an incorrect comment\nin index_including_gist.sql:\n * 1.1. test CREATE INDEX with buffered build\n\nIt's incorrect exactly because with the default effective_cache_size the\nbuffered build mode is not enabled for that index size (I've confirmed\nthis with the elog(LOG,..) placed inside gistInitBuffering()).\n\nSo I'd like to propose the patch attached, that:\na) demonstrates the bug #16329:\nWith 8e5eef50c reverted, I get:\n**00:00:00:11.179 1587838** Valgrind detected 1 error(s) during execution of \"CREATE INDEX tbl_gist_idx ON tbl_gist \nusing gist (c4) INCLUDE (c1,c2,c3) WITH (buffering = on);\"\nb) makes the comment in index_including_gist.sql correct\nc) increases a visible test coverage a little, in particular:\n Function 'gistBuffersReleaseBlock'\n-Lines executed:66.67% of 9\n+Lines executed:100.00% of 9\nd) doesn't increase the test duration significantly:\nwithout valgrind I see difference: 84 ms -> 93 ms, under vagrind: 13513 ms -> 14511 ms\n\nThus, I think, it's worth to split the activity related to gist testing\nimprovement to finalizing/accepting the already-emerging patches and to\nbackground research/anomaly findings, which could inspire further\nenhancements in this area.\n\n[1] https://www.postgresql.org/message-id/20230331231300.4kkrl44usvy2pmkv%40awork3.anarazel.de\n[2] https://www.postgresql.org/message-id/cad7055f-0d76-cc31-71d5-f8b600ebb116%40gmail.com\n[3] https://www.postgresql.org/message-id/fbbfe5dc-3dfa-d54a-3a94-e2bee37b85d8%40gmail.com\n[4] https://www.postgresql.org/message-id/885cfb61-26e9-e7c1-49a8-02b3fb12b497%40gmail.com\n[5] https://www.postgresql.org/message-id/[email protected]\n\nBest regards,\nAlexander",
"msg_date": "Mon, 17 Apr 2023 11:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regression coverage gaps for gist and hash indexes"
}
] |
[
{
"msg_contents": "Hi, hackers\n\nWhen I exec a sql SELECT INTO without columns or * by mistake, it succeeds:\n\nselect * from t1;\n a | b\n---+---\n 1 | 2\n 2 | 3\n 3 | 4\n(3 rows)\n\nselect into t2 from t1;\nSELECT 3\n\n \\pset null '(null)'\nNull display is \"(null)\".\n\nselect * from t2;\n--\n(3 rows)\n\nIt seems that t2 has empty rows but not null. Is it an expected behavior?\nAnd what’s the semantic of SELECT INTO without any columns?\nI also see lots of that SELECT INTO in out test cases like:\n-- SELECT INTO doesn't support USING\nSELECT INTO tableam_tblselectinto_heap2 USING heap2 FROM tableam_tbl_heap2;\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHi, hackers\n\nWhen I exec a sql SELECT INTO without columns or * by mistake, it succeeds:\n\nselect * from t1;\n a | b\n---+---\n 1 | 2\n 2 | 3\n 3 | 4\n(3 rows)\n\nselect into t2 from t1;\nSELECT 3\n\n \\pset null '(null)'\nNull display is \"(null)\".\n\nselect * from t2;\n--\n(3 rows)\n\nIt seems that t2 has empty rows but not null. Is it an expected behavior?\nAnd what’s the semantic of SELECT INTO without any columns?\nI also see lots of that SELECT INTO in out test cases like:\n-- SELECT INTO doesn't support USING\nSELECT INTO tableam_tblselectinto_heap2 USING heap2 FROM tableam_tbl_heap2;\n\n\nRegards,\nZhang Mingli",
"msg_date": "Fri, 31 Mar 2023 23:09:45 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT INTO without columns or star"
},
{
"msg_contents": "On Fri, Mar 31, 2023 at 8:10 AM Zhang Mingli <[email protected]> wrote:\n\n> When I exec a sql SELECT INTO without columns or * by mistake, it succeeds:\n>\n>\nYes, a table may have zero columns by design.\n\nDavid J.\n\nOn Fri, Mar 31, 2023 at 8:10 AM Zhang Mingli <[email protected]> wrote:\n\n\nWhen I exec a sql SELECT INTO without columns or * by mistake, it succeeds:Yes, a table may have zero columns by design.David J.",
"msg_date": "Fri, 31 Mar 2023 08:13:28 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO without columns or star"
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Fri, Mar 31, 2023 at 8:10 AM Zhang Mingli <[email protected]> wrote:\n>> When I exec a sql SELECT INTO without columns or * by mistake, it succeeds:\n\n> Yes, a table may have zero columns by design.\n\nYup, we've allowed that for some time now; see the compatibility comments\nat the bottom of the SELECT man page.\n\npsql's display of zero-column results is a bit weird, which maybe\nsomebody should fix sometime:\n\nregression=# select from generate_series(1,4);\n--\n(4 rows)\n\nI'd expect four blank lines there. Expanded format is even less sane:\n\nregression=# \\x\nExpanded display is on.\nregression=# select from generate_series(1,4);\n(4 rows)\n\nISTM that should produce\n\n[ RECORD 1 ]\n[ RECORD 2 ]\n[ RECORD 3 ]\n[ RECORD 4 ]\n\nand no \"(4 rows)\" footer, because \\x mode doesn't normally print that.\n\nThis is all just cosmetic of course, but it's still confusing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 31 Mar 2023 11:26:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO without columns or star"
},
{
"msg_contents": "On Fri, Mar 31, 2023 at 11:26 AM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Fri, Mar 31, 2023 at 8:10 AM Zhang Mingli <[email protected]>\n> wrote:\n> >> When I exec a sql SELECT INTO without columns or * by mistake, it\n> succeeds:\n>\n> > Yes, a table may have zero columns by design.\n>\n> Yup, we've allowed that for some time now; see the compatibility comments\n> at the bottom of the SELECT man page.\n>\n> psql's display of zero-column results is a bit weird, which maybe\n> somebody should fix sometime:\n>\n> regression=# select from generate_series(1,4);\n> --\n> (4 rows)\n>\n> I'd expect four blank lines there. Expanded format is even less sane:\n>\n> regression=# \\x\n> Expanded display is on.\n> regression=# select from generate_series(1,4);\n> (4 rows)\n>\n> ISTM that should produce\n>\n> [ RECORD 1 ]\n> [ RECORD 2 ]\n> [ RECORD 3 ]\n> [ RECORD 4 ]\n>\n> and no \"(4 rows)\" footer, because \\x mode doesn't normally print that.\n>\n> This is all just cosmetic of course, but it's still confusing.\n>\n> regards, tom lane\n>\n\nTom,\n I wouldn't mind working on a patch to fix this... (Especially if it helps\nthe %T get into PSQL<grin>).\nI find this output confusing as well.\n\n Should I start a new email thread: Proposal: Fix psql output when\nselecting no columns\nAnd get the discussion moving. I'd like to get a clear answer on what to\noutput. But I have\nbecome more comfortable with PSQL due to messing with readline for windows,\nand 2-3 other patches\nI've been working on.\n\nThanks, Kirk\n\nOn Fri, Mar 31, 2023 at 11:26 AM Tom Lane <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> On Fri, Mar 31, 2023 at 8:10 AM Zhang Mingli <[email protected]> wrote:\n>> When I exec a sql SELECT INTO without columns or * by mistake, it succeeds:\n\n> Yes, a table may have zero columns by design.\n\nYup, we've allowed that for some time now; see the compatibility comments\nat the bottom of the SELECT man page.\n\npsql's display of zero-column results is a bit weird, which maybe\nsomebody should fix sometime:\n\nregression=# select from generate_series(1,4);\n--\n(4 rows)\n\nI'd expect four blank lines there. Expanded format is even less sane:\n\nregression=# \\x\nExpanded display is on.\nregression=# select from generate_series(1,4);\n(4 rows)\n\nISTM that should produce\n\n[ RECORD 1 ]\n[ RECORD 2 ]\n[ RECORD 3 ]\n[ RECORD 4 ]\n\nand no \"(4 rows)\" footer, because \\x mode doesn't normally print that.\n\nThis is all just cosmetic of course, but it's still confusing.\n\n regards, tom laneTom, I wouldn't mind working on a patch to fix this... (Especially if it helps the %T get into PSQL<grin>).I find this output confusing as well. Should I start a new email thread: Proposal: Fix psql output when selecting no columnsAnd get the discussion moving. I'd like to get a clear answer on what to output. But I havebecome more comfortable with PSQL due to messing with readline for windows, and 2-3 other patchesI've been working on.Thanks, Kirk",
"msg_date": "Fri, 31 Mar 2023 13:20:38 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT INTO without columns or star"
}
] |
[
{
"msg_contents": "Hi,\n\nI think that commit f0d65c0\n<https://github.com/postgres/postgres/commit/f0d65c0eaf05d6acd3ae05cde4a31465eb3992b2>\nhas an oversight.\n\nAttnum == 0, is system column too, right?\n\nAll other places at tablecmds.c, has this test:\n\nif (attnum <= 0)\n ereport(ERROR,\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 31 Mar 2023 13:45:19 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BUG #17877: Referencing a system column in a foreign key leads to\n incorrect memory access"
},
{
"msg_contents": "Ranier Vilela <[email protected]> writes:\n> I think that commit f0d65c0\n> <https://github.com/postgres/postgres/commit/f0d65c0eaf05d6acd3ae05cde4a31465eb3992b2>\n> has an oversight.\n> Attnum == 0, is system column too, right?\n\nNo, it's not valid in pg_attribute rows.\n\n> All other places at tablecmds.c, has this test:\n> if (attnum <= 0)\n> ereport(ERROR,\n\nI was actually copying this code in indexcmds.c:\n\n if (attno < 0)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"index creation on system columns is not supported\")));\n\nThere's really no reason to prefer one over the other in this context.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 31 Mar 2023 15:25:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17877: Referencing a system column in a foreign key leads to\n incorrect memory access"
},
{
"msg_contents": "Em sex., 31 de mar. de 2023 às 16:25, Tom Lane <[email protected]> escreveu:\n\n> Ranier Vilela <[email protected]> writes:\n> > I think that commit f0d65c0\n> > <\n> https://github.com/postgres/postgres/commit/f0d65c0eaf05d6acd3ae05cde4a31465eb3992b2\n> >\n> > has an oversight.\n> > Attnum == 0, is system column too, right?\n>\n> No, it's not valid in pg_attribute rows.\n>\n> > All other places at tablecmds.c, has this test:\n> > if (attnum <= 0)\n> > ereport(ERROR,\n>\n> I was actually copying this code in indexcmds.c:\n>\n> if (attno < 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"index creation on system columns is not\n> supported\")));\n>\n> There's really no reason to prefer one over the other in this context.\n>\nI think the documentation is a bit confusing.\nAccording to the current documentation:\n/*\n* attnum is the \"attribute number\" for the attribute: A value that\n* uniquely identifies this attribute within its class. for user\n* attributes, Attribute numbers are greater than 0 and not greater than\n* the number of attributes in the class. i.e. if the Class pg_class says\n* that Class XYZ has 10 attributes, then the user attribute numbers in\n* Class pg_attribute must be 1-10.\n*\n* System attributes have attribute numbers less than 0 that are unique\n* within the class, but not constrained to any particular range.\n*\n* Note that (attnum - 1) is often used as the index to an array.\nAttributes equal to zero are in limbo.\n\nIMO should be:\n* System attributes have attribute numbers less or equal to 0 that are\n* unique\n* within the class, but not constrained to any particular range.\n\nregards,\nRanier Vilela\n\nEm sex., 31 de mar. de 2023 às 16:25, Tom Lane <[email protected]> escreveu:Ranier Vilela <[email protected]> writes:\n> I think that commit f0d65c0\n> <https://github.com/postgres/postgres/commit/f0d65c0eaf05d6acd3ae05cde4a31465eb3992b2>\n> has an oversight.\n> Attnum == 0, is system column too, right?\n\nNo, it's not valid in pg_attribute rows.\n\n> All other places at tablecmds.c, has this test:\n> if (attnum <= 0)\n> ereport(ERROR,\n\nI was actually copying this code in indexcmds.c:\n\n if (attno < 0)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"index creation on system columns is not supported\")));\n\nThere's really no reason to prefer one over the other in this context.I think the documentation is a bit confusing.According to the current documentation:/** attnum is the \"attribute number\" for the attribute: A value that* uniquely identifies this attribute within its class. for user* attributes, Attribute numbers are greater than 0 and not greater than* the number of attributes in the class. i.e. if the Class pg_class says* that Class XYZ has 10 attributes, then the user attribute numbers in* Class pg_attribute must be 1-10.** System attributes have attribute numbers less than 0 that are unique* within the class, but not constrained to any particular range.** Note that (attnum - 1) is often used as the index to an array.Attributes equal to zero are in limbo.IMO should be:* System attributes have attribute numbers less or equal to 0 that are* unique* within the class, but not constrained to any particular range.regards,Ranier Vilela",
"msg_date": "Sat, 1 Apr 2023 10:56:08 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BUG #17877: Referencing a system column in a foreign key leads to\n incorrect memory access"
},
{
"msg_contents": "At Sat, 1 Apr 2023 10:56:08 -0300, Ranier Vilela <[email protected]> wrote in \n> IMO should be:\n> * System attributes have attribute numbers less or equal to 0 that are\n> * unique\n> * within the class, but not constrained to any particular range.\n\nAttnum == 0 is invalid and doesn't belong to either user columns or\nsystem columns. You're actually right that it's in limbo, but I\nbelieve the change you suggested actually makes the correct comment\nincorrect. In the condition you're asking about, I don't think we\nreally need to worry about an impossible case. If I wanted to pay more\nattenstion to it, I would use an assertion, but I don't think it's\nnecessary here.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 03 Apr 2023 15:31:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17877: Referencing a system column in a foreign key leads\n to incorrect memory access"
}
] |
[
{
"msg_contents": "Hello,\n\nPatch attached below. TLDR, I'd like to add \"host\" to the startup packet.\n\nI'm trying to run multiple Postgres servers in a multi-tenant environment\nbehind a pooler <https://github.com/postgresml/pgcat>. Currently, the only\nway to differentiate Postgres databases is with the user/dbname combination\nwhich are very often included in the startup packet by the client. However,\nthat's not sufficient if you have users that all want to have the user\n\"admin\" and the database \"production\" :)\n\nHTTP hosting providers solved this using the \"Host\" header, allowing the\nserver to identify which website the client wants. In the case of Postgres,\nthis is the DNS or IP address, depending on the client configuration.\n\nUpon receiving a startup packet with user, dbname, and host, the pooler\n(acting also as a proxy) can validate that the credentials exist for the\nhost and that they are valid, and authorize or decline the connection.\n\nI have never submitted a patch for Postgres before, so I'm not entirely\nsure how to test this change, although it seems pretty harmless. Any\nfeedback and help are appreciated!\n\nThank you!\n\nBest,\nLev",
"msg_date": "Fri, 31 Mar 2023 17:23:28 -0700",
"msg_from": "Lev Kokotov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add \"host\" to startup packet"
},
{
"msg_contents": "Hi Lev,02.04.2023, 14:43, \"Lev Kokotov\" <[email protected]>:Patch attached below. TLDR, I'd like to add \"host\" to the startup packet.I'm trying to run multiple Postgres servers in a multi-tenant environment behind a pooler. Currently, the only way to differentiate Postgres databases is with the user/dbname combination which are very often included in the startup packet by the client. However, that's not sufficient if you have users that all want to have the user \"admin\" and the database \"production\" :)HTTP hosting providers solved this using the \"Host\" header, allowing the server to identify which website the client wants. In the case of Postgres, this is the DNS or IP address, depending on the client configuration.Upon receiving a startup packet with user, dbname, and host, the pooler (acting also as a proxy) can validate that the credentials exist for the host and that they are valid, and authorize or decline the connection.I like the idea of giving proxy information on database tenant to which client wants to connect. However, name “host” in web is chosen as part of URL specification. I’m not sure it applies here.And it is not clear from your description how server should interpret this information.I have never submitted a patch for Postgres before, so I'm not entirely sure how to test this change, although it seems pretty harmless. Any feedback and help are appreciated!Well, at minimum from the patch should be clear what purpose new feature has, some documentation and test must be included. You can judge from recently committed libpq load balancing what it takes to add a connection option [0]. But, obviously, it makes sense to discuss it before going all the way of implementation.Best regards, Andrey Borodin.[0] https://github.com/postgres/postgres/commit/7f5b1981",
"msg_date": "Sun, 02 Apr 2023 20:28:54 +0500",
"msg_from": "Andrey Borodin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add \"host\" to startup packet"
},
{
"msg_contents": "Lev Kokotov <[email protected]> writes:\n> Patch attached below. TLDR, I'd like to add \"host\" to the startup packet.\n\nI don't think this is of any use at all in isolation. What is the server\ngoing to do with it? What's your plan for persuading clients other than\nlibpq to supply it? How are poolers supposed to handle it? What will\nyou do about old clients that don't supply it? And most importantly,\nhow can a client know while connecting whether it's safe to include this,\nrealizing that existing servers will error out (they'll think it's a\nGUC setting for \"host\")?\n\nEven if all that infrastructure sprang into existence, is this really any\nmore useful than basing your switching on the host's resolved IP address?\nI'm doubtful that there's enough win there to justify pushing this rock\nto the top of the mountain.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Apr 2023 11:38:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add \"host\" to startup packet"
},
{
"msg_contents": "On Sun, 2 Apr 2023 at 11:38, Tom Lane <[email protected]> wrote:\n>\n> Even if all that infrastructure sprang into existence, is this really any\n> more useful than basing your switching on the host's resolved IP address?\n> I'm doubtful that there's enough win there to justify pushing this rock\n> to the top of the mountain.\n\nHm. I think it's going to turn out to be useful. Experience shows\ndepending on the ip address often paints people into corners. However\nI agree that we need to actually have a real use case in hand where\nsomeone is going to actually do something with it.\n\nMy question is a bit different. How does this interact with TLS SNI.\nCan you just use the SNI name given in the TLS handshake? Should the\nserver require them to match? Is there any value to having a separate\nsource for this info? Is something similar available in GSSAPI\nauthentication?\n\n-- \ngreg\n\n\n",
"msg_date": "Sun, 2 Apr 2023 12:21:41 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add \"host\" to startup packet"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> My question is a bit different. How does this interact with TLS SNI.\n> Can you just use the SNI name given in the TLS handshake? Should the\n> server require them to match? Is there any value to having a separate\n> source for this info? Is something similar available in GSSAPI\n> authentication?\n\nThe idea that I was thinking about was to not hard-wire sending the host\nstring exactly, but instead to invent another connection parameter along\nthe line of \"send_host = name-to-send\". This parallels the situation in\nHTTP where the \"Host\" header doesn't necessarily have to match the actual\ntransport target. I can think of a couple of benefits:\n\n* Avoid breaking backward compatibility with old servers: if user doesn't\nadd this option then nothing extra is sent.\n\n* Separating the send_host name would simplify testing scenarios.\n\nSeems like it would depend a lot on your use-case whether you care about\nthe send_host name matching anything that's authenticated. If you do,\nthere's a whole lot more infrastructure to build out around pg_hba.conf.\nRight now that file is designed on the assumption that it describes\nauthentication rules for a single \"host\", but we'd need to generalize\nit to describe rules for multiple host values.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Apr 2023 12:33:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add \"host\" to startup packet"
},
{
"msg_contents": "> On 2 Apr 2023, at 18:33, Tom Lane <[email protected]> wrote:\n> \n> Greg Stark <[email protected]> writes:\n>> My question is a bit different. How does this interact with TLS SNI.\n>> Can you just use the SNI name given in the TLS handshake? Should the\n>> server require them to match? Is there any value to having a separate\n>> source for this info? Is something similar available in GSSAPI\n>> authentication?\n> \n> The idea that I was thinking about was to not hard-wire sending the host\n> string exactly, but instead to invent another connection parameter along\n> the line of \"send_host = name-to-send\". This parallels the situation in\n> HTTP where the \"Host\" header doesn't necessarily have to match the actual\n> transport target.\n\nSince we already have sslsni in libpq since v14, any SNI being well understood\nand standardized, do we really want to invent our own parallel scheme?\nAlternatively, the protocol in the.PROXY patch by Magnus [0] which stalled a\nfew CF's ago has similar functionality for the client to pass a hostname.\n\n--\nDaniel Gustafsson\n\n[0] https://www.postgresql.org/message-id/flat/CABUevExJ0ifpUEiX4uOREy0s2kHBrBrb=pXLEHhpMTR1vVR1XA@mail.gmail.com\n\n",
"msg_date": "Sun, 2 Apr 2023 19:37:55 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add \"host\" to startup packet"
}
] |
[
{
"msg_contents": "Hi,\n\nCommit 7389aad6 started using WaitEventSetWait() to wait for incoming\nconnections. Before that, we used select(), for which we have our own\nimplementation for Windows.\n\nWhile hacking on patches to rip a bunch of unused Win32 socket wrapper\ncode out, I twigged that the old pgwin32_select() code was careful to\nreport multiple sockets at once by brute force polling of all of them,\nwhile WaitEventSetWait() doesn't do that: it reports just one event,\nbecause that's what the Windows WaitForMultipleEvents() syscall does.\nI guess this means you can probably fill up the listen queue of server\nsocket 1 to prevent server socket 2 from ever being serviced, whereas\non Unix we'll accept one connection at a time from each in round-robin\nfashion.\n\nI think we could get the same effect as pgwin32_select() more cheaply\nby doing the initial WaitForMultipleEvents() call with the caller's\ntimeout exactly as we do today, and then, while we have space,\nrepeatedly calling\nWaitForMultipleEvents(handles=&events[last_signaled_event_index + 1],\ntimeout=0) until it reports timeout. Windows always reports the\nsignaled event with the lowest index in the array, so the idea is to\npoll the remaining part of the array without waiting, to check for any\nmore. In the most common case of FEBE socket waits etc there would be\nno extra system call (because it uses nevents=1, so no space for\nmore), and in the postmaster's main loop there would commonly be only\none extra system call to determine that no other events have been\nsignaled.\n\nThe attached patch shows the idea. It's using an ugly goto, but I\nguess it should be made decent with a real loop; I just didn't want to\nchange the indentation level for this POC.\n\nI mention this now because I'm not sure whether to consider this an\n'open item' for 16, or merely an enhancement for 17. I guess the\nformer, because someone might call that a new denial of service\nvector. On the other hand, if you fill up the listen queue for socket\n1 with enough vigour, you're also denying service to socket 1, so I\ndon't know if it's worth worrying about. Opinions on that?\n\nI don't have Windows to test any of this. Although this patch passes\non CI, that means nothing, as I don't expect the new code to be\nreached, so I'm hoping to find a someone who would be able to set up\nsuch a test on Windows, ie putting elog in to the new code path and\ntrying to reach it by connecting to two different ports fast enough to\nexercise the multiple event case.\n\nOr maybe latch.c really needs its own test suite.",
"msg_date": "Sat, 1 Apr 2023 13:42:21 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "WL_SOCKET_ACCEPT fairness on Windows"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-01 13:42:21 +1300, Thomas Munro wrote:\n> Commit 7389aad6 started using WaitEventSetWait() to wait for incoming\n> connections. Before that, we used select(), for which we have our own\n> implementation for Windows.\n> \n> While hacking on patches to rip a bunch of unused Win32 socket wrapper\n> code out, I twigged that the old pgwin32_select() code was careful to\n> report multiple sockets at once by brute force polling of all of them,\n> while WaitEventSetWait() doesn't do that: it reports just one event,\n> because that's what the Windows WaitForMultipleEvents() syscall does.\n> I guess this means you can probably fill up the listen queue of server\n> socket 1 to prevent server socket 2 from ever being serviced, whereas\n> on Unix we'll accept one connection at a time from each in round-robin\n> fashion.\n\nIt does indeed sound like it'd behave that way:\n\n> If bWaitAll is FALSE, the return value minus WAIT_OBJECT_0 indicates the\n> lpHandles array index of the object that satisfied the wait. If more than\n> one object became signaled during the call, this is the array index of the\n> signaled object with the smallest index value of all the signaled objects.\n\nI wonder if we ought to bite the bullet and replace the use of\nWaitForMultipleObjects() with RegisterWaitForSingleObject() and then use\nGetQueuedCompletionStatus() to wait. The fairness issue here is a motivation,\nbut the bigger one is that that'd get us out from under the\nMAXIMUM_WAIT_OBJECTS (IIRC 64) limit. Afaict that'd also allow us to read\nmultiple notifications at once, using GetQueuedCompletionStatusEx().\n\nMedium term that'd also be a small step towards using readiness based APIs in\na few places...\n\n\n> I think we could get the same effect as pgwin32_select() more cheaply\n> by doing the initial WaitForMultipleEvents() call with the caller's\n> timeout exactly as we do today, and then, while we have space,\n> repeatedly calling\n> WaitForMultipleEvents(handles=&events[last_signaled_event_index + 1],\n> timeout=0) until it reports timeout.\n\nThat would make sense, and should indeed be reasonable cost-wise.\n\n\n> I mention this now because I'm not sure whether to consider this an\n> 'open item' for 16, or merely an enhancement for 17. I guess the\n> former, because someone might call that a new denial of service\n> vector. On the other hand, if you fill up the listen queue for socket\n> 1 with enough vigour, you're also denying service to socket 1, so I\n> don't know if it's worth worrying about. Opinions on that?\n\nI'm not sure either. It doesn't strike me as a particularly relevant\nbottleneck. And the old approach of doing more work for every single\nconnection also made many connections worse, I think?\n\n> diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c\n> index f4123e7de7..cc7b572008 100644\n> --- a/src/backend/storage/ipc/latch.c\n> +++ b/src/backend/storage/ipc/latch.c\n> @@ -2025,6 +2025,8 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,\n> \t */\n> \tcur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];\n> \n> +loop:\n> +\n\nAs far as I can tell, we'll never see WL_LATCH_SET or WL_POSTMASTER_DEATH. 
I\nthink it'd look cleaner to move the body of if (cur_event->events & WL_SOCKET_MASK)\ninto a separate function that we then also can call further down.\n\n\n> \toccurred_events->pos = cur_event->pos;\n> \toccurred_events->user_data = cur_event->user_data;\n> \toccurred_events->events = 0;\n> @@ -2044,6 +2046,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,\n> \t\t\toccurred_events->events = WL_LATCH_SET;\n> \t\t\toccurred_events++;\n> \t\t\treturned_events++;\n> +\t\t\tnevents--;\n> \t\t}\n> \t}\n> \telse if (cur_event->events == WL_POSTMASTER_DEATH)\n> @@ -2063,6 +2066,7 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,\n> \t\t\toccurred_events->events = WL_POSTMASTER_DEATH;\n> \t\t\toccurred_events++;\n> \t\t\treturned_events++;\n> +\t\t\tnevents--;\n> \t\t}\n> \t}\n> \telse if (cur_event->events & WL_SOCKET_MASK)\n> @@ -2124,6 +2128,36 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,\n> \t\t{\n> \t\t\toccurred_events++;\n> \t\t\treturned_events++;\n> +\t\t\tnevents--;\n> +\t\t}\n> +\t}\n\nSeems like we could use returned_events to get nevents in the way you want it,\nwithout adding even more ++/-- to each of the different events?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 31 Mar 2023 18:35:29 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WL_SOCKET_ACCEPT fairness on Windows"
},
{
"msg_contents": "On Sat, Apr 1, 2023 at 2:35 PM Andres Freund <[email protected]> wrote:\n> I wonder if we ought to bite the bullet and replace the use of\n> WaitForMultipleObjects() with RegisterWaitForSingleObject() and then use\n> GetQueuedCompletionStatus() to wait. The fairness issue here is a motivation,\n> but the bigger one is that that'd get us out from under the\n> MAXIMUM_WAIT_OBJECTS (IIRC 64) limit. Afaict that'd also allow us to read\n> multiple notifications at once, using GetQueuedCompletionStatusEx().\n\nInteresting. So a callback would run in a system-managed thread, and\nthat would post a custom message in an IOCP for us to read, kinda like\nthe fake waitpid() thing? Seems a bit gross and context-switchy but I\nagree that the 64 event limit is also terrible.\n\n> Medium term that'd also be a small step towards using readiness based APIs in\n> a few places...\n\nYeah, that would be cool.\n\n> > I think we could get the same effect as pgwin32_select() more cheaply\n> > by doing the initial WaitForMultipleEvents() call with the caller's\n> > timeout exactly as we do today, and then, while we have space,\n> > repeatedly calling\n> > WaitForMultipleEvents(handles=&events[last_signaled_event_index + 1],\n> > timeout=0) until it reports timeout.\n>\n> That would make sense, and should indeed be reasonable cost-wise.\n\nCool.\n\n> > I mention this now because I'm not sure whether to consider this an\n> > 'open item' for 16, or merely an enhancement for 17. I guess the\n> > former, because someone might call that a new denial of service\n> > vector. On the other hand, if you fill up the listen queue for socket\n> > 1 with enough vigour, you're also denying service to socket 1, so I\n> > don't know if it's worth worrying about. Opinions on that?\n>\n> I'm not sure either. It doesn't strike me as a particularly relevant\n> bottleneck. And the old approach of doing more work for every single\n> connection also made many connections worse, I think?\n\nAlright, let's see if anyone else thinks this is worth fixing for 16.\n\n> > diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c\n> > index f4123e7de7..cc7b572008 100644\n> > --- a/src/backend/storage/ipc/latch.c\n> > +++ b/src/backend/storage/ipc/latch.c\n> > @@ -2025,6 +2025,8 @@ WaitEventSetWaitBlock(WaitEventSet *set, int cur_timeout,\n> > */\n> > cur_event = (WaitEvent *) &set->events[rc - WAIT_OBJECT_0 - 1];\n> >\n> > +loop:\n> > +\n>\n> As far as I can tell, we'll never see WL_LATCH_SET or WL_POSTMASTER_DEATH. I\n> think it'd look cleaner to move the body of if (cur_event->events & WL_SOCKET_MASK)\n> into a separate function that we then also can call further down.\n\nWe could see them, AFAICS, and I don't see much advantage in making\nthat assumption? Shouldn't we just shove it in a loop, just like the\nother platforms' implementations? Done in this version, which is best\nviewed with git show --ignore-all-space.\n\n> Seems like we could use returned_events to get nevents in the way you want it,\n> without adding even more ++/-- to each of the different events?\n\nYeah. This time I use reported_events. I also fixed a maths failure:\nI'd forgotten to use rc - WAIT_OBJECT_0, suggesting that CI never\nreached the code.",
"msg_date": "Sat, 1 Apr 2023 16:00:04 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WL_SOCKET_ACCEPT fairness on Windows"
},
{
"msg_contents": "On 3/31/23 11:00 PM, Thomas Munro wrote:\r\n\r\n>>> I mention this now because I'm not sure whether to consider this an\r\n>>> 'open item' for 16, or merely an enhancement for 17. I guess the\r\n>>> former, because someone might call that a new denial of service\r\n>>> vector. On the other hand, if you fill up the listen queue for socket\r\n>>> 1 with enough vigour, you're also denying service to socket 1, so I\r\n>>> don't know if it's worth worrying about. Opinions on that?\r\n>>\r\n>> I'm not sure either. It doesn't strike me as a particularly relevant\r\n>> bottleneck. And the old approach of doing more work for every single\r\n>> connection also made many connections worse, I think?\r\n> \r\n> Alright, let's see if anyone else thinks this is worth fixing for 16.\r\n\r\n[RMT hat]\r\n\r\nGiven this has sat for a bit, I wanted to see if any of your thinking \r\nhas changed on whether this should be fixed for v16 or v17. I have \r\npersonally not formed an opinion yet, but per the current discussion, it \r\nseems like this could wait?\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 16 May 2023 10:57:09 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WL_SOCKET_ACCEPT fairness on Windows"
},
{
"msg_contents": "On Wed, May 17, 2023 at 2:57 AM Jonathan S. Katz <[email protected]> wrote:\n> On 3/31/23 11:00 PM, Thomas Munro wrote:\n> >>> I mention this now because I'm not sure whether to consider this an\n> >>> 'open item' for 16, or merely an enhancement for 17. I guess the\n> >>> former, because someone might call that a new denial of service\n> >>> vector. On the other hand, if you fill up the listen queue for socket\n> >>> 1 with enough vigour, you're also denying service to socket 1, so I\n> >>> don't know if it's worth worrying about. Opinions on that?\n> >>\n> >> I'm not sure either. It doesn't strike me as a particularly relevant\n> >> bottleneck. And the old approach of doing more work for every single\n> >> connection also made many connections worse, I think?\n> >\n> > Alright, let's see if anyone else thinks this is worth fixing for 16.\n>\n> [RMT hat]\n>\n> Given this has sat for a bit, I wanted to see if any of your thinking\n> has changed on whether this should be fixed for v16 or v17. I have\n> personally not formed an opinion yet, but per the current discussion, it\n> seems like this could wait?\n\nYeah. No one seems to think this is worth worrying about (please\nspeak up if you do). I'll go ahead and remove this from the open item\nlists now, but I'll leave the patch in the CF for 17, to see if a\nWindows hacker/tester thinks it's a worthwhile improvement.\n\n\n",
"msg_date": "Wed, 17 May 2023 08:41:24 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WL_SOCKET_ACCEPT fairness on Windows"
},
{
"msg_contents": "On 5/16/23 4:41 PM, Thomas Munro wrote:\r\n> On Wed, May 17, 2023 at 2:57 AM Jonathan S. Katz <[email protected]> wrote:\r\n\r\n>> Given this has sat for a bit, I wanted to see if any of your thinking\r\n>> has changed on whether this should be fixed for v16 or v17. I have\r\n>> personally not formed an opinion yet, but per the current discussion, it\r\n>> seems like this could wait?\r\n> \r\n> Yeah. No one seems to think this is worth worrying about (please\r\n> speak up if you do). I'll go ahead and remove this from the open item\r\n> lists now, but I'll leave the patch in the CF for 17, to see if a\r\n> Windows hacker/tester thinks it's a worthwhile improvement.\r\n\r\n[RMT hat, personal opinion]\r\n\r\nThat seems reasonable to me.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 16 May 2023 17:11:29 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WL_SOCKET_ACCEPT fairness on Windows"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-17 08:41:24 +1200, Thomas Munro wrote:\n> Yeah. No one seems to think this is worth worrying about (please\n> speak up if you do).\n\n+1 - we have much bigger fish to fry IMO.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 May 2023 16:26:27 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WL_SOCKET_ACCEPT fairness on Windows"
},
{
"msg_contents": "On Sat, April 1, 2023 at 11:00 AM Thomas Munro <[email protected]> wrote:\r\n>\r\nHi,\r\nThanks for your patch.\r\n\r\nI tried to test this patch on Windows. And I did cover the new code path below:\r\n```\r\n+\t\t/* We have another event to decode. */\r\n+\t\tcur_event = &set->events[next_pos + (rc - WAIT_OBJECT_0)];\r\n```\r\n\r\nBut I have one thing want to confirm:\r\nIn my tests, I think the scenario with two different events (e.g., one ending\r\nsocket connection and one incoming socket connection) has been optimized.\r\nHowever, it seems that when there are multiple incoming socket connections, the\r\nfunction WaitEventSetWaitBlock is called multiple times instead of being called\r\nonce. Is this our expected result?\r\n\r\nHere are my test details:\r\nI use the debugger to ensure that multiple events occur when the function\r\nWaitEventSetWaitBlock is called. First, I added a breakpoint in the below code:\r\n```\r\n\t\t/*\r\n\t\t* Sleep.\r\n\t\t*\r\n\t\t* Need to wait for ->nevents + 1, because signal handle is in [0].\r\n\t\t*/\r\nb\t\trc = WaitForMultipleObjects(set->nevents + 1, set->handles, FALSE,\r\n\t\t\t\t\t\t\t\t\tcur_timeout);\r\n```\r\nAnd then make sure that the postmaster process enters the function\r\nWaitForMultipleObjects. (I think the postmaster process will only return from\r\nthe function WaitForMultipleObjects when any object is signaled or timeout\r\noccurs). Before the timeout occurs, I set up multiple socket connections using\r\npsql (the first connection makes the process returns from the function\r\nWaitForMultipleObjects). Then, as I continued debugging, multiple socket\r\nconnections were handled by different calls of the function\r\nWaitEventSetWaitBlock.\r\n\r\nPlease let me know if I am missing something.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Thu, 18 May 2023 08:53:03 +0000",
"msg_from": "\"Wei Wang (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: WL_SOCKET_ACCEPT fairness on Windows"
},
{
"msg_contents": "On Thu, May 18, 2023 at 8:53 PM Wei Wang (Fujitsu)\n<[email protected]> wrote:\n> On Sat, April 1, 2023 at 11:00 AM Thomas Munro <[email protected]> wrote:\n> I tried to test this patch on Windows. And I did cover the new code path below:\n> ```\n> + /* We have another event to decode. */\n> + cur_event = &set->events[next_pos + (rc - WAIT_OBJECT_0)];\n> ```\n>\n> But I have one thing want to confirm:\n> In my tests, I think the scenario with two different events (e.g., one ending\n> socket connection and one incoming socket connection) has been optimized.\n> However, it seems that when there are multiple incoming socket connections, the\n> function WaitEventSetWaitBlock is called multiple times instead of being called\n> once. Is this our expected result?\n\nThanks for testing! Maybe I am misunderstanding something: what I\nexpect to happen is that we call *WaitForMultipleObjects()* one extra\ntime (to see if there is another event available immediately, and\nusually there is not), but not WaitEventSetWaitBlock().\n\n> Here are my test details:\n> I use the debugger to ensure that multiple events occur when the function\n> WaitEventSetWaitBlock is called. First, I added a breakpoint in the below code:\n> ```\n> /*\n> * Sleep.\n> *\n> * Need to wait for ->nevents + 1, because signal handle is in [0].\n> */\n> b rc = WaitForMultipleObjects(set->nevents + 1, set->handles, FALSE,\n> cur_timeout);\n> ```\n> And then make sure that the postmaster process enters the function\n> WaitForMultipleObjects. (I think the postmaster process will only return from\n> the function WaitForMultipleObjects when any object is signaled or timeout\n> occurs). Before the timeout occurs, I set up multiple socket connections using\n> psql (the first connection makes the process returns from the function\n> WaitForMultipleObjects). Then, as I continued debugging, multiple socket\n> connections were handled by different calls of the function\n> WaitEventSetWaitBlock.\n\nThis is a good test. Also, to test the exact scenario I was worrying\nabout, you could initiate 4 psql sessions while the server is blocked\nin a debugger, 2 on a Unix domain socket, and 2 on a TCP socket, and\nthen when you \"continue\" the server with a break on (say) accept() so\nthat you can \"continue\" each time, you should see that it alternates\nbetween the two sockets accepting from both \"fairly\", instead of\ndraining one socket entirely first.\n\n> Please let me know if I am missing something.\n\nI think your report already shows that it basically works correctly.\nDo you think it's worth committing for 17 to make it work more like\nUnix, or was I just being paranoid?\n\n\n",
"msg_date": "Sat, 10 Jun 2023 15:06:06 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WL_SOCKET_ACCEPT fairness on Windows"
},
{
"msg_contents": "I committed this for 17. It would be good to come up with something\nfundamentally better than this, to get rid of that 64 event limit\nnonsense, but I don't see it happening in the 17 cycle, and prefer the\nsemantics with this commit in the meantime.\n\n\n",
"msg_date": "Fri, 8 Sep 2023 18:59:33 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WL_SOCKET_ACCEPT fairness on Windows"
}
] |
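
For illustration, a minimal self-contained Win32 sketch of the zero-timeout draining idea discussed in this thread (wait once with the caller's timeout, then keep polling the remaining handles with timeout 0 so one wait can report several ready events). It is not PostgreSQL's latch.c code; the three-event setup and all names are invented for the example.

    #include <windows.h>
    #include <stdio.h>

    /*
     * After WaitForMultipleObjects() reports the lowest-numbered signaled
     * handle, keep polling the handles that follow it with a zero timeout,
     * so several ready events can be collected from a single wait instead
     * of always favouring the lowest-numbered handle.
     */
    int
    main(void)
    {
        HANDLE  events[3];
        DWORD   rc;
        int     pos = 0;
        int     i;

        for (i = 0; i < 3; i++)
            events[i] = CreateEventA(NULL, TRUE, FALSE, NULL);  /* manual-reset */

        /* Pretend two "sockets" became ready at about the same time. */
        SetEvent(events[0]);
        SetEvent(events[2]);

        rc = WaitForMultipleObjects(3, events, FALSE, 1000);
        while (rc < WAIT_OBJECT_0 + (DWORD) (3 - pos))
        {
            pos += (int) (rc - WAIT_OBJECT_0);
            printf("event %d is ready\n", pos);

            pos++;              /* resume scanning after the reported handle */
            if (pos >= 3)
                break;
            /* timeout of 0: only report events that are already signaled */
            rc = WaitForMultipleObjects((DWORD) (3 - pos), &events[pos], FALSE, 0);
        }

        for (i = 0; i < 3; i++)
            CloseHandle(events[i]);
        return 0;
    }

Run against the setup above it prints "event 0 is ready" and then "event 2 is ready" from a single blocking wait, which is the fairness behaviour the thread is after.
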
[
{
"msg_contents": "hi hackers,\n\nnow that the heap relation is passed down to vacuumRedirectAndPlaceholder()\nthanks to 61b313e47e, we can also pass it down to GlobalVisTestFor() too (to\nallow more pruning).\n\nPlease find attached a tiny patch to do so.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 2 Apr 2023 10:23:47 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pass heaprel to GlobalVisTestFor() in vacuumRedirectAndPlaceholder()"
},
{
"msg_contents": "On Sun, Apr 2, 2023 at 1:25 AM Drouvot, Bertrand\n<[email protected]> wrote:\n> now that the heap relation is passed down to vacuumRedirectAndPlaceholder()\n> thanks to 61b313e47e, we can also pass it down to GlobalVisTestFor() too (to\n> allow more pruning).\n\nWhat about BTPageIsRecyclable() and _bt_pendingfsm_finalize()?\n\nMaking nbtree page deletion more efficient when logical decoding is in\nuse seems well worthwhile. There is an \"XXX\" comment about this issue,\nsimilar to the SP-GiST one. It looks like you already have everything\nyou need to make this work from yesterday's commit 61b313e47e.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 2 Apr 2023 10:18:10 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pass heaprel to GlobalVisTestFor() in\n vacuumRedirectAndPlaceholder()"
},
{
"msg_contents": "On Sun, Apr 2, 2023 at 10:18 AM Peter Geoghegan <[email protected]> wrote:\n> Making nbtree page deletion more efficient when logical decoding is in\n> use seems well worthwhile. There is an \"XXX\" comment about this issue,\n> similar to the SP-GiST one. It looks like you already have everything\n> you need to make this work from yesterday's commit 61b313e47e.\n\nActually, I suppose that isn't quite true, since you'd still need to\nfind a way to pass the heap relation down to nbtree VACUUM. Say by\nadding it to IndexVacuumInfo.\n\nThat doesn't seem hard at all. The hard part was passing the heap rel\ndown to _bt_getbuf(), which you've already taken care of.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 2 Apr 2023 10:22:18 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pass heaprel to GlobalVisTestFor() in\n vacuumRedirectAndPlaceholder()"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-02 10:22:18 -0700, Peter Geoghegan wrote:\n> On Sun, Apr 2, 2023 at 10:18 AM Peter Geoghegan <[email protected]> wrote:\n> > Making nbtree page deletion more efficient when logical decoding is in\n> > use seems well worthwhile. There is an \"XXX\" comment about this issue,\n> > similar to the SP-GiST one. It looks like you already have everything\n> > you need to make this work from yesterday's commit 61b313e47e.\n\n+1\n\n\n> Actually, I suppose that isn't quite true, since you'd still need to\n> find a way to pass the heap relation down to nbtree VACUUM. Say by\n> adding it to IndexVacuumInfo.\n\nIt has been added to that already, so it should really be as trivial as you\nsuggested earlier...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Apr 2023 15:30:55 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pass heaprel to GlobalVisTestFor() in\n vacuumRedirectAndPlaceholder()"
},
{
"msg_contents": "On Sun, Apr 2, 2023 at 3:30 PM Andres Freund <[email protected]> wrote:\n> > Actually, I suppose that isn't quite true, since you'd still need to\n> > find a way to pass the heap relation down to nbtree VACUUM. Say by\n> > adding it to IndexVacuumInfo.\n>\n> It has been added to that already, so it should really be as trivial as you\n> suggested earlier...\n\nOh yeah, I missed it because you put it at the end of the struct,\nrather than at the start, next to the existing Relation.\n\nThis page deletion issue matters a lot more after the Postgres 14\noptimization added by commit e5d8a99903, which came after your\nGlobalVisCheckRemovableFullXid() snapshot scalability work (well, a\nfew months after, at least). I really don't like the idea of something\nlike that being much less effective due to logical decoding. Granted,\nthe optimization in commit e5d8a99903 was itself kind of a hack, which\nshould be replaced by a scheme that explicitly makes recycle safety\nthe responsibility of the FSM itself, not the responsibility of\nVACUUM.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 2 Apr 2023 15:52:14 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pass heaprel to GlobalVisTestFor() in\n vacuumRedirectAndPlaceholder()"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-02 15:52:14 -0700, Peter Geoghegan wrote:\n> On Sun, Apr 2, 2023 at 3:30 PM Andres Freund <[email protected]> wrote:\n> > > Actually, I suppose that isn't quite true, since you'd still need to\n> > > find a way to pass the heap relation down to nbtree VACUUM. Say by\n> > > adding it to IndexVacuumInfo.\n> >\n> > It has been added to that already, so it should really be as trivial as you\n> > suggested earlier...\n> \n> Oh yeah, I missed it because you put it at the end of the struct,\n> rather than at the start, next to the existing Relation.\n\nWell, Bertrand. But I didn't change it, so you're not wrong...\n\n\n> This page deletion issue matters a lot more after the Postgres 14\n> optimization added by commit e5d8a99903, which came after your\n> GlobalVisCheckRemovableFullXid() snapshot scalability work (well, a\n> few months after, at least). I really don't like the idea of something\n> like that being much less effective due to logical decoding.\n\nYea, it's definitely good to use the relation there.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Apr 2023 17:19:45 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pass heaprel to GlobalVisTestFor() in\n vacuumRedirectAndPlaceholder()"
},
{
"msg_contents": "Hi,\n\nOn 4/3/23 12:30 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-04-02 10:22:18 -0700, Peter Geoghegan wrote:\n>> On Sun, Apr 2, 2023 at 10:18 AM Peter Geoghegan <[email protected]> wrote:\n>>> Making nbtree page deletion more efficient when logical decoding is in\n>>> use seems well worthwhile. There is an \"XXX\" comment about this issue,\n>>> similar to the SP-GiST one. It looks like you already have everything\n>>> you need to make this work from yesterday's commit 61b313e47e.\n> \n> +1\n> \n\nThanks Peter for the suggestion!\n\n> \n>> Actually, I suppose that isn't quite true, since you'd still need to\n>> find a way to pass the heap relation down to nbtree VACUUM. Say by\n>> adding it to IndexVacuumInfo.\n> \n> It has been added to that already, so it should really be as trivial as you\n> suggested earlier...\n> \n\nRight. Please find enclosed V2 also taking care of BTPageIsRecyclable()\nand _bt_pendingfsm_finalize().\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 3 Apr 2023 09:09:50 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pass heaprel to GlobalVisTestFor() in\n vacuumRedirectAndPlaceholder()"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 12:09 AM Drouvot, Bertrand\n<[email protected]> wrote:\n> Right. Please find enclosed V2 also taking care of BTPageIsRecyclable()\n> and _bt_pendingfsm_finalize().\n\nPushed that as too separate patches just now. Thanks.\n\nBTW, I'm not overly happy about the extent of the changes to nbtree\nfrom commit 61b313e4. I understand that it was necessary to pass down\na heaprel in a lot of places, which is bound to create a lot of churn.\nHowever, a lot of the churn from the commit seems completely\navoidable. There is no reason why the BT_READ path in _bt_getbuf()\ncould possibly require a valid heaprel. In fact, most individual\nBT_WRITE calls don't need heaprel, either -- only those that pass\nP_NEW. The changes affecting places like _bt_mkscankey() and\n_bt_metaversion() seem particularly bad.\n\nAnyway, I'll take care of this myself at some point after feature freeze.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 3 Apr 2023 12:20:01 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pass heaprel to GlobalVisTestFor() in\n vacuumRedirectAndPlaceholder()"
}
] |
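
For readers skimming the archive, the shape of the change under discussion is roughly the following (a paraphrase, not the committed diff; "heaprel" here stands for whatever heap relation the caller now has available):

    /*
     * vacuumRedirectAndPlaceholder() used to have no heap relation to hand
     * to the visibility test, so it could only be conservative; with the
     * heap rel passed down since commit 61b313e47e it can allow more pruning.
     */
    vistest = GlobalVisTestFor(heaprel);

The nbtree side discussed later in the thread is analogous: the heap relation carried in IndexVacuumInfo gets forwarded to the page-recycling visibility checks.
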
[
{
"msg_contents": "Hi,\n\nIt looks like comments in make file and meson file about not running\nbasic_archive tests in NO_INSTALLCHECK mode are wrong. The comments say the\nmodule needs to be loaded via shared_preload_libraries=basic_archive, but\nit actually doesn't. The custom file needs archive related parameters and\nwal_level=replica. Here's a patch correcting that comment.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 3 Apr 2023 08:56:10 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix a comment in basic_archive about NO_INSTALLCHECK"
},
{
"msg_contents": "On Mon, Apr 03, 2023 at 08:56:10AM +0530, Bharath Rupireddy wrote:\n> It looks like comments in make file and meson file about not running\n> basic_archive tests in NO_INSTALLCHECK mode are wrong. The comments say the\n> module needs to be loaded via shared_preload_libraries=basic_archive, but\n> it actually doesn't. The custom file needs archive related parameters and\n> wal_level=replica. Here's a patch correcting that comment.\n\nWouldn't it be better to also set shared_preload_libraries in\nbasic_archive.conf? It is true that the test works fine if setting\nonly archive_library, which would cause the library with its\n_PG_init() to be loaded in the archiver process. However the GUC \nbasic_archive.archive_directory is missing from the backends.\n\nSaying that, updating the comments about the dependency with\narchive_library and the module's GUC is right.\n--\nMichael",
"msg_date": "Thu, 6 Apr 2023 12:55:56 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a comment in basic_archive about NO_INSTALLCHECK"
},
{
"msg_contents": "On Thu, Apr 6, 2023 at 9:26 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Apr 03, 2023 at 08:56:10AM +0530, Bharath Rupireddy wrote:\n> > It looks like comments in make file and meson file about not running\n> > basic_archive tests in NO_INSTALLCHECK mode are wrong. The comments say the\n> > module needs to be loaded via shared_preload_libraries=basic_archive, but\n> > it actually doesn't. The custom file needs archive related parameters and\n> > wal_level=replica. Here's a patch correcting that comment.\n>\n> Wouldn't it be better to also set shared_preload_libraries in\n> basic_archive.conf? It is true that the test works fine if setting\n> only archive_library, which would cause the library with its\n> _PG_init() to be loaded in the archiver process. However the GUC\n> basic_archive.archive_directory is missing from the backends.\n\nHm, I think the other backends will still see the value of the GUC\nwithout shared_preload_libraries=basic_archive. You can verify it with\nadding SHOW basic_archive.archive_directory; to basic_archive.sql. The\nbasic_archive library gets loaded by archiver via _PG_init. It's the\narchiver defining a custom GUC variable which will propagate to all\nthe postgres processes via set_config_option_ext. Therefore, we don't\nneed shared_preload_libraries=basic_archive.\n\n#3 0x00007f75306406b6 in _PG_init () at basic_archive.c:86\n#4 0x0000562652d0c87c in internal_load_library (\n libname=0x5626549102d8\n\"/home/ubuntu/postgres/tmp_install/home/ubuntu/postgres/inst/lib/basic_archive.so\")\nat dfmgr.c:289\n#5 0x0000562652d0c1e7 in load_external_function\n(filename=0x562654930698 \"basic_archive\",\n funcname=0x562652eca81b \"_PG_archive_module_init\",\nsignalNotFound=false, filehandle=0x0) at dfmgr.c:116\n#6 0x0000562652a3a400 in LoadArchiveLibrary () at pgarch.c:841\n#7 0x0000562652a39489 in PgArchiverMain () at pgarch.c:256\n#8 0x0000562652a353de in AuxiliaryProcessMain\n(auxtype=ArchiverProcess) at auxprocess.c:145\n#9 0x0000562652a40b8e in StartChildProcess (type=ArchiverProcess) at\npostmaster.c:5341\n#10 0x0000562652a3e529 in process_pm_child_exit () at postmaster.c:3072\n#11 0x0000562652a3c329 in ServerLoop () at postmaster.c:1767\n#12 0x0000562652a3bc52 in PostmasterMain (argc=8, argv=0x56265490e1e0)\nat postmaster.c:1462\n#13 0x00005626528efbbf in main (argc=8, argv=0x56265490e1e0) at main.c:198\n\n> Saying that, updating the comments about the dependency with\n> archive_library and the module's GUC is right.\n\nThanks. Any thoughts on the v1 patch attached upthread?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 19 Jul 2023 16:10:16 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix a comment in basic_archive about NO_INSTALLCHECK"
},
{
"msg_contents": "On Thu, Apr 6, 2023 at 9:26 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Apr 03, 2023 at 08:56:10AM +0530, Bharath Rupireddy wrote:\n> > It looks like comments in make file and meson file about not running\n> > basic_archive tests in NO_INSTALLCHECK mode are wrong. The comments say the\n> > module needs to be loaded via shared_preload_libraries=basic_archive, but\n> > it actually doesn't. The custom file needs archive related parameters and\n> > wal_level=replica. Here's a patch correcting that comment.\n>\n> Saying that, updating the comments about the dependency with\n> archive_library and the module's GUC is right.\n\nReworded the comment a bit and attached the v2 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Dec 2023 11:28:46 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix a comment in basic_archive about NO_INSTALLCHECK"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 11:28:46AM +0530, Bharath Rupireddy wrote:\n> Reworded the comment a bit and attached the v2 patch.\n\nForgot about this one, thanks! I've simplified it a bit and applied\nit.\n--\nMichael",
"msg_date": "Wed, 20 Dec 2023 08:43:20 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix a comment in basic_archive about NO_INSTALLCHECK"
}
] |
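
For context, the custom configuration file the basic_archive tests rely on is of roughly this shape (an illustrative sketch based on the parameters named in the thread, not the file shipped in the tree; the directory path is made up):

    # basic_archive.conf (illustrative)
    wal_level = replica
    archive_mode = on
    archive_library = 'basic_archive'
    basic_archive.archive_directory = '/tmp/basic_archive_dir'

The point debated above is that archive_library alone is enough for the archiver to load the module and define its custom GUC, so shared_preload_libraries is not required for the test to pass.
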
[
{
"msg_contents": "Hi,\n\nI found that the enable_hashjoin disables HashJoin completely.\nIt's in the function add_paths_to_joinrel:\n\nif (enable_hashjoin || jointype == JOIN_FULL)\n\thash_inner_and_outer(root, joinrel, outerrel, innerrel,\n\t\t\t\tjointype, &extra);\n\nInstead, it should add a disable cost to the cost calculation of \nhashjoin. And now final_cost_hashjoin does the same thing:\n\nif (!enable_hashjoin)\n\tstartup_cost += disable_cost;\n\n\nenable_mergejoin has the same problem.\n\nTest case:\n\nCREATE TABLE t_score_01(\ns_id int,\ns_score int,\ns_course char(8),\nc_id int);\n\nCREATE TABLE t_student_01(\ns_id int,\ns_name char(8));\n\ninsert into t_score_01 values(\ngenerate_series(1, 1000000), random()*100, 'course', generate_series(1, \n1000000));\n\ninsert into t_student_01 values(generate_series(1, 1000000), 'name');\n\nanalyze t_score_01;\nanalyze t_student_01;\n\nSET enable_hashjoin TO off;\nSET enable_nestloop TO off;\nSET enable_mergejoin TO off;\n\nexplain select count(*)\nfrom t_student_01 a join t_score_01 b on a.s_id=b.s_id;\n\nAfter disabling all three, the HashJoin path should still be chosen.\n\nAttached is the patch file.\n\n--\nQuan Zongliang\nVastdata",
"msg_date": "Mon, 3 Apr 2023 18:23:41 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "On 4/3/23 12:23, Quan Zongliang wrote:\n> Hi,\n> \n> I found that the enable_hashjoin disables HashJoin completely.\n> It's in the function add_paths_to_joinrel:\n> \n> if (enable_hashjoin || jointype == JOIN_FULL)\n> hash_inner_and_outer(root, joinrel, outerrel, innerrel,\n> jointype, &extra);\n> \n> Instead, it should add a disable cost to the cost calculation of\n> hashjoin. And now final_cost_hashjoin does the same thing:\n> \n> if (!enable_hashjoin)\n> startup_cost += disable_cost;\n> \n> \n> enable_mergejoin has the same problem.\n> \n> Test case:\n> \n> CREATE TABLE t_score_01(\n> s_id int,\n> s_score int,\n> s_course char(8),\n> c_id int);\n> \n> CREATE TABLE t_student_01(\n> s_id int,\n> s_name char(8));\n> \n> insert into t_score_01 values(\n> generate_series(1, 1000000), random()*100, 'course', generate_series(1,\n> 1000000));\n> \n> insert into t_student_01 values(generate_series(1, 1000000), 'name');\n> \n> analyze t_score_01;\n> analyze t_student_01;\n> \n> SET enable_hashjoin TO off;\n> SET enable_nestloop TO off;\n> SET enable_mergejoin TO off;\n> \n> explain select count(*)\n> from t_student_01 a join t_score_01 b on a.s_id=b.s_id;\n> \n> After disabling all three, the HashJoin path should still be chosen.\n> \n\nIt's not clear to me why that behavior would be desirable? Why is this\nan issue you need so solve?\n\nAFAIK the reason why some paths are actually disabled (not built at all)\nwhile others are only penalized by adding disable_cost is that we need\nto end up with at least one way to execute the query. So we pick a path\nthat we know is possible (e.g. seqscan) and hard-disable other paths.\nBut the always-possible path is only soft-disabled by disable_cost.\n\nFor joins, we do the same thing. The hash/merge joins may not be\npossible, because the data types may not have hash/sort operators, etc.\nNestloop is always possible. So we soft-disable nestloop but\nhard-disable hash/merge joins.\n\nI doubt we want to change this behavior, unless there's a good reason to\ndo that ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 3 Apr 2023 13:44:03 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "Quan Zongliang <[email protected]> writes:\n> I found that the enable_hashjoin disables HashJoin completely.\n\nWell, yeah. It's what you asked for.\n\n> Instead, it should add a disable cost to the cost calculation of \n> hashjoin.\n\nWhy? The disable-cost stuff is a crude hack that we use when\nturning off a particular plan type entirely might render us\nunable to generate a valid plan. Hash join is not in that\ncategory.\n\n> After disabling all three, the HashJoin path should still be chosen.\n\nWhy?\n\nPersonally, I'd get rid of disable_cost altogether if I could.\nI'm not in a hurry to extend its use to more places.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Apr 2023 08:13:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 8:13 AM Tom Lane <[email protected]> wrote:\n> Personally, I'd get rid of disable_cost altogether if I could.\n> I'm not in a hurry to extend its use to more places.\n\nI agree. I've wondered if we should put some work into that. It feels\nbad to waste CPU cycles generating paths we intend to basically just\nthrow away, and it feels even worse if they manage to beat out some\nother path on cost.\n\nIt hasn't been obvious to me how we could restructure the existing\nlogic to avoid relying on disable_cost. I sort of feel like it should\nbe a two-pass algorithm: go through and generate all the path types\nthat aren't disabled, and then if that results in no paths, try a\ndo-over where you ignore the disable flags (or just some of them). But\nthe code structure doesn't seem particularly amenable to that kind of\nthing.\n\nThis hasn't caused me enough headaches yet that I've been willing to\ninvest time in it, but it has caused me more than zero headaches...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 3 Apr 2023 13:51:03 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Apr 3, 2023 at 8:13 AM Tom Lane <[email protected]> wrote:\n>> Personally, I'd get rid of disable_cost altogether if I could.\n>> I'm not in a hurry to extend its use to more places.\n\n> I agree. I've wondered if we should put some work into that. It feels\n> bad to waste CPU cycles generating paths we intend to basically just\n> throw away, and it feels even worse if they manage to beat out some\n> other path on cost.\n\n> It hasn't been obvious to me how we could restructure the existing\n> logic to avoid relying on disable_cost.\n\nYeah. In some places it would not be too hard; for example, if we\ngenerated seqscan paths last instead of first for baserels, the rule\ncould be \"generate it if enable_seqscan is on OR if we made no other\npath for the rel\". It's much messier for joins though, partly because\nthe same joinrel will be considered multiple times as we process\ndifferent join orderings, plus it's usually unclear whether failing\nto generate any paths for joinrel X will lead to overall failure.\n\nA solution that would work is to treat disable_cost as a form of infinity\nthat's counted separately from the actual cost estimate, that is we\nlabel paths as \"cost X, plus there are N uses of disabled plan types\".\nThen you sort first on N and after that on X. But this'd add a good\nnumber of cycles to add_path, which I've not wanted to expend on a\nnon-mainstream usage.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Apr 2023 14:04:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 2:04 PM Tom Lane <[email protected]> wrote:\n> Yeah. In some places it would not be too hard; for example, if we\n> generated seqscan paths last instead of first for baserels, the rule\n> could be \"generate it if enable_seqscan is on OR if we made no other\n> path for the rel\". It's much messier for joins though, partly because\n> the same joinrel will be considered multiple times as we process\n> different join orderings, plus it's usually unclear whether failing\n> to generate any paths for joinrel X will lead to overall failure.\n\nYeah, good point. I'm now remembering that at one point I'd had the\nidea of running the whole find-a-plan-for-a-jointree step and then\nrunning it a second time if it fails to find a plan. But I think that\nrequires some restructuring, because I think right now it does some\nthings that we should only do once we know we're definitely getting a\nplan out. Or else we have to reset some state. Like if we want to go\nback and maybe add more paths then we have to undo and redo whatever\nset_cheapest() did.\n\n> A solution that would work is to treat disable_cost as a form of infinity\n> that's counted separately from the actual cost estimate, that is we\n> label paths as \"cost X, plus there are N uses of disabled plan types\".\n> Then you sort first on N and after that on X. But this'd add a good\n> number of cycles to add_path, which I've not wanted to expend on a\n> non-mainstream usage.\n\nYeah, I thought of that at one point too and rejected it for the same reason.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 3 Apr 2023 14:52:13 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-03 14:04:30 -0400, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n> > On Mon, Apr 3, 2023 at 8:13 AM Tom Lane <[email protected]> wrote:\n> >> Personally, I'd get rid of disable_cost altogether if I could.\n> >> I'm not in a hurry to extend its use to more places.\n> \n> > I agree. I've wondered if we should put some work into that. It feels\n> > bad to waste CPU cycles generating paths we intend to basically just\n> > throw away, and it feels even worse if they manage to beat out some\n> > other path on cost.\n> \n> > It hasn't been obvious to me how we could restructure the existing\n> > logic to avoid relying on disable_cost.\n> \n> Yeah. In some places it would not be too hard; for example, if we\n> generated seqscan paths last instead of first for baserels, the rule\n> could be \"generate it if enable_seqscan is on OR if we made no other\n> path for the rel\". It's much messier for joins though, partly because\n> the same joinrel will be considered multiple times as we process\n> different join orderings, plus it's usually unclear whether failing\n> to generate any paths for joinrel X will lead to overall failure.\n> \n> A solution that would work is to treat disable_cost as a form of infinity\n> that's counted separately from the actual cost estimate, that is we\n> label paths as \"cost X, plus there are N uses of disabled plan types\".\n> Then you sort first on N and after that on X. But this'd add a good\n> number of cycles to add_path, which I've not wanted to expend on a\n> non-mainstream usage.\n\nIt sounds too hard compared to the gains, but another way could be to plan\nwith the relevant path generation hard disabled, and plan from scratch, with\nadditional scan types enabled, if we end up being unable to generate valid\nplan.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 3 Apr 2023 16:18:14 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "On Tue, 4 Apr 2023 at 11:18, Andres Freund <[email protected]> wrote:\n> It sounds too hard compared to the gains, but another way could be to plan\n> with the relevant path generation hard disabled, and plan from scratch, with\n> additional scan types enabled, if we end up being unable to generate valid\n> plan.\n\nI think there would be quite a bit of work to do before we could ever\nstart to think about that. The planner does quite a bit of writing on\nthe parse, e.g adding new RangeTblEntrys to the query's rtable. We'd\neither need to fix all those first or make a copy of the parse before\nplanning. The latter is quite expensive today. It's also not clear to\nme how you'd know what you'd need to enable again to get the 2nd\nattempt to produce a plan this time around. I'd assume you'd want the\nminimum possible set of enable_* GUCs turned back on, but what would\nyou do in cases where there's an aggregate and both enable_hashagg and\nenable_sort are both disabled and there are no indexes providing\npre-sorted input?\n\nDavid\n\nDavid\n\n\n",
"msg_date": "Tue, 4 Apr 2023 11:31:04 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> It sounds too hard compared to the gains, but another way could be to plan\n> with the relevant path generation hard disabled, and plan from scratch, with\n> additional scan types enabled, if we end up being unable to generate valid\n> plan.\n\nActually, I kind of like that. It would put the extra cost in a place\nit belongs: if you have enough enable_foo turned off to prevent\ngenerating a valid plan, it'll cost you extra to make a plan ... but\nlikely you'll be paying even more in runtime due to not getting a good\nplan, so maybe that doesn't matter anyway. I'd limit it to two passes:\nfirst try honors all enable_foo switches, second try ignores all.\n\nI'm not quite sure how this could be wedged into the existing code\nstructure --- in particular I am not sure that we're prepared to do\ntwo passes of baserel path generation. (GEQO is an existence proof\nthat we could handle it for join planning, though.)\n\nOr we could rethink the design goal of not allowing enable_foo switches\nto cause us to fail to make a plan. That might be unusable though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Apr 2023 19:31:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> I think there would be quite a bit of work to do before we could ever\n> start to think about that. The planner does quite a bit of writing on\n> the parse, e.g adding new RangeTblEntrys to the query's rtable. We'd\n> either need to fix all those first or make a copy of the parse before\n> planning.\n\nYeah, we'd have to be sure that all that preliminary work is teased apart\nfrom the actual path-making. I think we are probably pretty close to\nthat but not there yet. Subqueries might be problematic, but perhaps\nwe could define our way out of that by saying that this retry principle\napplies independently in each planner recursion level.\n\n> It's also not clear to\n> me how you'd know what you'd need to enable again to get the 2nd\n> attempt to produce a plan this time around. I'd assume you'd want the\n> minimum possible set of enable_* GUCs turned back on, but what would\n> you do in cases where there's an aggregate and both enable_hashagg and\n> enable_sort are both disabled and there are no indexes providing\n> pre-sorted input?\n\nAs I commented concurrently, I think we should simply not try to solve\nthat conundrum: if you want control, don't pose impossible problems.\nThere's no principled way that we could decide which of enable_hashagg\nand enable_sort to ignore first, for example.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Apr 2023 19:39:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "\n\nOn 2023/4/3 19:44, Tomas Vondra wrote:\n> On 4/3/23 12:23, Quan Zongliang wrote:\n>> Hi,\n>>\n>> I found that the enable_hashjoin disables HashJoin completely.\n>> It's in the function add_paths_to_joinrel:\n>>\n>> if (enable_hashjoin || jointype == JOIN_FULL)\n>> hash_inner_and_outer(root, joinrel, outerrel, innerrel,\n>> jointype, &extra);\n>>\n>> Instead, it should add a disable cost to the cost calculation of\n>> hashjoin. And now final_cost_hashjoin does the same thing:\n>>\n>> if (!enable_hashjoin)\n>> startup_cost += disable_cost;\n>>\n>>\n>> enable_mergejoin has the same problem.\n>>\n>> Test case:\n>>\n>> CREATE TABLE t_score_01(\n>> s_id int,\n>> s_score int,\n>> s_course char(8),\n>> c_id int);\n>>\n>> CREATE TABLE t_student_01(\n>> s_id int,\n>> s_name char(8));\n>>\n>> insert into t_score_01 values(\n>> generate_series(1, 1000000), random()*100, 'course', generate_series(1,\n>> 1000000));\n>>\n>> insert into t_student_01 values(generate_series(1, 1000000), 'name');\n>>\n>> analyze t_score_01;\n>> analyze t_student_01;\n>>\n>> SET enable_hashjoin TO off;\n>> SET enable_nestloop TO off;\n>> SET enable_mergejoin TO off;\n>>\n>> explain select count(*)\n>> from t_student_01 a join t_score_01 b on a.s_id=b.s_id;\n>>\n>> After disabling all three, the HashJoin path should still be chosen.\n>>\n> \n> It's not clear to me why that behavior would be desirable? Why is this\n> an issue you need so solve?\n> \nBecause someone noticed that when he set enable_hashjoin, \nenable_mergejoin and enable_nestloop to off. The statement seemed to get \nstuck (actually because it chose the NestedLoop path, which took a long \nlong time to run).\nIf enable_hashjoin and enable_nestloop disable generating these two \npaths. Then enable_nestloop should do the same thing, but it doesn't.\n\n> AFAIK the reason why some paths are actually disabled (not built at all)\n> while others are only penalized by adding disable_cost is that we need\n> to end up with at least one way to execute the query. So we pick a path\n> that we know is possible (e.g. seqscan) and hard-disable other paths.\n> But the always-possible path is only soft-disabled by disable_cost.\n> \n> For joins, we do the same thing. The hash/merge joins may not be\n> possible, because the data types may not have hash/sort operators, etc.\n> Nestloop is always possible. So we soft-disable nestloop but\n> hard-disable hash/merge joins.\n> \n> I doubt we want to change this behavior, unless there's a good reason to\n> do that ...\nIt doesn't have to change. Because selecting NestedLoop doesn't really \nget stuck either. It just takes too long to run.\n\nI will change the patch status to Withdrawn.\n> \n> \n> regards\n> \n\n\n\n",
"msg_date": "Tue, 4 Apr 2023 15:38:37 +0800",
"msg_from": "Quan Zongliang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "On Tue, Apr 4, 2023 at 3:38 AM Quan Zongliang <[email protected]> wrote:\n> Because someone noticed that when he set enable_hashjoin,\n> enable_mergejoin and enable_nestloop to off. The statement seemed to get\n> stuck (actually because it chose the NestedLoop path, which took a long\n> long time to run).\n> If enable_hashjoin and enable_nestloop disable generating these two\n> paths. Then enable_nestloop should do the same thing, but it doesn't.\n\nThis all seems like expected behavior. If you disable an entire plan\ntype, you should expect to get some bad plans. And if you disable all\nthe plan types, you should still expect to get some plan, but maybe an\nextremely bad one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 4 Apr 2023 09:50:05 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "On Mon, 3 Apr 2023 at 19:32, Tom Lane <[email protected]> wrote:\n>\n> Or we could rethink the design goal of not allowing enable_foo switches\n> to cause us to fail to make a plan. That might be unusable though.\n\nOff the top of my head I don't see why. It's not like the possible\nplans are going to change on you often, only when DDL changes the\nschema.\n\nThe only one that gives me pause is enable_seqscan. I've seen multiple\nsites that turn it off as a hammer to force OLTP-style plans. They\nstill get sequential scans where they're absolutely necessary such as\nsmall reference tables with no usable index and rely on that\nbehaviour.\n\nIn that case we would ideally generate a realistic cost estimate for\nthe unavoidable sequential scan to avoid twisting the rest of the plan\nin strange ways.\n\nBut perhaps these sites would be better served with different\nmachinery anyways. If they actually did get a sequential scan on a\nlarge table or any query where the estimate was very high where they\nwere expecting low latency OLTP queries perhaps they would prefer to\nget an error than some weird plan anyways.\n\nAnd for query planning debugging purposes of course it would be more\npowerful to be able to enable/disable plan types per-node. That would\navoid the problem of not being able to effectively test a plan without\na sequential scan on one table when another table still needs it. But\nthat direction...\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 5 Apr 2023 11:34:07 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> On Mon, 3 Apr 2023 at 19:32, Tom Lane <[email protected]> wrote:\n>> Or we could rethink the design goal of not allowing enable_foo switches\n>> to cause us to fail to make a plan. That might be unusable though.\n\n> The only one that gives me pause is enable_seqscan. I've seen multiple\n> sites that turn it off as a hammer to force OLTP-style plans.\n\nYeah, that. There are definitely people using some of these switches\nin production, hence relying on the current (and documented) behavior.\nOn the whole I doubt we can get away with that answer.\n\n> In that case we would ideally generate a realistic cost estimate for\n> the unavoidable sequential scan to avoid twisting the rest of the plan\n> in strange ways.\n\nAs I mentioned earlier, I think it might be possible to hack up the\nseqscan case to avoid use of disable_cost pretty easily. It's far\neasier to detect that no other plans are possible than it is once\nyou get to the join stage. Perhaps that's worth doing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Apr 2023 13:05:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why enable_hashjoin Completely disables HashJoin"
}
] |
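
A compact sketch of the comparison rule Tom Lane describes above, with the count of disabled plan types kept as a separately compared field rather than folded into the cost. Illustrative C only; these are not the real Path or add_path() structures.

    typedef struct ExamplePathCost
    {
        double  total_cost;     /* ordinary cost estimate */
        int     disabled_nodes; /* how many disabled plan types the path uses */
    } ExamplePathCost;

    /* Sort first on the number of disabled nodes, then on the numeric cost. */
    static int
    example_cost_cmp(const ExamplePathCost *a, const ExamplePathCost *b)
    {
        if (a->disabled_nodes != b->disabled_nodes)
            return (a->disabled_nodes < b->disabled_nodes) ? -1 : 1;
        if (a->total_cost != b->total_cost)
            return (a->total_cost < b->total_cost) ? -1 : 1;
        return 0;
    }

The trade-off raised in the thread is that every add_path() comparison would pay for the extra field, which is why the idea had not been pursued for a non-mainstream use case.
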
[
{
"msg_contents": "Dear hackers,\n\nWhile reading codes, I noticed that pg_upgrade/t/001_basic.pl and\npg_upgrade/t/002_pg_upgrade.pl do not contain the copyright.\n\nI checked briefly and almost all files have that, so I thought they missed it.\nPSA the patch to fix them.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Mon, 3 Apr 2023 13:55:24 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add missing copyright for pg_upgrade/t/* files"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 7:25 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> While reading codes, I noticed that pg_upgrade/t/001_basic.pl and\n> pg_upgrade/t/002_pg_upgrade.pl do not contain the copyright.\n>\n> I checked briefly and almost all files have that, so I thought they missed it.\n> PSA the patch to fix them.\n>\n\nYeah, it is good to have the Copyright to keep it consistent with\nother test files and otherwise as well.\n\n--- a/src/bin/pg_upgrade/t/001_basic.pl\n+++ b/src/bin/pg_upgrade/t/001_basic.pl\n@@ -1,3 +1,5 @@\n+# Copyright (c) 2022-2023, PostgreSQL Global Development Group\n\nHow did you decide on the starting year as 2022?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 4 Apr 2023 09:13:24 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add missing copyright for pg_upgrade/t/* files"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for responding!\r\n\r\n> \r\n> Yeah, it is good to have the Copyright to keep it consistent with\r\n> other test files and otherwise as well.\r\n> \r\n> --- a/src/bin/pg_upgrade/t/001_basic.pl\r\n> +++ b/src/bin/pg_upgrade/t/001_basic.pl\r\n> @@ -1,3 +1,5 @@\r\n> +# Copyright (c) 2022-2023, PostgreSQL Global Development Group\r\n> \r\n> How did you decide on the starting year as 2022?\r\n\r\nI checked the commit log.\r\nAbout 001_basic.pl, it had been added at 2017 once but been reverted soon [1][2].\r\n322bec added the file again at 2022[3], so I chose 2022.\r\n\r\nAbout 002_pg_upgrade.pl, it has been added at the same time[3]. \r\nDefinitively it should be 2022.\r\n\r\n[1]: https://github.com/postgres/postgres/commit/f41e56c76e39f02bef7ba002c9de03d62b76de4d\r\n[2] https://github.com/postgres/postgres/commit/58ffe141eb37c3f027acd25c1fc6b36513bf9380\r\n[3: https://github.com/postgres/postgres/commit/322becb6085cb92d3708635eea61b45776bf27b6\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 4 Apr 2023 04:18:53 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Add missing copyright for pg_upgrade/t/* files"
},
{
"msg_contents": "\n> I checked the commit log.\n> About 001_basic.pl, it had been added at 2017 once but been reverted soon [1][2].\n> 322bec added the file again at 2022[3], so I chose 2022.\n>\n> About 002_pg_upgrade.pl, it has been added at the same time[3].\n> Definitively it should be 2022.\n\nIt is great to make sure each file has the Copyright and I see this \npatch has already been committed.\n\nJust curious, is there a rule to add Copyright to Postgres? For example, \nif I run a command `grep -rn Copyright --include=\"*.pl\" | awk -F ':' \n{'print $2, $1'} | sort -nr` inside postgres/src/bin, It seems most \nCopyright were added to the second line, but these two were added to the \nvery beginning (of course, there are three other files following this \npattern as well).\n\n...\n\n2 pg_archivecleanup/t/010_pg_archivecleanup.pl\n2 pg_amcheck/t/005_opclass_damage.pl\n2 pg_amcheck/t/004_verify_heapam.pl\n2 pg_amcheck/t/003_check.pl\n2 pg_amcheck/t/002_nonesuch.pl\n2 pg_amcheck/t/001_basic.pl\n2 initdb/t/001_initdb.pl\n1 pg_verifybackup/t/010_client_untar.pl\n1 pg_verifybackup/t/008_untar.pl\n1 pg_upgrade/t/002_pg_upgrade.pl\n1 pg_upgrade/t/001_basic.pl\n1 pg_basebackup/t/011_in_place_tablespace.pl\n\n\nDavid\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 12:56:01 -0700",
"msg_from": "David Zhang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add missing copyright for pg_upgrade/t/* files"
},
{
"msg_contents": "Dear David,\r\n\r\n> It is great to make sure each file has the Copyright and I see this\r\n> patch has already been committed.\r\n\r\nThanks!\r\nWhile checking more, I was surprised because I found many files which do not\r\nhave Copyright via \" grep -Lr Copyright --exclude-dir .git ...\" command.\r\nI'm not sure whether it is expected, but all sql files in src/test/regress/sql and\r\nmany files in contrib do not have. Do you know something about it?\r\n\r\n> Just curious, is there a rule to add Copyright to Postgres?\r\n\r\nSorry, I'm not sure about it. Before submitting a patch I have checked the\r\nmanual that \"PostgreSQL Coding Conventions\", but I could not find any.\r\n\r\n> For example,\r\n> if I run a command `grep -rn Copyright --include=\"*.pl\" | awk -F ':'\r\n> {'print $2, $1'} | sort -nr` inside postgres/src/bin, It seems most\r\n> Copyright were added to the second line, but these two were added to the\r\n> very beginning (of course, there are three other files following this\r\n> pattern as well).\r\n\r\nThere seems a tendency that Copyright for recently added files have added it to\r\nthe very beginning, but I can suspect from the result that there are no specific\r\nrules about it.\r\n\r\n```\r\n$ grep -rn Copyright --include=\"*.pl\" | awk -F ':' {'print $2'} | sort -nr | uniq -c\r\n 1 753\r\n 1 752\r\n 1 717\r\n...\r\n 22 3\r\n 158 2\r\n 24 1\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 24 Apr 2023 07:08:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Add missing copyright for pg_upgrade/t/* files"
},
{
"msg_contents": "On 2023-04-24 Mo 03:08, Hayato Kuroda (Fujitsu) wrote:\n> Dear David,\n>\n>> It is great to make sure each file has the Copyright and I see this\n>> patch has already been committed.\n> Thanks!\n> While checking more, I was surprised because I found many files which do not\n> have Copyright via \" grep -Lr Copyright --exclude-dir .git ...\" command.\n> I'm not sure whether it is expected, but all sql files in src/test/regress/sql and\n> many files in contrib do not have. Do you know something about it?\n>\n>> Just curious, is there a rule to add Copyright to Postgres?\n> Sorry, I'm not sure about it. Before submitting a patch I have checked the\n> manual that \"PostgreSQL Coding Conventions\", but I could not find any.\n>\n>> For example,\n>> if I run a command `grep -rn Copyright --include=\"*.pl\" | awk -F ':'\n>> {'print $2, $1'} | sort -nr` inside postgres/src/bin, It seems most\n>> Copyright were added to the second line, but these two were added to the\n>> very beginning (of course, there are three other files following this\n>> pattern as well).\n> There seems a tendency that Copyright for recently added files have added it to\n> the very beginning, but I can suspect from the result that there are no specific\n> rules about it.\n>\n> ```\n> $ grep -rn Copyright --include=\"*.pl\" | awk -F ':' {'print $2'} | sort -nr | uniq -c\n> 1 753\n> 1 752\n> 1 717\n> ...\n> 22 3\n> 158 2\n> 24 1\n> ```\n\n\nI suspect many of those came from the last time I did this, at commit \n8fa6e6919c.\n\nIIRC I added \"\\nCopyright...\\n\\n\" at line 1 unless that was a \"#!\" line, \nin which case I added it after line 1 (it was done via a sed script IIRC)\n\nI think since then perltidy has dissolved some of the extra blank lines \nadded at the end.\n\nI don't think we actually have a rule about it, but the pattern I \ndescribed doesn't seem unreasonable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-24 Mo 03:08, Hayato Kuroda\n (Fujitsu) wrote:\n\n\nDear David,\n\n\n\nIt is great to make sure each file has the Copyright and I see this\npatch has already been committed.\n\n\n\nThanks!\nWhile checking more, I was surprised because I found many files which do not\nhave Copyright via \" grep -Lr Copyright --exclude-dir .git ...\" command.\nI'm not sure whether it is expected, but all sql files in src/test/regress/sql and\nmany files in contrib do not have. Do you know something about it?\n\n\n\nJust curious, is there a rule to add Copyright to Postgres?\n\n\n\nSorry, I'm not sure about it. 
Before submitting a patch I have checked the\nmanual that \"PostgreSQL Coding Conventions\", but I could not find any.\n\n\n\nFor example,\nif I run a command `grep -rn Copyright --include=\"*.pl\" | awk -F ':'\n{'print $2, $1'} | sort -nr` inside postgres/src/bin, It seems most\nCopyright were added to the second line, but these two were added to the\nvery beginning (of course, there are three other files following this\npattern as well).\n\n\n\nThere seems a tendency that Copyright for recently added files have added it to\nthe very beginning, but I can suspect from the result that there are no specific\nrules about it.\n\n```\n$ grep -rn Copyright --include=\"*.pl\" | awk -F ':' {'print $2'} | sort -nr | uniq -c\n 1 753\n 1 752\n 1 717\n...\n 22 3\n 158 2\n 24 1\n```\n\n\n\nI suspect many of those came from the last time I did this, at\n commit 8fa6e6919c.\nIIRC I added \"\\nCopyright...\\n\\n\" at line 1 unless that was a\n \"#!\" line, in which case I added it after line 1 (it was done via\n a sed script IIRC)\n\nI think since then perltidy has dissolved some of the extra blank\n lines added at the end.\nI don't think we actually have a rule about it, but the pattern I\n described doesn't seem unreasonable.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 24 Apr 2023 10:14:34 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add missing copyright for pg_upgrade/t/* files"
}
] |
[
{
"msg_contents": "Hello hackers.\n\nThis patch adds the backend's statement_timeout value to pg_stat_activity.\n\nThis would provide some insights on clients that are disabling a default\nstatement timeout or overriding it through a pgbouncer, messing with other\nsessions.\n\npg_stat_activity seemed like the best place to have this information.\n\nRegards,\nAnthonin",
"msg_date": "Mon, 3 Apr 2023 20:51:31 +0200",
"msg_from": "Anthonin Bonnefoy <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add statement_timeout in pg_stat_activity"
}
] |
[
{
"msg_contents": "Hi,\n\nLooks like fairywren is possibly seeing something I saw before and spent many\ndays looking into:\nhttps://postgr.es/m/20220909235836.lz3igxtkcjb5w7zb%40awork3.anarazel.de\nwhich led me to add the following to .cirrus.yml:\n\n # Cirrus defaults to SetErrorMode(SEM_NOGPFAULTERRORBOX | ...). That\n # prevents crash reporting from working unless binaries do SetErrorMode()\n # themselves. Furthermore, it appears that either python or, more likely,\n # the C runtime has a bug where SEM_NOGPFAULTERRORBOX can very\n # occasionally *trigger* a crash on process exit - which is hard to debug,\n # given that it explicitly prevents crash dumps from working...\n # 0x8001 is SEM_FAILCRITICALERRORS | SEM_NOOPENFILEERRORBOX\n CIRRUS_WINDOWS_ERROR_MODE: 0x8001\n\n\nThe mingw folks also spent a lot of time looking into this ([1]), without a\nlot of success.\n\nIt sure looks like it might be a windows C runtime issue - none of the\nstacktrace handling python has gets invoked. I could not find any relevant\nbehavoural differences in python's code that depend on SEM_NOGPFAULTERRORBOX\nbeing set.\n\nIt'd be interesting to see if fairywren's occasional failures go away if you\nset MSYS=winjitdebug (which prevents msys from adding SEM_NOGPFAULTERRORBOX to\nErrorMode).\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/msys2/MINGW-packages/issues/11864\n\n\n",
"msg_date": "Mon, 3 Apr 2023 18:15:46 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "fairywren exiting in ecpg"
},
{
"msg_contents": "On 2023-04-03 Mo 21:15, Andres Freund wrote:\n> Hi,\n>\n> Looks like fairywren is possibly seeing something I saw before and spent many\n> days looking into:\n> https://postgr.es/m/20220909235836.lz3igxtkcjb5w7zb%40awork3.anarazel.de\n> which led me to add the following to .cirrus.yml:\n>\n> # Cirrus defaults to SetErrorMode(SEM_NOGPFAULTERRORBOX | ...). That\n> # prevents crash reporting from working unless binaries do SetErrorMode()\n> # themselves. Furthermore, it appears that either python or, more likely,\n> # the C runtime has a bug where SEM_NOGPFAULTERRORBOX can very\n> # occasionally *trigger* a crash on process exit - which is hard to debug,\n> # given that it explicitly prevents crash dumps from working...\n> # 0x8001 is SEM_FAILCRITICALERRORS | SEM_NOOPENFILEERRORBOX\n> CIRRUS_WINDOWS_ERROR_MODE: 0x8001\n>\n>\n> The mingw folks also spent a lot of time looking into this ([1]), without a\n> lot of success.\n>\n> It sure looks like it might be a windows C runtime issue - none of the\n> stacktrace handling python has gets invoked. I could not find any relevant\n> behavoural differences in python's code that depend on SEM_NOGPFAULTERRORBOX\n> being set.\n>\n> It'd be interesting to see if fairywren's occasional failures go away if you\n> set MSYS=winjitdebug (which prevents msys from adding SEM_NOGPFAULTERRORBOX to\n> ErrorMode).\n>\n\ntrying now. Since this happened every build or so it shouldn't take long \nfor us to see.\n\n(I didn't see anything in the MSYS2 docs that specified the possible \nvalues for MSYS :-( )\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-03 Mo 21:15, Andres Freund\n wrote:\n\n\nHi,\n\nLooks like fairywren is possibly seeing something I saw before and spent many\ndays looking into:\nhttps://postgr.es/m/20220909235836.lz3igxtkcjb5w7zb%40awork3.anarazel.de\nwhich led me to add the following to .cirrus.yml:\n\n # Cirrus defaults to SetErrorMode(SEM_NOGPFAULTERRORBOX | ...). That\n # prevents crash reporting from working unless binaries do SetErrorMode()\n # themselves. Furthermore, it appears that either python or, more likely,\n # the C runtime has a bug where SEM_NOGPFAULTERRORBOX can very\n # occasionally *trigger* a crash on process exit - which is hard to debug,\n # given that it explicitly prevents crash dumps from working...\n # 0x8001 is SEM_FAILCRITICALERRORS | SEM_NOOPENFILEERRORBOX\n CIRRUS_WINDOWS_ERROR_MODE: 0x8001\n\n\nThe mingw folks also spent a lot of time looking into this ([1]), without a\nlot of success.\n\nIt sure looks like it might be a windows C runtime issue - none of the\nstacktrace handling python has gets invoked. I could not find any relevant\nbehavoural differences in python's code that depend on SEM_NOGPFAULTERRORBOX\nbeing set.\n\nIt'd be interesting to see if fairywren's occasional failures go away if you\nset MSYS=winjitdebug (which prevents msys from adding SEM_NOGPFAULTERRORBOX to\nErrorMode).\n\n\n\n\n\ntrying now. Since this happened every build or so it shouldn't\n take long for us to see. \n\n(I didn't see anything in the MSYS2 docs that specified the\n possible values for MSYS :-( )\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 4 Apr 2023 08:22:00 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fairywren exiting in ecpg"
},
{
"msg_contents": "On 2023-04-04 Tu 08:22, Andrew Dunstan wrote:\n>\n>\n> On 2023-04-03 Mo 21:15, Andres Freund wrote:\n>> Hi,\n>>\n>> Looks like fairywren is possibly seeing something I saw before and spent many\n>> days looking into:\n>> https://postgr.es/m/20220909235836.lz3igxtkcjb5w7zb%40awork3.anarazel.de\n>> which led me to add the following to .cirrus.yml:\n>>\n>> # Cirrus defaults to SetErrorMode(SEM_NOGPFAULTERRORBOX | ...). That\n>> # prevents crash reporting from working unless binaries do SetErrorMode()\n>> # themselves. Furthermore, it appears that either python or, more likely,\n>> # the C runtime has a bug where SEM_NOGPFAULTERRORBOX can very\n>> # occasionally *trigger* a crash on process exit - which is hard to debug,\n>> # given that it explicitly prevents crash dumps from working...\n>> # 0x8001 is SEM_FAILCRITICALERRORS | SEM_NOOPENFILEERRORBOX\n>> CIRRUS_WINDOWS_ERROR_MODE: 0x8001\n>>\n>>\n>> The mingw folks also spent a lot of time looking into this ([1]), without a\n>> lot of success.\n>>\n>> It sure looks like it might be a windows C runtime issue - none of the\n>> stacktrace handling python has gets invoked. I could not find any relevant\n>> behavoural differences in python's code that depend on SEM_NOGPFAULTERRORBOX\n>> being set.\n>>\n>> It'd be interesting to see if fairywren's occasional failures go away if you\n>> set MSYS=winjitdebug (which prevents msys from adding SEM_NOGPFAULTERRORBOX to\n>> ErrorMode).\n>>\n>\n> trying now. Since this happened every build or so it shouldn't take \n> long for us to see.\n>\n> (I didn't see anything in the MSYS2 docs that specified the possible \n> values for MSYS :-( )\n>\n>\n>\n\nThe error hasn't been seen since I set this about a week ago.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-04 Tu 08:22, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-04-03 Mo 21:15, Andres Freund\n wrote:\n\n\nHi,\n\nLooks like fairywren is possibly seeing something I saw before and spent many\ndays looking into:\nhttps://postgr.es/m/20220909235836.lz3igxtkcjb5w7zb%40awork3.anarazel.de\nwhich led me to add the following to .cirrus.yml:\n\n # Cirrus defaults to SetErrorMode(SEM_NOGPFAULTERRORBOX | ...). That\n # prevents crash reporting from working unless binaries do SetErrorMode()\n # themselves. Furthermore, it appears that either python or, more likely,\n # the C runtime has a bug where SEM_NOGPFAULTERRORBOX can very\n # occasionally *trigger* a crash on process exit - which is hard to debug,\n # given that it explicitly prevents crash dumps from working...\n # 0x8001 is SEM_FAILCRITICALERRORS | SEM_NOOPENFILEERRORBOX\n CIRRUS_WINDOWS_ERROR_MODE: 0x8001\n\n\nThe mingw folks also spent a lot of time looking into this ([1]), without a\nlot of success.\n\nIt sure looks like it might be a windows C runtime issue - none of the\nstacktrace handling python has gets invoked. I could not find any relevant\nbehavoural differences in python's code that depend on SEM_NOGPFAULTERRORBOX\nbeing set.\n\nIt'd be interesting to see if fairywren's occasional failures go away if you\nset MSYS=winjitdebug (which prevents msys from adding SEM_NOGPFAULTERRORBOX to\nErrorMode).\n\n\n\n\n\ntrying now. Since this happened every build or so it shouldn't\n take long for us to see. 
\n\n(I didn't see anything in the MSYS2 docs that specified the\n possible values for MSYS :-( )\n\n\n\n\n\n\n\nThe error hasn't been seen since I set this about a week ago.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 11 Apr 2023 07:10:20 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: fairywren exiting in ecpg"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 07:10:20 -0400, Andrew Dunstan wrote:\n> The error hasn't been seen since I set this about a week ago.\n\nThis issue really bothers me, but I am at my wits end how to debug it, given\nthat we get a segfault only if we *disable* getting crash reports / core dumps\nin some form. There's no debug printout or anything, python just exits with an\nerror code indicating an access violation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Apr 2023 11:56:23 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: fairywren exiting in ecpg"
}
] |