[ { "msg_contents": "Hi,\n\nI was looking for a pattern to destroy a hashtable (dynahash).allocated\nin TopMemoryContext\nI found one pattern : create_seq_hashtable uses TopMemoryContext\n memory context to create hash table. It calls hash_destroy in\n ResetSequenceCaches. hash_destroy will destroy the memory\ncontext(TopMemoryContext). Is it the right way to use hash_destroy ?\n\nI have allocated a hash table in TopMemoryContext context and I want\nto destroy it. It seems to me that there is no function to destroy hash\ntable allocated in TopMemoryContext context.\n\n-- Sharique\n\nHi, I was looking for a pattern to destroy a hashtable (dynahash).allocated in TopMemoryContext I found one pattern : create_seq_hashtable uses TopMemoryContext memory context to create hash table. It calls hash_destroy in ResetSequenceCaches. hash_destroy will destroy the memory context(TopMemoryContext). Is it the right way to use hash_destroy ?I have allocated a hash table in TopMemoryContext context and I want to destroy it. It seems to me that there is no function to destroy hash table allocated in TopMemoryContext context.  -- Sharique", "msg_date": "Wed, 17 Apr 2024 08:34:18 +0100", "msg_from": "Sharique Muhammed <[email protected]>", "msg_from_op": true, "msg_subject": "hash_destroy on the hash table allocated with TopMemoryContext" }, { "msg_contents": "On Wed, Apr 17, 2024 at 1:04 PM Sharique Muhammed <[email protected]>\nwrote:\n\n> Hi,\n>\n> I was looking for a pattern to destroy a hashtable (dynahash).allocated\n> in TopMemoryContext\n> I found one pattern : create_seq_hashtable uses TopMemoryContext\n> memory context to create hash table. It calls hash_destroy in\n> ResetSequenceCaches. hash_destroy will destroy the memory\n> context(TopMemoryContext). Is it the right way to use hash_destroy ?\n>\n\nThe context used to pass hash_create() is used to create a child memory\ncontext. The hash table is allocated in the child memory context and it's\nthat context which is destoryed by hash_destory(). Isn't it?\n\n\n>\n> I have allocated a hash table in TopMemoryContext context and I want\n> to destroy it. It seems to me that there is no function to destroy hash\n> table allocated in TopMemoryContext context.\n>\n>\nHow did you create hash table in TopMemoryContext?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Wed, Apr 17, 2024 at 1:04 PM Sharique Muhammed <[email protected]> wrote:Hi, I was looking for a pattern to destroy a hashtable (dynahash).allocated in TopMemoryContext I found one pattern : create_seq_hashtable uses TopMemoryContext memory context to create hash table. It calls hash_destroy in ResetSequenceCaches. hash_destroy will destroy the memory context(TopMemoryContext). Is it the right way to use hash_destroy ?The context used to pass hash_create() is used to create a child memory context. The hash table is allocated in the child memory context and it's that context which is destoryed by hash_destory(). Isn't it? I have allocated a hash table in TopMemoryContext context and I want to destroy it. It seems to me that there is no function to destroy hash table allocated in TopMemoryContext context.  How did you create hash table in TopMemoryContext? -- Best Wishes,Ashutosh Bapat", "msg_date": "Wed, 17 Apr 2024 14:12:08 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hash_destroy on the hash table allocated with TopMemoryContext" } ]
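Annotation: the answer in this thread is that hash_create() places the dynahash table in a child memory context it creates under the context passed via HASHCTL.hcxt, and hash_destroy() deletes only that child context, leaving TopMemoryContext intact. A minimal sketch of that pattern follows; the entry type, table name, and size hint are made up for illustration and are not taken from any particular caller.

#include "postgres.h"
#include "utils/hsearch.h"
#include "utils/memutils.h"

typedef struct MyCacheEntry
{
	Oid			key;			/* hash key */
	int			some_value;		/* illustrative payload */
} MyCacheEntry;

static HTAB *my_cache = NULL;

static void
create_my_cache(void)
{
	HASHCTL		ctl;

	ctl.keysize = sizeof(Oid);
	ctl.entrysize = sizeof(MyCacheEntry);
	ctl.hcxt = TopMemoryContext;	/* parent of the table's own context */

	my_cache = hash_create("my cache",
						   128,		/* initial size hint */
						   &ctl,
						   HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
}

static void
destroy_my_cache(void)
{
	if (my_cache != NULL)
	{
		/* Deletes the table's private child context, not TopMemoryContext */
		hash_destroy(my_cache);
		my_cache = NULL;
	}
}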
[ { "msg_contents": "I checked the generated ecpg_config.h with make and meson, and the meson \none is missing\n\n#define HAVE_LONG_LONG_INT 1\n\nThis is obviously quite uninteresting, since that is required by C99. \nBut it would be more satisfactory if we didn't have discrepancies like \nthat. Note that we also kept ENABLE_THREAD_SAFETY in ecpg_config.h for \ncompatibility.\n\nFixing this on the meson side would be like\n\ndiff --git a/src/interfaces/ecpg/include/meson.build \nb/src/interfaces/ecpg/include/meson.build\nindex 31610fef589..b85486acbea 100644\n--- a/src/interfaces/ecpg/include/meson.build\n+++ b/src/interfaces/ecpg/include/meson.build\n@@ -12,6 +12,7 @@ ecpg_conf_keys = [\n ecpg_conf_data = configuration_data()\n\n ecpg_conf_data.set('ENABLE_THREAD_SAFETY', 1)\n+ecpg_conf_data.set('HAVE_LONG_LONG_INT', 1)\n\n foreach key : ecpg_conf_keys\n if cdata.has(key)\n\nAlternatively, we could remove the symbol from the make side.\n\n\n", "msg_date": "Wed, 17 Apr 2024 16:48:22 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "ecpg_config.h symbol missing with meson" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I checked the generated ecpg_config.h with make and meson, and the meson \n> one is missing\n\n> #define HAVE_LONG_LONG_INT 1\n\n> This is obviously quite uninteresting, since that is required by C99. \n> But it would be more satisfactory if we didn't have discrepancies like \n> that. Note that we also kept ENABLE_THREAD_SAFETY in ecpg_config.h for \n> compatibility.\n> ...\n> Alternatively, we could remove the symbol from the make side.\n\nThink I'd vote for removing it, since we use it nowhere.\nThe ENABLE_THREAD_SAFETY precedent feels a little bit different,\nsince there's not the C99-requires-the-feature angle.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Apr 2024 12:15:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ecpg_config.h symbol missing with meson" }, { "msg_contents": "On 17.04.24 18:15, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> I checked the generated ecpg_config.h with make and meson, and the meson\n>> one is missing\n> \n>> #define HAVE_LONG_LONG_INT 1\n> \n>> This is obviously quite uninteresting, since that is required by C99.\n>> But it would be more satisfactory if we didn't have discrepancies like\n>> that. Note that we also kept ENABLE_THREAD_SAFETY in ecpg_config.h for\n>> compatibility.\n>> ...\n>> Alternatively, we could remove the symbol from the make side.\n> \n> Think I'd vote for removing it, since we use it nowhere.\n> The ENABLE_THREAD_SAFETY precedent feels a little bit different,\n> since there's not the C99-requires-the-feature angle.\n\nOk, fixed by removing instead.\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 08:32:26 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ecpg_config.h symbol missing with meson" } ]
[ { "msg_contents": "Hackers,\n I often use the ctrl-click on the link after getting help in psql. A\ngreat feature.\n\nChallenge, when there is no help, you don't get any link.\n\n My thought process is to add a default response that would take them to\nhttps://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q={TOKEN}\n<https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=%7BTOKEN%7D>\n\n*Example:*\n\\h current_setting\nNo help available for \"current_setting\".\nTry \\h with no arguments to see available help.\n\nhttps://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting\n\n To me, this is a huge step in helping me get to the docs.\n\nThis is Question 1: Do others see the potential value here?\n\nQuestion 2: What if we allowed the users to set some extra link Templates\nusing \\pset??\n\n\\pset help_assist_link_1 = https://www.google.com/search?q={token}'\n\\pset help_assist_link_2 = '\nhttps://wiki.postgresql.org/index.php?title=Special%3ASearch&search={token}&go=Go\n<https://wiki.postgresql.org/index.php?title=Special%3ASearch&search=%7Btoken%7D&go=Go>\n'\n\nSuch that the output, this time would be:\n*Example:*\n\\h current_setting\nNo help available for \"current_setting\".\nTry \\h with no arguments to see available help.\n\nhttps://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting\n\nhttps://www.google.com/search?q=current_setting\nhttps://wiki.postgresql.org/index.php?title=Special%3ASearch&search=current_setting&go=Go\n\nThis Latter feature, I would consider applying to even successful searches?\n[Based on Feedback here]\n\nThoughts?\n\n  Hackers,  I often use the ctrl-click on the link after getting help in psql.  A great feature.Challenge, when there is no help, you don't get any link.  My thought process is to add a default response that would take them tohttps://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q={TOKEN}Example:\\h current_settingNo help available for \"current_setting\".Try \\h with no arguments to see available help.https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting  To me, this is a huge step in helping me get to the docs.This is Question 1: Do others see the potential value here?Question 2: What if we allowed the users to set some extra link Templates using \\pset??\\pset help_assist_link_1 =  https://www.google.com/search?q={token}'\\pset help_assist_link_2 = 'https://wiki.postgresql.org/index.php?title=Special%3ASearch&search={token}&go=Go'Such that the output, this time would be:Example:\\h current_settingNo help available for \"current_setting\".Try \\h with no arguments to see available help.https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_settinghttps://www.google.com/search?q=current_settinghttps://wiki.postgresql.org/index.php?title=Special%3ASearch&search=current_setting&go=GoThis Latter feature, I would consider applying to even successful searches? [Based on Feedback here]Thoughts?", "msg_date": "Wed, 17 Apr 2024 13:47:05 -0400", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Idea Feedback: psql \\h misses -> Offers Links?" 
}, { "msg_contents": "On 17.04.24 19:47, Kirk Wolak wrote:\n> *Example:*\n> \\h current_setting\n> No help available for \"current_setting\".\n> Try \\h with no arguments to see available help.\n> \n> https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting \n> <https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting>\n\nOne problem is that this search URL does not actually produce any useful \ninformation about current_setting.\n\n\n\n", "msg_date": "Thu, 18 Apr 2024 20:37:22 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea Feedback: psql \\h misses -> Offers Links?" }, { "msg_contents": "On Thu, Apr 18, 2024 at 2:37 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 17.04.24 19:47, Kirk Wolak wrote:\n> > *Example:*\n> > \\h current_setting\n> > No help available for \"current_setting\".\n> > Try \\h with no arguments to see available help.\n> >\n> > https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting\n> > <https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting>\n>\n> One problem is that this search URL does not actually produce any useful\n> information about current_setting.\n>\n> I see what you mean, but doesn't that imply our web search feature is\nweak? That's the full name of an existing function, and it's in the index.\nBut it cannot be found if searched from the website?\n\nOn Thu, Apr 18, 2024 at 2:37 PM Peter Eisentraut <[email protected]> wrote:On 17.04.24 19:47, Kirk Wolak wrote:\n> *Example:*\n> \\h current_setting\n> No help available for \"current_setting\".\n> Try \\h with no arguments to see available help.\n> \n> https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting \n> <https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting>\n\nOne problem is that this search URL does not actually produce any useful \ninformation about current_setting.\nI see what you mean, but doesn't that imply our web search feature is weak?  That's the full name of an existing function, and it's in the index. But it cannot be found if searched from the website?", "msg_date": "Thu, 18 Apr 2024 17:29:14 -0400", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Idea Feedback: psql \\h misses -> Offers Links?" }, { "msg_contents": "Kirk Wolak <[email protected]> writes:\n\n> On Thu, Apr 18, 2024 at 2:37 PM Peter Eisentraut <[email protected]>\n> wrote:\n>\n>> On 17.04.24 19:47, Kirk Wolak wrote:\n>> > *Example:*\n>> > \\h current_setting\n>> > No help available for \"current_setting\".\n>> > Try \\h with no arguments to see available help.\n>> >\n>> > https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting\n>> > <https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting>\n>>\n>> One problem is that this search URL does not actually produce any useful\n>> information about current_setting.\n>\n> I see what you mean, but doesn't that imply our web search feature is\n> weak? That's the full name of an existing function, and it's in the index.\n> But it cannot be found if searched from the website?\n\nWhile I do think we could do a better job of providing links directly to\nthe documentation of functions and config parameters, I wouldn't say\nthat the search result is _completely_ useless in this case. 
The first\nhit is https://www.postgresql.org/docs/16/functions-admin.html, which is\nwhere current_setting() is documented (it's even the first function on\nthat page, but that's just luck in this case).\n\n- ilmari\n\n\n", "msg_date": "Thu, 18 Apr 2024 22:46:52 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea Feedback: psql \\h misses -> Offers Links?" }, { "msg_contents": "On 18.04.24 23:29, Kirk Wolak wrote:\n> On Thu, Apr 18, 2024 at 2:37 PM Peter Eisentraut <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 17.04.24 19:47, Kirk Wolak wrote:\n> > *Example:*\n> > \\h current_setting\n> > No help available for \"current_setting\".\n> > Try \\h with no arguments to see available help.\n> >\n> >\n> https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting <https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting>\n> >\n> <https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting <https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting>>\n> \n> One problem is that this search URL does not actually produce any\n> useful\n> information about current_setting.\n> \n> I see what you mean, but doesn't that imply our web search feature is \n> weak?  That's the full name of an existing function, and it's in the \n> index. But it cannot be found if searched from the website?\n\nMaybe it's weak, or maybe we are using it wrong, I don't know.\n\n\\h has always been (a) local help, and (b) help specifically about SQL \ncommands. If we are going to vastly expand the scope, we need to think \nit through more thoroughly. I could see some kind of \\onlinehelp \ncommand, or maybe even redesigning \\h altogether.\n\nAlso, as you say, the function is in the documentation index, so there \nshould be a deterministic way to go directly to exactly the target \ndestination. Maybe the full-text search functionality of the web site \nis the wrong interface for that.\n\n\n\n", "msg_date": "Fri, 19 Apr 2024 10:45:45 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea Feedback: psql \\h misses -> Offers Links?" }, { "msg_contents": "On Wed, Apr 17, 2024, at 2:47 PM, Kirk Wolak wrote:\n> I often use the ctrl-click on the link after getting help in psql. A great feature.\n> \n> Challenge, when there is no help, you don't get any link.\n> \n> My thought process is to add a default response that would take them to\n> https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q={TOKEN} <https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=%7BTOKEN%7D>\n> \n> *Example:*\n> \\h current_setting\n> No help available for \"current_setting\".\n> Try \\h with no arguments to see available help.\n\nThat's because current_setting is a function. Help says:\n\npostgres=# \\?\n.\n.\n.\nHelp\n \\? [commands] show help on backslash commands\n \\? options show help on psql command-line options\n \\? variables show help on special variables\n \\h [NAME] help on syntax of SQL commands, * for all commands\n\nIt is just for SQL commands.\n\n> https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting\n> \n> To me, this is a huge step in helping me get to the docs.\n> \n> This is Question 1: Do others see the potential value here?\n\nYes. However, I expect an exact and direct answer. There will be cases that the\nfirst result is not the one you are looking for. 
(You are expecting the\nfunction or parameter description but other page is on the top because it is\nmore relevant.) The referred URL does not point you to the direct link.\nInstead, you have to click again to be able to check the content.\n\n> Question 2: What if we allowed the users to set some extra link Templates using \\pset??\n> \n> \\pset help_assist_link_1 = https://www.google.com/search?q={token} <https://www.google.com/search?q=%7Btoken%7D>'\n> \\pset help_assist_link_2 = 'https://wiki.postgresql.org/index.php?title=Special%3ASearch&search={token}&go=Go <https://wiki.postgresql.org/index.php?title=Special%3ASearch&search=%7Btoken%7D&go=Go>'\n\nThat's a different idea. Are you proposing to provide URLs if this psql\nvariable is set and it doesn't find an entry (say \\h foo)? I'm not sure if it\nis a good idea to allow third-party URLs (even if it is configurable).\n\nIMO we should expand \\h to list documentation references for functions and GUCs\nusing SGML files. We already did it for SQL commands. Another broader idea is\nto build an inverted index similar to what Index [1] provides. The main problem\nwith this approach is to create a dependency between documentation build and\npsql. Maybe there is a reasonable way to obtain the links for each term.\n\n\n[1] https://www.postgresql.org/docs/current/bookindex.html\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Apr 17, 2024, at 2:47 PM, Kirk Wolak wrote:  I often use the ctrl-click on the link after getting help in psql.  A great feature.Challenge, when there is no help, you don't get any link.  My thought process is to add a default response that would take them tohttps://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q={TOKEN}Example:\\h current_settingNo help available for \"current_setting\".Try \\h with no arguments to see available help.That's because current_setting is a function. Help says:postgres=# \\?...Help  \\? [commands]          show help on backslash commands  \\? options             show help on psql command-line options  \\? variables           show help on special variables  \\h [NAME]              help on syntax of SQL commands, * for all commandsIt is just for SQL commands.https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=current_setting  To me, this is a huge step in helping me get to the docs.This is Question 1: Do others see the potential value here?Yes. However, I expect an exact and direct answer. There will be cases that thefirst result is not the one you are looking for. (You are expecting thefunction or parameter description but other page is on the top because it ismore relevant.) The referred URL does not point you to the direct link.Instead, you have to click again to be able to check the content.Question 2: What if we allowed the users to set some extra link Templates using \\pset??\\pset help_assist_link_1 =  https://www.google.com/search?q={token}'\\pset help_assist_link_2 = 'https://wiki.postgresql.org/index.php?title=Special%3ASearch&search={token}&go=Go'That's a different idea. Are you proposing to provide URLs if this psqlvariable is set and it doesn't find an entry (say \\h foo)? I'm not sure if itis a good idea to allow third-party URLs (even if it is configurable).IMO we should expand \\h to list documentation references for functions and GUCsusing SGML files. We already did it for SQL commands. Another broader idea isto build an inverted index similar to what Index [1] provides. 
The main problemwith this approach is to create a dependency between documentation build andpsql. Maybe there is a reasonable way to obtain the links for each term.[1] https://www.postgresql.org/docs/current/bookindex.html--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Fri, 19 Apr 2024 11:12:54 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea Feedback: psql \\h misses -> Offers Links?" }, { "msg_contents": "> On 17 Apr 2024, at 22:47, Kirk Wolak <[email protected]> wrote:\n> \n> Thoughts?\n\nToday we had a hacking session with Nik and Kirk. We produced a patch to assess how these links might look like.\n\nAlso we needed a url_encode() and found none in a codebase. It would be nice to have this as an SQL-callable function.\n\nThanks!\n\n\nBest regards, Andrey Borodin.", "msg_date": "Thu, 2 May 2024 22:50:33 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea Feedback: psql \\h misses -> Offers Links?" }, { "msg_contents": "čt 2. 5. 2024 v 19:50 odesílatel Andrey M. Borodin <[email protected]>\nnapsal:\n\n>\n>\n> > On 17 Apr 2024, at 22:47, Kirk Wolak <[email protected]> wrote:\n> >\n> > Thoughts?\n>\n> Today we had a hacking session with Nik and Kirk. We produced a patch to\n> assess how these links might look like.\n>\n> Also we needed a url_encode() and found none in a codebase. It would be\n> nice to have this as an SQL-callable function.\n>\n\n+1\n\nit was requested more times\n\nPavel\n\n\n> Thanks!\n>\n>\n> Best regards, Andrey Borodin.\n>\n>\n\nčt 2. 5. 2024 v 19:50 odesílatel Andrey M. Borodin <[email protected]> napsal:\n\n> On 17 Apr 2024, at 22:47, Kirk Wolak <[email protected]> wrote:\n> \n> Thoughts?\n\nToday we had a hacking session with Nik and Kirk. We produced a patch to assess how these links might look like.\n\nAlso we needed a url_encode() and found none in a codebase. It would be nice to have this as an SQL-callable function.+1it was requested more timesPavel\n\nThanks!\n\n\nBest regards, Andrey Borodin.", "msg_date": "Thu, 2 May 2024 20:24:18 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea Feedback: psql \\h misses -> Offers Links?" }, { "msg_contents": "On Fri, Apr 19, 2024 at 10:14 AM Euler Taveira <[email protected]> wrote:\n\n> On Wed, Apr 17, 2024, at 2:47 PM, Kirk Wolak wrote:\n>\n> ...\n>\n> This is Question 1: Do others see the potential value here?\n>\n>\n> Yes. However, I expect an exact and direct answer. There will be cases\n> that the\n> first result is not the one you are looking for. (You are expecting the\n> function or parameter description but other page is on the top because it\n> is\n> more relevant.) The referred URL does not point you to the direct link.\n> Instead, you have to click again to be able to check the content.\n>\n\nAgain, this does get to the point that the current search feature at\npostgresql.org could be better. I would like to see that improved as\nwell...\n\n\n> Question 2: What if we allowed the users to set some extra link Templates\n> using \\pset??\n>\n> \\pset help_assist_link_1 = https://www.google.com/search?q={token}'\n> \\pset help_assist_link_2 = '\n> https://wiki.postgresql.org/index.php?title=Special%3ASearch&search={token}&go=Go\n> <https://wiki.postgresql.org/index.php?title=Special%3ASearch&search=%7Btoken%7D&go=Go>\n> '\n>\n>\n> That's a different idea. 
Are you proposing to provide URLs if this psql\n> variable is set and it doesn't find an entry (say \\h foo)? I'm not sure if\n> it\n> is a good idea to allow third-party URLs (even if it is configurable).\n>\n\nIf you want to check the patch Andrey published. We basically set the\ndefault value to the set variable, and then allowed the user to override\nthat value with multiple pipe (|) separated URLs. It does BEG the question\nif this is cool for hackers. Personally, I like the option as there are\nprobably a few resources worth checking against. But if someone doesn't\nchange the default, they get a good enough answer.\n\n\n> IMO we should expand \\h to list documentation references for functions and\n> GUCs\n> using SGML files. We already did it for SQL commands. Another broader idea\n> is\n> to build an inverted index similar to what Index [1] provides. The main\n> problem\n> with this approach is to create a dependency between documentation build\n> and\n> psql. Maybe there is a reasonable way to obtain the links for each term.\n>\n>\n> [1] https://www.postgresql.org/docs/current/bookindex.html\n>\n\nI don't want to add more dependencies into psql to the documentation for a\nton of stuff. To me, if we had a better search page on the website for\nfinding things, it would be great. I have been resigned to just googling\n\"postgresql <topic>\" because google does a better job searching\npostgresql.org than the postgresql.org site does (even when it is a known\nindexed item like a function name).\n\nThanks for the feedback.\n\nOn Fri, Apr 19, 2024 at 10:14 AM Euler Taveira <[email protected]> wrote:On Wed, Apr 17, 2024, at 2:47 PM, Kirk Wolak wrote:...This is Question 1: Do others see the potential value here?Yes. However, I expect an exact and direct answer. There will be cases that thefirst result is not the one you are looking for. (You are expecting thefunction or parameter description but other page is on the top because it ismore relevant.) The referred URL does not point you to the direct link.Instead, you have to click again to be able to check the content. Again, this does get to the point that the current search feature at postgresql.org could be better.  I would like to see that improved as well...Question 2: What if we allowed the users to set some extra link Templates using \\pset??\\pset help_assist_link_1 =  https://www.google.com/search?q={token}'\\pset help_assist_link_2 = 'https://wiki.postgresql.org/index.php?title=Special%3ASearch&search={token}&go=Go'That's a different idea. Are you proposing to provide URLs if this psqlvariable is set and it doesn't find an entry (say \\h foo)? I'm not sure if itis a good idea to allow third-party URLs (even if it is configurable).If you want to check the patch Andrey published.  We basically set the default value to the set variable, and then allowed the user to override that value with multiple pipe (|) separated URLs.  It does BEG the question if this is cool for hackers.  Personally, I like the option as there are probably a few resources worth checking against.  But if someone doesn't change the default, they get a good enough answer.IMO we should expand \\h to list documentation references for functions and GUCsusing SGML files. We already did it for SQL commands. Another broader idea isto build an inverted index similar to what Index [1] provides. The main problemwith this approach is to create a dependency between documentation build andpsql. 
Maybe there is a reasonable way to obtain the links for each term.[1] https://www.postgresql.org/docs/current/bookindex.htmlI don't want to add more dependencies into psql to the documentation for a ton of stuff.  To me, if we had a better search page on the website for finding things, it would be great.  I have been resigned to just googling \"postgresql <topic>\" because google does a better job searching postgresql.org than the postgresql.org site does (even when it is a known indexed item like a function name).Thanks for the feedback.", "msg_date": "Wed, 8 May 2024 21:07:57 -0400", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Idea Feedback: psql \\h misses -> Offers Links?" } ]
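Annotation: the hacking-session patch mentioned near the end of this thread needed a url_encode() helper and found none in the tree. Purely as an illustration of what such a helper involves (this is not the patch's code; the function name, signature, and buffer handling here are assumptions), a standalone percent-encoder could look like this:

#include <stdbool.h>
#include <stddef.h>

/*
 * Percent-encode "src" into "dst" (of size dstlen), suitable for a URL
 * query parameter.  RFC 3986 unreserved characters pass through unchanged;
 * everything else becomes %XX.  Returns false if dst is too small.
 */
static bool
url_encode(const char *src, char *dst, size_t dstlen)
{
	static const char hex[] = "0123456789ABCDEF";
	size_t		used = 0;

	if (dstlen == 0)
		return false;

	for (; *src; src++)
	{
		unsigned char c = (unsigned char) *src;

		if ((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
			(c >= '0' && c <= '9') ||
			c == '-' || c == '_' || c == '.' || c == '~')
		{
			if (used + 1 >= dstlen)
				return false;
			dst[used++] = (char) c;
		}
		else
		{
			if (used + 3 >= dstlen)
				return false;
			dst[used++] = '%';
			dst[used++] = hex[c >> 4];
			dst[used++] = hex[c & 0x0F];
		}
	}
	dst[used] = '\0';
	return true;
}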
[ { "msg_contents": "In the \"Differential code coverage between 16 and HEAD\" thread, Andres\npointed out that there wasn't test case coverage for\npg_combinebackup's code to handle files in tablespaces. I looked at\nadding that, and as nobody could possibly have predicted, found a bug.\n\nHere's a 2-patch series to (1) enhance\nPostgreSQL::Test::Utils::init_from_backup to handle tablespaces and\nthen (2) fix the bug in pg_combinebackup and add test coverage using\nthe facilities from the first patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Apr 2024 16:16:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "fix tablespace handling in pg_combinebackup" }, { "msg_contents": "Hi,\n\nOn 2024-04-17 16:16:55 -0400, Robert Haas wrote:\n> In the \"Differential code coverage between 16 and HEAD\" thread, Andres\n> pointed out that there wasn't test case coverage for\n> pg_combinebackup's code to handle files in tablespaces. I looked at\n> adding that, and as nobody could possibly have predicted, found a bug.\n\nHa ;)\n\n\n> @@ -787,8 +787,13 @@ Does not start the node after initializing it.\n> \n> By default, the backup is assumed to be plain format. To restore from\n> a tar-format backup, pass the name of the tar program to use in the\n> -keyword parameter tar_program. Note that tablespace tar files aren't\n> -handled here.\n> +keyword parameter tar_program.\n> +\n> +If there are tablespace present in the backup, include tablespace_map as\n> +a keyword parameter whose values is a hash. When tar_program is used, the\n> +hash keys are tablespace OIDs; otherwise, they are the tablespace pathnames\n> +used in the backup. In either case, the values are the tablespace pathnames\n> +that should be used for the target cluster.\n\nWhere would one get these oids?\n\n\nCould some of this be simplified by using allow_in_place_tablespaces instead?\nLooks like it'd simplify at least the extended test somewhat?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Apr 2024 14:50:21 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "On Wed, Apr 17, 2024 at 02:50:21PM -0700, Andres Freund wrote:\n> On 2024-04-17 16:16:55 -0400, Robert Haas wrote:\n>> In the \"Differential code coverage between 16 and HEAD\" thread, Andres\n>> pointed out that there wasn't test case coverage for\n>> pg_combinebackup's code to handle files in tablespaces. I looked at\n>> adding that, and as nobody could possibly have predicted, found a bug.\n> \n> Ha ;)\n\nNote: open_item_counter++\n--\nMichael", "msg_date": "Thu, 18 Apr 2024 16:49:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "On Wed, Apr 17, 2024 at 5:50 PM Andres Freund <[email protected]> wrote:\n> > +If there are tablespace present in the backup, include tablespace_map as\n> > +a keyword parameter whose values is a hash. When tar_program is used, the\n> > +hash keys are tablespace OIDs; otherwise, they are the tablespace pathnames\n> > +used in the backup. In either case, the values are the tablespace pathnames\n> > +that should be used for the target cluster.\n>\n> Where would one get these oids?\n\nYou pretty much have to pick them out of the tar file names. It sucks,\nbut it's not this patch's fault. 
That's just how pg_basebackup works.\nIf you do a directory format backup, you can use -T to relocate\ntablespaces on the fly, using the pathnames from the origin server.\nThat's a weird convention, and we probably should have based on the\ntablespace names and not exposed the server pathnames to the client at\nall, but we didn't. But it's still better than what happens when you\ndo a tar-format backup. In that case you just get a bunch of $OID.tar\nfiles. No trace of the server pathnames remains, and the only way you\ncould learn the tablespace names is if you rooted through whatever\nfile contains the contents of the pg_tablespace system catalog. So\nyou've just got a bunch of OID-named things and it's all up to you to\nfigure out which one is which and what to put in the tablespace_map\nfile. I'd call this terrible UI design, but I think it's closer to\nabsence of UI design.\n\nI wonder if we (as a project) would consider a patch that redesigned\nthis whole mechanism. Produce ${TABLESPACE_NAME}.tar in tar-format,\ninstead of ${OID}.tar. In directory-format, relocate via\n-T${TABLESPACE_NAME}=${DIR} instead of -T${SERVERDIR}=${DIR}. That\nwould be a significant compatibility break, and you'd somehow need to\nsolve the problem of what to put in the tablespace_map file, which\nrequires OIDs. But it seems like if you could finesse that issue in\nsome elegant way, the result would just be a heck of a lot more usable\nthan what we have today.\n\n> Could some of this be simplified by using allow_in_place_tablespaces instead?\n> Looks like it'd simplify at least the extended test somewhat?\n\nI don't think we can afford to assume that allow_in_place_tablespaces\ndoesn't change the behavior. I said (at least off-list) when that\nfeature was introduced that there was no way it was going to remain an\nisolated development hack, and I think that's proved to be 100%\ncorrect. We keep needing code to support it in more places, and I\nexpect that to continue. Probably we're going to need to start testing\neverything both ways, which I think was a pretty predictable result of\nintroducing it in the first place.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Apr 2024 09:03:21 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "Hi,\n\nOn 2024-04-18 09:03:21 -0400, Robert Haas wrote:\n> On Wed, Apr 17, 2024 at 5:50 PM Andres Freund <[email protected]> wrote:\n> > > +If there are tablespace present in the backup, include tablespace_map as\n> > > +a keyword parameter whose values is a hash. When tar_program is used, the\n> > > +hash keys are tablespace OIDs; otherwise, they are the tablespace pathnames\n> > > +used in the backup. In either case, the values are the tablespace pathnames\n> > > +that should be used for the target cluster.\n> >\n> > Where would one get these oids?\n> \n> You pretty much have to pick them out of the tar file names. It sucks,\n> but it's not this patch's fault.\n\nI was really just remarking on this from the angle of a test writer. I know\nthat our interfaces around this suck...\n\nFor tests, do we really need to set anything on a per-tablespace basis? Can't\nwe instead just reparent all of them to a new directory?\n\n\n> I wonder if we (as a project) would consider a patch that redesigned\n> this whole mechanism. Produce ${TABLESPACE_NAME}.tar in tar-format,\n> instead of ${OID}.tar. 
In directory-format, relocate via\n> -T${TABLESPACE_NAME}=${DIR} instead of -T${SERVERDIR}=${DIR}. That\n> would be a significant compatibility break, and you'd somehow need to\n> solve the problem of what to put in the tablespace_map file, which\n> requires OIDs. But it seems like if you could finesse that issue in\n> some elegant way, the result would just be a heck of a lot more usable\n> than what we have today.\n\nFor some things that'd definitely be nicer - but not sure it work well for\neverything. Consider cases where you actually have external directories on\ndifferent disks, and you want to restore a backup after some data loss. Now\nyou need to list all the tablespaces separately, to put them back into their\nown location.\n\nOne thing I've been wondering about is an option to put the \"new\" tablespaces\ninto a location relative to each of the old ones.\n --tablespace-relative-location=../restore-2024-04-18\nwhich would rewrite all the old tablespaces to that new location.\n\n\n> > Could some of this be simplified by using allow_in_place_tablespaces instead?\n> > Looks like it'd simplify at least the extended test somewhat?\n> \n> I don't think we can afford to assume that allow_in_place_tablespaces\n> doesn't change the behavior.\n\nI think we can't assume that absolutely everywhere, but we don't need to test\nit in a lot of places.\n\n\n> I said (at least off-list) when that feature was introduced that there was\n> no way it was going to remain an isolated development hack, and I think\n> that's proved to be 100% correct.\n\nHm, I guess I kinda agree. But not because I think it wasn't good for\ndevelopment, but because it'd often be much saner to use relative tablespaces\nthan absolute ones even for prod.\n\nMy only point here was that the test would be simpler if you\na) didn't need to create a temp directory for the tablespace, both for\n primary and standby\nb) didn't need to \"gin up\" a tablespace map, because all the paths are\n relative\n\n\nJust to be clear: I don't want the above to block merging your test. If you\nthink you want to do it the way you did, please do.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 18 Apr 2024 10:45:54 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "On Thu, Apr 18, 2024 at 1:45 PM Andres Freund <[email protected]> wrote:\n> I was really just remarking on this from the angle of a test writer. I know\n> that our interfaces around this suck...\n>\n> For tests, do we really need to set anything on a per-tablespace basis? Can't\n> we instead just reparent all of them to a new directory?\n\nI don't know what you mean by this.\n\n> > I wonder if we (as a project) would consider a patch that redesigned\n> > this whole mechanism. Produce ${TABLESPACE_NAME}.tar in tar-format,\n> > instead of ${OID}.tar. In directory-format, relocate via\n> > -T${TABLESPACE_NAME}=${DIR} instead of -T${SERVERDIR}=${DIR}. That\n> > would be a significant compatibility break, and you'd somehow need to\n> > solve the problem of what to put in the tablespace_map file, which\n> > requires OIDs. But it seems like if you could finesse that issue in\n> > some elegant way, the result would just be a heck of a lot more usable\n> > than what we have today.\n>\n> For some things that'd definitely be nicer - but not sure it work well for\n> everything. 
Consider cases where you actually have external directories on\n> different disks, and you want to restore a backup after some data loss. Now\n> you need to list all the tablespaces separately, to put them back into their\n> own location.\n\nI mean, don't you need to do that anyway, just in a more awkward way?\nI don't understand who ever wants to keep track of their tablespaces\nby either (a) source pathname or (b) OID rather than (c) user-visible\ntablespace name.\n\n> One thing I've been wondering about is an option to put the \"new\" tablespaces\n> into a location relative to each of the old ones.\n> --tablespace-relative-location=../restore-2024-04-18\n> which would rewrite all the old tablespaces to that new location.\n\nI think this would probably get limited use outside of testing scenarios.\n\n> > I said (at least off-list) when that feature was introduced that there was\n> > no way it was going to remain an isolated development hack, and I think\n> > that's proved to be 100% correct.\n>\n> Hm, I guess I kinda agree. But not because I think it wasn't good for\n> development, but because it'd often be much saner to use relative tablespaces\n> than absolute ones even for prod.\n>\n> My only point here was that the test would be simpler if you\n> a) didn't need to create a temp directory for the tablespace, both for\n> primary and standby\n> b) didn't need to \"gin up\" a tablespace map, because all the paths are\n> relative\n>\n> Just to be clear: I don't want the above to block merging your test. If you\n> think you want to do it the way you did, please do.\n\nI think I will go ahead and do that soon, then. I'm totally fine with\nit if somebody wants to do the testing in some other way, but this was\nwhat made sense to me as the easiest way to adapt what's already\nthere. I think it's lame that init_from_backup() disclaims support for\ntablespaces, as if tablespaces weren't a case that needs to be tested\nheavily with respect to backups. And I think it's better to get\nsomething merged that fixes the bug ASAP, and adds some test coverage,\nand then if there's a better way to do that test coverage down the\nroad, well and good.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Apr 2024 14:46:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Apr 18, 2024 at 1:45 PM Andres Freund <[email protected]> wrote:\n>> Just to be clear: I don't want the above to block merging your test. If you\n>> think you want to do it the way you did, please do.\n\n> I think I will go ahead and do that soon, then.\n\nThis patch failed to survive contact with the buildfarm. It looks\nlike the animals that are unhappy are choking like this:\n\npg_basebackup: error: backup failed: ERROR: symbolic link target too long for tar format: file name \"pg_tblspc/16415\", target \"/home/bf/bf-build/olingo/HEAD/pgsql.build/testrun/pg_combinebackup/002_compare_backups/data/tmp_test_bV72/ts\"\n\nSo whether it works depends on how long the path to the animal's build\nroot is.\n\nThis is not good at all, because if the buildfarm is hitting this\nlimit then actual users are likely to hit it as well. 
But doesn't\nPOSIX define some way to get longer symlink paths into tar format?\n(If not POSIX, I bet there's a widely-accepted GNU extension.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Apr 2024 14:44:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "On Fri, Apr 19, 2024 at 2:44 PM Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n> > On Thu, Apr 18, 2024 at 1:45 PM Andres Freund <[email protected]> wrote:\n> >> Just to be clear: I don't want the above to block merging your test. If you\n> >> think you want to do it the way you did, please do.\n>\n> > I think I will go ahead and do that soon, then.\n>\n> This patch failed to survive contact with the buildfarm. It looks\n> like the animals that are unhappy are choking like this:\n>\n> pg_basebackup: error: backup failed: ERROR: symbolic link target too long for tar format: file name \"pg_tblspc/16415\", target \"/home/bf/bf-build/olingo/HEAD/pgsql.build/testrun/pg_combinebackup/002_compare_backups/data/tmp_test_bV72/ts\"\n>\n> So whether it works depends on how long the path to the animal's build\n> root is.\n>\n> This is not good at all, because if the buildfarm is hitting this\n> limit then actual users are likely to hit it as well. But doesn't\n> POSIX define some way to get longer symlink paths into tar format?\n> (If not POSIX, I bet there's a widely-accepted GNU extension.)\n\nAh, crap. That sucks. As far as I've been able to find, we have no\ncode in the tree that knows how to generate symlinks longer than 99\ncharacters (see tarCreateHeader). I can search around and see if I can\nfind something else out there on the Internet.\n\nI feel like this is not a new problem but one I've had to dodge\nbefore. In fact, I think I had to dodge it locally when developing\nthis patch. I believe that in various test cases, we rely on the fact\nthat PostgreSQL::Test::Utils::tempdir() produces pathnames that tend\nto be shorter than the ones you get if you generate a path using\n$node->backup_dir or $node->basedir or whatever. And I think if I\ndon't do it that way, it fails even locally on my machine. But, I\nthought that were using that workaround elsewhere successfully, so I\nexpected it to be OK.\n\nBut I think in 010_pg_basebackup.pl we actually work harder to avoid\nthe problem than I had realized:\n\nmy $sys_tempdir = PostgreSQL::Test::Utils::tempdir_short;\n...\n# Test backup of a tablespace using tar format.\n# Symlink the system located tempdir to our physical temp location.\n# That way we can use shorter names for the tablespace directories,\n# which hopefully won't run afoul of the 99 character length limit.\nmy $real_sys_tempdir = \"$sys_tempdir/tempdir\";\ndir_symlink \"$tempdir\", $real_sys_tempdir;\n\nmkdir \"$tempdir/tblspc1\";\nmy $realTsDir = \"$real_sys_tempdir/tblspc1\";\n\nMaybe I need to clone that workaround here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Apr 2024 15:14:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Apr 19, 2024 at 2:44 PM Tom Lane <[email protected]> wrote:\n>> This patch failed to survive contact with the buildfarm. 
It looks\n>> like the animals that are unhappy are choking like this:\n>> pg_basebackup: error: backup failed: ERROR: symbolic link target too long for tar format: file name \"pg_tblspc/16415\", target \"/home/bf/bf-build/olingo/HEAD/pgsql.build/testrun/pg_combinebackup/002_compare_backups/data/tmp_test_bV72/ts\"\n\n> Ah, crap. That sucks. As far as I've been able to find, we have no\n> code in the tree that knows how to generate symlinks longer than 99\n> characters (see tarCreateHeader). I can search around and see if I can\n> find something else out there on the Internet.\n\nwikipedia has some useful info:\n\nhttps://en.wikipedia.org/wiki/Tar_(computing)#POSIX.1-2001/pax\n\nHowever, post-feature-freeze is not the time to be messing with\nimplementing pax. Should we revert handling of tablespaces in this\nprogram for now?\n\n> But I think in 010_pg_basebackup.pl we actually work harder to avoid\n> the problem than I had realized:\n> ...\n> Maybe I need to clone that workaround here.\n\nThat would be a reasonable answer if we deem the problem to be\njust \"the buildfarm is unhappy\". What I'm wondering about is\nwhether the feature will be useful to end users with this\npathname length restriction.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Apr 2024 15:31:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "On Fri, Apr 19, 2024 at 3:31 PM Tom Lane <[email protected]> wrote:\n> That would be a reasonable answer if we deem the problem to be\n> just \"the buildfarm is unhappy\". What I'm wondering about is\n> whether the feature will be useful to end users with this\n> pathname length restriction.\n\nPossibly you're getting a little too enthusiastic about these revert\nrequests, because I'd say it's at least a decade too late to get rid\nof pg_basebackup.\n\nAs discussed elsewhere, I do rather hope that pg_combinebackup will\neventually know how to operate on tar files as well, but right now it\ndoesn't.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Apr 2024 15:40:17 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Apr 19, 2024 at 3:31 PM Tom Lane <[email protected]> wrote:\n>> That would be a reasonable answer if we deem the problem to be\n>> just \"the buildfarm is unhappy\". What I'm wondering about is\n>> whether the feature will be useful to end users with this\n>> pathname length restriction.\n\n> Possibly you're getting a little too enthusiastic about these revert\n> requests, because I'd say it's at least a decade too late to get rid\n> of pg_basebackup.\n\nI misunderstood the context then. I thought you had just added\nsupport for tablespaces in this area. 
If pg_basebackup has been\nchoking on overly-long tablespace symlinks this whole time, then\nthe lack of field complaints suggests it's not such a common\ncase after all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Apr 2024 16:18:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "On Fri, Apr 19, 2024 at 4:18 PM Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n> > On Fri, Apr 19, 2024 at 3:31 PM Tom Lane <[email protected]> wrote:\n> >> That would be a reasonable answer if we deem the problem to be\n> >> just \"the buildfarm is unhappy\". What I'm wondering about is\n> >> whether the feature will be useful to end users with this\n> >> pathname length restriction.\n>\n> > Possibly you're getting a little too enthusiastic about these revert\n> > requests, because I'd say it's at least a decade too late to get rid\n> > of pg_basebackup.\n>\n> I misunderstood the context then. I thought you had just added\n> support for tablespaces in this area. If pg_basebackup has been\n> choking on overly-long tablespace symlinks this whole time, then\n> the lack of field complaints suggests it's not such a common\n> case after all.\n\nNo, the commit that caused all this was a 1-line code change. It was a\npretty stupid mistake which I would have avoided if I'd had proper\ntest case coverage for it, but I didn't do that originally. I think my\nunderlying reason for not doing the work was that I feared it would be\nhard to test in a way that was stable. But, the existence of a bug\nobviously proved that the test cases were needed. As expected,\nhowever, that was hard to do without breaking things.\n\nIf you look at the error message you sent me, you can see that while\nit's a pg_combinebackup test that is failing, the actual failing\nprogram is pg_basebackup.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Apr 2024 16:48:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "I don't know how to fix 82023d47^ on Windows[1][2], but in case it's\nuseful, here's a small clue. I see that Perl's readlink() does in\nfact know how to read \"junction points\" on Windows[3], so if that was\nthe only problem it might have been very close to working. One\ndifference is that our own src/port/dirmod.c's pgreadlink() also\nstrips \\??\\ from the returned paths (something to do with the\nvarious forms of NT path), but when I tried that:\n\n my $olddir = readlink(\"$backup_path/pg_tblspc/$tsoid\")\n || die \"readlink\n$backup_path/pg_tblspc/$tsoid: $!\";\n\n+ # strip NT path prefix (see src/port/dirmod.c\npgreadlink())\n+ $olddir =~ s/^\\\\\\?\\?\\\\// if\n$PostgreSQL::Test::Utils::windows_os;\n\n... it still broke[4]. 
So I'm not sure what's going on...\n\n[1] https://github.com/postgres/postgres/runs/24040897199\n[2] https://api.cirrus-ci.com/v1/artifact/task/5550091866472448/testrun/build/testrun/pg_combinebackup/002_compare_backups/log/002_compare_backups_pitr1.log\n[3] https://github.com/Perl/perl5/blob/f936cd91ee430786a1bb6068a4a7c8362610dd5f/win32/win32.c#L2041\n[4] https://cirrus-ci.com/task/6746621638082560\n\n\n", "msg_date": "Sat, 20 Apr 2024 17:56:00 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "Hello Thomas and Robert,\n\n20.04.2024 08:56, Thomas Munro wrote:\n> ... it still broke[4]. So I'm not sure what's going on...\n>\n\n From what I can see, the following condition (namely, -l):\n                 if ($path =~ /^pg_tblspc\\/(\\d+)$/ && -l \"$backup_path/$path\")\n                 {\n                     push @tsoids, $1;\n                     return 0;\n                 }\n\nis false for junction points on Windows (cf [1]), but the target path is:\n  Directory of \nC:\\src\\postgresql\\build\\testrun\\pg_combinebackup\\002_compare_backups\\data\\t_002_compare_backups_primary_data\\backup\\backup1\\pg_tblspc\n\n04/21/2024  02:05 PM    <DIR>          .\n04/21/2024  02:05 PM    <DIR>          ..\n04/21/2024  02:05 PM    <JUNCTION>     16415 [\\??\\C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\xXMfNDMCot\\ts1backup]\n\n[1] https://www.perlmonks.org/?node_id=1223819\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 21 Apr 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "On Mon, Apr 22, 2024 at 12:00 AM Alexander Lakhin <[email protected]> wrote:\n> From what I can see, the following condition (namely, -l):\n> if ($path =~ /^pg_tblspc\\/(\\d+)$/ && -l \"$backup_path/$path\")\n> {\n> push @tsoids, $1;\n> return 0;\n> }\n>\n> is false for junction points on Windows (cf [1]), but the target path is:\n\nAh, yes, right, -l doesn't like junction points. Well, we're already\nusing the Win32API::File package (it looks like a CPAN library, but I\nguess the Windows perl distros like Strawberry are all including it\nfor free?). See PostgreSQL::Test::Utils::is_symlink(), attached.\nThat seems to work as expected, but someone who actually knows perl\ncan surely make it better. Then I hit the next problem:\n\nreadlink C:\\cirrus\\build/testrun/pg_combinebackup/002_compare_backups\\data/t_002_compare_backups_primary_data/backup/backup1/pg_tblspc/16415:\nInappropriate I/O control operation at\nC:/cirrus/src/test/perl/PostgreSQL/Test/Cluster.pm line 927.\n\nhttps://cirrus-ci.com/task/5162332353986560\n\nI don't know where exactly that error message is coming from, but\nassuming that Strawberry Perl contains this code:\n\nhttps://github.com/Perl/perl5/blob/f936cd91ee430786a1bb6068a4a7c8362610dd5f/win32/win32.c#L2041\nhttps://github.com/Perl/perl5/blob/f936cd91ee430786a1bb6068a4a7c8362610dd5f/win32/win32.c#L1976\n\n... then it's *very* similar to what we're doing in our own\npgreadlink() code. 
I wondered if the buffer size might be too small\nfor our path, but it doesn't seem so:\n\nhttps://github.com/Perl/perl5/blob/f936cd91ee430786a1bb6068a4a7c8362610dd5f/win32/win32.c#L1581C1-L1581C35\n\n(I think MAX_PATH is 256 on Windows.)\n\nIf there is some unfixable problem with what they're doing in their\nreadlink(), then I guess it should be possible to read the junction\npoint directly in Perl using Win32API::File::DeviceIoControl()... but\nI can't see what's wrong with it! Maybe it's not the same code?\n\nAttached are the new test support functions, and the fixup to Robert's\n6bf5c42b that uses them. To be clear, this doesn't work, yet. It has\ngot to be close though...", "msg_date": "Mon, 22 Apr 2024 09:49:47 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "22.04.2024 00:49, Thomas Munro wrote:\n> On Mon, Apr 22, 2024 at 12:00 AM Alexander Lakhin <[email protected]> wrote:\n>> From what I can see, the following condition (namely, -l):\n>> if ($path =~ /^pg_tblspc\\/(\\d+)$/ && -l \"$backup_path/$path\")\n>> {\n>> push @tsoids, $1;\n>> return 0;\n>> }\n>>\n>> is false for junction points on Windows (cf [1]), but the target path is:\n> Ah, yes, right, -l doesn't like junction points. Well, we're already\n> using the Win32API::File package (it looks like a CPAN library, but I\n> guess the Windows perl distros like Strawberry are all including it\n> for free?). See PostgreSQL::Test::Utils::is_symlink(), attached.\n> That seems to work as expected, but someone who actually knows perl\n> can surely make it better. Then I hit the next problem:\n>\n> readlink C:\\cirrus\\build/testrun/pg_combinebackup/002_compare_backups\\data/t_002_compare_backups_primary_data/backup/backup1/pg_tblspc/16415:\n> Inappropriate I/O control operation at\n> C:/cirrus/src/test/perl/PostgreSQL/Test/Cluster.pm line 927.\n>\n> https://cirrus-ci.com/task/5162332353986560\n>\n> I don't know where exactly that error message is coming from, but\n> assuming that Strawberry Perl contains this code:\n>\n> https://github.com/Perl/perl5/blob/f936cd91ee430786a1bb6068a4a7c8362610dd5f/win32/win32.c#L2041\n> https://github.com/Perl/perl5/blob/f936cd91ee430786a1bb6068a4a7c8362610dd5f/win32/win32.c#L1976\n>\n> ... then it's *very* similar to what we're doing in our own\n> pgreadlink() code. I wondered if the buffer size might be too small\n> for our path, but it doesn't seem so:\n>\n> https://github.com/Perl/perl5/blob/f936cd91ee430786a1bb6068a4a7c8362610dd5f/win32/win32.c#L1581C1-L1581C35\n>\n> (I think MAX_PATH is 256 on Windows.)\n>\n> If there is some unfixable problem with what they're doing in their\n> readlink(), then I guess it should be possible to read the junction\n> point directly in Perl using Win32API::File::DeviceIoControl()... but\n> I can't see what's wrong with it! 
Maybe it's not the same code?\n\nI wonder whether the target path (\\??\\) of that junction point is fully correct.\nI tried:\n > mklink /j \n\"C:/src/postgresql/build/testrun/pg_combinebackup/002_compare_backups/data/t_002_compare_backups_primary_data/backup/backup1/pg_tblspc/test\" \n\\\\?\\C:\\t1\nJunction created for \nC:/src/postgresql/build/testrun/pg_combinebackup/002_compare_backups/data/t_002_compare_backups_primary_data/backup/backup1/pg_tblspc/test \n<<===>> \\\\?\\C:\\t1\nand\nmy $path = \n'C:/src/postgresql/build/testrun/pg_combinebackup/002_compare_backups/data/t_002_compare_backups_primary_data/backup/backup1/pg_tblspc/test';\nmy $result = readlink($path);\nworks for me:\nresult: \\\\?\\C:\\t1\n\nWhilst with:\nmy $path = \n'C:/src/postgresql/build/testrun/pg_combinebackup/002_compare_backups/data/t_002_compare_backups_primary_data/backup/backup1/pg_tblspc/16415';\nreadlink() fails with \"Invalid argument\".\n\n > dir \n\"C:/src/postgresql/build/testrun/pg_combinebackup/002_compare_backups/data/t_002_compare_backups_primary_data/backup/backup1/pg_tblspc\"\n04/22/2024  08:16 AM    <DIR>          .\n04/22/2024  08:16 AM    <DIR>          ..\n04/22/2024  06:52 AM    <JUNCTION>     16415 [\\??\\C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\1zznr8FW5N\\ts1backup]\n04/22/2024  08:16 AM    <JUNCTION>     test [\\\\?\\C:\\t1]\n\n > dir \"\\??\\C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\1zznr8FW5N\\ts1backup\"\n  Directory of C:\\??\\C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\1zznr8FW5N\\ts1backup\n\nFile Not Found\n\n > dir \"\\\\?\\C:\\t1\"\n  Directory of \\\\?\\C:\\t1\n\n04/22/2024  08:06 AM    <DIR>          .\n04/22/2024  08:06 AM    <DIR>          ..\n                0 File(s)              0 bytes\n\nThough\n > dir \n\"C:/src/postgresql/build/testrun/pg_combinebackup/002_compare_backups/data/t_002_compare_backups_primary_data/backup/backup1/pg_tblspc/16415\"\nsomehow really works:\n  Directory of \nC:\\src\\postgresql\\build\\testrun\\pg_combinebackup\\002_compare_backups\\data\\t_002_compare_backups_primary_data\\backup\\backup1\\pg_tblspc\\16415\n\n04/22/2024  06:52 AM    <DIR>          .\n04/22/2024  06:52 AM    <DIR>          ..\n04/22/2024  06:52 AM    <DIR>          PG_17_202404021\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 22 Apr 2024 08:59:59 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "I reworked the test cases so that they don't (I think) rely on\nsymlinks working as they do on normal platforms.\n\nHere's a patch. 
I'll go create a CommitFest entry for this thread so\nthat cfbot will look at it.\n\n...Robert", "msg_date": "Mon, 22 Apr 2024 16:05:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "On Tue, Apr 23, 2024 at 8:05 AM Robert Haas <[email protected]> wrote:\n> I reworked the test cases so that they don't (I think) rely on\n> symlinks working as they do on normal platforms.\n\nCool.\n\n(It will remain a mystery for now why perl readlink() can't read the\njunction points that PostgreSQL creates (IIUC), but the OS can follow\nthem and PostgreSQL itself can read them with apparently similar code.\nI find myself wondering if symlinks should go on the list of \"things\nwe pretended Windows had out of convenience, that turned out to be\nmore inconvenient than we expected, and we'd do better to tackle\nhead-on with a more portable idea\". Perhaps we could just use a\ntablespace map file instead to do our own path construction, or\nsomething like that. I suspect that would be one of those changes\nthat is technically easy, but community-wise hard as it affects a\nload of backup tools and procedures...)\n\n\n", "msg_date": "Tue, 23 Apr 2024 09:15:27 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "Hi,\n\nOn 2024-04-23 09:15:27 +1200, Thomas Munro wrote:\n> I find myself wondering if symlinks should go on the list of \"things\n> we pretended Windows had out of convenience, that turned out to be\n> more inconvenient than we expected, and we'd do better to tackle\n> head-on with a more portable idea\".\n\nYes, I think the symlink design was pretty clearly a mistake.\n\n\n> Perhaps we could just use a tablespace map file instead to do our own path\n> construction, or something like that. I suspect that would be one of those\n> changes that is technically easy, but community-wise hard as it affects a\n> load of backup tools and procedures...)\n\nYea. 
I wonder if we could do a somewhat transparent upgrade by creating a file\nalongside each tablespace that contains the path or such.\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Mon, 22 Apr 2024 14:23:52 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "On Mon, Apr 22, 2024 at 5:16 PM Thomas Munro <[email protected]> wrote:\n> On Tue, Apr 23, 2024 at 8:05 AM Robert Haas <[email protected]> wrote:\n> > I reworked the test cases so that they don't (I think) rely on\n> > symlinks working as they do on normal platforms.\n>\n> Cool.\n\ncfbot is giving me a bunch of green check marks, so I plan to commit\nthis version, barring objections.\n\nI shall leave redesign of the symlink mess as a matter for others to\nponder; I'm not in love with what we have, but I think it will be\ntricky to do better, and I don't think I want to spend time on it, at\nleast not now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Apr 2024 18:10:17 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" }, { "msg_contents": "Hi,\n\nOn 2024-04-22 18:10:17 -0400, Robert Haas wrote:\n> cfbot is giving me a bunch of green check marks, so I plan to commit\n> this version, barring objections.\n\nMakes sense.\n\n\n> I shall leave redesign of the symlink mess as a matter for others to\n> ponder; I'm not in love with what we have, but I think it will be\n> tricky to do better, and I don't think I want to spend time on it, at\n> least not now.\n\nOh, yea, that's clearly a much bigger and separate project...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Apr 2024 18:30:45 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix tablespace handling in pg_combinebackup" } ]
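The thread above turns on detecting NTFS junction points where plain -l and readlink() checks fall short. As a minimal illustration of that check, the C sketch below tests FILE_ATTRIBUTE_REPARSE_POINT via GetFileAttributesA(); the path used is hypothetical, and this is not the test helper attached in the thread (which does the equivalent from Perl via Win32API::File). A fuller check would also inspect the reparse tag to tell junctions apart from other reparse points.

/*
 * Minimal sketch: detect whether a path is a reparse point (junction or
 * symlink) on Windows, the property that plain -l / readlink() checks in
 * the test suite fail to see.  Illustration only, not the attached patch.
 */
#include <windows.h>
#include <stdbool.h>
#include <stdio.h>

static bool
path_is_reparse_point(const char *path)
{
    DWORD attrs = GetFileAttributesA(path);

    if (attrs == INVALID_FILE_ATTRIBUTES)
        return false;           /* path missing or inaccessible */
    return (attrs & FILE_ATTRIBUTE_REPARSE_POINT) != 0;
}

int
main(void)
{
    /* hypothetical path, standing in for backup/pg_tblspc/<oid> */
    const char *path = "C:\\backup\\pg_tblspc\\16415";

    printf("%s: %s\n", path,
           path_is_reparse_point(path) ? "junction/reparse point" : "regular entry");
    return 0;
}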
[ { "msg_contents": "Hi,\n\nWe have a fair amount of code that uses non-constant function level static\nvariables for read-only data. Which makes little sense - it prevents the\ncompiler from understanding\n\na) that the data is read only and can thus be put into a segment that's shared\n between all invocations of the program\nb) the data will be the same on every invocation, and thus from optimizing\n based on that.\n\nThe most common example of this is that all our binaries use\n static struct option long_options[] = { ... };\nwhich prevents long_options from being put into read-only memory.\n\n\nIs there some reason we went for this pattern in a fair number of places? I\nassume it's mostly copy-pasta, but...\n\n\nIn practice it often is useful to use 'static const' instead of just\n'const'. At least gcc otherwise soemtimes fills the data on the stack, instead\nof having a read-only data member that's already initialized. I'm not sure\nwhy, tbh.\n\n\nAttached are fixes for struct option and a few more occurrences I've found\nwith a bit of grepping.\n\n\nThere are lots of places that could benefit from adding 'static\nconst'.\n\nE.g. most, if not all, HASHCTL's should be that, but that's more verbose to\nchange, so I didn't do that.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 17 Apr 2024 14:39:53 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "plenty code is confused about function level static" }, { "msg_contents": "On 18/04/2024 00:39, Andres Freund wrote:\n> Hi,\n> \n> We have a fair amount of code that uses non-constant function level static\n> variables for read-only data. Which makes little sense - it prevents the\n> compiler from understanding\n> \n> a) that the data is read only and can thus be put into a segment that's shared\n> between all invocations of the program\n> b) the data will be the same on every invocation, and thus from optimizing\n> based on that.\n> \n> The most common example of this is that all our binaries use\n> static struct option long_options[] = { ... };\n> which prevents long_options from being put into read-only memory.\n> \n> \n> Is there some reason we went for this pattern in a fair number of places? I\n> assume it's mostly copy-pasta, but...\n> \n> \n> In practice it often is useful to use 'static const' instead of just\n> 'const'. At least gcc otherwise soemtimes fills the data on the stack, instead\n> of having a read-only data member that's already initialized. I'm not sure\n> why, tbh.\n\nWeird. I guess it can be faster if you assume the data in the read-only \nsection might not be in cache, but the few instructions needed to fill \nthe data locally in stack are.\n\n> Attached are fixes for struct option and a few more occurrences I've found\n> with a bit of grepping.\n\n+1\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 18 Apr 2024 10:00:59 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plenty code is confused about function level static" }, { "msg_contents": "On 17.04.24 23:39, Andres Freund wrote:\n> Is there some reason we went for this pattern in a fair number of places? I\n> assume it's mostly copy-pasta, but...\n\nRight. I don't think it is commonly understood that adding const \nqualifiers can help compiler optimization, and it's difficult to \nsystematically check for omissions or verify the optimization effects. 
\nSo I think we just have to keep trying to do our best manually for now.\n\n> Attached are fixes for struct option and a few more occurrences I've found\n> with a bit of grepping.\n\nThese look good to me.\n\n\n\n", "msg_date": "Thu, 18 Apr 2024 10:33:30 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plenty code is confused about function level static" }, { "msg_contents": "\n\n> On 18 Apr 2024, at 02:39, Andres Freund <[email protected]> wrote:\n> \n> There are lots of places that could benefit from adding 'static\n> const'.\n\n+1 for helping compiler.\nGCC has a -Wsuggest-attribute=const, we can count these warnings and threat increase as an error :)\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 18 Apr 2024 13:43:50 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plenty code is confused about function level static" }, { "msg_contents": "On 18.04.24 10:43, Andrey M. Borodin wrote:\n>> On 18 Apr 2024, at 02:39, Andres Freund <[email protected]> wrote:\n>>\n>> There are lots of places that could benefit from adding 'static\n>> const'.\n> \n> +1 for helping compiler.\n> GCC has a -Wsuggest-attribute=const, we can count these warnings and threat increase as an error :)\n\nThis is different. It's an attribute, not a qualifier, and it's for \nfunctions, not variables. But it could undoubtedly also have a \nperformance benefit.\n\n\n\n", "msg_date": "Thu, 18 Apr 2024 10:59:33 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plenty code is confused about function level static" }, { "msg_contents": "Hi,\n\nOn 2024-04-18 10:33:30 +0200, Peter Eisentraut wrote:\n> > Attached are fixes for struct option and a few more occurrences I've found\n> > with a bit of grepping.\n>\n> These look good to me.\n\nThoughts about when to apply these? Arguably they're fixing mildly broken\ncode, making it appropriate to fix in 17, but it's also something that we\ncould end up fixing for a while...\n\n\nThere are some variations of this that are a bit harder to fix, btw. We have\n\nobjdump -j .data -t src/backend/postgres|sort -k5\n...\n0000000001474d00 g O .data 00000000000015f0 ConfigureNamesReal\n0000000001479a80 g O .data 0000000000001fb0 ConfigureNamesEnum\n0000000001476300 g O .data 0000000000003778 ConfigureNamesString\n...\n00000000014682e0 g O .data 0000000000005848 ConfigureNamesBool\n000000000146db40 g O .data 00000000000071c0 ConfigureNamesInt\n\nNot that thta's all *that* much these days, but it's still pretty silly to use\n~80kB of memory in every postgres instance just because we didn't set\n conf->gen.vartype = PGC_BOOL;\netc at compile time.\n\nLarge modifiable arrays with callbacks are also quite useful for exploitation,\nas one doesn't need to figure out precise addresses.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 18 Apr 2024 10:11:42 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plenty code is confused about function level static" }, { "msg_contents": "On 18.04.24 19:11, Andres Freund wrote:\n> Thoughts about when to apply these? Arguably they're fixing mildly broken\n> code, making it appropriate to fix in 17, but it's also something that we\n> could end up fixing for a while...\n\nYeah, let's keep these for later. They are not regressions, and there \nis no end in sight yet. 
I have some other related stuff queued up, so \nif we're going to start adjusting these kinds of things now, it would \nopen a can of worms.\n\n\n", "msg_date": "Thu, 18 Apr 2024 19:56:35 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plenty code is confused about function level static" }, { "msg_contents": "On Thu, Apr 18, 2024 at 07:56:35PM +0200, Peter Eisentraut wrote:\n> On 18.04.24 19:11, Andres Freund wrote:\n>> Thoughts about when to apply these? Arguably they're fixing mildly broken\n>> code, making it appropriate to fix in 17, but it's also something that we\n>> could end up fixing for a while...\n> \n> Yeah, let's keep these for later. They are not regressions, and there is no\n> end in sight yet. I have some other related stuff queued up, so if we're\n> going to start adjusting these kinds of things now, it would open a can of\n> worms.\n\nThis is a set of optimizations for stuff that has accumulated across\nthe years in various code paths, so I'd vote on the side of caution\nand wait until v18 opens for business.\n--\nMichael", "msg_date": "Fri, 19 Apr 2024 08:20:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plenty code is confused about function level static" } ]
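For reference, a minimal sketch of the pattern this thread is about, assuming only <getopt.h>: a plain "static struct option" table is writable per-process data, while "static const" lets it live in the read-only segment and be shared. The option names here are made up and not taken from any particular PostgreSQL binary. Keeping "static" alongside "const" also avoids rebuilding the array on the stack at each call, the behavior noted upthread for a plain "const" local.

/*
 * Sketch of the discussed fix: mark the getopt table "static const" so
 * the compiler can place it in read-only, shareable storage.
 */
#include <getopt.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
    /* preferred form: read-only, shareable, optimizable */
    static const struct option long_options[] = {
        {"verbose", no_argument, NULL, 'v'},
        {"output", required_argument, NULL, 'o'},
        {NULL, 0, NULL, 0}
    };
    int c;

    while ((c = getopt_long(argc, argv, "vo:", long_options, NULL)) != -1)
    {
        if (c == 'v')
            printf("verbose\n");
        else if (c == 'o')
            printf("output=%s\n", optarg);
    }
    return 0;
}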
[ { "msg_contents": "While examining pg_upgrade on a cluster with many tables (created with the\ncommand in [0]), I noticed that a huge amount of pg_dump time goes towards\nthe binary_upgrade_set_pg_class_oids() function. This function executes a\nrather expensive query for a single row, and this function appears to be\ncalled for most of the rows in pg_class.\n\nThe attached work-in-progress patch speeds up 'pg_dump --binary-upgrade'\nfor this case. Instead of executing the query in every call to the\nfunction, we can execute it once during the first call and store all the\nrequired information in a sorted array that we can bsearch() in future\ncalls. For the aformentioned test, pg_dump on my machine goes from ~2\nminutes to ~18 seconds, which is much closer to the ~14 seconds it takes\nwithout --binary-upgrade.\n\nOne downside of this approach is the memory usage. This was more-or-less\nthe first approach that crossed my mind, so I wouldn't be surprised if\nthere's a better way. I tried to keep the pg_dump output the same, but if\nthat isn't important, maybe we could dump all the pg_class OIDs at once\ninstead of calling binary_upgrade_set_pg_class_oids() for each one.\n\n[0] https://postgr.es/m/3612876.1689443232%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 17 Apr 2024 23:17:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "improve performance of pg_dump --binary-upgrade" }, { "msg_contents": ">\n> One downside of this approach is the memory usage. This was more-or-less\n>\n>\nBar-napkin math tells me in a worst-case architecture and braindead byte\nalignment, we'd burn 64 bytes per struct, so the 100K tables cited would be\nabout 6.25MB of memory.\n\nThe obvious low-memory alternative would be to make a prepared statement,\nthough that does nothing to cut down on the roundtrips.\n\nI think this is a good trade off.\n\nOne downside of this approach is the memory usage.  This was more-or-lessBar-napkin math tells me in a worst-case architecture and braindead byte alignment, we'd burn 64 bytes per struct, so the 100K tables cited would be about 6.25MB of memory.The obvious low-memory alternative would be to make a prepared statement, though that does nothing to cut down on the roundtrips.I think this is a good trade off.", "msg_date": "Thu, 18 Apr 2024 02:08:28 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve performance of pg_dump --binary-upgrade" }, { "msg_contents": "On Thu, Apr 18, 2024 at 02:08:28AM -0400, Corey Huinker wrote:\n> Bar-napkin math tells me in a worst-case architecture and braindead byte\n> alignment, we'd burn 64 bytes per struct, so the 100K tables cited would be\n> about 6.25MB of memory.\n> \n> The obvious low-memory alternative would be to make a prepared statement,\n> though that does nothing to cut down on the roundtrips.\n> \n> I think this is a good trade off.\n\nI've not checked the patch in details or tested it, but caching this\ninformation to gain this speed sounds like a very good thing.\n--\nMichael", "msg_date": "Thu, 18 Apr 2024 15:24:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve performance of pg_dump --binary-upgrade" }, { "msg_contents": "> On 18 Apr 2024, at 06:17, Nathan Bossart <[email protected]> wrote:\n\n> The attached work-in-progress patch speeds up 'pg_dump --binary-upgrade'\n> for this case. 
Instead of executing the query in every call to the\n> function, we can execute it once during the first call and store all the\n> required information in a sorted array that we can bsearch() in future\n> calls.\n\nThat does indeed seem like a saner approach. Since we look up the relkind we\ncan also remove the is_index parameter to binary_upgrade_set_pg_class_oids\nsince we already know that without the caller telling us?\n\n> One downside of this approach is the memory usage.\n\nI'm not too worried about the worst-case performance of this.\n\n> This was more-or-less\n> the first approach that crossed my mind, so I wouldn't be surprised if\n> there's a better way. I tried to keep the pg_dump output the same, but if\n> that isn't important, maybe we could dump all the pg_class OIDs at once\n> instead of calling binary_upgrade_set_pg_class_oids() for each one.\n\nWithout changing the backend handling of the Oid's we can't really do that\nAFAICT, the backend stores the Oid for the next call so it needs to be per\nrelation like now?\n\nFor Greenplum we moved this to the backend by first dumping all Oids which were\nread into backend cache, and during relation creation the Oid to use was looked\nup in the backend. (This wasn't a performance change, it was to allow multiple\nshared-nothing clusters to have a unified view of Oids, so I never benchmarked\nit all that well.) The upside of that is that the magic Oid variables in the\nbackend can be removed, but it obviously adds slight overhead in others.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 18 Apr 2024 09:24:53 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve performance of pg_dump --binary-upgrade" }, { "msg_contents": "On Thu, Apr 18, 2024 at 02:08:28AM -0400, Corey Huinker wrote:\n> Bar-napkin math tells me in a worst-case architecture and braindead byte\n> alignment, we'd burn 64 bytes per struct, so the 100K tables cited would be\n> about 6.25MB of memory.\n\nThat doesn't seem too terrible.\n\n> The obvious low-memory alternative would be to make a prepared statement,\n> though that does nothing to cut down on the roundtrips.\n> \n> I think this is a good trade off.\n\nCool.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 18 Apr 2024 09:57:18 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump --binary-upgrade" }, { "msg_contents": "On Thu, Apr 18, 2024 at 09:24:53AM +0200, Daniel Gustafsson wrote:\n>> On 18 Apr 2024, at 06:17, Nathan Bossart <[email protected]> wrote:\n> \n>> The attached work-in-progress patch speeds up 'pg_dump --binary-upgrade'\n>> for this case. Instead of executing the query in every call to the\n>> function, we can execute it once during the first call and store all the\n>> required information in a sorted array that we can bsearch() in future\n>> calls.\n> \n> That does indeed seem like a saner approach. Since we look up the relkind we\n> can also remove the is_index parameter to binary_upgrade_set_pg_class_oids\n> since we already know that without the caller telling us?\n\nYeah. It looks like that's been possible since commit 9a974cb, so I can\nwrite a prerequisite patch for this.\n\n>> One downside of this approach is the memory usage.\n> \n> I'm not too worried about the worst-case performance of this.\n\nCool. 
That seems to be the general sentiment.\n\n>> This was more-or-less\n>> the first approach that crossed my mind, so I wouldn't be surprised if\n>> there's a better way. I tried to keep the pg_dump output the same, but if\n>> that isn't important, maybe we could dump all the pg_class OIDs at once\n>> instead of calling binary_upgrade_set_pg_class_oids() for each one.\n> \n> Without changing the backend handling of the Oid's we can't really do that\n> AFAICT, the backend stores the Oid for the next call so it needs to be per\n> relation like now?\n\nRight, this would require additional changes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 18 Apr 2024 10:23:01 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump --binary-upgrade" }, { "msg_contents": "On Thu, Apr 18, 2024 at 10:23:01AM -0500, Nathan Bossart wrote:\n> On Thu, Apr 18, 2024 at 09:24:53AM +0200, Daniel Gustafsson wrote:\n>> That does indeed seem like a saner approach. Since we look up the relkind we\n>> can also remove the is_index parameter to binary_upgrade_set_pg_class_oids\n>> since we already know that without the caller telling us?\n> \n> Yeah. It looks like that's been possible since commit 9a974cb, so I can\n> write a prerequisite patch for this.\n\nHere's a new patch set with this change.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 18 Apr 2024 15:28:13 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump --binary-upgrade" }, { "msg_contents": "> On 18 Apr 2024, at 22:28, Nathan Bossart <[email protected]> wrote:\n> \n> On Thu, Apr 18, 2024 at 10:23:01AM -0500, Nathan Bossart wrote:\n>> On Thu, Apr 18, 2024 at 09:24:53AM +0200, Daniel Gustafsson wrote:\n>>> That does indeed seem like a saner approach. Since we look up the relkind we\n>>> can also remove the is_index parameter to binary_upgrade_set_pg_class_oids\n>>> since we already know that without the caller telling us?\n>> \n>> Yeah. It looks like that's been possible since commit 9a974cb, so I can\n>> write a prerequisite patch for this.\n> \n> Here's a new patch set with this change.\n\nFrom a read-through they look good, a very nice performance improvement in an\nimportant path. I think it would be nice with some comments on the\nBinaryUpgradeClassOids struct (since the code using it is thousands of lines\naway), and a comment on the if (oids == NULL) block explaining the caching.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 18 Apr 2024 22:33:08 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve performance of pg_dump --binary-upgrade" }, { "msg_contents": "On Thu, Apr 18, 2024 at 10:33:08PM +0200, Daniel Gustafsson wrote:\n> From a read-through they look good, a very nice performance improvement in an\n> important path. I think it would be nice with some comments on the\n> BinaryUpgradeClassOids struct (since the code using it is thousands of lines\n> away), and a comment on the if (oids == NULL) block explaining the caching.\n\nAdded. Thanks for reviewing! 
Unfortunately, this one will have to sit for\na couple months...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 18 Apr 2024 16:19:24 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump --binary-upgrade" }, { "msg_contents": "I noticed that there are some existing examples of this sort of thing in\npg_dump (e.g., commit d5e8930), so I adjusted the patch to match the\nsurrounding style a bit better.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 22 Apr 2024 14:01:03 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump --binary-upgrade" }, { "msg_contents": "rebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 17 May 2024 10:22:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump --binary-upgrade" }, { "msg_contents": "Committed.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 3 Jul 2024 14:28:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump --binary-upgrade" } ]
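To make the shape of the fix concrete, here is a simplified, self-contained C sketch of the caching approach described above: fetch the per-relation data once, keep it in an array sorted by OID, and answer each later lookup with bsearch() instead of one query per call. The struct and field names (ClassOidInfo, reloid, relfilenode) are hypothetical and not the identifiers used in the committed pg_dump change; the hard-coded rows stand in for the result of the single catalog query.

/*
 * Simplified sketch: one bulk load, sort by OID, then bsearch() per lookup.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
    unsigned int reloid;        /* lookup key */
    unsigned int relfilenode;   /* cached payload */
} ClassOidInfo;

static int
cmp_by_oid(const void *a, const void *b)
{
    unsigned int oa = ((const ClassOidInfo *) a)->reloid;
    unsigned int ob = ((const ClassOidInfo *) b)->reloid;

    return (oa > ob) - (oa < ob);
}

int
main(void)
{
    /* pretend this came from one big query at first use */
    ClassOidInfo cache[] = {
        {16415, 16420}, {16388, 16390}, {16401, 16405}
    };
    size_t ncache = sizeof(cache) / sizeof(cache[0]);
    ClassOidInfo key = {16401, 0};
    ClassOidInfo *hit;

    qsort(cache, ncache, sizeof(ClassOidInfo), cmp_by_oid);
    hit = bsearch(&key, cache, ncache, sizeof(ClassOidInfo), cmp_by_oid);
    if (hit)
        printf("oid %u -> relfilenode %u\n", hit->reloid, hit->relfilenode);
    return 0;
}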
[ { "msg_contents": "On 18/04/2024 00:39, Andres Freund wrote:\n\n>We have a fair amount of code that uses non-constant function level static\n>variables for read-only data. Which makes little sense - it prevents the\n>compiler from understanding\n\n>a) that the data is read only and can thus be put into a segment that's\nshared\n>between all invocations of the program\n>b) the data will be the same on every invocation, and thus from optimizing\n>based on that.\n\n>The most common example of this is that all our binaries use\n>static struct option long_options[] = { ... };\n>which prevents long_options from being put into read-only memory.\n\n+1 static const allows the compiler to make additional optimizations.\n\n>There are lots of places that could benefit from adding 'static\n>const'.\n\nI found a few more places.\n\nPatch 004\n\nThe opposite would also help, adding static.\nIn these places, I believe it is safe to add static,\nallowing the compiler to transform into read-only, definitively.\n\nPatch 005\n\nbest regards,\n\nRanier Vilela", "msg_date": "Thu, 18 Apr 2024 09:07:43 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plenty code is confused about function level static" }, { "msg_contents": "Hi,\n\nOn 2024-04-18 09:07:43 -0300, Ranier Vilela wrote:\n> On 18/04/2024 00:39, Andres Freund wrote:\n> >There are lots of places that could benefit from adding 'static\n> >const'.\n> \n> I found a few more places.\n\nGood catches.\n\n\n> Patch 004\n> \n> The opposite would also help, adding static.\n> In these places, I believe it is safe to add static,\n> allowing the compiler to transform into read-only, definitively.\n\nI don't think this would even compile? E.g. LockTagTypeNames, pg_wchar_table\nare declared in a header and used across translation units.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 18 Apr 2024 10:16:02 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plenty code is confused about function level static" }, { "msg_contents": "Em qui., 18 de abr. de 2024 às 14:16, Andres Freund <[email protected]>\nescreveu:\n\n> Hi,\n>\n> On 2024-04-18 09:07:43 -0300, Ranier Vilela wrote:\n> > On 18/04/2024 00:39, Andres Freund wrote:\n> > >There are lots of places that could benefit from adding 'static\n> > >const'.\n> >\n> > I found a few more places.\n>\n> Good catches.\n>\n>\n> > Patch 004\n> >\n> > The opposite would also help, adding static.\n> > In these places, I believe it is safe to add static,\n> > allowing the compiler to transform into read-only, definitively.\n>\n> I don't think this would even compile?\n\nCompile, at least with msvc 2022.\nPass ninja test.\n\n\nE.g. LockTagTypeNames, pg_wchar_table\n> are declared in a header and used across translation units.\n>\nSad.\nThere should be a way to export a read-only (static const) variable.\nBetter remove these.\n\nv1-0005 attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Thu, 18 Apr 2024 14:43:37 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plenty code is confused about function level static" }, { "msg_contents": "Em qui., 18 de abr. de 2024 às 14:43, Ranier Vilela <[email protected]>\nescreveu:\n\n>\n>\n> Em qui., 18 de abr. 
de 2024 às 14:16, Andres Freund <[email protected]>\n> escreveu:\n>\n>> Hi,\n>>\n>> On 2024-04-18 09:07:43 -0300, Ranier Vilela wrote:\n>> > On 18/04/2024 00:39, Andres Freund wrote:\n>> > >There are lots of places that could benefit from adding 'static\n>> > >const'.\n>> >\n>> > I found a few more places.\n>>\n>> Good catches.\n>>\n>>\n>> > Patch 004\n>> >\n>> > The opposite would also help, adding static.\n>> > In these places, I believe it is safe to add static,\n>> > allowing the compiler to transform into read-only, definitively.\n>>\n>> I don't think this would even compile?\n>\n> Compile, at least with msvc 2022.\n> Pass ninja test.\n>\n>\n> E.g. LockTagTypeNames, pg_wchar_table\n>> are declared in a header and used across translation units.\n>>\n> Sad.\n> There should be a way to export a read-only (static const) variable.\n> Better remove these.\n>\n> v1-0005 attached.\n>\nNow with v18 open, any plans to forward this?\n\nbest regards,\nRanier Vilela", "msg_date": "Tue, 2 Jul 2024 08:49:30 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plenty code is confused about function level static" } ]
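The sticking point in this sub-thread is that symbols such as LockTagTypeNames and pg_wchar_table are referenced from other translation units, so they can gain const but not static. Below is a collapsed, single-file C sketch of that distinction, using made-up names rather than the real arrays: only genuinely file-local tables can take both qualifiers, while exported ones must remain external symbols, const or not.

/*
 * Sketch: exported read-only data stays "const" (extern-visible), while
 * file-local tables can be "static const".  Names here are invented.
 */
#include <stdio.h>

/* what would live in a header: visible to other .c files, so no "static" */
extern const char *const demo_tag_names[];

/* definition: read-only, but still an external symbol */
const char *const demo_tag_names[] = {"relation", "page", "tuple"};

/* purely file-local table: can (and should) be static const */
static const int demo_tag_codes[] = {1, 2, 3};

int
main(void)
{
    for (int i = 0; i < 3; i++)
        printf("%s = %d\n", demo_tag_names[i], demo_tag_codes[i]);
    return 0;
}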
[ { "msg_contents": "Fix restore of not-null constraints with inheritance\n\nIn tables with primary keys, pg_dump creates tables with primary keys by\ninitially dumping them with throw-away not-null constraints (marked \"no\ninherit\" so that they don't create problems elsewhere), to later drop\nthem once the primary key is restored. Because of a unrelated\nconsideration, on tables with children we add not-null constraints to\nall columns of the primary key when it is created.\n\nIf both a table and its child have primary keys, and pg_dump happens to\nemit the child table first (and its throw-away not-null) and later its\nparent table, the creation of the parent's PK will fail because the\nthrow-away not-null constraint collides with the permanent not-null\nconstraint that the PK wants to add, so the dump fails to restore.\n\nWe can work around this problem by letting the primary key \"take over\"\nthe child's not-null. This requires no changes to pg_dump, just two\nchanges to ALTER TABLE: first, the ability to convert a no-inherit\nnot-null constraint into a regular inheritable one (including recursing\ndown to children, if there are any); second, the ability to \"drop\" a\nconstraint that is defined both directly in the table and inherited from\na parent (which simply means to mark it as no longer having a local\ndefinition).\n\nSecondarily, change ATPrepAddPrimaryKey() to acquire locks all the way\ndown the inheritance hierarchy, in case we need to recurse when\npropagating constraints.\n\nThese two changes allow pg_dump to reproduce more cases involving\ninheritance from versions 16 and older.\n\nLastly, make two changes to pg_dump: 1) do not try to drop a not-null\nconstraint that's marked as inherited; this allows a dump to restore\nwith no errors if a table with a PK inherits from another which also has\na PK; 2) avoid giving inherited constraints throwaway names, for the\nrare cases where such a constraint survives after the restore.\n\nReported-by: Andrew Bille <[email protected]>\nReported-by: Justin Pryzby <[email protected]>\nDiscussion: https://postgr.es/m/CAJnzarwkfRu76_yi3dqVF_WL-MpvT54zMwAxFwJceXdHB76bOA@mail.gmail.com\nDiscussion: https://postgr.es/m/Zh0aAH7tbZb-9HbC@pryzbyj2023\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/d9f686a72ee91f6773e5d2bc52994db8d7157a8e\n\nModified Files\n--------------\nsrc/backend/catalog/heap.c | 36 +++++++++++++++--\nsrc/backend/catalog/pg_constraint.c | 43 +++++++++++++-------\nsrc/backend/commands/tablecmds.c | 65 +++++++++++++++++++++++++++----\nsrc/bin/pg_dump/pg_dump.c | 26 +++++++++++--\nsrc/include/catalog/pg_constraint.h | 2 +-\nsrc/test/regress/expected/constraints.out | 56 ++++++++++++++++++++++++++\nsrc/test/regress/sql/constraints.sql | 22 +++++++++++\n7 files changed, 221 insertions(+), 29 deletions(-)", "msg_date": "Thu, 18 Apr 2024 13:37:43 +0000", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Fix restore of not-null constraints with inheritance" }, { "msg_contents": "On 2024-Apr-18, Alvaro Herrera wrote:\n\n> Lastly, make two changes to pg_dump: 1) do not try to drop a not-null\n> constraint that's marked as inherited; this allows a dump to restore\n> with no errors if a table with a PK inherits from another which also has\n> a PK; 2) avoid giving inherited constraints throwaway names, for the\n> rare cases where such a constraint survives after the restore.\n\nHmm, this last bit broke pg_upgrade on crake. 
I'll revert this part,\nmeanwhile I'll be installing 9.2 to see if it can be fixed in a better way.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 18 Apr 2024 16:08:58 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql: Fix restore of not-null constraints with inheritance" }, { "msg_contents": "On 2024-Apr-18, Alvaro Herrera wrote:\n\n> On 2024-Apr-18, Alvaro Herrera wrote:\n> \n> > Lastly, make two changes to pg_dump: 1) do not try to drop a not-null\n> > constraint that's marked as inherited; this allows a dump to restore\n> > with no errors if a table with a PK inherits from another which also has\n> > a PK; 2) avoid giving inherited constraints throwaway names, for the\n> > rare cases where such a constraint survives after the restore.\n> \n> Hmm, this last bit broke pg_upgrade on crake. I'll revert this part,\n> meanwhile I'll be installing 9.2 to see if it can be fixed in a better way.\n\nEh, so:\n\n1. running \"make check\" on pg_upgrade using an oldinstall pointing to\n9.2 fails, because PostgreSQL::Test::Cluster doesn't support that\nversion -- it only goes back to 9.2. How difficult was it to port it\nback to all these old versions?\n\n2. running \"make check\" with an oldinstall pointing to 10 fails, because\nthe \"invalid database\" checks fail:\n\nnot ok 7 - invalid database causes failure status (got 0 vs expected 1)\n\n# Failed test 'invalid database causes failure status (got 0 vs expected 1)'\n# at t/002_pg_upgrade.pl line 407.\nnot ok 8 - invalid database causes failure stdout /(?^:invalid)/\n\n\n3. Lastly, even if I put back the code that causes the failures on crake\nand restore from 10 (and ignore those two problems), I cannot reproduce\nthe issues it reported. 
Is crake running some funky code that's not\nwhat \"make check\" on pg_upgrade does, perchance?\n\n\nI think we should SKIP the tests with invalid databases when running\nwith an oldinstall 10 and older, because that commit only patches back\nto 11:\n\nAuthor: Andres Freund <[email protected]>\nBranch: master [c66a7d75e] 2023-07-13 13:03:28 -0700\nBranch: REL_16_STABLE Release: REL_16_0 [a4b4cc1d6] 2023-07-13 13:03:30 -0700\nBranch: REL_15_STABLE Release: REL_15_4 [f66403749] 2023-07-13 13:04:45 -0700\nBranch: REL_14_STABLE Release: REL_14_9 [d11efe830] 2023-07-13 13:03:33 -0700\nBranch: REL_13_STABLE Release: REL_13_12 [81ce00006] 2023-07-13 13:03:34 -0700\nBranch: REL_12_STABLE Release: REL_12_16 [034a9fcd2] 2023-07-13 13:03:36 -0700\nBranch: REL_11_STABLE Release: REL_11_21 [1c38e7ae1] 2023-07-13 13:03:37 -0700\n\n Handle DROP DATABASE getting interrupted\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Cada quien es cada cual y baja las escaleras como quiere\" (JMSerrat)\n\n\n", "msg_date": "Thu, 18 Apr 2024 17:39:31 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql: Fix restore of not-null constraints with inheritance" }, { "msg_contents": "On 2024-04-18 Th 11:39, Alvaro Herrera wrote:\n> On 2024-Apr-18, Alvaro Herrera wrote:\n>\n>> On 2024-Apr-18, Alvaro Herrera wrote:\n>>\n>>> Lastly, make two changes to pg_dump: 1) do not try to drop a not-null\n>>> constraint that's marked as inherited; this allows a dump to restore\n>>> with no errors if a table with a PK inherits from another which also has\n>>> a PK; 2) avoid giving inherited constraints throwaway names, for the\n>>> rare cases where such a constraint survives after the restore.\n>> Hmm, this last bit broke pg_upgrade on crake. I'll revert this part,\n>> meanwhile I'll be installing 9.2 to see if it can be fixed in a better way.\n> Eh, so:\n>\n> 1. running \"make check\" on pg_upgrade using an oldinstall pointing to\n> 9.2 fails, because PostgreSQL::Test::Cluster doesn't support that\n> version -- it only goes back to 9.2. How difficult was it to port it\n> back to all these old versions?\n\n\nIt's not that hard to make it go back to 9.2. Here's a version that's a \ncouple of years old, but it supports versions all the way back to 7.2 :-)\n\nIf there's interest I'll work on supporting our official \"old\" versions \n(i.e. 9.2 and up)\n\n\n>\n> 2. running \"make check\" with an oldinstall pointing to 10 fails, because\n> the \"invalid database\" checks fail:\n>\n> not ok 7 - invalid database causes failure status (got 0 vs expected 1)\n>\n> # Failed test 'invalid database causes failure status (got 0 vs expected 1)'\n> # at t/002_pg_upgrade.pl line 407.\n> not ok 8 - invalid database causes failure stdout /(?^:invalid)/\n\n\n\n>\n>\n> 3. Lastly, even if I put back the code that causes the failures on crake\n> and restore from 10 (and ignore those two problems), I cannot reproduce\n> the issues it reported. Is crake running some funky code that's not\n> what \"make check\" on pg_upgrade does, perchance?\n\n\nIt's running the buildfarm cross version upgrade module. 
See \n<https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/Modules/TestUpgradeXversion.pm>\n\nIt's choking on the change in constraint names between the dump of the \npre-upgrade database and the dump of the post-upgrade database, e.g.\n\n\n CREATE TABLE public.rule_and_refint_t2 (\n- id2a integer CONSTRAINT pgdump_throwaway_notnull_0 NOT NULL NO INHERIT,\n- id2c integer CONSTRAINT pgdump_throwaway_notnull_1 NOT NULL NO INHERIT\n+ id2a integer CONSTRAINT rule_and_refint_t2_id2a_not_null NOT NULL NO INHERIT,\n+ id2c integer CONSTRAINT rule_and_refint_t2_id2c_not_null NOT NULL NO INHERIT\n );\n\n\nlook at the dumpdiff-REL9_2_STABLE file for the full list.\n\nI assume that means pg_dump is generating names that pg_upgrade throws \naway? That seems ... unfortunate.\n\nThere is a perl module at \nsrc/test/perl/PostgreSQL/Test/AdjustUpgrade.pm. This is used to adjust \nthe dump files before we diff them. Maybe you can remedy the problem by \nadding some code in there.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Thu, 18 Apr 2024 15:18:32 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix restore of not-null constraints with inheritance" }, { "msg_contents": "On 2024-Apr-18, Andrew Dunstan wrote:\n\n> On 2024-04-18 Th 11:39, Alvaro Herrera wrote:\n\n> It's not that hard to make it go back to 9.2. Here's a version that's a\n> couple of years old, but it supports versions all the way back to 7.2 :-)\n\nHmm, so I tried grabbing the old-version module definitions from here\nand pasting them into the new Cluster.pm, but that doesn't work, as it\nseems we've never handled some of those problems.\n\n> If there's interest I'll work on supporting our official \"old\" versions\n> (i.e. 9.2 and up)\n\nI'm not sure it's really worth the trouble, depending on how much effort\nit is, or how much uglier would Cluster.pm get. Maybe supporting back\nto 10 is enough, assuming I can reproduce crake's failure with 10, which\nI think should be possible.\n\n> It's running the buildfarm cross version upgrade module. See\n> <https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/Modules/TestUpgradeXversion.pm>\n\nThanks, I'll have a look and see if I can get this to run on my side.\n\n> It's choking on the change in constraint names between the dump of the\n> pre-upgrade database and the dump of the post-upgrade database, e.g.\n> \n> CREATE TABLE public.rule_and_refint_t2 (\n> - id2a integer CONSTRAINT pgdump_throwaway_notnull_0 NOT NULL NO INHERIT,\n> - id2c integer CONSTRAINT pgdump_throwaway_notnull_1 NOT NULL NO INHERIT\n> + id2a integer CONSTRAINT rule_and_refint_t2_id2a_not_null NOT NULL NO INHERIT,\n> + id2c integer CONSTRAINT rule_and_refint_t2_id2c_not_null NOT NULL NO INHERIT\n> );\n\nYeah, I saw this, and it was pretty obvious that the change I reverted\nin d72d32f52d26 was the culprit. It's all good now.\n\n> I assume that means pg_dump is generating names that pg_upgrade throws away?\n> That seems ... unfortunate.\n\nWell, I don't know if you're aware, but now pg_dump will include\nthrowaway not-null constraints for all columns of primary keys that\ndon't have an explicit not-null constraint. Later in the same dump, the\ncreation of the primary key removes that constraint. 
pg_upgrade doesn't\nreally play any role here, except that apparently the throwaway\nconstraint name is being preserved, or something.\n\nAnyway, it's moot [for] now.\n\nBTW because of a concern from Justin that the NO INHERIT stuff will\ncause errors in old server versions, I started to wonder if it wouldn't\nbe better to add these constraints in a separate line for compatibility,\nso for example in the table above it'd be\n\nCREATE TABLE public.rule_and_refint_t2 (\n id2a integer,\n id2c integer\n);\nALTER TABLE public.rule_and_refint_t2 ADD CONSTRAINT pgdump_throwaway_notnull_0 NOT NULL id2a NO INHERIT;\nALTER TABLE public.rule_and_refint_t2 ADD CONSTRAINT pgdump_throwaway_notnull_1 NOT NULL id2c NO INHERIT;\n\nwhich might be less problematic in terms of compatibility: you still end\nup having the table, it's only the ALTER TABLE that would error out.\n\n\n> There is a perl module at src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm.\n> This is used to adjust the dump files before we diff them. Maybe you can\n> remedy the problem by adding some code in there.\n\nHopefully nothing is needed there. (I think it would be difficult to\nmake the names match anyway.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"If you have nothing to say, maybe you need just the right tool to help you\nnot say it.\" (New York Times, about Microsoft PowerPoint)\n\n\n", "msg_date": "Fri, 19 Apr 2024 13:59:31 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql: Fix restore of not-null constraints with inheritance" }, { "msg_contents": "On Fri, Apr 19, 2024 at 01:59:31PM +0200, Alvaro Herrera wrote:\n> BTW because of a concern from Justin that the NO INHERIT stuff will\n> cause errors in old server versions, I started to wonder if it wouldn't\n> be better to add these constraints in a separate line for compatibility,\n> so for example in the table above it'd be\n> \n> CREATE TABLE public.rule_and_refint_t2 (\n> id2a integer,\n> id2c integer\n> );\n> ALTER TABLE public.rule_and_refint_t2 ADD CONSTRAINT pgdump_throwaway_notnull_0 NOT NULL id2a NO INHERIT;\n> ALTER TABLE public.rule_and_refint_t2 ADD CONSTRAINT pgdump_throwaway_notnull_1 NOT NULL id2c NO INHERIT;\n> \n> which might be less problematic in terms of compatibility: you still end\n> up having the table, it's only the ALTER TABLE that would error out.\n\nUnder pg_restore -d, those would all be run in a single transactional\ncommand, so it would *still* fail to create the table...\n\nIt seems like the workaround to restore into an old server version would\nbe to run:\n| pg_restore -f- |sed 's/ NO INHERIT//' |psql\n\nPutting them on separate lines makes that a tiny bit better, since you\ncould do:\n| pg_restore -f- |sed '/^ALTER TABLE .* ADD CONSTRAINT .* NOT NULL/{ s/ NO INHERIT// }' |psql\n\nBut I'm not sure whether that's enough of an improvement to warrant the\neffort.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 20 Apr 2024 20:25:48 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix restore of not-null constraints with inheritance" } ]
[ { "msg_contents": "Hello hackers,\n\nWhen using a server built with clang (15, 18) with sanitizers enabled,\nthe last query in the following script:\nSET parallel_setup_cost = 0;\nSET min_parallel_table_scan_size = 0;\n\nSELECT a::text INTO t FROM generate_series(1, 1000) a;\n\\timing on\nSELECT string_agg(a, ',') FROM t WHERE a = REPEAT('0', 400000);\n\nruns for more than a minute on my workstation (a larger repeat count gives\nmuch longer duration):\nTime: 66017.594 ms (01:06.018)\n\nThe typical stack trace for a running parallel worker is:\n#0  0x0000559a23671885 in __sanitizer::internal_strlen(char const*) ()\n#1  0x0000559a236568ed in StrtolFixAndCheck(void*, char const*, char**, char*, int) ()\n#2  0x0000559a236423dc in __interceptor_strtol ()\n#3  0x0000559a240027d9 in atoi (...) at readfuncs.c:629\n#5  0x0000559a23fb03f0 in _readConst () at readfuncs.c:275\n#6  parseNodeString () at ./readfuncs.switch.c:29\n#7  0x0000559a23faa421 in nodeRead (\n     token=0x7fee75cf3bd2 \"{CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false \n:constisnull false :location -1 :constvalue 400004 [ 16 106 24 0 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48\"...,\n     tok_len=1) at read.c:338\n#8  0x0000559a23faa916 in nodeRead (...) at read.c:452\n#9  0x0000559a23fb3f34 in _readOpExpr () at ./readfuncs.funcs.c:279\n#10 0x0000559a23fb0842 in parseNodeString () at ./readfuncs.switch.c:47\n#11 0x0000559a23faa421 in nodeRead (...) at read.c:338\n#12 0x0000559a23faa916 in nodeRead (...) at read.c:452\n#13 0x0000559a23fefb74 in _readSeqScan () at ./readfuncs.funcs.c:3954\n#14 0x0000559a23fb2b97 in parseNodeString () at ./readfuncs.switch.c:559\n#15 0x0000559a23faa421 in nodeRead (...) at read.c:338\n#16 0x0000559a23ffc033 in _readAgg () at ./readfuncs.funcs.c:4685\n#17 0x0000559a23fb2dd3 in parseNodeString () at ./readfuncs.switch.c:611\n#18 0x0000559a23faa421 in nodeRead (...) at read.c:338\n#19 0x0000559a23feb340 in _readPlannedStmt () at ./readfuncs.funcs.c:3685\n#20 0x0000559a23fb2ad1 in parseNodeString () at ./readfuncs.switch.c:541\n#21 0x0000559a23faa421 in nodeRead (...) at read.c:338\n#22 0x0000559a23fa99d8 in stringToNodeInternal (...) at read.c:92\n#24 0x0000559a23d66609 in ExecParallelGetQueryDesc (...) at execParallel.c:1250\n#25 ParallelQueryMain (...) at execParallel.c:1424\n#26 0x0000559a238cfe13 in ParallelWorkerMain (...) at parallel.c:1516\n#27 0x0000559a241e5b6a in BackgroundWorkerMain (...) at bgworker.c:848\n#28 0x0000559a241ec254 in postmaster_child_launch (...) at launch_backend.c:265\n#29 0x0000559a241f1c15 in do_start_bgworker (...) at postmaster.c:4270\n#30 maybe_start_bgworkers () at postmaster.c:4501\n#31 0x0000559a241f486e in process_pm_pmsignal () at postmaster.c:3774\n#32 ServerLoop () at postmaster.c:1667\n#33 0x0000559a241f0ed6 in PostmasterMain (...) at postmaster.c:1372\n#34 0x0000559a23ebe16d in main (...) at main.c:197\n\nThe flamegraph (which shows that readDatum() -> __interceptor_strtol() ->\nStrtolFixAndCheck() -> __sanitizer::internal_strlen () takes >99% of time)\nis attached.\n(I could not reproduce this behaviour with the gcc's sanitizer.)\n\nMoreover, this query cannot be canceled, you can only kill -SIGKILL\nparallel workers to interrupt it.\n\nBest regards,\nAlexander", "msg_date": "Thu, 18 Apr 2024 18:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "clang's sanitizer makes stringToNode() extremely slow" } ]
[ { "msg_contents": "Hi,\r\n\r\nPostgreSQL 17 Beta 1 is planned to be release on May 23, 2024. Please \r\ncontinue your hard work on closing out open items[1] ahead of the \r\nrelease and have the fixes targeted for the release committed by May 18, \r\n2024.\r\n\r\nThanks - it's very exciting that we're at this point in the release cycle!\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items", "msg_date": "Thu, 18 Apr 2024 11:57:40 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 17 Beta 1 release date" } ]
[ { "msg_contents": "Here is a patch series that aims to clean up ecpg's preprocessor\ncode a little and fix the compile time problems we're seeing with\nlate-model clang [1]. I guess whether it's a cleanup is in the eye of\nthe beholder, but it definitely succeeds at fixing compile time: for\nme, the time needed to compile preproc.o with clang 16 drops from\n104 seconds to less than 1 second. It might be a little faster at\nprocessing input too, though that wasn't the primary goal.\n\nThe reason that clang is having a problem seems to be the large number\nof virtually-duplicate semantic actions in the generated preproc.y.\nSo I looked for a way to allow most productions to use the default\nsemantic action rather than having to write anything. The core idea\nof this patch is to stop returning <str> results from grammar\nnonterminals and instead pass the strings back as Bison location data,\nwhich we can do by redefining YYLTYPE as \"char *\". Since ecpg isn't\nusing Bison's location logic for error reports, and seems unlikely to\ndo so in future, this doesn't cost us anything. Then we can implement\na one-size-fits-most token concatenation rule in YYLLOC_DEFAULT, and\nonly the various handmade rules that don't want to just concatenate\ntheir inputs need to do something different. (Within those handmade\nrules, the main notational change needed is to write \"@N\" not \"$N\"\nfor the string value of the N'th input token, and \"@@\" not \"$@\"\nfor the output string value.) Aside from not giving clang\nindigestion, this makes the compiled parser a little smaller since\nthere are fewer semantic actions that need code space.\n\nAs Andres remarked in the other thread, the parse.pl script that\nconstructs preproc.y is undocumented and unreadable, so I spent\na good deal of time reverse-engineering and cleaning that up\nbefore I went to work on the actual problem. Four of the six\npatches in this series are in the way of cleanup and adding\ndocumentation, with no significant behavioral changes.\n\nThe patch series comprises:\n\n0001: pgindent the code in pgc.l and preproc.y's precursor files.\nYeah, this was my latent OCD rearing its head, but I hate looking\nat or working on messy code. It did actually pay some dividends\nlater on, by making it easier to make bulk edits.\n\n0002: improve the external documentation and error checking of\nparse.pl. This was basically to convince myself that I knew\nwhat it was supposed to do before I started changing it.\nThe error checks did find some errors, too: in particular,\nit turns out there are two unused entries in ecpg.addons.\n\n(This implies that check_rules.pl is completely worthless and should\nbe nuked: it adds build cycles and maintenance effort while failing\nto reliably accomplish its one job of detecting dead rules, because\nwhat it is testing is not the same thing that parse.pl actually does.\nI've not included that removal in this patch series, though.)\n\n0003: clean up and simplify parse.pl, and write some internal\ndocumentation for it. The effort of understanding it exposed that\nthere was a pretty fair amount of dead or at least redundant code,\nso I got rid of that. This patch changes the output preproc.y\nfile only to the extent of removing some blank lines that didn't\nseem very useful to preserve.\n\n0004: this is where something useful happens, specifically where\nwe change <str>-returning productions to return void and instead\npass back the desired output string as location data. 
In most\ncases the productions now need no explicit semantic action at all,\nallowing substantial simplification in parse.pl.\n\n0005: more cleanup. I didn't want to add more memory-management\ncode to preproc/type.c, where mm_alloc and mm_strdup have lived\nfor no explicable reason. I pulled those and a couple of other\nfunctions out to a new file util.c, so as to have a better home\nfor new utility code.\n\n0006: the big problem with 0004 is that it can't use the trick\nof freeing input substrings as soon as it's created the merged\nstring, as cat_str and friends have historically done. That's\nbecause YYLLOC_DEFAULT runs before the rule's semantic action\nif any, so that if the action does need to look at the input\nstrings, they'd already be freed. So 0004 is leaking memory\nrather badly. Fix that by creating a notion of \"local\" memory\nthat will be reclaimed at end of statement, analogously to\nshort-lived memory contexts in the backend. All the string\nconcatenation work happens in short-lived storage and we don't\nworry about getting rid of intermediate values immediately.\nBy making cat_str and friends work similarly, we can get rid\nof quite a lot of explicit mm_strdup calls, although we do have\nto add some at places where we're building long-lived data\nstructures. This should greatly reduce the malloc/free traffic\ntoo, at the cost of eating somewhat more space intra-statement.\n\nIn my view 0006 is about the scariest part of this, as it's\nhard to be sure that there are no use-after-free problems\nwherein a pointer to a short-lived string survives past end\nof statement. It gets through the ecpg regression tests\nunder valgrind successfully, but I don't have much faith\nin the thoroughness of the code coverage of those tests.\n(If our code coverage tools worked on bison/flex stuff,\nmaybe this'd be less scary ... but they don't.)\n\nI'll park this in the July commitfest.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAGECzQQg4qmGbqqLbK9yyReWd1g%3Dd7T07_gua%2BRKXsdsW9BG-Q%40mail.gmail.com", "msg_date": "Thu, 18 Apr 2024 22:18:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "Hi,\n\nOn 2024-04-18 22:18:34 -0400, Tom Lane wrote:\n> Here is a patch series that aims to clean up ecpg's preprocessor\n> code a little and fix the compile time problems we're seeing with\n> late-model clang [1]. I guess whether it's a cleanup is in the eye of\n> the beholder, but it definitely succeeds at fixing compile time: for\n> me, the time needed to compile preproc.o with clang 16 drops from\n> 104 seconds to less than 1 second. It might be a little faster at\n> processing input too, though that wasn't the primary goal.\n\nNice! I'll look at this more later.\n\n\nFor now I just wanted to point one minor detail:\n\n> (If our code coverage tools worked on bison/flex stuff,\n> maybe this'd be less scary ... but they don't.)\n\nFor bison coverage seems to work, see e.g.:\n\nhttps://coverage.postgresql.org/src/interfaces/ecpg/preproc/preproc.y.gcov.html#10638\n\nI think the only reason it doesn't work for flex is that we have\n/* LCOV_EXCL_START */\n/* LCOV_EXCL_STOP */\n\naround the scanner \"body\". 
Without that I get reasonable-looking, albeit not\nvery comforting, coverage for pgc.l as well.\n\n |Lines |Functions|Branches\nFilename |Rate Num|Rate Num|Rate Num\nsrc/interfaces/ecpg/preproc/pgc.l |65.9% 748|87.5% 8| - 0\nsrc/interfaces/ecpg/preproc/preproc.y |29.9% 4964|66.7% 15| - 0\n\n\nThis has been introduced by\n\ncommit 421167362242ce1fb46d6d720798787e7cd65aad\nAuthor: Peter Eisentraut <[email protected]>\nDate: 2017-08-10 23:33:47 -0400\n\n Exclude flex-generated code from coverage testing\n\n Flex generates a lot of functions that are not actually used. In order\n to avoid coverage figures being ruined by that, mark up the part of the\n .l files where the generated code appears by lcov exclusion markers.\n That way, lcov will typically only reported on coverage for the .l file,\n which is under our control, but not for the .c file.\n\n Reviewed-by: Michael Paquier <[email protected]>\n\n\nbut I don't think it's working as intended, as it's also preventing coverage\nfor the actual scanner definition.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 18 Apr 2024 20:03:46 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-18 22:18:34 -0400, Tom Lane wrote:\n>> (If our code coverage tools worked on bison/flex stuff,\n>> maybe this'd be less scary ... but they don't.)\n\n> For bison coverage seems to work, see e.g.:\n\nYeah, I'd just noticed that --- I had it in my head that we'd put\nLCOV_EXCL_START/STOP into bison files too, but nope they are only\nin flex files. That's good for this specific problem, because the\ncode I'm worried about is all in the bison file.\n\n> around the scanner \"body\". Without that I get reasonable-looking, albeit not\n> very comforting, coverage for pgc.l as well.\n\nI was just looking locally at what I got by removing that, and sadly\nI don't think I believe it: there are a lot of places where it claims\nwe hit lines we don't, and vice versa. That might be partially\nblamable on old tools on my RHEL8 workstation, but it sure seems\nthat flex output confuses lcov to some extent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Apr 2024 23:11:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "On 2024-04-18 23:11:52 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2024-04-18 22:18:34 -0400, Tom Lane wrote:\n> >> (If our code coverage tools worked on bison/flex stuff,\n> >> maybe this'd be less scary ... but they don't.)\n>\n> > For bison coverage seems to work, see e.g.:\n>\n> Yeah, I'd just noticed that --- I had it in my head that we'd put\n> LCOV_EXCL_START/STOP into bison files too, but nope they are only\n> in flex files. That's good for this specific problem, because the\n> code I'm worried about is all in the bison file.\n\nAt least locally the coverage seems to make sense too, both for the main\ngrammar and for ecpg's.\n\n\n> > around the scanner \"body\". Without that I get reasonable-looking, albeit not\n> > very comforting, coverage for pgc.l as well.\n>\n> I was just looking locally at what I got by removing that, and sadly\n> I don't think I believe it: there are a lot of places where it claims\n> we hit lines we don't, and vice versa. 
That might be partially\n> blamable on old tools on my RHEL8 workstation, but it sure seems\n> that flex output confuses lcov to some extent.\n\nHm. Here it mostly looks reasonable, except that at least things seem off by\n1. And sure enough, if I look at pgc.l it has code like\n\ncase 2:\nYY_RULE_SETUP\n#line 465 \"/home/andres/src/postgresql/src/interfaces/ecpg/preproc/pgc.l\"\n{\n token_start = yytext;\n state_before_str_start = YYSTATE;\n\nHowever line 465 is actually the \"token_start\" line.\n\nFurther down this seems to get worse, by \"<<EOF>>\" it's off by 4 lines.\n\n\n$ apt policy flex\nflex:\n Installed: 2.6.4-8.2+b2\n Candidate: 2.6.4-8.2+b2\n Version table:\n *** 2.6.4-8.2+b2 500\n 500 http://mirrors.ocf.berkeley.edu/debian unstable/main amd64 Packages\n 100 /var/lib/dpkg/status\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 18 Apr 2024 20:40:22 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-18 23:11:52 -0400, Tom Lane wrote:\n>> I was just looking locally at what I got by removing that, and sadly\n>> I don't think I believe it: there are a lot of places where it claims\n>> we hit lines we don't, and vice versa. That might be partially\n>> blamable on old tools on my RHEL8 workstation, but it sure seems\n>> that flex output confuses lcov to some extent.\n\n> Hm. Here it mostly looks reasonable, except that at least things seem off by\n> 1.\n\nYeah, now that you mention it what I'm seeing looks like the line\nnumbering might be off-by-one. Time for a bug report?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Apr 2024 23:53:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "One other bit of randomness that I noticed: ecpg's parse.pl has\nthis undocumented bit of logic:\n\n if ($a eq 'IDENT' && $prior eq '%nonassoc')\n {\n\n # add more tokens to the list\n $str = $str . \"\\n%nonassoc CSTRING\";\n }\n\nThe net effect of that is that, where gram.y writes\n\n%nonassoc UNBOUNDED NESTED /* ideally would have same precedence as IDENT */\n%nonassoc IDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n SET KEYS OBJECT_P SCALAR VALUE_P WITH WITHOUT PATH\n%left Op OPERATOR /* multi-character ops and user-defined operators */\n\npreproc.c has\n\n %nonassoc UNBOUNDED NESTED\n %nonassoc IDENT\n%nonassoc CSTRING PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n SET KEYS OBJECT_P SCALAR VALUE_P WITH WITHOUT PATH\n %left Op OPERATOR\n\nIf you don't find that scary as heck, I suggest reading the very long\ncomment just in front of the cited lines of gram.y. The argument why\nassigning these keywords a precedence at all is OK depends heavily\non it being the same precedence as IDENT, yet here's ECPG randomly\nbreaking that.\n\nWe seem to have avoided problems though, because if I fix things\nby manually editing preproc.y to re-join the lines:\n\n %nonassoc IDENT CSTRING PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n\nthe generated preproc.c doesn't change at all. Actually, I can\ntake CSTRING out of this list altogether and it still doesn't\nchange the results ... 
although looking at how CSTRING is used,\nit looks safer to give it the same precedence as IDENT.\n\nI think we should change parse.pl to give one or the other of these\nresults before something more serious breaks there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Apr 2024 11:21:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "The cfbot noticed that this patchset had a conflict with d35cd0619,\nso here's a rebase. It's just a rebase of v1, no other changes.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 05 Jul 2024 11:56:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "On Fri, Apr 19, 2024 at 10:21 PM Tom Lane <[email protected]> wrote:\n>\n> One other bit of randomness that I noticed: ecpg's parse.pl has\n> this undocumented bit of logic:\n>\n> if ($a eq 'IDENT' && $prior eq '%nonassoc')\n> {\n>\n> # add more tokens to the list\n> $str = $str . \"\\n%nonassoc CSTRING\";\n> }\n\n> preproc.c has\n>\n> %nonassoc UNBOUNDED NESTED\n> %nonassoc IDENT\n> %nonassoc CSTRING PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n> SET KEYS OBJECT_P SCALAR VALUE_P WITH WITHOUT PATH\n> %left Op OPERATOR\n>\n> If you don't find that scary as heck, I suggest reading the very long\n> comment just in front of the cited lines of gram.y. The argument why\n> assigning these keywords a precedence at all is OK depends heavily\n> on it being the same precedence as IDENT, yet here's ECPG randomly\n> breaking that.\n\nBefore 7f380c59f (Reduce size of backend scanner's tables), it was\neven more spread out:\n\n# add two more tokens to the list\n$str = $str . \"\\n%nonassoc CSTRING\\n%nonassoc UIDENT\";\n\n...giving:\n %nonassoc UNBOUNDED\n %nonassoc IDENT\n%nonassoc CSTRING\n%nonassoc UIDENT GENERATED NULL_P PARTITION RANGE ROWS GROUPS\nPRECEDING FOLLOWING CUBE ROLLUP\n\n> We seem to have avoided problems though, because if I fix things\n> by manually editing preproc.y to re-join the lines:\n>\n> %nonassoc IDENT CSTRING PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n>\n> the generated preproc.c doesn't change at all.\n\nOn a whim I tried rejoining on v12 and the .c doesn't change there, either.\n\n> Actually, I can\n> take CSTRING out of this list altogether and it still doesn't\n> change the results ... although looking at how CSTRING is used,\n> it looks safer to give it the same precedence as IDENT.\n\nDoing that on v12 on top of rejoining results in a shift-reduce\nconflict, so I imagine that's why it's there. Maybe it's outdated, but\nthis backs up your inclination that it's safer to keep.\n\n\n", "msg_date": "Mon, 12 Aug 2024 12:33:49 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "On Fri, Jul 5, 2024 at 10:59 PM Tom Lane <[email protected]> wrote:\n>\n> The cfbot noticed that this patchset had a conflict with d35cd0619,\n> so here's a rebase. It's just a rebase of v1, no other changes.\n\nHi Tom,\n\nI started looking at the earlier cleanup patches.\n\n0001 seems straightforward. 
Note: It doesn't apply cleanly anymore,\nbut does with 'patch'.\n0002 LGTM, just a couple minor comments:\n\n--- a/src/interfaces/ecpg/preproc/parse.pl\n+++ b/src/interfaces/ecpg/preproc/parse.pl\n@@ -1,7 +1,13 @@\n #!/usr/bin/perl\n # src/interfaces/ecpg/preproc/parse.pl\n # parser generator for ecpg version 2\n-# call with backend parser as stdin\n+#\n+# See README.parser for some explanation of what this does.\n\nDoesn't this patch set put us up to version 3? ;-) Looking in the\nhistory, a very long time ago a separate \"parse2.pl\" was committed for\nsome reason, but that was reconciled some time later. This patch\ndoesn't need to get rid of that meaningless version number, but I find\nit distracting.\n\n+ # There may be multiple ECPG: lines and then multiple lines of code.\n+ # Each block of code needs to be added to all prior ECPG records.\n\nThis took me a while to parse at first. Some places in this script put\nquotes around words-with-colons, and that seems good for readability.\n\n0003:\n\nLooks a heck of a lot better, but I didn't try to understand\neverything in the script, either before or after.\n\n+ # Emit the target part of the rule.\n+ # Note: the leading space is just to match\n+ # the old, rather weird output logic.\n+ $tstr = ' ' . $non_term_id . ':';\n+ add_to_buffer('rules', $tstr);\n\nRemoving the leading space (or making it two spaces) has no effect on\nthe output -- does that get normalized elsewhere?\n\nThat's all I have for now.\n\n\n", "msg_date": "Mon, 12 Aug 2024 17:46:22 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "John Naylor <[email protected]> writes:\n> I started looking at the earlier cleanup patches.\n\nThanks for looking!\n\n> 0001 seems straightforward. Note: It doesn't apply cleanly anymore,\n> but does with 'patch'.\n\nOdd, after rebasing it seems to have only line-number differences.\n\n> + # Emit the target part of the rule.\n> + # Note: the leading space is just to match\n> + # the old, rather weird output logic.\n> + $tstr = ' ' . $non_term_id . ':';\n> + add_to_buffer('rules', $tstr);\n\n> Removing the leading space (or making it two spaces) has no effect on\n> the output -- does that get normalized elsewhere?\n\nIt does affect horizontal space in the generated preproc.y file,\nwhich'd have no effect on the derived preproc.c file. I tweaked\nthe commit message to clarify that.\n\nI adopted your other suggestions, no need to rehash them.\n\nHere's a rebased but otherwise identical patchset. I also added\nan 0007 that removes check_rules.pl as threatened.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 12 Aug 2024 15:19:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "I wrote:\n> Here's a rebased but otherwise identical patchset. I also added\n> an 0007 that removes check_rules.pl as threatened.\n\nI've done some more work on this and hope to post an updated patchset\ntomorrow. Before that though, is there any objection to going ahead\nwith pushing the 0001 patch (pgindent'ing ecpg's lexer and parser\nfiles)? 
It's pretty bulky yet of no intellectual interest, so I'd\nlike to stop carrying it forward.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Aug 2024 20:43:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "On 15.08.24 02:43, Tom Lane wrote:\n> I wrote:\n>> Here's a rebased but otherwise identical patchset. I also added\n>> an 0007 that removes check_rules.pl as threatened.\n> \n> I've done some more work on this and hope to post an updated patchset\n> tomorrow. Before that though, is there any objection to going ahead\n> with pushing the 0001 patch (pgindent'ing ecpg's lexer and parser\n> files)? It's pretty bulky yet of no intellectual interest, so I'd\n> like to stop carrying it forward.\n\nThe indentation patch looks good to me and it would be good to get it \nout of the way.\n\n\n\n", "msg_date": "Thu, 15 Aug 2024 09:20:13 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 15.08.24 02:43, Tom Lane wrote:\n>> I've done some more work on this and hope to post an updated patchset\n>> tomorrow. Before that though, is there any objection to going ahead\n>> with pushing the 0001 patch (pgindent'ing ecpg's lexer and parser\n>> files)? It's pretty bulky yet of no intellectual interest, so I'd\n>> like to stop carrying it forward.\n\n> The indentation patch looks good to me and it would be good to get it \n> out of the way.\n\nThanks, done. Here's a revised patchset.\n\n0001-0003 are substantially identical to the previous 0002-0004.\nLikewise 0005 is basically the same as previous 0006,\nand 0009 is identical to the previous 0007.\n\n0004 differs from the previous 0005 in also moving the\ncat_str/make_str functions into util.c, because I found that\nat least the make_str functions could be useful in pgc.l.\n\nThe new stuff is in 0006-0008, and what it basically does is\nclean up all remaining memory leakage in ecpg --- or at least,\nall that valgrind can find while running ecpg's regression tests.\n(I'm not fool enough to think that there might not be some in\nunexercised code paths.) It's fairly straightforward attention\nto detail in data structure management.\n\nI discovered the need for more effort on memory leakage by\ndoing some simple performance testing (basically, running ecpg\non a big file made by pasting together lots of copies of some\nof the regression test inputs). v3 was slower and consumed more\nmemory than HEAD :-(. HEAD does already leak quite a bit of\nmemory, but v3 was worse, mainly because the string tokens\nreturned by pgc.l weren't being reclaimed. I hadn't really\nset out to drive the leakage to zero, but it turned out to not\nbe that hard, so I did it.\n\nWith those fixes, I see v4 running maybe 10% faster than HEAD,\nrather than a similar amount slower. I'm content with that\nresult, and feel that this may now be commit-quality.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 15 Aug 2024 15:58:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" }, { "msg_contents": "I wrote:\n> Thanks, done. Here's a revised patchset.\n\nThe cfbot points out that I should probably have marked progname\nas \"static\" in 0008. 
I'm not going to repost the patchset just for\nthat, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Aug 2024 16:21:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ECPG cleanup and fix for clang compile-time problem" } ]
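The statement-lifetime ("local") string storage described above for patch 0006 can be pictured roughly as follows. This is only a minimal sketch, not code from the patch set: the names loc_chunk, loc_alloc, loc_strdup and reclaim_local_storage are invented here for illustration, and the real preprocessor also has to deal with error paths and with data that must outlive the statement.

/*
 * Sketch of statement-local allocation: every chunk is linked into a
 * list so that the whole lot can be freed in one pass at end of
 * statement, much like a short-lived memory context in the backend.
 */
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct loc_chunk
{
	struct loc_chunk *next;		/* previously allocated chunk */
	char		data[];			/* string payload follows the header */
} loc_chunk;

static loc_chunk *loc_head = NULL;

/* Allocate "size" bytes of storage that lives until end of statement. */
static void *
loc_alloc(size_t size)
{
	loc_chunk  *chunk = malloc(offsetof(loc_chunk, data) + size);

	if (chunk == NULL)
	{
		fprintf(stderr, "out of memory\n");
		exit(1);
	}
	chunk->next = loc_head;
	loc_head = chunk;
	return chunk->data;
}

/* Copy a string into statement-local storage. */
static char *
loc_strdup(const char *str)
{
	char	   *copy = loc_alloc(strlen(str) + 1);

	strcpy(copy, str);
	return copy;
}

/* Free all statement-local chunks, e.g. once a statement has been emitted. */
static void
reclaim_local_storage(void)
{
	while (loc_head != NULL)
	{
		loc_chunk  *next = loc_head->next;

		free(loc_head);
		loc_head = next;
	}
}

Anything that must survive the statement (for instance strings stashed in long-lived data structures) still has to be copied out with a long-lived allocator such as mm_strdup before reclaim_local_storage() runs, which is exactly where the use-after-free worry mentioned above comes from.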
[ { "msg_contents": "Hello\n\nThis page says that the `@` and `~` operators on various types can be\naccelerated by a GiST index.\n\nhttps://www.postgresql.org/docs/current/gist-builtin-opclasses.html\n\nThese operators have been listed in the file since it was created in 2014,\nbut if they exist then I don't know how to use them or what they do.\n\nCode examples, for clarity:\n\n> select box '(0,0),(1, 1)' ~ box '(2,2),(3,3)';\noperator does not exist: box ~ box\n\n> select box '(0,0),(1, 1)' @ box '(2,2),(3,3)';\noperator does not exist: box @ box\n\nIf they're a typo or some removed thing then I'd like to remove them from\nthe page. This email is me asking to find out if I'm wrong about that\nbefore I try to submit a patch (also very happy for someone with a\ncommitter bit to just fix this).\n\nCheers,\nCol\n\nHelloThis page says that the `@` and `~` operators on various types can be accelerated by a GiST index.https://www.postgresql.org/docs/current/gist-builtin-opclasses.htmlThese operators have been listed in the file since it was created in 2014, but if they exist then I don't know how to use them or what they do.Code examples, for clarity:> select box '(0,0),(1, 1)' ~ box '(2,2),(3,3)';operator does not exist: box ~ box> select box '(0,0),(1, 1)' @ box '(2,2),(3,3)';operator does not exist: box @ boxIf they're a typo or some removed thing then I'd like to remove them from the page. This email is me asking to find out if I'm wrong about that before I try to submit a patch (also very happy for someone with a committer bit to just fix this).Cheers,Col", "msg_date": "Fri, 19 Apr 2024 04:41:41 +0100", "msg_from": "Colin Caine <[email protected]>", "msg_from_op": true, "msg_subject": "Okay to remove mention of mystery @ and ~ operators?" }, { "msg_contents": "Hi,\n\n> This page says that the `@` and `~` operators on various types can be accelerated by a GiST index.\n>\n> https://www.postgresql.org/docs/current/gist-builtin-opclasses.html\n>\n> These operators have been listed in the file since it was created in 2014, but if they exist then I don't know how to use them or what they do.\n>\n> Code examples, for clarity:\n>\n> > select box '(0,0),(1, 1)' ~ box '(2,2),(3,3)';\n> operator does not exist: box ~ box\n>\n> > select box '(0,0),(1, 1)' @ box '(2,2),(3,3)';\n> operator does not exist: box @ box\n>\n> If they're a typo or some removed thing then I'd like to remove them from the page. This email is me asking to find out if I'm wrong about that before I try to submit a patch (also very happy for someone with a committer bit to just fix this).\n\nIndeed, there is no @(box,box) or ~(box,box) in the \\dAo output. These\noperators were removed by 2f70fdb0644c back in 2020.\n\nI will submit a patch for the documentation shortly. Thanks for reporting.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 19 Apr 2024 13:08:50 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Okay to remove mention of mystery @ and ~ operators?" 
}, { "msg_contents": "Hi,\n\n> > This page says that the `@` and `~` operators on various types can be accelerated by a GiST index.\n> >\n> > https://www.postgresql.org/docs/current/gist-builtin-opclasses.html\n> >\n> > These operators have been listed in the file since it was created in 2014, but if they exist then I don't know how to use them or what they do.\n> >\n> > Code examples, for clarity:\n> >\n> > > select box '(0,0),(1, 1)' ~ box '(2,2),(3,3)';\n> > operator does not exist: box ~ box\n> >\n> > > select box '(0,0),(1, 1)' @ box '(2,2),(3,3)';\n> > operator does not exist: box @ box\n> >\n> > If they're a typo or some removed thing then I'd like to remove them from the page. This email is me asking to find out if I'm wrong about that before I try to submit a patch (also very happy for someone with a committer bit to just fix this).\n>\n> Indeed, there is no @(box,box) or ~(box,box) in the \\dAo output. These\n> operators were removed by 2f70fdb0644c back in 2020.\n>\n> I will submit a patch for the documentation shortly. Thanks for reporting.\n\nHere is the patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 19 Apr 2024 13:31:13 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Okay to remove mention of mystery @ and ~ operators?" }, { "msg_contents": "> On 19 Apr 2024, at 12:31, Aleksander Alekseev <[email protected]> wrote:\n> \n> Hi,\n> \n>>> This page says that the `@` and `~` operators on various types can be accelerated by a GiST index.\n>>> \n>>> https://www.postgresql.org/docs/current/gist-builtin-opclasses.html\n>>> \n>>> These operators have been listed in the file since it was created in 2014, but if they exist then I don't know how to use them or what they do.\n>>> \n>>> Code examples, for clarity:\n>>> \n>>>> select box '(0,0),(1, 1)' ~ box '(2,2),(3,3)';\n>>> operator does not exist: box ~ box\n>>> \n>>>> select box '(0,0),(1, 1)' @ box '(2,2),(3,3)';\n>>> operator does not exist: box @ box\n>>> \n>>> If they're a typo or some removed thing then I'd like to remove them from the page. This email is me asking to find out if I'm wrong about that before I try to submit a patch (also very happy for someone with a committer bit to just fix this).\n>> \n>> Indeed, there is no @(box,box) or ~(box,box) in the \\dAo output. These\n>> operators were removed by 2f70fdb0644c back in 2020.\n>> \n>> I will submit a patch for the documentation shortly. Thanks for reporting.\n> \n> Here is the patch.\n\nNice catch, and thanks for the patch. I'll apply it with a backpatch to when\nthey were removed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 19 Apr 2024 13:49:45 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Okay to remove mention of mystery @ and ~ operators?" }, { "msg_contents": "> On 19 Apr 2024, at 13:49, Daniel Gustafsson <[email protected]> wrote:\n>> On 19 Apr 2024, at 12:31, Aleksander Alekseev <[email protected]> wrote:\n\n>> Here is the patch.\n> \n> Nice catch, and thanks for the patch. I'll apply it with a backpatch to when\n> they were removed.\n\nDone, thanks for the report and the patch!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 19 Apr 2024 14:59:16 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Okay to remove mention of mystery @ and ~ operators?" 
}, { "msg_contents": "Thanks all!\n\nOn Fri, 19 Apr 2024 at 13:59, Daniel Gustafsson <[email protected]> wrote:\n\n> > On 19 Apr 2024, at 13:49, Daniel Gustafsson <[email protected]> wrote:\n> >> On 19 Apr 2024, at 12:31, Aleksander Alekseev <[email protected]>\n> wrote:\n>\n> >> Here is the patch.\n> >\n> > Nice catch, and thanks for the patch. I'll apply it with a backpatch to\n> when\n> > they were removed.\n>\n> Done, thanks for the report and the patch!\n>\n> --\n> Daniel Gustafsson\n>\n>\n\nThanks all!On Fri, 19 Apr 2024 at 13:59, Daniel Gustafsson <[email protected]> wrote:> On 19 Apr 2024, at 13:49, Daniel Gustafsson <[email protected]> wrote:\n>> On 19 Apr 2024, at 12:31, Aleksander Alekseev <[email protected]> wrote:\n\n>> Here is the patch.\n> \n> Nice catch, and thanks for the patch.  I'll apply it with a backpatch to when\n> they were removed.\n\nDone, thanks for the report and the patch!\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 19 Apr 2024 20:37:42 +0100", "msg_from": "Colin Caine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Okay to remove mention of mystery @ and ~ operators?" } ]
[ { "msg_contents": "Hello!\n\nThere is a macro XLOG_CONTROL_FILE for control file name\ndefined in access/xlog_internal.h\nAnd there are some places in code where this macro is used\nlike here\nhttps://github.com/postgres/postgres/blob/84db9a0eb10dd1dbee6db509c0e427fa237177dc/src/bin/pg_resetwal/pg_resetwal.c#L588\nor here\nhttps://github.com/postgres/postgres/blob/84db9a0eb10dd1dbee6db509c0e427fa237177dc/src/common/controldata_utils.c#L214\n\nBut there are some other places where the control file\nname is used as text string directly.\n\nMay be better use this macro everywhere in C code?\nThe patch attached tries to do this.\n\nWould be glad if take a look on it.\n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 19 Apr 2024 06:50:39 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "> On 19 Apr 2024, at 05:50, Anton A. Melnikov <[email protected]> wrote:\n\n> May be better use this macro everywhere in C code?\n\nOff the cuff that seems to make sense, it does make it easier to grep for uses\nof the control file.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 19 Apr 2024 10:12:14 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "On Fri, Apr 19, 2024 at 10:12:14AM +0200, Daniel Gustafsson wrote:\n> Off the cuff that seems to make sense, it does make it easier to grep for uses\n> of the control file.\n\n+1 for switching to the macro where we can. Still, I don't see a\npoint in rushing and would just switch that once v18 opens up for\nbusiness. \n--\nMichael", "msg_date": "Sat, 20 Apr 2024 08:23:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "> On 20 Apr 2024, at 01:23, Michael Paquier <[email protected]> wrote:\n> \n> On Fri, Apr 19, 2024 at 10:12:14AM +0200, Daniel Gustafsson wrote:\n>> Off the cuff that seems to make sense, it does make it easier to grep for uses\n>> of the control file.\n> \n> +1 for switching to the macro where we can. Still, I don't see a\n> point in rushing and would just switch that once v18 opens up for\n> business.\n\nAbsolutely, this is not fixing a defect so it's v18 material.\n\nAnton: please register this patch in the Open commitfest to ensure it's not\nforgotten about.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sat, 20 Apr 2024 08:36:58 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "\nOn 20.04.2024 09:36, Daniel Gustafsson wrote:\n> Anton: please register this patch in the Open commitfest to ensure it's not\n> forgotten about.\n> \n\nDone.\n \nDaniel and Michael thank you!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sat, 20 Apr 2024 16:20:35 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "On 19.04.24 05:50, Anton A. 
Melnikov wrote:\n> There is a macro XLOG_CONTROL_FILE for control file name\n> defined in access/xlog_internal.h\n> And there are some places in code where this macro is used\n> like here\n> https://github.com/postgres/postgres/blob/84db9a0eb10dd1dbee6db509c0e427fa237177dc/src/bin/pg_resetwal/pg_resetwal.c#L588\n> or here\n> https://github.com/postgres/postgres/blob/84db9a0eb10dd1dbee6db509c0e427fa237177dc/src/common/controldata_utils.c#L214\n> \n> But there are some other places where the control file\n> name is used as text string directly.\n> \n> May be better use this macro everywhere in C code?\n\nI don't know. I don't find XLOG_CONTROL_FILE to be a very intuitive \nproxy for \"pg_control\".\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 11:02:14 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "On 24.04.2024 12:02, Peter Eisentraut wrote:\n> On 19.04.24 05:50, Anton A. Melnikov wrote:\n>>\n>> May be better use this macro everywhere in C code?\n> \n> I don't know.  I don't find XLOG_CONTROL_FILE to be a very intuitive proxy for \"pg_control\".\n> \n\nThen maybe replace XLOG_CONTROL_FILE with PG_CONTROL_FILE?\n\nThe PG_CONTROL_FILE_SIZE macro is already in the code.\n \nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 24 Apr 2024 12:13:10 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "> On 24 Apr 2024, at 11:13, Anton A. Melnikov <[email protected]> wrote:\n> \n> On 24.04.2024 12:02, Peter Eisentraut wrote:\n>> On 19.04.24 05:50, Anton A. Melnikov wrote:\n>>> \n>>> May be better use this macro everywhere in C code?\n>> I don't know. I don't find XLOG_CONTROL_FILE to be a very intuitive proxy for \"pg_control\".\n\nMaybe, but inconsistent usage is also unintuitive.\n\n> Then maybe replace XLOG_CONTROL_FILE with PG_CONTROL_FILE?\n> \n> The PG_CONTROL_FILE_SIZE macro is already in the code.\n> With the best regards,\n\nXLOG_CONTROL_FILE is close to two decades old so there might be extensions\nusing it (though the risk might be slim), perhaps using offering it as as well\nas backwards-compatability is warranted should we introduce a new name?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 11:19:59 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "On 24.04.2024 12:19, Daniel Gustafsson wrote:\n>> On 24 Apr 2024, at 11:13, Anton A. Melnikov <[email protected]> wrote:\n>>\n>> On 24.04.2024 12:02, Peter Eisentraut wrote:\n>>> On 19.04.24 05:50, Anton A. Melnikov wrote:\n>>>>\n>>>> May be better use this macro everywhere in C code?\n>>> I don't know. 
I don't find XLOG_CONTROL_FILE to be a very intuitive proxy for \"pg_control\".\n> \n> Maybe, but inconsistent usage is also unintuitive.\n> \n>> Then maybe replace XLOG_CONTROL_FILE with PG_CONTROL_FILE?\n>>\n>> The PG_CONTROL_FILE_SIZE macro is already in the code.\n>> With the best regards,\n> \n> XLOG_CONTROL_FILE is close to two decades old so there might be extensions\n> using it (though the risk might be slim), perhaps using offering it as as well\n> as backwards-compatability is warranted should we introduce a new name?\n> \n\nTo ensure backward compatibility we can save the old macro like this:\n\n#define XLOG_CONTROL_FILE\t\"global/pg_control\"\n#define PG_CONTROL_FILE\t\tXLOG_CONTROL_FILE\n\n\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 24 Apr 2024 12:32:46 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "On Wed, Apr 24, 2024 at 12:32:46PM +0300, Anton A. Melnikov wrote:\n> To ensure backward compatibility we can save the old macro like this:\n> \n> #define XLOG_CONTROL_FILE\t\"global/pg_control\"\n> #define PG_CONTROL_FILE\t\tXLOG_CONTROL_FILE\n> \n> With the best wishes,\n\nNot sure that I would bother with a second one. But, well, why not if\npeople want to rename it, as long as you keep compatibility.\n--\nMichael", "msg_date": "Thu, 25 Apr 2024 09:04:26 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "On Wed, Apr 24, 2024 at 8:04 PM Michael Paquier <[email protected]> wrote:\n> On Wed, Apr 24, 2024 at 12:32:46PM +0300, Anton A. Melnikov wrote:\n> > To ensure backward compatibility we can save the old macro like this:\n> >\n> > #define XLOG_CONTROL_FILE \"global/pg_control\"\n> > #define PG_CONTROL_FILE XLOG_CONTROL_FILE\n> >\n> > With the best wishes,\n>\n> Not sure that I would bother with a second one. But, well, why not if\n> people want to rename it, as long as you keep compatibility.\n\nI vote for just standardizing on XLOG_CONTROL_FILE. That name seems\nsufficiently intuitive to me, and I'd rather have one identifier for\nthis than two. It's simpler that way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 09:06:59 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Apr 24, 2024 at 8:04 PM Michael Paquier <[email protected]> wrote:\n>> Not sure that I would bother with a second one. But, well, why not if\n>> people want to rename it, as long as you keep compatibility.\n\n> I vote for just standardizing on XLOG_CONTROL_FILE. That name seems\n> sufficiently intuitive to me, and I'd rather have one identifier for\n> this than two. It's simpler that way.\n\n+1. 
Back when we did the great xlog-to-wal renaming, we explicitly\nagreed that we wouldn't change internal symbols referring to xlog.\nIt might or might not be appropriate to revisit that decision,\nbut I sure don't want to do it piecemeal, one symbol at a time.\n\nAlso, if we did rename this one, the logical choice would be\nWAL_CONTROL_FILE not PG_CONTROL_FILE.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Apr 2024 16:51:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "On 26.04.24 22:51, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n>> On Wed, Apr 24, 2024 at 8:04 PM Michael Paquier <[email protected]> wrote:\n>>> Not sure that I would bother with a second one. But, well, why not if\n>>> people want to rename it, as long as you keep compatibility.\n> \n>> I vote for just standardizing on XLOG_CONTROL_FILE. That name seems\n>> sufficiently intuitive to me, and I'd rather have one identifier for\n>> this than two. It's simpler that way.\n> \n> +1. Back when we did the great xlog-to-wal renaming, we explicitly\n> agreed that we wouldn't change internal symbols referring to xlog.\n> It might or might not be appropriate to revisit that decision,\n> but I sure don't want to do it piecemeal, one symbol at a time.\n> \n> Also, if we did rename this one, the logical choice would be\n> WAL_CONTROL_FILE not PG_CONTROL_FILE.\n\nMy reasoning was mainly that I don't see pg_control as controlling just \nthe WAL. But I don't feel strongly about instigating a great renaming \nhere or something.\n\n\n\n", "msg_date": "Sat, 27 Apr 2024 11:12:05 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "> On 27 Apr 2024, at 11:12, Peter Eisentraut <[email protected]> wrote:\n> \n> On 26.04.24 22:51, Tom Lane wrote:\n>> Robert Haas <[email protected]> writes:\n>>> On Wed, Apr 24, 2024 at 8:04 PM Michael Paquier <[email protected]> wrote:\n>>>> Not sure that I would bother with a second one. But, well, why not if\n>>>> people want to rename it, as long as you keep compatibility.\n>>> I vote for just standardizing on XLOG_CONTROL_FILE. That name seems\n>>> sufficiently intuitive to me, and I'd rather have one identifier for\n>>> this than two. It's simpler that way.\n>> +1. Back when we did the great xlog-to-wal renaming, we explicitly\n>> agreed that we wouldn't change internal symbols referring to xlog.\n>> It might or might not be appropriate to revisit that decision,\n>> but I sure don't want to do it piecemeal, one symbol at a time.\n>> Also, if we did rename this one, the logical choice would be\n>> WAL_CONTROL_FILE not PG_CONTROL_FILE.\n> \n> My reasoning was mainly that I don't see pg_control as controlling just the WAL. 
But I don't feel strongly about instigating a great renaming here or something.\n\nSummarizing the thread it seems consensus is using XLOG_CONTROL_FILE\nconsistently as per the original patch.\n\nA few comments on the patch though:\n\n- * reads the data from $PGDATA/global/pg_control\n+ * reads the data from $PGDATA/<control file>\n\nI don't think this is an improvement, I'd leave that one as the filename\nspelled out.\n\n- \"the \\\".old\\\" suffix from %s/global/pg_control.old.\\n\"\n+ \"the \\\".old\\\" suffix from %s/%s.old.\\n\"\n\nSame with that change, not sure I think that makes reading the errormessage\ncode any easier.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 2 Sep 2024 15:44:45 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "At Mon, 2 Sep 2024 15:44:45 +0200, Daniel Gustafsson <[email protected]> wrote in \n> Summarizing the thread it seems consensus is using XLOG_CONTROL_FILE\n> consistently as per the original patch.\n> \n> A few comments on the patch though:\n> \n> - * reads the data from $PGDATA/global/pg_control\n> + * reads the data from $PGDATA/<control file>\n> \n> I don't think this is an improvement, I'd leave that one as the filename\n> spelled out.\n> \n> - \"the \\\".old\\\" suffix from %s/global/pg_control.old.\\n\"\n> + \"the \\\".old\\\" suffix from %s/%s.old.\\n\"\n> \n> Same with that change, not sure I think that makes reading the errormessage\n> code any easier.\n\nI agree with the first point. In fact, I think it might be even better\nto just write something like \"reads the data from the control file\" in\nplain language rather than using the actual file name. As for the\nsecond point, I'm fine either way, but if the main goal is to ensure\nresilience against future changes to the value of XLOG_CONTROL_FILE,\nthen changing it makes sense. On the other hand, I don't see any\nstrong reasons not to change it. That said, I don't really expect the\nvalue to change in the first place.\n\nThe following point caught my attention.\n\n> +++ b/src/backend/postmaster/postmaster.c\n...\n> +#include \"access/xlog_internal.h\"\n\nThe name xlog_internal suggests that the file should only be included\nby modules dealing with XLOG details, and postmaster.c doesn't seem to\nfit that category. If the macro is used more broadly, it might be\nbetter to move it to a more public location. However, following the\ncurrent discussion, if we decide to keep the macro's name as it is, it\nwould make more sense to keep it in its current location.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 03 Sep 2024 14:37:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "On 03.09.2024 08:37, Kyotaro Horiguchi wrote:\n> At Mon, 2 Sep 2024 15:44:45 +0200, Daniel Gustafsson <[email protected]> wrote in\n>> Summarizing the thread it seems consensus is using XLOG_CONTROL_FILE\n>> consistently as per the original patch.\n\n1)\n>> A few comments on the patch though:\n>>\n>> - * reads the data from $PGDATA/global/pg_control\n>> + * reads the data from $PGDATA/<control file>\n>>\n>> I don't think this is an improvement, I'd leave that one as the filename\n>> spelled out.\n> \n> I agree with the first point. 
In fact, I think it might be even better\n> to just write something like \"reads the data from the control file\" in\n> plain language rather than using the actual file name. \n\nThanks for remarks! Agreed with both. Tried to fix in v2 attached.\n\n\n2)\n>> - \"the \\\".old\\\" suffix from %s/global/pg_control.old.\\n\"\n>> + \"the \\\".old\\\" suffix from %s/%s.old.\\n\"\n>>\n>> Same with that change, not sure I think that makes reading the errormessage\n>> code any easier.\n> As for the\n> second point, I'm fine either way, but if the main goal is to ensure\n> resilience against future changes to the value of XLOG_CONTROL_FILE,\n> then changing it makes sense. On the other hand, I don't see any\n> strong reasons not to change it. That said, I don't really expect the\n> value to change in the first place.\n\nIn v2 removed XLOG_CONTROL_FILE from args and used it directly in the string.\nIMHO this makes the code more readable and will output the correct\ntext if the macro changes.\n\n3)\n\n> The following point caught my attention.\n> \n>> +++ b/src/backend/postmaster/postmaster.c\n> ...\n>> +#include \"access/xlog_internal.h\"\n> \n> The name xlog_internal suggests that the file should only be included\n> by modules dealing with XLOG details, and postmaster.c doesn't seem to\n> fit that category. If the macro is used more broadly, it might be\n> better to move it to a more public location. However, following the\n> current discussion, if we decide to keep the macro's name as it is, it\n> would make more sense to keep it in its current location.\n\nMaybe include/access/xlogdefs.h would be a good place for this?\nIn v2 moved some definitions from xlog_internal to xlogdefs\nand removed including xlog_internal.h from the postmaster.c.\nPlease, take a look on it.\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 3 Sep 2024 12:02:26 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "At Tue, 3 Sep 2024 12:02:26 +0300, \"Anton A. Melnikov\" <[email protected]> wrote in \r\n> In v2 removed XLOG_CONTROL_FILE from args and used it directly in the\r\n> string.\r\n> IMHO this makes the code more readable and will output the correct\r\n> text if the macro changes.\r\n\r\nYeah, I had this in my mind. Looks good to me.\r\n\r\n> 3)\r\n..\r\n> Maybe include/access/xlogdefs.h would be a good place for this?\r\n> In v2 moved some definitions from xlog_internal to xlogdefs\r\n> and removed including xlog_internal.h from the postmaster.c.\r\n> Please, take a look on it.\r\n\r\nThe change can help avoid disrupting existing users of the\r\nmacro. However, the file is documented as follows:\r\n\r\n> * Postgres write-ahead log manager record pointer and\r\n> * timeline number definitions\r\n\r\nWe could modify the file definition, but I'm not sure if that's the\r\nbest approach. Instead, I'd like to propose separating the file and\r\npath-related definitions from xlog_internal.h, as shown in the\r\nattached first patch. This change would allow some modules to include\r\nfiles without unnecessary details.\r\n\r\nThe second file is your patch, adjusted based on the first patch.\r\n\r\nI’d appreciate hearing from others on whether they find the first\r\npatch worthwhile. 
If it’s not considered worthwhile, then I believe\r\nhaving postmaster include xlog_internal.h would be the best approach.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center", "msg_date": "Wed, 04 Sep 2024 17:09:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" }, { "msg_contents": "On 04.09.2024 11:09, Kyotaro Horiguchi wrote:\n> Instead, I'd like to propose separating the file and\n> path-related definitions from xlog_internal.h, as shown in the\n> attached first patch. This change would allow some modules to include\n> files without unnecessary details.\n> \n> The second file is your patch, adjusted based on the first patch.\n> \n> I’d appreciate hearing from others on whether they find the first\n> patch worthwhile. If it’s not considered worthwhile, then I believe\n> having postmaster include xlog_internal.h would be the best approach.\n\nI really liked the idea of extracting only the necessary and logically\ncomplete part from xlog_internal.h.\nBut now the presence of the segment-size-related macros\nin xlogfilepaths.h does not seem to correspond to its name.\nSo I suggest further extracting the size-related definitions and macros from\nxlogfilepaths.h into xlogfilesize.h.\n\nHere is a patch that tries to do this based on your first patch.\nWould be glad to hear your opinion.\n\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sat, 7 Sep 2024 10:38:06 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use XLOG_CONTROL_FILE macro everywhere?" } ]
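The kind of substitution debated in this thread looks roughly like the following. This is a minimal, self-contained sketch rather than a diff from the actual patches: the macro value is the one quoted in the thread, while the helper function, the example data directory and main() are invented purely for illustration.

/*
 * XLOG_CONTROL_FILE (access/xlog_internal.h in the real tree) spells the
 * control file's path relative to the data directory; the idea is that
 * call sites which hard-code "global/pg_control" use the macro instead.
 */
#include <stdio.h>

#define XLOG_CONTROL_FILE	"global/pg_control"
/* the backward-compatible alias floated earlier in the thread */
#define PG_CONTROL_FILE		XLOG_CONTROL_FILE

/* Build the absolute path of the control file under a data directory. */
static void
control_file_path(char *dst, size_t dstlen, const char *datadir)
{
	/* instead of: snprintf(dst, dstlen, "%s/global/pg_control", datadir) */
	snprintf(dst, dstlen, "%s/%s", datadir, XLOG_CONTROL_FILE);
}

int
main(void)
{
	char		path[1024];

	control_file_path(path, sizeof(path), "/var/lib/postgresql/data");
	printf("%s\n", path);		/* prints .../global/pg_control */
	return 0;
}

Whether call sites should also get a PG_CONTROL_FILE or WAL_CONTROL_FILE spelling is precisely the naming question left open above; the sketch keeps XLOG_CONTROL_FILE as the canonical name, matching where the thread's consensus seemed to land.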
[ { "msg_contents": "Hi all,\n(Heikki in CC.)\n\nSince 91044ae4baea (require ALPN for direct SSL connections) and\nd39a49c1e459 (direct hanshake), direct SSL connections are supported\n(yeah!), still the thread where this has been discussed does not cover\nthe potential impact on HBA rules:\nhttps://www.postgresql.org/message-id/CAM-w4HOEAzxyY01ZKOj-iq%3DM4-VDk%3DvzQgUsuqiTFjFDZaebdg%40mail.gmail.com\n\nMy point is, would there be a point in being able to enforce that ALPN\nis used from the server rather than just relying on the client-side\nsslnegotiation to decide if direct SSL connections should be forced or\nnot?\n\nHence, I'd imagine that we could have an HBA option for hostssl rules,\nlike a negotiation={direct,postgres,all} that cross-checks\nPort->alpn_used with the option value in a hostssl entry, rejecting\nthe use of connections using direct connections or the default\nprotocol if these are not used, giving users a way to avoid one. As\nthis is a new thing, there may be an argument in this option for\nsecurity reasons, as well, so as it would be possible for operators to\nturn that off in the server.\n\nThoughts or comments?\n--\nMichael", "msg_date": "Fri, 19 Apr 2024 14:06:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Direct SSL connection with ALPN and HBA rules " }, { "msg_contents": "On 19/04/2024 08:06, Michael Paquier wrote:\n> Hi all,\n> (Heikki in CC.)\n\n(Adding Jacob)\n\n> Since 91044ae4baea (require ALPN for direct SSL connections) and\n> d39a49c1e459 (direct hanshake), direct SSL connections are supported\n> (yeah!), still the thread where this has been discussed does not cover\n> the potential impact on HBA rules:\n> https://www.postgresql.org/message-id/CAM-w4HOEAzxyY01ZKOj-iq%3DM4-VDk%3DvzQgUsuqiTFjFDZaebdg%40mail.gmail.com\n> \n> My point is, would there be a point in being able to enforce that ALPN\n> is used from the server rather than just relying on the client-side\n> sslnegotiation to decide if direct SSL connections should be forced or\n> not?\n> \n> Hence, I'd imagine that we could have an HBA option for hostssl rules,\n> like a negotiation={direct,postgres,all} that cross-checks\n> Port->alpn_used with the option value in a hostssl entry, rejecting\n> the use of connections using direct connections or the default\n> protocol if these are not used, giving users a way to avoid one. As\n> this is a new thing, there may be an argument in this option for\n> security reasons, as well, so as it would be possible for operators to\n> turn that off in the server.\n\nI don't think ALPN gives any meaningful security benefit, when used with \nthe traditional 'postgres' SSL negotiation. There's little possibility \nof mixing that up with some other protocol, so I don't see any need to \nenforce it from server side. This was briefly discussed on that original \nthread [1]. With direct SSL negotiation, we always require ALPN.\n\nI don't see direct SSL negotiation as a security feature. Rather, the \npoint is to reduce connection latency by saving one round-trip. For \nexample, if gssencmode=prefer, but the server refuses GSS encryption, it \nseems fine to continue with negotiated SSL, instead of establishing a \nnew connection with direct SSL. What would be the use case of requiring \ndirect SSL in the server? What extra security do you get?\n\nControlling these in HBA is a bit inconvenient, because you only find \nout after authentication if it's allowed or not. So if e.g. 
direct SSL \nconnections are disabled for a user, the client would still establish a \ndirect SSL connection, send the startup packet, and only then get \nrejected. The client would not know if it was rejected because of the \ndirect SSL or for some reason, so it needs to retry with negotiated SSL. \nCurrently, as it is master, if the TLS layer is established with direct \nSSL, you never need to retry with traditional negotiation, or vice versa.\n\n[1] \nhttps://www.postgresql.org/message-id/CAAWbhmjetCVgu9pHJFkQ4ejuXuaz2mD1oniXokRHft0usCa7Yg%40mail.gmail.com\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 19 Apr 2024 16:55:55 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Fri, Apr 19, 2024 at 6:56 AM Heikki Linnakangas <[email protected]> wrote:\n> On 19/04/2024 08:06, Michael Paquier wrote:\n> > Since 91044ae4baea (require ALPN for direct SSL connections) and\n> > d39a49c1e459 (direct hanshake), direct SSL connections are supported\n> > (yeah!), still the thread where this has been discussed does not cover\n> > the potential impact on HBA rules:\n> > https://www.postgresql.org/message-id/CAM-w4HOEAzxyY01ZKOj-iq%3DM4-VDk%3DvzQgUsuqiTFjFDZaebdg%40mail.gmail.com\n> >\n> > My point is, would there be a point in being able to enforce that ALPN\n> > is used from the server rather than just relying on the client-side\n> > sslnegotiation to decide if direct SSL connections should be forced or\n> > not?\n\nI'm a little confused about whether we're talking about requiring ALPN\nor requiring direct connections. I think you mean the latter, Michael?\n\nPersonally, I was hoping that we'd have a postgresql.conf option to\nreject every network connection that wasn't direct SSL, but I ran out\nof time to review the patchset for 17. I would like to see server-side\nenforcement of direct SSL in some way, eventually. I hadn't given much\nthought to HBA, though.\n\n> I don't think ALPN gives any meaningful security benefit, when used with\n> the traditional 'postgres' SSL negotiation. There's little possibility\n> of mixing that up with some other protocol, so I don't see any need to\n> enforce it from server side. This was briefly discussed on that original\n> thread [1].\n\nAgreed. By the time you've issued a traditional SSL startup packet,\nand the server responds with a go-ahead, it's pretty much understood\nwhat protocol is in use.\n\n> With direct SSL negotiation, we always require ALPN.\n\n (As an aside: I haven't gotten to test the version of the patch that\nmade it into 17 yet, but from a quick glance it looks like we're not\nrejecting mismatched ALPN during the handshake as noted in [1].)\n\n> I don't see direct SSL negotiation as a security feature.\n\n`direct` mode is not, since it's opportunistic, but I think people are\ngoing to use `requiredirect` as a security feature. At least, I was\nhoping to do that myself...\n\n> Rather, the\n> point is to reduce connection latency by saving one round-trip. For\n> example, if gssencmode=prefer, but the server refuses GSS encryption, it\n> seems fine to continue with negotiated SSL, instead of establishing a\n> new connection with direct SSL.\n\nWell, assuming the user is okay with plaintext negotiation at all.\n(Was that fixed before the patch went in? Is GSS negotiation still\nallowed even with requiredirect?)\n\n> What would be the use case of requiring\n> direct SSL in the server? 
What extra security do you get?\n\nYou get protection against attacks that could have otherwise happened\nduring the plaintext portion of the handshake. That has architectural\nimplications for more advanced uses of SCRAM, and it should prevent\nany repeats of CVE-2021-23222/23214. And if the peer doesn't pass the\nTLS handshake, they can't send you anything that you might forget is\nuntrusted (like, say, an error message).\n\n> Controlling these in HBA is a bit inconvenient, because you only find\n> out after authentication if it's allowed or not. So if e.g. direct SSL\n> connections are disabled for a user,\n\nHopefully disabling direct SSL piecemeal is not a desired use case?\nI'm not sure it makes sense to focus on that. Forcing it to be enabled\nshouldn't have the same problem, should it?\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/CAOYmi%2B%3DcnV-8V8TndSkEF6Htqa7qHQUL_KnQU8-DrT0Jjnm3_Q%40mail.gmail.com\n\n\n", "msg_date": "Fri, 19 Apr 2024 09:48:13 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 19/04/2024 19:48, Jacob Champion wrote:\n> On Fri, Apr 19, 2024 at 6:56 AM Heikki Linnakangas <[email protected]> wrote:\n>> With direct SSL negotiation, we always require ALPN.\n> \n> (As an aside: I haven't gotten to test the version of the patch that\n> made it into 17 yet, but from a quick glance it looks like we're not\n> rejecting mismatched ALPN during the handshake as noted in [1].)\n\nAh, good catch, that fell through the cracks. Agreed, the client should \nreject a direct SSL connection if the server didn't send ALPN. I'll add \nthat to the Open Items so we don't forget again.\n\n>> I don't see direct SSL negotiation as a security feature.\n> \n> `direct` mode is not, since it's opportunistic, but I think people are\n> going to use `requiredirect` as a security feature. At least, I was\n> hoping to do that myself...\n> \n>> Rather, the\n>> point is to reduce connection latency by saving one round-trip. For\n>> example, if gssencmode=prefer, but the server refuses GSS encryption, it\n>> seems fine to continue with negotiated SSL, instead of establishing a\n>> new connection with direct SSL.\n> \n> Well, assuming the user is okay with plaintext negotiation at all.\n> (Was that fixed before the patch went in? Is GSS negotiation still\n> allowed even with requiredirect?)\n\nTo disable sending the startup packet in plaintext, you need to use \nsslmode=require. Same as before the patch. GSS is still allowed, as it \ntakes precedence over SSL if both are enabled in libpq. Per the docs:\n\n> Note that if gssencmode is set to prefer, a GSS connection is\n> attempted first. If the server rejects GSS encryption, SSL is\n> negotiated over the same TCP connection using the traditional\n> postgres protocol, regardless of sslnegotiation. In other words, the\n> direct SSL handshake is not used, if a TCP connection has already\n> been established and can be used for the SSL handshake.\n\n\n>> What would be the use case of requiring\n>> direct SSL in the server? What extra security do you get?\n> \n> You get protection against attacks that could have otherwise happened\n> during the plaintext portion of the handshake. That has architectural\n> implications for more advanced uses of SCRAM, and it should prevent\n> any repeats of CVE-2021-23222/23214. 
And if the peer doesn't pass the\n> TLS handshake, they can't send you anything that you might forget is\n> untrusted (like, say, an error message).\n\nCan you elaborate on the more advanced uses of SCRAM?\n\n>> Controlling these in HBA is a bit inconvenient, because you only find\n>> out after authentication if it's allowed or not. So if e.g. direct SSL\n>> connections are disabled for a user,\n> \n> Hopefully disabling direct SSL piecemeal is not a desired use case?\n> I'm not sure it makes sense to focus on that. Forcing it to be enabled\n> shouldn't have the same problem, should it?\n\nForcing it to be enabled piecemeal based on role or database has similar \nproblems. Forcing it enabled for all connections seems sensible, though. \nForcing it enabled based on the client's source IP address, but not \nuser/database would be somewhat sensible too, but we don't currently \nhave the HBA code to check the source IP and accept/reject SSLRequest \nbased on that. The HBA rejection always happens after the client has \nsent the startup packet.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sat, 20 Apr 2024 00:43:24 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Sat, Apr 20, 2024 at 12:43:24AM +0300, Heikki Linnakangas wrote:\n> On 19/04/2024 19:48, Jacob Champion wrote:\n>> On Fri, Apr 19, 2024 at 6:56 AM Heikki Linnakangas <[email protected]> wrote:\n>>> With direct SSL negotiation, we always require ALPN.\n>> \n>> (As an aside: I haven't gotten to test the version of the patch that\n>> made it into 17 yet, but from a quick glance it looks like we're not\n>> rejecting mismatched ALPN during the handshake as noted in [1].)\n> \n> Ah, good catch, that fell through the cracks. Agreed, the client should\n> reject a direct SSL connection if the server didn't send ALPN. I'll add that\n> to the Open Items so we don't forget again.\n\nWould somebody like to write a patch for that? I'm planning to look\nat this code more closely, as well.\n\n>> You get protection against attacks that could have otherwise happened\n>> during the plaintext portion of the handshake. That has architectural\n>> implications for more advanced uses of SCRAM, and it should prevent\n>> any repeats of CVE-2021-23222/23214. And if the peer doesn't pass the\n>> TLS handshake, they can't send you anything that you might forget is\n>> untrusted (like, say, an error message).\n> \n> Can you elaborate on the more advanced uses of SCRAM?\n\nI'm not sure what you mean here, either, Jacob.\n\n>>> Controlling these in HBA is a bit inconvenient, because you only find\n>>> out after authentication if it's allowed or not. So if e.g. direct SSL\n>>> connections are disabled for a user,\n>> \n>> Hopefully disabling direct SSL piecemeal is not a desired use case?\n>> I'm not sure it makes sense to focus on that. Forcing it to be enabled\n>> shouldn't have the same problem, should it?\n\nI'd get behind the case where a server rejects everything except\ndirect SSL, yeah. Sticking that into a format similar to HBA rules\nwould easily give the flexibility to be able to accept or reject\ndirect or default SSL, though, while making it easy to parse. The\nimplementation is not really complicated, and not far from the\nexisting hostssl and nohostssl.\n\nAs a whole, I can get behind a unique GUC that forces the use of\ndirect. 
Or, we could extend the existing \"ssl\" GUC with a new\n\"direct\" value to accept only direct connections and restrict the\noriginal protocol (and a new \"postgres\" for the pre-16 protocol,\nrejecting direct?), while \"on\" is able to accept both.\n\n> Forcing it to be enabled piecemeal based on role or database has similar\n> problems. Forcing it enabled for all connections seems sensible, though.\n> Forcing it enabled based on the client's source IP address, but not\n> user/database would be somewhat sensible too, but we don't currently have\n> the HBA code to check the source IP and accept/reject SSLRequest based on\n> that. The HBA rejection always happens after the client has sent the startup\n> packet.\n\nHmm. Splitting the logic checking HBA entries (around check_hba) so\nas we'd check for a portion of its contents depending on what the\nserver has received or not from the client would not be that\ncomplicated. I'd question whether it makes sense to mix this\ninformation within the same configuration files as the ones holding \nthe current HBA rules. If the same rules are used for the\npre-startup-packet phase and the post-startup-packet phase, we'd want\nnew keywords for these HBA rules, something different than the\nexisting sslmode and no sslmode?\n--\nMichael", "msg_date": "Mon, 22 Apr 2024 16:19:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 22/04/2024 10:19, Michael Paquier wrote:\n> On Sat, Apr 20, 2024 at 12:43:24AM +0300, Heikki Linnakangas wrote:\n>> On 19/04/2024 19:48, Jacob Champion wrote:\n>>> On Fri, Apr 19, 2024 at 6:56 AM Heikki Linnakangas <[email protected]> wrote:\n>>>> With direct SSL negotiation, we always require ALPN.\n>>>\n>>> (As an aside: I haven't gotten to test the version of the patch that\n>>> made it into 17 yet, but from a quick glance it looks like we're not\n>>> rejecting mismatched ALPN during the handshake as noted in [1].)\n>>\n>> Ah, good catch, that fell through the cracks. Agreed, the client should\n>> reject a direct SSL connection if the server didn't send ALPN. I'll add that\n>> to the Open Items so we don't forget again.\n> \n> Would somebody like to write a patch for that? I'm planning to look\n> at this code more closely, as well.\n\nI plan to write the patch later today.\n\n>>>> Controlling these in HBA is a bit inconvenient, because you only find\n>>>> out after authentication if it's allowed or not. So if e.g. direct SSL\n>>>> connections are disabled for a user,\n>>>\n>>> Hopefully disabling direct SSL piecemeal is not a desired use case?\n>>> I'm not sure it makes sense to focus on that. Forcing it to be enabled\n>>> shouldn't have the same problem, should it?\n> \n> I'd get behind the case where a server rejects everything except\n> direct SSL, yeah. Sticking that into a format similar to HBA rules\n> would easily give the flexibility to be able to accept or reject\n> direct or default SSL, though, while making it easy to parse. The\n> implementation is not really complicated, and not far from the\n> existing hostssl and nohostssl.\n> \n> As a whole, I can get behind a unique GUC that forces the use of\n> direct. 
Or, we could extend the existing \"ssl\" GUC with a new\n> \"direct\" value to accept only direct connections and restrict the\n> original protocol (and a new \"postgres\" for the pre-16 protocol,\n> rejecting direct?), while \"on\" is able to accept both.\n\nI'd be OK with that, although I still don't really see the point of \nforcing this from the server side. We could also add this later.\n\n>> Forcing it to be enabled piecemeal based on role or database has similar\n>> problems. Forcing it enabled for all connections seems sensible, though.\n>> Forcing it enabled based on the client's source IP address, but not\n>> user/database would be somewhat sensible too, but we don't currently have\n>> the HBA code to check the source IP and accept/reject SSLRequest based on\n>> that. The HBA rejection always happens after the client has sent the startup\n>> packet.\n> \n> Hmm. Splitting the logic checking HBA entries (around check_hba) so\n> as we'd check for a portion of its contents depending on what the\n> server has received or not from the client would not be that\n> complicated. I'd question whether it makes sense to mix this\n> information within the same configuration files as the ones holding\n> the current HBA rules. If the same rules are used for the\n> pre-startup-packet phase and the post-startup-packet phase, we'd want\n> new keywords for these HBA rules, something different than the\n> existing sslmode and no sslmode?\n\nSounds complicated, and I don't really see the use case for controlling \nthe direct SSL support in such a fine-grained fashion.\n\nIt would be nice if we could reject non-SSL connections before the \nclient sends the startup packet, but that's not possible because in a \nplaintext connection, that's the first packet that the client sends. The \nreverse would be possible: reject SSLRequest or direct SSL connection \nimmediately, if HBA doesn't allow non-SSL connections from that IP \naddress. But that's not very interesting.\n\nHBA-based control would certainly be v18 material.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 22 Apr 2024 10:47:51 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 22/04/2024 10:47, Heikki Linnakangas wrote:\n> On 22/04/2024 10:19, Michael Paquier wrote:\n>> On Sat, Apr 20, 2024 at 12:43:24AM +0300, Heikki Linnakangas wrote:\n>>> On 19/04/2024 19:48, Jacob Champion wrote:\n>>>> On Fri, Apr 19, 2024 at 6:56 AM Heikki Linnakangas <[email protected]> wrote:\n>>>>> With direct SSL negotiation, we always require ALPN.\n>>>>\n>>>> (As an aside: I haven't gotten to test the version of the patch that\n>>>> made it into 17 yet, but from a quick glance it looks like we're not\n>>>> rejecting mismatched ALPN during the handshake as noted in [1].)\n>>>\n>>> Ah, good catch, that fell through the cracks. Agreed, the client should\n>>> reject a direct SSL connection if the server didn't send ALPN. I'll add that\n>>> to the Open Items so we don't forget again.\n>>\n>> Would somebody like to write a patch for that? I'm planning to look\n>> at this code more closely, as well.\n> \n> I plan to write the patch later today.\n\nHere's the patch for that. The error message is:\n\n\"direct SSL connection was established without ALPN protocol negotiation \nextension\"\n\nThat's accurate, but I wonder if we could make it more useful to a user \nwho's wondering what went wrong. 
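(For context, the client-side check behind that message boils down to something like this simplified sketch against the public OpenSSL API; the helper name is invented here for illustration, and this is not the code in the patch itself.)\n\n#include <stdbool.h>\n#include <openssl/ssl.h>\n\n/*\n * Sketch only: after the TLS handshake on a direct SSL connection, require\n * that the server actually selected an ALPN protocol.\n * SSL_get0_alpn_selected() reports an empty result when the server ignored,\n * or does not support, the ALPN extension.\n */\nstatic bool\ndirect_ssl_alpn_was_negotiated(SSL *ssl)\n{\n    const unsigned char *data;\n    unsigned int len;\n\n    SSL_get0_alpn_selected(ssl, &data, &len);\n    return (data != NULL && len > 0);\n}\n\nIf that comes back false on a direct SSL connection, the client gives up with the error quoted above.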
I'd imagine that if the server doesn't \nsupport ALPN, it's because you have some kind of a (not necessarily \nmalicious) generic SSL man-in-the-middle that doesn't support it. Or \nyou're trying to connect to an HTTPS server. Suggestions welcome.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Tue, 23 Apr 2024 01:48:04 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, Apr 22, 2024 at 10:47:51AM +0300, Heikki Linnakangas wrote:\n> On 22/04/2024 10:19, Michael Paquier wrote:\n>> As a whole, I can get behind a unique GUC that forces the use of\n>> direct. Or, we could extend the existing \"ssl\" GUC with a new\n>> \"direct\" value to accept only direct connections and restrict the\n>> original protocol (and a new \"postgres\" for the pre-16 protocol,\n>> rejecting direct?), while \"on\" is able to accept both.\n> \n> I'd be OK with that, although I still don't really see the point of forcing\n> this from the server side. We could also add this later.\n\nI'd be OK with doing something only in v18, if need be. Jacob, what\ndo you think?\n\n>> Hmm. Splitting the logic checking HBA entries (around check_hba) so\n>> as we'd check for a portion of its contents depending on what the\n>> server has received or not from the client would not be that\n>> complicated. I'd question whether it makes sense to mix this\n>> information within the same configuration files as the ones holding\n>> the current HBA rules. If the same rules are used for the\n>> pre-startup-packet phase and the post-startup-packet phase, we'd want\n>> new keywords for these HBA rules, something different than the\n>> existing sslmode and no sslmode?\n> \n> Sounds complicated, and I don't really see the use case for controlling the\n> direct SSL support in such a fine-grained fashion.\n> \n> It would be nice if we could reject non-SSL connections before the client\n> sends the startup packet, but that's not possible because in a plaintext\n> connection, that's the first packet that the client sends. The reverse would\n> be possible: reject SSLRequest or direct SSL connection immediately, if HBA\n> doesn't allow non-SSL connections from that IP address. But that's not very\n> interesting.\n\nI'm not completely sure, actually. We have the APIs to do that in\nsimple ways with existing keywords and new options. And there is some\nmerit being able to have more complex connection policies. If both of\nyou object to that, I won't insist.\n\n> HBA-based control would certainly be v18 material.\n\nSurely.\n--\nMichael", "msg_date": "Tue, 23 Apr 2024 14:42:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Tue, Apr 23, 2024 at 01:48:04AM +0300, Heikki Linnakangas wrote:\n> Here's the patch for that. The error message is:\n> \n> \"direct SSL connection was established without ALPN protocol negotiation\n> extension\"\n\nWFM.\n\n> That's accurate, but I wonder if we could make it more useful to a user\n> who's wondering what went wrong. I'd imagine that if the server doesn't\n> support ALPN, it's because you have some kind of a (not necessarily\n> malicious) generic SSL man-in-the-middle that doesn't support it. Or you're\n> trying to connect to an HTTPS server. Suggestions welcome.\n\nHmm. 
Is there any point in calling SSL_get0_alpn_selected() in\nopen_client_SSL() to get the ALPN if current_enc_method is not\nENC_DIRECT_SSL?\n\nIn the documentation of PQsslAttribute(), it is mentioned that empty\nstring is returned for \"alpn\" if ALPN was not used, however the code\nreturns NULL in this case:\n SSL_get0_alpn_selected(conn->ssl, &data, &len);\n if (data == NULL || len == 0 || len > sizeof(alpn_str) - 1)\n return NULL;\n--\nMichael", "msg_date": "Tue, 23 Apr 2024 16:07:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Fri, Apr 19, 2024 at 2:43 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> On 19/04/2024 19:48, Jacob Champion wrote:\n> > On Fri, Apr 19, 2024 at 6:56 AM Heikki Linnakangas <[email protected]> wrote:\n> >> With direct SSL negotiation, we always require ALPN.\n> >\n> > (As an aside: I haven't gotten to test the version of the patch that\n> > made it into 17 yet, but from a quick glance it looks like we're not\n> > rejecting mismatched ALPN during the handshake as noted in [1].)\n>\n> Ah, good catch, that fell through the cracks. Agreed, the client should\n> reject a direct SSL connection if the server didn't send ALPN. I'll add\n> that to the Open Items so we don't forget again.\n\nYes, the client should also reject, but that's not what I'm referring\nto above. The server needs to fail the TLS handshake itself with the\nproper error code (I think it's `no_application_protocol`?); otherwise\na client implementing a different protocol could consume the\napplication-level bytes coming back from the server and act on them.\nThat's the protocol confusion attack from ALPACA we're trying to\navoid.\n\n> > Well, assuming the user is okay with plaintext negotiation at all.\n> > (Was that fixed before the patch went in? Is GSS negotiation still\n> > allowed even with requiredirect?)\n>\n> To disable sending the startup packet in plaintext, you need to use\n> sslmode=require. Same as before the patch. GSS is still allowed, as it\n> takes precedence over SSL if both are enabled in libpq. Per the docs:\n>\n> > Note that if gssencmode is set to prefer, a GSS connection is\n> > attempted first. If the server rejects GSS encryption, SSL is\n> > negotiated over the same TCP connection using the traditional\n> > postgres protocol, regardless of sslnegotiation. In other words, the\n> > direct SSL handshake is not used, if a TCP connection has already\n> > been established and can be used for the SSL handshake.\n\nOh. That's actually disappointing, since gssencmode=prefer is the\ndefault. A question I had in the original thread was, what's the\nrationale behind a \"require direct ssl\" option that doesn't actually\nrequire it?\n\n> >> What would be the use case of requiring\n> >> direct SSL in the server? What extra security do you get?\n> >\n> > You get protection against attacks that could have otherwise happened\n> > during the plaintext portion of the handshake. That has architectural\n> > implications for more advanced uses of SCRAM, and it should prevent\n> > any repeats of CVE-2021-23222/23214. 
And if the peer doesn't pass the\n> > TLS handshake, they can't send you anything that you might forget is\n> > untrusted (like, say, an error message).\n>\n> Can you elaborate on the more advanced uses of SCRAM?\n\nIf you're using SCRAM to authenticate the server, as opposed to just a\nreally strong password auth, then it really helps an analysis of the\nsecurity to know that there are no plaintext bytes that have been\ninterpreted by the client. This came up briefly in the conversations\nthat led to commit d0f4824a.\n\nTo be fair, it's a more academic concern at the moment; my imagination\ncan only come up with problems for SCRAM-based TLS that would also be\nvulnerabilities for standard certificate-based TLS. But whether or not\nit's an advantage for the code today is also kind of orthogonal to my\npoint. The security argument of direct SSL mode is that it reduces\nrisk for the system as a whole, even in the face of future code\nchanges or regressions. If you can't force its use, you're not\nreducing that risk very much. (If anything, a \"require\" option that\ndoesn't actually require it makes the analysis more complicated, not\nless...)\n\n> >> Controlling these in HBA is a bit inconvenient, because you only find\n> >> out after authentication if it's allowed or not. So if e.g. direct SSL\n> >> connections are disabled for a user,\n> >\n> > Hopefully disabling direct SSL piecemeal is not a desired use case?\n> > I'm not sure it makes sense to focus on that. Forcing it to be enabled\n> > shouldn't have the same problem, should it?\n>\n> Forcing it to be enabled piecemeal based on role or database has similar\n> problems.\n\nHm. For some reason I thought it was easier the other direction, but I\ncan't remember why I thought that. I'll withdraw the comment for now\n:)\n\n--Jacob\n\n\n", "msg_date": "Tue, 23 Apr 2024 10:02:10 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, Apr 22, 2024 at 10:42 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Apr 22, 2024 at 10:47:51AM +0300, Heikki Linnakangas wrote:\n> > On 22/04/2024 10:19, Michael Paquier wrote:\n> >> As a whole, I can get behind a unique GUC that forces the use of\n> >> direct. Or, we could extend the existing \"ssl\" GUC with a new\n> >> \"direct\" value to accept only direct connections and restrict the\n> >> original protocol (and a new \"postgres\" for the pre-16 protocol,\n> >> rejecting direct?), while \"on\" is able to accept both.\n> >\n> > I'd be OK with that, although I still don't really see the point of forcing\n> > this from the server side. We could also add this later.\n>\n> I'd be OK with doing something only in v18, if need be. Jacob, what\n> do you think?\n\nI think it would be nice to have an option like that. Whether it's\ndone now or in 18, I don't have a strong opinion about. But I do think\nit'd be helpful to have a consensus on whether or not this is a\nsecurity improvement, or a performance enhancement only, before adding\nsaid option. As it's implemented, if the requiredirect option doesn't\nactually requiredirect, I think it looks like security but isn't\nreally.\n\n(My ideal server-side option removes all plaintext negotiation and\nforces the use of direct SSL for every connection, paired with a new\npostgresqls:// scheme for the client. 
But I don't have any experience\nmaking a switchover like that at scale, and I'd like to avoid a\nStartTLS-vs-LDAPS sort of situation. That's obviously not a\nconversation for 17.)\n\nAs for HBA control: overall, I don't see a burning need for an\nHBA-based configuration, honestly. I'd prefer to reduce the number of\nknobs and make it easier to apply the strongest security with a broad\nbrush.\n\n--Jacob\n\n\n", "msg_date": "Tue, 23 Apr 2024 10:22:11 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Tue, Apr 23, 2024 at 1:22 PM Jacob Champion\n<[email protected]> wrote:\n> On Mon, Apr 22, 2024 at 10:42 PM Michael Paquier <[email protected]> wrote:\n> > On Mon, Apr 22, 2024 at 10:47:51AM +0300, Heikki Linnakangas wrote:\n> > > On 22/04/2024 10:19, Michael Paquier wrote:\n> > >> As a whole, I can get behind a unique GUC that forces the use of\n> > >> direct. Or, we could extend the existing \"ssl\" GUC with a new\n> > >> \"direct\" value to accept only direct connections and restrict the\n> > >> original protocol (and a new \"postgres\" for the pre-16 protocol,\n> > >> rejecting direct?), while \"on\" is able to accept both.\n> > >\n> > > I'd be OK with that, although I still don't really see the point of forcing\n> > > this from the server side. We could also add this later.\n> >\n> > I'd be OK with doing something only in v18, if need be. Jacob, what\n> > do you think?\n>\n> I think it would be nice to have an option like that. Whether it's\n> done now or in 18, I don't have a strong opinion about. But I do think\n> it'd be helpful to have a consensus on whether or not this is a\n> security improvement, or a performance enhancement only, before adding\n> said option. As it's implemented, if the requiredirect option doesn't\n> actually requiredirect, I think it looks like security but isn't\n> really.\n\nI've not followed this thread closely enough to understand the comment\nabout requiredirect maybe not actually requiring direct, but if that\nwere true it seems like it might be concerning.\n\nBut as far as having a GUC to force direct SSL or not, I agree that's\na good idea, and that it's better than only being able to control the\nbehavior through pg_hba.conf, because it removes room for any possible\ndoubt about whether you're really enforcing the behavior you want to\nbe enforcing. It might also mean that the connection can be rejected\nearlier in the handshaking process on the basis of the GUC value,\nwhich could conceivably prevent a client from reaching some piece of\ncode that turns out to have a security vulnerability. For example, if\nwe found out that direct SSL connections let you take over the\nPentagon before reaching the authentication stage, but for some reason\nregular connections don't have the same problem, being able to\ncategorically shut off direct SSL would be valuable.\n\nHowever, I don't really see why this has to be done for this release.\nIt seems like a separate feature from direct SSL itself. If direct SSL\nhadn't been committed at the very last minute, then it would have been\ngreat if this had been done for this release, too. But it was. 
The\nmoral we ought to take from that is \"perhaps get the big features in a\nbit further in advance of the freeze,\" not \"well we'll just keep\nhacking after the freeze.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Apr 2024 13:43:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Tue, Apr 23, 2024 at 10:43 AM Robert Haas <[email protected]> wrote:\n> I've not followed this thread closely enough to understand the comment\n> about requiredirect maybe not actually requiring direct, but if that\n> were true it seems like it might be concerning.\n\nIt may be my misunderstanding. This seems to imply bad behavior:\n\n> If the server rejects GSS encryption, SSL is\n> negotiated over the same TCP connection using the traditional postgres\n> protocol, regardless of <literal>sslnegotiation</literal>.\n\nAs does this comment:\n\n> + /*\n> + * If enabled, try direct SSL. Unless we have a valid TCP connection that\n> + * failed negotiating GSSAPI encryption or a plaintext connection in case\n> + * of sslmode='allow'; in that case we prefer to reuse the connection with\n> + * negotiated SSL, instead of reconnecting to do direct SSL. The point of\n> + * direct SSL is to avoid the roundtrip from the negotiation, but\n> + * reconnecting would also incur a roundtrip.\n> + */\n\nbut when I actually try those cases, I see that requiredirect does\nactually cause a direct SSL connection to be done, even with\nsslmode=allow. So maybe it's just misleading documentation (or my\nmisreading of it) that needs to be expanded? Am I missing a different\ncorner case where requiredirect is ignored, Heikki?\n\nI still question the utility of allowing sslmode=allow with\nsslnegotiation=requiredirect, because it seems like you've made both\nthe performance and security characteristics actively worse if you\nchoose that combination. But I want to make sure I understand the\ncurrent behavior correctly before I derail the discussion too much...\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 23 Apr 2024 12:33:58 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 23/04/2024 22:33, Jacob Champion wrote:\n> On Tue, Apr 23, 2024 at 10:43 AM Robert Haas <[email protected]> wrote:\n>> I've not followed this thread closely enough to understand the comment\n>> about requiredirect maybe not actually requiring direct, but if that\n>> were true it seems like it might be concerning.\n> \n> It may be my misunderstanding. This seems to imply bad behavior:\n> \n>> If the server rejects GSS encryption, SSL is\n>> negotiated over the same TCP connection using the traditional postgres\n>> protocol, regardless of <literal>sslnegotiation</literal>.\n> \n> As does this comment:\n> \n>> + /*\n>> + * If enabled, try direct SSL. Unless we have a valid TCP connection that\n>> + * failed negotiating GSSAPI encryption or a plaintext connection in case\n>> + * of sslmode='allow'; in that case we prefer to reuse the connection with\n>> + * negotiated SSL, instead of reconnecting to do direct SSL. 
The point of\n>> + * direct SSL is to avoid the roundtrip from the negotiation, but\n>> + * reconnecting would also incur a roundtrip.\n>> + */\n> \n> but when I actually try those cases, I see that requiredirect does\n> actually cause a direct SSL connection to be done, even with\n> sslmode=allow. So maybe it's just misleading documentation (or my\n> misreading of it) that needs to be expanded? Am I missing a different\n> corner case where requiredirect is ignored, Heikki?\n\nYou're right, the comment is wrong about sslmode=allow. There is no \nnegotiation of a plaintext connection, the client just sends the startup \npacket directly. The HBA rules can reject it, but the client will have \nto disconnect and reconnect in that case.\n\nThe documentation and that comment are misleading about failed GSSAPI \nencryption too, and I also misremembered that. With \nsslnegotiation=requiredirect, libpq never uses negotiated SSL mode. It \nwill reconnect after a rejected GSSAPI request. So that comment applies \nto sslnegotiation=direct, but not sslnegotiation=requiredirect.\n\nAttached patch tries to fix and clarify those.\n\n(Note that the client will only attempt GSSAPI encryption if it can find \nkerberos credentials in the environment.)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Wed, 24 Apr 2024 00:19:57 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Tue, Apr 23, 2024 at 2:20 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> Attached patch tries to fix and clarify those.\n\ns/negotiatied/negotiated/ in the attached patch, but other than that\nthis seems like a definite improvement. Thanks!\n\n> (Note that the client will only attempt GSSAPI encryption if it can find\n> kerberos credentials in the environment.)\n\nRight. I don't like that it still happens with\nsslnegotiation=requiredirect, but I suspect that this is not the\nthread to complain about it in. Maybe I can propose a\nsslnegotiation=forcedirect or something for 18, to complement a\npostgresqls:// scheme.\n\nThat leaves the ALPACA handshake correction, I think. (Peter had some\nquestions on the original thread [1] that I've tried to answer.) And\nthe overall consensus, or lack thereof, on whether or not\n`requiredirect` should be considered a security feature.\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/e782e9f4-a0cd-49f5-800b-5e32a1b29183%40eisentraut.org\n\n\n", "msg_date": "Thu, 25 Apr 2024 09:16:16 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Thu, Apr 25, 2024 at 12:16 PM Jacob Champion\n<[email protected]> wrote\n> Right. I don't like that it still happens with\n> sslnegotiation=requiredirect, but I suspect that this is not the\n> thread to complain about it in. 
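(The behaviour in question is easy to observe with a small test program along these lines; the host name is hypothetical, and the GSSAPI attempt only happens when Kerberos credentials are present in the environment:)\n\n#include <stdio.h>\n#include <libpq-fe.h>\n\n/*\n * Illustration only: with a Kerberos ticket available and the default\n * gssencmode=prefer, libpq attempts GSSAPI encryption first even though a\n * direct SSL connection was requested.\n */\nint\nmain(void)\n{\n    PGconn *conn = PQconnectdb(\"host=db.example.com dbname=postgres \"\n                               \"sslmode=require sslnegotiation=requiredirect\");\n\n    if (PQstatus(conn) != CONNECTION_OK)\n        fputs(PQerrorMessage(conn), stderr);\n    else if (PQgssEncInUse(conn))\n        puts(\"encrypted with GSSAPI, not direct SSL\");\n    else\n        puts(PQsslInUse(conn) ? \"encrypted with direct SSL\" : \"not encrypted\");\n\n    PQfinish(conn);\n    return 0;\n}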
Maybe I can propose a\n> sslnegotiation=forcedirect or something for 18, to complement a\n> postgresqls:// scheme.\n\nIt is difficult to imagine a world in which we have both requiredirect\nand forcedirect and people are not confused.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 12:17:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Thu, Apr 25, 2024 at 9:17 AM Robert Haas <[email protected]> wrote:\n>\n> It is difficult to imagine a world in which we have both requiredirect\n> and forcedirect and people are not confused.\n\nYeah... Any thoughts on a better scheme? require_auth was meant to\nlock down overly general authentication; maybe a require_proto or\nsomething could do the same for the transport?\n\nI hate that we have so many options that most people don't need but\ntake precedence, especially when they're based on the existence of\nmagic third-party environmental cues (e.g. Kerberos caches). And it\nwas nice that we got sslrootcert=system to turn on strong security and\nreject nonsensical combinations. If someone sets `requiredirect` and\nleaves the default sslmode, or chooses a weaker one... Is that really\nuseful to someone?\n\n--Jacob\n\n\n", "msg_date": "Thu, 25 Apr 2024 09:28:16 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Thu, Apr 25, 2024 at 12:28 PM Jacob Champion\n<[email protected]> wrote:\n> On Thu, Apr 25, 2024 at 9:17 AM Robert Haas <[email protected]> wrote:\n> > It is difficult to imagine a world in which we have both requiredirect\n> > and forcedirect and people are not confused.\n>\n> Yeah... Any thoughts on a better scheme? require_auth was meant to\n> lock down overly general authentication; maybe a require_proto or\n> something could do the same for the transport?\n\nI don't understand the difference between the two sets of semantics\nmyself, so I'm not in a good position to comment.\n\n> I hate that we have so many options that most people don't need but\n> take precedence, especially when they're based on the existence of\n> magic third-party environmental cues (e.g. Kerberos caches). And it\n> was nice that we got sslrootcert=system to turn on strong security and\n> reject nonsensical combinations. If someone sets `requiredirect` and\n> leaves the default sslmode, or chooses a weaker one... Is that really\n> useful to someone?\n\nMaybe I'm missing something here, but why doesn't sslnegotiation\noverride sslmode completely? Or alternatively, why not remove\nsslnegotiation entirely and just have more sslmode values? I mean\nmaybe this shouldn't happen categorically, but if I say I want to\nrequire a direct SSL connection, to me that implies that I don't want\nan indirect SSL connection, and I really don't want a non-SSL\nconnection.\n\nI think it's pretty questionable in 2024 whether sslmode=allow and\nsslmode=prefer make any sense at all. I don't think it would be crazy\nto remove them entirely. But I certainly don't think that they should\nbe allowed to bleed into the behavior of new, higher-security\nconfigurations. 
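(A concrete example of that kind of bleed-through, as a sketch with a hypothetical host name: against an older server that has SSL enabled but no direct-SSL support, this combination can quietly end up on an unencrypted connection, because \"prefer\" still allows the plaintext fallback:)\n\n#include <stdio.h>\n#include <libpq-fe.h>\n\n/*\n * Sketch only: sslmode=prefer permits falling back to plaintext, so pairing\n * it with sslnegotiation=requiredirect against a pre-v17 server can succeed\n * without any encryption and without any error.\n */\nint\nmain(void)\n{\n    PGconn *conn = PQconnectdb(\"host=legacy.example.com dbname=postgres \"\n                               \"sslmode=prefer sslnegotiation=requiredirect\");\n\n    if (PQstatus(conn) == CONNECTION_OK && !PQsslInUse(conn))\n        puts(\"connected, but without SSL\");\n\n    PQfinish(conn);\n    return 0;\n}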
Surely if I say I want direct SSL, it's that or\nnothing, right?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 13:35:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Thu, Apr 25, 2024 at 10:35 AM Robert Haas <[email protected]> wrote:\n> Maybe I'm missing something here, but why doesn't sslnegotiation\n> override sslmode completely? Or alternatively, why not remove\n> sslnegotiation entirely and just have more sslmode values? I mean\n> maybe this shouldn't happen categorically, but if I say I want to\n> require a direct SSL connection, to me that implies that I don't want\n> an indirect SSL connection, and I really don't want a non-SSL\n> connection.\n\nI think that comes down to the debate upthread, and whether you think\nit's a performance tweak or a security feature. My take on it is,\n`direct` mode is performance, and `requiredirect` is security.\n(Especially since, with the current implementation, requiredirect can\nslow things down?)\n\n> I think it's pretty questionable in 2024 whether sslmode=allow and\n> sslmode=prefer make any sense at all. I don't think it would be crazy\n> to remove them entirely. But I certainly don't think that they should\n> be allowed to bleed into the behavior of new, higher-security\n> configurations. Surely if I say I want direct SSL, it's that or\n> nothing, right?\n\nI agree, but I more or less lost the battle at [1]. Like Matthias\nmentioned in [2]:\n\n> I'm not sure about this either. The 'gssencmode' option is already\n> quite weird in that it seems to override the \"require\"d priority of\n> \"sslmode=require\", which it IMO really shouldn't.\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/CAOYmi%2B%3DcnV-8V8TndSkEF6Htqa7qHQUL_KnQU8-DrT0Jjnm3_Q%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAEze2Wi9j5Q3mRnuoD2Hr%3DeOFV-cMzWAUZ88YmSXSwsiJLQOWA%40mail.gmail.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 11:13:12 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 25/04/2024 21:13, Jacob Champion wrote:\n> On Thu, Apr 25, 2024 at 10:35 AM Robert Haas <[email protected]> wrote:\n>> Maybe I'm missing something here, but why doesn't sslnegotiation\n>> override sslmode completely? Or alternatively, why not remove\n>> sslnegotiation entirely and just have more sslmode values? I mean\n>> maybe this shouldn't happen categorically, but if I say I want to\n>> require a direct SSL connection, to me that implies that I don't want\n>> an indirect SSL connection, and I really don't want a non-SSL\n>> connection.\n\nMy thinking with sslnegotiation is that it controls how SSL is \nnegotiated with the server, if SSL is to be used at all. It does not \ncontrol whether SSL is used or required; that's what sslmode is for.\n\n> I think that comes down to the debate upthread, and whether you think\n> it's a performance tweak or a security feature. My take on it is,\n> `direct` mode is performance, and `requiredirect` is security.\n\nAgreed, although the the security benefits from `requiredirect` are \npretty vague. 
It reduces the attack surface, but there are no known \nissues with the 'postgres' or 'direct' negotiation either.\n\nPerhaps 'requiredirect' should be renamed to 'directonly'?\n\n> (Especially since, with the current implementation, requiredirect can\n> slow things down?)\n\nYes: the case is gssencmode=prefer, kerberos credentical cache present \nin client, and server doesn't support GSS. With \nsslnegotiation='postgres' or 'direct', libpq can do the SSL negotiation \nover the same TCP connection after the server rejected the GSSRequest. \nWith sslnegotiation='requiredirect', it needs to open a new TCP connection.\n\n> >> I think it's pretty questionable in 2024 whether sslmode=allow and\n>> sslmode=prefer make any sense at all. I don't think it would be crazy\n>> to remove them entirely. But I certainly don't think that they should\n>> be allowed to bleed into the behavior of new, higher-security\n>> configurations. Surely if I say I want direct SSL, it's that or\n>> nothing, right?\n> \n> I agree, but I more or less lost the battle at [1]. Like Matthias\n> mentioned in [2]:\n> \n>> I'm not sure about this either. The 'gssencmode' option is already\n>> quite weird in that it seems to override the \"require\"d priority of\n>> \"sslmode=require\", which it IMO really shouldn't.\n\nYeah, that combination is weird. I think we should forbid it. But that's \nseparate from sslnegotiation.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 26 Apr 2024 00:50:33 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Thu, Apr 25, 2024 at 2:50 PM Heikki Linnakangas <[email protected]> wrote:\n> > I think that comes down to the debate upthread, and whether you think\n> > it's a performance tweak or a security feature. My take on it is,\n> > `direct` mode is performance, and `requiredirect` is security.\n>\n> Agreed, although the the security benefits from `requiredirect` are\n> pretty vague. It reduces the attack surface, but there are no known\n> issues with the 'postgres' or 'direct' negotiation either.\n\nI think reduction in attack surface is a concrete security benefit,\nnot a vague one. True, I don't know of any exploits today, but that\nseems almost tautological -- if there were known exploits in our\nupgrade handshake, I assume we'd be working to fix them ASAP?\n\n> Perhaps 'requiredirect' should be renamed to 'directonly'?\n\nIf it's agreed that we don't want to require a stronger sslmode for\nthat sslnegotiation setting, then that would probably be an\nimprovement. But who is the target user for\n`sslnegotiation=directonly`, in your opinion? Would they ever have a\nreason to use a weak sslmode?\n\n> >> I'm not sure about this either. The 'gssencmode' option is already\n> >> quite weird in that it seems to override the \"require\"d priority of\n> >> \"sslmode=require\", which it IMO really shouldn't.\n>\n> Yeah, that combination is weird. I think we should forbid it. But that's\n> separate from sslnegotiation.\n\nSeparate but related, IMO. If we were all hypothetically okay with\ngssencmode ignoring `sslmode=require`, then it's hard for me to claim\nthat `sslnegotiation=requiredirect` should behave differently. 
On the\nother hand, if we're not okay with that and we'd like to change it,\nit's easier for me to argue that `requiredirect` should also be\nstricter from the get-go.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Thu, 25 Apr 2024 16:23:59 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Thu, Apr 25, 2024 at 5:50 PM Heikki Linnakangas <[email protected]> wrote:\n> On 25/04/2024 21:13, Jacob Champion wrote:\n> > On Thu, Apr 25, 2024 at 10:35 AM Robert Haas <[email protected]> wrote:\n> >> Maybe I'm missing something here, but why doesn't sslnegotiation\n> >> override sslmode completely? Or alternatively, why not remove\n> >> sslnegotiation entirely and just have more sslmode values? I mean\n> >> maybe this shouldn't happen categorically, but if I say I want to\n> >> require a direct SSL connection, to me that implies that I don't want\n> >> an indirect SSL connection, and I really don't want a non-SSL\n> >> connection.\n>\n> My thinking with sslnegotiation is that it controls how SSL is\n> negotiated with the server, if SSL is to be used at all. It does not\n> control whether SSL is used or required; that's what sslmode is for.\n\nI think this might boil down to the order in which someone thinks that\ndifferent settings should be applied. It sounds like your mental model\nis that GSS settings are applied first, and then SSL settings are\napplied afterwards, and then within the SSL bucket you can select how\nyou want to do SSL (direct or negotiated) and how required it is. My\nmental model is different: I imagine that since direct SSL happens\nfrom the first byte exchanged over the socket, direct SSL \"happens\nfirst\", making settings that pertain to negotiated GSS and negotiated\nSSL irrelevant. Because, logically, if you've decided to use direct\nSSL, you're not even going to get a chance to negotiate those things.\nI understand that the code as written works around that, by being able\nto open a new connection if it turns out that we need to negotiate\nthat stuff after all, but IMHO that's rather confusing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 11:25:21 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 23/04/2024 20:02, Jacob Champion wrote:\n> On Fri, Apr 19, 2024 at 2:43 PM Heikki Linnakangas <[email protected]> wrote:\n>>\n>> On 19/04/2024 19:48, Jacob Champion wrote:\n>>> On Fri, Apr 19, 2024 at 6:56 AM Heikki Linnakangas <[email protected]> wrote:\n>>>> With direct SSL negotiation, we always require ALPN.\n>>>\n>>> (As an aside: I haven't gotten to test the version of the patch that\n>>> made it into 17 yet, but from a quick glance it looks like we're not\n>>> rejecting mismatched ALPN during the handshake as noted in [1].)\n>>\n>> Ah, good catch, that fell through the cracks. Agreed, the client should\n>> reject a direct SSL connection if the server didn't send ALPN. I'll add\n>> that to the Open Items so we don't forget again.\n> \n> Yes, the client should also reject, but that's not what I'm referring\n> to above. 
The server needs to fail the TLS handshake itself with the\n> proper error code (I think it's `no_application_protocol`?); otherwise\n> a client implementing a different protocol could consume the\n> application-level bytes coming back from the server and act on them.\n> That's the protocol confusion attack from ALPACA we're trying to\n> avoid.\n\nI finally understood what you mean. So if the client supports ALPN, but \nthe list of protocols that it provides does not include 'postgresql', \nthe server should reject the connection with 'no_applicaton_protocol' \nalert. Makes sense. I thought OpenSSL would do that with the alpn \ncallback we have, but it does not.\n\nThe attached patch makes that change. I used the alpn_cb() function in \nopenssl's own s_server program as example for that.\n\nUnfortunately the error message you got in the client with that was \nhorrible (I modified the server to not accept the 'postgresql' protocol):\n\npsql \"dbname=postgres sslmode=require host=localhost\"\npsql: error: connection to server at \"localhost\" (::1), port 5432 \nfailed: SSL error: SSL error code 167773280\n\nThis is similar to the case with system errors discussed at \nhttps://postgr.es/m/[email protected], but \nthis one is equally bad on OpenSSL 1.1.1 and 3.3.0. It seems like an \nOpenSSL bug to me, because there is an error string \"no application \nprotocol\" in the OpenSSL sources (ssl/ssl_err.c):\n\n {ERR_PACK(ERR_LIB_SSL, 0, SSL_R_NO_APPLICATION_PROTOCOL),\n \"no application protocol\"},\n\nand in the server log, you get that message. But the error code seen in \nthe client is different. There are also messages for other alerts, for \nexample:\n\n {ERR_PACK(ERR_LIB_SSL, 0, SSL_R_TLSV13_ALERT_MISSING_EXTENSION),\n \"tlsv13 alert missing extension\"},\n\nThe bottom line is that that seems like a bug of omission to me in \nOpenSSL, but I wouldn't hold my breath waiting for it to be fixed. We \ncan easily check for that error code and print the right message \nourselves however, as in the attached patch.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Sat, 27 Apr 2024 01:50:54 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 26/04/2024 02:23, Jacob Champion wrote:\n> On Thu, Apr 25, 2024 at 2:50 PM Heikki Linnakangas <[email protected]> wrote:\n>>> I think that comes down to the debate upthread, and whether you think\n>>> it's a performance tweak or a security feature. My take on it is,\n>>> `direct` mode is performance, and `requiredirect` is security.\n>>\n>> Agreed, although the the security benefits from `requiredirect` are\n>> pretty vague. It reduces the attack surface, but there are no known\n>> issues with the 'postgres' or 'direct' negotiation either.\n> \n> I think reduction in attack surface is a concrete security benefit,\n> not a vague one. True, I don't know of any exploits today, but that\n> seems almost tautological -- if there were known exploits in our\n> upgrade handshake, I assume we'd be working to fix them ASAP?\n\nSure, we'd try to fix them ASAP. But we've had the SSLRequest \nnegotiation since time immemorial. If a new vulnerability is found, it's \nunlikely that we'd need to disable the SSLRequest negotiation altogether \nto fix it. We'd be in serious trouble with back-branches in that case. 
\nThere's no sudden need to have a kill-switch for it.\n\nTaking that to the extreme, you could argue for a kill-switch for every \nfeature, just in case there's a vulnerability in them. I agree that \nauthentication is more sensitive so reducing the surface of that is more \nreasonable. And but nevertheless.\n\n(This discussion is moot though, because we do have the \nsslnegotiation=requiredirect mode, so you can disable the SSLRequest \nnegotiation.)\n\n>> Perhaps 'requiredirect' should be renamed to 'directonly'?\n> \n> If it's agreed that we don't want to require a stronger sslmode for\n> that sslnegotiation setting, then that would probably be an\n> improvement. But who is the target user for\n> `sslnegotiation=directonly`, in your opinion? Would they ever have a\n> reason to use a weak sslmode?\n\nIt's unlikely, I agree. A user who's sophisticated enough to use \nsslnegotiation=directonly would probably also want sslmode=require and \nrequire_auth=scram-sha256 and channel_binding=require. Or \nsslmode=verify-full. But in principle they're orthogonal. If you only \nhave v17 servers in your environment, so you know all servers support \ndirect negotiation if they support SSL at all, but a mix of servers with \nand without SSL, sslnegotiation=directonly would reduce roundtrips with \nsslmode=prefer.\n\nMaking requiredirect to imply sslmode=require, or error out unless you \nalso set sslmode=require, feels like a cavalier way of forcing SSL. We \nshould have a serious discussion on making sslmode=require the default \ninstead. That would be a more direct way of nudging people to use SSL. \nIt would cause a lot of breakage, but it would also be a big improvement \nto security.\n\nConsider how sslnegotiation=requiredirect/directonly would feel, if we \nmade sslmode=require the default. If you explicitly set \"sslmode=prefer\" \nor \"sslmode=disable\", it would be annoying if you would also need to \nremove \"sslnegotiation=requiredirect\" from your connection string.\n\nI'm leaning towards renaming sslnegotiation=requiredirect to \nsslnegotiation=directonly at this point.\n\n>>>> I'm not sure about this either. The 'gssencmode' option is already\n>>>> quite weird in that it seems to override the \"require\"d priority of\n>>>> \"sslmode=require\", which it IMO really shouldn't.\n>>\n>> Yeah, that combination is weird. I think we should forbid it. But that's\n>> separate from sslnegotiation.\n> \n> Separate but related, IMO. If we were all hypothetically okay with\n> gssencmode ignoring `sslmode=require`, then it's hard for me to claim\n> that `sslnegotiation=requiredirect` should behave differently. On the\n> other hand, if we're not okay with that and we'd like to change it,\n> it's easier for me to argue that `requiredirect` should also be\n> stricter from the get-go.\n\nI think the best way forward for those is something like a new \n\"require_proto\" parameter that you suggested upthread. Perhaps call it \n\"encryption\", with options \"none\", \"ssl\", \"gss\" so that you can provide \nmultiple options and libpq will try them in the order specified. For \nexample:\n\nencryption=none\nencryption=ssl, none # like sslmode=prefer\nencryption=gss\nencryption=gss, ssl # try GSS first, then SSL\nencryption=ssl, gss # try SSL first, then GSS\n\nThis would make gssencmode and sslmode=disable/allow/prefer/require \nsettings obsolete. sslmode would stil be needed to distinguish between \nverify-ca/verify-full though. 
But sslnegotiation would be still \northogonal to that.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 29 Apr 2024 11:38:17 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 23/04/2024 10:07, Michael Paquier wrote:\n> In the documentation of PQsslAttribute(), it is mentioned that empty\n> string is returned for \"alpn\" if ALPN was not used, however the code\n> returns NULL in this case:\n> SSL_get0_alpn_selected(conn->ssl, &data, &len);\n> if (data == NULL || len == 0 || len > sizeof(alpn_str) - 1)\n> return NULL;\n\nGood catch. I changed the code to return an empty string, as the \ndocumentation says.\n\nI considered if NULL or empty string would be better here. The docs for \nPQsslAttribute also says:\n\n\"Returns NULL if the connection does not use SSL or the specified \nattribute name is not defined for the library in use.\"\n\nIf a caller wants to distinguish between \"libpq or the SSL library \ndoesn't support ALPN at all\" from \"the server didn't support ALPN\", you \ncan tell from whether PQsslAttribute returns NULL or an empty string. So \nI think an empty string is better.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 29 Apr 2024 12:43:18 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, Apr 29, 2024 at 12:43:18PM +0300, Heikki Linnakangas wrote:\n> If a caller wants to distinguish between \"libpq or the SSL library doesn't\n> support ALPN at all\" from \"the server didn't support ALPN\", you can tell\n> from whether PQsslAttribute returns NULL or an empty string. So I think an\n> empty string is better.\n\nThanks. I would also have used an empty string to differenciate these\ntwo cases.\n--\nMichael", "msg_date": "Mon, 29 Apr 2024 21:07:58 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, Apr 29, 2024 at 4:38 AM Heikki Linnakangas <[email protected]> wrote:\n> Making requiredirect to imply sslmode=require, or error out unless you\n> also set sslmode=require, feels like a cavalier way of forcing SSL. We\n> should have a serious discussion on making sslmode=require the default\n> instead. That would be a more direct way of nudging people to use SSL.\n> It would cause a lot of breakage, but it would also be a big improvement\n> to security.\n>\n> Consider how sslnegotiation=requiredirect/directonly would feel, if we\n> made sslmode=require the default. If you explicitly set \"sslmode=prefer\"\n> or \"sslmode=disable\", it would be annoying if you would also need to\n> remove \"sslnegotiation=requiredirect\" from your connection string.\n\nI think making sslmode=require the default is pretty unworkable,\nunless we also had a way of automatically setting up SSL as part of\ninitdb or something. Otherwise, we'd have to add sslmode=disable to a\nmillion places just to get the regression tests to work, and every\ntest cluster anyone spins up locally would break in annoying ways,\ntoo. I had been thinking we might want to change the default to\nsslmode=disable and remove allow and prefer, but maybe automating a\nbasic SSL setup is better. 
Either way, we should move toward a world\nwhere you either ask for SSL and get it, or don't ask for it and don't\nget it. Being halfway in between is bad.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Apr 2024 08:38:22 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Fri, Apr 26, 2024 at 3:51 PM Heikki Linnakangas <[email protected]> wrote:\n> I finally understood what you mean. So if the client supports ALPN, but\n> the list of protocols that it provides does not include 'postgresql',\n> the server should reject the connection with 'no_applicaton_protocol'\n> alert.\n\nRight. (And additionally, we reject clients that don't advertise ALPN\nover direct SSL, also during the TLS handshake.)\n\n> The attached patch makes that change. I used the alpn_cb() function in\n> openssl's own s_server program as example for that.\n\nThis patch as written will apply the new requirement to the old\nnegotiation style, though, won't it? My test suite sees a bunch of\nfailures with that.\n\n> Unfortunately the error message you got in the client with that was\n> horrible (I modified the server to not accept the 'postgresql' protocol):\n>\n> psql \"dbname=postgres sslmode=require host=localhost\"\n> psql: error: connection to server at \"localhost\" (::1), port 5432\n> failed: SSL error: SSL error code 167773280\n\n<long sigh>\n\nI filed a bug upstream [1].\n\nThanks,\n--Jacob\n\n[1] https://github.com/openssl/openssl/issues/24300\n\n\n", "msg_date": "Mon, 29 Apr 2024 11:04:41 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, Apr 29, 2024 at 1:38 AM Heikki Linnakangas <[email protected]> wrote:\n> Sure, we'd try to fix them ASAP. But we've had the SSLRequest\n> negotiation since time immemorial. If a new vulnerability is found, it's\n> unlikely that we'd need to disable the SSLRequest negotiation altogether\n> to fix it. We'd be in serious trouble with back-branches in that case.\n> There's no sudden need to have a kill-switch for it.\n\nI'm not really arguing that you'd need the kill switch to fix a\nproblem in the code. (At least, I'm not arguing that in this thread; I\nreserve the right to argue that in the future. :D) But between the\npoint of time that a vulnerability is announced and a user has\nupgraded, it's really nice to have a switch as a mitigation. Even\nbetter if it's server-side, because then the DBA can protect all their\nclients without requiring action on their part.\n\n> Taking that to the extreme, you could argue for a kill-switch for every\n> feature, just in case there's a vulnerability in them. I agree that\n> authentication is more sensitive so reducing the surface of that is more\n> reasonable. And but nevertheless.\n\nI mean... that would be extreme, yeah. I don't think anyone's proposed that.\n\n> If you only\n> have v17 servers in your environment, so you know all servers support\n> direct negotiation if they support SSL at all, but a mix of servers with\n> and without SSL, sslnegotiation=directonly would reduce roundtrips with\n> sslmode=prefer.\n\nBut if you're in that situation, what does the use of directonly give\nyou over `sslnegotiation=direct`? 
You already know that servers\nsupport direct, so there's no additional performance penalty from the\nless strict mode.\n\n> Making requiredirect to imply sslmode=require, or error out unless you\n> also set sslmode=require, feels like a cavalier way of forcing SSL. We\n> should have a serious discussion on making sslmode=require the default\n> instead. That would be a more direct way of nudging people to use SSL.\n> It would cause a lot of breakage, but it would also be a big improvement\n> to security.\n>\n> Consider how sslnegotiation=requiredirect/directonly would feel, if we\n> made sslmode=require the default. If you explicitly set \"sslmode=prefer\"\n> or \"sslmode=disable\", it would be annoying if you would also need to\n> remove \"sslnegotiation=requiredirect\" from your connection string.\n\nThat's similar to how sslrootcert=system already works. To me, it\nfeels great, because I don't have to worry about nonsensical\ncombinations (with the exception of GSS, which we've touched on\nabove). libpq complains loudly if I try to shoot myself in the foot,\nand if I'm using sslrootcert=system then it's a pretty clear signal\nthat I care more about security than the temporary inconvenience of\nediting my connection string for one weird server that doesn't use SSL\nfor some reason.\n\n> I think the best way forward for those is something like a new\n> \"require_proto\" parameter that you suggested upthread. Perhaps call it\n> \"encryption\", with options \"none\", \"ssl\", \"gss\" so that you can provide\n> multiple options and libpq will try them in the order specified. For\n> example:\n>\n> encryption=none\n> encryption=ssl, none # like sslmode=prefer\n> encryption=gss\n> encryption=gss, ssl # try GSS first, then SSL\n> encryption=ssl, gss # try SSL first, then GSS\n>\n> This would make gssencmode and sslmode=disable/allow/prefer/require\n> settings obsolete. sslmode would stil be needed to distinguish between\n> verify-ca/verify-full though. But sslnegotiation would be still\n> orthogonal to that.\n\nI will give this some more thought.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 29 Apr 2024 11:43:04 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 29/04/2024 21:04, Jacob Champion wrote:\n> On Fri, Apr 26, 2024 at 3:51 PM Heikki Linnakangas <[email protected]> wrote:\n>> I finally understood what you mean. So if the client supports ALPN, but\n>> the list of protocols that it provides does not include 'postgresql',\n>> the server should reject the connection with 'no_applicaton_protocol'\n>> alert.\n> \n> Right. (And additionally, we reject clients that don't advertise ALPN\n> over direct SSL, also during the TLS handshake.)\n> \n>> The attached patch makes that change. I used the alpn_cb() function in\n>> openssl's own s_server program as example for that.\n> \n> This patch as written will apply the new requirement to the old\n> negotiation style, though, won't it? My test suite sees a bunch of\n> failures with that.\n\nYes, and that is what we want, right? If the client uses old negotiation \nstyle, and includes ALPN in its ClientHello, but requests protocol \n\"noodles\" instead of \"postgresql\", it seems good to reject the connection.\n\nNote that if the client does not request ALPN at all, the callback is \nnot called, and the connection is accepted. 
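(For reference, such a selection callback looks roughly like this with the public OpenSSL API; it is modelled on the s_server example mentioned earlier, with an invented registration helper, and is not the actual patch:)\n\n#include <openssl/ssl.h>\n\n/* Wire format of the \"postgresql\" ALPN protocol: one length byte + the name. */\nstatic const unsigned char alpn_protos[] =\n    {10, 'p', 'o', 's', 't', 'g', 'r', 'e', 's', 'q', 'l'};\n\n/*\n * Server-side ALPN selection callback. OpenSSL only invokes it when the\n * ClientHello carries the ALPN extension; returning SSL_TLSEXT_ERR_ALERT_FATAL\n * makes OpenSSL abort the handshake with a fatal no_application_protocol alert.\n */\nstatic int\nalpn_cb(SSL *ssl, const unsigned char **out, unsigned char *outlen,\n        const unsigned char *in, unsigned int inlen, void *arg)\n{\n    if (SSL_select_next_proto((unsigned char **) out, outlen,\n                              alpn_protos, sizeof(alpn_protos),\n                              in, inlen) != OPENSSL_NPN_NEGOTIATED)\n        return SSL_TLSEXT_ERR_ALERT_FATAL;\n\n    return SSL_TLSEXT_ERR_OK;\n}\n\nstatic void\nregister_alpn_callback(SSL_CTX *ctx)\n{\n    SSL_CTX_set_alpn_select_cb(ctx, alpn_cb, NULL);\n}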
Old clients still work \nbecause they do not request ALPN.\n\n>> Unfortunately the error message you got in the client with that was\n>> horrible (I modified the server to not accept the 'postgresql' protocol):\n>>\n>> psql \"dbname=postgres sslmode=require host=localhost\"\n>> psql: error: connection to server at \"localhost\" (::1), port 5432\n>> failed: SSL error: SSL error code 167773280\n> \n> <long sigh>\n> \n> I filed a bug upstream [1].\n\nThanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 29 Apr 2024 21:43:22 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, Apr 29, 2024 at 11:43 AM Heikki Linnakangas <[email protected]> wrote:\n> Note that if the client does not request ALPN at all, the callback is\n> not called, and the connection is accepted. Old clients still work\n> because they do not request ALPN.\n\nUgh, sorry for the noise -- I couldn't figure out why all my old\nclients were failing and then realized it was because I'd left some\ntest code in place for the OpenSSL bug. I'll rebuild everything and\nkeep reviewing.\n\n--Jacob\n\n\n", "msg_date": "Mon, 29 Apr 2024 11:51:54 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 29/04/2024 21:43, Jacob Champion wrote:\n> On Mon, Apr 29, 2024 at 1:38 AM Heikki Linnakangas <[email protected]> wrote:\n>> If you only\n>> have v17 servers in your environment, so you know all servers support\n>> direct negotiation if they support SSL at all, but a mix of servers with\n>> and without SSL, sslnegotiation=directonly would reduce roundtrips with\n>> sslmode=prefer.\n> \n> But if you're in that situation, what does the use of directonly give\n> you over `sslnegotiation=direct`? You already know that servers\n> support direct, so there's no additional performance penalty from the\n> less strict mode.\n\nWell, by that argument we don't need requiredirect/directonly at all. \nThis goes back to whether it's a security feature or a performance feature.\n\nThere is a small benefit with sslmode=prefer if you connect to a server \nthat doesn't support SSL, though. With sslnegotiation=direct, if the \nserver rejects the direct SSL connection, the client will reconnect and \ntry SSL with SSLRequest. The server will respond with 'N', and the \nclient will proceed without encryption. sslnegotiation=directonly \nremoves that SSLRequest attempt, eliminating one roundtrip.\n\n>> Making requiredirect to imply sslmode=require, or error out unless you\n>> also set sslmode=require, feels like a cavalier way of forcing SSL. We\n>> should have a serious discussion on making sslmode=require the default\n>> instead. That would be a more direct way of nudging people to use SSL.\n>> It would cause a lot of breakage, but it would also be a big improvement\n>> to security.\n>>\n>> Consider how sslnegotiation=requiredirect/directonly would feel, if we\n>> made sslmode=require the default. If you explicitly set \"sslmode=prefer\"\n>> or \"sslmode=disable\", it would be annoying if you would also need to\n>> remove \"sslnegotiation=requiredirect\" from your connection string.\n> \n> That's similar to how sslrootcert=system already works. 
To me, it\n> feels great, because I don't have to worry about nonsensical\n> combinations (with the exception of GSS, which we've touched on\n> above). libpq complains loudly if I try to shoot myself in the foot,\n> and if I'm using sslrootcert=system then it's a pretty clear signal\n> that I care more about security than the temporary inconvenience of\n> editing my connection string for one weird server that doesn't use SSL\n> for some reason.\n\nOh I was not aware sslrootcert=system works like that. That's a bit \nsurprising, none of the other ssl-related settings imply or require that \nSSL is actually used. Did we intend to set a precedence for new settings \nwith that?\n\n(adding Daniel in case he has an opinion)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 29 Apr 2024 22:06:27 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, Apr 29, 2024 at 12:06 PM Heikki Linnakangas <[email protected]> wrote:\n> On 29/04/2024 21:43, Jacob Champion wrote:\n> > But if you're in that situation, what does the use of directonly give\n> > you over `sslnegotiation=direct`? You already know that servers\n> > support direct, so there's no additional performance penalty from the\n> > less strict mode.\n>\n> Well, by that argument we don't need requiredirect/directonly at all.\n> This goes back to whether it's a security feature or a performance feature.\n\nThat's what I've been trying to argue, yeah. If it's not a security\nfeature... why's it there?\n\n> There is a small benefit with sslmode=prefer if you connect to a server\n> that doesn't support SSL, though. With sslnegotiation=direct, if the\n> server rejects the direct SSL connection, the client will reconnect and\n> try SSL with SSLRequest. The server will respond with 'N', and the\n> client will proceed without encryption. sslnegotiation=directonly\n> removes that SSLRequest attempt, eliminating one roundtrip.\n\nOkay, agreed that in this case, there is a performance benefit. It's\nnot enough to convince me, honestly, but are there any other cases I\nmissed as well?\n\n> Oh I was not aware sslrootcert=system works like that. That's a bit\n> surprising, none of the other ssl-related settings imply or require that\n> SSL is actually used.\n\nFor sslrootcert=system in particular, the danger of accidentally weak\nsslmodes is pretty high, especially for verify-ca mode. (It goes back\nto that other argument -- there should be, effectively, zero users who\nboth opt in to the public CA system, and are also okay with silently\nfalling back and not using it.)\n\n> Did we intend to set a precedence for new settings\n> with that?\n\n(I'll let committers answer whether they intended that or not -- I was\njust bringing up that we already have a setting that works like that,\nand I really like how it works in practice. 
But it's probably\nunsurprising that I like it.)\n\n--Jacob\n\n\n", "msg_date": "Mon, 29 Apr 2024 12:32:36 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, Apr 29, 2024 at 12:32 PM Jacob Champion\n<[email protected]> wrote:\n>\n> On Mon, Apr 29, 2024 at 12:06 PM Heikki Linnakangas <[email protected]> wrote:\n> > On 29/04/2024 21:43, Jacob Champion wrote:\n> > > But if you're in that situation, what does the use of directonly give\n> > > you over `sslnegotiation=direct`? You already know that servers\n> > > support direct, so there's no additional performance penalty from the\n> > > less strict mode.\n> >\n> > Well, by that argument we don't need requiredirect/directonly at all.\n> > This goes back to whether it's a security feature or a performance feature.\n>\n> That's what I've been trying to argue, yeah. If it's not a security\n> feature... why's it there?\n\nEr, I should clarify this. I _want_ requiredirect. I just want it to\nbe a security feature.\n\n--Jacob\n\n\n", "msg_date": "Mon, 29 Apr 2024 12:34:18 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "> On 29 Apr 2024, at 21:06, Heikki Linnakangas <[email protected]> wrote:\n\n> Oh I was not aware sslrootcert=system works like that. That's a bit surprising, none of the other ssl-related settings imply or require that SSL is actually used. Did we intend to set a precedence for new settings with that?\n\nIt was very much intentional, and documented, an sslmode other than verify-full\nmakes little sense when combined with sslrootcert=system. It wasn't intended\nto set a precedence (though there is probably a fair bit of things we can do,\ngetting this right is hard enough as it is), rather it was footgun prevention.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 30 Apr 2024 12:10:38 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 29/04/2024 22:32, Jacob Champion wrote:\n> On Mon, Apr 29, 2024 at 12:06 PM Heikki Linnakangas <[email protected]> wrote:\n>> There is a small benefit with sslmode=prefer if you connect to a server\n>> that doesn't support SSL, though. With sslnegotiation=direct, if the\n>> server rejects the direct SSL connection, the client will reconnect and\n>> try SSL with SSLRequest. The server will respond with 'N', and the\n>> client will proceed without encryption. sslnegotiation=directonly\n>> removes that SSLRequest attempt, eliminating one roundtrip.\n> \n> Okay, agreed that in this case, there is a performance benefit. It's\n> not enough to convince me, honestly, but are there any other cases I\n> missed as well?\n\nI realized one case that hasn't been discussed so far: If you use the \ncombination of \"sslmode=prefer sslnegotiation=requiredirect\" to connect \nto a pre-v17 server that has SSL enabled but does not support direct SSL \nconnections, you will fall back to a plaintext connection instead. \nThat's almost certainly not what you wanted. I'm coming around to your \nopinion that we should not allow that combination.\n\nStepping back to summarize my thoughts, there are now three things I \ndon't like about the status quo:\n\n1. 
As noted above, the sslmode=prefer and sslnegotiation=requiredirect \ncombination is somewhat dangerous, as you might unintentionally fall \nback to plaintext authentication when connecting to a pre-v17 server.\n\n2. There is an asymmetry between \"postgres\" and \"direct\"\noption names. \"postgres\" means \"try only traditional negotiation\", while\n\"direct\" means \"try direct first, and fall back to traditional\nnegotiation if it fails\". That is apparent only if you know that the\n\"requiredirect\" mode also exists.\n\n3. The \"require\" word in \"requiredirect\" suggests that it's somehow\nmore strict or more secure, similar to sslmode=require. However, I don't \nconsider direct SSL connections to be a security feature.\n\n\nNew proposal:\n\n- Remove the \"try both\" mode completely, and rename \"requiredirect\" to \njust \"direct\". So there would be just two modes: \"postgres\" and \n\"direct\". On reflection, the automatic fallback mode doesn't seem very \nuseful. It would make sense as the default, because then you would get \nthe benefits automatically in most cases but still be compatible with \nold servers. But if it's not the default, you have to fiddle with libpq \nsettings anyway to enable it, and then you might as well use the \n\"requiredirect\" mode when you know the server supports it. There isn't \nanything wrong with it as such, but given how much confusion there's \nbeen on how this all works, I'd prefer to cut this back to the bare \nminimum now. We can add it back in the future, and perhaps make it the \ndefault at the same time. This addresses points 2. and 3. above.\n\nand:\n\n- Only allow sslnegotiation=direct with sslmode=require or higher. This \nis what you, Jacob, wanted to do all along, and addresses point 1.\n\nThoughts?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 10 May 2024 16:50:32 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Fri, 10 May 2024 at 15:50, Heikki Linnakangas <[email protected]> wrote:\n> New proposal:\n>\n> - Remove the \"try both\" mode completely, and rename \"requiredirect\" to\n> just \"direct\". So there would be just two modes: \"postgres\" and\n> \"direct\". On reflection, the automatic fallback mode doesn't seem very\n> useful. It would make sense as the default, because then you would get\n> the benefits automatically in most cases but still be compatible with\n> old servers. But if it's not the default, you have to fiddle with libpq\n> settings anyway to enable it, and then you might as well use the\n> \"requiredirect\" mode when you know the server supports it. There isn't\n> anything wrong with it as such, but given how much confusion there's\n> been on how this all works, I'd prefer to cut this back to the bare\n> minimum now. We can add it back in the future, and perhaps make it the\n> default at the same time. This addresses points 2. and 3. above.\n>\n> and:\n>\n> - Only allow sslnegotiation=direct with sslmode=require or higher. This\n> is what you, Jacob, wanted to do all along, and addresses point 1.\n>\n> Thoughts?\n\nSounds mostly good to me. But I think we'd want to automatically\nincrease sslmode to require if it is unset, but sslnegotation is set\nto direct. Similar to how we bump sslmode to verify-full if\nsslrootcert is set to system, but sslmode is unset. i.e. 
it seems\nunnecessary/unwanted to throw an error if the connection string only\ncontains sslnegotiation=direct\n\n\n", "msg_date": "Sat, 11 May 2024 22:45:38 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 11/05/2024 23:45, Jelte Fennema-Nio wrote:\n> On Fri, 10 May 2024 at 15:50, Heikki Linnakangas <[email protected]> wrote:\n>> New proposal:\n>>\n>> - Remove the \"try both\" mode completely, and rename \"requiredirect\" to\n>> just \"direct\". So there would be just two modes: \"postgres\" and\n>> \"direct\". On reflection, the automatic fallback mode doesn't seem very\n>> useful. It would make sense as the default, because then you would get\n>> the benefits automatically in most cases but still be compatible with\n>> old servers. But if it's not the default, you have to fiddle with libpq\n>> settings anyway to enable it, and then you might as well use the\n>> \"requiredirect\" mode when you know the server supports it. There isn't\n>> anything wrong with it as such, but given how much confusion there's\n>> been on how this all works, I'd prefer to cut this back to the bare\n>> minimum now. We can add it back in the future, and perhaps make it the\n>> default at the same time. This addresses points 2. and 3. above.\n>>\n>> and:\n>>\n>> - Only allow sslnegotiation=direct with sslmode=require or higher. This\n>> is what you, Jacob, wanted to do all along, and addresses point 1.\n>>\n>> Thoughts?\n> \n> Sounds mostly good to me. But I think we'd want to automatically\n> increase sslmode to require if it is unset, but sslnegotation is set\n> to direct. Similar to how we bump sslmode to verify-full if\n> sslrootcert is set to system, but sslmode is unset. i.e. it seems\n> unnecessary/unwanted to throw an error if the connection string only\n> contains sslnegotiation=direct\n\nI find that error-prone. For example:\n\n1. Try to connect to a server with direct negotiation: psql \"host=foobar \ndbname=mydb sslnegotiation=direct\"\n\n2. It fails. Maybe it was an old server? Let's change it to \nsslnegotiation=postgres.\n\n3. Now it succeeds. Great!\n\nYou might miss that by changing sslnegotiation to 'postgres', or by \nremoving it altogether, you not only made it compatible with older \nserver versions, but you also allowed falling back to a plaintext \nconnection. Maybe you're fine with that, but maybe not. I'd like to \nnudge people to use sslmode=require, not rely on implicit stuff like \nthis just to make connection strings a little shorter.\n\nI'm not a fan of sslrootcert=system implying sslmode=verify-full either, \nfor the same reasons. But at least \"sslrootcert\" is a clearly \nsecurity-related setting, so removing it might give you a pause, whereas \nsslnegotition is about performance and compatibility.\n\nIn v18, I'd like to make sslmode=require the default. Or maybe introduce \na new setting like \"encryption=ssl|gss|none\", defaulting to 'ssl'. If we \nwant to encourage encryption, that's the right way to do it. 
(I'd still \nrecommend everyone to use an explicit sslmode=require in their \nconnection strings for many years, though, because you might be using an \nolder client without realizing it.)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 13 May 2024 00:39:07 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Sun, 12 May 2024 at 23:39, Heikki Linnakangas <[email protected]> wrote:\n> You might miss that by changing sslnegotiation to 'postgres', or by\n> removing it altogether, you not only made it compatible with older\n> server versions, but you also allowed falling back to a plaintext\n> connection. Maybe you're fine with that, but maybe not. I'd like to\n> nudge people to use sslmode=require, not rely on implicit stuff like\n> this just to make connection strings a little shorter.\n\nI understand your worry, but I'm not sure that this is actually much\nof a security issue in practice. sslmode=prefer and sslmode=require\nare the same amount of insecure imho (i.e. extremely insecure). The\nonly reason sslmode=prefer would connect as non-ssl to a server that\nsupports ssl is if an attacker has write access to the network in the\nmiddle (i.e. eavesdropping ability alone is not enough). Once an\nattacker has this level of network access, it's trivial for this\nattacker to read any data sent to Postgres by intercepting the TLS\nhandshake and doing TLS termination with some arbitrary cert (any cert\nis trusted by sslmode=require).\n\nSo the only actual case where this is a security issue I can think of\nis when an attacker has only eavesdropping ability on the network. And\nsomehow the Postgres server that the client tries to connect to is\nconfigured incorrectly, so that no ssl is set up at all. Then a client\nwould drop to plaintext, when connecting to this server instead of\nerroring, and the attacker could now read the traffic. But I don't\nreally see this scenario end up any differently when requiring people\nto enter sslmode=require. The only action a user can take to connect\nto a server that does not have ssl support is to remove\nsslmode=require from the connection string. Except if they are also\nthe server operator, in which case they could enable ssl on the\nserver. But if ssl is not set up, then it was probably never set up,\nand thus providing sslnegotiation=direct would be to test if ssl\nworks.\n\nBut, if you disagree I'm fine with erroring on plain sslnegotiation=direct\n\n> In v18, I'd like to make sslmode=require the default. Or maybe introduce\n> a new setting like \"encryption=ssl|gss|none\", defaulting to 'ssl'. If we\n> want to encourage encryption, that's the right way to do it. (I'd still\n> recommend everyone to use an explicit sslmode=require in their\n> connection strings for many years, though, because you might be using an\n> older client without realizing it.)\n\nI'm definitely a huge proponent of making the connection defaults more\nsecure. But as described above sslmode=require is still extremely\ninsecure and I don't think we gain anything substantial by making it\nthe default. I think the only useful secure default would be to use\nsslmode=verify-full (with probably some automatic fallback to\nsslmode=prefer when connecting to hardcoded IPs or localhost). Which\nprobably means that sslrootcert=system should also be made the\ndefault. 
Which would mean that ~/.postgresql/root.crt would not be the\ndefault anymore, which I personally think is fine but others likely\ndisagree.\n\n\n", "msg_date": "Mon, 13 May 2024 11:50:59 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 13/05/2024 12:50, Jelte Fennema-Nio wrote:\n> On Sun, 12 May 2024 at 23:39, Heikki Linnakangas <[email protected]> wrote:\n>> In v18, I'd like to make sslmode=require the default. Or maybe introduce\n>> a new setting like \"encryption=ssl|gss|none\", defaulting to 'ssl'. If we\n>> want to encourage encryption, that's the right way to do it. (I'd still\n>> recommend everyone to use an explicit sslmode=require in their\n>> connection strings for many years, though, because you might be using an\n>> older client without realizing it.)\n> \n> I'm definitely a huge proponent of making the connection defaults more\n> secure. But as described above sslmode=require is still extremely\n> insecure and I don't think we gain anything substantial by making it\n> the default. I think the only useful secure default would be to use\n> sslmode=verify-full (with probably some automatic fallback to\n> sslmode=prefer when connecting to hardcoded IPs or localhost). Which\n> probably means that sslrootcert=system should also be made the\n> default. Which would mean that ~/.postgresql/root.crt would not be the\n> default anymore, which I personally think is fine but others likely\n> disagree.\n\n\"channel_binding=require sslmode=require\" also protects from MITM attacks.\n\nI think these options should be designed from the user's point of view, \nso that the user can specify the risks they're willing to accept, and \nthe details of how that's accomplished are handled by libpq. For \nexample, I'm OK with (tick all that apply):\n\n[ ] passive eavesdroppers seeing all the traffic\n[ ] MITM being able to hijack the session\n[ ] connecting without verifying the server's identity\n[ ] divulging the plaintext password to the server\n[ ] ...\n\nThe requirements for whether SSL or GSS encryption is required, whether \nthe server's certificate needs to signed with known CA, etc. can be \nderived from those. For example, if you need protection from \neavesdroppers, SSL or GSS encryption must be used. If you need to verify \nthe server's identity, it implies sslmode=verify-CA or \nchannel_binding=true. If you don't want to divulge the password, it \nimplies a suitable require_auth setting. I don't have a concrete \nproposal yet, but something like that. And the defaults for those are up \nfor debate.\n\npsql could perhaps help by listing the above properties at the beginning \nof the session, something like:\n\npsql (16.2)\nWARNING: Connection is not encrypted.\nWARNING: The server's identity has not been verified\nType \"help\" for help.\n\npostgres=#\n\nAlthough for the \"divulge plaintext password to server\" property, it's \ntoo late to print a warning after connecting, because the damage has \nalready been done.\n\nA different line of thought is that to keep the attack surface as smal \nas possible, you should specify very explicitly what exactly you expect \nto happen in the authentication, and disallow any variance. For example, \nyou expect SCRAM to be used, with a certificate signed by a particular \nCA, and if the server requests anything else, that's suspicious and the \nconnection is aborted. 
We should make that possible too, but the above \nflexible risk-based approach seems good for the defaults.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 13 May 2024 14:07:53 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 10/05/2024 16:50, Heikki Linnakangas wrote:\n> New proposal:\n> \n> - Remove the \"try both\" mode completely, and rename \"requiredirect\" to\n> just \"direct\". So there would be just two modes: \"postgres\" and\n> \"direct\". On reflection, the automatic fallback mode doesn't seem very\n> useful. It would make sense as the default, because then you would get\n> the benefits automatically in most cases but still be compatible with\n> old servers. But if it's not the default, you have to fiddle with libpq\n> settings anyway to enable it, and then you might as well use the\n> \"requiredirect\" mode when you know the server supports it. There isn't\n> anything wrong with it as such, but given how much confusion there's\n> been on how this all works, I'd prefer to cut this back to the bare\n> minimum now. We can add it back in the future, and perhaps make it the\n> default at the same time. This addresses points 2. and 3. above.\n> \n> and:\n> \n> - Only allow sslnegotiation=direct with sslmode=require or higher. This\n> is what you, Jacob, wanted to do all along, and addresses point 1.\n\nHere's a patch to implement that.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Mon, 13 May 2024 16:37:50 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, 13 May 2024 at 15:38, Heikki Linnakangas <[email protected]> wrote:\n> Here's a patch to implement that.\n\n+ if (conn->sslnegotiation[0] == 'd' &&\n+ conn->sslmode[0] != 'r' && conn->sslmode[0] != 'v')\n\nI think these checks should use strcmp instead of checking magic first\ncharacters. I see this same clever trick is used in the recently added\ninit_allowed_encryption_methods, and I think that should be changed to\nuse strcmp too for readability.\n\n\n", "msg_date": "Mon, 13 May 2024 15:55:48 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, 13 May 2024 at 13:07, Heikki Linnakangas <[email protected]> wrote:\n> \"channel_binding=require sslmode=require\" also protects from MITM attacks.\n\nCool, I didn't realize we had this connection option and it could be\nused like this. But I think there's a few security downsides of\nchannel_binding=require over sslmode=verify-full: If the client relies\non channel_binding to validate server authenticity, a leaked\nserver-side SCRAM hash is enough for an attacker to impersonate a\nserver. While normally a leaked scram hash isn't really much of a\nsecurity concern (assuming long enough passwords). I also don't know\nof many people rotating their scram hashes, even though many rotate\nTLS certs.\n\n> I think these options should be designed from the user's point of view,\n> so that the user can specify the risks they're willing to accept, and\n> the details of how that's accomplished are handled by libpq. 
For\n> example, I'm OK with (tick all that apply):\n>\n> [ ] passive eavesdroppers seeing all the traffic\n> [ ] MITM being able to hijack the session\n> [ ] connecting without verifying the server's identity\n> [ ] divulging the plaintext password to the server\n> [ ] ...\n\nI think that sounds like a great idea, looking forward to the proposal.\n\n\n", "msg_date": "Mon, 13 May 2024 16:19:36 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 13/05/2024 16:55, Jelte Fennema-Nio wrote:\n> On Mon, 13 May 2024 at 15:38, Heikki Linnakangas <[email protected]> wrote:\n>> Here's a patch to implement that.\n> \n> + if (conn->sslnegotiation[0] == 'd' &&\n> + conn->sslmode[0] != 'r' && conn->sslmode[0] != 'v')\n> \n> I think these checks should use strcmp instead of checking magic first\n> characters. I see this same clever trick is used in the recently added\n> init_allowed_encryption_methods, and I think that should be changed to\n> use strcmp too for readability.\n\nOh yeah, I hate that too. These should be refactored into enums, with a \nclear separate stage of parsing the options from strings. But we use \nthat pattern all over the place, so I didn't want to start reforming it \nwith this patch.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 13 May 2024 17:54:30 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, May 13, 2024 at 9:37 AM Heikki Linnakangas <[email protected]> wrote:\n> On 10/05/2024 16:50, Heikki Linnakangas wrote:\n> > New proposal:\n> >\n> > - Remove the \"try both\" mode completely, and rename \"requiredirect\" to\n> > just \"direct\". So there would be just two modes: \"postgres\" and\n> > \"direct\". On reflection, the automatic fallback mode doesn't seem very\n> > useful. It would make sense as the default, because then you would get\n> > the benefits automatically in most cases but still be compatible with\n> > old servers. But if it's not the default, you have to fiddle with libpq\n> > settings anyway to enable it, and then you might as well use the\n> > \"requiredirect\" mode when you know the server supports it. There isn't\n> > anything wrong with it as such, but given how much confusion there's\n> > been on how this all works, I'd prefer to cut this back to the bare\n> > minimum now. We can add it back in the future, and perhaps make it the\n> > default at the same time. This addresses points 2. and 3. above.\n> >\n> > and:\n> >\n> > - Only allow sslnegotiation=direct with sslmode=require or higher. This\n> > is what you, Jacob, wanted to do all along, and addresses point 1.\n>\n> Here's a patch to implement that.\n\nI find this idea to be a massive improvement over the status quo, and\nI didn't spot any major problems when I read through the patch,\neither. I'm not quite sure if the patch takes the right approach in\nemphasizing that weaker sslmode settings are not allowed because of\nunintended fallbacks. It seems to me that we could equally well say\nthat those combinations are nonsensical. If we're making a direct SSL\nconnection, SSL is eo ipso required.\n\nI don't have a strong opinion about whether sslnegotiation=direct\nshould error out (as you propose here) or silently promote sslmode to\nrequire. I think either is defensible. 
Had I been implementing it, I\nthink I would have done as Jacob proposes, just because once we've\nforced a direct SSL negotiation surely the only sensible behavior is\nto be using SSL, unless you think there should be a\nsilently-reconnect-without-SSL behavior, which I sure don't. However,\nI disagree with Jacob's assertion that sslmode=require has no security\nbenefits over sslmode=prefer. That seems like the kind of pessimism\nthat makes people hate security professionals. There have got to be\nsome attacks that are foreclosed by encrypting the connection, even if\nyou don't stop MITM attacks or other things that are more\nsophisticated than running wireshark and seeing what goes by on the\nwire.\n\nI'm pleased to hear that you will propose to make sslmode=require the\ndefault in v18. I think we'll need to do some work to figure out how\nmuch collateral damage that will cause, and maybe it will be more than\nwe can stomach, but Magnus has been saying for years that the current\ndefault is terrible. I'm not sure I was entirely convinced of that the\nfirst time I heard him say it, but I'm smarter now than I was then.\nIt's really hard to believe in 2024 that anyone should ever be using a\nsetting that may or may not encrypt the connection. There's http and\nhttps but there's no httpmaybes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 May 2024 12:13:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "(There's, uh, a lot to respond to above and I'm trying to figure out\nhow best to type up all of it.)\n\nOn Mon, May 13, 2024 at 9:13 AM Robert Haas <[email protected]> wrote:\n> However,\n> I disagree with Jacob's assertion that sslmode=require has no security\n> benefits over sslmode=prefer.\n\nFor the record, I didn't say that... You mean Jelte's quote up above?:\n\n> sslmode=prefer and sslmode=require\n> are the same amount of insecure imho (i.e. extremely insecure).\n\nI agree that requiring passive security is tangibly better than\nallowing fallback to plaintext. I think Jelte's point might be better\nstated as, =prefer and =require give the same amount of protection\nagainst active attack (none).\n\n--Jacob\n\n\n", "msg_date": "Mon, 13 May 2024 09:45:27 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, 13 May 2024 at 18:14, Robert Haas <[email protected]> wrote:\n> I disagree with Jacob's assertion that sslmode=require has no security\n> benefits over sslmode=prefer. That seems like the kind of pessimism\n> that makes people hate security professionals. There have got to be\n> some attacks that are foreclosed by encrypting the connection, even if\n> you don't stop MITM attacks or other things that are more\n> sophisticated than running wireshark and seeing what goes by on the\n> wire.\n\nLike Jacob already said, I guess you meant me here. The main point I\nwas trying to make is that sslmode=require is extremely insecure too,\nso if we're changing the default then I'd rather bite the bullet and\nactually make the default a secure one this time. 
No-ones browser\ntrusts self-signed certs by default, but currently lots of people\ntrust self-signed certs when connecting to their production database\nwithout realizing.\n\nIMHO the only benefit that sslmode=require brings over sslmode=prefer\nis detecting incorrectly configured servers i.e. servers that are\nsupposed to support ssl but somehow don't due to a misconfigured\nGUC/pg_hba. Such \"incorrectly configured server\" detection avoids\nsending data to such a server, which an eavesdropper on the network\ncould see. Which is definitely a security benefit, but it's an\nextremely small one. In all other cases sslmode=prefer brings exactly\nthe same protection as sslmode=require, because sslmode=prefer\nencrypts the connection unless postgres actively tells the client to\ndowngrade to plaintext (which never happens when the server is\nconfigured correctly).\n\n\n", "msg_date": "Mon, 13 May 2024 19:41:58 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "[soapbox thread, so I've changed the Subject]\n\nOn Mon, May 13, 2024 at 4:08 AM Heikki Linnakangas <[email protected]> wrote:\n> \"channel_binding=require sslmode=require\" also protects from MITM attacks.\n\nThis isn't true in the same way that \"standard\" TLS protects against\nMITM. I know you know that, but for the benefit of bystanders reading\nthe threads, I think we should stop phrasing it like this. Most people\nwho want MITM protection need to be using verify-full.\n\nDetails for those bystanders: Channel binding alone will only\ndisconnect you after the MITM is discovered, after your startup packet\nis leaked but before you send any queries to the server. A hash of\nyour password will also be leaked in that situation, which starts the\ntimer on an offline attack. And IIRC, you won't get an alert that says\n\"someone's in the middle\"; it'll just look like you mistyped your\npassword.\n\n(Stronger passwords provide stronger protection in this situation,\nwhich is not a property that most people are used to. If I choose to\nsign into Google with the password \"hunter2\", it doesn't somehow make\nthe TLS protection weaker. But if you rely on SCRAM by itself for\nserver authentication, it does more or less work like that.)\n\nUse channel_binding *in addition to* sslmode=verify-full if you want\nenhanced authentication of the peer, as suggested in the docs [1].\nDon't rely on channel binding alone for the vast majority of use\ncases, and if you know better for your particular use case, then you\nalready know enough to be able to ignore my advice.\n\n[/soapbox]\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/docs/current/preventing-server-spoofing.html\n\n\n", "msg_date": "Mon, 13 May 2024 11:09:50 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "On the use of channel binding without server certificates (was:\n Direct SSL connection with ALPN and HBA rules)" }, { "msg_contents": "On Mon, May 13, 2024 at 12:45 PM Jacob Champion\n<[email protected]> wrote:\n> For the record, I didn't say that... You mean Jelte's quote up above?\n\nYeah, sorry, I got my J-named hackers confused. 
Apologies.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 May 2024 15:12:48 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, May 13, 2024 at 1:42 PM Jelte Fennema-Nio <[email protected]> wrote:\n> Like Jacob already said, I guess you meant me here. The main point I\n> was trying to make is that sslmode=require is extremely insecure too,\n> so if we're changing the default then I'd rather bite the bullet and\n> actually make the default a secure one this time. No-ones browser\n> trusts self-signed certs by default, but currently lots of people\n> trust self-signed certs when connecting to their production database\n> without realizing.\n>\n> IMHO the only benefit that sslmode=require brings over sslmode=prefer\n> is detecting incorrectly configured servers i.e. servers that are\n> supposed to support ssl but somehow don't due to a misconfigured\n> GUC/pg_hba. Such \"incorrectly configured server\" detection avoids\n> sending data to such a server, which an eavesdropper on the network\n> could see. Which is definitely a security benefit, but it's an\n> extremely small one. In all other cases sslmode=prefer brings exactly\n> the same protection as sslmode=require, because sslmode=prefer\n> encrypts the connection unless postgres actively tells the client to\n> downgrade to plaintext (which never happens when the server is\n> configured correctly).\n\nI think I agree with *nearly* every word of this. However:\n\n(1) I don't want to hijack this thread about a v17 open item to talk\ntoo much about a hypothetical v18 proposal.\n\n(2) While in general you need more than just SSL to ensure security,\nI'm not sure that there's only one way to do it, and I doubt that we\nshould try to pick a winner.\n\n(3) I suspect that even going as far as sslmode=require by default is\ngoing to be extremely painful for hackers, the project, and users.\nMoving the goalposts further increases the likelihood of nothing\nhappening at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 May 2024 15:24:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, May 13, 2024 at 9:13 AM Robert Haas <[email protected]> wrote:\n> I find this idea to be a massive improvement over the status quo,\n\n+1\n\n> and\n> I didn't spot any major problems when I read through the patch,\n> either.\n\nDefinitely not a major problem, but I think\nselect_next_encryption_method() has gone stale, since it originally\nprovided generality and lines of fallback that no longer exist. In\nother words, I think the following code is now misleading:\n\n> if (conn->sslmode[0] == 'a')\n> SELECT_NEXT_METHOD(ENC_PLAINTEXT);\n>\n> SELECT_NEXT_METHOD(ENC_NEGOTIATED_SSL);\n> SELECT_NEXT_METHOD(ENC_DIRECT_SSL);\n>\n> if (conn->sslmode[0] != 'a')\n> SELECT_NEXT_METHOD(ENC_PLAINTEXT);\n\nTo me, that implies that negotiated mode takes precedence over direct,\nbut the point of the patch is that it's not possible to have both. And\nif direct SSL is in use, then sslmode can't be \"allow\" anyway, and we\ndefinitely don't want ENC_PLAINTEXT.\n\nSo if someone proposes a change to select_next_encryption_method(),\nyou'll have to remember to stare at init_allowed_encryption_methods()\nas well, and think really hard about what's going on. 
And vice-versa.\nThat worries me.\n\n> I don't have a strong opinion about whether sslnegotiation=direct\n> should error out (as you propose here) or silently promote sslmode to\n> require. I think either is defensible.\n\nI'm comforted that, since sslrootcert=system already does it, plenty\nof use cases will get that for free. And if you decide in the future\nthat you really really want it to promote, it won't be a compatibility\nbreak to make that change. (That gives us more time for wider v16-17\nadoption, to see how the sslrootcert=system magic promotion behavior\nis going in practice.)\n\n> Had I been implementing it, I\n> think I would have done as Jacob proposes, just because once we've\n> forced a direct SSL negotiation surely the only sensible behavior is\n> to be using SSL, unless you think there should be a\n> silently-reconnect-without-SSL behavior, which I sure don't.\n\nWe still allow GSS to preempt SSL, though, so \"forced\" is probably\noverstating things.\n\n> It's really hard to believe in 2024 that anyone should ever be using a\n> setting that may or may not encrypt the connection. There's http and\n> https but there's no httpmaybes.\n\n+1. I think (someone hop in and correct me please) that Opportunistic\nEncryption for HTTP mostly fizzled, and they gave it a *lot* of\nthought.\n\n--Jacob\n\n\n", "msg_date": "Mon, 13 May 2024 15:29:17 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "[this should probably belong to a different thread, but I'm not sure\nwhat to title it]\n\nOn Mon, May 13, 2024 at 4:08 AM Heikki Linnakangas <[email protected]> wrote:\n> I think these options should be designed from the user's point of view,\n> so that the user can specify the risks they're willing to accept, and\n> the details of how that's accomplished are handled by libpq. For\n> example, I'm OK with (tick all that apply):\n>\n> [ ] passive eavesdroppers seeing all the traffic\n> [ ] MITM being able to hijack the session\n> [ ] connecting without verifying the server's identity\n> [ ] divulging the plaintext password to the server\n> [ ] ...\n\nI'm pessimistic about a quality-of-protection scheme for this use case\n(*). I don't think users need more knobs in their connection strings,\nand many of the goals of transport encryption are really not\nindependent from each other in practice. As evidence I point to the\nabsolute mess of GSSAPI wrapping, which lets you check separate boxes\nfor things like \"require the server to authenticate itself\" and\n\"require integrity\" and \"allow MITMs to reorder messages\" and so on,\nas if the whole idea of \"integrity\" is useful if you don't know who\nyou're talking to in the first place. I think I recall slapd having\nsomething similarly arcane (but at least it doesn't make the clients\ndo it!). Those kinds of APIs don't evolve well, in my opinion.\n\nI think most people probably just want browser-grade security as\nquickly and cheaply as possible, and we don't make that very easy\ntoday. I'll try to review a QoP scheme if someone works on it, don't\nget me wrong, but I'd much rather spend time on a \"very secure by\ndefault\" mode that gets rid of most of the options (i.e. a\npostgresqls:// scheme).\n\n(*) I've proposed quality-of-protection in the past, for Postgres\nproxy authentication [1]. 
But I'm comfortable in my hypocrisy, because\nin that case, the end user doing the configuration is a DBA with a\nconfig file who is expected to understand the whole system, and it's a\nniche use case (IMO) without an obvious \"common setup\". And frankly I\nthink my proposal is unlikely to go anywhere; the cost/benefit\nprobably isn't good enough.\n\n> If you need to verify\n> the server's identity, it implies sslmode=verify-CA or\n> channel_binding=true.\n\nNeither of those two options provides strong authentication of the\npeer, and personally I wouldn't want them to be considered worthy of\n\"verify the server's identity\" mode.\n\nAnd -- taking a wild leap here -- if we disagree, then granularity\nbecomes a problem: either the QoP scheme now has to have\nsub-checkboxes for \"if the server knows my password, that's good\nenough\" and \"it's fine if the server's hostname doesn't match the\ncert, for some reason\", or it smashes all of those different ideas\ninto one setting and then I have to settle for the weakest common\ndenominator during an active attack. Assuming I haven't missed a third\noption, will that be easier/better than the status quo of\nrequire_auth+sslmode?\n\n> A different line of thought is that to keep the attack surface as smal\n> as possible, you should specify very explicitly what exactly you expect\n> to happen in the authentication, and disallow any variance. For example,\n> you expect SCRAM to be used, with a certificate signed by a particular\n> CA, and if the server requests anything else, that's suspicious and the\n> connection is aborted. We should make that possible too\n\nThat's 'require_auth=scram-sha-256 sslmode=verify-ca', no?\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/0768cedb-695a-8841-5f8b-da2aa64c8f3a%40timescale.com\n\n\n", "msg_date": "Mon, 13 May 2024 17:05:41 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Mon, Apr 29, 2024 at 11:04 AM Jacob Champion\n<[email protected]> wrote:\n> On Fri, Apr 26, 2024 at 3:51 PM Heikki Linnakangas <[email protected]> wrote:\n> > Unfortunately the error message you got in the client with that was\n> > horrible (I modified the server to not accept the 'postgresql' protocol):\n> >\n> > psql \"dbname=postgres sslmode=require host=localhost\"\n> > psql: error: connection to server at \"localhost\" (::1), port 5432\n> > failed: SSL error: SSL error code 167773280\n>\n> <long sigh>\n>\n> I filed a bug upstream [1].\n\nI think this is on track to be fixed in a future set of OpenSSL 3.x\nreleases [2]. We'll still need to carry the workaround while we\nsupport 1.1.1.\n\n--Jacob\n\n[2] https://github.com/openssl/openssl/pull/24351\n\n\n", "msg_date": "Tue, 14 May 2024 10:14:38 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On 14/05/2024 01:29, Jacob Champion wrote:\n> Definitely not a major problem, but I think\n> select_next_encryption_method() has gone stale, since it originally\n> provided generality and lines of fallback that no longer exist. 
In\n> other words, I think the following code is now misleading:\n> \n>> if (conn->sslmode[0] == 'a')\n>> SELECT_NEXT_METHOD(ENC_PLAINTEXT);\n>>\n>> SELECT_NEXT_METHOD(ENC_NEGOTIATED_SSL);\n>> SELECT_NEXT_METHOD(ENC_DIRECT_SSL);\n>>\n>> if (conn->sslmode[0] != 'a')\n>> SELECT_NEXT_METHOD(ENC_PLAINTEXT);\n> \n> To me, that implies that negotiated mode takes precedence over direct,\n> but the point of the patch is that it's not possible to have both. And\n> if direct SSL is in use, then sslmode can't be \"allow\" anyway, and we\n> definitely don't want ENC_PLAINTEXT.\n> \n> So if someone proposes a change to select_next_encryption_method(),\n> you'll have to remember to stare at init_allowed_encryption_methods()\n> as well, and think really hard about what's going on. And vice-versa.\n> That worries me.\n\nOk, yeah, I can see that now. Here's a new version to address that. I \nmerged ENC_SSL_NEGOTIATED_SSL and ENC_SSL_DIRECT_SSL to a single method, \nENC_SSL. The places that need to distinguish between them now check \nconn-sslnegotiation. That seems more clear now that there is no fallback.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Wed, 15 May 2024 16:33:33 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Wed, May 15, 2024 at 6:33 AM Heikki Linnakangas <[email protected]> wrote:\n> Ok, yeah, I can see that now. Here's a new version to address that. I\n> merged ENC_SSL_NEGOTIATED_SSL and ENC_SSL_DIRECT_SSL to a single method,\n> ENC_SSL. The places that need to distinguish between them now check\n> conn-sslnegotiation. That seems more clear now that there is no fallback.\n\nThat change and the new comment that were added seem a lot clearer to\nme, too; +1. And I like that this potentially preps for\nencryption=gss/ssl/none or similar.\n\nThis assertion seems a little strange to me:\n\n> if (conn->sslnegotiation[0] == 'p')\n> {\n> ProtocolVersion pv;\n>\n> Assert(conn->sslnegotiation[0] == 'p');\n\nBut other than that nitpick, nothing else jumps out at me at the moment.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 15 May 2024 11:24:00 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Wed, May 15, 2024 at 9:33 AM Heikki Linnakangas <[email protected]> wrote:\n> Ok, yeah, I can see that now. Here's a new version to address that. I\n> merged ENC_SSL_NEGOTIATED_SSL and ENC_SSL_DIRECT_SSL to a single method,\n> ENC_SSL. The places that need to distinguish between them now check\n> conn-sslnegotiation. That seems more clear now that there is no fallback.\n\nUnless there is a compelling reason to do otherwise, we should\nexpedite getting this committed so that it is included in beta1.\nRelease freeze begins Saturday.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 May 2024 09:53:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "> On 16 May 2024, at 15:54, Robert Haas <[email protected]> wrote:\n> \n> On Wed, May 15, 2024 at 9:33 AM Heikki Linnakangas <[email protected]> wrote:\n>> Ok, yeah, I can see that now. Here's a new version to address that. I\n>> merged ENC_SSL_NEGOTIATED_SSL and ENC_SSL_DIRECT_SSL to a single method,\n>> ENC_SSL. 
The places that need to distinguish between them now check\n>> conn-sslnegotiation. That seems more clear now that there is no fallback.\n> \n> Unless there is a compelling reason to do otherwise, we should\n> expedite getting this committed so that it is included in beta1.\n> Release freeze begins Saturday.\n\n+1. Having reread the thread and patch I think we should go for this one.\n\n./daniel\n\n", "msg_date": "Thu, 16 May 2024 16:08:14 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules " }, { "msg_contents": "On 16/05/2024 17:08, Daniel Gustafsson wrote:\n>> On 16 May 2024, at 15:54, Robert Haas <[email protected]> wrote:\n>>\n>> On Wed, May 15, 2024 at 9:33 AM Heikki Linnakangas <[email protected]> wrote:\n>>> Ok, yeah, I can see that now. Here's a new version to address that. I\n>>> merged ENC_SSL_NEGOTIATED_SSL and ENC_SSL_DIRECT_SSL to a single method,\n>>> ENC_SSL. The places that need to distinguish between them now check\n>>> conn-sslnegotiation. That seems more clear now that there is no fallback.\n>>\n>> Unless there is a compelling reason to do otherwise, we should\n>> expedite getting this committed so that it is included in beta1.\n>> Release freeze begins Saturday.\n> \n> +1. Having reread the thread and patch I think we should go for this one.\n\nYep, committed. Thanks everyone!\n\nOn 15/05/2024 21:24, Jacob Champion wrote:\n> This assertion seems a little strange to me:\n> \n>> if (conn->sslnegotiation[0] == 'p')\n>> {\n>> ProtocolVersion pv;\n>>\n>> Assert(conn->sslnegotiation[0] == 'p');\n> \n> But other than that nitpick, nothing else jumps out at me at the moment.\n\nFixed that. It was a leftover, I had the if-else conditions the other \nway round at one point during development.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 16 May 2024 17:23:54 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" }, { "msg_contents": "On Thu, May 16, 2024 at 10:23 AM Heikki Linnakangas <[email protected]> wrote:\n> Yep, committed. Thanks everyone!\n\nThanks!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 May 2024 10:55:15 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection with ALPN and HBA rules" } ]
[ { "msg_contents": "Hi, I have a use case where I want a particular database to not load a few\nmodules in shared_preload_libraries.\nI was wondering if there's any way to tweak the codebase to achieve this.\nOtherwise, can I block the modules' hooks/bgw from performing actions on my\nparticular database?\n\n-- \nRegards\nRajan Pandey\n\nHi, I have a use case where I want a particular database to not load a few modules in shared_preload_libraries.I was wondering if there's any way to tweak the codebase to achieve this. Otherwise, can I block the modules' hooks/bgw from performing actions on my particular database?-- RegardsRajan Pandey", "msg_date": "Fri, 19 Apr 2024 18:11:04 +0530", "msg_from": "Rajan Pandey <[email protected]>", "msg_from_op": true, "msg_subject": "Possible to exclude a database from loading a\n shared_preload_libraries\n module?" }, { "msg_contents": "Rajan Pandey <[email protected]> writes:\n> Hi, I have a use case where I want a particular database to not load a few\n> modules in shared_preload_libraries.\n> I was wondering if there's any way to tweak the codebase to achieve this.\n\nNo. It's not even theoretically possible, because the whole point of\nshared_preload_libraries is that the module gets loaded at postmaster\nstart.\n\nDepending on what the module does, it might work to load it\nper-session with session_preload_libraries, which you could attach\nto the specific database(s) where you want it to be effective.\n\n> Otherwise, can I block the modules' hooks/bgw from performing actions on my\n> particular database?\n\nThat would be a feature for the individual module to implement.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Apr 2024 10:10:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to exclude a database from loading a\n shared_preload_libraries module?" } ]
[ { "msg_contents": "Hi, hackers\n\nI see [1] has already implemented on login event trigger, why not implement\nthe logoff event trigger?\n\nMy friend Song Jinzhou and I try to implement the logoff event trigger, so\nattach it.\n\nHere is a problem with the regression test when using \\c to create a new\nsession, because it might be running concurrently, which may lead to the\nchecking being unstable.\n\nAny thoughts?\n\n[1] https://www.postgresql.org/message-id/0d46d29f-4558-3af9-9c85-7774e14a7709%40postgrespro.ru\n\n--\nRegards,\nJapin Li", "msg_date": "Fri, 19 Apr 2024 23:46:10 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Support event trigger for logoff" }, { "msg_contents": "Japin Li <[email protected]> writes:\n> I see [1] has already implemented on login event trigger, why not implement\n> the logoff event trigger?\n\nWhat happens if the session crashes, or otherwise fails unexpectedly?\n\n> Here is a problem with the regression test when using \\c to create a new\n> session, because it might be running concurrently, which may lead to the\n> checking being unstable.\n\n> Any thoughts?\n\nLet's not go there at all. I don't think there's enough field\ndemand to justify dealing with this concept.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Apr 2024 13:36:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support event trigger for logoff" }, { "msg_contents": "\nOn Sat, 20 Apr 2024 at 01:36, Tom Lane <[email protected]> wrote:\n> Japin Li <[email protected]> writes:\n>> I see [1] has already implemented on login event trigger, why not implement\n>> the logoff event trigger?\n>\n> What happens if the session crashes, or otherwise fails unexpectedly?\n>\n\nI am currently unsure how to handle such situations, but I believe it is not\nonly for logoff events.\n\n>> Here is a problem with the regression test when using \\c to create a new\n>> session, because it might be running concurrently, which may lead to the\n>> checking being unstable.\n>\n>> Any thoughts?\n>\n> Let's not go there at all. I don't think there's enough field\n> demand to justify dealing with this concept.\n>\n\nThanks for your point out this for me.\n\n--\nRegards,\nJapin Li\n\n\n", "msg_date": "Mon, 22 Apr 2024 21:24:29 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support event trigger for logoff" } ]
[ { "msg_contents": "I just did a run of the regression test where this test was the last\none to finish by quite a lot. Key log entries:\n\n[13:35:48.583](0.039s) # initializing database system by copying initdb template\n...\n[13:35:52.397](0.108s) ok 5 - Check reset timestamp for\n'test_tab1_sub' is newer after second reset.\n\n#### Begin standard error\n\npsql:<stdin>:1: NOTICE: created replication slot \"test_tab2_sub\" on publisher\n\n#### End standard error\n\nWaiting for replication conn test_tab2_sub's replay_lsn to pass\n0/151E8C8 on publisher\n\ndone\n\n[13:38:53.706](181.310s) ok 6 - Check that table 'test_tab2' now has 1 row.\n...\n[13:38:54.344](0.294s) 1..13\n\nI reran the test and it looks very different:\n\n[13:54:01.703](0.090s) ok 5 - Check reset timestamp for\n'test_tab1_sub' is newer after second reset.\n...\nWaiting for replication conn test_tab2_sub's replay_lsn to pass\n0/151E900 on publisher\n...\n[13:54:03.006](1.303s) ok 6 - Check that table 'test_tab2' now has 1 row.\n\nIt looks to me like in the first run it took 3 minutes for the\nreplay_lsn to catch up to the desired value, and in the second run,\ntwo seconds. I think I have seen previous instances where something\nsimilar happened, although in those cases I did not stop to record any\ndetails. Have others seen this? Is there something we can/should do\nabout it?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Apr 2024 13:57:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "subscription/026_stats test is intermittently slow?" }, { "msg_contents": "On Fri, Apr 19, 2024 at 01:57:41PM -0400, Robert Haas wrote:\n> It looks to me like in the first run it took 3 minutes for the\n> replay_lsn to catch up to the desired value, and in the second run,\n> two seconds. I think I have seen previous instances where something\n> similar happened, although in those cases I did not stop to record any\n> details. Have others seen this? Is there something we can/should do\n> about it?\n\nFWIW, I've also seen delays as well with this test on a few occasions.\nThanks for mentioning it.\n--\nMichael", "msg_date": "Sat, 20 Apr 2024 11:57:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subscription/026_stats test is intermittently slow?" }, { "msg_contents": "Hello Michael and Robert,\n\n20.04.2024 05:57, Michael Paquier wrote:\n> On Fri, Apr 19, 2024 at 01:57:41PM -0400, Robert Haas wrote:\n>> It looks to me like in the first run it took 3 minutes for the\n>> replay_lsn to catch up to the desired value, and in the second run,\n>> two seconds. I think I have seen previous instances where something\n>> similar happened, although in those cases I did not stop to record any\n>> details. Have others seen this? Is there something we can/should do\n>> about it?\n> FWIW, I've also seen delays as well with this test on a few occasions.\n> Thanks for mentioning it.\n\nIt reminds me of\nhttps://www.postgresql.org/message-id/858a7622-2c81-1687-d1df-1322dfcb2e72%40gmail.com\n\nAt least, I could reproduce such a delay with the attached patch applied.\n\nBest regards,\nAlexander", "msg_date": "Sat, 20 Apr 2024 08:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subscription/026_stats test is intermittently slow?" 
}, { "msg_contents": "On Sat, 20 Apr 2024 at 10:30, Alexander Lakhin <[email protected]> wrote:\n>\n> Hello Michael and Robert,\n>\n> 20.04.2024 05:57, Michael Paquier wrote:\n> > On Fri, Apr 19, 2024 at 01:57:41PM -0400, Robert Haas wrote:\n> >> It looks to me like in the first run it took 3 minutes for the\n> >> replay_lsn to catch up to the desired value, and in the second run,\n> >> two seconds. I think I have seen previous instances where something\n> >> similar happened, although in those cases I did not stop to record any\n> >> details. Have others seen this? Is there something we can/should do\n> >> about it?\n> > FWIW, I've also seen delays as well with this test on a few occasions.\n> > Thanks for mentioning it.\n>\n> It reminds me of\n> https://www.postgresql.org/message-id/858a7622-2c81-1687-d1df-1322dfcb2e72%40gmail.com\n\nThanks Alexander for the test, I was able to reproduce the issue with\nthe test you shared and also verify that the patch at [1] fixes the\nsame. This is the same issue where the apply worker for test_tab2_sub\nwas getting started after 180 seconds because the common latch (which\nis used for worker attached, subscription creation/modification and\napply worker process exit) was getting reset when the other\nsubscription test_tab1_sub's worker gets started. The same can be seen\nfrom the logs:\n2024-04-22 20:47:52.009 IST [323280] 026_stats.pl LOG: statement: BEGIN;\n2024-04-22 20:47:52.009 IST [323280] 026_stats.pl LOG: statement:\nSELECT pg_sleep(0.5);\n2024-04-22 20:47:52.426 IST [323281] LOG: logical replication apply\nworker for subscription \"test_tab1_sub\" has started\n2024-04-22 20:47:52.511 IST [323280] 026_stats.pl LOG: statement:\nCREATE TABLE test_tab2(a int primary key);\n2024-04-22 20:47:52.518 IST [323280] 026_stats.pl LOG: statement:\nINSERT INTO test_tab2 VALUES (1);\n2024-04-22 20:47:52.519 IST [323280] 026_stats.pl LOG: statement: COMMIT;\n2024-04-22 20:47:52.540 IST [323286] 026_stats.pl LOG: statement:\nCREATE SUBSCRIPTION test_tab2_sub CONNECTION 'port=56685\nhost=/tmp/RwzpQrVMYH dbname=postgres' PUBLICATION test_tab2_pub\n2024-04-22 20:50:52.658 IST [326265] LOG: logical replication apply\nworker for subscription \"test_tab2_sub\" has started\n2024-04-22 20:50:52.668 IST [326267] LOG: logical replication table\nsynchronization worker for subscription \"test_tab2_sub\", table\n\"test_tab2\" has started\n\n[1] - https://www.postgresql.org/message-id/CALDaNm10R7L0Dxq%2B-J%3DPp3AfM_yaokpbhECvJ69QiGH8-jQquw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 22 Apr 2024 21:02:10 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subscription/026_stats test is intermittently slow?" }, { "msg_contents": "Hi,\n\nOn 2024-04-19 13:57:41 -0400, Robert Haas wrote:\n> Have others seen this? Is there something we can/should do about it?\n\nYes, I've also seen this - but never quite reproducible enough to properly\ntackle it.\n\nThe first thing I'd like to do is to make the wait_for_catchup routine\nregularly log the current state, so we can in retrospect analyze e.g. whether\nthere was continual, but slow, replay progress, or whether replay was entirely\nstuck. 
wait_for_catchup() not being debuggable has been a problem in many\ndifferent tests, so I think it's high time to fix that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Apr 2024 10:55:08 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subscription/026_stats test is intermittently slow?" }, { "msg_contents": "On Tue, Apr 23, 2024 at 11:49 AM vignesh C <[email protected]> wrote:\n>\n> On Sat, 20 Apr 2024 at 10:30, Alexander Lakhin <[email protected]> wrote:\n> >\n> > Hello Michael and Robert,\n> >\n> > 20.04.2024 05:57, Michael Paquier wrote:\n> > > On Fri, Apr 19, 2024 at 01:57:41PM -0400, Robert Haas wrote:\n> > >> It looks to me like in the first run it took 3 minutes for the\n> > >> replay_lsn to catch up to the desired value, and in the second run,\n> > >> two seconds. I think I have seen previous instances where something\n> > >> similar happened, although in those cases I did not stop to record any\n> > >> details. Have others seen this? Is there something we can/should do\n> > >> about it?\n> > > FWIW, I've also seen delays as well with this test on a few occasions.\n> > > Thanks for mentioning it.\n> >\n> > It reminds me of\n> > https://www.postgresql.org/message-id/858a7622-2c81-1687-d1df-1322dfcb2e72%40gmail.com\n>\n> Thanks Alexander for the test, I was able to reproduce the issue with\n> the test you shared and also verify that the patch at [1] fixes the\n> same.\n>\n\nOne of the issues reported in the thread you referred to has the same\nsymptoms [1]. I'll review and analyze your proposal.\n\n[1] - https://www.postgresql.org/message-id/858a7622-2c81-1687-d1df-1322dfcb2e72%40gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 23 Apr 2024 17:40:10 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subscription/026_stats test is intermittently slow?" } ]
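When this symptom shows up outside the TAP framework, the state that wait_for_catchup polls can be inspected by hand; the queries below are a minimal sketch of that, with the subscription name taken from the test above. They only read standard monitoring views, so nothing here depends on the logging change proposed above.

    -- On the publisher: how far the walsender serving the subscription has progressed.
    SELECT application_name, state, sent_lsn, write_lsn, flush_lsn, replay_lsn,
           pg_current_wal_lsn() AS publisher_lsn
    FROM pg_stat_replication
    WHERE application_name = 'test_tab2_sub';

    -- On the subscriber: whether apply/tablesync workers exist and what they have received.
    SELECT subname, pid, received_lsn, latest_end_lsn, last_msg_receipt_time
    FROM pg_stat_subscription
    WHERE subname = 'test_tab2_sub';

If replay_lsn advances but slowly, apply itself is the bottleneck; if there is no pg_stat_subscription row for the subscription at all, its workers have not been launched yet, which matches the 180 second worker-start delay visible in the log excerpt above.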
[ { "msg_contents": "Hi hackers,\n\nI noticed that the header file resowner_private.h is deprecated and no\nlonger useful after commit b8bff07[^1]. We should remove it.\n\n[^1]: https://github.com/postgres/postgres/commit/b8bff07daa85c837a2747b4d35cd5a27e73fb7b2\n\nBest Regards,\nXing", "msg_date": "Sat, 20 Apr 2024 10:07:45 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Remove deprecated header file resowner_private.h." }, { "msg_contents": "On Sat, Apr 20, 2024 at 10:07:45AM +0800, Xing Guo wrote:\n> I noticed that the header file resowner_private.h is deprecated and no\n> longer useful after commit b8bff07[^1]. We should remove it.\n\nNice catch, looks like a `git rm` has been slippery here . It is\nindeed confusing to keep it around now that all these routines are\nmostly internal or have been switched to static inline that work as\nwrappers of some other resowner routines.\n\nWill clean up.\n--\nMichael", "msg_date": "Sat, 20 Apr 2024 11:54:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove deprecated header file resowner_private.h." }, { "msg_contents": "On Sat, Apr 20, 2024 at 11:54:29AM +0900, Michael Paquier wrote:\n> Will clean up.\n\nWhile looking at the whole, I've noticed that this has been mentioned\nhere by Andres, but the committed patch did not get the call:\nhttps://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Sat, 20 Apr 2024 18:03:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove deprecated header file resowner_private.h." }, { "msg_contents": "On Sat, Apr 20, 2024 at 5:03 PM Michael Paquier <[email protected]> wrote:\n>\n> On Sat, Apr 20, 2024 at 11:54:29AM +0900, Michael Paquier wrote:\n> > Will clean up.\n>\n> While looking at the whole, I've noticed that this has been mentioned\n> here by Andres, but the committed patch did not get the call:\n> https://www.postgresql.org/message-id/[email protected]\n\nYou're right. I didn't go through the original thread carefully.\nThanks for the fix.\n\n> --\n> Michael\n\n\n", "msg_date": "Sat, 20 Apr 2024 18:21:38 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove deprecated header file resowner_private.h." } ]
[ { "msg_contents": "Hello hackers,\n\nWhen playing with JSON_TABLE, I tried to replace tenk1 in regression tests\nwith a view based on JSON_TABLE, with the same content, and discovered\nthat for one sub-optimal query it's execution duration increased many-fold.\nWith the preparation script attached, I see the following durations\n(for a build compiled by clang 18.1.3 with -O3):\nexplain (verbose, analyze)\nselect\n   (select max((select i.unique2 from tenk1 i where i.unique1 = o.unique1)))\nfrom tenk1 o;\n-- original tenk1\n  Execution Time: 4769.481 ms\n\nexplain (verbose, analyze)\nselect\n   (select max((select i.unique2 from jsonb_rs_tenk1 i where i.unique1 = o.unique1)))\nfrom jsonb_rs_tenk1 o;\n-- Function Call: jsonb_to_recordset...\n  Execution Time: 6841.767 ms\n\nexplain (verbose, analyze)\nselect\n   (select max((select i.unique2 from jsontable_tenk1 i where i.unique1 = o.unique1)))\nfrom jsontable_tenk1 o;\n-- Table Function Call: JSON_TABLE...\n  Execution Time: 288310.131 ms\n(with 63% of time spent inside ExecEvalJsonExprPath())\n\nJust for fun I've tested also XMLTABLE with the similar content:\nexplain (verbose, analyze)\nselect\n   (select max((select i.unique2 from xml_tenk1 i where i.unique1 = o.unique1)))\nfrom xml_tenk1 o;\n-- Table Function Call: XMLTABLE...\n  Execution Time: 1235066.636 ms\n\nMaybe it's worth to add a note to the JSON_TABLE() documentation saying that\njsonb_to_recordset is (inherently?) more performant when processing arrays\nof flat structures for users not to re-discover this fact...\n\nBest regards,\nAlexander", "msg_date": "Sat, 20 Apr 2024 14:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of JSON_TABLE vs jsonb_to_recordset" }, { "msg_contents": "Alexander Lakhin <[email protected]> writes:\n> When playing with JSON_TABLE, I tried to replace tenk1 in regression tests\n> with a view based on JSON_TABLE, with the same content, and discovered\n> that for one sub-optimal query it's execution duration increased many-fold.\n> With the preparation script attached, I see the following durations\n> (for a build compiled by clang 18.1.3 with -O3):\n> explain (verbose, analyze)\n> select\n>   (select max((select i.unique2 from tenk1 i where i.unique1 = o.unique1)))\n> from tenk1 o;\n> -- original tenk1\n>  Execution Time: 4769.481 ms\n\nHm, I get about 13 ms for that example. Do you have some really\nexpensive debugging infrastructure enabled, perhaps?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Apr 2024 10:47:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of JSON_TABLE vs jsonb_to_recordset" }, { "msg_contents": "I wrote:\n> Alexander Lakhin <[email protected]> writes:\n>> explain (verbose, analyze)\n>> select\n>>   (select max((select i.unique2 from tenk1 i where i.unique1 = o.unique1)))\n>> from tenk1 o;\n>> -- original tenk1\n>>  Execution Time: 4769.481 ms\n\n> Hm, I get about 13 ms for that example. Do you have some really\n> expensive debugging infrastructure enabled, perhaps?\n\nOh, never mind, now I see you are testing a version of the table\nwith no indexes, rather than the way it's set up in the regression\ndatabase. 
Apologies for the noise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Apr 2024 10:58:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of JSON_TABLE vs jsonb_to_recordset" }, { "msg_contents": "Alexander Lakhin <[email protected]> writes:\n> explain (verbose, analyze)\n> select\n> (select max((select i.unique2 from jsontable_tenk1 i where i.unique1 = o.unique1)))\n> from jsontable_tenk1 o;\n> -- Table Function Call: JSON_TABLE...\n> Execution Time: 288310.131 ms\n> (with 63% of time spent inside ExecEvalJsonExprPath())\n\nYeah, I looked at this with perf too, and what I'm seeing is\n\n - 55.87% ExecEvalJsonExprPath\n - 39.30% JsonPathValue\n - 37.63% executeJsonPath\n - 34.87% executeItem (inlined)\n - executeItemOptUnwrapTarget\n - 32.39% executeNextItem\n - 31.02% executeItem (inlined)\n - 30.90% executeItemOptUnwrapTarget\n - 26.81% getKeyJsonValueFromContainer\n 14.35% getJsonbOffset (inlined)\n - 4.90% lengthCompareJsonbString (inlined)\n 3.19% __memcmp_avx2_movbe\n - 2.32% palloc\n 1.67% AllocSetAlloc\n 0.93% fillJsonbValue\n 1.18% executeNextItem\n 0.51% findJsonbValueFromContainer\n - 1.04% jspGetNext\n 0.72% jspInitByBuffer\n - 1.46% check_stack_depth\n stack_is_too_deep (inlined)\n 0.61% jspInitByBuffer\n - 9.82% ExecGetJsonValueItemString (inlined)\n - 8.68% DirectFunctionCall1Coll\n - 8.07% numeric_out\n - 6.15% get_str_from_var\n - 2.07% palloc\n - 1.80% AllocSetAlloc\n 0.72% AllocSetAllocChunkFromBlock (inlined)\n 1.28% init_var_from_num\n - 1.61% namein\n 0.90% __strlen_avx2\n 0.52% palloc0\n - 0.74% int4in\n 0.69% pg_strtoint32_safe\n\nDepressingly small amount of useful work being done there compared\nto the management overhead. Seems like some micro-optimization\nin this area could be a useful project for v18.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Apr 2024 13:12:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of JSON_TABLE vs jsonb_to_recordset" } ]
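The preparation script is only attached, not quoted, so here is a minimal sketch of how the two flavours of view being compared could be built over the same document; the view names follow the EXPLAIN output above, but the single-row tenk1_doc table, its doc column, and the reduced column list are assumptions made for illustration.

    -- Assumed setup: one row holding the whole data set as a jsonb array.
    CREATE TABLE tenk1_doc AS
        SELECT jsonb_agg(to_jsonb(t)) AS doc FROM tenk1 t;

    -- Shows up in EXPLAIN as "Function Call: jsonb_to_recordset".
    CREATE VIEW jsonb_rs_tenk1 AS
        SELECT r.*
        FROM tenk1_doc,
             jsonb_to_recordset(doc) AS r(unique1 int, unique2 int);

    -- Shows up in EXPLAIN as "Table Function Call: JSON_TABLE".
    CREATE VIEW jsontable_tenk1 AS
        SELECT jt.*
        FROM tenk1_doc,
             JSON_TABLE(doc, '$[*]' COLUMNS (unique1 int PATH '$.unique1',
                                             unique2 int PATH '$.unique2')) AS jt;

Both views expand the same array into rows, so the large gap in the timings above comes from the per-row evaluation cost of the JSON_TABLE path (ExecEvalJsonExprPath in the profile), not from reading different data.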
[ { "msg_contents": "Hi,\n\nWhile doing some testing with createdb, I noticed it only accepts\nfile_copy/wal_log as valid strategies, not FILE_COPY/WAL_LOG (which is\nwhat the docs say). The same thing applies to CREATE DATABASE.\n\nThe problem is that createdb() does the check using strcmp() which is\ncase-sensitive. IMHO this should do pg_strcasecmp() which is what we do\nfor other string parameters nearby.\n\nPatch attached. This should be backpatched to 15, I think.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 20 Apr 2024 22:03:12 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "createdb compares strategy as case-sensitive" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> While doing some testing with createdb, I noticed it only accepts\n> file_copy/wal_log as valid strategies, not FILE_COPY/WAL_LOG (which is\n> what the docs say). The same thing applies to CREATE DATABASE.\n\nHmm, actually it does work in CREATE DATABASE:\n\nregression=# create database foo STRATEGY = FILE_COPY;\nCREATE DATABASE\n\nbut it fails in createdb because that does\n\n\tif (strategy)\n\t\tappendPQExpBuffer(&sql, \" STRATEGY %s\", fmtId(strategy));\n\nand fmtId will double-quote the strategy if it's upper-case, and then\nthe backend grammar doesn't case-fold it, and kaboom.\n\n> The problem is that createdb() does the check using strcmp() which is\n> case-sensitive. IMHO this should do pg_strcasecmp() which is what we do\n> for other string parameters nearby.\n\nSeems reasonable. The alternative could be to remove createdb.c's use\nof fmtId() here, but I don't think that's actually better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Apr 2024 16:40:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: createdb compares strategy as case-sensitive" }, { "msg_contents": "\n\nOn 4/20/24 22:40, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> While doing some testing with createdb, I noticed it only accepts\n>> file_copy/wal_log as valid strategies, not FILE_COPY/WAL_LOG (which is\n>> what the docs say). The same thing applies to CREATE DATABASE.\n> \n> Hmm, actually it does work in CREATE DATABASE:\n> \n> regression=# create database foo STRATEGY = FILE_COPY;\n> CREATE DATABASE\n> \n> but it fails in createdb because that does\n> \n> \tif (strategy)\n> \t\tappendPQExpBuffer(&sql, \" STRATEGY %s\", fmtId(strategy));\n> \n> and fmtId will double-quote the strategy if it's upper-case, and then\n> the backend grammar doesn't case-fold it, and kaboom.\n> \n\nOh, right. I should have tested CREATE DATABASE instead of just assuming\nit has the same issue ...\n\n>> The problem is that createdb() does the check using strcmp() which is\n>> case-sensitive. IMHO this should do pg_strcasecmp() which is what we do\n>> for other string parameters nearby.\n> \n> Seems reasonable. The alternative could be to remove createdb.c's use\n> of fmtId() here, but I don't think that's actually better.\n> \n\nWhy? It seems to me this is quite close to e.g. LOCALE_PROVIDER, and we\ndon't do fmtId() for that. 
So why should we do that for STRATEGY?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 20 Apr 2024 23:53:06 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: createdb compares strategy as case-sensitive" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 4/20/24 22:40, Tom Lane wrote:\n>> Seems reasonable. The alternative could be to remove createdb.c's use\n>> of fmtId() here, but I don't think that's actually better.\n\n> Why? It seems to me this is quite close to e.g. LOCALE_PROVIDER, and we\n> don't do fmtId() for that. So why should we do that for STRATEGY?\n\nHah, nothing like being randomly inconsistent with adjacent code.\nEvery other argument handled by createdb gets wrapped by either\nfmtId or appendStringLiteralConn.\n\nI think the argument for this is it ensures that the switch value as\naccepted by createdb is the argument that CREATE DATABASE will see.\nCompare\n\n$ createdb --strategy=\"foo bar\" mydb\ncreatedb: error: database creation failed: ERROR: invalid create database strategy \"foo bar\"\nHINT: Valid strategies are \"wal_log\", and \"file_copy\".\n\n$ createdb --locale-provider=\"foo bar\" mydb\ncreatedb: error: database creation failed: ERROR: syntax error at or near \";\"\nLINE 1: CREATE DATABASE mydb LOCALE_PROVIDER foo bar;\n ^\n\nI'm not suggesting that this is an interesting security vulnerability,\nbecause if you can control the arguments to createdb it's probably\ngame over long since. But wrapping the arguments is good for\ndelivering on-point error messages. So I'd add a fmtId() call to\nLOCALE_PROVIDER too.\n\nBTW, someone's taken the Oxford comma too far in that HINT.\nNobody writes a comma when there are only two alternatives.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Apr 2024 18:19:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: createdb compares strategy as case-sensitive" }, { "msg_contents": "On 4/21/24 00:19, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 4/20/24 22:40, Tom Lane wrote:\n>>> Seems reasonable. The alternative could be to remove createdb.c's use\n>>> of fmtId() here, but I don't think that's actually better.\n> \n>> Why? It seems to me this is quite close to e.g. LOCALE_PROVIDER, and we\n>> don't do fmtId() for that. So why should we do that for STRATEGY?\n> \n> Hah, nothing like being randomly inconsistent with adjacent code.\n> Every other argument handled by createdb gets wrapped by either\n> fmtId or appendStringLiteralConn.\n> \n> I think the argument for this is it ensures that the switch value as\n> accepted by createdb is the argument that CREATE DATABASE will see.\n> Compare\n> \n> $ createdb --strategy=\"foo bar\" mydb\n> createdb: error: database creation failed: ERROR: invalid create database strategy \"foo bar\"\n> HINT: Valid strategies are \"wal_log\", and \"file_copy\".\n> \n> $ createdb --locale-provider=\"foo bar\" mydb\n> createdb: error: database creation failed: ERROR: syntax error at or near \";\"\n> LINE 1: CREATE DATABASE mydb LOCALE_PROVIDER foo bar;\n> ^\n> \n> I'm not suggesting that this is an interesting security vulnerability,\n> because if you can control the arguments to createdb it's probably\n> game over long since. But wrapping the arguments is good for\n> delivering on-point error messages. 
So I'd add a fmtId() call to\n> LOCALE_PROVIDER too.\n> \n> BTW, someone's taken the Oxford comma too far in that HINT.\n> Nobody writes a comma when there are only two alternatives.\n> \n\nOK, the attached 0001 patch does these three things - adds the fmtId()\nfor locale_provider, make the comparison case-insensitive for strategy\nand also removes the comma from the hint.\n\nThe createdb vs. CREATE DATABASE difference made me look if we have any\nregression tests for CREATE DATABASE, and we don't. I guess it would be\ngood to have some, so I added a couple, for some of the parameters, see\n0002. But there's a problem with the locale stuff - this seems to work\nin plain \"make check\", but pg_upgrade creates the clusters with\ndifferent providers etc. which changes the expected output. I'm not sure\nthere's a good way to deal with this ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 21 Apr 2024 14:35:15 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: createdb compares strategy as case-sensitive" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 4/21/24 00:19, Tom Lane wrote:\n>> I'm not suggesting that this is an interesting security vulnerability,\n>> because if you can control the arguments to createdb it's probably\n>> game over long since. But wrapping the arguments is good for\n>> delivering on-point error messages. So I'd add a fmtId() call to\n>> LOCALE_PROVIDER too.\n\n> OK, the attached 0001 patch does these three things - adds the fmtId()\n> for locale_provider, make the comparison case-insensitive for strategy\n> and also removes the comma from the hint.\n\nLGTM.\n\n> The createdb vs. CREATE DATABASE difference made me look if we have any\n> regression tests for CREATE DATABASE, and we don't. I guess it would be\n> good to have some, so I added a couple, for some of the parameters, see\n> 0002. But there's a problem with the locale stuff - this seems to work\n> in plain \"make check\", but pg_upgrade creates the clusters with\n> different providers etc. which changes the expected output. I'm not sure\n> there's a good way to deal with this ...\n\nProbably not worth the trouble, really.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Apr 2024 11:10:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: createdb compares strategy as case-sensitive" }, { "msg_contents": "On 4/21/24 17:10, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 4/21/24 00:19, Tom Lane wrote:\n>>> I'm not suggesting that this is an interesting security vulnerability,\n>>> because if you can control the arguments to createdb it's probably\n>>> game over long since. But wrapping the arguments is good for\n>>> delivering on-point error messages. So I'd add a fmtId() call to\n>>> LOCALE_PROVIDER too.\n> \n>> OK, the attached 0001 patch does these three things - adds the fmtId()\n>> for locale_provider, make the comparison case-insensitive for strategy\n>> and also removes the comma from the hint.\n> \n> LGTM.\n> \n\nPushed, after tweaking the commit message a bit.\n\n>> The createdb vs. CREATE DATABASE difference made me look if we have any\n>> regression tests for CREATE DATABASE, and we don't. I guess it would be\n>> good to have some, so I added a couple, for some of the parameters, see\n>> 0002. 
But there's a problem with the locale stuff - this seems to work\n>> in plain \"make check\", but pg_upgrade creates the clusters with\n>> different providers etc. which changes the expected output. I'm not sure\n>> there's a good way to deal with this ...\n> \n> Probably not worth the trouble, really.\n> \n\nAgreed.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 21 Apr 2024 21:34:47 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: createdb compares strategy as case-sensitive" } ]
[ { "msg_contents": "I came across an assert failure in _bt_preprocess_array_keys regarding\nthe sanity check on the datatype of the array elements. It can be\nreproduced with the query below.\n\ncreate table t (c int4range);\ncreate unique index on t (c);\n\nselect * from t where c in ('(1, 100]'::int4range, '(50, 300]'::int4range);\n\nIt fails on this Assert:\n\n+ elemtype = cur->sk_subtype;\n+ if (elemtype == InvalidOid)\n+ elemtype = rel->rd_opcintype[cur->sk_attno - 1];\n+ Assert(elemtype == ARR_ELEMTYPE(arrayval));\n\n... which was introduced in 5bf748b86b.\n\nI didn't spend much time digging into it, but I wonder if this Assert is\nsensible. I noticed that before commit 5bf748b86b, the two datatypes\nwere not equal to each other either (anyrange vs. int4range).\n\nThanks\nRichard\n\nI came across an assert failure in _bt_preprocess_array_keys regardingthe sanity check on the datatype of the array elements.  It can bereproduced with the query below.create table t (c int4range);create unique index on t (c);select * from t where c in ('(1, 100]'::int4range, '(50, 300]'::int4range);It fails on this Assert:+               elemtype = cur->sk_subtype;+               if (elemtype == InvalidOid)+                       elemtype = rel->rd_opcintype[cur->sk_attno - 1];+               Assert(elemtype == ARR_ELEMTYPE(arrayval));... which was introduced in 5bf748b86b.I didn't spend much time digging into it, but I wonder if this Assert issensible.  I noticed that before commit 5bf748b86b, the two datatypeswere not equal to each other either (anyrange vs. int4range).ThanksRichard", "msg_date": "Mon, 22 Apr 2024 10:36:13 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Assert failure in _bt_preprocess_array_keys" }, { "msg_contents": "On Sun, Apr 21, 2024 at 10:36 PM Richard Guo <[email protected]> wrote:\n> I didn't spend much time digging into it, but I wonder if this Assert is\n> sensible. I noticed that before commit 5bf748b86b, the two datatypes\n> were not equal to each other either (anyrange vs. int4range).\n\nThe assertion is wrong. It is testing behavior that's much older than\ncommit 5bf748b86b, though. We can just get rid of it, since all of the\ninformation that we'll actually apply when preprocessing scan keys\ncomes from the operator class.\n\nPushed a fix removing the assertion just now. Thanks for the report.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 21 Apr 2024 22:52:16 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assert failure in _bt_preprocess_array_keys" }, { "msg_contents": "On Mon, Apr 22, 2024 at 10:52 AM Peter Geoghegan <[email protected]> wrote:\n\n> On Sun, Apr 21, 2024 at 10:36 PM Richard Guo <[email protected]>\n> wrote:\n> > I didn't spend much time digging into it, but I wonder if this Assert is\n> > sensible. I noticed that before commit 5bf748b86b, the two datatypes\n> > were not equal to each other either (anyrange vs. int4range).\n>\n> The assertion is wrong. It is testing behavior that's much older than\n> commit 5bf748b86b, though. We can just get rid of it, since all of the\n> information that we'll actually apply when preprocessing scan keys\n> comes from the operator class.\n>\n> Pushed a fix removing the assertion just now. Thanks for the report.\n\n\nThat's so quick. 
Thank you for the prompt fix.\n\nThanks\nRichard", "msg_date": "Mon, 22 Apr 2024 14:22:25 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Assert failure in _bt_preprocess_array_keys" } ]
[ { "msg_contents": "hi.\nminor issue in guc.c.\n\nset work_mem to '1kB';\nERROR: 1 kB is outside the valid range for parameter \"work_mem\" (64\n.. 2147483647)\nshould it be\nERROR: 1 kB is outside the valid range for parameter \"work_mem\" (64\nkB .. 2147483647 kB)\n?\nsince the units for work_mem are { \"B\", \"kB\", \"MB\", \"GB\", and \"TB\"}\n\nsearch `outside the valid range for parameter`,\nthere are two occurrences in guc.c.\n\n\n", "msg_date": "Mon, 22 Apr 2024 16:43:42 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "slightly misleading Error message in guc.c" }, { "msg_contents": "Hi,\n\nOn Mon, 22 Apr 2024 at 11:44, jian he <[email protected]> wrote:\n>\n> hi.\n> minor issue in guc.c.\n>\n> set work_mem to '1kB';\n> ERROR: 1 kB is outside the valid range for parameter \"work_mem\" (64\n> .. 2147483647)\n> should it be\n> ERROR: 1 kB is outside the valid range for parameter \"work_mem\" (64\n> kB .. 2147483647 kB)\n> ?\n> since the units for work_mem are { \"B\", \"kB\", \"MB\", \"GB\", and \"TB\"}\n>\n> search `outside the valid range for parameter`,\n> there are two occurrences in guc.c.\n\nNice find. I agree it could cause confusion.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 22 Apr 2024 17:08:05 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slightly misleading Error message in guc.c" }, { "msg_contents": "jian he <[email protected]> writes:\n> set work_mem to '1kB';\n> ERROR: 1 kB is outside the valid range for parameter \"work_mem\" (64\n> .. 2147483647)\n> should it be\n> ERROR: 1 kB is outside the valid range for parameter \"work_mem\" (64\n> kB .. 2147483647 kB)\n> ?\n> since the units for work_mem are { \"B\", \"kB\", \"MB\", \"GB\", and \"TB\"}\n\nSeems like a useful change ... about like this?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 22 Apr 2024 12:04:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slightly misleading Error message in guc.c" }, { "msg_contents": "> On 22 Apr 2024, at 18:04, Tom Lane <[email protected]> wrote:\n\n> Seems like a useful change\n\nAgreed.\n\n> ... about like this?\n\nPatch LGTM.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 23 Apr 2024 14:21:52 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slightly misleading Error message in guc.c" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n>> On 22 Apr 2024, at 18:04, Tom Lane <[email protected]> wrote:\n>> Seems like a useful change\n\n> Agreed.\n\n>> ... about like this?\n\n> Patch LGTM.\n\nPushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Apr 2024 11:53:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slightly misleading Error message in guc.c" } ]
[ { "msg_contents": "Hi,\n\nThis new thread is a follow-up of [1] and [2].\n\nProblem description:\n\nWe have occasionally observed objects having an orphaned dependency, the \nmost common case we have seen is functions not linked to any namespaces.\n\nExamples to produce such orphaned dependencies:\n\nScenario 1:\n\nsession 1: begin; drop schema schem;\nsession 2: create a function in the schema schem\nsession 1: commit;\n\nWith the above, the function created in session 2 would be linked to a non\nexisting schema.\n\nScenario 2:\n\nsession 1: begin; create a function in the schema schem\nsession 2: drop schema schem;\nsession 1: commit;\n\nWith the above, the function created in session 1 would be linked to a non\nexisting schema.\n\nA patch has been initially proposed to fix this particular \n(function-to-namespace) dependency (see [1]), but there could be much \nmore scenarios (like the function-to-datatype one highlighted by Gilles \nin [1] that could lead to a function having an invalid parameter datatype).\n\nAs Tom said there are dozens more cases that would need to be \nconsidered, and a global approach to avoid those race conditions should \nbe considered instead.\n\nA first global approach attempt has been proposed in [2] making use of a dirty\nsnapshot when recording the dependency. But this approach appeared to be \"scary\"\nand it was still failing to close some race conditions (see [2] for details).\n\nThen, Tom proposed another approach in [2] which is that \"creation DDL will have\nto take a lock on each referenced object that'd conflict with a lock taken by\nDROP\".\n\nThis is what the attached patch is trying to achieve.\n\nIt does the following:\n\n1) A new lock (that conflicts with a lock taken by DROP) has been put in place\nwhen the dependencies are being recorded.\n\nThanks to it, the drop schema in scenario 2 would be locked (resulting in an\nerror should session 1 committs).\n\n2) After locking the object while recording the dependency, the patch checks\nthat the object still exists.\n\nThanks to it, session 2 in scenario 1 would be locked and would report an error\nonce session 1 committs (that would not be the case should session 1 abort the\ntransaction).\n\nThe patch also adds a few tests for some dependency cases (that would currently\nproduce orphaned objects):\n\n- schema and function (as the above scenarios)\n- function and type\n- table and type (which is I think problematic enough, as involving a table into\nthe game, to fix this stuff as a whole).\n\n[1]: https://www.postgresql.org/message-id/flat/[email protected]#9af5cdaa9e80879beb1def3604c976e8\n[2]: https://www.postgresql.org/message-id/flat/8369ff70-0e31-f194-2954-787f4d9e21dd%40amazon.com\n\nPlease note that I'm not used to with this area of the code so that the patch\nmight not take the option proposed by Tom the \"right\" way.\n\nAdding the patch to the July CF.\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 22 Apr 2024 08:45:19 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi Bertrand,\n\n22.04.2024 11:45, Bertrand Drouvot wrote:\n> Hi,\n>\n> This new thread is a follow-up of [1] and [2].\n>\n> Problem description:\n>\n> We have occasionally observed objects having an orphaned dependency, the\n> most common case we have seen is 
functions not linked to any namespaces.\n>\n> ...\n>\n> Looking forward to your feedback,\n\nThis have reminded me of bug #17182 [1].\nUnfortunately, with the patch applied, the following script:\n\nfor ((i=1;i<=100;i++)); do\n   ( { for ((n=1;n<=20;n++)); do echo \"DROP SCHEMA s;\"; done } | psql ) >psql1.log 2>&1 &\n   echo \"\nCREATE SCHEMA s;\nCREATE FUNCTION s.func1() RETURNS int LANGUAGE SQL AS 'SELECT 1;';\nCREATE FUNCTION s.func2() RETURNS int LANGUAGE SQL AS 'SELECT 2;';\nCREATE FUNCTION s.func3() RETURNS int LANGUAGE SQL AS 'SELECT 3;';\nCREATE FUNCTION s.func4() RETURNS int LANGUAGE SQL AS 'SELECT 4;';\nCREATE FUNCTION s.func5() RETURNS int LANGUAGE SQL AS 'SELECT 5;';\n   \"  | psql >psql2.log 2>&1 &\n   wait\n   psql -c \"DROP SCHEMA s CASCADE\" >psql3.log\ndone\necho \"\nSELECT pg_identify_object('pg_proc'::regclass, pp.oid, 0), pp.oid FROM pg_proc pp\n   LEFT JOIN pg_namespace pn ON pp.pronamespace = pn.oid WHERE pn.oid IS NULL\" | psql\n\nstill ends with:\nserver closed the connection unexpectedly\n         This probably means the server terminated abnormally\n         before or while processing the request.\n\n2024-04-22 09:54:39.171 UTC|||662633dc.152bbc|LOG:  server process (PID 1388378) was terminated by signal 11: \nSegmentation fault\n2024-04-22 09:54:39.171 UTC|||662633dc.152bbc|DETAIL:  Failed process was running: SELECT \npg_identify_object('pg_proc'::regclass, pp.oid, 0), pp.oid FROM pg_proc pp\n       LEFT JOIN pg_namespace pn ON pp.pronamespace = pn.oid WHERE pn.oid IS NULL\n\n[1] https://www.postgresql.org/message-id/flat/17182-a6baa001dd1784be%40postgresql.org\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 22 Apr 2024 13:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Mon, Apr 22, 2024 at 01:00:00PM +0300, Alexander Lakhin wrote:\n> Hi Bertrand,\n> \n> 22.04.2024 11:45, Bertrand Drouvot wrote:\n> > Hi,\n> > \n> > This new thread is a follow-up of [1] and [2].\n> > \n> > Problem description:\n> > \n> > We have occasionally observed objects having an orphaned dependency, the\n> > most common case we have seen is functions not linked to any namespaces.\n> > \n> > ...\n> > \n> > Looking forward to your feedback,\n> \n> This have reminded me of bug #17182 [1].\n\nThanks for the link to the bug!\n\n> Unfortunately, with the patch applied, the following script:\n> \n> for ((i=1;i<=100;i++)); do\n> � ( { for ((n=1;n<=20;n++)); do echo \"DROP SCHEMA s;\"; done } | psql ) >psql1.log 2>&1 &\n> � echo \"\n> CREATE SCHEMA s;\n> CREATE FUNCTION s.func1() RETURNS int LANGUAGE SQL AS 'SELECT 1;';\n> CREATE FUNCTION s.func2() RETURNS int LANGUAGE SQL AS 'SELECT 2;';\n> CREATE FUNCTION s.func3() RETURNS int LANGUAGE SQL AS 'SELECT 3;';\n> CREATE FUNCTION s.func4() RETURNS int LANGUAGE SQL AS 'SELECT 4;';\n> CREATE FUNCTION s.func5() RETURNS int LANGUAGE SQL AS 'SELECT 5;';\n> � \"� | psql >psql2.log 2>&1 &\n> � wait\n> � psql -c \"DROP SCHEMA s CASCADE\" >psql3.log\n> done\n> echo \"\n> SELECT pg_identify_object('pg_proc'::regclass, pp.oid, 0), pp.oid FROM pg_proc pp\n> � LEFT JOIN pg_namespace pn ON pp.pronamespace = pn.oid WHERE pn.oid IS NULL\" | psql\n> \n> still ends with:\n> server closed the connection unexpectedly\n> ������� This probably means the server terminated abnormally\n> ������� before or while processing the request.\n> \n> 2024-04-22 09:54:39.171 UTC|||662633dc.152bbc|LOG:� server process (PID\n> 1388378) was 
terminated by signal 11: Segmentation fault\n> 2024-04-22 09:54:39.171 UTC|||662633dc.152bbc|DETAIL:� Failed process was\n> running: SELECT pg_identify_object('pg_proc'::regclass, pp.oid, 0), pp.oid\n> FROM pg_proc pp\n> ����� LEFT JOIN pg_namespace pn ON pp.pronamespace = pn.oid WHERE pn.oid IS NULL\n> \n\nThanks for sharing the script.\n\nThat's weird, I just launched it several times with the patch applied and I'm not\nable to see the seg fault (while I can see it constently failing on the master\nbranch).\n\nAre you 100% sure you tested it against a binary with the patch applied?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Apr 2024 10:52:21 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "22.04.2024 13:52, Bertrand Drouvot wrote:\n>\n> That's weird, I just launched it several times with the patch applied and I'm not\n> able to see the seg fault (while I can see it constently failing on the master\n> branch).\n>\n> Are you 100% sure you tested it against a binary with the patch applied?\n>\n\nYes, at least I can't see what I'm doing wrong. Please try my\nself-contained script attached.\n\nBest regards,\nAlexander", "msg_date": "Mon, 22 Apr 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Mon, Apr 22, 2024 at 03:00:00PM +0300, Alexander Lakhin wrote:\n> 22.04.2024 13:52, Bertrand Drouvot wrote:\n> > \n> > That's weird, I just launched it several times with the patch applied and I'm not\n> > able to see the seg fault (while I can see it constently failing on the master\n> > branch).\n> > \n> > Are you 100% sure you tested it against a binary with the patch applied?\n> > \n> \n> Yes, at least I can't see what I'm doing wrong. Please try my\n> self-contained script attached.\n\nThanks for sharing your script!\n\nYeah your script ensures the patch is applied before the repro is executed.\n\nI do confirm that I can also see the issue with the patch applied (but I had to\nlaunch multiple attempts, while on master one attempt is enough).\n\nI'll have a look.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 23 Apr 2024 04:59:09 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 23, 2024 at 04:59:09AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Mon, Apr 22, 2024 at 03:00:00PM +0300, Alexander Lakhin wrote:\n> > 22.04.2024 13:52, Bertrand Drouvot wrote:\n> > > \n> > > That's weird, I just launched it several times with the patch applied and I'm not\n> > > able to see the seg fault (while I can see it constently failing on the master\n> > > branch).\n> > > \n> > > Are you 100% sure you tested it against a binary with the patch applied?\n> > > \n> > \n> > Yes, at least I can't see what I'm doing wrong. 
Please try my\n> > self-contained script attached.\n> \n> Thanks for sharing your script!\n> \n> Yeah your script ensures the patch is applied before the repro is executed.\n> \n> I do confirm that I can also see the issue with the patch applied (but I had to\n> launch multiple attempts, while on master one attempt is enough).\n> \n> I'll have a look.\n\nPlease find attached v2 that should not produce the issue anymore (I launched a\nlot of attempts without any issues). v1 was not strong enough as it was not\nalways checking for the dependent object existence. v2 now always checks if the\nobject still exists after the additional lock acquisition attempt while recording\nthe dependency.\n\nI still need to think about v2 but in the meantime could you please also give\nv2 a try on you side?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 23 Apr 2024 16:20:46 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 23, 2024 at 04:20:46PM +0000, Bertrand Drouvot wrote:\n> Please find attached v2 that should not produce the issue anymore (I launched a\n> lot of attempts without any issues). v1 was not strong enough as it was not\n> always checking for the dependent object existence. v2 now always checks if the\n> object still exists after the additional lock acquisition attempt while recording\n> the dependency.\n> \n> I still need to think about v2 but in the meantime could you please also give\n> v2 a try on you side?\n\nI gave more thought to v2 and the approach seems reasonable to me. Basically what\nit does is that in case the object is already dropped before we take the new lock\n(while recording the dependency) then the error message is a \"generic\" one (means\nit does not provide the description of the \"already\" dropped object). I think it\nmakes sense to write the patch that way by abandoning the patch's ambition to\ntell the description of the dropped object in all the cases.\n\nOf course, I would be happy to hear others thought about it.\n\nPlease find v3 attached (which is v2 with more comments).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 24 Apr 2024 08:38:01 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi Bertrand,\n\n24.04.2024 11:38, Bertrand Drouvot wrote:\n>> Please find attached v2 that should not produce the issue anymore (I launched a\n>> lot of attempts without any issues). v1 was not strong enough as it was not\n>> always checking for the dependent object existence. v2 now always checks if the\n>> object still exists after the additional lock acquisition attempt while recording\n>> the dependency.\n>>\n>> I still need to think about v2 but in the meantime could you please also give\n>> v2 a try on you side?\n> I gave more thought to v2 and the approach seems reasonable to me. Basically what\n> it does is that in case the object is already dropped before we take the new lock\n> (while recording the dependency) then the error message is a \"generic\" one (means\n> it does not provide the description of the \"already\" dropped object). 
I think it\n> makes sense to write the patch that way by abandoning the patch's ambition to\n> tell the description of the dropped object in all the cases.\n>\n> Of course, I would be happy to hear others thought about it.\n>\n> Please find v3 attached (which is v2 with more comments).\n\nThank you for the improved version!\n\nI can confirm that it fixes that case.\nI've also tested other cases that fail on master (most of them also fail\nwith v1), please try/look at the attached script. (There could be other\nbroken-dependency cases, of course, but I think I've covered all the\nrepresentative ones.)\n\nAll tested cases work correctly with v3 applied — I couldn't get broken\ndependencies, though concurrent create/drop operations can still produce\nthe \"cache lookup failed\" error, which is probably okay, except that it is\nan INTERNAL_ERROR, which assumed to be not easily triggered by users.\n\nBest regards,\nAlexander", "msg_date": "Wed, 24 Apr 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Wed, Apr 24, 2024 at 03:00:00PM +0300, Alexander Lakhin wrote:\n> 24.04.2024 11:38, Bertrand Drouvot wrote:\n> > I gave more thought to v2 and the approach seems reasonable to me. Basically what\n> > it does is that in case the object is already dropped before we take the new lock\n> > (while recording the dependency) then the error message is a \"generic\" one (means\n> > it does not provide the description of the \"already\" dropped object). I think it\n> > makes sense to write the patch that way by abandoning the patch's ambition to\n> > tell the description of the dropped object in all the cases.\n> > \n> > Of course, I would be happy to hear others thought about it.\n> > \n> > Please find v3 attached (which is v2 with more comments).\n> \n> Thank you for the improved version!\n> \n> I can confirm that it fixes that case.\n\nGreat, thanks for the test!\n\n> I've also tested other cases that fail on master (most of them also fail\n> with v1), please try/look at the attached script.\n\nThanks for all those tests!\n\n> (There could be other broken-dependency cases, of course, but I think I've\n> covered all the representative ones.)\n\nAgree. Furthermore the way the patch is written should be agnostic to the\nobject's kind that are part of the dependency. Having said that, that does not\nhurt to add more tests in this patch, so v4 attached adds some of your tests (\nthat would fail on the master branch without the patch applied).\n\nThe way the tests are written in the patch are less \"racy\" that when triggered\nwith your script. Indeed, I think that in the patch the dependent object can not\nbe removed before the new lock is taken when recording the dependency. We may\nwant to add injection points in the game if we feel the need.\n\n> All tested cases work correctly with v3 applied —\n\nYeah, same on my side, I did run them too and did not find any issues.\n\n> I couldn't get broken\n> dependencies,\n\nSame here.\n\n> though concurrent create/drop operations can still produce\n> the \"cache lookup failed\" error, which is probably okay, except that it is\n> an INTERNAL_ERROR, which assumed to be not easily triggered by users.\n\nI did not see any of those \"cache lookup failed\" during my testing with/without\nyour script. 
During which test(s) did you see those with v3 applied?\n\nAttached v4, simply adding more tests to v3.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 25 Apr 2024 05:00:22 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\n25.04.2024 08:00, Bertrand Drouvot wrote:\n>\n>> though concurrent create/drop operations can still produce\n>> the \"cache lookup failed\" error, which is probably okay, except that it is\n>> an INTERNAL_ERROR, which assumed to be not easily triggered by users.\n> I did not see any of those \"cache lookup failed\" during my testing with/without\n> your script. During which test(s) did you see those with v3 applied?\n\nYou can try, for example, table-trigger, or other tests that check for\n\"cache lookup failed\" psql output only (maybe you'll need to increase the\niteration count). For instance, I've got (with v4 applied):\n2024-04-25 05:48:08.102 UTC [3638763] ERROR:  cache lookup failed for function 20893\n2024-04-25 05:48:08.102 UTC [3638763] STATEMENT:  CREATE TRIGGER modified_c1 BEFORE UPDATE OF c ON t\n         FOR EACH ROW WHEN (OLD.c <> NEW.c) EXECUTE PROCEDURE trigger_func('modified_c');\n\nOr with function-function:\n2024-04-25 05:52:31.390 UTC [3711897] ERROR:  cache lookup failed for function 32190 at character 54\n2024-04-25 05:52:31.390 UTC [3711897] STATEMENT:  CREATE FUNCTION f1() RETURNS int LANGUAGE SQL RETURN f() + 1;\n--\n2024-04-25 05:52:37.639 UTC [3720011] ERROR:  cache lookup failed for function 34465 at character 54\n2024-04-25 05:52:37.639 UTC [3720011] STATEMENT:  CREATE FUNCTION f1() RETURNS int LANGUAGE SQL RETURN f() + 1;\n\n> Attached v4, simply adding more tests to v3.\n\nThank you for working on this!\n\nBest regards,\nAlexander\n\n\n\n\n\nHi, \n\n 25.04.2024 08:00, Bertrand Drouvot wrote:\n\n\n\nthough concurrent create/drop operations can still produce\nthe \"cache lookup failed\" error, which is probably okay, except that it is\nan INTERNAL_ERROR, which assumed to be not easily triggered by users.\n\n\n\nI did not see any of those \"cache lookup failed\" during my testing with/without\nyour script. During which test(s) did you see those with v3 applied?\n\n\n You can try, for example, table-trigger, or other tests that check\n for\n \"cache lookup failed\" psql output only (maybe you'll need to\n increase the\n iteration count). 
For instance, I've got (with v4 applied):\n 2024-04-25 05:48:08.102 UTC [3638763] ERROR:  cache lookup failed\n for function 20893\n 2024-04-25 05:48:08.102 UTC [3638763] STATEMENT:  CREATE TRIGGER\n modified_c1 BEFORE UPDATE OF c ON t\n         FOR EACH ROW WHEN (OLD.c <> NEW.c) EXECUTE PROCEDURE\n trigger_func('modified_c');\n\n Or with function-function:\n 2024-04-25 05:52:31.390 UTC [3711897] ERROR:  cache lookup failed\n for function 32190 at character 54\n 2024-04-25 05:52:31.390 UTC [3711897] STATEMENT:  CREATE FUNCTION\n f1() RETURNS int LANGUAGE SQL RETURN f() + 1;\n --\n 2024-04-25 05:52:37.639 UTC [3720011] ERROR:  cache lookup failed\n for function 34465 at character 54\n 2024-04-25 05:52:37.639 UTC [3720011] STATEMENT:  CREATE FUNCTION\n f1() RETURNS int LANGUAGE SQL RETURN f() + 1;\n\n\n\nAttached v4, simply adding more tests to v3.\n\n\n\n Thank you for working on this!\n\n Best regards,\n Alexander", "msg_date": "Thu, 25 Apr 2024 09:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Thu, Apr 25, 2024 at 09:00:00AM +0300, Alexander Lakhin wrote:\n> Hi,\n> \n> 25.04.2024 08:00, Bertrand Drouvot wrote:\n> > \n> > > though concurrent create/drop operations can still produce\n> > > the \"cache lookup failed\" error, which is probably okay, except that it is\n> > > an INTERNAL_ERROR, which assumed to be not easily triggered by users.\n> > I did not see any of those \"cache lookup failed\" during my testing with/without\n> > your script. During which test(s) did you see those with v3 applied?\n> \n> You can try, for example, table-trigger, or other tests that check for\n> \"cache lookup failed\" psql output only (maybe you'll need to increase the\n> iteration count). For instance, I've got (with v4 applied):\n> 2024-04-25 05:48:08.102 UTC [3638763] ERROR:� cache lookup failed for function 20893\n> 2024-04-25 05:48:08.102 UTC [3638763] STATEMENT:� CREATE TRIGGER modified_c1 BEFORE UPDATE OF c ON t\n> ������� FOR EACH ROW WHEN (OLD.c <> NEW.c) EXECUTE PROCEDURE trigger_func('modified_c');\n> \n> Or with function-function:\n> 2024-04-25 05:52:31.390 UTC [3711897] ERROR:� cache lookup failed for function 32190 at character 54\n> 2024-04-25 05:52:31.390 UTC [3711897] STATEMENT:� CREATE FUNCTION f1() RETURNS int LANGUAGE SQL RETURN f() + 1;\n> --\n> 2024-04-25 05:52:37.639 UTC [3720011] ERROR:� cache lookup failed for function 34465 at character 54\n> 2024-04-25 05:52:37.639 UTC [3720011] STATEMENT:� CREATE FUNCTION f1() RETURNS int LANGUAGE SQL RETURN f() + 1;\n\nI see, so this is during object creation.\n\nIt's easy to reproduce this kind of issue with gdb. 
For example set a breakpoint\non SearchSysCache1() and during the create function f1() once it breaks on:\n\n#0 SearchSysCache1 (cacheId=45, key1=16400) at syscache.c:221\n#1 0x00005ad305beacd6 in func_get_detail (funcname=0x5ad308204d50, fargs=0x0, fargnames=0x0, nargs=0, argtypes=0x7ffff2ff9cc0, expand_variadic=true, expand_defaults=true, include_out_arguments=false, funcid=0x7ffff2ff9ba0, rettype=0x7ffff2ff9b9c, retset=0x7ffff2ff9b94, nvargs=0x7ffff2ff9ba4,\n vatype=0x7ffff2ff9ba8, true_typeids=0x7ffff2ff9bd8, argdefaults=0x7ffff2ff9be0) at parse_func.c:1622\n#2 0x00005ad305be7dd0 in ParseFuncOrColumn (pstate=0x5ad30823be98, funcname=0x5ad308204d50, fargs=0x0, last_srf=0x0, fn=0x5ad308204da0, proc_call=false, location=55) at parse_func.c:266\n#3 0x00005ad305bdffb0 in transformFuncCall (pstate=0x5ad30823be98, fn=0x5ad308204da0) at parse_expr.c:1474\n#4 0x00005ad305bdd2ee in transformExprRecurse (pstate=0x5ad30823be98, expr=0x5ad308204da0) at parse_expr.c:230\n#5 0x00005ad305bdec34 in transformAExprOp (pstate=0x5ad30823be98, a=0x5ad308204e20) at parse_expr.c:990\n#6 0x00005ad305bdd1a0 in transformExprRecurse (pstate=0x5ad30823be98, expr=0x5ad308204e20) at parse_expr.c:187\n#7 0x00005ad305bdd00b in transformExpr (pstate=0x5ad30823be98, expr=0x5ad308204e20, exprKind=EXPR_KIND_SELECT_TARGET) at parse_expr.c:131\n#8 0x00005ad305b96b7e in transformReturnStmt (pstate=0x5ad30823be98, stmt=0x5ad308204ee0) at analyze.c:2395\n#9 0x00005ad305b92213 in transformStmt (pstate=0x5ad30823be98, parseTree=0x5ad308204ee0) at analyze.c:375\n#10 0x00005ad305c6321a in interpret_AS_clause (languageOid=14, languageName=0x5ad308204c40 \"sql\", funcname=0x5ad308204ad8 \"f100\", as=0x0, sql_body_in=0x5ad308204ee0, parameterTypes=0x0, inParameterNames=0x0, prosrc_str_p=0x7ffff2ffa208, probin_str_p=0x7ffff2ffa200, sql_body_out=0x7ffff2ffa210,\n queryString=0x5ad3082040b0 \"CREATE FUNCTION f100() RETURNS int LANGUAGE SQL RETURN f() + 1;\") at functioncmds.c:953\n#11 0x00005ad305c63c93 in CreateFunction (pstate=0x5ad308186310, stmt=0x5ad308204f00) at functioncmds.c:1221\n\nthen drop function f() in another session. Then the create function f1() would:\n\npostgres=# CREATE FUNCTION f() RETURNS int LANGUAGE SQL RETURN f() + 1;\nERROR: cache lookup failed for function 16400\n\nThis stuff does appear before we get a chance to call the new depLockAndCheckObject()\nfunction.\n\nI think this is what Tom was referring to in [1]:\n\n\"\nSo the only real fix for this would be to make every object lookup in the entire\nsystem do the sort of dance that's done in RangeVarGetRelidExtended.\n\"\n\nThe fact that those kind of errors appear also somehow ensure that no orphaned\ndependencies can be created.\n\nThe patch does ensure that no orphaned depencies can occur after those \"initial\"\nlook up are done (should the dependent object be dropped).\n\nI'm tempted to not add extra complexity to avoid those kind of errors and keep the\npatch as it is. 
All of those servicing the same goal: no orphaned depencies are\ncreated.\n\n[1]: https://www.postgresql.org/message-id/2872252.1630851337%40sss.pgh.pa.us\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 07:20:05 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi Bertrand,\n\n25.04.2024 10:20, Bertrand Drouvot wrote:\n> postgres=# CREATE FUNCTION f() RETURNS int LANGUAGE SQL RETURN f() + 1;\n> ERROR: cache lookup failed for function 16400\n>\n> This stuff does appear before we get a chance to call the new depLockAndCheckObject()\n> function.\n>\n> I think this is what Tom was referring to in [1]:\n>\n> \"\n> So the only real fix for this would be to make every object lookup in the entire\n> system do the sort of dance that's done in RangeVarGetRelidExtended.\n> \"\n>\n> The fact that those kind of errors appear also somehow ensure that no orphaned\n> dependencies can be created.\n\nI agree; the only thing that I'd change here, is the error code.\n\nBut I've discovered yet another possibility to get a broken dependency.\nPlease try this script:\nres=0\nnumclients=20\nfor ((i=1;i<=100;i++)); do\nfor ((c=1;c<=numclients;c++)); do\n   echo \"\nCREATE SCHEMA s_$c;\nCREATE CONVERSION myconv_$c FOR 'LATIN1' TO 'UTF8' FROM iso8859_1_to_utf8;\nALTER CONVERSION myconv_$c SET SCHEMA s_$c;\n   \" | psql >psql1-$c.log 2>&1 &\n   echo \"DROP SCHEMA s_$c RESTRICT;\" | psql >psql2-$c.log 2>&1 &\ndone\nwait\npg_dump -f db.dump || { echo \"on iteration $i\"; res=1; break; }\nfor ((c=1;c<=numclients;c++)); do\n   echo \"DROP SCHEMA s_$c CASCADE;\" | psql >psql3-$c.log 2>&1\ndone\ndone\npsql -c \"SELECT * FROM pg_conversion WHERE connamespace NOT IN (SELECT oid FROM pg_namespace);\"\n\nIt fails for me (with the v4 patch applied) as follows:\npg_dump: error: schema with OID 16392 does not exist\non iteration 1\n   oid  | conname  | connamespace | conowner | conforencoding | contoencoding |      conproc      | condefault\n-------+----------+--------------+----------+----------------+---------------+-------------------+------------\n  16396 | myconv_6 |        16392 |       10 |              8 |             6 | iso8859_1_to_utf8 | f\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 30 Apr 2024 20:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 30, 2024 at 08:00:00PM +0300, Alexander Lakhin wrote:\n> Hi Bertrand,\n> \n> But I've discovered yet another possibility to get a broken dependency.\n\nThanks for the testing and the report!\n\n> Please try this script:\n> res=0\n> numclients=20\n> for ((i=1;i<=100;i++)); do\n> for ((c=1;c<=numclients;c++)); do\n> � echo \"\n> CREATE SCHEMA s_$c;\n> CREATE CONVERSION myconv_$c FOR 'LATIN1' TO 'UTF8' FROM iso8859_1_to_utf8;\n> ALTER CONVERSION myconv_$c SET SCHEMA s_$c;\n> � \" | psql >psql1-$c.log 2>&1 &\n> � echo \"DROP SCHEMA s_$c RESTRICT;\" | psql >psql2-$c.log 2>&1 &\n> done\n> wait\n> pg_dump -f db.dump || { echo \"on iteration $i\"; res=1; break; }\n> for ((c=1;c<=numclients;c++)); do\n> � echo \"DROP SCHEMA s_$c CASCADE;\" | psql >psql3-$c.log 2>&1\n> done\n> done\n> psql -c \"SELECT * FROM pg_conversion WHERE connamespace NOT IN (SELECT oid FROM pg_namespace);\"\n> \n> It 
fails for me (with the v4 patch applied) as follows:\n> pg_dump: error: schema with OID 16392 does not exist\n> on iteration 1\n> � oid� | conname� | connamespace | conowner | conforencoding | contoencoding |����� conproc����� | condefault\n> -------+----------+--------------+----------+----------------+---------------+-------------------+------------\n> �16396 | myconv_6 |������� 16392 |������ 10 |������������� 8 |������������ 6 | iso8859_1_to_utf8 | f\n> \n\nThanks for sharing the test, I'm able to reproduce the issue with v4.\n\nOh I see, your test updates an existing dependency. v4 took care about brand new \ndependencies creation (recordMultipleDependencies()) but forgot to take care\nabout changing an existing dependency (which is done in another code path:\nchangeDependencyFor()).\n\nPlease find attached v5 that adds:\n\n- a call to the new depLockAndCheckObject() function in changeDependencyFor().\n- a test when altering an existing dependency.\n\nWith v5 applied, I don't see the issue anymore.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 9 May 2024 12:20:51 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi Bertrand,\n\n09.05.2024 15:20, Bertrand Drouvot wrote:\n> Oh I see, your test updates an existing dependency. v4 took care about brand new\n> dependencies creation (recordMultipleDependencies()) but forgot to take care\n> about changing an existing dependency (which is done in another code path:\n> changeDependencyFor()).\n>\n> Please find attached v5 that adds:\n>\n> - a call to the new depLockAndCheckObject() function in changeDependencyFor().\n> - a test when altering an existing dependency.\n>\n> With v5 applied, I don't see the issue anymore.\n\nMe too. Thank you for the improved version!\nI will test the patch in the background, but for now I see no other\nissues with it.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 14 May 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "On Thu, May 09, 2024 at 12:20:51PM +0000, Bertrand Drouvot wrote:\n> Oh I see, your test updates an existing dependency. v4 took care about brand new \n> dependencies creation (recordMultipleDependencies()) but forgot to take care\n> about changing an existing dependency (which is done in another code path:\n> changeDependencyFor()).\n> \n> Please find attached v5 that adds:\n> \n> - a call to the new depLockAndCheckObject() function in changeDependencyFor().\n> - a test when altering an existing dependency.\n> \n> With v5 applied, I don't see the issue anymore.\n\n+ if (object_description)\n+ ereport(ERROR, errmsg(\"%s does not exist\", object_description));\n+ else\n+ ereport(ERROR, errmsg(\"a dependent object does not ex\n\nThis generates an internal error code. Is that intended?\n\n--- /dev/null\n+++ b/src/test/modules/test_dependencies_locks/specs/test_dependencies_locks.spec \n\nThis introduces a module with only one single spec. I could get\nbehind an extra module if splitting the tests into more specs makes\nsense or if there is a restriction on installcheck. 
However, for \none spec file filed with a bunch of objects, and note that I'm OK to\nlet this single spec grow more for this range of tests, it seems to me\nthat this would be better under src/test/isolation/.\n\n+\t\tif (use_dirty_snapshot)\n+\t\t{\n+\t\t\tInitDirtySnapshot(DirtySnapshot);\n+\t\t\tsnapshot = &DirtySnapshot;\n+\t\t}\n+\t\telse\n+\t\t\tsnapshot = NULL;\n\nI'm wondering if Robert has a comment about that. It looks backwards\nin a world where we try to encourage MVCC snapshots for catalog\nscans (aka 568d4138c646), especially for the part related to\ndependency.c and ObjectAddresses.\n--\nMichael", "msg_date": "Wed, 15 May 2024 10:14:09 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Wed, May 15, 2024 at 10:14:09AM +0900, Michael Paquier wrote:\n> On Thu, May 09, 2024 at 12:20:51PM +0000, Bertrand Drouvot wrote:\n> > Oh I see, your test updates an existing dependency. v4 took care about brand new \n> > dependencies creation (recordMultipleDependencies()) but forgot to take care\n> > about changing an existing dependency (which is done in another code path:\n> > changeDependencyFor()).\n> > \n> > Please find attached v5 that adds:\n> > \n> > - a call to the new depLockAndCheckObject() function in changeDependencyFor().\n> > - a test when altering an existing dependency.\n> > \n> > With v5 applied, I don't see the issue anymore.\n> \n> + if (object_description)\n> + ereport(ERROR, errmsg(\"%s does not exist\", object_description));\n> + else\n> + ereport(ERROR, errmsg(\"a dependent object does not ex\n> \n> This generates an internal error code. Is that intended?\n\nThanks for looking at it!\n\nYes, it's like when say you want to create an object in a schema that does not\nexist (see get_namespace_oid()).\n\n> --- /dev/null\n> +++ b/src/test/modules/test_dependencies_locks/specs/test_dependencies_locks.spec \n> \n> This introduces a module with only one single spec. I could get\n> behind an extra module if splitting the tests into more specs makes\n> sense or if there is a restriction on installcheck. However, for \n> one spec file filed with a bunch of objects, and note that I'm OK to\n> let this single spec grow more for this range of tests, it seems to me\n> that this would be better under src/test/isolation/.\n\nYeah, I was not sure about this one (the location is from take 2 mentioned\nup-thread). I'll look at moving the tests to src/test/isolation/.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 15 May 2024 08:31:43 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Tue, May 14, 2024 at 03:00:00PM +0300, Alexander Lakhin wrote:\n> Hi Bertrand,\n> \n> 09.05.2024 15:20, Bertrand Drouvot wrote:\n> > Oh I see, your test updates an existing dependency. 
v4 took care about brand new\n> > dependencies creation (recordMultipleDependencies()) but forgot to take care\n> > about changing an existing dependency (which is done in another code path:\n> > changeDependencyFor()).\n> > \n> > Please find attached v5 that adds:\n> > \n> > - a call to the new depLockAndCheckObject() function in changeDependencyFor().\n> > - a test when altering an existing dependency.\n> > \n> > With v5 applied, I don't see the issue anymore.\n> \n> Me too. Thank you for the improved version!\n> I will test the patch in the background, but for now I see no other\n> issues with it.\n\nThanks for confirming!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 15 May 2024 08:33:05 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Wed, May 15, 2024 at 08:31:43AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Wed, May 15, 2024 at 10:14:09AM +0900, Michael Paquier wrote:\n> > +++ b/src/test/modules/test_dependencies_locks/specs/test_dependencies_locks.spec \n> > \n> > This introduces a module with only one single spec. I could get\n> > behind an extra module if splitting the tests into more specs makes\n> > sense or if there is a restriction on installcheck. However, for \n> > one spec file filed with a bunch of objects, and note that I'm OK to\n> > let this single spec grow more for this range of tests, it seems to me\n> > that this would be better under src/test/isolation/.\n> \n> Yeah, I was not sure about this one (the location is from take 2 mentioned\n> up-thread). I'll look at moving the tests to src/test/isolation/.\n\nPlease find attached v6 (only diff with v5 is moving the tests as suggested\nabove).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 15 May 2024 10:31:02 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hello Bertrand,\n\n15.05.2024 11:31, Bertrand Drouvot wrote:\n> On Wed, May 15, 2024 at 10:14:09AM +0900, Michael Paquier wrote:\n>\n>> + if (object_description)\n>> + ereport(ERROR, errmsg(\"%s does not exist\", object_description));\n>> + else\n>> + ereport(ERROR, errmsg(\"a dependent object does not ex\n>>\n>> This generates an internal error code. 
Is that intended?\n> Yes, it's like when say you want to create an object in a schema that does not\n> exist (see get_namespace_oid()).\n\nAFAICS, get_namespace_oid() throws not ERRCODE_INTERNAL_ERROR,\nbut ERRCODE_UNDEFINED_SCHEMA:\n\n# SELECT regtype('unknown_schema.type');\nERROR:  schema \"unknown_schema\" does not exist\nLINE 1: SELECT regtype('unknown_schema.type');\n                        ^\n# \\echo :LAST_ERROR_SQLSTATE\n3F000\n\nProbably, it's worth to avoid ERRCODE_INTERNAL_ERROR here in light of [1]\nand [2], as these errors are not that abnormal (not Assert-like).\n\n[1] https://www.postgresql.org/message-id/Zic_GNgos5sMxKoa%40paquier.xyz\n[2] https://commitfest.postgresql.org/48/4735/\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 19 May 2024 11:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Sun, May 19, 2024 at 11:00:00AM +0300, Alexander Lakhin wrote:\n> Hello Bertrand,\n> \n> Probably, it's worth to avoid ERRCODE_INTERNAL_ERROR here in light of [1]\n> and [2], as these errors are not that abnormal (not Assert-like).\n> \n> [1] https://www.postgresql.org/message-id/Zic_GNgos5sMxKoa%40paquier.xyz\n> [2] https://commitfest.postgresql.org/48/4735/\n> \n\nThanks for mentioning the above examples, I agree that it's worth to avoid\nERRCODE_INTERNAL_ERROR here: please find attached v7 that makes use of a new\nERRCODE: ERRCODE_DEPENDENT_OBJECTS_DOES_NOT_EXIST \n\nI thought about this name as it is close enough to the already existing \n\"ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST\" but I'm open to suggestion too.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 21 May 2024 07:22:57 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "On Wed, May 15, 2024 at 6:31 AM Bertrand Drouvot\n<[email protected]> wrote:\n> Please find attached v6 (only diff with v5 is moving the tests as suggested\n> above).\n\nI don't immediately know what to think about this patch. I've known\nabout this issue for a long time, but I didn't think this was how we\nwould fix it.\n\nI think the problem here is basically that we don't lock namespaces\n(schemas) when we're adding and removing things from the schema. So I\nassumed that if we ever did something about this, what we would do\nwould be add a bunch of calls to lock schemas to the appropriate parts\nof the code. What you've done here instead is add locking at a much\nlower level - whenever we are adding a dependency on an object, we\nlock the object. The advantage of that approach is that we definitely\nwon't miss anything. The disadvantage of that approach is that it\nmeans we have some very low-level code taking locks, which means it's\nnot obvious which operations are taking what locks. 
Maybe it could\neven result in some redundancy, like the higher-level code taking a\nlock also (possibly in a different mode) and then this code taking\nanother one.\n\nI haven't gone through the previous threads; it sounds like there's\nalready been some discussion of this, but I'm just telling you how it\nstrikes me on first look.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 May 2024 08:53:06 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Tue, May 21, 2024 at 08:53:06AM -0400, Robert Haas wrote:\n> On Wed, May 15, 2024 at 6:31 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> > Please find attached v6 (only diff with v5 is moving the tests as suggested\n> > above).\n> \n> I don't immediately know what to think about this patch.\n\nThanks for looking at it!\n\n> I've known about this issue for a long time, but I didn't think this was how we\n> would fix it.\n\nI started initially with [1] but it was focusing on function-schema only.\n\nThen I proposed [2] making use of a dirty snapshot when recording the dependency.\nBut this approach appeared to be \"scary\" and it was still failing to close\nsome race conditions.\n\nThen, Tom proposed another approach in [2] which is that \"creation DDL will have\nto take a lock on each referenced object that'd conflict with a lock taken by DROP\".\nThis is the one the current patch is trying to implement.\n\n> What you've done here instead is add locking at a much\n> lower level - whenever we are adding a dependency on an object, we\n> lock the object. The advantage of that approach is that we definitely\n> won't miss anything.\n\nRight, as there is much more than the ones related to schemas, for example:\n\n- function and arg type\n- function and return type\n- function and function\n- domain and domain\n- table and type\n- server and foreign data wrapper\n\nto name a few.\n\n> The disadvantage of that approach is that it\n> means we have some very low-level code taking locks, which means it's\n> not obvious which operations are taking what locks.\n\nRight, but the new operations are \"only\" the ones leading to recording or altering\na dependency.\n\n> Maybe it could\n> even result in some redundancy, like the higher-level code taking a\n> lock also (possibly in a different mode) and then this code taking\n> another one.\n\nThe one that is added here is in AccessShareLock mode. It could conflict with\nthe ones in AccessExclusiveLock means (If I'm not missing any):\n\n- AcquireDeletionLock(): which is exactly what we want\n- get_object_address()\n - get_object_address_rv()\n - ExecAlterObjectDependsStmt()\n - ExecRenameStmt()\n - ExecAlterObjectDependsStmt()\n - ExecAlterOwnerStmt()\n - RemoveObjects()\n- AlterPublication()\n\nI think there is 2 cases here:\n\nFirst case: the \"conflicting\" lock mode is for one of our own lock then LockCheckConflicts()\nwould report this as a NON conflict.\n\nSecond case: the \"conflicting\" lock mode is NOT for one of our own lock then LockCheckConflicts()\nwould report a conflict. 
But I've the feeling that the existing code would\nalready lock those sessions.\n\nOne example where it would be the case:\n\nSession 1: doing \"BEGIN; ALTER FUNCTION noschemas.foo2() SET SCHEMA alterschema\" would\nacquire the lock in AccessExclusiveLock during ExecAlterObjectSchemaStmt()->get_object_address()->LockDatabaseObject()\n(in the existing code and before the new call that would occur through changeDependencyFor()->depLockAndCheckObject()\nwith the patch in place).\n\nThen, session 2: doing \"alter function noschemas.foo2() owner to newrole;\"\nwould be locked in the existing code while doing ExecAlterOwnerStmt()->get_object_address()->LockDatabaseObject()).\n\nMeans that in this example, the new lock this patch is introducing would not be\nresponsible of session 2 beging locked.\n\nWas your concern about \"First case\" or \"Second case\" or both?\n\n[1]: https://www.postgresql.org/message-id/flat/5a9daaae-5538-b209-6279-e903c3ea2157%40amazon.com\n[2]: https://www.postgresql.org/message-id/flat/8369ff70-0e31-f194-2954-787f4d9e21dd%40amazon.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 22 May 2024 10:21:34 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "On Wed, May 22, 2024 at 6:21 AM Bertrand Drouvot\n<[email protected]> wrote:\n> I started initially with [1] but it was focusing on function-schema only.\n\nYeah, that's what I thought we would want to do. And then just extend\nthat to the other cases.\n\n> Then I proposed [2] making use of a dirty snapshot when recording the dependency.\n> But this approach appeared to be \"scary\" and it was still failing to close\n> some race conditions.\n\nThe current patch still seems to be using dirty snapshots for some\nreason, which struck me as a bit odd. My intuition is that if we're\nrelying on dirty snapshots to solve problems, we likely haven't solved\nthe problems correctly, which seems consistent with your statement\nabout \"failing to close some race conditions\". But I don't think I\nunderstand the situation well enough to be sure just yet.\n\n> Then, Tom proposed another approach in [2] which is that \"creation DDL will have\n> to take a lock on each referenced object that'd conflict with a lock taken by DROP\".\n> This is the one the current patch is trying to implement.\n\nIt's a clever idea, but I'm not sure that I like it.\n\n> I think there is 2 cases here:\n>\n> First case: the \"conflicting\" lock mode is for one of our own lock then LockCheckConflicts()\n> would report this as a NON conflict.\n>\n> Second case: the \"conflicting\" lock mode is NOT for one of our own lock then LockCheckConflicts()\n> would report a conflict. But I've the feeling that the existing code would\n> already lock those sessions.\n>\n> Was your concern about \"First case\" or \"Second case\" or both?\n\nThe second case, I guess. It's bad to take a weaker lock first and\nthen a stronger lock on the same object later, because it can lead to\ndeadlocks that would have been avoided if the stronger lock had been\ntaken at the outset. Here, it seems like things would happen in the\nother order: if we took two locks, we'd probably take the stronger\nlock in the higher-level code and then the weaker lock in the\ndependency code. That shouldn't break anything; it's just a bit\ninefficient. 
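\n\n(To spell out the deadlock hazard with the weaker-then-stronger ordering, using a throwaway table rather than anything in your patch:\n\n    -- session 1                                  -- session 2\n    BEGIN;                                        BEGIN;\n    LOCK TABLE t IN ACCESS SHARE MODE;\n                                                  LOCK TABLE t IN ACCESS SHARE MODE;\n    LOCK TABLE t IN ACCESS EXCLUSIVE MODE;        -- waits for session 2\n                                                  LOCK TABLE t IN ACCESS EXCLUSIVE MODE;  -- deadlock detected\n\nIf both sessions had taken ACCESS EXCLUSIVE up front, one would simply have waited for the other.)\n\n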
My concern was really more about the maintainability of\nthe code. I fear that if we add code that takes heavyweight locks in\nsurprising places, we might later find the behavior difficult to\nreason about.\n\nTom, what is your thought about that concern?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 22 May 2024 10:48:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Wed, May 22, 2024 at 10:48:12AM -0400, Robert Haas wrote:\n> On Wed, May 22, 2024 at 6:21 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> > I started initially with [1] but it was focusing on function-schema only.\n> \n> Yeah, that's what I thought we would want to do. And then just extend\n> that to the other cases.\n> \n> > Then I proposed [2] making use of a dirty snapshot when recording the dependency.\n> > But this approach appeared to be \"scary\" and it was still failing to close\n> > some race conditions.\n> \n> The current patch still seems to be using dirty snapshots for some\n> reason, which struck me as a bit odd. My intuition is that if we're\n> relying on dirty snapshots to solve problems, we likely haven't solved\n> the problems correctly, which seems consistent with your statement\n> about \"failing to close some race conditions\". But I don't think I\n> understand the situation well enough to be sure just yet.\n\nThe reason why we are using a dirty snapshot here is for the cases where we are\nrecording a dependency on a referenced object that we are creating at the same\ntime behind the scene (for example, creating a composite type while creating\na relation). Without the dirty snapshot, then the object we are creating behind\nthe scene (the composite type) would not be visible and we would wrongly assume\nthat it has been dropped.\n\nNote that the usage of the dirty snapshot is only when the object is first\nreported as \"non existing\" by the new ObjectByIdExist() function.\n\n> > I think there is 2 cases here:\n> >\n> > First case: the \"conflicting\" lock mode is for one of our own lock then LockCheckConflicts()\n> > would report this as a NON conflict.\n> >\n> > Second case: the \"conflicting\" lock mode is NOT for one of our own lock then LockCheckConflicts()\n> > would report a conflict. But I've the feeling that the existing code would\n> > already lock those sessions.\n> >\n> > Was your concern about \"First case\" or \"Second case\" or both?\n> \n> The second case, I guess. It's bad to take a weaker lock first and\n> then a stronger lock on the same object later, because it can lead to\n> deadlocks that would have been avoided if the stronger lock had been\n> taken at the outset.\n\nIn the example I shared up-thread that would be the opposite: the Session 1 would\ntake an AccessExclusiveLock lock on the object before taking an AccessShareLock\nduring changeDependencyFor().\n\n> Here, it seems like things would happen in the\n> other order: if we took two locks, we'd probably take the stronger\n> lock in the higher-level code and then the weaker lock in the\n> dependency code.\n\nYeah, I agree.\n\n> That shouldn't break anything; it's just a bit\n> inefficient.\n\nYeah, the second lock is useless in that case (like in the example up-thread).\n\n> My concern was really more about the maintainability of\n> the code. 
I fear that if we add code that takes heavyweight locks in\n> surprising places, we might later find the behavior difficult to\n> reason about.\n>\n\nI think I understand your concern about code maintainability but I'm not sure\nthat adding locks while recording a dependency is that surprising.\n\n> Tom, what is your thought about that concern?\n\n+1\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 23 May 2024 04:19:36 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "On Thu, May 23, 2024 at 12:19 AM Bertrand Drouvot\n<[email protected]> wrote:\n> The reason why we are using a dirty snapshot here is for the cases where we are\n> recording a dependency on a referenced object that we are creating at the same\n> time behind the scene (for example, creating a composite type while creating\n> a relation). Without the dirty snapshot, then the object we are creating behind\n> the scene (the composite type) would not be visible and we would wrongly assume\n> that it has been dropped.\n\nThe usual reason for using a dirty snapshot is that you want to see\nuncommitted work by other transactions. It sounds like you're saying\nyou just need to see uncommitted work by the same transaction. If\nthat's true, I think using HeapTupleSatisfiesSelf would be clearer. Or\nmaybe we just need to put CommandCounterIncrement() calls in the right\nplaces to avoid having the problem in the first place. Or maybe this\nis another sign that we're doing the work at the wrong level.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 May 2024 14:10:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Thu, May 23, 2024 at 02:10:54PM -0400, Robert Haas wrote:\n> On Thu, May 23, 2024 at 12:19 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> > The reason why we are using a dirty snapshot here is for the cases where we are\n> > recording a dependency on a referenced object that we are creating at the same\n> > time behind the scene (for example, creating a composite type while creating\n> > a relation). Without the dirty snapshot, then the object we are creating behind\n> > the scene (the composite type) would not be visible and we would wrongly assume\n> > that it has been dropped.\n> \n> The usual reason for using a dirty snapshot is that you want to see\n> uncommitted work by other transactions. It sounds like you're saying\n> you just need to see uncommitted work by the same transaction.\n\nRight.\n\n> If that's true, I think using HeapTupleSatisfiesSelf would be clearer.\n\nOh thanks! 
I did not know about the SNAPSHOT_SELF snapshot type (I should have\ncheck all the snapshot types first though) and that's exactly what is needed here.\n\nPlease find attached v8 making use of SnapshotSelf instead of a dirty snapshot.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 23 May 2024 21:12:28 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Thu, May 23, 2024 at 02:10:54PM -0400, Robert Haas wrote:\n> On Thu, May 23, 2024 at 12:19 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> > The reason why we are using a dirty snapshot here is for the cases where we are\n> > recording a dependency on a referenced object that we are creating at the same\n> > time behind the scene (for example, creating a composite type while creating\n> > a relation). Without the dirty snapshot, then the object we are creating behind\n> > the scene (the composite type) would not be visible and we would wrongly assume\n> > that it has been dropped.\n> \n> The usual reason for using a dirty snapshot is that you want to see\n> uncommitted work by other transactions. It sounds like you're saying\n> you just need to see uncommitted work by the same transaction. If\n> that's true, I think using HeapTupleSatisfiesSelf would be clearer. Or\n> maybe we just need to put CommandCounterIncrement() calls in the right\n> places to avoid having the problem in the first place. Or maybe this\n> is another sign that we're doing the work at the wrong level.\n\nThanks for having discussed your concern with Tom last week during pgconf.dev\nand shared the outcome to me. I understand your concern regarding code\nmaintainability with the current approach.\n\nPlease find attached v9 that:\n\n- Acquire the lock and check for object existence at an upper level, means before\ncalling recordDependencyOn() and recordMultipleDependencies().\n\n- Get rid of the SNAPSHOT_SELF snapshot usage and relies on\nCommandCounterIncrement() instead to ensure new entries are visible when\nwe check for object existence (for the cases where we create additional object\nbehind the scene: like composite type while creating a relation).\n\n- Add an assertion in recordMultipleDependencies() to ensure that we locked the\nobject before recording the dependency (to ensure we don't miss any cases now that\nthe lock is acquired at an upper level).\n\nA few remarks:\n\nMy first attempt has been to move eliminate_duplicate_dependencies() out of\nrecord_object_address_dependencies() so that we get the calls in this order:\n\neliminate_duplicate_dependencies()\ndepLockAndCheckObjects()\nrecord_object_address_dependencies()\n\nWhat I'm doing instead in v9 is to rename record_object_address_dependencies()\nto lock_record_object_address_dependencies() and add depLockAndCheckObjects()\nin it at the right place. That way the caller of\n[lock_]record_object_address_dependencies() is not responsible of calling\neliminate_duplicate_dependencies() (which would have been the case with my first\nattempt).\n\nWe need to setup the LOCKTAG before calling the new Assert in\nrecordMultipleDependencies(). 
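\n\nRoughly this shape (just a sketch of the idea, not a copy/paste from v9, and the exact \"do I already hold this lock?\" helper and its signature may differ):\n\n    #ifdef USE_ASSERT_CHECKING\n    {\n        LOCKTAG     tag;\n\n        /* shared object classes would use InvalidOid instead of MyDatabaseId */\n        SET_LOCKTAG_OBJECT(tag, MyDatabaseId,\n                           referenced->classId, referenced->objectId, 0);\n        Assert(LockHeldByMe(&tag, AccessShareLock));\n    }\n    #endif\n\n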
So, using \"#ifdef USE_ASSERT_CHECKING\" here to\nnot setup the LOCKTAG on a non Assert enabled build.\n\nv9 is more invasive (as it changes code in much more places) than v8 but it is\neasier to follow (as it is now clear where the new lock is acquired).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 6 Jun 2024 05:56:14 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "On Thu, Jun 6, 2024 at 1:56 AM Bertrand Drouvot\n<[email protected]> wrote:\n> v9 is more invasive (as it changes code in much more places) than v8 but it is\n> easier to follow (as it is now clear where the new lock is acquired).\n\nHmm, this definitely isn't what I had in mind. Possibly that's a sign\nthat what I had in mind was dumb, but for sure it's not what I\nimagined. What I thought you were going to do was add calls like\nLockDatabaseObject(NamespaceRelationId, schemaid, 0, AccessShareLock)\nin various places, or perhaps LockRelationOid(reloid,\nAccessShareLock), or whatever the case may be. Here you've got stuff\nlike this:\n\n- record_object_address_dependencies(&conobject, addrs_auto,\n- DEPENDENCY_AUTO);\n+ lock_record_object_address_dependencies(&conobject, addrs_auto,\n+ DEPENDENCY_AUTO);\n\n...which to me looks like the locking is still pushed down inside the\ndependency code.\n\nAnd you also have stuff like this:\n\n ObjectAddressSet(referenced, RelationRelationId, childTableId);\n+ depLockAndCheckObject(&referenced);\n recordDependencyOn(&depender, &referenced, DEPENDENCY_PARTITION_SEC);\n\nBut in depLockAndCheckObject you have:\n\n+ if (object->classId == RelationRelationId || object->classId ==\nAuthMemRelationId)\n+ return;\n\nThat doesn't seem right, because then it seems like the call isn't\ndoing anything, but there isn't really any reason for it to not be\ndoing anything. If we're dropping a dependency on a table, then it\nseems like we need to have a lock on that table. Presumably the reason\nwhy we don't end up with dangling dependencies in such cases now is\nbecause we're careful about doing LockRelation() in the right places,\nbut we're not similarly careful about other operations e.g.\nConstraintSetParentConstraint is called by DefineIndex which calls\ntable_open(childRelId, ...) first, but there's no logic in DefineIndex\nto lock the constraint.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 16:00:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 06, 2024 at 04:00:23PM -0400, Robert Haas wrote:\n> On Thu, Jun 6, 2024 at 1:56 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> > v9 is more invasive (as it changes code in much more places) than v8 but it is\n> > easier to follow (as it is now clear where the new lock is acquired).\n> \n> Hmm, this definitely isn't what I had in mind. Possibly that's a sign\n> that what I had in mind was dumb, but for sure it's not what I\n> imagined. 
What I thought you were going to do was add calls like\n> LockDatabaseObject(NamespaceRelationId, schemaid, 0, AccessShareLock)\n> in various places, or perhaps LockRelationOid(reloid,\n> AccessShareLock), or whatever the case may be.\n\nI see what you’re saying, doing things like:\n\nLockDatabaseObject(TypeRelationId, returnType, 0, AccessShareLock);\nin ProcedureCreate() for example.\n\n> Here you've got stuff\n> like this:\n> \n> - record_object_address_dependencies(&conobject, addrs_auto,\n> - DEPENDENCY_AUTO);\n> + lock_record_object_address_dependencies(&conobject, addrs_auto,\n> + DEPENDENCY_AUTO);\n> \n> ...which to me looks like the locking is still pushed down inside the\n> dependency code.\n\nYes but it’s now located in places where, I think, it’s easier to understand\nwhat’s going on (as compare to v8), except maybe for:\n\nrecordDependencyOnExpr()\nmakeOperatorDependencies()\nGenerateTypeDependencies()\nmakeParserDependencies()\nmakeDictionaryDependencies()\nmakeTSTemplateDependencies()\nmakeConfigurationDependencies()\n\nbut probably for:\n\nheap_create_with_catalog()\nStorePartitionKey()\nindex_create()\nAggregateCreate()\nCastCreate()\nCreateConstraintEntry()\nProcedureCreate()\nRangeCreate()\nInsertExtensionTuple()\nCreateTransform()\nCreateProceduralLanguage()\n\nThe reasons I keep it linked to the dependency code are:\n\n- To ensure we don’t miss anything (well, with the new Assert in place that’s\nprobably a tangential argument)\n\n- It’s not only about locking the object: it’s also about 1) verifying the object\nis pinned, 2) checking it still exists and 3) provide a description in the error\nmessage if we can (in case the object does not exist anymore). Relying on an\nalready build object (in the dependency code) avoid to 1) define the object(s)\none more time or 2) create new functions that would do the same as isObjectPinned()\nand getObjectDescription() with a different set of arguments.\n\nThat may sounds like weak arguments but it has been my reasoning.\n\nDo you still find the code hard to maintain with v9?\n\n> \n> And you also have stuff like this:\n> \n> ObjectAddressSet(referenced, RelationRelationId, childTableId);\n> + depLockAndCheckObject(&referenced);\n> recordDependencyOn(&depender, &referenced, DEPENDENCY_PARTITION_SEC);\n> \n> But in depLockAndCheckObject you have:\n> \n> + if (object->classId == RelationRelationId || object->classId ==\n> AuthMemRelationId)\n> + return;\n> \n> That doesn't seem right, because then it seems like the call isn't\n> doing anything, but there isn't really any reason for it to not be\n> doing anything. If we're dropping a dependency on a table, then it\n> seems like we need to have a lock on that table. Presumably the reason\n> why we don't end up with dangling dependencies in such cases now is\n> because we're careful about doing LockRelation() in the right places,\n\nYeah, that's what I think: we're already careful when we deal with relations.\n\n> but we're not similarly careful about other operations e.g.\n> ConstraintSetParentConstraint is called by DefineIndex which calls\n> table_open(childRelId, ...) first, but there's no logic in DefineIndex\n> to lock the constraint.\n\ntable_open(childRelId, ...) would lock any \"ALTER TABLE <childRelId> DROP CONSTRAINT\"\nalready. 
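\n\nFor example (made-up table and constraint names, and this behaves the same with or without the patch applied):\n\n    -- session 1: keeps the child relation locked\n    BEGIN;\n    SELECT * FROM child_tab;    -- takes AccessShareLock on child_tab\n\n    -- session 2: blocks, because dropping the constraint needs\n    -- AccessExclusiveLock on child_tab\n    ALTER TABLE child_tab DROP CONSTRAINT child_tab_a_key;\n\n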
Not sure I understand your concern here.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 08:41:36 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "On Fri, Jun 7, 2024 at 4:41 AM Bertrand Drouvot\n<[email protected]> wrote:\n> Do you still find the code hard to maintain with v9?\n\nI don't think it substantially changes my concerns as compared with\nthe earlier version.\n\n> > but we're not similarly careful about other operations e.g.\n> > ConstraintSetParentConstraint is called by DefineIndex which calls\n> > table_open(childRelId, ...) first, but there's no logic in DefineIndex\n> > to lock the constraint.\n>\n> table_open(childRelId, ...) would lock any \"ALTER TABLE <childRelId> DROP CONSTRAINT\"\n> already. Not sure I understand your concern here.\n\nI believe this is not true. This would take a lock on the table, not\nthe constraint itself.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 10:49:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 13, 2024 at 10:49:34AM -0400, Robert Haas wrote:\n> On Fri, Jun 7, 2024 at 4:41 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> > Do you still find the code hard to maintain with v9?\n> \n> I don't think it substantially changes my concerns as compared with\n> the earlier version.\n\nThanks for the feedback, I'll give it more thoughts.\n\n> \n> > > but we're not similarly careful about other operations e.g.\n> > > ConstraintSetParentConstraint is called by DefineIndex which calls\n> > > table_open(childRelId, ...) first, but there's no logic in DefineIndex\n> > > to lock the constraint.\n> >\n> > table_open(childRelId, ...) would lock any \"ALTER TABLE <childRelId> DROP CONSTRAINT\"\n> > already. Not sure I understand your concern here.\n> \n> I believe this is not true. This would take a lock on the table, not\n> the constraint itself.\n\nI agree that it would not lock the constraint itself. What I meant to say is that\n, nevertheless, the constraint can not be dropped. Indeed, the \"ALTER TABLE\"\nnecessary to drop the constraint (ALTER TABLE <childRelId> DROP CONSTRAINT) would\nbe locked by the table_open(childRelId, ...).\n\nThat's why I don't understand your concern with this particular example. But\nanyway, I'll double check your related concern:\n\n+ if (object->classId == RelationRelationId || object->classId ==\nAuthMemRelationId)\n+ return;\n\nin depLockAndCheckObject(). \n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:52:09 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "On Thu, Jun 13, 2024 at 12:52 PM Bertrand Drouvot\n<[email protected]> wrote:\n> > > table_open(childRelId, ...) would lock any \"ALTER TABLE <childRelId> DROP CONSTRAINT\"\n> > > already. Not sure I understand your concern here.\n> >\n> > I believe this is not true. This would take a lock on the table, not\n> > the constraint itself.\n>\n> I agree that it would not lock the constraint itself. 
What I meant to say is that\n> , nevertheless, the constraint can not be dropped. Indeed, the \"ALTER TABLE\"\n> necessary to drop the constraint (ALTER TABLE <childRelId> DROP CONSTRAINT) would\n> be locked by the table_open(childRelId, ...).\n\nAh, right. So, I was assuming that, with either this version of your\npatch or the earlier version, we'd end up locking the constraint\nitself. Was I wrong about that?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 14:27:45 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 13, 2024 at 02:27:45PM -0400, Robert Haas wrote:\n> On Thu, Jun 13, 2024 at 12:52 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> > > > table_open(childRelId, ...) would lock any \"ALTER TABLE <childRelId> DROP CONSTRAINT\"\n> > > > already. Not sure I understand your concern here.\n> > >\n> > > I believe this is not true. This would take a lock on the table, not\n> > > the constraint itself.\n> >\n> > I agree that it would not lock the constraint itself. What I meant to say is that\n> > , nevertheless, the constraint can not be dropped. Indeed, the \"ALTER TABLE\"\n> > necessary to drop the constraint (ALTER TABLE <childRelId> DROP CONSTRAINT) would\n> > be locked by the table_open(childRelId, ...).\n> \n> Ah, right. So, I was assuming that, with either this version of your\n> patch or the earlier version, we'd end up locking the constraint\n> itself. Was I wrong about that?\n\nThe child contraint itself is not locked when going through\nConstraintSetParentConstraint().\n\nWhile at it, let's look at a full example and focus on your concern.\n\nLet's do that with this gdb file:\n\n\"\n$ cat gdb.txt\nb dependency.c:1542\ncommand 1\n printf \"Will return for: classId %d and objectId %d\\n\", object->classId, object->objectId\n c\nend\nb dependency.c:1547 if object->classId == 2606\ncommand 2\n printf \"Will lock constraint: classId %d and objectId %d\\n\", object->classId, object->objectId\n c\nend\n\"\n\nknowing that:\n\n\"\nLine 1542 is the return here in depLockAndCheckObject() (your concern):\n\n if (object->classId == RelationRelationId || object->classId == AuthMemRelationId)\n return;\n\nLine 1547 is the lock here in depLockAndCheckObject():\n\n /* assume we should lock the whole object not a sub-object */\n LockDatabaseObject(object->classId, object->objectId, 0, AccessShareLock);\n\"\n\nSo, with gdb attached to a session let's:\n\n1. Create the parent relation\n\nCREATE TABLE upsert_test (\n a INT,\n b TEXT\n) PARTITION BY LIST (a);\n\ngdb produces:\n\n---\nWill return for: classId 1259 and objectId 16384\nWill return for: classId 1259 and objectId 16384\n---\n\nOid 16384 is upsert_test, so I think the return (dependency.c:1542) is fine as\nwe are creating the object (it can't be dropped as not visible to anyone else).\n\n2. Create another relation (will be the child)\n\nCREATE TABLE upsert_test_2 (b TEXT, a int);\n\ngdb produces:\n\n---\nWill return for: classId 1259 and objectId 16391\nWill return for: classId 1259 and objectId 16394\nWill return for: classId 1259 and objectId 16394\nWill return for: classId 1259 and objectId 16391\n---\n\nOid 16391 is upsert_test_2\nOid 16394 is pg_toast_16391\n\nso I think the return (dependency.c:1542) is fine as we are creating those\nobjects (can't be dropped as not visible to anyone else).\n\n3. 
Attach the partition\n\nALTER TABLE upsert_test ATTACH PARTITION upsert_test_2 FOR VALUES IN (2);\n\ngdb produces:\n\n---\nWill return for: classId 1259 and objectId 16384\n---\n\nThat's fine because we'd already had locked the relation 16384 through\nAlterTableLookupRelation()->RangeVarGetRelidExtended()->LockRelationOid().\n\n4. Add a constraint on the child relation\n\nALTER TABLE upsert_test_2 add constraint bdtc2 UNIQUE (a);\n\ngdb produces:\n\n---\nWill return for: classId 1259 and objectId 16391\nWill lock constraint: classId 2606 and objectId 16397\n---\n\nThat's fine because we'd already had locked the relation 16391 through\nAlterTableLookupRelation()->RangeVarGetRelidExtended()->LockRelationOid().\n\nOid 16397 is the constraint we're creating (bdtc2).\n\n5. Add a constraint on the parent relation (this goes through\nConstraintSetParentConstraint())\n\nALTER TABLE upsert_test add constraint bdtc1 UNIQUE (a);\n\ngdb produces:\n\n---\nWill return for: classId 1259 and objectId 16384\nWill lock constraint: classId 2606 and objectId 16399\nWill return for: classId 1259 and objectId 16398\nWill return for: classId 1259 and objectId 16391\nWill lock constraint: classId 2606 and objectId 16399\nWill return for: classId 1259 and objectId 16391\n---\n\nRegarding \"Will return for: classId 1259 and objectId 16384\":\nThat's fine because we'd already had locked the relation 16384 through\nAlterTableLookupRelation()->RangeVarGetRelidExtended()->LockRelationOid().\n\nRegarding \"Will lock constraint: classId 2606 and objectId 16399\":\nOid 16399 is the constraint that we're creating.\n\nRegarding \"Will return for: classId 1259 and objectId 16398\":\nThat's fine because Oid 16398 is an index that we're creating while creating\nthe constraint (so the index can't be dropped as not visible to anyone else).\n\nRegarding \"Will return for: classId 1259 and objectId 16391\":\nThat's fine because 16384 we'd be already locked (as mentioned above). And\nI think that's fine because trying to drop \"upsert_test_2\" (aka 16391) would produce\nRemoveRelations()->RangeVarGetRelidExtended()->RangeVarCallbackForDropRelation()\n->LockRelationOid(relid=16384, lockmode=8) and so would be locked.\n\nRegarding this example, I don't think that the return in depLockAndCheckObject():\n\n\"\nif (object->classId == RelationRelationId || object->classId == AuthMemRelationId)\n return;\n\"\n\nis an issue. Indeed, the example above shows it would return for an object that\nwe'd be creating (so not visible to anyone else) or for an object that we'd\nalready have locked.\n\nIs it an issue outside of this example?: I've the feeling it's not as we're\nalready careful when we deal with relations. 
That said, to be on the safe side\nwe could get rid of this return and make use of LockRelationOid() instead.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 14 Jun 2024 07:54:52 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 13, 2024 at 04:52:09PM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Thu, Jun 13, 2024 at 10:49:34AM -0400, Robert Haas wrote:\n> > On Fri, Jun 7, 2024 at 4:41 AM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > > Do you still find the code hard to maintain with v9?\n> > \n> > I don't think it substantially changes my concerns as compared with\n> > the earlier version.\n> \n> Thanks for the feedback, I'll give it more thoughts.\n\nPlease find attached v10 that puts the object locking outside of the dependency\ncode.\n\nIt's done that way except for:\n\nrecordDependencyOnExpr() \nrecordDependencyOnSingleRelExpr()\nmakeConfigurationDependencies()\n\nThe reason is that I think that it would need part of the logic that his inside\nthe above functions to be duplicated and I'm not sure that's worth it.\n\nFor example, we would probably need to:\n\n- make additional calls to find_expr_references_walker() \n- make additional scan on the config map\n\nIt's also not done outside of recordDependencyOnCurrentExtension() as:\n\n1. I think it is clear enough that way (as it is clear that the lock is taken on\na ExtensionRelationId object class).\n2. why to include \"commands/extension.h\" in more places (locking would\ndepend of \"creating_extension\" and \"CurrentExtensionObject\"), while 1.?\n\nRemarks:\n\n1. depLockAndCheckObject() and friends in v9 have been renamed to\nLockNotPinnedObject() and friends (as the vast majority of their calls are now\ndone outside of the dependency code).\n\n2. regarding the concern around RelationRelationId (discussed in [1]), v10 adds\na comment \"XXX Do we need a lock for RelationRelationId?\" at the places we\nmay want to lock this object class. I did not think about it yet (but will do),\nI only added this comment at multiple places.\n\nI think that v10 is easier to follow (as compare to v9) as we can easily see for\nwhich object class we'll put a lock on.\n\nThoughts?\n\n[1]: https://www.postgresql.org/message-id/Zmv3TPfJAyQXhIdu%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 17 Jun 2024 10:50:56 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "On Fri, Jun 14, 2024 at 3:54 AM Bertrand Drouvot\n<[email protected]> wrote:\n> > Ah, right. So, I was assuming that, with either this version of your\n> > patch or the earlier version, we'd end up locking the constraint\n> > itself. Was I wrong about that?\n>\n> The child contraint itself is not locked when going through\n> ConstraintSetParentConstraint().\n>\n> While at it, let's look at a full example and focus on your concern.\n\nI'm not at the point of having a concern yet, honestly. I'm trying to\nunderstand the design ideas. 
The commit message just says that we take\na conflicting lock, but it doesn't mention which object types that\nprinciple does or doesn't apply to. I think the idea of skipping it\nfor cases where it's redundant with the relation lock could be the\nright idea, but if that's what we're doing, don't we need to explain\nthe principle somewhere? And shouldn't we also apply it across all\nobject types that have the same property?\n\nAlong the same lines:\n\n+ /*\n+ * Those don't rely on LockDatabaseObject() when being dropped (see\n+ * AcquireDeletionLock()). Also it looks like they can not produce\n+ * orphaned dependent objects when being dropped.\n+ */\n+ if (object->classId == RelationRelationId || object->classId ==\nAuthMemRelationId)\n+ return;\n\n\"It looks like X cannot happen\" is not confidence-inspiring. At the\nvery least, a better comment is needed here. But also, that relation\nhas no exception for AuthMemRelationId, only for RelationRelationId.\nAnd also, the exception for RelationRelationId doesn't imply that we\ndon't need a conflicting lock here: the special case for\nRelationRelationId in AcquireDeletionLock() is necessary because the\nlock tag format is different for relations than for other database\nobjects, not because we don't need a lock at all. If the handling here\nwere really symmetric with what AcquireDeletionLock(), the coding\nwould be to call either LockRelationOid() or LockDatabaseObject()\ndepending on whether classid == RelationRelationId. Now, that isn't\nactually necessary, because we already have relation-locking calls\nelsewhere, but my point is that the rationale this commit gives for\nWHY it isn't necessary seems to me to be wrong in multiple ways.\n\nSo to try to sum up here: I'm not sure I agree with this design. But I\nalso feel like the design is not as clear and consistently implemented\nas it could be. So even if I just ignored the question of whether it's\nthe right design, it feels like we're a ways from having something\npotentially committable here, because of issues like the ones I\nmentioned in the last paragraph.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jun 2024 12:24:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 17, 2024 at 12:24:46PM -0400, Robert Haas wrote:\n> On Fri, Jun 14, 2024 at 3:54 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> > > Ah, right. So, I was assuming that, with either this version of your\n> > > patch or the earlier version, we'd end up locking the constraint\n> > > itself. Was I wrong about that?\n> >\n> > The child contraint itself is not locked when going through\n> > ConstraintSetParentConstraint().\n> >\n> > While at it, let's look at a full example and focus on your concern.\n> \n> I'm not at the point of having a concern yet, honestly. I'm trying to\n> understand the design ideas. The commit message just says that we take\n> a conflicting lock, but it doesn't mention which object types that\n> principle does or doesn't apply to. I think the idea of skipping it\n> for cases where it's redundant with the relation lock could be the\n> right idea, but if that's what we're doing, don't we need to explain\n> the principle somewhere? And shouldn't we also apply it across all\n> object types that have the same property?\n\nYeah, I still need to deeply study this area and document it. 
\n\n> Along the same lines:\n> \n> + /*\n> + * Those don't rely on LockDatabaseObject() when being dropped (see\n> + * AcquireDeletionLock()). Also it looks like they can not produce\n> + * orphaned dependent objects when being dropped.\n> + */\n> + if (object->classId == RelationRelationId || object->classId ==\n> AuthMemRelationId)\n> + return;\n> \n> \"It looks like X cannot happen\" is not confidence-inspiring.\n\nYeah, it is not. It is just a \"feeling\" that I need to work on to remove\nany ambiguity and/or adjust the code as needed.\n\n> At the\n> very least, a better comment is needed here. But also, that relation\n> has no exception for AuthMemRelationId, only for RelationRelationId.\n> And also, the exception for RelationRelationId doesn't imply that we\n> don't need a conflicting lock here: the special case for\n> RelationRelationId in AcquireDeletionLock() is necessary because the\n> lock tag format is different for relations than for other database\n> objects, not because we don't need a lock at all. If the handling here\n> were really symmetric with what AcquireDeletionLock(), the coding\n> would be to call either LockRelationOid() or LockDatabaseObject()\n> depending on whether classid == RelationRelationId.\n\nAgree.\n\n> Now, that isn't\n> actually necessary, because we already have relation-locking calls\n> elsewhere, but my point is that the rationale this commit gives for\n> WHY it isn't necessary seems to me to be wrong in multiple ways.\n\nAgree. I'm not done with that part yet (should have made it more clear).\n\n> So to try to sum up here: I'm not sure I agree with this design. But I\n> also feel like the design is not as clear and consistently implemented\n> as it could be. So even if I just ignored the question of whether it's\n> the right design, it feels like we're a ways from having something\n> potentially committable here, because of issues like the ones I\n> mentioned in the last paragraph.\n> \n\nAgree. I'll now move on with the \"XXX Do we need a lock for RelationRelationId?\"\ncomments that I put in v10 (see [1]) and study all the cases around them.\n\nOnce done, I think that it will easier to 1.remove ambiguity, 2.document and\n3.do the \"right\" thing regarding the RelationRelationId object class.\n\n[1]: https://www.postgresql.org/message-id/ZnAVEBhlGvpDDVOD%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 17 Jun 2024 17:57:05 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nIf the dependency is more, this can hit max_locks_per_transaction\nlimit very fast. Won't it? 
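\n\n(If it helps to see what those extra entries actually are, pg_locks can also be broken down for the current backend, e.g.:\n\n    SELECT locktype, mode, count(*)\n    FROM pg_locks\n    WHERE pid = pg_backend_pid()\n    GROUP BY locktype, mode\n    ORDER BY count(*) DESC;\n\nThe counts below are just a plain count(*) over the whole view.)\n\n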
I just tried this little experiment with\nand without patch.\n\n1) created some UDTs (I have just chosen some random number, 15)\ndo $$\ndeclare\n i int := 1;\n type_name text;\nbegin\n while i <= 15 loop\n type_name := format('ct_%s', i);\n\n -- check if the type already exists\n if not exists (\n select 1\n from pg_type\n where typname = type_name\n ) then\n execute format('create type %I as (f1 INT, f2 TEXT);', type_name);\n end if;\n\n i := i + 1;\n end loop;\nend $$;\n\n2) started a transaction and tried creating a table that uses all udts\ncreated above:\nbegin;\ncreate table dep_tab(a ct_1, b ct_2, c ct_3, d ct_4, e ct_5, f ct_6, g\nct_7, h ct_8, i ct_9, j ct_10, k ct_11, l ct_12, m ct_13, n ct_14, o\nct_15);\n\n3) checked the pg_locks entries inside the transaction both with and\nwithout patch:\n\n-- with patch:\nselect count(*) from pg_locks;\n count\n-------\n 23\n(1 row)\n\n-- without patch:\nselect count(*) from pg_locks;\n count\n-------\n 7\n(1 row)\n\nWith patch, it increased by 3 times. Won't that create a problem if\nmany concurrent sessions engage in similar activity?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Wed, 19 Jun 2024 17:19:28 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "On Wed, Jun 19, 2024 at 7:49 AM Ashutosh Sharma <[email protected]> wrote:\n> If the dependency is more, this can hit max_locks_per_transaction\n> limit very fast.\n\nYour experiment doesn't support this conclusion. Very few users would\nhave 15 separate user-defined types in the same table, and even if\nthey did, and dropped the table, using 23 locks is no big deal. By\ndefault, max_locks_per_transaction is 64, so the user would need to\nhave more like 45 separate user-defined types in the same table in\norder to use more than 64 locks. So, yes, it is possible that if every\nbackend in the system were simultaneously trying to drop a table and\nall of those tables had an average of at least 45 or so user-defined\ntypes, all different from each other, you might run out of lock table\nspace.\n\nBut probably nobody will ever do that in real life, and if they did,\nthey could just raise max_locks_per_transaction.\n\nWhen posting about potential problems like this, it is a good idea to\nfirst do a careful thought experiment to assess how realistic the\nproblem is. I would consider an issue like this serious if there were\na realistic scenario under which a small number of backends could\nexhaust the lock table for the whole system, but I think you can see\nthat this isn't the case here. Even your original scenario is more\nextreme than what most people are likely to hit in real life, and it\nonly uses 23 locks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jun 2024 08:20:08 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi Robert,\n\nOn Wed, Jun 19, 2024 at 5:50 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Jun 19, 2024 at 7:49 AM Ashutosh Sharma <[email protected]> wrote:\n> > If the dependency is more, this can hit max_locks_per_transaction\n> > limit very fast.\n>\n> Your experiment doesn't support this conclusion. Very few users would\n> have 15 separate user-defined types in the same table, and even if\n> they did, and dropped the table, using 23 locks is no big deal. 
By\n> default, max_locks_per_transaction is 64, so the user would need to\n> have more like 45 separate user-defined types in the same table in\n> order to use more than 64 locks. So, yes, it is possible that if every\n> backend in the system were simultaneously trying to drop a table and\n> all of those tables had an average of at least 45 or so user-defined\n> types, all different from each other, you might run out of lock table\n> space.\n>\n> But probably nobody will ever do that in real life, and if they did,\n> they could just raise max_locks_per_transaction.\n>\n> When posting about potential problems like this, it is a good idea to\n> first do a careful thought experiment to assess how realistic the\n> problem is. I would consider an issue like this serious if there were\n> a realistic scenario under which a small number of backends could\n> exhaust the lock table for the whole system, but I think you can see\n> that this isn't the case here. Even your original scenario is more\n> extreme than what most people are likely to hit in real life, and it\n> only uses 23 locks.\n>\n\nI agree that based on the experiment I shared (which is somewhat\nunrealistic), this doesn't seem to have any significant implications.\nHowever, I was concerned that it could potentially increase the usage\nof max_locks_per_transaction, which is why I wanted to mention it\nhere. Nonetheless, my experiment did not reveal any serious issues\nrelated to this. Sorry for the noise.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Wed, 19 Jun 2024 19:05:36 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 17, 2024 at 05:57:05PM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Mon, Jun 17, 2024 at 12:24:46PM -0400, Robert Haas wrote:\n> > So to try to sum up here: I'm not sure I agree with this design. But I\n> > also feel like the design is not as clear and consistently implemented\n> > as it could be. So even if I just ignored the question of whether it's\n> > the right design, it feels like we're a ways from having something\n> > potentially committable here, because of issues like the ones I\n> > mentioned in the last paragraph.\n> > \n> \n> Agree. I'll now move on with the \"XXX Do we need a lock for RelationRelationId?\"\n> comments that I put in v10 (see [1]) and study all the cases around them.\n\nA. I went through all of them, did some testing for all, and reached the\nconclusion that we must be in one of the two following cases that would already\nprevent the relation to be dropped:\n\n1. The relation is already locked (could be an existing relation or a relation\nthat we are creating).\n\n2. The relation is protected indirectly (i.e an index protected by a lock on\nits table, a table protected by a lock on a function that depends of\nthe table...).\n\nSo we don't need to add a lock if this is a RelationRelationId object class for\nthe cases above.\n\nAs a consequence, I replaced the \"XXX\" related comments that were in v10 by\nanother set of comments in v11 (attached) like \"No need to call LockRelationOid()\n(through LockNotPinnedObject())....\". Reason is to make it clear in the code\nand also to ease the review.\n\nB. 
I explained in [1] (while sharing v10) that the object locking is now outside\nof the dependency code except for (and I explained why):\n\nrecordDependencyOnExpr() \nrecordDependencyOnSingleRelExpr()\nmakeConfigurationDependencies()\n\nSo I also did some testing, on the RelationRelationId case, for those and I\nreached the same conclusion as the one shared above.\n\nFor A. and B. the testing has been done by adding a \"ereport(WARNING..\" at\nthose places when a RelationRelationId is involved. Then I run \"make check\"\nand went to the failed tests (output were not the expected ones due to the\nextra \"WARNING\"), reproduced them with gdb and checked for the lock on the\nrelation producing the \"WARNING\". All of those were linked to 1. or 2.\n\nNote that adding an assertion on an existing lock would not work for the cases\ndescribed in 2.\n\nSo, I'm now confident that we must be in 1. or 2. but it's also possible\nthat I've missed some cases (though I think the way the testing has been done is\nnot that weak).\n\nTo sum up, I did not see any cases that did not lead to 1. or 2., so I think\nit's safe to not add an extra lock for the RelationRelationId case. If, for any\nreason, there is still cases that are outside 1. or 2. then they may lead to\norphaned dependencies linked to the RelationRelationId class. I think that's\nfine to take that \"risk\" given that a. that would not be worst than currently\nand b. I did not see any of those in our fleet currently (while I have seen a non\nnegligible amount outside of the RelationRelationId case).\n\n> Once done, I think that it will easier to 1.remove ambiguity, 2.document and\n> 3.do the \"right\" thing regarding the RelationRelationId object class.\n> \n\nPlease find attached v11, where I added more detailed comments in the commit\nmessage and also in the code (I also removed the useless check on\nAuthMemRelationId).\n\n[1]: https://www.postgresql.org/message-id/ZnAVEBhlGvpDDVOD%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 19 Jun 2024 14:11:50 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 19, 2024 at 02:11:50PM +0000, Bertrand Drouvot wrote:\n> To sum up, I did not see any cases that did not lead to 1. or 2., so I think\n> it's safe to not add an extra lock for the RelationRelationId case. If, for any\n> reason, there is still cases that are outside 1. or 2. then they may lead to\n> orphaned dependencies linked to the RelationRelationId class. I think that's\n> fine to take that \"risk\" given that a. that would not be worst than currently\n> and b. I did not see any of those in our fleet currently (while I have seen a non\n> negligible amount outside of the RelationRelationId case).\n\nAnother thought for the RelationRelationId class case: we could check if there\nis a lock first and if there is no lock then acquire one. That way that would\nensure the relation is always locked (so no \"risk\" anymore), but OTOH it may\nadd \"unecessary\" locking (see 2. 
mentioned previously).\n\nI think I do prefer this approach to be on the safe side of thing, what do\nyou think?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 21 Jun 2024 13:22:43 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Fri, Jun 21, 2024 at 01:22:43PM +0000, Bertrand Drouvot wrote:\n> Another thought for the RelationRelationId class case: we could check if there\n> is a lock first and if there is no lock then acquire one. That way that would\n> ensure the relation is always locked (so no \"risk\" anymore), but OTOH it may\n> add \"unecessary\" locking (see 2. mentioned previously).\n\nPlease find attached v12 implementing this idea for the RelationRelationId class\ncase. As mentioned, it may add unnecessary locking for 2. but I think that's\nworth it to ensure that we are always on the safe side of thing. This idea is\nimplemented in LockNotPinnedObjectById().\n\nA few remarks:\n\n- there is one place where the relation is not visible (even if\nCommandCounterIncrement() is used). That's in TypeCreate(), because the new \nrelation Oid is _not_ added to pg_class yet.\nIndeed, in heap_create_with_catalog(), AddNewRelationType() is called before\nAddNewRelationTuple()). I put a comment in this part of the code explaining why\nit's not necessary to call LockRelationOid() here.\n\n- some namespace related stuff is removed from \"test_oat_hooks/expected/alter_table.out\".\nThat's due to the logic in cachedNamespacePath() and the fact that the same\nnamespace related stuff is added prior in alter_table.out.\n\n- the patch touches 37 .c files, but that's mainly due to the fact that\nLockNotPinnedObjectById() has to be called in a lot of places.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 26 Jun 2024 10:24:41 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 26, 2024 at 10:24:41AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Fri, Jun 21, 2024 at 01:22:43PM +0000, Bertrand Drouvot wrote:\n> > Another thought for the RelationRelationId class case: we could check if there\n> > is a lock first and if there is no lock then acquire one. That way that would\n> > ensure the relation is always locked (so no \"risk\" anymore), but OTOH it may\n> > add \"unecessary\" locking (see 2. mentioned previously).\n> \n> Please find attached v12 implementing this idea for the RelationRelationId class\n> case. As mentioned, it may add unnecessary locking for 2. but I think that's\n> worth it to ensure that we are always on the safe side of thing. This idea is\n> implemented in LockNotPinnedObjectById().\n\nPlease find attached v13, mandatory rebase due to 0cecc908e97. 
In passing, make\nuse of CheckRelationOidLockedByMe() added in 0cecc908e97.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 1 Jul 2024 09:39:17 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 01, 2024 at 09:39:17AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Wed, Jun 26, 2024 at 10:24:41AM +0000, Bertrand Drouvot wrote:\n> > Hi,\n> > \n> > On Fri, Jun 21, 2024 at 01:22:43PM +0000, Bertrand Drouvot wrote:\n> > > Another thought for the RelationRelationId class case: we could check if there\n> > > is a lock first and if there is no lock then acquire one. That way that would\n> > > ensure the relation is always locked (so no \"risk\" anymore), but OTOH it may\n> > > add \"unecessary\" locking (see 2. mentioned previously).\n> > \n> > Please find attached v12 implementing this idea for the RelationRelationId class\n> > case. As mentioned, it may add unnecessary locking for 2. but I think that's\n> > worth it to ensure that we are always on the safe side of thing. This idea is\n> > implemented in LockNotPinnedObjectById().\n> \n> Please find attached v13, mandatory rebase due to 0cecc908e97. In passing, make\n> use of CheckRelationOidLockedByMe() added in 0cecc908e97.\n\nPlease find attached v14, mandatory rebase due to 65b71dec2d5.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 2 Jul 2024 05:56:23 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Tue, Jul 02, 2024 at 05:56:23AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Mon, Jul 01, 2024 at 09:39:17AM +0000, Bertrand Drouvot wrote:\n> > Hi,\n> > \n> > On Wed, Jun 26, 2024 at 10:24:41AM +0000, Bertrand Drouvot wrote:\n> > > Hi,\n> > > \n> > > On Fri, Jun 21, 2024 at 01:22:43PM +0000, Bertrand Drouvot wrote:\n> > > > Another thought for the RelationRelationId class case: we could check if there\n> > > > is a lock first and if there is no lock then acquire one. That way that would\n> > > > ensure the relation is always locked (so no \"risk\" anymore), but OTOH it may\n> > > > add \"unecessary\" locking (see 2. mentioned previously).\n> > > \n> > > Please find attached v12 implementing this idea for the RelationRelationId class\n> > > case. As mentioned, it may add unnecessary locking for 2. but I think that's\n> > > worth it to ensure that we are always on the safe side of thing. This idea is\n> > > implemented in LockNotPinnedObjectById().\n> > \n> > Please find attached v13, mandatory rebase due to 0cecc908e97. In passing, make\n> > use of CheckRelationOidLockedByMe() added in 0cecc908e97.\n> \n> Please find attached v14, mandatory rebase due to 65b71dec2d5.\n\nIn [1] I mentioned that the object locking logic has been put outside of the \ndependency code except for:\n\nrecordDependencyOnExpr() \nrecordDependencyOnSingleRelExpr()\nmakeConfigurationDependencies()\n\nPlease find attached v15 that also removes the logic outside of the 3 above \nfunctions. 
Well, for recordDependencyOnExpr() and recordDependencyOnSingleRelExpr()\nthat's now done in find_expr_references_walker(): It's somehow still in the\ndependency code but at least it is now clear which objects we are locking (and\nI'm not sure how we could do better than that for those 2 functions).\n\nThere is still one locking call in recordDependencyOnCurrentExtension() but I\nthink this one is clear enough and does not need to be put outside (for\nthe same reason mentioned in [1]).\n\nSo, to sum up:\n\nA. Locking is now done exclusively with LockNotPinnedObject(Oid classid, Oid objid)\nso that it's now always clear what object we want to acquire a lock for. It means\nwe are not manipulating directly an object address or a list of objects address\nas it was the case when the locking was done \"directly\" within the dependency code.\n\nB. A special case is done for objects that belong to the RelationRelationId class.\nFor those, we should be in one of the two following cases that would already\nprevent the relation to be dropped:\n\n 1. The relation is already locked (could be an existing relation or a relation\n that we are creating).\n\n 2. The relation is protected indirectly (i.e an index protected by a lock on\n its table, a table protected by a lock on a function that depends the table...)\n\nTo avoid any risks for the RelationRelationId class case, we acquire a lock if\nthere is none. That may add unnecessary lock for 2. but that seems worth it. \n\n[1]: https://www.postgresql.org/message-id/ZnAVEBhlGvpDDVOD%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 10 Jul 2024 07:31:06 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" }, { "msg_contents": "Hi,\n\nOn Wed, Jul 10, 2024 at 07:31:06AM +0000, Bertrand Drouvot wrote:\n> So, to sum up:\n> \n> A. Locking is now done exclusively with LockNotPinnedObject(Oid classid, Oid objid)\n> so that it's now always clear what object we want to acquire a lock for. It means\n> we are not manipulating directly an object address or a list of objects address\n> as it was the case when the locking was done \"directly\" within the dependency code.\n> \n> B. A special case is done for objects that belong to the RelationRelationId class.\n> For those, we should be in one of the two following cases that would already\n> prevent the relation to be dropped:\n> \n> 1. The relation is already locked (could be an existing relation or a relation\n> that we are creating).\n> \n> 2. The relation is protected indirectly (i.e an index protected by a lock on\n> its table, a table protected by a lock on a function that depends the table...)\n> \n> To avoid any risks for the RelationRelationId class case, we acquire a lock if\n> there is none. That may add unnecessary lock for 2. but that seems worth it. \n> \n\nPlease find attached v16, mandatory rebase due to 80ffcb8427.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 19 Aug 2024 15:35:14 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid orphaned objects dependencies, take 3" } ]
[ { "msg_contents": "Hi,\n\nWhile working on the 'reduce nodeToString output' patch, I noticed\nthat my IDE marked one field in the TidScanState node as 'unused'.\nAfter checking this seemed to be accurate, and I found a few more such\nfields in Node structs.\n\nPFA some patches that clean this up: 0001 is plain removal of fields\nthat are not accessed anywhere anymore, 0002 and up clean up fields\nthat are effectively write-only, with no effective use inside\nPostgreSQL's own code, and no effective usage found on Debian code\nsearch, nor Github code search.\n\nI'm quite confident about the correctness of patches 1 and 3 (no usage\nat all, and newly introduced with no meaningful uses), while patches\n2, 4, and 5 could be considered 'as designed'.\nFor those last ones I have no strong opinion for removal or against\nkeeping them around, this is just to point out we can remove the\nfields, as nobody seems to be using them.\n\n/cc Tom Lane and Etsuro Fujita: 2 and 4 were introduced with your\ncommit afb9249d in 2015.\n/cc Amit Kapila: 0003 was introduced with your spearheaded commit\n6185c973 this year.\n\nKind regards,\n\nMatthias van de Meent\n\n\n0001 removes two old fields that are not in use anywhere anymore, but\nat some point these were used.\n\n0002/0004 remove fields in ExecRowMark which were added for FDWs to\nuse, but there are no FDWs which use this: I could only find two FDWs\nwho implement RefetchForeignRow, one being BlackHoleFDW, and the other\na no-op implementation in kafka_fdw [0]. We also don't seem to have\nany testing on this feature.\n\n0003 drops the input_finfo field on the new JsonExprState struct. It\nwasn't read anywhere, so keeping it around makes little sense IMO.\n\n0005 drops field DeallocateStmt.isall: the value of the field is\nimplied by !name, and that field was used as such.\n\n\n[0] https://github.com/cohenjo/kafka_fdw/blob/master/src/kafka_fdw.c#L1793", "msg_date": "Mon, 22 Apr 2024 17:21:22 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Cleanup: remove unused fields from nodes" }, { "msg_contents": "Matthias van de Meent <[email protected]> writes:\n> 0001 removes two old fields that are not in use anywhere anymore, but\n> at some point these were used.\n\n+1. They're not being initialized, so they're useless and confusing.\n\n> 0002/0004 remove fields in ExecRowMark which were added for FDWs to\n> use, but there are no FDWs which use this: I could only find two FDWs\n> who implement RefetchForeignRow, one being BlackHoleFDW, and the other\n> a no-op implementation in kafka_fdw [0]. We also don't seem to have\n> any testing on this feature.\n\nI'm kind of down on removing either of these. ermExtra is explicitly\nintended for extensions to use, and just because we haven't found any\nusers doesn't mean there aren't any, or might not be next week.\nSimilarly, ermActive seems like it's at least potentially useful:\nis there another way for onlookers to discover that state?\n\n> 0003 drops the input_finfo field on the new JsonExprState struct. It\n> wasn't read anywhere, so keeping it around makes little sense IMO.\n\n+1. The adjacent input_fcinfo field has this info if anyone needs it.\n\n> 0005 drops field DeallocateStmt.isall: the value of the field is\n> implied by !name, and that field was used as such.\n\nSeems reasonable.\n\nI think it would be a good idea to push 0003 for v17, just so nobody\ngrows an unnecessary dependency on that field. 
0001 and 0005 could\nbe left for v18, but on the other hand they're so trivial that it\ncould also be sensible to just push them to get them out of the way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Apr 2024 11:41:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleanup: remove unused fields from nodes" }, { "msg_contents": "On Mon, 22 Apr 2024 at 17:41, Tom Lane <[email protected]> wrote:\n>\n> Matthias van de Meent <[email protected]> writes:\n> > 0002/0004 remove fields in ExecRowMark which were added for FDWs to\n> > use, but there are no FDWs which use this: I could only find two FDWs\n> > who implement RefetchForeignRow, one being BlackHoleFDW, and the other\n> > a no-op implementation in kafka_fdw [0]. We also don't seem to have\n> > any testing on this feature.\n>\n> I'm kind of down on removing either of these. ermExtra is explicitly\n> intended for extensions to use, and just because we haven't found any\n> users doesn't mean there aren't any, or might not be next week.\n\nThat's a good point, and also why I wasn't 100% sure removing it was a\ngood idea. I'm not quite sure why this would be used (rather than the\ninternal state of the FDW, or no state at all), but haven't looked\nvery deep into it either, so I'm quite fine with not channging that.\n\n> Similarly, ermActive seems like it's at least potentially useful:\n> is there another way for onlookers to discover that state?\n\nThe ermActive field is always true when RefetchForeignRow is called\n(in ExecLockRows(), in nodeLockRows.c), and we don't seem to care\nabout the value of the field afterwards. Additionally, we always set\nerm->curCtid to a valid value when ermActive is also first set in that\ncode path.\nIn all, it feels like a duplicative field with no real uses inside\nPostgreSQL itself. If an extension (FDW) needs it, it should probably\nuse ermExtra instead, as ermActive seemingly doesn't carry any\nmeaningful value into the FDW call.\n\n> I think it would be a good idea to push 0003 for v17, just so nobody\n> grows an unnecessary dependency on that field. 0001 and 0005 could\n> be left for v18, but on the other hand they're so trivial that it\n> could also be sensible to just push them to get them out of the way.\n\nBeta 1 scheduled to be released for quite some time, so I don't think\nthere are any problems with fixing these kinds of minor issues in the\nprovisional ABI for v17.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 22 Apr 2024 18:46:27 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cleanup: remove unused fields from nodes" }, { "msg_contents": "On Mon, Apr 22, 2024 at 06:46:27PM +0200, Matthias van de Meent wrote:\n> On Mon, 22 Apr 2024 at 17:41, Tom Lane <[email protected]> wrote:\n>> Matthias van de Meent <[email protected]> writes:\n>>> 0002/0004 remove fields in ExecRowMark which were added for FDWs to\n>>> use, but there are no FDWs which use this: I could only find two FDWs\n>>> who implement RefetchForeignRow, one being BlackHoleFDW, and the other\n>>> a no-op implementation in kafka_fdw [0]. We also don't seem to have\n>>> any testing on this feature.\n>>\n>> I'm kind of down on removing either of these. ermExtra is explicitly\n>> intended for extensions to use, and just because we haven't found any\n>> users doesn't mean there aren't any, or might not be next week.\n> \n> That's a good point, and also why I wasn't 100% sure removing it was a\n> good idea. 
I'm not quite sure why this would be used (rather than the\n> internal state of the FDW, or no state at all), but haven't looked\n> very deep into it either, so I'm quite fine with not channging that.\n\nCustom nodes are one extra possibility? I'd leave ermActive and\nermExtra be.\n\n>> I think it would be a good idea to push 0003 for v17, just so nobody\n>> grows an unnecessary dependency on that field. 0001 and 0005 could\n>> be left for v18, but on the other hand they're so trivial that it\n>> could also be sensible to just push them to get them out of the way.\n> \n> Beta 1 scheduled to be released for quite some time, so I don't think\n> there are any problems with fixing these kinds of minor issues in the\n> provisional ABI for v17.\n\nTweaking the APIs should be OK until GA, as long as we agree that the\ncurrent interfaces can be improved.\n\n0003 is new in v17, so let's apply it now. I don't see much a strong\nargument in waiting for the removal of 0001 and 0005, either, to keep\nthe interfaces cleaner moving on. However, this is not a regression\nand these have been around for years, so I'd suggest for v18 to open\nbefore moving on with the removal.\n\nI was wondering for a bit about how tss_htup could be abused in the\nopen, and the only references I can see come from forks of the\npre-2019 area, where this uses TidNext(). As a whole, ripping it out\ndoes not stress me much.\n--\nMichael", "msg_date": "Tue, 23 Apr 2024 10:14:42 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleanup: remove unused fields from nodes" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Apr 22, 2024 at 06:46:27PM +0200, Matthias van de Meent wrote:\n>> On Mon, 22 Apr 2024 at 17:41, Tom Lane <[email protected]> wrote:\n>>> I think it would be a good idea to push 0003 for v17, just so nobody\n>>> grows an unnecessary dependency on that field. 0001 and 0005 could\n>>> be left for v18, but on the other hand they're so trivial that it\n>>> could also be sensible to just push them to get them out of the way.\n\n> Tweaking the APIs should be OK until GA, as long as we agree that the\n> current interfaces can be improved.\n> 0003 is new in v17, so let's apply it now. I don't see much a strong\n> argument in waiting for the removal of 0001 and 0005, either, to keep\n> the interfaces cleaner moving on. However, this is not a regression\n> and these have been around for years, so I'd suggest for v18 to open\n> before moving on with the removal.\n\nI went ahead and pushed 0001 and 0003, figuring there was little\npoint in waiting on 0001. 
I'd intended to push 0005 (remove \"isall\")\nas well, but it failed check-world:\n\ndiff -U3 /home/postgres/pgsql/contrib/pg_stat_statements/expected/utility.out /home/postgres/pgsql/contrib/pg_stat_statements/results/utility.out\n--- /home/postgres/pgsql/contrib/pg_stat_statements/expected/utility.out\t2023-12-08 15:14:55.689347888 -0500\n+++ /home/postgres/pgsql/contrib/pg_stat_statements/results/utility.out\t2024-04-23 12:17:22.187721947 -0400\n@@ -536,12 +536,11 @@\n SELECT calls, rows, query FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n calls | rows | query \n -------+------+----------------------------------------------------\n- 2 | 0 | DEALLOCATE $1\n- 2 | 0 | DEALLOCATE ALL\n+ 4 | 0 | DEALLOCATE $1\n 2 | 2 | PREPARE stat_select AS SELECT $1 AS a\n 1 | 1 | SELECT $1 as a\n 1 | 1 | SELECT pg_stat_statements_reset() IS NOT NULL AS t\n-(5 rows)\n+(4 rows)\n \n SELECT pg_stat_statements_reset() IS NOT NULL AS t;\n\nThat is, query jumbling no longer distinguishes \"DEALLOCATE x\" from\n\"DEALLOCATE ALL\", because the DeallocateStmt.name field is marked\nquery_jumble_ignore. Now maybe that's fine, but it's a point\nwe'd not considered so far in this thread. Thoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Apr 2024 13:01:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleanup: remove unused fields from nodes" }, { "msg_contents": "On Tue, Apr 23, 2024 at 01:01:04PM -0400, Tom Lane wrote:\n> That is, query jumbling no longer distinguishes \"DEALLOCATE x\" from\n> \"DEALLOCATE ALL\", because the DeallocateStmt.name field is marked\n> query_jumble_ignore. Now maybe that's fine, but it's a point\n> we'd not considered so far in this thread. Thoughts?\n\nAnd of course, I've managed to forget about bb45156f342c and the\nreason behind the addition of the field is to be able to make the\ndifference between the named and ALL cases for DEALLOCATE, around\nhere:\nhttps://www.postgresql.org/message-id/ZNq9kRwWbKzvR%2B2a%40paquier.xyz\n\nThis is new in v17, so perhaps it could be changed, but I think that's\nimportant to make the difference here for monitoring purposes as\nDEALLOCATE ALL could be used as a way to clean up prepared statements\nin connection poolers (for example, pgbouncer's server_reset_query).\nAnd doing this tweak in the Node structure of DeallocateStmt is\nsimpler than having to maintain a new pg_node_attr for query jumbling.\n--\nMichael", "msg_date": "Wed, 24 Apr 2024 11:57:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleanup: remove unused fields from nodes" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Tue, Apr 23, 2024 at 01:01:04PM -0400, Tom Lane wrote:\n>> That is, query jumbling no longer distinguishes \"DEALLOCATE x\" from\n>> \"DEALLOCATE ALL\", because the DeallocateStmt.name field is marked\n>> query_jumble_ignore. Now maybe that's fine, but it's a point\n>> we'd not considered so far in this thread. Thoughts?\n\n> And of course, I've managed to forget about bb45156f342c and the\n> reason behind the addition of the field is to be able to make the\n> difference between the named and ALL cases for DEALLOCATE, around\n> here:\n> https://www.postgresql.org/message-id/ZNq9kRwWbKzvR%2B2a%40paquier.xyz\n\nHah. 
Seems like the comment for isall needs to explain that it\nexists for this purpose, so we don't make this mistake again.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Apr 2024 23:03:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleanup: remove unused fields from nodes" }, { "msg_contents": "On Tue, Apr 23, 2024 at 11:03:40PM -0400, Tom Lane wrote:\n> Hah. Seems like the comment for isall needs to explain that it\n> exists for this purpose, so we don't make this mistake again.\n\nHow about something like the attached?\n--\nMichael", "msg_date": "Wed, 24 Apr 2024 16:28:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleanup: remove unused fields from nodes" }, { "msg_contents": "On Wed, 24 Apr 2024 at 09:28, Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Apr 23, 2024 at 11:03:40PM -0400, Tom Lane wrote:\n> > Hah. Seems like the comment for isall needs to explain that it\n> > exists for this purpose, so we don't make this mistake again.\n>\n> How about something like the attached?\n\nLGTM.\n\nThanks,\n\nMatthias\n\n\n", "msg_date": "Wed, 24 Apr 2024 11:50:54 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cleanup: remove unused fields from nodes" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Tue, Apr 23, 2024 at 11:03:40PM -0400, Tom Lane wrote:\n>> Hah. Seems like the comment for isall needs to explain that it\n>> exists for this purpose, so we don't make this mistake again.\n\n> How about something like the attached?\n\nI was thinking about wording like\n\n * True if DEALLOCATE ALL. This is redundant with \"name == NULL\",\n * but we make it a separate field so that exactly this condition\n * (and not the precise name) will be accounted for in query jumbling.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Apr 2024 08:31:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleanup: remove unused fields from nodes" }, { "msg_contents": "On Wed, Apr 24, 2024 at 08:31:57AM -0400, Tom Lane wrote:\n> I was thinking about wording like\n> \n> * True if DEALLOCATE ALL. This is redundant with \"name == NULL\",\n> * but we make it a separate field so that exactly this condition\n> * (and not the precise name) will be accounted for in query jumbling.\n\nFine by me. I've just used that and applied a patch to doing so.\nThanks.\n--\nMichael", "msg_date": "Thu, 25 Apr 2024 10:25:36 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleanup: remove unused fields from nodes" } ]
[ { "msg_contents": "Hey,\n\nThe attached patch addresses four somewhat related aspects of the create\ntable reference page that bother me.\n\nThis got started with Bug# 15954 [1] (unlogged on a partitioned table\ndoesn't make sense) and I've added a paragraph under \"unlogged\" to address\nit.\n\nWhile doing that, it seemed desirable to explicitly frame up both temporary\nand unlogged as being \"persistence modes\" so I added a mention of both in\nthe description. Additionally, it seemed appropriate to do so immediately\nafter the opening paragraph since the existing second paragraph goes\nimmediately into talking about temporary tables and schemas. I figured a\nlink to the reliability chapter where one learns about WAL and why/how\nthese alternative persistence modes exist is worthwhile. (I added a missing\ncomma to the first sentence while I was in the area)\n\nThird, I've had a long-standing annoyance regarding the excessive length of\nthe CREATE line of each of the create table syntax blocks. Replacing the\nsyntax for the persistence modes with a named placeholder introduces\nstructure and clarity while reducing the length of the line nicely.\n\nFinally, while fixing line lengths, the subsequent line (first form) for\ncolumn specification is even more excessive. Pulling out the\ncolumn_storage syntax into a named reference nicely cleans this line up.\n\nDavid J.\n\nP.S. I didn't go into depth on the fact the persistence options are not\ninherited/copied/like-able; so for now the fact they are not so is\ndiscovered by their omission when discussing those topics.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/15954-b61523bed4b110c4%40postgresql.org", "msg_date": "Mon, 22 Apr 2024 12:19:11 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "doc: create table improvements" }, { "msg_contents": " > + The reliability characteristics of a table are governed by its\n > + persistence mode. The default mode is described\n > + <link linkend=\"wal-reliability\">here</link>\n > + There are two alternative modes that can be specified during\n > + table creation:\n > + <link linkend=\"sql-createtable-temporary\">temporary</link> and\n > + <link linkend=\"sql-createtable-unlogged\">unlogged</link>.\n\nNot sure reliability is the best word here. I mean, a temporary table \nisn't any less reliable than any other table. It just does different \nthings.\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 12:30:30 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc: create table improvements" }, { "msg_contents": "On Wed, Apr 24, 2024 at 3:30 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> > + The reliability characteristics of a table are governed by its\n> > + persistence mode. The default mode is described\n> > + <link linkend=\"wal-reliability\">here</link>\n> > + There are two alternative modes that can be specified during\n> > + table creation:\n> > + <link linkend=\"sql-createtable-temporary\">temporary</link> and\n> > + <link linkend=\"sql-createtable-unlogged\">unlogged</link>.\n>\n> Not sure reliability is the best word here. I mean, a temporary table\n> isn't any less reliable than any other table. It just does different\n> things.\n>\n>\nGiven the name of the section where this is all discussed I'm having\ntrouble going with a different word. 
But better framing and phrasing I can\ndo:\n\nA table may be opted out of certain storage aspects of reliability, as\ndescribed [here], by specifying either of the alternate persistence modes:\n[temporary] or [logged]. The specific trade-offs and implications are\ndetailed below.\n\nDavid J.\n\nOn Wed, Apr 24, 2024 at 3:30 AM Peter Eisentraut <[email protected]> wrote: > +   The reliability characteristics of a table are governed by its\n > +   persistence mode.  The default mode is described\n > +   <link linkend=\"wal-reliability\">here</link>\n > +   There are two alternative modes that can be specified during\n > +   table creation:\n > +   <link linkend=\"sql-createtable-temporary\">temporary</link> and\n > +   <link linkend=\"sql-createtable-unlogged\">unlogged</link>.\n\nNot sure reliability is the best word here.  I mean, a temporary table \nisn't any less reliable than any other table.  It just does different \nthings.\nGiven the name of the section where this is all discussed I'm having trouble going with a different word.  But better framing and phrasing I can do:A table may be opted out of certain storage aspects of reliability, as described [here], by specifying either of the alternate persistence modes: [temporary] or [logged]. The specific trade-offs and implications are detailed below.David J.", "msg_date": "Wed, 24 Apr 2024 07:45:47 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: doc: create table improvements" }, { "msg_contents": "On Wed, Apr 24, 2024 at 7:45 AM David G. Johnston <\[email protected]> wrote:\n\n> On Wed, Apr 24, 2024 at 3:30 AM Peter Eisentraut <[email protected]>\n> wrote:\n>\n>> > + The reliability characteristics of a table are governed by its\n>> > + persistence mode. The default mode is described\n>> > + <link linkend=\"wal-reliability\">here</link>\n>> > + There are two alternative modes that can be specified during\n>> > + table creation:\n>> > + <link linkend=\"sql-createtable-temporary\">temporary</link> and\n>> > + <link linkend=\"sql-createtable-unlogged\">unlogged</link>.\n>>\n>> Not sure reliability is the best word here. I mean, a temporary table\n>> isn't any less reliable than any other table. It just does different\n>> things.\n>>\n>>\n> Given the name of the section where this is all discussed I'm having\n> trouble going with a different word. But better framing and phrasing I can\n> do:\n>\n> A table may be opted out of certain storage aspects of reliability, as\n> described [here], by specifying either of the alternate persistence modes:\n> [temporary] or [logged]. The specific trade-offs and implications are\n> detailed below.\n>\n>\nOr maybe:\n\nA table operates in one of three persistence modes (default, [temporary],\nand [unlogged]) described in [Chapter 28]. --point to the intro page for\nthe chapter as expanded as below, not the reliability page.\n\ndiff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml\nindex 05e2a8f8be..102cfeca68 100644\n--- a/doc/src/sgml/wal.sgml\n+++ b/doc/src/sgml/wal.sgml\n@@ -5,8 +5,17 @@\n\n <para>\n This chapter explains how to control the reliability of\n- <productname>PostgreSQL</productname>, including details about the\n- Write-Ahead Log.\n+ <productname>PostgreSQL</productname>. At its core this\n+ involves writing all changes to disk twice - first to a\n+ journal of changes called the write-ahead-log (WAL) and\n+ then to the physical pages that comprise permanent tables\n+ on disk (heap). 
This results in four high-level\n+ <term>persistence modes</term> for tables.\n+ The default mode results in both these features being\n+ enabled. Temporary tables forgo both of these options,\n+ while unlogged tables only forgo WAL. There is no WAL-only\n+ operating mode. The rest of this chapter discusses\n+ implementation details related to these two options.\n </para>\n\nDavid J.\n\nOn Wed, Apr 24, 2024 at 7:45 AM David G. Johnston <[email protected]> wrote:On Wed, Apr 24, 2024 at 3:30 AM Peter Eisentraut <[email protected]> wrote: > +   The reliability characteristics of a table are governed by its\n > +   persistence mode.  The default mode is described\n > +   <link linkend=\"wal-reliability\">here</link>\n > +   There are two alternative modes that can be specified during\n > +   table creation:\n > +   <link linkend=\"sql-createtable-temporary\">temporary</link> and\n > +   <link linkend=\"sql-createtable-unlogged\">unlogged</link>.\n\nNot sure reliability is the best word here.  I mean, a temporary table \nisn't any less reliable than any other table.  It just does different \nthings.\nGiven the name of the section where this is all discussed I'm having trouble going with a different word.  But better framing and phrasing I can do:A table may be opted out of certain storage aspects of reliability, as described [here], by specifying either of the alternate persistence modes: [temporary] or [logged]. The specific trade-offs and implications are detailed below.Or maybe:A table operates in one of three persistence modes (default, [temporary], and [unlogged]) described in [Chapter 28]. --point to the intro page for the chapter as expanded as below, not the reliability page.diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgmlindex 05e2a8f8be..102cfeca68 100644--- a/doc/src/sgml/wal.sgml+++ b/doc/src/sgml/wal.sgml@@ -5,8 +5,17 @@  <para>   This chapter explains how to control the reliability of-  <productname>PostgreSQL</productname>, including details about the-  Write-Ahead Log.+  <productname>PostgreSQL</productname>.  At its core this+  involves writing all changes to disk twice - first to a+  journal of changes called the write-ahead-log (WAL) and+  then to the physical pages that comprise permanent tables+  on disk (heap).  This results in four high-level+  <term>persistence modes</term> for tables.+  The default mode results in both these features being+  enabled.  Temporary tables forgo both of these options,+  while unlogged tables only forgo WAL.  There is no WAL-only+  operating mode.  The rest of this chapter discusses+  implementation details related to these two options.  </para>David J.", "msg_date": "Wed, 24 Apr 2024 10:08:29 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: doc: create table improvements" } ]
[ { "msg_contents": "I used to do this step when I first started hacking on Postgres because\nthat's what it says to do, but I've only ever used the in-tree one for many\nyears now, and I'm not aware of any scenario where I might need to download\na new version from the buildfarm. I see that the in-tree copy wasn't added\nuntil 2010 (commit 1604057), so maybe this is just leftover from back then.\n\nCould we remove this note now?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Apr 2024 14:22:18 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> I used to do this step when I first started hacking on Postgres because\n> that's what it says to do, but I've only ever used the in-tree one for many\n> years now, and I'm not aware of any scenario where I might need to download\n> a new version from the buildfarm. I see that the in-tree copy wasn't added\n> until 2010 (commit 1604057), so maybe this is just leftover from back then.\n\n> Could we remove this note now?\n\nI think the actual plan now is that we'll sync the in-tree copy\nwith the buildfarm's results (and then do a tree-wide pgindent)\nevery so often, probably shortly before beta every year.\n\nThe problem with the README is that it describes that process,\nrather than the now-typical workflow of incrementally keeping\nthe tree indented. I don't think we want to remove the info\nabout how to do the full-monty process, but you're right that\nthe README needs to explain the incremental method as being\nthe one most developers would usually use.\n\nWant to write some text?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Apr 2024 16:08:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "On Mon, Apr 22, 2024 at 04:08:08PM -0400, Tom Lane wrote:\n> I think the actual plan now is that we'll sync the in-tree copy\n> with the buildfarm's results (and then do a tree-wide pgindent)\n> every so often, probably shortly before beta every year.\n\nOkay. Is this just to resolve the delta between the manual updates and a\nclean autogenerated copy every once in a while?\n\n> The problem with the README is that it describes that process,\n> rather than the now-typical workflow of incrementally keeping\n> the tree indented. I don't think we want to remove the info\n> about how to do the full-monty process, but you're right that\n> the README needs to explain the incremental method as being\n> the one most developers would usually use.\n> \n> Want to write some text?\n\nYup, I'll put something together.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Apr 2024 15:20:10 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from\n the buildfarm?" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Mon, Apr 22, 2024 at 04:08:08PM -0400, Tom Lane wrote:\n>> I think the actual plan now is that we'll sync the in-tree copy\n>> with the buildfarm's results (and then do a tree-wide pgindent)\n>> every so often, probably shortly before beta every year.\n\n> Okay. 
Is this just to resolve the delta between the manual updates and a\n> clean autogenerated copy every once in a while?\n\nThe main reason there's a delta is that people don't manage to\nmaintain the in-tree copy perfectly (at least, they certainly\nhaven't done so for this past year). So we need to do that\nto clean up every now and then.\n\nA secondary reason is that the set of typedefs we absorb from\nsystem include files changes over time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Apr 2024 16:28:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "On 2024-Apr-22, Tom Lane wrote:\n\n> The main reason there's a delta is that people don't manage to\n> maintain the in-tree copy perfectly (at least, they certainly\n> haven't done so for this past year). So we need to do that\n> to clean up every now and then.\n\nOut of curiosity, I downloaded the buildfarm-generated file and\nre-indented the whole tree. It turns out that most commits seem to have\nmaintained the in-tree typedefs list correctly when adding entries (even\nif out of alphabetical order), but a few haven't; and some people have\nadded entries that the buildfarm script does not detect. So the import\nfrom BF will delete those entries and mess up the overall indent. For\nexample it does stuff like\n\n+++ b/src/backend/commands/async.c\n@@ -399,7 +399,7 @@ typedef struct NotificationList\n typedef struct NotificationHash\n {\n Notification *event; /* => the actual Notification struct */\n-} NotificationHash;\n+} NotificationHash;\n\nThere's a good half dozen of those.\n\nI wonder if we're interested in keeping a (very short) manually-\nmaintained list of symbols that we know are in use but the scripts\ndon't extract for whatever reason.\n\n\nThe change of NotificationHash looks surprising at first sight:\napparently 095d109ccd7 deleted the only use of that type as a variable\nanywhere. But then I wonder if that datatype is useful at all anymore,\nsince it only contains one pointer -- it seems we could just remove it.\n\nBut there are others: InjectionPointEntry, ResourceOwnerData,\nJsonNonTerminal, JsonParserSem, ...\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php\n\n\n", "msg_date": "Tue, 23 Apr 2024 12:23:25 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from\n the buildfarm?" }, { "msg_contents": "On Tue, Apr 23, 2024 at 6:23 AM Alvaro Herrera <[email protected]> wrote:\n> I wonder if we're interested in keeping a (very short) manually-\n> maintained list of symbols that we know are in use but the scripts\n> don't extract for whatever reason.\n\n+1. I think this idea has been proposed and rejected before, but I\nthink it's more important to have our code indented correctly than to\nbe able to rely on a 100% automated process for gathering typedefs.\n\nThere is of course the risk that the manually generated file will\naccumulate stale cruft over time, but I don't really see that being a\nbig problem. First, it doesn't cost much to have a few extra symbols\nin there. 
Second, I suspect someone will go through it at least every\ncouple of years, if not more often, and figure out which entries are\nstill doing something.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Apr 2024 08:12:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2024-Apr-22, Tom Lane wrote:\n>> The main reason there's a delta is that people don't manage to\n>> maintain the in-tree copy perfectly (at least, they certainly\n>> haven't done so for this past year). So we need to do that\n>> to clean up every now and then.\n\n> Out of curiosity, I downloaded the buildfarm-generated file and\n> re-indented the whole tree. It turns out that most commits seem to have\n> maintained the in-tree typedefs list correctly when adding entries (even\n> if out of alphabetical order), but a few haven't; and some people have\n> added entries that the buildfarm script does not detect.\n\nYeah. I believe that happens when there is no C variable or field\nanywhere that has that specific struct type. In your example,\nNotificationHash appears to only be referenced in a sizeof()\ncall, which suggests that maybe the coding is a bit squirrely\nand could be done another way.\n\nHaving said that, there already are manually-curated lists of\ninclusions and exclusions hard-wired into pgindent (see around\nline 70). I wouldn't have any great objection to adding more\nentries there. Or if somebody wanted to do the work, they\ncould be pulled out into separate files.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Apr 2024 10:11:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "On Mon, Apr 22, 2024 at 03:20:10PM -0500, Nathan Bossart wrote:\n> On Mon, Apr 22, 2024 at 04:08:08PM -0400, Tom Lane wrote:\n>> The problem with the README is that it describes that process,\n>> rather than the now-typical workflow of incrementally keeping\n>> the tree indented. I don't think we want to remove the info\n>> about how to do the full-monty process, but you're right that\n>> the README needs to explain the incremental method as being\n>> the one most developers would usually use.\n>> \n>> Want to write some text?\n> \n> Yup, I'll put something together.\n\nHere is a first attempt. I'm not tremendously happy with it, but it at\nleast gets something on the page to build on. I was originally going to\ncopy/paste the relevant steps into the description of the incremental\nprocess, but that seemed kind-of silly, so I instead just pointed to the\nrelevant steps of the \"full\" process, along with the deviations from those\nsteps. That's a little more work for the reader, but maybe it isn't too\nbad... I plan to iterate on this patch some more.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 23 Apr 2024 15:04:55 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from\n the buildfarm?" }, { "msg_contents": "On 2024-04-23 Tu 06:23, Alvaro Herrera wrote:\n> But there are others: InjectionPointEntry, ResourceOwnerData,\n> JsonNonTerminal, JsonParserSem, ...\n>\n\nThe last two are down to me. 
Let's just get rid of them like the \nattached (no need for a typedef at all)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Wed, 24 Apr 2024 06:07:17 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "On 22.04.24 22:28, Tom Lane wrote:\n> Nathan Bossart<[email protected]> writes:\n>> On Mon, Apr 22, 2024 at 04:08:08PM -0400, Tom Lane wrote:\n>>> I think the actual plan now is that we'll sync the in-tree copy\n>>> with the buildfarm's results (and then do a tree-wide pgindent)\n>>> every so often, probably shortly before beta every year.\n>> Okay. Is this just to resolve the delta between the manual updates and a\n>> clean autogenerated copy every once in a while?\n> The main reason there's a delta is that people don't manage to\n> maintain the in-tree copy perfectly (at least, they certainly\n> haven't done so for this past year). So we need to do that\n> to clean up every now and then.\n> \n> A secondary reason is that the set of typedefs we absorb from\n> system include files changes over time.\n\nIs the code to extract typedefs available somewhere independent of the \nbuildfarm? It would be useful sometimes to be able to run this locally, \nlike before and after some patch, to keep the in-tree typedefs list updated.\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 12:12:28 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "\nOn 2024-04-24 We 06:12, Peter Eisentraut wrote:\n> On 22.04.24 22:28, Tom Lane wrote:\n>> Nathan Bossart<[email protected]>  writes:\n>>> On Mon, Apr 22, 2024 at 04:08:08PM -0400, Tom Lane wrote:\n>>>> I think the actual plan now is that we'll sync the in-tree copy\n>>>> with the buildfarm's results (and then do a tree-wide pgindent)\n>>>> every so often, probably shortly before beta every year.\n>>> Okay.  Is this just to resolve the delta between the manual updates \n>>> and a\n>>> clean autogenerated copy every once in a while?\n>> The main reason there's a delta is that people don't manage to\n>> maintain the in-tree copy perfectly (at least, they certainly\n>> haven't done so for this past year).  So we need to do that\n>> to clean up every now and then.\n>>\n>> A secondary reason is that the set of typedefs we absorb from\n>> system include files changes over time.\n>\n> Is the code to extract typedefs available somewhere independent of the \n> buildfarm?  It would be useful sometimes to be able to run this \n> locally, like before and after some patch, to keep the in-tree \n> typedefs list updated.\n>\n>\n>\n\nThere's been talk about it but I don't think anyone's done it. I'd be \nmore than happy if the buildfarm client could just call something in the \ncore repo (c.f. src/test/perl/Postgres/Test/AdjustUpgrade.pm).\n\nRegarding testing with your patch, some years ago I wrote this blog \npost: \n<http://adpgtech.blogspot.com/2015/05/running-pgindent-on-non-core-code-or.html>\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 06:37:00 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" 
}, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2024-04-24 We 06:12, Peter Eisentraut wrote:\n>> Is the code to extract typedefs available somewhere independent of the \n>> buildfarm? It would be useful sometimes to be able to run this \n>> locally, like before and after some patch, to keep the in-tree \n>> typedefs list updated.\n\n> There's been talk about it but I don't think anyone's done it. I'd be \n> more than happy if the buildfarm client could just call something in the \n> core repo (c.f. src/test/perl/Postgres/Test/AdjustUpgrade.pm).\n\nThere is already src/tools/find_typedef, which looks like it might\nbe an ancestral version of the current buildfarm code (which is sub\nfind_typedefs in run_build.pl of the client distribution). Perhaps\nit'd be useful to bring that up to speed with the current BF code.\n\nThe main problem with this though is that a local run can only\ngive you the system-supplied typedefs for your own platform and\nbuild options. The value-add that the buildfarm brings is to\nmerge the results from several different platforms.\n\nI suppose you could set up some merging process that would add\nsymbols from a local run to src/tools/pgindent/typedefs.list\nbut never remove any. But that hardly removes the need for\nan occasional cleanup pass.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Apr 2024 08:27:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "On Tue, Apr 23, 2024 at 4:05 PM Nathan Bossart <[email protected]> wrote:\n> Here is a first attempt. I'm not tremendously happy with it, but it at\n> least gets something on the page to build on. I was originally going to\n> copy/paste the relevant steps into the description of the incremental\n> process, but that seemed kind-of silly, so I instead just pointed to the\n> relevant steps of the \"full\" process, along with the deviations from those\n> steps. That's a little more work for the reader, but maybe it isn't too\n> bad... I plan to iterate on this patch some more.\n\nWhat jumps out at me when I read this patch is that it says that an\nincremental run should do steps 1-3 of a complete run, and then\nimmediately backtracks and says not to do step 2, which seems a little\nstrange.\n\nI played around with this a bit and came up with the attached, which\ntakes a slightly different approach. Feel free to use, ignore, etc.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 15 May 2024 12:06:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "On Wed, May 15, 2024 at 12:06:03PM -0400, Robert Haas wrote:\n> What jumps out at me when I read this patch is that it says that an\n> incremental run should do steps 1-3 of a complete run, and then\n> immediately backtracks and says not to do step 2, which seems a little\n> strange.\n> \n> I played around with this a bit and came up with the attached, which\n> takes a slightly different approach. Feel free to use, ignore, etc.\n\nThis is much cleaner, thanks. 
The only thing that stands out to me is that\nthe \"once per release cycle\" section should probably say to do an indent\nrun after downloading the typedef file.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 15 May 2024 14:29:52 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from\n the buildfarm?" }, { "msg_contents": "On Wed, May 15, 2024 at 3:30 PM Nathan Bossart <[email protected]> wrote:\n> On Wed, May 15, 2024 at 12:06:03PM -0400, Robert Haas wrote:\n> > What jumps out at me when I read this patch is that it says that an\n> > incremental run should do steps 1-3 of a complete run, and then\n> > immediately backtracks and says not to do step 2, which seems a little\n> > strange.\n> >\n> > I played around with this a bit and came up with the attached, which\n> > takes a slightly different approach. Feel free to use, ignore, etc.\n>\n> This is much cleaner, thanks. The only thing that stands out to me is that\n> the \"once per release cycle\" section should probably say to do an indent\n> run after downloading the typedef file.\n\nHow's this?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 15 May 2024 16:07:18 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, May 15, 2024 at 3:30 PM Nathan Bossart <[email protected]> wrote:\n>> This is much cleaner, thanks. The only thing that stands out to me is that\n>> the \"once per release cycle\" section should probably say to do an indent\n>> run after downloading the typedef file.\n\n> How's this?\n\nThis works for me. One point that could stand discussion while we're\nhere is whether the once-a-cycle run should use the verbatim buildfarm\nresults or it's okay to editorialize on that typedefs list. I did a\nlittle of the latter in da256a4a7, and I feel like we should either\nbless that practice in this document or decide that it was a bad idea.\n\nFor reference, what I added to the buildfarm's list was\n\n+InjectionPointCacheEntry\n+InjectionPointCondition\n+InjectionPointConditionType\n+InjectionPointEntry\n+InjectionPointSharedState\n+NotificationHash\n+ReadBuffersFlags\n+ResourceOwnerData\n+WaitEventExtension\n+WalSyncMethod\n\nI believe all of these must have been added manually during v17.\nIf I took any one of them out there was some visible disimprovement\nin pgindent's results, so I kept them. Was that the right decision?\nIf so we should explain it here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 May 2024 16:23:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "On Wed, May 15, 2024 at 04:07:18PM -0400, Robert Haas wrote:\n> On Wed, May 15, 2024 at 3:30 PM Nathan Bossart <[email protected]> wrote:\n>> This is much cleaner, thanks. 
The only thing that stands out to me is that\n>> the \"once per release cycle\" section should probably say to do an indent\n>> run after downloading the typedef file.\n> \n> How's this?\n\nI compared this with my v1, and the only bit of information there that I\nsee missing in v3 is that validation step 4 only applies in the\nonce-per-cycle run (or if you forget to pgindent before committing a\npatch). This might be why I was struggling to untangle the two types of\npgindent runs in my attempt. Perhaps it's worth adding a note to that step\nabout when it is required.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 15 May 2024 15:28:49 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from\n the buildfarm?" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Wed, May 15, 2024 at 04:07:18PM -0400, Robert Haas wrote:\n>> How's this?\n\n> I compared this with my v1, and the only bit of information there that I\n> see missing in v3 is that validation step 4 only applies in the\n> once-per-cycle run (or if you forget to pgindent before committing a\n> patch). This might be why I was struggling to untangle the two types of\n> pgindent runs in my attempt. Perhaps it's worth adding a note to that step\n> about when it is required.\n\nOh ... another problem is that the VALIDATION steps really apply to\nboth kinds of indent runs, but it's not clear to me that that's\nobvious in v3. Maybe the steps should be rearranged to be\n(1) base case, (2) VALIDATION, (3) ONCE PER CYCLE.\n\nAt this point my OCD got the better of me and I did a little\nadditional wordsmithing. How about the attached?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 15 May 2024 16:50:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "On Wed, May 15, 2024 at 4:50 PM Tom Lane <[email protected]> wrote:\n> At this point my OCD got the better of me and I did a little\n> additional wordsmithing. How about the attached?\n\nNo objections here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 May 2024 16:52:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "On Wed, May 15, 2024 at 04:52:19PM -0400, Robert Haas wrote:\n> On Wed, May 15, 2024 at 4:50 PM Tom Lane <[email protected]> wrote:\n>> At this point my OCD got the better of me and I did a little\n>> additional wordsmithing. How about the attached?\n> \n> No objections here.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 15 May 2024 15:54:27 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from\n the buildfarm?" }, { "msg_contents": "I wrote:\n> This works for me. One point that could stand discussion while we're\n> here is whether the once-a-cycle run should use the verbatim buildfarm\n> results or it's okay to editorialize on that typedefs list. 
I did a\n> little of the latter in da256a4a7, and I feel like we should either\n> bless that practice in this document or decide that it was a bad idea.\n\n> For reference, what I added to the buildfarm's list was\n\n> +InjectionPointCacheEntry\n> +InjectionPointCondition\n> +InjectionPointConditionType\n> +InjectionPointEntry\n> +InjectionPointSharedState\n> +NotificationHash\n> +ReadBuffersFlags\n> +ResourceOwnerData\n> +WaitEventExtension\n> +WalSyncMethod\n\nI realized that the reason the InjectionPoint typedefs were missing\nis that none of the buildfarm animals that contribute typedefs are\nbuilding with --enable-injection-points. I rectified that on sifaka,\nand now those are in the list available from the buildfarm.\n\nAs for the remainder, they aren't showing up because no variable\nor field is declared using them, which means no debug symbol\ntable entry is made for them. This means we could just drop those\ntypedefs and be little the worse off notationally. I experimented\nwith a patch for that, as attached. (In the case of NotificationHash,\nI thought it better to arrange for there to be a suitable variable;\nbut it could certainly be done the other way too.) Is this too anal?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 15 May 2024 19:32:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "On 16.05.24 01:32, Tom Lane wrote:\n> As for the remainder, they aren't showing up because no variable\n> or field is declared using them, which means no debug symbol\n> table entry is made for them. This means we could just drop those\n> typedefs and be little the worse off notationally. I experimented\n> with a patch for that, as attached. (In the case of NotificationHash,\n> I thought it better to arrange for there to be a suitable variable;\n> but it could certainly be done the other way too.) Is this too anal?\n\nI agree we should get rid of these.\n\nOver the last release cycle, I've been leaning a bit more toward not \ntypdef'ing enums and structs that are only in local use, in part because \nof the implied need to keep the typedefs list up to date.\n\nIn these cases, I think for\n\nNotificationHash\nResourceOwnerData\nWalSyncMethod\n\nwe can just get rid of the typedef.\n\nReadBuffersFlags shouldn't be an enum at all, because its values are \nused as flag bits.\n\nWaitEventExtension, I'm not sure, it's like, an extensible enum? I \nguess let's remove the typedef there, too.\n\nAttached is a variant patch.", "msg_date": "Thu, 16 May 2024 12:03:30 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> In these cases, I think for\n> NotificationHash\n> ResourceOwnerData\n> WalSyncMethod\n> we can just get rid of the typedef.\n\nI have no objection to dealing with NotificationHash as you have here.\n\n> ReadBuffersFlags shouldn't be an enum at all, because its values are \n> used as flag bits.\n\nYeah, that was bothering me too, but I went for the minimum delta.\nI did think that a couple of integer macros would be a better idea,\nso +1 for what you did here.\n\n> WaitEventExtension, I'm not sure, it's like, an extensible enum? I \n> guess let's remove the typedef there, too.\n\nI am also quite confused by that. 
It seems to be kind of an enum\nthat is supposed to be extended at runtime, meaning that neither\nof the existing enum member values ought to be used as such, although\neither autoprewarm.c didn't get the memo or I misunderstand the\nintended usage. NUM_BUILTIN_WAIT_EVENT_EXTENSION is possibly the\nmost bizarre idea I've ever seen: what would a \"built-in extension\"\nevent be exactly? I think the enum should be nuked altogether, but\nit's a bit late to be redesigning that for v17 perhaps.\n\n> Attached is a variant patch.\n\nI'm good with this, with a mental note to look again at\nWaitEventExtension later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 10:45:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "On Thu, May 16, 2024 at 10:45:18AM -0400, Tom Lane wrote:\n> I am also quite confused by that. It seems to be kind of an enum\n> that is supposed to be extended at runtime, meaning that neither\n> of the existing enum member values ought to be used as such, although\n> either autoprewarm.c didn't get the memo or I misunderstand the\n> intended usage. NUM_BUILTIN_WAIT_EVENT_EXTENSION is possibly the\n> most bizarre idea I've ever seen: what would a \"built-in extension\"\n> event be exactly? I think the enum should be nuked altogether, but\n> it's a bit late to be redesigning that for v17 perhaps.\n\nYou're right, WaitEventExtension is better gone. The only thing that\nmatters is that we want to start computing the IDs assigned to the\ncustom wait events for extensions with a number higher than the\nexisting WAIT_EXTENSION to avoid interferences in pg_stat_activity, so\nthis could be cleaned up as the attached.\n\nThe reason why autoprewarm.c does not have a custom wait event\nassigned is that it does not make sense there: this would not show up\nin pg_stat_activity. I think that we should just switch back to\nPG_WAIT_EXTENSION there and call it a day.\n\nI can still clean up that in time for beta1, as in today's time. But\nthat can wait, as well. Thoughts?\n--\nMichael", "msg_date": "Fri, 17 May 2024 09:45:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from\n the buildfarm?" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, May 16, 2024 at 10:45:18AM -0400, Tom Lane wrote:\n>> ... I think the enum should be nuked altogether, but\n>> it's a bit late to be redesigning that for v17 perhaps.\n\n> You're right, WaitEventExtension is better gone. The only thing that\n> matters is that we want to start computing the IDs assigned to the\n> custom wait events for extensions with a number higher than the\n> existing WAIT_EXTENSION to avoid interferences in pg_stat_activity, so\n> this could be cleaned up as the attached.\n\nWFM, and this is probably a place where we don't want to change the\nAPI in v17 and again in v18, so I agree with pushing now.\n\nReminder though: beta1 release freeze begins Saturday.\nNot many hours left.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 21:09:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" 
}, { "msg_contents": "On Thu, May 16, 2024 at 09:09:36PM -0400, Tom Lane wrote:\n> WFM, and this is probably a place where we don't want to change the\n> API in v17 and again in v18, so I agree with pushing now.\n>\n> Reminder though: beta1 release freeze begins Saturday.\n> Not many hours left.\n\nYep. I can handle that in 2~3 hours.\n--\nMichael", "msg_date": "Fri, 17 May 2024 10:24:57 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from\n the buildfarm?" }, { "msg_contents": "On Fri, May 17, 2024 at 10:24:57AM +0900, Michael Paquier wrote:\n> Yep. I can handle that in 2~3 hours.\n\nAnd done with 110eb4aefbad. If there's anything else, feel free to\nlet me know.\n--\nMichael", "msg_date": "Fri, 17 May 2024 14:24:52 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from\n the buildfarm?" }, { "msg_contents": "On 16.05.24 16:45, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> In these cases, I think for\n>> NotificationHash\n>> ResourceOwnerData\n>> WalSyncMethod\n>> we can just get rid of the typedef.\n> \n> I have no objection to dealing with NotificationHash as you have here.\n> \n>> ReadBuffersFlags shouldn't be an enum at all, because its values are\n>> used as flag bits.\n> \n> Yeah, that was bothering me too, but I went for the minimum delta.\n> I did think that a couple of integer macros would be a better idea,\n> so +1 for what you did here.\n\nI committed this, and Michael took care of WaitEventExtension, so we \nshould be all clear here.\n\n\n\n", "msg_date": "Fri, 17 May 2024 07:50:56 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 16.05.24 16:45, Tom Lane wrote:\n>> Yeah, that was bothering me too, but I went for the minimum delta.\n>> I did think that a couple of integer macros would be a better idea,\n>> so +1 for what you did here.\n\n> I committed this, and Michael took care of WaitEventExtension, so we \n> should be all clear here.\n\nThanks. I just made the committed typedefs.list exactly match the\ncurrent buildfarm output, so we're clean for now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 May 2024 11:44:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why does pgindent's README say to download typedefs.list from the\n buildfarm?" } ]
[ { "msg_contents": "hi.\n\n/*\n * clamp_row_est\n * Force a row-count estimate to a sane value.\n */\ndouble\nclamp_row_est(double nrows)\n{\n/*\n* Avoid infinite and NaN row estimates. Costs derived from such values\n* are going to be useless. Also force the estimate to be at least one\n* row, to make explain output look better and to avoid possible\n* divide-by-zero when interpolating costs. Make it an integer, too.\n*/\nif (nrows > MAXIMUM_ROWCOUNT || isnan(nrows))\nnrows = MAXIMUM_ROWCOUNT;\nelse if (nrows <= 1.0)\nnrows = 1.0;\nelse\nnrows = rint(nrows);\n\nreturn nrows;\n}\n\n\nThe comments say `Avoid infinite and NaN`\nbut actually we only avoid NaN.\n\nDo we need to add isinf(nrows)?\n\n\n", "msg_date": "Tue, 23 Apr 2024 11:22:17 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "clamp_row_est avoid infinite" }, { "msg_contents": "jian he <[email protected]> writes:\n> if (nrows > MAXIMUM_ROWCOUNT || isnan(nrows))\n> nrows = MAXIMUM_ROWCOUNT;\n> else if (nrows <= 1.0)\n> nrows = 1.0;\n> else\n> nrows = rint(nrows);\n\n> The comments say `Avoid infinite and NaN`\n> but actually we only avoid NaN.\n\nReally? The IEEE float arithmetic standard says that Inf is\ngreater than any finite value, and in particular it'd be\ngreater than MAXIMUM_ROWCOUNT.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Apr 2024 23:54:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: clamp_row_est avoid infinite" } ]
[ { "msg_contents": "Hi all,\n\nWhile analyzing the use of internal error codes in the code base, I've\nsome problems, and that's a mixed bag of:\n- Incorrect uses, for errors that can be triggered by users with\nvallid cases.\n- Expected error cases, wanted by the tests like corruption cases or\n just to keep some code simpler.\n\nHere is a summary of the ones that should be fixed with proper\nerrcodes:\n1) be-secure-openssl.c is forgetting an error codes for code comparing\nthe ssl_{min,max}_protocol_version range, which should be a\nERRCODE_CONFIG_FILE_ERROR.\n2) 010_pg_basebackup.pl includes a test for an unmatching system ID at\nits bottom, that triggers an internal error as an effect of\nmanifest_report_error().\n3) basebackup.c, with a too long symlink or tar name, where\nERRCODE_PROGRAM_LIMIT_EXCEEDED should make sense.\n4) pg_walinspect, for an invalid LSN. That would be\nERRCODE_INVALID_PARAMETER_VALUE.\n5) Some paths of pg_ucol_open() are failing internally, still these\nrefer to parameters that can be set, so I've attached\nERRCODE_INVALID_PARAMETER_VALUE to them.\n6) PerformWalRecovery(), where recovery ends before target is reached,\nfor a ERRCODE_CONFIG_FILE_ERROR.\n7) pg_replication_slot_advance(), missing for the target LSN a\nERRCODE_INVALID_PARAMETER_VALUE.\n8) Isolation test alter-table-4/s2, able to trigger a \"attribute \"a\"\nof relation \"c1\" does not match parent's type\". Shouldn't that be a\nERRCODE_INVALID_COLUMN_DEFINITION? \n\nThen there is a series of issues triggered by the main regression test\nsuite, applying three times (pg_upgrade, make check and\n027_stream_regress.pl):\n1) MergeConstraintsIntoExisting() under a case of relhassubclass, for\nERRCODE_INVALID_OBJECT_DEFINITION.\n2) Trigger rename on a partition, see renametrig(), for\nERRCODE_FEATURE_NOT_SUPPORTED.\n3) \"could not find suitable unique index on materialized view\", with a\nplain elog() in matview.c, for ERRCODE_FEATURE_NOT_SUPPORTED\n4) \"role \\\"blah\\\" cannot have explicit members\", for\nERRCODE_FEATURE_NOT_SUPPORTED.\n5) Similar to previous one, AddRoleMems() with \"role \\\"blah\\\" cannot\nbe a member of any role\"\n6) icu_validate_locale(), icu_language_tag() and make_icu_collator()\nfor invalid parameter inputs.\n7) ATExecAlterConstraint()\n\nThere are a few fancy cases where we expect an internal error per the\nstate of the tests:\n1) amcheck\n1-1) bt_index_check_internal() in amcheck, where the code has the idea\nto open an OID given in input, trigerring an elog(). That does not\nstrike me as a good idea, though that's perhaps acceptable. The error\nis an \"could not open relation with OID\".\n1-2) 003_check.pl has 12 cases with backtraces expected in the outcome\nas these check corruption cases.\n2) pg_visibility does the same thing, for two tests trigerring a\n\"could not open relation with OID\".\n3) plpython with 6 cases which are internal, not sure what to do about\nthese.\n4) test_resowner has two cases triggered by SQL functions, which are\nexpected to be internal.\n5) test_tidstore, with \"tuple offset out of range\" triggered by a SQL\ncall.\n6) injection_points, that are aimed for tests, has six backtraces.\n7) currtid_internal().. Perhaps we should try to rip out this stuff,\nwhich is specific to ODBC. 
There are a lot of backtraces here.\n8) satisfies_hash_partition() in test hash_part, generating a\nbacktrace for an InvalidOid in the main regression test suite.\n\nAll these cases are able to trigger backtraces, and while of them are\nOK to keep as they are, the cases of the first and second lists ought\nto be fixed, and attached is a patch do close the gap. This reduces\nthe number of internal errors generated by the tests from 85 to 35,\nwith injection points enabled.\n\nNote that I've kept the currtid() errcodes in it, though I don't think\nmuch of them. The rest looks sensible enough to address.\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 23 Apr 2024 13:54:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Internal error codes triggered by tests" }, { "msg_contents": "I sent a list of user-facing elogs here, a few times.\nZDclRM/[email protected]\n\nAnd if someone had expressed an interest, I might have sent a longer\nlist.\n\n\n", "msg_date": "Mon, 29 Apr 2024 08:02:45 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "On Mon, Apr 29, 2024 at 08:02:45AM -0500, Justin Pryzby wrote:\n> I sent a list of user-facing elogs here, a few times.\n> ZDclRM/[email protected]\n> \n> And if someone had expressed an interest, I might have sent a longer\n> list.\n\nThanks. I'll take a look at what you have there. Nothing would be\ncommitted before v18 opens, though.\n--\nMichael", "msg_date": "Tue, 30 Apr 2024 07:38:00 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "On Tue, Apr 23, 2024 at 12:55 AM Michael Paquier <[email protected]> wrote:\n> Thoughts?\n\nThe patch as proposed seems fine. Marking RfC.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 13:41:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "On Fri, May 17, 2024 at 01:41:29PM -0400, Robert Haas wrote:\n> On Tue, Apr 23, 2024 at 12:55 AM Michael Paquier <[email protected]> wrote:\n>> Thoughts?\n> \n> The patch as proposed seems fine. Marking RfC.\n\nThanks. I'll look again at that once v18 opens up for business.\n--\nMichael", "msg_date": "Sat, 18 May 2024 10:56:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "On Sat, May 18, 2024 at 10:56:43AM +0900, Michael Paquier wrote:\n> Thanks. I'll look again at that once v18 opens up for business.\n\nLooked at that again, and one in tablecmds.c is not needed anymore,\nand there was a conflict in be-secure-openssl.c. Removed the first\none, fixed the second one, then applied the patch after a second look.\n--\nMichael", "msg_date": "Thu, 4 Jul 2024 09:51:16 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "Hello Michael,\n\n04.07.2024 03:51, Michael Paquier wrote:\n> On Sat, May 18, 2024 at 10:56:43AM +0900, Michael Paquier wrote:\n>> Thanks. I'll look again at that once v18 opens up for business.\n> Looked at that again, and one in tablecmds.c is not needed anymore,\n> and there was a conflict in be-secure-openssl.c. 
Removed the first\n> one, fixed the second one, then applied the patch after a second look.\n\nCould you please share your thoughts regarding other error cases, which is\nnot triggered by existing tests, but still can be easily reached by users?\n\nFor example:\nSELECT satisfies_hash_partition(1, 1, 0, 0);\n\nERROR:  XX000: could not open relation with OID 1\nLOCATION:  relation_open, relation.c:61\n\nor:\nCREATE TABLE t (b bytea);\nINSERT INTO t SELECT ''::bytea;\nCREATE INDEX brinidx ON t USING brin\n  (b bytea_bloom_ops(n_distinct_per_range = -1.0));\n\nERROR:  XX000: the bloom filter is too large (44629 > 8144)\nLOCATION:  bloom_init, brin_bloom.c:344\n\nShould such cases be corrected too?\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 4 Jul 2024 11:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "On Thu, Jul 04, 2024 at 11:00:01AM +0300, Alexander Lakhin wrote:\n> Could you please share your thoughts regarding other error cases, which is\n> not triggered by existing tests, but still can be easily reached by users?\n> \n> For example:\n> SELECT satisfies_hash_partition(1, 1, 0, 0);\n> \n> ERROR:  XX000: could not open relation with OID 1\n> LOCATION:  relation_open, relation.c:61\n> \n> or:\n> CREATE TABLE t (b bytea);\n> INSERT INTO t SELECT ''::bytea;\n> CREATE INDEX brinidx ON t USING brin\n>  (b bytea_bloom_ops(n_distinct_per_range = -1.0));\n> \n> ERROR:  XX000: the bloom filter is too large (44629 > 8144)\n> LOCATION:  bloom_init, brin_bloom.c:344\n> \n> Should such cases be corrected too?\n\nThis is a case-by-case. satisfies_hash_partition() is undocumented,\nso doing nothing is fine by me. The second one, though is something\ntaht can be triggered with rather normal DDL sequences. 
That's more\n> annoying.\n\nThank you for the answer!\n\nLet me show you other error types for discussion/classification:\nSELECT pg_describe_object(1::regclass, 0, 0);\n\nERROR:  XX000: unsupported object class: 1\nLOCATION:  getObjectDescription, objectaddress.c:4016\nor\nSELECT pg_identify_object_as_address('1'::regclass, 1, 1);\n\nERROR:  XX000: unsupported object class: 1\nLOCATION:  getObjectTypeDescription, objectaddress.c:4597\n\n--\nSELECT format('BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\nSET TRANSACTION SNAPSHOT ''%s''', repeat('-', 1000))\n\\gexec\nERROR:  XX000: could not open file \"pg_snapshots/-----...---\" for reading: File name too long\nLOCATION:  ImportSnapshot, snapmgr.c:1428\n\n--\nCREATE OPERATOR === (leftarg = int4, rightarg = int4, procedure = int4eq,\n   commutator = ===, hashes);\n\nCREATE TABLE t1 (a int);\nANALYZE t1;\nCREATE TABLE t2 (a int);\n\nSELECT * FROM t1, t2 WHERE t1.a === t2.a;\n\nERROR:  XX000: could not find hash function for hash operator 16385\nLOCATION:  ExecHashTableCreate, nodeHash.c:560\n\n--\nWITH RECURSIVE oq(x) AS (\n     WITH iq as (SELECT * FROM oq)\n     SELECT * FROM iq\n     UNION\n     SELECT * from iq\n)\nSELECT * FROM oq;\n\nERROR:  XX000: missing recursive reference\nLOCATION:  checkWellFormedRecursion, parse_cte.c:896\n(commented as \"should not happen\", but still...)\n\n--\nCREATE EXTENSION ltree;\nSELECT '1' ::ltree @ (repeat('!', 100)) ::ltxtquery;\n\nERROR:  XX000: stack too short\nLOCATION:  makepol, ltxtquery_io.c:252\n\n--\nThere is also a couple of dubious ereport(DEBUG1,\n(errcode(ERRCODE_INTERNAL_ERROR), ...) calls like:\n         /*\n          * User-defined picksplit failed to create an actual split, ie it put\n          * everything on the same side.  Complain but cope.\n          */\n         ereport(DEBUG1,\n                 (errcode(ERRCODE_INTERNAL_ERROR),\n                  errmsg(\"picksplit method for column %d of index \\\"%s\\\" failed\",\n                         attno + 1, RelationGetRelationName(r)),\n\nI'm not mentioning errors, that require more analysis and maybe correcting\nthe surrounding logic, not ERRCODE only.\n\nMaybe it makes sense to separate the discussion of such errors, which are\nnot triggered by tests/not covered; I'm just not sure how to handle them\nefficiently.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 8 Jul 2024 12:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "On Mon, Jul 08, 2024 at 12:00:00PM +0300, Alexander Lakhin wrote:\n> Let me show you other error types for discussion/classification:\n> SELECT pg_describe_object(1::regclass, 0, 0);\n> \n> ERROR:  XX000: unsupported object class: 1\n> LOCATION:  getObjectDescription, objectaddress.c:4016\n> or\n> SELECT pg_identify_object_as_address('1'::regclass, 1, 1);\n> \n> ERROR:  XX000: unsupported object class: 1\n> LOCATION:  getObjectTypeDescription, objectaddress.c:4597\n\nThese ones are old enough, indeed. Knowing that they usually come\ndown to be used with scans of pg_shdepend and pg_depend to get some\ninformation about the objects involved, I've never come down to see if\nthese were really worth tackling. The cases of dropped/undefined\ncase is much more interesting, because we may give in input object IDs\nthat have been dropped concurrently. 
Class IDs are constants fixed in\ntime.\n\n> --\n> SELECT format('BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n> SET TRANSACTION SNAPSHOT ''%s''', repeat('-', 1000))\n> \\gexec\n> ERROR:  XX000: could not open file \"pg_snapshots/-----...---\" for reading: File name too long\n> LOCATION:  ImportSnapshot, snapmgr.c:1428\n\nThis one is fun. errcode_for_file_access() does not know about\nENAMETOOLONG, as an effect of the errno returned by AllocateFile().\nPerhaps it should map to ERRCODE_NAME_TOO_LONG?\n\n> CREATE OPERATOR === (leftarg = int4, rightarg = int4, procedure = int4eq,\n>   commutator = ===, hashes);\n> \n> CREATE TABLE t1 (a int);\n> ANALYZE t1;\n> CREATE TABLE t2 (a int);\n> \n> SELECT * FROM t1, t2 WHERE t1.a === t2.a;\n> \n> ERROR:  XX000: could not find hash function for hash operator 16385\n> LOCATION:  ExecHashTableCreate, nodeHash.c:560\n\nHehe. You are telling that this operator supports a hash join, but\nnope. I am not really convinced that this is worth worrying.\n\n\n> --\n> WITH RECURSIVE oq(x) AS (\n>     WITH iq as (SELECT * FROM oq)\n>     SELECT * FROM iq\n>     UNION\n>     SELECT * from iq\n> )\n> SELECT * FROM oq;\n> \n> ERROR:  XX000: missing recursive reference\n> LOCATION:  checkWellFormedRecursion, parse_cte.c:896\n> (commented as \"should not happen\", but still...)\n\nHmm. This one feels like a bug, indeed.\n\n> --\n> CREATE EXTENSION ltree;\n> SELECT '1' ::ltree @ (repeat('!', 100)) ::ltxtquery;\n> \n> ERROR:  XX000: stack too short\n> LOCATION:  makepol, ltxtquery_io.c:252\n\nltree has little maintenance, not sure that's worth worrying.\n\n> I'm not mentioning errors, that require more analysis and maybe correcting\n> the surrounding logic, not ERRCODE only.\n> \n> Maybe it makes sense to separate the discussion of such errors, which are\n> not triggered by tests/not covered; I'm just not sure how to handle them\n> efficiently.\n\nThe scope is too broad for some of them like the CTE case, so separate\nthreads to attract the correct review audience makes sense to me.\n--\nMichael", "msg_date": "Wed, 10 Jul 2024 13:42:33 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "> On 10 Jul 2024, at 06:42, Michael Paquier <[email protected]> wrote:\n\n>> SELECT format('BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n>> SET TRANSACTION SNAPSHOT ''%s''', repeat('-', 1000))\n>> \\gexec\n>> ERROR: XX000: could not open file \"pg_snapshots/-----...---\" for reading: File name too long\n>> LOCATION: ImportSnapshot, snapmgr.c:1428\n> \n> This one is fun. 
errcode_for_file_access() does not know about\n> ENAMETOOLONG, as an effect of the errno returned by AllocateFile().\n> Perhaps it should map to ERRCODE_NAME_TOO_LONG?\n\nMapping this case to ERRCODE_NAME_TOO_LONG seems like a legit improvement, even\nthough the error is likely to be quite rare in production.\n\nThe rest mentioned upthread seems either not worth the effort or are likely to\nbe bugs warranting proper investigation.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 12 Jul 2024 22:41:14 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "Hello Daniel and Michael,\n\n12.07.2024 23:41, Daniel Gustafsson wrote:\n>> On 10 Jul 2024, at 06:42, Michael Paquier <[email protected]> wrote:\n> The rest mentioned upthread seems either not worth the effort or are likely to\n> be bugs warranting proper investigation.\n>\n\nI've filed a bug report about the \"WITH RECURSIVE\" anomaly: [1], but what\nI wanted to understand when presenting different error kinds is what\ndefinition XX000 errors could have in principle?\n\nIt seems to me that we can't define them as indicators of unexpected (from\nthe server's POV) conditions, similar to assertion failures (but produced\nwith no asserts enabled), which should be and mostly get fixed.\n\nIf the next thing to do is to get backtrace_on_internal_error back and\nthat GUC is mainly intended for developers, then maybe having clean (or\ncontaining expected backtraces only) regression test logs is a final goal\nand we should stop here. But if it's expected that that GUC could be\nhelpful for users to analyze such errors in production and thus pay extra\nattention to them, maybe having XX000 status for presumably\nunreachable conditions only is desirable...\n\n[1] https://www.postgresql.org/message-id/18536-0a342ec07901203e%40postgresql.org\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 14 Jul 2024 19:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "On Sun, Jul 14, 2024 at 07:00:00PM +0300, Alexander Lakhin wrote:\n> I've filed a bug report about the \"WITH RECURSIVE\" anomaly: [1], but what\n> I wanted to understand when presenting different error kinds is what\n> definition XX000 errors could have in principle?\n\nCool, thanks! I can see that Tom has already committed a fix.\n\nI'm going to start a new thread for ERRCODE_NAME_TOO_LONG. It would\nbe confusing to solve the issue in the middle of this thread.\n\n> If the next thing to do is to get backtrace_on_internal_error back and\n> that GUC is mainly intended for developers, then maybe having clean (or\n> containing expected backtraces only) regression test logs is a final goal\n> and we should stop here. But if it's expected that that GUC could be\n> helpful for users to analyze such errors in production and thus pay extra\n> attention to them, maybe having XX000 status for presumably\n> unreachable conditions only is desirable...\n\nPerhaps. Let's see where it leads if we have this discussion again.\nSome internal errors cannot be avoided because some tests expect such\ncases (failures with on-disk file manipulation is one). 
\n--\nMichael", "msg_date": "Tue, 16 Jul 2024 10:32:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "On Fri, Jul 12, 2024 at 10:41:14PM +0200, Daniel Gustafsson wrote:\n> Mapping this case to ERRCODE_NAME_TOO_LONG seems like a legit improvement, even\n> though the error is likely to be quite rare in production.\n\nHmm. This is interesting, still it could be confusing as\nERRCODE_NAME_TOO_LONG is used only for names, when they are longer\nthan NAMEDATALEN, so in context that's a bit different than a longer\nfile name.\n\nHow about using a new error code in class 58, say a\nERRCODE_FILE_NAME_TOO_LONG like in the attached?\nERRCODE_DUPLICATE_FILE is like that; it exists just for the mapping\nwith EEXIST.\n--\nMichael", "msg_date": "Thu, 18 Jul 2024 16:29:36 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "> On 18 Jul 2024, at 09:29, Michael Paquier <[email protected]> wrote:\n\n> How about using a new error code in class 58, say a\n> ERRCODE_FILE_NAME_TOO_LONG like in the attached?\n> ERRCODE_DUPLICATE_FILE is like that; it exists just for the mapping\n> with EEXIST.\n\nAgreed, that's probably a better option.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 18 Jul 2024 09:37:06 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "On Thu, Jul 18, 2024 at 09:37:06AM +0200, Daniel Gustafsson wrote:\n> On 18 Jul 2024, at 09:29, Michael Paquier <[email protected]> wrote:\n>> How about using a new error code in class 58, say a\n>> ERRCODE_FILE_NAME_TOO_LONG like in the attached?\n>> ERRCODE_DUPLICATE_FILE is like that; it exists just for the mapping\n>> with EEXIST.\n> \n> Agreed, that's probably a better option.\n\nStill sounds like a better idea to me after a night of sleep. Would\nsomebody disagree about this idea? HEAD only, of course.\n--\nMichael", "msg_date": "Fri, 19 Jul 2024 11:35:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Internal error codes triggered by tests" }, { "msg_contents": "On Thu, Jul 18, 2024 at 09:37:06AM +0200, Daniel Gustafsson wrote:\n> On 18 Jul 2024, at 09:29, Michael Paquier <[email protected]> wrote:\n>> How about using a new error code in class 58, say a\n>> ERRCODE_FILE_NAME_TOO_LONG like in the attached?\n>> ERRCODE_DUPLICATE_FILE is like that; it exists just for the mapping\n>> with EEXIST.\n> \n> Agreed, that's probably a better option.\n\nApplied this one now on HEAD. On second look, all buildfarm\nenvironments seem to be OK with this errno, as far as I've checked, so\nthat should be OK.\n--\nMichael", "msg_date": "Mon, 22 Jul 2024 09:45:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Internal error codes triggered by tests" } ]
[ { "msg_contents": "Hi,\n\nI'm proposing a patch that making COPY format extendable:\nhttps://www.postgresql.org/message-id/20231204.153548.2126325458835528809.kou%40clear-code.com\nhttps://commitfest.postgresql.org/48/4681/\n\nIt's based on the discussion at:\nhttps://www.postgresql.org/message-id/flat/[email protected]#2bb7af4a3d2c7669f9a49808d777a20d\n\n> > > IIUC, you want extensibility in FORMAT argument to COPY command\n> > > https://www.postgresql.org/docs/current/sql-copy.html. Where the\n> > > format is pluggable. That seems useful.\n>\n> > Agreed.\n>\n> Ditto.\n\n\nBut my proposal stalled. It it acceptable the feature\nrequest that making COPY format extendable? If it's not\nacceptable, what are blockers? Performance?\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Tue, 23 Apr 2024 14:14:55 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Is it acceptable making COPY format extendable?" }, { "msg_contents": "On 23.04.24 07:14, Sutou Kouhei wrote:\n> Hi,\n> \n> I'm proposing a patch that making COPY format extendable:\n> https://www.postgresql.org/message-id/20231204.153548.2126325458835528809.kou%40clear-code.com\n> https://commitfest.postgresql.org/48/4681/\n> \n> It's based on the discussion at:\n> https://www.postgresql.org/message-id/flat/[email protected]#2bb7af4a3d2c7669f9a49808d777a20d\n> \n>>>> IIUC, you want extensibility in FORMAT argument to COPY command\n>>>> https://www.postgresql.org/docs/current/sql-copy.html. Where the\n>>>> format is pluggable. That seems useful.\n>>\n>>> Agreed.\n>>\n>> Ditto.\n> \n> But my proposal stalled. It it acceptable the feature\n> request that making COPY format extendable? If it's not\n> acceptable, what are blockers? Performance?\n\nI think that thread is getting an adequate amount of attention. Of \ncourse, you can always wish for more, but \"stalled\" looks very different.\n\nLet's not start a new thread here to discuss the merits of the other \nthread; that discussion belongs there.\n\n\n", "msg_date": "Wed, 24 Apr 2024 09:57:38 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it acceptable making COPY format extendable?" }, { "msg_contents": "Hi,\n\nThanks for replying this.\n\nIn <[email protected]>\n \"Re: Is it acceptable making COPY format extendable?\" on Wed, 24 Apr 2024 09:57:38 +0200,\n Peter Eisentraut <[email protected]> wrote:\n\n>> I'm proposing a patch that making COPY format extendable:\n>> https://www.postgresql.org/message-id/20231204.153548.2126325458835528809.kou%40clear-code.com\n>> https://commitfest.postgresql.org/48/4681/\n>>\n>> But my proposal stalled. It it acceptable the feature\n>> request that making COPY format extendable? If it's not\n>> acceptable, what are blockers? Performance?\n> \n> I think that thread is getting an adequate amount of attention. Of\n> course, you can always wish for more, but \"stalled\" looks very\n> different.\n\nSorry for \"stalled\" misuse.\n\nI haven't got any reply since 2024-03-15:\nhttps://www.postgresql.org/message-id/flat/20240315.173754.2049843193122381085.kou%40clear-code.com#07aefc636d8165204ddfba971dc9a490\n(I sent some pings.)\n\nSo I called this status as \"stalled\".\n\nI'm not familiar with the PostgreSQL's development\nstyle. What should I do for this case? 
Should I just wait\nfor a reply from others without doing anything?\n\n> Let's not start a new thread here to discuss the merits of the other\n> thread; that discussion belongs there.\n\nI wanted to discuss possibility for this feature request in\nthis thread. I wanted to use my existing thread is for how\nto implement this feature request.\n\nShould I reuse\nhttps://www.postgresql.org/message-id/flat/CAJ7c6TM6Bz1c3F04Cy6%2BSzuWfKmr0kU8c_3Stnvh_8BR0D6k8Q%40mail.gmail.com\nfor it instead of this thread?\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 25 Apr 2024 13:53:56 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is it acceptable making COPY format extendable?" }, { "msg_contents": "On 25.04.24 06:53, Sutou Kouhei wrote:\n> I haven't got any reply since 2024-03-15:\n> https://www.postgresql.org/message-id/flat/20240315.173754.2049843193122381085.kou%40clear-code.com#07aefc636d8165204ddfba971dc9a490\n> (I sent some pings.)\n> \n> So I called this status as \"stalled\".\n> \n> I'm not familiar with the PostgreSQL's development\n> style. What should I do for this case? Should I just wait\n> for a reply from others without doing anything?\n\nPostgreSQL development is currently in feature freeze for PostgreSQL 17 \n(since April 8). So most participants will not be looking very closely \nat development projects for releases beyond that. The next commitfest \nfor patches targeting PostgreSQL 18 is in July, and I see your patch \nregistered there. It's possible that your thread is not going to get \nmuch attention until then.\n\nSo, in summary, you are doing everything right, you just have to be a \nbit more patient right now. :)\n\n\n\n", "msg_date": "Thu, 25 Apr 2024 09:59:18 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it acceptable making COPY format extendable?" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Is it acceptable making COPY format extendable?\" on Thu, 25 Apr 2024 09:59:18 +0200,\n Peter Eisentraut <[email protected]> wrote:\n\n> PostgreSQL development is currently in feature freeze for PostgreSQL\n> 17 (since April 8). So most participants will not be looking very\n> closely at development projects for releases beyond that. The next\n> commitfest for patches targeting PostgreSQL 18 is in July, and I see\n> your patch registered there. It's possible that your thread is not\n> going to get much attention until then.\n> \n> So, in summary, you are doing everything right, you just have to be a\n> bit more patient right now. :)\n\nI see. I'll wait for the next commitfest.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Tue, 07 May 2024 06:40:11 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is it acceptable making COPY format extendable?" } ]
[ { "msg_contents": "Hi hackers,\n\nI proposal adding privileges test to improve\ntest coverage of pg_stat_statements.\n\n## test procedure\n\n./configure --enable-coverage --enable-tap-tests --with-llvm CFLAGS=-O0\nmake check-world\nmake coverage-html\n\n## coverage\n\nbefore Line Coverage 74.0 %(702/949 lines)\nafter Line Coverage 74.4 %(706/949 lines)\n\nAlthough the improvement is small, I think that test regarding\nprivileges is necessary.\n\nAs a side note,\nInitially, I was considering adding a test for dealloc.\nHowever, after reading the thread below, I confirmed that\nit is difficult to create tests due to differences due to endian.\n(https://www.postgresql.org/message-id/flat/40d1e4f2-835f-448f-a541-8ff5db75bf3d%40eisentraut.org)\nFor this reason, I first added a test about privileges.\n\nBest Regards,\n\nKeisuke Kuroda\nNTT Comware", "msg_date": "Tue, 23 Apr 2024 15:44:00 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Add privileges test for pg_stat_statements to improve coverage" }, { "msg_contents": "\n\nOn 2024/04/23 15:44, [email protected] wrote:\n> Hi hackers,\n> \n> I proposal adding privileges test to improve\n> test coverage of pg_stat_statements.\n\n+1\n\nHere are the review comments:\n\nmeson.build needs to be updated as well, like the Makefile.\n\n\nFor the privileges test, should we explicitly set pg_stat_statements.track_utility\nat the start, as done in other pg_stat_statements tests, to make sure\nif utility command statistics are collected or not?\n\n\n+SELECT\n+ CASE\n+ WHEN queryid <> 0 THEN TRUE ELSE FALSE\n+ END AS queryid_bool\n+ ,query FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n\nCan't we simplify \"CASE ... END\" to just \"queryid <> 0\"?\n\n\nShould the test check not only queryid and query but also\nthe statistics column like \"calls\"? Roles that aren't superusers\nor pg_read_all_stats should be able to see statistics but not\nquery or queryid. So we should test that such roles can't see\nquery or queryid but can see statistics. Thoughts?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 20 Jul 2024 04:16:46 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add privileges test for pg_stat_statements to improve coverage" }, { "msg_contents": "Hi Fujii-san,\nThank you for your reply and comment!\n\nattach v2 fixed patch.\n\n> meson.build needs to be updated as well, like the Makefile.\n\nYes.\nUpdate 'contrib/pg_stat_statements/meson.build'.\n\n> For the privileges test, should we explicitly set \n> pg_stat_statements.track_utility\n> at the start, as done in other pg_stat_statements tests, to make sure\n> if utility command statistics are collected or not?\n\nIt certainly needs consideration.\nI think the results of the utility commands are not needed in privileges \ntest.\nSET 'pg_stat_statements.track_utility = FALSE'.\n\n> Can't we simplify \"CASE ... END\" to just \"queryid <> 0\"?\n\nYes.\nIf we add \"queryid <> 0\" to the WHERE clause, we can get the same \nresult.\nChange the SQL to the following:\n\n+SELECT query, calls, rows FROM pg_stat_statements\n+ WHERE queryid <> 0 ORDER BY query COLLATE \"C\";\n\n> Should the test check not only queryid and query but also\n> the statistics column like \"calls\"? Roles that aren't superusers\n> or pg_read_all_stats should be able to see statistics but not\n> query or queryid. 
So we should test that such roles can't see\n> query or queryid but can see statistics. Thoughts?\n\nI agree. We should test that such roles can't see\nquery or queryid but can see statistics.\nAdd the SQL to the test.\nTest that calls and rows are displayed even if the queryid is NULL.\n\n+-- regress_stats_user1 can read calls and rows\n+-- executed by other users\n+--\n+\n+SET ROLE regress_stats_user1;\n+SELECT query, calls, rows FROM pg_stat_statements\n+ WHERE queryid IS NULL ORDER BY query COLLATE \"C\";\n\nBest Regards,\nKeisuke Kuroda\nNTT Comware", "msg_date": "Mon, 22 Jul 2024 15:23:41 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Add privileges test for pg_stat_statements to improve coverage" }, { "msg_contents": "\n\nOn 2024/07/22 15:23, [email protected] wrote:\n> Hi Fujii-san,\n> Thank you for your reply and comment!\n> \n> attach v2 fixed patch.\n\nThanks for updating the patch!\n\n+SELECT query, calls, rows FROM pg_stat_statements\n+ WHERE queryid IS NULL ORDER BY query COLLATE \"C\";\n\nShouldn't we also include calls and rows in the ORDER BY clause?\nWithout this, if there are multiple records with the same query\nbut different calls or rows, the query result might be unstable.\nI believe this is causing the test failure reported by\nhe PostgreSQL Patch Tester.\n\nhttp://cfbot.cputube.org/\nhttps://cirrus-ci.com/task/4533613939654656\n\n\n> Yes.\n> If we add \"queryid <> 0\" to the WHERE clause, we can get the same result.\n> Change the SQL to the following:\n> \n> +SELECT query, calls, rows FROM pg_stat_statements\n> +  WHERE queryid <> 0 ORDER BY query COLLATE \"C\";\n\nI was thinking of adding \"queryid <> 0\" in the SELECT clause\ninstead of the WHERE clause. This way, we can verify if\nthe query results are as expected regardless of the queryid value,\nincluding both queryid <> 0 and queryid = 0.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 23 Jul 2024 01:51:19 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add privileges test for pg_stat_statements to improve coverage" }, { "msg_contents": "On Tue, Jul 23, 2024 at 01:51:19AM +0900, Fujii Masao wrote:\n> +SELECT query, calls, rows FROM pg_stat_statements\n> + WHERE queryid IS NULL ORDER BY query COLLATE \"C\";\n> \n> Shouldn't we also include calls and rows in the ORDER BY clause?\n> Without this, if there are multiple records with the same query\n> but different calls or rows, the query result might be unstable.\n> I believe this is causing the test failure reported by\n> he PostgreSQL Patch Tester.\n> \n> http://cfbot.cputube.org/\n> https://cirrus-ci.com/task/4533613939654656\n\n+SELECT query, calls, rows FROM pg_stat_statements\n+ WHERE queryid IS NULL ORDER BY query COLLATE \"C\";\n+ query | calls | rows \n+--------------------------+-------+------\n+ <insufficient privilege> | 1 | 1\n+ <insufficient privilege> | 1 | 1\n+ <insufficient privilege> | 1 | 3\n+(3 rows)\n\nI'd recommend to add a GROUP BY on calls and rows, with a\ncount(query), rather than print the same row without the query text\nmultiple times.\n\n+-- regress_stats_user2 can read query text and queryid\n+SET ROLE regress_stats_user2;\n+SELECT query, calls, rows FROM pg_stat_statements\n+ WHERE queryid <> 0 ORDER BY query COLLATE \"C\";\n+ query | calls | rows \n+----------------------------------------------------+-------+------\n+ SELECT $1 AS \"ONE\" | 
1 | 1\n+ SELECT $1+$2 AS \"TWO\" | 1 | 1\n+ SELECT pg_stat_statements_reset() IS NOT NULL AS t | 1 | 1\n+ SELECT query, calls, rows FROM pg_stat_statements +| 1 | 1\n+ WHERE queryid <> $1 ORDER BY query COLLATE \"C\" | | \n+ SELECT query, calls, rows FROM pg_stat_statements +| 1 | 3\n+ WHERE queryid <> $1 ORDER BY query COLLATE \"C\" | | \n\nWe have two entries here with the same query and the same query ID,\nbecause they have a different userid. Shouldn't this query reflect\nthis information rather than have the reader guess it? This is going\nto require a join with pg_authid to grab the role name, and an ORDER\nBY on the role name.\n--\nMichael", "msg_date": "Tue, 23 Jul 2024 09:17:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add privileges test for pg_stat_statements to improve coverage" }, { "msg_contents": "Hi Fujii-san,\nThank you for your reply and comment!\n\nattach v3 fixed patch.\n\n> Shouldn't we also include calls and rows in the ORDER BY clause?\n> Without this, if there are multiple records with the same query\n> but different calls or rows, the query result might be unstable.\n> I believe this is causing the test failure reported by\n> he PostgreSQL Patch Tester.\n\n> I was thinking of adding \"queryid <> 0\" in the SELECT clause\n> instead of the WHERE clause. This way, we can verify if\n> the query results are as expected regardless of the queryid value,\n> including both queryid <> 0 and queryid = 0.\n\nIt's exactly as you said.\n* Add calls and rows in the ORDER BY caluse.\n* Modify \"queryid <> 0\" in the SELECT clause.\nModify test SQL belows, and the regress_stats_user1 check SQL\nonly needs to be done once.\n\n+SELECT queryid <> 0 AS queryid_bool, query, calls, rows\n+ FROM pg_stat_statements\n+ ORDER BY query COLLATE \"C\", calls, rows;\n\nBest Regards,\nKeisuke Kuroda\nNTT Comware", "msg_date": "Tue, 23 Jul 2024 09:26:02 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Add privileges test for pg_stat_statements to improve coverage" }, { "msg_contents": "Hi Michael-san,\nThank you for your reply and comment!\n\nattach v4 fixed patch.\n\n> We have two entries here with the same query and the same query ID,\n> because they have a different userid. Shouldn't this query reflect\n> this information rather than have the reader guess it? 
This is going\n> to require a join with pg_authid to grab the role name, and an ORDER\n> BY on the role name.\n\nI agree.\nThe information of different userids is mixed up.\nIt is easier to understand if the role name is displayed.\nJoin with pg_roles (view of pg_authid) to output the role name.\n\n> I'd recommend to add a GROUP BY on calls and rows, with a\n> count(query), rather than print the same row without the query text\n> multiple times.\n\nIndeed, same row have been output multiple times.\nIf we use GROUP BY, we would expect the following.\n\n```\nSELECT r.rolname, ss.queryid <> 0 AS queryid_bool, count(ss.query), \nss.calls, ss.rows\n FROM pg_stat_statements ss JOIN pg_roles r ON ss.userid = r.oid\n GROUP BY r.rolname, queryid_bool, ss.calls, ss.rows\n ORDER BY r.rolname, count(ss.query), ss.calls, ss.rows;\n rolname | queryid_bool | count | calls | rows\n---------------------+--------------+-------+-------+------\n postgres | | 1 | 1 | 3\n postgres | | 2 | 1 | 1\n regress_stats_user1 | t | 1 | 1 | 1\n(3 rows)\n```\n\nHowever, in this test I would like to see '<insufficient permissions>' \noutput\nand the SQL text 'SELECT $1+$2 AS “TWO”' executed by \nregress_stats_user1.\n\nThe attached patch executes the following SQL.\nWhat do you think?\n\n```\nSELECT r.rolname, ss.queryid <> 0 AS queryid_bool, ss.query, ss.calls, \nss.rows\n FROM pg_stat_statements ss JOIN pg_roles r ON ss.userid = r.oid\n ORDER BY r.rolname, ss.query COLLATE \"C\", ss.calls, ss.rows;\n rolname | queryid_bool | query | calls | \nrows\n---------------------+--------------+--------------------------+-------+------\n postgres | | <insufficient privilege> | 1 | \n 1\n postgres | | <insufficient privilege> | 1 | \n 1\n postgres | | <insufficient privilege> | 1 | \n 3\n regress_stats_user1 | t | SELECT $1+$2 AS \"TWO\" | 1 | \n 1\n(4 rows)\n```", "msg_date": "Tue, 23 Jul 2024 10:40:49 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Add privileges test for pg_stat_statements to improve coverage" }, { "msg_contents": "\n\nOn 2024/07/23 10:40, [email protected] wrote:\n> I agree.\n> The information of different userids is mixed up.\n> It is easier to understand if the role name is displayed.\n> Join with pg_roles (view of pg_authid) to output the role name.\n\n+ rolname | queryid_bool | query | calls | rows\n+---------------------+--------------+----------------------------------------------------+-------+------\n+ postgres | t | SELECT $1 AS \"ONE\" | 1 | 1\n+ postgres | t | SELECT pg_stat_statements_reset() IS NOT NULL AS t | 1 | 1\n+ regress_stats_user1 | t | SELECT $1+$2 AS \"TWO\" | 1 | 1\n\nUsing \"postgres\" as the default superuser name can cause instability.\nThis is why the Patch Tester reports now test failures again.\nYou should create and use a different superuser, such as \"regress_stats_superuser.\"\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 23 Jul 2024 14:28:14 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add privileges test for pg_stat_statements to improve coverage" }, { "msg_contents": "Hi Fujii-san,\nThank you for your comment!\n\nattach v5 fixed patch.\n\n> Using \"postgres\" as the default superuser name can cause instability.\n> This is why the Patch Tester reports now test failures again.\n> You should create and use a different superuser, such as\n> \"regress_stats_superuser.\"\n\nI 
understand.\nAdd \"regress_stats_superuser\" for test by superuser.\n\nBest Regards,\nKeisuke Kuroda\nNTT Comware", "msg_date": "Tue, 23 Jul 2024 15:02:07 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Add privileges test for pg_stat_statements to improve coverage" }, { "msg_contents": "On 2024/07/23 15:02, [email protected] wrote:\n> Hi Fujii-san,\n> Thank you for your comment!\n> \n> attach v5 fixed patch.\n> \n>> Using \"postgres\" as the default superuser name can cause instability.\n>> This is why the Patch Tester reports now test failures again.\n>> You should create and use a different superuser, such as\n>> \"regress_stats_superuser.\"\n> \n> I understand.\n> Add \"regress_stats_superuser\" for test by superuser.\n\nThanks for updating the patch!\n\nI've slightly modified the comments in the regression tests for clarity.\nAttached is the v6 patch. If there are no objections,\nI will push this version.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 24 Jul 2024 03:27:15 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add privileges test for pg_stat_statements to improve coverage" }, { "msg_contents": "> I've slightly modified the comments in the regression tests for \n> clarity.\n> Attached is the v6 patch. If there are no objections,\n> I will push this version.\n\nThank you for updating patch! I have no objection.\n\nBest Regards,\nKeisuke Kuroda\nNTT Comware\n\n\n\n", "msg_date": "Wed, 24 Jul 2024 10:23:13 +0900", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Add privileges test for pg_stat_statements to improve coverage" }, { "msg_contents": "\n\nOn 2024/07/24 10:23, [email protected] wrote:\n>> I've slightly modified the comments in the regression tests for clarity.\n>> Attached is the v6 patch. If there are no objections,\n>> I will push this version.\n> \n> Thank you for updating patch! I have no objection.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 24 Jul 2024 21:43:54 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add privileges test for pg_stat_statements to improve coverage" } ]
[ { "msg_contents": "Hi!\n\nI've been trying to introduce 64-bit transaction identifications to\nPostgres for quite a while [0]. All this implies,\nof course, an enormous amount of change that will have to take place in\nvarious modules. Consider this, the\npatch set become too big to be committed “at once”.\n\nThe obvious solutions was to try to split the patch set into smaller ones.\nBut here it comes a new challenge,\nnot every one of these parts, make Postgres better at the moment. Actually,\neven switching to a\nFullTransactionId in PGPROC will have disadvantage in increasing of WAL\nsize [1].\n\nIn fact, I believe, we're dealing with the chicken and the egg problem\nhere. Not able to commit full patch set\nsince it is too big to handle and not able to commit parts of it, since\nthey make sense all together and do not\nhelp improve Postgres at the moment.\n\nBut it's not that bad. Since the commit 4ed8f0913bfdb5f, added in [3], we\nare capable to use 64 bits to\nindexing SLRUs.\n\nPROPOSAL\nMake multixact offsets 64 bit.\n\nRATIONALE\nIt is not very difficult to overflow 32-bit mxidoff. Since, it is created\nfor every unique combination of the\ntransaction for each tuple, including XIDs and respective flags. And when a\ntransaction is added to a\nspecific multixact, it is rewritten with a new offset. In other words, it\nis possible to overflow the offsets of\nmultixacts without overflowing the multixacts themselves and/or ordinary\ntransactions. I believe, there\nwas something about in the hackers mail lists, but I could not find links\nnow.\n\nPFA, patch. Here is a WIP version. Upgrade machinery should be added later.\n\nAs always, any opinions on a subject a very welcome!\n\n[0]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezY7msw%2Bjip%3Drtfvnfz051dRqz4s-diuO46v3rAoAE0T0g%40mail.gmail.com\n[3]\nhttps://postgr.es/m/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq%2BvfkmTF5Q%40mail.gmail.com\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Tue, 23 Apr 2024 11:23:41 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "POC: make mxidoff 64 bits" }, { "msg_contents": "On 23/04/2024 11:23, Maxim Orlov wrote:\n> PROPOSAL\n> Make multixact offsets 64 bit.\n\n+1, this is a good next step and useful regardless of 64-bit XIDs.\n\n> @@ -156,7 +148,7 @@\n> \t\t((uint32) ((0xFFFFFFFF % MULTIXACT_MEMBERS_PER_PAGE) + 1))\n> \n> /* page in which a member is to be found */\n> -#define MXOffsetToMemberPage(xid) ((xid) / (TransactionId) MULTIXACT_MEMBERS_PER_PAGE)\n> +#define MXOffsetToMemberPage(xid) ((xid) / (MultiXactOffset) MULTIXACT_MEMBERS_PER_PAGE)\n> #define MXOffsetToMemberSegment(xid) (MXOffsetToMemberPage(xid) / SLRU_PAGES_PER_SEGMENT)\n> \n> /* Location (byte offset within page) of flag word for a given member */\n\nThis is really a bug fix. It didn't matter when TransactionId and \nMultiXactOffset were both typedefs of uint32, but it was always wrong. 
\nThe argument name 'xid' is also misleading.\n\nI think there are some more like that, MXOffsetToFlagsBitShift for example.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 23 Apr 2024 12:37:40 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POC: make mxidoff 64 bits" }, { "msg_contents": "\n\n> On 23 Apr 2024, at 11:23, Maxim Orlov <[email protected]> wrote:\n> \n> Make multixact offsets 64 bit.\n\n-\t\tereport(ERROR,\n-\t\t\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n-\t\t\t\t errmsg(\"multixact \\\"members\\\" limit exceeded\"),\nPersonally, I'd be happy with this! We had some incidents where the only mitigation was vacuum settings tweaking.\n\nBTW as a side note... I see lot's of casts to (unsigned long long), can't we just cast to MultiXactOffset?\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 23 Apr 2024 16:02:56 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POC: make mxidoff 64 bits" }, { "msg_contents": "Hi Maxim Orlov\n Thank you so much for your tireless work on this. Increasing the WAL\nsize by a few bytes should have very little impact with today's disk\nperformance(Logical replication of this feature wal log is also increased a\nlot, logical replication is a milestone new feature, and the community has\nbeen improving the logical replication of functions),I believe removing\ntroubled postgresql Transaction ID Wraparound was also a milestone new\nfeature adding a few bytes is worth it!\n\nBest regards\n\nOn Tue, 23 Apr 2024 at 17:37, Heikki Linnakangas <[email protected]> wrote:\n\n> On 23/04/2024 11:23, Maxim Orlov wrote:\n> > PROPOSAL\n> > Make multixact offsets 64 bit.\n>\n> +1, this is a good next step and useful regardless of 64-bit XIDs.\n>\n> > @@ -156,7 +148,7 @@\n> > ((uint32) ((0xFFFFFFFF % MULTIXACT_MEMBERS_PER_PAGE) + 1))\n> >\n> > /* page in which a member is to be found */\n> > -#define MXOffsetToMemberPage(xid) ((xid) / (TransactionId)\n> MULTIXACT_MEMBERS_PER_PAGE)\n> > +#define MXOffsetToMemberPage(xid) ((xid) / (MultiXactOffset)\n> MULTIXACT_MEMBERS_PER_PAGE)\n> > #define MXOffsetToMemberSegment(xid) (MXOffsetToMemberPage(xid) /\n> SLRU_PAGES_PER_SEGMENT)\n> >\n> > /* Location (byte offset within page) of flag word for a given member */\n>\n> This is really a bug fix. It didn't matter when TransactionId and\n> MultiXactOffset were both typedefs of uint32, but it was always wrong.\n> The argument name 'xid' is also misleading.\n>\n> I think there are some more like that, MXOffsetToFlagsBitShift for example.\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n>\n>\n>\n\nHi Maxim Orlov   Thank you so much for your tireless work on this. 
Increasing the WAL size by a few bytes should have very little impact with today's disk performance(Logical replication of this feature wal log is also increased a lot, logical replication is a milestone new feature, and the community has been improving the logical replication of functions),I believe removing troubled postgresql Transaction ID Wraparound was also a milestone  new feature  adding a few bytes is worth it!Best regardsOn Tue, 23 Apr 2024 at 17:37, Heikki Linnakangas <[email protected]> wrote:On 23/04/2024 11:23, Maxim Orlov wrote:\n> PROPOSAL\n> Make multixact offsets 64 bit.\n\n+1, this is a good next step and useful regardless of 64-bit XIDs.\n\n> @@ -156,7 +148,7 @@\n>               ((uint32) ((0xFFFFFFFF % MULTIXACT_MEMBERS_PER_PAGE) + 1))\n>  \n>  /* page in which a member is to be found */\n> -#define MXOffsetToMemberPage(xid) ((xid) / (TransactionId) MULTIXACT_MEMBERS_PER_PAGE)\n> +#define MXOffsetToMemberPage(xid) ((xid) / (MultiXactOffset) MULTIXACT_MEMBERS_PER_PAGE)\n>  #define MXOffsetToMemberSegment(xid) (MXOffsetToMemberPage(xid) / SLRU_PAGES_PER_SEGMENT)\n>  \n>  /* Location (byte offset within page) of flag word for a given member */\n\nThis is really a bug fix. It didn't matter when TransactionId and \nMultiXactOffset were both typedefs of uint32, but it was always wrong. \nThe argument name 'xid' is also misleading.\n\nI think there are some more like that, MXOffsetToFlagsBitShift for example.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Tue, 23 Apr 2024 21:03:06 +0800", "msg_from": "wenhui qiu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POC: make mxidoff 64 bits" }, { "msg_contents": "On Tue, 23 Apr 2024 at 12:37, Heikki Linnakangas <[email protected]> wrote:\n\n> This is really a bug fix. It didn't matter when TransactionId and\n> MultiXactOffset were both typedefs of uint32, but it was always wrong.\n> The argument name 'xid' is also misleading.\n>\n> I think there are some more like that, MXOffsetToFlagsBitShift for example.\n\nYeah, I always thought so too. I believe, this is just a copy-paste. You\nmean, it is worth creating a separate CF\nentry for these fixes?\n\n\nOn Tue, 23 Apr 2024 at 16:03, Andrey M. Borodin <[email protected]>\nwrote:\n\n> BTW as a side note... I see lot's of casts to (unsigned long long), can't\n> we just cast to MultiXactOffset?\n>\nActually, first versions of the 64xid patch set have such a cast to types\nTransactionID, MultiXact and so on. But,\nafter some discussions, we are switched to unsigned long long cast.\nUnfortunately, I could not find an exact link\nfor that discussion. On the other hand, such a casting is already used\nthroughout the code. So, just for the\nsake of the consistency, I would like to stay with these casts.\n\n\nOn Tue, 23 Apr 2024 at 16:03, wenhui qiu <[email protected]> wrote:\n\n> Hi Maxim Orlov\n> Thank you so much for your tireless work on this. Increasing the WAL\n> size by a few bytes should have very little impact with today's disk\n> performance(Logical replication of this feature wal log is also increased a\n> lot, logical replication is a milestone new feature, and the community has\n> been improving the logical replication of functions),I believe removing\n> troubled postgresql Transaction ID Wraparound was also a milestone new\n> feature adding a few bytes is worth it!\n>\nI'm 100% agree. 
Maybe, I should return to this approach and find some\nbenefits for having FXIDs in WAL.\n\nOn Tue, 23 Apr 2024 at 12:37, Heikki Linnakangas <[email protected]> wrote:\nThis is really a bug fix. It didn't matter when TransactionId and \nMultiXactOffset were both typedefs of uint32, but it was always wrong. \nThe argument name 'xid' is also misleading.\n\nI think there are some more like that, MXOffsetToFlagsBitShift for example.Yeah, I always thought so too.  I believe, this is just a copy-paste.  You mean, it is worth creating a separate CF entry for these fixes?On Tue, 23 Apr 2024 at 16:03, Andrey M. Borodin <[email protected]> wrote:\nBTW as a side note... I see lot's of casts to (unsigned long long), can't we just cast to MultiXactOffset?Actually, first versions of the 64xid patch set have such a cast to types TransactionID, MultiXact and so on.  But, after some discussions, we are switched to unsigned long long cast.  Unfortunately, I could not find an exact link for that discussion.  On the other hand, such a casting is already used throughout the code.  So, just for the sake of the consistency, I would like to stay with these casts.On Tue, 23 Apr 2024 at 16:03, wenhui qiu <[email protected]> wrote:Hi Maxim Orlov   Thank you so much for your tireless work on this. Increasing the WAL size by a few bytes should have very little impact with today's disk performance(Logical replication of this feature wal log is also increased a lot, logical replication is a milestone new feature, and the community has been improving the logical replication of functions),I believe removing troubled postgresql Transaction ID Wraparound was also a milestone  new feature  adding a few bytes is worth it!I'm 100% agree.  Maybe, I should return to this approach and find some benefits for having FXIDs in WAL.", "msg_date": "Thu, 25 Apr 2024 17:20:53 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: POC: make mxidoff 64 bits" }, { "msg_contents": "Hi!\n\nSorry for delay. I was a bit busy last month. Anyway, here is my\nproposal for making multioffsets 64 bit.\nThe patch set consists of three parts:\n0001 - making user output of offsets 64-bit ready;\n0002 - making offsets 64-bit;\n0003 - provide 32 to 64 bit conversion in pg_upgarde.\n\nI'm pretty sure this is just a beginning of the conversation, so any\nopinions and reviews, as always, are very welcome!\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Wed, 14 Aug 2024 18:30:15 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: POC: make mxidoff 64 bits" }, { "msg_contents": "Here is rebase. Apparently I'll have to do it often, since the\nCATALOG_VERSION_NO changed in the patch.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Tue, 3 Sep 2024 16:30:10 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: POC: make mxidoff 64 bits" }, { "msg_contents": "On Tue, Sep 3, 2024 at 4:30 PM Maxim Orlov <[email protected]> wrote:\n> Here is rebase. Apparently I'll have to do it often, since the CATALOG_VERSION_NO changed in the patch.\n\nI don't think you need to maintain CATALOG_VERSION_NO change in your\npatch for the exact reason you have mentioned: patch will get conflict\neach time CATALOG_VERSION_NO is advanced. 
It's responsibility of\ncommitter to advance CATALOG_VERSION_NO when needed.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Tue, 3 Sep 2024 16:32:46 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POC: make mxidoff 64 bits" }, { "msg_contents": "On Tue, 3 Sept 2024 at 16:32, Alexander Korotkov <[email protected]>\nwrote:\n\n> I don't think you need to maintain CATALOG_VERSION_NO change in your\n> patch for the exact reason you have mentioned: patch will get conflict\n> each time CATALOG_VERSION_NO is advanced. It's responsibility of\n> committer to advance CATALOG_VERSION_NO when needed.\n>\n\nOK, I got it. My intention here was to help to test the patch. If someone\nwants to have a\nlook at the patch, he won't need to make changes in the code. In the next\niteration, I'll\nremove CATALOG_VERSION_NO version change.\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Tue, 3 Sept 2024 at 16:32, Alexander Korotkov <[email protected]> wrote:\nI don't think you need to maintain CATALOG_VERSION_NO change in your\npatch for the exact reason you have mentioned: patch will get conflict\neach time CATALOG_VERSION_NO is advanced.  It's responsibility of\ncommitter to advance CATALOG_VERSION_NO when needed.OK, I got it. My intention here was to help to test the patch. If someone wants to have a look at the patch, he won't need to make changes in the code. In the next iteration, I'll remove CATALOG_VERSION_NO version change.  -- Best regards,Maxim Orlov.", "msg_date": "Wed, 4 Sep 2024 11:49:32 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: POC: make mxidoff 64 bits" }, { "msg_contents": "Here is v3. I removed CATALOG_VERSION_NO change, so this should be done by\nthe actual commiter.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Sat, 7 Sep 2024 07:36:52 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: POC: make mxidoff 64 bits" }, { "msg_contents": "Hi, Maxim!\n\nPreviously we accessed offsets in shared MultiXactState without locks as\n32-bit read is always atomic. But I'm not sure it's so when offset become\n64-bit.\nE.g. GetNewMultiXactId():\n\nnextOffset = MultiXactState->nextOffset;\nis outside lock.\n\nThere might be other places we do the same as well.\n\nRegards,\nPavel Borisov\nSupabase\n\nHi, Maxim!Previously we accessed offsets in shared MultiXactState without locks as 32-bit read is always atomic. But I'm not sure it's so when offset become 64-bit.E.g. GetNewMultiXactId():nextOffset = MultiXactState->nextOffset;is outside lock. There might be other places we do the same as well. Regards,Pavel BorisovSupabase", "msg_date": "Thu, 12 Sep 2024 16:09:08 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POC: make mxidoff 64 bits" }, { "msg_contents": "On Thu, 12 Sept 2024 at 16:09, Pavel Borisov <[email protected]> wrote:\n\n> Hi, Maxim!\n>\n> Previously we accessed offsets in shared MultiXactState without locks as\n> 32-bit read is always atomic. But I'm not sure it's so when offset become\n> 64-bit.\n> E.g. 
GetNewMultiXactId():\n>\n> nextOffset = MultiXactState->nextOffset;\n> is outside lock.\n>\n> There might be other places we do the same as well.\n>\nI think the replacement of plain assignments by\npg_atomic_read_u64/pg_atomic_write_u64 would be sufficient.\n\n(The same I think is needed for the patchset [1])\n[1]\nhttps://www.postgresql.org/message-id/flat/CAJ7c6TMvPz8q+nC=JoKniy7yxPzQYcCTnNFYmsDP-nnWsAOJ2g@mail.gmail.com\n\nRegards,\nPavel Borisov\n\nOn Thu, 12 Sept 2024 at 16:09, Pavel Borisov <[email protected]> wrote:Hi, Maxim!Previously we accessed offsets in shared MultiXactState without locks as 32-bit read is always atomic. But I'm not sure it's so when offset become 64-bit.E.g. GetNewMultiXactId():nextOffset = MultiXactState->nextOffset;is outside lock. There might be other places we do the same as well. I think the replacement of plain assignments by pg_atomic_read_u64/pg_atomic_write_u64 would be sufficient.(The same I think is needed for the patchset [1])[1] https://www.postgresql.org/message-id/flat/CAJ7c6TMvPz8q+nC=JoKniy7yxPzQYcCTnNFYmsDP-nnWsAOJ2g@mail.gmail.comRegards,Pavel Borisov", "msg_date": "Thu, 12 Sep 2024 16:25:53 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POC: make mxidoff 64 bits" }, { "msg_contents": "On 2024-Sep-12, Pavel Borisov wrote:\n\n> Hi, Maxim!\n> \n> Previously we accessed offsets in shared MultiXactState without locks as\n> 32-bit read is always atomic. But I'm not sure it's so when offset become\n> 64-bit.\n> E.g. GetNewMultiXactId():\n> \n> nextOffset = MultiXactState->nextOffset;\n> is outside lock.\n\nGood though. But fortunately I think it's not a problem. The one you\nsay is with MultiXactGetLock held in shared mode -- and that works OK,\nas the assignment (in line 1263 at the bottom of the same routine) is\ndone with exclusive lock held.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 12 Sep 2024 15:14:08 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: POC: make mxidoff 64 bits" } ]
[ { "msg_contents": "Hi,\n\ndoc/src/sgml/monitoring.sgml seems to have a minor typo:\n\nIn pg_stat_database_conflicts section (around line 3621) we have:\n\n <para>\n Number of uses of logical slots in this database that have been\n canceled due to old snapshots or too low a <xref linkend=\"guc-wal-level\"/>\n on the primary\n </para></entry>\n\nI think \"too low a\" should be \"too low\" ('a' is not\nnecessary). Attached is the patch.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Tue, 23 Apr 2024 20:17:39 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Minor document typo" }, { "msg_contents": "On Tue, 23 Apr 2024 at 23:17, Tatsuo Ishii <[email protected]> wrote:\n> Number of uses of logical slots in this database that have been\n> canceled due to old snapshots or too low a <xref linkend=\"guc-wal-level\"/>\n> on the primary\n>\n> I think \"too low a\" should be \"too low\" ('a' is not\n> necessary). Attached is the patch.\n\nThe existing text looks fine to me. The other form would use \"of a\"\nand become \"too low of a wal_level on the primary\".\n\n\"too low wal_level on the primary\" sounds wrong to my native\nEnglish-speaking ear.\n\nThere's some discussion in [1] that might be of interest to you.\n\nDavid\n\n[1] https://www.reddit.com/r/grammar/comments/qr9z6e/i_need_help_with_sothat_adj_of_a_sing_noun/?ref=share&ref_source=link\n\n\n", "msg_date": "Tue, 23 Apr 2024 23:36:52 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minor document typo" }, { "msg_contents": ">> I think \"too low a\" should be \"too low\" ('a' is not\n>> necessary). Attached is the patch.\n> \n> The existing text looks fine to me. The other form would use \"of a\"\n> and become \"too low of a wal_level on the primary\".\n> \n> \"too low wal_level on the primary\" sounds wrong to my native\n> English-speaking ear.\n> \n> There's some discussion in [1] that might be of interest to you.\n> \n> David\n> \n> [1] https://www.reddit.com/r/grammar/comments/qr9z6e/i_need_help_with_sothat_adj_of_a_sing_noun/?ref=share&ref_source=link\n\nThank you for the explanation. English is difficult :-)\n\nJust out of a curiosity, is it possible to say \"low a wal_level on the\nprimary\"? (just \"too\" is removed)\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n\n", "msg_date": "Tue, 23 Apr 2024 21:11:38 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Minor document typo" }, { "msg_contents": "On Wed, 24 Apr 2024 at 00:11, Tatsuo Ishii <[email protected]> wrote:\n> Just out of a curiosity, is it possible to say \"low a wal_level on the\n> primary\"? (just \"too\" is removed)\n\nPrefixing the adjective with \"too\" means it's beyond the acceptable\nrange. \"This coffee is too hot\".\n\nhttps://dictionary.cambridge.org/grammar/british-grammar/too\n\nDavid\n\n\n", "msg_date": "Wed, 24 Apr 2024 01:45:25 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minor document typo" } ]
[ { "msg_contents": "Hi all,\r\n\r\nI would like to report an issue with the pg_trgm extension on cross-architecture replication scenarios. When an x86_64 standby server is replicating from an aarch64 primary server or vice versa, the gist_trgm_ops opclass returns different results on the primary and standby. Masahiko previously reported a similar issue affecting the pg_bigm extension [1].\r\n\r\nTo reproduce, execute the following on the x86_64 primary server:\r\n\r\nCREATE EXTENSION pg_trgm;\r\nCREATE TABLE tbl (c text);\r\nCREATE INDEX ON tbl USING gist (c gist_trgm_ops);\r\nINSERT INTO tbl VALUES ('Bóbr');\r\n\r\nOn the x86_64 primary server:\r\n\r\npostgres=> select * from tbl where c like '%Bób%';\r\n c\r\n------\r\nBóbr\r\n(1 row)\r\n\r\nOn the aarch64 replica server:\r\n\r\npostgres=> select * from tbl where c like '%Bób%';\r\nc\r\n---\r\n(0 rows)\r\n\r\nThe root cause looks the same as the pg_bigm issue that Masahiko reported. To compare trigrams, pg_trgm uses a numerical comparison of chars [2]. On x86_64 a char is signed by default, whereas on aarch64 it is unsigned by default. gist_trgm_ops expects the trigram list to be sorted, but due to the different signedness of chars, the sort order is broken when replicating the values across architectures.\r\n\r\nThe different sort behaviour can be demonstrated using show_trgm.\r\n\r\nOn the x86_64 primary server:\r\n\r\npostgres=> SELECT show_trgm('Bóbr');\r\n show_trgm\r\n------------------------------------------\r\n{0x89194c,\" b\",\"br \",0x707c72,0x7f7849}\r\n(1 row)\r\n\r\nOn the aarch64 replica server:\r\n\r\npostgres=> SELECT show_trgm('Bóbr');\r\n show_trgm\r\n------------------------------------------\r\n{\" b\",\"br \",0x707c72,0x7f7849,0x89194c}\r\n(1 row)\r\n\r\nOne simple solution for this specific case is to declare the char signedness in the CMPPCHAR macro.\r\n\r\n--- a/contrib/pg_trgm/trgm.h\r\n+++ b/contrib/pg_trgm/trgm.h\r\n@@ -42,7 +42,7 @@\r\ntypedef char trgm[3];\r\n #define CMPCHAR(a,b) ( ((a)==(b)) ? 0 : ( ((a)<(b)) ? -1 : 1 ) )\r\n-#define CMPPCHAR(a,b,i) CMPCHAR( *(((const char*)(a))+i), *(((const char*)(b))+i) )\r\n+#define CMPPCHAR(a,b,i) CMPCHAR( *(((unsigned char*)(a))+i), *(((unsigned char*)(b))+i) )\r\n#define CMPTRGM(a,b) ( CMPPCHAR(a,b,0) ? CMPPCHAR(a,b,0) : ( CMPPCHAR(a,b,1) ? CMPPCHAR(a,b,1) : CMPPCHAR(a,b,2) ) )\r\n #define CPTRGM(a,b) do { \\\r\n\r\nAlternatively, Postgres can be compiled with -funsigned-char or -fsigned-char. I came across a code comment suggesting that this may not be a good idea in general [3].\r\n\r\nGiven that this has problem has come up before and seems likely to come up again, I'm curious what other broad solutions there might be to resolve it? Looking forward to any feedback, thanks!\r\n\r\nBest,\r\n\r\nAdam Guo\r\nAmazon Web Services: https://aws.amazon.com\r\n\r\n[1] https://osdn.net/projects/pgbigm/lists/archive/hackers/2024-February/000370.html\r\n[2] https://github.com/postgres/postgres/blob/480bc6e3ed3a5719cdec076d4943b119890e8171/contrib/pg_trgm/trgm.h#L45\r\n[3] https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/cash.c#L114-L123\r\n\n\n\n\n\n\n\n\n\nHi all,\n \nI would like to report an issue with the pg_trgm extension on cross-architecture replication scenarios. When an x86_64 standby server is replicating from an aarch64 primary server or vice versa, the gist_trgm_ops opclass returns different\r\n results on the primary and standby. 
Masahiko previously reported a similar issue affecting the pg_bigm extension [1].\n \nTo reproduce, execute the following on the x86_64 primary server:\n \nCREATE EXTENSION pg_trgm;\nCREATE TABLE tbl (c text);\nCREATE INDEX ON tbl USING gist (c gist_trgm_ops);\nINSERT INTO tbl VALUES ('Bóbr');\n \nOn the x86_64 primary server:\n \npostgres=> select * from tbl where c like '%Bób%';\n  c\n------\nBóbr\n(1 row)\n \nOn the aarch64 replica server:\n \npostgres=> select * from tbl where c like '%Bób%';\nc \n---\n(0 rows)\n \nThe root cause looks the same as the pg_bigm issue that Masahiko reported. To compare trigrams, pg_trgm uses a numerical comparison of chars [2]. On x86_64 a char is signed by default, whereas on aarch64 it is unsigned by default. gist_trgm_ops\r\n expects the trigram list to be sorted, but due to the different signedness of chars, the sort order is broken when replicating the values across architectures.\n \nThe different sort behaviour can be demonstrated using show_trgm.\n \nOn the x86_64 primary server:\n \npostgres=> SELECT show_trgm('Bóbr');\n                show_trgm                 \n------------------------------------------\n{0x89194c,\"  b\",\"br \",0x707c72,0x7f7849}\n(1 row)\n \nOn the aarch64 replica server:\n \npostgres=> SELECT show_trgm('Bóbr');\n                show_trgm                 \n------------------------------------------\n{\"  b\",\"br \",0x707c72,0x7f7849,0x89194c}\n(1 row)\n \nOne simple solution for this specific case is to declare the char signedness in the CMPPCHAR macro.\n \n--- a/contrib/pg_trgm/trgm.h\n+++ b/contrib/pg_trgm/trgm.h\n@@ -42,7 +42,7 @@\ntypedef char trgm[3];\n\n #define CMPCHAR(a,b) ( ((a)==(b)) ? 0 : ( ((a)<(b)) ? -1 : 1 ) )\n-#define CMPPCHAR(a,b,i)  CMPCHAR( *(((const char*)(a))+i), *(((const char*)(b))+i) )\n+#define CMPPCHAR(a,b,i)  CMPCHAR( *(((unsigned char*)(a))+i), *(((unsigned char*)(b))+i) )\n#define CMPTRGM(a,b) ( CMPPCHAR(a,b,0) ? CMPPCHAR(a,b,0) : ( CMPPCHAR(a,b,1) ? CMPPCHAR(a,b,1) : CMPPCHAR(a,b,2) ) )\n\n #define CPTRGM(a,b) do {                                                           \\\n \nAlternatively, Postgres can be compiled with -funsigned-char or -fsigned-char. I came across a code comment suggesting that this may not be a good idea in general [3].\n \nGiven that this has problem has come up before and seems likely to come up again, I'm curious what other broad solutions there might be to resolve it? Looking forward to any feedback, thanks!\n \nBest,\n \nAdam Guo\nAmazon Web Services: https://aws.amazon.com\n \n[1] https://osdn.net/projects/pgbigm/lists/archive/hackers/2024-February/000370.html\n[2] https://github.com/postgres/postgres/blob/480bc6e3ed3a5719cdec076d4943b119890e8171/contrib/pg_trgm/trgm.h#L45\n[3] https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/cash.c#L114-L123", "msg_date": "Tue, 23 Apr 2024 14:45:20 +0000", "msg_from": "\"Guo, Adam\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "\"Guo, Adam\" <[email protected]> writes:\n> I would like to report an issue with the pg_trgm extension on\n> cross-architecture replication scenarios. When an x86_64 standby\n> server is replicating from an aarch64 primary server or vice versa,\n> the gist_trgm_ops opclass returns different results on the primary\n> and standby.\n\nI do not think that is a supported scenario. 
Hash functions and\nsuchlike are not guaranteed to produce the same results on different\nCPU architectures. As a quick example, I get\n\nregression=# select hashfloat8(34);\n hashfloat8 \n------------\n 21570837\n(1 row)\n\non x86_64 but\n\npostgres=# select hashfloat8(34);\n hashfloat8 \n------------\n -602898821\n(1 row)\n\non ppc32 thanks to the endianness difference.\n\n> Given that this has problem has come up before and seems likely to\n> come up again, I'm curious what other broad solutions there might be\n> to resolve it?\n\nReject as not a bug. Discourage people from thinking that physical\nreplication will work across architectures.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Apr 2024 10:57:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Tue, Apr 23, 2024 at 11:57 PM Tom Lane <[email protected]> wrote:\n>\n> \"Guo, Adam\" <[email protected]> writes:\n> > I would like to report an issue with the pg_trgm extension on\n> > cross-architecture replication scenarios. When an x86_64 standby\n> > server is replicating from an aarch64 primary server or vice versa,\n> > the gist_trgm_ops opclass returns different results on the primary\n> > and standby.\n>\n> I do not think that is a supported scenario. Hash functions and\n> suchlike are not guaranteed to produce the same results on different\n> CPU architectures. As a quick example, I get\n>\n> regression=# select hashfloat8(34);\n> hashfloat8\n> ------------\n> 21570837\n> (1 row)\n>\n> on x86_64 but\n>\n> postgres=# select hashfloat8(34);\n> hashfloat8\n> ------------\n> -602898821\n> (1 row)\n>\n> on ppc32 thanks to the endianness difference.\n>\n> > Given that this has problem has come up before and seems likely to\n> > come up again, I'm curious what other broad solutions there might be\n> > to resolve it?\n>\n> Reject as not a bug. Discourage people from thinking that physical\n> replication will work across architectures.\n\nWhile cross-arch physical replication is not supported, I think having\narchitecture dependent differences is not good and It's legitimate to\nfix it. FYI the 'char' data type comparison is done as though char is\nunsigned. I've attached a small patch to fix it. What do you think?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 30 Apr 2024 12:15:20 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "Masahiko Sawada <[email protected]> writes:\n> On Tue, Apr 23, 2024 at 11:57 PM Tom Lane <[email protected]> wrote:\n>> Reject as not a bug. Discourage people from thinking that physical\n>> replication will work across architectures.\n\n> While cross-arch physical replication is not supported, I think having\n> architecture dependent differences is not good and It's legitimate to\n> fix it. FYI the 'char' data type comparison is done as though char is\n> unsigned. I've attached a small patch to fix it. 
What do you think?\n\nI think this will break existing indexes that are working fine.\nYeah, it would have been better to avoid the difference, but\nit's too late now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Apr 2024 23:37:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Tue, Apr 30, 2024 at 12:37 PM Tom Lane <[email protected]> wrote:\n>\n> Masahiko Sawada <[email protected]> writes:\n> > On Tue, Apr 23, 2024 at 11:57 PM Tom Lane <[email protected]> wrote:\n> >> Reject as not a bug. Discourage people from thinking that physical\n> >> replication will work across architectures.\n>\n> > While cross-arch physical replication is not supported, I think having\n> > architecture dependent differences is not good and It's legitimate to\n> > fix it. FYI the 'char' data type comparison is done as though char is\n> > unsigned. I've attached a small patch to fix it. What do you think?\n>\n> I think this will break existing indexes that are working fine.\n> Yeah, it would have been better to avoid the difference, but\n> it's too late now.\n\nTrue. So it will be a PG18 item.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 30 Apr 2024 16:38:12 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "Masahiko Sawada <[email protected]> writes:\n> On Tue, Apr 30, 2024 at 12:37 PM Tom Lane <[email protected]> wrote:\n>> I think this will break existing indexes that are working fine.\n>> Yeah, it would have been better to avoid the difference, but\n>> it's too late now.\n\n> True. So it will be a PG18 item.\n\nHow will it be any better in v18? It's still an on-disk\ncompatibility break for affected platforms.\n\nNow, people could recover by reindexing affected indexes,\nbut I think we need to have a better justification than this\nfor making them do so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Apr 2024 10:32:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Tue, Apr 23, 2024 at 5:57 PM Tom Lane <[email protected]> wrote:\n> \"Guo, Adam\" <[email protected]> writes:\n> > I would like to report an issue with the pg_trgm extension on\n> > cross-architecture replication scenarios. When an x86_64 standby\n> > server is replicating from an aarch64 primary server or vice versa,\n> > the gist_trgm_ops opclass returns different results on the primary\n> > and standby.\n>\n> I do not think that is a supported scenario. Hash functions and\n> suchlike are not guaranteed to produce the same results on different\n> CPU architectures. As a quick example, I get\n>\n> regression=# select hashfloat8(34);\n> hashfloat8\n> ------------\n> 21570837\n> (1 row)\n>\n> on x86_64 but\n>\n> postgres=# select hashfloat8(34);\n> hashfloat8\n> ------------\n> -602898821\n> (1 row)\n>\n> on ppc32 thanks to the endianness difference.\n\nGiven this, should we try to do better with binary compatibility\nchecks using ControlFileData? AFAICS they are supposed to check if\nthe database cluster is binary compatible with the running\narchitecture. 
But it obviously allows incompatibilities.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 30 Apr 2024 19:43:12 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "Alexander Korotkov <[email protected]> writes:\n> Given this, should we try to do better with binary compatibility\n> checks using ControlFileData? AFAICS they are supposed to check if\n> the database cluster is binary compatible with the running\n> architecture. But it obviously allows incompatibilities.\n\nPerhaps. pg_control already covers endianness, which I think\nis the root of the hashing differences I showed. Adding a field\nfor char signedness feels a little weird, since it's not directly\na property of the bits-on-disk, but maybe we should.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Apr 2024 12:54:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Tue, Apr 30, 2024 at 7:54 PM Tom Lane <[email protected]> wrote:\n> Alexander Korotkov <[email protected]> writes:\n> > Given this, should we try to do better with binary compatibility\n> > checks using ControlFileData? AFAICS they are supposed to check if\n> > the database cluster is binary compatible with the running\n> > architecture. But it obviously allows incompatibilities.\n>\n> Perhaps. pg_control already covers endianness, which I think\n> is the root of the hashing differences I showed. Adding a field\n> for char signedness feels a little weird, since it's not directly\n> a property of the bits-on-disk, but maybe we should.\n\nI agree that storing char signedness might seem weird. But it appears\nthat we already store indexes that depend on char signedness. So,\nit's effectively property of bits-on-disk even though it affects\nindirectly. Then I see two options to make the picture consistent.\n1) Assume that char signedness is somehow a property of bits-on-disk\neven though it's weird. Then pg_trgm indexes are correct, but we need\nto store char signedness in pg_control.\n2) Assume that char signedness is not a property of bits-on-disk.\nThen pg_trgm indexes are buggy and need to be fixed.\nWhat do you think?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 30 Apr 2024 20:02:04 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "Alexander Korotkov <[email protected]> writes:\n> I agree that storing char signedness might seem weird. But it appears\n> that we already store indexes that depend on char signedness. So,\n> it's effectively property of bits-on-disk even though it affects\n> indirectly. Then I see two options to make the picture consistent.\n> 1) Assume that char signedness is somehow a property of bits-on-disk\n> even though it's weird. Then pg_trgm indexes are correct, but we need\n> to store char signedness in pg_control.\n> 2) Assume that char signedness is not a property of bits-on-disk.\n> Then pg_trgm indexes are buggy and need to be fixed.\n> What do you think?\n\nThe problem with option (2) is the assumption that pg_trgm's behavior\nis the only bug of this kind, either now or in the future. 
I think\nthat's just about an impossible standard to meet, because there's no\nrealistic way to test whether char signedness is affecting things.\n(Sure, you can compare results across platforms, but maybe you\njust didn't test the right case.)\n\nAlso, the bigger picture here is the seeming assumption that \"if\nwe change pg_trgm then it will be safe to replicate from x86 to\narm\". I don't believe that that's a good idea and I'm unwilling\nto promise that it will work, regardless of what we do about\nchar signedness. That being the case, I don't want to invest a\nlot of effort in the signedness issue. Option (1) is clearly\na small change with little if any risk of future breakage.\nOption (2) ... not so much.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Apr 2024 13:29:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Wed, May 1, 2024 at 2:29 AM Tom Lane <[email protected]> wrote:\n>\n> Alexander Korotkov <[email protected]> writes:\n> > I agree that storing char signedness might seem weird. But it appears\n> > that we already store indexes that depend on char signedness. So,\n> > it's effectively property of bits-on-disk even though it affects\n> > indirectly. Then I see two options to make the picture consistent.\n> > 1) Assume that char signedness is somehow a property of bits-on-disk\n> > even though it's weird. Then pg_trgm indexes are correct, but we need\n> > to store char signedness in pg_control.\n> > 2) Assume that char signedness is not a property of bits-on-disk.\n> > Then pg_trgm indexes are buggy and need to be fixed.\n> > What do you think?\n>\n> The problem with option (2) is the assumption that pg_trgm's behavior\n> is the only bug of this kind, either now or in the future. I think\n> that's just about an impossible standard to meet, because there's no\n> realistic way to test whether char signedness is affecting things.\n> (Sure, you can compare results across platforms, but maybe you\n> just didn't test the right case.)\n>\n> Also, the bigger picture here is the seeming assumption that \"if\n> we change pg_trgm then it will be safe to replicate from x86 to\n> arm\". I don't believe that that's a good idea and I'm unwilling\n> to promise that it will work, regardless of what we do about\n> char signedness. That being the case, I don't want to invest a\n> lot of effort in the signedness issue.\n\nI think that the char signedness issue is an issue also for developers\n(and extension authors) since it could lead to confusion and potential\nbugs in the future due to that. x86 developers would think of char as\nalways being signed and write code that will misbehave on arm\nmachines. For example, since logical replication should behave\ncorrectly even in cross-arch replication all developers need to be\naware of that. 
I thought of using the -funsigned-char (or\n-fsigned-char) compiler flag to avoid that but it would have a broader\nimpact not only on indexes.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 1 May 2024 11:45:27 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On 30.04.24 19:29, Tom Lane wrote:\n>> 1) Assume that char signedness is somehow a property of bits-on-disk\n>> even though it's weird. Then pg_trgm indexes are correct, but we need\n>> to store char signedness in pg_control.\n>> 2) Assume that char signedness is not a property of bits-on-disk.\n>> Then pg_trgm indexes are buggy and need to be fixed.\n>> What do you think?\n> Also, the bigger picture here is the seeming assumption that \"if\n> we change pg_trgm then it will be safe to replicate from x86 to\n> arm\". I don't believe that that's a good idea and I'm unwilling\n> to promise that it will work, regardless of what we do about\n> char signedness. That being the case, I don't want to invest a\n> lot of effort in the signedness issue. Option (1) is clearly\n> a small change with little if any risk of future breakage.\n\nBut note that option 1 would prevent some replication that is currently \nworking.\n\n\n\n", "msg_date": "Fri, 3 May 2024 14:20:21 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 30.04.24 19:29, Tom Lane wrote:\n>> Also, the bigger picture here is the seeming assumption that \"if\n>> we change pg_trgm then it will be safe to replicate from x86 to\n>> arm\". I don't believe that that's a good idea and I'm unwilling\n>> to promise that it will work, regardless of what we do about\n>> char signedness. That being the case, I don't want to invest a\n>> lot of effort in the signedness issue. Option (1) is clearly\n>> a small change with little if any risk of future breakage.\n\n> But note that option 1 would prevent some replication that is currently \n> working.\n\nThe point of this thread though is that it's working only for small\nvalues of \"work\". People are rightfully unhappy if it seems to work\nand then later they get bitten by compatibility problems.\n\nTreating char signedness as a machine property in pg_control would\nsignal that we don't intend to make it work, and would ensure that\neven the most minimal testing would find out that it doesn't work.\n\nIf we do not do that, it seems to me we have to buy into making\nit work. That would mean dealing with the consequences of an\nincompatible change in pg_trgm indexes, and then going through\nthe same dance again the next time(s) similar problems are found.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 May 2024 10:13:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On 03.05.24 16:13, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> On 30.04.24 19:29, Tom Lane wrote:\n>>> Also, the bigger picture here is the seeming assumption that \"if\n>>> we change pg_trgm then it will be safe to replicate from x86 to\n>>> arm\". 
I don't believe that that's a good idea and I'm unwilling\n>>> to promise that it will work, regardless of what we do about\n>>> char signedness. That being the case, I don't want to invest a\n>>> lot of effort in the signedness issue. Option (1) is clearly\n>>> a small change with little if any risk of future breakage.\n> \n>> But note that option 1 would prevent some replication that is currently\n>> working.\n> \n> The point of this thread though is that it's working only for small\n> values of \"work\". People are rightfully unhappy if it seems to work\n> and then later they get bitten by compatibility problems.\n> \n> Treating char signedness as a machine property in pg_control would\n> signal that we don't intend to make it work, and would ensure that\n> even the most minimal testing would find out that it doesn't work.\n> \n> If we do not do that, it seems to me we have to buy into making\n> it work. That would mean dealing with the consequences of an\n> incompatible change in pg_trgm indexes, and then going through\n> the same dance again the next time(s) similar problems are found.\n\nYes, that is understood. But anecdotally, replicating between x86-64 \narm64 is occasionally used for upgrades or migrations. In practice, \nthis appears to have mostly worked. If we now discover that it won't \nwork with certain index extension modules, it's usable for most users. \nEven if we say, you have to reindex everything afterwards, it's probably \nstill useful for these scenarios.\n\nThe way I understand the original report, the issue has to do \nspecifically with how signed and unsigned chars compare differently. I \ndon't imagine this is used anywhere in the table/heap code. So it's \nplausible that this issue is really contained to indexes.\n\nOn the other hand, if we put in a check against this, then at least we \ncan answer any user questions about this with more certainty: No, won't \nwork, here is why.\n\n\n\n", "msg_date": "Fri, 3 May 2024 20:44:42 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On 5/3/24 11:44, Peter Eisentraut wrote:\n> On 03.05.24 16:13, Tom Lane wrote:\n>> Peter Eisentraut <[email protected]> writes:\n>>> On 30.04.24 19:29, Tom Lane wrote:\n>>>> Also, the bigger picture here is the seeming assumption that \"if\n>>>> we change pg_trgm then it will be safe to replicate from x86 to\n>>>> arm\".  I don't believe that that's a good idea and I'm unwilling\n>>>> to promise that it will work, regardless of what we do about\n>>>> char signedness.  That being the case, I don't want to invest a\n>>>> lot of effort in the signedness issue.  Option (1) is clearly\n>>>> a small change with little if any risk of future breakage.\n>>\n>>> But note that option 1 would prevent some replication that is currently\n>>> working.\n>>\n>> The point of this thread though is that it's working only for small\n>> values of \"work\".  People are rightfully unhappy if it seems to work\n>> and then later they get bitten by compatibility problems.\n>>\n>> Treating char signedness as a machine property in pg_control would\n>> signal that we don't intend to make it work, and would ensure that\n>> even the most minimal testing would find out that it doesn't work.\n>>\n>> If we do not do that, it seems to me we have to buy into making\n>> it work.  
That would mean dealing with the consequences of an\n>> incompatible change in pg_trgm indexes, and then going through\n>> the same dance again the next time(s) similar problems are found.\n> \n> Yes, that is understood.  But anecdotally, replicating between x86-64 arm64 is \n> occasionally used for upgrades or migrations.  In practice, this appears to have \n> mostly worked.  If we now discover that it won't work with certain index \n> extension modules, it's usable for most users. Even if we say, you have to \n> reindex everything afterwards, it's probably still useful for these scenarios.\n\n+1\n\nI have heard similar anecdotes, and the reported experience goes even further -- \nmany such upgrade/migration uses, with exceedingly rare reported failures.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 3 May 2024 15:36:05 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Sat, May 4, 2024 at 7:36 AM Joe Conway <[email protected]> wrote:\n>\n> On 5/3/24 11:44, Peter Eisentraut wrote:\n> > On 03.05.24 16:13, Tom Lane wrote:\n> >> Peter Eisentraut <[email protected]> writes:\n> >>> On 30.04.24 19:29, Tom Lane wrote:\n> >>>> Also, the bigger picture here is the seeming assumption that \"if\n> >>>> we change pg_trgm then it will be safe to replicate from x86 to\n> >>>> arm\". I don't believe that that's a good idea and I'm unwilling\n> >>>> to promise that it will work, regardless of what we do about\n> >>>> char signedness. That being the case, I don't want to invest a\n> >>>> lot of effort in the signedness issue. Option (1) is clearly\n> >>>> a small change with little if any risk of future breakage.\n> >>\n> >>> But note that option 1 would prevent some replication that is currently\n> >>> working.\n> >>\n> >> The point of this thread though is that it's working only for small\n> >> values of \"work\". People are rightfully unhappy if it seems to work\n> >> and then later they get bitten by compatibility problems.\n> >>\n> >> Treating char signedness as a machine property in pg_control would\n> >> signal that we don't intend to make it work, and would ensure that\n> >> even the most minimal testing would find out that it doesn't work.\n> >>\n> >> If we do not do that, it seems to me we have to buy into making\n> >> it work. That would mean dealing with the consequences of an\n> >> incompatible change in pg_trgm indexes, and then going through\n> >> the same dance again the next time(s) similar problems are found.\n> >\n> > Yes, that is understood. But anecdotally, replicating between x86-64 arm64 is\n> > occasionally used for upgrades or migrations. In practice, this appears to have\n> > mostly worked. If we now discover that it won't work with certain index\n> > extension modules, it's usable for most users. Even if we say, you have to\n> > reindex everything afterwards, it's probably still useful for these scenarios.\n>\n> +1\n\n+1\n\nHow about extending amcheck to support GIN and GIst indexes so that it\ncan detect potential data incompatibility due to changing 'char' to\n'unsigned char'? I think these new tests would be useful also for\nusers to check if they really need to reindex indexes due to such\nchanges. 
Also we fix pg_trgm so that it uses 'unsigned char' in PG18.\nUsers who upgraded to PG18 can run the new amcheck tests on the\nprimary as well as the standby.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 15 May 2024 14:56:54 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Wed, May 15, 2024 at 02:56:54PM +0900, Masahiko Sawada wrote:\n> On Sat, May 4, 2024 at 7:36 AM Joe Conway <[email protected]> wrote:\n> > On 5/3/24 11:44, Peter Eisentraut wrote:\n> > > On 03.05.24 16:13, Tom Lane wrote:\n> > >> Peter Eisentraut <[email protected]> writes:\n> > >>> On 30.04.24 19:29, Tom Lane wrote:\n> > >>>> Also, the bigger picture here is the seeming assumption that \"if\n> > >>>> we change pg_trgm then it will be safe to replicate from x86 to\n> > >>>> arm\". I don't believe that that's a good idea and I'm unwilling\n> > >>>> to promise that it will work, regardless of what we do about\n> > >>>> char signedness. That being the case, I don't want to invest a\n> > >>>> lot of effort in the signedness issue. Option (1) is clearly\n> > >>>> a small change with little if any risk of future breakage.\n> > >>\n> > >>> But note that option 1 would prevent some replication that is currently\n> > >>> working.\n> > >>\n> > >> The point of this thread though is that it's working only for small\n> > >> values of \"work\". People are rightfully unhappy if it seems to work\n> > >> and then later they get bitten by compatibility problems.\n> > >>\n> > >> Treating char signedness as a machine property in pg_control would\n> > >> signal that we don't intend to make it work, and would ensure that\n> > >> even the most minimal testing would find out that it doesn't work.\n> > >>\n> > >> If we do not do that, it seems to me we have to buy into making\n> > >> it work. That would mean dealing with the consequences of an\n> > >> incompatible change in pg_trgm indexes, and then going through\n> > >> the same dance again the next time(s) similar problems are found.\n> > >\n> > > Yes, that is understood. But anecdotally, replicating between x86-64 arm64 is\n> > > occasionally used for upgrades or migrations. In practice, this appears to have\n> > > mostly worked. If we now discover that it won't work with certain index\n> > > extension modules, it's usable for most users. Even if we say, you have to\n> > > reindex everything afterwards, it's probably still useful for these scenarios.\n\nLike you, I would not introduce a ControlFileData block for sign-of-char,\ngiven the signs of breakage in extension indexing only. That would lose much\nuseful migration capability. I'm sympathetic to Tom's point, which I'll\nattempt to summarize as: a ControlFileData block is a promise we know how to\nkeep, whereas we should expect further bug discoveries without it. At the\nsame time, I put more weight on the apparently-wide span of functionality that\nworks fine. (I wonder whether any static analyzer does or practically could\ndetect char sign dependence with a decent false positive rate.)\n\n> +1\n> \n> How about extending amcheck to support GIN and GIst indexes so that it\n> can detect potential data incompatibility due to changing 'char' to\n> 'unsigned char'? I think these new tests would be useful also for\n> users to check if they really need to reindex indexes due to such\n> changes. 
Also we fix pg_trgm so that it uses 'unsigned char' in PG18.\n> Users who upgraded to PG18 can run the new amcheck tests on the\n> primary as well as the standby.\n\nIf I were standardizing pg_trgm on one or the other notion of \"char\", I would\nchoose signed char, since I think it's still the majority. More broadly, I\nsee these options to fix pg_trgm:\n\n1. Change to signed char. Every arm64 system needs to scan pg_trgm indexes.\n2. Change to unsigned char. Every x86 system needs to scan pg_trgm indexes.\n3. Offer both, as an upgrade path. For example, pg_trgm could have separate\n operator classes gin_trgm_ops and gin_trgm_ops_unsigned. Running\n pg_upgrade on an unsigned-char system would automatically map v17\n gin_trgm_ops to v18 gin_trgm_ops_unsigned. This avoids penalizing any\n architecture with upgrade-time scans.\n\nIndependently, having amcheck for GIN and/or GiST would be great.\n\n\n", "msg_date": "Sat, 18 May 2024 14:45:46 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Sun, May 19, 2024 at 6:46 AM Noah Misch <[email protected]> wrote:\n>\n> On Wed, May 15, 2024 at 02:56:54PM +0900, Masahiko Sawada wrote:\n> >\n> > How about extending amcheck to support GIN and GIst indexes so that it\n> > can detect potential data incompatibility due to changing 'char' to\n> > 'unsigned char'? I think these new tests would be useful also for\n> > users to check if they really need to reindex indexes due to such\n> > changes. Also we fix pg_trgm so that it uses 'unsigned char' in PG18.\n> > Users who upgraded to PG18 can run the new amcheck tests on the\n> > primary as well as the standby.\n>\n> If I were standardizing pg_trgm on one or the other notion of \"char\", I would\n> choose signed char, since I think it's still the majority. More broadly, I\n> see these options to fix pg_trgm:\n>\n> 1. Change to signed char. Every arm64 system needs to scan pg_trgm indexes.\n> 2. Change to unsigned char. Every x86 system needs to scan pg_trgm indexes.\n\nEven though it's true that signed char systems are the majority, it\nwould not be acceptable to force the need to scan pg_trgm indexes on\nunsigned char systems.\n\n> 3. Offer both, as an upgrade path. For example, pg_trgm could have separate\n> operator classes gin_trgm_ops and gin_trgm_ops_unsigned. Running\n> pg_upgrade on an unsigned-char system would automatically map v17\n> gin_trgm_ops to v18 gin_trgm_ops_unsigned. This avoids penalizing any\n> architecture with upgrade-time scans.\n\nVery interesting idea. How can new v18 users use the correct operator\nclass? I don't want to require users to specify the correct signed or\nunsigned operator classes when creating a GIN index. Maybe we need to\ndynamically use the correct compare function for the same operator\nclass depending on the char signedness. But is it possible to do it on\nthe extension (e.g. 
pg_trgm) side?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 29 Aug 2024 15:48:53 -0500", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Thu, Aug 29, 2024 at 03:48:53PM -0500, Masahiko Sawada wrote:\n> On Sun, May 19, 2024 at 6:46 AM Noah Misch <[email protected]> wrote:\n> > If I were standardizing pg_trgm on one or the other notion of \"char\", I would\n> > choose signed char, since I think it's still the majority. More broadly, I\n> > see these options to fix pg_trgm:\n> >\n> > 1. Change to signed char. Every arm64 system needs to scan pg_trgm indexes.\n> > 2. Change to unsigned char. Every x86 system needs to scan pg_trgm indexes.\n> \n> Even though it's true that signed char systems are the majority, it\n> would not be acceptable to force the need to scan pg_trgm indexes on\n> unsigned char systems.\n> \n> > 3. Offer both, as an upgrade path. For example, pg_trgm could have separate\n> > operator classes gin_trgm_ops and gin_trgm_ops_unsigned. Running\n> > pg_upgrade on an unsigned-char system would automatically map v17\n> > gin_trgm_ops to v18 gin_trgm_ops_unsigned. This avoids penalizing any\n> > architecture with upgrade-time scans.\n> \n> Very interesting idea. How can new v18 users use the correct operator\n> class? I don't want to require users to specify the correct signed or\n> unsigned operator classes when creating a GIN index. Maybe we need to\n\nIn brief, it wouldn't matter which operator class new v18 indexes use. The\ndocumentation would focus on gin_trgm_ops and also say something like:\n\n There's an additional operator class, gin_trgm_ops_unsigned. It behaves\n exactly like gin_trgm_ops, but it uses a deprecated on-disk representation.\n Use gin_trgm_ops in new indexes, but there's no disadvantage from continuing\n to use gin_trgm_ops_unsigned. Before PostgreSQL 18, gin_trgm_ops used a\n platform-dependent representation. pg_upgrade automatically uses\n gin_trgm_ops_unsigned when upgrading from source data that used the\n deprecated representation.\n\nWhat concerns might users have, then? (Neither operator class would use plain\n\"char\" in a context that affects on-disk state. They'll use \"signed char\" and\n\"unsigned char\".)\n\n> dynamically use the correct compare function for the same operator\n> class depending on the char signedness. But is it possible to do it on\n> the extension (e.g. pg_trgm) side?\n\nNo, I don't think the extension can do that cleanly. It would need to store\nthe signedness in the index somehow, and GIN doesn't call the opclass at a\ntime facilitating that. That would need help from the core server.\n\n\n", "msg_date": "Fri, 30 Aug 2024 20:10:38 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Fri, Aug 30, 2024 at 8:10 PM Noah Misch <[email protected]> wrote:\n>\n> On Thu, Aug 29, 2024 at 03:48:53PM -0500, Masahiko Sawada wrote:\n> > On Sun, May 19, 2024 at 6:46 AM Noah Misch <[email protected]> wrote:\n> > > If I were standardizing pg_trgm on one or the other notion of \"char\", I would\n> > > choose signed char, since I think it's still the majority. More broadly, I\n> > > see these options to fix pg_trgm:\n> > >\n> > > 1. Change to signed char. 
Every arm64 system needs to scan pg_trgm indexes.\n> > > 2. Change to unsigned char. Every x86 system needs to scan pg_trgm indexes.\n> >\n> > Even though it's true that signed char systems are the majority, it\n> > would not be acceptable to force the need to scan pg_trgm indexes on\n> > unsigned char systems.\n> >\n> > > 3. Offer both, as an upgrade path. For example, pg_trgm could have separate\n> > > operator classes gin_trgm_ops and gin_trgm_ops_unsigned. Running\n> > > pg_upgrade on an unsigned-char system would automatically map v17\n> > > gin_trgm_ops to v18 gin_trgm_ops_unsigned. This avoids penalizing any\n> > > architecture with upgrade-time scans.\n> >\n> > Very interesting idea. How can new v18 users use the correct operator\n> > class? I don't want to require users to specify the correct signed or\n> > unsigned operator classes when creating a GIN index. Maybe we need to\n>\n> In brief, it wouldn't matter which operator class new v18 indexes use. The\n> documentation would focus on gin_trgm_ops and also say something like:\n>\n> There's an additional operator class, gin_trgm_ops_unsigned. It behaves\n> exactly like gin_trgm_ops, but it uses a deprecated on-disk representation.\n> Use gin_trgm_ops in new indexes, but there's no disadvantage from continuing\n> to use gin_trgm_ops_unsigned. Before PostgreSQL 18, gin_trgm_ops used a\n> platform-dependent representation. pg_upgrade automatically uses\n> gin_trgm_ops_unsigned when upgrading from source data that used the\n> deprecated representation.\n>\n> What concerns might users have, then? (Neither operator class would use plain\n> \"char\" in a context that affects on-disk state. They'll use \"signed char\" and\n> \"unsigned char\".)\n\nI think I understand your idea now. Since gin_trgm_ops will use\n\"signed char\", there is no impact for v18 users -- they can continue\nusing gin_trgm_ops.\n\nBut how does pg_upgrade use gin_trgm_ops_unsigned? This opclass will\nbe created by executing the pg_trgm script for v18, but it isn't\nexecuted during pg_upgrade. Another way would be to do these opclass\nreplacement when executing the pg_trgm's update script (i.e., 1.6 to\n1.7).\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 6 Sep 2024 14:07:10 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Fri, Sep 06, 2024 at 02:07:10PM -0700, Masahiko Sawada wrote:\n> On Fri, Aug 30, 2024 at 8:10 PM Noah Misch <[email protected]> wrote:\n> > On Thu, Aug 29, 2024 at 03:48:53PM -0500, Masahiko Sawada wrote:\n> > > On Sun, May 19, 2024 at 6:46 AM Noah Misch <[email protected]> wrote:\n> > > > If I were standardizing pg_trgm on one or the other notion of \"char\", I would\n> > > > choose signed char, since I think it's still the majority. More broadly, I\n> > > > see these options to fix pg_trgm:\n> > > >\n> > > > 1. Change to signed char. Every arm64 system needs to scan pg_trgm indexes.\n> > > > 2. Change to unsigned char. Every x86 system needs to scan pg_trgm indexes.\n> > >\n> > > Even though it's true that signed char systems are the majority, it\n> > > would not be acceptable to force the need to scan pg_trgm indexes on\n> > > unsigned char systems.\n> > >\n> > > > 3. Offer both, as an upgrade path. For example, pg_trgm could have separate\n> > > > operator classes gin_trgm_ops and gin_trgm_ops_unsigned. 
Running\n> > > > pg_upgrade on an unsigned-char system would automatically map v17\n> > > > gin_trgm_ops to v18 gin_trgm_ops_unsigned. This avoids penalizing any\n> > > > architecture with upgrade-time scans.\n> > >\n> > > Very interesting idea. How can new v18 users use the correct operator\n> > > class? I don't want to require users to specify the correct signed or\n> > > unsigned operator classes when creating a GIN index. Maybe we need to\n> >\n> > In brief, it wouldn't matter which operator class new v18 indexes use. The\n> > documentation would focus on gin_trgm_ops and also say something like:\n> >\n> > There's an additional operator class, gin_trgm_ops_unsigned. It behaves\n> > exactly like gin_trgm_ops, but it uses a deprecated on-disk representation.\n> > Use gin_trgm_ops in new indexes, but there's no disadvantage from continuing\n> > to use gin_trgm_ops_unsigned. Before PostgreSQL 18, gin_trgm_ops used a\n> > platform-dependent representation. pg_upgrade automatically uses\n> > gin_trgm_ops_unsigned when upgrading from source data that used the\n> > deprecated representation.\n> >\n> > What concerns might users have, then? (Neither operator class would use plain\n> > \"char\" in a context that affects on-disk state. They'll use \"signed char\" and\n> > \"unsigned char\".)\n> \n> I think I understand your idea now. Since gin_trgm_ops will use\n> \"signed char\", there is no impact for v18 users -- they can continue\n> using gin_trgm_ops.\n\nRight.\n\n> But how does pg_upgrade use gin_trgm_ops_unsigned? This opclass will\n> be created by executing the pg_trgm script for v18, but it isn't\n> executed during pg_upgrade. Another way would be to do these opclass\n> replacement when executing the pg_trgm's update script (i.e., 1.6 to\n> 1.7).\n\nYes, that's one way to make it work. If we do it that way, it would be\ncritical to make the ALTER EXTENSION UPDATE run before anything uses the\nindex. Otherwise, we'll run new v18 \"signed char\" code on a v17 \"unsigned\nchar\" data file. Running ALTER EXTENSION UPDATE early enough should be\nfeasible, so that's fine. Some other options:\n\n- If v18 \"pg_dump -b\" decides to emit CREATE INDEX ... gin_trgm_ops_unsigned,\n then make it also emit the statements to create the opclass.\n\n- Ship pg_trgm--1.6--1.7.sql in back branches. If the upgrade wants to use\n gin_trgm_ops_unsigned, require the user to ALTER EXTENSION UPDATE first.\n (In back branches, the C code behind gin_trgm_ops_unsigned could just raise\n an error if called.)\n\nWhat other options exist? I bet there are more.\n\n\n", "msg_date": "Fri, 6 Sep 2024 14:59:37 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> Yes, that's one way to make it work. If we do it that way, it would be\n> critical to make the ALTER EXTENSION UPDATE run before anything uses the\n> index. Otherwise, we'll run new v18 \"signed char\" code on a v17 \"unsigned\n> char\" data file. Running ALTER EXTENSION UPDATE early enough should be\n> feasible, so that's fine. Some other options:\n\n> - If v18 \"pg_dump -b\" decides to emit CREATE INDEX ... gin_trgm_ops_unsigned,\n> then make it also emit the statements to create the opclass.\n\n> - Ship pg_trgm--1.6--1.7.sql in back branches. 
If the upgrade wants to use\n> gin_trgm_ops_unsigned, require the user to ALTER EXTENSION UPDATE first.\n> (In back branches, the C code behind gin_trgm_ops_unsigned could just raise\n> an error if called.)\n\nI feel like all of these are leaning heavily on users to get it right,\nand therefore have a significant chance of breaking use-cases that\nwere perfectly fine before.\n\nRemind me of why we can't do something like this:\n\n* Add APIs that allow opclasses to read/write some space in the GIN\nmetapage. (We could get ambitious and add such APIs for other AMs\ntoo, but doing it just for GIN is probably a prudent start.) Ensure\nthat this space is initially zeroed.\n\n* In gin_trgm_ops, read a byte of this space and interpret it as\n\t0 = unset\n\t1 = signed chars\n\t2 = unsigned chars\nIf the value is zero, set the byte on the basis of the native\nchar-signedness of the current platform. (Obviously, a\nsecondary server couldn't write the byte, and would have to\nwait for the primary to update the value. In the meantime,\nit's no more broken than today.)\n\n* Subsequently, use either signed or unsigned comparisons\nbased on that value.\n\nThis would automatically do the right thing in all cases that\nwork today, and it'd provide the ability for cross-platform\nreplication to work in future. You can envision cases where\ntransferring a pre-existing index to a platform of the other\nstripe would misbehave, but they're the same cases that fail\ntoday, and the fix remains the same: reindex.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Sep 2024 18:36:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Fri, Sep 06, 2024 at 06:36:53PM -0400, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n> > Yes, that's one way to make it work. If we do it that way, it would be\n> > critical to make the ALTER EXTENSION UPDATE run before anything uses the\n> > index. Otherwise, we'll run new v18 \"signed char\" code on a v17 \"unsigned\n> > char\" data file. Running ALTER EXTENSION UPDATE early enough should be\n> > feasible, so that's fine. Some other options:\n> \n> > - If v18 \"pg_dump -b\" decides to emit CREATE INDEX ... gin_trgm_ops_unsigned,\n> > then make it also emit the statements to create the opclass.\n> \n> > - Ship pg_trgm--1.6--1.7.sql in back branches. If the upgrade wants to use\n> > gin_trgm_ops_unsigned, require the user to ALTER EXTENSION UPDATE first.\n> > (In back branches, the C code behind gin_trgm_ops_unsigned could just raise\n> > an error if called.)\n> \n> I feel like all of these are leaning heavily on users to get it right,\n\nWhat do you have in mind? I see things for the pg_upgrade implementation to\nget right, but I don't see things for pg_upgrade users to get right.\n\nThe last option is disruptive for users of unsigned char platforms, since it\nrequires a two-step upgrade if upgrading from v11 or earlier. The other two\ndon't have that problem.\n\n> and therefore have a significant chance of breaking use-cases that\n> were perfectly fine before.\n> \n> Remind me of why we can't do something like this:\n> \n> * Add APIs that allow opclasses to read/write some space in the GIN\n> metapage. (We could get ambitious and add such APIs for other AMs\n> too, but doing it just for GIN is probably a prudent start.) 
Ensure\n> that this space is initially zeroed.\n> \n> * In gin_trgm_ops, read a byte of this space and interpret it as\n> \t0 = unset\n> \t1 = signed chars\n> \t2 = unsigned chars\n> If the value is zero, set the byte on the basis of the native\n> char-signedness of the current platform. (Obviously, a\n> secondary server couldn't write the byte, and would have to\n> wait for the primary to update the value. In the meantime,\n> it's no more broken than today.)\n> \n> * Subsequently, use either signed or unsigned comparisons\n> based on that value.\n> \n> This would automatically do the right thing in all cases that\n> work today, and it'd provide the ability for cross-platform\n> replication to work in future. You can envision cases where\n> transferring a pre-existing index to a platform of the other\n> stripe would misbehave, but they're the same cases that fail\n> today, and the fix remains the same: reindex.\n\nNo reason you couldn't do it that way. Works for me.\n\n\n\n", "msg_date": "Fri, 6 Sep 2024 16:25:39 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> On Fri, Sep 06, 2024 at 06:36:53PM -0400, Tom Lane wrote:\n>> I feel like all of these are leaning heavily on users to get it right,\n\n> What do you have in mind? I see things for the pg_upgrade implementation to\n> get right, but I don't see things for pg_upgrade users to get right.\n\nWell, yeah, if you are willing to put pg_upgrade in charge of\nexecuting ALTER EXTENSION UPDATE, then that would be a reasonably\nomission-proof path. But we have always intended the pg_upgrade\nprocess to *not* auto-update extensions, so I'm concerned about\nthe potential side-effects of drilling a hole in that policy.\nAs an example: even if we ensure that pg_trgm 1.6 to 1.7 is totally\ntransparent except for this fix, what happens if the user's old\ndatabase is still on some pre-1.6 version? Is it okay to force an\nupdate that includes feature upgrades?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Sep 2024 19:37:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Fri, Sep 06, 2024 at 07:37:09PM -0400, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n> > On Fri, Sep 06, 2024 at 06:36:53PM -0400, Tom Lane wrote:\n> >> I feel like all of these are leaning heavily on users to get it right,\n> \n> > What do you have in mind? I see things for the pg_upgrade implementation to\n> > get right, but I don't see things for pg_upgrade users to get right.\n> \n> Well, yeah, if you are willing to put pg_upgrade in charge of\n> executing ALTER EXTENSION UPDATE, then that would be a reasonably\n> omission-proof path. But we have always intended the pg_upgrade\n> process to *not* auto-update extensions, so I'm concerned about\n> the potential side-effects of drilling a hole in that policy.\n> As an example: even if we ensure that pg_trgm 1.6 to 1.7 is totally\n> transparent except for this fix, what happens if the user's old\n> database is still on some pre-1.6 version? Is it okay to force an\n> update that includes feature upgrades?\n\nThose are disadvantages of some of the options. 
I think it could be okay to\ninject the mandatory upgrade here or just tell the user to update to 1.7\nbefore attempting the upgrade (if not at 1.7, pg_upgrade would fail with an\nerror message to that effect). Your counterproposal avoids the issue, and I'm\ngood with that design.\n\n\n", "msg_date": "Fri, 6 Sep 2024 17:19:54 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Fri, Sep 6, 2024 at 3:36 PM Tom Lane <[email protected]> wrote:\n>\n> Noah Misch <[email protected]> writes:\n> > Yes, that's one way to make it work. If we do it that way, it would be\n> > critical to make the ALTER EXTENSION UPDATE run before anything uses the\n> > index. Otherwise, we'll run new v18 \"signed char\" code on a v17 \"unsigned\n> > char\" data file. Running ALTER EXTENSION UPDATE early enough should be\n> > feasible, so that's fine. Some other options:\n>\n> > - If v18 \"pg_dump -b\" decides to emit CREATE INDEX ... gin_trgm_ops_unsigned,\n> > then make it also emit the statements to create the opclass.\n>\n> > - Ship pg_trgm--1.6--1.7.sql in back branches. If the upgrade wants to use\n> > gin_trgm_ops_unsigned, require the user to ALTER EXTENSION UPDATE first.\n> > (In back branches, the C code behind gin_trgm_ops_unsigned could just raise\n> > an error if called.)\n>\n> I feel like all of these are leaning heavily on users to get it right,\n> and therefore have a significant chance of breaking use-cases that\n> were perfectly fine before.\n>\n> Remind me of why we can't do something like this:\n>\n> * Add APIs that allow opclasses to read/write some space in the GIN\n> metapage. (We could get ambitious and add such APIs for other AMs\n> too, but doing it just for GIN is probably a prudent start.) Ensure\n> that this space is initially zeroed.\n>\n> * In gin_trgm_ops, read a byte of this space and interpret it as\n> 0 = unset\n> 1 = signed chars\n> 2 = unsigned chars\n> If the value is zero, set the byte on the basis of the native\n> char-signedness of the current platform. (Obviously, a\n> secondary server couldn't write the byte, and would have to\n> wait for the primary to update the value. In the meantime,\n> it's no more broken than today.)\n\nWhen do we set the byte on the primary server? If it's the first time\nto use the GIN index, secondary servers would have to wait for the\nprimary to use the GIN index, which could be an unpredictable time or\nit may never come depending on index usages. Probably we can make\npg_upgrade set the byte in the meta page of GIN indexes that use the\ngin_trgm_ops.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 9 Sep 2024 16:33:26 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "Masahiko Sawada <[email protected]> writes:\n> When do we set the byte on the primary server? If it's the first time\n> to use the GIN index, secondary servers would have to wait for the\n> primary to use the GIN index, which could be an unpredictable time or\n> it may never come depending on index usages. Probably we can make\n> pg_upgrade set the byte in the meta page of GIN indexes that use the\n> gin_trgm_ops.\n\nHmm, perhaps. 
That plus set-it-during-index-create would remove the\nneed for dynamic update like I suggested. So very roughly the amount\nof complexity would balance out. Do you have an idea for how we'd get\nthis to happen during pg_upgrade, exactly?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Sep 2024 19:42:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Mon, Sep 9, 2024 at 4:42 PM Tom Lane <[email protected]> wrote:\n>\n> Masahiko Sawada <[email protected]> writes:\n> > When do we set the byte on the primary server? If it's the first time\n> > to use the GIN index, secondary servers would have to wait for the\n> > primary to use the GIN index, which could be an unpredictable time or\n> > it may never come depending on index usages. Probably we can make\n> > pg_upgrade set the byte in the meta page of GIN indexes that use the\n> > gin_trgm_ops.\n>\n> Hmm, perhaps. That plus set-it-during-index-create would remove the\n> need for dynamic update like I suggested. So very roughly the amount\n> of complexity would balance out.\n\nYes, I think your set-it-during-index-create would be necessary.\n\n> Do you have an idea for how we'd get\n> this to happen during pg_upgrade, exactly?\n\nWhat I was thinking is that we have \"pg_dump --binary-upgrade\" emit a\nfunction call, say \"SELECT binary_upgrade_update_gin_meta_page()\" for\neach GIN index, and the meta pages are updated when restoring the\nschemas.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 9 Sep 2024 22:18:49 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "Masahiko Sawada <[email protected]> writes:\n> On Mon, Sep 9, 2024 at 4:42 PM Tom Lane <[email protected]> wrote:\n>> Do you have an idea for how we'd get\n>> this to happen during pg_upgrade, exactly?\n\n> What I was thinking is that we have \"pg_dump --binary-upgrade\" emit a\n> function call, say \"SELECT binary_upgrade_update_gin_meta_page()\" for\n> each GIN index, and the meta pages are updated when restoring the\n> schemas.\n\nHmm, but ...\n\n1. IIRC we don't move the relation files into the new cluster until\nafter we've run the schema dump/restore step. I think this'd have to\nbe driven in some other way than from the pg_dump output. I guess we\ncould have pg_upgrade start up the new postmaster and call a function\nin each DB, which would have to scan for GIN indexes by itself.\n\n2. How will this interact with --link mode? I don't see how it\ndoesn't involve scribbling on files that are shared with the old\ncluster, leading to possible problems if the pg_upgrade fails later\nand the user tries to go back to using the old cluster. It's not so\nmuch the metapage update that is worrisome --- we're assuming that\nthat will modify storage that's unused in old versions. But the\nchange would be unrecorded in the old cluster's WAL, which sounds\nrisky.\n\nMaybe we could get away with forcing --copy mode for affected\nindexes, but that feels a bit messy. 
We'd not want to do it\nfor unaffected indexes, so the copying code would need to know\na great deal about this problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Sep 2024 02:25:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Mon, Sep 9, 2024 at 11:25 PM Tom Lane <[email protected]> wrote:\n>\n> Masahiko Sawada <[email protected]> writes:\n> > On Mon, Sep 9, 2024 at 4:42 PM Tom Lane <[email protected]> wrote:\n> >> Do you have an idea for how we'd get\n> >> this to happen during pg_upgrade, exactly?\n>\n> > What I was thinking is that we have \"pg_dump --binary-upgrade\" emit a\n> > function call, say \"SELECT binary_upgrade_update_gin_meta_page()\" for\n> > each GIN index, and the meta pages are updated when restoring the\n> > schemas.\n>\n> Hmm, but ...\n>\n> 1. IIRC we don't move the relation files into the new cluster until\n> after we've run the schema dump/restore step. I think this'd have to\n> be driven in some other way than from the pg_dump output. I guess we\n> could have pg_upgrade start up the new postmaster and call a function\n> in each DB, which would have to scan for GIN indexes by itself.\n\nYou're right.\n\n>\n> 2. How will this interact with --link mode? I don't see how it\n> doesn't involve scribbling on files that are shared with the old\n> cluster, leading to possible problems if the pg_upgrade fails later\n> and the user tries to go back to using the old cluster. It's not so\n> much the metapage update that is worrisome --- we're assuming that\n> that will modify storage that's unused in old versions. But the\n> change would be unrecorded in the old cluster's WAL, which sounds\n> risky.\n>\n> Maybe we could get away with forcing --copy mode for affected\n> indexes, but that feels a bit messy. We'd not want to do it\n> for unaffected indexes, so the copying code would need to know\n> a great deal about this problem.\n\nGood point. I agree that it would not be a very good idea to force --copy mode.\n\nAn alternative way would be that we store the char signedness in the\ncontrol file, and gin_trgm_ops opclass reads it if the bytes in the\nmeta page shows 'unset'. The char signedness in the control file\ndoesn't mean to be used for the compatibility check for physical\nreplication but used as a hint. But it also could be a bit messy,\nthough.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Sep 2024 11:31:46 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "Masahiko Sawada <[email protected]> writes:\n> An alternative way would be that we store the char signedness in the\n> control file, and gin_trgm_ops opclass reads it if the bytes in the\n> meta page shows 'unset'. The char signedness in the control file\n> doesn't mean to be used for the compatibility check for physical\n> replication but used as a hint. But it also could be a bit messy,\n> though.\n\nYeah, that seems like it could work. 
But are we sure that replicas\nget a copy of the primary's control file rather than creating their\nown?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Sep 2024 14:57:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Tue, Sep 10, 2024 at 11:57 AM Tom Lane <[email protected]> wrote:\n>\n> Masahiko Sawada <[email protected]> writes:\n> > An alternative way would be that we store the char signedness in the\n> > control file, and gin_trgm_ops opclass reads it if the bytes in the\n> > meta page shows 'unset'. The char signedness in the control file\n> > doesn't mean to be used for the compatibility check for physical\n> > replication but used as a hint. But it also could be a bit messy,\n> > though.\n>\n> Yeah, that seems like it could work. But are we sure that replicas\n> get a copy of the primary's control file rather than creating their\n> own?\n\nYes, I think so. Since at least the system identifiers of primary and\nreplicas must be identical for physical replication, if replicas use\ntheir own control files then they cannot start the replication.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Sep 2024 14:51:51 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "Masahiko Sawada <[email protected]> writes:\n> On Tue, Sep 10, 2024 at 11:57 AM Tom Lane <[email protected]> wrote:\n>> Yeah, that seems like it could work. But are we sure that replicas\n>> get a copy of the primary's control file rather than creating their\n>> own?\n\n> Yes, I think so. Since at least the system identifiers of primary and\n> replicas must be identical for physical replication, if replicas use\n> their own control files then they cannot start the replication.\n\nGot it. So now I'm wondering if we need all the complexity of storing\nstuff in the GIN metapages. Could we simply read the (primary's)\nsignedness out of pg_control and use that? We might need some caching\nmechanism to make that cheap enough, but accessing the current index's\nmetapage is far from free either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Sep 2024 17:56:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Tue, Sep 10, 2024 at 05:56:47PM -0400, Tom Lane wrote:\n> Masahiko Sawada <[email protected]> writes:\n> > On Tue, Sep 10, 2024 at 11:57 AM Tom Lane <[email protected]> wrote:\n> >> Yeah, that seems like it could work. But are we sure that replicas\n> >> get a copy of the primary's control file rather than creating their\n> >> own?\n> \n> > Yes, I think so. Since at least the system identifiers of primary and\n> > replicas must be identical for physical replication, if replicas use\n> > their own control files then they cannot start the replication.\n> \n> Got it. So now I'm wondering if we need all the complexity of storing\n> stuff in the GIN metapages. 
Could we simply read the (primary's)\n> signedness out of pg_control and use that?\n\nYes.\n\n\n", "msg_date": "Tue, 10 Sep 2024 15:05:45 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Tue, Sep 10, 2024 at 3:05 PM Noah Misch <[email protected]> wrote:\n>\n> On Tue, Sep 10, 2024 at 05:56:47PM -0400, Tom Lane wrote:\n> > Masahiko Sawada <[email protected]> writes:\n> > > On Tue, Sep 10, 2024 at 11:57 AM Tom Lane <[email protected]> wrote:\n> > >> Yeah, that seems like it could work. But are we sure that replicas\n> > >> get a copy of the primary's control file rather than creating their\n> > >> own?\n> >\n> > > Yes, I think so. Since at least the system identifiers of primary and\n> > > replicas must be identical for physical replication, if replicas use\n> > > their own control files then they cannot start the replication.\n> >\n> > Got it. So now I'm wondering if we need all the complexity of storing\n> > stuff in the GIN metapages. Could we simply read the (primary's)\n> > signedness out of pg_control and use that?\n>\n> Yes.\n\nIndeed.\n\nI've attached a PoC patch for this idea. We write the default char\nsignedness to the control file at initdb time. Then when comparing two\ntrgms, pg_trgm opclasses use a comparison function based on the char\nsignedness of the cluster. I've confirmed that the patch fixes the\nreported case at least.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 12 Sep 2024 15:42:48 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Thu, Sep 12, 2024 at 03:42:48PM -0700, Masahiko Sawada wrote:\n> On Tue, Sep 10, 2024 at 3:05 PM Noah Misch <[email protected]> wrote:\n> > On Tue, Sep 10, 2024 at 05:56:47PM -0400, Tom Lane wrote:\n> > > Got it. So now I'm wondering if we need all the complexity of storing\n> > > stuff in the GIN metapages. Could we simply read the (primary's)\n> > > signedness out of pg_control and use that?\n\n> I've attached a PoC patch for this idea. We write the default char\n> signedness to the control file at initdb time. Then when comparing two\n> trgms, pg_trgm opclasses use a comparison function based on the char\n> signedness of the cluster. I've confirmed that the patch fixes the\n> reported case at least.\n\nI agree that proves the concept.\n\n\n", "msg_date": "Mon, 16 Sep 2024 09:24:30 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Mon, Sep 16, 2024 at 9:24 AM Noah Misch <[email protected]> wrote:\n>\n> On Thu, Sep 12, 2024 at 03:42:48PM -0700, Masahiko Sawada wrote:\n> > On Tue, Sep 10, 2024 at 3:05 PM Noah Misch <[email protected]> wrote:\n> > > On Tue, Sep 10, 2024 at 05:56:47PM -0400, Tom Lane wrote:\n> > > > Got it. So now I'm wondering if we need all the complexity of storing\n> > > > stuff in the GIN metapages. Could we simply read the (primary's)\n> > > > signedness out of pg_control and use that?\n>\n> > I've attached a PoC patch for this idea. We write the default char\n> > signedness to the control file at initdb time. 
Then when comparing two\n> > trgms, pg_trgm opclasses use a comparison function based on the char\n> > signedness of the cluster. I've confirmed that the patch fixes the\n> > reported case at least.\n>\n> I agree that proves the concept.\n\nThanks. I like the simplicity of this approach. If we agree with this\napproach, I'd like to proceed with it.\n\nRegardless of what approach we take, I wanted to provide some\nregression tests for these changes, but I could not come up with a\nreasonable idea. It would be great if we could do something like\n027_stream_regress.pl on cross-architecture replication. But just\nexecuting 027_stream_regress.pl on cross-architecture replication\ncould not be sufficient since we would like to confirm query results\nas well. If we could have a reasonable tool or way, it would also help\nfind other similar bugs related architectural differences.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 17 Sep 2024 21:43:41 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Tue, Sep 17, 2024 at 09:43:41PM -0700, Masahiko Sawada wrote:\n> On Mon, Sep 16, 2024 at 9:24 AM Noah Misch <[email protected]> wrote:\n> > On Thu, Sep 12, 2024 at 03:42:48PM -0700, Masahiko Sawada wrote:\n> > > On Tue, Sep 10, 2024 at 3:05 PM Noah Misch <[email protected]> wrote:\n> > > > On Tue, Sep 10, 2024 at 05:56:47PM -0400, Tom Lane wrote:\n> > > > > Got it. So now I'm wondering if we need all the complexity of storing\n> > > > > stuff in the GIN metapages. Could we simply read the (primary's)\n> > > > > signedness out of pg_control and use that?\n> >\n> > > I've attached a PoC patch for this idea. We write the default char\n> > > signedness to the control file at initdb time. Then when comparing two\n> > > trgms, pg_trgm opclasses use a comparison function based on the char\n> > > signedness of the cluster. I've confirmed that the patch fixes the\n> > > reported case at least.\n> >\n> > I agree that proves the concept.\n> \n> Thanks. I like the simplicity of this approach. If we agree with this\n> approach, I'd like to proceed with it.\n\nWorks for me.\n\n> Regardless of what approach we take, I wanted to provide some\n> regression tests for these changes, but I could not come up with a\n> reasonable idea. It would be great if we could do something like\n> 027_stream_regress.pl on cross-architecture replication. But just\n> executing 027_stream_regress.pl on cross-architecture replication\n> could not be sufficient since we would like to confirm query results\n> as well. 
If we could have a reasonable tool or way, it would also help\n> find other similar bugs related architectural differences.\n\nPerhaps add a regress.c function that changes the control file flag and\nflushes the change to disk?\n\n\n", "msg_date": "Wed, 18 Sep 2024 18:46:44 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Wed, Sep 18, 2024 at 6:46 PM Noah Misch <[email protected]> wrote:\n>\n> On Tue, Sep 17, 2024 at 09:43:41PM -0700, Masahiko Sawada wrote:\n> > On Mon, Sep 16, 2024 at 9:24 AM Noah Misch <[email protected]> wrote:\n> > > On Thu, Sep 12, 2024 at 03:42:48PM -0700, Masahiko Sawada wrote:\n> > > > On Tue, Sep 10, 2024 at 3:05 PM Noah Misch <[email protected]> wrote:\n> > > > > On Tue, Sep 10, 2024 at 05:56:47PM -0400, Tom Lane wrote:\n> > > > > > Got it. So now I'm wondering if we need all the complexity of storing\n> > > > > > stuff in the GIN metapages. Could we simply read the (primary's)\n> > > > > > signedness out of pg_control and use that?\n> > >\n> > > > I've attached a PoC patch for this idea. We write the default char\n> > > > signedness to the control file at initdb time. Then when comparing two\n> > > > trgms, pg_trgm opclasses use a comparison function based on the char\n> > > > signedness of the cluster. I've confirmed that the patch fixes the\n> > > > reported case at least.\n> > >\n> > > I agree that proves the concept.\n> >\n> > Thanks. I like the simplicity of this approach. If we agree with this\n> > approach, I'd like to proceed with it.\n>\n> Works for me.\n>\n> > Regardless of what approach we take, I wanted to provide some\n> > regression tests for these changes, but I could not come up with a\n> > reasonable idea. It would be great if we could do something like\n> > 027_stream_regress.pl on cross-architecture replication. But just\n> > executing 027_stream_regress.pl on cross-architecture replication\n> > could not be sufficient since we would like to confirm query results\n> > as well. If we could have a reasonable tool or way, it would also help\n> > find other similar bugs related architectural differences.\n>\n> Perhaps add a regress.c function that changes the control file flag and\n> flushes the change to disk?\n\nI've tried this idea but found out that extensions can flush the\ncontrolfile on the disk after flipping the char signedness value, they\ncannot update the controlfile data on the shared memory. And, when the\nserver shuts down, the on-disk controlfile is updated with the\non-memory controlfile data, so the char signedness is reverted.\n\nWe can add a function to set the char signedness of on-memory\ncontrolfile data or add a new option to pg_resetwal to change the char\nsignedness on-disk controlfile data but they seem to be overkill to me\nand I'm concerned about misusing these option and function.\n\nI've updated and splitted patches. 
They don't have regression tests yet.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 25 Sep 2024 15:46:27 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" }, { "msg_contents": "On Wed, Sep 25, 2024 at 03:46:27PM -0700, Masahiko Sawada wrote:\n> On Wed, Sep 18, 2024 at 6:46 PM Noah Misch <[email protected]> wrote:\n> > On Tue, Sep 17, 2024 at 09:43:41PM -0700, Masahiko Sawada wrote:\n> > > On Mon, Sep 16, 2024 at 9:24 AM Noah Misch <[email protected]> wrote:\n> > > > On Thu, Sep 12, 2024 at 03:42:48PM -0700, Masahiko Sawada wrote:\n> > > > > On Tue, Sep 10, 2024 at 3:05 PM Noah Misch <[email protected]> wrote:\n> > > > > > On Tue, Sep 10, 2024 at 05:56:47PM -0400, Tom Lane wrote:\n> > > > > > > Got it. So now I'm wondering if we need all the complexity of storing\n> > > > > > > stuff in the GIN metapages. Could we simply read the (primary's)\n> > > > > > > signedness out of pg_control and use that?\n\n> > > Thanks. I like the simplicity of this approach. If we agree with this\n> > > approach, I'd like to proceed with it.\n> >\n> > Works for me.\n> >\n> > > Regardless of what approach we take, I wanted to provide some\n> > > regression tests for these changes, but I could not come up with a\n> > > reasonable idea. It would be great if we could do something like\n> > > 027_stream_regress.pl on cross-architecture replication. But just\n> > > executing 027_stream_regress.pl on cross-architecture replication\n> > > could not be sufficient since we would like to confirm query results\n> > > as well. If we could have a reasonable tool or way, it would also help\n> > > find other similar bugs related architectural differences.\n> >\n> > Perhaps add a regress.c function that changes the control file flag and\n> > flushes the change to disk?\n> \n> I've tried this idea but found out that extensions can flush the\n> controlfile on the disk after flipping the char signedness value, they\n> cannot update the controlfile data on the shared memory. And, when the\n> server shuts down, the on-disk controlfile is updated with the\n> on-memory controlfile data, so the char signedness is reverted.\n> \n> We can add a function to set the char signedness of on-memory\n> controlfile data or add a new option to pg_resetwal to change the char\n> signedness on-disk controlfile data but they seem to be overkill to me\n> and I'm concerned about misusing these option and function.\n\nBefore the project is over, pg_upgrade will have some way to set the control\nfile signedness to the source cluster signedness. I think pg_upgrade should\nalso take an option for overriding the source cluster signedness for source\nclusters v17 and older. That way, a user knowing they copied the v17 source\ncluster from x86 to ARM can pg_upgrade properly without the prerequisite of\nacquiring an x86 VM. Once that all exists, perhaps it will be clearer how\nbest to change signedness for testing.\n\n> While this change does not encourage the use of 'char' without\n> explicitly specifying its signedness, it provides a hint to ensure\n> consistent behavior for indexes (e.g., GIN and GiST) and tables that\n> already store data sorted by the 'char' type on disk, especially in\n> cross-platform replication scenarios.\n\nLet's put that sort of information in the GetDefaultCharSignedness() header\ncomment. 
New code should use explicit \"signed char\" or \"unsigned char\" when\nit matters for data file compatibility. GetDefaultCharSignedness() exists for\ncode that deals with pre-v18 data files, where we accidentally let C\nimplementation signedness affect persistent data.\n\n> @@ -4256,6 +4257,18 @@ WriteControlFile(void)\n> \n> \tControlFile->float8ByVal = FLOAT8PASSBYVAL;\n> \n> +\t/*\n> +\t * The signedness of the char type is implementation-defined. For example\n> +\t * on x86 architecture CPUs, the char data type is typically treated as\n> +\t * signed by default whereas on aarch architecture CPUs it is typically\n> +\t * treated as unsigned by default.\n> +\t */\n> +#if CHAR_MIN != 0\n> +\tControlFile->default_char_signedness = true;\n> +#else\n> +\tControlFile->default_char_signedness = false;\n> +#endif\n\nThis has initdb set the field to the current C implementation's signedness. I\nwonder if, instead, initdb should set signedness=true unconditionally. Then\nthe only source of signedness=false will be pg_upgrade.\n\nAdvantage: signedness=false will get rarer over time. If we had known about\nthis problem during the last development cycle that forced initdb (v8.3), we\nwould have made all clusters signed or all clusters unsigned. Making\npg_upgrade the only source of signedness=false will cause the population of\ndatabase clusters to converge toward that retrospective ideal.\n\nDisadvantage: testing signedness=false will require more planning.\n\nWhat other merits should we consider as part of deciding?\n\n\n", "msg_date": "Mon, 30 Sep 2024 11:49:41 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_trgm comparison bug on cross-architecture replication due to\n different char implementation" } ]
[ { "msg_contents": "Hey,\n\nI'm developing a postgres extension as a custom Table Interface method\ndefinition.\nWIthin the extension, I\"m planning to create two background processes using\n`fork()` that will process data in the background.\n\nAre there any recommendations / guidelines around creating background\nprocesses within extensions in postgres?\n\nThanks,\nSushrut\n\nHey,I'm developing a postgres extension as a custom Table Interface method definition.WIthin the extension, I\"m planning to create two background processes using `fork()` that will process data in the background.Are there any recommendations / guidelines around creating background  processes within extensions in postgres?Thanks,Sushrut", "msg_date": "Tue, 23 Apr 2024 20:47:25 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Background Processes in Postgres Extension" }, { "msg_contents": "Sushrut Shivaswamy <[email protected]> writes:\n> I'm developing a postgres extension as a custom Table Interface method\n> definition.\n> WIthin the extension, I\"m planning to create two background processes using\n> `fork()` that will process data in the background.\n\n> Are there any recommendations / guidelines around creating background\n> processes within extensions in postgres?\n\nfork() is entirely the wrong way to do it, mainly because when the\ncreating session exits the postmaster will be unaware of those\nnow-disconnected child processes. See the mechanisms for creating\nbackground worker processes (start with bgworker.h).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Apr 2024 11:59:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background Processes in Postgres Extension" }, { "msg_contents": "Thanks for the suggestion on using postgres background worker.\n\nI tried creating one following the implementation in worker_spi and am able\nto spawn a background worker successfully.\n\nHowever, the background worker seems to cause postmaster to crash when I\nwait for it to finish using `WaitForBackgroundWorkerShutdown.\nThe function called by the background worker is empty except for logging a\nmessage to disk.\n\nAny ideas on what could be going wrong / tips for debugging?\n\nThanks,\nSushrut\n\nThanks for the suggestion on using postgres background worker.I tried creating one following the implementation in worker_spi and am able to spawn a background worker successfully.However, the background worker seems to cause postmaster to crash when I wait for it to finish using `WaitForBackgroundWorkerShutdown.The function called by the background worker is empty except for logging a message to disk.Any ideas on what could be going wrong / tips for debugging?Thanks,Sushrut", "msg_date": "Sat, 27 Apr 2024 21:25:59 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Background Processes in Postgres Extension" }, { "msg_contents": "On Sat, Apr 27, 2024 at 9:26 PM Sushrut Shivaswamy <\[email protected]> wrote:\n\n> Thanks for the suggestion on using postgres background worker.\n>\n> I tried creating one following the implementation in worker_spi and am\n> able to spawn a background worker successfully.\n>\n> However, the background worker seems to cause postmaster to crash when I\n> wait for it to finish using `WaitForBackgroundWorkerShutdown.\n> The function called by the background worker is empty except for logging a\n> message to disk.\n>\n\nWhen debugging a crash, 
backtrace from core dump is useful. if you can\nreproduce the crash, running reproduction with gdb attached to a process\nintended to crash is more useful.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Sat, Apr 27, 2024 at 9:26 PM Sushrut Shivaswamy <[email protected]> wrote:Thanks for the suggestion on using postgres background worker.I tried creating one following the implementation in worker_spi and am able to spawn a background worker successfully.However, the background worker seems to cause postmaster to crash when I wait for it to finish using `WaitForBackgroundWorkerShutdown.The function called by the background worker is empty except for logging a message to disk.When debugging a crash, backtrace from core dump is useful. if you can reproduce the crash, running reproduction with gdb attached to a process intended to crash is more useful.-- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 29 Apr 2024 09:13:54 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Background Processes in Postgres Extension" } ]
[ { "msg_contents": "With one eye on the beta-release calendar, I thought it'd be a\ngood idea to test whether our tarball build script works for\nthe new plan where we'll use \"git archive\" instead of the\ntraditional process.\n\nIt doesn't.\n\nIt makes tarballs all right, but whatever commit ID you specify\nis semi-ignored, and you get a tarball corresponding to HEAD\nof master. (The PDFs come from the right version, though!)\n\nThe reason for that is that the mk-one-release script does this\n(shorn of not-relevant-here details):\n\n\texport BASE=/home/pgsql\n\texport GIT_DIR=$BASE/postgresql.git\n\n\tmkdir pgsql\n\n\t# Export the selected git ref\n\tgit archive ${gitref} | tar xf - -C pgsql\n\n\tcd pgsql\n\t./configure\n\n\t# Produce .tar.gz and .tar.bz2 files\n\tmake dist\n\nSince 619bc23a1, what \"make dist\" does is\n\n\t$(GIT) -C $(srcdir) -c core.autocrlf=false archive --format tar.gz -9 --prefix $(distdir)/ HEAD -o $(abs_top_builddir)/$@\n\t$(GIT) -C $(srcdir) -c core.autocrlf=false -c tar.tar.bz2.command='$(BZIP2) -c' archive --format tar.bz2 --prefix $(distdir)/ HEAD -o $(abs_top_builddir)/$@\n\nSince GIT_DIR is set, git consults that repo not the current working\ndirectory, so HEAD means whatever it means in a just-fetched repo,\nand mk-one-release's efforts to select the ${gitref} commit mean\nnothing. (If git had tried to consult the current working directory,\nit would've failed for lack of any .git subdir therein.)\n\nI really really don't want to put version-specific coding into\nmk-one-release, but fortunately I think we don't have to.\nWhat I suggest is doing this in mk-one-release:\n\n-make dist\n+make dist PG_COMMIT_HASH=${gitref}\n\nand changing the \"make dist\" rules to write $(PG_COMMIT_HASH) not\nHEAD. The extra make variable will have no effect in the back\nbranches, while it should cause the right thing to happen with\nthe new implementation of \"make dist\".\n\nThis change seems like a good thing anyway for anyone who's tempted\nto use \"make dist\" manually, since they wouldn't necessarily want\nto package HEAD either. Now, if we just do it exactly like that\nthen trying to \"make dist\" without setting PG_COMMIT_HASH will\nfail, since \"git archive\" has no default for its <tree-ish>\nargument. I can't quite decide if that's a good thing, or if we\nshould hack the makefile a little further to allow PG_COMMIT_HASH\nto default to HEAD.\n\nThoughts, better ideas?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Apr 2024 18:05:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Tarball builds in the new world order" }, { "msg_contents": "On Tue, Apr 23, 2024 at 6:06 PM Tom Lane <[email protected]> wrote:\n\n> This change seems like a good thing anyway for anyone who's tempted\n> to use \"make dist\" manually, since they wouldn't necessarily want\n> to package HEAD either. Now, if we just do it exactly like that\n> then trying to \"make dist\" without setting PG_COMMIT_HASH will\n> fail, since \"git archive\" has no default for its <tree-ish>\n> argument. I can't quite decide if that's a good thing, or if we\n> should hack the makefile a little further to allow PG_COMMIT_HASH\n> to default to HEAD.\n>\n\nJust having it fail seems harsh. What if we had plain \"make dist\" at least\noutput a friendly hint about \"please specify a hash\"? 
That seems better\nthan an implicit HEAD default, as they can manually set it to HEAD\nthemselves per the hint.\n\nCheers,\nGreg\n\nOn Tue, Apr 23, 2024 at 6:06 PM Tom Lane <[email protected]> wrote:This change seems like a good thing anyway for anyone who's tempted\nto use \"make dist\" manually, since they wouldn't necessarily want\nto package HEAD either.  Now, if we just do it exactly like that\nthen trying to \"make dist\" without setting PG_COMMIT_HASH will\nfail, since \"git archive\" has no default for its <tree-ish>\nargument.  I can't quite decide if that's a good thing, or if we\nshould hack the makefile a little further to allow PG_COMMIT_HASH\nto default to HEAD.Just having it fail seems harsh. What if we had plain \"make dist\" at least output a friendly hint about \"please specify a hash\"? That seems better than an implicit HEAD default, as they can manually set it to HEAD themselves per the hint.Cheers,Greg", "msg_date": "Wed, 24 Apr 2024 10:38:31 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tarball builds in the new world order" }, { "msg_contents": "On 24.04.24 00:05, Tom Lane wrote:\n> It makes tarballs all right, but whatever commit ID you specify\n> is semi-ignored, and you get a tarball corresponding to HEAD\n> of master. (The PDFs come from the right version, though!)\n> \n> The reason for that is that the mk-one-release script does this\n> (shorn of not-relevant-here details):\n> \n> \texport BASE=/home/pgsql\n> \texport GIT_DIR=$BASE/postgresql.git\n> \n> \tmkdir pgsql\n> \n> \t# Export the selected git ref\n> \tgit archive ${gitref} | tar xf - -C pgsql\n\nWhere does ${gitref} come from? Why doesn't this line use git archive \nHEAD | ... ?\n\n> What I suggest is doing this in mk-one-release:\n> \n> -make dist\n> +make dist PG_COMMIT_HASH=${gitref}\n> \n> and changing the \"make dist\" rules to write $(PG_COMMIT_HASH) not\n> HEAD. The extra make variable will have no effect in the back\n> branches, while it should cause the right thing to happen with\n> the new implementation of \"make dist\".\n\nI suppose we could do something like that, but we'd also need to come up \nwith a meson version.\n\n(Let's not use \"hash\" though, since other ways to commit specify a \ncommit can be used.)\n\n> This change seems like a good thing anyway for anyone who's tempted\n> to use \"make dist\" manually, since they wouldn't necessarily want\n> to package HEAD either.\n\nA tin-foil-hat argument is that we might not want to encourage that, \nbecause for reproducibility, we need a known git commit and also a known \nimplementation of make dist. If in the future someone uses the make \ndist implementation of PG19 to build a tarball for PG17, it might not \ncome out the same way as using the make dist implementation of PG17.\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 16:46:46 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tarball builds in the new world order" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 24.04.24 00:05, Tom Lane wrote:\n>> # Export the selected git ref\n>> git archive ${gitref} | tar xf - -C pgsql\n\n> Where does ${gitref} come from? Why doesn't this line use git archive \n> HEAD | ... ?\n\n${gitref} is an argument to the script, specifying the commit\nto be packaged. 
HEAD would certainly not work when trying to\npackage a back-branch release, and in general it would hardly\never be what you want if your goal is to make a reproducible\npackage.\n\n>> What I suggest is doing this in mk-one-release:\n>> -make dist\n>> +make dist PG_COMMIT_HASH=${gitref}\n\n> I suppose we could do something like that, but we'd also need to come up \n> with a meson version.\n\nPackaging via meson is years away yet IMO, so I'm unconcerned\nabout that for now. See below.\n\n> (Let's not use \"hash\" though, since other ways to commit specify a \n> commit can be used.)\n\nOK, do you have a different term in mind?\n\n>> This change seems like a good thing anyway for anyone who's tempted\n>> to use \"make dist\" manually, since they wouldn't necessarily want\n>> to package HEAD either.\n\n> A tin-foil-hat argument is that we might not want to encourage that, \n> because for reproducibility, we need a known git commit and also a known \n> implementation of make dist. If in the future someone uses the make \n> dist implementation of PG19 to build a tarball for PG17, it might not \n> come out the same way as using the make dist implementation of PG17.\n\nOf course. The entire reason why this script invokes \"make dist\",\nrather than implementing the behavior for itself, is so that\nbranch-specific behaviors can be accounted for in the branches\nnot here. (To be clear, the script has no idea which branch\nit's packaging --- that's implicit in the commit ID.)\n\nBecause of that, I really don't want to rely on some equivalent\nmeson infrastructure until it's available in all the live branches.\nv15 will be EOL in 3.5 years, and that's more or less the time frame\nthat we've spoken of for dropping the makefile infrastructure, so\nI don't think that's an unreasonable plan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Apr 2024 11:03:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tarball builds in the new world order" }, { "msg_contents": "Greg Sabino Mullane <[email protected]> writes:\n> On Tue, Apr 23, 2024 at 6:06 PM Tom Lane <[email protected]> wrote:\n>> Now, if we just do it exactly like that\n>> then trying to \"make dist\" without setting PG_COMMIT_HASH will\n>> fail, since \"git archive\" has no default for its <tree-ish>\n>> argument. I can't quite decide if that's a good thing, or if we\n>> should hack the makefile a little further to allow PG_COMMIT_HASH\n>> to default to HEAD.\n\n> Just having it fail seems harsh. What if we had plain \"make dist\" at least\n> output a friendly hint about \"please specify a hash\"? That seems better\n> than an implicit HEAD default, as they can manually set it to HEAD\n> themselves per the hint.\n\nYeah, it would be easy to do something like\n\nifneq ($(PG_COMMIT_HASH),)\n\t$(GIT) ...\nelse\n\t@echo \"Please specify PG_COMMIT_HASH.\" && exit 1\nendif\n\nI'm just debating whether that's better than inserting a default\nvalue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Apr 2024 11:21:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tarball builds in the new world order" }, { "msg_contents": "Concretely, I'm proposing the attached. 
Peter didn't like\nPG_COMMIT_HASH, so I have PG_COMMIT_REFSPEC below, but I'm not\nwedded to that if a better name is proposed.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 26 Apr 2024 15:24:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tarball builds in the new world order" }, { "msg_contents": "On 2024-Apr-26, Tom Lane wrote:\n\n> --- mk-one-release.orig\t2024-04-23 17:30:08.983226671 -0400\n> +++ mk-one-release\t2024-04-26 15:17:29.713669677 -0400\n> @@ -39,13 +39,17 @@ mkdir pgsql\n> git archive ${gitref} | tar xf - -C pgsql\n> \n> # Include the git ref in the output tarballs\n> +# (This has no effect with v17 and up; instead we rely on \"git archive\"\n> +# to include the commit hash in the tar header)\n> echo ${gitref} >pgsql/.gitrevision\n\nWhy is it that the .gitrevision file is only created here, instead of\nbeing added to the tarball that \"git archive\" produces? Adding an\nargument like\n\t--add-virtual-file $(distdir)/.gitrevision:$(GIT_REFSPEC)\n\nto the git archive call should suffice.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I can't go to a restaurant and order food because I keep looking at the\nfonts on the menu. Five minutes later I realize that it's also talking\nabout food\" (Donald Knuth)\n\n\n", "msg_date": "Sun, 28 Apr 2024 21:16:48 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tarball builds in the new world order" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Why is it that the .gitrevision file is only created here, instead of\n> being added to the tarball that \"git archive\" produces? Adding an\n> argument like\n> \t--add-virtual-file $(distdir)/.gitrevision:$(GIT_REFSPEC)\n> to the git archive call should suffice.\n\nI think we don't want to do that. In the first place, it's redundant\nbecause \"git archive\" includes the commit hash in the tar header,\nand in the second place it gets away from the concept that the tarball\ncontains exactly what is in our git tree.\n\nNow admittedly, if anyone's built tooling that relies on the presence\nof the .gitrevision file, they might prefer that we keep on including\nit. But I'm not sure anyone has, and in any case I think switching\nto the git-approved way of incorporating the hash is the best thing\nin the long run.\n\nWhat I'm thinking of doing, as soon as we've sorted the tarball\ncreation process, is to make a test tarball available to the\npackagers group so that anyone interested can start working on\nupdating their packaging process for the new approach. Hopefully,\nif anyone's especially unhappy about omitting .gitrevision, they'll\nspeak up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Apr 2024 15:44:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tarball builds in the new world order" }, { "msg_contents": "On 26.04.24 21:24, Tom Lane wrote:\n> Concretely, I'm proposing the attached. Peter didn't like\n> PG_COMMIT_HASH, so I have PG_COMMIT_REFSPEC below, but I'm not\n> wedded to that if a better name is proposed.\n\nThis seems ok to me, but note that we do have an equivalent \nimplementation in meson. 
If we don't want to update that in a similar \nway, maybe we should disable it.\n\n\n\n", "msg_date": "Mon, 29 Apr 2024 14:36:29 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tarball builds in the new world order" }, { "msg_contents": "On 26.04.24 21:24, Tom Lane wrote:\n> Concretely, I'm proposing the attached. Peter didn't like\n> PG_COMMIT_HASH, so I have PG_COMMIT_REFSPEC below, but I'm not\n> wedded to that if a better name is proposed.\n\nUm, \"refspec\" leads me here \n<https://git-scm.com/book/en/v2/Git-Internals-The-Refspec>, which seems \nlike the wrong concept. I think the more correct concept is \"revision\" \n(https://git-scm.com/docs/gitrevisions), so something like PG_GIT_REVISION?\n\n\n\n", "msg_date": "Mon, 29 Apr 2024 14:39:57 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tarball builds in the new world order" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 26.04.24 21:24, Tom Lane wrote:\n>> Concretely, I'm proposing the attached. Peter didn't like\n>> PG_COMMIT_HASH, so I have PG_COMMIT_REFSPEC below, but I'm not\n>> wedded to that if a better name is proposed.\n\n> This seems ok to me, but note that we do have an equivalent \n> implementation in meson. If we don't want to update that in a similar \n> way, maybe we should disable it.\n\nOK. After poking at that for awhile, it seemed like \"default to\nHEAD\" fits into meson a lot better than \"throw an error if the\nvariable isn't set\", so I switched to doing it like that.\nOne reason is that AFAICT you can only set the variable during\n\"meson setup\" not during \"ninja\". This won't matter to the\ntarball build script, which does a one-off configuration run\nanyway. But for manual use, a movable target like HEAD might be\nmore convenient given that behavior.\n\nI tested this by building tarballs using the makefiles on a RHEL8\nbox, and using meson on my MacBook (with recent MacPorts tools).\nI got bit-for-bit identical files, which I found rather impressive\ngiven the gap between the platforms. Maybe this \"reproducible builds\"\nwheeze will actually work.\n\nI also changed the variable name to PG_GIT_REVISION per your\nother suggestion.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 29 Apr 2024 12:14:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tarball builds in the new world order" }, { "msg_contents": "On 29.04.24 18:14, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> On 26.04.24 21:24, Tom Lane wrote:\n>>> Concretely, I'm proposing the attached. Peter didn't like\n>>> PG_COMMIT_HASH, so I have PG_COMMIT_REFSPEC below, but I'm not\n>>> wedded to that if a better name is proposed.\n> \n>> This seems ok to me, but note that we do have an equivalent\n>> implementation in meson. If we don't want to update that in a similar\n>> way, maybe we should disable it.\n> \n> OK. After poking at that for awhile, it seemed like \"default to\n> HEAD\" fits into meson a lot better than \"throw an error if the\n> variable isn't set\", so I switched to doing it like that.\n> One reason is that AFAICT you can only set the variable during\n> \"meson setup\" not during \"ninja\". This won't matter to the\n> tarball build script, which does a one-off configuration run\n> anyway. 
But for manual use, a movable target like HEAD might be\n> more convenient given that behavior.\n\nThis patch looks good to me.\n\n\n\n", "msg_date": "Fri, 3 May 2024 15:57:35 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tarball builds in the new world order" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 29.04.24 18:14, Tom Lane wrote:\n>> OK. After poking at that for awhile, it seemed like \"default to\n>> HEAD\" fits into meson a lot better than \"throw an error if the\n>> variable isn't set\", so I switched to doing it like that.\n\n> This patch looks good to me.\n\nPushed, thanks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 May 2024 11:09:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tarball builds in the new world order" } ]
[ { "msg_contents": "Hi,\n\nCurrently, ALTER DEFAULT PRIVILEGE doesn't support large objects,\nso if we want to allow users other than the owner to use the large\nobject, we need to grant a privilege on it every time a large object\nis created. One of our clients feels that this is annoying, so I would\nlike propose to extend ALTER DEFAULT PRIVILEGE to large objects. \n\nHere are the new actions allowed in abbreviated_grant_or_revoke;\n\n+GRANT { { SELECT | UPDATE }\n+ [, ...] | ALL [ PRIVILEGES ] }\n+ ON LARGE OBJECTS\n+ TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ]\n\n+REVOKE [ GRANT OPTION FOR ]\n+ { { SELECT | UPDATE }\n+ [, ...] | ALL [ PRIVILEGES ] }\n+ ON LARGE OBJECTS\n+ FROM { [ GROUP ] role_name | PUBLIC } [, ...]\n+ [ CASCADE | RESTRICT ]\n\nA new keyword OBJECTS is introduced for using plural form in the syntax\nas other supported objects. A schema name is not allowed to be specified\nfor large objects since any large objects don't belong to a schema.\n\nThe attached patch is originally proposed by Haruka Takatsuka\nand some fixes and tests are made by me. \n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Wed, 24 Apr 2024 11:52:42 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Extend ALTER DEFAULT PRIVILEGES for large objects" }, { "msg_contents": "Yugo NAGATA <[email protected]> writes:\n> Currently, ALTER DEFAULT PRIVILEGE doesn't support large objects,\n> so if we want to allow users other than the owner to use the large\n> object, we need to grant a privilege on it every time a large object\n> is created. One of our clients feels that this is annoying, so I would\n> like propose to extend ALTER DEFAULT PRIVILEGE to large objects. \n\nI wonder how this plays with pg_dump, and in particular whether it\nbreaks the optimizations that a45c78e32 installed for large numbers\nof large objects. The added test cases seem to go out of their way\nto leave no trace behind that the pg_dump/pg_upgrade tests might\nencounter.\n\nI think you broke psql's \\ddp, too. And some other places; grepping\nfor DEFACLOBJ_NAMESPACE finds other oversights.\n\nOn the whole I find this proposed feature pretty unexciting\nand dubiously worthy of the implementation/maintenance effort.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Apr 2024 23:47:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extend ALTER DEFAULT PRIVILEGES for large objects" }, { "msg_contents": "On Tue, 23 Apr 2024 23:47:38 -0400\nTom Lane <[email protected]> wrote:\n\n> Yugo NAGATA <[email protected]> writes:\n> > Currently, ALTER DEFAULT PRIVILEGE doesn't support large objects,\n> > so if we want to allow users other than the owner to use the large\n> > object, we need to grant a privilege on it every time a large object\n> > is created. One of our clients feels that this is annoying, so I would\n> > like propose to extend ALTER DEFAULT PRIVILEGE to large objects. \n> \n> I wonder how this plays with pg_dump, and in particular whether it\n> breaks the optimizations that a45c78e32 installed for large numbers\n> of large objects. 
The added test cases seem to go out of their way\n> to leave no trace behind that the pg_dump/pg_upgrade tests might\n> encounter.\n\nThank you for your comments.\n\nThe previous patch did not work with pg_dump since I forgot some fixes.\nI attached a updated patch including fixes.\n\nI believe a45c78e32 is about already-existing large objects and does\nnot directly related to default privileges, so will not be affected\nby this patch.\n\n> I think you broke psql's \\ddp, too. And some other places; grepping\n> for DEFACLOBJ_NAMESPACE finds other oversights.\n\nYes, I did. The attached patch include fixes for psql, too.\n \n> On the whole I find this proposed feature pretty unexciting\n> and dubiously worthy of the implementation/maintenance effort.\n\nI believe this feature is beneficial to some users allows because\nthis enables to omit GRANT that was necessary every large object\ncreation. It seems to me that implementation/maintenance cost is not\nso high compared to other objects (e.g. default privileges on schemas)\nunless I am still missing something wrong. \n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Wed, 24 Apr 2024 15:32:33 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extend ALTER DEFAULT PRIVILEGES for large objects" }, { "msg_contents": "On Tue, Apr 23, 2024 at 11:47:38PM -0400, Tom Lane wrote:\n> On the whole I find this proposed feature pretty unexciting\n> and dubiously worthy of the implementation/maintenance effort.\n\nI don't have any particularly strong feelings on $SUBJECT, but I'll admit\nI'd be much more interested in resolving any remaining reasons folks are\nusing large objects over TOAST. I see a couple of reasons listed in the\ndocs [0] that might be worth examining.\n\n[0] https://www.postgresql.org/docs/devel/lo-intro.html\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 24 Apr 2024 16:08:39 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extend ALTER DEFAULT PRIVILEGES for large objects" }, { "msg_contents": "On Wed, 24 Apr 2024 16:08:39 -0500\nNathan Bossart <[email protected]> wrote:\n\n> On Tue, Apr 23, 2024 at 11:47:38PM -0400, Tom Lane wrote:\n> > On the whole I find this proposed feature pretty unexciting\n> > and dubiously worthy of the implementation/maintenance effort.\n> \n> I don't have any particularly strong feelings on $SUBJECT, but I'll admit\n> I'd be much more interested in resolving any remaining reasons folks are\n> using large objects over TOAST. I see a couple of reasons listed in the\n> docs [0] that might be worth examining.\n> \n> [0] https://www.postgresql.org/docs/devel/lo-intro.html\n\nIf we could replace large objects with BYTEA in any use cases, large objects\nwould be completely obsolete. However, currently some users use large objects\nin fact, so improvement in this feature seems beneficial for them. \n\n\nApart from that, extending TOAST to support more than 1GB data and\nstream-style access seems a good challenge. I don't know if there was a\nproposal for this in past. 
This is just a thought, for this purpose, we\nwill need a new type of varlena that can contains large size information,\nand a new toast table schema that can store offset information or some way\nto convert a offset to chunk_seq.\n\nRegards,\nYugo Nagata\n\n> -- \n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Fri, 26 Apr 2024 17:54:06 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extend ALTER DEFAULT PRIVILEGES for large objects" }, { "msg_contents": "On Fri, 26 Apr 2024 at 10:54, Yugo NAGATA <[email protected]> wrote:\n>\n> On Wed, 24 Apr 2024 16:08:39 -0500\n> Nathan Bossart <[email protected]> wrote:\n>\n> > On Tue, Apr 23, 2024 at 11:47:38PM -0400, Tom Lane wrote:\n> > > On the whole I find this proposed feature pretty unexciting\n> > > and dubiously worthy of the implementation/maintenance effort.\n> >\n> > I don't have any particularly strong feelings on $SUBJECT, but I'll admit\n> > I'd be much more interested in resolving any remaining reasons folks are\n> > using large objects over TOAST. I see a couple of reasons listed in the\n> > docs [0] that might be worth examining.\n> >\n> > [0] https://www.postgresql.org/docs/devel/lo-intro.html\n>\n> If we could replace large objects with BYTEA in any use cases, large objects\n> would be completely obsolete. However, currently some users use large objects\n> in fact, so improvement in this feature seems beneficial for them.\n>\n>\n> Apart from that, extending TOAST to support more than 1GB data and\n> stream-style access seems a good challenge. I don't know if there was a\n> proposal for this in past. This is just a thought, for this purpose, we\n> will need a new type of varlena that can contains large size information,\n> and a new toast table schema that can store offset information or some way\n> to convert a offset to chunk_seq.\n\nIf you're interested in this, you may want to check out [0] and [1] as\nthreads on the topic of improving TOAST handling of large values ([1]\nbeing a thread where the limitations of our current external TOAST\npointer became clear once more), and maybe talk with Aleksander\nAlekseev and Nikita Malakhov. They've been working closely with\nsystems that involve toast pointers and their limitations.\n\nThe most recent update on the work of Nikita (reworking TOAST\nhandling) [2] is that he got started adapting their externally\npluggable toast into type-internal methods only, though I've not yet\nnoticed any updated patches appear on the list.\n\nAs for other issues with creating larger TOAST values:\nTOAST has a value limit of ~1GB, which means a single large value (or\ntwo, for that matter) won't break anything in the wire protocol, as\nDataRow messages have a message size field of uint32 [^3]. However, if\nwe're going to allow even larger values to be stored in table's\nattributes, we'll have to figure out how we're going to transfer those\nlarger values to (and from) clients. 
For large objects, this is much\nless of an issue because the IO operations are already chunked by\ndesign, but this may not work well for types that you want to use in\nyour table's columns.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://postgr.es/m/flat/CAN-LCVMq2X%3Dfhx7KLxfeDyb3P%2BBXuCkHC0g%3D9GF%2BJD4izfVa0Q%40mail.gmail.com\n[1] https://postgr.es/m/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22%3D5xVBg7S4vr5rQ%40mail.gmail.com\n[2] https://postgr.es/m/CAN-LCVOgMrda9hOdzGkCMdwY6dH0JQa13QvPsqUwY57TEn6jww%40mail.gmail.com\n\n[^3] Most, if not all PostgreSQL wire protocol messages have this\nuint32 message size field, but the DataRow one is relevant here as\nit's the one way users get their data out of the database.\n\n\n", "msg_date": "Fri, 26 Apr 2024 12:23:45 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extend ALTER DEFAULT PRIVILEGES for large objects" }, { "msg_contents": "On Fri, 26 Apr 2024 12:23:45 +0200\nMatthias van de Meent <[email protected]> wrote:\n\n> On Fri, 26 Apr 2024 at 10:54, Yugo NAGATA <[email protected]> wrote:\n> >\n> > On Wed, 24 Apr 2024 16:08:39 -0500\n> > Nathan Bossart <[email protected]> wrote:\n> >\n> > > On Tue, Apr 23, 2024 at 11:47:38PM -0400, Tom Lane wrote:\n> > > > On the whole I find this proposed feature pretty unexciting\n> > > > and dubiously worthy of the implementation/maintenance effort.\n> > >\n> > > I don't have any particularly strong feelings on $SUBJECT, but I'll admit\n> > > I'd be much more interested in resolving any remaining reasons folks are\n> > > using large objects over TOAST. I see a couple of reasons listed in the\n> > > docs [0] that might be worth examining.\n> > >\n> > > [0] https://www.postgresql.org/docs/devel/lo-intro.html\n> >\n> > If we could replace large objects with BYTEA in any use cases, large objects\n> > would be completely obsolete. However, currently some users use large objects\n> > in fact, so improvement in this feature seems beneficial for them.\n> >\n> >\n> > Apart from that, extending TOAST to support more than 1GB data and\n> > stream-style access seems a good challenge. I don't know if there was a\n> > proposal for this in past. This is just a thought, for this purpose, we\n> > will need a new type of varlena that can contains large size information,\n> > and a new toast table schema that can store offset information or some way\n> > to convert a offset to chunk_seq.\n> \n> If you're interested in this, you may want to check out [0] and [1] as\n> threads on the topic of improving TOAST handling of large values ([1]\n> being a thread where the limitations of our current external TOAST\n> pointer became clear once more), and maybe talk with Aleksander\n> Alekseev and Nikita Malakhov. They've been working closely with\n> systems that involve toast pointers and their limitations.\n> \n> The most recent update on the work of Nikita (reworking TOAST\n> handling) [2] is that he got started adapting their externally\n> pluggable toast into type-internal methods only, though I've not yet\n> noticed any updated patches appear on the list.\n\nThank you for your information. I'll check the threads you mentioned.\n\n> As for other issues with creating larger TOAST values:\n> TOAST has a value limit of ~1GB, which means a single large value (or\n> two, for that matter) won't break anything in the wire protocol, as\n> DataRow messages have a message size field of uint32 [^3]. 
However, if\n> we're going to allow even larger values to be stored in table's\n> attributes, we'll have to figure out how we're going to transfer those\n> larger values to (and from) clients. For large objects, this is much\n> less of an issue because the IO operations are already chunked by\n> design, but this may not work well for types that you want to use in\n> your table's columns.\n\nI overlooked this issue. I faced the similar issue when I tried to\npg_dump large text values, although the error was raised from\nenlargeStringInfo() in that case....\n\nRegards,\nYugo Nagata\n\n\n> Kind regards,\n> \n> Matthias van de Meent\n> \n> [0] https://postgr.es/m/flat/CAN-LCVMq2X%3Dfhx7KLxfeDyb3P%2BBXuCkHC0g%3D9GF%2BJD4izfVa0Q%40mail.gmail.com\n> [1] https://postgr.es/m/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22%3D5xVBg7S4vr5rQ%40mail.gmail.com\n> [2] https://postgr.es/m/CAN-LCVOgMrda9hOdzGkCMdwY6dH0JQa13QvPsqUwY57TEn6jww%40mail.gmail.com\n> \n> [^3] Most, if not all PostgreSQL wire protocol messages have this\n> uint32 message size field, but the DataRow one is relevant here as\n> it's the one way users get their data out of the database.\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Thu, 2 May 2024 18:00:27 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extend ALTER DEFAULT PRIVILEGES for large objects" }, { "msg_contents": "On Fri, 26 Apr 2024 17:54:06 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> On Wed, 24 Apr 2024 16:08:39 -0500\n> Nathan Bossart <[email protected]> wrote:\n> \n> > On Tue, Apr 23, 2024 at 11:47:38PM -0400, Tom Lane wrote:\n> > > On the whole I find this proposed feature pretty unexciting\n> > > and dubiously worthy of the implementation/maintenance effort.\n> > \n> > I don't have any particularly strong feelings on $SUBJECT, but I'll admit\n> > I'd be much more interested in resolving any remaining reasons folks are\n> > using large objects over TOAST. I see a couple of reasons listed in the\n> > docs [0] that might be worth examining.\n> > \n> > [0] https://www.postgresql.org/docs/devel/lo-intro.html\n> \n> If we could replace large objects with BYTEA in any use cases, large objects\n> would be completely obsolete. However, currently some users use large objects\n> in fact, so improvement in this feature seems beneficial for them. \n\nI've attached a updated patch. The test is rewritten using has_largeobject_privilege()\nfunction instead of calling loread & lowrite, which makes the test a bit simpler.\nThare are no other changes.\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <[email protected]>", "msg_date": "Fri, 13 Sep 2024 16:18:01 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extend ALTER DEFAULT PRIVILEGES for large objects" } ]
[ { "msg_contents": "Hello,\n\nA diff tool that would generate create, drop, alter, etc. commands from the\ndifferences between 2 specified schemes would be very useful. The diff\ncould even manage data, so there would be insert, delete update command\noutputs, although I think the schema diff management is much more important\nand necessary.\n\nToday, all modern applications are version-tracked, including the sql\nscheme. Now the schema changes must be handled twice: on the one hand, the\nschema must be modified, and on the other hand, the schema modification\ncommands must also be written for the upgrade process. A good diff tool\nwould allow only the schema to be modified.\n\nSuch a tool already exists because the community needed it, e.g. apgdiff. I\nthink the problem with this is that the concept isn't even good. I think\nthis tool should be part of postgresql, because postgresql always knows\nwhat the 100% sql syntax is current, an external program, for example\napgdiff can only follow changes afterwards, generating continuous problems.\nNot to mention that an external application can stop, e.g. apgdiff is also\nno longer actively developed, so users who built on a diff tool are now in\ntrouble.\n\nFurthermore, it is the least amount of work to do this on the postgresql\ndevelopment side, you have the expertise, the sql language processor, etc.\n\nWhat is your opinion on this?\n\nRegards,\n Neszt Tibor\n\nHello,A diff tool that would generate create, drop, alter, etc. commands from the differences between 2 specified schemes would be very useful. The diff could even manage data, so there would be insert, delete update command outputs, although I think the schema diff management is much more important and necessary.Today, all modern applications are version-tracked, including the sql scheme. Now the schema changes must be handled twice: on the one hand, the schema must be modified, and on the other hand, the schema modification commands must also be written for the upgrade process. A good diff tool would allow only the schema to be modified.Such a tool already exists because the community needed it, e.g. apgdiff. I think the problem with this is that the concept isn't even good. I think this tool should be part of postgresql, because postgresql always knows what the 100% sql syntax is current, an external program, for example apgdiff can only follow changes afterwards, generating continuous problems. Not to mention that an external application can stop, e.g. apgdiff is also no longer actively developed, so users who built on a diff tool are now in trouble.Furthermore, it is the least amount of work to do this on the postgresql development side, you have the expertise, the sql language processor, etc.What is your opinion on this?Regards, Neszt Tibor", "msg_date": "Wed, 24 Apr 2024 08:44:35 +0200", "msg_from": "Neszt Tibor <[email protected]>", "msg_from_op": true, "msg_subject": "Feature request: schema diff tool" }, { "msg_contents": "On 24/04/2024 09:44, Neszt Tibor wrote:\n> Hello,\n> \n> A diff tool that would generate create, drop, alter, etc. commands from \n> the differences between 2 specified schemes would be very useful. The \n> diff could even manage data, so there would be insert, delete update \n> command outputs, although I think the schema diff management is much \n> more important and necessary.\n> \n> Today, all modern applications are version-tracked, including the sql \n> scheme. 
Now the schema changes must be handled twice: on the one hand, \n> the schema must be modified, and on the other hand, the schema \n> modification commands must also be written for the upgrade process. A \n> good diff tool would allow only the schema to be modified.\n> \n> Such a tool already exists because the community needed it, e.g. \n> apgdiff. I think the problem with this is that the concept isn't even \n> good. I think this tool should be part of postgresql, because postgresql \n> always knows what the 100% sql syntax is current, an external program, \n> for example apgdiff can only follow changes afterwards, generating \n> continuous problems.\n\nDoes a schema diff tool need / want to actually parse the SQL syntax?\n\nOn the other hand, an external tool can be developed independently of \nthe PostgreSQL release schedule. And you'd want the same tool to work \nwith different PostgreSQL versions. Those are reasons to *not* bundle it \nwith PostgreSQL itself. PostgreSQL has a rich ecosystem of tools with \ndifferent approaches and tradeoffs, and that's a good thing.\n\nOn the whole, -1 from me. I could be convinced otherwise if there's a \ntechnical reason it would need to be part of PostgreSQL itself, but \notherwise it's better to not bundle it.\n\n> Not to mention that an external application can \n> stop, e.g. apgdiff is also no longer actively developed, so users who \n> built on a diff tool are now in trouble.\n> \n> Furthermore, it is the least amount of work to do this on the postgresql \n> development side, you have the expertise, the sql language processor, etc.\n\nThat's just asking for Someone Else to do the work. There are many other \nschema diff tools out there. And you can get pretty far by just running \n'pg_dump --schema-only' on both databases and comparing them with 'diff'.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 10:51:07 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Feature request: schema diff tool" } ]
[ { "msg_contents": "Hi,\n\nHi,\n\nWe can specify more than one privilege type in \n\"ALTER DEFAULT PRIVILEGES GRANT/REVOKE ON SCHEMAS\",\nfor example,\n\n ALTER DEFAULT PRIVILEGES GRANT USAGE,CREATE ON SCHEMAS TO PUBLIC;\n\nHowever, the syntax described in the documentation looks to\nbe allowing only one,\n\n GRANT { USAGE | CREATE | ALL [ PRIVILEGES ] }\n ON SCHEMAS\n TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ]\n\nwhile the syntaxes for tables and sequences are described correctly.\n\ne.g.\n GRANT { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER }\n [, ...] | ALL [ PRIVILEGES ] }\n ON TABLES\n TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ]\n\nI attached a small patch to fix the description.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Wed, 24 Apr 2024 15:50:52 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Small filx on the documentation of ALTER DEFAULT PRIVILEGES" }, { "msg_contents": "Yugo NAGATA <[email protected]> writes:\n> We can specify more than one privilege type in \n> \"ALTER DEFAULT PRIVILEGES GRANT/REVOKE ON SCHEMAS\",\n> for example,\n\n> ALTER DEFAULT PRIVILEGES GRANT USAGE,CREATE ON SCHEMAS TO PUBLIC;\n\n> However, the syntax described in the documentation looks to\n> be allowing only one,\n\n> GRANT { USAGE | CREATE | ALL [ PRIVILEGES ] }\n> ON SCHEMAS\n> TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ]\n\n> while the syntaxes for tables and sequences are described correctly.\n\nYup, you're right. I'll push this shortly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Apr 2024 09:57:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Small filx on the documentation of ALTER DEFAULT PRIVILEGES" } ]
[ { "msg_contents": "hi.\n\nin make_pathkey_from_sortinfo\n\nequality_op = get_opfamily_member(opfamily,\n opcintype,\n opcintype,\n BTEqualStrategyNumber);\nif (!OidIsValid(equality_op)) /* shouldn't happen */\nelog(ERROR, \"missing operator %d(%u,%u) in opfamily %u\",\nBTEqualStrategyNumber, opcintype, opcintype, opfamily);\n\nthe error message seems not right?\n\nmaybe\nif (!OidIsValid(equality_op)) /* shouldn't happen */\nelog(ERROR, \"missing operator =(%u,%u) in opfamily %u\",opcintype,\nopcintype, opfamily);\n\nor\n\nif (!OidIsValid(equality_op)) /* shouldn't happen */\nelog(ERROR, \"missing equality operator for type %u in opfamily\n%u\",opcintype, opcintype, opfamily);\n\n\n", "msg_date": "Wed, 24 Apr 2024 15:05:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "minor error message inconsistency in make_pathkey_from_sortinfo" }, { "msg_contents": "On Wed, 24 Apr 2024 15:05:00 +0800\njian he <[email protected]> wrote:\n\n> hi.\n> \n> in make_pathkey_from_sortinfo\n> \n> equality_op = get_opfamily_member(opfamily,\n> opcintype,\n> opcintype,\n> BTEqualStrategyNumber);\n> if (!OidIsValid(equality_op)) /* shouldn't happen */\n> elog(ERROR, \"missing operator %d(%u,%u) in opfamily %u\",\n> BTEqualStrategyNumber, opcintype, opcintype, opfamily);\n> \n> the error message seems not right?\n\nThis message was introduced by 278cb434110 which was aiming to\nstandardize the wording for similar errors. We can find the pattern\n\n \"missing {support function | operator} %d(%u,%u) in opfamily %u\"\n\nin several places.\n\nRegards,\nYugo Nagata\n\n> \n> maybe\n> if (!OidIsValid(equality_op)) /* shouldn't happen */\n> elog(ERROR, \"missing operator =(%u,%u) in opfamily %u\",opcintype,\n> opcintype, opfamily);\n> \n> or\n> \n> if (!OidIsValid(equality_op)) /* shouldn't happen */\n> elog(ERROR, \"missing equality operator for type %u in opfamily\n> %u\",opcintype, opcintype, opfamily);\n> \n> \n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Wed, 24 Apr 2024 18:47:36 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: minor error message inconsistency in make_pathkey_from_sortinfo" }, { "msg_contents": "On Wed, Apr 24, 2024 at 5:47 PM Yugo NAGATA <[email protected]> wrote:\n>\n> On Wed, 24 Apr 2024 15:05:00 +0800\n> jian he <[email protected]> wrote:\n>\n> > hi.\n> >\n> > in make_pathkey_from_sortinfo\n> >\n> > equality_op = get_opfamily_member(opfamily,\n> > opcintype,\n> > opcintype,\n> > BTEqualStrategyNumber);\n> > if (!OidIsValid(equality_op)) /* shouldn't happen */\n> > elog(ERROR, \"missing operator %d(%u,%u) in opfamily %u\",\n> > BTEqualStrategyNumber, opcintype, opcintype, opfamily);\n> >\n> > the error message seems not right?\n>\n> This message was introduced by 278cb434110 which was aiming to\n> standardize the wording for similar errors. 
We can find the pattern\n>\n> \"missing {support function | operator} %d(%u,%u) in opfamily %u\"\n>\n> in several places.\n>\n\nthe error message\n` operator %d`\nwould translate to\n` operator 3`\n\nbut there is oid as 3 operator in the catalog.\nthat's my confusion.\nthe discussion at [1] didn't explain my confusion.\n\n\n[1] https://postgr.es/m/CAGPqQf2R9Nk8htpv0FFi+FP776EwMyGuORpc9zYkZKC8sFQE3g@mail.gmail.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 22:52:14 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: minor error message inconsistency in make_pathkey_from_sortinfo" }, { "msg_contents": "jian he <[email protected]> writes:\n> On Wed, Apr 24, 2024 at 5:47 PM Yugo NAGATA <[email protected]> wrote:\n>> This message was introduced by 278cb434110 which was aiming to\n>> standardize the wording for similar errors. We can find the pattern\n>> \"missing {support function | operator} %d(%u,%u) in opfamily %u\"\n>> in several places.\n\n> the error message\n> ` operator %d`\n> would translate to\n> ` operator 3`\n\n> but there is oid as 3 operator in the catalog.\n> that's my confusion.\n\nThat number is the opclass' operator strategy number, not an OID\n(which is why it's formatted as %d not %u). See\n\nhttps://www.postgresql.org/docs/devel/xindex.html#XINDEX-STRATEGIES\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Apr 2024 11:07:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: minor error message inconsistency in make_pathkey_from_sortinfo" } ]
[ { "msg_contents": "Hi all,\n\nAs a recent poke on a thread of 2019 has reminded me, the current\nsituation of partitioned tables with unlogged is kind of weird:\nhttps://www.postgresql.org/message-id/15954-b61523bed4b110c4%40postgresql.org\n\nTo sum up the situation:\n- ALTER TABLE .. SET LOGGED/UNLOGGED does not affect partitioned\ntables.\n- New partitions don't inherit the loggedness of the partitioned\ntable.\n\nOne of the things that could be done is to forbid the use of UNLOGGED\nin partitioned tables, but I'm wondering if we could be smarter than\nthat to provide a more natural user experience. I've been\nexperiencing with that and finished with the POC attached, that uses\nthe following properties:\n- Support ALTER TABLE .. SET LOGGED/UNLOGGED for partitioned tables,\nwhere the command only works on partitioned tables so that's only a\ncatalog switch.\n- CREATE TABLE PARTITION OF would make a new partition inherit the\nlogged ness of the parent if UNLOGGED is not directly specified.\n\nThere are some open questions that need attention:\n- How about ONLY? Would it make sense to support it so as ALTER TABLE\nONLY on a partitioned table does not touch any of its partitions?\nWould not specifying ONLY mean that the loggedness of the partitioned\ntable and all its partitions is changed? That would mean a burst in\nWAL when switching to LOGGED, which was a concern when this feature\nwas first implemented even for a single table, so for potentially\nhundreds of them, that would really hurt if a DBA is not careful.\n- CREATE TABLE does not have a LOGGED keyword, hence it is not\npossible through the parser to force a partition to have a permanent\npersistence even if its partitioned table uses UNLOGGED.\n\nLike tablespaces or even recently access methods, the heuristics can\nbe fun to define depending on the impact of the table rewrites. The\nattached has the code and regression tests to make the rewrites\ncheaper, which is to make new partitions inherit the loggedness of the\nparent while altering a parent's property leaves the partitions\nuntouched. With the lack of a LOGGED keyword to force a partition to\nbe permanent when its partitioned table is unlogged, that's a bit\nfunny but feel free to comment.\n\nThanks,\n--\nMichael", "msg_date": "Wed, 24 Apr 2024 16:17:44 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Partitioned tables and [un]loggedness" }, { "msg_contents": "On Wed, Apr 24, 2024 at 04:17:44PM +0900, Michael Paquier wrote:\n> - Support ALTER TABLE .. SET LOGGED/UNLOGGED for partitioned tables,\n> where the command only works on partitioned tables so that's only a\n> catalog switch.\n\nI'm not following what this means. Does SET [UN]LOGGED on a partitioned\ntable recurse to its partitions? Does this mean that you cannot changed\nwhether a single partition is [UN]LOGGED? How does this work with\nsub-partitioning?\n\n> - CREATE TABLE PARTITION OF would make a new partition inherit the\n> logged ness of the parent if UNLOGGED is not directly specified.\n\nThis one seems generally reasonable to me, provided it's properly noted in\nthe docs.\n\n> - How about ONLY? Would it make sense to support it so as ALTER TABLE\n> ONLY on a partitioned table does not touch any of its partitions?\n> Would not specifying ONLY mean that the loggedness of the partitioned\n> table and all its partitions is changed? 
That would mean a burst in\n> WAL when switching to LOGGED, which was a concern when this feature\n> was first implemented even for a single table, so for potentially\n> hundreds of them, that would really hurt if a DBA is not careful.\n\nI guess ONLY could be a way of changing the default for new partitions\nwithout changing whether existing ones were logged. I'm not tremendously\nconcerned about the burst-of-WAL problem. Or, at least, I'm not any more\nconcerned about it for partitioned tables than I am for regular tables. I\ndo wonder if we can improve the performance of setting tables LOGGED, but\nthat's for a separate thread...\n\n> - CREATE TABLE does not have a LOGGED keyword, hence it is not\n> possible through the parser to force a partition to have a permanent\n> persistence even if its partitioned table uses UNLOGGED.\n\nCould we add LOGGED for CREATE TABLE?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 24 Apr 2024 15:26:40 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Wed, Apr 24, 2024 at 1:26 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Wed, Apr 24, 2024 at 04:17:44PM +0900, Michael Paquier wrote:\n> > - Support ALTER TABLE .. SET LOGGED/UNLOGGED for partitioned tables,\n> > where the command only works on partitioned tables so that's only a\n> > catalog switch.\n>\n> I'm not following what this means. Does SET [UN]LOGGED on a partitioned\n> table recurse to its partitions? Does this mean that you cannot changed\n> whether a single partition is [UN]LOGGED? How does this work with\n> sub-partitioning?\n>\n> > - CREATE TABLE PARTITION OF would make a new partition inherit the\n> > logged ness of the parent if UNLOGGED is not directly specified.\n>\n> This one seems generally reasonable to me, provided it's properly noted in\n> the docs.\n>\n\nI don't see this being desirable at this point. And especially not by\nitself. It is an error to not specify TEMP on the partition create table\ncommand when the parent is temporary, and that one is a no-brainer for\nhaving the persistence mode of the child be derived from the parent. The\ncurrent policy of requiring the persistence of the child to be explicitly\nspecified seems perfectly reasonable. Or, put differently, the specific\ncurrent persistence of the parent doesn't get copied or even considered\nwhen creating children.\n\nIn any case we aren't going to be able to do exactly what it means by\nmarking a partitioned table unlogged - namely that we execute the truncate\ncommand on it after a crash. Forcing the implementation here just so that\nour existing decision to ignore unlogged on the parent table is, IMO,\nletting optics corrupt good design.\n\nI do agree with having an in-core way for the DBA to communicate that they\nwish for partitions to be created with a known persistence unless the\ncreate table command specifies something different. The way I would\napproach this is to add something like the following to the syntax/system:\n\nCREATE [ persistence_mode ] TABLE ...\n...\nWITH (partition_default_persistence = { logged, unlogged, temporary }) --\nstorage_parameter, logged is effectively the default\n\nThis method is more explicit and has zero implications for existing backups\nor upgrading.\n\n\n> > - How about ONLY? 
Would it make sense to support it so as ALTER TABLE\n> > ONLY on a partitioned table does not touch any of its partitions?\n>\n\nI'd leave it to the community to develop and maintain scripts that iterate\nover the partition hierarchy and toggle the persistence mode between logged\nand unlogged on the individual partitions using whatever throttling and\nbatching strategy each individual user requires for their situation.\n\n\n> > - CREATE TABLE does not have a LOGGED keyword, hence it is not\n> > possible through the parser to force a partition to have a permanent\n> > persistence even if its partitioned table uses UNLOGGED.\n>\n> Could we add LOGGED for CREATE TABLE?\n>\n>\nI do agree with adding LOGGED to the list of optional persistence_mode\nspecifiers, possibly regardless of whether any of this happens. Seems\ndesirable to give our default mode a name.\n\nDavid J.\n\nOn Wed, Apr 24, 2024 at 1:26 PM Nathan Bossart <[email protected]> wrote:On Wed, Apr 24, 2024 at 04:17:44PM +0900, Michael Paquier wrote:\n> - Support ALTER TABLE .. SET LOGGED/UNLOGGED for partitioned tables,\n> where the command only works on partitioned tables so that's only a\n> catalog switch.\n\nI'm not following what this means.  Does SET [UN]LOGGED on a partitioned\ntable recurse to its partitions?  Does this mean that you cannot changed\nwhether a single partition is [UN]LOGGED?  How does this work with\nsub-partitioning?\n\n> - CREATE TABLE PARTITION OF would make a new partition inherit the\n> logged ness of the parent if UNLOGGED is not directly specified.\n\nThis one seems generally reasonable to me, provided it's properly noted in\nthe docs.I don't see this being desirable at this point.  And especially not by itself.  It is an error to not specify TEMP on the partition create table command when the parent is temporary, and that one is a no-brainer for having the persistence mode of the child be derived from the parent.  The current policy of requiring the persistence of the child to be explicitly specified seems perfectly reasonable.  Or, put differently, the specific current persistence of the parent doesn't get copied or even considered when creating children.In any case we aren't going to be able to do exactly what it means by marking a partitioned table unlogged - namely that we execute the truncate command on it after a crash.  Forcing the implementation here just so that our existing decision to ignore unlogged on the parent table is, IMO, letting optics corrupt good design.I do agree with having an in-core way for the DBA to communicate that they wish for partitions to be created with a known persistence unless the create table command specifies something different.  The way I would approach this is to add something like the following to the syntax/system:CREATE [ persistence_mode ] TABLE ......WITH (partition_default_persistence = { logged, unlogged, temporary }) -- storage_parameter, logged is effectively the defaultThis method is more explicit and has zero implications for existing backups or upgrading.\n> - How about ONLY?  
Would it make sense to support it so as ALTER TABLE\n> ONLY on a partitioned table does not touch any of its partitions?I'd leave it to the community to develop and maintain scripts that iterate over the partition hierarchy and toggle the persistence mode between logged and unlogged on the individual partitions using whatever throttling and batching strategy each individual user requires for their situation.\n\n> - CREATE TABLE does not have a LOGGED keyword, hence it is not\n> possible through the parser to force a partition to have a permanent\n> persistence even if its partitioned table uses UNLOGGED.\n\nCould we add LOGGED for CREATE TABLE? I do agree with adding LOGGED to the list of optional persistence_mode specifiers, possibly regardless of whether any of this happens.  Seems desirable to give our default mode a name.David J.", "msg_date": "Wed, 24 Apr 2024 15:36:35 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Wed, Apr 24, 2024 at 03:26:40PM -0500, Nathan Bossart wrote:\n> On Wed, Apr 24, 2024 at 04:17:44PM +0900, Michael Paquier wrote:\n> > - Support ALTER TABLE .. SET LOGGED/UNLOGGED for partitioned tables,\n> > where the command only works on partitioned tables so that's only a\n> > catalog switch.\n> \n> I'm not following what this means. Does SET [UN]LOGGED on a partitioned\n> table recurse to its partitions? Does this mean that you cannot changed\n> whether a single partition is [UN]LOGGED? How does this work with\n> sub-partitioning?\n\nThe POC implements ALTER TABLE .. SET LOGGED/UNLOGGED so as the change\nonly reflects on the relation defined on the query. In short, if\nspecifying a partitioned table as relation, only its relpersistence is\nchanged in the patch. If sub-partitioning is involved, the POC\nbehaves the same way, relpersistence for partitioned table A1 attached\nto partitioned table A does not change under ALTER TABLE A.\n\nRecursing to all the partitions and partitioned tables attached to a\npartitioned table A when using an ALTER TABLE A, when ONLY is not\nspecified, is OK by me if that's the consensus folks prefer. I just\nwanted to gather some opinions first about what people find the most\nnatural.\n\n>> - CREATE TABLE PARTITION OF would make a new partition inherit the\n>> logged ness of the parent if UNLOGGED is not directly specified.\n> \n> This one seems generally reasonable to me, provided it's properly noted in\n> the docs.\n\nNoted. That's a second piece. This part felt natural to me, but it\nwould not fly far without a LOGGED keyword and a way to attach a new\n\"undefined\" RELPERSISTENCE_ in gram.y's OptTemp, likely a '\\0'.\nThat's going to require some tweaks for CTAS as these cannot be\npartitioned, but that should be a few lines to fall back to a\npermanent if the query is parsed with the new \"undefined\".\n\n>> - How about ONLY? Would it make sense to support it so as ALTER TABLE\n>> ONLY on a partitioned table does not touch any of its partitions?\n>> Would not specifying ONLY mean that the loggedness of the partitioned\n>> table and all its partitions is changed? 
That would mean a burst in\n>> WAL when switching to LOGGED, which was a concern when this feature\n>> was first implemented even for a single table, so for potentially\n>> hundreds of them, that would really hurt if a DBA is not careful.\n> \n> I guess ONLY could be a way of changing the default for new partitions\n> without changing whether existing ones were logged. I'm not tremendously\n> concerned about the burst-of-WAL problem. Or, at least, I'm not any more\n> concerned about it for partitioned tables than I am for regular tables. I\n> do wonder if we can improve the performance of setting tables LOGGED, but\n> that's for a separate thread...\n\nYeah. A burst of WAL caused by a switch to LOGGED for a few thousand\npartitions would never be fun, perhaps I'm just worrying to much as\nthat should not be a regular operation.\n\n>> - CREATE TABLE does not have a LOGGED keyword, hence it is not\n>> possible through the parser to force a partition to have a permanent\n>> persistence even if its partitioned table uses UNLOGGED.\n> \n> Could we add LOGGED for CREATE TABLE?\n\nI don't see why not if people agree with it, that's a patch of its own\nthat would help greatly in making the [un]logged attribute be\ninherited for new partitions, because it is them possible to force a\npersistence to be permanent as much as unlogged at query level, easing\nthe manipulation of partition trees.\n--\nMichael", "msg_date": "Thu, 25 Apr 2024 08:09:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Wed, Apr 24, 2024 at 03:36:35PM -0700, David G. Johnston wrote:\n> On Wed, Apr 24, 2024 at 1:26 PM Nathan Bossart <[email protected]>\n> wrote:\n>> On Wed, Apr 24, 2024 at 04:17:44PM +0900, Michael Paquier wrote:\n>>> - CREATE TABLE PARTITION OF would make a new partition inherit the\n>>> logged ness of the parent if UNLOGGED is not directly specified.\n>>\n>> This one seems generally reasonable to me, provided it's properly noted in\n>> the docs.\n> \n> I don't see this being desirable at this point. And especially not by\n> itself. It is an error to not specify TEMP on the partition create table\n> command when the parent is temporary, and that one is a no-brainer for\n> having the persistence mode of the child be derived from the parent. The\n> current policy of requiring the persistence of the child to be explicitly\n> specified seems perfectly reasonable. Or, put differently, the specific\n> current persistence of the parent doesn't get copied or even considered\n> when creating children.\n\nI disagree here, actually. Temporary tables are a different beast\nbecause they require automated cleanup which would include interacting\nwith the partitionining information if temp and non-temp relations are\nmixed. That's why the current restrictions are in place: you should\nbe able to mix them. Mixing unlogged and logged is OK, they retain a\nstate in their catalogs.\n\n> In any case we aren't going to be able to do exactly what it means by\n> marking a partitioned table unlogged - namely that we execute the truncate\n> command on it after a crash. Forcing the implementation here just so that\n> our existing decision to ignore unlogged on the parent table is, IMO,\n> letting optics corrupt good design.\n\nIt depends on retention policies, for one. 
I could imagine an\napplication where partitioning is used based on a key where we\nclassify records based on their persistency, and one does not care\nabout a portion of them, still would like some retention as long as\nthe application is healthy.\n\n> I do agree with having an in-core way for the DBA to communicate that they\n> wish for partitions to be created with a known persistence unless the\n> create table command specifies something different. The way I would\n> approach this is to add something like the following to the syntax/system:\n> \n> CREATE [ persistence_mode ] TABLE ...\n> ...\n> WITH (partition_default_persistence = { logged, unlogged, temporary }) --\n> storage_parameter, logged is effectively the default\n\nWhile we have keywords to drive that at query level for TEMP and\nUNLOGGED? Not sure to be on board with this idea. pg_dump may need\nsome changes to reflect the loggedness across the partitions, now that\nI think about it even if there should be an ATTACH once the table is\ncreated to link it to its partitioned table. There should be no\nrewrites at restore.\n\n> I do agree with adding LOGGED to the list of optional persistence_mode\n> specifiers, possibly regardless of whether any of this happens. Seems\n> desirable to give our default mode a name.\n\nYeah, at least it looks like Nathan and you are OK with this addition.\n--\nMichael", "msg_date": "Thu, 25 Apr 2024 08:35:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Wed, Apr 24, 2024 at 4:35 PM Michael Paquier <[email protected]> wrote:\n\n>\n> I disagree here, actually. Temporary tables are a different beast\n> because they require automated cleanup which would include interacting\n> with the partitionining information if temp and non-temp relations are\n> mixed. That's why the current restrictions are in place: you should\n>\n[ not ] ?\n\n> be able to mix them.\n>\n\nMy point is that if you feel that treating logged as a copy-able property\nis OK then doing the following should also just work:\n\npostgres=# create temp table parentt ( id integer ) partition by range (id);\nCREATE TABLE\npostgres=# create table child10t partition of parentt for values from (0)\nto (9);\nERROR: cannot create a permanent relation as partition of temporary\nrelation \"parentt\"\n\ni.e., child10t should be created as a temporary partition under parentt.\n\nDavid J.\n\nOn Wed, Apr 24, 2024 at 4:35 PM Michael Paquier <[email protected]> wrote:\nI disagree here, actually.  Temporary tables are a different beast\nbecause they require automated cleanup which would include interacting\nwith the partitionining information if temp and non-temp relations are\nmixed.  That's why the current restrictions are in place: you should[ not ] ? \nbe able to mix them.My point is that if you feel that treating logged as a copy-able property is OK then doing the following should also just work:postgres=# create temp table parentt ( id integer ) partition by range (id);CREATE TABLEpostgres=# create table child10t partition of parentt for values from (0) to (9);ERROR:  cannot create a permanent relation as partition of temporary relation \"parentt\"i.e., child10t should be created as a temporary partition under parentt.David J.", "msg_date": "Wed, 24 Apr 2024 16:43:58 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Thu, Apr 25, 2024 at 08:35:32AM +0900, Michael Paquier wrote:\n> That's why the current restrictions are in place: you should\n> be able to mix them.\n\nCorrection due to a ENOCOFFEE: you should *not* be able to mix them.\n--\nMichael", "msg_date": "Thu, 25 Apr 2024 08:44:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Wed, Apr 24, 2024 at 04:43:58PM -0700, David G. Johnston wrote:\n> My point is that if you feel that treating logged as a copy-able property\n> is OK then doing the following should also just work:\n> \n> postgres=# create temp table parentt ( id integer ) partition by range (id);\n> CREATE TABLE\n> postgres=# create table child10t partition of parentt for values from (0)\n> to (9);\n> ERROR: cannot create a permanent relation as partition of temporary\n> relation \"parentt\"\n> \n> i.e., child10t should be created as a temporary partition under parentt.\n\nAh, indeed, I've missed your point here. Lifting the error and\ninheriting temporary in this case would make sense.\n--\nMichael", "msg_date": "Thu, 25 Apr 2024 08:55:27 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Thu, Apr 25, 2024 at 08:55:27AM +0900, Michael Paquier wrote:\n> On Wed, Apr 24, 2024 at 04:43:58PM -0700, David G. Johnston wrote:\n>> My point is that if you feel that treating logged as a copy-able property\n>> is OK then doing the following should also just work:\n>> \n>> postgres=# create temp table parentt ( id integer ) partition by range (id);\n>> CREATE TABLE\n>> postgres=# create table child10t partition of parentt for values from (0)\n>> to (9);\n>> ERROR: cannot create a permanent relation as partition of temporary\n>> relation \"parentt\"\n>> \n>> i.e., child10t should be created as a temporary partition under parentt.\n> \n> Ah, indeed, I've missed your point here. Lifting the error and\n> inheriting temporary in this case would make sense.\n\nThe case of a temporary persistence is actually *very* tricky. The\nnamespace, where the relation is created, is guessed and locked with\npermission checks done based on the RangeVar when the CreateStmt is\ntransformed, which is before we try to look at its inheritance tree to\nfind its partitioned parent. So we would somewhat need to shortcut\nthe existing RangeVar lookup and include the parents in the loop to\nfind out the correct namespace. And this is much earlier than now.\nThe code complexity is not trivial, so I am getting cold feet when\ntrying to support this case in a robust fashion. For now, I have\ndiscarded this case and focused on the main problem with SET LOGGED\nand UNLOGGED.\n\nSwitching between logged <-> unlogged does not have such\ncomplications, because the namespace where the relation is created is\ngoing to be the same. 
So we won't lock or perform permission checks\non an incorrect namespace.\n\nThe addition of LOGGED makes the logic deciding how the loggedness of\na partition table based on its partitioned table or the query quite\neasy to follow, but this needs some safety nets in the sequence, view\nand CTAS code paths to handle with the case where the query specifies\nno relpersistence.\n\nI have also looked at support for ONLY, and I've been surprised that\nit is not that complicated. tablecmds.c has a ATSimpleRecursion()\nthat is smart enough to do an inheritance tree lookup and apply the\nrewrites where they should happen in the step 3 of ALTER TABLE, while\nhandling ONLY on its own. The relpersistence of partitioned tables is\nupdated in step 2, with the catalog changes.\n\nAttached is a new patch series:\n- 0001 refactors some code around ATPrepChangePersistence() that I\nfound confusing after applying the operation to partitioned tables.\n- 0002 adds support for a LOGGED keyword.\n- 0003 expands ALTER TABLE SET [UN]LOGGED to partitioned tables,\nwithout recursion to partitions.\n- 0004 adds the recursion logic, expanding regression tests to show\nthe difference.\n\n0003 and 0004 should be merged together, I think. Still, splitting\nthem makes reviews a bit easier.\n--\nMichael", "msg_date": "Thu, 2 May 2024 15:06:51 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Thu, May 02, 2024 at 03:06:51PM +0900, Michael Paquier wrote:\n> The case of a temporary persistence is actually *very* tricky. The\n> namespace, where the relation is created, is guessed and locked with\n> permission checks done based on the RangeVar when the CreateStmt is\n> transformed, which is before we try to look at its inheritance tree to\n> find its partitioned parent. So we would somewhat need to shortcut\n> the existing RangeVar lookup and include the parents in the loop to\n> find out the correct namespace. And this is much earlier than now.\n> The code complexity is not trivial, so I am getting cold feet when\n> trying to support this case in a robust fashion. For now, I have\n> discarded this case and focused on the main problem with SET LOGGED\n> and UNLOGGED.\n> \n> Switching between logged <-> unlogged does not have such\n> complications, because the namespace where the relation is created is\n> going to be the same. So we won't lock or perform permission checks\n> on an incorrect namespace.\n\nI've been thinking about this thread some more, and I'm finding myself -0.5\nfor adding relpersistence inheritance for UNLOGGED. There are a few\nreasons:\n\n* Existing partitioned tables may be marked UNLOGGED, and after upgrade,\n new partitions would be UNLOGGED unless the user discovers that they need\n to begin specifying LOGGED or change the persistence of the partitioned\n table. I've seen many problems with UNLOGGED over the years, so I am\n wary about anything that might increase the probability of someone using\n it accidentally.\n\n* I don't think partitions inheriting persistence is necessarily intuitive.\n IIUC there's nothing stopping you from having a mix of LOGGED and\n UNLOGGED partitions, so it's not clear to me why we should assume that\n users want them to be the same by default. IMHO UNLOGGED is dangerous\n enough that we really want users to unambiguously indicate that's what\n they want.\n\n* Inheriting certain persistences (e.g., UNLOGGED) and not others (e.g.,\n TEMPORARY) seems confusing. 
Furthermore, if a partitioned table is\n marked TEMPORARY, its partitions must also be marked TEMPORARY. There is\n no such restriction when a partitioned table is marked UNLOGGED.\n\nMy current thinking is that it would be better to disallow marking\npartitioned tables as LOGGED/UNLOGGED and continue to have users explicitly\nspecify what they want for each partition. It'd still probably be good to\nexpand the documentation, but a clear ERROR when trying to set a\npartitioned table as UNLOGGED would hopefully clue folks in.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 27 Aug 2024 16:01:58 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Tue, Aug 27, 2024 at 04:01:58PM -0500, Nathan Bossart wrote:\n> I've been thinking about this thread some more, and I'm finding myself -0.5\n> for adding relpersistence inheritance for UNLOGGED. There are a few\n> reasons:\n> \n> * Existing partitioned tables may be marked UNLOGGED, and after upgrade,\n> new partitions would be UNLOGGED unless the user discovers that they need\n> to begin specifying LOGGED or change the persistence of the partitioned\n> table. I've seen many problems with UNLOGGED over the years, so I am\n> wary about anything that might increase the probability of someone using\n> it accidentally.\n> \n> * I don't think partitions inheriting persistence is necessarily intuitive.\n> IIUC there's nothing stopping you from having a mix of LOGGED and\n> UNLOGGED partitions, so it's not clear to me why we should assume that\n> users want them to be the same by default. IMHO UNLOGGED is dangerous\n> enough that we really want users to unambiguously indicate that's what\n> they want.\n\nOkay. Thanks for sharing an opinion.\n\n> * Inheriting certain persistences (e.g., UNLOGGED) and not others (e.g.,\n> TEMPORARY) seems confusing. Furthermore, if a partitioned table is\n> marked TEMPORARY, its partitions must also be marked TEMPORARY. There is\n> no such restriction when a partitioned table is marked UNLOGGED.\n\nThe reason for temporary tables is different though: we expect\neverything to be gone once the backend that created these relations is\ngone. If persistence cocktails were allowed, the worse thing that\ncould happen would be to have a partitioned table that had temporary\npartitions; its catalog state can easily get broken depending on the\nDDLs issued on it. Valid partitioned index that should not be once\nthe partitions are gone, for example, which would require more exit\nlogic to flip states in pg_class, pg_index, etc.\n\n> My current thinking is that it would be better to disallow marking\n> partitioned tables as LOGGED/UNLOGGED and continue to have users explicitly\n> specify what they want for each partition. It'd still probably be good to\n> expand the documentation, but a clear ERROR when trying to set a\n> partitioned table as UNLOGGED would hopefully clue folks in.\n\nThe addition of the new LOGGED keyword is not required if we limit\nourselves to an error when defining UNLOGGED, so if we drop this\nproposal, let's also drop this part entirely and keep DefineRelation()\nsimpler. Actually, is really issuing an error the best thing we can\ndo after so many years allowing this grammar flavor to go through,\neven if it is perhaps accidental? relpersistence is marked correctly\nfor partitioned tables, it's just useless. 
Expanding the\ndocumentation sounds fine to me, one way or the other, to tell what\nhappens with partitioned tables.\n\nBy the way, I was looking at this patch series, and still got annoyed\nwith the code duplication with ALTER TABLE SET LOGGED/UNLOGGED, so\nI've done something about that for now.\n--\nMichael", "msg_date": "Thu, 29 Aug 2024 15:44:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Thu, Aug 29, 2024 at 03:44:45PM +0900, Michael Paquier wrote:\n> On Tue, Aug 27, 2024 at 04:01:58PM -0500, Nathan Bossart wrote:\n>> My current thinking is that it would be better to disallow marking\n>> partitioned tables as LOGGED/UNLOGGED and continue to have users explicitly\n>> specify what they want for each partition. It'd still probably be good to\n>> expand the documentation, but a clear ERROR when trying to set a\n>> partitioned table as UNLOGGED would hopefully clue folks in.\n> \n> The addition of the new LOGGED keyword is not required if we limit\n> ourselves to an error when defining UNLOGGED, so if we drop this\n> proposal, let's also drop this part entirely and keep DefineRelation()\n> simpler.\n\n+1\n\n> Actually, is really issuing an error the best thing we can\n> do after so many years allowing this grammar flavor to go through,\n> even if it is perhaps accidental? relpersistence is marked correctly\n> for partitioned tables, it's just useless. Expanding the\n> documentation sounds fine to me, one way or the other, to tell what\n> happens with partitioned tables.\n\nIMHO continuing to allow partitioned tables to be marked UNLOGGED just\npreserves the illusion that it does something. An ERROR could help dispel\nthat misconception.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 29 Aug 2024 09:49:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Thu, May 2, 2024 at 2:07 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Apr 25, 2024 at 08:55:27AM +0900, Michael Paquier wrote:\n> > On Wed, Apr 24, 2024 at 04:43:58PM -0700, David G. Johnston wrote:\n> >> My point is that if you feel that treating logged as a copy-able property\n> >> is OK then doing the following should also just work:\n> >>\n> >> postgres=# create temp table parentt ( id integer ) partition by range (id);\n> >> CREATE TABLE\n> >> postgres=# create table child10t partition of parentt for values from (0)\n> >> to (9);\n> >> ERROR: cannot create a permanent relation as partition of temporary\n> >> relation \"parentt\"\n> >>\n> >> i.e., child10t should be created as a temporary partition under parentt.\n> >\n> > Ah, indeed, I've missed your point here. Lifting the error and\n> > inheriting temporary in this case would make sense.\n>\n> The case of a temporary persistence is actually *very* tricky. The\n> namespace, where the relation is created, is guessed and locked with\n> permission checks done based on the RangeVar when the CreateStmt is\n> transformed, which is before we try to look at its inheritance tree to\n> find its partitioned parent. So we would somewhat need to shortcut\n> the existing RangeVar lookup and include the parents in the loop to\n> find out the correct namespace. And this is much earlier than now.\n> The code complexity is not trivial, so I am getting cold feet when\n> trying to support this case in a robust fashion. 
For now, I have\n> discarded this case and focused on the main problem with SET LOGGED\n> and UNLOGGED.\n>\n> Switching between logged <-> unlogged does not have such\n> complications, because the namespace where the relation is created is\n> going to be the same. So we won't lock or perform permission checks\n> on an incorrect namespace.\n>\n> The addition of LOGGED makes the logic deciding how the loggedness of\n> a partition table based on its partitioned table or the query quite\n> easy to follow, but this needs some safety nets in the sequence, view\n> and CTAS code paths to handle with the case where the query specifies\n> no relpersistence.\n>\n> I have also looked at support for ONLY, and I've been surprised that\n> it is not that complicated. tablecmds.c has a ATSimpleRecursion()\n> that is smart enough to do an inheritance tree lookup and apply the\n> rewrites where they should happen in the step 3 of ALTER TABLE, while\n> handling ONLY on its own. The relpersistence of partitioned tables is\n> updated in step 2, with the catalog changes.\n>\n> Attached is a new patch series:\n> - 0001 refactors some code around ATPrepChangePersistence() that I\n> found confusing after applying the operation to partitioned tables.\n> - 0002 adds support for a LOGGED keyword.\n> - 0003 expands ALTER TABLE SET [UN]LOGGED to partitioned tables,\n> without recursion to partitions.\n> - 0004 adds the recursion logic, expanding regression tests to show\n> the difference.\n>\n> 0003 and 0004 should be merged together, I think. Still, splitting\n> them makes reviews a bit easier.\n> --\n> Michael\n\nWhile reviewing the patches, I found a weird error msg:\n\n+ALTER TABLE logged_part_1 SET UNLOGGED; -- fails as a foreign-key exists\n+ERROR: could not change table \"logged_part_1\" to unlogged because it\nreferences logged table \"logged_part_2\"\n\nshould this be *it is referenced by* here?\n\nThe error msg is from ATPrepChangePersistence, and I think we should\ndo something like:\n\ndiff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\nindex b3cc6f8f69..30fbc3836a 100644\n--- a/src/backend/commands/tablecmds.c\n+++ b/src/backend/commands/tablecmds.c\n@@ -16986,7 +16986,7 @@ ATPrepChangePersistence(AlteredTableInfo *tab,\nRelation rel, bool toLogged)\n if (RelationIsPermanent(foreignrel))\n ereport(ERROR,\n\n(errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n- errmsg(\"could\nnot change table \\\"%s\\\" to unlogged because it references logged table\n\\\"%s\\\"\",\n+ errmsg(\"could\nnot change table \\\"%s\\\" to unlogged because it is referenced by logged\ntable \\\"%s\\\"\",\n\n\nWhat do you think?\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sun, 1 Sep 2024 13:24:38 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Thu, Aug 29, 2024 at 09:49:44AM -0500, Nathan Bossart wrote:\n> IMHO continuing to allow partitioned tables to be marked UNLOGGED just\n> preserves the illusion that it does something. An ERROR could help dispel\n> that misconception.\n\nOkay. This is going to be disruptive if we do nothing about pg_dump,\nunfortunately. How about tweaking dumpTableSchema() so as we'd never\nissue UNLOGGED for a partitioned table? 
We could filter that out as\nthere is tbinfo->relkind.\n--\nMichael", "msg_date": "Tue, 3 Sep 2024 09:22:58 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Tue, Sep 03, 2024 at 09:22:58AM +0900, Michael Paquier wrote:\n> On Thu, Aug 29, 2024 at 09:49:44AM -0500, Nathan Bossart wrote:\n>> IMHO continuing to allow partitioned tables to be marked UNLOGGED just\n>> preserves the illusion that it does something. An ERROR could help dispel\n>> that misconception.\n> \n> Okay. This is going to be disruptive if we do nothing about pg_dump,\n> unfortunately. How about tweaking dumpTableSchema() so as we'd never\n> issue UNLOGGED for a partitioned table? We could filter that out as\n> there is tbinfo->relkind.\n\nThat's roughly what I had in mind.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 2 Sep 2024 20:35:15 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Mon, Sep 02, 2024 at 08:35:15PM -0500, Nathan Bossart wrote:\n> On Tue, Sep 03, 2024 at 09:22:58AM +0900, Michael Paquier wrote:\n>> Okay. This is going to be disruptive if we do nothing about pg_dump,\n>> unfortunately. How about tweaking dumpTableSchema() so as we'd never\n>> issue UNLOGGED for a partitioned table? We could filter that out as\n>> there is tbinfo->relkind.\n> \n> That's roughly what I had in mind.\n\nAn idea is attached. The pgbench bit was unexpected.\n--\nMichael", "msg_date": "Tue, 3 Sep 2024 16:59:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Tue, Sep 03, 2024 at 04:59:18PM +0900, Michael Paquier wrote:\n> An idea is attached. The pgbench bit was unexpected.\n\nThis works correctly for CREATE TABLE, but ALTER TABLE still succeeds.\nInterestingly, it doesn't seem to actually change relpersistence for\npartitioned tables. I think we might need to adjust\nATPrepChangePersistence().\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 3 Sep 2024 10:33:02 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Tue, Sep 03, 2024 at 10:33:02AM -0500, Nathan Bossart wrote:\n> This works correctly for CREATE TABLE, but ALTER TABLE still succeeds.\n> Interestingly, it doesn't seem to actually change relpersistence for\n> partitioned tables. I think we might need to adjust\n> ATPrepChangePersistence().\n\nAdjusting ATPrepChangePersistence() looks incorrect to me seeing that\nwe have all the machinery in ATSimplePermissions() at our disposal,\nand that this routine decides that ATT_TABLE should map to both\nRELKIND_RELATION and RELKIND_PARTITIONED_TABLE.\n\nHow about inventing a new ATT_PARTITIONED_TABLE and make a clean split\nbetween both relkinds? I'd guess that blocking both SET LOGGED and\nUNLOGGED for partitioned tables is the best move, even if it is\npossible to block only one or the other, of course.\n--\nMichael", "msg_date": "Mon, 9 Sep 2024 15:56:14 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Mon, Sep 09, 2024 at 03:56:14PM +0900, Michael Paquier wrote:\n> How about inventing a new ATT_PARTITIONED_TABLE and make a clean split\n> between both relkinds? 
I'd guess that blocking both SET LOGGED and\n> UNLOGGED for partitioned tables is the best move, even if it is\n> possible to block only one or the other, of course.\n\nI gave it a try, and while it is much more invasive, it is also much\nmore consistent with the rest of the file.\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 10 Sep 2024 09:42:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Tue, Sep 10, 2024 at 09:42:31AM +0900, Michael Paquier wrote:\n> On Mon, Sep 09, 2024 at 03:56:14PM +0900, Michael Paquier wrote:\n>> How about inventing a new ATT_PARTITIONED_TABLE and make a clean split\n>> between both relkinds? I'd guess that blocking both SET LOGGED and\n>> UNLOGGED for partitioned tables is the best move, even if it is\n>> possible to block only one or the other, of course.\n> \n> I gave it a try, and while it is much more invasive, it is also much\n> more consistent with the rest of the file.\n\nThis looks reasonable to me. Could we also use ATT_PARTITIONED_TABLE to\nremove the partitioned table check in ATExecAddIndexConstraint()?\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 18 Sep 2024 10:17:47 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Wed, Sep 18, 2024 at 10:17:47AM -0500, Nathan Bossart wrote:\n> Could we also use ATT_PARTITIONED_TABLE to remove the partitioned table\n> check in ATExecAddIndexConstraint()?\n\nEh, never mind. That ends up being gross because you have to redo the\nrelation type check in a few places.\n\n-- \nnathan", "msg_date": "Wed, 18 Sep 2024 10:58:34 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Wed, Sep 18, 2024 at 10:58:34AM -0500, Nathan Bossart wrote:\n> On Wed, Sep 18, 2024 at 10:17:47AM -0500, Nathan Bossart wrote:\n>> Could we also use ATT_PARTITIONED_TABLE to remove the partitioned table\n>> check in ATExecAddIndexConstraint()?\n> \n> Eh, never mind. That ends up being gross because you have to redo the\n> relation type check in a few places.\n\nI did not notice this one. I have to admit that the error message\nconsistency is welcome, not the extra checks required at\ntransformation.\n--\nMichael", "msg_date": "Thu, 19 Sep 2024 08:06:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Thu, Sep 19, 2024 at 08:06:19AM +0900, Michael Paquier wrote:\n> I did not notice this one. I have to admit that the error message\n> consistency is welcome, not the extra checks required at\n> transformation.\n\nI have applied 0001 for now to add ATT_PARTITIONED_TABLE. 
Attached is\nthe remaining piece.\n--\nMichael", "msg_date": "Thu, 19 Sep 2024 13:08:33 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On 2024-Sep-10, Michael Paquier wrote:\n\n> @@ -5060,17 +5061,17 @@ ATPrepCmd(List **wqueue, Relation rel, AlterTableCmd *cmd,\n> \t\t\tpass = AT_PASS_MISC;\n> \t\t\tbreak;\n> \t\tcase AT_AttachPartition:\n> -\t\t\tATSimplePermissions(cmd->subtype, rel, ATT_TABLE | ATT_PARTITIONED_INDEX);\n> +\t\t\tATSimplePermissions(cmd->subtype, rel, ATT_TABLE | ATT_PARTITIONED_TABLE | ATT_PARTITIONED_INDEX);\n> \t\t\t/* No command-specific prep needed */\n> \t\t\tpass = AT_PASS_MISC;\n> \t\t\tbreak;\n> \t\tcase AT_DetachPartition:\n> -\t\t\tATSimplePermissions(cmd->subtype, rel, ATT_TABLE);\n> +\t\t\tATSimplePermissions(cmd->subtype, rel, ATT_TABLE | ATT_PARTITIONED_TABLE);\n> \t\t\t/* No command-specific prep needed */\n> \t\t\tpass = AT_PASS_MISC;\n> \t\t\tbreak;\n> \t\tcase AT_DetachPartitionFinalize:\n> -\t\t\tATSimplePermissions(cmd->subtype, rel, ATT_TABLE);\n> +\t\t\tATSimplePermissions(cmd->subtype, rel, ATT_TABLE | ATT_PARTITIONED_TABLE);\n> \t\t\t/* No command-specific prep needed */\n> \t\t\tpass = AT_PASS_MISC;\n> \t\t\tbreak;\n\nIt looks to me like these cases could be modified to accept only\nATT_PARTITIONED_TABLE, not ATT_TABLE anymore. The ATT_TABLE cases are\nuseless anyway, because they're rejected by transformPartitionCmd.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/", "msg_date": "Thu, 19 Sep 2024 10:03:09 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Thu, Sep 19, 2024 at 10:03:09AM +0200, Alvaro Herrera wrote:\n> It looks to me like these cases could be modified to accept only\n> ATT_PARTITIONED_TABLE, not ATT_TABLE anymore. The ATT_TABLE cases are\n> useless anyway, because they're rejected by transformPartitionCmd.\n\n+1. If anything, this makes the error messages more consistent.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 19 Sep 2024 14:45:04 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Thu, Sep 19, 2024 at 10:03:09AM +0200, Alvaro Herrera wrote:\n> It looks to me like these cases could be modified to accept only\n> ATT_PARTITIONED_TABLE, not ATT_TABLE anymore. 
The ATT_TABLE cases are\n> useless anyway, because they're rejected by transformPartitionCmd.\n\nMakes sense to me, thanks for the suggestion.\n\nWhat do you think about adding a test with DETACH FINALIZE when\nattempting it on a normal table, its path being under a different\nsubcommand than DETACH [CONCURRENTLY]?\n\nThere are no tests for normal tables with DETACH CONCURRENTLY, but as\nit is the same as DETACH under the AT_DetachPartition subcommand, that\ndoes not seem worth the extra cycles.\n--\nMichael", "msg_date": "Fri, 20 Sep 2024 09:37:54 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" }, { "msg_contents": "On Fri, Sep 20, 2024 at 09:37:54AM +0900, Michael Paquier wrote:\n> What do you think about adding a test with DETACH FINALIZE when\n> attempting it on a normal table, its path being under a different\n> subcommand than DETACH [CONCURRENTLY]?\n> \n> There are no tests for normal tables with DETACH CONCURRENTLY, but as\n> it is the same as DETACH under the AT_DetachPartition subcommand, that\n> does not seem worth the extra cycles.\n\nAdded an extra test for the CONCURRENTLY case at the end, and applied\nthe simplification.\n\nHmm. Should we replace the error messages in transformPartitionCmd()\nwith some assertions at this stage? I don't see a high cost in\nkeeping these even now, and keeping errors is perhaps more useful for\nsome extension code that builds AT_AttachPartition or\nAT_DetachPartition commands?\n--\nMichael", "msg_date": "Tue, 24 Sep 2024 09:06:40 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioned tables and [un]loggedness" } ]
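A sketch of the pg_dump adjustment discussed in this thread (filtering UNLOGGED out of dumpTableSchema() for partitioned tables, keyed on tbinfo->relkind), since only the idea and not the code appears above. This is a hedged illustration, not the committed patch: the surrounding dumpTableSchema() code is elided, and the variable names q and qualrelname are assumed from pg_dump's conventions.

    /*
     * Sketch only: emit "UNLOGGED " solely for relation kinds where it has
     * an effect, skipping partitioned tables so restores do not trip over a
     * server that now rejects UNLOGGED on them.
     */
    appendPQExpBuffer(q, "CREATE %sTABLE %s",
                      (tbinfo->relpersistence == RELPERSISTENCE_UNLOGGED &&
                       tbinfo->relkind != RELKIND_PARTITIONED_TABLE) ?
                      "UNLOGGED " : "",
                      qualrelname);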
[ { "msg_contents": "libpq's pqTraceOutputMessage() used to look like this:\n\n case 'Z': /* Ready For Query */\n pqTraceOutputZ(conn->Pfdebug, message, &logCursor);\n break;\n\nCommit f4b54e1ed98 introduced macros for protocol characters, so now\nit looks like this:\n\n case PqMsg_ReadyForQuery:\n pqTraceOutputZ(conn->Pfdebug, message, &logCursor);\n break;\n\nBut this introduced a disconnect between the symbol in the switch case\nand the function name to be called, so this made the manageability of\nthis file a bit worse.\n\nThis patch changes the function names to match, so now it looks like\nthis:\n\n case PqMsg_ReadyForQuery:\n pqTraceOutput_ReadyForQuery(conn->Pfdebug, message, &logCursor);\n break;\n\n(This also improves the readability of the file in general, since some\nfunction names like \"pqTraceOutputt\" were a little hard to read\naccurately.)\n\nSome protocol characters have different meanings to and from the\nserver. The old code structure had a common function for both, for\nexample, pqTraceOutputD(). The new structure splits this up into\nseparate ones to match the protocol message name, like\npqTraceOutput_Describe() and pqTraceOutput_DataRow().", "msg_date": "Wed, 24 Apr 2024 09:39:02 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Rename libpq trace internal functions" }, { "msg_contents": "On Wed, 24 Apr 2024 09:39:02 +0200\nPeter Eisentraut <[email protected]> wrote:\n\n> libpq's pqTraceOutputMessage() used to look like this:\n> \n> case 'Z': /* Ready For Query */\n> pqTraceOutputZ(conn->Pfdebug, message, &logCursor);\n> break;\n> \n> Commit f4b54e1ed98 introduced macros for protocol characters, so now\n> it looks like this:\n> \n> case PqMsg_ReadyForQuery:\n> pqTraceOutputZ(conn->Pfdebug, message, &logCursor);\n> break;\n> \n> But this introduced a disconnect between the symbol in the switch case\n> and the function name to be called, so this made the manageability of\n> this file a bit worse.\n> \n> This patch changes the function names to match, so now it looks like\n> this:\n> \n> case PqMsg_ReadyForQuery:\n> pqTraceOutput_ReadyForQuery(conn->Pfdebug, message, &logCursor);\n> break;\n\n+1\n\nI prefer the new function names since it seems more natural and easier to read.\n\nI noticed pqTraceOutputNR() is left as is, is this intentional? Or, shoud this\nbe changed to pqTranceOutput_NoticeResponse()?\n\nRegards,\nYugo Nagata\n\n> (This also improves the readability of the file in general, since some\n> function names like \"pqTraceOutputt\" were a little hard to read\n> accurately.)\n>\n> Some protocol characters have different meanings to and from the\n> server. The old code structure had a common function for both, for\n> example, pqTraceOutputD(). 
The new structure splits this up into\n> separate ones to match the protocol message name, like\n> pqTraceOutput_Describe() and pqTraceOutput_DataRow().\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Wed, 24 Apr 2024 19:34:22 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename libpq trace internal functions" }, { "msg_contents": "On 24.04.24 12:34, Yugo NAGATA wrote:\n> On Wed, 24 Apr 2024 09:39:02 +0200\n> Peter Eisentraut <[email protected]> wrote:\n> \n>> libpq's pqTraceOutputMessage() used to look like this:\n>>\n>> case 'Z': /* Ready For Query */\n>> pqTraceOutputZ(conn->Pfdebug, message, &logCursor);\n>> break;\n>>\n>> Commit f4b54e1ed98 introduced macros for protocol characters, so now\n>> it looks like this:\n>>\n>> case PqMsg_ReadyForQuery:\n>> pqTraceOutputZ(conn->Pfdebug, message, &logCursor);\n>> break;\n>>\n>> But this introduced a disconnect between the symbol in the switch case\n>> and the function name to be called, so this made the manageability of\n>> this file a bit worse.\n>>\n>> This patch changes the function names to match, so now it looks like\n>> this:\n>>\n>> case PqMsg_ReadyForQuery:\n>> pqTraceOutput_ReadyForQuery(conn->Pfdebug, message, &logCursor);\n>> break;\n> \n> +1\n> \n> I prefer the new function names since it seems more natural and easier to read.\n> \n> I noticed pqTraceOutputNR() is left as is, is this intentional? Or, shoud this\n> be changed to pqTranceOutput_NoticeResponse()?\n\npqTraceOutputNR() is shared code used internally by _ErrorResponse() and \n_NoticeResponse(). I have updated the comments a bit to make this clearer.\n\nWith that, I have committed it. Thanks.\n\n\n\n", "msg_date": "Thu, 2 May 2024 16:22:49 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Rename libpq trace internal functions" } ]
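One detail only described in prose above is how the renamed helpers handle protocol bytes whose meaning depends on direction. A minimal sketch of the 'D' case inside pqTraceOutputMessage(), assuming the dispatch uses the existing toServer flag; the helper signatures shown here (no extra arguments such as a regress flag) are an assumption, not the committed code.

    case PqMsg_Describe:
        /* 'D' is Describe when sent to the server, DataRow when received */
        if (toServer)
            pqTraceOutput_Describe(conn->Pfdebug, message, &logCursor);
        else
            pqTraceOutput_DataRow(conn->Pfdebug, message, &logCursor);
        break;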
[ { "msg_contents": "In sort_inner_and_outer, we create mergejoin join paths by explicitly\nsorting both relations on each possible ordering of the available\nmergejoin clauses. However, if there are no available mergejoin\nclauses, we can skip this process entirely. It seems that this is a\nrelatively common scenario. Checking the regression tests I noticed\nthat there are a lot of cases where we would arrive here with an empty\nmergeclause_list.\n\nSo I'm wondering if it's worth considering a shortcut in\nsort_inner_and_outer by checking if mergeclause_list is empty. This can\nhelp skip all the statements preceding select_outer_pathkeys_for_merge.\nIn particular this may help avoid building UniquePath paths in the case\nof JOIN_UNIQUE_OUTER or JOIN_UNIQUE_INNER.\n\nI asked this because in the \"Right Semi Join\" patch [1] I wanted to\nexclude mergejoin from being considered for JOIN_RIGHT_SEMI. So I set\nmergeclause_list to NIL, but noticed that it still runs the statements\nin sort_inner_and_outer until no available outer pathkeys are found in\nselect_outer_pathkeys_for_merge.\n\nAttached is a trivial patch for this. Thoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAMbWs4_X1mN%3Dic%2BSxcyymUqFx9bB8pqSLTGJ-F%3DMHy4PW3eRXw%40mail.gmail.com\n\nThanks\nRichard", "msg_date": "Wed, 24 Apr 2024 17:13:12 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Short-circuit sort_inner_and_outer if there are no mergejoin clauses" }, { "msg_contents": "On Wed, Apr 24, 2024 at 5:13 PM Richard Guo <[email protected]> wrote:\n\n> In sort_inner_and_outer, we create mergejoin join paths by explicitly\n> sorting both relations on each possible ordering of the available\n> mergejoin clauses. However, if there are no available mergejoin\n> clauses, we can skip this process entirely. It seems that this is a\n> relatively common scenario. Checking the regression tests I noticed\n> that there are a lot of cases where we would arrive here with an empty\n> mergeclause_list.\n>\n\nFWIW, during the run of the core regression tests, I found that we enter\nsort_inner_and_outer with an empty mergeclause_list a total of 11064\ntimes. Out of these occurrences, there are 293 instances where the join\ntype is JOIN_UNIQUE_OUTER, indicating the need to create a UniquePath\nfor the outer path. Similarly, there are also 293 instances where the\njoin type is JOIN_UNIQUE_INNER, indicating the need to create a\nUniquePath for the inner path.\n\nThanks\nRichard\n\nOn Wed, Apr 24, 2024 at 5:13 PM Richard Guo <[email protected]> wrote:In sort_inner_and_outer, we create mergejoin join paths by explicitlysorting both relations on each possible ordering of the availablemergejoin clauses.  However, if there are no available mergejoinclauses, we can skip this process entirely.  It seems that this is arelatively common scenario.  Checking the regression tests I noticedthat there are a lot of cases where we would arrive here with an emptymergeclause_list.FWIW, during the run of the core regression tests, I found that we entersort_inner_and_outer with an empty mergeclause_list a total of 11064times.  Out of these occurrences, there are 293 instances where the jointype is JOIN_UNIQUE_OUTER, indicating the need to create a UniquePathfor the outer path.  
Similarly, there are also 293 instances where thejoin type is JOIN_UNIQUE_INNER, indicating the need to create aUniquePath for the inner path.ThanksRichard", "msg_date": "Thu, 25 Apr 2024 15:20:10 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Short-circuit sort_inner_and_outer if there are no mergejoin\n clauses" }, { "msg_contents": "On Thu, Apr 25, 2024 at 12:50 PM Richard Guo <[email protected]> wrote:\n\n>\n> On Wed, Apr 24, 2024 at 5:13 PM Richard Guo <[email protected]>\n> wrote:\n>\n>> In sort_inner_and_outer, we create mergejoin join paths by explicitly\n>> sorting both relations on each possible ordering of the available\n>> mergejoin clauses. However, if there are no available mergejoin\n>> clauses, we can skip this process entirely. It seems that this is a\n>> relatively common scenario. Checking the regression tests I noticed\n>> that there are a lot of cases where we would arrive here with an empty\n>> mergeclause_list.\n>>\n>\n> FWIW, during the run of the core regression tests, I found that we enter\n> sort_inner_and_outer with an empty mergeclause_list a total of 11064\n> times. Out of these occurrences, there are 293 instances where the join\n> type is JOIN_UNIQUE_OUTER, indicating the need to create a UniquePath\n> for the outer path. Similarly, there are also 293 instances where the\n> join type is JOIN_UNIQUE_INNER, indicating the need to create a\n> UniquePath for the inner path.\n>\n\nQuickly looking at the function, the patch would make it more apparent that\nthe function is a noop when mergeclause_list is empty. I haven't looked\nclosely to see if creating unique path nonetheless is useful somewhere\nelse. Please add to the next commitfest. If the patch shows some measurable\nperformance improvement, it would become more attractive.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Apr 25, 2024 at 12:50 PM Richard Guo <[email protected]> wrote:On Wed, Apr 24, 2024 at 5:13 PM Richard Guo <[email protected]> wrote:In sort_inner_and_outer, we create mergejoin join paths by explicitlysorting both relations on each possible ordering of the availablemergejoin clauses.  However, if there are no available mergejoinclauses, we can skip this process entirely.  It seems that this is arelatively common scenario.  Checking the regression tests I noticedthat there are a lot of cases where we would arrive here with an emptymergeclause_list.FWIW, during the run of the core regression tests, I found that we entersort_inner_and_outer with an empty mergeclause_list a total of 11064times.  Out of these occurrences, there are 293 instances where the jointype is JOIN_UNIQUE_OUTER, indicating the need to create a UniquePathfor the outer path.  Similarly, there are also 293 instances where thejoin type is JOIN_UNIQUE_INNER, indicating the need to create aUniquePath for the inner path.Quickly looking at the function, the patch would make it more apparent that the function is a noop when mergeclause_list is empty. I haven't looked closely to see if creating unique path nonetheless is useful somewhere else. Please add to the next commitfest. 
If the patch shows some measurable performance improvement, it would become more attractive.-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 25 Apr 2024 16:55:34 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Short-circuit sort_inner_and_outer if there are no mergejoin\n clauses" }, { "msg_contents": "On Thu, Apr 25, 2024 at 7:25 PM Ashutosh Bapat <[email protected]>\nwrote:\n\n> Quickly looking at the function, the patch would make it more apparent\n> that the function is a noop when mergeclause_list is empty.\n>\n\nThanks for looking at this patch. Yes, that's what it does.\n\n\n> I haven't looked closely to see if creating unique path nonetheless is\n> useful somewhere else.\n>\n\nIt seems that one of the side effects of create_unique_path is that it\ncaches the generated unique path so that we can avoid creating it\nrepeatedly for the same rel. But this does not seem to justify calling\ncreate_unique_path when we know it is unnecessary.\n\nPlease add to the next commitfest.\n>\n\nDone.\n\n\n> If the patch shows some measurable performance improvement, it would\n> become more attractive.\n>\n\nI doubt that there is measurable performance improvement. But I found\nthat throughout the run of the regression tests, sort_inner_and_outer is\ncalled a total of 44,424 times. Among these calls, there are 11,064\ninstances where mergeclause_list is found to be empty. This accounts\nfor ~1/4. I think maybe this suggests that it's worth the shortcut as\nthe patch does.\n\nThanks\nRichard\n\nOn Thu, Apr 25, 2024 at 7:25 PM Ashutosh Bapat <[email protected]> wrote:Quickly looking at the function, the patch would make it more apparent that the function is a noop when mergeclause_list is empty.Thanks for looking at this patch.  Yes, that's what it does. I haven't looked closely to see if creating unique path nonetheless is useful somewhere else.It seems that one of the side effects of create_unique_path is that itcaches the generated unique path so that we can avoid creating itrepeatedly for the same rel.  But this does not seem to justify callingcreate_unique_path when we know it is unnecessary.  Please add to the next commitfest.Done.  If the patch shows some measurable performance improvement, it would become more attractive.I doubt that there is measurable performance improvement.  But I foundthat throughout the run of the regression tests, sort_inner_and_outer iscalled a total of 44,424 times.  Among these calls, there are 11,064instances where mergeclause_list is found to be empty.  This accountsfor ~1/4.  I think maybe this suggests that it's worth the shortcut asthe patch does.ThanksRichard", "msg_date": "Fri, 26 Apr 2024 14:57:14 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Short-circuit sort_inner_and_outer if there are no mergejoin\n clauses" }, { "msg_contents": "On Fri, Apr 26, 2024 at 2:57 PM Richard Guo <[email protected]> wrote:\n> I doubt that there is measurable performance improvement. But I found\n> that throughout the run of the regression tests, sort_inner_and_outer is\n> called a total of 44,424 times. Among these calls, there are 11,064\n> instances where mergeclause_list is found to be empty. This accounts\n> for ~1/4. 
I think maybe this suggests that it's worth the shortcut as\n> the patch does.\n\nPushed.\n\nThanks\nRichard\n\n\n", "msg_date": "Tue, 30 Jul 2024 15:46:55 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Short-circuit sort_inner_and_outer if there are no mergejoin\n clauses" } ]
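The "trivial patch" referenced above is not quoted in the thread; the shortcut it describes presumably reduces to an early exit at the top of sort_inner_and_outer(), along these lines (a sketch that assumes the mergejoin clauses are carried in JoinPathExtraData, as elsewhere in joinpath.c):

    /*
     * Placed at the start of sort_inner_and_outer(): if there are no
     * available mergejoin clauses, no merge joins can be built here, so
     * skip the UniquePath creation for JOIN_UNIQUE_OUTER/INNER and the
     * call to select_outer_pathkeys_for_merge() entirely.
     */
    if (extra->mergeclause_list == NIL)
        return;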
[ { "msg_contents": "Hi,\n\nI noticed that a permission check is performed in be_lo_put()\njust after returning inv_open(), but teh permission should be\nalready checked in inv_open(), so I think we can remove this\npart of codes. I attached a patch for this fix.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Wed, 24 Apr 2024 18:59:32 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Remove unnecessary code rom be_lo_put()" }, { "msg_contents": "On 24.04.24 11:59, Yugo NAGATA wrote:\n> I noticed that a permission check is performed in be_lo_put()\n> just after returning inv_open(), but teh permission should be\n> already checked in inv_open(), so I think we can remove this\n> part of codes. I attached a patch for this fix.\n\nYes, I think you are right.\n\nThis check was added in 8d9881911f0, but then the refactoring in \nae20b23a9e7 should probably have removed it.\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 12:41:06 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary code rom be_lo_put()" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 24.04.24 11:59, Yugo NAGATA wrote:\n>> I noticed that a permission check is performed in be_lo_put()\n>> just after returning inv_open(), but teh permission should be\n>> already checked in inv_open(), so I think we can remove this\n>> part of codes. I attached a patch for this fix.\n\n> Yes, I think you are right.\n\nI agree. Do you want to do the honors?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Apr 2024 09:25:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary code rom be_lo_put()" }, { "msg_contents": "On Wed, Apr 24, 2024 at 09:25:09AM -0400, Tom Lane wrote:\n> I agree. Do you want to do the honors?\n\nGood catch. The same check happens when the object is opened. Note\nthat you should be able to remove utils/acl.h at the top of\nbe-fsstubs.c as this would remove the last piece of code that does an\nACL check in this file. No objections with doing that now, removing\nthis code.\n--\nMichael", "msg_date": "Thu, 25 Apr 2024 08:50:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary code rom be_lo_put()" }, { "msg_contents": "On 25.04.24 01:50, Michael Paquier wrote:\n> On Wed, Apr 24, 2024 at 09:25:09AM -0400, Tom Lane wrote:\n>> I agree. Do you want to do the honors?\n> \n> Good catch. The same check happens when the object is opened. Note\n> that you should be able to remove utils/acl.h at the top of\n> be-fsstubs.c as this would remove the last piece of code that does an\n> ACL check in this file. No objections with doing that now, removing\n> this code.\n\nutils/acl.h is still needed for object_ownercheck() called in \nbe_lo_unlink().\n\n\n\n", "msg_date": "Thu, 25 Apr 2024 10:17:00 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary code rom be_lo_put()" }, { "msg_contents": "On 24.04.24 15:25, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> On 24.04.24 11:59, Yugo NAGATA wrote:\n>>> I noticed that a permission check is performed in be_lo_put()\n>>> just after returning inv_open(), but teh permission should be\n>>> already checked in inv_open(), so I think we can remove this\n>>> part of codes. 
I attached a patch for this fix.\n> \n>> Yes, I think you are right.\n> \n> I agree. Do you want to do the honors?\n\ndone\n\n\n\n", "msg_date": "Thu, 25 Apr 2024 10:26:41 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary code rom be_lo_put()" }, { "msg_contents": "On Thu, 25 Apr 2024 10:26:41 +0200\nPeter Eisentraut <[email protected]> wrote:\n\n> On 24.04.24 15:25, Tom Lane wrote:\n> > Peter Eisentraut <[email protected]> writes:\n> >> On 24.04.24 11:59, Yugo NAGATA wrote:\n> >>> I noticed that a permission check is performed in be_lo_put()\n> >>> just after returning inv_open(), but teh permission should be\n> >>> already checked in inv_open(), so I think we can remove this\n> >>> part of codes. I attached a patch for this fix.\n> > \n> >> Yes, I think you are right.\n> > \n> > I agree. Do you want to do the honors?\n> \n> done\n> \n\nThank you!\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Thu, 25 Apr 2024 17:41:55 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove unnecessary code rom be_lo_put()" } ]
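For context on the removal agreed above: once inv_open() with INV_WRITE performs the privilege check itself, be_lo_put() only needs to open, seek, write and close. A rough sketch of the resulting shape, using the existing large-object API calls; argument handling, assertions, the memory context argument and the local variable names are assumptions, so this is not the committed function body.

    lobj = inv_open(loOid, INV_WRITE, CurrentMemoryContext);
    /* no separate permission re-check here: inv_open() already rejects
     * callers lacking UPDATE privilege on the large object */
    inv_seek(lobj, offset, SEEK_SET);
    inv_write(lobj, VARDATA_ANY(str), VARSIZE_ANY_EXHDR(str));
    inv_close(lobj);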
[ { "msg_contents": "Yesterday Fedora 40 was released as GA available release.\n\n\nAccording to the project website (download for Linux/Fedora):\n\n\n\" the PostgreSQL project provides a repository<https://www.postgresql.org/download/linux/redhat/#yum> of packages of all supported versions for the most common distributions.\"\n\nToday (24.4.2024) I upgraded my laptop to Fedora 40, but there where no repository available, so I ended with a mix of Fedora 40 and Fedora 39 installation.\n\nTo mitigate this situation, I propose to provide these repositories much earlier in the beta phase of the distributions:\n\nFedora easily switches from a beta to a full installation by simply dnf upgrading after GA release. So it should be possible, to test the standard repositories and installations in the beta phase.\nThis also could give a broader coverage of the new standard tools utilized (e.g. gcc compiler), wich changed in the current case from major version 13 to 14.\n\nInstead of following every beta introduction of linux distros I propose to provide the repositories with (or shortly after) the standard dates for minor releases, especially in February and August of the year, wich seems well fitting the rythm of distribution releases.\n\nThe website for download also should contain the new upcoming distributions to avoid the current 404 error during installation/upgrade.\n\nShould any serious compiler problems arise, the publication can be delayed until they are solved.\n\nThis proposal also follows the idea of getting reproducable builds and integrating build scripts into the source tree. [1]\n\nI think there should be not much extra work, since the same minor versions of the Postgres repositories have to be build anyway at release time of the linux distribution.\n\nI think for easy installation it is important to have a standard installation experience for all potential users so that manual installation or self compilation can be avoided.\n\nTo complete this message I would kindly ask if somebody has taken a look on bug report # 18339 which addressed another problem with fedora installations. [2]\n\n(CC Devrim according to the thread Security lessons from libzlma - libsystemd)\n\nregards\n\nHans Buschmann\n\n[1] https://www.postgresql.org/message-id/flat/ZgdCpFThi9ODcCsJ%40momjian.us\n\n[2] https://www.postgresql.org/message-id/18339-f9dda01a46c0432f%40postgresql.org\n\n\n\n\n\n\n\n\nYesterday Fedora 40 was released as GA available release.\n\n\nAccording to the project website (download for Linux/Fedora):\n\n\n\n\" the PostgreSQL project provides a \nrepository of packages of all supported versions for the most common distributions.\"\n\n\nToday (24.4.2024) I upgraded my laptop to Fedora 40, but there where no repository available, so I ended with a mix of Fedora 40 and Fedora 39 installation.\n\n\nTo mitigate this situation, I propose to provide these repositories much earlier in the beta phase of the distributions:\n\n\nFedora easily switches from a beta to a full installation by simply dnf upgrading after GA release. So it should be possible, to test the standard repositories and installations in the beta phase.\nThis also could give a broader coverage of the new standard tools utilized (e.g. 
gcc compiler), wich changed in the current case from major version 13 to 14.\n\n\nInstead of following every beta introduction of linux distros I propose to provide the repositories with (or shortly after) the standard dates for minor releases, especially in February and August of the year, wich seems well fitting the rythm of distribution\n releases.\n\n\nThe website for download also should contain the new upcoming distributions to avoid the current 404 error during installation/upgrade.\n\n\nShould any serious compiler problems arise, the publication can be delayed until they are solved.\n\n\nThis proposal also follows the idea of getting reproducable builds and integrating build scripts into the source tree. [1]\n\n\nI think there should be not much extra work, since the same minor versions of the Postgres repositories have to be build anyway at release time of the linux distribution.\n\n\nI think for easy installation it is important to have a standard installation experience for all potential users so that manual installation or self compilation can be avoided.\n\n\nTo complete this message I would kindly ask if somebody has taken a look on bug report # 18339 which addressed another problem with fedora installations. [2]\n\n\n(CC Devrim according to the thread Security lessons from libzlma - libsystemd)\n\n\nregards\n\n\nHans Buschmann \n\n\n[1] https://www.postgresql.org/message-id/flat/ZgdCpFThi9ODcCsJ%40momjian.us\n\n\n[2] https://www.postgresql.org/message-id/18339-f9dda01a46c0432f%40postgresql.org", "msg_date": "Wed, 24 Apr 2024 11:02:18 +0000", "msg_from": "Hans Buschmann <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal: Early providing of PGDG repositories for the major Linux\n distributions like Fedora or Debian" }, { "msg_contents": "Hi,\n\nOn Wed, 2024-04-24 at 11:02 +0000, Hans Buschmann wrote:\n> Today (24.4.2024) I upgraded my laptop to Fedora 40, but there where\n> no repository available, so I ended with a mix of Fedora 40 and Fedora\n> 39 installation.\n\nThis was caused by an unexpected and very long network outage that\nblocked my access to the build instances. I released packages last\nThursday (2 days after F40 was released, which is 26 April 2024)\n\nI sent emails to the RPM mailing list about the issue:\n\nhttps://www.postgresql.org/message-id/aec36aec623741ae314692b318c890c646498ca6.camel%40gunduz.org\nhttps://www.postgresql.org/message-id/1fe99b0def5d7539939421fa5b35db2c8f2a40ad.camel%40gunduz.org\nhttps://www.postgresql.org/message-id/3a1b0f58673d35fae9979ed2b149972195c7b8bc.camel%40gunduz.org\n\n> To mitigate this situation, I propose to provide these repositories\n> much earlier in the beta phase of the distributions:\n\nThis is what I do for the Fedora releases. I'm sure you've noticed that\nin the past.\n\nRegards,\n\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n", "msg_date": "Fri, 03 May 2024 00:44:10 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Early providing of PGDG repositories for the major\n Linux distributions like Fedora or Debian" }, { "msg_contents": "Hello Devrim,\n\n\nSorry for the long network outage!\n\n>>This is what I do for the Fedora releases. I'm sure you've noticed that\n>>in the past.\n\nNo, I didn't notice it and I missed it recently, so I made this proposal (see 1. below).\n\nHere are some points concerning $topic:\n\n\n 1. 
Availability in beta phase of OS distribution\n\nI recently installed a new server with Fedora 39 for production and another virtual machine for test/developing with Fedora 40beta to avoid the immediatly coming system upgrade from F39 to F40.\n\nBy using the standard installation procedure from the PGDG download site I wanted to download the repository for F40 for postgres, but this was not available:\n\nsudo dnf install https://download.postgresql.org/pub/repos/yum/reporpms/F-40-x86_64/pgdg-fedora-repo-latest.noarch.rpm\n\nAs this is the start point of the standard installation I could not install the newest release from PGDG.\n\nBecause the beta software for Fedora silently upgrades to the final version when it becomes GA, this kind of installation guaranties no interruption later on.\n\nEarly installation helps to test the installation and possible changes concerning glibc, icu, jit, compiler or similar cases.\n\nFor the simplicity I propose to make the repositories available at the last regular minor version update before the normal beta phase of the operating system starts.\nThis frees you from hurriying on OS GA day to provide them at time. It is highly probable that in this period neither postgres nor the OS get significant changes so every user (being aware of a beta release of an OS) is quite safe.\n\nThe schedule for many OS (I only have experience with Fedora, but similar for ubuntu, debian and certainly others) is spring time or automn time.\n\nSo the solution is:\n\nProvide the latest minor version packages at the regulary february and august dates for beta versions of upcoming OS versions and also advertise them on the website.\n\n2. Fast propagation of minor versions to OS distributions\n\nI really encourage you in emphasizing downstream to use always the latest minor releases of postgres with a new OS version!\n\nApart from possible security fixes I see a lot of complaints on general/hackers where users report problems with outdated versions.\nPerhaps upstream is not aware of the fact that minor versions are only maintenance releases and users are urged to always use the newest version. (see the corresponding discussion on hackers)\n\nI recently saw a chart (don't remember where) of the delays major distributions have for integrating minor versions. this mostly took over 100 days, sometimes more then 200!\n\nlooking at the example of Fedora 40 they provide PG 16.1 available on their repositories (not PGDG), more then 2 months after release of 16.2.\nIn contrast they integrated GCC 14.01 in F40 (still in beta until may), so no shy of early adoption of important changes.\n\nIt really would be usefull if they apply minor releases like any normal upstream patches in a timely fashion (e.g. google chrome, firefox, httpd ...).\n\n3. Packaging and installation\n\nFor the normal user packaging and installation is not always obvious.\nPostgres is split into different packages which can be installed separately. This is not documented well. There is no guide which packages should be installed to get a standard server like on windows or normal self compilation (including contrib, server, client, libs, libpq).\n\nThere is no information of how installing more then one major version on the same server (e.g. for pg_upgrade) interacts with each other (which libpq is in use?, which path is set?).\n\nRemoving an installation of an elder version (with more then one installed) leaves the path unset, so that the normal commands don't work as expected. 
(see my mail from february).\n\nFurther it is unclear if the installations bundled with the OS and the repositories provided by PGDG use the same packaging/tools and are fully interchangeable.\n\nIs it possible to upgrade the pg version from OS distribution with the PGDG version on production without any hiccup?\n\n4. Undocumented initialization behaviour\n\nThere are subtle differences in initializing the database.\n\nI used to initialize the cluster with initdb and the appropriate flags, coming from windows installations.\n\nUsing the standard recommandation on Fedora:\n\n\nPGSETUP_INITDB_OPTIONS=\" --encoding=UTF8 --data-checksums --lc-messages=C --lc-collate=C\"\nexport PGSETUP_INITDB_OPTIONS\n\nsudo /usr/pgsql-16/bin/postgresql-16-setup initdb\n\nI got a database without data checksum enabled!\n\nIt took me a long time to realize\n\n-- no sudo, otherwise the options are not taken !!!\n\n to succesfull get\n\n/usr/pgsql-16/bin/postgresql-16-setup initdb\n\nThat sudo prohibits the options to be promoted so that postgresql-16-setup did not see them.\n\nI think postgresql-xx-setup (and other changes to the source tree behaviour like pg_hba,conf,ownership of files, which tools run under which user (server, clients applications), priviliges for writing to the file system with server side copy, wal archive copying, firewall settings, service enabling, selinux implications etc.) should be fully documented and the user should be given a real example of its usage, since this is nowhere present in the source tree and unknown for peoble coming from e.g. windows.\n\n5. Improving usability\nAll these points are from my real world experience of usablity of Postgres. I have managed to learn or circumvent specific aspects, but I think usability for every not so skilled user should be in the main focus.\n\nPlease see my suggestions this way!\n\nHans Buschmann\n\n\n\n\n________________________________\nVon: Devrim Gündüz <[email protected]>\nGesendet: Freitag, 3. Mai 2024 01:44\nAn: Hans Buschmann; [email protected]\nBetreff: Re: Proposal: Early providing of PGDG repositories for the major Linux distributions like Fedora or Debian\n\nHi,\n\nOn Wed, 2024-04-24 at 11:02 +0000, Hans Buschmann wrote:\n> Today (24.4.2024) I upgraded my laptop to Fedora 40, but there where\n> no repository available, so I ended with a mix of Fedora 40 and Fedora\n> 39 installation.\n\nThis was caused by an unexpected and very long network outage that\nblocked my access to the build instances. I released packages last\nThursday (2 days after F40 was released, which is 26 April 2024)\n\nI sent emails to the RPM mailing list about the issue:\n\nhttps://www.postgresql.org/message-id/aec36aec623741ae314692b318c890c646498ca6.camel%40gunduz.org\nhttps://www.postgresql.org/message-id/1fe99b0def5d7539939421fa5b35db2c8f2a40ad.camel%40gunduz.org\nhttps://www.postgresql.org/message-id/3a1b0f58673d35fae9979ed2b149972195c7b8bc.camel%40gunduz.org\n\n> To mitigate this situation, I propose to provide these repositories\n> much earlier in the beta phase of the distributions:\n\nThis is what I do for the Fedora releases. I'm sure you've noticed that\nin the past.\n\nRegards,\n\n--\nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n\n\n\n\n\n\nHello Devrim,\n\n\nSorry for the long network outage!\n\n>>This is what I do for the Fedora releases. I'm sure you've noticed that\n>>in the past.\n\n\nNo, I didn't notice it and I missed it recently, so I made this proposal (see 1. 
below).", "msg_date": "Fri, 3 May 2024 06:22:34 +0000", "msg_from": "Hans Buschmann <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Proposal: Early providing of PGDG repositories for the major\n Linux distributions like Fedora or Debian" } ]
[ { "msg_contents": "Hello,\n\nI would like to suggest a new parameter, autovacuum_max_threshold, which \nwould set an upper limit on the number of tuples to delete/update/insert \nprior to vacuum/analyze.\n\nA good default might be 500000.\n\nThe idea would be to replace the following calculation :\n\nvacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;\n\nwith this one :\n\nvacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples / (1 \n+ vac_scale_factor * reltuples / autovacuum_max_threshold)\n\n(and the same for the others, vacinsthresh and anlthresh).\n\nThe attached graph plots vacthresh against pgclass.reltuples, with \ndefault settings :\n\nautovacuum_vacuum_threshold = 50\nautovacuum_vacuum_scale_factor = 0.2\n\nand\n\nautovacuum_max_threshold = 500000 (the suggested default)\n\nThus, for small tables, vacthresh is only slightly smaller than 0.2 * \npgclass.reltuples, but it grows towards 500000 when reltuples → ∞\n\nThe idea is to reduce the need for autovacuum tuning.\n\nThe attached (draft) patch further illustrates the idea.\n\nMy guess is that a similar proposal has already been submitted... and \nrejected 🙂 If so, I'm very sorry for the useless noise.\n\nBest regards,\nFrédéric", "msg_date": "Wed, 24 Apr 2024 14:08:00 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Wed, Apr 24, 2024 at 8:08 AM Frédéric Yhuel\n<[email protected]> wrote:\n>\n> Hello,\n>\n> I would like to suggest a new parameter, autovacuum_max_threshold, which\n> would set an upper limit on the number of tuples to delete/update/insert\n> prior to vacuum/analyze.\n\nHi Frédéric, thanks for the proposal! You are tackling a very tough\nproblem. I would also find it useful to know more about what led you\nto suggest this particular solution. I am very interested in user\nstories around difficulties with what tables are autovacuumed and\nwhen.\n\nAm I correct in thinking that one of the major goals here is for a\nvery large table to be more likely to be vacuumed?\n\n> The idea would be to replace the following calculation :\n>\n> vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;\n>\n> with this one :\n>\n> vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples / (1\n> + vac_scale_factor * reltuples / autovacuum_max_threshold)\n>\n> (and the same for the others, vacinsthresh and anlthresh).\n\nMy first thought when reviewing the GUC and how it is used is\nwondering if its description is a bit misleading.\n\nautovacuum_vacuum_threshold is the \"minimum number of updated or\ndeleted tuples needed to trigger a vacuum\". That is, if this many\ntuples are modified, it *may* trigger a vacuum, but we also may skip\nvacuuming the table for other reasons or due to other factors.\nautovacuum_max_threshold's proposed definition is the upper\nlimit/maximum number of tuples to insert/update/delete prior to\nvacuum/analyze. This implies that if that many tuples have been\nmodified or inserted, the table will definitely be vacuumed -- which\nisn't true. Maybe that is okay, but I thought I would bring it up.\n\n> The attached (draft) patch further illustrates the idea.\n\nThanks for including a patch!\n\n> My guess is that a similar proposal has already been submitted... and\n> rejected 🙂 If so, I'm very sorry for the useless noise.\n\nI rooted around in the hackers archive and couldn't find any threads\non this specific proposal. 
I copied some other hackers I knew of who\nhave worked on this problem and thought about it in the past, in case\nthey know of some existing threads or prior work on this specific\ntopic.\n\n- Melanie\n\n\n", "msg_date": "Wed, 24 Apr 2024 15:10:27 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Wed, Apr 24, 2024 at 03:10:27PM -0400, Melanie Plageman wrote:\n> On Wed, Apr 24, 2024 at 8:08 AM Frédéric Yhuel\n> <[email protected]> wrote:\n>> I would like to suggest a new parameter, autovacuum_max_threshold, which\n>> would set an upper limit on the number of tuples to delete/update/insert\n>> prior to vacuum/analyze.\n> \n> Hi Frédéric, thanks for the proposal! You are tackling a very tough\n> problem. I would also find it useful to know more about what led you\n> to suggest this particular solution. I am very interested in user\n> stories around difficulties with what tables are autovacuumed and\n> when.\n> \n> Am I correct in thinking that one of the major goals here is for a\n> very large table to be more likely to be vacuumed?\n\nIf this is indeed the goal, +1 from me for doing something along these\nlines.\n\n>> The idea would be to replace the following calculation :\n>>\n>> vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;\n>>\n>> with this one :\n>>\n>> vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples / (1\n>> + vac_scale_factor * reltuples / autovacuum_max_threshold)\n>>\n>> (and the same for the others, vacinsthresh and anlthresh).\n> \n> My first thought when reviewing the GUC and how it is used is\n> wondering if its description is a bit misleading.\n\nYeah, I'm having trouble following the proposed mechanics for this new GUC,\nand it's difficult to understand how users would choose a value. If we\njust want to cap the number of tuples required before autovacuum takes\naction, perhaps we could simplify it to something like\n\n\tvacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;\n\tvacthresh = Min(vacthres, vac_max_thresh);\n\nThis would effectively cause autovacuum_vacuum_scale_factor to be\noverridden for large tables where the scale factor would otherwise cause\nthe calculated threshold to be extremely high.\n\n>> My guess is that a similar proposal has already been submitted... and\n>> rejected 🙂 If so, I'm very sorry for the useless noise.\n> \n> I rooted around in the hackers archive and couldn't find any threads\n> on this specific proposal. I copied some other hackers I knew of who\n> have worked on this problem and thought about it in the past, in case\n> they know of some existing threads or prior work on this specific\n> topic.\n\nFWIW I have heard about this problem in the past, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 24 Apr 2024 14:57:47 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "\n\nLe 24/04/2024 à 21:10, Melanie Plageman a écrit :\n> On Wed, Apr 24, 2024 at 8:08 AM Frédéric Yhuel\n> <[email protected]> wrote:\n>>\n>> Hello,\n>>\n>> I would like to suggest a new parameter, autovacuum_max_threshold, which\n>> would set an upper limit on the number of tuples to delete/update/insert\n>> prior to vacuum/analyze.\n> \n> Hi Frédéric, thanks for the proposal! You are tackling a very tough\n> problem. 
I would also find it useful to know more about what led you\n> to suggest this particular solution. I am very interested in user\n> stories around difficulties with what tables are autovacuumed and\n> when.\n>\n\nHi Melanie! I can certainly start compiling user stories about that.\n\nRecently, one of my colleagues wrote an email to our DBA team saying \nsomething along these lines:\n\n« Hey, here is our suggested settings for per table autovacuum \nconfiguration:\n\n| *autovacuum* | L < 1 million | L >= 1 million | L >= 5 \nmillions | L >= 10 millions |\n|:---------------------|--------------:|---------------:|----------------:|-----------------:|\n|`vacuum_scale_factor` | 0.2 (défaut) | 0.1 | 0.05 \n | 0.0 |\n|`vacuum_threshold` | 50 (défaut) | 50 (défaut) | 50 \n(défaut) | 500 000 |\n|`analyze_scale_factor`| 0.1 (défaut) | 0.1 (défaut) | 0.05 \n | 0.0 |\n|`analyze_threshold` | 50 (défaut) | 50 (défaut) | 50 \n(défaut) | 500 000 |\n\nLet's update this table with values for the vacuum_insert_* parameters. »\n\nI wasn't aware that we had this table, and although the settings made \nsense to me, I thought it was rather ugly and cumbersome for the user, \nand I started thinking about how postgres could make his life easier.\n\n> Am I correct in thinking that one of the major goals here is for a\n> very large table to be more likely to be vacuumed?\n>\n\nAbsolutely.\n\n>> The idea would be to replace the following calculation :\n>>\n>> vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;\n>>\n>> with this one :\n>>\n>> vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples / (1\n>> + vac_scale_factor * reltuples / autovacuum_max_threshold)\n>>\n>> (and the same for the others, vacinsthresh and anlthresh).\n> \n> My first thought when reviewing the GUC and how it is used is\n> wondering if its description is a bit misleading.\n> \n> autovacuum_vacuum_threshold is the \"minimum number of updated or\n> deleted tuples needed to trigger a vacuum\". That is, if this many\n> tuples are modified, it *may* trigger a vacuum, but we also may skip\n> vacuuming the table for other reasons or due to other factors.\n> autovacuum_max_threshold's proposed definition is the upper\n> limit/maximum number of tuples to insert/update/delete prior to\n> vacuum/analyze. This implies that if that many tuples have been\n> modified or inserted, the table will definitely be vacuumed -- which\n> isn't true. Maybe that is okay, but I thought I would bring it up.\n>\n\nI'm not too sure I understand. What are the reasons it might by skipped? \nI can think of a concurrent index creation on the same table, or \nanything holding a SHARE UPDATE EXCLUSIVE lock or above. Is this the \nsort of thing you are talking about?\n\nPerhaps a better name for the GUC would be \nautovacuum_asymptotic_limit... or something like that?\n\n>> The attached (draft) patch further illustrates the idea.\n> \n> Thanks for including a patch!\n> \n>> My guess is that a similar proposal has already been submitted... and\n>> rejected 🙂 If so, I'm very sorry for the useless noise.\n> \n> I rooted around in the hackers archive and couldn't find any threads\n> on this specific proposal. 
I copied some other hackers I knew of who\n> have worked on this problem and thought about it in the past, in case\n> they know of some existing threads or prior work on this specific\n> topic.\n> \n\nThanks!\n\n\n", "msg_date": "Thu, 25 Apr 2024 08:52:50 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "Hi Nathan, thanks for your review.\n\nLe 24/04/2024 à 21:57, Nathan Bossart a écrit :\n> Yeah, I'm having trouble following the proposed mechanics for this new GUC,\n> and it's difficult to understand how users would choose a value. If we\n> just want to cap the number of tuples required before autovacuum takes\n> action, perhaps we could simplify it to something like\n> \n> \tvacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;\n> \tvacthresh = Min(vacthres, vac_max_thresh);\n> \n> This would effectively cause autovacuum_vacuum_scale_factor to be\n> overridden for large tables where the scale factor would otherwise cause\n> the calculated threshold to be extremely high.\n\n\nThis would indeed work, and the parameter would be easier to define in \nthe user documentation. I prefer a continuous function... but that is \npersonal taste. It seems to me that autovacuum tuning is quite hard \nanyway, and that it wouldn't be that much difficult with this kind of \nasymptotic limit parameter.\n\nBut I think the most important thing is to avoid per-table configuration \nfor most of the users, or event autovacuum tuning at all, so either of \nthese two formulas would do.\n\n\n", "msg_date": "Thu, 25 Apr 2024 09:13:07 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Thu, Apr 25, 2024 at 09:13:07AM +0200, Fr�d�ric Yhuel wrote:\n> Le 24/04/2024 � 21:57, Nathan Bossart a �crit�:\n>> Yeah, I'm having trouble following the proposed mechanics for this new GUC,\n>> and it's difficult to understand how users would choose a value. If we\n>> just want to cap the number of tuples required before autovacuum takes\n>> action, perhaps we could simplify it to something like\n>> \n>> \tvacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;\n>> \tvacthresh = Min(vacthres, vac_max_thresh);\n>> \n>> This would effectively cause autovacuum_vacuum_scale_factor to be\n>> overridden for large tables where the scale factor would otherwise cause\n>> the calculated threshold to be extremely high.\n> \n> This would indeed work, and the parameter would be easier to define in the\n> user documentation. I prefer a continuous function... but that is personal\n> taste. It seems to me that autovacuum tuning is quite hard anyway, and that\n> it wouldn't be that much difficult with this kind of asymptotic limit\n> parameter.\n\nI do think this is a neat idea, but would the two approaches really be much\ndifferent in practice? The scale factor parameters already help keep the\nlimit smaller for small tables and larger for large ones, so it strikes me\nas needless complexity. 
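To make that comparison concrete, here is a minimal standalone C sketch (not from the proposed patch; it simply assumes the default autovacuum_vacuum_threshold of 50, the default autovacuum_vacuum_scale_factor of 0.2, and the suggested autovacuum_max_threshold of 500000) that tabulates the current formula, the asymptotic formula, and the simple Min() cap for a few table sizes:

#include <stdio.h>

int
main(void)
{
    /* assumed inputs taken from the thread, not from any patch */
    const double base = 50.0;            /* autovacuum_vacuum_threshold */
    const double sf = 0.2;               /* autovacuum_vacuum_scale_factor */
    const double max_thresh = 500000.0;  /* suggested autovacuum_max_threshold */
    const double sizes[] = {1e5, 1e6, 1e7, 1e9};

    for (int i = 0; i < 4; i++)
    {
        double n = sizes[i];
        double current = base + sf * n;
        double asymptotic = base + sf * n / (1.0 + sf * n / max_thresh);
        double capped = current < max_thresh ? current : max_thresh;

        printf("reltuples=%11.0f  current=%11.0f  asymptotic=%7.0f  capped=%7.0f\n",
               n, current, asymptotic, capped);
    }
    return 0;
}

With these assumed inputs, the asymptotic formula already bends the curve at around one
million tuples (roughly 143k instead of 200k), while the Min() cap leaves every table below
about 2.5 million tuples untouched; for tables with hundreds of millions of tuples both end
up within a few percent of 500000.
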
I think we'd need some sort of tangible reason to\nthink the asymptotic limit is better.\n\n> But I think the most important thing is to avoid per-table configuration for\n> most of the users, or event autovacuum tuning at all, so either of these two\n> formulas would do.\n\nYeah, I agree with the goal of minimizing the need for per-table\nconfigurations.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 10:30:58 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Thu, Apr 25, 2024 at 2:52 AM Frédéric Yhuel\n<[email protected]> wrote:\n>\n> Le 24/04/2024 à 21:10, Melanie Plageman a écrit :\n> > On Wed, Apr 24, 2024 at 8:08 AM Frédéric Yhuel\n> > <[email protected]> wrote:\n> >>\n> >> Hello,\n> >>\n> >> I would like to suggest a new parameter, autovacuum_max_threshold, which\n> >> would set an upper limit on the number of tuples to delete/update/insert\n> >> prior to vacuum/analyze.\n> >\n> > Hi Frédéric, thanks for the proposal! You are tackling a very tough\n> > problem. I would also find it useful to know more about what led you\n> > to suggest this particular solution. I am very interested in user\n> > stories around difficulties with what tables are autovacuumed and\n> > when.\n> >\n>\n> Hi Melanie! I can certainly start compiling user stories about that.\n\nCool! That would be very useful.\n\n> >> The idea would be to replace the following calculation :\n> >>\n> >> vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;\n> >>\n> >> with this one :\n> >>\n> >> vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples / (1\n> >> + vac_scale_factor * reltuples / autovacuum_max_threshold)\n> >>\n> >> (and the same for the others, vacinsthresh and anlthresh).\n> >\n> > My first thought when reviewing the GUC and how it is used is\n> > wondering if its description is a bit misleading.\n> >\n> > autovacuum_vacuum_threshold is the \"minimum number of updated or\n> > deleted tuples needed to trigger a vacuum\". That is, if this many\n> > tuples are modified, it *may* trigger a vacuum, but we also may skip\n> > vacuuming the table for other reasons or due to other factors.\n> > autovacuum_max_threshold's proposed definition is the upper\n> > limit/maximum number of tuples to insert/update/delete prior to\n> > vacuum/analyze. This implies that if that many tuples have been\n> > modified or inserted, the table will definitely be vacuumed -- which\n> > isn't true. Maybe that is okay, but I thought I would bring it up.\n> >\n>\n> I'm not too sure I understand. What are the reasons it might by skipped?\n> I can think of a concurrent index creation on the same table, or\n> anything holding a SHARE UPDATE EXCLUSIVE lock or above. Is this the\n> sort of thing you are talking about?\n\nNo, I was thinking more literally that, if reltuples (assuming\nreltuples is modified/inserted tuples) > autovacuum_max_threshold, I\nwould expect the table to be vacuumed. 
However, with your formula,\nthat wouldn't necessarily be true.\n\nI think there are values of reltuples and autovacuum_max_threshold at\nwhich reltuples > autovacuum_max_threshold but reltuples <=\nvac_base_thresh + vac_scale_factor * reltuples / (1 + vac_scale_factor\n* reltuples / autovacuum_max_threshold)\n\nI tried to reduce the formula to come up with a precise definition of\nthe range of values for which this is true, however I wasn't able to\nreduce it to something nice.\n\nHere is just an example of a case:\n\nvac_base_thresh = 2000\nvac_scale_factor = 0.9\nreltuples = 3200\nautovacuum_max_threshold = 2500\n\ntotal_thresh = vac_base_thresh + vac_scale_factor * reltuples / (1 +\nvac_scale_factor * reltuples / autovacuum_max_threshold)\n\ntotal_thresh: 3338. dead tuples: 3200. autovacuum_max_threshold: 2500\n\nso there are more dead tuples than the max threshold, so it should\ntrigger a vacuum, but it doesn't because the total calculated\nthreshold is higher than the number of dead tuples.\n\nThis of course may not be a realistic scenario in practice. It works\nbest the closer scale factor is to 1 (wish I had derived the formula\nsuccessfully) and when autovacuum_max_threshold > 2 * vac_base_thresh.\nSo, maybe it is not an issue.\n\n> Perhaps a better name for the GUC would be\n> autovacuum_asymptotic_limit... or something like that?\n\nIf we keep the asymptotic part, that makes sense. I wonder if we have\nto add another \"vacuum\" in there (e.g.\nautovacuum_vacuum_max_threshold) to be consistent with the other gucs.\nI don't really know why they have that extra \"vacuum\" in them, though.\nMakes the names so long.\n\n- Melanie\n\n\n", "msg_date": "Thu, 25 Apr 2024 12:51:45 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Wed, Apr 24, 2024 at 3:57 PM Nathan Bossart <[email protected]> wrote:\n> Yeah, I'm having trouble following the proposed mechanics for this new GUC,\n> and it's difficult to understand how users would choose a value. If we\n> just want to cap the number of tuples required before autovacuum takes\n> action, perhaps we could simplify it to something like\n>\n> vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;\n> vacthresh = Min(vacthres, vac_max_thresh);\n>\n> This would effectively cause autovacuum_vacuum_scale_factor to be\n> overridden for large tables where the scale factor would otherwise cause\n> the calculated threshold to be extremely high.\n\n+1 for this. It seems a lot easier to understand than the original\nproposal. And in fact, when I was working on my 2024.pgconf.dev\npresentation, I suggested exactly this idea on one of my slides.\n\nI believe that the underlying problem here can be summarized in this\nway: just because I'm OK with 2MB of bloat in my 10MB table doesn't\nmean that I'm OK with 2TB of bloat in my 10TB table. One reason for\nthis is simply that I can afford to waste 2MB much more easily than I\ncan afford to waste 2TB -- and that applies both on disk and in\nmemory. Another reason, at least in existing releases, is that at some\npoint index vacuuming hits a wall because we run out of space for dead\ntuples. We *most definitely* want to do index vacuuming before we get\nto the point where we're going to have to do multiple cycles of index\nvacuuming. That latter problem should be fixed in v17 by the recent\ndead TID storage changes. 
But even so, you generally want to contain\nbloat before too many pages get added to your tables or indexes,\nbecause you can't easily get rid of them again afterward, so I think\nthere's still a good case for preventing autovacuum from scaling the\nthreshold out to infinity.\n\nWhat does surprise me is that Frédéric suggests a default value of\n500,000. If half a million tuples (proposed default) is 20% of your\ntable (default value of autovacuum_vacuum_scale_factor) then your\ntable has 2.5 million tuples. Unless those tuples are very wide, that\ntable isn't even 1GB in size. I'm not aware that there's any problem\nat all with the current formula on a table of that size, or even ten\ntimes that size. I think you need to have tables that are hundreds of\ngigabytes in size at least before this starts to become a serious\nproblem. Looking at this from another angle, in existing releases, the\nmaximum usable amount of autovacuum_work_mem is 1GB, which means we\ncan store one-sixth of a billion dead TIDs, or roughly 166 million.\nAnd that limit has been a source of occasional complaints for years.\nSo we have those complaints on the one hand, suggesting that 166\nmillion is not enough, and then we have this proposal, saying that\nmore than half a million is too much. That's really strange; my\ninitial hunch is that the value should be 100-500x higher than what\nFrédéric proposed.\n\nI'm also sort of wondering how much the tuple width matters here. I'm\nnot quite sure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 14:33:05 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Thu, Apr 25, 2024 at 02:33:05PM -0400, Robert Haas wrote:\n> What does surprise me is that Fr�d�ric suggests a default value of\n> 500,000. If half a million tuples (proposed default) is 20% of your\n> table (default value of autovacuum_vacuum_scale_factor) then your\n> table has 2.5 million tuples. Unless those tuples are very wide, that\n> table isn't even 1GB in size. I'm not aware that there's any problem\n> at all with the current formula on a table of that size, or even ten\n> times that size. I think you need to have tables that are hundreds of\n> gigabytes in size at least before this starts to become a serious\n> problem. Looking at this from another angle, in existing releases, the\n> maximum usable amount of autovacuum_work_mem is 1GB, which means we\n> can store one-sixth of a billion dead TIDs, or roughly 166 million.\n> And that limit has been a source of occasional complaints for years.\n> So we have those complaints on the one hand, suggesting that 166\n> million is not enough, and then we have this proposal, saying that\n> more than half a million is too much. That's really strange; my\n> initial hunch is that the value should be 100-500x higher than what\n> Fr�d�ric proposed.\n\nAgreed, the default should probably be on the order of 100-200M minimum.\n\nThe original proposal also seems to introduce one parameter that would\naffect all three of autovacuum_vacuum_threshold,\nautovacuum_vacuum_insert_threshold, and autovacuum_analyze_threshold. Is\nthat okay? Or do we need to introduce a \"limit\" GUC for each? 
I guess the\nquestion is whether we anticipate any need to have different values for\nthese limits, which might be unlikely.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 14:21:31 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Thu, Apr 25, 2024 at 3:21 PM Nathan Bossart <[email protected]> wrote:\n> Agreed, the default should probably be on the order of 100-200M minimum.\n>\n> The original proposal also seems to introduce one parameter that would\n> affect all three of autovacuum_vacuum_threshold,\n> autovacuum_vacuum_insert_threshold, and autovacuum_analyze_threshold. Is\n> that okay? Or do we need to introduce a \"limit\" GUC for each? I guess the\n> question is whether we anticipate any need to have different values for\n> these limits, which might be unlikely.\n\nI don't think we should make the same limit apply to more than one of\nthose. I would phrase the question in the opposite way that you did:\nis there any particular reason to believe that the limits should be\nthe same? I don't see one.\n\nI think it would be OK to introduce limits for some and leave the\nothers uncapped, but I don't like the idea of reusing the same limit\nfor different things.\n\nMy intuition is strongest for the vacuum threshold -- that's such an\nexpensive operation, takes so long, and has such dire consequences if\nit isn't done. We need to force the table to be vacuumed before it\nbloats out of control. Maybe essentially the same logic applies to the\ninsert threshold, namely, that we should vacuum before the number of\nnot-all-visible pages gets too large, but I think it's less clear.\nIt's just not nearly as bad if that happens. Sure, it may not be great\nwhen vacuum eventually runs and hits a ton of pages all at once, but\nit's not even close to being as catastrophic as the vacuum case.\n\nThe analyze case, I feel, is really murky.\nautovacuum_analyze_scale_factor stands for the proposition that as the\ntable becomes larger, analyze doesn't need to be done as often. If\nwhat you're concerned about is the frequency estimates, that's true:\nan injection of a million new rows can shift frequencies dramatically\nin a small table, but the effect is blunted in a large one. But a lot\nof the cases I've seen have involved the histogram boundaries. If\nyou're inserting data into a table in increasing order, every new\nmillion rows shifts the boundary of the last histogram bucket by the\nsame amount. You either need those rows included in the histogram to\nget good query plans, or you don't. If you do, the frequency with\nwhich you need to analyze does not change as the table grows. If you\ndon't, then it probably does. But the answer doesn't really depend on\nhow big the table is already, but on your workload. So it's unclear to\nme that the proposed parameter is the right idea here at all. It's\nalso unclear to me that the existing system is the right idea. :-)\n\nSo overall I guess I'd lean toward just introducing a cap for the\n\"vacuum\" case and leave the \"insert\" and \"analyze\" cases as ideas for\npossible future consideration, but I'm not 100% sure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 16:21:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" 
}, { "msg_contents": "\n\nLe 25/04/2024 à 21:21, Nathan Bossart a écrit :\n> On Thu, Apr 25, 2024 at 02:33:05PM -0400, Robert Haas wrote:\n>> What does surprise me is that Frédéric suggests a default value of\n>> 500,000. If half a million tuples (proposed default) is 20% of your\n>> table (default value of autovacuum_vacuum_scale_factor) then your\n>> table has 2.5 million tuples. Unless those tuples are very wide, that\n>> table isn't even 1GB in size. I'm not aware that there's any problem\n>> at all with the current formula on a table of that size, or even ten\n>> times that size. I think you need to have tables that are hundreds of\n>> gigabytes in size at least before this starts to become a serious\n>> problem. Looking at this from another angle, in existing releases, the\n>> maximum usable amount of autovacuum_work_mem is 1GB, which means we\n>> can store one-sixth of a billion dead TIDs, or roughly 166 million.\n>> And that limit has been a source of occasional complaints for years.\n>> So we have those complaints on the one hand, suggesting that 166\n>> million is not enough, and then we have this proposal, saying that\n>> more than half a million is too much. That's really strange; my\n>> initial hunch is that the value should be 100-500x higher than what\n>> Frédéric proposed.\n> \n> Agreed, the default should probably be on the order of 100-200M minimum.\n>\n\nI'm not sure... 500000 comes from the table given in a previous message. \nIt may not be large enough. But vacuum also updates the visibility map, \nand a few hundred thousand heap fetches can already hurt the performance \nof an index-only scan, even if most of the blocs are read from cache.\n\n> The original proposal also seems to introduce one parameter that would\n> affect all three of autovacuum_vacuum_threshold,\n> autovacuum_vacuum_insert_threshold, and autovacuum_analyze_threshold. Is\n> that okay? Or do we need to introduce a \"limit\" GUC for each? I guess the\n> question is whether we anticipate any need to have different values for\n> these limits, which might be unlikely.\n> \n\nI agree with you, it seems unlikely. This is also an answer to Melanie's \nquestion about the name of the GUC : I deliberately left out the other \n\"vacuum\" because I thought we only needed one parameter for these three \nthresholds.\n\nNow I have just read Robert's new message, and I understand his point. \nBut is there a real problem with triggering analyze after every 500000 \n(or more) modifications in the table anyway?\n\n\n", "msg_date": "Thu, 25 Apr 2024 22:57:08 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "\n\nLe 25/04/2024 à 18:51, Melanie Plageman a écrit :\n>> I'm not too sure I understand. What are the reasons it might by skipped?\n>> I can think of a concurrent index creation on the same table, or\n>> anything holding a SHARE UPDATE EXCLUSIVE lock or above. Is this the\n>> sort of thing you are talking about?\n> No, I was thinking more literally that, if reltuples (assuming\n> reltuples is modified/inserted tuples) > autovacuum_max_threshold, I\n> would expect the table to be vacuumed. 
However, with your formula,\n> that wouldn't necessarily be true.\n> \n> I think there are values of reltuples and autovacuum_max_threshold at\n> which reltuples > autovacuum_max_threshold but reltuples <=\n> vac_base_thresh + vac_scale_factor * reltuples / (1 + vac_scale_factor\n> * reltuples / autovacuum_max_threshold)\n> \n> I tried to reduce the formula to come up with a precise definition of\n> the range of values for which this is true, however I wasn't able to\n> reduce it to something nice.\n> \n> Here is just an example of a case:\n> \n> vac_base_thresh = 2000\n> vac_scale_factor = 0.9\n> reltuples = 3200\n> autovacuum_max_threshold = 2500\n> \n> total_thresh = vac_base_thresh + vac_scale_factor * reltuples / (1 +\n> vac_scale_factor * reltuples / autovacuum_max_threshold)\n> \n> total_thresh: 3338. dead tuples: 3200. autovacuum_max_threshold: 2500\n> \n> so there are more dead tuples than the max threshold, so it should\n> trigger a vacuum, but it doesn't because the total calculated\n> threshold is higher than the number of dead tuples.\n> \n\nOK, thank you! I got it.\n\n> This of course may not be a realistic scenario in practice. It works\n> best the closer scale factor is to 1 (wish I had derived the formula\n> successfully) and when autovacuum_max_threshold > 2 * vac_base_thresh.\n> So, maybe it is not an issue.\n\nI haven't thought much about this yet. I hope we can avoid such an \nextreme scenario by imposing some kind of constraint on this parameter, \nin relation to the others.\n\nAnyway, with Nathan and Robert upvoting the simpler formula, this will \nprobably become irrelevant anyway :-)\n\n\n", "msg_date": "Thu, 25 Apr 2024 23:12:22 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Thu, Apr 25, 2024 at 4:57 PM Frédéric Yhuel\n<[email protected]> wrote:\n> Now I have just read Robert's new message, and I understand his point.\n> But is there a real problem with triggering analyze after every 500000\n> (or more) modifications in the table anyway?\n\nIt depends on the situation, but even on a laptop, you can do that\nnumber of modifications in one second. You could easily have a\nmoderate large number of tables that hit that threshold every minute,\nand thus get auto-analyzed every minute when an autovacuum worker is\nlaunched in that database. Now, in some situations, that could be a\ngood thing, because I suspect it's not very hard to construct a\nworkload where constantly analyzing all of your busy tables is\nnecessary to maintain query performance. But in general I think what\nwould happen with such a low threshold is that you'd end up with\nautovacuum spending an awful lot of its available resources on useless\nanalyze operations, which would waste I/O and CPU time, and more\nimportantly, interfere with its ability to get vacuums done.\n\nTo put it another way, suppose my tables contain 10 million tuples\neach, which is not particularly large. The analyze scale factor is\n10%, so currently I'd analyze after a million table modifications.\nYour proposal drops that to half a million, so I'm going to start\nanalyzing 20 times more often. If you start doing ANYTHING to a\ndatabase twenty times more often, it can cause a problem. Twenty times\nmore selects, twenty times more checkpoints, twenty times more\nvacuuming, whatever. 
It's just a lot of resources to spend on\nsomething if that thing isn't actually necessary.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 20:01:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Thu, 2024-04-25 at 14:33 -0400, Robert Haas wrote:\n> I believe that the underlying problem here can be summarized in this\n> way: just because I'm OK with 2MB of bloat in my 10MB table doesn't\n> mean that I'm OK with 2TB of bloat in my 10TB table. One reason for\n> this is simply that I can afford to waste 2MB much more easily than I\n> can afford to waste 2TB -- and that applies both on disk and in\n> memory.\n\nI don't find that convincing. Why are 2TB of wasted space in a 10TB\ntable worse than 2TB of wasted space in 100 tables of 100GB each?\n\n> Another reason, at least in existing releases, is that at some\n> point index vacuuming hits a wall because we run out of space for dead\n> tuples. We *most definitely* want to do index vacuuming before we get\n> to the point where we're going to have to do multiple cycles of index\n> vacuuming.\n\nThat is more convincing. But do we need a GUC for that? What about\nmaking a table eligible for autovacuum as soon as the number of dead\ntuples reaches 90% of what you can hold in \"autovacuum_work_mem\"?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 26 Apr 2024 04:24:45 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "\n\nLe 26/04/2024 à 04:24, Laurenz Albe a écrit :\n> On Thu, 2024-04-25 at 14:33 -0400, Robert Haas wrote:\n>> I believe that the underlying problem here can be summarized in this\n>> way: just because I'm OK with 2MB of bloat in my 10MB table doesn't\n>> mean that I'm OK with 2TB of bloat in my 10TB table. One reason for\n>> this is simply that I can afford to waste 2MB much more easily than I\n>> can afford to waste 2TB -- and that applies both on disk and in\n>> memory.\n> \n> I don't find that convincing. Why are 2TB of wasted space in a 10TB\n> table worse than 2TB of wasted space in 100 tables of 100GB each?\n> \n\nGood point, but another way of summarizing the problem would be that the \nautovacuum_*_scale_factor parameters work well as long as we have a more \nor less evenly distributed access pattern in the table.\n\nSuppose my very large table gets updated only for its 1% most recent \nrows. We probably want to decrease autovacuum_analyze_scale_factor and \nautovacuum_vacuum_scale_factor for this one.\n\nPartitioning would be a good solution, but IMHO postgres should be able \nto handle this case anyway, ideally without per-table configuration.\n\n\n", "msg_date": "Fri, 26 Apr 2024 09:35:24 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "Hi,\n\nOn Fri, Apr 26, 2024 at 04:24:45AM +0200, Laurenz Albe wrote:\n> On Thu, 2024-04-25 at 14:33 -0400, Robert Haas wrote:\n> > Another reason, at least in existing releases, is that at some\n> > point index vacuuming hits a wall because we run out of space for dead\n> > tuples. We *most definitely* want to do index vacuuming before we get\n> > to the point where we're going to have to do multiple cycles of index\n> > vacuuming.\n> \n> That is more convincing. But do we need a GUC for that? 
What about\n> making a table eligible for autovacuum as soon as the number of dead\n> tuples reaches 90% of what you can hold in \"autovacuum_work_mem\"?\n\nDue to the improvements in v17, this would basically never trigger\naccordings to my understanding, or at least only after an excessive\namount of bloat has been accumulated.\n\n\nMichael\n\n\n", "msg_date": "Fri, 26 Apr 2024 10:08:33 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "\n\nLe 25/04/2024 à 22:21, Robert Haas a écrit :\n> The analyze case, I feel, is really murky.\n> autovacuum_analyze_scale_factor stands for the proposition that as the\n> table becomes larger, analyze doesn't need to be done as often. If\n> what you're concerned about is the frequency estimates, that's true:\n> an injection of a million new rows can shift frequencies dramatically\n> in a small table, but the effect is blunted in a large one. But a lot\n> of the cases I've seen have involved the histogram boundaries. If\n> you're inserting data into a table in increasing order, every new\n> million rows shifts the boundary of the last histogram bucket by the\n> same amount. You either need those rows included in the histogram to\n> get good query plans, or you don't. If you do, the frequency with\n> which you need to analyze does not change as the table grows. If you\n> don't, then it probably does. But the answer doesn't really depend on\n> how big the table is already, but on your workload. So it's unclear to\n> me that the proposed parameter is the right idea here at all. It's\n> also unclear to me that the existing system is the right idea. 🙂\n\nThis is very interesting. And what about ndistinct? I believe it could \nbe problematic, too, in some (admittedly rare or pathological) cases.\n\nFor example, suppose that the actual number of distinct values grows \nfrom 1000 to 200000 after a batch of insertions, for a particular \ncolumn. OK, in such a case, the default analyze sampling isn't large \nenough to compute a ndistinct close enough to reality anyway. But \nwithout any analyze at all, it can lead to very bad planning - think of \na Nested Loop with a parallel seq scan for the outer table instead of a \nsimple efficient index scan, because the index scan of the inner table \nis overestimated (each index scan cost and number or rows returned).\n\n\n", "msg_date": "Fri, 26 Apr 2024 10:10:20 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Fri, 2024-04-26 at 09:35 +0200, Frédéric Yhuel wrote:\n> \n> Le 26/04/2024 à 04:24, Laurenz Albe a écrit :\n> > On Thu, 2024-04-25 at 14:33 -0400, Robert Haas wrote:\n> > > I believe that the underlying problem here can be summarized in this\n> > > way: just because I'm OK with 2MB of bloat in my 10MB table doesn't\n> > > mean that I'm OK with 2TB of bloat in my 10TB table. One reason for\n> > > this is simply that I can afford to waste 2MB much more easily than I\n> > > can afford to waste 2TB -- and that applies both on disk and in\n> > > memory.\n> > \n> > I don't find that convincing. 
Why are 2TB of wasted space in a 10TB\n> > table worse than 2TB of wasted space in 100 tables of 100GB each?\n> \n> Good point, but another way of summarizing the problem would be that the \n> autovacuum_*_scale_factor parameters work well as long as we have a more \n> or less evenly distributed access pattern in the table.\n> \n> Suppose my very large table gets updated only for its 1% most recent \n> rows. We probably want to decrease autovacuum_analyze_scale_factor and \n> autovacuum_vacuum_scale_factor for this one.\n> \n> Partitioning would be a good solution, but IMHO postgres should be able \n> to handle this case anyway, ideally without per-table configuration.\n\nI agree that you may well want autovacuum and autoanalyze treat your large\ntable differently from your small tables.\n\nBut I am reluctant to accept even more autovacuum GUCs. It's not like\nwe don't have enough of them, rather the opposite. You can slap on more\nGUCs to treat more special cases, but we will never reach the goal of\nhaving a default that will make everybody happy.\n\nI believe that the defaults should work well in moderately sized databases\nwith moderate usage characteristics. If you have large tables or a high\nnumber of transactions per second, you can be expected to make the effort\nand adjust the settings for your case. Adding more GUCs makes life *harder*\nfor the users who are trying to understand and configure how autovacuum works.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 26 Apr 2024 10:18:00 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "Hi,\n\nOn Fri, Apr 26, 2024 at 10:18:00AM +0200, Laurenz Albe wrote:\n> On Fri, 2024-04-26 at 09:35 +0200, Fr�d�ric Yhuel wrote:\n> > Le 26/04/2024 � 04:24, Laurenz Albe a �crit�:\n> > > On Thu, 2024-04-25 at 14:33 -0400, Robert Haas wrote:\n> > > > I believe that the underlying problem here can be summarized in this\n> > > > way: just because I'm OK with 2MB of bloat in my 10MB table doesn't\n> > > > mean that I'm OK with 2TB of bloat in my 10TB table. One reason for\n> > > > this is simply that I can afford to waste 2MB much more easily than I\n> > > > can afford to waste 2TB -- and that applies both on disk and in\n> > > > memory.\n> > > \n> > > I don't find that convincing. Why are 2TB of wasted space in a 10TB\n> > > table worse than 2TB of wasted space in 100 tables of 100GB each?\n> > \n> > Good point, but another way of summarizing the problem would be that the \n> > autovacuum_*_scale_factor parameters work well as long as we have a more \n> > or less evenly distributed access pattern in the table.\n> > \n> > Suppose my very large table gets updated only for its 1% most recent \n> > rows. We probably want to decrease autovacuum_analyze_scale_factor and \n> > autovacuum_vacuum_scale_factor for this one.\n> > \n> > Partitioning would be a good solution, but IMHO postgres should be able \n> > to handle this case anyway, ideally without per-table configuration.\n> \n> I agree that you may well want autovacuum and autoanalyze treat your large\n> table differently from your small tables.\n> \n> But I am reluctant to accept even more autovacuum GUCs. It's not like\n> we don't have enough of them, rather the opposite. 
You can slap on more\n> GUCs to treat more special cases, but we will never reach the goal of\n> having a default that will make everybody happy.\n> \n> I believe that the defaults should work well in moderately sized databases\n> with moderate usage characteristics. If you have large tables or a high\n> number of transactions per second, you can be expected to make the effort\n> and adjust the settings for your case. Adding more GUCs makes life *harder*\n> for the users who are trying to understand and configure how autovacuum works.\n\nWell, I disagree to some degree. I agree that the defaults should work\nwell in moderately sized databases with moderate usage characteristics.\nBut I also think we can do better than telling DBAs to they have to\nmanually fine-tune autovacuum for large tables (and frequenlty\nimplementing by hand what this patch is proposed, namely setting\nautovacuum_vacuum_scale_factor to 0 and autovacuum_vacuum_threshold to a\nhigh number), as this is cumbersome and needs adult supervision that is\nnot always available. Of course, it would be great if we just slap some\nAI into the autovacuum launcher that figures things out automagically,\nbut I don't think we are there, yet.\n\nSo this proposal (probably along with a higher default threshold than\n500000, but IMO less than what Robert and Nathan suggested) sounds like\na stop forward to me. DBAs can set the threshold lower if they want, or\nmaybe we can just turn it off by default if we cannot agree on a sane\ndefault, but I think this (using the simplified formula from Nathan) is\na good approach that takes some pain away from autovacuum tuning and\nreserves that for the really difficult cases.\n\n\nMichael\n\n\n", "msg_date": "Fri, 26 Apr 2024 10:43:44 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On 4/26/24 04:43, Michael Banck wrote:\n> So this proposal (probably along with a higher default threshold than\n> 500000, but IMO less than what Robert and Nathan suggested) sounds like\n> a stop forward to me. DBAs can set the threshold lower if they want, or\n> maybe we can just turn it off by default if we cannot agree on a sane\n> default, but I think this (using the simplified formula from Nathan) is\n> a good approach that takes some pain away from autovacuum tuning and\n> reserves that for the really difficult cases.\n\n+1 to the above\n\nAlthough I don't think 500000 is necessarily too small. In my view, \nhaving autovac run very quickly, even if more frequently, provides an \noverall better user experience.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 26 Apr 2024 09:22:32 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Thu, Apr 25, 2024 at 10:24 PM Laurenz Albe <[email protected]> wrote:\n> I don't find that convincing. Why are 2TB of wasted space in a 10TB\n> table worse than 2TB of wasted space in 100 tables of 100GB each?\n\nIt's not worse, but it's more avoidable. No matter what you do, any\ntable that suffers a reasonable number of updates and/or deletes is\ngoing to have some wasted space. 
When a tuple is deleted or update,\nthe old one has to stick around until its xmax is all-visible, and\nthen after that until the page is HOT pruned which may not happen\nimmediately, and then even after that the line pointer sticks around\nuntil the next vacuum which doesn't happen instantly either. No matter\nhow aggressive you make autovacuum, or even no matter how aggressively\nyou vacuum manually, non-insert-only tables are always going to end up\ncontaining some bloat.\n\nBut how much? Well, it's basically given by\nRATE_AT_WHICH_SPACE_IS_WASTED * AVERAGE_TIME_UNTIL_SPACE_IS_RECLAIMED.\nWhich, you'll note, does not really depend on the table size. It does\na little bit, because the time until a tuple is fully removed,\nincluding the line pointer, depends on how long vacuum takes, and\nvacuum takes larger on a big table than a small one. But the effect is\nmuch less than linear, I believe, because you can HOT-prune as soon as\nthe xmax is all-visible, which reclaims most of the space instantly.\nSo in practice, the minimum feasible steady-state bloat for a table\ndepends a great deal on how fast updates and deletes are happening,\nbut only weakly on the size of the table.\n\nWhich, in plain English, means that you should be able to vacuum a\n10TB table often enough that it doesn't accumulate 2TB of bloat, if\nyou want to. It's going to be harder to vacuum a 10GB table often\nenough that it doesn't accumulate 2GB of bloat. And it's going to be\n*really* hard to vacuum a 10MB table often enough that it doesn't\naccumulate 2MB of bloat. The only way you're going to be able to do\nthat last one at all is if the update rate is very low.\n\n> > Another reason, at least in existing releases, is that at some\n> > point index vacuuming hits a wall because we run out of space for dead\n> > tuples. We *most definitely* want to do index vacuuming before we get\n> > to the point where we're going to have to do multiple cycles of index\n> > vacuuming.\n>\n> That is more convincing. But do we need a GUC for that? What about\n> making a table eligible for autovacuum as soon as the number of dead\n> tuples reaches 90% of what you can hold in \"autovacuum_work_mem\"?\n\nThat would have been a good idea to do in existing releases, a long\ntime before now, but we didn't. However, the new dead TID store\nchanges the picture, because if I understand John Naylor's remarks\ncorrectly, the new TID store can hold so many TIDs so efficiently that\nyou basically won't run out of memory. So now I think this wouldn't be\neffective - yet I still think it's wrong to let the vacuum threshold\nscale without bound as the table size increases.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 09:27:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Fri, Apr 26, 2024 at 9:22 AM Joe Conway <[email protected]> wrote:\n> Although I don't think 500000 is necessarily too small. In my view,\n> having autovac run very quickly, even if more frequently, provides an\n> overall better user experience.\n\nCan you elaborate on why you think that? I mean, to me, that's almost\nequivalent to removing autovacuum_vacuum_scale_factor entirely,\nbecause only for very small tables will that calculation produce a\nvalue lower than 500k.\n\nWe might need to try to figure out some test cases here. 
My intuition\nis that this is going to vacuum large tables insanely aggressively.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 09:31:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Fri, Apr 26, 2024 at 4:43 AM Michael Banck <[email protected]> wrote:\n> > I believe that the defaults should work well in moderately sized databases\n> > with moderate usage characteristics. If you have large tables or a high\n> > number of transactions per second, you can be expected to make the effort\n> > and adjust the settings for your case. Adding more GUCs makes life *harder*\n> > for the users who are trying to understand and configure how autovacuum works.\n>\n> Well, I disagree to some degree. I agree that the defaults should work\n> well in moderately sized databases with moderate usage characteristics.\n> But I also think we can do better than telling DBAs to they have to\n> manually fine-tune autovacuum for large tables (and frequenlty\n> implementing by hand what this patch is proposed, namely setting\n> autovacuum_vacuum_scale_factor to 0 and autovacuum_vacuum_threshold to a\n> high number), as this is cumbersome and needs adult supervision that is\n> not always available. Of course, it would be great if we just slap some\n> AI into the autovacuum launcher that figures things out automagically,\n> but I don't think we are there, yet.\n>\n> So this proposal (probably along with a higher default threshold than\n> 500000, but IMO less than what Robert and Nathan suggested) sounds like\n> a stop forward to me. DBAs can set the threshold lower if they want, or\n> maybe we can just turn it off by default if we cannot agree on a sane\n> default, but I think this (using the simplified formula from Nathan) is\n> a good approach that takes some pain away from autovacuum tuning and\n> reserves that for the really difficult cases.\n\nI agree with this. If having an extra setting substantially reduces\nthe number of cases that require manual tuning, it's totally worth it.\nAnd I think it will.\n\nTo be clear, I don't think this is the biggest problem with the\nautovacuum algorithm, not by quite a bit. But it's a relatively easy\none to fix.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 09:37:38 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On 4/26/24 09:31, Robert Haas wrote:\n> On Fri, Apr 26, 2024 at 9:22 AM Joe Conway <[email protected]> wrote:\n>> Although I don't think 500000 is necessarily too small. In my view,\n>> having autovac run very quickly, even if more frequently, provides an\n>> overall better user experience.\n> \n> Can you elaborate on why you think that? I mean, to me, that's almost\n> equivalent to removing autovacuum_vacuum_scale_factor entirely,\n> because only for very small tables will that calculation produce a\n> value lower than 500k.\n\nIf I understood Nathan's proposed calc, for small tables you would still \nget (thresh + sf * numtuples). Once that number exceeds the new limit \nparameter, then the latter would kick in. So small tables would retain \nthe current behavior and large enough tables would be clamped.\n\n> We might need to try to figure out some test cases here. 
My intuition\n> is that this is going to vacuum large tables insanely aggressively.\n\nIt depends on workload to be sure. Just because a table is large, it \ndoesn't mean that dead rows are generated that fast.\n\nAdmittedly it has been quite a while since I looked at all this that \nclosely, but if A/V runs on some large busy table for a few milliseconds \nonce every few minutes, that is far less disruptive than A/V running for \ntens of seconds once every few hours or for minutes ones every few days \n-- or whatever. The key thing to me is the \"few milliseconds\" runtime. \nThe short duration means that no one notices an impact, and the longer \nduration almost guarantees that an impact will be felt.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 26 Apr 2024 09:40:05 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Fri, Apr 26, 2024 at 9:40 AM Joe Conway <[email protected]> wrote:\n> > Can you elaborate on why you think that? I mean, to me, that's almost\n> > equivalent to removing autovacuum_vacuum_scale_factor entirely,\n> > because only for very small tables will that calculation produce a\n> > value lower than 500k.\n>\n> If I understood Nathan's proposed calc, for small tables you would still\n> get (thresh + sf * numtuples). Once that number exceeds the new limit\n> parameter, then the latter would kick in. So small tables would retain\n> the current behavior and large enough tables would be clamped.\n\nRight. But with a 500k threshold, \"large enough\" is not very large at\nall. The default scale factor is 20%, so the crossover point is at 2.5\nmillion tuples. That's pgbench scale factor 25, which is a 320MB\ntable.\n\n> It depends on workload to be sure. Just because a table is large, it\n> doesn't mean that dead rows are generated that fast.\n\nThat is true, as far as it goes.\n\n> Admittedly it has been quite a while since I looked at all this that\n> closely, but if A/V runs on some large busy table for a few milliseconds\n> once every few minutes, that is far less disruptive than A/V running for\n> tens of seconds once every few hours or for minutes ones every few days\n> -- or whatever. The key thing to me is the \"few milliseconds\" runtime.\n> The short duration means that no one notices an impact, and the longer\n> duration almost guarantees that an impact will be felt.\n\nSure, I mean, I totally agree with that, but how is a vacuum on a\nlarge table going to run for milliseconds? If it can skip index\nvacuuming, sure, then it's quick, because it only needs to scan the\nheap pages that are not all-visible. But as soon as index vacuuming is\ntriggered, it's going to take a while. You can't afford to trigger\nthat constantly.\n\nLet's compare the current situation to the situation post-patch with a\ncap of 500k. Consider a table 1024 times larger than the one I\nmentioned above, so pgbench scale factor 25600, size on disk 320GB.\nCurrently, that table will be vacuumed for bloat when the number of\ndead tuples exceeds 20% of the table size, because that's the default\nvalue of autovacuum_vacuum_scale_factor. The table has 2.56 billion\ntuples, so that means that we're going to vacuum it when there are\nmore than 510 million dead tuples. Post-patch, we will vacuum when we\nhave 500 thousand dead tuples. Suppose a uniform workload that slowly\nupdates rows in the table. 
If we were previously autovacuuming the\ntable once per day (1440 minutes) we're now going to try to vacuum it\nalmost every minute (1440 minutes / 1024 = 84 seconds).\n\nUnless I'm missing something major, that's completely bonkers. It\nmight be true that it would be a good idea to vacuum such a table more\noften than we do at present, but there's no shot that we want to do it\nthat much more often. The pgbench_accounts_pkey index will, I believe,\nbe on the order of 8-10GB at that scale. We can't possibly want to\nincur that much extra I/O every minute, and I don't think it's going\nto finish in milliseconds, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 10:12:48 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "I've been following this discussion and would like to add my\r\n2 cents.\r\n\r\n> Unless I'm missing something major, that's completely bonkers. It\r\n> might be true that it would be a good idea to vacuum such a table more\r\n> often than we do at present, but there's no shot that we want to do it\r\n> that much more often. \r\n\r\nThis is really an important point.\r\n\r\nToo small of a threshold and a/v will constantly be vacuuming a fairly large \r\nand busy table with many indexes. \r\n\r\nIf the threshold is large, say 100 or 200 million, I question if you want autovacuum \r\nto be doing the work of cleanup here? That long of a period without a autovacuum \r\non a table means there maybe something misconfigured in your autovacuum settings. \r\n\r\nAt that point aren't you just better off performing a manual vacuum and\r\ntaking advantage of parallel index scans?\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Wed, 1 May 2024 18:19:03 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Wed, May 1, 2024 at 2:19 PM Imseih (AWS), Sami <[email protected]> wrote:\n> > Unless I'm missing something major, that's completely bonkers. It\n> > might be true that it would be a good idea to vacuum such a table more\n> > often than we do at present, but there's no shot that we want to do it\n> > that much more often.\n>\n> This is really an important point.\n>\n> Too small of a threshold and a/v will constantly be vacuuming a fairly large\n> and busy table with many indexes.\n>\n> If the threshold is large, say 100 or 200 million, I question if you want autovacuum\n> to be doing the work of cleanup here? That long of a period without a autovacuum\n> on a table means there maybe something misconfigured in your autovacuum settings.\n>\n> At that point aren't you just better off performing a manual vacuum and\n> taking advantage of parallel index scans?\n\nAs far as that last point goes, it would be good if we taught\nautovacuum about several things it doesn't currently know about;\nparallelism is one. IMHO, it's probably not the most important one,\nbut it's certainly on the list. I think, though, that we should\nconfine ourselves on this thread to talking about what the threshold\nought to be.\n\nAnd as far as that goes, I'd like you - and others - to spell out more\nprecisely why you think 100 or 200 million tuples is too much. It\nmight be, or maybe it is in some cases but not in others. To me,\nthat's not a terribly large amount of data. 
Unless your tuples are\nvery wide, it's a few tens of gigabytes. That is big enough that I can\nbelieve that you *might* want autovacuum to run when you hit that\nthreshold, but it's definitely not *obvious* to me that you want\nautovacuum to run when you hit that threshold.\n\nTo make that concrete: If the table is 10TB, do you want to vacuum to\nreclaim 20GB of bloat? You might be vacuuming 5TB of indexes to\nreclaim 20GB of heap space - is that the right thing to do? If yes,\nwhy?\n\nI do think it's interesting that other people seem to think we should\nbe vacuuming more often on tables that are substantially smaller than\nthe ones that seem like a big problem to me. I'm happy to admit that\nmy knowledge of this topic is not comprehensive and I'd like to learn\nfrom the experience of others. But I think it's clearly and obviously\nunworkable to multiply the current frequency of vacuuming for large\ntables by a three or four digit number. Possibly what we need here is\nsomething other than a cap, where, say, we vacuum a 10GB table twice\nas often as now, a 100GB table four times as often, and a 1TB table\neight times as often. Or whatever the right answer is. But we can't\njust pull numbers out of the air like that: we need to be able to\njustify our choices. I think we all agree that big tables need to be\nvacuumed more often than the current formula does, but we seem to be\nrather far apart on the values of \"big\" and \"more\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 May 2024 14:50:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Sat, 27 Apr 2024 at 02:13, Robert Haas <[email protected]> wrote:\n> Let's compare the current situation to the situation post-patch with a\n> cap of 500k. Consider a table 1024 times larger than the one I\n> mentioned above, so pgbench scale factor 25600, size on disk 320GB.\n> Currently, that table will be vacuumed for bloat when the number of\n> dead tuples exceeds 20% of the table size, because that's the default\n> value of autovacuum_vacuum_scale_factor. The table has 2.56 billion\n> tuples, so that means that we're going to vacuum it when there are\n> more than 510 million dead tuples. Post-patch, we will vacuum when we\n> have 500 thousand dead tuples. Suppose a uniform workload that slowly\n> updates rows in the table. If we were previously autovacuuming the\n> table once per day (1440 minutes) we're now going to try to vacuum it\n> almost every minute (1440 minutes / 1024 = 84 seconds).\n\nI've not checked your maths, but if that's true, that's not going to work.\n\nI think there are fundamental problems with the parameters that drive\nautovacuum that need to be addressed before we can consider a patch\nlike this one.\n\nHere are some of the problems that I know about:\n\n1. Autovacuum has exactly zero forward vision and operates reactively\nrather than proactively. This \"blind operating\" causes tables to\neither not need vacuumed or suddenly need vacuumed without any\nconsideration of how busy autovacuum is at that current moment.\n2. There is no prioritisation for the order in which tables are autovacuumed.\n3. With the default scale factor, the larger a table becomes, the more\ninfrequent the autovacuums.\n4. Autovacuum is more likely to trigger when the system is busy\nbecause more transaction IDs are being consumed and there is more DML\noccurring. 
This results in autovacuum having less work to do during\nquiet periods when there are more free resources to be doing the\nvacuum work.\n\nIn my opinion, the main problem with Frédéric's proposed GUC/reloption\nis that it increases the workload that autovacuum is responsible for\nand, because of #2, it becomes more likely that autovacuum works on\nsome table that isn't the highest priority table to work on which can\nresult in autovacuum starvation of tables that are more important to\nvacuum now.\n\nI think we need to do a larger overhaul of autovacuum to improve\npoints 1-4 above. I also think that there's some work coming up that\nmight force us into this sooner than we think. As far as I understand\nit, AIO will break vacuum_cost_page_miss because everything (providing\nIO keeps up) will become vacuum_cost_page_hit. Maybe that's not that\nimportant as that costing is quite terrible anyway.\n\nHere's a sketch of an idea that's been in my head for a while:\n\nRound 1:\n1a) Give autovacuum forward vision (#1 above) and instead of vacuuming\na table when it (atomically) crosses some threshold, use the existing\nscale_factors and autovacuum_freeze_max_age to give each table an\nautovacuum \"score\", which could be a number from 0-100, where 0 means\ndo nothing and 100 means nuclear meltdown. Let's say a table gets 10\npoints for the dead tuples meeting the current scale_factor and maybe\nan additional point for each 10% of proportion the size of the table\nis according to the size of the database (gives some weight to space\nrecovery for larger tables). For relfrozenxid, make the score the\nmaximum of dead tuple score vs the percentage of the age(relfrozenxid)\nis to 2 billion. Use a similar maximum score calc for age(relminmxid)\n2 billion.\n1b) Add a new GUC that defines the minimum score a table must reach\nbefore autovacuum will consider it.\n1c) Change autovacuum to vacuum the tables with the highest scores first.\n\nRound 2:\n2a) Have autovacuum monitor the score of the highest scoring table\nover time with buckets for each power of 2 seconds in history from\nnow. Let's say 20 buckets, about 12 days of history. Migrate scores\ninto older buckets to track the score over time.\n2b) Have autovacuum cost limits adapt according to the history so that\nif the maximum score of any table is trending upwards, that autovacuum\nspeeds up until the score buckets trend downwards towards the present.\n2c) Add another GUC to define the minimum score that autovacuum will\nbe \"proactive\". Must be less than the minimum score to consider\nautovacuum (or at least, ignored unless it is.). This GUC would not\ncause an autovacuum speedup due to 2b) as we'd only consider tables\nwhich meet the GUC added in 1b) in the score history array in 2a).\nThis stops autovacuum running faster than autovacuum_cost_limit when\ntrying to be proactive.\n\nWhile the above isn't well a well-baked idea. The exact way to\ncalculate the scores isn't well thought through, certainly. However, I\ndo think it's an idea that we should consider and improve upon. 
I\nbelieve 2c) helps solve the problem of large tables becoming bloated\nas autovacuum could get to these sooner when the workload is low\nenough for it to run proactively.\n\nI think we need at least 1a) before we can give autovacuum more work\nto do, especially if we do something like multiply its workload by\n1024x, per your comment above.\n\nDavid\n\n\n", "msg_date": "Thu, 2 May 2024 14:02:46 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "\n\nLe 01/05/2024 à 20:50, Robert Haas a écrit :\n> Possibly what we need here is\n> something other than a cap, where, say, we vacuum a 10GB table twice\n> as often as now, a 100GB table four times as often, and a 1TB table\n> eight times as often. Or whatever the right answer is.\n\nIMO, it would make more sense. So maybe something like this:\n\nvacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples, \nvac_base_thresh + vac_scale_factor * sqrt(reltuples) * 1000);\n\n(it could work to compute a score, too, like in David's proposal)\n\n\n\n", "msg_date": "Thu, 2 May 2024 08:44:26 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "> And as far as that goes, I'd like you - and others - to spell out more\r\n> precisely why you think 100 or 200 million tuples is too much. It\r\n> might be, or maybe it is in some cases but not in others. To me,\r\n> that's not a terribly large amount of data. Unless your tuples are\r\n> very wide, it's a few tens of gigabytes. That is big enough that I can\r\n> believe that you *might* want autovacuum to run when you hit that\r\n> threshold, but it's definitely not *obvious* to me that you want\r\n> autovacuum to run when you hit that threshold.\r\n\r\n\r\nVacuuming the heap alone gets faster the more I do it, thanks to the \r\nvisibility map. However, the more indexes I have, and the larger ( in the TBs),\r\nthe indexes become, autovacuum workers will be \r\nmonopolized vacuuming these indexes.\r\n\r\n\r\nAt 500k tuples, I am constantly vacuuming large indexes\r\nand monopolizing autovacuum workers. At 100 or 200\r\nmillion tuples, I will also monopolize autovacuum workers\r\nas they vacuums indexes for many minutes or hours. \r\n\r\n\r\nAt 100 or 200 million, the monopolization will occur less often, \r\nbut it will still occur leading an operator to maybe have to terminate\r\nthe autovacuum an kick of a manual vacuum. \r\n\r\n\r\nI am not convinced a new tuple based threshold will address this, \r\nbut I may also may be misunderstanding the intention of this GUC.\r\n\r\n\r\n> To make that concrete: If the table is 10TB, do you want to vacuum to\r\n> reclaim 20GB of bloat? You might be vacuuming 5TB of indexes to\r\n> reclaim 20GB of heap space - is that the right thing to do? If yes,\r\n> why?\r\n\r\n\r\nNo, I would not want to run autovacuum on 5TB indexes to reclaim \r\na small amount of bloat.\r\n\r\n\r\n\r\n\r\nRegards,\r\n\r\n\r\nSami\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Thu, 2 May 2024 16:01:23 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Wed, May 1, 2024 at 10:03 PM David Rowley <[email protected]> wrote:\n> Here are some of the problems that I know about:\n>\n> 1. 
Autovacuum has exactly zero forward vision and operates reactively\n> rather than proactively. This \"blind operating\" causes tables to\n> either not need vacuumed or suddenly need vacuumed without any\n> consideration of how busy autovacuum is at that current moment.\n> 2. There is no prioritisation for the order in which tables are autovacuumed.\n> 3. With the default scale factor, the larger a table becomes, the more\n> infrequent the autovacuums.\n> 4. Autovacuum is more likely to trigger when the system is busy\n> because more transaction IDs are being consumed and there is more DML\n> occurring. This results in autovacuum having less work to do during\n> quiet periods when there are more free resources to be doing the\n> vacuum work.\n\nI agree with all of these points. For a while now, I've been thinking\nthat we really needed a prioritization scheme, so that we don't waste\nour time on low-priority tasks when there are high-priority tasks that\nneed to be completed. But lately I've started to think that what\nmatters most is the rate at which autovacuum work is happening\noverall. I feel like prioritization is mostly going to matter when\nwe're not keeping up, and I think the primary goal should be to keep\nup. I think we could use the same data to make both decisions -- if\nautovacuum were proactive rather than reactive, that would mean that\nwe know something about what is going to happen in the future, and I\nthink that data could be used both to decide whether we're keeping up,\nand also to prioritize. But if I had to pick a first target, I'd\nforget about trying to make things happen in the right order and just\ntry to make sure we get all the things done.\n\n> I think we need at least 1a) before we can give autovacuum more work\n> to do, especially if we do something like multiply its workload by\n> 1024x, per your comment above.\n\nI guess I view it differently. It seems to me that right now, we're\nnot vacuuming large tables often enough. We should fix that,\nindependently of anything else. If the result is that small and medium\nsized tables get vacuumed less often, then that just means there were\nnever enough resources to go around in the first place. We haven't\ntaken a system that was working fine and broken it: we've just moved\nthe problem from one category of tables (the big ones) to a different\ncategory of tables. If the user wants to solve that problem, they need\nto bump up the cost limit or add hardware. I don't see that we have\nany particular reason to believe such users will be worse off on\naverage than they are today. On the other hand, users who do have a\nsufficiently high cost limit and enough hardware will be better off,\nbecause we'll start doing all the vacuuming work that needs to be done\ninstead of only some of it.\n\nNow, if we start vacuuming any class of table whatsoever 1024x as\noften as we do today, we are going to lose. But that would still be\ntrue even if we did everything on your list. Large tables need to be\nvacuumed more frequently than we now do, but not THAT much more\nfrequently. Any system that produces that result is just using a wrong\nalgorithm, or wrong constants, or something. Even if all the necessary\nresources are available, nobody is going to thank us for vacuuming\ngigantic tables in a tight loop. The problem with such a large\nincrease is not that we don't have prioritization, but that such a\nlarge increase is fundamentally the wrong thing to do. 
On the other\nhand, I think a more modest increase is the right thing to do, and I\nthink it's the right thing to do whether we have prioritization or\nnot.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 May 2024 10:31:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Tue, May 07, 2024 at 10:31:00AM -0400, Robert Haas wrote:\n> On Wed, May 1, 2024 at 10:03 PM David Rowley <[email protected]> wrote:\n>> I think we need at least 1a) before we can give autovacuum more work\n>> to do, especially if we do something like multiply its workload by\n>> 1024x, per your comment above.\n> \n> I guess I view it differently. It seems to me that right now, we're\n> not vacuuming large tables often enough. We should fix that,\n> independently of anything else. If the result is that small and medium\n> sized tables get vacuumed less often, then that just means there were\n> never enough resources to go around in the first place. We haven't\n> taken a system that was working fine and broken it: we've just moved\n> the problem from one category of tables (the big ones) to a different\n> category of tables. If the user wants to solve that problem, they need\n> to bump up the cost limit or add hardware. I don't see that we have\n> any particular reason to believe such users will be worse off on\n> average than they are today. On the other hand, users who do have a\n> sufficiently high cost limit and enough hardware will be better off,\n> because we'll start doing all the vacuuming work that needs to be done\n> instead of only some of it.\n> \n> Now, if we start vacuuming any class of table whatsoever 1024x as\n> often as we do today, we are going to lose. But that would still be\n> true even if we did everything on your list. Large tables need to be\n> vacuumed more frequently than we now do, but not THAT much more\n> frequently. Any system that produces that result is just using a wrong\n> algorithm, or wrong constants, or something. Even if all the necessary\n> resources are available, nobody is going to thank us for vacuuming\n> gigantic tables in a tight loop. The problem with such a large\n> increase is not that we don't have prioritization, but that such a\n> large increase is fundamentally the wrong thing to do. On the other\n> hand, I think a more modest increase is the right thing to do, and I\n> think it's the right thing to do whether we have prioritization or\n> not.\n\nThis is about how I feel, too. In any case, I +1'd a higher default\nbecause I think we need to be pretty conservative with these changes, at\nleast until we have a better prioritization strategy. While folks may opt\nto set this value super low, I think that's more likely to lead to some\ninteresting secondary effects. If the default is high, hopefully these\nsecondary effects will be minimized or avoided.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 7 May 2024 16:17:02 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "> This is about how I feel, too. In any case, I +1'd a higher default\r\n> because I think we need to be pretty conservative with these changes, at\r\n> least until we have a better prioritization strategy. 
While folks may opt\r\n> to set this value super low, I think that's more likely to lead to some\r\n> interesting secondary effects. If the default is high, hopefully these\r\n> secondary effects will be minimized or avoided.\r\n\r\n\r\nThere is also an alternative of making this GUC -1 by default, which\r\nmeans it has not effect and any value larger will be used in the threshold\r\ncalculation of autovacuunm. A user will have to be careful not to set it too low, \r\nbut that is going to be a concern either way.\r\n\r\n\r\nThis idea maybe worth considering as it does not change the default\r\nbehavior of the autovac threshold calculation, and if a user has cases in \r\nwhich they have many tables with a few billion tuples that they wish to \r\nsee autovacuumed more often, they now have a GUC to make \r\nthat possible and potentially avoid per-table threshold configuration.\r\n\r\n\r\nAlso, I think coming up with a good default will be challenging,\r\nand perhaps this idea is a good middle ground.\r\n\r\n\r\nRegards,\r\n\r\nSami \r\n\r\n\r\n\r\n", "msg_date": "Wed, 8 May 2024 17:30:44 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Wed, May 8, 2024 at 1:30 PM Imseih (AWS), Sami <[email protected]> wrote:\n> There is also an alternative of making this GUC -1 by default, which\n> means it has not effect and any value larger will be used in the threshold\n> calculation of autovacuunm. A user will have to be careful not to set it too low,\n> but that is going to be a concern either way.\n\nPersonally, I'd much rather ship it with a reasonable default. If we\nship it disabled, most people won't end up using it at all, which\nsucks, and those who do will be more likely to set it to a ridiculous\nvalue, which also sucks. If we ship it with a value that has a good\nchance of being within 2x or 3x of the optimal value on a given user's\nsystem, then a lot more people will benefit from it.\n\n> Also, I think coming up with a good default will be challenging,\n> and perhaps this idea is a good middle ground.\n\nMaybe. I freely admit that I don't know exactly what the optimal value\nis here, and I think there is some experimentation that is needed to\ntry to get some better intuition there. At what table size does the\ncurrent system actually result in too little vacuuming, and how can we\ndemonstrate that? Does the point at which that happens depend more on\nthe table size in gigabytes, or more on the number of rows? These are\nthings that someone can research and about which they can present\ndata.\n\nAs I see it, a lot of the lack of agreement up until now is people\njust not understanding the math. Since I think I've got the right idea\nabout the math, I attribute this to other people being confused about\nwhat is going to happen and would tend to phrase it as: some people\ndon't understand how catastrophically bad it will be if you set this\nvalue too low. However, another possibility is that it is I who am\nmisunderstanding the math. In that case, the correct phrasing is\nprobably something like: Robert wants a completely useless and\nworthless value for this parameter that will be of no help to anyone.\nRegardless, at least some of us are confused. 
If we can reduce that\nconfusion, then people's ideas about what values for this parameter\nmight be suitable should start to come closer together.\n\nI tend to feel like the disagreement here is not really about whether\nit's a good idea to increase the frequency of vacuuming on large\ntables by three orders of magnitude compared to what we do now, but\nrather than about whether that's actually going to happen.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 May 2024 10:58:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "\n\nLe 09/05/2024 à 16:58, Robert Haas a écrit :\n> As I see it, a lot of the lack of agreement up until now is people\n> just not understanding the math. Since I think I've got the right idea\n> about the math, I attribute this to other people being confused about\n> what is going to happen and would tend to phrase it as: some people\n> don't understand how catastrophically bad it will be if you set this\n> value too low.\n\nFWIW, I do agree with your math. I found your demonstration convincing. \n500000 was selected with the wet finger.\n\nUsing the formula I suggested earlier:\n\nvacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples, \nvac_base_thresh + vac_scale_factor * sqrt(reltuples) * 1000);\n\nyour table of 2.56 billion tuples will be vacuumed if there are\nmore than 10 million dead tuples (every 28 minutes).\n\nIf we want to stick with the simple formula, we should probably choose a \nvery high default, maybe 100 million, as you suggested earlier.\n\nHowever, it would be nice to have the visibility map updated more \nfrequently than every 100 million dead tuples. I wonder if this could be \ndecoupled from the vacuum process?\n\n\n", "msg_date": "Mon, 13 May 2024 17:14:53 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "On Mon, May 13, 2024 at 11:14 AM Frédéric Yhuel\n<[email protected]> wrote:\n> FWIW, I do agree with your math. I found your demonstration convincing.\n> 500000 was selected with the wet finger.\n\nGood to know.\n\n> Using the formula I suggested earlier:\n>\n> vacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples,\n> vac_base_thresh + vac_scale_factor * sqrt(reltuples) * 1000);\n>\n> your table of 2.56 billion tuples will be vacuumed if there are\n> more than 10 million dead tuples (every 28 minutes).\n\nYeah, so that is about 50x what we do now (twice an hour vs. once a\nday). While that's a lot more reasonable than the behavior that we'd\nget from a 500k hard cap (every 84 seconds), I suspect it's still too\naggressive.\n\nI find these things much easier to reason about in gigabytes than in\ntime units. In that example, the table was 320GB and was getting\nvacuumed after accumulating 64GB of bloat. That seems like a lot. It\nmeans that the table can grow from 320GB all the way up until 384GB\nbefore we even think about starting to vacuum it, and then we might\nnot start right away, depending on resource availability, and we may\ntake some time to finish, possibly considerable time, depending on the\nnumber and size of indexes and the availability of I/O resources. So\nactually the table might very plausibly be well above 400GB before we\nget done processing it, or potentially even more. 
I think that's not\naggressive enough.\n\nBut how much would we like to push that 64GB of bloat number down for\na table of this size? I would argue that if we're vacuuming the table\nwhen it's only got 1GB of bloat, or 2GB of bloat, that seems\nexcessive. Unless the system is very lightly loaded and has no\nlong-running transactions at all, we're unlikely to be able to vacuum\naggressively enough to keep a 320GB table at a size of 321GB or 322GB.\nWithout testing or doing any research, I'm going to guess that a\nrealistic number is probably in the range of 10-20GB of bloat. If the\ntable activity is very light, we might be able to get it even lower,\nlike say 5GB, but the costs ramp up very quickly as you push the\nvacuuming threshold down. Also, if the table accumulates X amount of\nbloat during the time it takes to run one vacuum, you can never\nsucceed in limiting bloat to a value less than X (and probably more\nlike 1.5*X or 2*X or something).\n\nSo without actually trying anything, which I do think somebody should\ndo and report results, my guess is that for a 320GB table, you'd like\nto multiply the vacuum frequency by a value somewhere between 3 and\n10, and probably much closer to 3 than to 10. Maybe even less than 3.\nNot sure exactly. Like I say, I think someone needs to try some\ndifferent workloads and database sizes and numbers of indexes, and try\nto get a feeling for what actually works well in practice.\n\n> If we want to stick with the simple formula, we should probably choose a\n> very high default, maybe 100 million, as you suggested earlier.\n>\n> However, it would be nice to have the visibility map updated more\n> frequently than every 100 million dead tuples. I wonder if this could be\n> decoupled from the vacuum process?\n\nYes, but if a page has had any non-HOT updates, it can't become\nall-visible again without vacuum. If it has had only HOT updates, then\na HOT-prune could make it all-visible. I don't think we do that\ncurrently, but I think in theory we could.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 May 2024 16:22:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "I didn't see a commitfest entry for this, so I created one to make sure we\ndon't lose track of this:\n\n\thttps://commitfest.postgresql.org/48/5046/\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 17 Jun 2024 22:06:59 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "\n\nLe 18/06/2024 à 05:06, Nathan Bossart a écrit :\n> I didn't see a commitfest entry for this, so I created one to make sure we\n> don't lose track of this:\n> \n> \thttps://commitfest.postgresql.org/48/5046/\n> \n\nOK thanks!\n\nBy the way, I wonder if there were any off-list discussions after \nRobert's conference at PGConf.dev (and I'm waiting for the video of the \nconf).\n\n\n", "msg_date": "Tue, 18 Jun 2024 12:36:42 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "I've attached a new patch to show roughly what I think this new GUC should\nlook like. 
I'm hoping this sparks more discussion, if nothing else.\n\nOn Tue, Jun 18, 2024 at 12:36:42PM +0200, Frédéric Yhuel wrote:\n> By the way, I wonder if there were any off-list discussions after Robert's\n> conference at PGConf.dev (and I'm waiting for the video of the conf).\n\nI don't recall any discussions about this idea, but Robert did briefly\nmention it in his talk [0].\n\n[0] https://www.youtube.com/watch?v=RfTD-Twpvac\n\n-- \nnathan", "msg_date": "Wed, 7 Aug 2024 16:39:57 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" }, { "msg_contents": "\n\nOn 8/7/24 23:39, Nathan Bossart wrote:\n> I've attached a new patch to show roughly what I think this new GUC should\n> look like. I'm hoping this sparks more discussion, if nothing else.\n>\n\nThank you. FWIW, I would prefer a sub-linear growth, so maybe something \nlike this:\n\nvacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples, \nvac_base_thresh + vac_scale_factor * pow(reltuples, 0.7) * 100);\n\nThis would give :\n\n* 386M (instead of 5.1 billion currently) for a 25.6 billion tuples table ;\n* 77M for a 2.56 billion tuples table (Robert's example) ;\n* 15M (instead of 51M currently) for a 256M tuples table ;\n* 3M (instead of 5M currently) for a 25.6M tuples table.\n\nThe other advantage is that you don't need another GUC.\n\n> On Tue, Jun 18, 2024 at 12:36:42PM +0200, Frédéric Yhuel wrote:\n>> By the way, I wonder if there were any off-list discussions after Robert's\n>> conference at PGConf.dev (and I'm waiting for the video of the conf).\n> \n> I don't recall any discussions about this idea, but Robert did briefly\n> mention it in his talk [0].\n> \n> [0] https://www.youtube.com/watch?v=RfTD-Twpvac\n> \n\nVery interesting, thanks!\n\n\n", "msg_date": "Mon, 12 Aug 2024 15:41:26 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New GUC autovacuum_max_threshold ?" } ]
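As a side note on the arithmetic in the thread above, the following stand-alone C sketch tabulates the trigger points produced by the current formula, by a hard cap, and by the two sub-linear variants proposed by Frédéric. It is not taken from any of the posted patches; the 500000 cap, the helper names, and the stock defaults of 50 for autovacuum_vacuum_threshold and 0.2 for autovacuum_vacuum_scale_factor are assumptions made only for this comparison.

#include <stdio.h>
#include <math.h>

/* Stock defaults; the cap value is an assumption, not a committed default. */
static const double vac_base_thresh = 50.0;     /* autovacuum_vacuum_threshold */
static const double vac_scale_factor = 0.2;     /* autovacuum_vacuum_scale_factor */
static const double vac_max_thresh = 500000.0;  /* hypothetical autovacuum_max_threshold */

/* Current rule: base threshold plus scale factor times live tuples. */
static double current_rule(double reltuples)
{
    return vac_base_thresh + vac_scale_factor * reltuples;
}

/* Hard cap as discussed upthread: never require more than vac_max_thresh dead tuples. */
static double capped_rule(double reltuples)
{
    return fmin(current_rule(reltuples), vac_max_thresh);
}

/* Frédéric's sqrt() variant. */
static double sqrt_rule(double reltuples)
{
    return fmin(current_rule(reltuples),
                vac_base_thresh + vac_scale_factor * sqrt(reltuples) * 1000.0);
}

/* Frédéric's pow(reltuples, 0.7) variant. */
static double pow_rule(double reltuples)
{
    return fmin(current_rule(reltuples),
                vac_base_thresh + vac_scale_factor * pow(reltuples, 0.7) * 100.0);
}

int main(void)
{
    /* Table sizes used as examples in the thread, in live tuples. */
    const double sizes[] = {25.6e6, 256e6, 2.56e9, 25.6e9};

    printf("%14s %16s %16s %16s %16s\n",
           "reltuples", "current", "capped", "sqrt", "pow(x,0.7)");
    for (int i = 0; i < 4; i++)
        printf("%14.4g %16.0f %16.0f %16.0f %16.0f\n",
               sizes[i], current_rule(sizes[i]), capped_rule(sizes[i]),
               sqrt_rule(sizes[i]), pow_rule(sizes[i]));
    return 0;
}

Built with something like cc -O2 thresholds.c -lm, it reproduces the figures quoted in the thread: about 512 million dead tuples for a 2.56 billion row table under the current rule, a flat 500000 under the cap, roughly 10 million under the sqrt() variant, and roughly 77 million under the pow(reltuples, 0.7) variant.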
[ { "msg_contents": "Hello everyone, I hope you're doing well. Does anyone have a guide or know\nhow to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux? I've\nsearched in various places but haven't found any solid guides, and truth be\ntold, I'm a bit of a novice with PostgreSQL. Any help would be appreciated.\n\nHello everyone, I hope you're doing well. Does anyone have a guide or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux? I've searched in various places but haven't found any solid guides, and truth be told, I'm a bit of a novice with PostgreSQL. Any help would be appreciated.", "msg_date": "Wed, 24 Apr 2024 08:48:14 -0600", "msg_from": "=?UTF-8?B?4oCiSXNhYWMgUnY=?= <[email protected]>", "msg_from_op": true, "msg_subject": "Help update PostgreSQL 13.12 to 13.14" }, { "msg_contents": "Hi Isaac\n\nYou are doing the minor version upgrade so it's not a big effort as\ncompared to major version upgrade, following is the process to do it.\n\n*Minor releases never change the internal storage format and are always\ncompatible with earlier and later minor releases of the same major version\nnumber. For example, version 10.1 is compatible with version 10.0 and\nversion 10.6. Similarly, for example, 9.5.3 is compatible with 9.5.0,\n9.5.1, and 9.5.6. To update between compatible versions, you simply replace\nthe executables while the server is down and restart the server. The data\ndirectory remains unchanged — minor upgrades are that simple.*\n\n\nPlease follow the links below for more information.\nhttps://www.postgresql.org/docs/13/upgrading.html\nhttps://www.postgresql.org/support/versioning/\n\nThanks\nKashif Zeeshan\nBitnine Global\n\nOn Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]> wrote:\n\n> Hello everyone, I hope you're doing well. Does anyone have a guide or know\n> how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux? I've\n> searched in various places but haven't found any solid guides, and truth be\n> told, I'm a bit of a novice with PostgreSQL. Any help would be appreciated.\n>\n\nHi IsaacYou are doing the minor version upgrade so it's not a big effort as compared to major version upgrade, following is the process to do it.Minor releases never change the internal storage format and are \nalways compatible with earlier and later minor releases of the same \nmajor version number. For example, version 10.1 is compatible with \nversion 10.0 and version 10.6. Similarly, for example, 9.5.3 is \ncompatible with 9.5.0, 9.5.1, and 9.5.6. To update between compatible \nversions, you simply replace the executables while the server is down \nand restart the server. The data directory remains unchanged — minor \nupgrades are that simple.Please follow the links below for more information.https://www.postgresql.org/docs/13/upgrading.htmlhttps://www.postgresql.org/support/versioning/ThanksKashif ZeeshanBitnine GlobalOn Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]> wrote:Hello everyone, I hope you're doing well. Does anyone have a guide or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux? I've searched in various places but haven't found any solid guides, and truth be told, I'm a bit of a novice with PostgreSQL. 
Any help would be appreciated.", "msg_date": "Thu, 25 Apr 2024 22:20:13 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help update PostgreSQL 13.12 to 13.14" }, { "msg_contents": "On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv <[email protected]> wrote:\n\n> Entiendo si, me han dicho que es sencillo, pero no entiendo si solo\n> descargo los binarios y en cual carpeta reemplazo? no hay una guía cómo tal\n> de cómo realizarlo, me podrías ayudar?\n>\n\nFollow the below steps\n1. Backup your data\n2. Review the release notes of the update release\n3. Stop the PG Server\n4. Upgrade postgres to newer version, e.g. on CentOS use the command 'sudo\nyum update postgresql'\n5. Restart PG Server\n\nThanks\nKashif Zeeshan\nBitnine Global\n\n>\n> El jue, 25 abr 2024 a las 11:20, Kashif Zeeshan (<[email protected]>)\n> escribió:\n>\n>> Hi Isaac\n>>\n>> You are doing the minor version upgrade so it's not a big effort as\n>> compared to major version upgrade, following is the process to do it.\n>>\n>> *Minor releases never change the internal storage format and are always\n>> compatible with earlier and later minor releases of the same major version\n>> number. For example, version 10.1 is compatible with version 10.0 and\n>> version 10.6. Similarly, for example, 9.5.3 is compatible with 9.5.0,\n>> 9.5.1, and 9.5.6. To update between compatible versions, you simply replace\n>> the executables while the server is down and restart the server. The data\n>> directory remains unchanged — minor upgrades are that simple.*\n>>\n>>\n>> Please follow the links below for more information.\n>> https://www.postgresql.org/docs/13/upgrading.html\n>> https://www.postgresql.org/support/versioning/\n>>\n>> Thanks\n>> Kashif Zeeshan\n>> Bitnine Global\n>>\n>> On Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]> wrote:\n>>\n>>> Hello everyone, I hope you're doing well. Does anyone have a guide or\n>>> know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux?\n>>> I've searched in various places but haven't found any solid guides, and\n>>> truth be told, I'm a bit of a novice with PostgreSQL. Any help would be\n>>> appreciated.\n>>>\n>>\n\nOn Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv <[email protected]> wrote:Entiendo si, me han dicho que es sencillo, pero no entiendo si solo descargo los binarios y en cual carpeta reemplazo? no hay una guía cómo tal de cómo realizarlo, me  podrías ayudar? Follow the below steps1. Backup your data2. Review the release notes of the update release3. Stop the PG Server4. Upgrade postgres to newer version, e.g. on CentOS use the command 'sudo yum update postgresql' 5. Restart PG ServerThanksKashif ZeeshanBitnine GlobalEl jue, 25 abr 2024 a las 11:20, Kashif Zeeshan (<[email protected]>) escribió:Hi IsaacYou are doing the minor version upgrade so it's not a big effort as compared to major version upgrade, following is the process to do it.Minor releases never change the internal storage format and are \nalways compatible with earlier and later minor releases of the same \nmajor version number. For example, version 10.1 is compatible with \nversion 10.0 and version 10.6. Similarly, for example, 9.5.3 is \ncompatible with 9.5.0, 9.5.1, and 9.5.6. To update between compatible \nversions, you simply replace the executables while the server is down \nand restart the server. 
The data directory remains unchanged — minor \nupgrades are that simple.Please follow the links below for more information.https://www.postgresql.org/docs/13/upgrading.htmlhttps://www.postgresql.org/support/versioning/ThanksKashif ZeeshanBitnine GlobalOn Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]> wrote:Hello everyone, I hope you're doing well. Does anyone have a guide or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux? I've searched in various places but haven't found any solid guides, and truth be told, I'm a bit of a novice with PostgreSQL. Any help would be appreciated.", "msg_date": "Fri, 26 Apr 2024 10:16:37 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help update PostgreSQL 13.12 to 13.14" }, { "msg_contents": "On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv <[email protected]> wrote:\n\n> Mira intente con el yum y si actualizó pero sin embargo no actualizo a la\n> 13.14\n>\n> sudo yum update postgresql13\n> Updating Subscription Management repositories.\n>\n> This system is registered with an entitlement server, but is not receiving\n> updates. You can use subscription-manager to assign subscriptions.\n>\n> Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024 10:01:36 AM\n> CST.\n> Dependencies resolved.\n> Nothing to do.\n> Complete!\n>\n\nIt seemed yum is not able to get the latest package update, try clearing\nthe cache and rebuilding it\n\nyum clean all\n\nyum makecache\n\n\n\n>\n> El jue, 25 abr 2024 a las 23:16, Kashif Zeeshan (<[email protected]>)\n> escribió:\n>\n>>\n>>\n>> On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv <[email protected]>\n>> wrote:\n>>\n>>> Entiendo si, me han dicho que es sencillo, pero no entiendo si solo\n>>> descargo los binarios y en cual carpeta reemplazo? no hay una guía cómo tal\n>>> de cómo realizarlo, me podrías ayudar?\n>>>\n>>\n>> Follow the below steps\n>> 1. Backup your data\n>> 2. Review the release notes of the update release\n>> 3. Stop the PG Server\n>> 4. Upgrade postgres to newer version, e.g. on CentOS use the command\n>> 'sudo yum update postgresql'\n>> 5. Restart PG Server\n>>\n>> Thanks\n>> Kashif Zeeshan\n>> Bitnine Global\n>>\n>>>\n>>> El jue, 25 abr 2024 a las 11:20, Kashif Zeeshan (<\n>>> [email protected]>) escribió:\n>>>\n>>>> Hi Isaac\n>>>>\n>>>> You are doing the minor version upgrade so it's not a big effort as\n>>>> compared to major version upgrade, following is the process to do it.\n>>>>\n>>>> *Minor releases never change the internal storage format and are always\n>>>> compatible with earlier and later minor releases of the same major version\n>>>> number. For example, version 10.1 is compatible with version 10.0 and\n>>>> version 10.6. Similarly, for example, 9.5.3 is compatible with 9.5.0,\n>>>> 9.5.1, and 9.5.6. To update between compatible versions, you simply replace\n>>>> the executables while the server is down and restart the server. The data\n>>>> directory remains unchanged — minor upgrades are that simple.*\n>>>>\n>>>>\n>>>> Please follow the links below for more information.\n>>>> https://www.postgresql.org/docs/13/upgrading.html\n>>>> https://www.postgresql.org/support/versioning/\n>>>>\n>>>> Thanks\n>>>> Kashif Zeeshan\n>>>> Bitnine Global\n>>>>\n>>>> On Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> Hello everyone, I hope you're doing well. 
Does anyone have a guide or\n>>>>> know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux?\n>>>>> I've searched in various places but haven't found any solid guides, and\n>>>>> truth be told, I'm a bit of a novice with PostgreSQL. Any help would be\n>>>>> appreciated.\n>>>>>\n>>>>\n\nOn Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv <[email protected]> wrote:Mira intente con el yum y si actualizó pero sin embargo no actualizo a la 13.14  sudo yum update postgresql13Updating Subscription Management repositories.This system is registered with an entitlement server, but is not receiving updates. You can use subscription-manager to assign subscriptions.Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024 10:01:36 AM CST.Dependencies resolved.Nothing to do.Complete!It seemed yum is not able to get the latest package update, try clearing the cache and rebuilding ityum clean allyum makecache El jue, 25 abr 2024 a las 23:16, Kashif Zeeshan (<[email protected]>) escribió:On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv <[email protected]> wrote:Entiendo si, me han dicho que es sencillo, pero no entiendo si solo descargo los binarios y en cual carpeta reemplazo? no hay una guía cómo tal de cómo realizarlo, me  podrías ayudar? Follow the below steps1. Backup your data2. Review the release notes of the update release3. Stop the PG Server4. Upgrade postgres to newer version, e.g. on CentOS use the command 'sudo yum update postgresql' 5. Restart PG ServerThanksKashif ZeeshanBitnine GlobalEl jue, 25 abr 2024 a las 11:20, Kashif Zeeshan (<[email protected]>) escribió:Hi IsaacYou are doing the minor version upgrade so it's not a big effort as compared to major version upgrade, following is the process to do it.Minor releases never change the internal storage format and are \nalways compatible with earlier and later minor releases of the same \nmajor version number. For example, version 10.1 is compatible with \nversion 10.0 and version 10.6. Similarly, for example, 9.5.3 is \ncompatible with 9.5.0, 9.5.1, and 9.5.6. To update between compatible \nversions, you simply replace the executables while the server is down \nand restart the server. The data directory remains unchanged — minor \nupgrades are that simple.Please follow the links below for more information.https://www.postgresql.org/docs/13/upgrading.htmlhttps://www.postgresql.org/support/versioning/ThanksKashif ZeeshanBitnine GlobalOn Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]> wrote:Hello everyone, I hope you're doing well. Does anyone have a guide or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux? I've searched in various places but haven't found any solid guides, and truth be told, I'm a bit of a novice with PostgreSQL. Any help would be appreciated.", "msg_date": "Fri, 26 Apr 2024 22:34:40 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help update PostgreSQL 13.12 to 13.14" }, { "msg_contents": "Glad to be of help.\npg_uprade is used with major version upgrade e.g. 
from PG13 to 14 etc\n\nRegards\nKashif Zeeshan\nBitnine Global\n\nOn Fri, Apr 26, 2024 at 10:47 PM •Isaac Rv <[email protected]> wrote:\n\n> Hola, lo acabo de hacer, quedó bien luego detuve el servidor, aplique otra\n> vez el sudo yum update postgresql13 y me devolvió otra vez el mensaje que\n> ya no tiene más actualizaciones pendientes\n> Veo que esta el pg_upgrade, pero no entiendo bien cómo usarlo\n>\n> Saludos y muchas gracias\n>\n> El vie, 26 abr 2024 a las 11:34, Kashif Zeeshan (<[email protected]>)\n> escribió:\n>\n>>\n>>\n>> On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv <[email protected]> wrote:\n>>\n>>> Mira intente con el yum y si actualizó pero sin embargo no actualizo a\n>>> la 13.14\n>>>\n>>> sudo yum update postgresql13\n>>> Updating Subscription Management repositories.\n>>>\n>>> This system is registered with an entitlement server, but is not\n>>> receiving updates. You can use subscription-manager to assign subscriptions.\n>>>\n>>> Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024 10:01:36\n>>> AM CST.\n>>> Dependencies resolved.\n>>> Nothing to do.\n>>> Complete!\n>>>\n>>\n>> It seemed yum is not able to get the latest package update, try clearing\n>> the cache and rebuilding it\n>>\n>> yum clean all\n>>\n>> yum makecache\n>>\n>>\n>>\n>>>\n>>> El jue, 25 abr 2024 a las 23:16, Kashif Zeeshan (<\n>>> [email protected]>) escribió:\n>>>\n>>>>\n>>>>\n>>>> On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> Entiendo si, me han dicho que es sencillo, pero no entiendo si solo\n>>>>> descargo los binarios y en cual carpeta reemplazo? no hay una guía cómo tal\n>>>>> de cómo realizarlo, me podrías ayudar?\n>>>>>\n>>>>\n>>>> Follow the below steps\n>>>> 1. Backup your data\n>>>> 2. Review the release notes of the update release\n>>>> 3. Stop the PG Server\n>>>> 4. Upgrade postgres to newer version, e.g. on CentOS use the command\n>>>> 'sudo yum update postgresql'\n>>>> 5. Restart PG Server\n>>>>\n>>>> Thanks\n>>>> Kashif Zeeshan\n>>>> Bitnine Global\n>>>>\n>>>>>\n>>>>> El jue, 25 abr 2024 a las 11:20, Kashif Zeeshan (<\n>>>>> [email protected]>) escribió:\n>>>>>\n>>>>>> Hi Isaac\n>>>>>>\n>>>>>> You are doing the minor version upgrade so it's not a big effort as\n>>>>>> compared to major version upgrade, following is the process to do it.\n>>>>>>\n>>>>>> *Minor releases never change the internal storage format and are\n>>>>>> always compatible with earlier and later minor releases of the same major\n>>>>>> version number. For example, version 10.1 is compatible with version 10.0\n>>>>>> and version 10.6. Similarly, for example, 9.5.3 is compatible with 9.5.0,\n>>>>>> 9.5.1, and 9.5.6. To update between compatible versions, you simply replace\n>>>>>> the executables while the server is down and restart the server. The data\n>>>>>> directory remains unchanged — minor upgrades are that simple.*\n>>>>>>\n>>>>>>\n>>>>>> Please follow the links below for more information.\n>>>>>> https://www.postgresql.org/docs/13/upgrading.html\n>>>>>> https://www.postgresql.org/support/versioning/\n>>>>>>\n>>>>>> Thanks\n>>>>>> Kashif Zeeshan\n>>>>>> Bitnine Global\n>>>>>>\n>>>>>> On Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]>\n>>>>>> wrote:\n>>>>>>\n>>>>>>> Hello everyone, I hope you're doing well. 
Does anyone have a guide\n>>>>>>> or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux?\n>>>>>>> I've searched in various places but haven't found any solid guides, and\n>>>>>>> truth be told, I'm a bit of a novice with PostgreSQL. Any help would be\n>>>>>>> appreciated.\n>>>>>>>\n>>>>>>\n\nGlad to be of help.pg_uprade is used with major version upgrade e.g. from PG13 to 14 etcRegardsKashif ZeeshanBitnine GlobalOn Fri, Apr 26, 2024 at 10:47 PM •Isaac Rv <[email protected]> wrote:Hola, lo acabo de hacer, quedó bien luego detuve el servidor, aplique otra vez el  \n\nsudo yum update postgresql13 y me devolvió otra vez el mensaje que ya no tiene más actualizaciones pendientesVeo que esta el pg_upgrade, pero no entiendo bien cómo usarloSaludos y muchas graciasEl vie, 26 abr 2024 a las 11:34, Kashif Zeeshan (<[email protected]>) escribió:On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv <[email protected]> wrote:Mira intente con el yum y si actualizó pero sin embargo no actualizo a la 13.14  sudo yum update postgresql13Updating Subscription Management repositories.This system is registered with an entitlement server, but is not receiving updates. You can use subscription-manager to assign subscriptions.Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024 10:01:36 AM CST.Dependencies resolved.Nothing to do.Complete!It seemed yum is not able to get the latest package update, try clearing the cache and rebuilding ityum clean allyum makecache El jue, 25 abr 2024 a las 23:16, Kashif Zeeshan (<[email protected]>) escribió:On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv <[email protected]> wrote:Entiendo si, me han dicho que es sencillo, pero no entiendo si solo descargo los binarios y en cual carpeta reemplazo? no hay una guía cómo tal de cómo realizarlo, me  podrías ayudar? Follow the below steps1. Backup your data2. Review the release notes of the update release3. Stop the PG Server4. Upgrade postgres to newer version, e.g. on CentOS use the command 'sudo yum update postgresql' 5. Restart PG ServerThanksKashif ZeeshanBitnine GlobalEl jue, 25 abr 2024 a las 11:20, Kashif Zeeshan (<[email protected]>) escribió:Hi IsaacYou are doing the minor version upgrade so it's not a big effort as compared to major version upgrade, following is the process to do it.Minor releases never change the internal storage format and are \nalways compatible with earlier and later minor releases of the same \nmajor version number. For example, version 10.1 is compatible with \nversion 10.0 and version 10.6. Similarly, for example, 9.5.3 is \ncompatible with 9.5.0, 9.5.1, and 9.5.6. To update between compatible \nversions, you simply replace the executables while the server is down \nand restart the server. The data directory remains unchanged — minor \nupgrades are that simple.Please follow the links below for more information.https://www.postgresql.org/docs/13/upgrading.htmlhttps://www.postgresql.org/support/versioning/ThanksKashif ZeeshanBitnine GlobalOn Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]> wrote:Hello everyone, I hope you're doing well. Does anyone have a guide or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux? I've searched in various places but haven't found any solid guides, and truth be told, I'm a bit of a novice with PostgreSQL. 
Any help would be appreciated.", "msg_date": "Sat, 27 Apr 2024 20:29:22 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help update PostgreSQL 13.12 to 13.14" }, { "msg_contents": "On Mon, Apr 29, 2024 at 9:07 PM •Isaac Rv <[email protected]> wrote:\n\n> Ok entiendo sí, pero mi versión sigue en la 13.12 y necesito que sea\n> 13.14, me indica que ya no tiene actualizaciones pero realmente sí, ya no\n> sé cómo actualizarla a la 13.14\n>\n\nHi\n\nPlease make sure that your postgres repository is set properly, that's the\nonly reason that it's not finding V13.14. Please follow the link below.\n\nhttps://www.postgresql.org/download/linux/redhat/\n\nThere is another way to avoid it by downloading the V13.14 on your system\nand then install this version on your system which will upgrade your\nexisting installation.\n\nRegards\nKashif Zeeshan\nBitnine Global\n\n>\n> El sáb, 27 abr 2024 a las 9:29, Kashif Zeeshan (<[email protected]>)\n> escribió:\n>\n>> Glad to be of help.\n>> pg_uprade is used with major version upgrade e.g. from PG13 to 14 etc\n>>\n>> Regards\n>> Kashif Zeeshan\n>> Bitnine Global\n>>\n>> On Fri, Apr 26, 2024 at 10:47 PM •Isaac Rv <[email protected]>\n>> wrote:\n>>\n>>> Hola, lo acabo de hacer, quedó bien luego detuve el servidor, aplique\n>>> otra vez el sudo yum update postgresql13 y me devolvió otra vez el\n>>> mensaje que ya no tiene más actualizaciones pendientes\n>>> Veo que esta el pg_upgrade, pero no entiendo bien cómo usarlo\n>>>\n>>> Saludos y muchas gracias\n>>>\n>>> El vie, 26 abr 2024 a las 11:34, Kashif Zeeshan (<\n>>> [email protected]>) escribió:\n>>>\n>>>>\n>>>>\n>>>> On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> Mira intente con el yum y si actualizó pero sin embargo no actualizo a\n>>>>> la 13.14\n>>>>>\n>>>>> sudo yum update postgresql13\n>>>>> Updating Subscription Management repositories.\n>>>>>\n>>>>> This system is registered with an entitlement server, but is not\n>>>>> receiving updates. You can use subscription-manager to assign subscriptions.\n>>>>>\n>>>>> Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024\n>>>>> 10:01:36 AM CST.\n>>>>> Dependencies resolved.\n>>>>> Nothing to do.\n>>>>> Complete!\n>>>>>\n>>>>\n>>>> It seemed yum is not able to get the latest package update, try\n>>>> clearing the cache and rebuilding it\n>>>>\n>>>> yum clean all\n>>>>\n>>>> yum makecache\n>>>>\n>>>>\n>>>>\n>>>>>\n>>>>> El jue, 25 abr 2024 a las 23:16, Kashif Zeeshan (<\n>>>>> [email protected]>) escribió:\n>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv <[email protected]>\n>>>>>> wrote:\n>>>>>>\n>>>>>>> Entiendo si, me han dicho que es sencillo, pero no entiendo si solo\n>>>>>>> descargo los binarios y en cual carpeta reemplazo? no hay una guía cómo tal\n>>>>>>> de cómo realizarlo, me podrías ayudar?\n>>>>>>>\n>>>>>>\n>>>>>> Follow the below steps\n>>>>>> 1. Backup your data\n>>>>>> 2. Review the release notes of the update release\n>>>>>> 3. Stop the PG Server\n>>>>>> 4. Upgrade postgres to newer version, e.g. on CentOS use the command\n>>>>>> 'sudo yum update postgresql'\n>>>>>> 5. 
Restart PG Server\n>>>>>>\n>>>>>> Thanks\n>>>>>> Kashif Zeeshan\n>>>>>> Bitnine Global\n>>>>>>\n>>>>>>>\n>>>>>>> El jue, 25 abr 2024 a las 11:20, Kashif Zeeshan (<\n>>>>>>> [email protected]>) escribió:\n>>>>>>>\n>>>>>>>> Hi Isaac\n>>>>>>>>\n>>>>>>>> You are doing the minor version upgrade so it's not a big effort as\n>>>>>>>> compared to major version upgrade, following is the process to do it.\n>>>>>>>>\n>>>>>>>> *Minor releases never change the internal storage format and are\n>>>>>>>> always compatible with earlier and later minor releases of the same major\n>>>>>>>> version number. For example, version 10.1 is compatible with version 10.0\n>>>>>>>> and version 10.6. Similarly, for example, 9.5.3 is compatible with 9.5.0,\n>>>>>>>> 9.5.1, and 9.5.6. To update between compatible versions, you simply replace\n>>>>>>>> the executables while the server is down and restart the server. The data\n>>>>>>>> directory remains unchanged — minor upgrades are that simple.*\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> Please follow the links below for more information.\n>>>>>>>> https://www.postgresql.org/docs/13/upgrading.html\n>>>>>>>> https://www.postgresql.org/support/versioning/\n>>>>>>>>\n>>>>>>>> Thanks\n>>>>>>>> Kashif Zeeshan\n>>>>>>>> Bitnine Global\n>>>>>>>>\n>>>>>>>> On Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]>\n>>>>>>>> wrote:\n>>>>>>>>\n>>>>>>>>> Hello everyone, I hope you're doing well. Does anyone have a guide\n>>>>>>>>> or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux?\n>>>>>>>>> I've searched in various places but haven't found any solid guides, and\n>>>>>>>>> truth be told, I'm a bit of a novice with PostgreSQL. Any help would be\n>>>>>>>>> appreciated.\n>>>>>>>>>\n>>>>>>>>\n\nOn Mon, Apr 29, 2024 at 9:07 PM •Isaac Rv <[email protected]> wrote:Ok entiendo sí, pero mi versión sigue en la 13.12 y necesito que sea 13.14, me indica que ya no tiene actualizaciones pero realmente sí, ya no sé cómo actualizarla a la 13.14HiPlease make sure that your postgres repository is set properly, that's the only reason that it's not finding V13.14. Please follow the link below.https://www.postgresql.org/download/linux/redhat/There is another way to avoid it by downloading the V13.14 on your system and then install this version on your system which will upgrade your existing installation.RegardsKashif ZeeshanBitnine GlobalEl sáb, 27 abr 2024 a las 9:29, Kashif Zeeshan (<[email protected]>) escribió:Glad to be of help.pg_uprade is used with major version upgrade e.g. from PG13 to 14 etcRegardsKashif ZeeshanBitnine GlobalOn Fri, Apr 26, 2024 at 10:47 PM •Isaac Rv <[email protected]> wrote:Hola, lo acabo de hacer, quedó bien luego detuve el servidor, aplique otra vez el  \n\nsudo yum update postgresql13 y me devolvió otra vez el mensaje que ya no tiene más actualizaciones pendientesVeo que esta el pg_upgrade, pero no entiendo bien cómo usarloSaludos y muchas graciasEl vie, 26 abr 2024 a las 11:34, Kashif Zeeshan (<[email protected]>) escribió:On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv <[email protected]> wrote:Mira intente con el yum y si actualizó pero sin embargo no actualizo a la 13.14  sudo yum update postgresql13Updating Subscription Management repositories.This system is registered with an entitlement server, but is not receiving updates. 
You can use subscription-manager to assign subscriptions.Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024 10:01:36 AM CST.Dependencies resolved.Nothing to do.Complete!It seemed yum is not able to get the latest package update, try clearing the cache and rebuilding ityum clean allyum makecache El jue, 25 abr 2024 a las 23:16, Kashif Zeeshan (<[email protected]>) escribió:On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv <[email protected]> wrote:Entiendo si, me han dicho que es sencillo, pero no entiendo si solo descargo los binarios y en cual carpeta reemplazo? no hay una guía cómo tal de cómo realizarlo, me  podrías ayudar? Follow the below steps1. Backup your data2. Review the release notes of the update release3. Stop the PG Server4. Upgrade postgres to newer version, e.g. on CentOS use the command 'sudo yum update postgresql' 5. Restart PG ServerThanksKashif ZeeshanBitnine GlobalEl jue, 25 abr 2024 a las 11:20, Kashif Zeeshan (<[email protected]>) escribió:Hi IsaacYou are doing the minor version upgrade so it's not a big effort as compared to major version upgrade, following is the process to do it.Minor releases never change the internal storage format and are \nalways compatible with earlier and later minor releases of the same \nmajor version number. For example, version 10.1 is compatible with \nversion 10.0 and version 10.6. Similarly, for example, 9.5.3 is \ncompatible with 9.5.0, 9.5.1, and 9.5.6. To update between compatible \nversions, you simply replace the executables while the server is down \nand restart the server. The data directory remains unchanged — minor \nupgrades are that simple.Please follow the links below for more information.https://www.postgresql.org/docs/13/upgrading.htmlhttps://www.postgresql.org/support/versioning/ThanksKashif ZeeshanBitnine GlobalOn Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]> wrote:Hello everyone, I hope you're doing well. Does anyone have a guide or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux? I've searched in various places but haven't found any solid guides, and truth be told, I'm a bit of a novice with PostgreSQL. 
Any help would be appreciated.", "msg_date": "Tue, 30 Apr 2024 08:59:38 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help update PostgreSQL 13.12 to 13.14" }, { "msg_contents": "Hi\n\nUpgrade works when you have an existing Postgres installation with server\nrunning.\nIf you run the following command then it will upgrade the\nexisting installation of postgre.\nsudo dnf install -y postgresql13-server\n\nBut you don't need to execute the below commands as this will create data\ndirectory and start the server on it.\nsudo /usr/pgsql-13/bin/postgresql-13-setup initdb\nsudo systemctl enable postgresql-13\nsudo systemctl start postgresql-13\n\nYou just need to install the version of postgres you need and it will\nupgrade the existing installation and you just need to restart the server.\nsudo systemctl restart postgresql-13\n\nThe following commands you mentioned are going to setup the repos for\npostgres which will used to download and install postgres packages.\n# Install the RPM from the repository:\nsudo dnf install -y\nhttps://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm\nBut you done need to disable it as this will disable the repo you installed\nabove.\nsudo dnf -qy module disables postgresql\n\n\nRegards\nKashif Zeeshan\nBitnine Global\n\nOn Fri, May 3, 2024 at 7:28 PM •Isaac Rv <[email protected]> wrote:\n\n> Hola, estos son los pasos que me dan\n>\n> Pero esto es solamente para instalar el Yum? O instala una instancia nueva\n> de PostgreSQL?\n> Y que pasa si ya esta instalado el yum pero mal configurado cómo bien\n> dices?\n>\n> # Instalar el RPM del repositorio:\n> sudo dnf install -y\n> https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm\n>\n> sudo dnf -qy módulo deshabilita postgresql\n>\n> sudo dnf install -y postgresql13-servidor\n>\n> sudo /usr/pgsql-13/bin/postgresql-13-setup initdb\n> sudo systemctl habilitar postgresql-13\n> sudo systemctl iniciar postgresql-13\n>\n>\n> Quedo atento\n>\n>\n> Saludos\n>\n> El lun, 29 abr 2024 a las 21:59, Kashif Zeeshan (<[email protected]>)\n> escribió:\n>\n>>\n>>\n>> On Mon, Apr 29, 2024 at 9:07 PM •Isaac Rv <[email protected]> wrote:\n>>\n>>> Ok entiendo sí, pero mi versión sigue en la 13.12 y necesito que sea\n>>> 13.14, me indica que ya no tiene actualizaciones pero realmente sí, ya no\n>>> sé cómo actualizarla a la 13.14\n>>>\n>>\n>> Hi\n>>\n>> Please make sure that your postgres repository is set properly, that's\n>> the only reason that it's not finding V13.14. Please follow the link below.\n>>\n>> https://www.postgresql.org/download/linux/redhat/\n>>\n>> There is another way to avoid it by downloading the V13.14 on your system\n>> and then install this version on your system which will upgrade your\n>> existing installation.\n>>\n>> Regards\n>> Kashif Zeeshan\n>> Bitnine Global\n>>\n>>>\n>>> El sáb, 27 abr 2024 a las 9:29, Kashif Zeeshan (<[email protected]>)\n>>> escribió:\n>>>\n>>>> Glad to be of help.\n>>>> pg_uprade is used with major version upgrade e.g. 
from PG13 to 14 etc\n>>>>\n>>>> Regards\n>>>> Kashif Zeeshan\n>>>> Bitnine Global\n>>>>\n>>>> On Fri, Apr 26, 2024 at 10:47 PM •Isaac Rv <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> Hola, lo acabo de hacer, quedó bien luego detuve el servidor, aplique\n>>>>> otra vez el sudo yum update postgresql13 y me devolvió otra vez el\n>>>>> mensaje que ya no tiene más actualizaciones pendientes\n>>>>> Veo que esta el pg_upgrade, pero no entiendo bien cómo usarlo\n>>>>>\n>>>>> Saludos y muchas gracias\n>>>>>\n>>>>> El vie, 26 abr 2024 a las 11:34, Kashif Zeeshan (<\n>>>>> [email protected]>) escribió:\n>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv <[email protected]>\n>>>>>> wrote:\n>>>>>>\n>>>>>>> Mira intente con el yum y si actualizó pero sin embargo no actualizo\n>>>>>>> a la 13.14\n>>>>>>>\n>>>>>>> sudo yum update postgresql13\n>>>>>>> Updating Subscription Management repositories.\n>>>>>>>\n>>>>>>> This system is registered with an entitlement server, but is not\n>>>>>>> receiving updates. You can use subscription-manager to assign subscriptions.\n>>>>>>>\n>>>>>>> Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024\n>>>>>>> 10:01:36 AM CST.\n>>>>>>> Dependencies resolved.\n>>>>>>> Nothing to do.\n>>>>>>> Complete!\n>>>>>>>\n>>>>>>\n>>>>>> It seemed yum is not able to get the latest package update, try\n>>>>>> clearing the cache and rebuilding it\n>>>>>>\n>>>>>> yum clean all\n>>>>>>\n>>>>>> yum makecache\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>>\n>>>>>>> El jue, 25 abr 2024 a las 23:16, Kashif Zeeshan (<\n>>>>>>> [email protected]>) escribió:\n>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv <[email protected]>\n>>>>>>>> wrote:\n>>>>>>>>\n>>>>>>>>> Entiendo si, me han dicho que es sencillo, pero no entiendo si\n>>>>>>>>> solo descargo los binarios y en cual carpeta reemplazo? no hay una guía\n>>>>>>>>> cómo tal de cómo realizarlo, me podrías ayudar?\n>>>>>>>>>\n>>>>>>>>\n>>>>>>>> Follow the below steps\n>>>>>>>> 1. Backup your data\n>>>>>>>> 2. Review the release notes of the update release\n>>>>>>>> 3. Stop the PG Server\n>>>>>>>> 4. Upgrade postgres to newer version, e.g. on CentOS use the\n>>>>>>>> command 'sudo yum update postgresql'\n>>>>>>>> 5. Restart PG Server\n>>>>>>>>\n>>>>>>>> Thanks\n>>>>>>>> Kashif Zeeshan\n>>>>>>>> Bitnine Global\n>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> El jue, 25 abr 2024 a las 11:20, Kashif Zeeshan (<\n>>>>>>>>> [email protected]>) escribió:\n>>>>>>>>>\n>>>>>>>>>> Hi Isaac\n>>>>>>>>>>\n>>>>>>>>>> You are doing the minor version upgrade so it's not a big effort\n>>>>>>>>>> as compared to major version upgrade, following is the process to do it.\n>>>>>>>>>>\n>>>>>>>>>> *Minor releases never change the internal storage format and are\n>>>>>>>>>> always compatible with earlier and later minor releases of the same major\n>>>>>>>>>> version number. For example, version 10.1 is compatible with version 10.0\n>>>>>>>>>> and version 10.6. Similarly, for example, 9.5.3 is compatible with 9.5.0,\n>>>>>>>>>> 9.5.1, and 9.5.6. To update between compatible versions, you simply replace\n>>>>>>>>>> the executables while the server is down and restart the server. 
The data\n>>>>>>>>>> directory remains unchanged — minor upgrades are that simple.*\n>>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> Please follow the links below for more information.\n>>>>>>>>>> https://www.postgresql.org/docs/13/upgrading.html\n>>>>>>>>>> https://www.postgresql.org/support/versioning/\n>>>>>>>>>>\n>>>>>>>>>> Thanks\n>>>>>>>>>> Kashif Zeeshan\n>>>>>>>>>> Bitnine Global\n>>>>>>>>>>\n>>>>>>>>>> On Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]>\n>>>>>>>>>> wrote:\n>>>>>>>>>>\n>>>>>>>>>>> Hello everyone, I hope you're doing well. Does anyone have a\n>>>>>>>>>>> guide or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on\n>>>>>>>>>>> Linux? I've searched in various places but haven't found any solid guides,\n>>>>>>>>>>> and truth be told, I'm a bit of a novice with PostgreSQL. Any help would be\n>>>>>>>>>>> appreciated.\n>>>>>>>>>>>\n>>>>>>>>>>\n\nHiUpgrade works when you have an existing Postgres installation with server running.If you run the following command then it will upgrade the existing installation of postgre.sudo dnf install -y postgresql13-serverBut you don't need to execute the below commands as this will create data directory and start the server on it. sudo /usr/pgsql-13/bin/postgresql-13-setup initdbsudo systemctl enable postgresql-13sudo systemctl start postgresql-13You just need to install the version of postgres you need and it will upgrade the existing installation and you just need to restart the server.sudo systemctl restart postgresql-13The following commands you mentioned are going to setup the repos for postgres which will used to download and install postgres packages.# Install the RPM from the repository:sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpmBut you done need to disable it as this will disable the repo you installed above.sudo dnf -qy module disables postgresqlRegardsKashif ZeeshanBitnine GlobalOn Fri, May 3, 2024 at 7:28 PM •Isaac Rv <[email protected]> wrote:Hola, estos son los pasos que me danPero esto es solamente para instalar el Yum? O instala una instancia nueva de PostgreSQL?Y que pasa si ya esta instalado el yum pero mal configurado cómo bien dices?# Instalar el RPM del repositorio:sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpmsudo dnf -qy módulo deshabilita postgresqlsudo dnf install -y postgresql13-servidorsudo /usr/pgsql-13/bin/postgresql-13-setup initdbsudo systemctl habilitar postgresql-13sudo systemctl iniciar postgresql-13Quedo atentoSaludos El lun, 29 abr 2024 a las 21:59, Kashif Zeeshan (<[email protected]>) escribió:On Mon, Apr 29, 2024 at 9:07 PM •Isaac Rv <[email protected]> wrote:Ok entiendo sí, pero mi versión sigue en la 13.12 y necesito que sea 13.14, me indica que ya no tiene actualizaciones pero realmente sí, ya no sé cómo actualizarla a la 13.14HiPlease make sure that your postgres repository is set properly, that's the only reason that it's not finding V13.14. Please follow the link below.https://www.postgresql.org/download/linux/redhat/There is another way to avoid it by downloading the V13.14 on your system and then install this version on your system which will upgrade your existing installation.RegardsKashif ZeeshanBitnine GlobalEl sáb, 27 abr 2024 a las 9:29, Kashif Zeeshan (<[email protected]>) escribió:Glad to be of help.pg_uprade is used with major version upgrade e.g. 
from PG13 to 14 etcRegardsKashif ZeeshanBitnine GlobalOn Fri, Apr 26, 2024 at 10:47 PM •Isaac Rv <[email protected]> wrote:Hola, lo acabo de hacer, quedó bien luego detuve el servidor, aplique otra vez el  \n\nsudo yum update postgresql13 y me devolvió otra vez el mensaje que ya no tiene más actualizaciones pendientesVeo que esta el pg_upgrade, pero no entiendo bien cómo usarloSaludos y muchas graciasEl vie, 26 abr 2024 a las 11:34, Kashif Zeeshan (<[email protected]>) escribió:On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv <[email protected]> wrote:Mira intente con el yum y si actualizó pero sin embargo no actualizo a la 13.14  sudo yum update postgresql13Updating Subscription Management repositories.This system is registered with an entitlement server, but is not receiving updates. You can use subscription-manager to assign subscriptions.Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024 10:01:36 AM CST.Dependencies resolved.Nothing to do.Complete!It seemed yum is not able to get the latest package update, try clearing the cache and rebuilding ityum clean allyum makecache El jue, 25 abr 2024 a las 23:16, Kashif Zeeshan (<[email protected]>) escribió:On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv <[email protected]> wrote:Entiendo si, me han dicho que es sencillo, pero no entiendo si solo descargo los binarios y en cual carpeta reemplazo? no hay una guía cómo tal de cómo realizarlo, me  podrías ayudar? Follow the below steps1. Backup your data2. Review the release notes of the update release3. Stop the PG Server4. Upgrade postgres to newer version, e.g. on CentOS use the command 'sudo yum update postgresql' 5. Restart PG ServerThanksKashif ZeeshanBitnine GlobalEl jue, 25 abr 2024 a las 11:20, Kashif Zeeshan (<[email protected]>) escribió:Hi IsaacYou are doing the minor version upgrade so it's not a big effort as compared to major version upgrade, following is the process to do it.Minor releases never change the internal storage format and are \nalways compatible with earlier and later minor releases of the same \nmajor version number. For example, version 10.1 is compatible with \nversion 10.0 and version 10.6. Similarly, for example, 9.5.3 is \ncompatible with 9.5.0, 9.5.1, and 9.5.6. To update between compatible \nversions, you simply replace the executables while the server is down \nand restart the server. The data directory remains unchanged — minor \nupgrades are that simple.Please follow the links below for more information.https://www.postgresql.org/docs/13/upgrading.htmlhttps://www.postgresql.org/support/versioning/ThanksKashif ZeeshanBitnine GlobalOn Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv <[email protected]> wrote:Hello everyone, I hope you're doing well. Does anyone have a guide or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux? I've searched in various places but haven't found any solid guides, and truth be told, I'm a bit of a novice with PostgreSQL. Any help would be appreciated.", "msg_date": "Mon, 6 May 2024 08:53:15 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help update PostgreSQL 13.12 to 13.14" }, { "msg_contents": "\nThis seven email thread should never have appeared on the hackers list. 
\nIt is more appropriate for the [email protected] email list.\n\n---------------------------------------------------------------------------\n\nOn Mon, May 6, 2024 at 08:53:15AM +0500, Kashif Zeeshan wrote:\n> Hi\n> \n> Upgrade works when you have an existing Postgres installation with server\n> running.\n> If you run the following command then it will upgrade the\n> existing installation of postgre.\n> sudo dnf install -y postgresql13-server\n> \n> But you don't need to execute the below commands as this will create data\n> directory and start the server on it. \n> sudo /usr/pgsql-13/bin/postgresql-13-setup initdb\n> sudo systemctl enable postgresql-13\n> sudo systemctl start postgresql-13\n> \n> You just need to install the version of postgres you need and it will upgrade\n> the existing installation and you just need to restart the server.\n> sudo systemctl restart postgresql-13\n> \n> The following commands you mentioned are going to setup the repos for postgres\n> which will used to download and install postgres packages.\n> # Install the RPM from the repository:\n> sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/\n> EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm\n> But you done need to disable it as this will disable the repo you installed\n> above.\n> sudo dnf -qy module disables postgresql\n> \n> \n> Regards\n> Kashif Zeeshan\n> Bitnine Global\n> \n> On Fri, May 3, 2024 at 7:28 PM •Isaac Rv <[email protected]> wrote:\n> \n> Hola, estos son los pasos que me dan\n> \n> Pero esto es solamente para instalar el Yum? O instala una instancia nueva\n> de PostgreSQL?\n> Y que pasa si ya esta instalado el yum pero mal configurado cómo bien\n> dices?\n> \n> # Instalar el RPM del repositorio:\n> sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/\n> EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm\n> \n> sudo dnf -qy módulo deshabilita postgresql\n> \n> sudo dnf install -y postgresql13-servidor\n> \n> sudo /usr/pgsql-13/bin/postgresql-13-setup initdb\n> sudo systemctl habilitar postgresql-13\n> sudo systemctl iniciar postgresql-13\n> \n> \n> Quedo atento\n> \n> \n> Saludos \n> \n> El lun, 29 abr 2024 a las 21:59, Kashif Zeeshan (<[email protected]>)\n> escribió:\n> \n> \n> \n> On Mon, Apr 29, 2024 at 9:07 PM •Isaac Rv <[email protected]>\n> wrote:\n> \n> Ok entiendo sí, pero mi versión sigue en la 13.12 y necesito que\n> sea 13.14, me indica que ya no tiene actualizaciones pero realmente\n> sí, ya no sé cómo actualizarla a la 13.14\n> \n> \n> Hi\n> \n> Please make sure that your postgres repository is set properly, that's\n> the only reason that it's not finding V13.14. Please follow the link\n> below.\n> \n> https://www.postgresql.org/download/linux/redhat/\n> \n> There is another way to avoid it by downloading the V13.14 on your\n> system and then install this version on your system which will upgrade\n> your existing installation.\n> \n> Regards\n> Kashif Zeeshan\n> Bitnine Global\n> \n> \n> El sáb, 27 abr 2024 a las 9:29, Kashif Zeeshan (<\n> [email protected]>) escribió:\n> \n> Glad to be of help.\n> pg_uprade is used with major version upgrade e.g. 
from PG13 to\n> 14 etc\n> \n> Regards\n> Kashif Zeeshan\n> Bitnine Global\n> \n> On Fri, Apr 26, 2024 at 10:47 PM •Isaac Rv <\n> [email protected]> wrote:\n> \n> Hola, lo acabo de hacer, quedó bien luego detuve el\n> servidor, aplique otra vez el   sudo yum update\n> postgresql13 y me devolvió otra vez el mensaje que ya no\n> tiene más actualizaciones pendientes\n> Veo que esta el pg_upgrade, pero no entiendo bien cómo\n> usarlo\n> \n> Saludos y muchas gracias\n> \n> El vie, 26 abr 2024 a las 11:34, Kashif Zeeshan (<\n> [email protected]>) escribió:\n> \n> \n> \n> On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv <\n> [email protected]> wrote:\n> \n> Mira intente con el yum y si actualizó pero sin\n> embargo no actualizo a la 13.14  \n> \n> sudo yum update postgresql13\n> Updating Subscription Management repositories.\n> \n> This system is registered with an entitlement\n> server, but is not receiving updates. You can use\n> subscription-manager to assign subscriptions.\n> \n> Last metadata expiration check: 0:07:02 ago on Fri\n> 26 Apr 2024 10:01:36 AM CST.\n> Dependencies resolved.\n> Nothing to do.\n> Complete!\n> \n> \n> It seemed yum is not able to get the latest package\n> update, try clearing the cache and rebuilding it\n> \n> yum clean all\n> \n> yum makecache\n> \n>  \n> \n> \n> El jue, 25 abr 2024 a las 23:16, Kashif Zeeshan (<\n> [email protected]>) escribió:\n> \n> \n> \n> On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv <\n> [email protected]> wrote:\n> \n> Entiendo si, me han dicho que es sencillo,\n> pero no entiendo si solo descargo los\n> binarios y en cual carpeta reemplazo? no\n> hay una guía cómo tal de cómo realizarlo,\n> me  podrías ayudar? \n> \n> \n> Follow the below steps\n> 1. Backup your data\n> 2. Review the release notes of the update\n> release\n> 3. Stop the PG Server\n> 4. Upgrade postgres to newer version, e.g. on\n> CentOS use the command 'sudo yum update\n> postgresql' \n> 5. Restart PG Server\n> \n> Thanks\n> Kashif Zeeshan\n> Bitnine Global\n> \n> \n> El jue, 25 abr 2024 a las 11:20, Kashif\n> Zeeshan (<[email protected]>)\n> escribió:\n> \n> Hi Isaac\n> \n> You are doing the minor version upgrade\n> so it's not a big effort as compared to\n> major version upgrade, following is the\n> process to do it.\n> \n> Minor releases never change the\n> internal storage format and are always\n> compatible with earlier and later minor\n> releases of the same major version\n> number. For example, version 10.1 is\n> compatible with version 10.0 and\n> version 10.6. Similarly, for example,\n> 9.5.3 is compatible with 9.5.0, 9.5.1,\n> and 9.5.6. To update between compatible\n> versions, you simply replace the\n> executables while the server is down\n> and restart the server. The data\n> directory remains unchanged — minor\n> upgrades are that simple.\n> \n> \n> Please follow the links below for more\n> information.\n> \n> https://www.postgresql.org/docs/13/\n> upgrading.html\n> https://www.postgresql.org/support/\n> versioning/\n> \n> \n> Thanks\n> Kashif Zeeshan\n> Bitnine Global\n> \n> On Thu, Apr 25, 2024 at 9:37 PM •Isaac\n> Rv <[email protected]> wrote:\n> \n> Hello everyone, I hope you're doing\n> well. Does anyone have a guide or know\n> how to perform an upgrade from\n> PostgreSQL 13.12 to 13.14 on Linux?\n> I've searched in various places but\n> haven't found any solid guides, and\n> truth be told, I'm a bit of a novice\n> with PostgreSQL. 
Any help would be\n> appreciated.\n> \n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 6 May 2024 10:22:53 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help update PostgreSQL 13.12 to 13.14" } ]
[ { "msg_contents": "Hi,\n\nCommit 91f2cae7a4e that introduced WALReadFromBuffers only used it for\nphysical walsenders. It can also be used in more places benefitting\nlogical walsenders, backends running pg_walinspect and logical\ndecoding functions if the WAL is available in WAL buffers. I'm\nattaching a 0001 patch for this.\n\nWhile at it, I've also added a test module in 0002 patch to\ndemonstrate 2 things: 1) how the caller can ensure the requested WAL\nis fully copied to WAL buffers using WaitXLogInsertionsToFinish before\nreading from WAL buffers. 2) how one can implement an xlogreader\npage_read callback to read unflushed/not-yet-flushed WAL directly from\nWAL buffers. FWIW, a separate test module to explicitly test the new\nfunction is suggested here -\nhttps://www.postgresql.org/message-id/CAFiTN-sE7CJn-ZFj%2B-0Wv6TNytv_fp4n%2BeCszspxJ3mt77t5ig%40mail.gmail.com.\n\nPlease have a look at the attached patches.\n\nI will register this for the next commit fest.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 24 Apr 2024 21:38:57 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Use WALReadFromBuffers in more places" }, { "msg_contents": "Hi, Bharath. I've been testing this. It's cool. Is there any way we could\nmonitor the hit rate about directly reading from WAL buffers by exporting\nto some views?\n\n---\n\nRegards, Jingtang\n\nHi, Bharath. I've been testing this. It's cool. Is there any way we couldmonitor the hit rate about directly reading from WAL buffers by exportingto some views?---Regards, Jingtang", "msg_date": "Wed, 8 May 2024 12:20:56 +0800", "msg_from": "Jingtang Zhang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use WALReadFromBuffers in more places" }, { "msg_contents": "On Wed, May 8, 2024 at 9:51 AM Jingtang Zhang <[email protected]> wrote:\n>\n> Hi, Bharath. I've been testing this. It's cool. Is there any way we could\n> monitor the hit rate about directly reading from WAL buffers by exporting\n> to some views?\n\nThanks for looking into this. I used purpose-built patches for\nverifying the WAL buffers hit ratio, please check\nUSE-ON-HEAD-Collect-WAL-read-from-file-stats.txt and\nUSE-ON-PATCH-Collect-WAL-read-from-buffers-and-file-stats.txt at\nhttps://www.postgresql.org/message-id/CALj2ACU9cfAcfVsGwUqXMace_7rfSBJ7%2BhXVJfVV1jnspTDGHQ%40mail.gmail.com.\nIn the long run, it's better to extend what's proposed in the thread\nhttps://www.postgresql.org/message-id/CALj2ACU_f5_c8F%2BxyNR4HURjG%3DJziiz07wCpQc%3DAqAJUFh7%2B8w%40mail.gmail.com.\nI haven't had a chance to dive deep into that thread yet, but a quick\nglance over v8 patch tells me that it hasn't yet implemented the idea\nof collecting WAL read stats for both walsenders and the backends\nreading WAL. If that's done, we can extend it for WAL read from WAL\nbuffers.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 13 May 2024 11:05:40 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use WALReadFromBuffers in more places" }, { "msg_contents": "Hi Bharath,\n\nI spent some time examining the patch. Here are my observations from the review.\n\n- I believe there’s no need for an extra variable ‘nbytes’ in this\ncontext. 
We can repurpose the ‘count’ variable for the same function.\nIf necessary, we might think about renaming ‘count’ to ‘nbytes’.\n\n- The operations performed by logical_read_xlog_page() and\nread_local_xlog_page_guts() are identical. It might be beneficial to\ncreate a shared function to minimize code repetition.\n\nBest Regards,\nNitin Jadhav\nAzure Database for PostgreSQL\nMicrosoft\n\nOn Mon, May 13, 2024 at 12:17 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, May 8, 2024 at 9:51 AM Jingtang Zhang <[email protected]> wrote:\n> >\n> > Hi, Bharath. I've been testing this. It's cool. Is there any way we could\n> > monitor the hit rate about directly reading from WAL buffers by exporting\n> > to some views?\n>\n> Thanks for looking into this. I used purpose-built patches for\n> verifying the WAL buffers hit ratio, please check\n> USE-ON-HEAD-Collect-WAL-read-from-file-stats.txt and\n> USE-ON-PATCH-Collect-WAL-read-from-buffers-and-file-stats.txt at\n> https://www.postgresql.org/message-id/CALj2ACU9cfAcfVsGwUqXMace_7rfSBJ7%2BhXVJfVV1jnspTDGHQ%40mail.gmail.com.\n> In the long run, it's better to extend what's proposed in the thread\n> https://www.postgresql.org/message-id/CALj2ACU_f5_c8F%2BxyNR4HURjG%3DJziiz07wCpQc%3DAqAJUFh7%2B8w%40mail.gmail.com.\n> I haven't had a chance to dive deep into that thread yet, but a quick\n> glance over v8 patch tells me that it hasn't yet implemented the idea\n> of collecting WAL read stats for both walsenders and the backends\n> reading WAL. If that's done, we can extend it for WAL read from WAL\n> buffers.\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n\n\n", "msg_date": "Sat, 8 Jun 2024 17:24:17 +0530", "msg_from": "Nitin Jadhav <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use WALReadFromBuffers in more places" }, { "msg_contents": "Hi,\n\nOn Sat, Jun 8, 2024 at 5:24 PM Nitin Jadhav <[email protected]>\nwrote:\n>\n> I spent some time examining the patch. Here are my observations from the\nreview.\n\nThanks.\n\n> - I believe there’s no need for an extra variable ‘nbytes’ in this\n> context. We can repurpose the ‘count’ variable for the same function.\n> If necessary, we might think about renaming ‘count’ to ‘nbytes’.\n\n'count' variable can't be altered once determined as the page_read\ncallbacks need to return the total number of bytes read. However, I ended\nup removing 'nbytes' like in the attached v2 patch.\n\n> - The operations performed by logical_read_xlog_page() and\n> read_local_xlog_page_guts() are identical. It might be beneficial to\n> create a shared function to minimize code repetition.\n\nIMO, creating another function to just wrap two other functions doesn't\nseem good to me.\n\nI attached v2 patches for further review. No changes in 0002 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 12 Jun 2024 22:10:03 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use WALReadFromBuffers in more places" } ]
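To make the pattern discussed in this thread concrete, the following is a minimal, hedged sketch of a page_read-style callback that tries WALReadFromBuffers() first and falls back to WALRead() for whatever the buffers could not supply. It is not the committed patch or the proposed v2: the function and variable names are illustrative, error handling is reduced to WALReadRaiseError(), and the logic real callers use to cap the request at the flushed (or known-inserted) LSN is omitted.

/*
 * Sketch only: read one WAL page, preferring the WAL buffers.
 * WALReadFromBuffers() returns how many bytes it could copy without
 * touching disk; the remainder, if any, is read from the segment files.
 */
static int
read_page_prefer_buffers(XLogReaderState *state, XLogRecPtr targetPagePtr,
                         int reqLen, XLogRecPtr targetRecPtr, char *cur_page)
{
    int          count = XLOG_BLCKSZ;   /* simplified: always a full page */
    Size         nread;
    WALReadError errinfo;

    /* Copy as much as possible straight from the WAL buffers. */
    nread = WALReadFromBuffers(cur_page, targetPagePtr, count,
                               state->seg.ws_tli);

    /* Anything the buffers could not supply comes from the WAL segments. */
    if (nread < (Size) count &&
        !WALRead(state, cur_page + nread, targetPagePtr + nread,
                 count - nread, state->seg.ws_tli, &errinfo))
        WALReadRaiseError(&errinfo);

    return count;
}

As the test module in the 0002 patch is meant to demonstrate, a caller that wants to read WAL that may not have been flushed yet must first ensure the requested range has actually been copied into the WAL buffers (for example via WaitXLogInsertionsToFinish()) before relying on a read like this.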
[ { "msg_contents": "Tomas Vondra pointed out to me a couple of mistakes that I made with\nregard to pg_combinebackup and tablespaces.\n\nOne is that I screwed up the long_options array by specifying\ntablespace-mapping as no_argument rather than required_argument. That\ndoesn't break the tests I just committed because, in the actual string\npassed to getopt_long(), I wrote \"T:\", which means the short form of\nthe option works; only the long form does not.\n\nThe other is that the documentation says that --tablespace-mapping is\napplied to the pathnames from the first backup specified on the\ncommand line. It should have said \"final\" rather than \"first\".\n\nHere is a very small patch correcting these regrettable errors.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 24 Apr 2024 13:59:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "some additional (small) problems with pg_combinebackup and\n tablespaces" }, { "msg_contents": "> On 24 Apr 2024, at 19:59, Robert Haas <[email protected]> wrote:\n\n> Here is a very small patch correcting these regrettable errors.\n\nPatch LGTM.\n\nIn addition to those, unless I'm reading it wrong the current coding seems to\ninclude a \"-P\" short option which is missing in the command parsing switch\nstatement (or in the help output):\n\n while ((c = getopt_long(argc, argv, \"dnNPo:T:\",\n\nShould that be removed?\n\nA tiny nit-pick in the same area: while not wrong, it reads a bit odd to have\n\"don't\" and \"do not\" on lines directly next to each other:\n\n printf(_(\" -n, --dry-run don't actually do anything\\n\"));\n printf(_(\" -N, --no-sync do not wait for changes to be written safely to disk\\n\"));\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 21:19:57 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: some additional (small) problems with pg_combinebackup and\n tablespaces" }, { "msg_contents": "On Wed, Apr 24, 2024 at 3:20 PM Daniel Gustafsson <[email protected]> wrote:\n> Patch LGTM.\n\nThanks. Here's an updated version with fixes for the other issues you mentioned.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 25 Apr 2024 11:48:05 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: some additional (small) problems with pg_combinebackup and\n tablespaces" }, { "msg_contents": "> On 25 Apr 2024, at 17:48, Robert Haas <[email protected]> wrote:\n> \n> On Wed, Apr 24, 2024 at 3:20 PM Daniel Gustafsson <[email protected]> wrote:\n>> Patch LGTM.\n> \n> Thanks. Here's an updated version with fixes for the other issues you mentioned.\n\nLGTM, only one small error in the commitmessage: s/Gustaffson/Gustafsson/\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 25 Apr 2024 20:03:03 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: some additional (small) problems with pg_combinebackup and\n tablespaces" }, { "msg_contents": "On Thu, Apr 25, 2024 at 2:03 PM Daniel Gustafsson <[email protected]> wrote:\n> LGTM, only one small error in the commitmessage: s/Gustaffson/Gustafsson/\n\nOh no! You're in danger of becoming the second person on this project\nwhose name I chronically misspell. 
Fixed (I hope) and committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 08:54:44 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: some additional (small) problems with pg_combinebackup and\n tablespaces" } ]
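The fix being discussed boils down to one word in the long_options array: --tablespace-mapping must be declared required_argument so the long form accepts a value, matching the "T:" already present in the optstring (which is why the short form kept working). A hedged sketch using the standard getopt_long(3) interface follows; the option list is trimmed to entries mentioned in this thread and is not the real pg_combinebackup option table.

#include <getopt.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
    static struct option long_options[] = {
        {"dry-run", no_argument, NULL, 'n'},
        {"no-sync", no_argument, NULL, 'N'},
        {"output", required_argument, NULL, 'o'},
        /* was: {"tablespace-mapping", no_argument, NULL, 'T'} -- long form broken */
        {"tablespace-mapping", required_argument, NULL, 'T'},
        {NULL, 0, NULL, 0}
    };
    int         c;
    int         optindex = 0;

    while ((c = getopt_long(argc, argv, "nNo:T:", long_options, &optindex)) != -1)
    {
        if (c == 'T')
            printf("tablespace mapping: %s\n", optarg);
    }
    return 0;
}

With the no_argument entry, --tablespace-mapping=olddir=newdir is rejected while -T olddir=newdir still works; with required_argument both spellings behave the same.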
[ { "msg_contents": "Hi\n\nHere:\n\n https://www.postgresql.org/docs/current/functions-range.html#MULTIRANGE-FUNCTIONS-TABLE\n\nthe description for \"lower(anymultirange)\":\n\n> (NULL if the multirange is empty has no lower bound).\n\nis missing \"or\" and should be:\n\n> (NULL if the multirange is empty or has no lower bound).\n\nSeems to have been omitted with 7539a1b2 (REL_16_STABLE + master).\n\nVery minor issue, but I have seen it and can no longer unsee it.\n\nRegards\n\nIan Barwick", "msg_date": "Thu, 25 Apr 2024 09:39:47 +0900", "msg_from": "Ian Lawrence Barwick <[email protected]>", "msg_from_op": true, "msg_subject": "docs: minor typo fix for \"lower(anymultirange)\"" }, { "msg_contents": "On Thu, Apr 25, 2024 at 8:40 AM Ian Lawrence Barwick <[email protected]>\nwrote:\n\n> Hi\n>\n> Here:\n>\n>\n> https://www.postgresql.org/docs/current/functions-range.html#MULTIRANGE-FUNCTIONS-TABLE\n>\n> the description for \"lower(anymultirange)\":\n>\n> > (NULL if the multirange is empty has no lower bound).\n>\n> is missing \"or\" and should be:\n>\n> > (NULL if the multirange is empty or has no lower bound).\n\n\nGood catch! I checked the descriptions for \"upper(anymultirange)\",\n\"lower(anyrange)\" and \"upper(anyrange)\", and they are all correct. We\nshould fix this one.\n\nThanks\nRichard\n\nOn Thu, Apr 25, 2024 at 8:40 AM Ian Lawrence Barwick <[email protected]> wrote:Hi\n\nHere:\n\n     https://www.postgresql.org/docs/current/functions-range.html#MULTIRANGE-FUNCTIONS-TABLE\n\nthe description for \"lower(anymultirange)\":\n\n> (NULL if the multirange is empty has no lower bound).\n\nis missing \"or\" and should be:\n\n> (NULL if the multirange is empty or has no lower bound).Good catch!  I checked the descriptions for \"upper(anymultirange)\",\"lower(anyrange)\" and \"upper(anyrange)\", and they are all correct.  Weshould fix this one.ThanksRichard", "msg_date": "Thu, 25 Apr 2024 09:02:34 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: docs: minor typo fix for \"lower(anymultirange)\"" }, { "msg_contents": "On Thu, Apr 25, 2024 at 09:02:34AM +0800, Richard Guo wrote:\n> Good catch! I checked the descriptions for \"upper(anymultirange)\",\n> \"lower(anyrange)\" and \"upper(anyrange)\", and they are all correct. We\n> should fix this one.\n\n+1.\n--\nMichael", "msg_date": "Thu, 25 Apr 2024 12:23:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: docs: minor typo fix for \"lower(anymultirange)\"" } ]
[ { "msg_contents": "Hi,\n\nI'm curious about composite types in PostgreSQL. By default, when we\ncreate a composite type, it utilizes the \"record_in\" and \"record_out\"\nfunctions for input/output. Do you think it would be beneficial to\nexpand the syntax to allow users to specify custom input/output\nfunctions when creating composite types? Has anyone attempted this\nbefore, and are there any design challenges associated with it? Or is\nit simply not implemented because it's not seen as a valuable\naddition?\n\nI believe it would be beneficial because users creating a new type\nmight prefer to define specific input/output syntax rather than\nconforming to what is accepted by the RECORD type.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 10:04:37 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "Dilip Kumar <[email protected]> writes:\n> I'm curious about composite types in PostgreSQL. By default, when we\n> create a composite type, it utilizes the \"record_in\" and \"record_out\"\n> functions for input/output. Do you think it would be beneficial to\n> expand the syntax to allow users to specify custom input/output\n> functions when creating composite types?\n\nNo.\n\n> I believe it would be beneficial because users creating a new type\n> might prefer to define specific input/output syntax rather than\n> conforming to what is accepted by the RECORD type.\n\nThe primary outcome would be to require a huge amount of new work\nto be done by a lot of software, much of it not under our control.\nAnd the impact wouldn't only be to software that would prefer not\nto know about this. For example, how likely do you think it is\nthat these hypothetical user-defined I/O functions would cope\nwell with ALTER TABLE/ALTER TYPE commands that change those\nrowtypes?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Apr 2024 00:44:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "On Thu, Apr 25, 2024 at 10:14 AM Tom Lane <[email protected]> wrote:\n>\n> Dilip Kumar <[email protected]> writes:\n> > I'm curious about composite types in PostgreSQL. By default, when we\n> > create a composite type, it utilizes the \"record_in\" and \"record_out\"\n> > functions for input/output. Do you think it would be beneficial to\n> > expand the syntax to allow users to specify custom input/output\n> > functions when creating composite types?\n>\n> No.\n>\n> > I believe it would be beneficial because users creating a new type\n> > might prefer to define specific input/output syntax rather than\n> > conforming to what is accepted by the RECORD type.\n>\n\nThanks for the quick response, Tom.\n\n> The primary outcome would be to require a huge amount of new work\n> to be done by a lot of software, much of it not under our control.\n\nYeah, I agree with that.\n\n> And the impact wouldn't only be to software that would prefer not\n> to know about this. For example, how likely do you think it is\n> that these hypothetical user-defined I/O functions would cope\n> well with ALTER TABLE/ALTER TYPE commands that change those\n> rowtypes?\n\nThat's a good point. 
I was primarily focused on altering the\nrepresentation of input and output values, rather than considering\nchanges to internal storage. However, offering this feature could\nindeed allow users to influence how values are stored. And that can\npotentially affect ALTER TYPE because then we do not have control over\nhow those values are stored internally.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 10:37:26 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "This thread caught my eye this morning, and I'm confused.\n\nOn Thu, Apr 25, 2024 at 12:44 AM Tom Lane <[email protected]> wrote:\n> The primary outcome would be to require a huge amount of new work\n> to be done by a lot of software, much of it not under our control.\n\nWhat, what is the \"lot of software\" that would have to be changed? It\ncan't be existing extensions, because they wouldn't be forced into\nusing this feature. Are you thinking that drivers or admin tools would\nneed to support it? To me it seems like the only required changes\nwould be to things that know how to parse the output of record_out(),\nand there is probably some of that, but the language you're using here\nis so emphatic as to make me suspect that you anticipate some larger\nimpact.\n\n> And the impact wouldn't only be to software that would prefer not\n> to know about this. For example, how likely do you think it is\n> that these hypothetical user-defined I/O functions would cope\n> well with ALTER TABLE/ALTER TYPE commands that change those\n> rowtypes?\n\nHmm. Dilip mentioned changing the storage format, but if you do that,\nyou have bigger problems, like my_record_type.x no longer working. At\nthat point I don't see why what you have is properly called a record\ntype at all. So I guess what you're imagining here is that ALTER TABLE\n.. ALTER TYPE would try COERCION_PATH_COERCEVIAIO, but, uh so what? We\ncould probably fix it so that such coercions were handled in some\nother way, but even if we didn't, it just means the user has to\nprovide a USING clause, which is no different than what happens in any\nother case where coerce-via-I/O doesn't work out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 08:50:38 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> This thread caught my eye this morning, and I'm confused.\n> On Thu, Apr 25, 2024 at 12:44 AM Tom Lane <[email protected]> wrote:\n>> The primary outcome would be to require a huge amount of new work\n>> to be done by a lot of software, much of it not under our control.\n\n> What, what is the \"lot of software\" that would have to be changed?\n\nI think this potentially affects stuff as low-level as drivers,\nparticularly those that deal in binary not text I/O. It's not\nunreasonable for client code to assume that any type with\ntyptype 'c' (composite) will adhere to the specifications at\nhttps://www.postgresql.org/docs/current/rowtypes.html#ROWTYPES-IO-SYNTAX\nespecially because that section pretty much says in so many words\nthat that's the case.\n\n> It\n> can't be existing extensions, because they wouldn't be forced into\n> using this feature. 
Are you thinking that drivers or admin tools would\n> need to support it?\n\nYes. We've heard that argument about \"this only affects extensions\nthat choose to use it\" before, and it's nonsense. As soon as you\nextend system-wide APIs, the consequences are system-wide: everybody\nhas to cope with the new definition.\n\n>> For example, how likely do you think it is\n>> that these hypothetical user-defined I/O functions would cope\n>> well with ALTER TABLE/ALTER TYPE commands that change those\n>> rowtypes?\n\n> Hmm. Dilip mentioned changing the storage format, but if you do that,\n> you have bigger problems, like my_record_type.x no longer working. At\n> that point I don't see why what you have is properly called a record\n> type at all.\n\nYup, I agree.\n\n> So I guess what you're imagining here is that ALTER TABLE\n> .. ALTER TYPE would try COERCION_PATH_COERCEVIAIO, but, uh so what?\n\nUh, no. My point is that if you make a custom output function\nfor \"type complex (real float8, imaginary float8)\", that function\nwill probably crash pretty hard if what it's handed is something\nother than two float8s. But there is nothing to stop somebody\nfrom trying to ALTER the type to be two numerics or whatever.\nConversely, the type's custom input function would likely keep on\nproducing two float8s, yielding corrupt data so far as the rest\nof the system is concerned.\n\nYou could imagine preventing such trouble by forbidding ALTER TYPE\non types with custom I/O functions. But that just makes it even\nmore obvious that what this is is a poorly-thought-through hack,\nrather than a feature that interoperates well with the rest of\nPostgres.\n\nI think that to the extent that there's a need for custom I/O\nof something like this, it should be handled by bespoke types,\nsimilar to (say) type point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Apr 2024 12:34:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "On Thu, Apr 25, 2024 at 12:34 PM Tom Lane <[email protected]> wrote:\n> Yes. We've heard that argument about \"this only affects extensions\n> that choose to use it\" before, and it's nonsense. As soon as you\n> extend system-wide APIs, the consequences are system-wide: everybody\n> has to cope with the new definition.\n\nSure. Any new feature has this problem to some extent.\n\n> Uh, no. My point is that if you make a custom output function\n> for \"type complex (real float8, imaginary float8)\", that function\n> will probably crash pretty hard if what it's handed is something\n> other than two float8s. But there is nothing to stop somebody\n> from trying to ALTER the type to be two numerics or whatever.\n> Conversely, the type's custom input function would likely keep on\n> producing two float8s, yielding corrupt data so far as the rest\n> of the system is concerned.\n\nI'm not sure I really buy this. Changing the column definitions\namounts to changing the on-disk format, and no data type can survive a\nchange to the on-disk format without updating the I/O functions to\nmatch.\n\n> I think that to the extent that there's a need for custom I/O\n> of something like this, it should be handled by bespoke types,\n> similar to (say) type point.\n\nI somewhat agree with this. The main disadvantage of that approach is\nthat you lose the ability to directly refer to the members, which in\nsome cases would be quite nice. 
I bet a lot of people would enjoy\nbeing able to write my_point.x and my_point.y instead of my_point[0]\nand my_point[1], for example. But maybe the solution to that is not\n$SUBJECT.\n\nA related problem is that, even if my_point behaved like a composite\ntype, you'd have to write (my_point).x and (my_point).y to avoid\nsomething like:\n\nERROR: missing FROM-clause entry for table \"my_point\"\n\nI think it's confusing and counterintuitive that putting parentheses\naround a subexpression completely changes the meaning. I don't know of\nany other programming language that behaves that way, and I find the\nway the \"indirection\" productions are coded in gram.y to be highly\nquestionable. I suspect everything we currently treat as an\nindirection_el should instead be a way of constructing a new a_expr or\nc_expr or something like that, but I strongly suspect if I try to make\nthe work I'll discover horrible problems I can't fix. Still, it's\nawful.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 13:12:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Apr 25, 2024 at 12:34 PM Tom Lane <[email protected]> wrote:\n>> Uh, no. My point is that if you make a custom output function\n>> for \"type complex (real float8, imaginary float8)\", that function\n>> will probably crash pretty hard if what it's handed is something\n>> other than two float8s.\n\n> I'm not sure I really buy this. Changing the column definitions\n> amounts to changing the on-disk format, and no data type can survive a\n> change to the on-disk format without updating the I/O functions to\n> match.\n\nWhat I'm trying to say is: given that the command \"alter type T alter\nattribute A type foo\" exists, users would reasonably expect the system\nto honor that on its own for any composite type, because that's what\nit does today. But it can't if T has custom I/O functions, at least\nnot without understanding how to rewrite those functions.\n\n>> I think that to the extent that there's a need for custom I/O\n>> of something like this, it should be handled by bespoke types,\n>> similar to (say) type point.\n\n> I somewhat agree with this. The main disadvantage of that approach is\n> that you lose the ability to directly refer to the members, which in\n> some cases would be quite nice. I bet a lot of people would enjoy\n> being able to write my_point.x and my_point.y instead of my_point[0]\n> and my_point[1], for example. But maybe the solution to that is not\n> $SUBJECT.\n\nNope, it isn't IMO. We already added infrastructure to allow\narbitrary custom types to define subscripting operations. I think a\ncase could be made to allow them to define field selection, too.\n\n> I think it's confusing and counterintuitive that putting parentheses\n> around a subexpression completely changes the meaning. I don't know of\n> any other programming language that behaves that way,\n\nI take it that you also don't believe that \"2 + 3 * 4\" should yield\ndifferent results from \"(2 + 3) * 4\"?\n\nI could get behind offering an alternative notation, eg \"a.b->c does\nthe same thing as (a.b).c\", if we could find a reasonable notation\nthat doesn't infringe on user operator namespace. 
I think that might\nbe hard to do though, and I don't think the existing notation is so\nawful than we should break existing operators to have an alternative.\nThe business with deprecating => operators a few years ago had the\nexcuse that \"the standard says so\", but we don't have that\njustification here.\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Apr 2024 17:05:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "On Thu, 25 Apr 2024 at 17:05, Tom Lane <[email protected]> wrote:\n\n\n> > I think it's confusing and counterintuitive that putting parentheses\n> > around a subexpression completely changes the meaning. I don't know of\n> > any other programming language that behaves that way,\n>\n> I take it that you also don't believe that \"2 + 3 * 4\" should yield\n> different results from \"(2 + 3) * 4\"?\n>\n\nIn that expression \"2 + 3\" is not a subexpression, although \"3 * 4\"\nis, thanks to the operator precedence rules.\n\nI could get behind offering an alternative notation, eg \"a.b->c does\n> the same thing as (a.b).c\", if we could find a reasonable notation\n> that doesn't infringe on user operator namespace. I think that might\n> be hard to do though, and I don't think the existing notation is so\n> awful than we should break existing operators to have an alternative.\n> The business with deprecating => operators a few years ago had the\n> excuse that \"the standard says so\", but we don't have that\n> justification here.\n>\n\nThis is one of those areas where it will be difficult at best to do\nsomething which makes things work the way people expect without breaking\nother cases. I certainly would like to be able to use . to extract a field\nfrom a composite value without parenthesizing everything, but given the\npotential for having a schema name that matches a table or field name I\nwould want to be very careful about changing anything.\n\nOn Thu, 25 Apr 2024 at 17:05, Tom Lane <[email protected]> wrote: \n> I think it's confusing and counterintuitive that putting parentheses\n> around a subexpression completely changes the meaning. I don't know of\n> any other programming language that behaves that way,\n\nI take it that you also don't believe that \"2 + 3 * 4\" should yield\ndifferent results from \"(2 + 3) * 4\"?In that expression \"2 + 3\" is not a subexpression, although \"3 * 4\" is, thanks to the operator precedence rules.\nI could get behind offering an alternative notation, eg \"a.b->c does\nthe same thing as (a.b).c\", if we could find a reasonable notation\nthat doesn't infringe on user operator namespace.  I think that might\nbe hard to do though, and I don't think the existing notation is so\nawful than we should break existing operators to have an alternative.\nThe business with deprecating => operators a few years ago had the\nexcuse that \"the standard says so\", but we don't have that\njustification here.This is one of those areas where it will be difficult at best to do something which makes things work the way people expect without breaking other cases. I certainly would like to be able to use . 
to extract a field from a composite value without parenthesizing everything, but given the potential for having a schema name that matches a table or field name I would want to be very careful about changing anything.", "msg_date": "Thu, 25 Apr 2024 17:39:17 -0400", "msg_from": "Isaac Morland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "I wrote:\n> I could get behind offering an alternative notation, eg \"a.b->c does\n> the same thing as (a.b).c\", if we could find a reasonable notation\n> that doesn't infringe on user operator namespace. I think that might\n> be hard to do though, and I don't think the existing notation is so\n> awful than we should break existing operators to have an alternative.\n> The business with deprecating => operators a few years ago had the\n> excuse that \"the standard says so\", but we don't have that\n> justification here.\n\nA different approach we could take is to implement the SQL99 rules\nfor <identifier chain>, or at least move closer to that. Our\nexisting rules for resolving qualified column references are more\nor less SQL92. I think the reasons we didn't do that when we first\nimplemented SQL99 are\n\n(1) The SQL99 rules are fundamentally ambiguous, which they wave\naway by saying that it's user error if there's more than one way\nto interpret the reference. This approach is decidedly not nice,\nnotably because it means that unrelated-looking changes in your\nschema can break your query. Having to check multiple\ninterpretations slows parsing, too.\n\n(2) Switching from SQL92 to SQL99 rules would break some queries\nanyway. (At least, that's my recollection, though looking at\nthe specs right now I don't see any case where SQL99 doesn't\ntake a SQL92 alternative, so long as you don't run into (1).)\n\nStill, maybe it's time to think about changing? We could use\nthe \"the standard says so\" excuse with anybody who complains.\n\nIn the long run I wish we could ditch the SQL92 rules altogether\nand say that the head identifier of a qualified column reference\nmust be a table's correlation name, not a schema or catalog name.\nThere's zero good reason for the latter two cases, other than\ncompatibility with thirty-year-old design mistakes. I kind of\ndoubt we could make that fly though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Apr 2024 17:51:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "On Thu, Apr 25, 2024 at 5:05 PM Tom Lane <[email protected]> wrote:\n> > I'm not sure I really buy this. Changing the column definitions\n> > amounts to changing the on-disk format, and no data type can survive a\n> > change to the on-disk format without updating the I/O functions to\n> > match.\n>\n> What I'm trying to say is: given that the command \"alter type T alter\n> attribute A type foo\" exists, users would reasonably expect the system\n> to honor that on its own for any composite type, because that's what\n> it does today. But it can't if T has custom I/O functions, at least\n> not without understanding how to rewrite those functions.\n\nI understand your point, but I don't agree with it. Ordinary users\nwouldn't be able to create types like this anyway, because there's no\nway we can allow an unprivileged user to set an input or output\nfunction. 
It would have to be restricted to superusers, just as we do\nfor base types. And IMHO those have basically the same issue: you have\nto ensure that all the functions and operators that operate on the\ntype, and any subscripting operations, are on the same page about what\nthe underlying storage is. This doesn't seem different. It may well\nstill be a bad idea for other reasons, or just kind of useless, but I\ndisagree that it's a bad idea for that particular reason.\n\n> Nope, it isn't IMO. We already added infrastructure to allow\n> arbitrary custom types to define subscripting operations. I think a\n> case could be made to allow them to define field selection, too.\n\nThat would be cool!\n\n> I take it that you also don't believe that \"2 + 3 * 4\" should yield\n> different results from \"(2 + 3) * 4\"?\n\nIsaac's rebuttal to this particular point was perfect; I have nothing to add.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 10:34:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "On Thu, Apr 25, 2024 at 5:51 PM Tom Lane <[email protected]> wrote:\n> A different approach we could take is to implement the SQL99 rules\n> for <identifier chain>, or at least move closer to that. Our\n> existing rules for resolving qualified column references are more\n> or less SQL92. I think the reasons we didn't do that when we first\n> implemented SQL99 are\n\nI'm not familiar with these rules. Do they allow stuff like a.b.c.d.e,\nor better yet, a.b(args).c(args).d(args).e(args)?\n\n> Still, maybe it's time to think about changing? We could use\n> the \"the standard says so\" excuse with anybody who complains.\n\nI certainly agree that if we're going to break stuff, breaking stuff\nto get closer to the standard is superior to other ways of breaking\nstuff. Without knowing what we'd get out of it, I don't have an\nopinion about whether it's worth it here or not, but making our syntax\nmore like other programming languages and especially other popular\ndatabase products does seem to me to have positive value.\n\n> In the long run I wish we could ditch the SQL92 rules altogether\n> and say that the head identifier of a qualified column reference\n> must be a table's correlation name, not a schema or catalog name.\n> There's zero good reason for the latter two cases, other than\n> compatibility with thirty-year-old design mistakes. I kind of\n> doubt we could make that fly though.\n\nYeah, I think that would break too much stuff.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 10:41:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Apr 25, 2024 at 5:05 PM Tom Lane <[email protected]> wrote:\n>> What I'm trying to say is: given that the command \"alter type T alter\n>> attribute A type foo\" exists, users would reasonably expect the system\n>> to honor that on its own for any composite type, because that's what\n>> it does today. But it can't if T has custom I/O functions, at least\n>> not without understanding how to rewrite those functions.\n\n> I understand your point, but I don't agree with it. 
Ordinary users\n> wouldn't be able to create types like this anyway, because there's no\n> way we can allow an unprivileged user to set an input or output\n> function. It would have to be restricted to superusers, just as we do\n> for base types.\n\nWell, that would be one way of making the consistency problem be not\nour problem, but it would be a sad restriction. It'd void a lot of\nthe arguable use-case for this feature, if you ask me. I realize\nthat non-superusers couldn't create the C-language I/O functions that\nwould be most at risk here, but you could imagine people building\nI/O functions in some other PL.\n\n(We'd have to remove the restriction that cstring isn't an allowed\ninput or return type for user-defined functions; but AFAIK that's\njust a holdover from days when cstring was a lot more magic than\nit is now.)\n\nMaybe there's an argument that PL functions already have to be\nproof enough against datatype inconsistencies that nothing really\nawful could happen. Not sure.\n\nIn any case, if we have to put strange restrictions on a composite\ntype when it has custom I/O functions, then that still is an\nindication that the feature is a hack that doesn't play nice\nwith the rest of the system. So I remain of the opinion that\nwe shouldn't go there. If field selection support for custom\ntypes will solve the use-case, I find that a lot more attractive.\n\n>> I take it that you also don't believe that \"2 + 3 * 4\" should yield\n>> different results from \"(2 + 3) * 4\"?\n\n> Isaac's rebuttal to this particular point was perfect; I have nothing to add.\n\nAs far as I could tell, Isaac's rebuttal was completely off-point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Apr 2024 11:55:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Apr 25, 2024 at 5:51 PM Tom Lane <[email protected]> wrote:\n>> A different approach we could take is to implement the SQL99 rules\n>> for <identifier chain>, or at least move closer to that.\n\n> I'm not familiar with these rules. Do they allow stuff like a.b.c.d.e,\n> or better yet, a.b(args).c(args).d(args).e(args)?\n\nThe former.\n\n <identifier chain> ::=\n <identifier> [ { <period> <identifier> }... ]\n\nThe hard part is to figure out what the first identifier is:\ncolumn name? table correlation name (AS name)? schema name or\ncatalog name of a qualified table name? function parameter name?\nAfter that, as long as what you have is of composite type,\nyou can drill down into it.\n\nIf I'm reading SQL99 correctly, they deny allowing the first\nidentifier to be a column name when there's more than one identifier,\nso that you must table-qualify a composite column before you can\nselect a field from it. But they allow all the other possibilities\nand claim it's user error if more than one could apply, which seems\nlike an awful design to me. At minimum I'd want to say that the\ncorrelation name should be the first choice and wins if there's\na match, regardless of anything else, because otherwise there is\nno safe way for ruleutils to deparse such a construct. And\nprobably function parameter name should be second choice and\nsimilarly win over other choices, for the same reason. 
The other\noptions are SQL92 compatibility holdovers and should only be\nconsidered if we can't find a matching correlation or parameter name.\n\nThe net result of doing it like this, I think, is that we'd accept\nsome cases where SQL99 would prefer to raise an ambiguity error.\nBut we'd still be much closer to the modern standard than we are\ntoday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Apr 2024 12:15:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "On Fri, Apr 26, 2024 at 11:55 AM Tom Lane <[email protected]> wrote:\n> Well, that would be one way of making the consistency problem be not\n> our problem, but it would be a sad restriction. It'd void a lot of\n> the arguable use-case for this feature, if you ask me. I realize\n> that non-superusers couldn't create the C-language I/O functions that\n> would be most at risk here, but you could imagine people building\n> I/O functions in some other PL.\n\nHuh, I hadn't considered that. I figured the performance would be too\nbad to even think about it. I also wasn't even sure such a thing would\nbe supportable: I thought cstrings were generally limited to\nC/internal functions.\n\n> >> I take it that you also don't believe that \"2 + 3 * 4\" should yield\n> >> different results from \"(2 + 3) * 4\"?\n>\n> > Isaac's rebuttal to this particular point was perfect; I have nothing to add.\n>\n> As far as I could tell, Isaac's rebuttal was completely off-point.\n\nOK, I'm not sure why, but let me explain my position. In an expression\nlike (2 + 3) * 4, the parentheses change the order of evaluation,\nwhich makes sense. That's what parentheses are for, or at least one\nthing that parentheses are for. But in an expression like\n(my_point).x, that's not the case. There's only one operator here, the\nperiod, and so there's only one possible order of evaluation, so why\ndo parentheses make any difference? Having (my_point).x be different\nfrom my_point.x is like having 2 + 3 give a different answer from (2 +\n3), which would be bonkers.\n\nBut it's not at all like the difference between 2 + 3 * 4 and (2 + 3)\n* 4. The comparable case there would be foo.bar.baz as against\n(foo.bar).baz or alternatively foo.(bar.baz). Now there are two dot\noperators, and one of them has to be applied first, and there's some\ndefault based on associativity, and if you want it the other way you\nstick parentheses in there to tell the parser what you meant.\n\nAnd the reason I thought Isaac put it well is that he said, \"In that\nexpression 2 + 3 is not a subexpression, although 3 * 4 is, thanks to\nthe operator precedence rules.\" Exactly so. 2 + 3 * 4 is going to be\nparsed as something like OpExpr(+, 2, OpExpr(*, 3, 4)) -- and that\ndoes not have OpExpr(+, 2, 3) anywhere inside of it, so my comment\nthat parenthesizing a subexpression shouldn't change its meaning is\nnot relevant here. I'm perfectly fine with parentheses changing which\nthings we parse as subexpressions. Users have no license to just stick\nparentheses into your SQL expressions in random places and expect that\nthey don't do anything; if that were so, we'd have to make ((2 +)\n3)()()() evaluate to 5, which is obviously nonsense. Rather, what I\ndon't like about the status quo is that putting parentheses around\nsomething that we were already going to consider as a unit changes the\nmeaning of it. 
And that's exactly what happens when you write x.y vs.\n(x).y. The parentheses around the x make us think that it's a\ndifferent kind of thing, somehow. That seems odd, and the practical\nresult is that you have to insert a bunch of parentheses into your\nPostgreSQL code that look like they shouldn't be needed, but are.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 12:19:07 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "On Fri, Apr 26, 2024 at 12:15 PM Tom Lane <[email protected]> wrote:\n> > I'm not familiar with these rules. Do they allow stuff like a.b.c.d.e,\n> > or better yet, a.b(args).c(args).d(args).e(args)?\n>\n> The former.\n>\n> <identifier chain> ::=\n> <identifier> [ { <period> <identifier> }... ]\n\nOK, nice.\n\n> The hard part is to figure out what the first identifier is:\n> column name? table correlation name (AS name)? schema name or\n> catalog name of a qualified table name? function parameter name?\n> After that, as long as what you have is of composite type,\n> you can drill down into it.\n\nRight, makes sense.\n\n> If I'm reading SQL99 correctly, they deny allowing the first\n> identifier to be a column name when there's more than one identifier,\n> so that you must table-qualify a composite column before you can\n> select a field from it. But they allow all the other possibilities\n> and claim it's user error if more than one could apply, which seems\n> like an awful design to me. At minimum I'd want to say that the\n> correlation name should be the first choice and wins if there's\n> a match, regardless of anything else, because otherwise there is\n> no safe way for ruleutils to deparse such a construct. And\n> probably function parameter name should be second choice and\n> similarly win over other choices, for the same reason. The other\n> options are SQL92 compatibility holdovers and should only be\n> considered if we can't find a matching correlation or parameter name.\n\nI definitely agree that there must always be some way to make it\nunambiguous, not just because of deparsing but also because users are\ngoing to want a way to force their preferred interpretation. I've been\na PostgreSQL developer now for considerably longer than I was an end\nuser, but I definitely would not have liked \"ERROR: you can't get\nthere from here\".\n\nI'm less certain how that should be spelled. The rules you propose\nmake sense to me up to a point, but what happens if the same\nunqualified name is both a table alias and a function parameter name?\nI think I need a way of forcing the function-parameter interpretation.\nYou could make function_name.parameter_name resolve to that, but then\nwhat happens if function_name is also a table alias in the containing\nquery? It's really hard to think of a set of rules here that don't\nleave any room for unfixable problems. Maybe the answer is that we\nshould support some completely different notion for unambiguously\nreferencing parameters, like ${parameter_name}. I don't know. 
I think\nthat what you're proposing here could be a nice improvement but it\ndefinitely seems tricky to get it completely right.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 12:34:08 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Apr 26, 2024 at 11:55 AM Tom Lane <[email protected]> wrote:\n>> Well, that would be one way of making the consistency problem be not\n>> our problem, but it would be a sad restriction. It'd void a lot of\n>> the arguable use-case for this feature, if you ask me. I realize\n>> that non-superusers couldn't create the C-language I/O functions that\n>> would be most at risk here, but you could imagine people building\n>> I/O functions in some other PL.\n\n> Huh, I hadn't considered that. I figured the performance would be too\n> bad to even think about it. I also wasn't even sure such a thing would\n> be supportable: I thought cstrings were generally limited to\n> C/internal functions.\n\nPerformance could indeed be an issue, but I think people taking this\npath would be doing so because they value programmer time more than\nmachine time. And while there once were good reasons to not let\nuser functions deal in cstrings, I'm not sure there are anymore.\n(The point would deserve closer investigation if we actually tried\nto move forward on it, of course.)\n\n> Rather, what I\n> don't like about the status quo is that putting parentheses around\n> something that we were already going to consider as a unit changes the\n> meaning of it. And that's exactly what happens when you write x.y vs.\n> (x).y.\n\nBut that's exactly the point: we DON'T consider the initial identifier\nof a qualified name \"as a unit\", and neither does the standard.\nWe have to figure out how many of the identifiers name an object\n(column or table) referenced in the query, and that is not clear\nup-front. SQL99's rules don't make that any better. The parens in\nour current notation serve to separate the object name from any field\nselection(s) done on the object. There's still some ambiguity,\nbecause we allow you to write either \"(table.column).field\" or\n\"(table).column.field\", but we've dealt with that for ages.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Apr 2024 12:54:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Apr 26, 2024 at 12:15 PM Tom Lane <[email protected]> wrote:\n>> If I'm reading SQL99 correctly, they deny allowing the first\n>> identifier to be a column name when there's more than one identifier,\n>> so that you must table-qualify a composite column before you can\n>> select a field from it. But they allow all the other possibilities\n>> and claim it's user error if more than one could apply, which seems\n>> like an awful design to me.\n\n> I'm less certain how that should be spelled. 
The rules you propose\n> make sense to me up to a point, but what happens if the same\n> unqualified name is both a table alias and a function parameter name?\n> I think I need a way of forcing the function-parameter interpretation.\n> You could make function_name.parameter_name resolve to that, but then\n> what happens if function_name is also a table alias in the containing\n> query? It's really hard to think of a set of rules here that don't\n> leave any room for unfixable problems. Maybe the answer is that we\n> should support some completely different notion for unambiguously\n> referencing parameters, like ${parameter_name}. I don't know. I think\n> that what you're proposing here could be a nice improvement but it\n> definitely seems tricky to get it completely right.\n\nI think you're moving the goal posts too far. It's on the user to\nspell the initially-written query unambiguously: if you chose a\nfunction parameter name that matches a table correlation name in the\nquery, that's your fault and you'd better rename one of those things.\nWhat concerns me is the hazard that the query is okay, and we store\nit, and then subsequent object creations or renamings create a\nsituation where an identifier chain is ambiguous per the SQL99 rules.\nruleutils has to be able to deparse the stored query in a way that is\nvalid regardless of that. Giving first priority to correlation and\nparameter names makes this possible because external operations, even\nincluding renaming tables or columns used in the query, won't affect\neither.\n\n\t\t\tregards, tom lane\n\nPS: ruleutils does sometimes choose new correlation names, and it\nsuddenly occurs to me that it's probably not being careful to avoid\nduplicating function parameter names. But that's independent of this\ndiscussion.\n\n\n", "msg_date": "Fri, 26 Apr 2024 13:07:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "On Fri, Apr 26, 2024 at 12:54 PM Tom Lane <[email protected]> wrote:\n> But that's exactly the point: we DON'T consider the initial identifier\n> of a qualified name \"as a unit\", and neither does the standard.\n> We have to figure out how many of the identifiers name an object\n> (column or table) referenced in the query, and that is not clear\n> up-front. SQL99's rules don't make that any better. The parens in\n> our current notation serve to separate the object name from any field\n> selection(s) done on the object. There's still some ambiguity,\n> because we allow you to write either \"(table.column).field\" or\n> \"(table).column.field\", but we've dealt with that for ages.\n\nI agree that this is exactly the point.\n\nNo other programming language that I know of, and no other database\nthat I know of, looks at x.y.z and says \"ok, well first we have to\nfigure out whether the object is named x or x.y or x.y.z, and then\nafter that, we'll use whatever is left over as a field selector.\"\nInstead, they have a top-level namespace where x refers to one and\nonly one thing, and then they look for something called y inside of\nthat, and if that's a valid object then they look inside of that for\nz.\n\nJavaScript is probably the purest example of this. Everything is an\nobject, and x.y just looks up 'x' in the object that is the current\nnamespace. 
Assuming that returns an object rather than nothing, we\nthen try to find 'y' inside of that object.\n\nI'm not an Oracle expert, but I am under the impression that the way\nthat Oracle works is closer to that than it is to our\nread-the-tea-leaves approach.\n\nI'm almost positive you're about to tell me that there's no way in the\ninfernal regions that we could make a semantics change of this\nmagnitude, and maybe you're right. But I think our current approach is\ndeeply unsatisfying and seriously counterintuitive. People do not get\nan error about x.y and think \"oh, right, I need to write (x).y so that\nthe parser understands that the name is x rather than x.y and the .y\npart is field-selection rather than a part of the name itself.\" They\nget an error about x.y and say \"crap, I guess this syntax isn't\nsupported\" and then when you show them that \"(x).y\" fixes it, they say\n\"why in the world does that fix it?\" or \"wow, that's dumb.\"\n\nImagine if we made _ perform string concatenation but also continued\nto allow it as an identifier character. When we saw a_b without\nspaces, we'd test for whether there's an a_b variable, and/or whether\nthere are a and b variables, to guess which interpretation was meant.\nI hope we would all agree that this would be insane language design.\nYet that's essentially what we've done with period, and I don't think\nwe can blame that on the SQL standard, because I don't think other\nsystems have this problem. I wonder if anyone knows of another system\nthat works like PostgreSQL in this regard (without sharing code).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 14:04:25 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> No other programming language that I know of, and no other database\n> that I know of, looks at x.y.z and says \"ok, well first we have to\n> figure out whether the object is named x or x.y or x.y.z, and then\n> after that, we'll use whatever is left over as a field selector.\"\n\nIt may indeed be true that nobody but SQL does that, but nonetheless\nthis is exactly what SQL99 requires AFAICT. The reason we use this\nparenthesis notation is precisely that we didn't want to get into\nthat sort of tea-leaf-reading about how many identifiers mean what.\nThe parens put it on the user to tell us what part of the chain\nis field selection.\n\nNow do you see why I'd prefer to ditch the SQL92-compatibility\nmeasures? If we said that the first identifier in a chain must\nbe a correlation name or parameter name, never anything else,\nit'd be substantially saner.\n\n> Yet that's essentially what we've done with period, and I don't think\n> we can blame that on the SQL standard\n\nYes, we can. Please do not rant further about this until you've\nread the <identifier chain> section of a recent SQL spec.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Apr 2024 14:25:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "On Fri, 26 Apr 2024 at 14:04, Robert Haas <[email protected]> wrote:\n\nsystems have this problem. 
I wonder if anyone knows of another system\n> that works like PostgreSQL in this regard (without sharing code).\n>\n\nIn Haskell period (\".\") is used both to form a qualified name (module.name),\nvery similar to our schema.object, and it is also a perfectly normal\noperator which is defined by the standard prelude as function composition\n(but you can re-bind it to any object whatsoever). This is disambiguated in\na very simple way however: Module names must begin with an uppercase letter\nwhile variable names must begin with a lowercase letter.\n\nA related point is that parentheses in Haskell act to group expressions,\nbut they, and commas, are not involved in calling functions: to call a\nfunction, just write it to the left of its parameter (and it only has one\nparameter, officially).\n\nAll this might sound weird but it actually works very well in the Haskell\ncontext.\n\nOn Fri, 26 Apr 2024 at 14:04, Robert Haas <[email protected]> wrote:systems have this problem. I wonder if anyone knows of another system\nthat works like PostgreSQL in this regard (without sharing code).In Haskell period (\".\") is used both to form a qualified name (module.name), very similar to our schema.object, and it is also a perfectly normal operator which is defined by the standard prelude as function composition (but you can re-bind it to any object whatsoever). This is disambiguated in a very simple way however: Module names must begin with an uppercase letter while variable names must begin with a lowercase letter.A related point is that parentheses in Haskell act to group expressions, but they, and commas, are not involved in calling functions: to call a function, just write it to the left of its parameter (and it only has one parameter, officially).All this might sound weird but it actually works very well in the Haskell context.", "msg_date": "Fri, 26 Apr 2024 14:42:37 -0400", "msg_from": "Isaac Morland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" }, { "msg_contents": "On Fri, Apr 26, 2024 at 2:25 PM Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n> > No other programming language that I know of, and no other database\n> > that I know of, looks at x.y.z and says \"ok, well first we have to\n> > figure out whether the object is named x or x.y or x.y.z, and then\n> > after that, we'll use whatever is left over as a field selector.\"\n>\n> It may indeed be true that nobody but SQL does that, but nonetheless\n> this is exactly what SQL99 requires AFAICT. The reason we use this\n> parenthesis notation is precisely that we didn't want to get into\n> that sort of tea-leaf-reading about how many identifiers mean what.\n> The parens put it on the user to tell us what part of the chain\n> is field selection.\n\nI really thought this was just PostgreSQL, not SQL generally, but I\njust experimented a bit with Oracle on dbfiddle.uk using this example:\n\nCREATE TYPE foo AS OBJECT (a number(10), b varchar2(2000));\nCREATE TABLE bar (quux foo);\nINSERT INTO bar VALUES (foo(1, 'one'));\nSELECT bar.quux, quux, (quux).a, (bar.quux).a FROM bar;\n\nThis works, but if I delete the parentheses from the last line, then\nit fails. So evidently my understanding of how this works in other\nsystems is incorrect, or incomplete. 
I feel like I've encountered\ncases where we required extra parenthesization that Oracle didn't\nneed, but it's hard to discuss that without examples, and I don't have\nthem right now.\n\n> Yes, we can. Please do not rant further about this until you've\n> read the <identifier chain> section of a recent SQL spec.\n\nI'm hurt to see emails that I spent time on characterized as a rant,\neven if I was wrong on the facts. And I think appealing to the SQL\nstandard is a poor way of trying to end debate on a topic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 14:59:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why don't we support external input/output functions for the\n composite types" } ]
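A minimal SQL sketch of the field-selection parenthesization behavior discussed in the thread above (the type and table names are invented for illustration; the error text is what stock PostgreSQL reports):

CREATE TYPE complex AS (r float8, i float8);
CREATE TABLE t (id int, c complex);
INSERT INTO t VALUES (1, ROW(1.0, 2.0)::complex);

SELECT c.r FROM t;      -- ERROR: missing FROM-clause entry for table "c"
SELECT (c).r FROM t;    -- works: parentheses mark c as a column, .r as field selection
SELECT (t.c).r FROM t;  -- works: table-qualified column, then field selection
SELECT (t).c.r FROM t;  -- works: whole-row reference, then field selection

The unparenthesized form fails because the head identifier of a qualified column reference is resolved as a table (correlation) name first, which is the SQL92-style behavior described in the discussion.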
[ { "msg_contents": "Hi\n\nyesterday, I had to fix strange issue on standby server\n\nThe query to freshly updated data fails\n\nselect * from seller_success_rate where create_time::date = '2024-04-23';\nERROR: 58P01: could not access status of transaction 1393466389\nDETAIL: Could not open file \"pg_xact/0530\": No such file or directory.\nLOCATION: SlruReportIOError, slru.c:947\n\namcheck\n\nselect * from verify_heapam('seller_success_rate');\n blkno | offnum | attnum | msg\n\n-------+--------+--------+-------------------------------------------------------------------\n 5763 | 111 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 5863 | 109 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 5863 | 110 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 5868 | 110 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 5868 | 111 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 5875 | 111 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 5895 | 109 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 5895 | 110 | | xmin 1439564642 precedes oldest valid\ntransaction ID 3:1523885078\n 6245 | 108 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 6245 | 109 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 6245 | 110 | | xmin 1439564642 precedes oldest valid\ntransaction ID 3:1523885078\n 6245 | 112 | | xmin 1424677216 precedes oldest valid\ntransaction ID 3:1523885078\n 6378 | 109 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 6378 | 110 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 6382 | 110 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 6590 | 110 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 6590 | 111 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 7578 | 112 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 7581 | 112 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 8390 | 112 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 10598 | 109 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n 10598 | 110 | | xmin 1393466389 precedes oldest valid\ntransaction ID 3:1523885078\n\nI verified xmin against the primary server, and it was the same. There was\nnot any replication gap.\n\nI checked the fields from pg_database table, and looks same too\n\nThese rows were valid (and visible) on primary.\n\nOn this server there was not any long session (when I was connected),\nunfortunately I cannot test restart of this server. One wal sender is\nexecuting on standby. Fortunately, there was a possibility to run VACUUM\nFULL, and it fixed the issue.\n\nThe customer has archived wals.\n\nMy question - is it possible to do some diagnostics from SQL level? 
I\ndidn't find a way to get values that are used for comparison by amcheck\nfrom SQL.\n\nRegards\n\nPavel\n\nHiyesterday, I had to fix strange issue on standby server The query to freshly updated data failsselect * from seller_success_rate where create_time::date = '2024-04-23';ERROR:  58P01: could not access status of transaction 1393466389DETAIL:  Could not open file \"pg_xact/0530\": No such file or directory.LOCATION:  SlruReportIOError, slru.c:947amcheckselect * from verify_heapam('seller_success_rate'); blkno | offnum | attnum |                                msg                                -------+--------+--------+-------------------------------------------------------------------  5763 |    111 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  5863 |    109 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  5863 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  5868 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  5868 |    111 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  5875 |    111 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  5895 |    109 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  5895 |    110 |        | xmin 1439564642 precedes oldest valid transaction ID 3:1523885078  6245 |    108 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  6245 |    109 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  6245 |    110 |        | xmin 1439564642 precedes oldest valid transaction ID 3:1523885078  6245 |    112 |        | xmin 1424677216 precedes oldest valid transaction ID 3:1523885078  6378 |    109 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  6378 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  6382 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  6590 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  6590 |    111 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  7578 |    112 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  7581 |    112 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078  8390 |    112 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078 10598 |    109 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078 10598 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078I verified xmin against the primary server, and it was the same. There was not any replication gap.I checked the fields from pg_database table, and looks same tooThese rows were valid (and visible) on primary.On this server there was not any long session (when I was connected), unfortunately I cannot test restart of this server.  One wal sender is executing on standby. Fortunately, there was a possibility to run VACUUM FULL, and it fixed the issue.The customer has archived wals. My question - is it possible to do some diagnostics from SQL level? 
I didn't find a way to get values that are used for comparison by amcheck from SQL.RegardsPavel", "msg_date": "Thu, 25 Apr 2024 08:12:17 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "broken reading on standby (PostgreSQL 16.2)" }, { "msg_contents": "\n\n> On 25 Apr 2024, at 11:12, Pavel Stehule <[email protected]> wrote:\n> \n> yesterday, I had to fix strange issue on standby server \n\nIt’s not just broken reading, if this standby is promoted in HA cluster - this would lead to data loss.\nRecently I’ve observed some lost heap updates ofter OOM-ing cluster on 14.11. This might be unrelated most probably, but I’ll post a link here, just in case [0]. In February and March we had 3 clusters with similar problem, and this is unusually big number for us in just 2 months.\n\nCan you check LSN of blocks with corrupted tuples with pageinpsect on primary and on standby? I suspect they are frozen on primary, but have usual xmin on standby.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/67EADE8F-AEA6-4B73-8E38-A69E5D48BAFE%40yandex-team.ru#1266dd8b898ba02686c2911e0a50ab47\n\n", "msg_date": "Thu, 25 Apr 2024 11:52:41 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken reading on standby (PostgreSQL 16.2)" }, { "msg_contents": "čt 25. 4. 2024 v 8:52 odesílatel Andrey M. Borodin <[email protected]>\nnapsal:\n\n>\n>\n> > On 25 Apr 2024, at 11:12, Pavel Stehule <[email protected]> wrote:\n> >\n> > yesterday, I had to fix strange issue on standby server\n>\n> It’s not just broken reading, if this standby is promoted in HA cluster -\n> this would lead to data loss.\n> Recently I’ve observed some lost heap updates ofter OOM-ing cluster on\n> 14.11. This might be unrelated most probably, but I’ll post a link here,\n> just in case [0]. In February and March we had 3 clusters with similar\n> problem, and this is unusually big number for us in just 2 months.\n>\n> Can you check LSN of blocks with corrupted tuples with pageinpsect on\n> primary and on standby? I suspect they are frozen on primary, but have\n> usual xmin on standby.\n>\n\nUnfortunately, I have not direct access to backup, so I am not able to test\nit. But VACUUM FREEZE DISABLE_PAGE_SKIPPING on master didn't help\n\n\n\n\n>\n>\n> Best regards, Andrey Borodin.\n>\n> [0]\n> https://www.postgresql.org/message-id/flat/67EADE8F-AEA6-4B73-8E38-A69E5D48BAFE%40yandex-team.ru#1266dd8b898ba02686c2911e0a50ab47\n\nčt 25. 4. 2024 v 8:52 odesílatel Andrey M. Borodin <[email protected]> napsal:\n\n> On 25 Apr 2024, at 11:12, Pavel Stehule <[email protected]> wrote:\n> \n> yesterday, I had to fix strange issue on standby server \n\nIt’s not just broken reading, if this standby is promoted in HA cluster - this would lead to data loss.\nRecently I’ve observed some lost heap updates ofter OOM-ing cluster on 14.11. This might be unrelated most probably, but I’ll post a link here, just in case [0]. In February and March we had 3 clusters with similar problem, and this is unusually big number for us in just 2 months.\n\nCan you check LSN of blocks with corrupted tuples with pageinpsect on primary and on standby? I suspect they are frozen on primary, but have usual xmin on standby.Unfortunately, I have not direct access to backup, so I am not able to test it. 
But VACUUM FREEZE DISABLE_PAGE_SKIPPING on master didn't help \n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/67EADE8F-AEA6-4B73-8E38-A69E5D48BAFE%40yandex-team.ru#1266dd8b898ba02686c2911e0a50ab47", "msg_date": "Thu, 25 Apr 2024 09:06:20 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken reading on standby (PostgreSQL 16.2)" }, { "msg_contents": "\n\n> On 25 Apr 2024, at 12:06, Pavel Stehule <[email protected]> wrote:\n> \n> Unfortunately, I have not direct access to backup, so I am not able to test it. But VACUUM FREEZE DISABLE_PAGE_SKIPPING on master didn't help\n> \n\nIf Primary considers all freezable tuples frozen, but a standby does not, \"disable page skipping\" won't change anything. Primary will not emit WAL record to freeze tuples again.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 25 Apr 2024 13:51:59 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken reading on standby (PostgreSQL 16.2)" }, { "msg_contents": "čt 25. 4. 2024 v 10:52 odesílatel Andrey M. Borodin <[email protected]>\nnapsal:\n\n>\n>\n> > On 25 Apr 2024, at 12:06, Pavel Stehule <[email protected]> wrote:\n> >\n> > Unfortunately, I have not direct access to backup, so I am not able to\n> test it. But VACUUM FREEZE DISABLE_PAGE_SKIPPING on master didn't help\n> >\n>\n> If Primary considers all freezable tuples frozen, but a standby does not,\n> \"disable page skipping\" won't change anything. Primary will not emit WAL\n> record to freeze tuples again.\n>\n\nyes, this is possible. I git just info about broken xmin, so I expected\nbroken xlog, and then I first tested FREEZE\n\n\n\n>\n>\n> Best regards, Andrey Borodin.\n\nčt 25. 4. 2024 v 10:52 odesílatel Andrey M. Borodin <[email protected]> napsal:\n\n> On 25 Apr 2024, at 12:06, Pavel Stehule <[email protected]> wrote:\n> \n> Unfortunately, I have not direct access to backup, so I am not able to test it. But VACUUM FREEZE DISABLE_PAGE_SKIPPING on master didn't help\n> \n\nIf Primary considers all freezable tuples frozen, but a standby does not, \"disable page skipping\" won't change anything. Primary will not emit WAL record to freeze tuples again.yes, this is possible. 
I git just info about broken xmin, so I expected broken xlog, and then I first tested FREEZE \n\n\nBest regards, Andrey Borodin.", "msg_date": "Thu, 25 Apr 2024 11:07:13 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken reading on standby (PostgreSQL 16.2)" }, { "msg_contents": "On Thu, Apr 25, 2024 at 2:13 AM Pavel Stehule <[email protected]> wrote:\n>\n> Hi\n>\n> yesterday, I had to fix strange issue on standby server\n>\n> The query to freshly updated data fails\n>\n> select * from seller_success_rate where create_time::date = '2024-04-23';\n> ERROR: 58P01: could not access status of transaction 1393466389\n> DETAIL: Could not open file \"pg_xact/0530\": No such file or directory.\n> LOCATION: SlruReportIOError, slru.c:947\n>\n> amcheck\n>\n> select * from verify_heapam('seller_success_rate');\n> blkno | offnum | attnum | msg\n> -------+--------+--------+-------------------------------------------------------------------\n> 5763 | 111 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 5863 | 109 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 5863 | 110 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 5868 | 110 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 5868 | 111 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 5875 | 111 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 5895 | 109 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 5895 | 110 | | xmin 1439564642 precedes oldest valid transaction ID 3:1523885078\n> 6245 | 108 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 6245 | 109 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 6245 | 110 | | xmin 1439564642 precedes oldest valid transaction ID 3:1523885078\n> 6245 | 112 | | xmin 1424677216 precedes oldest valid transaction ID 3:1523885078\n> 6378 | 109 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 6378 | 110 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 6382 | 110 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 6590 | 110 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 6590 | 111 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 7578 | 112 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 7581 | 112 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 8390 | 112 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 10598 | 109 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n> 10598 | 110 | | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>\n> I verified xmin against the primary server, and it was the same. There was not any replication gap.\n>\n> I checked the fields from pg_database table, and looks same too\n>\n> These rows were valid (and visible) on primary.\n>\n> On this server there was not any long session (when I was connected), unfortunately I cannot test restart of this server. One wal sender is executing on standby. 
Fortunately, there was a possibility to run VACUUM FULL, and it fixed the issue.\n>\n> The customer has archived wals.\n\nDid you mention what version of Postgres this is?\n\n- Melanie\n\n\n", "msg_date": "Thu, 25 Apr 2024 06:52:48 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken reading on standby (PostgreSQL 16.2)" }, { "msg_contents": "čt 25. 4. 2024 v 12:53 odesílatel Melanie Plageman <\[email protected]> napsal:\n\n> On Thu, Apr 25, 2024 at 2:13 AM Pavel Stehule <[email protected]>\n> wrote:\n> >\n> > Hi\n> >\n> > yesterday, I had to fix strange issue on standby server\n> >\n> > The query to freshly updated data fails\n> >\n> > select * from seller_success_rate where create_time::date = '2024-04-23';\n> > ERROR: 58P01: could not access status of transaction 1393466389\n> > DETAIL: Could not open file \"pg_xact/0530\": No such file or directory.\n> > LOCATION: SlruReportIOError, slru.c:947\n> >\n> > amcheck\n> >\n> > select * from verify_heapam('seller_success_rate');\n> > blkno | offnum | attnum | msg\n> >\n> -------+--------+--------+-------------------------------------------------------------------\n> > 5763 | 111 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 5863 | 109 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 5863 | 110 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 5868 | 110 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 5868 | 111 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 5875 | 111 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 5895 | 109 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 5895 | 110 | | xmin 1439564642 precedes oldest valid\n> transaction ID 3:1523885078\n> > 6245 | 108 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 6245 | 109 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 6245 | 110 | | xmin 1439564642 precedes oldest valid\n> transaction ID 3:1523885078\n> > 6245 | 112 | | xmin 1424677216 precedes oldest valid\n> transaction ID 3:1523885078\n> > 6378 | 109 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 6378 | 110 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 6382 | 110 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 6590 | 110 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 6590 | 111 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 7578 | 112 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 7581 | 112 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 8390 | 112 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 10598 | 109 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> > 10598 | 110 | | xmin 1393466389 precedes oldest valid\n> transaction ID 3:1523885078\n> >\n> > I verified xmin against the primary server, and it was the same. There\n> was not any replication gap.\n> >\n> > I checked the fields from pg_database table, and looks same too\n> >\n> > These rows were valid (and visible) on primary.\n> >\n> > On this server there was not any long session (when I was connected),\n> unfortunately I cannot test restart of this server. 
One wal sender is\n> executing on standby. Fortunately, there was a possibility to run VACUUM\n> FULL, and it fixed the issue.\n> >\n> > The customer has archived wals.\n>\n> Did you mention what version of Postgres this is?\n>\n\n16.2 and related rows was today one day old\n\n\n\n>\n> - Melanie\n>\n\nčt 25. 4. 2024 v 12:53 odesílatel Melanie Plageman <[email protected]> napsal:On Thu, Apr 25, 2024 at 2:13 AM Pavel Stehule <[email protected]> wrote:\n>\n> Hi\n>\n> yesterday, I had to fix strange issue on standby server\n>\n> The query to freshly updated data fails\n>\n> select * from seller_success_rate where create_time::date = '2024-04-23';\n> ERROR:  58P01: could not access status of transaction 1393466389\n> DETAIL:  Could not open file \"pg_xact/0530\": No such file or directory.\n> LOCATION:  SlruReportIOError, slru.c:947\n>\n> amcheck\n>\n> select * from verify_heapam('seller_success_rate');\n>  blkno | offnum | attnum |                                msg\n> -------+--------+--------+-------------------------------------------------------------------\n>   5763 |    111 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   5863 |    109 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   5863 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   5868 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   5868 |    111 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   5875 |    111 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   5895 |    109 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   5895 |    110 |        | xmin 1439564642 precedes oldest valid transaction ID 3:1523885078\n>   6245 |    108 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   6245 |    109 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   6245 |    110 |        | xmin 1439564642 precedes oldest valid transaction ID 3:1523885078\n>   6245 |    112 |        | xmin 1424677216 precedes oldest valid transaction ID 3:1523885078\n>   6378 |    109 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   6378 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   6382 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   6590 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   6590 |    111 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   7578 |    112 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   7581 |    112 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>   8390 |    112 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>  10598 |    109 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>  10598 |    110 |        | xmin 1393466389 precedes oldest valid transaction ID 3:1523885078\n>\n> I verified xmin against the primary server, and it was the same. There was not any replication gap.\n>\n> I checked the fields from pg_database table, and looks same too\n>\n> These rows were valid (and visible) on primary.\n>\n> On this server there was not any long session (when I was connected), unfortunately I cannot test restart of this server.  
One wal sender is executing on standby. Fortunately, there was a possibility to run VACUUM FULL, and it fixed the issue.\n>\n> The customer has archived wals.\n\nDid you mention what version of Postgres this is?\n\n16.2, and the related rows were only one day old.\n\n- Melanie", "msg_date": "Thu, 25 Apr 2024 13:22:01 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken reading on standby (PostgreSQL 16.2)" } ]
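As a follow-up to the diagnostics question in the thread above, the catalog- and page-level values involved can be compared between primary and standby from SQL. A sketch (the table name and block number are taken from the report above; pageinspect must already be installed on the primary, since a standby is read-only; these are not necessarily the exact internal values amcheck compares, but they are enough to spot a primary/standby divergence):

SELECT relfrozenxid, relminmxid FROM pg_class WHERE relname = 'seller_success_rate';
SELECT datname, datfrozenxid, datminmxid FROM pg_database;
SELECT checkpoint_lsn, next_xid, oldest_xid FROM pg_control_checkpoint();

-- page LSN and raw tuple headers for one of the blocks flagged by verify_heapam
SELECT lsn FROM page_header(get_raw_page('seller_success_rate', 5763));
SELECT lp, t_xmin, t_xmax, t_infomask, t_infomask2
  FROM heap_page_items(get_raw_page('seller_success_rate', 5763));

Running the same queries on both servers would show whether the page LSNs and freeze horizons really match, which is what the pageinspect suggestion earlier in the thread was aiming at.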
[ { "msg_contents": "Hi,\n\nCurrently the launcher's latch is used for the following: a) worker\nprocess attach b) worker process exit and c) subscription creation.\nSince this same latch is used for multiple cases, the launcher process\nis not able to handle concurrent scenarios like: a) Launcher started a\nnew apply worker and waiting for apply worker to attach and b) create\nsubscription sub2 sending launcher wake up signal. In this scenario,\nboth of them will set latch of the launcher process, the launcher\nprocess is not able to identify that both operations have occurred 1)\nworker is attached 2) subscription is created and apply worker should\nbe started. As a result the apply worker does not get started for the\nnew subscription created immediately and gets started after the\ntimeout of 180 seconds.\n\nI have started a new thread for this based on suggestions at [1].\n\nWe could improvise this by one of the following:\na) Introduce a new latch to handle worker attach and exit.\nb) Add a new GUC launcher_retry_time which gives more flexibility to\nusers as suggested by Amit at [1]. Before 5a3a953, the\nwal_retrieve_retry_interval plays a similar role as the suggested new\nGUC launcher_retry_time, e.g. even if a worker is launched, the\nlauncher only wait wal_retrieve_retry_interval time before next round.\nc) Don't reset the latch at worker attach and allow launcher main to\nidentify and handle it. For this there is a patch v6-0002 available at\n[2].\n\nI'm not sure which approach is better in this case. I was thinking if\nwe should add a new latch to handle worker attach and exit.\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KR29XfBi5rObgV06xcBLn7y%2BLCuxcSMdKUkKZK740L2w%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CALDaNm10R7L0Dxq%2B-J%3DPp3AfM_yaokpbhECvJ69QiGH8-jQquw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 25 Apr 2024 14:28:40 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Improving the latch handling between logical replication launcher and\n worker processes." }, { "msg_contents": "On Thursday, April 25, 2024 4:59 PM vignesh C <[email protected]> wrote:\r\n> \r\n> Hi,\r\n> \r\n> Currently the launcher's latch is used for the following: a) worker process attach\r\n> b) worker process exit and c) subscription creation.\r\n> Since this same latch is used for multiple cases, the launcher process is not able\r\n> to handle concurrent scenarios like: a) Launcher started a new apply worker and\r\n> waiting for apply worker to attach and b) create subscription sub2 sending\r\n> launcher wake up signal. In this scenario, both of them will set latch of the\r\n> launcher process, the launcher process is not able to identify that both\r\n> operations have occurred 1) worker is attached 2) subscription is created and\r\n> apply worker should be started. As a result the apply worker does not get\r\n> started for the new subscription created immediately and gets started after the\r\n> timeout of 180 seconds.\r\n> \r\n> I have started a new thread for this based on suggestions at [1].\r\n> \r\n> a) Introduce a new latch to handle worker attach and exit.\r\n\r\nI found the startup process also uses two latches(see recoveryWakeupLatch) for\r\ndifferent purposes, so maybe this is OK. 
But note that both logical launcher\r\nand apply worker will call logicalrep_worker_launch(), if we only add one new\r\nlatch, it means both workers will wait on the same new Latch, and the launcher\r\nmay consume the SetLatch that should have been consumed by the apply\r\nworker(although it's rare).\r\n\r\n> b) Add a new GUC launcher_retry_time which gives more flexibility to users as\r\n> suggested by Amit at [1]. Before 5a3a953, the wal_retrieve_retry_interval plays\r\n> a similar role as the suggested new GUC launcher_retry_time, e.g. even if a\r\n> worker is launched, the launcher only wait wal_retrieve_retry_interval time\r\n> before next round.\r\n\r\nIIUC, the issue does not happen frequently, and may not be noticeable where\r\napply workers wouldn't be restarted often. So, I am slightly not sure if it's\r\nworth adding a new GUC.\r\n\r\n> c) Don't reset the latch at worker attach and allow launcher main to identify and\r\n> handle it. For this there is a patch v6-0002 available at [2].\r\n\r\nThis seems simple. I found we are doing something similar in\r\nRegisterSyncRequest() and WalSummarizerMain().\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Fri, 26 Apr 2024 07:52:51 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "Dear Vignesh,\r\n\r\nThanks for raising idea!\r\n\r\n> a) Introduce a new latch to handle worker attach and exit.\r\n\r\nJust to confirm - there are three wait events for launchers, so I feel we may be\r\nable to create latches per wait event. Is there a reason to introduce\r\n\"a\" latch?\r\n\r\n> b) Add a new GUC launcher_retry_time which gives more flexibility to\r\n> users as suggested by Amit at [1]. Before 5a3a953, the\r\n> wal_retrieve_retry_interval plays a similar role as the suggested new\r\n> GUC launcher_retry_time, e.g. even if a worker is launched, the\r\n> launcher only wait wal_retrieve_retry_interval time before next round.\r\n\r\nHmm. My concern is how users estimate the value. Maybe the default will be\r\n3min, but should users change it? If so, how long? I think even if it becomes\r\ntunable, they cannot control well.\r\n\r\n> c) Don't reset the latch at worker attach and allow launcher main to\r\n> identify and handle it. For this there is a patch v6-0002 available at\r\n> [2].\r\n\r\nDoes it mean that you want to remove ResetLatch() from\r\nWaitForReplicationWorkerAttach(), right? If so, what about the scenario?\r\n\r\n1) The launcher waiting the worker is attached in WaitForReplicationWorkerAttach(),\r\nand 2) subscription is created before attaching. In this case, the launcher will\r\nbecome un-sleepable because the latch is set but won't be reset. It may waste the\r\nCPU time.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n", "msg_date": "Fri, 10 May 2024 02:09:19 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On Thu, Apr 25, 2024 at 6:59 PM vignesh C <[email protected]> wrote:\n>\n...\n> a) Introduce a new latch to handle worker attach and exit.\n\nIIUC there is normally only the “shared” latch or a “process local”\nlatch. i.e. AFAICT is is quite uncommon to invent a new latches like\noption a is proposing. 
I did not see any examples of making new\nlatches (e.g. MyLatchA, MyLatchB, MyLatchC) in the PostgreSQL source\nother than that ‘recoveryWakeupLatch’ mentioned above by Hou-san. So\nthis option could be OK, but OTOH since there is hardly any precedent\nmaybe that should be taken as an indication to avoid doing it this way\n(???)\n\n> b) Add a new GUC launcher_retry_time which gives more flexibility to\n> users as suggested by Amit at [1].\n\nI'm not sure that introducing a new GUC is a good option because this\nseems a rare problem to have -- so it will be hard to tune since it\nwill be difficult to know you even have this problem and then\ndifficult to know that it is fixed. Anyway. this made me consider more\nwhat the WaitLatch timeout value should be. Here are some thoughts:\n\nIdea 1)\n\nI was wondering where did that DEFAULT_NAPTIME_PER_CYCLE value of 180\nseconds come from or was that just a made-up number? AFAICT it just\ncame into existence in the first pub/sub commit [1] but there is no\nexplanation for why 180s was chosen. Anyway, I assume a low value\n(5s?) would be bad because it incurs unacceptable CPU usage, right?\nBut if 180s is too long and 5s is too short then what does a “good”\nnumber even look like? E.g.,. if 60s is deemed OK, then is there any\nharm in just defining DEFAULT_NAPTIME_PER_CYCLE to be 60s and leaving\nit at that?\n\nIdea 2)\n\nAnother idea could be to use a “dynamic timeout” in the WaitLatch of\nApplyLauncherMain. Let me try to explain my thought bubble:\n- If the preceding foreach(lc, sublist) loop launched any workers then\nthe WaitLatch timeout can be a much shorter 10s\n- If the preceding foreach(lc, sublist) loop did NOT launch any\nworkers (this would be the most common case) then the WaitLatch\ntimeout can be the usual 180s.\n\nIIUC this strategy will still give any recently launched workers\nenough time to attach shmem but it also means that any concurrent\nCREATE SUBSCRIPTION will be addressed within 10s instead of 180s.\nMaybe this is sufficient to make an already rare problem become\ninsignificant.\n\n\n> c) Don't reset the latch at worker attach and allow launcher main to\n> identify and handle it. For this there is a patch v6-0002 available at\n> [2].\n\nThis option c seems the easiest. Can you explain what are the\ndrawbacks of using this approach?\n\n======\n[1] github - https://github.com/postgres/postgres/commit/665d1fad99e7b11678b0d5fa24d2898424243cd6#diff-127f8eb009151ec548d14c877a57a89d67da59e35ea09189411aed529c6341bf\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 29 May 2024 15:11:01 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On Fri, 10 May 2024 at 07:39, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Vignesh,\n>\n> Thanks for raising idea!\n>\n> > a) Introduce a new latch to handle worker attach and exit.\n>\n> Just to confirm - there are three wait events for launchers, so I feel we may be\n> able to create latches per wait event. Is there a reason to introduce\n> \"a\" latch?\n\nOne latch is enough, we can use the new latch for both worker starting\nand worker exiting. The other existing latch can be used for other\npurposes. 
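Roughly, the shape I have in mind is like the below (just a sketch of\nthe idea, not the exact code of the attached patch; it assumes a\nper-worker shared latch field in LogicalRepWorker, say\nlauncherWakeupLatch, initialized with InitSharedLatch() at shared\nmemory init):\n\n/* launcher side, around the wait in WaitForReplicationWorkerAttach() */\nOwnLatch(&worker->launcherWakeupLatch);\n...\nrc = WaitLatch(&worker->launcherWakeupLatch,\n WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n 10L, WAIT_EVENT_BGWORKER_STARTUP);\nif (rc & WL_LATCH_SET)\n ResetLatch(&worker->launcherWakeupLatch);\n...\nDisownLatch(&worker->launcherWakeupLatch);\n\n/* worker side: wake the launcher after attaching, and similarly on\n * exit (e.g. from logicalrep_worker_onexit()) */\nSetLatch(&MyLogicalRepWorker->launcherWakeupLatch);\n\nWith this, a concurrent CREATE SUBSCRIPTION only sets the launcher's\nown latch (MyLatch), which is no longer reset inside\nWaitForReplicationWorkerAttach, so that wakeup cannot be lost. 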
Something like the attached patch.\n\nRegards,\nVignesh", "msg_date": "Wed, 29 May 2024 15:07:56 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On Wed, 29 May 2024 at 10:41, Peter Smith <[email protected]> wrote:\n>\n> On Thu, Apr 25, 2024 at 6:59 PM vignesh C <[email protected]> wrote:\n> >\n>\n> > c) Don't reset the latch at worker attach and allow launcher main to\n> > identify and handle it. For this there is a patch v6-0002 available at\n> > [2].\n>\n> This option c seems the easiest. Can you explain what are the\n> drawbacks of using this approach?\n\nThis solution will resolve the issue. However, one drawback to\nconsider is that because we're not resetting the latch, in this\nscenario, the launcher process will need to perform an additional\nround of acquiring subscription details and determining whether the\nworker should start, regardless of any changes in subscriptions.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 29 May 2024 15:09:54 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On Wed, May 29, 2024 at 7:53 PM vignesh C <[email protected]> wrote:\n>\n> On Wed, 29 May 2024 at 10:41, Peter Smith <[email protected]> wrote:\n> >\n> > On Thu, Apr 25, 2024 at 6:59 PM vignesh C <[email protected]> wrote:\n> > >\n> >\n> > > c) Don't reset the latch at worker attach and allow launcher main to\n> > > identify and handle it. For this there is a patch v6-0002 available at\n> > > [2].\n> >\n> > This option c seems the easiest. Can you explain what are the\n> > drawbacks of using this approach?\n>\n> This solution will resolve the issue. However, one drawback to\n> consider is that because we're not resetting the latch, in this\n> scenario, the launcher process will need to perform an additional\n> round of acquiring subscription details and determining whether the\n> worker should start, regardless of any changes in subscriptions.\n>\n\nHmm. IIUC the WaitLatch of the Launcher.WaitForReplicationWorkerAttach\nwas not expecting to get notified.\n\ne.g.1. The WaitList comment in the function says so:\n/*\n * We need timeout because we generally don't get notified via latch\n * about the worker attach. But we don't expect to have to wait long.\n */\n\ne.g.2 The logicalrep_worker_attach() function (which is AFAIK what\nWaitForReplicationWorkerAttach was waiting for) is not doing any\nSetLatch. So that matches what the comment said.\n\n~~~\n\nAFAICT the original problem reported by this thread happened because\nthe SetLatch (from CREATE SUBSCRIPTION) has been inadvertently gobbled\nby the WaitForReplicationWorkerAttach.WaitLatch/ResetLatch which BTW\nwasn't expecting to be notified at all.\n\n~~~\n\nYour option c removes the ResetLatch done by WaitForReplicationWorkerAttach:\n\nYou said above that one drawback is \"the launcher process will need to\nperform an additional round of acquiring subscription details and\ndetermining whether the worker should start, regardless of any changes\nin subscriptions\"\n\nI think you mean if some CREATE SUBSCRIPTION (i.e. 
SetLatch) happens\nduring the attaching of other workers then the latch would (now after\noption c) remain set and so the WaitLatch of ApplyLauncherMain would\nbe notified and/or return immediately end causing an immediate\nre-iteration of the \"foreach(lc, sublist)\" loop.\n\nBut I don't understand why that is a problem.\n\na) I didn't know what you meant \"regardless of any changes in\nsubscriptions\" because I think the troublesome SetLatch originated\nfrom the CREATE SUBSCRIPTION and so there *is* a change to\nsubscriptions.\n\nb) We will need to repeat that sublist loop anyway to start the worker\nfor the new CREATE SUBSCRIPTION, and we need to do that at the\nearliest opportunity because the whole point of the SetLatch is so the\nCREATE SUBSCRIPTION worker can get started promptly. So the earlier we\ndo that the better, right?\n\nc) AFAICT there is no danger of accidentally tying to starting workers\nwho are still in the middle of trying to start (from the previous\niteration) because those cases should be guarded by the\nApplyLauncherGetWorkerStartTime logic.\n\n~~\n\nTo summarise, I felt removing the ResetLatch and the WL_LATCH_SET\nbitset (like your v6-0002 patch does) is not only an easy way of\nfixing the problem reported by this thread, but also it now makes that\nWaitForReplicationWorkerAttach code behave like the comment (\"we\ngenerally don't get notified via latch about the worker attach\") is\nsaying.\n\n(Unless there are some other problems with it that I can't see)\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 30 May 2024 13:16:10 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On Thu, 30 May 2024 at 08:46, Peter Smith <[email protected]> wrote:\n>\n> On Wed, May 29, 2024 at 7:53 PM vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 29 May 2024 at 10:41, Peter Smith <[email protected]> wrote:\n> > >\n> > > On Thu, Apr 25, 2024 at 6:59 PM vignesh C <[email protected]> wrote:\n> > > >\n> > >\n> > > > c) Don't reset the latch at worker attach and allow launcher main to\n> > > > identify and handle it. For this there is a patch v6-0002 available at\n> > > > [2].\n> > >\n> > > This option c seems the easiest. Can you explain what are the\n> > > drawbacks of using this approach?\n> >\n> > This solution will resolve the issue. However, one drawback to\n> > consider is that because we're not resetting the latch, in this\n> > scenario, the launcher process will need to perform an additional\n> > round of acquiring subscription details and determining whether the\n> > worker should start, regardless of any changes in subscriptions.\n> >\n>\n> Hmm. IIUC the WaitLatch of the Launcher.WaitForReplicationWorkerAttach\n> was not expecting to get notified.\n>\n> e.g.1. The WaitList comment in the function says so:\n> /*\n> * We need timeout because we generally don't get notified via latch\n> * about the worker attach. But we don't expect to have to wait long.\n> */\n>\n> e.g.2 The logicalrep_worker_attach() function (which is AFAIK what\n> WaitForReplicationWorkerAttach was waiting for) is not doing any\n> SetLatch. 
So that matches what the comment said.\n>\n> ~~~\n>\n> AFAICT the original problem reported by this thread happened because\n> the SetLatch (from CREATE SUBSCRIPTION) has been inadvertently gobbled\n> by the WaitForReplicationWorkerAttach.WaitLatch/ResetLatch which BTW\n> wasn't expecting to be notified at all.\n>\n> ~~~\n>\n> Your option c removes the ResetLatch done by WaitForReplicationWorkerAttach:\n>\n> You said above that one drawback is \"the launcher process will need to\n> perform an additional round of acquiring subscription details and\n> determining whether the worker should start, regardless of any changes\n> in subscriptions\"\n>\n> I think you mean if some CREATE SUBSCRIPTION (i.e. SetLatch) happens\n> during the attaching of other workers then the latch would (now after\n> option c) remain set and so the WaitLatch of ApplyLauncherMain would\n> be notified and/or return immediately end causing an immediate\n> re-iteration of the \"foreach(lc, sublist)\" loop.\n>\n> But I don't understand why that is a problem.\n>\n> a) I didn't know what you meant \"regardless of any changes in\n> subscriptions\" because I think the troublesome SetLatch originated\n> from the CREATE SUBSCRIPTION and so there *is* a change to\n> subscriptions.\n\nThe process of setting the latch unfolds as follows: Upon creating a\nnew subscription, the launcher process initiates a request to the\npostmaster, prompting it to initiate a new apply worker process.\nSubsequently, the postmaster commences the apply worker process and\ndispatches a SIGUSR1 signal to the launcher process(this is done from\ndo_start_bgworker & ReportBackgroundWorkerPID). Upon receiving this\nsignal, the launcher process sets the latch.\nNow, there are two potential scenarios:\na) Concurrent Creation of Another Subscription: In this situation, the\nlauncher traverses the subscription list to detect the creation of a\nnew subscription and proceeds to initiate a new apply worker for the\nconcurrently created subscription. This is ok. b) Absence of\nConcurrent Subscription Creation: In this case, since the latch\nremains unset, the launcher iterates through the subscription list and\nidentifies the absence of new subscriptions. This verification occurs\nas the latch remains unset. Here there is an additional check.\nI'm talking about the second scenario where no subscription is\nconcurrently created. In this case, as the latch remains unset, we\nperform an additional check on the subscription list. There is no\nproblem with this.\nThis additional check can occur in the existing code too if the\nfunction WaitForReplicationWorkerAttach returns from the initial if\ncheck i.e. if the worker already started when this check happens.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 30 May 2024 12:13:41 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "I'm don't quite understand the problem we're trying to fix:\n\n> Currently the launcher's latch is used for the following: a) worker\n> process attach b) worker process exit and c) subscription creation.\n> Since this same latch is used for multiple cases, the launcher process\n> is not able to handle concurrent scenarios like: a) Launcher started a\n> new apply worker and waiting for apply worker to attach and b) create\n> subscription sub2 sending launcher wake up signal. 
In this scenario,\n> both of them will set latch of the launcher process, the launcher\n> process is not able to identify that both operations have occurred 1)\n> worker is attached 2) subscription is created and apply worker should\n> be started. As a result the apply worker does not get started for the\n> new subscription created immediately and gets started after the\n> timeout of 180 seconds.\n\nI don't see how we could miss a notification. Yes, both events will set \nthe same latch. Why is that a problem? The loop will see that the new \nworker has attached, and that the new subscription has been created, and \nprocess both events. Right?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 4 Jul 2024 14:22:06 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On Thu, 4 Jul 2024 at 16:52, Heikki Linnakangas <[email protected]> wrote:\n>\n> I'm don't quite understand the problem we're trying to fix:\n>\n> > Currently the launcher's latch is used for the following: a) worker\n> > process attach b) worker process exit and c) subscription creation.\n> > Since this same latch is used for multiple cases, the launcher process\n> > is not able to handle concurrent scenarios like: a) Launcher started a\n> > new apply worker and waiting for apply worker to attach and b) create\n> > subscription sub2 sending launcher wake up signal. In this scenario,\n> > both of them will set latch of the launcher process, the launcher\n> > process is not able to identify that both operations have occurred 1)\n> > worker is attached 2) subscription is created and apply worker should\n> > be started. As a result the apply worker does not get started for the\n> > new subscription created immediately and gets started after the\n> > timeout of 180 seconds.\n>\n> I don't see how we could miss a notification. Yes, both events will set\n> the same latch. Why is that a problem?\n\nThe launcher process waits for the apply worker to attach at\nWaitForReplicationWorkerAttach function. The launcher's latch is\ngetting set concurrently for another create subscription and apply\nworker attached. The launcher now detects the latch is set while\nwaiting at WaitForReplicationWorkerAttach, it resets the latch and\nproceed to the main loop and waits for DEFAULT_NAPTIME_PER_CYCLE(as\nthe latch has already been reset). Further details are provided below.\n\nThe loop will see that the new\n> worker has attached, and that the new subscription has been created, and\n> process both events. Right?\n\nSince the latch is reset at WaitForReplicationWorkerAttach, it skips\nprocessing the create subscription event.\n\nSlightly detailing further:\nIn the scenario when we execute two concurrent create subscription\ncommands, first CREATE SUBSCRIPTION sub1, followed immediately by\nCREATE SUBSCRIPTION sub2.\nIn a few random scenarios, the following events may unfold:\nAfter the first create subscription command(sub1), the backend will\nset the launcher's latch because of ApplyLauncherWakeupAtCommit.\nSubsequently, the launcher process will reset the latch and identify\nthe addition of a new subscription in the pg_subscription list. 
The\nlauncher process will proceed to request the postmaster to start the\napply worker background process (sub1) and request the postmaster to\nnotify the launcher once the apply worker(sub1) has been started.\nLauncher will then wait for the apply worker(sub1) to attach in the\nWaitForReplicationWorkerAttach function.\nMeanwhile, the second CREATE SUBSCRIPTION command (sub2) which was\nexecuted concurrently, also set the launcher's latch(because of\nApplyLauncherWakeupAtCommit).\nSimultaneously when the launcher remains in the\nWaitForReplicationWorkerAttach function waiting for apply worker of\nsub1 to start, the postmaster will start the apply worker for\nsubscription sub1 and send a SIGUSR1 signal to the launcher process\nvia ReportBackgroundWorkerPID. Upon receiving this signal, the\nlauncher process will then set its latch.\n\nNow, the launcher's latch has been set concurrently after the apply\nworker for sub1 is started and the execution of the CREATE\nSUBSCRIPTION sub2 command.\n\nAt this juncture, the launcher, which had been awaiting the attachment\nof the apply worker, detects that the latch is set and proceeds to\nreset it. The reset operation of the latch occurs within the following\nsection of code in WaitForReplicationWorkerAttach:\n...\nrc = WaitLatch(MyLatch,\n WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n 10L, WAIT_EVENT_BGWORKER_STARTUP);\n\nif (rc & WL_LATCH_SET)\n{\nResetLatch(MyLatch);\nCHECK_FOR_INTERRUPTS();\n}\n...\n\nAfter resetting the latch here, the activation signal intended to\nstart the apply worker for subscription sub2 is no longer present. The\nlauncher will return to the ApplyLauncherMain function, where it will\nawait the DEFAULT_NAPTIME_PER_CYCLE, which is 180 seconds, before\nprocessing the create subscription request (i.e., creating a new apply\nworker for sub2).\nThe issue arises from the latch being reset in\nWaitForReplicationWorkerAttach, which can occasionally delay the\nsynchronization of table data for the subscription.\nAnother concurrency scenario, as mentioned by Alexander Lakhin at [3],\ninvolves \"apply worker exiting with failure and concurrent create\nsubscription.\" The underlying cause for both issues is from the same\nsource: the reset of the latch in WaitForReplicationWorkerAttach.\nEven though there is no failure, such issues contribute to a delay of\n180 seconds in the execution of buildfarm tests and local tests.\n\nThis was noticed in buildfarm where 026_stats.pl took more than 180\nseconds at [2]:\n...\n[21:11:21] t/025_rep_changes_for_schema.pl .... ok 3340 ms ( 0.00\nusr 0.00 sys + 0.43 cusr 0.23 csys = 0.66 CPU)\n[21:11:25] t/026_stats.pl ..................... ok 4499 ms ( 0.00\nusr 0.00 sys + 0.42 cusr 0.21 csys = 0.63 CPU)\n[21:14:31] t/027_nosuperuser.pl ............... ok 185457 ms ( 0.01\nusr 0.00 sys + 5.91 cusr 2.68 csys = 8.60 CPU)\n[21:14:35] t/028_row_filter.pl ................ 
ok 3998 ms ( 0.01\nusr 0.00 sys + 0.86 cusr 0.34 csys = 1.21 CPU)\n...\n\nAnd also by Robert at [3].\n\n[1] - https://www.postgresql.org/message-id/858a7622-2c81-1687-d1df-1322dfcb2e72%40gmail.com\n[2] - https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=snakefly&dt=2024-02-01%2020%3A34%3A03&stg=subscription-check\n[3] - https://www.postgresql.org/message-id/CA%2BTgmoZkj%3DA39i4obKXADMhzJW%3D6dyGq-C1aGfb%2BjUy9XvxwYA%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 5 Jul 2024 16:37:46 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On 05/07/2024 14:07, vignesh C wrote:\n> On Thu, 4 Jul 2024 at 16:52, Heikki Linnakangas <[email protected]> wrote:\n>>\n>> I'm don't quite understand the problem we're trying to fix:\n>>\n>>> Currently the launcher's latch is used for the following: a) worker\n>>> process attach b) worker process exit and c) subscription creation.\n>>> Since this same latch is used for multiple cases, the launcher process\n>>> is not able to handle concurrent scenarios like: a) Launcher started a\n>>> new apply worker and waiting for apply worker to attach and b) create\n>>> subscription sub2 sending launcher wake up signal. In this scenario,\n>>> both of them will set latch of the launcher process, the launcher\n>>> process is not able to identify that both operations have occurred 1)\n>>> worker is attached 2) subscription is created and apply worker should\n>>> be started. As a result the apply worker does not get started for the\n>>> new subscription created immediately and gets started after the\n>>> timeout of 180 seconds.\n>>\n>> I don't see how we could miss a notification. Yes, both events will set\n>> the same latch. Why is that a problem?\n> \n> The launcher process waits for the apply worker to attach at\n> WaitForReplicationWorkerAttach function. The launcher's latch is\n> getting set concurrently for another create subscription and apply\n> worker attached. The launcher now detects the latch is set while\n> waiting at WaitForReplicationWorkerAttach, it resets the latch and\n> proceed to the main loop and waits for DEFAULT_NAPTIME_PER_CYCLE(as\n> the latch has already been reset). Further details are provided below.\n> \n> The loop will see that the new\n>> worker has attached, and that the new subscription has been created, and\n>> process both events. Right?\n> \n> Since the latch is reset at WaitForReplicationWorkerAttach, it skips\n> processing the create subscription event.\n> \n> Slightly detailing further:\n> In the scenario when we execute two concurrent create subscription\n> commands, first CREATE SUBSCRIPTION sub1, followed immediately by\n> CREATE SUBSCRIPTION sub2.\n> In a few random scenarios, the following events may unfold:\n> After the first create subscription command(sub1), the backend will\n> set the launcher's latch because of ApplyLauncherWakeupAtCommit.\n> Subsequently, the launcher process will reset the latch and identify\n> the addition of a new subscription in the pg_subscription list. 
The\n> launcher process will proceed to request the postmaster to start the\n> apply worker background process (sub1) and request the postmaster to\n> notify the launcher once the apply worker(sub1) has been started.\n> Launcher will then wait for the apply worker(sub1) to attach in the\n> WaitForReplicationWorkerAttach function.\n> Meanwhile, the second CREATE SUBSCRIPTION command (sub2) which was\n> executed concurrently, also set the launcher's latch(because of\n> ApplyLauncherWakeupAtCommit).\n> Simultaneously when the launcher remains in the\n> WaitForReplicationWorkerAttach function waiting for apply worker of\n> sub1 to start, the postmaster will start the apply worker for\n> subscription sub1 and send a SIGUSR1 signal to the launcher process\n> via ReportBackgroundWorkerPID. Upon receiving this signal, the\n> launcher process will then set its latch.\n> \n> Now, the launcher's latch has been set concurrently after the apply\n> worker for sub1 is started and the execution of the CREATE\n> SUBSCRIPTION sub2 command.\n> \n> At this juncture, the launcher, which had been awaiting the attachment\n> of the apply worker, detects that the latch is set and proceeds to\n> reset it. The reset operation of the latch occurs within the following\n> section of code in WaitForReplicationWorkerAttach:\n> ...\n> rc = WaitLatch(MyLatch,\n> WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> 10L, WAIT_EVENT_BGWORKER_STARTUP);\n> \n> if (rc & WL_LATCH_SET)\n> {\n> ResetLatch(MyLatch);\n> CHECK_FOR_INTERRUPTS();\n> }\n> ...\n> \n> After resetting the latch here, the activation signal intended to\n> start the apply worker for subscription sub2 is no longer present. The\n> launcher will return to the ApplyLauncherMain function, where it will\n> await the DEFAULT_NAPTIME_PER_CYCLE, which is 180 seconds, before\n> processing the create subscription request (i.e., creating a new apply\n> worker for sub2).\n> The issue arises from the latch being reset in\n> WaitForReplicationWorkerAttach, which can occasionally delay the\n> synchronization of table data for the subscription.\n\nOk, I see it now. Thanks for the explanation. So the problem isn't using \nthe same latch for different purposes per se. It's that we're trying to \nuse it in a nested fashion, resetting it in the inner loop.\n\nLooking at the proposed patch more closely:\n\n> @@ -221,13 +224,13 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,\n> \t\t * We need timeout because we generally don't get notified via latch\n> \t\t * about the worker attach. But we don't expect to have to wait long.\n> \t\t */\n> -\t\trc = WaitLatch(MyLatch,\n> +\t\trc = WaitLatch(&worker->launcherWakeupLatch,\n> \t\t\t\t\t WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> \t\t\t\t\t 10L, WAIT_EVENT_BGWORKER_STARTUP);\n> \n> \t\tif (rc & WL_LATCH_SET)\n> \t\t{\n> -\t\t\tResetLatch(MyLatch);\n> +\t\t\tResetLatch(&worker->launcherWakeupLatch);\n> \t\t\tCHECK_FOR_INTERRUPTS();\n> \t\t}\n> \t}\n\nThe comment here is now outdated, right? The patch adds an explicit \nSetLatch() call to ParallelApplyWorkerMain(), to notify the launcher \nabout the attachment.\n\nIs the launcherWakeupLatch field protected by some lock, to protect \nwhich process owns it? OwnLatch() is called in \nlogicalrep_worker_stop_internal() without holding a lock for example. Is \nthere a guarantee that only one process can do that at the same time?\n\nWhat happens if a process never calls DisownLatch(), e.g. 
because it \nbailed out with an error while waiting for the worker to attach or stop?\n\nAs an alternative, smaller fix, I think we could do the attached. It \nforces the launcher's main loop to do another iteration, if it calls \nlogicalrep_worker_launch(). That extra iteration should pick up any \nmissed notifications.\n\nYour solution with an additional latch seems better because it makes \nWaitForReplicationWorkerAttach() react more quickly, without the 10 s \nwait. I'm surprised we have that in the first place, 10 s seems like a \npretty long time to wait for a parallel apply worker to start. Why was \nthat ever OK?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Fri, 5 Jul 2024 16:08:07 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On Fri, Jul 5, 2024 at 6:38 PM Heikki Linnakangas <[email protected]> wrote:\n>\n>\n> Your solution with an additional latch seems better because it makes\n> WaitForReplicationWorkerAttach() react more quickly, without the 10 s\n> wait. I'm surprised we have that in the first place, 10 s seems like a\n> pretty long time to wait for a parallel apply worker to start. Why was\n> that ever OK?\n>\n\nIsn't the call wait for 10 milliseconds? The comment atop\nWaitLatch(\"The \"timeout\" is given in milliseconds...) indicates the\ntimeout is in milliseconds.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 8 Jul 2024 15:25:35 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On Fri, 5 Jul 2024 at 18:38, Heikki Linnakangas <[email protected]> wrote:\n>\n> On 05/07/2024 14:07, vignesh C wrote:\n> > On Thu, 4 Jul 2024 at 16:52, Heikki Linnakangas <[email protected]> wrote:\n> >>\n> >> I'm don't quite understand the problem we're trying to fix:\n> >>\n> >>> Currently the launcher's latch is used for the following: a) worker\n> >>> process attach b) worker process exit and c) subscription creation.\n> >>> Since this same latch is used for multiple cases, the launcher process\n> >>> is not able to handle concurrent scenarios like: a) Launcher started a\n> >>> new apply worker and waiting for apply worker to attach and b) create\n> >>> subscription sub2 sending launcher wake up signal. In this scenario,\n> >>> both of them will set latch of the launcher process, the launcher\n> >>> process is not able to identify that both operations have occurred 1)\n> >>> worker is attached 2) subscription is created and apply worker should\n> >>> be started. As a result the apply worker does not get started for the\n> >>> new subscription created immediately and gets started after the\n> >>> timeout of 180 seconds.\n> >>\n> >> I don't see how we could miss a notification. Yes, both events will set\n> >> the same latch. Why is that a problem?\n> >\n> > The launcher process waits for the apply worker to attach at\n> > WaitForReplicationWorkerAttach function. The launcher's latch is\n> > getting set concurrently for another create subscription and apply\n> > worker attached. The launcher now detects the latch is set while\n> > waiting at WaitForReplicationWorkerAttach, it resets the latch and\n> > proceed to the main loop and waits for DEFAULT_NAPTIME_PER_CYCLE(as\n> > the latch has already been reset). 
Further details are provided below.\n> >\n> > The loop will see that the new\n> >> worker has attached, and that the new subscription has been created, and\n> >> process both events. Right?\n> >\n> > Since the latch is reset at WaitForReplicationWorkerAttach, it skips\n> > processing the create subscription event.\n> >\n> > Slightly detailing further:\n> > In the scenario when we execute two concurrent create subscription\n> > commands, first CREATE SUBSCRIPTION sub1, followed immediately by\n> > CREATE SUBSCRIPTION sub2.\n> > In a few random scenarios, the following events may unfold:\n> > After the first create subscription command(sub1), the backend will\n> > set the launcher's latch because of ApplyLauncherWakeupAtCommit.\n> > Subsequently, the launcher process will reset the latch and identify\n> > the addition of a new subscription in the pg_subscription list. The\n> > launcher process will proceed to request the postmaster to start the\n> > apply worker background process (sub1) and request the postmaster to\n> > notify the launcher once the apply worker(sub1) has been started.\n> > Launcher will then wait for the apply worker(sub1) to attach in the\n> > WaitForReplicationWorkerAttach function.\n> > Meanwhile, the second CREATE SUBSCRIPTION command (sub2) which was\n> > executed concurrently, also set the launcher's latch(because of\n> > ApplyLauncherWakeupAtCommit).\n> > Simultaneously when the launcher remains in the\n> > WaitForReplicationWorkerAttach function waiting for apply worker of\n> > sub1 to start, the postmaster will start the apply worker for\n> > subscription sub1 and send a SIGUSR1 signal to the launcher process\n> > via ReportBackgroundWorkerPID. Upon receiving this signal, the\n> > launcher process will then set its latch.\n> >\n> > Now, the launcher's latch has been set concurrently after the apply\n> > worker for sub1 is started and the execution of the CREATE\n> > SUBSCRIPTION sub2 command.\n> >\n> > At this juncture, the launcher, which had been awaiting the attachment\n> > of the apply worker, detects that the latch is set and proceeds to\n> > reset it. The reset operation of the latch occurs within the following\n> > section of code in WaitForReplicationWorkerAttach:\n> > ...\n> > rc = WaitLatch(MyLatch,\n> > WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> > 10L, WAIT_EVENT_BGWORKER_STARTUP);\n> >\n> > if (rc & WL_LATCH_SET)\n> > {\n> > ResetLatch(MyLatch);\n> > CHECK_FOR_INTERRUPTS();\n> > }\n> > ...\n> >\n> > After resetting the latch here, the activation signal intended to\n> > start the apply worker for subscription sub2 is no longer present. The\n> > launcher will return to the ApplyLauncherMain function, where it will\n> > await the DEFAULT_NAPTIME_PER_CYCLE, which is 180 seconds, before\n> > processing the create subscription request (i.e., creating a new apply\n> > worker for sub2).\n> > The issue arises from the latch being reset in\n> > WaitForReplicationWorkerAttach, which can occasionally delay the\n> > synchronization of table data for the subscription.\n>\n> Ok, I see it now. Thanks for the explanation. So the problem isn't using\n> the same latch for different purposes per se. It's that we're trying to\n> use it in a nested fashion, resetting it in the inner loop.\n>\n> Looking at the proposed patch more closely:\n>\n> > @@ -221,13 +224,13 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,\n> > * We need timeout because we generally don't get notified via latch\n> > * about the worker attach. 
But we don't expect to have to wait long.\n> > */\n> > - rc = WaitLatch(MyLatch,\n> > + rc = WaitLatch(&worker->launcherWakeupLatch,\n> > WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> > 10L, WAIT_EVENT_BGWORKER_STARTUP);\n> >\n> > if (rc & WL_LATCH_SET)\n> > {\n> > - ResetLatch(MyLatch);\n> > + ResetLatch(&worker->launcherWakeupLatch);\n> > CHECK_FOR_INTERRUPTS();\n> > }\n> > }\n>\n> The comment here is now outdated, right? The patch adds an explicit\n> SetLatch() call to ParallelApplyWorkerMain(), to notify the launcher\n> about the attachment.\n\nI will update the comment for this.\n\n> Is the launcherWakeupLatch field protected by some lock, to protect\n> which process owns it? OwnLatch() is called in\n> logicalrep_worker_stop_internal() without holding a lock for example. Is\n> there a guarantee that only one process can do that at the same time?\n\nI have analyzed a few scenarios, I will analyze the remaining\nscenarios and update on this.\n\n> What happens if a process never calls DisownLatch(), e.g. because it\n> bailed out with an error while waiting for the worker to attach or stop?\n\nI tried a lot of scenarios by erroring out after ownlatch call and did\nnot hit the panic code of OwnLatch yet. I will try a few more\nscenarios that I have in mind and see if we can hit the PANIC in\nOwnLatch and update on this.\n\n> As an alternative, smaller fix, I think we could do the attached. It\n> forces the launcher's main loop to do another iteration, if it calls\n> logicalrep_worker_launch(). That extra iteration should pick up any\n> missed notifications.\n\nThis also works. I had another solution in similar lines like you\nproposed at [1], where we don't reset the latch at the inner loop. I'm\nnot sure which solution we should proceed with. Thoughts?\n\n> Your solution with an additional latch seems better because it makes\n> WaitForReplicationWorkerAttach() react more quickly, without the 10 s\n> wait. I'm surprised we have that in the first place, 10 s seems like a\n> pretty long time to wait for a parallel apply worker to start. Why was\n> that ever OK?\n\nHere 10 means 10 milliseconds, so it just waits for 10 milliseconds\nbefore going to the next iteration.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm10R7L0Dxq%2B-J%3DPp3AfM_yaokpbhECvJ69QiGH8-jQquw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 8 Jul 2024 17:46:47 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On Mon, Jul 8, 2024 at 5:47 PM vignesh C <[email protected]> wrote:\n>\n> On Fri, 5 Jul 2024 at 18:38, Heikki Linnakangas <[email protected]> wrote:\n> >\n>\n> > As an alternative, smaller fix, I think we could do the attached. It\n> > forces the launcher's main loop to do another iteration, if it calls\n> > logicalrep_worker_launch(). That extra iteration should pick up any\n> > missed notifications.\n>\n> This also works.\n>\n\nThe minor drawback would be that in many cases the extra iteration\nwould not lead to anything meaningful.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 8 Jul 2024 18:02:01 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." 
}, { "msg_contents": "On Mon, 8 Jul 2024 at 17:46, vignesh C <[email protected]> wrote:\n>\n> On Fri, 5 Jul 2024 at 18:38, Heikki Linnakangas <[email protected]> wrote:\n> >\n> > On 05/07/2024 14:07, vignesh C wrote:\n> > > On Thu, 4 Jul 2024 at 16:52, Heikki Linnakangas <[email protected]> wrote:\n> > >>\n> > >> I'm don't quite understand the problem we're trying to fix:\n> > >>\n> > >>> Currently the launcher's latch is used for the following: a) worker\n> > >>> process attach b) worker process exit and c) subscription creation.\n> > >>> Since this same latch is used for multiple cases, the launcher process\n> > >>> is not able to handle concurrent scenarios like: a) Launcher started a\n> > >>> new apply worker and waiting for apply worker to attach and b) create\n> > >>> subscription sub2 sending launcher wake up signal. In this scenario,\n> > >>> both of them will set latch of the launcher process, the launcher\n> > >>> process is not able to identify that both operations have occurred 1)\n> > >>> worker is attached 2) subscription is created and apply worker should\n> > >>> be started. As a result the apply worker does not get started for the\n> > >>> new subscription created immediately and gets started after the\n> > >>> timeout of 180 seconds.\n> > >>\n> > >> I don't see how we could miss a notification. Yes, both events will set\n> > >> the same latch. Why is that a problem?\n> > >\n> > > The launcher process waits for the apply worker to attach at\n> > > WaitForReplicationWorkerAttach function. The launcher's latch is\n> > > getting set concurrently for another create subscription and apply\n> > > worker attached. The launcher now detects the latch is set while\n> > > waiting at WaitForReplicationWorkerAttach, it resets the latch and\n> > > proceed to the main loop and waits for DEFAULT_NAPTIME_PER_CYCLE(as\n> > > the latch has already been reset). Further details are provided below.\n> > >\n> > > The loop will see that the new\n> > >> worker has attached, and that the new subscription has been created, and\n> > >> process both events. Right?\n> > >\n> > > Since the latch is reset at WaitForReplicationWorkerAttach, it skips\n> > > processing the create subscription event.\n> > >\n> > > Slightly detailing further:\n> > > In the scenario when we execute two concurrent create subscription\n> > > commands, first CREATE SUBSCRIPTION sub1, followed immediately by\n> > > CREATE SUBSCRIPTION sub2.\n> > > In a few random scenarios, the following events may unfold:\n> > > After the first create subscription command(sub1), the backend will\n> > > set the launcher's latch because of ApplyLauncherWakeupAtCommit.\n> > > Subsequently, the launcher process will reset the latch and identify\n> > > the addition of a new subscription in the pg_subscription list. 
The\n> > > launcher process will proceed to request the postmaster to start the\n> > > apply worker background process (sub1) and request the postmaster to\n> > > notify the launcher once the apply worker(sub1) has been started.\n> > > Launcher will then wait for the apply worker(sub1) to attach in the\n> > > WaitForReplicationWorkerAttach function.\n> > > Meanwhile, the second CREATE SUBSCRIPTION command (sub2) which was\n> > > executed concurrently, also set the launcher's latch(because of\n> > > ApplyLauncherWakeupAtCommit).\n> > > Simultaneously when the launcher remains in the\n> > > WaitForReplicationWorkerAttach function waiting for apply worker of\n> > > sub1 to start, the postmaster will start the apply worker for\n> > > subscription sub1 and send a SIGUSR1 signal to the launcher process\n> > > via ReportBackgroundWorkerPID. Upon receiving this signal, the\n> > > launcher process will then set its latch.\n> > >\n> > > Now, the launcher's latch has been set concurrently after the apply\n> > > worker for sub1 is started and the execution of the CREATE\n> > > SUBSCRIPTION sub2 command.\n> > >\n> > > At this juncture, the launcher, which had been awaiting the attachment\n> > > of the apply worker, detects that the latch is set and proceeds to\n> > > reset it. The reset operation of the latch occurs within the following\n> > > section of code in WaitForReplicationWorkerAttach:\n> > > ...\n> > > rc = WaitLatch(MyLatch,\n> > > WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> > > 10L, WAIT_EVENT_BGWORKER_STARTUP);\n> > >\n> > > if (rc & WL_LATCH_SET)\n> > > {\n> > > ResetLatch(MyLatch);\n> > > CHECK_FOR_INTERRUPTS();\n> > > }\n> > > ...\n> > >\n> > > After resetting the latch here, the activation signal intended to\n> > > start the apply worker for subscription sub2 is no longer present. The\n> > > launcher will return to the ApplyLauncherMain function, where it will\n> > > await the DEFAULT_NAPTIME_PER_CYCLE, which is 180 seconds, before\n> > > processing the create subscription request (i.e., creating a new apply\n> > > worker for sub2).\n> > > The issue arises from the latch being reset in\n> > > WaitForReplicationWorkerAttach, which can occasionally delay the\n> > > synchronization of table data for the subscription.\n> >\n> > Ok, I see it now. Thanks for the explanation. So the problem isn't using\n> > the same latch for different purposes per se. It's that we're trying to\n> > use it in a nested fashion, resetting it in the inner loop.\n> >\n> > Looking at the proposed patch more closely:\n> >\n> > > @@ -221,13 +224,13 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,\n> > > * We need timeout because we generally don't get notified via latch\n> > > * about the worker attach. But we don't expect to have to wait long.\n> > > */\n> > > - rc = WaitLatch(MyLatch,\n> > > + rc = WaitLatch(&worker->launcherWakeupLatch,\n> > > WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> > > 10L, WAIT_EVENT_BGWORKER_STARTUP);\n> > >\n> > > if (rc & WL_LATCH_SET)\n> > > {\n> > > - ResetLatch(MyLatch);\n> > > + ResetLatch(&worker->launcherWakeupLatch);\n> > > CHECK_FOR_INTERRUPTS();\n> > > }\n> > > }\n> >\n> > The comment here is now outdated, right? The patch adds an explicit\n> > SetLatch() call to ParallelApplyWorkerMain(), to notify the launcher\n> > about the attachment.\n>\n> I will update the comment for this.\n>\n> > Is the launcherWakeupLatch field protected by some lock, to protect\n> > which process owns it? 
OwnLatch() is called in\n> > logicalrep_worker_stop_internal() without holding a lock for example. Is\n> > there a guarantee that only one process can do that at the same time?\n>\n> I have analyzed a few scenarios, I will analyze the remaining\n> scenarios and update on this.\n>\n> > What happens if a process never calls DisownLatch(), e.g. because it\n> > bailed out with an error while waiting for the worker to attach or stop?\n>\n> I tried a lot of scenarios by erroring out after ownlatch call and did\n> not hit the panic code of OwnLatch yet. I will try a few more\n> scenarios that I have in mind and see if we can hit the PANIC in\n> OwnLatch and update on this.\n\nI could encounter the PANIC from OwnLatch by having a lot of create\nsubscriptions doing a table sync with a limited number of slots. This\nand the above scenario are slightly related. I need some more time to\nprepare the fix.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 11 Jul 2024 10:16:21 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On Fri, 5 Jul 2024 at 18:38, Heikki Linnakangas <[email protected]> wrote:\n>\n> On 05/07/2024 14:07, vignesh C wrote:\n> > On Thu, 4 Jul 2024 at 16:52, Heikki Linnakangas <[email protected]> wrote:\n> >>\n> >> I'm don't quite understand the problem we're trying to fix:\n> >>\n> >>> Currently the launcher's latch is used for the following: a) worker\n> >>> process attach b) worker process exit and c) subscription creation.\n> >>> Since this same latch is used for multiple cases, the launcher process\n> >>> is not able to handle concurrent scenarios like: a) Launcher started a\n> >>> new apply worker and waiting for apply worker to attach and b) create\n> >>> subscription sub2 sending launcher wake up signal. In this scenario,\n> >>> both of them will set latch of the launcher process, the launcher\n> >>> process is not able to identify that both operations have occurred 1)\n> >>> worker is attached 2) subscription is created and apply worker should\n> >>> be started. As a result the apply worker does not get started for the\n> >>> new subscription created immediately and gets started after the\n> >>> timeout of 180 seconds.\n> >>\n> >> I don't see how we could miss a notification. Yes, both events will set\n> >> the same latch. Why is that a problem?\n> >\n> > The launcher process waits for the apply worker to attach at\n> > WaitForReplicationWorkerAttach function. The launcher's latch is\n> > getting set concurrently for another create subscription and apply\n> > worker attached. The launcher now detects the latch is set while\n> > waiting at WaitForReplicationWorkerAttach, it resets the latch and\n> > proceed to the main loop and waits for DEFAULT_NAPTIME_PER_CYCLE(as\n> > the latch has already been reset). Further details are provided below.\n> >\n> > The loop will see that the new\n> >> worker has attached, and that the new subscription has been created, and\n> >> process both events. 
Right?\n> >\n> > Since the latch is reset at WaitForReplicationWorkerAttach, it skips\n> > processing the create subscription event.\n> >\n> > Slightly detailing further:\n> > In the scenario when we execute two concurrent create subscription\n> > commands, first CREATE SUBSCRIPTION sub1, followed immediately by\n> > CREATE SUBSCRIPTION sub2.\n> > In a few random scenarios, the following events may unfold:\n> > After the first create subscription command(sub1), the backend will\n> > set the launcher's latch because of ApplyLauncherWakeupAtCommit.\n> > Subsequently, the launcher process will reset the latch and identify\n> > the addition of a new subscription in the pg_subscription list. The\n> > launcher process will proceed to request the postmaster to start the\n> > apply worker background process (sub1) and request the postmaster to\n> > notify the launcher once the apply worker(sub1) has been started.\n> > Launcher will then wait for the apply worker(sub1) to attach in the\n> > WaitForReplicationWorkerAttach function.\n> > Meanwhile, the second CREATE SUBSCRIPTION command (sub2) which was\n> > executed concurrently, also set the launcher's latch(because of\n> > ApplyLauncherWakeupAtCommit).\n> > Simultaneously when the launcher remains in the\n> > WaitForReplicationWorkerAttach function waiting for apply worker of\n> > sub1 to start, the postmaster will start the apply worker for\n> > subscription sub1 and send a SIGUSR1 signal to the launcher process\n> > via ReportBackgroundWorkerPID. Upon receiving this signal, the\n> > launcher process will then set its latch.\n> >\n> > Now, the launcher's latch has been set concurrently after the apply\n> > worker for sub1 is started and the execution of the CREATE\n> > SUBSCRIPTION sub2 command.\n> >\n> > At this juncture, the launcher, which had been awaiting the attachment\n> > of the apply worker, detects that the latch is set and proceeds to\n> > reset it. The reset operation of the latch occurs within the following\n> > section of code in WaitForReplicationWorkerAttach:\n> > ...\n> > rc = WaitLatch(MyLatch,\n> > WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> > 10L, WAIT_EVENT_BGWORKER_STARTUP);\n> >\n> > if (rc & WL_LATCH_SET)\n> > {\n> > ResetLatch(MyLatch);\n> > CHECK_FOR_INTERRUPTS();\n> > }\n> > ...\n> >\n> > After resetting the latch here, the activation signal intended to\n> > start the apply worker for subscription sub2 is no longer present. The\n> > launcher will return to the ApplyLauncherMain function, where it will\n> > await the DEFAULT_NAPTIME_PER_CYCLE, which is 180 seconds, before\n> > processing the create subscription request (i.e., creating a new apply\n> > worker for sub2).\n> > The issue arises from the latch being reset in\n> > WaitForReplicationWorkerAttach, which can occasionally delay the\n> > synchronization of table data for the subscription.\n>\n> Ok, I see it now. Thanks for the explanation. So the problem isn't using\n> the same latch for different purposes per se. It's that we're trying to\n> use it in a nested fashion, resetting it in the inner loop.\n>\n> Looking at the proposed patch more closely:\n>\n> > @@ -221,13 +224,13 @@ WaitForReplicationWorkerAttach(LogicalRepWorker *worker,\n> > * We need timeout because we generally don't get notified via latch\n> > * about the worker attach. 
But we don't expect to have to wait long.\n> > */\n> > - rc = WaitLatch(MyLatch,\n> > + rc = WaitLatch(&worker->launcherWakeupLatch,\n> > WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> > 10L, WAIT_EVENT_BGWORKER_STARTUP);\n> >\n> > if (rc & WL_LATCH_SET)\n> > {\n> > - ResetLatch(MyLatch);\n> > + ResetLatch(&worker->launcherWakeupLatch);\n> > CHECK_FOR_INTERRUPTS();\n> > }\n> > }\n>\n> The comment here is now outdated, right? The patch adds an explicit\n> SetLatch() call to ParallelApplyWorkerMain(), to notify the launcher\n> about the attachment.\n\nI have removed these comments\n\n> Is the launcherWakeupLatch field protected by some lock, to protect\n> which process owns it? OwnLatch() is called in\n> logicalrep_worker_stop_internal() without holding a lock for example. Is\n> there a guarantee that only one process can do that at the same time?\n\nAdded a lock to prevent concurrent access\n\n> What happens if a process never calls DisownLatch(), e.g. because it\n> bailed out with an error while waiting for the worker to attach or stop?\n\nThis will not happen now as we call own and disown just before wait\nlatch and not in other cases.\n\nThe attached v2 version patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 3 Sep 2024 09:10:07 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "At Tue, 3 Sep 2024 09:10:07 +0530, vignesh C <[email protected]> wrote in \r\n> The attached v2 version patch has the changes for the same.\r\n\r\nSorry for jumping in at this point. I've just reviewed the latest\r\npatch (v2), and the frequent Own/Disown-Latch operations caught my\r\nattention. Additionally, handling multiple concurrently operating\r\ntrigger sources with nested latch waits seems bug-prone, which I’d\r\nprefer to avoid from both a readability and safety perspective.\r\n\r\nWith that in mind, I’d like to suggest an alternative approach. I may\r\nnot be fully aware of all the previous discussions, so apologies if\r\nthis idea has already been considered and dismissed.\r\n\r\nCurrently, WaitForReplicationWorkerAttach() and\r\nlogicalrep_worker_stop_internal() wait on a latch after verifying the\r\ndesired state. This ensures that even if there are spurious or missed\r\nwakeups, they won't cause issues. In contrast, ApplyLauncherMain()\r\nenters a latch wait without checking the desired state\r\nfirst. Consequently, if another process sets the latch to wake up the\r\nmain loop while the former two functions are waiting, that wakeup\r\ncould be missed. If my understanding is correct, the problem lies in\r\nApplyLauncherMain() not checking the expected state before beginning\r\nto wait on the latch. There is no issue with waiting if the state\r\nhasn't been satisfied yet.\r\n\r\nSo, I propose that ApplyLauncherMain() should check the condition that\r\ntriggers a main loop wakeup before calling WaitLatch(). To do this, we\r\ncould add a flag in LogicalRepCtxStruct to signal that the main loop\r\nhas immediate tasks to handle. ApplyLauncherWakeup() would set this\r\nflag before sending SIGUSR1. In turn, ApplyLauncherMain() would check\r\nthis flag before calling WaitLatch() and skip the WaitLatch() call if\r\nthe flag is set.\r\n\r\nI think this approach could solve the issue without adding\r\ncomplexity. 
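For instance, in rough outline (only a sketch of the idea; the flag\r\nname is made up, and locking/memory-ordering details are glossed\r\nover):\r\n\r\n/* LogicalRepCtxStruct: new member */\r\nbool launcher_has_work;\r\n\r\n/* ApplyLauncherWakeup(): record the pending work before signaling */\r\nLogicalRepCtx->launcher_has_work = true;\r\nif (LogicalRepCtx->launcher_pid != 0)\r\n kill(LogicalRepCtx->launcher_pid, SIGUSR1);\r\n\r\n/* ApplyLauncherMain(): skip the sleep when work is already pending */\r\nif (!LogicalRepCtx->launcher_has_work)\r\n (void) WaitLatch(MyLatch,\r\n WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\r\n wait_time, WAIT_EVENT_LOGICAL_LAUNCHER_MAIN);\r\nLogicalRepCtx->launcher_has_work = false;\r\nResetLatch(MyLatch);\r\n\r\nThe flag is cleared just before the next scan of pg_subscription, so a\r\nwakeup that arrives after the clear is simply seen at the following\r\ncheck; nothing is lost even if the latch itself was consumed elsewhere.\r\n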
What do you think?\r\n\r\nregard.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Wed, 04 Sep 2024 12:02:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving the latch handling between logical replication\n launcher and worker processes." }, { "msg_contents": "On Wed, 4 Sept 2024 at 08:32, Kyotaro Horiguchi <[email protected]> wrote:\n>\n> At Tue, 3 Sep 2024 09:10:07 +0530, vignesh C <[email protected]> wrote in\n> > The attached v2 version patch has the changes for the same.\n>\n> Sorry for jumping in at this point. I've just reviewed the latest\n> patch (v2), and the frequent Own/Disown-Latch operations caught my\n> attention. Additionally, handling multiple concurrently operating\n> trigger sources with nested latch waits seems bug-prone, which I’d\n> prefer to avoid from both a readability and safety perspective.\n>\n> With that in mind, I’d like to suggest an alternative approach. I may\n> not be fully aware of all the previous discussions, so apologies if\n> this idea has already been considered and dismissed.\n>\n> Currently, WaitForReplicationWorkerAttach() and\n> logicalrep_worker_stop_internal() wait on a latch after verifying the\n> desired state. This ensures that even if there are spurious or missed\n> wakeups, they won't cause issues. In contrast, ApplyLauncherMain()\n> enters a latch wait without checking the desired state\n> first. Consequently, if another process sets the latch to wake up the\n> main loop while the former two functions are waiting, that wakeup\n> could be missed. If my understanding is correct, the problem lies in\n> ApplyLauncherMain() not checking the expected state before beginning\n> to wait on the latch. There is no issue with waiting if the state\n> hasn't been satisfied yet.\n>\n> So, I propose that ApplyLauncherMain() should check the condition that\n> triggers a main loop wakeup before calling WaitLatch(). To do this, we\n> could add a flag in LogicalRepCtxStruct to signal that the main loop\n> has immediate tasks to handle. ApplyLauncherWakeup() would set this\n> flag before sending SIGUSR1. In turn, ApplyLauncherMain() would check\n> this flag before calling WaitLatch() and skip the WaitLatch() call if\n> the flag is set.\n>\n> I think this approach could solve the issue without adding\n> complexity. What do you think?\n\nI agree that this approach is more simple than the other approach. How\nabout something like the attached patch to handle the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 4 Sep 2024 16:54:35 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." }, { "msg_contents": "On 04/09/2024 14:24, vignesh C wrote:\n> On Wed, 4 Sept 2024 at 08:32, Kyotaro Horiguchi <[email protected]> wrote:\n>> I think this approach could solve the issue without adding\n>> complexity. What do you think?\n> \n> I agree that this approach is more simple than the other approach. How\n> about something like the attached patch to handle the same.\n\nI haven't looked at these new patches from the last few days, but please \nalso note the work at \nhttps://www.postgresql.org/message-id/476672e7-62f1-4cab-a822-f3a8e949dd3f%40iki.fi. 
\nIf those \"interrupts\" patches are committed, this is pretty \nstraightforward to fix by using a separate interrupt bit for this, as \nthe patch on that thread does.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 4 Sep 2024 16:53:11 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving the latch handling between logical replication launcher\n and worker processes." } ]
[ { "msg_contents": "Hi hackers,\n\nAn IRC conversation just now made me notice that it would be handy to\nhave stable links for the descrpitions of the various COPY formats, per\nthe attached patch.\n\n- ilmari", "msg_date": "Thu, 25 Apr 2024 12:23:18 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": true, "msg_subject": "Doc anchors for COPY formats" }, { "msg_contents": "> On 25 Apr 2024, at 13:23, Dagfinn Ilmari Mannsåker <[email protected]> wrote:\n\n> An IRC conversation just now made me notice that it would be handy to\n> have stable links for the descrpitions of the various COPY formats, per\n> the attached patch.\n\nNo objections, that seems perfectly a reasonable idea. Maybe we should set an\nxreflabel while at it for completeness sake?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 25 Apr 2024 13:51:47 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc anchors for COPY formats" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n\n>> On 25 Apr 2024, at 13:23, Dagfinn Ilmari Mannsåker <[email protected]> wrote:\n>\n>> An IRC conversation just now made me notice that it would be handy to\n>> have stable links for the descrpitions of the various COPY formats, per\n>> the attached patch.\n>\n> No objections, that seems perfectly a reasonable idea. Maybe we should set an\n> xreflabel while at it for completeness sake?\n\nxreflabel only affects the link text when using <xref>, but there are no\nlinks to these sections in the docs, so I don't see much point. One\nthing that could make sense would be to link to the File Formats section\nfrom the FORMAT keyword docs, like the attached.\n\n- ilmari", "msg_date": "Thu, 25 Apr 2024 15:48:48 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Doc anchors for COPY formats" } ]
[ { "msg_contents": "Hi!\n\nRecently, I'm stumbled in such an easy task as build Postgres with clang\nand\nsanitizer.\n\nUsually, I use autotools to build Postgres something like this:\n================================================================================\nSRC=\"../postgres\"\nTRG=\"/tmp\"\n\nLINUX_CONFIGURE_FEATURES=\"\n --without-llvm\n --with-tcl --with-tclconfig=/usr/lib/tcl8.6/ --with-perl\n --with-python --with-gssapi --with-pam --with-ldap --with-selinux\n --with-systemd --with-uuid=ossp --with-libxml --with-libxslt --with-zstd\n --with-ssl=openssl\n\"\n\nCC=\"ccache clang\" CXX=\"ccache clang++\" \\\nCFLAGS=\"-Og -ggdb -fno-sanitize-recover=all -fsanitize=alignment,undefined\"\n\\\nCXXFLAGS=\"-Og -ggdb -fno-sanitize-recover=all\n-fsanitize=alignment,undefined\" \\\nLDFALGS=\"-fsanitize=alignment,undefined\" \\\n\\\n$SRC/configure \\\n -C \\\n --prefix=$TRG/\"pgsql\" \\\n --enable-debug --enable-tap-tests --enable-depend --enable-cassert \\\n --enable-injection-points --enable-nls \\\n $LINUX_CONFIGURE_FEATURES\n\n...\n\n$ ./config.status --config\n'-C' '--prefix=/tmp/pgsql' '--enable-debug' '--enable-tap-tests'\n'--enable-depend'\n'--enable-cassert' '--enable-injection-points' '--enable-nls'\n'--without-llvm'\n'--with-tcl' '--with-tclconfig=/usr/lib/tcl8.6/' '--with-perl'\n'--with-python'\n'--with-gssapi' '--with-pam' '--with-ldap' '--with-selinux'\n'--with-systemd'\n'--with-uuid=ossp' '--with-libxml' '--with-libxslt' '--with-zstd'\n'--with-ssl=openssl'\n'CC=ccache clang'\n'CFLAGS=-Og -ggdb -fno-sanitize-recover=all -fsanitize=alignment,undefined'\n'CXX=ccache clang++'\n'CXXFLAGS=-Og -ggdb -fno-sanitize-recover=all\n-fsanitize=alignment,undefined'\n================================================================================\n\nThen it compiles with no problems.\n\nNow I exact the same, but with meson build.\n================================================================================\nLINUX_CONFIGURE_FEATURES=\"\n--with-gssapi --with-icu --with-ldap --with-libxml --with-libxslt\n--with-lz4 --with-zstd --with-pam --with-perl --with-python\n--with-tcl --with-tclconfig=/usr/lib/tcl8.6\n--with-selinux --with-sll=openssl --with-systemd --with-uuid=ossp\n\"\n\nLINUX_MESON_FEATURES=\"-Dllvm=disabled -Duuid=e2fs\"\n\nPG_TEST_EXTRA=\"kerberos ldap ssl libpq_encryption load_balance\"\n\nSANITIZER_FLAGS=\"-fsanitize=alignment,undefined\" \\\nCC=\"ccache clang\" \\\nCXX=\"ccache clang++\" \\\nCFLAGS=\"-Og -ggdb -fno-sanitize-recover=all $SANITIZER_FLAGS\" \\\nCXXFLAGS=\"$CFLAGS\" \\\nLDFALGS=\"$SANITIZER_FLAGS\" \\\nmeson setup \\\n --buildtype=debug -Dcassert=true -Dinjection_points=true \\\n -Dprefix=/tmp/pgsql \\\n ${LINUX_MESON_FEATURES} \\\n -DPG_TEST_EXTRA=\"$PG_TEST_EXTRA\" \\\n build-meson postgres\n\n...\n\n System\n host system : linux x86_64\n build system : linux x86_64\n\n Compiler\n linker : ld.bfd\n C compiler : clang 14.0.0-1ubuntu1\n\n Compiler Flags\n CPP FLAGS : -D_GNU_SOURCE\n C FLAGS, functional : -fno-strict-aliasing -fwrapv\n C FLAGS, warnings : -Wmissing-prototypes -Wpointer-arith\n-Werror=vla\n-Werror=unguarded-availability-new -Wendif-labels\n-Wmissing-format-attribute\n-Wcast-function-type -Wformat-security -Wdeclaration-after-statement\n-Wno-unused-command-line-argument -Wno-compound-token-split-by-macro\n C FLAGS, modules : -fvisibility=hidden\n C FLAGS, user specified: -Og -ggdb -fno-sanitize-recover=all\n-fsanitize=alignment,undefined\n LD FLAGS : -Og -ggdb 
-fno-sanitize-recover=all\n-fsanitize=alignment,undefined\n\n...\n\n================================================================================\n\nAnd then upon build I've got overwhelmed by thousands of undefined\nreference errors.\n\nfe-auth-scram.c:(.text+0x17a): undefined reference to\n`__ubsan_handle_builtin_unreachable'\n/usr/bin/ld: fe-auth-scram.c:(.text+0x189): undefined reference to\n`__ubsan_handle_type_mismatch_v1_abort'\n/usr/bin/ld: fe-auth-scram.c:(.text+0x195): undefined reference to\n`__ubsan_handle_type_mismatch_v1_abort'\n/usr/bin/ld: fe-auth-scram.c:(.text+0x1a1): undefined reference to\n`__ubsan_handle_type_mismatch_v1_abort'\n/usr/bin/ld: fe-auth-scram.c:(.text+0x1ad): undefined reference to\n`__ubsan_handle_type_mismatch_v1_abort'\n/usr/bin/ld: fe-auth-scram.c:(.text+0x1b9): undefined reference to\n`__ubsan_handle_type_mismatch_v1_abort'\n/usr/bin/ld:\nsrc/interfaces/libpq/libpq.so.5.17.p/fe-auth-scram.c.o:fe-auth-scram.c:(.text+0x1c8):\nmore undefined references to `__ubsan_handle_type_mismatch_v1_abort' follow\n/usr/bin/ld: src/interfaces/libpq/libpq.so.5.17.p/fe-auth-scram.c.o: in\nfunction `scram_init':\nfe-auth-scram.c:(.text+0x1d4): undefined reference to\n`__ubsan_handle_nonnull_arg_abort'\n/usr/bin/ld: src/interfaces/libpq/libpq.so.5.17.p/fe-auth-scram.c.o: in\nfunction `scram_exchange':\nfe-auth-scram.c:(.text+0x11c2): undefined reference to\n`__ubsan_handle_type_mismatch_v1_abort'\n/usr/bin/ld: fe-auth-scram.c:(.text+0x11d1): undefined reference to\n`__ubsan_handle_type_mismatch_v1_abort'\n/usr/bin/ld: fe-auth-scram.c:(.text+0x11e0): undefined reference to\n`__ubsan_handle_type_mismatch_v1_abort'\n/usr/bin/ld: fe-auth-scram.c:(.text+0x11ef): undefined reference to\n`__ubsan_handle_type_mismatch_v1_abort'\n/usr/bin/ld: fe-auth-scram.c:(.text+0x11fe): undefined reference to\n`__ubsan_handle_type_mismatch_v1_abort'\n/usr/bin/ld:\nsrc/interfaces/libpq/libpq.so.5.17.p/fe-auth-scram.c.o:fe-auth-scram.c:(.text+0x120d):\nmore undefined references to `__ubsan_handle_type_mismatch_v1_abort' follow\n/usr/bin/ld: src/interfaces/libpq/libpq.so.5.17.p/fe-auth-scram.c.o: in\nfunction `scram_exchange':\n...\nmany many many more\n...\n\nMy OS info:\n$ lsb_release -a\nNo LSB modules are available.\nDistributor ID: Ubuntu\nDescription: Ubuntu 22.04.4 LTS\nRelease: 22.04\nCodename: jammy\n\nPreviously, I've got the same troubles on my old trusty 32-bit laptop on\nDebian. But, there was no time to dig deeper in the problem\nat the time. And now same for 64-bit Ubuntu. The most common reason for\nsuch errors are not passing appropriate sanitizer\nflags to LDFLAGS, but this is not the case. What could be the reason for\nthis? 
Am I doin' something wrong?\n\nExact the same sequence, but for GCC, works splendidly.\n\n-- \nBest regards,\nMaxim Orlov.\n", "msg_date": "Thu, 25 Apr 2024 18:38:58 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Build with meson + clang + sanitizer resulted in undefined reference" },
{ "msg_contents": "> On Thu, Apr 25, 2024 at 06:38:58PM +0300, Maxim Orlov wrote:\n>\n> And then upon build I've got overwhelmed by thousands of undefined\n> reference errors.\n>\n> fe-auth-scram.c:(.text+0x17a): undefined reference to\n> `__ubsan_handle_builtin_unreachable'\n> /usr/bin/ld: fe-auth-scram.c:(.text+0x189): undefined reference to\n> `__ubsan_handle_type_mismatch_v1_abort'\n> /usr/bin/ld: fe-auth-scram.c:(.text+0x195): undefined reference to\n> `__ubsan_handle_type_mismatch_v1_abort'\n> /usr/bin/ld: fe-auth-scram.c:(.text+0x1a1): undefined reference to\n> `__ubsan_handle_type_mismatch_v1_abort'\n> /usr/bin/ld: fe-auth-scram.c:(.text+0x1ad): undefined reference to\n> `__ubsan_handle_type_mismatch_v1_abort'\n> /usr/bin/ld: fe-auth-scram.c:(.text+0x1b9): undefined reference to\n> `__ubsan_handle_type_mismatch_v1_abort'\n\nSeems to be a meson quirk [1]. I could reproduce this, and adding\n-Db_lundef=false on top of your configuration solved the issue.\n\n[1]: https://github.com/mesonbuild/meson/issues/3853\n\n\n", "msg_date": "Tue, 30 Apr 2024 23:11:12 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Build with meson + clang + sanitizer resulted in undefined\n reference" },
{ "msg_contents": "On Wed, 1 May 2024 at 00:11, Dmitry Dolgov <[email protected]> wrote:\n\n> Seems to be a meson quirk [1]. I could reproduce this, and adding\n> -Db_lundef=false on top of your configuration solved the issue.\n>\n> [1]: https://github.com/mesonbuild/meson/issues/3853\n>\n\nThank you for a reply! Yes, it seems to be that way. Many thanks for the\nclarification.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Thu, 2 May 2024 17:47:54 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Build with meson + clang + sanitizer resulted in undefined\n reference" } ]
[ { "msg_contents": "Can anyone tell me why my index access method isn't seeing an order_by\nScanKey when there is a query with an ORDER BY clause that uses an operator\nthat the access method supports?\n\n(Apologies for the volume of code below. I just don't know what is\nimportant. This is all in Rust.)\n\n\nCREATE OPERATOR pg_catalog.<===> (\n FUNCTION = rdb.userquery_match,\n LEFTARG = record,\n RIGHTARG = rdb.userqueryspec\n);\n\nCREATE OPERATOR pg_catalog.<<=>> (\n FUNCTION = rdb.test_sort_match,\n LEFTARG = record,\n RIGHTARG = rdb.TestSort\n);\n\nCREATE OPERATOR CLASS rdb_ops DEFAULT FOR TYPE record USING rdb AS\n OPERATOR 1 pg_catalog.<===> (record, rdb.userqueryspec),\n OPERATOR 2 pg_catalog.<<=>> (record, rdb.testsort) FOR ORDER BY\npg_catalog.float_ops;\n\n\n#[derive(Serialize, Deserialize, PostgresType, Debug)]\npub struct TestSort {\n foo: f32,\n}\n\n#[pg_extern(\nsql = \"CREATE FUNCTION rdb.test_sort_match(rec record, testsort\nrdb.TestSort) RETURNS bool IMMUTABLE STRICT PARALLEL SAFE LANGUAGE c AS\n'MODULE_PATHNAME', 'dummysort_match_wrapper';\",\nrequires = [DummySortSpec]\n)]\nfn test_sort_match(_fcinfo: pg_sys::FunctionCallInfo, testsort: TestSort)\n-> f32 {\n testsort.foo + 2.0\n}\n\n#[pg_extern(immutable, strict, parallel_safe)]\nfn get_testsort(foo: f32) -> TestSort {\n TestSort { foo }\n}\n\nHere's the query:\n\nSELECT title\nFROM products\nWHERE products <===> rdb.userquery('teddy')\nORDER BY products <<=>> rdb.get_testsort(5.0);\n\nThe amhander gets and uses the WHERE clause just fine. The problem is that\nambeginscan() and amrescan() both get a norderbys parameter equal to 0.\n\nDoing an EXPLAIN on the query above yields this:\n\n Sort (cost=1000027.67..1000028.92 rows=500 width=28)\n Sort Key: ((products.* <<=>> '{\"foo\":5.0}'::rdb.testsort))\n -> Index Scan using product_idx on products (cost=0.00..1000005.26\nrows=500 width=28)\n Index Cond: (products.* <===> '{\"query_str\":\"teddy\", /* snip */\n}'::rdb.userqueryspec)\n\nSo, the sort isn't getting passed to the index scan. Weirdly, the\ntest_sort_match() method never gets invoked, so I'm not sure how the system\nthinks it's actually doing the sort.\n\nThe fact that the operators work on a RECORD type does not seem to be\nsignificant. 
If I swap it for a TEXT data type then I get the same behavior.\n\nI'm sure I'm missing something small.\n\nHere's the amhandler definition:\n\n#[pg_extern(sql = \"\n CREATE OR REPLACE FUNCTION amhandler(internal) RETURNS index_am_handler\nPARALLEL SAFE IMMUTABLE STRICT LANGUAGE c AS 'MODULE_PATHNAME',\n'@FUNCTION_NAME@';\n CREATE ACCESS METHOD rdb TYPE INDEX HANDLER amhandler;\n\")]\nfn amhandler(_fcinfo: pg_sys::FunctionCallInfo) ->\nPgBox<pg_sys::IndexAmRoutine> {\n let mut amroutine =\n unsafe {\nPgBox::<pg_sys::IndexAmRoutine>::alloc_node(pg_sys::NodeTag::T_IndexAmRoutine)\n};\n\n amroutine.amstrategies = OPERATOR_COUNT;\n amroutine.amsupport = PROC_COUNT;\n amroutine.amoptsprocnum = OPTIONS_PROC;\n amroutine.amcanorder = false;\n amroutine.amcanorderbyop = true;\n amroutine.amcanbackward = false;\n amroutine.amcanunique = false;\n amroutine.amcanmulticol = true;\n amroutine.amoptionalkey = true;\n amroutine.amsearcharray = false;\n amroutine.amsearchnulls = false;\n amroutine.amstorage = false;\n amroutine.amclusterable = false;\n amroutine.ampredlocks = false;\n amroutine.amcanparallel = false;\n amroutine.amcaninclude = true;\n amroutine.amusemaintenanceworkmem = false;\n amroutine.amparallelvacuumoptions = 0;\n amroutine.amkeytype = pg_sys::InvalidOid;\n\n /* interface functions */\n amroutine.ambuild = Some(build::ambuild);\n amroutine.ambuildempty = Some(build::ambuildempty);\n amroutine.aminsert = Some(insert::aminsert);\n amroutine.ambulkdelete = Some(delete::ambulkdelete);\n amroutine.amvacuumcleanup = Some(delete::amvacuumcleanup);\n amroutine.amcanreturn = Some(scan::amcanreturn); /* test if can return\na particular col in index-only scan */\n amroutine.amcostestimate = Some(amcostestimate);\n amroutine.amoptions = Some(index_options::amoptions);\n\n amroutine.amproperty = None;\n amroutine.ambuildphasename = None;\n amroutine.amvalidate = Some(amvalidate);\n // amroutine.amadjustmembers = None; omit so we can compile earlier\nversions\n\n amroutine.ambeginscan = Some(scan::ambeginscan);\n amroutine.amrescan = Some(scan::amrescan);\n amroutine.amgettuple = Some(scan::amgettuple);\n amroutine.amgetbitmap = None; // Some(scan::ambitmapscan);\n amroutine.amendscan = Some(scan::amendscan);\n\n amroutine.ammarkpos = None;\n amroutine.amrestrpos = None;\n\n /* interface functions to support parallel index scans */\n amroutine.amestimateparallelscan = None;\n amroutine.aminitparallelscan = None;\n amroutine.amparallelrescan = None;\n\n amroutine.into_pg_boxed()\n}\n\n\n\n-- \nChris Cleveland\n312-339-2677 mobile\n\nCan anyone tell me why my index access method isn't seeing an order_by ScanKey when there is a query with an ORDER BY clause that uses an operator that the access method supports?(Apologies for the volume of code below. I just don't know what is important. 
This is all in Rust.)CREATE OPERATOR pg_catalog.<===> (    FUNCTION = rdb.userquery_match,    LEFTARG = record,    RIGHTARG = rdb.userqueryspec);CREATE OPERATOR pg_catalog.<<=>> (    FUNCTION = rdb.test_sort_match,    LEFTARG = record,    RIGHTARG = rdb.TestSort);CREATE OPERATOR CLASS rdb_ops DEFAULT FOR TYPE record USING rdb AS    OPERATOR 1 pg_catalog.<===> (record, rdb.userqueryspec),    OPERATOR 2 pg_catalog.<<=>> (record, rdb.testsort) FOR ORDER BY pg_catalog.float_ops;#[derive(Serialize, Deserialize, PostgresType, Debug)]pub struct TestSort {    foo: f32,}#[pg_extern(sql = \"CREATE FUNCTION rdb.test_sort_match(rec record, testsort rdb.TestSort) RETURNS bool IMMUTABLE STRICT PARALLEL SAFE  LANGUAGE c AS 'MODULE_PATHNAME', 'dummysort_match_wrapper';\",requires = [DummySortSpec])]fn test_sort_match(_fcinfo: pg_sys::FunctionCallInfo, testsort: TestSort) -> f32 {    testsort.foo + 2.0}#[pg_extern(immutable, strict, parallel_safe)]fn get_testsort(foo: f32) -> TestSort {    TestSort { foo }}Here's the query:SELECT title FROM products WHERE products <===> rdb.userquery('teddy') ORDER BY products <<=>> rdb.get_testsort(5.0);The amhander gets and uses the WHERE clause just fine. The problem is that ambeginscan() and amrescan() both get a norderbys parameter equal to 0.Doing an EXPLAIN on the query above yields this: Sort  (cost=1000027.67..1000028.92 rows=500 width=28)   Sort Key: ((products.* <<=>> '{\"foo\":5.0}'::rdb.testsort))   ->  Index Scan using product_idx on products  (cost=0.00..1000005.26 rows=500 width=28)         Index Cond: (products.* <===> '{\"query_str\":\"teddy\", /* snip */ }'::rdb.userqueryspec)So, the sort isn't getting passed to the index scan. Weirdly, the test_sort_match() method never gets invoked, so I'm not sure how the system thinks it's actually doing the sort.The fact that the operators work on a RECORD type does not seem to be significant. If I swap it for a TEXT data type then I get the same behavior.I'm sure I'm missing something small. 
Here's the amhandler definition:#[pg_extern(sql = \"    CREATE OR REPLACE FUNCTION amhandler(internal) RETURNS index_am_handler PARALLEL SAFE IMMUTABLE STRICT LANGUAGE c AS 'MODULE_PATHNAME', '@FUNCTION_NAME@';    CREATE ACCESS METHOD rdb TYPE INDEX HANDLER amhandler;\")]fn amhandler(_fcinfo: pg_sys::FunctionCallInfo) -> PgBox<pg_sys::IndexAmRoutine> {    let mut amroutine =        unsafe { PgBox::<pg_sys::IndexAmRoutine>::alloc_node(pg_sys::NodeTag::T_IndexAmRoutine) };    amroutine.amstrategies = OPERATOR_COUNT;     amroutine.amsupport = PROC_COUNT;     amroutine.amoptsprocnum = OPTIONS_PROC;     amroutine.amcanorder = false;     amroutine.amcanorderbyop = true;     amroutine.amcanbackward = false;    amroutine.amcanunique = false;    amroutine.amcanmulticol = true;    amroutine.amoptionalkey = true;     amroutine.amsearcharray = false;     amroutine.amsearchnulls = false;     amroutine.amstorage = false;     amroutine.amclusterable = false;     amroutine.ampredlocks = false;     amroutine.amcanparallel = false;     amroutine.amcaninclude = true;    amroutine.amusemaintenanceworkmem = false;     amroutine.amparallelvacuumoptions = 0;    amroutine.amkeytype = pg_sys::InvalidOid;    /* interface functions */    amroutine.ambuild = Some(build::ambuild);    amroutine.ambuildempty = Some(build::ambuildempty);    amroutine.aminsert = Some(insert::aminsert);    amroutine.ambulkdelete = Some(delete::ambulkdelete);    amroutine.amvacuumcleanup = Some(delete::amvacuumcleanup);    amroutine.amcanreturn = Some(scan::amcanreturn); /* test if can return a particular col in index-only scan */    amroutine.amcostestimate = Some(amcostestimate);    amroutine.amoptions = Some(index_options::amoptions);    amroutine.amproperty = None;    amroutine.ambuildphasename = None;    amroutine.amvalidate = Some(amvalidate);    // amroutine.amadjustmembers = None; omit so we can compile earlier versions    amroutine.ambeginscan = Some(scan::ambeginscan);    amroutine.amrescan = Some(scan::amrescan);    amroutine.amgettuple = Some(scan::amgettuple);    amroutine.amgetbitmap = None; // Some(scan::ambitmapscan);    amroutine.amendscan = Some(scan::amendscan);    amroutine.ammarkpos = None;    amroutine.amrestrpos = None;    /* interface functions to support parallel index scans */    amroutine.amestimateparallelscan = None;    amroutine.aminitparallelscan = None;    amroutine.amparallelrescan = None;    amroutine.into_pg_boxed()}-- Chris Cleveland312-339-2677 mobile", "msg_date": "Thu, 25 Apr 2024 18:36:58 -0500", "msg_from": "Chris Cleveland <[email protected]>", "msg_from_op": true, "msg_subject": "Index access method not receiving an orderbys ScanKey" }, { "msg_contents": "Chris Cleveland <[email protected]> writes:\n> Can anyone tell me why my index access method isn't seeing an order_by\n> ScanKey when there is a query with an ORDER BY clause that uses an operator\n> that the access method supports?\n\nHmm, you have\n\n> CREATE OPERATOR pg_catalog.<<=>> (\n> FUNCTION = rdb.test_sort_match,\n> LEFTARG = record,\n> RIGHTARG = rdb.TestSort\n> );\n\n> CREATE OPERATOR CLASS rdb_ops DEFAULT FOR TYPE record USING rdb AS\n> OPERATOR 1 pg_catalog.<===> (record, rdb.userqueryspec),\n> OPERATOR 2 pg_catalog.<<=>> (record, rdb.testsort) FOR ORDER BY\n> pg_catalog.float_ops;\n\n> sql = \"CREATE FUNCTION rdb.test_sort_match(rec record, testsort\n> rdb.TestSort) RETURNS bool IMMUTABLE STRICT PARALLEL SAFE LANGUAGE c AS\n> 'MODULE_PATHNAME', 'dummysort_match_wrapper';\",\n> requires = [DummySortSpec]\n\nSo <<=>> returns a bool, 
not anything that float_ops would apply to.\n\nYou might think that that should result in an error, but not\nper this comment in opclasscmds.c:\n\n * Ordering op, check index supports that. (We could perhaps also\n * check that the operator returns a type supported by the sortfamily,\n * but that seems more trouble than it's worth here. If it does not,\n * the operator will never be matchable to any ORDER BY clause, but no\n * worse consequences can ensue. Also, trying to check that would\n * create an ordering hazard during dump/reload: it's possible that\n * the family has been created but not yet populated with the required\n * operators.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Apr 2024 20:14:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index access method not receiving an orderbys ScanKey" } ]
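For readers who hit the same symptom: the point of Tom Lane's diagnosis is that an ordering operator is only matched to an ORDER BY clause when its result type is handled by the sort family named in the operator class (float_ops above), and the <<=>> support function was declared RETURNS bool. A minimal corrected set of declarations, reusing the names from the thread (an untested sketch, not the extension's actual code):

CREATE FUNCTION rdb.test_sort_match(rec record, testsort rdb.TestSort)
RETURNS float4  -- was bool; must be a type float_ops can order (float4 or float8)
IMMUTABLE STRICT PARALLEL SAFE
LANGUAGE c AS 'MODULE_PATHNAME', 'dummysort_match_wrapper';

CREATE OPERATOR pg_catalog.<<=>> (
    FUNCTION = rdb.test_sort_match,
    LEFTARG = record,
    RIGHTARG = rdb.TestSort
);

CREATE OPERATOR CLASS rdb_ops DEFAULT FOR TYPE record USING rdb AS
    OPERATOR 1 pg_catalog.<===> (record, rdb.userqueryspec),
    OPERATOR 2 pg_catalog.<<=>> (record, rdb.testsort) FOR ORDER BY pg_catalog.float_ops;

With a float result type the planner can recognize ORDER BY products <<=>> rdb.get_testsort(5.0) as an index ordering and hand it to amrescan() as an orderbys ScanKey, assuming amcanorderbyop is set as shown earlier.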
[ { "msg_contents": "Hi PostgreSQL Community,\nRecently I have been working on foreign servers regarding my project and\nwanted to add some extensions in server options to support query pushdown.\nFor this, suppose I had 20 extensions in the beginning I used ALTER SERVER\nsrv OPTIONS (ADD EXTENSIONS 'all 20 extensions'), then again, I had to add\na few or drop some, I had to write names of all the 20 extensions\nincluding/excluding some.\nI wonder why we can't have some sort of INCLUDE / EXCLUDE option for this\nuse case that can be useful for other options as well which have\ncomma-separated values. I believe this is a useful feature to have for the\nusers.\nSince I needed that support, I took the initiative to contribute to the\ncommunity. In addition, I have improved the documentation too as currently\nwhile reading the documentation it looks like ADD can be used multiple\ntimes even to include some values on top of existing values.\nAttached is the patch for the same. Looking forward to your feedback.\n\nRegards\nAyush Vatsa\nAmazon Web Services (AWS)", "msg_date": "Fri, 26 Apr 2024 11:05:04 +0530", "msg_from": "Ayush Vatsa <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal to have INCLUDE/EXCLUDE options for altering option values" }, { "msg_contents": "Added a CF entry for the same - https://commitfest.postgresql.org/48/4955/\n\nRegards\nAyush Vatsa\nAmazon Web Services (AWS)\n\nOn Fri, 26 Apr 2024 at 11:05, Ayush Vatsa <[email protected]> wrote:\n\n> Hi PostgreSQL Community,\n> Recently I have been working on foreign servers regarding my project and\n> wanted to add some extensions in server options to support query pushdown.\n> For this, suppose I had 20 extensions in the beginning I used ALTER SERVER\n> srv OPTIONS (ADD EXTENSIONS 'all 20 extensions'), then again, I had to add\n> a few or drop some, I had to write names of all the 20 extensions\n> including/excluding some.\n> I wonder why we can't have some sort of INCLUDE / EXCLUDE option for this\n> use case that can be useful for other options as well which have\n> comma-separated values. I believe this is a useful feature to have for the\n> users.\n> Since I needed that support, I took the initiative to contribute to the\n> community. In addition, I have improved the documentation too as currently\n> while reading the documentation it looks like ADD can be used multiple\n> times even to include some values on top of existing values.\n> Attached is the patch for the same. Looking forward to your feedback.\n>\n> Regards\n> Ayush Vatsa\n> Amazon Web Services (AWS)\n>\n\nAdded a CF entry for the same - https://commitfest.postgresql.org/48/4955/RegardsAyush VatsaAmazon Web Services (AWS)On Fri, 26 Apr 2024 at 11:05, Ayush Vatsa <[email protected]> wrote:Hi PostgreSQL Community,Recently I have been working on foreign servers regarding my project and wanted to add some extensions in server options to support query pushdown. For this, suppose I had 20 extensions in the beginning I used ALTER SERVER srv OPTIONS (ADD EXTENSIONS 'all 20 extensions'), then again, I had to add a few or drop some, I had to write names of all the 20 extensions including/excluding some.I wonder why we can't have some sort of INCLUDE / EXCLUDE option for this use case that can be useful for other options as well which have comma-separated values. I believe this is a useful feature to have for the users.Since I needed that support, I took the initiative to contribute to the community. 
In addition, I have improved the documentation too as currently while reading the documentation it looks like ADD can be used multiple times even to include some values on top of existing values.Attached is the patch for the same. Looking forward to your feedback.RegardsAyush VatsaAmazon Web Services (AWS)", "msg_date": "Fri, 26 Apr 2024 11:08:00 +0530", "msg_from": "Ayush Vatsa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to have INCLUDE/EXCLUDE options for altering option\n values" }, { "msg_contents": "Hi PostgreSQL Community,\nI would like to provide an update on the patch I previously submitted,\nalong with a clearer explanation of the issue it addresses\nand the improvements it introduces.\nCurrent Issue:\nPostgreSQL currently supports several options with actions like ADD, SET,\nand DROP for foreign servers, user mappings,\nforeign tables, etc. For example, the syntax for modifying a server option\nis as follows:\n\nALTER SERVER name [ VERSION 'new_version' ]\n [ OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ] ) ]\n\nHowever, there is a limitations with the current approach:\n\nIf a user wants to add new values to an existing option, they can use:\nALTER SERVER foo OPTIONS (ADD extensions 'ext1,ext2');\nBut when modifying existing values, users must repeat all the existing\nvalues along with the new ones:\nALTER SERVER foo OPTIONS (ADD extensions 'ext1,ext2');\nALTER SERVER foo OPTIONS (SET extensions 'ext1,ext2,ext3');\n\nThis repetition can be cumbersome and error-prone when there are large\nnumber of comma separated values.\n\nProposed Solution:\nTo address this, I propose introducing two new actions: APPEND and REMOVE.\nThese will allow users to modify existing\nvalues without needing to repeat all current entries.\nALTER SERVER foo OPTIONS (APPEND extensions 'ext4,ext5,ext6');\n --extensions will be like 'ext1,ext2,ext3,ext4,ext5,ext6'\nALTER SERVER foo OPTIONS (REMOVE extensions 'ext1');\n--extensions will be like 'ext2,ext4,ext5,ext6'\n\nI had an off-site discussion with Nathan Bossart (bossartn) and have\nincorporated his feedback about changing actions\nname to be more clear into the updated patch. Furthermore, I noticed that\nthe documentation for the existing actions\ncould be clearer, so I have revised it as well. The documentation for the\nnewly proposed actions is included in a separate patch.\nLooking forward to your comments and feedback.\n\nRegards\nAyush Vatsa\nAWS", "msg_date": "Thu, 15 Aug 2024 12:22:34 +0530", "msg_from": "Ayush Vatsa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to have INCLUDE/EXCLUDE options for altering option\n values" }, { "msg_contents": "Hello PostgreSQL Community,\nI noticed that my last commit needs rebase through cfbot -\nhttp://cfbot.cputube.org/ayush-vatsa.html\nPFA the rebased patch for the same.\n\nRegards\nAyush Vatsa\nAWS", "msg_date": "Mon, 26 Aug 2024 22:04:36 +0530", "msg_from": "Ayush Vatsa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to have INCLUDE/EXCLUDE options for altering option\n values" }, { "msg_contents": "On Mon, Aug 26, 2024 at 12:34 PM Ayush Vatsa <[email protected]> wrote:\n> I noticed that my last commit needs rebase through cfbot - http://cfbot.cputube.org/ayush-vatsa.html\n> PFA the rebased patch for the same.\n\nHi Ayush,\n\nThanks for working on this. 
One problem that I notice is that your\ndocumentation changes seem to suppose that all options are lists, but\nactually most of them aren't, and these new variants wouldn't be\napplicable to non-list cases. They also suppose that everybody's using\ncomma-separated lists specifically, but that's not required and some\nextensions might be doing something different. Also, I'm not convinced\nthat this problem would arise often enough in practice that it's worth\nadding a feature to address it. A user who has this problem can pretty\neasily do some scripting to address it - e.g. SELECT the current\noption value, split it on commas, add or remove whatever, and then SET\nthe new option value. If that were something lots of users were doing\nall the time, then I think it might make a lot of sense to have a\nbuilt-in solution to make it easier, but I doubt that's the case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Aug 2024 13:31:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to have INCLUDE/EXCLUDE options for altering option\n values" },
{ "msg_contents": "On Mon, Aug 26, 2024, at 1:34 PM, Ayush Vatsa wrote:\n> I noticed that my last commit needs rebase through cfbot - http://cfbot.cputube.org/ayush-vatsa.html\n> PFA the rebased patch for the same.\n> \n\nThere is no list concept for OPTIONS. What happen if you use it in a non-list\nvalue?\n\nALTER SERVER foo OPTIONS(ADD bar '1');\nALTER SERVER foo OPTIONS(REMOVE bar '1');\n\nError? Remove option 'bar'?\n\nThis proposal is not idempotent. It means that at the end of the SQL commands,\nthe final state is not predictable. That's disappointed since some tools rely on\nthis property to create migration scripts.\n\nThe syntax is not SQL standard. It does not mean that we cannot extend the\nstandard but sometimes it is a sign that it is not very useful or the current syntax\nalready covers all cases.\n\nAFAICS this proposal also represents a tiny use case. The options to define a\nforeign server don't generally include a list as value. And if so, it is not\ncommon to change the foreign server options.\n\nI also think that REMOVE is a synonym for DROP. It would be confuse to explain\nthat REMOVE is for list elements and DROP is for the list.\n\nI'm not convinced that this new syntax improves the user experience.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n", "msg_date": "Mon, 26 Aug 2024 14:47:43 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to have INCLUDE/EXCLUDE options for altering option\n values" },
{ "msg_contents": "> I'm not convinced\n> that this problem would arise often enough in practice that it's worth\n> adding a feature to address it. A user who has this problem can pretty\n> easily do some scripting to address it\n\n> AFAICS this proposal also represents a tiny use case.\n\nThanks Robert and Euler for the feedback.\nAck, Initially while working on the solution I thought this can be a common\nproblem. But I agree if the use case is small we can postpone supporting\nsuch syntax for the future.\nI can then withdraw the patch.\n\nRegards\nAyush Vatsa\nAWS", "msg_date": "Tue, 27 Aug 2024 23:17:00 +0530", "msg_from": "Ayush Vatsa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to have INCLUDE/EXCLUDE options for altering option\n values" } ]
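To make the scripting workaround Robert Haas describes concrete (read the current value, edit the list, write it back with SET), one possible shape is the DO block below. It is only an illustrative sketch: it assumes a foreign server named foo whose extensions option is already set, and ext1/ext4 are placeholder extension names.

DO $$
DECLARE
    exts text[];
BEGIN
    -- read the current comma-separated value from the catalog
    SELECT string_to_array(o.option_value, ',')
      INTO exts
      FROM pg_foreign_server s,
           pg_options_to_table(s.srvoptions) o
     WHERE s.srvname = 'foo'
       AND o.option_name = 'extensions';

    -- edit the list in memory
    exts := array_remove(exts, 'ext1');
    exts := array_append(exts, 'ext4');

    -- write it back as a single SET
    EXECUTE format('ALTER SERVER foo OPTIONS (SET extensions %L)',
                   array_to_string(exts, ','));
END
$$;

This leaves the existing ADD/SET/DROP grammar untouched while avoiding retyping the whole list by hand.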
[ { "msg_contents": "The Core Team would like to extend our congratulations to Melanie \r\nPlageman and Richard Guo, who have accepted invitations to become our \r\nnewest PostgreSQL committers.\r\n\r\nPlease join us in wishing them much success and few reverts!\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Fri, 26 Apr 2024 06:54:26 -0500", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 5:24 PM Jonathan S. Katz <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n\nCongratulations to both of you!\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 17:43:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On 4/26/24 07:54, Jonathan S. Katz wrote:\n> The Core Team would like to extend our congratulations to Melanie \n> Plageman and Richard Guo, who have accepted invitations to become our \n> newest PostgreSQL committers.\n> \n> Please join us in wishing them much success and few reverts!\n> \n\nCongrats !\n\nBest regards,\n Jesper\n\n\n\n", "msg_date": "Fri, 26 Apr 2024 08:21:05 -0400", "msg_from": "Jesper Pedersen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 8:54 PM Jonathan S. Katz <[email protected]> wrote:\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n\nCongratulations to both!\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Fri, 26 Apr 2024 21:42:45 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 8:54 AM Jonathan S. Katz <[email protected]>\nwrote:\n\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n>\nIt is a HUGE step for the community having a woman committer... Congrats to\nyou both Melanie and Richard!!!\n\nRegards,\n\n-- \nFabrízio de Royes Mello\n\nOn Fri, Apr 26, 2024 at 8:54 AM Jonathan S. Katz <[email protected]> wrote:The Core Team would like to extend our congratulations to Melanie \nPlageman and Richard Guo, who have accepted invitations to become our \nnewest PostgreSQL committers.It is a HUGE step for the community having a woman committer... Congrats to you both Melanie and Richard!!!Regards,-- Fabrízio de Royes Mello", "msg_date": "Fri, 26 Apr 2024 09:43:06 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, 26 Apr 2024 at 14:54, Jonathan S. 
Katz <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n\nCongratulations to both of you!\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Fri, 26 Apr 2024 16:04:57 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "\nOn 2024-04-26 Fr 07:54, Jonathan S. Katz wrote:\n> The Core Team would like to extend our congratulations to Melanie \n> Plageman and Richard Guo, who have accepted invitations to become our \n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n>\n>\n\nVery happy about both of these. Many congratulations to Melanie and Richard.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 26 Apr 2024 09:08:19 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "Congratulations to Melanie and Richard.\n\nOn Fri, Apr 26, 2024 at 5:24 PM Jonathan S. Katz <[email protected]>\nwrote:\n\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n>\n> Thanks,\n>\n> Jonathan\n>\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nCongratulations to Melanie and Richard.On Fri, Apr 26, 2024 at 5:24 PM Jonathan S. Katz <[email protected]> wrote:The Core Team would like to extend our congratulations to Melanie \nPlageman and Richard Guo, who have accepted invitations to become our \nnewest PostgreSQL committers.\n\nPlease join us in wishing them much success and few reverts!\n\nThanks,\n\nJonathan\n-- Best Wishes,Ashutosh Bapat", "msg_date": "Fri, 26 Apr 2024 18:39:00 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 4:54 AM Jonathan S. Katz <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n\nCongratulations!!\n\n--Jacob\n\n\n", "msg_date": "Fri, 26 Apr 2024 06:11:06 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 2:54 PM Jonathan S. Katz <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n\nCompletely deserved. Congrats!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 26 Apr 2024 16:41:18 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 06:54:26AM -0500, Jonathan S. 
Katz wrote:\n> The Core Team would like to extend our congratulations to Melanie Plageman\n> and Richard Guo, who have accepted invitations to become our newest\n> PostgreSQL committers.\n> \n> Please join us in wishing them much success and few reverts!\n\nCongratulations to both!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 26 Apr 2024 08:48:45 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, 26 Apr 2024 at 17:48, Nathan Bossart <[email protected]>\nwrote:\n\n> On Fri, Apr 26, 2024 at 06:54:26AM -0500, Jonathan S. Katz wrote:\n> > The Core Team would like to extend our congratulations to Melanie\n> Plageman\n> > and Richard Guo, who have accepted invitations to become our newest\n> > PostgreSQL committers.\n> >\n> > Please join us in wishing them much success and few reverts!\n>\n\nCongratulations! Well deserved!\n\nPavel Borisov\n\nOn Fri, 26 Apr 2024 at 17:48, Nathan Bossart <[email protected]> wrote:On Fri, Apr 26, 2024 at 06:54:26AM -0500, Jonathan S. Katz wrote:\n> The Core Team would like to extend our congratulations to Melanie Plageman\n> and Richard Guo, who have accepted invitations to become our newest\n> PostgreSQL committers.\n> \n> Please join us in wishing them much success and few reverts!Congratulations! Well deserved!Pavel Borisov", "msg_date": "Fri, 26 Apr 2024 17:55:44 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Apr 26, 2024, at 07:54, Jonathan S. Katz <[email protected]> wrote:\n\n> The Core Team would like to extend our congratulations to Melanie Plageman and Richard Guo, who have accepted invitations to become our newest PostgreSQL committers.\n> \n> Please join us in wishing them much success and few reverts!\n\nGreat news, congratulations!\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Fri, 26 Apr 2024 10:17:16 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "Many congratulations both of you..!!\n\nRegards,\nAmul\n\nOn Friday 26 April 2024, Jonathan S. Katz <[email protected]> wrote:\n\n> The Core Team would like to extend our congratulations to Melanie Plageman\n> and Richard Guo, who have accepted invitations to become our newest\n> PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n>\n> Thanks,\n>\n> Jonathan\n>\n\n\n-- \nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com\n\nMany congratulations both of you..!!Regards,AmulOn Friday 26 April 2024, Jonathan S. Katz <[email protected]> wrote:The Core Team would like to extend our congratulations to Melanie Plageman and Richard Guo, who have accepted invitations to become our newest PostgreSQL committers.\n\nPlease join us in wishing them much success and few reverts!\n\nThanks,\n\nJonathan\n-- Regards,Amul SulEDB: http://www.enterprisedb.com", "msg_date": "Fri, 26 Apr 2024 23:06:50 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 2:54 PM Jonathan S. 
Katz <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n\nMany congratulations! Well deserved!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 26 Apr 2024 20:56:18 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" },
{ "msg_contents": "Big congrats to you two!!!\n\n- Jimmy\n", "msg_date": "Fri, 26 Apr 2024 11:21:55 -0700", "msg_from": "Jimmy Yih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" },
{ "msg_contents": "On 2024-04-26 06:54:26 -0500, Jonathan S. Katz wrote:\n> The Core Team would like to extend our congratulations to Melanie Plageman\n> and Richard Guo, who have accepted invitations to become our newest\n> PostgreSQL committers.\n> \n> Please join us in wishing them much success and few reverts!\n\nCongratulations!\n\n\n", "msg_date": "Fri, 26 Apr 2024 11:26:26 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" },
{ "msg_contents": "On Fri, Apr 26, 2024 at 7:54 AM Jonathan S. 
Katz <[email protected]> wrote:\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n\nCongratulations to both!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 26 Apr 2024 15:51:52 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n> \n> Please join us in wishing them much success and few reverts!\n\nCongratulations!\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sat, 27 Apr 2024 05:45:24 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo,New committers:\n Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 6:54 AM Jonathan S. Katz <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n>\n\nNo surprise here! Congratulations to both of you!\nYou both have done a great job!\n\n\n-- \nJaime Casanova\nSYSTEMGUARDS S.A.\n\n\n", "msg_date": "Fri, 26 Apr 2024 16:11:28 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "Congratulations to both of you Melanie and Richard.\nThank you so much for stepping forward to this great cause.\n\nRegards...\n\nYasir Hussain\nBitnine Global\n\nOn Fri, Apr 26, 2024 at 4:54 PM Jonathan S. Katz <[email protected]>\nwrote:\n\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n>\n> Thanks,\n>\n> Jonathan\n>\n\nCongratulations to both of you Melanie and Richard. Thank you so much for stepping forward to this great cause. Regards...Yasir HussainBitnine GlobalOn Fri, Apr 26, 2024 at 4:54 PM Jonathan S. Katz <[email protected]> wrote:The Core Team would like to extend our congratulations to Melanie \nPlageman and Richard Guo, who have accepted invitations to become our \nnewest PostgreSQL committers.\n\nPlease join us in wishing them much success and few reverts!\n\nThanks,\n\nJonathan", "msg_date": "Sat, 27 Apr 2024 02:32:42 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 06:54:26AM -0500, Jonathan S. 
Katz wrote:\n> The Core Team would like to extend our congratulations to Melanie Plageman\n> and Richard Guo, who have accepted invitations to become our newest\n> PostgreSQL committers.\n> \n> Please join us in wishing them much success and few reverts!\n\nCongratulations!\n--\nMichael", "msg_date": "Sat, 27 Apr 2024 07:14:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "Congratulations!\n\nOn 2024/4/26 19:54, Jonathan S. Katz wrote:\n> The Core Team would like to extend our congratulations to Melanie \n> Plageman and Richard Guo, who have accepted invitations to become our \n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n>\n> Thanks,\n>\n> Jonathan\n\n\n", "msg_date": "Sat, 27 Apr 2024 09:28:46 +0800", "msg_from": "DEVOPS_WwIT <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 6:54 PM Jonathan S. Katz <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n\nCongratulations to you both!\n\n\n", "msg_date": "Sat, 27 Apr 2024 10:36:12 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 5:54 AM Jonathan S. Katz <[email protected]>\nwrote:\n\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n>\n\nAmazing accomplishment. Thank you for your contributions to PostgreSQL.\nWell deserved!\n\nRoberto\n\nOn Fri, Apr 26, 2024 at 5:54 AM Jonathan S. Katz <[email protected]> wrote:The Core Team would like to extend our congratulations to Melanie \nPlageman and Richard Guo, who have accepted invitations to become our \nnewest PostgreSQL committers.\n\nPlease join us in wishing them much success and few reverts!Amazing accomplishment. Thank you for your contributions to PostgreSQL. Well deserved!Roberto", "msg_date": "Fri, 26 Apr 2024 22:03:33 -0600", "msg_from": "Roberto Mello <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 5:24 PM Jonathan S. Katz <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n\nCongratulations to both of you.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 27 Apr 2024 09:37:26 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "\n\n> On 26 Apr 2024, at 16:54, Jonathan S. 
Katz <[email protected]> wrote:\n> \n> The Core Team would like to extend our congratulations to Melanie Plageman and Richard Guo, who have accepted invitations to become our newest PostgreSQL committers.\n> \n> Please join us in wishing them much success and few reverts!\n\nCongratulations!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 27 Apr 2024 11:34:15 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "Congratulations!\n\nOn Sat, Apr 27, 2024 at 11:34 AM Andrey M. Borodin <[email protected]>\nwrote:\n\n>\n>\n> > On 26 Apr 2024, at 16:54, Jonathan S. Katz <[email protected]> wrote:\n> >\n> > The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n> >\n> > Please join us in wishing them much success and few reverts!\n>\n> Congratulations!\n>\n>\n> Best regards, Andrey Borodin.\n>\n>\n\nCongratulations!On Sat, Apr 27, 2024 at 11:34 AM Andrey M. Borodin <[email protected]> wrote:\n\n> On 26 Apr 2024, at 16:54, Jonathan S. Katz <[email protected]> wrote:\n> \n> The Core Team would like to extend our congratulations to Melanie Plageman and Richard Guo, who have accepted invitations to become our newest PostgreSQL committers.\n> \n> Please join us in wishing them much success and few reverts!\n\nCongratulations!\n\n\nBest regards, Andrey Borodin.", "msg_date": "Sat, 27 Apr 2024 11:58:16 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, Apr 26, 2024 at 8:54 PM Jonathan S. Katz <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n>\n\nCongratulations to both!\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 27 Apr 2024 22:04:33 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "\n\"Jonathan S. Katz\" <[email protected]> writes:\n\n> [[PGP Signed Part:Undecided]]\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n>\n\nCongratulations to both, Well deserved!\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Sun, 28 Apr 2024 08:02:13 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On 2024-Apr-26, Jonathan S. 
Katz wrote:\n\n> The Core Team would like to extend our congratulations to Melanie Plageman\n> and Richard Guo, who have accepted invitations to become our newest\n> PostgreSQL committers.\n> \n> Please join us in wishing them much success and few reverts!\n\nMay your commits be plenty and your reverts rare :-)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Ni aún el genio muy grande llegaría muy lejos\nsi tuviera que sacarlo todo de su propio interior\" (Goethe)\n\n\n", "msg_date": "Sun, 28 Apr 2024 11:38:08 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" }, { "msg_contents": "On Fri, 26 Apr 2024 at 17:24, Jonathan S. Katz <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to Melanie\n> Plageman and Richard Guo, who have accepted invitations to become our\n> newest PostgreSQL committers.\n>\n> Please join us in wishing them much success and few reverts!\n\nCongratulations to both of you.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 29 Apr 2024 08:59:09 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New committers: Melanie Plageman, Richard Guo" } ]
[ { "msg_contents": "Hi,\n\nManually specifying tablespace mappings in pg_basebackup, especially in\nenvironments where tablespaces can come and go, or with incremental\nbackups, can be tedious and error-prone. I propose a solution using\npattern-based mapping to automate this process.\n\nSo rather than having to specify.\n\n-T /path/to/original/tablespace/a=/path/to/backup/tablespace/a -T\n/path/to/original/tablespace/b=/path/to/backup/tablespace/b\n\nAnd then coming up with a new location to map to for the subsequent\nincremental backups, perhaps we could have a parameter (I’m just going to\nchoose M for “mapping”), like so:\n\n-M %p/%d_backup_1.1\n\nWhere it can interpolate the following values:\n%p = path\n%d = directory\n%l = label (not sure about this one)\n\n\nUsing the -M example above, when pg_basebackup finds:\n\n/path/to/original/tablespace/a\n/path/to/original/tablespace/b\n\nIt creates:\n\n/path/to/original/tablespace/a_backup_1.1\n/path/to/original/tablespace/b_backup_1.1\n\n\nOr:\n\n-M /path/to/backup/tablespaces/1.1/%d\n\nCreates:\n\n/path/to/backup/tablespaces/1.1/a\n/path/to/backup/tablespaces/1.1/b\n\n\nOr possibly allowing something like %l to insert the backup label.\n\nFor example:\n\n-M /path/to/backup/tablespaces/%f_%l -l 1.1\n\nCreates:\n\n/path/to/backup/tablespaces/a_1.1\n/path/to/backup/tablespaces/b_1.1\n\n\nThis of course would not work if there were tablespaces as follows:\n\n/path/to/first/tablespace/a\n/path/to/second/tablespace/a\n\nWhere %d would yield the same result for both tablespaces. However, this\nseems like an unlikely scenario as the tablespace name within the database\nwould need to be unique, but then requires them to use a directory name\nthat isn't unique. This could just be a scenario that isn't supported.\n\nPerhaps even allow it to auto-increment a version number it defines\nitself. Maybe %v implies “make up a version number here, and if one\nexisted in the manifest previously, increment it”.\n\n\nUltimately, it would turn this:\n\npg_basebackup\n -D /Users/thombrown/Development/backups/data1.5\n -h /tmp\n -p 5999\n -c fast\n -U thombrown\n -l 1.5\n -T\n/Users/thombrown/Development/tablespaces/ts_a=/Users/thombrown/Development/backups/tablespaces/1.5/backup_ts_a\n -T\n/Users/thombrown/Development/tablespaces/ts_b=/Users/thombrown/Development/backups/tablespaces/1.5/backup_ts_b\n -T\n/Users/thombrown/Development/tablespaces/ts_c=/Users/thombrown/Development/backups/tablespaces/1.5/backup_ts_c\n -T\n/Users/thombrown/Development/tablespaces/ts_d=/Users/thombrown/Development/backups/tablespaces/1.5/backup_ts_d\n -i /Users/thombrown/Development/backups/data1.4/backup_manifest\n\nInto this:\n\npg_basebackup\n -D /Users/thombrown/Development/backups/1.5/data\n -h /tmp\n -p 5999\n -c fast\n -U thombrown\n -l 1.5\n -M /Users/thombrown/Development/backups/tablespaces/%v/%d\n -i /Users/thombrown/Development/backups/data1.4/backup_manifest\n\nIn fact, if I were permitted to get carried away:\n\n-D /Users/thombrown/Development/backups/%v/%d\n\nThen, the only thing that needs changing for each incremental backup is the\nmanifest location (and optionally the label).\n\n\nGiven that pg_combinebackup has the same option, I imagine something\nsimilar would need to be added there too. 
We should already know where the\ntablespaces reside, as they are in the final backup specified in the list\nof backups, so that seems to just be a matter of getting input of how the\ntablespaces should be named in the reconstructed backup.\n\nFor example:\n\npg_combinebackup\n -T\n/Users/thombrown/Development/backups/tablespaces/1.4/ts_a=/Users/thombrown/Development/backups/tablespaces/2.0_combined/ts_a\n -T\n/Users/thombrown/Development/backups/tablespaces/1.4/ts_b=/Users/thombrown/Development/backups/tablespaces/2.0_combined/ts_b\n -T\n/Users/thombrown/Development/backups/tablespaces/1.4/ts_c=/Users/thombrown/Development/backups/tablespaces/2.0_combined/ts_c\n -T\n/Users/thombrown/Development/backups/tablespaces/1.4/ts_d=/Users/thombrown/Development/backups/tablespaces/2.0_combined/ts_d\n -o /Users/thombrown/Development/backups/combined\n /Users/thombrown/Development/backups/data{1.0_full,1.1,1.2,1.3,1.4}\n\nBecomes:\npg_combinebackup\n -M /Users/thombrown/Development/backups/tablespaces/%v_combined/%d\n -o /Users/thombrown/Development/backups/%v_combined/%d\n /Users/thombrown/Development/backups/{1.0_full,1.1,1.2,1.3,1.4}/data\n\nYou may have inferred that I decided pg_combinebackup increments the\nversion to the next major version, whereas pg_basebackup in incremental\nmode increments the minor version number.\n\nThis, of course, becomes messy if the user decided to include the version\nnumber in the backup tablespace directory name, but then these sorts of\nthings need to be figured out prior to placing into production anyway.\n\nI also get the feeling that accepting an unquoted % as a parameter on the\ncommand line could be problematic, such as it having a special meaning I\nhaven't accounted for here. In which case, it may require quoting.\n\nThoughts?\n\nRegards\n\nThom\n\nHi,Manually specifying tablespace mappings in pg_basebackup, especially in environments where tablespaces can come and go, or with incremental backups, can be tedious and error-prone. I propose a solution using pattern-based mapping to automate this process.So rather than having to specify.-T /path/to/original/tablespace/a=/path/to/backup/tablespace/a -T /path/to/original/tablespace/b=/path/to/backup/tablespace/bAnd then coming up with a new location to map to for the subsequent incremental backups, perhaps we could have a parameter (I’m just going to choose M for “mapping”), like so:-M %p/%d_backup_1.1Where it can interpolate the following values:%p = path%d = directory%l = label (not sure about this one)Using the -M example above, when pg_basebackup finds:/path/to/original/tablespace/a/path/to/original/tablespace/bIt creates:/path/to/original/tablespace/a_backup_1.1/path/to/original/tablespace/b_backup_1.1Or:-M /path/to/backup/tablespaces/1.1/%dCreates:/path/to/backup/tablespaces/1.1/a/path/to/backup/tablespaces/1.1/bOr possibly allowing something like %l to insert the backup label.For example:-M /path/to/backup/tablespaces/%f_%l -l 1.1Creates:/path/to/backup/tablespaces/a_1.1/path/to/backup/tablespaces/b_1.1This of course would not work if there were tablespaces as follows:/path/to/first/tablespace/a/path/to/second/tablespace/aWhere %d would yield the same result for both tablespaces.  However, this seems like an unlikely scenario as the tablespace name within the database would need to be unique, but then requires them to use a directory name that isn't unique.  This could just be a scenario that isn't supported.Perhaps even allow it to auto-increment a version number it defines itself.  
Maybe %v implies “make up a version number here, and if one existed in the manifest previously, increment it”.Ultimately, it would turn this:pg_basebackup   -D /Users/thombrown/Development/backups/data1.5   -h /tmp   -p 5999   -c fast   -U thombrown   -l 1.5  -T /Users/thombrown/Development/tablespaces/ts_a=/Users/thombrown/Development/backups/tablespaces/1.5/backup_ts_a  -T /Users/thombrown/Development/tablespaces/ts_b=/Users/thombrown/Development/backups/tablespaces/1.5/backup_ts_b  -T /Users/thombrown/Development/tablespaces/ts_c=/Users/thombrown/Development/backups/tablespaces/1.5/backup_ts_c  -T /Users/thombrown/Development/tablespaces/ts_d=/Users/thombrown/Development/backups/tablespaces/1.5/backup_ts_d  -i /Users/thombrown/Development/backups/data1.4/backup_manifestInto this:pg_basebackup  -D /Users/thombrown/Development/backups/1.5/data  -h /tmp  -p 5999  -c fast  -U thombrown  -l 1.5  -M /Users/thombrown/Development/backups/tablespaces/%v/%d  -i /Users/thombrown/Development/backups/data1.4/backup_manifestIn fact, if I were permitted to get carried away:-D /Users/thombrown/Development/backups/%v/%dThen, the only thing that needs changing for each incremental backup is the manifest location (and optionally the label).Given that pg_combinebackup has the same option, I imagine something similar would need to be added there too.  We should already know where the tablespaces reside, as they are in the final backup specified in the list of backups, so that seems to just be a matter of getting input of how the tablespaces should be named in the reconstructed backup.For example:pg_combinebackup  -T /Users/thombrown/Development/backups/tablespaces/1.4/ts_a=/Users/thombrown/Development/backups/tablespaces/2.0_combined/ts_a  -T /Users/thombrown/Development/backups/tablespaces/1.4/ts_b=/Users/thombrown/Development/backups/tablespaces/2.0_combined/ts_b  -T /Users/thombrown/Development/backups/tablespaces/1.4/ts_c=/Users/thombrown/Development/backups/tablespaces/2.0_combined/ts_c  -T /Users/thombrown/Development/backups/tablespaces/1.4/ts_d=/Users/thombrown/Development/backups/tablespaces/2.0_combined/ts_d  -o /Users/thombrown/Development/backups/combined  /Users/thombrown/Development/backups/data{1.0_full,1.1,1.2,1.3,1.4}Becomes:pg_combinebackup  -M /Users/thombrown/Development/backups/tablespaces/%v_combined/%d  -o /Users/thombrown/Development/backups/%v_combined/%d  /Users/thombrown/Development/backups/{1.0_full,1.1,1.2,1.3,1.4}/dataYou may have inferred that I decided pg_combinebackup increments the version to the next major version, whereas pg_basebackup in incremental mode increments the minor version number.This, of course, becomes messy if the user decided to include the version number in the backup tablespace directory name, but then these sorts of things need to be figured out prior to placing into production anyway.I also get the feeling that accepting an unquoted % as a parameter on the command line could be problematic, such as it having a special meaning I haven't accounted for here.  In which case, it may require quoting.Thoughts?RegardsThom", "msg_date": "Sat, 27 Apr 2024 04:07:14 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Automatic tablespace management in pg_basebackup" } ]
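For illustration only: a minimal sketch of the placeholder expansion the proposed -M option would need, assuming the %p (full path), %d (last directory component) and %l (label) placeholders described in the proposal above. Nothing here is existing pg_basebackup code; the function name and its behaviour are guesses based solely on that proposal.

#include <stdio.h>
#include <string.h>

/*
 * Expand a hypothetical -M mapping template for one tablespace.
 *   %p - full original tablespace path
 *   %d - last component (directory name) of that path
 *   %l - backup label
 * Returns 0 on success, -1 if the result would not fit in "out".
 */
static int
expand_mapping(const char *tmpl, const char *ts_path,
               const char *label, char *out, size_t outlen)
{
    const char *dir = strrchr(ts_path, '/');
    size_t      used = 0;

    dir = dir ? dir + 1 : ts_path;

    for (const char *p = tmpl; *p; p++)
    {
        const char *repl = NULL;

        if (*p == '%' && p[1] != '\0')
        {
            switch (p[1])
            {
                case 'p': repl = ts_path; break;
                case 'd': repl = dir; break;
                case 'l': repl = label; break;
                case '%': repl = "%"; break;
            }
        }

        if (repl)
        {
            size_t len = strlen(repl);

            if (used + len >= outlen)
                return -1;
            memcpy(out + used, repl, len);
            used += len;
            p++;                /* skip the placeholder character */
        }
        else
        {
            if (used + 1 >= outlen)
                return -1;
            out[used++] = *p;
        }
    }
    out[used] = '\0';
    return 0;
}

int
main(void)
{
    char        result[1024];

    if (expand_mapping("/backups/tablespaces/%l/%d",
                       "/path/to/original/tablespace/a", "1.1",
                       result, sizeof(result)) == 0)
        printf("%s\n", result);  /* prints /backups/tablespaces/1.1/a */
    return 0;
}

With the template "/backups/tablespaces/%l/%d" and the tablespace path "/path/to/original/tablespace/a", the sketch produces "/backups/tablespaces/1.1/a", which matches the mapping the proposal spells out by hand with repeated -T options.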
[ { "msg_contents": "hi.\n\nI found some minor issues related to the EXPLAIN command.\n\ncannot auto-complete with a white space.\nsrc8=# explain (analyze,b\n\ncan auto-complete:\nsrc8=# explain (analyze, b\n\nto make tab-complete work, comma, must be followed with a white space,\nnot sure why.\n--------------\nexplain (serialize binary) select 1;\nERROR: EXPLAIN option SERIALIZE requires ANALYZE\n\ndo you think it's better to rephrase it as:\nERROR: EXPLAIN option SERIALIZE requires ANALYZE option\n\nsince we have separate ANALYZE SQL commands.\n--------------\n\n <para>\n Specify the output format, which can be TEXT, XML, JSON, or YAML.\n Non-text output contains the same information as the text output\n format, but is easier for programs to parse. This parameter defaults to\n <literal>TEXT</literal>.\n </para>\n\nshould we add <literal> attribute for {TEXT, XML, JSON, YAML} in the\nabove paragraph?\n\n--------------\ni created a patch for tab-complete for memory, SERIALIZE option.", "msg_date": "Sat, 27 Apr 2024 13:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "add tab-complete for memory, serialize option and other minor issues." }, { "msg_contents": "jian he <[email protected]> writes:\n> to make tab-complete work, comma, must be followed with a white space,\n> not sure why.\n\nhttps://www.postgresql.org/message-id/3870833.1712696581%40sss.pgh.pa.us\n\nPost-feature-freeze is no time to be messing with behavior as basic\nas WORD_BREAKS, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Apr 2024 11:15:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add tab-complete for memory,\n serialize option and other minor issues." }, { "msg_contents": "On Sat, Apr 27, 2024 at 11:15:47AM -0400, Tom Lane wrote:\n> https://www.postgresql.org/message-id/3870833.1712696581%40sss.pgh.pa.us\n> \n> Post-feature-freeze is no time to be messing with behavior as basic\n> as WORD_BREAKS, though.\n\nIndeed.\n\nBy the way, that psql completion patch has fallen through the cracks\nand I don't see a point in waiting for that, so I have applied it.\nNote that the patch did not order the options according to the docs,\nwhich was consistent on HEAD but not anymore with the patch. \n--\nMichael", "msg_date": "Wed, 1 May 2024 12:01:47 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add tab-complete for memory, serialize option and other minor\n issues." } ]
[ { "msg_contents": "Hey,\n\nI\"m trying to read the rows of a table in chunks to process them in a\nbackground worker.\nI want to ensure that each row is processed only once.\n\nI was thinking of using the `SELECT * ... OFFSET {offset_size} LIMIT\n{limit_size}` functionality for this but I\"m running into issues.\n\nSome approaches I had in mind that aren't working out:\n - Try to use the transaction id to query rows created since the last\nprocessed transaction id\n - It seems Postgres does not expose row transaction ids so this\napproach is not feasible\n - Rely on OFFSET / LIMIT combination to query the next chunk of data\n - SELECT * does not guarantee ordering of rows so it's possible older\nrows repeat or newer rows are missed in a chunk\n\nCan you please suggest any alternative to periodically read rows from a\ntable in chunks while processing each row exactly once.\n\nThanks,\nSushrut\n\nHey,I\"m trying to read the rows of a table in chunks to process them in a background worker.I want to ensure that each row is processed only once.I was thinking of using the `SELECT * ... OFFSET {offset_size} LIMIT {limit_size}` functionality for this but I\"m running into issues.Some approaches I had in mind that aren't working out: - Try to use the transaction id to query rows created since the last processed transaction id       - It seems Postgres does not expose row transaction ids so this approach is not feasible - Rely on OFFSET / LIMIT combination to query the next chunk of data       - SELECT * does not guarantee ordering of rows so it's possible older rows repeat or newer rows are missed in a chunkCan you please suggest any alternative to periodically read rows from a table in chunks while processing each row exactly once.Thanks,Sushrut", "msg_date": "Sat, 27 Apr 2024 13:16:45 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Read table rows in chunks" }, { "msg_contents": "Hi\n\nYou can also use the following approaches.\n\n1. Cursors\n2. FETCH with OFFSET clause\n\nRegards\nKashif Zeeshan\nBitnine Global\n\nOn Sat, Apr 27, 2024 at 12:47 PM Sushrut Shivaswamy <\[email protected]> wrote:\n\n> Hey,\n>\n> I\"m trying to read the rows of a table in chunks to process them in a\n> background worker.\n> I want to ensure that each row is processed only once.\n>\n> I was thinking of using the `SELECT * ... OFFSET {offset_size} LIMIT\n> {limit_size}` functionality for this but I\"m running into issues.\n>\n> Some approaches I had in mind that aren't working out:\n> - Try to use the transaction id to query rows created since the last\n> processed transaction id\n> - It seems Postgres does not expose row transaction ids so this\n> approach is not feasible\n> - Rely on OFFSET / LIMIT combination to query the next chunk of data\n> - SELECT * does not guarantee ordering of rows so it's possible\n> older rows repeat or newer rows are missed in a chunk\n>\n> Can you please suggest any alternative to periodically read rows from a\n> table in chunks while processing each row exactly once.\n>\n> Thanks,\n> Sushrut\n>\n>\n>\n>\n\nHiYou can also use the following approaches.1. Cursors2. FETCH with OFFSET clauseRegardsKashif ZeeshanBitnine GlobalOn Sat, Apr 27, 2024 at 12:47 PM Sushrut Shivaswamy <[email protected]> wrote:Hey,I\"m trying to read the rows of a table in chunks to process them in a background worker.I want to ensure that each row is processed only once.I was thinking of using the `SELECT * ... 
OFFSET {offset_size} LIMIT {limit_size}` functionality for this but I\"m running into issues.Some approaches I had in mind that aren't working out: - Try to use the transaction id to query rows created since the last processed transaction id       - It seems Postgres does not expose row transaction ids so this approach is not feasible - Rely on OFFSET / LIMIT combination to query the next chunk of data       - SELECT * does not guarantee ordering of rows so it's possible older rows repeat or newer rows are missed in a chunkCan you please suggest any alternative to periodically read rows from a table in chunks while processing each row exactly once.Thanks,Sushrut", "msg_date": "Sat, 27 Apr 2024 22:04:27 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read table rows in chunks" }, { "msg_contents": "On Sat, Apr 27, 2024 at 12:47 AM Sushrut Shivaswamy <\[email protected]> wrote:\n\n>\n> I\"m trying to read the rows of a table in chunks to process them in a\n> background worker.\n>\n\nThis list really isn't the place for this kind of discussion. You are\ndoing application-level stuff, not working on patches for core. General\ndiscussions and questions like this should be directed to the -general\nmailing list.\n\nI want to ensure that each row is processed only once.\n>\n\nIs failure during processing possible?\n\n\n> I was thinking of using the `SELECT * ... OFFSET {offset_size} LIMIT\n> {limit_size}` functionality for this but I\"m running into issues.\n>\n\nFOR UPDATE and SKIPPED LOCKED clauses usually come into play for this use\ncase.\n\n\n> Can you please suggest any alternative to periodically read rows from a\n> table in chunks while processing each row exactly once.\n>\n>\nI think you are fooling yourself if you think you can do this without\nwriting back to the row the fact it has been processed. At which point\nensuring that you only retrieve and process unprocessed rows is trivial -\njust don't select ones with a status of processed.\n\nIf adding a column to the data is not possible, record processed row\nidentifiers into a separate table and inspect that.\n\nDavId J.\n\nOn Sat, Apr 27, 2024 at 12:47 AM Sushrut Shivaswamy <[email protected]> wrote:I\"m trying to read the rows of a table in chunks to process them in a background worker.This list really isn't the place for this kind of discussion.  You are doing application-level stuff, not working on patches for core.  General discussions and questions like this should be directed to the -general mailing list.I want to ensure that each row is processed only once.Is failure during processing possible?I was thinking of using the `SELECT * ... OFFSET {offset_size} LIMIT {limit_size}` functionality for this but I\"m running into issues.FOR UPDATE and SKIPPED LOCKED clauses usually come into play for this use case. Can you please suggest any alternative to periodically read rows from a table in chunks while processing each row exactly once.I think you are fooling yourself if you think you can do this without writing back to the row the fact it has been processed.  At which point ensuring that you only retrieve and process unprocessed rows is trivial - just don't select ones with a status of processed.If adding a column to the data is not possible, record processed row identifiers into a separate table and inspect that.DavId J.", "msg_date": "Sat, 27 Apr 2024 10:46:16 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Read table rows in chunks" } ]
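As a concrete (and entirely hypothetical) illustration of the advice given in this thread, here is a minimal libpq sketch of the queue pattern: claim a batch with FOR UPDATE SKIP LOCKED and mark it processed in the same statement, so each row is handed out exactly once even with several workers. The work_items table, its id/payload/processed columns, and the batch size of 1000 are invented for the example.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb("");     /* connection parameters taken from the environment */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    for (;;)
    {
        PGresult   *res;
        int         nrows;

        PQclear(PQexec(conn, "BEGIN"));

        /* Claim one batch: lock, mark, and return the rows in one statement. */
        res = PQexec(conn,
                     "UPDATE work_items SET processed = true "
                     "WHERE id IN (SELECT id FROM work_items "
                     "              WHERE NOT processed "
                     "              ORDER BY id "
                     "              LIMIT 1000 "
                     "              FOR UPDATE SKIP LOCKED) "
                     "RETURNING id, payload");

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "claiming batch failed: %s", PQerrorMessage(conn));
            PQclear(res);
            PQclear(PQexec(conn, "ROLLBACK"));
            break;
        }

        nrows = PQntuples(res);
        if (nrows == 0)
        {
            PQclear(res);
            PQclear(PQexec(conn, "COMMIT"));
            break;              /* nothing left to process */
        }

        for (int i = 0; i < nrows; i++)
        {
            /* do the real work here; PQgetvalue() returns a NUL-terminated string */
            printf("processing id=%s\n", PQgetvalue(res, i, 0));
        }

        PQclear(res);
        PQclear(PQexec(conn, "COMMIT"));
    }

    PQfinish(conn);
    return 0;
}

Because the claim and the processing run inside one transaction, a worker that dies mid-batch rolls back its UPDATE and those rows become claimable again, while FOR UPDATE SKIP LOCKED keeps concurrent workers from blocking on each other's in-flight batches.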
[ { "msg_contents": "Hi all,\n\nAttached is a patch that fixes some overflow/underflow hazards in\n`timestamp_pl_interval`. The microseconds overflow could produce\nincorrect result. The month overflow would generally still result in an\nerror from the timestamp month field being too low, but it's still\nbetter to catch it early.\n\nI couldn't find any existing timestamp plus interval tests so I stuck\na new tests in `timestamp.sql`. If there's a better place, then\nplease let me know.\n\nThanks,\nJoe Koshakow", "msg_date": "Sat, 27 Apr 2024 22:59:44 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Fix overflow hazard in timestamp_pl_interval" }, { "msg_contents": "Hi all,\n\nImmediately after sending this I realized that timestamptz suffers\nfrom the same problem. Attached is an updated patch that fixes\ntimestamptz too.\n\nThanks,\nJoe Koshakow\n\nOn Sat, Apr 27, 2024 at 10:59 PM Joseph Koshakow <[email protected]> wrote:\n\n> Hi all,\n>\n> Attached is a patch that fixes some overflow/underflow hazards in\n> `timestamp_pl_interval`. The microseconds overflow could produce\n> incorrect result. The month overflow would generally still result in an\n> error from the timestamp month field being too low, but it's still\n> better to catch it early.\n>\n> I couldn't find any existing timestamp plus interval tests so I stuck\n> a new tests in `timestamp.sql`. If there's a better place, then\n> please let me know.\n>\n> Thanks,\n> Joe Koshakow\n>", "msg_date": "Sat, 27 Apr 2024 23:19:52 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix overflow hazard in timestamp_pl_interval" }, { "msg_contents": "Joseph Koshakow <[email protected]> writes:\n>> Attached is a patch that fixes some overflow/underflow hazards in\n>> `timestamp_pl_interval`. The microseconds overflow could produce\n>> incorrect result. The month overflow would generally still result in an\n>> error from the timestamp month field being too low, but it's still\n>> better to catch it early.\n\nYeah. I had earlier concluded that we couldn't overflow here without\ntriggering the range checks in tm2timestamp, but clearly that was\ntoo optimistic.\n\n>> I couldn't find any existing timestamp plus interval tests so I stuck\n>> a new tests in `timestamp.sql`. If there's a better place, then\n>> please let me know.\n\nThey're in horology.sql, so I moved the tests there and pushed it.\nThanks for the report!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Apr 2024 13:45:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix overflow hazard in timestamp_pl_interval" } ]
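For readers following along, a self-contained sketch of the overflow-checking technique this kind of fix relies on: test the 64-bit addition before trusting its result. It is written in the spirit of PostgreSQL's pg_add_s64_overflow(), but it is not the committed patch, and __builtin_add_overflow assumes GCC or Clang.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Checked 64-bit addition: returns true on overflow, otherwise stores
 * the sum in *result.
 */
static bool
add_s64_overflow(int64_t a, int64_t b, int64_t *result)
{
    return __builtin_add_overflow(a, b, result);
}

int
main(void)
{
    int64_t     ts = INT64_MAX - 10;    /* a microsecond timestamp near the top of the range */
    int64_t     span = 1000000;         /* one second worth of microseconds */
    int64_t     sum;

    if (add_s64_overflow(ts, span, &sum))
        fprintf(stderr, "timestamp out of range\n");    /* taken for these inputs */
    else
        printf("%lld\n", (long long) sum);
    return 0;
}

The same shape works for the interval months field with 32-bit arithmetic; the point is simply to fail with an "out of range" error instead of wrapping silently into an incorrect result.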
[ { "msg_contents": "Yesterday I noticed a failure on cirrus-ci for the 'Right Semi Join'\npatch. The failure can be found at [1], and it looks like:\n\n--- /tmp/cirrus-ci-build/src/test/regress/expected/prepared_xacts.out\n2024-04-27 00:41:25.831297000 +0000\n+++\n/tmp/cirrus-ci-build/build/testrun/regress-running/regress/results/prepared_xacts.out\n 2024-04-27 00:45:50.261369000 +0000\n@@ -83,8 +83,9 @@\n SELECT gid FROM pg_prepared_xacts;\n gid\n ------\n+ gxid\n foo3\n-(1 row)\n+(2 rows)\n\nUpon closer look, it seems that this issue is not caused by the patch\nabout 'Right Semi Join', because this query, which initially included\ntwo left joins, can actually be reduced to a function scan after\nremoving these two useless left joins. It seems that no semi-joins\nwould be involved.\n\nEXPLAIN SELECT gid FROM pg_prepared_xacts;\n QUERY PLAN\n----------------------------------------------------------------------------\n Function Scan on pg_prepared_xact p (cost=0.00..10.00 rows=1000 width=32)\n(1 row)\n\nDoes anyone have any clue to this failure?\n\nFWIW, after another run of this test, the failure just disappears. Does\nit suggest that the test case is flaky?\n\n[1]\nhttps://api.cirrus-ci.com/v1/artifact/task/6220592364388352/testrun/build/testrun/regress-running/regress/regression.diffs\n\nThanks\nRichard\n\nYesterday I noticed a failure on cirrus-ci for the 'Right Semi Join'patch.  The failure can be found at [1], and it looks like:--- /tmp/cirrus-ci-build/src/test/regress/expected/prepared_xacts.out   2024-04-27 00:41:25.831297000 +0000+++ /tmp/cirrus-ci-build/build/testrun/regress-running/regress/results/prepared_xacts.out   2024-04-27 00:45:50.261369000 +0000@@ -83,8 +83,9 @@ SELECT gid FROM pg_prepared_xacts;  gid ------+ gxid  foo3-(1 row)+(2 rows)Upon closer look, it seems that this issue is not caused by the patchabout 'Right Semi Join', because this query, which initially includedtwo left joins, can actually be reduced to a function scan afterremoving these two useless left joins.  It seems that no semi-joinswould be involved.EXPLAIN SELECT gid FROM pg_prepared_xacts;                                 QUERY PLAN---------------------------------------------------------------------------- Function Scan on pg_prepared_xact p  (cost=0.00..10.00 rows=1000 width=32)(1 row)Does anyone have any clue to this failure?FWIW, after another run of this test, the failure just disappears.  Doesit suggest that the test case is flaky?[1] https://api.cirrus-ci.com/v1/artifact/task/6220592364388352/testrun/build/testrun/regress-running/regress/regression.diffsThanksRichard", "msg_date": "Mon, 29 Apr 2024 09:12:48 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "A failure in prepared_xacts test" }, { "msg_contents": "On Mon, Apr 29, 2024 at 09:12:48AM +0800, Richard Guo wrote:\n> Does anyone have any clue to this failure?\n> \n> FWIW, after another run of this test, the failure just disappears. Does\n> it suggest that the test case is flaky?\n\nIf you grep the source tree, you'd notice that a prepared transaction\nnamed gxid only exists in the 2PC tests of ECPG, in\nsrc/interfaces/ecpg/test/sql/twophase.pgc. 
So the origin of the\nfailure comes from a race condition due to test parallelization,\nbecause the scan of pg_prepared_xacts affects all databases with\ninstallcheck, and in your case it means that the scan of\npg_prepared_xacts was running in parallel of the ECPG tests with an\ninstallcheck.\n\nThe only location in the whole tree where we want to do predictible\nscans of pg_prepared_xacts is prepared_xacts.sql, so rather than\nplaying with 2PC transactions across a bunch of tests, I think that we\nshould do two things, both touching prepared_xacts.sql:\n- The 2PC transactions run in the main regression test suite should\nuse names that would be unlikely used elsewhere.\n- Limit the scans of pg_prepared_xacts on these name patterns to avoid\ninterferences.\n\nSee for example the attached with both expected outputs updated\ndepending on the value set for max_prepared_transactions in the\nbackend. There may be an argument in back-patching that, but I don't\nrecall seeing this failure in the CI, so perhaps that's not worth\nbothering with. What do you think?\n--\nMichael", "msg_date": "Mon, 29 Apr 2024 13:58:51 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "Hello Richard,\n\n29.04.2024 04:12, Richard Guo wrote:\n> Does anyone have any clue to this failure?\n>\n> FWIW, after another run of this test, the failure just disappears.  Does\n> it suggest that the test case is flaky?\n>\n\nI think this could be caused by the ecpg test twophase executed\nsimultaneously with the test prepared_xacts thanks to meson's jobs\nparallelization.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 29 Apr 2024 08:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> If you grep the source tree, you'd notice that a prepared transaction\n> named gxid only exists in the 2PC tests of ECPG, in\n> src/interfaces/ecpg/test/sql/twophase.pgc. So the origin of the\n> failure comes from a race condition due to test parallelization,\n> because the scan of pg_prepared_xacts affects all databases with\n> installcheck, and in your case it means that the scan of\n> pg_prepared_xacts was running in parallel of the ECPG tests with an\n> installcheck.\n\nUp to now, we've only worried about whether tests running in parallel\nwithin a single test suite can interact. It's quite scary to think\nthat the meson setup has expanded the possibility of interactions\nto our entire source tree. Maybe that was a bad idea and we should\nfix the meson infrastructure to not do that. I fear that otherwise,\nwe'll get bit regularly by very-low-probability bugs of this kind.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Apr 2024 01:11:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "On Mon, Apr 29, 2024 at 01:11:00AM -0400, Tom Lane wrote:\n> Up to now, we've only worried about whether tests running in parallel\n> within a single test suite can interact. It's quite scary to think\n> that the meson setup has expanded the possibility of interactions\n> to our entire source tree. Maybe that was a bad idea and we should\n> fix the meson infrastructure to not do that. 
I fear that otherwise,\n> we'll get bit regularly by very-low-probability bugs of this kind.\n\nI don't disagree with your point, still I'm not sure that this can be\nmade entirely bullet-proof. Anyway, I think that we should still\nimprove this test and make it more robust for parallel operations:\ninstallcheck fails equally on HEAD if there is a prepared transaction\non the backend where the tests run, and that seems like a bad idea to\nme to rely on cluster-wide scans for what should be a \"local\" test.\n--\nMichael", "msg_date": "Mon, 29 Apr 2024 14:25:10 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "Hello Tom and Michael,\n\n29.04.2024 08:11, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n>> If you grep the source tree, you'd notice that a prepared transaction\n>> named gxid only exists in the 2PC tests of ECPG, in\n>> src/interfaces/ecpg/test/sql/twophase.pgc. So the origin of the\n>> failure comes from a race condition due to test parallelization,\n>> because the scan of pg_prepared_xacts affects all databases with\n>> installcheck, and in your case it means that the scan of\n>> pg_prepared_xacts was running in parallel of the ECPG tests with an\n>> installcheck.\n> Up to now, we've only worried about whether tests running in parallel\n> within a single test suite can interact. It's quite scary to think\n> that the meson setup has expanded the possibility of interactions\n> to our entire source tree. Maybe that was a bad idea and we should\n> fix the meson infrastructure to not do that. I fear that otherwise,\n> we'll get bit regularly by very-low-probability bugs of this kind.\n\nYes, I'm afraid of the same. For example, the test failure [1] is of that\nilk, I guess.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-04-17%2016%3A33%3A23\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 29 Apr 2024 08:30:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> I don't disagree with your point, still I'm not sure that this can be\n> made entirely bullet-proof. Anyway, I think that we should still\n> improve this test and make it more robust for parallel operations:\n> installcheck fails equally on HEAD if there is a prepared transaction\n> on the backend where the tests run, and that seems like a bad idea to\n> me to rely on cluster-wide scans for what should be a \"local\" test.\n\nTrue, it's antithetical to the point of an \"installcheck\" test if\nunrelated actions in another database can break it. So I'm fine\nwith tightening up prepared_xacts's query. 
I just wonder how far\nwe want to try to carry this.\n\n(BTW, on the same logic, should ecpg's twophase.pgc be using a\nprepared-transaction name that's less generic than \"gxid\"?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Apr 2024 01:32:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "On Mon, Apr 29, 2024 at 01:32:40AM -0400, Tom Lane wrote:\n> (BTW, on the same logic, should ecpg's twophase.pgc be using a\n> prepared-transaction name that's less generic than \"gxid\"?)\n\nI've hesitated a few seconds about that before sending my patch, but\nrefrained because this stuff does not care about the contents of\npg_prepared_xacts. I'd be OK to use something like an \"ecpg_regress\"\nor something similar there.\n--\nMichael", "msg_date": "Mon, 29 Apr 2024 15:57:58 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "On Mon, Apr 29, 2024 at 1:11 PM Tom Lane <[email protected]> wrote:\n\n> Up to now, we've only worried about whether tests running in parallel\n> within a single test suite can interact. It's quite scary to think\n> that the meson setup has expanded the possibility of interactions\n> to our entire source tree. Maybe that was a bad idea and we should\n> fix the meson infrastructure to not do that. I fear that otherwise,\n> we'll get bit regularly by very-low-probability bugs of this kind.\n\n\nI have the same concern. I suspect that the scan of pg_prepared_xacts\nis not the only test that could cause problems when running in parallel\nto other tests from the entire source tree.\n\nThanks\nRichard\n\nOn Mon, Apr 29, 2024 at 1:11 PM Tom Lane <[email protected]> wrote:\nUp to now, we've only worried about whether tests running in parallel\nwithin a single test suite can interact.  It's quite scary to think\nthat the meson setup has expanded the possibility of interactions\nto our entire source tree.  Maybe that was a bad idea and we should\nfix the meson infrastructure to not do that.  I fear that otherwise,\nwe'll get bit regularly by very-low-probability bugs of this kind.I have the same concern.  I suspect that the scan of pg_prepared_xactsis not the only test that could cause problems when running in parallelto other tests from the entire source tree.ThanksRichard", "msg_date": "Mon, 29 Apr 2024 16:49:28 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "On Mon, Apr 29, 2024 at 2:58 PM Michael Paquier <[email protected]> wrote:\n\n> On Mon, Apr 29, 2024 at 01:32:40AM -0400, Tom Lane wrote:\n> > (BTW, on the same logic, should ecpg's twophase.pgc be using a\n> > prepared-transaction name that's less generic than \"gxid\"?)\n>\n> I've hesitated a few seconds about that before sending my patch, but\n> refrained because this stuff does not care about the contents of\n> pg_prepared_xacts. I'd be OK to use something like an \"ecpg_regress\"\n> or something similar there.\n\n\nI noticed that some TAP tests from recovery and subscription would\nselect the count from pg_prepared_xacts. 
I wonder if these tests would\nbe affected if there are any prepared transactions on the backend.\n\nThanks\nRichard\n\nOn Mon, Apr 29, 2024 at 2:58 PM Michael Paquier <[email protected]> wrote:On Mon, Apr 29, 2024 at 01:32:40AM -0400, Tom Lane wrote:\n> (BTW, on the same logic, should ecpg's twophase.pgc be using a\n> prepared-transaction name that's less generic than \"gxid\"?)\n\nI've hesitated a few seconds about that before sending my patch, but\nrefrained because this stuff does not care about the contents of\npg_prepared_xacts.  I'd be OK to use something like an \"ecpg_regress\"\nor something similar there.I noticed that some TAP tests from recovery and subscription wouldselect the count from pg_prepared_xacts.  I wonder if these tests wouldbe affected if there are any prepared transactions on the backend.ThanksRichard", "msg_date": "Mon, 29 Apr 2024 17:11:19 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "On Mon, Apr 29, 2024 at 05:11:19PM +0800, Richard Guo wrote:\n> I noticed that some TAP tests from recovery and subscription would\n> select the count from pg_prepared_xacts. I wonder if these tests would\n> be affected if there are any prepared transactions on the backend.\n\nTAP tests run in isolation of the rest with their own clusters\ninitialized from a copy initdb'd (rather than initdb because that's\nmuch cheaper), so these scans are OK left alone.\n--\nMichael", "msg_date": "Mon, 29 Apr 2024 18:19:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> I noticed that some TAP tests from recovery and subscription would\n> select the count from pg_prepared_xacts. I wonder if these tests would\n> be affected if there are any prepared transactions on the backend.\n\nTAP tests shouldn't be at risk, because there is no \"make\ninstallcheck\" equivalent for them. Each TAP test creates its own\ndatabase instance (or maybe several), so that instance won't have\nanything else going on.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Apr 2024 09:45:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "On Mon, Apr 29, 2024 at 09:45:16AM -0400, Tom Lane wrote:\n> TAP tests shouldn't be at risk, because there is no \"make\n> installcheck\" equivalent for them. Each TAP test creates its own\n> database instance (or maybe several), so that instance won't have\n> anything else going on.\n\nThere are a few more 2PC transactions in test_decoding (no\ninstallcheck), temp.sql, test_extensions.sql and pg_stat_statements's\nutility.sql (no installcheck) but their GIDs are not that bad.\ntwophase_stream.sql has a GID \"test1\", which is kind of generic, but\nit won't run in parallel. 
At the end, only addressing the\nprepared_xacts.sql and the ECPG bits looked enough to me, so I've\ntweaked these with 7e61e4cc7cfc and called it a day.\n\nI'd be curious about any discussion involving the structure of the\nmeson tests.\n--\nMichael", "msg_date": "Tue, 30 Apr 2024 07:42:52 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "On Mon, Apr 29, 2024 at 9:45 PM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > I noticed that some TAP tests from recovery and subscription would\n> > select the count from pg_prepared_xacts. I wonder if these tests would\n> > be affected if there are any prepared transactions on the backend.\n>\n> TAP tests shouldn't be at risk, because there is no \"make\n> installcheck\" equivalent for them. Each TAP test creates its own\n> database instance (or maybe several), so that instance won't have\n> anything else going on.\n\n\nThank you for the explanation. I wasn't aware of this before.\n\nThanks\nRichard\n\nOn Mon, Apr 29, 2024 at 9:45 PM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> I noticed that some TAP tests from recovery and subscription would\n> select the count from pg_prepared_xacts.  I wonder if these tests would\n> be affected if there are any prepared transactions on the backend.\n\nTAP tests shouldn't be at risk, because there is no \"make\ninstallcheck\" equivalent for them.  Each TAP test creates its own\ndatabase instance (or maybe several), so that instance won't have\nanything else going on.Thank you for the explanation.  I wasn't aware of this before.ThanksRichard", "msg_date": "Tue, 30 Apr 2024 08:43:32 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "On Tue, Apr 30, 2024 at 6:43 AM Michael Paquier <[email protected]> wrote:\n\n> I'd be curious about any discussion involving the structure of the\n> meson tests.\n\n\n+1. I'm kind of worried that the expansion of parallelization could\nlead to more instances of instability. Alexander mentioned one such\ncase at [1]. I haven't looked into it though.\n\n[1]\nhttps://www.postgresql.org/message-id/cbf0156f-5aa1-91db-5802-82435dda03e6%40gmail.com\n\nThanks\nRichard\n\nOn Tue, Apr 30, 2024 at 6:43 AM Michael Paquier <[email protected]> wrote:\nI'd be curious about any discussion involving the structure of the\nmeson tests.+1.  I'm kind of worried that the expansion of parallelization couldlead to more instances of instability.  Alexander mentioned one suchcase at [1].  I haven't looked into it though.[1] https://www.postgresql.org/message-id/cbf0156f-5aa1-91db-5802-82435dda03e6%40gmail.comThanksRichard", "msg_date": "Tue, 30 Apr 2024 08:54:47 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> +1. I'm kind of worried that the expansion of parallelization could\n> lead to more instances of instability. Alexander mentioned one such\n> case at [1]. I haven't looked into it though.\n> [1] https://www.postgresql.org/message-id/cbf0156f-5aa1-91db-5802-82435dda03e6%40gmail.com\n\nThe mechanism there is pretty obvious: a plancache flush happened\nat just the wrong (right?) 
time and caused the output to change,\nas indeed the comment acknowledges:\n\n -- currently, this fails due to cached plan for \"r.f1 + 1\" expression\n -- (but if debug_discard_caches is on, it will succeed)\n\nI wonder if we shouldn't just remove that test case as being\ntoo unstable -- especially since it's not proving much anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Apr 2024 21:48:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "Hi all,\n\nAttached is a patch that fixes bug when calling strncmp function, in \nwhich case the third argument (authmethod - strchr(authmethod, ' ')) \nmay be negative, which is not as expected..\n\n\nWith Regards,\nJingxian Li.", "msg_date": "Tue, 30 Apr 2024 10:41:39 +0800", "msg_from": "\"Jingxian Li\" <[email protected]>", "msg_from_op": false, "msg_subject": "[PATCH] Fix bug when calling strncmp in check_authmethod_valid" }, { "msg_contents": "On Tue, Apr 30, 2024 at 10:41 AM Jingxian Li <[email protected]> wrote:\n\n> Attached is a patch that fixes bug when calling strncmp function, in\n> which case the third argument (authmethod - strchr(authmethod, ' '))\n> may be negative, which is not as expected..\n\n\nNice catch. I think you're right from a quick glance.\n\nThanks\nRichard\n\nOn Tue, Apr 30, 2024 at 10:41 AM Jingxian Li <[email protected]> wrote:\nAttached is a patch that fixes bug when calling strncmp function, in \nwhich case the third argument (authmethod - strchr(authmethod, ' ')) \nmay be negative, which is not as expected..Nice catch.  I think you're right from a quick glance.ThanksRichard", "msg_date": "Tue, 30 Apr 2024 12:16:23 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix bug when calling strncmp in check_authmethod_valid" }, { "msg_contents": "> On 30 Apr 2024, at 04:41, Jingxian Li <[email protected]> wrote:\n\n> Attached is a patch that fixes bug when calling strncmp function, in \n> which case the third argument (authmethod - strchr(authmethod, ' ')) \n> may be negative, which is not as expected..\n\nThe calculation is indeed incorrect, but the lack of complaints of it being\nbroken made me wonder why this exist in the first place. This dates back to\ne7029b212755, just shy of 2 decades old, which added --auth with support for\nstrings with auth-options to ident and pam like --auth 'pam <servicename>' and\n'ident sameuser'. Support for options to ident was removed in 01c1a12a5bb4 but\noptions to pam is still supported (although not documented), but was AFAICT\nbroken in commit 8a02339e9ba3 some 12 years ago with this strncmp().\n\n- if (strncmp(authmethod, *p, (authmethod - strchr(authmethod, ' '))) == 0)\n+ if (strncmp(authmethod, *p, (strchr(authmethod, ' ') - authmethod)) == 0)\n\nThis with compare \"pam postgresql\" with \"pam\" and not \"pam \" so the length\nshould be \"(strchr(authmethod, ' ') - authmethod + 1)\" since \"pam \" is a method\nseparate from \"pam\" in auth_methods_{host|local}. We don't want to allow \"md5\n\" as that's not a method in the array of valid methods.\n\nBut, since it's been broken in all supported versions of postgres and has\nAFAICT never been documented to exist, should we fix it or just remove it? We\ndon't support auth-options for any other methods, like clientcert to cert for\nexample. 
If we fix it we should also document that it works IMHO.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 30 Apr 2024 11:14:37 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix bug when calling strncmp in check_authmethod_valid" }, { "msg_contents": "On 29.04.24 07:11, Tom Lane wrote:\n> Up to now, we've only worried about whether tests running in parallel\n> within a single test suite can interact. It's quite scary to think\n> that the meson setup has expanded the possibility of interactions\n> to our entire source tree. Maybe that was a bad idea and we should\n> fix the meson infrastructure to not do that. I fear that otherwise,\n> we'll get bit regularly by very-low-probability bugs of this kind.\n\nI don't think there is anything fundamentally different in the \nparallelism setups of the make-based and the meson-based tests. There \nare just different implementation details that might affect the likely \norderings and groupings.\n\n\n\n", "msg_date": "Fri, 3 May 2024 14:27:40 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in prepared_xacts test" }, { "msg_contents": "Hi Daniel,\nThank you for explaining the ins and outs of this problem.\n\nOn 2024/4/30 17:14, Daniel Gustafsson wrote:\n>> On 30 Apr 2024, at 04:41, Jingxian Li <[email protected]> wrote:\n>\n>> Attached is a patch that fixes bug when calling strncmp function, in \n>> which case the third argument (authmethod - strchr(authmethod, ' ')) \n>> may be negative, which is not as expected..\n>\n> The calculation is indeed incorrect, but the lack of complaints of it being\n> broken made me wonder why this exist in the first place. This dates back to\n> e7029b212755, just shy of 2 decades old, which added --auth with support for\n> strings with auth-options to ident and pam like --auth 'pam <servicename>' and\n> 'ident sameuser'. Support for options to ident was removed in 01c1a12a5bb4 but\n> options to pam is still supported (although not documented), but was AFAICT\n> broken in commit 8a02339e9ba3 some 12 years ago with this strncmp().\n>\n> - if (strncmp(authmethod, *p, (authmethod - strchr(authmethod, ' '))) == 0)\n> + if (strncmp(authmethod, *p, (strchr(authmethod, ' ') - authmethod)) == 0)\n>\n> This with compare \"pam postgresql\" with \"pam\" and not \"pam \" so the length\n> should be \"(strchr(authmethod, ' ') - authmethod + 1)\" since \"pam \" is a method\n> separate from \"pam\" in auth_methods_{host|local}. We don't want to allow \"md5\n> \" as that's not a method in the array of valid methods.\n>\n> But, since it's been broken in all supported versions of postgres and has\n> AFAICT never been documented to exist, should we fix it or just remove it? We\n> don't support auth-options for any other methods, like clientcert to cert for\n> example. If we fix it we should also document that it works IMHO.\n\nYou mentioned that auth-options are not supported for auth methods except pam,\nbut I found that some methods (such as ldap and radius etc.) 
also requires aut-options,\nand there are no corresponding auth methods ending with space (such as \"ldap \" and \nradius \") present in auth_methods_host and auth_methods_local arrays.\n\n--\nJingxian Li\n", "msg_date": "Tue, 7 May 2024 12:46:27 +0800", "msg_from": "\"Jingxian Li\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix bug when calling strncmp in check_authmethod_valid" }, { "msg_contents": "> On 7 May 2024, at 06:46, Jingxian Li <[email protected]> wrote:\n\n>> But, since it's been broken in all supported versions of postgres and has\n>> AFAICT never been documented to exist, should we fix it or just remove it? We\n>> don't support auth-options for any other methods, like clientcert to cert for\n>> example. If we fix it we should also document that it works IMHO.\n> \n> You mentioned that auth-options are not supported for auth methods except pam,\n> but I found that some methods (such as ldap and radius etc.) also requires aut-options,\n> and there are no corresponding auth methods ending with space (such as \"ldap \" and \n> radius \") present in auth_methods_host and auth_methods_local arrays.\n\nCorrect, only pam and ident were ever supported (yet not documented) and ident\nwas removed a long time ago.\n\nSearching the archives I was unable to find any complaints, and this has been\nbroken for the entire window of supported releases, so I propose we remove it\nas per the attached patch. If anyone is keen on making this work again for all\nthe types where it makes sense, it can be resurrected (probably with a better\nimplementation).\n\nAny objections to fixing this in 17 by removing it? (cc:ing Michael from the RMT)\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 13 May 2024 10:34:48 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix bug when calling strncmp in check_authmethod_valid" }, { "msg_contents": "Hi,\n\n> Searching the archives I was unable to find any complaints, and this has been\n> broken for the entire window of supported releases, so I propose we remove it\n> as per the attached patch. If anyone is keen on making this work again for all\n> the types where it makes sense, it can be resurrected (probably with a better\n> implementation).\n>\n> Any objections to fixing this in 17 by removing it? (cc:ing Michael from the RMT)\n\n+1 Something that is not documented or used by anyone (apparently) and\nis broken should just be removed.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 13 May 2024 13:01:21 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix bug when calling strncmp in check_authmethod_valid" }, { "msg_contents": "On Mon, May 13, 2024 at 01:01:21PM +0300, Aleksander Alekseev wrote:\n>> Any objections to fixing this in 17 by removing it? (cc:ing Michael from the RMT)\n> \n> +1 Something that is not documented or used by anyone (apparently) and\n> is broken should just be removed.\n\n8a02339e9ba3 sounds like an argument good enough to prove there is no\ndemand in the field for being able to support options through initdb\n--auth, and this does not concern only pam. If somebody is interested\nin that, that could always be done later. My take is that this would\nbe simpler if implemented through a separate option, leaving the\nchecks between the options and the auth method up to the postmaster\nwhen loading pg_hba.conf at startup.\n\nHence, no objections to clean up that now. 
Thanks for asking.\n--\nMichael", "msg_date": "Tue, 14 May 2024 14:12:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix bug when calling strncmp in check_authmethod_valid" }, { "msg_contents": "> On 14 May 2024, at 07:12, Michael Paquier <[email protected]> wrote:\n\n> Hence, no objections to clean up that now. Thanks for asking.\n\nThanks for verifying, I've pushed this now.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 14 May 2024 11:45:28 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix bug when calling strncmp in check_authmethod_valid" } ]
[ { "msg_contents": "hi.\n\nselect * from pg_input_error_info('42000000000', 'integer')\nselect message, detail from pg_input_error_info('1234.567', 'numeric(7,4)')\nI found above two examples at [0] crammed together.\n\n\n <para>\n <literal>select * from pg_input_error_info('42000000000',\n'integer')</literal>\n <returnvalue></returnvalue>\n<programlisting>\n message | detail | hint\n| sql_error_code\n------------------------------------------------------+--------+------+----------------\n value \"42000000000\" is out of range for type integer | | | 22003\n</programlisting>\n </para>\n <para>\n <literal>select message, detail from\npg_input_error_info('1234.567', 'numeric(7,4)')</literal>\n <returnvalue></returnvalue>\n<programlisting>\n message | detail\n------------------------+----------------------------------&zwsp;-------------------------------------------------\n numeric field overflow | A field with precision 7, scale 4 must round\nto an absolute value less than 10^3.\n</programlisting>\n\n\nafter checking the definition of <programlisting>[1], <screen>[2],\nmaybe here we should use <screen> and also add `(1 row)` information.\n\nor we can simply add a empty new line between\n` value \"42000000000\" is out of range for type integer | | | 22003`\nand\n`</programlisting>`\n\n\n[0] https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-VALIDITY\n[1] https://tdg.docbook.org/tdg/4.5/programlisting\n[2] https://tdg.docbook.org/tdg/4.5/screen\n\n\n\n-- \n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian\n\n\n", "msg_date": "Mon, 29 Apr 2024 09:28:03 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "pg_input_error_info doc 2 exampled crammed together" }, { "msg_contents": "On Sunday, April 28, 2024, jian he <[email protected]> wrote:\n>\n>\n> after checking the definition of <programlisting>[1], <screen>[2],\n> maybe here we should use <screen>\n\n\n>\nPossibly, though I’d be curious to see how consistent we are on this point\nelsewhere before making a point of it.\n\n\n>\n> and also add `(1 row)` information.\n\n\nDoesn’t seem like added value.\n\n\n>\n> or we can simply add a empty new line between\n> ` value \"42000000000\" is out of range for type integer | | |\n> 22003`\n> and\n> `</programlisting>`\n\n\nMy preference would be to limit this section to a single example. The\nnumeric one, as it provides values for more output columns. I would change\nthe output format to expanded from default, in order to show all columns\nand not overrun the length of a single line.\n\nDavid J.\n\nOn Sunday, April 28, 2024, jian he <[email protected]> wrote:\n\nafter checking the definition of <programlisting>[1], <screen>[2],\nmaybe here we should use <screen>Possibly, though I’d be curious to see how consistent we are on this point elsewhere before making a point of it. and also add `(1 row)` information.Doesn’t seem like added value. \n\nor we can simply add a empty new line between\n` value \"42000000000\" is out of range for type integer |        |      | 22003`\nand\n`</programlisting>`My preference would be to limit this section to a single example.  The numeric one, as it provides values for more output columns.  I would change the output format to expanded from default, in order to show all columns and not overrun the length of a single line.David J.", "msg_date": "Sun, 28 Apr 2024 18:45:30 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_input_error_info doc 2 exampled crammed together" }, { "msg_contents": "On Sun, Apr 28, 2024 at 06:45:30PM -0700, David G. Johnston wrote:\n> My preference would be to limit this section to a single example. The\n> numeric one, as it provides values for more output columns. I would change\n> the output format to expanded from default, in order to show all columns\n> and not overrun the length of a single line.\n\nAgreed that having two examples does not bring much, so this could be\nbrought to a single one. The first one is enough to show the point of\nthe function, IMO. It is shorter in width and it shows all the output\ncolumns.\n--\nMichael", "msg_date": "Mon, 29 Apr 2024 13:32:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_input_error_info doc 2 exampled crammed together" }, { "msg_contents": "On Sunday, April 28, 2024, Michael Paquier <[email protected]> wrote:\n\n> On Sun, Apr 28, 2024 at 06:45:30PM -0700, David G. Johnston wrote:\n> > My preference would be to limit this section to a single example. The\n> > numeric one, as it provides values for more output columns. I would\n> change\n> > the output format to expanded from default, in order to show all columns\n> > and not overrun the length of a single line.\n>\n> Agreed that having two examples does not bring much, so this could be\n> brought to a single one. The first one is enough to show the point of\n> the function, IMO. It is shorter in width and it shows all the output\n> columns.\n>\n>\nAgreed. The column names are self-explanatory if you’ve seen errors\nbefore. The values are immaterial. Plus we don’t generally use\npsql-specific features in our examples.\n\nDavid J.\n\nOn Sunday, April 28, 2024, Michael Paquier <[email protected]> wrote:On Sun, Apr 28, 2024 at 06:45:30PM -0700, David G. Johnston wrote:\n> My preference would be to limit this section to a single example.  The\n> numeric one, as it provides values for more output columns.  I would change\n> the output format to expanded from default, in order to show all columns\n> and not overrun the length of a single line.\n\nAgreed that having two examples does not bring much, so this could be\nbrought to a single one.  The first one is enough to show the point of\nthe function, IMO.  It is shorter in width and it shows all the output\ncolumns.\nAgreed.  The column names are self-explanatory if you’ve seen errors before.  The values are immaterial.  Plus we don’t generally use psql-specific features in our examples.David J.", "msg_date": "Sun, 28 Apr 2024 22:07:49 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_input_error_info doc 2 exampled crammed together" }, { "msg_contents": "On Sun, Apr 28, 2024 at 10:07:49PM -0700, David G. Johnston wrote:\n> Agreed. The column names are self-explanatory if you’ve seen errors\n> before. The values are immaterial. Plus we don’t generally use\n> psql-specific features in our examples.\n\nOkay, I've just cleaned up that a bit with f6ab942f5de0.\n--\nMichael", "msg_date": "Tue, 30 Apr 2024 19:25:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_input_error_info doc 2 exampled crammed together" } ]
[ { "msg_contents": "This patch adds support for using LIKE with nondeterministic collations. \n So you can do things such as\n\n col LIKE 'foo%' COLLATE case_insensitive\n\nThis currently results in a \"not supported\" error. The reason for that \nis that when I first developed support for nondeterministic collations, \nI didn't know what the semantics of that should be, especially since \nwith nondeterministic collations, strings of different lengths could be \nequal, and then dropped the issue for a while.\n\nAfter further research, the SQL standard's definition of the LIKE \npredicate actually provides a clear definition of the semantics: The \npattern is partitioned into substrings at wildcard characters (so \n'foo%bar' is partitioned into 'foo', '%', 'bar') and then then whole \npredicate matches if a match can be found for each partition under the \napplicable collation (so for 'foo%bar' we look to partition the input \nstring into s1 || s2 || s3 such that s1 = 'foo', s2 is anything, and s3 \n= 'bar'.) The only difference to deterministic collations is that for \ndeterministic collations we can optimize this by matching by character, \nbut for nondeterministic collations we have to go by substring.", "msg_date": "Mon, 29 Apr 2024 08:45:26 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Support LIKE with nondeterministic collations" }, { "msg_contents": "\tPeter Eisentraut wrote:\n\n> This patch adds support for using LIKE with nondeterministic\n> collations. So you can do things such as\n> \n> col LIKE 'foo%' COLLATE case_insensitive\n\nNice!\n\n> The pattern is partitioned into substrings at wildcard characters\n> (so 'foo%bar' is partitioned into 'foo', '%', 'bar') and then then\n> whole predicate matches if a match can be found for each partition\n> under the applicable collation\n\nTrying with a collation that ignores punctuation:\n\n postgres=# CREATE COLLATION \"ign_punct\" (\n provider = 'icu',\n locale='und-u-ka-shifted',\n deterministic = false\n );\n\n postgres=# SELECT '.foo.' like 'foo' COLLATE ign_punct;\n ?column? \n ----------\n t\n (1 row)\n\n postgres=# SELECT '.foo.' like 'f_o' COLLATE ign_punct;\n ?column? \n ----------\n t\n (1 row)\n\n postgres=# SELECT '.foo.' like '_oo' COLLATE ign_punct;\n ?column? \n ----------\n f\n (1 row)\n\nThe first two results look fine, but the next one is inconsistent.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Tue, 30 Apr 2024 14:39:11 +0200", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "On 30.04.24 14:39, Daniel Verite wrote:\n> postgres=# SELECT '.foo.' like '_oo' COLLATE ign_punct;\n> ?column?\n> ----------\n> f\n> (1 row)\n> \n> The first two results look fine, but the next one is inconsistent.\n\nThis is correct, because '_' means \"any single character\". This is \nindependent of the collation.\n\nI think with nondeterministic collations, the single-character wildcard \nis often not going to be all that useful.\n\n\n\n", "msg_date": "Thu, 2 May 2024 15:38:32 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "On Thu, May 2, 2024 at 9:38 AM Peter Eisentraut <[email protected]> wrote:\n> On 30.04.24 14:39, Daniel Verite wrote:\n> > postgres=# SELECT '.foo.' 
like '_oo' COLLATE ign_punct;\n> > ?column?\n> > ----------\n> > f\n> > (1 row)\n> >\n> > The first two results look fine, but the next one is inconsistent.\n>\n> This is correct, because '_' means \"any single character\". This is\n> independent of the collation.\n\nSeems really counterintuitive. I had to think for a long time to be\nable to guess what was happening here. Finally I came up with this\nguess:\n\nIf the collation-aware matching tries to match up f with the initial\nperiod, the period is skipped and the f matches f. But when the\nwildcard is matched to the initial period, that uses up the wildcard\nand then we're left trying to match o with f, which doesn't work.\n\nIs that right?\n\nIt'd probably be good to use something like this as an example in the\ndocumentation. My intuition is that if foo matches a string, then _oo\nf_o and fo_ should also match that string. Apparently that's not the\ncase, but I doubt I'll be the last one who thinks it should be.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 May 2024 20:11:48 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "On 03.05.24 02:11, Robert Haas wrote:\n> On Thu, May 2, 2024 at 9:38 AM Peter Eisentraut <[email protected]> wrote:\n>> On 30.04.24 14:39, Daniel Verite wrote:\n>>> postgres=# SELECT '.foo.' like '_oo' COLLATE ign_punct;\n>>> ?column?\n>>> ----------\n>>> f\n>>> (1 row)\n>>>\n>>> The first two results look fine, but the next one is inconsistent.\n>>\n>> This is correct, because '_' means \"any single character\". This is\n>> independent of the collation.\n> \n> Seems really counterintuitive. I had to think for a long time to be\n> able to guess what was happening here. Finally I came up with this\n> guess:\n> \n> If the collation-aware matching tries to match up f with the initial\n> period, the period is skipped and the f matches f. But when the\n> wildcard is matched to the initial period, that uses up the wildcard\n> and then we're left trying to match o with f, which doesn't work.\n\nFormally, what\n\n X like '_oo'\n\nmeans is, can X be partitioned into substrings such that the first \nsubstring is a single character and the second substring is equal to \n'oo' under the applicable collation? This is false in this case, there \nis no such partitioning.\n\nWhat the implementation does is, it walks through the pattern. It sees \n'_', so it steps over one character in the input string, which is '.' \nhere. Then we have 'foo.' left to match in the input string. Then it \ntakes from the pattern the next substring up to but not including either \na wildcard character or the end of the string, which is 'oo', and then \nit checks if a prefix of the remaining input string can be found that is \n\"equal to\" 'oo'. So here it would try in turn\n\n '' = 'oo' collate ign_punct ?\n 'f' = 'oo' collate ign_punct ?\n 'fo' = 'oo' collate ign_punct ?\n 'foo' = 'oo' collate ign_punct ?\n 'foo.' = 'oo' collate ign_punct ?\n\nand they all fail, so the match fails.\n\n> It'd probably be good to use something like this as an example in the\n> documentation. My intuition is that if foo matches a string, then _oo\n> f_o and fo_ should also match that string. 
Apparently that's not the\n> case, but I doubt I'll be the last one who thinks it should be.\n\nThis intuition fails because with nondeterministic collations, strings \nof different lengths can be equal, and so the question arises, what does \nthe pattern '_' mean. It could mean either, (1) a single character, or \nperhaps something like, (2) a string that is equal to some other string \nof length one.\n\nThe second definition would satisfy the expectation here, because then \n'.f' matches '_' because '.f' is equal to some string of length one, \nsuch as 'f'. (And then 'oo.' matches 'oo' for the rest of the pattern.) \n However, off the top of my head, this definition has three flaws: (1) \nIt would make the single-character wildcard effectively an \nany-number-of-characters wildcard, but only in some circumstances, which \ncould be confusing, (2) it would be difficult to compute, because you'd \nhave to check equality against all possible single-character strings, \nand (3) it is not what the SQL standard says.\n\nIn any case, yes, some explanation and examples should be added.\n\n\n\n", "msg_date": "Fri, 3 May 2024 10:52:31 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "On Fri, May 3, 2024 at 4:52 AM Peter Eisentraut <[email protected]> wrote:\n> What the implementation does is, it walks through the pattern. It sees\n> '_', so it steps over one character in the input string, which is '.'\n> here. Then we have 'foo.' left to match in the input string. Then it\n> takes from the pattern the next substring up to but not including either\n> a wildcard character or the end of the string, which is 'oo', and then\n> it checks if a prefix of the remaining input string can be found that is\n> \"equal to\" 'oo'. So here it would try in turn\n>\n> '' = 'oo' collate ign_punct ?\n> 'f' = 'oo' collate ign_punct ?\n> 'fo' = 'oo' collate ign_punct ?\n> 'foo' = 'oo' collate ign_punct ?\n> 'foo.' = 'oo' collate ign_punct ?\n>\n> and they all fail, so the match fails.\n\nInteresting. Does that imply that these matches are slower than normal ones?\n\n> The second definition would satisfy the expectation here, because then\n> '.f' matches '_' because '.f' is equal to some string of length one,\n> such as 'f'. (And then 'oo.' matches 'oo' for the rest of the pattern.)\n> However, off the top of my head, this definition has three flaws: (1)\n> It would make the single-character wildcard effectively an\n> any-number-of-characters wildcard, but only in some circumstances, which\n> could be confusing, (2) it would be difficult to compute, because you'd\n> have to check equality against all possible single-character strings,\n> and (3) it is not what the SQL standard says.\n\nRight, those are good arguments.\n\n> In any case, yes, some explanation and examples should be added.\n\nCool.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 May 2024 09:20:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "On 03.05.24 15:20, Robert Haas wrote:\n> On Fri, May 3, 2024 at 4:52 AM Peter Eisentraut <[email protected]> wrote:\n>> What the implementation does is, it walks through the pattern. It sees\n>> '_', so it steps over one character in the input string, which is '.'\n>> here. Then we have 'foo.' left to match in the input string. 
Then it\n>> takes from the pattern the next substring up to but not including either\n>> a wildcard character or the end of the string, which is 'oo', and then\n>> it checks if a prefix of the remaining input string can be found that is\n>> \"equal to\" 'oo'. So here it would try in turn\n>>\n>> '' = 'oo' collate ign_punct ?\n>> 'f' = 'oo' collate ign_punct ?\n>> 'fo' = 'oo' collate ign_punct ?\n>> 'foo' = 'oo' collate ign_punct ?\n>> 'foo.' = 'oo' collate ign_punct ?\n>>\n>> and they all fail, so the match fails.\n> \n> Interesting. Does that imply that these matches are slower than normal ones?\n\nYes, certainly, and there is also no indexing support (other than for \nexact matches).\n\n\n\n", "msg_date": "Fri, 3 May 2024 16:12:46 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "\tPeter Eisentraut wrote:\n\n> Yes, certainly, and there is also no indexing support (other than for \n> exact matches).\n\nThe ICU docs have this note about prefix matching:\n\nhttps://unicode-org.github.io/icu/userguide/collation/architecture.html#generating-bounds-for-a-sort-key-prefix-matching\n\n * Generating bounds for a sort key (prefix matching)\n\n Having sort keys for strings allows for easy creation of bounds -\n sort keys that are guaranteed to be smaller or larger than any sort\n key from a give range. For example, if bounds are produced for a\n sortkey of string “smith”, strings between upper and lower bounds\n with one level would include “Smith”, “SMITH”, “sMiTh”. Two kinds\n of upper bounds can be generated - the first one will match only\n strings of equal length, while the second one will match all the\n strings with the same initial prefix.\n\n CLDR 1.9/ICU 4.6 and later map U+FFFF to a collation element with\n the maximum primary weight, so that for example the string\n “smith\\uFFFF” can be used as the upper bound rather than modifying\n the sort key for “smith”.\n\nIn other words it says that\n\n col LIKE 'smith%' collate \"nd\"\n\nis equivalent to:\n\n col >= 'smith' collate \"nd\" AND col < U&'smith\\ffff' collate \"nd\"\n\nwhich could be obtained from an index scan, assuming a btree\nindex on \"col\" collate \"nd\".\n\nU+FFFF is a valid code point but a \"non-character\" [1] so it's\nnot supposed to be present in normal strings.\n\n[1] https://www.unicode.org/glossary/#noncharacter\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 03 May 2024 16:58:55 +0200", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "\tPeter Eisentraut wrote:\n\n> However, off the top of my head, this definition has three flaws: (1) \n> It would make the single-character wildcard effectively an \n> any-number-of-characters wildcard, but only in some circumstances, which \n> could be confusing, (2) it would be difficult to compute, because you'd \n> have to check equality against all possible single-character strings, \n> and (3) it is not what the SQL standard says.\n\nFor #1 we're currently using the definition of a \"character\" as \nbeing any single point of code, but this definition fits poorly\nwith non-deterministic collation rules.\n\nThe simplest illustration I can think of is the canonical\nequivalence match between the NFD and NFC forms of an\naccented character.\n\npostgres=# CREATE COLLATION nd (\n 
provider = 'icu',\n locale = 'und',\n deterministic = false\n);\t\t \n\n-- match NFD form with NFC form of eacute\n\npostgres=# select U&'e\\0301' like 'é' collate nd;\n ?column? \n----------\n t\n\npostgres=# select U&'e\\0301' like '_' collate nd;\n ?column? \n----------\n f\n(1 row)\n\nI understand why the algorithm produces these opposite results.\nBut at the semantic level, when asked if the left-hand string matches\na specific character, it says yes, and when asked if it matches any\ncharacter, it says no.\nTo me it goes beyond counter-intuitive, it's not reasonable enough to\nbe called correct.\n\nWhat could we do about it?\nIntuitively I think that our interpretation of \"character\" here should\nbe whatever sequence of code points are between character\nboundaries [1], and that the equality of such characters would be the\nequality of their sequences of code points, with the string equality\ncheck of the collation, whatever the length of these sequences.\n\n[1]:\nhttps://unicode-org.github.io/icu/userguide/boundaryanalysis/#character-boundary\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 03 May 2024 17:47:48 +0200", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "On 03.05.24 16:58, Daniel Verite wrote:\n> * Generating bounds for a sort key (prefix matching)\n> \n> Having sort keys for strings allows for easy creation of bounds -\n> sort keys that are guaranteed to be smaller or larger than any sort\n> key from a give range. For example, if bounds are produced for a\n> sortkey of string “smith”, strings between upper and lower bounds\n> with one level would include “Smith”, “SMITH”, “sMiTh”. 
Two kinds\n> of upper bounds can be generated - the first one will match only\n> strings of equal length, while the second one will match all the\n> strings with the same initial prefix.\n> \n> CLDR 1.9/ICU 4.6 and later map U+FFFF to a collation element with\n> the maximum primary weight, so that for example the string\n> “smith\\uFFFF” can be used as the upper bound rather than modifying\n> the sort key for “smith”.\n> \n> In other words it says that\n> \n> col LIKE 'smith%' collate \"nd\"\n> \n> is equivalent to:\n> \n> col >= 'smith' collate \"nd\" AND col < U&'smith\\ffff' collate \"nd\"\n> \n> which could be obtained from an index scan, assuming a btree\n> index on \"col\" collate \"nd\".\n> \n> U+FFFF is a valid code point but a \"non-character\" [1] so it's\n> not supposed to be present in normal strings.\n\nThanks, this could be very useful!\n\n\n\n", "msg_date": "Fri, 3 May 2024 20:53:52 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "On 03.05.24 17:47, Daniel Verite wrote:\n> \tPeter Eisentraut wrote:\n> \n>> However, off the top of my head, this definition has three flaws: (1)\n>> It would make the single-character wildcard effectively an\n>> any-number-of-characters wildcard, but only in some circumstances, which\n>> could be confusing, (2) it would be difficult to compute, because you'd\n>> have to check equality against all possible single-character strings,\n>> and (3) it is not what the SQL standard says.\n> \n> For #1 we're currently using the definition of a \"character\" as\n> being any single point of code,\n\nThat is the definition that is used throughout SQL and PostgreSQL. We \ncan't change that without redefining everything. To pick just one \nexample, the various trim function also behave in seemingly inconsistent \nways when you apply then to strings in different normalization forms. \nThe better fix there is to enforce the normalization form somehow.\n\n> Intuitively I think that our interpretation of \"character\" here should\n> be whatever sequence of code points are between character\n> boundaries [1], and that the equality of such characters would be the\n> equality of their sequences of code points, with the string equality\n> check of the collation, whatever the length of these sequences.\n> \n> [1]:\n> https://unicode-org.github.io/icu/userguide/boundaryanalysis/#character-boundary\n\nEven that page says, what we are calling character here is really called \na grapheme cluster.\n\nIn a different world, pattern matching, character trimming, etc. would \nwork by grapheme, but it does not.\n\n\n\n", "msg_date": "Fri, 3 May 2024 20:58:30 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "Here is an updated patch for this.\n\nI have added some more documentation based on the discussions, including \nsome examples taken directly from the emails here.\n\nOne thing I have been struggling with a bit is the correct use of \nLIKE_FALSE versus LIKE_ABORT in the MatchText() code. I have made some \nsmall tweaks about this in this version that I think are more correct, \nbut it could use another look. Maybe also some more tests to verify \nthis one way or the other.\n\n\nOn 30.04.24 14:39, Daniel Verite wrote:\n> \tPeter Eisentraut wrote:\n> \n>> This patch adds support for using LIKE with nondeterministic\n>> collations. 
So you can do things such as\n>>\n>> col LIKE 'foo%' COLLATE case_insensitive\n> \n> Nice!\n> \n>> The pattern is partitioned into substrings at wildcard characters\n>> (so 'foo%bar' is partitioned into 'foo', '%', 'bar') and then then\n>> whole predicate matches if a match can be found for each partition\n>> under the applicable collation\n> \n> Trying with a collation that ignores punctuation:\n> \n> postgres=# CREATE COLLATION \"ign_punct\" (\n> provider = 'icu',\n> locale='und-u-ka-shifted',\n> deterministic = false\n> );\n> \n> postgres=# SELECT '.foo.' like 'foo' COLLATE ign_punct;\n> ?column?\n> ----------\n> t\n> (1 row)\n> \n> postgres=# SELECT '.foo.' like 'f_o' COLLATE ign_punct;\n> ?column?\n> ----------\n> t\n> (1 row)\n> \n> postgres=# SELECT '.foo.' like '_oo' COLLATE ign_punct;\n> ?column?\n> ----------\n> f\n> (1 row)\n> \n> The first two results look fine, but the next one is inconsistent.", "msg_date": "Fri, 28 Jun 2024 08:31:23 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "On Thu, Jun 27, 2024 at 11:31 PM Peter Eisentraut <[email protected]> wrote:\n> Here is an updated patch for this.\n\nI took a look at this. I added some tests and found a few that give\nthe wrong result (I believe). The new tests are included in the\nattached patch, along with the results I expect. Here are the\nfailures:\n\n -- inner %% matches b then zero:\n SELECT U&'cb\\0061\\0308' LIKE U&'c%%\\00E4' COLLATE ignore_accents;\n ?column?\n ----------\n- t\n+ f\n (1 row)\n\n -- trailing _ matches two codepoints that form one char:\n SELECT U&'cb\\0061\\0308' LIKE U&'cb_' COLLATE ignore_accents;\n ?column?\n ----------\n- t\n+ f\n (1 row)\n\n-- leading % matches zero:\n SELECT U&'\\0061\\0308bc' LIKE U&'%\\00E4bc' COLLATE ignore_accents;\n ?column?\n ----------\n- t\n+ f\n (1 row)\n\n -- leading % matches zero (with later %):\n SELECT U&'\\0061\\0308bc' LIKE U&'%\\00E4%c' COLLATE ignore_accents;\n ?column?\n ----------\n- t\n+ f\n (1 row)\n\nI think the 1st, 3rd, and 4th failures are all from % not backtracking\nto match zero chars.\n\nThe 2nd failure I'm not sure about. Maybe my expectation is wrong, but\nthen why does the same test pass with __ leading not trailing? Surely\nthey should be consistent.\n\n> I have added some more documentation based on the discussions, including\n> some examples taken directly from the emails here.\n\nThis looks good to me.\n\n> One thing I have been struggling with a bit is the correct use of\n> LIKE_FALSE versus LIKE_ABORT in the MatchText() code. I have made some\n> small tweaks about this in this version that I think are more correct,\n> but it could use another look. Maybe also some more tests to verify\n> this one way or the other.\n\nI haven't looked at this yet.\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]", "msg_date": "Fri, 26 Jul 2024 15:32:08 -0700", "msg_from": "Paul A Jungwirth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "On 27.07.24 00:32, Paul A Jungwirth wrote:\n> On Thu, Jun 27, 2024 at 11:31 PM Peter Eisentraut <[email protected]> wrote:\n>> Here is an updated patch for this.\n> \n> I took a look at this. I added some tests and found a few that give\n> the wrong result (I believe). The new tests are included in the\n> attached patch, along with the results I expect. 
Here are the\n> failures:\n\nThanks, these are great test cases.\n\n> \n> -- inner %% matches b then zero:\n> SELECT U&'cb\\0061\\0308' LIKE U&'c%%\\00E4' COLLATE ignore_accents;\n> ?column?\n> ----------\n> - t\n> + f\n> (1 row)\n> \n> -- trailing _ matches two codepoints that form one char:\n> SELECT U&'cb\\0061\\0308' LIKE U&'cb_' COLLATE ignore_accents;\n> ?column?\n> ----------\n> - t\n> + f\n> (1 row)\n> \n> -- leading % matches zero:\n> SELECT U&'\\0061\\0308bc' LIKE U&'%\\00E4bc' COLLATE ignore_accents;\n> ?column?\n> ----------\n> - t\n> + f\n> (1 row)\n> \n> -- leading % matches zero (with later %):\n> SELECT U&'\\0061\\0308bc' LIKE U&'%\\00E4%c' COLLATE ignore_accents;\n> ?column?\n> ----------\n> - t\n> + f\n> (1 row)\n> \n> I think the 1st, 3rd, and 4th failures are all from % not backtracking\n> to match zero chars.\n\nThese are all because of this in like_match.c:\n\n * Otherwise, scan for a text position at which we can match the\n * rest of the pattern. The first remaining pattern char is known\n * to be a regular or escaped literal character, so we can compare\n * the first pattern byte to each text byte to avoid recursing\n * more than we have to. [...]\n\nThis shortcut doesn't work with nondeterministic collations, so we have \nto recurse in any case here. I have fixed that in the new patch; these \ntest cases work now.\n\n> The 2nd failure I'm not sure about. Maybe my expectation is wrong, but\n> then why does the same test pass with __ leading not trailing? Surely\n> they should be consistent.\n\nThe question is why is\n\nSELECT U&'cb\\0061\\0308' LIKE U&'cb_' COLLATE ignore_accents; -- false\n\nbut\n\nSELECT U&'\\0061\\0308bc' LIKE U&'_bc' COLLATE ignore_accents; -- true\n\nThe second one matches because\n\n SELECT U&'\\0308bc' = 'bc' COLLATE ignore_accents;\n\nSo the accent character will be ignored if it's adjacent to another \nfixed substring in the pattern.", "msg_date": "Tue, 30 Jul 2024 21:46:47 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "On Fri, 2024-05-03 at 16:58 +0200, Daniel Verite wrote:\n>    * Generating bounds for a sort key (prefix matching)\n> \n>    Having sort keys for strings allows for easy creation of bounds -\n>    sort keys that are guaranteed to be smaller or larger than any\n> sort\n>    key from a give range. For example, if bounds are produced for a\n>    sortkey of string “smith”, strings between upper and lower bounds\n>    with one level would include “Smith”, “SMITH”, “sMiTh”. Two kinds\n>    of upper bounds can be generated - the first one will match only\n>    strings of equal length, while the second one will match all the\n>    strings with the same initial prefix.\n> \n>    CLDR 1.9/ICU 4.6 and later map U+FFFF to a collation element with\n>    the maximum primary weight, so that for example the string\n>    “smith\\uFFFF” can be used as the upper bound rather than modifying\n>    the sort key for “smith”.\n> \n> In other words it says that\n> \n>   col LIKE 'smith%' collate \"nd\"\n> \n> is equivalent to:\n> \n>   col >= 'smith' collate \"nd\" AND col < U&'smith\\ffff' collate \"nd\"\n\nThat logic seems to assume something about the collation. 
If you have a\ncollation that orders strings by their sha256 hash, that would entirely\nbreak the connection between prefixes and ranges, and it wouldn't work.\n\nIs there something about the way collations are defined that inherently\nmaintains a connection between a prefix and a range? Does it remain\ntrue even when strange rules are added to a collation?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 31 Jul 2024 15:26:34 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "\tJeff Davis wrote:\n\n> > col LIKE 'smith%' collate \"nd\"\n> > \n> > is equivalent to:\n> > \n> > col >= 'smith' collate \"nd\" AND col < U&'smith\\ffff' collate \"nd\"\n> \n> That logic seems to assume something about the collation. If you have a\n> collation that orders strings by their sha256 hash, that would entirely\n> break the connection between prefixes and ranges, and it wouldn't work.\n\nIndeed I would not trust this trick to work outside of ICU collations.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 01 Aug 2024 21:55:55 +0200", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support LIKE with nondeterministic collations" }, { "msg_contents": "Here is an updated patch. It is rebased over the various recent changes \nin the locale APIs. No other changes.\n\n\nOn 30.07.24 21:46, Peter Eisentraut wrote:\n> On 27.07.24 00:32, Paul A Jungwirth wrote:\n>> On Thu, Jun 27, 2024 at 11:31 PM Peter Eisentraut \n>> <[email protected]> wrote:\n>>> Here is an updated patch for this.\n>>\n>> I took a look at this. I added some tests and found a few that give\n>> the wrong result (I believe). The new tests are included in the\n>> attached patch, along with the results I expect. Here are the\n>> failures:\n> \n> Thanks, these are great test cases.\n> \n>>\n>>   -- inner %% matches b then zero:\n>>   SELECT U&'cb\\0061\\0308' LIKE U&'c%%\\00E4' COLLATE ignore_accents;\n>>    ?column?\n>>   ----------\n>> - t\n>> + f\n>>   (1 row)\n>>\n>>   -- trailing _ matches two codepoints that form one char:\n>>   SELECT U&'cb\\0061\\0308' LIKE U&'cb_' COLLATE ignore_accents;\n>>    ?column?\n>>   ----------\n>> - t\n>> + f\n>>   (1 row)\n>>\n>> -- leading % matches zero:\n>>   SELECT U&'\\0061\\0308bc' LIKE U&'%\\00E4bc' COLLATE ignore_accents;\n>>    ?column?\n>>   ----------\n>> - t\n>> + f\n>>   (1 row)\n>>\n>>   -- leading % matches zero (with later %):\n>>   SELECT U&'\\0061\\0308bc' LIKE U&'%\\00E4%c' COLLATE ignore_accents;\n>>    ?column?\n>>   ----------\n>> - t\n>> + f\n>>   (1 row)\n>>\n>> I think the 1st, 3rd, and 4th failures are all from % not backtracking\n>> to match zero chars.\n> \n> These are all because of this in like_match.c:\n> \n>        * Otherwise, scan for a text position at which we can match the\n>        * rest of the pattern.  The first remaining pattern char is known\n>        * to be a regular or escaped literal character, so we can compare\n>        * the first pattern byte to each text byte to avoid recursing\n>        * more than we have to.  [...]\n> \n> This shortcut doesn't work with nondeterministic collations, so we have \n> to recurse in any case here.  I have fixed that in the new patch; these \n> test cases work now.\n> \n>> The 2nd failure I'm not sure about. 
Maybe my expectation is wrong, but\n>> then why does the same test pass with __ leading not trailing? Surely\n>> they should be consistent.\n> \n> The question is why is\n> \n> SELECT U&'cb\\0061\\0308' LIKE U&'cb_' COLLATE ignore_accents;  -- false\n> \n> but\n> \n> SELECT U&'\\0061\\0308bc' LIKE U&'_bc' COLLATE ignore_accents;  -- true\n> \n> The second one matches because\n> \n>     SELECT U&'\\0308bc' = 'bc' COLLATE ignore_accents;\n> \n> So the accent character will be ignored if it's adjacent to another \n> fixed substring in the pattern.", "msg_date": "Mon, 16 Sep 2024 08:26:37 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support LIKE with nondeterministic collations" } ]
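Pulling the ign_punct examples from the LIKE thread above together into one runnable sketch (this assumes an ICU-enabled build and the proposed patch applied; without the patch, LIKE with a nondeterministic collation raises a "not supported" error). The expected results are the ones Daniel reported, and the comments restate Peter's substring-partition explanation of why each case matches or not:

    CREATE COLLATION ign_punct (
        provider = 'icu',
        locale = 'und-u-ka-shifted',
        deterministic = false
    );

    SELECT '.foo.' LIKE 'foo' COLLATE ign_punct;  -- true:  '.foo.' = 'foo' once punctuation is ignored
    SELECT '.foo.' LIKE 'f_o' COLLATE ign_punct;  -- true:  prefix '.f' = 'f', '_' consumes 'o', rest 'o.' = 'o'
    SELECT '.foo.' LIKE '_oo' COLLATE ign_punct;  -- false: '_' consumes '.', and no prefix of 'foo.' equals 'oo'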
[ { "msg_contents": "I found two mistakes related to collation and/or ICU support in the \ndocumentation that should probably be fixed and backpatched. See \nattached patches.", "msg_date": "Mon, 29 Apr 2024 09:05:46 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "small documentation fixes related to collations/ICU" }, { "msg_contents": "Looks good.\n\nOn Mon, Apr 29, 2024 at 12:05 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> I found two mistakes related to collation and/or ICU support in the\n> documentation that should probably be fixed and backpatched. See\n> attached patches.\n\nLooks good.On Mon, Apr 29, 2024 at 12:05 PM Peter Eisentraut <[email protected]> wrote:I found two mistakes related to collation and/or ICU support in the \ndocumentation that should probably be fixed and backpatched.  See \nattached patches.", "msg_date": "Mon, 29 Apr 2024 12:18:32 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: small documentation fixes related to collations/ICU" }, { "msg_contents": "On 29.04.24 09:18, Kashif Zeeshan wrote:\n> Looks good.\n> \n> On Mon, Apr 29, 2024 at 12:05 PM Peter Eisentraut <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> I found two mistakes related to collation and/or ICU support in the\n> documentation that should probably be fixed and backpatched.  See\n> attached patches.\n\nCommitted, thanks.\n\n\n\n", "msg_date": "Thu, 2 May 2024 08:35:56 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: small documentation fixes related to collations/ICU" } ]
[ { "msg_contents": "While the \"unique keys\" feature [1] is still under development, I'm thinking\nhow it could be used to enhance the removal of useless outer joins. Is\nsomething really bad about the 0002 patch attached?\n\nI recognize it may be weird that a join relation possibly produces non-join\npaths (e.g. SeqScan), but right now don't have better idea where the new code\nshould appear. I considered planning the subqueries before the existing call\nof remove_useless_joins(), to make the unique keys available earlier. However\nit seems that more items can be added to 'baserestrictinfo' of the subquery\nrelation after that. Thus by planning the subquery too early we could miss\nsome opportunities to push clauses down to the subquery.\n\nPlease note that this patch depends on [1], which enhances\nrel_is_distinct_for() and thus makes join_is_removable() a bit smareter. Also\nnote that 0001 is actually a minor fix to [1].\n\n[1] https://www.postgresql.org/message-id/7971.1713526758%40antos\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Mon, 29 Apr 2024 10:10:36 +0200", "msg_from": "Antonin Houska <[email protected]>", "msg_from_op": true, "msg_subject": "Use \"unique keys\" to enhance outer join removal" } ]
[ { "msg_contents": "Here is a patch set to implement virtual generated columns.\n\nSome history first: The original development of generated columns was \ndiscussed in [0]. It started with virtual columns, then added stored \ncolumns. Before the release of PG12, it was decided that only stored \ncolumns were ready, so I cut out virtual columns, and stored generated \ncolumns shipped with PG12, which is where we are today.\n\nVirtual generated columns are occasionally requested still, and it's a \nbit of unfinished business for me, too, so I started to resurrect it. \nWhat I did here first was to basically reverse interdiff the patches \nwhere I cut out virtual generated columns above (this was between \npatches v8 and v9 in [0]) and clean that up and make it work again.\n\nOne thing that I needed to decide was how to organize the tests for \nthis. The original patch series had both stored and virtual tests in \nthe same test file src/test/regress/sql/generated.sql. As that file has \ngrown, I think it would have been a mess to weave another whole set of \ntests into that. So instead I figured I'd make two separate test files\n\n src/test/regress/sql/generated_stored.sql (renamed from current)\n src/test/regress/sql/generated_virtual.sql\n\nand kind of try to keep them aligned, similar to how the various \ncollate* tests are handled. So I put that renaming in as preparatory \npatches. And there are also some other preparatory cleanup patches that \nI'm including.\n\nThe main feature patch (0005 here) generally works but has a number of \nopen corner cases that need to be thought about and/or fixed, many of \nwhich are marked in the code or the tests. I'll continue working on \nthat. But I wanted to see if I can get some feedback on the test \nstructure, so I don't have to keep changing it around later.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/[email protected]", "msg_date": "Mon, 29 Apr 2024 10:23:53 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Virtual generated columns" }, { "msg_contents": "On Mon, Apr 29, 2024 at 4:24 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> Here is a patch set to implement virtual generated columns.\n>\n\nI'm very excited about this!\n\n\n\n> The main feature patch (0005 here) generally works but has a number of\n> open corner cases that need to be thought about and/or fixed, many of\n> which are marked in the code or the tests. I'll continue working on\n> that. But I wanted to see if I can get some feedback on the test\n> structure, so I don't have to keep changing it around later.\n>\n\nI'd be very interested to see virtual generated columns working, as one of\nmy past customers had a need to reclassify data in a partitioned table, and\nthe ability to detach a partition, alter the virtual generated columns, and\nre-attach would have been great. In case you care, it was basically an\n\"expired\" flag, but the rules for what data \"expired\" varied by country of\ncustomer and level of service.\n\n+ * Stored generated columns cannot work: They are computed after\n+ * BEFORE triggers, but partition routing is done before all\n+ * triggers. Maybe virtual generated columns could be made to\n+ * work, but then they would need to be handled as an expression\n+ * below.\n\nI'd say you nailed it with the test structure. 
The stored/virtual\ncopy/split is the ideal way to approach this, which makes the diff very\neasy to understand.\n\n+1 for not handling domain types yet.\n\n -- generation expression must be immutable\n-CREATE TABLE gtest_err_4 (a int PRIMARY KEY, b double precision GENERATED\nALWAYS AS (random()) STORED);\n+CREATE TABLE gtest_err_4 (a int PRIMARY KEY, b double precision GENERATED\nALWAYS AS (random()) VIRTUAL);\n\n\nDoes a VIRTUAL generated column have to be immutable? I can see where the\nSTORED one has to be, but consider the following:\n\nCREATE TABLE foo (\ncreated_at timestamptz DEFAULT CURRENT_TIMESTAMP,\nrow_age interval GENERATED ALWAYS AS CURRENT_TIMESTAMP - created_at\n);\n\n\n -- can't have generated column that is a child of normal column\n CREATE TABLE gtest_normal (a int, b int);\n-CREATE TABLE gtest_normal_child (a int, b int GENERATED ALWAYS AS (a * 2)\nSTORED) INHERITS (gtest_normal); -- error\n+CREATE TABLE gtest_normal_child (a int, b int GENERATED ALWAYS AS (a * 2)\nVIRTUAL) INHERITS (gtest_normal); -- error\n\n\nThis is the barrier to the partitioning reorganization scheme I described\nabove. Is there any hard rule why a child table couldn't have a generated\ncolumn matching the parent's regular column? I can see where it might\nprevent indexing that column on the parent table, but is there some other\ndealbreaker or is this just a \"it doesn't work yet\" situation?\n\nOne last thing to keep in mind is that there are two special case\nexpressions in the spec:\n\nGENERATED ALWAYS AS ROW START\nGENERATED ALWAYS AS ROW END\n\n\nand we'll need to be able to fit those into the catalog. I'll start another\nthread for that unless you prefer I keep it here.\n\nOn Mon, Apr 29, 2024 at 4:24 AM Peter Eisentraut <[email protected]> wrote:Here is a patch set to implement virtual generated columns.I'm very excited about this!\n\nThe main feature patch (0005 here) generally works but has a number of \nopen corner cases that need to be thought about and/or fixed, many of \nwhich are marked in the code or the tests.  I'll continue working on \nthat.  But I wanted to see if I can get some feedback on the test \nstructure, so I don't have to keep changing it around later.I'd be very interested to see virtual generated columns working, as one of my past customers had a need to reclassify data in a partitioned table, and the ability to detach a partition, alter the virtual generated columns, and re-attach would have been great. In case you care, it was basically an \"expired\" flag, but the rules for what data \"expired\" varied by country of customer and level of service.+\t\t\t * Stored generated columns cannot work: They are computed after+\t\t\t * BEFORE triggers, but partition routing is done before all+\t\t\t * triggers.  Maybe virtual generated columns could be made to+\t\t\t * work, but then they would need to be handled as an expression+\t\t\t * below.I'd say you nailed it with the test structure. The stored/virtual copy/split is the ideal way to approach this, which makes the diff very easy to understand.+1 for not handling domain types yet. -- generation expression must be immutable-CREATE TABLE gtest_err_4 (a int PRIMARY KEY, b double precision GENERATED ALWAYS AS (random()) STORED);+CREATE TABLE gtest_err_4 (a int PRIMARY KEY, b double precision GENERATED ALWAYS AS (random()) VIRTUAL);Does a VIRTUAL generated column have to be immutable? 
I can see where the STORED one has to be, but consider the following:CREATE TABLE foo (created_at timestamptz DEFAULT CURRENT_TIMESTAMP,row_age interval GENERATED ALWAYS AS CURRENT_TIMESTAMP - created_at); -- can't have generated column that is a child of normal column CREATE TABLE gtest_normal (a int, b int);-CREATE TABLE gtest_normal_child (a int, b int GENERATED ALWAYS AS (a * 2) STORED) INHERITS (gtest_normal);  -- error+CREATE TABLE gtest_normal_child (a int, b int GENERATED ALWAYS AS (a * 2) VIRTUAL) INHERITS (gtest_normal);  -- errorThis is the barrier to the partitioning reorganization scheme I described above. Is there any hard rule why a child table couldn't have a generated column matching the parent's regular column? I can see where it might prevent indexing that column on the parent table, but is there some other dealbreaker or is this just a \"it doesn't work yet\" situation?One last thing to keep in mind is that there are two special case expressions in the spec:GENERATED ALWAYS AS ROW STARTGENERATED ALWAYS AS ROW ENDand we'll need to be able to fit those into the catalog. I'll start another thread for that unless you prefer I keep it here.", "msg_date": "Mon, 29 Apr 2024 14:54:10 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 29.04.24 10:23, Peter Eisentraut wrote:\n> Here is a patch set to implement virtual generated columns.\n\n> The main feature patch (0005 here) generally works but has a number of \n> open corner cases that need to be thought about and/or fixed, many of \n> which are marked in the code or the tests.  I'll continue working on \n> that.  But I wanted to see if I can get some feedback on the test \n> structure, so I don't have to keep changing it around later.\n\nHere is an updated patch set. It needed some rebasing, especially \naround the reverting of the catalogued not-null constraints. I have \nalso fixed up the various incomplete or \"fixme\" pieces of code mentioned \nabove. I have in most cases added \"not supported yet\" error messages \nfor now, with the idea that some of these things can be added in later, \nas incremental features.\n\nIn particular, quoting from the commit message, the following are \ncurrently not supported (but could possibly be added as incremental \nfeatures, some easier than others):\n\n- index on virtual column\n- expression index using a virtual column\n- hence also no unique constraints on virtual columns\n- not-null constraints on virtual columns\n- (check constraints are supported)\n- foreign key constraints on virtual columns\n- extended statistics on virtual columns\n- ALTER TABLE / SET EXPRESSION\n- ALTER TABLE / DROP EXPRESSION\n- virtual columns as trigger columns\n- virtual column cannot have domain type\n\nSo, I think this basically works now, and the things that don't work \nshould be appropriately prevented. 
So if someone wants to test this and \ntell me what in fact doesn't work correctly, that would be helpful.", "msg_date": "Wed, 22 May 2024 19:22:47 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 29.04.24 20:54, Corey Huinker wrote:\n>  -- generation expression must be immutable\n> -CREATE TABLE gtest_err_4 (a int PRIMARY KEY, b double precision\n> GENERATED ALWAYS AS (random()) STORED);\n> +CREATE TABLE gtest_err_4 (a int PRIMARY KEY, b double precision\n> GENERATED ALWAYS AS (random()) VIRTUAL);\n> \n> Does a VIRTUAL generated column have to be immutable? I can see where \n> the STORED one has to be, but consider the following:\n> \n> CREATE TABLE foo (\n> created_at timestamptz DEFAULT CURRENT_TIMESTAMP,\n> row_age interval GENERATED ALWAYS AS CURRENT_TIMESTAMP - created_at\n> );\n\nI have been hesitant about this, but I'm now leaning toward that we \ncould allow this.\n\n>  -- can't have generated column that is a child of normal column\n>  CREATE TABLE gtest_normal (a int, b int);\n> -CREATE TABLE gtest_normal_child (a int, b int GENERATED ALWAYS AS\n> (a * 2) STORED) INHERITS (gtest_normal);  -- error\n> +CREATE TABLE gtest_normal_child (a int, b int GENERATED ALWAYS AS\n> (a * 2) VIRTUAL) INHERITS (gtest_normal);  -- error\n> \n> This is the barrier to the partitioning reorganization scheme I \n> described above. Is there any hard rule why a child table couldn't have \n> a generated column matching the parent's regular column? I can see where \n> it might prevent indexing that column on the parent table, but is there \n> some other dealbreaker or is this just a \"it doesn't work yet\" situation?\n\nWe had a quite a difficult time getting the inheritance business of \nstored generated columns working correctly. I'm sticking to the \nwell-trodden path here. We can possibly expand this if someone wants to \nwork out the details.\n\n> One last thing to keep in mind is that there are two special case \n> expressions in the spec:\n> \n> GENERATED ALWAYS AS ROW START\n> GENERATED ALWAYS AS ROW END\n> \n> and we'll need to be able to fit those into the catalog. I'll start \n> another thread for that unless you prefer I keep it here.\n\nI think this is a separate feature.\n\n\n\n", "msg_date": "Wed, 22 May 2024 19:25:59 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Wed, 2024-05-22 at 19:22 +0200, Peter Eisentraut wrote:\n> On 29.04.24 10:23, Peter Eisentraut wrote:\n> > Here is a patch set to implement virtual generated columns.\n> \n> > The main feature patch (0005 here) generally works but has a number\n> > of \n> > open corner cases that need to be thought about and/or fixed, many\n> > of \n> > which are marked in the code or the tests.  I'll continue working\n> > on \n> > that.  But I wanted to see if I can get some feedback on the test \n> > structure, so I don't have to keep changing it around later.\n> \n> Here is an updated patch set.  It needed some rebasing, especially \n> around the reverting of the catalogued not-null constraints.  I have \n> also fixed up the various incomplete or \"fixme\" pieces of code\n> mentioned \n> above.  
I have in most cases added \"not supported yet\" error messages\n> for now, with the idea that some of these things can be added in\n> later, \n> as incremental features.\n> \n\nThis is not (yet) full review.\n\nPatches applied cleanly on 76618097a6c027ec603a3dd143f61098e3fb9794\nfrom 2024-06-14.\nI've run\n./configure && make world-bin && make check && make check-world\non 0001, then 0001+0002, then 0001+0002+0003, up to applying\nall 5 patches. All cases passed on Debian unstable on aarch64 (ARM64)\non gcc (Debian 13.2.0-25) 13.2.0.\n\nv1-0001-Rename-regress-test-generated-to-generated_stored.patch:\nno objections here, makes sense as preparation for future changes\n\nv1-0002-Put-generated_stored-test-objects-in-a-schema.patch:\nalso no objections.\nOTOH other tests (like publication.out, rowsecurity.out, stats_ext.out,\ntablespace.out) are creating schemas and later dropping them - so here\nit might also make sense to drop schema at the end of testing.\n\nv1-0003-Remove-useless-initializations.patch:\nAll other cases (I checked directory src/backend/utils/cache)\ncalling MemoryContextAllocZero only initialize fields when they\nare non-zero, so removing partial initialization with false brings\nconsistency to the code.\n\nv1-0004-Remove-useless-code.patch:\nPatch removes filling in of constraints from function\nBuildDescForRelation. This function is only called from file\nview.c and tablecmds.c (twice). In none of those cases\nresult->constr is used, so proposed change makes sense.\nWhile I do not know code well, so might be wrong here,\nI would apply this patch.\n\nI haven't looked at the most important (and biggest) file yet,\nv1-0005-Virtual-generated-columns.patch; hope to look at it\nthis week.\nAt the same I believe 0001-0004 can be applied - even backported\nif it'll make maintenance of future changes easier. But that should\nbe commiter's decision.\n\n\nBest regards\n\n-- \nTomasz Rybak, Debian Developer <[email protected]>\nGPG: A565 CE64 F866 A258 4DDC F9C7 ECB7 3E37 E887 AA8C", "msg_date": "Mon, 17 Jun 2024 21:31:18 +0200", "msg_from": "Tomasz Rybak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Thu, May 23, 2024 at 1:23 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 29.04.24 10:23, Peter Eisentraut wrote:\n> > Here is a patch set to implement virtual generated columns.\n>\n> > The main feature patch (0005 here) generally works but has a number of\n> > open corner cases that need to be thought about and/or fixed, many of\n> > which are marked in the code or the tests. I'll continue working on\n> > that. But I wanted to see if I can get some feedback on the test\n> > structure, so I don't have to keep changing it around later.\n\nthe test structure you made ( generated_stored.sql,\ngenerated_virtual.sq) looks ok to me.\nbut do we need to reset the search_path at the end of\ngenerated_stored.sql, generated_virtual.sql?\n\nmost of the test tables didn't use much storage,\nmaybe not necessary to clean up (drop the test table) at the end of sql files.\n\n>\n> So, I think this basically works now, and the things that don't work\n> should be appropriately prevented. So if someone wants to test this and\n> tell me what in fact doesn't work correctly, that would be helpful.\n\n\nin https://www.postgresql.org/docs/current/catalog-pg-attrdef.html\n>>>\nThe catalog pg_attrdef stores column default values. The main\ninformation about columns is stored in pg_attribute. 
Only columns for\nwhich a default value has been explicitly set will have an entry here.\n>>\ndidn't mention generated columns related expressions.\nDo we need to add something here? maybe a separate issue?\n\n\n+ /*\n+ * TODO: This could be done, but it would need a different implementation:\n+ * no rewriting, but still need to recheck any constraints.\n+ */\n+ if (attTup->attgenerated == ATTRIBUTE_GENERATED_VIRTUAL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"ALTER TABLE / SET EXPRESSION is not supported for virtual\ngenerated columns\"),\n+ errdetail(\"Column \\\"%s\\\" of relation \\\"%s\\\" is a virtual generated column.\",\n+ colName, RelationGetRelationName(rel))));\n\nminor typo, should be\n+ errmsg(\"ALTER TABLE SET EXPRESSION is not supported for virtual\ngenerated columns\"),\n\ninsert/update/delete/merge returning have problems:\nCREATE TABLE t2 (\na int ,\nb int GENERATED ALWAYS AS (a * 2),\nd int default 22);\ninsert into t2(a) select g from generate_series(1,10) g;\n\ninsert into t2 select 100 returning *, (select t2.b), t2.b = t2.a * 2;\nupdate t2 set a = 12 returning *, (select t2.b), t2.b = t2.a * 2;\nupdate t2 set a = 12 returning *, (select (select t2.b)), t2.b = t2.a * 2;\ndelete from t2 where t2.b = t2.a * 2 returning *, 1,((select t2.b));\n\ncurrently all these query, error message is \"unexpected virtual\ngenerated column reference\"\nwe expect above these query work?\n\n\nissue with merge:\nCREATE TABLE t0 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2) VIRTUAL);\ninsert into t0(a) select g from generate_series(1,10) g;\nMERGE INTO t0 t USING t0 AS s ON 2 * t.a = s.b WHEN MATCHED THEN\nDELETE returning *;\n\nthe above query returns zero rows, but for stored generated columns it\nwill return 10 rows.\n\nin transformMergeStmt(ParseState *pstate, MergeStmt *stmt)\nadd\n`qry->hasGeneratedVirtual = pstate->p_hasGeneratedVirtual;`\nbefore\n`assign_query_collations(pstate, qry);`\nsolve the problem.\n\n\n", "msg_date": "Fri, 28 Jun 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 28.06.24 02:00, jian he wrote:\n> inhttps://www.postgresql.org/docs/current/catalog-pg-attrdef.html\n> The catalog pg_attrdef stores column default values. The main\n> information about columns is stored in pg_attribute. Only columns for\n> which a default value has been explicitly set will have an entry here.\n> didn't mention generated columns related expressions.\n> Do we need to add something here? maybe a separate issue?\n\nYes and yes. I have committed a separate change to update the \ndocumentation here.\n\n\n", "msg_date": "Mon, 1 Jul 2024 09:06:46 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 17.06.24 21:31, Tomasz Rybak wrote:\n> v1-0001-Rename-regress-test-generated-to-generated_stored.patch:\n> no objections here, makes sense as preparation for future changes\n> \n> v1-0002-Put-generated_stored-test-objects-in-a-schema.patch:\n> also no objections.\n> OTOH other tests (like publication.out, rowsecurity.out, stats_ext.out,\n> tablespace.out) are creating schemas and later dropping them - so here\n> it might also make sense to drop schema at the end of testing.\n\nThe existing tests for generated columns don't drop what they create at \nthe end, which can be useful for pg_upgrade testing for example. 
So \nunless there are specific reasons to change it, I would leave that as is.\n\nOther tests might have other reasons. For example, publications or row \nsecurity might interfere with many other tests.\n\n> v1-0003-Remove-useless-initializations.patch:\n> All other cases (I checked directory src/backend/utils/cache)\n> calling MemoryContextAllocZero only initialize fields when they\n> are non-zero, so removing partial initialization with false brings\n> consistency to the code.\n> \n> v1-0004-Remove-useless-code.patch:\n> Patch removes filling in of constraints from function\n> BuildDescForRelation. This function is only called from file\n> view.c and tablecmds.c (twice). In none of those cases\n> result->constr is used, so proposed change makes sense.\n> While I do not know code well, so might be wrong here,\n> I would apply this patch.\n\nI have committed these two now.\n\n\n", "msg_date": "Mon, 1 Jul 2024 12:56:22 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 28.06.24 02:00, jian he wrote:\n> the test structure you made ( generated_stored.sql,\n> generated_virtual.sq) looks ok to me.\n> but do we need to reset the search_path at the end of\n> generated_stored.sql, generated_virtual.sql?\n\nNo, the session ends at the end of the test file, so we don't need to \nreset session state.\n\n> + /*\n> + * TODO: This could be done, but it would need a different implementation:\n> + * no rewriting, but still need to recheck any constraints.\n> + */\n> + if (attTup->attgenerated == ATTRIBUTE_GENERATED_VIRTUAL)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"ALTER TABLE / SET EXPRESSION is not supported for virtual\n> generated columns\"),\n> + errdetail(\"Column \\\"%s\\\" of relation \\\"%s\\\" is a virtual generated column.\",\n> + colName, RelationGetRelationName(rel))));\n> \n> minor typo, should be\n> + errmsg(\"ALTER TABLE SET EXPRESSION is not supported for virtual\n> generated columns\"),\n\nThis style \"ALTER TABLE / something else\" is also used for other error \nmessages related to ALTER TABLE subcommands, so I am using the same here.\n\n> insert/update/delete/merge returning have problems:\n> CREATE TABLE t2 (\n> a int ,\n> b int GENERATED ALWAYS AS (a * 2),\n> d int default 22);\n> insert into t2(a) select g from generate_series(1,10) g;\n> \n> insert into t2 select 100 returning *, (select t2.b), t2.b = t2.a * 2;\n> update t2 set a = 12 returning *, (select t2.b), t2.b = t2.a * 2;\n> update t2 set a = 12 returning *, (select (select t2.b)), t2.b = t2.a * 2;\n> delete from t2 where t2.b = t2.a * 2 returning *, 1,((select t2.b));\n> \n> currently all these query, error message is \"unexpected virtual\n> generated column reference\"\n> we expect above these query work?\n\nYes, this is a bug. I'm looking into it.\n\n> issue with merge:\n> CREATE TABLE t0 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2) VIRTUAL);\n> insert into t0(a) select g from generate_series(1,10) g;\n> MERGE INTO t0 t USING t0 AS s ON 2 * t.a = s.b WHEN MATCHED THEN\n> DELETE returning *;\n> \n> the above query returns zero rows, but for stored generated columns it\n> will return 10 rows.\n> \n> in transformMergeStmt(ParseState *pstate, MergeStmt *stmt)\n> add\n> `qry->hasGeneratedVirtual = pstate->p_hasGeneratedVirtual;`\n> before\n> `assign_query_collations(pstate, qry);`\n> solve the problem.\n\nGood catch. Will fix.\n\nThanks for this review. 
I will work on fixing the issues above and come \nback with a new patch set.\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 12:59:13 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "statistic related bug.\nborrow examples from\nhttps://www.postgresql.org/docs/current/sql-createstatistics.html\n\nCREATE TABLE t3 (a timestamp PRIMARY KEY, b timestamp GENERATED\nALWAYS AS (a) VIRTUAL);\nCREATE STATISTICS s3 (ndistinct) ON b FROM t3;\nINSERT INTO t3(a) SELECT i FROM generate_series('2020-01-01'::timestamp,\n '2020-12-31'::timestamp,\n '1 minute'::interval) s(i);\nANALYZE t3;\nCREATE STATISTICS s3 (ndistinct) ON date_trunc('month', a),\ndate_trunc('day', b) FROM t3;\nANALYZE t3;\nERROR: unexpected virtual generated column reference\n\n\n\n--this is allowed\nCREATE STATISTICS s5 ON (b + interval '1 day') FROM t3;\n--this is not allowed. seems inconsistent?\nCREATE STATISTICS s6 ON (b ) FROM t3;\n\n\nin CreateStatistics(CreateStatsStmt *stmt)\nwe have\n\nif (selem->name)\n{\n if (attForm->attgenerated == ATTRIBUTE_GENERATED_VIRTUAL)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"statistics creation on virtual\ngenerated columns is not supported\")));\n}\nelse if (IsA(selem->expr, Var)) /* column reference in parens */\n{\n if (get_attgenerated(relid, var->varattno) ==\nATTRIBUTE_GENERATED_VIRTUAL)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"statistics creation on virtual\ngenerated columns is not supported\")));\n}\nelse /* expression */\n{\n...\n}\n\nyou didn't make sure the last \"else\" branch is not related to virtual\ngenerated columns\n\n\n", "msg_date": "Mon, 22 Jul 2024 16:01:45 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "another bug?\ndrop table gtest12v;\nCREATE TABLE gtest12v (a int PRIMARY KEY, b bigint, c int GENERATED\nALWAYS AS (b * 2) VIRTUAL);\ninsert into gtest12v (a,b) values (11, 22147483647);\ntable gtest12v;\n\ninsert ok, but select error:\nERROR: integer out of range\n\nshould insert fail?\n\n\n\nCREATE TABLE gtest12v (a int PRIMARY KEY, b bigint, c int GENERATED\nALWAYS AS (b * 2) VIRTUAL);\nCREATE SEQUENCE sequence_testx OWNED BY gtest12v.c;\n\nseems to work. 
But I am not sure if there are any corner cases that\nmake it not work.\njust want to raise this issue.\n\n\n", "msg_date": "Mon, 22 Jul 2024 18:53:51 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "drop table t3;\nCREATE TABLE t3( b bigint, c int GENERATED ALWAYS AS (b * 2) VIRTUAL);\ninsert into t3 (b) values (22147483647);\nANALYZE t3;\n\nfor ANALYZE\nsince column c has no actual storage, so it's not analyzable?\nwe need to change the function examine_attribute accordingly?\n\n\nFor the above example, for each insert row, we actually need to call\nint84 to validate c value.\nwe probably need something similar to have ExecComputeStoredGenerated etc,\nbut we don't need to store it.\n\n\n", "msg_date": "Tue, 23 Jul 2024 12:03:57 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 22.07.24 12:53, jian he wrote:\n> another bug?\n> drop table gtest12v;\n> CREATE TABLE gtest12v (a int PRIMARY KEY, b bigint, c int GENERATED\n> ALWAYS AS (b * 2) VIRTUAL);\n> insert into gtest12v (a,b) values (11, 22147483647);\n> table gtest12v;\n> \n> insert ok, but select error:\n> ERROR: integer out of range\n> \n> should insert fail?\n\nI think this is the correct behavior.\n\nThere has been a previous discussion: \nhttps://www.postgresql.org/message-id/2e3d5147-16f8-af0f-00ab-4c72cafc896f%402ndquadrant.com\n\n> CREATE TABLE gtest12v (a int PRIMARY KEY, b bigint, c int GENERATED\n> ALWAYS AS (b * 2) VIRTUAL);\n> CREATE SEQUENCE sequence_testx OWNED BY gtest12v.c;\n> \n> seems to work. But I am not sure if there are any corner cases that\n> make it not work.\n> just want to raise this issue.\n\nI don't think this matters. You can make a sequence owned by any \ncolumn, even if that column doesn't have a default that invokes the \nsequence. So nonsensical setups are possible, but they are harmless.\n\n\n\n", "msg_date": "Mon, 29 Jul 2024 16:59:07 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "Thank you for your extensive testing. Here is a new patch set that has \nfixed all the issues you have reported (MERGE, sublinks, statistics, \nANALYZE).", "msg_date": "Thu, 8 Aug 2024 08:23:32 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Thu, 8 Aug 2024 at 07:23, Peter Eisentraut <[email protected]> wrote:\n>\n> Thank you for your extensive testing. Here is a new patch set that has\n> fixed all the issues you have reported (MERGE, sublinks, statistics,\n> ANALYZE).\n\nI had a quick look at this and found one issue, which is that it\ndoesn't properly deal with virtual generated columns in wholerow\nattributes:\n\nCREATE TABLE foo(a int, a2 int GENERATED ALWAYS AS (a*2) VIRTUAL);\nINSERT INTO foo VALUES (1);\nSELECT foo FROM foo;\n\n foo\n------\n (1,)\n(1 row)\n\nLooking at the rewriter changes, it occurred to me that it could\nperhaps be done more simply using ReplaceVarsFromTargetList() for each\nRTE with virtual generated columns. That function already has the\nrequired wholerow handling code, so there'd be less code duplication.\nI think it might be better to do this from within fireRIRrules(), just\nafter RLS policies are applied, so it wouldn't need to worry about\nCTEs and sublink subqueries. 
That would also make the\nhasGeneratedVirtual flags unnecessary, since we'd already only be\ndoing the extra work for tables with virtual generated columns. That\nwould eliminate possible bugs caused by failing to set those flags.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 8 Aug 2024 19:22:28 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Thu, Aug 8, 2024 at 2:23 PM Peter Eisentraut <[email protected]> wrote:\n>\n> Thank you for your extensive testing. Here is a new patch set that has\n> fixed all the issues you have reported (MERGE, sublinks, statistics,\n> ANALYZE).\n\n if (coldef->generated && restdef->generated &&\ncoldef->generated != restdef->generated)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_COLUMN_DEFINITION),\n errmsg(\"column \\\"%s\\\" inherits from\ngenerated column of different kind\",\n restdef->colname)));\nthe error message is not informal. maybe add errhint that\n\"column \\\"%s\\\" should be same as parent table's generated column kind:\n%s\", \"virtual\"|\"stored\"\n\n\n .../regress/expected/create_table_like.out | 23 +-\n .../regress/expected/generated_stored.out | 27 +-\n ...rated_stored.out => generated_virtual.out} | 835 +++++++++---------\n src/test/regress/parallel_schedule | 3 +\n src/test/regress/sql/create_table_like.sql | 2 +-\n src/test/regress/sql/generated_stored.sql | 10 +-\n ...rated_stored.sql => generated_virtual.sql} | 301 ++++---\n src/test/subscription/t/011_generated.pl | 38 +-\n 55 files changed, 1280 insertions(+), 711 deletions(-)\n copy src/test/regress/expected/{generated_stored.out\ngenerated_virtual.out} (69%)\n copy src/test/regress/sql/{generated_stored.sql => generated_virtual.sql} (72%)\n\nI don't understand the \"copy =>\" part, I guess related to copy content\nfrom stored to virtual.\nanyway. some minor issue:\n\n-- alter generation expression of parent and all its children altogether\nALTER TABLE gtest_parent ALTER COLUMN f3 SET EXPRESSION AS (f2 * 2);\n\\d gtest_parent\n\\d gtest_child\n\\d gtest_child2\n\\d gtest_child3\nSELECT tableoid::regclass, * FROM gtest_parent ORDER BY 1, 2, 3;\n\nThe first line ALTER TABLE will fail for\nsrc/test/regress/sql/generated_virtual.sql.\nso no need\n\"\"\"\n\\d gtest_parent\n\\d gtest_child\n\\d gtest_child2\n\\d gtest_child3\nSELECT tableoid::regclass, * FROM gtest_parent ORDER BY 1, 2, 3;\n\"\"\"\n\nSimilarly the following tests for gtest29 may aslo need change\n-- ALTER TABLE ... ALTER COLUMN ... DROP EXPRESSION\n\nsince we cannot do ALTER TABLE SET EXPRESSION for virtual generated columns.\n\n\n-- ALTER TABLE ... ALTER COLUMN\nCREATE TABLE gtest27 (\n a int,\n b int,\n x int GENERATED ALWAYS AS ((a + b) * 2) VIRTUAL\n);\nINSERT INTO gtest27 (a, b) VALUES (3, 7), (4, 11);\nALTER TABLE gtest27 ALTER COLUMN a TYPE text; -- error\nALTER TABLE gtest27 ALTER COLUMN x TYPE numeric;\n\nwill\nALTER TABLE gtest27 ALTER COLUMN a TYPE int4;\nbe a no-op?\n\n\ndo we need a document that virtual generated columns will use the\nexpression's collation.\nsee:\ndrop table if exists t5;\nCREATE TABLE t5 (\n a text collate \"C\",\n b text collate \"C\" GENERATED ALWAYS AS (a collate case_insensitive) ,\n d int DEFAULT 22\n);\nINSERT INTO t5(a,d) values ('d1',28), ('D2',27), ('D1',26);\nselect * from t5 order by b asc, d asc;\n\n\n\n+ /*\n+ * TODO: Prevent virtual generated columns from having a\n+ * domain type. 
We would have to enforce domain constraints\n+ * when columns underlying the generated column change. This\n+ * could possibly be implemented, but it's not.\n+ *\n+ * XXX If column->typeName is not set, then this column\n+ * definition is probably a partition definition and will\n+ * presumably get its pre-vetted type from elsewhere. If that\n+ * doesn't hold, maybe this check needs to be moved elsewhere.\n+ */\n+ if (column->generated == ATTRIBUTE_GENERATED_VIRTUAL && column->typeName)\n+ {\n+ Type ctype;\n+\n+ ctype = typenameType(cxt->pstate, column->typeName, NULL);\n+ if (((Form_pg_type) GETSTRUCT(ctype))->typtype == TYPTYPE_DOMAIN)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"virtual generated column \\\"%s\\\" cannot have a domain type\",\n+ column->colname),\n+ parser_errposition(cxt->pstate,\n+ column->location)));\n+ ReleaseSysCache(ctype);\n+ }\n\ncreate domain mydomain as int4;\ncreate type mydomainrange as range(subtype=mydomain);\nCREATE TABLE t3( b bigint, c mydomain GENERATED ALWAYS AS ('11') VIRTUAL);\nCREATE TABLE t3( b bigint, c mydomainrange GENERATED ALWAYS AS\n('[4,50)') VIRTUAL);\ndomain will error out, domain over range is ok, is this fine?\n\n\n\n+ When <literal>VIRTUAL</literal> is specified, the column will be\n+ computed when it is read, and it will not occupy any storage. When\n+ <literal>STORED</literal> is specified, the column will be computed on\n+ write and will be stored on disk. <literal>VIRTUAL</literal> is the\n+ default.\ndrop table if exists t5;\nCREATE TABLE t5 (\n a int,\n b text storage extended collate \"C\" GENERATED ALWAYS AS (a::text\ncollate case_insensitive) ,\n d int DEFAULT 22\n);\nselect reltoastrelid <> 0 as has_toast_table from pg_class where oid =\n't5'::regclass;\n\nif really no storage, should table t5 have an associated toast table or not?\nalso check ALTER TABLE variant:\nalter table t5 alter b set storage extended;\n\n\n\nDo we need to do something in ATExecSetStatistics for cases like:\nALTER TABLE t5 ALTER b SET STATISTICS 2000;\n(b is a generated virtual column).\nbecause of\nexamine_attribute\n if (attr->attgenerated == ATTRIBUTE_GENERATED_VIRTUAL)\n return NULL;\ni guess, this won't have a big impact.\n\n\n\nThere are some issues with changing virtual generated column type.\nlike:\ndrop table if exists another;\ncreate table another (f4 int, f2 text, f3 text, f1 int GENERATED\nALWAYS AS (f4));\ninsert into another values(1, 'one', 'uno'), (2, 'two', 'due'),(3,\n'three', 'tre');\nalter table another\n alter f1 type text using f2 || ' and ' || f3 || ' more';\ntable another;\n\nor\nalter table another\n alter f1 type text using f2 || ' and ' || f3 || ' more',\n drop column f1;\nERROR: column \"f1\" of relation \"another\" does not exist\n\nThese two command outputs seem not right.\nthe stored generated column which works as expected.\n\n\nin src/test/regress/sql/alter_table.sql\n-- We disallow changing table's row type if it's used for storage\ncreate table at_tab1 (a int, b text);\ncreate table at_tab2 (x int, y at_tab1);\nalter table at_tab1 alter column b type varchar; -- fails\ndrop table at_tab2;\n\nI think the above restriction should apply to virtual generated columns too.\ngiven in ATPrepAlterColumnType, not storage we still call\nfind_composite_type_dependencies\n\n if (!RELKIND_HAS_STORAGE(tab->relkind))\n {\n /*\n * For relations without storage, do this check now. 
Regular tables\n * will check it later when the table is being rewritten.\n */\n find_composite_type_dependencies(rel->rd_rel->reltype, rel, NULL);\n }\n\nso i think in ATPrepAlterColumnType, we should do:\n\n if (attTup->attgenerated == ATTRIBUTE_GENERATED_VIRTUAL)\n {\n find_composite_type_dependencies(rel->rd_rel->reltype, rel, NULL);\n }\n else if (tab->relkind == RELKIND_RELATION ||\n tab->relkind == RELKIND_PARTITIONED_TABLE)\n {\n }\n else if (transform)\n ereport(ERROR,\n (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n errmsg(\"\\\"%s\\\" is not a table\",\n RelationGetRelationName(rel))));\n\nyou may add following tests:\n------------------------------------------------------------------------\ncreate table at_tab1 (a int, b text GENERATED ALWAYS AS ('hello'), c text);\ncreate table at_tab2 (x int, y at_tab1);\nalter table at_tab1 alter column b type varchar; -- fails\ndrop table at_tab1, at_tab2;\n\n-- Check it for a partitioned table, too\ncreate table at_tab1 (a int, b text GENERATED ALWAYS AS ('hello'), c\ntext) partition by list(a);;\ncreate table at_tab2 (x int, y at_tab1);\nalter table at_tab1 alter column b type varchar; -- fails\ndrop table at_tab1, at_tab2;\n---------------------------------------------------------------------------------\n\n\n", "msg_date": "Wed, 14 Aug 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "Thanks for the great testing again. Here is an updated patch that \naddresses the issues you have pointed out.\n\n\nOn 14.08.24 02:00, jian he wrote:\n> On Thu, Aug 8, 2024 at 2:23 PM Peter Eisentraut <[email protected]> wrote:\n>>\n>> Thank you for your extensive testing. Here is a new patch set that has\n>> fixed all the issues you have reported (MERGE, sublinks, statistics,\n>> ANALYZE).\n> \n> if (coldef->generated && restdef->generated &&\n> coldef->generated != restdef->generated)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_COLUMN_DEFINITION),\n> errmsg(\"column \\\"%s\\\" inherits from\n> generated column of different kind\",\n> restdef->colname)));\n> the error message is not informal. maybe add errhint that\n> \"column \\\"%s\\\" should be same as parent table's generated column kind:\n> %s\", \"virtual\"|\"stored\"\n\nOk, I added an errdetail().\n\n> .../regress/expected/create_table_like.out | 23 +-\n> .../regress/expected/generated_stored.out | 27 +-\n> ...rated_stored.out => generated_virtual.out} | 835 +++++++++---------\n> src/test/regress/parallel_schedule | 3 +\n> src/test/regress/sql/create_table_like.sql | 2 +-\n> src/test/regress/sql/generated_stored.sql | 10 +-\n> ...rated_stored.sql => generated_virtual.sql} | 301 ++++---\n> src/test/subscription/t/011_generated.pl | 38 +-\n> 55 files changed, 1280 insertions(+), 711 deletions(-)\n> copy src/test/regress/expected/{generated_stored.out\n> generated_virtual.out} (69%)\n> copy src/test/regress/sql/{generated_stored.sql => generated_virtual.sql} (72%)\n> \n> I don't understand the \"copy =>\" part, I guess related to copy content\n> from stored to virtual.\n> anyway. some minor issue:\n\nThat's just what git format-patch produces. 
It shows that 72% of the \nnew file is a copy of the existing file.\n\n> -- alter generation expression of parent and all its children altogether\n> ALTER TABLE gtest_parent ALTER COLUMN f3 SET EXPRESSION AS (f2 * 2);\n> \\d gtest_parent\n> \\d gtest_child\n> \\d gtest_child2\n> \\d gtest_child3\n> SELECT tableoid::regclass, * FROM gtest_parent ORDER BY 1, 2, 3;\n> \n> The first line ALTER TABLE will fail for\n> src/test/regress/sql/generated_virtual.sql.\n> so no need\n> \"\"\"\n> \\d gtest_parent\n> \\d gtest_child\n> \\d gtest_child2\n> \\d gtest_child3\n> SELECT tableoid::regclass, * FROM gtest_parent ORDER BY 1, 2, 3;\n> \"\"\"\n> \n> Similarly the following tests for gtest29 may aslo need change\n> -- ALTER TABLE ... ALTER COLUMN ... DROP EXPRESSION\n> \n> since we cannot do ALTER TABLE SET EXPRESSION for virtual generated columns.\n\nI left all these tests in place from the equivalent STORED tests, in \ncase we want to add support for the VIRTUAL case as well. I expect that \nwe'll add support for some of these before too long.\n\n> -- ALTER TABLE ... ALTER COLUMN\n> CREATE TABLE gtest27 (\n> a int,\n> b int,\n> x int GENERATED ALWAYS AS ((a + b) * 2) VIRTUAL\n> );\n> INSERT INTO gtest27 (a, b) VALUES (3, 7), (4, 11);\n> ALTER TABLE gtest27 ALTER COLUMN a TYPE text; -- error\n> ALTER TABLE gtest27 ALTER COLUMN x TYPE numeric;\n> \n> will\n> ALTER TABLE gtest27 ALTER COLUMN a TYPE int4;\n> be a no-op?\n\nChanging the type of a column that is used by a generated column is \nalready prohibited. Are you proposing to change anything here?\n\n> do we need a document that virtual generated columns will use the\n> expression's collation.\n> see:\n> drop table if exists t5;\n> CREATE TABLE t5 (\n> a text collate \"C\",\n> b text collate \"C\" GENERATED ALWAYS AS (a collate case_insensitive) ,\n> d int DEFAULT 22\n> );\n> INSERT INTO t5(a,d) values ('d1',28), ('D2',27), ('D1',26);\n> select * from t5 order by b asc, d asc;\n\nI have fixed this. It will now apply the collation of the column.\n\n> create domain mydomain as int4;\n> create type mydomainrange as range(subtype=mydomain);\n> CREATE TABLE t3( b bigint, c mydomain GENERATED ALWAYS AS ('11') VIRTUAL);\n> CREATE TABLE t3( b bigint, c mydomainrange GENERATED ALWAYS AS\n> ('[4,50)') VIRTUAL);\n> domain will error out, domain over range is ok, is this fine?\n\nFixed. The check is now in CheckAttributeType() in heap.c, which has \nthe ability to recurse into composite data types.\n\n> + When <literal>VIRTUAL</literal> is specified, the column will be\n> + computed when it is read, and it will not occupy any storage. When\n> + <literal>STORED</literal> is specified, the column will be computed on\n> + write and will be stored on disk. <literal>VIRTUAL</literal> is the\n> + default.\n> drop table if exists t5;\n> CREATE TABLE t5 (\n> a int,\n> b text storage extended collate \"C\" GENERATED ALWAYS AS (a::text\n> collate case_insensitive) ,\n> d int DEFAULT 22\n> );\n> select reltoastrelid <> 0 as has_toast_table from pg_class where oid =\n> 't5'::regclass;\n> \n> if really no storage, should table t5 have an associated toast table or not?\n> also check ALTER TABLE variant:\n> alter table t5 alter b set storage extended;\n\nFixed. 
It does not trigger a toast table now.\n\n> Do we need to do something in ATExecSetStatistics for cases like:\n> ALTER TABLE t5 ALTER b SET STATISTICS 2000;\n> (b is a generated virtual column).\n> because of\n> examine_attribute\n> if (attr->attgenerated == ATTRIBUTE_GENERATED_VIRTUAL)\n> return NULL;\n> i guess, this won't have a big impact.\n\nThis is also an error now.\n\n> There are some issues with changing virtual generated column type.\n> like:\n> drop table if exists another;\n> create table another (f4 int, f2 text, f3 text, f1 int GENERATED\n> ALWAYS AS (f4));\n> insert into another values(1, 'one', 'uno'), (2, 'two', 'due'),(3,\n> 'three', 'tre');\n> alter table another\n> alter f1 type text using f2 || ' and ' || f3 || ' more';\n> table another;\n> \n> or\n> alter table another\n> alter f1 type text using f2 || ' and ' || f3 || ' more',\n> drop column f1;\n> ERROR: column \"f1\" of relation \"another\" does not exist\n> \n> These two command outputs seem not right.\n> the stored generated column which works as expected.\n\nI noticed this is already buggy for stored generated columns. It should \nprevent the use of the USING clause here. I'll propose a fix for that \nin a separate thread. There might be further adjustments needed for \nchanging the types of virtual columns, but I'll come back to that after \nthe existing bug is fixed.", "msg_date": "Tue, 20 Aug 2024 12:38:25 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 08.08.24 20:22, Dean Rasheed wrote:\n> Looking at the rewriter changes, it occurred to me that it could\n> perhaps be done more simply using ReplaceVarsFromTargetList() for each\n> RTE with virtual generated columns. That function already has the\n> required wholerow handling code, so there'd be less code duplication.\n\nHmm, I don't quite see how ReplaceVarsFromTargetList() could be used \nhere. It does have the wholerow logic that we need somehow, but other \nthan that it seems to target something different?\n\n> I think it might be better to do this from within fireRIRrules(), just\n> after RLS policies are applied, so it wouldn't need to worry about\n> CTEs and sublink subqueries. That would also make the\n> hasGeneratedVirtual flags unnecessary, since we'd already only be\n> doing the extra work for tables with virtual generated columns. That\n> would eliminate possible bugs caused by failing to set those flags.\n\nYes, ideally, we'd piggy-back this into fireRIRrules(). One thing I'm \nmissing is that if you're descending into subqueries, there is no link \nto the upper levels' range tables, which we need to lookup the \npg_attribute entries of column referencing Vars. That's why there is \nthis whole custom walk with its own context data. Maybe there is a way \nto do this already that I missed?\n\n\n\n", "msg_date": "Wed, 21 Aug 2024 09:00:44 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Wed, 21 Aug 2024 at 08:00, Peter Eisentraut <[email protected]> wrote:\n>\n> On 08.08.24 20:22, Dean Rasheed wrote:\n> > Looking at the rewriter changes, it occurred to me that it could\n> > perhaps be done more simply using ReplaceVarsFromTargetList() for each\n> > RTE with virtual generated columns. 
That function already has the\n> > required wholerow handling code, so there'd be less code duplication.\n>\n> Hmm, I don't quite see how ReplaceVarsFromTargetList() could be used\n> here. It does have the wholerow logic that we need somehow, but other\n> than that it seems to target something different?\n>\n\nWell what I was thinking was that (in fireRIRrules()'s final loop over\nrelations in the rtable), if the relation had any virtual generated\ncolumns, you'd build a targetlist containing a TLE for each one,\ncontaining the generated expression. Then you could just call\nReplaceVarsFromTargetList() to replace any Vars in the query with the\ncorresponding generated expressions. That takes care of descending\ninto subqueries, adjusting varlevelsup, and expanding wholerow Vars\nthat might refer to the generated expression.\n\nI also have half an eye on how this patch will interact with my patch\nto support RETURNING OLD/NEW values. If you use\nReplaceVarsFromTargetList(), it should just do the right thing for\nRETURNING OLD/NEW generated expressions.\n\n> > I think it might be better to do this from within fireRIRrules(), just\n> > after RLS policies are applied, so it wouldn't need to worry about\n> > CTEs and sublink subqueries. That would also make the\n> > hasGeneratedVirtual flags unnecessary, since we'd already only be\n> > doing the extra work for tables with virtual generated columns. That\n> > would eliminate possible bugs caused by failing to set those flags.\n>\n> Yes, ideally, we'd piggy-back this into fireRIRrules(). One thing I'm\n> missing is that if you're descending into subqueries, there is no link\n> to the upper levels' range tables, which we need to lookup the\n> pg_attribute entries of column referencing Vars. That's why there is\n> this whole custom walk with its own context data. Maybe there is a way\n> to do this already that I missed?\n>\n\nThat link to the upper levels' range tables wouldn't be needed because\nessentially using ReplaceVarsFromTargetList() flips the whole thing\nround: instead of traversing the tree looking for Var nodes that need\nto be replaced (possibly from upper query levels), you build a list of\nreplacement expressions to be applied and apply them from the top,\ndescending into subqueries as needed.\n\nAnother argument for doing it that way round is to not add too many\nextra cycles to the processing of existing queries that don't\nreference generated expressions. ISTM that this patch is potentially\nadding quite a lot of additional overhead -- it looks like, for every\nVar in the tree, it's calling get_attgenerated(), which involves a\nsyscache lookup to see if that column is a generated expression (which\nmost won't be). Ideally, we should be trying to do the minimum amount\nof extra work in the common case where there are no generated\nexpressions.\n\nLooking ahead, I can also imagine that one day we might want to\nsupport subqueries in generated expressions. 
That would require\nrecursive processing of generated expressions in the generated\nexpression's subquery, as well as applying RLS policies to the new\nrelations pulled in, and checks to guard against infinite recursion.\nfireRIRrules() already has the infrastructure to support all of that,\nso that feels like a much more natural place to do this.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 21 Aug 2024 11:51:59 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "drop table if exists gtest_err_1 cascade;\nCREATE TABLE gtest_err_1 (\na int PRIMARY KEY generated by default as identity,\nb int GENERATED ALWAYS AS (22),\nd int default 22);\ncreate view gtest_err_1_v as select * from gtest_err_1;\nSELECT events & 4 != 0 AS can_upd, events & 8 != 0 AS can_ins,events &\n16 != 0 AS can_del\nFROM pg_catalog.pg_relation_is_updatable('gtest_err_1_v'::regclass,\nfalse) t(events);\n\ninsert into gtest_err_1_v(a,b, d) values ( 11, default,33) returning *;\nshould the above query, b return 22?\neven b is \"b int default\" will return 22.\n\n\ndrop table if exists comment_test cascade;\nCREATE TABLE comment_test (\n id int,\n positive_col int GENERATED ALWAYS AS (22) CHECK (positive_col > 0),\n positive_col1 int GENERATED ALWAYS AS (22) stored CHECK (positive_col > 0) ,\n indexed_col int,\n CONSTRAINT comment_test_pk PRIMARY KEY (id));\nCREATE INDEX comment_test_index ON comment_test(indexed_col);\nALTER TABLE comment_test ALTER COLUMN positive_col1 SET DATA TYPE text;\nALTER TABLE comment_test ALTER COLUMN positive_col SET DATA TYPE text;\nthe last query should work just fine?\n\n\ndrop table if exists def_test cascade;\ncreate table def_test (\n c0 int4 GENERATED ALWAYS AS (22) stored,\n c1 int4 GENERATED ALWAYS AS (22),\n c2 text default 'initial_default'\n);\nalter table def_test alter column c1 set default 10;\nERROR: column \"c1\" of relation \"def_test\" is a generated column\nHINT: Use ALTER TABLE ... ALTER COLUMN ... 
SET EXPRESSION instead.\nalter table def_test alter column c1 drop default;\nERROR: column \"c1\" of relation \"def_test\" is a generated column\n\nIs the first error message hint wrong?\nalso the second error message (column x is a generated column) is not helpful.\nhere, we should just say that cannot set/drop default for virtual\ngenerated column?\n\n\n\ndrop table if exists bar1, bar2;\ncreate table bar1(a integer, b integer GENERATED ALWAYS AS (22))\npartition by range (a);\ncreate table bar2(a integer);\nalter table bar2 add column b integer GENERATED ALWAYS AS (22) stored;\nalter table bar1 attach partition bar2 default;\nthis works, which will make partitioned table and partition have\ndifferent kinds of generated column,\nbut this is not what we expected?\n\nanother variant:\nCREATE TABLE list_parted (\na int NOT NULL,\nb char(2) COLLATE \"C\",\nc int GENERATED ALWAYS AS (22)\n) PARTITION BY LIST (a);\nCREATE TABLE parent (LIKE list_parted);\nALTER TABLE parent drop column c, add column c int GENERATED ALWAYS AS\n(22) stored;\nALTER TABLE list_parted ATTACH PARTITION parent FOR VALUES IN (1);\n\n\n\n\ndrop table if exists tp, tpp1, tpp2;\nCREATE TABLE tp (a int NOT NULL,b text GENERATED ALWAYS AS (22),c\ntext) PARTITION BY LIST (a);\nCREATE TABLE tpp1(a int NOT NULL, b text GENERATED ALWAYS AS (c\n||'1000' ), c text);\nALTER TABLE tp ATTACH PARTITION tpp1 FOR VALUES IN (1);\ninsert into tp(a,b,c) values (1,default, 'hello') returning a,b,c;\ninsert into tpp1(a,b,c) values (1,default, 'hello') returning a,b,c;\n\nselect tableoid::regclass, * from tpp1;\nselect tableoid::regclass, * from tp;\nthe above two queries return different results, slightly unintuitive, i guess.\nDo we need to mention it somewhere?\n\n\n\nCREATE TABLE atnotnull1 ();\nALTER TABLE atnotnull1 ADD COLUMN c INT GENERATED ALWAYS AS (22), ADD\nPRIMARY KEY (c);\nERROR: not-null constraints are not supported on virtual generated columns\nDETAIL: Column \"c\" of relation \"atnotnull1\" is a virtual generated column.\nI guess this error message is fine.\n\nThe last issue in the previous thread [1], ATPrepAlterColumnType\nseems not addressed.\n\n[1] https://postgr.es/m/CACJufxEGPYtFe79hbsMeOBOivfNnPRsw7Gjvk67m1x2MQggyiQ@mail.gmail.com\n\n\n", "msg_date": "Fri, 23 Aug 2024 17:06:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Wed, Aug 21, 2024 at 6:52 PM Dean Rasheed <[email protected]> wrote:\n>\n> On Wed, 21 Aug 2024 at 08:00, Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 08.08.24 20:22, Dean Rasheed wrote:\n> > > Looking at the rewriter changes, it occurred to me that it could\n> > > perhaps be done more simply using ReplaceVarsFromTargetList() for each\n> > > RTE with virtual generated columns. That function already has the\n> > > required wholerow handling code, so there'd be less code duplication.\n> >\n> > Hmm, I don't quite see how ReplaceVarsFromTargetList() could be used\n> > here. It does have the wholerow logic that we need somehow, but other\n> > than that it seems to target something different?\n> >\n>\n> Well what I was thinking was that (in fireRIRrules()'s final loop over\n> relations in the rtable), if the relation had any virtual generated\n> columns, you'd build a targetlist containing a TLE for each one,\n> containing the generated expression. Then you could just call\n> ReplaceVarsFromTargetList() to replace any Vars in the query with the\n> corresponding generated expressions. 
That takes care of descending\n> into subqueries, adjusting varlevelsup, and expanding wholerow Vars\n> that might refer to the generated expression.\n>\n> I also have half an eye on how this patch will interact with my patch\n> to support RETURNING OLD/NEW values. If you use\n> ReplaceVarsFromTargetList(), it should just do the right thing for\n> RETURNING OLD/NEW generated expressions.\n>\n> > > I think it might be better to do this from within fireRIRrules(), just\n> > > after RLS policies are applied, so it wouldn't need to worry about\n> > > CTEs and sublink subqueries. That would also make the\n> > > hasGeneratedVirtual flags unnecessary, since we'd already only be\n> > > doing the extra work for tables with virtual generated columns. That\n> > > would eliminate possible bugs caused by failing to set those flags.\n> >\n> > Yes, ideally, we'd piggy-back this into fireRIRrules(). One thing I'm\n> > missing is that if you're descending into subqueries, there is no link\n> > to the upper levels' range tables, which we need to lookup the\n> > pg_attribute entries of column referencing Vars. That's why there is\n> > this whole custom walk with its own context data. Maybe there is a way\n> > to do this already that I missed?\n> >\n>\n> That link to the upper levels' range tables wouldn't be needed because\n> essentially using ReplaceVarsFromTargetList() flips the whole thing\n> round: instead of traversing the tree looking for Var nodes that need\n> to be replaced (possibly from upper query levels), you build a list of\n> replacement expressions to be applied and apply them from the top,\n> descending into subqueries as needed.\n>\n> Another argument for doing it that way round is to not add too many\n> extra cycles to the processing of existing queries that don't\n> reference generated expressions. ISTM that this patch is potentially\n> adding quite a lot of additional overhead -- it looks like, for every\n> Var in the tree, it's calling get_attgenerated(), which involves a\n> syscache lookup to see if that column is a generated expression (which\n> most won't be). Ideally, we should be trying to do the minimum amount\n> of extra work in the common case where there are no generated\n> expressions.\n>\n> Looking ahead, I can also imagine that one day we might want to\n> support subqueries in generated expressions. 
That would require\n> recursive processing of generated expressions in the generated\n> expression's subquery, as well as applying RLS policies to the new\n> relations pulled in, and checks to guard against infinite recursion.\n> fireRIRrules() already has the infrastructure to support all of that,\n> so that feels like a much more natural place to do this.\n>\n\nIs the attached something you are thinking of?\n(mainly see changes of src/backend/rewrite/rewriteHandler.c)\n\ni bloated rewriteHandler.c a lot, mainly because\nexpand_generated_columns_in_expr\nnot using ReplaceVarsFromTargetList, only expand_generated_columns_in_query do.\n\n\n\nif we are using ReplaceVarsFromTargetList, then\nexpand_generated_columns_in_expr also needs to use ReplaceVarsFromTargetList?\n\n\nI don't think we can call ReplaceVarsFromTargetList within\nexpand_generated_columns_in_expr.\n\n\nif so, then the pattern would be like:\n{\n Node *tgqual;\n tgqual = (Node *) expand_generated_columns_in_expr(tgqual,\nrelinfo->ri_RelationDesc, context);\n ReplaceVarsFromTargetList\n}\n\nThere are 6 of expand_generated_columns_in_expr called.", "msg_date": "Thu, 29 Aug 2024 17:01:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 23.08.24 11:06, jian he wrote:\n> drop table if exists gtest_err_1 cascade;\n> CREATE TABLE gtest_err_1 (\n> a int PRIMARY KEY generated by default as identity,\n> b int GENERATED ALWAYS AS (22),\n> d int default 22);\n> create view gtest_err_1_v as select * from gtest_err_1;\n> SELECT events & 4 != 0 AS can_upd, events & 8 != 0 AS can_ins,events &\n> 16 != 0 AS can_del\n> FROM pg_catalog.pg_relation_is_updatable('gtest_err_1_v'::regclass,\n> false) t(events);\n> \n> insert into gtest_err_1_v(a,b, d) values ( 11, default,33) returning *;\n> should the above query, b return 22?\n> even b is \"b int default\" will return 22.\n\nConfirmed. This is a bug in the rewriting that will hopefully be fixed \nwhen I get to, uh, rewriting that using Dean's suggestions. Not done \nhere. (The problem, in the current implementation, is that \nquery->hasGeneratedVirtual does not get preserved in the view, and then \nexpand_generated_columns_in_query() does the wrong thing.)\n\n> drop table if exists comment_test cascade;\n> CREATE TABLE comment_test (\n> id int,\n> positive_col int GENERATED ALWAYS AS (22) CHECK (positive_col > 0),\n> positive_col1 int GENERATED ALWAYS AS (22) stored CHECK (positive_col > 0) ,\n> indexed_col int,\n> CONSTRAINT comment_test_pk PRIMARY KEY (id));\n> CREATE INDEX comment_test_index ON comment_test(indexed_col);\n> ALTER TABLE comment_test ALTER COLUMN positive_col1 SET DATA TYPE text;\n> ALTER TABLE comment_test ALTER COLUMN positive_col SET DATA TYPE text;\n> the last query should work just fine?\n\nI played with this and I don't see anything wrong with the current \nbehavior. I noticed that in your test case\n\n > positive_col1 int GENERATED ALWAYS AS (22) stored CHECK \n(positive_col > 0) ,\n\nyou have the wrong column name in the check constraint. I'm not sure if \nthat was intentional.\n\n> drop table if exists def_test cascade;\n> create table def_test (\n> c0 int4 GENERATED ALWAYS AS (22) stored,\n> c1 int4 GENERATED ALWAYS AS (22),\n> c2 text default 'initial_default'\n> );\n> alter table def_test alter column c1 set default 10;\n> ERROR: column \"c1\" of relation \"def_test\" is a generated column\n> HINT: Use ALTER TABLE ... ALTER COLUMN ... 
SET EXPRESSION instead.\n> alter table def_test alter column c1 drop default;\n> ERROR: column \"c1\" of relation \"def_test\" is a generated column\n> \n> Is the first error message hint wrong?\n\nYes, somewhat. I looked into fixing that, but that got a bit messy. I \nhope to be able to implement SET EXPRESSION before too long, so I'm \nleaving it for now.\n\n> also the second error message (column x is a generated column) is not helpful.\n> here, we should just say that cannot set/drop default for virtual\n> generated column?\n\nMaybe, but that's not part of this patch.\n\n> drop table if exists bar1, bar2;\n> create table bar1(a integer, b integer GENERATED ALWAYS AS (22))\n> partition by range (a);\n> create table bar2(a integer);\n> alter table bar2 add column b integer GENERATED ALWAYS AS (22) stored;\n> alter table bar1 attach partition bar2 default;\n> this works, which will make partitioned table and partition have\n> different kinds of generated column,\n> but this is not what we expected?\n\nFixed. (Needed code in MergeAttributesIntoExisting() similar to \nMergeChildAttribute().)\n\n> drop table if exists tp, tpp1, tpp2;\n> CREATE TABLE tp (a int NOT NULL,b text GENERATED ALWAYS AS (22),c\n> text) PARTITION BY LIST (a);\n> CREATE TABLE tpp1(a int NOT NULL, b text GENERATED ALWAYS AS (c\n> ||'1000' ), c text);\n> ALTER TABLE tp ATTACH PARTITION tpp1 FOR VALUES IN (1);\n> insert into tp(a,b,c) values (1,default, 'hello') returning a,b,c;\n> insert into tpp1(a,b,c) values (1,default, 'hello') returning a,b,c;\n> \n> select tableoid::regclass, * from tpp1;\n> select tableoid::regclass, * from tp;\n> the above two queries return different results, slightly unintuitive, i guess.\n> Do we need to mention it somewhere?\n\nIt is documented in ddl.sgml:\n\n+ For virtual\n+ generated columns, the generation expression of the table named in the\n+ query applies when a table is read.\n\n> CREATE TABLE atnotnull1 ();\n> ALTER TABLE atnotnull1 ADD COLUMN c INT GENERATED ALWAYS AS (22), ADD\n> PRIMARY KEY (c);\n> ERROR: not-null constraints are not supported on virtual generated columns\n> DETAIL: Column \"c\" of relation \"atnotnull1\" is a virtual generated column.\n> I guess this error message is fine.\n\nYeah, maybe this will get improved when the catalogued not-null \nconstraints come back. Better wait for that.\n\n> The last issue in the previous thread [1], ATPrepAlterColumnType\n> seems not addressed.\n> \n> [1] https://postgr.es/m/CACJufxEGPYtFe79hbsMeOBOivfNnPRsw7Gjvk67m1x2MQggyiQ@mail.gmail.com\n\nThis is fixed now.\n\nI also committed the two patches that renamed the existing test files, \nso those are not included here anymore.\n\nThe new patch does some rebasing and contains various fixes to the \nissues you presented. 
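As one concrete example of the behavior change, the ATTACH PARTITION case from above that silently mixed generated kinds should now be rejected. A rough sketch (simplified from the earlier report; exact error wording per the patch):

create table bar1 (a integer, b integer GENERATED ALWAYS AS (22) VIRTUAL) partition by range (a);
create table bar2 (a integer, b integer GENERATED ALWAYS AS (22) STORED);
alter table bar1 attach partition bar2 default;  -- should now fail: parent column is virtual, partition column is stored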
As I mentioned, I'll look into improving the \nrewriting.", "msg_date": "Thu, 29 Aug 2024 14:15:50 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Thu, Aug 29, 2024 at 8:15 PM Peter Eisentraut <[email protected]> wrote:\n>\n>\n> > drop table if exists comment_test cascade;\n> > CREATE TABLE comment_test (\n> > id int,\n> > positive_col int GENERATED ALWAYS AS (22) CHECK (positive_col > 0),\n> > positive_col1 int GENERATED ALWAYS AS (22) stored CHECK (positive_col > 0) ,\n> > indexed_col int,\n> > CONSTRAINT comment_test_pk PRIMARY KEY (id));\n> > CREATE INDEX comment_test_index ON comment_test(indexed_col);\n> > ALTER TABLE comment_test ALTER COLUMN positive_col1 SET DATA TYPE text;\n> > ALTER TABLE comment_test ALTER COLUMN positive_col SET DATA TYPE text;\n> > the last query should work just fine?\n>\n> I played with this and I don't see anything wrong with the current\n> behavior. I noticed that in your test case\n>\n> > positive_col1 int GENERATED ALWAYS AS (22) stored CHECK\n> (positive_col > 0) ,\n>\n> you have the wrong column name in the check constraint. I'm not sure if\n> that was intentional.\n>\n\nThat's my mistake. sorry for the noise.\n\n\nOn Wed, Aug 21, 2024 at 6:52 PM Dean Rasheed <[email protected]> wrote:\n>\n>\n> Another argument for doing it that way round is to not add too many\n> extra cycles to the processing of existing queries that don't\n> reference generated expressions. ISTM that this patch is potentially\n> adding quite a lot of additional overhead -- it looks like, for every\n> Var in the tree, it's calling get_attgenerated(), which involves a\n> syscache lookup to see if that column is a generated expression (which\n> most won't be). Ideally, we should be trying to do the minimum amount\n> of extra work in the common case where there are no generated\n> expressions.\n>\n> Regards,\n> Dean\n\n\n\n>\n> The new patch does some rebasing and contains various fixes to the\n> issues you presented. As I mentioned, I'll look into improving the\n> rewriting.\n\n\nbased on your latest patch (v4-0001-Virtual-generated-columns.patch),\nI did some minor cosmetic code change\nand tried to address get_attgenerated overhead.\n\nbasically in expand_generated_columns_in_query\nand expand_generated_columns_in_expr preliminary collect (reloid,attnum)\nthat have generated_virtual flag into expand_generated_context.\nlater in expand_generated_columns_mutator use the collected information.\n\ndeal with wholerow within the expand_generated_columns_mutator seems\ntricky, will try later.", "msg_date": "Thu, 29 Aug 2024 21:35:49 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Wed, Aug 21, 2024 at 6:52 PM Dean Rasheed <[email protected]> wrote:\n>\n> On Wed, 21 Aug 2024 at 08:00, Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 08.08.24 20:22, Dean Rasheed wrote:\n> > > Looking at the rewriter changes, it occurred to me that it could\n> > > perhaps be done more simply using ReplaceVarsFromTargetList() for each\n> > > RTE with virtual generated columns. That function already has the\n> > > required wholerow handling code, so there'd be less code duplication.\n> >\n> > Hmm, I don't quite see how ReplaceVarsFromTargetList() could be used\n> > here. 
It does have the wholerow logic that we need somehow, but other\n> > than that it seems to target something different?\n> >\n>\n\n\n> Well what I was thinking was that (in fireRIRrules()'s final loop over\n> relations in the rtable), if the relation had any virtual generated\n> columns, you'd build a targetlist containing a TLE for each one,\n> containing the generated expression. Then you could just call\n> ReplaceVarsFromTargetList() to replace any Vars in the query with the\n> corresponding generated expressions. That takes care of descending\n> into subqueries, adjusting varlevelsup, and expanding wholerow Vars\n> that might refer to the generated expression.\n>\n> I also have half an eye on how this patch will interact with my patch\n> to support RETURNING OLD/NEW values. If you use\n> ReplaceVarsFromTargetList(), it should just do the right thing for\n> RETURNING OLD/NEW generated expressions.\n>\n> > > I think it might be better to do this from within fireRIRrules(), just\n> > > after RLS policies are applied, so it wouldn't need to worry about\n> > > CTEs and sublink subqueries. That would also make the\n> > > hasGeneratedVirtual flags unnecessary, since we'd already only be\n> > > doing the extra work for tables with virtual generated columns. That\n> > > would eliminate possible bugs caused by failing to set those flags.\n> >\n> > Yes, ideally, we'd piggy-back this into fireRIRrules(). One thing I'm\n> > missing is that if you're descending into subqueries, there is no link\n> > to the upper levels' range tables, which we need to lookup the\n> > pg_attribute entries of column referencing Vars. That's why there is\n> > this whole custom walk with its own context data. Maybe there is a way\n> > to do this already that I missed?\n> >\n>\n> That link to the upper levels' range tables wouldn't be needed because\n> essentially using ReplaceVarsFromTargetList() flips the whole thing\n> round: instead of traversing the tree looking for Var nodes that need\n> to be replaced (possibly from upper query levels), you build a list of\n> replacement expressions to be applied and apply them from the top,\n> descending into subqueries as needed.\n>\n\nCREATE TABLE gtest1 (a int, b int GENERATED ALWAYS AS (a * 2) VIRTUAL);\nINSERT INTO gtest1 VALUES (1,default), (2, DEFAULT);\n\nselect b from (SELECT b FROM gtest1) sub;\nhere we only need to translate the second \"b\" to (a *2), not the first one.\nbut these two \"b\" query tree representation almost the same (varno,\nvarattno, varlevelsup)\n\nI am not sure how ReplaceVarsFromTargetList can disambiguate this?\nCurrently v4-0001-Virtual-generated-columns.patch\nworks. because v4 properly tags the main query hasGeneratedVirtual to false,\nand tag subquery's hasGeneratedVirtual to true.\n\n\n", "msg_date": "Mon, 2 Sep 2024 21:25:15 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "Hi,\n\nOn Thu, 29 Aug 2024 at 15:16, Peter Eisentraut <[email protected]> wrote:\n>\n> I also committed the two patches that renamed the existing test files,\n> so those are not included here anymore.\n>\n> The new patch does some rebasing and contains various fixes to the\n> issues you presented. As I mentioned, I'll look into improving the\n> rewriting.\n\nxid_wraparound test started to fail after edee0c621d. It seems the\nerror message used in xid_wraparound/002_limits is updated. 
The patch\nthat applies the same update to the test file is attached.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Mon, 2 Sep 2024 16:29:11 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Thu, Aug 29, 2024 at 9:35 PM jian he <[email protected]> wrote:\n>\n> On Thu, Aug 29, 2024 at 8:15 PM Peter Eisentraut <[email protected]> wrote:\n> >\n\n\n> >\n> > The new patch does some rebasing and contains various fixes to the\n> > issues you presented. As I mentioned, I'll look into improving the\n> > rewriting.\n>\n>\n> based on your latest patch (v4-0001-Virtual-generated-columns.patch),\n> I did some minor cosmetic code change\n> and tried to address get_attgenerated overhead.\n>\n> basically in expand_generated_columns_in_query\n> and expand_generated_columns_in_expr preliminary collect (reloid,attnum)\n> that have generated_virtual flag into expand_generated_context.\n> later in expand_generated_columns_mutator use the collected information.\n>\n> deal with wholerow within the expand_generated_columns_mutator seems\n> tricky, will try later.\n\n\nplease just ignore v4-0001-Virtual-generated-columns_minorchange.no-cfbot,\nwhich I made some mistakes, but the tests still passed.\n\nplease checking this mail attached\nv5-0001-Virtual-generated-wholerow-var-and-virtual-che.no-cfbot\n\nIt solves:\n1. minor cosmetic changes.\n2. virtual generated column wholerow var reference, tests added.\n3. optimize get_attgenerated overhead, instead of for each var call\nget_attgenerated.\n walk through the query tree, collect the virtual column's relation\noid, and the virtual generated column's attnum\nand use this information later.\n\n\nI will check the view insert case later.", "msg_date": "Tue, 3 Sep 2024 12:59:37 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 21.08.24 12:51, Dean Rasheed wrote:\n> On Wed, 21 Aug 2024 at 08:00, Peter Eisentraut<[email protected]> wrote:\n>> On 08.08.24 20:22, Dean Rasheed wrote:\n>>> Looking at the rewriter changes, it occurred to me that it could\n>>> perhaps be done more simply using ReplaceVarsFromTargetList() for each\n>>> RTE with virtual generated columns. That function already has the\n>>> required wholerow handling code, so there'd be less code duplication.\n>> Hmm, I don't quite see how ReplaceVarsFromTargetList() could be used\n>> here. It does have the wholerow logic that we need somehow, but other\n>> than that it seems to target something different?\n>>\n> Well what I was thinking was that (in fireRIRrules()'s final loop over\n> relations in the rtable), if the relation had any virtual generated\n> columns, you'd build a targetlist containing a TLE for each one,\n> containing the generated expression. Then you could just call\n> ReplaceVarsFromTargetList() to replace any Vars in the query with the\n> corresponding generated expressions. That takes care of descending\n> into subqueries, adjusting varlevelsup, and expanding wholerow Vars\n> that might refer to the generated expression.\n> \n> I also have half an eye on how this patch will interact with my patch\n> to support RETURNING OLD/NEW values. If you use\n> ReplaceVarsFromTargetList(), it should just do the right thing for\n> RETURNING OLD/NEW generated expressions.\n\nHere is an implementation of this. It's much nicer! 
It also appears to \nfix all the additional test cases that have been presented. (I haven't \nintegrated them into the patch set yet.)\n\nI left the 0001 patch alone for now and put the new rewriting \nimplementation into 0002. (Unfortunately, the diff is kind of useless \nfor visual inspection.) Let me know if this matches what you had in \nmind, please. Also, is this the right place in fireRIRrules()?", "msg_date": "Wed, 4 Sep 2024 10:40:06 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Wed, 4 Sept 2024 at 09:40, Peter Eisentraut <[email protected]> wrote:\n>\n> On 21.08.24 12:51, Dean Rasheed wrote:\n> >>\n> > Well what I was thinking was that (in fireRIRrules()'s final loop over\n> > relations in the rtable), if the relation had any virtual generated\n> > columns, you'd build a targetlist containing a TLE for each one,\n> > containing the generated expression. Then you could just call\n> > ReplaceVarsFromTargetList() to replace any Vars in the query with the\n> > corresponding generated expressions.\n>\n> Here is an implementation of this. It's much nicer! It also appears to\n> fix all the additional test cases that have been presented. (I haven't\n> integrated them into the patch set yet.)\n>\n> I left the 0001 patch alone for now and put the new rewriting\n> implementation into 0002. (Unfortunately, the diff is kind of useless\n> for visual inspection.) Let me know if this matches what you had in\n> mind, please. Also, is this the right place in fireRIRrules()?\n\nYes, that's what I had in mind except that it has to be called from\nthe second loop in fireRIRrules(), after any RLS policies have been\nadded, because it's possible for a RLS policy expression to refer to\nvirtual generated columns. It's OK to do it in the same loop that\nexpands RLS policies, because such policies can only refer to columns\nof the same relation, so once the RLS policies have been expanded for\na given relation, nothing else should get added to the query that can\nrefer to columns of that relation, at that query level, so at that\npoint it should be safe to expand virtual generated columns.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 4 Sep 2024 11:33:53 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Wed, Sep 4, 2024 at 4:40 PM Peter Eisentraut <[email protected]> wrote:\n>\n>\n> Here is an implementation of this. It's much nicer! It also appears to\n> fix all the additional test cases that have been presented. (I haven't\n> integrated them into the patch set yet.)\n>\n> I left the 0001 patch alone for now and put the new rewriting\n> implementation into 0002. (Unfortunately, the diff is kind of useless\n> for visual inspection.) Let me know if this matches what you had in\n> mind, please. Also, is this the right place in fireRIRrules()?\n\nhi. 
some minor issues.\n\nin get_dependent_generated_columns we can\n\n /* skip if not generated column */\n if (!TupleDescAttr(tupdesc, defval->adnum - 1)->attgenerated)\n continue;\nchange to\n /* skip if not generated stored column */\n if (!(TupleDescAttr(tupdesc, defval->adnum -\n1)->attgenerated == ATTRIBUTE_GENERATED_STORED))\n continue;\n\n\nin ExecInitStoredGenerated\n\"if ((tupdesc->constr && tupdesc->constr->has_generated_stored)))\"\nis true.\nthen later we finish the loop\n(for (int i = 0; i < natts; i++) loop)\n\nwe can \"Assert(ri_NumGeneratedNeeded > 0)\"\nso we can ensure once has_generated_stored flag is true,\nthen we should have at least one stored generated attribute.\n\n\n\nsimilarly, in expand_generated_columns_internal\nwe can aslo add \"Assert(list_length(tlist) > 0);\"\nabove\nnode = ReplaceVarsFromTargetList(node, rt_index, 0, rte, tlist,\nREPLACEVARS_CHANGE_VARNO, rt_index, NULL);\n\n\n\n@@ -2290,7 +2291,9 @@ ExecBuildSlotValueDescription(Oid reloid,\nif (table_perm || column_perm)\n{\n- if (slot->tts_isnull[i])\n+ if (att->attgenerated == ATTRIBUTE_GENERATED_VIRTUAL)\n+ val = \"virtual\";\n+ else if (slot->tts_isnull[i])\n val = \"null\";\nelse\n{\nOid foutoid;\nbool typisvarlena;\ngetTypeOutputInfo(att->atttypid, &foutoid, &typisvarlena);\nval = OidOutputFunctionCall(foutoid, slot->tts_values[i]);\n}\n\nwe can add Assert here, if i understand it correctly, like\n if (att->attgenerated == ATTRIBUTE_GENERATED_VIRTUAL)\n{\nAssert(slot->tts_isnull[i]);\n val = \"virtual\";\n}\n\n\n", "msg_date": "Thu, 5 Sep 2024 16:27:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 04.09.24 12:33, Dean Rasheed wrote:\n>> I left the 0001 patch alone for now and put the new rewriting\n>> implementation into 0002. (Unfortunately, the diff is kind of useless\n>> for visual inspection.) Let me know if this matches what you had in\n>> mind, please. Also, is this the right place in fireRIRrules()?\n> Yes, that's what I had in mind except that it has to be called from\n> the second loop in fireRIRrules(), after any RLS policies have been\n> added, because it's possible for a RLS policy expression to refer to\n> virtual generated columns. It's OK to do it in the same loop that\n> expands RLS policies, because such policies can only refer to columns\n> of the same relation, so once the RLS policies have been expanded for\n> a given relation, nothing else should get added to the query that can\n> refer to columns of that relation, at that query level, so at that\n> point it should be safe to expand virtual generated columns.\n\nIf I move the code like that, then the postgres_fdw test fails. So \nthere is some additional interaction there that I need to study.\n\n\n", "msg_date": "Mon, 9 Sep 2024 08:02:54 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 05.09.24 10:27, jian he wrote:\n> On Wed, Sep 4, 2024 at 4:40 PM Peter Eisentraut <[email protected]> wrote:\n>>\n>>\n>> Here is an implementation of this. It's much nicer! It also appears to\n>> fix all the additional test cases that have been presented. (I haven't\n>> integrated them into the patch set yet.)\n>>\n>> I left the 0001 patch alone for now and put the new rewriting\n>> implementation into 0002. (Unfortunately, the diff is kind of useless\n>> for visual inspection.) Let me know if this matches what you had in\n>> mind, please. 
Also, is this the right place in fireRIRrules()?\n> \n> hi. some minor issues.\n> \n> in get_dependent_generated_columns we can\n> \n> /* skip if not generated column */\n> if (!TupleDescAttr(tupdesc, defval->adnum - 1)->attgenerated)\n> continue;\n> change to\n> /* skip if not generated stored column */\n> if (!(TupleDescAttr(tupdesc, defval->adnum -\n> 1)->attgenerated == ATTRIBUTE_GENERATED_STORED))\n> continue;\n\nI need to study more what to do with this function. I'm not completely \nsure whether this should apply only to stored generated columns.\n\n> in ExecInitStoredGenerated\n> \"if ((tupdesc->constr && tupdesc->constr->has_generated_stored)))\"\n> is true.\n> then later we finish the loop\n> (for (int i = 0; i < natts; i++) loop)\n> \n> we can \"Assert(ri_NumGeneratedNeeded > 0)\"\n> so we can ensure once has_generated_stored flag is true,\n> then we should have at least one stored generated attribute.\n\nThis is technically correct, but this code isn't touched by this patch, \nso I don't think it belongs here.\n\n> similarly, in expand_generated_columns_internal\n> we can aslo add \"Assert(list_length(tlist) > 0);\"\n> above\n> node = ReplaceVarsFromTargetList(node, rt_index, 0, rte, tlist,\n> REPLACEVARS_CHANGE_VARNO, rt_index, NULL);\n\nOk, I'll add that.\n\n> @@ -2290,7 +2291,9 @@ ExecBuildSlotValueDescription(Oid reloid,\n> if (table_perm || column_perm)\n> {\n> - if (slot->tts_isnull[i])\n> + if (att->attgenerated == ATTRIBUTE_GENERATED_VIRTUAL)\n> + val = \"virtual\";\n> + else if (slot->tts_isnull[i])\n> val = \"null\";\n> else\n> {\n> Oid foutoid;\n> bool typisvarlena;\n> getTypeOutputInfo(att->atttypid, &foutoid, &typisvarlena);\n> val = OidOutputFunctionCall(foutoid, slot->tts_values[i]);\n> }\n> \n> we can add Assert here, if i understand it correctly, like\n> if (att->attgenerated == ATTRIBUTE_GENERATED_VIRTUAL)\n> {\n> Assert(slot->tts_isnull[i]);\n> val = \"virtual\";\n> }\n\nAlso technically correct, but I don't see what benefit this would bring. 
\n The code guarded by that assert would not make use of the thing being \nasserted.\n\n\n\n", "msg_date": "Mon, 9 Sep 2024 08:06:40 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "in v7.\n\ndoc/src/sgml/ref/alter_table.sgml\n<phrase>and <replaceable\nclass=\"parameter\">column_constraint</replaceable> is:</phrase>\n\nsection need representation of:\nGENERATED ALWAYS AS ( <replaceable>generation_expr</replaceable> ) [VIRTUAL]\n\n\nin RelationBuildTupleDesc(Relation relation)\nwe need to add \"constr->has_generated_virtual\" for the following code?\n\n if (constr->has_not_null ||\n constr->has_generated_stored ||\n ndef > 0 ||\n attrmiss ||\n relation->rd_rel->relchecks > 0)\n\n\nalso seems there will be table_rewrite for adding virtual generated\ncolumns, but we can avoid that.\nThe attached patch is the change and the tests.\n\ni've put the tests in src/test/regress/sql/fast_default.sql,\nsince it already has event triggers and trigger functions, we don't\nwant to duplicate it.", "msg_date": "Mon, 16 Sep 2024 17:22:29 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On Mon, Sep 16, 2024 at 5:22 PM jian he <[email protected]> wrote:\n>\n> in v7.\n>\nseems I am confused with the version number.\n\nhere, I attached another minor change in tests.\n\nmake\nERROR: invalid ON DELETE action for foreign key constraint containing\ngenerated column\nbecomes\nERROR: foreign key constraints on virtual generated columns are not supported\n\nchange contrib/pageinspect/sql/page.sql\nexpand information on t_infomask, t_bits information.\n\nchange RelationBuildLocalRelation\nmake the transient TupleDesc->TupleConstr three bool flags more accurate.", "msg_date": "Wed, 18 Sep 2024 10:38:10 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Virtual generated columns" }, { "msg_contents": "On 09.09.24 08:02, Peter Eisentraut wrote:\n> On 04.09.24 12:33, Dean Rasheed wrote:\n>>> I left the 0001 patch alone for now and put the new rewriting\n>>> implementation into 0002.  (Unfortunately, the diff is kind of useless\n>>> for visual inspection.)  Let me know if this matches what you had in\n>>> mind, please.  Also, is this the right place in fireRIRrules()?\n>> Yes, that's what I had in mind except that it has to be called from\n>> the second loop in fireRIRrules(), after any RLS policies have been\n>> added, because it's possible for a RLS policy expression to refer to\n>> virtual generated columns. It's OK to do it in the same loop that\n>> expands RLS policies, because such policies can only refer to columns\n>> of the same relation, so once the RLS policies have been expanded for\n>> a given relation, nothing else should get added to the query that can\n>> refer to columns of that relation, at that query level, so at that\n>> point it should be safe to expand virtual generated columns.\n> \n> If I move the code like that, then the postgres_fdw test fails.  So \n> there is some additional interaction there that I need to study.\n\nThis was actually a trivial issue. The RLS loop skips relation kinds \nthat can't have RLS policies, which includes foreign tables. So I did \nthis slightly differently and added another loop below the RLS loop for \nthe virtual columns. Now this all works.\n\nI'm attaching a consolidated patch here, so we have something up to date \non the record. 
I haven't worked through all the other recent feedback \nfrom Jian He yet; I'll do that next.", "msg_date": "Sun, 29 Sep 2024 22:09:22 -0400", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Virtual generated columns" } ]
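A note for readers following the rewriter discussion in the thread above: the ReplaceVarsFromTargetList() approach that Dean suggested and Peter implemented has roughly the shape of the C fragment below. This is only a sketch of the idea, not the committed code; build_generation_expression() is treated as an assumed helper, the placement inside fireRIRrules() is an assumption, and only the ReplaceVarsFromTargetList() call shape is taken from the call quoted earlier in the thread.

/*
 * Sketch: expand virtual generated columns of one RTE by replacing Vars
 * that reference them with the corresponding generation expressions.
 */
static Query *
expand_virtual_generated_columns_sketch(Query *parsetree, int rt_index,
										RangeTblEntry *rte, Relation rel)
{
	TupleDesc	tupdesc = RelationGetDescr(rel);
	List	   *tlist = NIL;

	for (int i = 0; i < tupdesc->natts; i++)
	{
		Form_pg_attribute attr = TupleDescAttr(tupdesc, i);

		if (attr->attgenerated != ATTRIBUTE_GENERATED_VIRTUAL)
			continue;

		/* one TLE per virtual column, carrying its generation expression */
		tlist = lappend(tlist,
						makeTargetEntry((Expr *) build_generation_expression(rel, i + 1),
										i + 1, NULL, false));
	}

	if (tlist == NIL)
		return parsetree;		/* nothing to expand at this RTE */

	/* same call shape as quoted in the review comments above */
	return (Query *) ReplaceVarsFromTargetList((Node *) parsetree,
											   rt_index, 0, rte, tlist,
											   REPLACEVARS_CHANGE_VARNO,
											   rt_index, NULL);
}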
[ { "msg_contents": "The attached patch tries to fix a corner case where attr_needed of an inner\nrelation of an OJ contains the join relid only because another,\nalready-removed OJ, needed some of its attributes. The unnecessary presence of\nthe join relid in attr_needed can prevent the planner from further join\nremovals.\n\nDo cases like this seem worth the effort and is the logic I use correct?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Mon, 29 Apr 2024 12:34:03 +0200", "msg_from": "Antonin Houska <[email protected]>", "msg_from_op": true, "msg_subject": "Join removal and attr_needed cleanup" } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\nCID 1542943: (#1 of 1): Data race condition (MISSING_LOCK)\n3. missing_lock: Accessing slot->AH without holding lock signal_info_lock.\nElsewhere, ParallelSlot.AH is written to with signal_info_lock held 1 out\nof 1 times (1 of these accesses strongly imply that it is necessary).\n\nThe function DisconnectDatabase effectively writes the ParallelSlot.AH.\nSo the call in the function archive_close_connection:\n\nif (slot->AH)\nDisconnectDatabase(&(slot->AH->public));\n\nIt should also be protected on Windows, correct?\n\nPatch attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Mon, 29 Apr 2024 10:16:33 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Possible data race on Windows (src/bin/pg_dump/parallel.c)" } ]
[ { "msg_contents": "There's been a bunch of bugs, and discussion on the intended behavior of \nsslnegotiation and ALPN. This email summarizes the current status:\n\n## Status and loose ends for beta1\n\nAll reported bugs have now been fixed. We now enforce ALPN in all the \nright places. Please let me know if I missed something.\n\nThere are two open items remaining that I intend to address in the next \nfew days, before beta1:\n\n- I am going to rename sslnegotiation=requiredirect to \nsslnegotiation=directonly. I acknowledge that there is still some debate \non this: Jacob (and Robert?) would prefer to change the behavior \ninstead, so that sslnegotiation=requiredirect would also imply or \nrequire sslmode=require, while IMHO the settings should be orthogonal so \nthat sslmode controls whether SSL is used or not, and sslnegotiation \ncontrols how the SSL layer is negotiated when SSL is used. Given that \nthey are orthogonal, \"directonly\" is a better name. I will also take \nanother look at the documentation, if it needs clarification on that \npoint. If you have more comments on whether this is a good idea or not \nor how sslnegotiation should work, please reply on the other thread, \nlet's keep this one focused on the overall status. [1]\n\n- The registration of the ALPN name with IANA hasn't been finished yet \n[2]. I originally requested the name \"pgsql\", but after Peter's comment, \nI changed the request to \"postgresql\". The string we have in 'master' is \ncurrently \"TBD-pgsql\". I'm very confident that the registration will go \nthrough with \"postgresql\", so my plan is to commit that change before \nbeta1, even if the IANA process hasn't completed by then.\n\n## V18 material\n\n- Add an option to disable traditional SSL negotiation in the server. \nThere was discussion on doing this via HBA rules or as a global option, \nand the consensus seems to be for a global option. This would be just to \nreduce the attach surface, there is no known vulnerabilities or other \nissues with the traditional negotiation. And maybe to help with testing. [3]\n\nThese are not directly related to sslnegotiation, but came up in the \ndiscussion:\n\n- Clarify the situation with sslmode=require and gssencmode=require \ncombination, by replacing sslmode and gssencmode options with a single \n\"encryption=[ssl|gss|none], [...]\" option. [4]\n\n- Make sslmode=require the default. This is orthogonal to the SSL \nnegotiation, but I think the root cause for the disagreements on \nsslnegotiation is actually that we'd like SSL to be the default. [5]\n\nThe details of these need to be hashed out, in particular the \nbackwards-compatibility and migration aspects, but the consensus seems \nto be that it's the right direction.\n\n## V19 and beyond\n\nIn the future, once v17 is ubiquitous and the ecosystem (pgbouncer etc) \nhave added direct SSL support, we can change the default sslnegotiation \nfrom 'postgres' to 'direct'. I'm thinking 3-5 years from now. In the \nmore distant future, we could remove the traditional SSLRequest \nnegotiation altogether and always use direct SSL negotiation.\n\nThere's no rush on these.\n\n## Retrospective\n\nThere were a lot more cleanups required for this work than I expected, \ngiven that there were little changes to the patches between January and \nMarch commitfests. I was mostly worried about the refactoring of the \nretry logic in libpq (and about the pre-existing logic too to be honest, \nit was complicated before these changes already). 
That's why I added a \nlot more tests for that. However, I did not foresee all the ALPN related \nissues. In hindsight, it would have been good to commit most of the ALPN \nchanges first, and with more tests. Jacob wrote a python test suite; I \nshould've played more with that, that could have demonstrated the ALPN \nissues earlier.\n\n[1] \nhttps://www.postgresql.org/message-id/CA%2BTgmobV9JEk4AFy61Xw%2B2%2BcCTBqdTsDopkeB%2Bgb81kq3f-o6A%40mail.gmail.com\n\n[2] \nhttps://mailarchive.ietf.org/arch/msg/tls-reg-review/9LWPzQfOpbc8dTT7vc9ahNeNaiw/\n\n[3] \nhttps://www.postgresql.org/message-id/CA%2BTgmoaLpDVY2ywqQUfxvKEQZ%2Bnwkabcw_f%3Di4Zyivt9CLjcmA%40mail.gmail.com\n\n[4] \nhttps://www.postgresql.org/message-id/3a6f126c-e1aa-4dcc-9252-9868308f6cf0%40iki.fi\n\n[5] \nhttps://www.postgresql.org/message-id/CA%2BTgmoaNkRerEmB9JPgW0FhcJAe337AA%3D5kp6je9KekQhhRbmA%40mail.gmail.com\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 29 Apr 2024 18:24:04 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On Mon, Apr 29, 2024 at 8:24 AM Heikki Linnakangas <[email protected]> wrote:\n> I was mostly worried about the refactoring of the\n> retry logic in libpq (and about the pre-existing logic too to be honest,\n> it was complicated before these changes already).\n\nSome changes in the v17 negotiation fallback order caught my eye:\n\n1. For sslmode=prefer, a modern v3 error during negotiation now\nresults in a fallback to plaintext. For v16 this resulted in an\nimmediate failure. (v2 errors retain the v16 behavior.)\n2. For gssencmode=prefer, a legacy v2 error during negotiation now\nresults in an immediate failure. In v16 it allowed fallback to SSL or\nplaintext depending on sslmode.\n\nAre both these changes intentional/desirable? Change #1 seems to\npartially undo the decision made in a49fbaaf:\n\n> Don't assume that \"E\" response to NEGOTIATE_SSL_CODE means pre-7.0 server.\n>\n> These days, such a response is far more likely to signify a server-side\n> problem, such as fork failure. [...]\n>\n> Hence, it seems best to just eliminate the assumption that backing off\n> to non-SSL/2.0 protocol is the way to recover from an \"E\" response, and\n> instead treat the server error the same as we would in non-SSL cases.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 17 Jun 2024 07:11:06 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On 17/06/2024 17:11, Jacob Champion wrote:\n> On Mon, Apr 29, 2024 at 8:24 AM Heikki Linnakangas <[email protected]> wrote:\n>> I was mostly worried about the refactoring of the\n>> retry logic in libpq (and about the pre-existing logic too to be honest,\n>> it was complicated before these changes already).\n> \n> Some changes in the v17 negotiation fallback order caught my eye:\n> \n> 1. For sslmode=prefer, a modern v3 error during negotiation now\n> results in a fallback to plaintext. For v16 this resulted in an\n> immediate failure. (v2 errors retain the v16 behavior.)\n> 2. For gssencmode=prefer, a legacy v2 error during negotiation now\n> results in an immediate failure. In v16 it allowed fallback to SSL or\n> plaintext depending on sslmode.\n> \n> Are both these changes intentional/desirable? 
Change #1 seems to\n> partially undo the decision made in a49fbaaf:\n> \n>> Don't assume that \"E\" response to NEGOTIATE_SSL_CODE means pre-7.0 server.\n>>\n>> These days, such a response is far more likely to signify a server-side\n>> problem, such as fork failure. [...]\n>>\n>> Hence, it seems best to just eliminate the assumption that backing off\n>> to non-SSL/2.0 protocol is the way to recover from an \"E\" response, and\n>> instead treat the server error the same as we would in non-SSL cases.\n\nThey were not intentional. Let me think about the desirable part :-).\n\nBy \"negotiation\", which part of the protocol are we talking about \nexactly? In the middle of the TLS handshake? After sending the startup \npacket?\n\nI think the behavior with v2 and v3 errors should be the same. And I \nthink an immediate failure is appropriate on any v2/v3 error during \nnegotiation, assuming we don't use those errors for things like \"TLS not \nsupported\", which would warrant a fallback.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 18:23:54 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On Mon, Jun 17, 2024 at 8:24 AM Heikki Linnakangas <[email protected]> wrote:\n> By \"negotiation\", which part of the protocol are we talking about\n> exactly? In the middle of the TLS handshake? After sending the startup\n> packet?\n\nBy \"negotiation\" I mean the server's response to the startup packet.\nI.e. \"supported\"/\"not supported\"/\"error\".\n\n> I think the behavior with v2 and v3 errors should be the same. And I\n> think an immediate failure is appropriate on any v2/v3 error during\n> negotiation, assuming we don't use those errors for things like \"TLS not\n> supported\", which would warrant a fallback.\n\nFor GSS encryption, it was my vague understanding that older servers\nrespond with an error rather than the \"not supported\" indication. For\nTLS, though, the decision in a49fbaaf (immediate failure) seemed\nreasonable.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 17 Jun 2024 09:23:09 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "Hi,\n\nOn 2024-04-29 18:24:04 +0300, Heikki Linnakangas wrote:\n> All reported bugs have now been fixed. We now enforce ALPN in all the right\n> places. Please let me know if I missed something.\n\nVery minor and not really your responsibility:\n\nIf provided with the necessary key information, wireshark can decode TLS\nexchanges when using sslnegotiation=postgres but not with direct. 
Presumably\nit needs to be taught postgres' ALPN id or something.\n\nExample with direct:\n\n 476 6513.310308457 192.168.0.113 → 192.168.0.200 48978 5432 142 TLSv1.3 Finished\n 477 6513.310341492 192.168.0.113 → 192.168.0.200 48978 5432 151 TLSv1.3 Application Data\n 478 6513.320730295 192.168.0.200 → 192.168.0.113 5432 48978 147 TLSv1.3 New Session Ticket\n 479 6513.320745684 192.168.0.200 → 192.168.0.113 5432 48978 147 TLSv1.3 New Session Ticket\n 480 6513.321175713 192.168.0.113 → 192.168.0.200 48978 5432 68 TCP 48978 → 5432 [ACK] Seq=915 Ack=1665 Win=62848 Len=0 TSval=3779915421 TSecr=3469016093\n 481 6513.323161553 192.168.0.200 → 192.168.0.113 5432 48978 518 TLSv1.3 Application Data\n 482 6513.323626180 192.168.0.113 → 192.168.0.200 48978 5432 125 TLSv1.3 Application Data\n 483 6513.333977769 192.168.0.200 → 192.168.0.113 5432 48978 273 TLSv1.3 Application Data\n 484 6513.334581920 192.168.0.113 → 192.168.0.200 48978 5432 95 TLSv1.3 Application Data\n 485 6513.334666116 192.168.0.113 → 192.168.0.200 48978 5432 92 TLSv1.3 Alert (Level: Warning, Description: Close Notify)\n\nExample with postgres:\n\n 502 6544.752799560 192.168.0.113 → 192.168.0.200 46300 5432 142 TLSv1.3 Finished\n 503 6544.752842863 192.168.0.113 → 192.168.0.200 46300 5432 151 PGSQL >?\n 504 6544.763152222 192.168.0.200 → 192.168.0.113 5432 46300 147 TLSv1.3 New Session Ticket\n 505 6544.763163155 192.168.0.200 → 192.168.0.113 5432 46300 147 TLSv1.3 New Session Ticket\n 506 6544.763587595 192.168.0.113 → 192.168.0.200 46300 5432 68 TCP 46300 → 5432 [ACK] Seq=923 Ack=1666 Win=62848 Len=0 TSval=3779946864 TSecr=3469047536\n 507 6544.765024827 192.168.0.200 → 192.168.0.113 5432 46300 518 PGSQL <R/S/S/S/S/S/S/S/S/S/S/S/S/S/S/K/Z\n 508 6544.766288155 192.168.0.113 → 192.168.0.200 46300 5432 125 PGSQL >Q\n 509 6544.776974164 192.168.0.200 → 192.168.0.113 5432 46300 273 PGSQL <T/D/D/D/D/D/D/D/D/D/D/C/Z\n 510 6544.777597927 192.168.0.113 → 192.168.0.200 46300 5432 95 PGSQL >X\n 511 6544.777631520 192.168.0.113 → 192.168.0.200 46300 5432 92 TLSv1.3 Alert (Level: Warning, Description: Close Notify)\n\nNote that in the second one it knows what's inside the \"Application Data\"\nmessages and decodes them (S: startup, authentication ok, parameters, cancel key,\nready for query, C: simple query, S: description, 10 rows, command complete,\nready for query).\n\nIn the GUI you can obviously go into the \"postgres messages\" in more detail\nthan I know how to do on the console.\n\n\n\nA second aspect is that I'm not super happy about the hack of stashing data\ninto Port. I think medium term we'd be better off separating out the\nbuffering for unencrypted and encrypted data properly. It turns out that not\nhaving any buffering *below* openssl (i.e. the encrypted data) hurts both for\nthe send and receive side, due to a) increased number of syscalls b) too many\nsmall packets being sent, as we use TCP_NODELAY c) kernel memory copies being\nslower due to the small increments.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:33:35 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On Mon, Jun 17, 2024 at 9:23 AM Jacob Champion\n<[email protected]> wrote:\n> > I think the behavior with v2 and v3 errors should be the same. 
And I\n> > think an immediate failure is appropriate on any v2/v3 error during\n> > negotiation, assuming we don't use those errors for things like \"TLS not\n> > supported\", which would warrant a fallback.\n>\n> For GSS encryption, it was my vague understanding that older servers\n> respond with an error rather than the \"not supported\" indication. For\n> TLS, though, the decision in a49fbaaf (immediate failure) seemed\n> reasonable.\n\nWould an open item for this be appropriate?\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Thu, 20 Jun 2024 10:02:41 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On 20/06/2024 20:02, Jacob Champion wrote:\n> On Mon, Jun 17, 2024 at 9:23 AM Jacob Champion\n> <[email protected]> wrote:\n>>> I think the behavior with v2 and v3 errors should be the same. And I\n>>> think an immediate failure is appropriate on any v2/v3 error during\n>>> negotiation, assuming we don't use those errors for things like \"TLS not\n>>> supported\", which would warrant a fallback.\n>>\n>> For GSS encryption, it was my vague understanding that older servers\n>> respond with an error rather than the \"not supported\" indication. For\n>> TLS, though, the decision in a49fbaaf (immediate failure) seemed\n>> reasonable.\n> \n> Would an open item for this be appropriate?\n\nAdded.\n\n> By \"negotiation\" I mean the server's response to the startup packet.\n> I.e. \"supported\"/\"not supported\"/\"error\".\n\nOk, I'm still a little confused, probably a terminology issue. The \nserver doesn't respond with \"supported\" or \"not supported\" to the \nstartup packet, that happens earlier. I think you mean the SSLRequst / \nGSSRequest packet, which is sent *before* the startup packet?\n\n>> I think the behavior with v2 and v3 errors should be the same. And I\n>> think an immediate failure is appropriate on any v2/v3 error during\n>> negotiation, assuming we don't use those errors for things like \"TLS not\n>> supported\", which would warrant a fallback.\n> \n> For GSS encryption, it was my vague understanding that older servers\n> respond with an error rather than the \"not supported\" indication. For\n> TLS, though, the decision in a49fbaaf (immediate failure) seemed\n> reasonable.\n\nHmm, right, GSS encryption was introduced in v12, and older versions \nrespond with an error to a GSSRequest.\n\nWe probably could make the same assumption for GSS as we did for TLS in \na49fbaaf, i.e. that an error means that something's wrong with the \nserver, rather than that it's just very old and doesn't support GSS. But \nthe case for that is a lot weaker case than with TLS. There are still \npre-v12 servers out there in the wild.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 21 Jun 2024 02:13:05 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On Thu, Jun 20, 2024 at 4:13 PM Heikki Linnakangas <[email protected]> wrote:\n> > By \"negotiation\" I mean the server's response to the startup packet.\n> > I.e. \"supported\"/\"not supported\"/\"error\".\n>\n> Ok, I'm still a little confused, probably a terminology issue. The\n> server doesn't respond with \"supported\" or \"not supported\" to the\n> startup packet, that happens earlier. I think you mean the SSLRequst /\n> GSSRequest packet, which is sent *before* the startup packet?\n\nYes, sorry. 
(I'm used to referring to those as startup packets too, ha.)\n\n> Hmm, right, GSS encryption was introduced in v12, and older versions\n> respond with an error to a GSSRequest.\n>\n> We probably could make the same assumption for GSS as we did for TLS in\n> a49fbaaf, i.e. that an error means that something's wrong with the\n> server, rather than that it's just very old and doesn't support GSS. But\n> the case for that is a lot weaker case than with TLS. There are still\n> pre-v12 servers out there in the wild.\n\nRight. Since we default to gssencmode=prefer, if you have Kerberos\ncreds in your environment, I think this could potentially break\nexisting software that connects to v11 servers once you upgrade libpq.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Thu, 20 Jun 2024 16:32:45 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On 21/06/2024 02:32, Jacob Champion wrote:\n> On Thu, Jun 20, 2024 at 4:13 PM Heikki Linnakangas <[email protected]> wrote:\n>>> By \"negotiation\" I mean the server's response to the startup packet.\n>>> I.e. \"supported\"/\"not supported\"/\"error\".\n>>\n>> Ok, I'm still a little confused, probably a terminology issue. The\n>> server doesn't respond with \"supported\" or \"not supported\" to the\n>> startup packet, that happens earlier. I think you mean the SSLRequst /\n>> GSSRequest packet, which is sent *before* the startup packet?\n> \n> Yes, sorry. (I'm used to referring to those as startup packets too, ha.)\n\nYeah I'm not sure what the right term would be.\n\n>> Hmm, right, GSS encryption was introduced in v12, and older versions\n>> respond with an error to a GSSRequest.\n>>\n>> We probably could make the same assumption for GSS as we did for TLS in\n>> a49fbaaf, i.e. that an error means that something's wrong with the\n>> server, rather than that it's just very old and doesn't support GSS. But\n>> the case for that is a lot weaker case than with TLS. There are still\n>> pre-v12 servers out there in the wild.\n> \n> Right. Since we default to gssencmode=prefer, if you have Kerberos\n> creds in your environment, I think this could potentially break\n> existing software that connects to v11 servers once you upgrade libpq.\n\nWhen you connect to a V11 server and attempt to perform GSSAPI \nauthentication, it will respond with a V3 error that says: \"unsupported \nfrontend protocol 1234.5680: server supports 2.0 to 3.0\". That was a \nsurprise to me until I tested it just now. I thought that it would \nrespond with a protocol V2 error, but it is not so. The backend sets \nFrontendProtocol to 1234.5680 before sending the error, and because it \nis >= 3, the error is sent with protocol version 3.\n\nGiven that, I think it is a good thing to fail the connection completely \non receiving a V2 error.\n\nAttached is a patch to fix the other issue, with falling back from SSL \nto plaintext. And some tests and comment fixes I spotted while at it.\n\n0001: A small comment fix\n0002: This is the main patch that fixes the SSL fallback issue\n\n0003: This adds fault injection tests to exercise these early error \ncodepaths. It is not ready to be merged, as it contains a hack to skip \nlocking. 
See thread at \nhttps://www.postgresql.org/message-id/e1ffb822-054e-4006-ac06-50532767f75b%40iki.fi.\n\n0004: More tests, for what happens if the server sends an error after \nresponding \"yes\" to the SSLRequest or GSSRequest, but before performing \nthe SSL/GSS handshake.\n\nAttached is also a little stand-alone perl program that listens on a \nsocket, and when you connect to it, it immediately sends a V2 or V3 \nerror, depending on the argument. That's useful for testing. It could be \nused as an alternative strategy to the injection points I used in the \n0003-0004 patches, but for now I just used it for manual testing.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Mon, 24 Jun 2024 23:30:53 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "I reviewed the documentation for \"direct ALPN connections' ', and it looks\nlike it could be improved.\nHere's the link:\nhttps://www.postgresql.org/docs/17/protocol-flow.html#PROTOCOL-FLOW-SSL\n\nThe currently suggested values for \"sslnegotiations\" are \"direct\" and\n\"postgres\".\nThe project name is PostgreSQL and the ALPN name is postgresql. Is there a\nreason why property value uses \"postgres\"?\nCan the value be renamed to postgresql for consistency?\n\n\"SSL\". Technically, the proper term is TLS, and even the document refers to\n\"IANA TLS ALPN Protocol IDs\" (TLS, not SSL).\nI would not die on that hill, however, going for tlsnegotiation would look\nbetter than sslnegotiation.\n\nVladimir\n\nI reviewed the documentation for \"direct ALPN connections' ', and it looks like it could be improved.Here's the link: https://www.postgresql.org/docs/17/protocol-flow.html#PROTOCOL-FLOW-SSLThe currently suggested values for \"sslnegotiations\" are \"direct\" and \"postgres\".The project name is PostgreSQL and the ALPN name is postgresql. Is there a reason why property value uses \"postgres\"?Can the value be renamed to postgresql for consistency?\"SSL\". Technically, the proper term is TLS, and even the document refers to \"IANA TLS ALPN Protocol IDs\" (TLS, not SSL).I would not die on that hill, however, going for tlsnegotiation would look better than sslnegotiation.Vladimir", "msg_date": "Tue, 25 Jun 2024 16:36:58 +0300", "msg_from": "Vladimir Sitnikov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On Tue, 25 Jun 2024 at 09:37, Vladimir Sitnikov <[email protected]>\nwrote:\n\n> I reviewed the documentation for \"direct ALPN connections' ', and it looks\n> like it could be improved.\n> Here's the link:\n> https://www.postgresql.org/docs/17/protocol-flow.html#PROTOCOL-FLOW-SSL\n>\n> The currently suggested values for \"sslnegotiations\" are \"direct\" and\n> \"postgres\".\n> The project name is PostgreSQL and the ALPN name is postgresql. Is there a\n> reason why property value uses \"postgres\"?\n> Can the value be renamed to postgresql for consistency?\n>\n\n+1 I found it strange that we are not using postgresql\n\n>\n> \"SSL\". 
Technically, the proper term is TLS, and even the document refers\n> to \"IANA TLS ALPN Protocol IDs\" (TLS, not SSL).\n> I would not die on that hill, however, going for tlsnegotiation would look\n> better than sslnegotiation.\n>\n\n+1 again, unusual to use SSL when this really is TLS.\n\nDave\n\nOn Tue, 25 Jun 2024 at 09:37, Vladimir Sitnikov <[email protected]> wrote:I reviewed the documentation for \"direct ALPN connections' ', and it looks like it could be improved.Here's the link: https://www.postgresql.org/docs/17/protocol-flow.html#PROTOCOL-FLOW-SSLThe currently suggested values for \"sslnegotiations\" are \"direct\" and \"postgres\".The project name is PostgreSQL and the ALPN name is postgresql. Is there a reason why property value uses \"postgres\"?Can the value be renamed to postgresql for consistency?+1 I found it strange that we are not using postgresql \"SSL\". Technically, the proper term is TLS, and even the document refers to \"IANA TLS ALPN Protocol IDs\" (TLS, not SSL).I would not die on that hill, however, going for tlsnegotiation would look better than sslnegotiation.+1 again, unusual to use SSL when this really is TLS.Dave", "msg_date": "Tue, 25 Jun 2024 10:20:30 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On Thu, Jun 20, 2024 at 4:32 PM Jacob Champion\n<[email protected]> wrote:\n> Thanks,\n> --Jacob\n\nHey Heikki,\n\n[sending this to the list in case it's not just me]\n\nI cannot for the life of me get GMail to deliver your latest message,\neven though I see it on postgresql.org. It's not in spam; it's just\ngone. I wonder if it's possibly the Perl server script causing\nvirus-scanner issues?\n\n--Jacob\n\n\n", "msg_date": "Tue, 25 Jun 2024 08:51:03 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On Tue, Jun 25, 2024 at 7:20 AM Dave Cramer <[email protected]> wrote:\n>\n> On Tue, 25 Jun 2024 at 09:37, Vladimir Sitnikov <[email protected]> wrote:\n>>\n>> \"SSL\". Technically, the proper term is TLS, and even the document refers to \"IANA TLS ALPN Protocol IDs\" (TLS, not SSL).\n>> I would not die on that hill, however, going for tlsnegotiation would look better than sslnegotiation.\n>\n> +1 again, unusual to use SSL when this really is TLS.\n\nThis was sort of litigated last ye-(checks notes) oh no, three years ago:\n\n https://www.postgresql.org/message-id/flat/CE12DD5C-4BB3-4166-BC9A-39779568734C%40yesql.se\n\nI'm your side when it comes to the use of the TLS acronym, personally,\nbut I think introducing a brand new option that interfaces with\nsslmode and sslrootcert and etc. while not being named like them would\nbe outright unhelpful. And the idea of switching everything to use TLS\nin docs seemed to be met with a solid \"meh\" on the other thread.\n\n--Jacob\n\n\n", "msg_date": "Tue, 25 Jun 2024 09:05:19 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On Mon, Jun 24, 2024 at 11:30:53PM +0300, Heikki Linnakangas wrote:\n> Given that, I think it is a good thing to fail the connection completely on\n> receiving a V2 error.\n> \n> Attached is a patch to fix the other issue, with falling back from SSL to\n> plaintext. 
And some tests and comment fixes I spotted while at it.\n> \n> 0001: A small comment fix\n\nAlready committed as of cc68ca6d420e.\n\n> 0002: This is the main patch that fixes the SSL fallback issue\n\n+ conn->failed_enc_methods |= conn->allowed_enc_methods &\n(~conn->current_enc_method); \n\nSounds reasonable to me.\n\nIt's a bit annoying to have to guess that current_enc_method is\ntracking only one method at a time (aka these three fields are not\ndocumented in libpq-int.h), while allowed_enc_methods and\nfailed_enc_methods is a bitwise combination of the methods that are\nstill allowed or that have already failed.\n\n> 0003: This adds fault injection tests to exercise these early error\n> codepaths. It is not ready to be merged, as it contains a hack to skip\n> locking. See thread at\n> https://www.postgresql.org/message-id/e1ffb822-054e-4006-ac06-50532767f75b%40iki.fi.\n\nLocking when running an injection point has been replaced by some\natomics in 86db52a5062a.\n\n+ if (IsInjectionPointAttached(\"backend-initialize-v2-error\"))\n+ {\n+ FrontendProtocol = PG_PROTOCOL(2,0);\n+ elog(FATAL, \"protocol version 2 error triggered\");\n+ }\n\nThis is an attempt to do stack manipulation with an injection point\nset. FrontendProtocol is a global variable, so you could have a new\ncallback setting up this global variable directly, then FATAL (I\nreally don't mind is modules/injection_points finishes with a library\nof callbacks).\n\nNot sure to like much this new IsInjectionPointAttached() that does a\nsearch in the existing injection point pool, though. This leads to\nmore code footprint in the core backend, and I'm trying to minimize\nthat. Not everybody agrees with this view, I'd guess, which is also\nfine.\n\n> 0004: More tests, for what happens if the server sends an error after\n> responding \"yes\" to the SSLRequest or GSSRequest, but before performing the\n> SSL/GSS handshake.\n\nNo objections to these two additions.\n\n> Attached is also a little stand-alone perl program that listens on a socket,\n> and when you connect to it, it immediately sends a V2 or V3 error, depending\n> on the argument. That's useful for testing. It could be used as an\n> alternative strategy to the injection points I used in the 0003-0004\n> patches, but for now I just used it for manual testing.\n\nNice toy.\n--\nMichael", "msg_date": "Tue, 16 Jul 2024 15:54:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On 16/07/2024 09:54, Michael Paquier wrote:\n> On Mon, Jun 24, 2024 at 11:30:53PM +0300, Heikki Linnakangas wrote:\n>> 0002: This is the main patch that fixes the SSL fallback issue\n> \n> + conn->failed_enc_methods |= conn->allowed_enc_methods &\n> (~conn->current_enc_method);\n> \n> Sounds reasonable to me.\n> \n> It's a bit annoying to have to guess that current_enc_method is\n> tracking only one method at a time (aka these three fields are not\n> documented in libpq-int.h), while allowed_enc_methods and\n> failed_enc_methods is a bitwise combination of the methods that are\n> still allowed or that have already failed.\n\nYeah. In hindsight I'm still not very happy with the code structure with \n\"allowed_enc_methods\" and \"current_enc_methods\" and all that. The \nfallback logic is still complicated. It's better than in v16, IMHO, but \nstill not great. 
This patch seems like the best fix for v17, but I \nwouldn't mind another round of refactoring for v18, if anyone's got some \ngood ideas on how to structure it better. All these new tests are a \ngreat asset when refactoring this again.\n\n> + if (IsInjectionPointAttached(\"backend-initialize-v2-error\"))\n> + {\n> + FrontendProtocol = PG_PROTOCOL(2,0);\n> + elog(FATAL, \"protocol version 2 error triggered\");\n> + }\n> \n> This is an attempt to do stack manipulation with an injection point\n> set. FrontendProtocol is a global variable, so you could have a new\n> callback setting up this global variable directly, then FATAL (I\n> really don't mind is modules/injection_points finishes with a library\n> of callbacks).\n> \n> Not sure to like much this new IsInjectionPointAttached() that does a\n> search in the existing injection point pool, though. This leads to\n> more code footprint in the core backend, and I'm trying to minimize\n> that. Not everybody agrees with this view, I'd guess, which is also\n> fine.\n\nYeah, I'm also not too excited about the additional code in the backend, \nbut I'm also not excited about writing another test C module just for \nthis. I'm inclined to commit this as it is, but we can certainly revisit \nthis later, since it's just test code.\n\nHere's a new rebased version with some minor cleanup. Notably, I added \ndocs for the new IS_INJECTION_POINT_ATTACHED() macro.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Tue, 23 Jul 2024 20:32:29 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On Tue, Jul 23, 2024 at 08:32:29PM +0300, Heikki Linnakangas wrote:\n> All these new tests are a great asset when refactoring this again.\n\nThanks for doing that. The coverage, especially with v2, is going to\nbe really useful.\n\n> Yeah, I'm also not too excited about the additional code in the backend, but\n> I'm also not excited about writing another test C module just for this. I'm\n> inclined to commit this as it is, but we can certainly revisit this later,\n> since it's just test code.\n\nThe point would be to rely on the existing injection_points module,\nwith a new callback in it. The callbacks could be on a file of their\nown in the module, for clarity. What you have is OK for me anyway, it\nis good to add more options to developers in this area and this gets\nused in core. That's also enough to manipulate the stack in or even\nout of core.\n\n> Here's a new rebased version with some minor cleanup. Notably, I added docs\n> for the new IS_INJECTION_POINT_ATTACHED() macro.\n\n0001 looks OK.\n\n+ push @events, \"backenderror\" if $line =~ /error triggered for\ninjection point backend-/;\n+ push @events, \"v2error\" if $line =~ /protocol version 2 error\ntriggered/;\n\nPerhaps append an \"injection_\" for these two keywords?\n\n+#include \"storage/proc.h\"\n\nThis inclusion in injection_point.c should not be needed.\n\n> sets the FrontendProtocol global variable, but I think it's more\n> straightforward to have the test code\n\nThe last sentence in the commit message of 0002 seems to be\nunfinished.\n\nCould you run a perltidy on 005_negotiate_encryption.pl? 
There are a\nbunch of new diffs in it.\n--\nMichael", "msg_date": "Wed, 24 Jul 2024 08:37:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On 24/07/2024 02:37, Michael Paquier wrote:\n> On Tue, Jul 23, 2024 at 08:32:29PM +0300, Heikki Linnakangas wrote:\n>> All these new tests are a great asset when refactoring this again.\n> \n> Thanks for doing that. The coverage, especially with v2, is going to\n> be really useful.\n> \n>> Yeah, I'm also not too excited about the additional code in the backend, but\n>> I'm also not excited about writing another test C module just for this. I'm\n>> inclined to commit this as it is, but we can certainly revisit this later,\n>> since it's just test code.\n> \n> The point would be to rely on the existing injection_points module,\n> with a new callback in it. The callbacks could be on a file of their\n> own in the module, for clarity.\n\nHmm, do we want injection_points module to be a dumping ground for \ncallbacks that are only useful for very specific injection points, in \nspecific tests? I view it as a more general purpose module, containing \ncallbacks that are useful for many different tests. Don't get me wrong, \nI'm not necessarily against it, and it would be expedient, that's just \nnot how I see the purpose of injection_points.\n\n> What you have is OK for me anyway, it\n> is good to add more options to developers in this area and this gets\n> used in core. That's also enough to manipulate the stack in or even\n> out of core.\n\nOk, I kept it that way.\n\n>> Here's a new rebased version with some minor cleanup. Notably, I added docs\n>> for the new IS_INJECTION_POINT_ATTACHED() macro.\n> \n> 0001 looks OK.\n> \n> + push @events, \"backenderror\" if $line =~ /error triggered for\n> injection point backend-/;\n> + push @events, \"v2error\" if $line =~ /protocol version 2 error\n> triggered/;\n> \n> Perhaps append an \"injection_\" for these two keywords?\n> \n> +#include \"storage/proc.h\"\n> \n> This inclusion in injection_point.c should not be needed.\n> \n>> sets the FrontendProtocol global variable, but I think it's more\n>> straightforward to have the test code\n> \n> The last sentence in the commit message of 0002 seems to be\n> unfinished.\n\nFixed.\n\n> Could you run a perltidy on 005_negotiate_encryption.pl? There are a\n> bunch of new diffs in it.\n\nFixed.\n\nCommitted, thanks for the review, and thanks Jacob for the testing!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 26 Jul 2024 15:40:50 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On 17/06/2024 21:33, Andres Freund wrote:\n> If provided with the necessary key information, wireshark can decode TLS\n> exchanges when using sslnegotiation=postgres but not with direct. Presumably\n> it needs to be taught postgres' ALPN id or something.\n\nI opened https://gitlab.com/wireshark/wireshark/-/merge_requests/16612 \nto fix that in the wireshark pgsql protocol dissector.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 26 Jul 2024 17:34:06 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" } ]
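As a client-side illustration of the negotiation options settled in this thread, the sslnegotiation parameter can be exercised from any libpq program; the host and database names below are placeholders, the rest is plain documented libpq API.

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
	/*
	 * sslnegotiation=direct skips the SSLRequest round trip and relies on
	 * the ALPN protocol id negotiated during the TLS handshake.
	 */
	PGconn	   *conn = PQconnectdb("host=localhost dbname=postgres "
								   "sslmode=require sslnegotiation=direct");

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return EXIT_FAILURE;
	}

	printf("connected, SSL in use: %s\n", PQsslInUse(conn) ? "yes" : "no");
	PQfinish(conn);
	return EXIT_SUCCESS;
}

Setting sslnegotiation=directonly instead makes libpq refuse the SSLRequest fallback entirely when SSL is used, per the rename discussed at the top of the thread.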
[ { "msg_contents": "I'm developing an index access method.\n\nSELECT *\nFROM foo\nWHERE col <=> constant\nORDER BY col <==> constant\nLIMIT 10;\n\nI'm successfully getting the WHERE and the ORDER BY clauses in my\nbeginscan() method. Is there any way to get the LIMIT (or OFFSET, for that\nmatter)?\n\nMy access method is designed such that you have to fetch the entire result\nset in one go. It's not streaming, like most access methods. As such, it\nwould be very helpful to know up front how many items I need to fetch from\nthe index.\n\n-- \nChris Cleveland\n312-339-2677 mobile\n\nI'm developing an index access method.SELECT *FROM fooWHERE col <=> constantORDER BY col <==> constantLIMIT 10;I'm successfully getting the WHERE and the ORDER BY clauses in my beginscan() method. Is there any way to get the LIMIT (or OFFSET, for that matter)?My access method is designed such that you have to fetch the entire result set in one go. It's not streaming, like most access methods. As such, it would be very helpful to know up front how many items I need to fetch from the index.-- Chris Cleveland312-339-2677 mobile", "msg_date": "Mon, 29 Apr 2024 11:17:15 -0500", "msg_from": "Chris Cleveland <[email protected]>", "msg_from_op": true, "msg_subject": "Possible to get LIMIT in an index access method?" }, { "msg_contents": "On Mon, 29 Apr 2024 at 18:17, Chris Cleveland\n<[email protected]> wrote:\n>\n> I'm developing an index access method.\n>\n> SELECT *\n> FROM foo\n> WHERE col <=> constant\n> ORDER BY col <==> constant\n> LIMIT 10;\n>\n> I'm successfully getting the WHERE and the ORDER BY clauses in my beginscan() method. Is there any way to get the LIMIT (or OFFSET, for that matter)?\n\nNo, that is not possible.\nThe index AM does not know (*should* not know) about the visibility\nstate of indexed entries, except in those cases where the indexed\nentries are dead to all running transactions. Additionally, it can't\n(shouldn't) know about filters on columns that are not included in the\nindex. As such, pushing down limits into the index AM is only possible\nin situations where you know that the table is fully visible (which\ncan't be guaranteed at runtime) and that no other quals on the table's\ncolumns exist (which is possible, but unlikely to be useful).\n\nGIN has one \"solution\" to this when you enable gin_fuzzy_search_limit\n(default: disabled), where it throws an error if you try to extract\nmore results from the resultset after it's been exhausted while the AM\nknows more results could exist.\n\n> My access method is designed such that you have to fetch the entire result set in one go. It's not streaming, like most access methods. As such, it would be very helpful to know up front how many items I need to fetch from the index.\n\nSorry, but I don't think we can know in advance how many tuples are\ngoing to be extracted from an index scan.\n\n\nKind regards,\n\nMatthias van de Meent.\n\n\n", "msg_date": "Mon, 29 Apr 2024 20:07:36 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to get LIMIT in an index access method?" } ]
[ { "msg_contents": "Hi,\n\nWith TLS 1.3 and others there is possibly a security flaw using ALPN [1].\n\nIt seems to me that the ALPN protocol can be bypassed if the client does\nnot correctly inform the ClientHello header.\n\nSo, the suggestion is to check the ClientHello header in the server and\nterminate the TLS handshake early.\n\nPatch attached.\n\nbest regards,\nRanier Vilela\n\n[1] terminate-tlsv1-3-handshake-if-alpn-is-missing\n<https://stackoverflow.com/questions/77271498/terminate-tlsv1-3-handshake-if-alpn-is-missing>", "msg_date": "Mon, 29 Apr 2024 14:10:44 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On 29/04/2024 20:10, Ranier Vilela wrote:\n> Hi,\n> \n> With TLS 1.3 and others there is possibly a security flaw using ALPN [1].\n> \n> It seems to me that the ALPN protocol can be bypassed if the client does \n> not correctly inform the ClientHello header.\n> \n> So, the suggestion is to check the ClientHello header in the server and\n> terminate the TLS handshake early.\n\nSounds to me like it's working as designed. ALPN in general is optional; \nif the client doesn't request it, then you proceed without it. We do \nrequire ALPN for direct SSL connections though. We can, because direct \nSSL connections is a new feature in Postgres. But we cannot require it \nfor the connections negotiated with SSLRequest, or we break \ncompatibility with old clients that don't use ALPN.\n\nThere is a check in direct SSL mode that ALPN was used \n(ProcessSSLStartup in backend_startup.c):\n\n> if (!port->alpn_used)\n> {\n> ereport(COMMERROR,\n> (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> errmsg(\"received direct SSL connection request without ALPN protocol negotiation extension\")));\n> goto reject;\n> }\n\nThat happens immediately after the SSL connection has been established.\n\nHmm. I guess it would be better to abort the connection earlier, without \ncompleting the TLS handshake. Otherwise the client might send the first \nmessage in wrong protocol to the PostgreSQL server. That's not a \nsecurity issue for the PostgreSQL server: the server disconnects without \nreading the message. And I don't see any way for an ALPACA attack when \nthe server ignores the client's message. Nevertheless, from the point of \nview of keeping the attack surface as small as possible, aborting \nearlier seems better.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 29 Apr 2024 20:56:42 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "Em seg., 29 de abr. de 2024 às 14:56, Heikki Linnakangas <[email protected]>\nescreveu:\n\n> On 29/04/2024 20:10, Ranier Vilela wrote:\n> > Hi,\n> >\n> > With TLS 1.3 and others there is possibly a security flaw using ALPN [1].\n> >\n> > It seems to me that the ALPN protocol can be bypassed if the client does\n> > not correctly inform the ClientHello header.\n> >\n> > So, the suggestion is to check the ClientHello header in the server and\n> > terminate the TLS handshake early.\n>\n> Sounds to me like it's working as designed. ALPN in general is optional;\n> if the client doesn't request it, then you proceed without it. We do\n> require ALPN for direct SSL connections though. We can, because direct\n> SSL connections is a new feature in Postgres. 
But we cannot require it\n> for the connections negotiated with SSLRequest, or we break\n> compatibility with old clients that don't use ALPN.\n>\nOk.\nBut what if I have a server configured for TLS 1.3 and that requires ALPN\nto allow access?\nWhat about a client configured without ALPN requiring connection?\n\n\n>\n> There is a check in direct SSL mode that ALPN was used\n> (ProcessSSLStartup in backend_startup.c):\n>\n> > if (!port->alpn_used)\n> > {\n> > ereport(COMMERROR,\n> > (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> > errmsg(\"received direct SSL connection\n> request without ALPN protocol negotiation extension\")));\n> > goto reject;\n> > }\n>\n> That happens immediately after the SSL connection has been established.\n>\n> Hmm. I guess it would be better to abort the connection earlier, without\n> completing the TLS handshake. Otherwise the client might send the first\n> message in wrong protocol to the PostgreSQL server. That's not a\n> security issue for the PostgreSQL server: the server disconnects without\n> reading the message. And I don't see any way for an ALPACA attack when\n> the server ignores the client's message. Nevertheless, from the point of\n> view of keeping the attack surface as small as possible, aborting\n> earlier seems better.\n>\nSo the ClientHello callback is the correct way to determine the end.\n\nbest regards,\nRanier Vilela\n\nEm seg., 29 de abr. de 2024 às 14:56, Heikki Linnakangas <[email protected]> escreveu:On 29/04/2024 20:10, Ranier Vilela wrote:\n> Hi,\n> \n> With TLS 1.3 and others there is possibly a security flaw using ALPN [1].\n> \n> It seems to me that the ALPN protocol can be bypassed if the client does \n> not correctly inform the ClientHello header.\n> \n> So, the suggestion is to check the ClientHello header in the server and\n> terminate the TLS handshake early.\n\nSounds to me like it's working as designed. ALPN in general is optional; \nif the client doesn't request it, then you proceed without it. We do \nrequire ALPN for direct SSL connections though. We can, because direct \nSSL connections is a new feature in Postgres. But we cannot require it \nfor the connections negotiated with SSLRequest, or we break \ncompatibility with old clients that don't use ALPN.Ok.But what if I have a server configured for TLS 1.3 and that requires ALPN to allow access?What about a client configured without ALPN requiring connection? \n\nThere is a check in direct SSL mode that ALPN was used \n(ProcessSSLStartup in backend_startup.c):\n\n>         if (!port->alpn_used)\n>         {\n>                 ereport(COMMERROR,\n>                                 (errcode(ERRCODE_PROTOCOL_VIOLATION),\n>                                  errmsg(\"received direct SSL connection request without ALPN protocol negotiation extension\")));\n>                 goto reject;\n>         }\n\nThat happens immediately after the SSL connection has been established.\n\nHmm. I guess it would be better to abort the connection earlier, without \ncompleting the TLS handshake. Otherwise the client might send the first \nmessage in wrong protocol to the PostgreSQL server. That's not a \nsecurity issue for the PostgreSQL server: the server disconnects without \nreading the message. And I don't see any way for an ALPACA attack when \nthe server ignores the client's message. Nevertheless, from the point of \nview of keeping the attack surface as small as possible, aborting \nearlier seems better.So the ClientHello callback is the correct way to determine the end. 
best regards,Ranier Vilela", "msg_date": "Mon, 29 Apr 2024 15:06:52 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "On 29/04/2024 21:06, Ranier Vilela wrote:\n> Em seg., 29 de abr. de 2024 às 14:56, Heikki Linnakangas \n> <[email protected] <mailto:[email protected]>> escreveu:\n> \n> On 29/04/2024 20:10, Ranier Vilela wrote:\n> > Hi,\n> >\n> > With TLS 1.3 and others there is possibly a security flaw using\n> ALPN [1].\n> >\n> > It seems to me that the ALPN protocol can be bypassed if the\n> client does\n> > not correctly inform the ClientHello header.\n> >\n> > So, the suggestion is to check the ClientHello header in the\n> server and\n> > terminate the TLS handshake early.\n> \n> Sounds to me like it's working as designed. ALPN in general is\n> optional;\n> if the client doesn't request it, then you proceed without it. We do\n> require ALPN for direct SSL connections though. We can, because direct\n> SSL connections is a new feature in Postgres. But we cannot require it\n> for the connections negotiated with SSLRequest, or we break\n> compatibility with old clients that don't use ALPN.\n> \n> Ok.\n> But what if I have a server configured for TLS 1.3 and that requires \n> ALPN to allow access?\n> What about a client configured without ALPN requiring connection?\n\nSorry, I don't understand the questions. What about them?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 29 Apr 2024 21:36:34 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" }, { "msg_contents": "Em seg., 29 de abr. de 2024 às 15:36, Heikki Linnakangas <[email protected]>\nescreveu:\n\n> On 29/04/2024 21:06, Ranier Vilela wrote:\n> > Em seg., 29 de abr. de 2024 às 14:56, Heikki Linnakangas\n> > <[email protected] <mailto:[email protected]>> escreveu:\n> >\n> > On 29/04/2024 20:10, Ranier Vilela wrote:\n> > > Hi,\n> > >\n> > > With TLS 1.3 and others there is possibly a security flaw using\n> > ALPN [1].\n> > >\n> > > It seems to me that the ALPN protocol can be bypassed if the\n> > client does\n> > > not correctly inform the ClientHello header.\n> > >\n> > > So, the suggestion is to check the ClientHello header in the\n> > server and\n> > > terminate the TLS handshake early.\n> >\n> > Sounds to me like it's working as designed. ALPN in general is\n> > optional;\n> > if the client doesn't request it, then you proceed without it. We do\n> > require ALPN for direct SSL connections though. We can, because\n> direct\n> > SSL connections is a new feature in Postgres. But we cannot require\n> it\n> > for the connections negotiated with SSLRequest, or we break\n> > compatibility with old clients that don't use ALPN.\n> >\n> > Ok.\n> > But what if I have a server configured for TLS 1.3 and that requires\n> > ALPN to allow access?\n> > What about a client configured without ALPN requiring connection?\n>\n> Sorry, I don't understand the questions. What about them?\n>\nSorry, I'll try to be clearer.\nThe way it is designed, can we impose TLS 1.3 and ALPN to allow access to a\npublic server?\n\nAnd if on the other side we have a client, configured without ALPN,\nwhen requesting access, the server will refuse?\n\nbest regards,\nRanier Vilela\n\nEm seg., 29 de abr. de 2024 às 15:36, Heikki Linnakangas <[email protected]> escreveu:On 29/04/2024 21:06, Ranier Vilela wrote:\n> Em seg., 29 de abr. 
de 2024 às 14:56, Heikki Linnakangas \n> <[email protected] <mailto:[email protected]>> escreveu:\n> \n>     On 29/04/2024 20:10, Ranier Vilela wrote:\n>      > Hi,\n>      >\n>      > With TLS 1.3 and others there is possibly a security flaw using\n>     ALPN [1].\n>      >\n>      > It seems to me that the ALPN protocol can be bypassed if the\n>     client does\n>      > not correctly inform the ClientHello header.\n>      >\n>      > So, the suggestion is to check the ClientHello header in the\n>     server and\n>      > terminate the TLS handshake early.\n> \n>     Sounds to me like it's working as designed. ALPN in general is\n>     optional;\n>     if the client doesn't request it, then you proceed without it. We do\n>     require ALPN for direct SSL connections though. We can, because direct\n>     SSL connections is a new feature in Postgres. But we cannot require it\n>     for the connections negotiated with SSLRequest, or we break\n>     compatibility with old clients that don't use ALPN.\n> \n> Ok.\n> But what if I have a server configured for TLS 1.3 and that requires \n> ALPN to allow access?\n> What about a client configured without ALPN requiring connection?\n\nSorry, I don't understand the questions. What about them?Sorry, I'll try to be clearer.The way it is designed, can we impose TLS 1.3 and ALPN to allow access to a public server?And if on the other side we have a client, configured without ALPN, when requesting access, the server will refuse?best regards,Ranier Vilela", "msg_date": "Mon, 29 Apr 2024 16:19:07 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Direct SSL connection and ALPN loose ends" } ]
[ { "msg_contents": "Do we know, is it posted anywhere on the postgresql.org site what CVE's will be addressed in the next round up updates\nto Postgres which should come out next Thursday, May 9th, 2024?\n\nThanks, Mark\n\n\n\n\n\n\n\n\n\nDo we know, is it posted anywhere on the postgresql.org site what CVE’s will be addressed in the next round up updates\nto Postgres which should come out next Thursday, May 9th, 2024?\n\nThanks, Mark", "msg_date": "Mon, 29 Apr 2024 17:47:10 +0000", "msg_from": "Mark Hill <[email protected]>", "msg_from_op": true, "msg_subject": "CVE's addressed in next update" }, { "msg_contents": "Mark Hill <[email protected]> writes:\n> Do we know, is it posted anywhere on the postgresql.org site what CVE's will be addressed in the next round up updates\n> to Postgres which should come out next Thursday, May 9th, 2024?\n\nWe do not announce that sort of thing in advance.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Apr 2024 13:52:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CVE's addressed in next update" } ]
[ { "msg_contents": "Hello\n\n \n\nWhen using ctid as a\nrestriction clause with lower and upper bounds, PostgreSQL's planner will use\nTID range scan plan to handle such query. This works and generally fine.\nHowever, if the ctid range covers a huge amount of data, the planner will not\nuse parallel worker to perform ctid range scan because it is not supported. It could, however,\nstill choose to use parallel sequential scan to complete the scan if ti costs less. \n\n \n\nIn one of our\nmigration scenarios, we rely on tid range scan to migrate huge table from one\ndatabase to another once the lower and upper ctid bound is determined. With the\nsupport of parallel ctid range scan, this process could be done much quicker.\n\n \n\nThe attached patch\nis my approach to add parallel ctid range scan to PostgreSQL's planner and executor. In my\ntests, I do see an increase in performance using parallel tid range scan over\nthe single worker tid range scan and it is also faster than parallel sequential\nscan covering similar ranges. Of course, the table needs to be large enough to\nreflect the performance increase.\n\n \n\nbelow is the timing to complete a select query covering all the records in a simple 2-column\ntable with 40 million records,\n\n \n\n - tid range scan takes 10216ms\n\n - tid range scan with 2 workers takes 7109ms\n\n - sequential scan with 2 workers takes 8499ms\n\n \n\nHaving the support\nfor parallel ctid range scan is definitely helpful in our migration case, I am\nsure it could be useful in other cases as well. I am sharing the patch here and\nif someone could provide a quick feedback or review that would be greatly appreciated.\n\n \n\nThank you!\n\n \n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:[email protected]\n\nhttp://www.highgo.ca", "msg_date": "Mon, 29 Apr 2024 15:35:59 -0700", "msg_from": "Cary Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Support tid range scan in parallel?" }, { "msg_contents": "On Tue, 30 Apr 2024 at 10:36, Cary Huang <[email protected]> wrote:\n> In one of our migration scenarios, we rely on tid range scan to migrate huge table from one database to another once the lower and upper ctid bound is determined. With the support of parallel ctid range scan, this process could be done much quicker.\n\nI would have thought that the best way to migrate would be to further\ndivide the TID range into N segments and run N queries, one per\nsegment to get the data out.\n\n From a CPU point of view, I'd hard to imagine that a SELECT * query\nwithout any other items in the WHERE clause other than the TID range\nquals would run faster with multiple workers than with 1. The problem\nis the overhead of pushing tuples to the main process often outweighs\nthe benefits of the parallelism. However, from an I/O point of view\non a server with slow enough disks, I can imagine there'd be a\nspeedup.\n\n> The attached patch is my approach to add parallel ctid range scan to PostgreSQL's planner and executor. In my tests, I do see an increase in performance using parallel tid range scan over the single worker tid range scan and it is also faster than parallel sequential scan covering similar ranges. 
Of course, the table needs to be large enough to reflect the performance increase.\n>\n> below is the timing to complete a select query covering all the records in a simple 2-column table with 40 million records,\n>\n> - tid range scan takes 10216ms\n> - tid range scan with 2 workers takes 7109ms\n> - sequential scan with 2 workers takes 8499ms\n\nCan you share more details about this test? i.e. the query, what the\ntimes are that you've measured (EXPLAIN ANALYZE, or SELECT, COPY?).\nAlso, which version/commit did you patch against? I was wondering if\nthe read stream code added in v17 would result in the serial case\nrunning faster because the parallelism just resulted in more I/O\nconcurrency.\n\nOf course, it may be beneficial to have parallel TID Range for other\ncases when more row filtering or aggregation is being done as that\nrequires pushing fewer tuples over from the parallel worker to the\nmain process. It just would be good to get to the bottom of if there's\nstill any advantage to parallelism when no filtering other than the\nctid quals is being done now that we've less chance of having to wait\nfor I/O coming from disk with the read streams code.\n\nDavid\n\n\n", "msg_date": "Tue, 30 Apr 2024 11:14:37 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support tid range scan in parallel?" }, { "msg_contents": "Hi David\n\nThank you for your reply.\n\n> From a CPU point of view, I'd hard to imagine that a SELECT * query\n> without any other items in the WHERE clause other than the TID range\n> quals would run faster with multiple workers than with 1. The problem\n> is the overhead of pushing tuples to the main process often outweighs\n> the benefits of the parallelism. However, from an I/O point of view\n> on a server with slow enough disks, I can imagine there'd be a\n> speedup.\n\nyeah, this is generally true. With everything set to default, the planner would not choose parallel sequential scan if the scan range covers mostly all tuples of a table (to reduce the overhead of pushing tuples to main proc as you mentioned). It is preferred when the target data is small but the table is huge. In my case, it is also the same, the planner by default uses normal tid range scan, so I had to alter cost parameters to influence the planner's decision. This is where I found that with WHERE clause only containing TID ranges that cover the entire table would result faster with parallel workers, at least in my environment.\n\n> Of course, it may be beneficial to have parallel TID Range for other\n> cases when more row filtering or aggregation is being done as that\n> requires pushing fewer tuples over from the parallel worker to the\n> main process. It just would be good to get to the bottom of if there's\n> still any advantage to parallelism when no filtering other than the\n> ctid quals is being done now that we've less chance of having to wait\n> for I/O coming from disk with the read streams code.\n\nI believe so too. I shared my test procedure below with ctid being the only quals. \n\n>> below is the timing to complete a select query covering all the records in a simple 2-column table with 40 million records,\n>>\n>> - tid range scan takes 10216ms\n>> - tid range scan with 2 workers takes 7109ms\n>> - sequential scan with 2 workers takes 8499ms\n>\n> Can you share more details about this test? i.e. the query, what the\n> times are that you've measured (EXPLAIN ANALYZE, or SELECT, COPY?).\n> Also, which version/commit did you patch against? 
I was wondering if\n> the read stream code added in v17 would result in the serial case\n> running faster because the parallelism just resulted in more I/O\n> concurrency.\n\nYes of course. These numbers were obtained earlier this year on master with the patch applied most likely without the read stream code you mentioned. The patch attached here is rebased to commit dd0183469bb779247c96e86c2272dca7ff4ec9e7 on master, which is quite recent and should have the read stream code for v17 as I can immediately tell that the serial scans run much faster now in my setup. I increased the records on the test table from 40 to 100 million because serial scans are much faster now. Below is the summary and details of my test. Note that I only include the EXPLAIN ANALYZE details of round1 test. Round2 is the same except for different execution times. \n\n[env]\n- OS: Ubuntu 18.04\n- CPU: 4 cores @ 3.40 GHz\n- MEM: 16 GB\n\n[test table setup]\ninitdb with all default values\nCREATE TABLE test (a INT, b TEXT);\nINSERT INTO test VALUES(generate_series(1,100000000), 'testing');\nSELECT min(ctid), max(ctid) from test;\n min | max\n-------+--------------\n (0,1) | (540540,100)\n(1 row)\n\n[summary]\nround 1:\ntid range scan: 14915ms\ntid range scan 2 workers: 12265ms\nseq scan with 2 workers: 12675ms\n\nround2:\ntid range scan: 12112ms\ntid range scan 2 workers: 10649ms\nseq scan with 2 workers: 11206ms\n\n[details of EXPLAIN ANALYZE below]\n\n[default tid range scan]\nEXPLAIN ANALYZE SELECT a FROM test WHERE ctid >= '(1,0)' AND ctid <= '(540540,100)';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Tid Range Scan on test (cost=0.01..1227029.81 rows=68648581 width=4) (actual time=0.188..12280.791 rows=99999815 loops=1)\n TID Cond: ((ctid >= '(1,0)'::tid) AND (ctid <= '(540540,100)'::tid))\n Planning Time: 0.817 ms\n Execution Time: 14915.035 ms\n(4 rows)\n\n[parallel tid range scan with 2 workers]\nset parallel_setup_cost=0;\nset parallel_tuple_cost=0;\nset min_parallel_table_scan_size=0;\nset max_parallel_workers_per_gather=2;\n\nEXPLAIN ANALYZE SELECT a FROM test WHERE ctid >= '(1,0)' AND ctid <= '(540540,100)';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=0.01..511262.43 rows=68648581 width=4) (actual time=1.322..9249.197 rows=99999815 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Tid Range Scan on test (cost=0.01..511262.43 rows=28603575 width=4) (actual time=0.332..4906.262 rows=33333272 loops=3)\n TID Cond: ((ctid >= '(1,0)'::tid) AND (ctid <= '(540540,100)'::tid))\n Planning Time: 0.213 ms\n Execution Time: 12265.873 ms\n(7 rows)\n\n[parallel seq scan with 2 workers]\nset enable_tidscan = 'off';\n\nEXPLAIN ANALYZE SELECT a FROM test WHERE ctid >= '(1,0)' AND ctid <= '(540540,100)';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=0.00..969595.42 rows=68648581 width=4) (actual time=4.489..9713.299 rows=99999815 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on test (cost=0.00..969595.42 rows=28603575 width=4) (actual time=0.995..5541.178 rows=33333272 loops=3)\n Filter: ((ctid >= '(1,0)'::tid) AND (ctid <= '(540540,100)'::tid))\n Rows Removed by Filter: 62\n Planning Time: 0.129 ms\n Execution Time: 12675.681 ms\n(8 rows)\n\n\nBest 
regards\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\[email protected]\nwww.highgo.ca\n\n\n\n", "msg_date": "Tue, 30 Apr 2024 12:10:12 -0700", "msg_from": "Cary Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support tid range scan in parallel?" }, { "msg_contents": "On Wed, 1 May 2024 at 07:10, Cary Huang <[email protected]> wrote:\n> Yes of course. These numbers were obtained earlier this year on master with the patch applied most likely without the read stream code you mentioned. The patch attached here is rebased to commit dd0183469bb779247c96e86c2272dca7ff4ec9e7 on master, which is quite recent and should have the read stream code for v17 as I can immediately tell that the serial scans run much faster now in my setup. I increased the records on the test table from 40 to 100 million because serial scans are much faster now. Below is the summary and details of my test. Note that I only include the EXPLAIN ANALYZE details of round1 test. Round2 is the same except for different execution times.\n\nIt would be good to see the EXPLAIN (ANALYZE, BUFFERS) with SET\ntrack_io_timing = 1;\n\nHere's a quick review\n\n1. Does not produce correct results:\n\n-- serial plan\npostgres=# select count(*) from t where ctid >= '(0,0)' and ctid < '(10,0)';\n count\n-------\n 2260\n(1 row)\n\n-- parallel plan\npostgres=# set max_parallel_workers_per_gather=2;\nSET\npostgres=# select count(*) from t where ctid >= '(0,0)' and ctid < '(10,0)';\n count\n-------\n 0\n(1 row)\n\nI've not really looked into why, but I see you're not calling\nheap_setscanlimits() in parallel mode. You need to somehow restrict\nthe block range of the scan to the range specified in the quals. You\nmight need to do more work to make the scan limits work with parallel\nscans.\n\nIf you look at heap_scan_stream_read_next_serial(), it's calling\nheapgettup_advance_block(), where there's \"if (--scan->rs_numblocks\n== 0)\". But no such equivalent code in\ntable_block_parallelscan_nextpage() called by\nheap_scan_stream_read_next_parallel(). To make Parallel TID Range\nwork, you'll need heap_scan_stream_read_next_parallel() to abide by\nthe scan limits.\n\n2. There's a 4 line comment you've added to cost_tidrangescan() which\nis just a copy and paste from cost_seqscan(). If you look at the\nseqscan costing, the comment is true in that scenario, but not true in\nwhere you've pasted it. The I/O cost is all tied in to run_cost.\n\n+ /* The CPU cost is divided among all the workers. */\n+ run_cost /= parallel_divisor;\n+\n+ /*\n+ * It may be possible to amortize some of the I/O cost, but probably\n+ * not very much, because most operating systems already do aggressive\n+ * prefetching. For now, we assume that the disk run cost can't be\n+ * amortized at all.\n+ */\n\n3. Calling TidRangeQualFromRestrictInfoList() once for the serial path\nand again for the partial path isn't great. It would be good to just\ncall that function once and use the result for both path types.\n\n4. create_tidrangescan_subpaths() seems like a weird name for a\nfunction. That seems to imply that scans have subpaths. Scans are\nalways leaf paths and have no subpaths.\n\nThis isn't a complete review. It's just that this seems enough to keep\nyou busy for a while. I can look a bit harder when the patch is\nworking correctly. 
I think you should have enough feedback to allow\nthat now.\n\nDavid\n\n\n", "msg_date": "Wed, 1 May 2024 19:53:01 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support tid range scan in parallel?" }, { "msg_contents": " > This isn't a complete review. It's just that this seems enough to keep\n > you busy for a while. I can look a bit harder when the patch is\n > working correctly. I think you should have enough feedback to allow\n > that now.\n\nThanks for the test, review and feedback. They are greatly appreciated! \nI will polish the patch some more following your feedback and share new\nresults / patch when I have them. \n\nThanks again!\n\nCary\n\n\n\n\n", "msg_date": "Wed, 01 May 2024 09:44:14 -0700", "msg_from": "Cary Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support tid range scan in parallel?" }, { "msg_contents": "Hello\n\n> -- parallel plan\n> postgres=# set max_parallel_workers_per_gather=2;\n> SET\n> postgres=# select count(*) from t where ctid >= '(0,0)' and ctid < '(10,0)';\n> count\n> -------\n> 0\n> (1 row)\n> \n> I've not really looked into why, but I see you're not calling\n> heap_setscanlimits() in parallel mode. You need to somehow restrict\n> the block range of the scan to the range specified in the quals. You\n> might need to do more work to make the scan limits work with parallel\n> scans.\n\nI found that select count(*) using parallel tid rangescan for the very first time,\nit would return the correct result, but the same subsequent queries would\nresult in 0 as you stated. This is due to the \"pscan->phs_syncscan\" set to true\nin ExecTidRangeScanInitializeDSM(), inherited from parallel seq scan case.\nWith syncscan enabled, the \"table_block_parallelscan_nextpage()\" would\nreturn the next block since the end of the first tid rangescan instead of the\ncorrect start block that should be scanned. I see that single tid rangescan\ndoes not have SO_ALLOW_SYNC set, so I figure syncscan should also be\ndisabled in parallel case. With this change, then it would be okay to call\nheap_setscanlimits() in parallel case, so I added this call back to\nheap_set_tidrange() in both serial and parallel cases.\n\n\n> 2. There's a 4 line comment you've added to cost_tidrangescan() which\n> is just a copy and paste from cost_seqscan(). If you look at the\n> seqscan costing, the comment is true in that scenario, but not true in\n> where you've pasted it. The I/O cost is all tied in to run_cost.\n\nthanks for pointing out, I have removed these incorrect comments\n\n> 3. Calling TidRangeQualFromRestrictInfoList() once for the serial path\n> and again for the partial path isn't great. It would be good to just\n> call that function once and use the result for both path types.\n\ngood point. I moved the adding of tid range scan partial path inside\ncreate_tidscan_paths() where it makes a TidRangeQualFromRestrictInfoList()\ncall for serial path, so I can just reuse tidrangequals if it is appropriate to\nconsider parallel tid rangescan.\n\n> 4. create_tidrangescan_subpaths() seems like a weird name for a\n> function. That seems to imply that scans have subpaths. 
Scans are\n> always leaf paths and have no subpaths.\n\nI removed this function with weird name; it is not needed because the logic inside\nis moved to create_tidscan_paths() where it can reuse tidrangequals.\n\n> It would be good to see the EXPLAIN (ANALYZE, BUFFERS) with SET\n> track_io_timing = 1;\n\nthe v2 patch is attached that should address the issues above. Below are the EXPLAIN\noutputs with track_io_timing = 1 in my environment. Generally, parallel tid range scan\nresults in more I/O timings and shorter execution time.\n\n\nSET track_io_timing = 1;\n\n===serial tid rangescan===\n\nEXPLAIN (ANALYZE, BUFFERS) select a from test where ctid >= '(0,0)' and ctid < '(216216,40)';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Tid Range Scan on test (cost=0.01..490815.59 rows=27459559 width=4) (actual time=0.072..10143.770 rows=39999999 loops=1)\n TID Cond: ((ctid >= '(0,0)'::tid) AND (ctid < '(216216,40)'::tid))\n Buffers: shared hit=298 read=215919 written=12972\n I/O Timings: shared read=440.277 write=58.525\n Planning:\n Buffers: shared hit=2\n Planning Time: 0.289 ms\n Execution Time: 12497.081 ms\n(8 rows)\n\nset parallel_setup_cost=0;\nset parallel_tuple_cost=0;\nset min_parallel_table_scan_size=0;\nset max_parallel_workers_per_gather=2;\n\n===parallel tid rangescan===\n\nEXPLAIN (ANALYZE, BUFFERS) select a from test where ctid >= '(0,0)' and ctid < '(216216,40)';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=0.01..256758.88 rows=40000130 width=4) (actual time=0.878..7083.705 rows=39999999 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared read=216217\n I/O Timings: shared read=1224.153\n -> Parallel Tid Range Scan on test (cost=0.01..256758.88 rows=16666721 width=4) (actual time=0.256..3980.770 rows=13333333 loops=3)\n TID Cond: ((ctid >= '(0,0)'::tid) AND (ctid < '(216216,40)'::tid))\n Buffers: shared read=216217\n I/O Timings: shared read=1224.153\n Planning Time: 0.258 ms\n Execution Time: 9731.800 ms\n(11 rows)\n\n===serial tid rangescan with aggregate===\n\nset max_parallel_workers_per_gather=0;\n\nEXPLAIN (ANALYZE, BUFFERS) select count(a) from test where ctid >= '(0,0)' and ctid < '(216216,40)';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=716221.63..716221.64 rows=1 width=8) (actual time=12931.695..12931.696 rows=1 loops=1)\n Buffers: shared read=216217\n I/O Timings: shared read=599.331\n -> Tid Range Scan on test (cost=0.01..616221.31 rows=40000130 width=4) (actual time=0.079..6800.482 rows=39999999 loops=1)\n TID Cond: ((ctid >= '(0,0)'::tid) AND (ctid < '(216216,40)'::tid))\n Buffers: shared read=216217\n I/O Timings: shared read=599.331\n Planning:\n Buffers: shared hit=1 read=2\n I/O Timings: shared read=0.124\n Planning Time: 0.917 ms\n Execution Time: 12932.348 ms\n(12 rows)\n\n===parallel tid rangescan with aggregate===\n\nset max_parallel_workers_per_gather=2;\nEXPLAIN (ANALYZE, BUFFERS) select count(a) from test where ctid >= '(0,0)' and ctid < '(216216,40)';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=298425.70..298425.71 rows=1 width=8) (actual time=4842.512..4847.863 
rows=1 loops=1)\n Buffers: shared read=216217\n I/O Timings: shared read=1155.321\n -> Gather (cost=298425.68..298425.69 rows=2 width=8) (actual time=4842.020..4847.851 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared read=216217\n I/O Timings: shared read=1155.321\n -> Partial Aggregate (cost=298425.68..298425.69 rows=1 width=8) (actual time=4824.730..4824.731 rows=1 loops=3)\n Buffers: shared read=216217\n I/O Timings: shared read=1155.321\n -> Parallel Tid Range Scan on test (cost=0.01..256758.88 rows=16666721 width=4) (actual time=0.098..2614.108 rows=13333333 loops=3)\n TID Cond: ((ctid >= '(0,0)'::tid) AND (ctid < '(216216,40)'::tid))\n Buffers: shared read=216217\n I/O Timings: shared read=1155.321\n Planning:\n Buffers: shared read=3\n I/O Timings: shared read=3.323\n Planning Time: 4.124 ms\n Execution Time: 4847.992 ms\n(20 rows)\n\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\[email protected]\nwww.highgo.ca", "msg_date": "Fri, 03 May 2024 11:55:01 -0700", "msg_from": "Cary Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support tid range scan in parallel?" }, { "msg_contents": "On Sat, 4 May 2024 at 06:55, Cary Huang <[email protected]> wrote:\n> With syncscan enabled, the \"table_block_parallelscan_nextpage()\" would\n> return the next block since the end of the first tid rangescan instead of the\n> correct start block that should be scanned. I see that single tid rangescan\n> does not have SO_ALLOW_SYNC set, so I figure syncscan should also be\n> disabled in parallel case. With this change, then it would be okay to call\n> heap_setscanlimits() in parallel case, so I added this call back to\n> heap_set_tidrange() in both serial and parallel cases.\n\nThis now calls heap_setscanlimits() for the parallel version, it's\njust that table_block_parallelscan_nextpage() does nothing to obey\nthose limits.\n\nThe only reason the code isn't reading the entire table is due to the\noptimisation in heap_getnextslot_tidrange() which returns false when\nthe ctid goes out of range. 
i.e, this code:\n\n/*\n* When scanning forward, the TIDs will be in ascending order.\n* Future tuples in this direction will be higher still, so we can\n* just return false to indicate there will be no more tuples.\n*/\nif (ScanDirectionIsForward(direction))\n return false;\n\nIf I comment out that line, I see all pages are accessed:\n\npostgres=# explain (analyze, buffers) select count(*) from a where\nctid >= '(0,1)' and ctid < '(11,0)';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=18.80..18.81 rows=1 width=8) (actual\ntime=33.530..36.118 rows=1 loops=1)\n Buffers: shared read=4425\n -> Gather (cost=18.78..18.79 rows=2 width=8) (actual\ntime=33.456..36.102 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared read=4425\n -> Partial Aggregate (cost=18.78..18.79 rows=1 width=8)\n(actual time=20.389..20.390 rows=1 loops=3)\n Buffers: shared read=4425\n -> Parallel Tid Range Scan on a (cost=0.01..16.19\nrows=1035 width=0) (actual time=9.375..20.349 rows=829 loops=3)\n TID Cond: ((ctid >= '(0,1)'::tid) AND (ctid <\n'(11,0)'::tid))\n Buffers: shared read=4425 <---- this is all\npages in the table instead of 11 pages.\n\nWith that code still commented out, the non-parallel version still\nwon't read all pages due to the setscanlimits being obeyed.\n\npostgres=# set max_parallel_workers_per_gather=0;\nSET\npostgres=# explain (analyze, buffers) select count(*) from a where\nctid >= '(0,1)' and ctid < '(11,0)';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------\n Aggregate (cost=45.07..45.08 rows=1 width=8) (actual\ntime=0.302..0.302 rows=1 loops=1)\n Buffers: shared hit=11\n -> Tid Range Scan on a (cost=0.01..38.86 rows=2485 width=0)\n(actual time=0.019..0.188 rows=2486 loops=1)\n TID Cond: ((ctid >= '(0,1)'::tid) AND (ctid < '(11,0)'::tid))\n Buffers: shared hit=11\n\n\nIf I put that code back in, how many pages are read depends on the\nnumber of parallel workers as workers will keep running with higher\npage numbers and heap_getnextslot_tidrange() will just (inefficiently)\nfilter those out.\n\nmax_parallel_workers_per_gather=2;\n -> Parallel Tid Range Scan on a (cost=0.01..16.19\nrows=1035 width=0) (actual time=0.191..0.310 rows=829 loops=3)\n TID Cond: ((ctid >= '(0,1)'::tid) AND (ctid <\n'(11,0)'::tid))\n Buffers: shared read=17\n\nmax_parallel_workers_per_gather=3;\n -> Parallel Tid Range Scan on a (cost=0.01..12.54\nrows=802 width=0) (actual time=0.012..0.114 rows=622 loops=4)\n TID Cond: ((ctid >= '(0,1)'::tid) AND (ctid <\n'(11,0)'::tid))\n Buffers: shared hit=19\n\nmax_parallel_workers_per_gather=4;\n -> Parallel Tid Range Scan on a (cost=0.01..9.72\nrows=621 width=0) (actual time=0.014..0.135 rows=497 loops=5)\n TID Cond: ((ctid >= '(0,1)'::tid) AND (ctid <\n'(11,0)'::tid))\n Buffers: shared hit=21\n\nTo fix this you need to make table_block_parallelscan_nextpage obey\nthe limits imposed by heap_setscanlimits().\n\nThe equivalent code in the non-parallel version is in\nheapgettup_advance_block().\n\n/* check if the limit imposed by heap_setscanlimits() is met */\nif (scan->rs_numblocks != InvalidBlockNumber)\n{\n if (--scan->rs_numblocks == 0)\n return InvalidBlockNumber;\n}\n\nI've not studied exactly how you'd get the rs_numblock information\ndown to the parallel scan descriptor. 
But when you figure that out,\njust remember that you can't do the --scan->rs_numblocks from\ntable_block_parallelscan_nextpage() as that's not parallel safe. You\nmight be able to add an or condition to: \"if (nallocated >=\npbscan->phs_nblocks)\" to make it \"if (nallocated >=\npbscan->phs_nblocks || nallocated >= pbscan->phs_numblocks)\",\nalthough the field names don't seem very intuitive there. It would be\nnicer if the HeapScanDesc field was called rs_blocklimit rather than\nrs_numblocks. It's not for this patch to go messing with that,\nhowever.\n\nDavid\n\n\n", "msg_date": "Sat, 4 May 2024 13:14:16 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support tid range scan in parallel?" }, { "msg_contents": "Thank you very much for the test and review. Greatly appreciated!\n \n> This now calls heap_setscanlimits() for the parallel version, it's\n> just that table_block_parallelscan_nextpage() does nothing to obey\n> those limits.\n\nyes, you are absolutely right. Though heap_setscanlimits() is now called by\nparallel tid range scan, table_block_parallelscan_nextpage() does nothing\nto obey these limits, resulting in more blocks being inefficiently filtered out\nby the optimization code you mentioned in heap_getnextslot_tidrange().\n\n> I've not studied exactly how you'd get the rs_numblock information\n> down to the parallel scan descriptor. But when you figure that out,\n> just remember that you can't do the --scan->rs_numblocks from\n> table_block_parallelscan_nextpage() as that's not parallel safe. You\n> might be able to add an or condition to: \"if (nallocated >=\n> pbscan->phs_nblocks)\" to make it \"if (nallocated >=\n> pbscan->phs_nblocks || nallocated >= pbscan->phs_numblocks)\",\n> although the field names don't seem very intuitive there. It would be\n> nicer if the HeapScanDesc field was called rs_blocklimit rather than\n> rs_numblocks. It's not for this patch to go messing with that,\n> however.\n\nrs_numblock was not passed down to the parallel scan context and\ntable_block_parallelscan_nextpage() did not seem to have a logic to limit\nthe block scan range set by heap_setscanlimits() in parallel scan. Also, I\nnoticed that the rs_startblock was also not passed to the parallel scan\ncontext, which causes the parallel scan always start from 0 even when a\nlower ctid bound is specified.\n\nso I added a logic in heap_set_tidrange() to pass these 2 values to parallel\nscan descriptor as \"phs_startblock\" and \"phs_numblock\". These will be\navailable in table_block_parallelscan_nextpage() in parallel scan. \n\nI followed your recommendation and modified the condition to:\n\nif (nallocated >= pbscan->phs_nblocks || (pbscan->phs_numblock != 0 &&\n nallocated >= pbscan->phs_numblock))\n\nso that the parallel tid range scan will stop when the upper scan limit is\nreached. With these changes, I see that the number of buffer reads is\nconsistent between single and parallel ctid range scans. 
The v3 patch is\nattached.\n\n\npostgres=# explain (analyze, buffers) select count(*) from test where ctid >= '(0,1)' and ctid < '(11,0)';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Aggregate (cost=39.43..39.44 rows=1 width=8) (actual time=1.007..1.008 rows=1 loops=1)\n Buffers: shared read=11\n -> Tid Range Scan on test (cost=0.01..34.35 rows=2034 width=0) (actual time=0.076..0.639 rows=2035 loops=1)\n TID Cond: ((ctid >= '(0,1)'::tid) AND (ctid < '(11,0)'::tid))\n Buffers: shared read=11\n\npostgres=# set max_parallel_workers_per_gather=2;\nSET\npostgres=# explain (analyze, buffers) select count(*) from test where ctid >= '(0,1)' and ctid < '(11,0)';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=16.45..16.46 rows=1 width=8) (actual time=14.329..16.840 rows=1 loops=1)\n Buffers: shared hit=11\n -> Gather (cost=16.43..16.44 rows=2 width=8) (actual time=3.197..16.814 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=11\n -> Partial Aggregate (cost=16.43..16.44 rows=1 width=8) (actual time=0.705..0.706 rows=1 loops=3)\n Buffers: shared hit=11\n -> Parallel Tid Range Scan on test (cost=0.01..14.31 rows=848 width=0) (actual time=0.022..0.423 rows=678 loops=3)\n TID Cond: ((ctid >= '(0,1)'::tid) AND (ctid < '(11,0)'::tid))\n Buffers: shared hit=11\n\npostgres=# set max_parallel_workers_per_gather=3;\nSET\npostgres=# explain (analyze, buffers) select count(*) from test where ctid >= '(0,1)' and ctid < '(11,0)';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=12.74..12.75 rows=1 width=8) (actual time=16.793..19.053 rows=1 loops=1)\n Buffers: shared hit=11\n -> Gather (cost=12.72..12.73 rows=3 width=8) (actual time=2.827..19.012 rows=4 loops=1)\n Workers Planned: 3\n Workers Launched: 3\n Buffers: shared hit=11\n -> Partial Aggregate (cost=12.72..12.73 rows=1 width=8) (actual time=0.563..0.565 rows=1 loops=4)\n Buffers: shared hit=11\n -> Parallel Tid Range Scan on test (cost=0.01..11.08 rows=656 width=0) (actual time=0.018..0.338 rows=509 loops=4)\n TID Cond: ((ctid >= '(0,1)'::tid) AND (ctid < '(11,0)'::tid))\n Buffers: shared hit=11\n\n\nthank you!\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\[email protected]\nwww.highgo.ca", "msg_date": "Wed, 08 May 2024 15:23:55 -0700", "msg_from": "Cary Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support tid range scan in parallel?" }, { "msg_contents": "On Thu, 9 May 2024 at 10:23, Cary Huang <[email protected]> wrote:\n> The v3 patch is attached.\n\nI've not looked at the patch, but please add it to the July CF. I'll\ntry and look in more detail then.\n\nDavid\n\n\n", "msg_date": "Thu, 9 May 2024 15:16:30 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support tid range scan in parallel?" }, { "msg_contents": " > I've not looked at the patch, but please add it to the July CF. I'll\n > try and look in more detail then.\n\nThanks David, I have added this patch on July commitfest under the\nserver feature category. \n\nI understand that the regression tests for parallel ctid range scan is a\nbit lacking now. 
It only has a few EXPLAIN clauses to ensure parallel \nworkers are used when tid ranges are specified. They are added as\npart of select_parallel.sql test. I am not sure if it is more appropriate\nto have them as part of tidrangescan.sql test instead. So basically\nre-run the same test cases in tidrangescan.sql but in parallel? \n\nthank you\n\nCary\n\n\n\n\n\n\n\n", "msg_date": "Thu, 09 May 2024 10:16:32 -0700", "msg_from": "Cary Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support tid range scan in parallel?" }, { "msg_contents": "On Fri, 10 May 2024 at 05:16, Cary Huang <[email protected]> wrote:\n> I understand that the regression tests for parallel ctid range scan is a\n> bit lacking now. It only has a few EXPLAIN clauses to ensure parallel\n> workers are used when tid ranges are specified. They are added as\n> part of select_parallel.sql test. I am not sure if it is more appropriate\n> to have them as part of tidrangescan.sql test instead. So basically\n> re-run the same test cases in tidrangescan.sql but in parallel?\n\nI think tidrangescan.sql is a more suitable location than\nselect_parallel.sql I don't think you need to repeat all the tests as\nmany of them are testing the tid qual processing which is the same\ncode as it is in the parallel version.\n\nYou should add a test that creates a table with a very low fillfactor,\nlow enough so only 1 tuple can fit on each page and insert a few dozen\ntuples. The test would do SELECT COUNT(*) to ensure you find the\ncorrect subset of tuples. You'd maybe want MIN(ctid) and MAX(ctid) in\nthere too for extra coverage to ensure that the correct tuples are\nbeing counted. Just make sure and EXPLAIN (COSTS OFF) the query first\nin the test to ensure that it's always testing the plan you're\nexpecting to test.\n\nDavid\n\n\n", "msg_date": "Sat, 11 May 2024 12:35:15 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support tid range scan in parallel?" }, { "msg_contents": "> You should add a test that creates a table with a very low fillfactor,\n> low enough so only 1 tuple can fit on each page and insert a few dozen\n> tuples. The test would do SELECT COUNT(*) to ensure you find the\n> correct subset of tuples. You'd maybe want MIN(ctid) and MAX(ctid) in\n> there too for extra coverage to ensure that the correct tuples are\n> being counted. Just make sure and EXPLAIN (COSTS OFF) the query first\n> in the test to ensure that it's always testing the plan you're\n> expecting to test.\n\nthank you for the test suggestion. I moved the regress tests from select_parallel\nto tidrangescan instead. I follow the existing test table creation in tidrangescan\nwith the lowest fillfactor of 10, I am able to get consistent 5 tuples per page\ninstead of 1. It should be okay as long as it is consistently 5 tuples per page so\nthe tuple count results from parallel tests would be in multiples of 5.\n\nThe attached v4 patch includes the improved regression tests.\n\nThank you very much!\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\[email protected]\nwww.highgo.ca", "msg_date": "Tue, 14 May 2024 14:33:43 -0700", "msg_from": "Cary Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support tid range scan in parallel?" 
}, { "msg_contents": "Hi Cary,\n\nOn Wed, May 15, 2024 at 5:33 AM Cary Huang <[email protected]> wrote:\n>\n> > You should add a test that creates a table with a very low fillfactor,\n> > low enough so only 1 tuple can fit on each page and insert a few dozen\n> > tuples. The test would do SELECT COUNT(*) to ensure you find the\n> > correct subset of tuples. You'd maybe want MIN(ctid) and MAX(ctid) in\n> > there too for extra coverage to ensure that the correct tuples are\n> > being counted. Just make sure and EXPLAIN (COSTS OFF) the query first\n> > in the test to ensure that it's always testing the plan you're\n> > expecting to test.\n>\n> thank you for the test suggestion. I moved the regress tests from select_parallel\n> to tidrangescan instead. I follow the existing test table creation in tidrangescan\n> with the lowest fillfactor of 10, I am able to get consistent 5 tuples per page\n> instead of 1. It should be okay as long as it is consistently 5 tuples per page so\n> the tuple count results from parallel tests would be in multiples of 5.\n>\n> The attached v4 patch includes the improved regression tests.\n>\n> Thank you very much!\n>\n> Cary Huang\n> -------------\n> HighGo Software Inc. (Canada)\n> [email protected]\n> www.highgo.ca\n>\n\n+++ b/src/backend/access/heap/heapam.c\n@@ -307,6 +307,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool\nkeep_startblock)\n* results for a non-MVCC snapshot, the caller must hold some higher-level\n* lock that ensures the interesting tuple(s) won't change.)\n*/\n+\n\nI see no reason why you add a blank line here, is it a typo?\n\n+/* ----------------------------------------------------------------\n+ * ExecSeqScanInitializeWorker\n+ *\n+ * Copy relevant information from TOC into planstate.\n+ * ----------------------------------------------------------------\n+ */\n+void\n+ExecTidRangeScanInitializeWorker(TidRangeScanState *node,\n+ ParallelWorkerContext *pwcxt)\n+{\n+ ParallelTableScanDesc pscan;\n\nFunction name in the comment is not consistent.\n\n@@ -81,6 +81,7 @@ typedef struct ParallelBlockTableScanDescData\nBlockNumber phs_startblock; /* starting block number */\npg_atomic_uint64 phs_nallocated; /* number of blocks allocated to\n* workers so far. */\n+ BlockNumber phs_numblock; /* max number of blocks to scan */\n} ParallelBlockTableScanDescData;\n\nCan this be reorganized by putting phs_numblock after phs_startblock?\n\n\n>\n>\n>\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sun, 11 Aug 2024 15:03:28 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support tid range scan in parallel?" 
}, { "msg_contents": "This is a good idea to extend parallelism in postgres.\nI went through this patch, and here are a few review comments,\n\n+ Size pscan_len; /* size of parallel tid range scan descriptor */\n\nThe other name for this var could be tidrs_PscanLen, following the pattern\nin indexScanState and IndexOnlyScanState.\nAlso add it and its description in the comment above the struct.\n\n/* ----------------------------------------------------------------\n * ExecTidRangeScanInitializeDSM\n *\n * Set up a parallel heap scan descriptor.\n * ----------------------------------------------------------------\n */\nThis comment doesn't seem right, please correct it to say for Tid range\nscan descriptor.\n\n+\n+ sscan = relation->rd_tableam->scan_begin(relation, snapshot, 0, NULL,\n+ pscan, flags);\n+\n+ return sscan;\n\nI do not see any requirement of using this sscan var.\n\n- if (nallocated >= pbscan->phs_nblocks)\n+ if (nallocated >= pbscan->phs_nblocks || (pbscan->phs_numblock != 0 &&\n+ nallocated >= pbscan->phs_numblock))\n page = InvalidBlockNumber; /* all blocks have been allocated */\n\nPlease add a comment for the reason for this change. As far as I\nunderstood, this is only for the case of TIDRangeScan so it requires an\nexplanation for the case.\n\n\nOn Sun, 11 Aug 2024 at 09:03, Junwang Zhao <[email protected]> wrote:\n\n> Hi Cary,\n>\n> On Wed, May 15, 2024 at 5:33 AM Cary Huang <[email protected]> wrote:\n> >\n> > > You should add a test that creates a table with a very low fillfactor,\n> > > low enough so only 1 tuple can fit on each page and insert a few dozen\n> > > tuples. The test would do SELECT COUNT(*) to ensure you find the\n> > > correct subset of tuples. You'd maybe want MIN(ctid) and MAX(ctid) in\n> > > there too for extra coverage to ensure that the correct tuples are\n> > > being counted. Just make sure and EXPLAIN (COSTS OFF) the query first\n> > > in the test to ensure that it's always testing the plan you're\n> > > expecting to test.\n> >\n> > thank you for the test suggestion. I moved the regress tests from\n> select_parallel\n> > to tidrangescan instead. I follow the existing test table creation in\n> tidrangescan\n> > with the lowest fillfactor of 10, I am able to get consistent 5 tuples\n> per page\n> > instead of 1. It should be okay as long as it is consistently 5 tuples\n> per page so\n> > the tuple count results from parallel tests would be in multiples of 5.\n> >\n> > The attached v4 patch includes the improved regression tests.\n> >\n> > Thank you very much!\n> >\n> > Cary Huang\n> > -------------\n> > HighGo Software Inc. 
(Canada)\n> > [email protected]\n> > www.highgo.ca\n> >\n>\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -307,6 +307,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool\n> keep_startblock)\n> * results for a non-MVCC snapshot, the caller must hold some higher-level\n> * lock that ensures the interesting tuple(s) won't change.)\n> */\n> +\n>\n> I see no reason why you add a blank line here, is it a typo?\n>\n> +/* ----------------------------------------------------------------\n> + * ExecSeqScanInitializeWorker\n> + *\n> + * Copy relevant information from TOC into planstate.\n> + * ----------------------------------------------------------------\n> + */\n> +void\n> +ExecTidRangeScanInitializeWorker(TidRangeScanState *node,\n> + ParallelWorkerContext *pwcxt)\n> +{\n> + ParallelTableScanDesc pscan;\n>\n> Function name in the comment is not consistent.\n>\n> @@ -81,6 +81,7 @@ typedef struct ParallelBlockTableScanDescData\n> BlockNumber phs_startblock; /* starting block number */\n> pg_atomic_uint64 phs_nallocated; /* number of blocks allocated to\n> * workers so far. */\n> + BlockNumber phs_numblock; /* max number of blocks to scan */\n> } ParallelBlockTableScanDescData;\n>\n> Can this be reorganized by putting phs_numblock after phs_startblock?\n>\n>\n> >\n> >\n> >\n> >\n> >\n>\n>\n> --\n> Regards\n> Junwang Zhao\n>\n>\n>\n\n-- \nRegards,\nRafia Sabih\n\nThis is a good idea to extend parallelism in postgres.I went through this patch, and here are a few review comments,+\tSize\t\tpscan_len;\t\t/* size of parallel tid range scan descriptor */The other name for this var could be tidrs_PscanLen, following the pattern in indexScanState and IndexOnlyScanState.Also add it and its description in the comment above the struct./* ---------------------------------------------------------------- *\t\tExecTidRangeScanInitializeDSM * *\t\tSet up a parallel heap scan descriptor. * ---------------------------------------------------------------- */This comment doesn't seem right, please correct it to say for Tid range scan descriptor.++\tsscan = relation->rd_tableam->scan_begin(relation, snapshot, 0, NULL,+\t\t\t\t\t\t\t\t\t\t\t pscan, flags);++\treturn sscan;I do not  see any requirement of using this sscan var.-\tif (nallocated >= pbscan->phs_nblocks)+\tif (nallocated >= pbscan->phs_nblocks || (pbscan->phs_numblock != 0 &&+\t\tnallocated >= pbscan->phs_numblock)) \t\tpage = InvalidBlockNumber;\t/* all blocks have been allocated */Please add a comment for the reason for this change. As far as I understood, this is only for the case of TIDRangeScan so it requires an explanation for the case.On Sun, 11 Aug 2024 at 09:03, Junwang Zhao <[email protected]> wrote:Hi Cary,\n\nOn Wed, May 15, 2024 at 5:33 AM Cary Huang <[email protected]> wrote:\n>\n> > You should add a test that creates a table with a very low fillfactor,\n> > low enough so only 1 tuple can fit on each page and insert a few dozen\n> > tuples. The test would do SELECT COUNT(*) to ensure you find the\n> > correct subset of tuples. You'd maybe want MIN(ctid) and MAX(ctid) in\n> > there too for extra coverage to ensure that the correct tuples are\n> > being counted.  Just make sure and EXPLAIN (COSTS OFF) the query first\n> > in the test to ensure that it's always testing the plan you're\n> > expecting to test.\n>\n> thank you for the test suggestion. I moved the regress tests from select_parallel\n> to tidrangescan instead. 
I follow the existing test table creation in tidrangescan\n> with the lowest fillfactor of 10, I am able to get consistent 5 tuples per page\n> instead of 1. It should be okay as long as it is consistently 5 tuples per page so\n> the tuple count results from parallel tests would be in multiples of 5.\n>\n> The attached v4 patch includes the improved regression tests.\n>\n> Thank you very much!\n>\n> Cary Huang\n> -------------\n> HighGo Software Inc. (Canada)\n> [email protected]\n> www.highgo.ca\n>\n\n+++ b/src/backend/access/heap/heapam.c\n@@ -307,6 +307,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool\nkeep_startblock)\n* results for a non-MVCC snapshot, the caller must hold some higher-level\n* lock that ensures the interesting tuple(s) won't change.)\n*/\n+\n\nI see no reason why you add a blank line here, is it a typo?\n\n+/* ----------------------------------------------------------------\n+ * ExecSeqScanInitializeWorker\n+ *\n+ * Copy relevant information from TOC into planstate.\n+ * ----------------------------------------------------------------\n+ */\n+void\n+ExecTidRangeScanInitializeWorker(TidRangeScanState *node,\n+ ParallelWorkerContext *pwcxt)\n+{\n+ ParallelTableScanDesc pscan;\n\nFunction name in the comment is not consistent.\n\n@@ -81,6 +81,7 @@ typedef struct ParallelBlockTableScanDescData\nBlockNumber phs_startblock; /* starting block number */\npg_atomic_uint64 phs_nallocated; /* number of blocks allocated to\n* workers so far. */\n+ BlockNumber phs_numblock; /* max number of blocks to scan */\n} ParallelBlockTableScanDescData;\n\nCan this be reorganized by putting phs_numblock after phs_startblock?\n\n\n>\n>\n>\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n-- Regards,Rafia Sabih", "msg_date": "Thu, 22 Aug 2024 13:29:07 +0200", "msg_from": "Rafia Sabih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support tid range scan in parallel?" } ]
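A short illustration of the allocation cutoff reviewed in the thread above. Only the two-line condition comes from the quoted patch; the surrounding lines are a simplified guess at the block-allocation routine (assumed to be table_block_parallelscan_nextpage(), with the chunked-allocation logic elided and declarations omitted), and the comment is the kind of explanation the review asks for:

    nallocated = pg_atomic_fetch_add_u64(&pbscan->phs_nallocated, 1);

    /*
     * Stop handing out blocks once the scan runs off the end of the
     * relation or, for a parallel TID range scan, once the upper bound of
     * the requested ctid range is reached.  phs_numblock == 0 means "no
     * upper bound", i.e. an ordinary parallel scan over the whole table.
     */
    if (nallocated >= pbscan->phs_nblocks ||
        (pbscan->phs_numblock != 0 && nallocated >= pbscan->phs_numblock))
        page = InvalidBlockNumber;      /* all blocks have been allocated */
    else
        page = (nallocated + pbscan->phs_startblock) % pbscan->phs_nblocks;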
[ { "msg_contents": "Hello hackers,\n\nI noticed that the jsonpath date/time functions (.time() and timestamp(), et al.) don’t support some valid but special-case PostgreSQL values, notably `infinity`, `-infinity`, and, for times, '24:00:00`:\n\n❯ psql\npsql (17devel)\nType \"help\" for help.\n\ndavid=# select jsonb_path_query(to_jsonb('24:00:00'::time), '$.time()');\nERROR: time format is not recognized: \"24:00:00\"\n\ndavid=# select jsonb_path_query(to_jsonb('infinity'::timestamptz), '$.timestamp_tz()');\nERROR: timestamp_tz format is not recognized: \"infinity\"\n\nI assume this is because the standard doesn’t support these, or references JavaScript-only values or some such. Is that right?\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 29 Apr 2024 17:45:43 -0700", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "jsonpath Time and Timestamp Special Cases" }, { "msg_contents": "On Apr 29, 2024, at 20:45, David E. Wheeler <[email protected]> wrote:\n\n> I noticed that the jsonpath date/time functions (.time() and timestamp(), et al.) don’t support some valid but special-case PostgreSQL values, notably `infinity`, `-infinity`, and, for times, '24:00:00`:\n\nLooking at ECMA-404[1], “The JSON data interchange syntax”, it says, of numbers:\n\n> Numeric values that cannot be represented as sequences of digits (such as Infinity and NaN) are not\npermitted.\n\nSo it makes sense that the JSON path standard would be the same, since such JSON explicitly cannot represent these values as numbers.\n\nStill not sure about `24:00:00` as a time, though. I presume the jsonpath standard disallows it.\n\nBest,\n\nDavid\n\n[1]: https://ecma-international.org/wp-content/uploads/ECMA-404_2nd_edition_december_2017.pdf\n\n\n\n", "msg_date": "Thu, 20 Jun 2024 10:54:00 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath Time and Timestamp Special Cases" }, { "msg_contents": "On 06/20/24 10:54, David E. Wheeler wrote:\n> Still not sure about `24:00:00` as a time, though. I presume the jsonpath standard disallows it.\n\nIn 9075-2 9.46 \"SQL/JSON path language: syntax and semantics\", the behavior\nof the .time() and .time_tz() and similar item methods defers to the\nbehavior of SQL's CAST.\n\nFor example, .time(PS) (where PS is the optional precision spec) expects\nto be applied to a character string X from the JSON source, and its\nsuccess/failure and result are the same as for CAST(X AS TIME PS).\n\nThe fact that our CAST(X AS TIME) will succeed for '24:00:00' might be\nits own extension (or violation) of the spec (I haven't checked that),\nbut given that it does, we could debate whether it violates the jsonpath\nspec for our jsonpath .time() to behave the same way.\n\nThe same argument may also apply for ±infinity.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 20 Jun 2024 11:49:18 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath Time and Timestamp Special Cases" } ]
[ { "msg_contents": "Hackers,\n\nWhile testing pgAudit against PG17 I noticed the following behavioral \nchange:\n\nCREATE TABLE public.test\n(\n\tid INT,\n\tname TEXT,\n\tdescription TEXT,\n\tCONSTRAINT test_pkey PRIMARY KEY (id)\n);\nNOTICE: AUDIT: SESSION,23,1,DDL,CREATE TABLE,TABLE,public.test,\"CREATE \nTABLE public.test\n(\n\tid INT,\n\tname TEXT,\n\tdescription TEXT,\n\tCONSTRAINT test_pkey PRIMARY KEY (id)\n);\",<none>\nNOTICE: AUDIT: SESSION,23,1,DDL,ALTER TABLE,TABLE,public.test,\"CREATE \nTABLE public.test\n(\n\tid INT,\n\tname TEXT,\n\tdescription TEXT,\n\tCONSTRAINT test_pkey PRIMARY KEY (id)\n);\",<none>\nNOTICE: AUDIT: SESSION,23,1,DDL,CREATE \nINDEX,INDEX,public.test_pkey,\"CREATE TABLE public.test\n(\n\tid INT,\n\tname TEXT,\n\tdescription TEXT,\n\tCONSTRAINT test_pkey PRIMARY KEY (id)\n);\",<none>\n\nPrior to PG17, ProcessUtility was not being called for ALTER TABLE in \nthis context. The change probably makes some sense, since the table is \nbeing created then altered to add the primary key, but since it is a \nbehavioral change I wanted to bring it up.\n\nI attempted a bisect to identify the commit that caused this behavioral \nchange but pgAudit has been broken since at least November on master and \ndidn't get fixed again until bb766cde (April 8). Looking at that commit \nI'm a bit baffled by how it fixed the issue I was seeing, which was that \nan event trigger was firing in the wrong context in the pgAudit tests:\n\n CREATE TABLE tmp (id int, data text);\n+ERROR: not fired by event trigger manager\n\nThat error comes out of pgAudit itself:\n\n if (!CALLED_AS_EVENT_TRIGGER(fcinfo))\n elog(ERROR, \"not fired by event trigger manager\");\n\nSince bb766cde cannot be readily applied to older commits in master I'm \nunable to continue bisecting to find the ALTER TABLE behavioral change.\n\nThis seems to leave me with manual code inspection and there have been a \nlot of changes in this area, so I'm hoping somebody will know why this \nALTER TABLE change happened before I start digging into it.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 30 Apr 2024 11:56:31 +0930", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE TABLE/ProcessUtility hook behavior change" }, { "msg_contents": "On Tue, Apr 30, 2024 at 10:26 AM David Steele <[email protected]> wrote:\n>\n> Hackers,\n>\n> While testing pgAudit against PG17 I noticed the following behavioral\n> change:\n>\n> CREATE TABLE public.test\n> (\n> id INT,\n> name TEXT,\n> description TEXT,\n> CONSTRAINT test_pkey PRIMARY KEY (id)\n> );\n> NOTICE: AUDIT: SESSION,23,1,DDL,CREATE TABLE,TABLE,public.test,\"CREATE\n> TABLE public.test\n> (\n> id INT,\n> name TEXT,\n> description TEXT,\n> CONSTRAINT test_pkey PRIMARY KEY (id)\n> );\",<none>\n> NOTICE: AUDIT: SESSION,23,1,DDL,ALTER TABLE,TABLE,public.test,\"CREATE\n> TABLE public.test\n> (\n> id INT,\n> name TEXT,\n> description TEXT,\n> CONSTRAINT test_pkey PRIMARY KEY (id)\n> );\",<none>\n> NOTICE: AUDIT: SESSION,23,1,DDL,CREATE\n> INDEX,INDEX,public.test_pkey,\"CREATE TABLE public.test\n> (\n> id INT,\n> name TEXT,\n> description TEXT,\n> CONSTRAINT test_pkey PRIMARY KEY (id)\n> );\",<none>\n>\n> Prior to PG17, ProcessUtility was not being called for ALTER TABLE in\n> this context. 
The change probably makes some sense, since the table is\n> being created then altered to add the primary key, but since it is a\n> behavioral change I wanted to bring it up.\n>\n> I attempted a bisect to identify the commit that caused this behavioral\n> change but pgAudit has been broken since at least November on master and\n> didn't get fixed again until bb766cde (April 8). Looking at that commit\n> I'm a bit baffled by how it fixed the issue I was seeing, which was that\n> an event trigger was firing in the wrong context in the pgAudit tests:\n>\n> CREATE TABLE tmp (id int, data text);\n> +ERROR: not fired by event trigger manager\n>\n> That error comes out of pgAudit itself:\n>\n> if (!CALLED_AS_EVENT_TRIGGER(fcinfo))\n> elog(ERROR, \"not fired by event trigger manager\");\n>\n> Since bb766cde cannot be readily applied to older commits in master I'm\n> unable to continue bisecting to find the ALTER TABLE behavioral change.\n>\n> This seems to leave me with manual code inspection and there have been a\n> lot of changes in this area, so I'm hoping somebody will know why this\n> ALTER TABLE change happened before I start digging into it.\n\nprobably this commit:\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=0cd711271d42b0888d36f8eda50e1092c2fed4b3\n\nespecially:\n\n- * The parser will create AT_AttSetNotNull subcommands for\n- * columns of PRIMARY KEY indexes/constraints, but we need\n- * not do anything with them here, because the columns'\n- * NOT NULL marks will already have been propagated into\n- * the new table definition.\n+ * We see this subtype when a primary key is created\n+ * internally, for example when it is replaced with a new\n+ * constraint (say because one of the columns changes\n+ * type); in this case we need to reinstate attnotnull,\n+ * because it was removed because of the drop of the old\n+ * PK. 
Schedule this subcommand to an upcoming AT pass.\n*/\n+ cmd->subtype = AT_SetAttNotNull;\n+ tab->subcmds[AT_PASS_OLD_COL_ATTRS] =\n+ lappend(tab->subcmds[AT_PASS_OLD_COL_ATTRS], cmd);\n\n\n", "msg_date": "Tue, 30 Apr 2024 11:27:07 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE/ProcessUtility hook behavior change" }, { "msg_contents": "On 4/30/24 12:57, jian he wrote:\n> On Tue, Apr 30, 2024 at 10:26 AM David Steele <[email protected]> wrote:\n>>\n>> Since bb766cde cannot be readily applied to older commits in master I'm\n>> unable to continue bisecting to find the ALTER TABLE behavioral change.\n>>\n>> This seems to leave me with manual code inspection and there have been a\n>> lot of changes in this area, so I'm hoping somebody will know why this\n>> ALTER TABLE change happened before I start digging into it.\n> \n> probably this commit:\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=0cd711271d42b0888d36f8eda50e1092c2fed4b3\n\nHmm, I don't think so since the problem already existed in bb766cde, \nwhich was committed on Apr 8 vs Apr 19 for the above patch.\n\nProbably I'll need to figure out which exact part of bb766cde fixes the \nevent trigger issue so I can backpatch just that part and continue \nbisecting.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 30 Apr 2024 18:05:46 +0930", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE TABLE/ProcessUtility hook behavior change" }, { "msg_contents": "On Tue, Apr 30, 2024 at 4:35 PM David Steele <[email protected]> wrote:\n>\n> On 4/30/24 12:57, jian he wrote:\n> > On Tue, Apr 30, 2024 at 10:26 AM David Steele <[email protected]> wrote:\n> >>\n> >> Since bb766cde cannot be readily applied to older commits in master I'm\n> >> unable to continue bisecting to find the ALTER TABLE behavioral change.\n> >>\n> >> This seems to leave me with manual code inspection and there have been a\n> >> lot of changes in this area, so I'm hoping somebody will know why this\n> >> ALTER TABLE change happened before I start digging into it.\n> >\n\nI just tested these two commits.\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=3da13a6257bc08b1d402c83feb2a35450f988365\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=b0e96f311985bceba79825214f8e43f65afa653a\n\ni think it's related to the catalog not null commit.\nit will alter table and add not null constraint internally (equivalent\nto `alter table test alter id set not null`).\n\n\n", "msg_date": "Tue, 30 Apr 2024 19:15:35 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE/ProcessUtility hook behavior change" }, { "msg_contents": "On 4/30/24 21:15, jian he wrote:\n> On Tue, Apr 30, 2024 at 4:35 PM David Steele <[email protected]> wrote:\n>>\n>> On 4/30/24 12:57, jian he wrote:\n>>> On Tue, Apr 30, 2024 at 10:26 AM David Steele <[email protected]> wrote:\n>>>>\n>>>> Since bb766cde cannot be readily applied to older commits in master I'm\n>>>> unable to continue bisecting to find the ALTER TABLE behavioral change.\n>>>>\n>>>> This seems to leave me with manual code inspection and there have been a\n>>>> lot of changes in this area, so I'm hoping somebody will know why this\n>>>> ALTER TABLE change happened before I start digging into it.\n>>>\n> \n> I just tested these two commits.\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=3da13a6257bc08b1d402c83feb2a35450f988365\n> 
https://git.postgresql.org/cgit/postgresql.git/commit/?id=b0e96f311985bceba79825214f8e43f65afa653a\n> \n> i think it's related to the catalog not null commit.\n> it will alter table and add not null constraint internally (equivalent\n> to `alter table test alter id set not null`).\n\nYeah, looks like b0e96f31 was the culprit. Now that it has been reverted \nin 6f8bb7c1 the pgAudit output is back to expected.\n\nTo be clear, I'm not saying this (now reverted) behavior was a bug, but \nit was certainly a change in behavior so I thought it was worth pointing \nout.\n\nAnyway, I guess we'll see how this goes in the next dev cycle.\n\nThanks for helping me track that down!\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 16 May 2024 11:01:59 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE TABLE/ProcessUtility hook behavior change" } ]
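To make the observed behaviour change concrete, here is a minimal sketch (not pgAudit itself) of a ProcessUtility hook that would have logged the extra ALTER TABLE. The hook signature follows the PostgreSQL 17 API as I understand it, headers such as tcop/utility.h are assumed and omitted, and the remark about the subcommand context is an assumption rather than something stated in the thread:

static ProcessUtility_hook_type prev_ProcessUtility = NULL;

static void
demo_ProcessUtility(PlannedStmt *pstmt, const char *queryString,
                    bool readOnlyTree, ProcessUtilityContext context,
                    ParamListInfo params, QueryEnvironment *queryEnv,
                    DestReceiver *dest, QueryCompletion *qc)
{
    /*
     * With the (now reverted) catalog not-null work, the CREATE TABLE from
     * the example reached this hook twice: once as the top-level CREATE
     * TABLE, and once more as an internally generated ALTER TABLE,
     * presumably with context == PROCESS_UTILITY_SUBCOMMAND.
     */
    elog(NOTICE, "utility: %s (context %d)",
         GetCommandTagName(CreateCommandTag(pstmt->utilityStmt)),
         (int) context);

    if (prev_ProcessUtility)
        prev_ProcessUtility(pstmt, queryString, readOnlyTree, context,
                            params, queryEnv, dest, qc);
    else
        standard_ProcessUtility(pstmt, queryString, readOnlyTree, context,
                                params, queryEnv, dest, qc);
}

/* in _PG_init():  prev_ProcessUtility = ProcessUtility_hook;
 *                 ProcessUtility_hook = demo_ProcessUtility;  */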
[ { "msg_contents": "Hi all,\n\nWhen json_lex_string() hits certain types of invalid input, it calls\npg_encoding_mblen_bounded(), which assumes that its input is\nnull-terminated and calls strnlen(). But the JSON lexer is constructed\nwith an explicit string length, and we don't ensure that the string is\nnull-terminated in all cases, so we can walk off the end of the\nbuffer. This isn't really relevant on the server side, where you'd\nhave to get a superuser to help you break string encodings, but for\nclient-side usage on untrusted input (such as my OAuth patch) it would\nbe more important.\n\nAttached is a draft patch that explicitly checks against the\nend-of-string pointer and clamps the token_terminator to it. Note that\nthis removes the only caller of pg_encoding_mblen_bounded() and I'm\nnot sure what we should do with that function. It seems like a\nreasonable API, just not here.\n\nThe new test needs to record two versions of the error message, one\nfor invalid token and one for invalid escape sequence. This is\nbecause, for smaller chunk sizes, the partial-token logic in the\nincremental JSON parser skips the affected code entirely when it can't\nfind an ending double-quote.\n\nTangentially: Should we maybe rethink pieces of the json_lex_string\nerror handling? For example, do we really want to echo an incomplete\nmultibyte sequence once we know it's bad? It also looks like there are\nplaces where the FAIL_AT_CHAR_END macro is called after the `s`\npointer has already advanced past the code point of interest. I'm not\nsure if that's intentional.\n\nThanks,\n--Jacob", "msg_date": "Tue, 30 Apr 2024 10:39:04 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Tue, Apr 30, 2024 at 10:39:04AM -0700, Jacob Champion wrote:\n> When json_lex_string() hits certain types of invalid input, it calls\n> pg_encoding_mblen_bounded(), which assumes that its input is\n> null-terminated and calls strnlen(). But the JSON lexer is constructed\n> with an explicit string length, and we don't ensure that the string is\n> null-terminated in all cases, so we can walk off the end of the\n> buffer. This isn't really relevant on the server side, where you'd\n> have to get a superuser to help you break string encodings, but for\n> client-side usage on untrusted input (such as my OAuth patch) it would\n> be more important.\n\nNot sure to like much the fact that this advances token_terminator\nfirst. Wouldn't it be better to calculate pg_encoding_mblen() first,\nthen save token_terminator? I feel a bit uneasy about saving a value\nin token_terminator past the end of the string. That a nit in this\ncontext, still..\n\n> Attached is a draft patch that explicitly checks against the\n> end-of-string pointer and clamps the token_terminator to it. Note that\n> this removes the only caller of pg_encoding_mblen_bounded() and I'm\n> not sure what we should do with that function. It seems like a\n> reasonable API, just not here.\n\nHmm. Keeping it around as currently designed means that it could\ncause more harm than anything in the long term, even in the stable\nbranches if new code uses it. There is a risk of seeing this new code\nincorrectly using it again, even if its top comment tells that we rely\non the string being nul-terminated. A safer alternative would be to\nredesign it so as the length of the string is provided in input,\nremoving the dependency of strlen in its internals, perhaps. 
Anyway,\nwithout any callers of it, I'd be tempted to wipe it from HEAD and\ncall it a day.\n\n> The new test needs to record two versions of the error message, one\n> for invalid token and one for invalid escape sequence. This is\n> because, for smaller chunk sizes, the partial-token logic in the\n> incremental JSON parser skips the affected code entirely when it can't\n> find an ending double-quote.\n\nAh, that makes sense. That looks OK here. A comment around the test\nwould be adapted to document that, I guess. \n\n> Tangentially: Should we maybe rethink pieces of the json_lex_string\n> error handling? For example, do we really want to echo an incomplete\n> multibyte sequence once we know it's bad? It also looks like there are\n> places where the FAIL_AT_CHAR_END macro is called after the `s`\n> pointer has already advanced past the code point of interest. I'm not\n> sure if that's intentional.\n\nAdvancing the tracking pointer 's' before reporting an error related\nthe end of the string is a bad practive, IMO, and we should avoid\nthat. json_lex_string() does not offer a warm feeling regarding that\nwith escape characters, at least :/\n--\nMichael", "msg_date": "Wed, 1 May 2024 15:08:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Tue, Apr 30, 2024 at 11:09 PM Michael Paquier <[email protected]> wrote:\n> Not sure to like much the fact that this advances token_terminator\n> first. Wouldn't it be better to calculate pg_encoding_mblen() first,\n> then save token_terminator? I feel a bit uneasy about saving a value\n> in token_terminator past the end of the string. That a nit in this\n> context, still..\n\nv2 tries it that way; see what you think. Is the concern that someone\nmight add code later that escapes that macro early?\n\n> Hmm. Keeping it around as currently designed means that it could\n> cause more harm than anything in the long term, even in the stable\n> branches if new code uses it. There is a risk of seeing this new code\n> incorrectly using it again, even if its top comment tells that we rely\n> on the string being nul-terminated. A safer alternative would be to\n> redesign it so as the length of the string is provided in input,\n> removing the dependency of strlen in its internals, perhaps. Anyway,\n> without any callers of it, I'd be tempted to wipe it from HEAD and\n> call it a day.\n\nRemoved in v2.\n\n> > The new test needs to record two versions of the error message, one\n> > for invalid token and one for invalid escape sequence. This is\n> > because, for smaller chunk sizes, the partial-token logic in the\n> > incremental JSON parser skips the affected code entirely when it can't\n> > find an ending double-quote.\n>\n> Ah, that makes sense. That looks OK here. A comment around the test\n> would be adapted to document that, I guess.\n\nDone.\n\n> Advancing the tracking pointer 's' before reporting an error related\n> the end of the string is a bad practive, IMO, and we should avoid\n> that. json_lex_string() does not offer a warm feeling regarding that\n> with escape characters, at least :/\n\nYeah... I think some expansion of the json_errdetail test coverage is\nprobably in my future. 
:)\n\nThanks,\n--Jacob", "msg_date": "Wed, 1 May 2024 16:22:24 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Wed, May 01, 2024 at 04:22:24PM -0700, Jacob Champion wrote:\n> On Tue, Apr 30, 2024 at 11:09 PM Michael Paquier <[email protected]> wrote:\n>> Not sure to like much the fact that this advances token_terminator\n>> first. Wouldn't it be better to calculate pg_encoding_mblen() first,\n>> then save token_terminator? I feel a bit uneasy about saving a value\n>> in token_terminator past the end of the string. That a nit in this\n>> context, still..\n> \n> v2 tries it that way; see what you think. Is the concern that someone\n> might add code later that escapes that macro early?\n\nYeah, I am not sure if that's something that would really happen, but\nthat looks like a good practice to keep anyway to keep a clean stack\nat any time.\n\n>> Ah, that makes sense. That looks OK here. A comment around the test\n>> would be adapted to document that, I guess.\n> \n> Done.\n\nThat seems OK at quick glance. I don't have much room to do something\nabout this patch this week as an effect of Golden Week and the\nbuildfarm effect, but I should be able to get to it next week once the\nnext round of minor releases is tagged.\n\nAbout the fact that we may finish by printing unfinished UTF-8\nsequences, I'd be curious to hear your thoughts. Now, the information\nprovided about the partial byte sequences can be also useful for\ndebugging on top of having the error code, no?\n--\nMichael", "msg_date": "Thu, 2 May 2024 11:23:13 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Thu, May 02, 2024 at 11:23:13AM +0900, Michael Paquier wrote:\n> About the fact that we may finish by printing unfinished UTF-8\n> sequences, I'd be curious to hear your thoughts. Now, the information\n> provided about the partial byte sequences can be also useful for\n> debugging on top of having the error code, no?\n\nBy the way, as long as I have that in mind.. I am not sure that it is\nworth spending cycles in detecting the unfinished sequences and make\nthese printable. Wouldn't it be enough for more cases to adjust\ntoken_error() to truncate the byte sequences we cannot print?\n\nAnother thing that I think would be nice would be to calculate the\nlocation of what we're parsing on a given line, and provide that in\nthe error context. That would not be backpatchable as it requires a\nchange in JsonLexContext, unfortunately, but it would help in making\nmore sense with an error if the incomplete byte sequence is at the\nbeginning of a token or after an expected character.\n--\nMichael", "msg_date": "Thu, 2 May 2024 12:39:40 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Wed, May 1, 2024 at 8:40 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, May 02, 2024 at 11:23:13AM +0900, Michael Paquier wrote:\n> > About the fact that we may finish by printing unfinished UTF-8\n> > sequences, I'd be curious to hear your thoughts. Now, the information\n> > provided about the partial byte sequences can be also useful for\n> > debugging on top of having the error code, no?\n\nYes, but which information do you want? 
Do you want to know the bad\nbyte sequence, or see the glyph that corresponds to it (which is\nprobably �)? The glyph is better as long as it's complete; if it's a\nbad sequence, then maybe you'd prefer to know the particular byte, but\nthat assumes a lot of technical knowledge on the part of whoever's\nreading the message.\n\n> By the way, as long as I have that in mind.. I am not sure that it is\n> worth spending cycles in detecting the unfinished sequences and make\n> these printable. Wouldn't it be enough for more cases to adjust\n> token_error() to truncate the byte sequences we cannot print?\n\nMaybe. I'm beginning to wonder if I'm overthinking this particular\nproblem, and if we should just go ahead and print the bad sequence. At\nleast for the case of UTF-8 console encoding, replacement glyphs will\nshow up as needed.\n\nThere is the matter of a client that's not using UTF-8, though. Do we\ndeal with that correctly today? (I understand why it was done the way\nit was, at least on the server side, but it's still really weird to\nhave code that parses \"JSON\" that isn't actually Unicode.)\n\n> Another thing that I think would be nice would be to calculate the\n> location of what we're parsing on a given line, and provide that in\n> the error context. That would not be backpatchable as it requires a\n> change in JsonLexContext, unfortunately, but it would help in making\n> more sense with an error if the incomplete byte sequence is at the\n> beginning of a token or after an expected character.\n\n+1, at least that way you can skip directly to the broken spot during\na postmortem.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Thu, 2 May 2024 16:29:18 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On 30.04.24 19:39, Jacob Champion wrote:\n> Tangentially: Should we maybe rethink pieces of the json_lex_string\n> error handling? For example, do we really want to echo an incomplete\n> multibyte sequence once we know it's bad?\n\nI can't quite find the place you might be looking at in \njson_lex_string(), but for the general encoding conversion we have what \nwould appear to be the same behavior in report_invalid_encoding(), and \nwe go out of our way there to produce a verbose error message including \nthe invalid data.\n\n\n\n", "msg_date": "Fri, 3 May 2024 13:54:11 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Fri, May 3, 2024 at 4:54 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 30.04.24 19:39, Jacob Champion wrote:\n> > Tangentially: Should we maybe rethink pieces of the json_lex_string\n> > error handling? For example, do we really want to echo an incomplete\n> > multibyte sequence once we know it's bad?\n>\n> I can't quite find the place you might be looking at in\n> json_lex_string(),\n\n(json_lex_string() reports the beginning and end of the \"area of\ninterest\" via the JsonLexContext; it's json_errdetail() that turns\nthat into an error message.)\n\n> but for the general encoding conversion we have what\n> would appear to be the same behavior in report_invalid_encoding(), and\n> we go out of our way there to produce a verbose error message including\n> the invalid data.\n\nWe could port something like that to src/common. 
IMO that'd be more\nsuited for an actual conversion routine, though, as opposed to a\nparser that for the most part assumes you didn't lie about the input\nencoding and is just trying not to crash if you're wrong. Most of the\ntime, the parser just copies bytes between delimiters around and it's\nup to the caller to handle encodings... the exceptions to that are the\n\\uXXXX escapes and the error handling.\n\nOffhand, are all of our supported frontend encodings\nself-synchronizing? By that I mean, is it safe to print a partial byte\nsequence if the locale isn't UTF-8? (As I type this I'm starting at\nShift-JIS, and thinking \"probably not.\")\n\nActually -- hopefully this is not too much of a tangent -- that\nfurther crystallizes a vague unease about the API that I have. The\nJsonLexContext is initialized with something called the\n\"input_encoding\", but that encoding is necessarily also the output\nencoding for parsed string literals and error messages. For the server\nside that's fine, but frontend clients have the input_encoding locked\nto UTF-8, which seems like it might cause problems? Maybe I'm missing\ncode somewhere, but I don't see a conversion routine from\njson_errdetail() to the actual client/locale encoding. (And the parser\ndoes not support multibyte input_encodings that contain ASCII in trail\nbytes.)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Fri, 3 May 2024 07:05:38 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Fri, May 03, 2024 at 07:05:38AM -0700, Jacob Champion wrote:\n> On Fri, May 3, 2024 at 4:54 AM Peter Eisentraut <[email protected]> wrote:\n>> but for the general encoding conversion we have what\n>> would appear to be the same behavior in report_invalid_encoding(), and\n>> we go out of our way there to produce a verbose error message including\n>> the invalid data.\n\nI was looking for that a couple of days ago in the backend but could\nnot put my finger on it. Thanks.\n\n> We could port something like that to src/common. IMO that'd be more\n> suited for an actual conversion routine, though, as opposed to a\n> parser that for the most part assumes you didn't lie about the input\n> encoding and is just trying not to crash if you're wrong. Most of the\n> time, the parser just copies bytes between delimiters around and it's\n> up to the caller to handle encodings... the exceptions to that are the\n> \\uXXXX escapes and the error handling.\n\nHmm. That would still leave the backpatch issue at hand, which is\nkind of confusing to leave as it is. Would it be complicated to\ntruncate the entire byte sequence in the error message and just give\nup because we cannot do better if the input byte sequence is\nincomplete? We could still have some information depending on the\nstring given in input, which should be enough, hopefully. With the\nlocation pointing to the beginning of the sequence, even better.\n\n> Offhand, are all of our supported frontend encodings\n> self-synchronizing? By that I mean, is it safe to print a partial byte\n> sequence if the locale isn't UTF-8? (As I type this I'm starting at\n> Shift-JIS, and thinking \"probably not.\")\n>\n> Actually -- hopefully this is not too much of a tangent -- that\n> further crystallizes a vague unease about the API that I have. 
The\n> JsonLexContext is initialized with something called the\n> \"input_encoding\", but that encoding is necessarily also the output\n> encoding for parsed string literals and error messages. For the server\n> side that's fine, but frontend clients have the input_encoding locked\n> to UTF-8, which seems like it might cause problems? Maybe I'm missing\n> code somewhere, but I don't see a conversion routine from\n> json_errdetail() to the actual client/locale encoding. (And the parser\n> does not support multibyte input_encodings that contain ASCII in trail\n> bytes.)\n\nReferring to json_lex_string() that does UTF-8 -> ASCII -> give-up in\nits conversion for FRONTEND, I guess? Yep. This limitation looks\nlike a problem, especially if plugging that to libpq.\n--\nMichael", "msg_date": "Tue, 7 May 2024 12:42:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Mon, May 6, 2024 at 8:43 PM Michael Paquier <[email protected]> wrote:\n> On Fri, May 03, 2024 at 07:05:38AM -0700, Jacob Champion wrote:\n> > We could port something like that to src/common. IMO that'd be more\n> > suited for an actual conversion routine, though, as opposed to a\n> > parser that for the most part assumes you didn't lie about the input\n> > encoding and is just trying not to crash if you're wrong. Most of the\n> > time, the parser just copies bytes between delimiters around and it's\n> > up to the caller to handle encodings... the exceptions to that are the\n> > \\uXXXX escapes and the error handling.\n>\n> Hmm. That would still leave the backpatch issue at hand, which is\n> kind of confusing to leave as it is. Would it be complicated to\n> truncate the entire byte sequence in the error message and just give\n> up because we cannot do better if the input byte sequence is\n> incomplete?\n\nMaybe I've misunderstood, but isn't that what's being done in v2?\n\n> > Maybe I'm missing\n> > code somewhere, but I don't see a conversion routine from\n> > json_errdetail() to the actual client/locale encoding. (And the parser\n> > does not support multibyte input_encodings that contain ASCII in trail\n> > bytes.)\n>\n> Referring to json_lex_string() that does UTF-8 -> ASCII -> give-up in\n> its conversion for FRONTEND, I guess? Yep. This limitation looks\n> like a problem, especially if plugging that to libpq.\n\nOkay. How we deal with that will likely guide the \"optimal\" fix to\nerror reporting, I think...\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 7 May 2024 14:06:10 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Tue, May 07, 2024 at 02:06:10PM -0700, Jacob Champion wrote:\n> Maybe I've misunderstood, but isn't that what's being done in v2?\n\nSomething a bit different.. I was wondering if it could be possible\nto tweak this code to truncate the data in the generated error string\nso as the incomplete multi-byte sequence is entirely cut out, which\nwould come to setting token_terminator to \"s\" (last byte before the\nincomplete byte sequence) rather than \"term\" (last byte available,\neven if incomplete):\n#define FAIL_AT_CHAR_END(code) \\\ndo { \\\n char *term = s + pg_encoding_mblen(lex->input_encoding, s); \\\n lex->token_terminator = (term <= end) ? 
term : s; \\\n return code; \\\n} while (0)\n\nBut looking closer, I can see that in the JSON_INVALID_TOKEN case,\nwhen !tok_done, we set token_terminator to point to the end of the\ntoken, and that would include an incomplete byte sequence like in your\ncase. :/\n\nAt the end of the day, I think that I'm OK with your patch and avoid\nthe overread for now in the back-branches. This situation makes me\nuncomfortable and we should put more effort in printing error messages\nin a readable format, but that could always be tackled later as a\nseparate problem.. And I don't see something backpatchable at short\nsight for v16.\n\nThoughts and/or objections?\n--\nMichael", "msg_date": "Wed, 8 May 2024 14:30:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Tue, May 7, 2024 at 10:31 PM Michael Paquier <[email protected]> wrote:\n> But looking closer, I can see that in the JSON_INVALID_TOKEN case,\n> when !tok_done, we set token_terminator to point to the end of the\n> token, and that would include an incomplete byte sequence like in your\n> case. :/\n\nAh, I see what you're saying. Yeah, that approach would need some more\ninvasive changes.\n\n> This situation makes me\n> uncomfortable and we should put more effort in printing error messages\n> in a readable format, but that could always be tackled later as a\n> separate problem.. And I don't see something backpatchable at short\n> sight for v16.\n\nAgreed. Fortunately (or unfortunately?) I think the JSON\nclient-encoding work is now a prerequisite for OAuth in libpq, so\nhopefully some improvements can fall out of that work too.\n\n> Thoughts and/or objections?\n\nNone here.\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Wed, 8 May 2024 07:01:08 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Wed, May 08, 2024 at 07:01:08AM -0700, Jacob Champion wrote:\n> On Tue, May 7, 2024 at 10:31 PM Michael Paquier <[email protected]> wrote:\n>> But looking closer, I can see that in the JSON_INVALID_TOKEN case,\n>> when !tok_done, we set token_terminator to point to the end of the\n>> token, and that would include an incomplete byte sequence like in your\n>> case. :/\n> \n> Ah, I see what you're saying. Yeah, that approach would need some more\n> invasive changes.\n\nMy first feeling was actually to do that, and report the location in\nthe input string where we are seeing issues. All code paths playing\nwith token_terminator would need to track that.\n\n> Agreed. Fortunately (or unfortunately?) I think the JSON\n> client-encoding work is now a prerequisite for OAuth in libpq, so\n> hopefully some improvements can fall out of that work too.\n\nI'm afraid so. 
I don't quite see how this would be OK to tweak on\nstable branches, but all areas that could report error states with\npartial byte sequence contents would benefit from such a change.\n\n>> Thoughts and/or objections?\n> \n> None here.\n\nThis is a bit mitigated by the fact that d6607016c738 is recent, but\nthis is incorrect since v13 so backpatched down to that.\n--\nMichael", "msg_date": "Thu, 9 May 2024 13:27:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" }, { "msg_contents": "On Wed, May 8, 2024 at 9:27 PM Michael Paquier <[email protected]> wrote:\n> This is a bit mitigated by the fact that d6607016c738 is recent, but\n> this is incorrect since v13 so backpatched down to that.\n\nThank you!\n\n--Jacob\n\n\n", "msg_date": "Thu, 9 May 2024 10:21:00 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] json_lex_string: don't overread on bad UTF8" } ]
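For readers following along, this is the committed clamp from the FAIL_AT_CHAR_END() macro quoted earlier in the thread, shown in expanded form; the two statements are taken from the patch, only the comments are added here:

    /*
     * pg_encoding_mblen() reports how many bytes the character starting at
     * 's' claims to occupy.  It knows nothing about where the buffer ends,
     * so for a truncated multibyte sequence 'term' can point past 'end' --
     * which is exactly the overread being fixed (the old code used
     * pg_encoding_mblen_bounded() and trusted a NUL terminator).
     */
    char   *term = s + pg_encoding_mblen(lex->input_encoding, s);

    /*
     * Advance token_terminator over the whole character only when it fits
     * inside the buffer; otherwise clamp to the current byte so that later
     * error reporting (json_errdetail()) never walks past the end of a
     * non-NUL-terminated input.
     */
    lex->token_terminator = (term <= end) ? term : s;
    return code;        /* the JsonParseErrorType passed to the macro */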
[ { "msg_contents": "\nHi,\n\nI wanted to check my understanding of how control flows in a walsender doing logical replication. My understanding is that the (single) thread in each walsender process, in the simplest case, loops on:\n\n1. Pull a record out of the WAL.\n2. Pass it to the recorder buffer code, which,\n3. Sorts it out into the appropriate in-memory structure for that transaction (spilling to disk as required), and then continues with #1, or,\n4. If it's a commit record, it iteratively passes the transaction data one change at a time to,\n5. The logical decoding plugin, which returns the output format of that plugin, and then,\n6. The walsender sends the output from the plugin to the client. It cycles on passing the data to the plugin and sending it to the client until it runs out of changes in that transaction, and then resumes reading the WAL in #1.\n\nIn particular, I wanted to confirm that while it is pulling the reordered transaction and sending it to the plugin (and thence to the client), that particular walsender is *not* reading new WAL records or putting them in the reorder buffer.\n\nThe specific issue I'm trying to track down is an enormous pileup of spill files. This is in a non-supported version of PostgreSQL (v11), so an upgrade may fix it, but at the moment, I'm trying to find a cause and a mitigation.\n\n", "msg_date": "Tue, 30 Apr 2024 10:57:28 -0700", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": true, "msg_subject": "Control flow in logical replication walsender" }, { "msg_contents": "On Tue, Apr 30, 2024 at 11:28 PM Christophe Pettus <[email protected]> wrote:\n\n>\n> Hi,\n>\n> I wanted to check my understanding of how control flows in a walsender\n> doing logical replication. My understanding is that the (single) thread in\n> each walsender process, in the simplest case, loops on:\n>\n> 1. Pull a record out of the WAL.\n> 2. Pass it to the recorder buffer code, which,\n> 3. Sorts it out into the appropriate in-memory structure for that\n> transaction (spilling to disk as required), and then continues with #1, or,\n> 4. If it's a commit record, it iteratively passes the transaction data one\n> change at a time to,\n> 5. The logical decoding plugin, which returns the output format of that\n> plugin, and then,\n> 6. The walsender sends the output from the plugin to the client. It cycles\n> on passing the data to the plugin and sending it to the client until it\n> runs out of changes in that transaction, and then resumes reading the WAL\n> in #1.\n>\n>\nThis is correct barring some details on master.\n\n\n> In particular, I wanted to confirm that while it is pulling the reordered\n> transaction and sending it to the plugin (and thence to the client), that\n> particular walsender is *not* reading new WAL records or putting them in\n> the reorder buffer.\n>\n>\nThis is correct.\n\n\n> The specific issue I'm trying to track down is an enormous pileup of spill\n> files. This is in a non-supported version of PostgreSQL (v11), so an\n> upgrade may fix it, but at the moment, I'm trying to find a cause and a\n> mitigation.\n>\n>\nIs there a large transaction which is failing to be replicated repeatedly -\ntimeouts, crashes on upstream or downstream?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Tue, Apr 30, 2024 at 11:28 PM Christophe Pettus <[email protected]> wrote:\nHi,\n\nI wanted to check my understanding of how control flows in a walsender doing logical replication.  
My understanding is that the (single) thread in each walsender process, in the simplest case, loops on:\n\n1. Pull a record out of the WAL.\n2. Pass it to the recorder buffer code, which,\n3. Sorts it out into the appropriate in-memory structure for that transaction (spilling to disk as required), and then continues with #1, or,\n4. If it's a commit record, it iteratively passes the transaction data one change at a time to,\n5. The logical decoding plugin, which returns the output format of that plugin, and then,\n6. The walsender sends the output from the plugin to the client. It cycles on passing the data to the plugin and sending it to the client until it runs out of changes in that transaction, and then resumes reading the WAL in #1.\nThis is correct barring some details on master. \nIn particular, I wanted to confirm that while it is pulling the reordered transaction and sending it to the plugin (and thence to the client), that particular walsender is *not* reading new WAL records or putting them in the reorder buffer.\nThis is correct. \nThe specific issue I'm trying to track down is an enormous pileup of spill files.  This is in a non-supported version of PostgreSQL (v11), so an upgrade may fix it, but at the moment, I'm trying to find a cause and a mitigation.\n\nIs there a large transaction which is failing to be replicated repeatedly - timeouts, crashes on upstream or downstream?-- Best Wishes,Ashutosh Bapat", "msg_date": "Wed, 1 May 2024 14:48:36 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Control flow in logical replication walsender" }, { "msg_contents": "Thank you for the reply!\n\n> On May 1, 2024, at 02:18, Ashutosh Bapat <[email protected]> wrote:\n> Is there a large transaction which is failing to be replicated repeatedly - timeouts, crashes on upstream or downstream?\n\nAFAIK, no, although I am doing this somewhat by remote control (I don't have direct access to the failing system). This did bring up one other question, though:\n\nAre subtransactions written to their own individual reorder buffers (and thus potentially spill files), or are they appended to the topmost transaction's reorder buffer?\n\n", "msg_date": "Mon, 6 May 2024 11:29:39 -0700", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Control flow in logical replication walsender" }, { "msg_contents": "On Tue, May 7, 2024 at 12:00 AM Christophe Pettus <[email protected]> wrote:\n\n> Thank you for the reply!\n>\n> > On May 1, 2024, at 02:18, Ashutosh Bapat <[email protected]>\n> wrote:\n> > Is there a large transaction which is failing to be replicated\n> repeatedly - timeouts, crashes on upstream or downstream?\n>\n> AFAIK, no, although I am doing this somewhat by remote control (I don't\n> have direct access to the failing system). This did bring up one other\n> question, though:\n>\n> Are subtransactions written to their own individual reorder buffers (and\n> thus potentially spill files), or are they appended to the topmost\n> transaction's reorder buffer?\n\n\nIIRC, they have their own RB, but once they commit, they are transferred to\ntopmost transaction's RB. 
So they can spill files.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Tue, May 7, 2024 at 12:00 AM Christophe Pettus <[email protected]> wrote:Thank you for the reply!\n\n> On May 1, 2024, at 02:18, Ashutosh Bapat <[email protected]> wrote:\n> Is there a large transaction which is failing to be replicated repeatedly - timeouts, crashes on upstream or downstream?\n\nAFAIK, no, although I am doing this somewhat by remote control (I don't have direct access to the failing system).  This did bring up one other question, though:\n\nAre subtransactions written to their own individual reorder buffers (and thus potentially spill files), or are they appended to the topmost transaction's reorder buffer?IIRC, they have their own RB, but once they commit, they are transferred to topmost transaction's RB. So they can spill files.-- Best Wishes,Ashutosh Bapat", "msg_date": "Tue, 7 May 2024 09:50:41 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Control flow in logical replication walsender" }, { "msg_contents": "On Tue, Apr 30, 2024 at 11:28 PM Christophe Pettus <[email protected]> wrote:\n>\n> I wanted to check my understanding of how control flows in a walsender doing logical replication. My understanding is that the (single) thread in each walsender process, in the simplest case, loops on:\n>\n> 1. Pull a record out of the WAL.\n> 2. Pass it to the recorder buffer code, which,\n> 3. Sorts it out into the appropriate in-memory structure for that transaction (spilling to disk as required), and then continues with #1, or,\n> 4. If it's a commit record, it iteratively passes the transaction data one change at a time to,\n> 5. The logical decoding plugin, which returns the output format of that plugin, and then,\n> 6. The walsender sends the output from the plugin to the client. It cycles on passing the data to the plugin and sending it to the client until it runs out of changes in that transaction, and then resumes reading the WAL in #1.\n>\n> In particular, I wanted to confirm that while it is pulling the reordered transaction and sending it to the plugin (and thence to the client), that particular walsender is *not* reading new WAL records or putting them in the reorder buffer.\n>\n> The specific issue I'm trying to track down is an enormous pileup of spill files. This is in a non-supported version of PostgreSQL (v11), so an upgrade may fix it, but at the moment, I'm trying to find a cause and a mitigation.\n>\n\nIn PG-14, we have added a feature in logical replication to stream\nlong in-progress transactions which should reduce spilling to a good\nextent. You might want to try that.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 7 May 2024 17:32:26 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Control flow in logical replication walsender" }, { "msg_contents": "On Tue, May 7, 2024 at 9:51 AM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Tue, May 7, 2024 at 12:00 AM Christophe Pettus <[email protected]> wrote:\n>>\n>> Thank you for the reply!\n>>\n>> > On May 1, 2024, at 02:18, Ashutosh Bapat <[email protected]> wrote:\n>> > Is there a large transaction which is failing to be replicated repeatedly - timeouts, crashes on upstream or downstream?\n>>\n>> AFAIK, no, although I am doing this somewhat by remote control (I don't have direct access to the failing system). 
This did bring up one other question, though:\n>>\n>> Are subtransactions written to their own individual reorder buffers (and thus potentially spill files), or are they appended to the topmost transaction's reorder buffer?\n>\n>\n> IIRC, they have their own RB,\n>\n\nRight.\n\n>\n but once they commit, they are transferred to topmost transaction's RB.\n>\n\nI don't think they are transferred to topmost transaction's RB. We\nperform a k-way merge between transactions/subtransactions to retrieve\nthe changes. See comments: \"Support for efficiently iterating over a\ntransaction's and its subtransactions' changes...\" in reorderbuffer.c\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 7 May 2024 17:46:31 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Control flow in logical replication walsender" }, { "msg_contents": "\n\n> On May 7, 2024, at 05:02, Amit Kapila <[email protected]> wrote:\n> \n> \n> In PG-14, we have added a feature in logical replication to stream\n> long in-progress transactions which should reduce spilling to a good\n> extent. You might want to try that.\n\nThat's been my principal recommendation (since that would also allow controlling the amount of logical replication working memory). Thank you!\n\n", "msg_date": "Tue, 7 May 2024 11:12:19 -0700", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Control flow in logical replication walsender" } ]
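A rough sketch of the single-threaded flow confirmed in the thread above. The function names come from my reading of walsender.c and logical.c and should be treated as assumptions of this note rather than quotes from the discussion; declarations, keepalives, flush tracking and error handling are all omitted:

    for (;;)
    {
        /* step 1: pull the next record out of the WAL */
        record = XLogReadRecord(ctx->reader, &errm);

        if (record != NULL)
        {
            /*
             * Steps 2-6 all happen inside this call.  The change is sorted
             * into the reorder buffer; when a commit record is seen, the
             * whole transaction is replayed change-by-change through the
             * output plugin, and each plugin callback ends with the
             * walsender writing its output to the client.  Control does not
             * return here -- so no new WAL is read or buffered by this
             * walsender -- until that commit has been fully streamed.
             */
            LogicalDecodingProcessRecord(ctx, ctx->reader);
        }

        /* send pending data, process standby replies, maybe sleep */
    }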
[ { "msg_contents": "Hi,\n\nWhen a client of our JSON parser registers semantic action callbacks,\nthe parser will allocate copies of the lexed tokens to pass into those\ncallbacks. The intent is for the callbacks to always take ownership of\nthe copied memory, and if they don't want it then they can pfree() it.\n\nHowever, if parsing fails on the token before the callback is invoked,\nthat allocation is leaked. That's not a problem for the server side,\nor for clients that immediately exit on parse failure, but if the JSON\nparser gets added to libpq for OAuth, it will become more of a\nproblem.\n\n(I'm building a fuzzer to flush out these sorts of issues; not only\ndoes it complain loudly about the leaks, but the accumulation of\nleaked memory puts a hard limit on how long a fuzzer run can last. :D)\n\nAttached is a draft patch to illustrate what I mean, but it's\nincomplete: it only solves the problem for scalar values. We also make\na copy of object field names, which is much harder to fix, because we\nmake only a single copy and pass that to both the object_field_start\nand object_field_end callbacks. Consider:\n\n- If a client only implements object_field_start, it takes ownership\nof the field name when we call it. It can free the allocation if it\ndecides that the field is irrelevant.\n- The same is true for clients that only implement object_field_end.\n- If a client implements both callbacks, it takes ownership of the\nfield name when we call object_field_start. But irrelevant field names\ncan't be freed during that callback, because the same copy is going to\nbe passed to object_field_end. And object_field_end isn't guaranteed\nto be called, because parsing could fail for any number of reasons\nbetween now and then. So what code should be responsible for cleanup?\nThe parser doesn't know whether the callback already freed the pointer\nor kept a reference to it in semstate.\n\nAny thoughts on how we can improve this? I was thinking we could maybe\nmake two copies of the field name and track ownership individually?\n\nThanks,\n--Jacob", "msg_date": "Tue, 30 Apr 2024 14:29:27 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "pg_parse_json() should not leak token copies on failure" }, { "msg_contents": "On Tue, Apr 30, 2024 at 2:29 PM Jacob Champion\n<[email protected]> wrote:\n> Attached is a draft patch to illustrate what I mean, but it's\n> incomplete: it only solves the problem for scalar values.\n\n(Attached is a v2 of that patch; in solving a frontend leak I should\nprobably not introduce a backend segfault.)\n\n--Jacob", "msg_date": "Fri, 24 May 2024 08:43:01 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_parse_json() should not leak token copies on failure" }, { "msg_contents": "Hi,\r\n\r\nAt Fri, 24 May 2024 08:43:01 -0700, Jacob Champion <[email protected]> wrote in \r\n> On Tue, Apr 30, 2024 at 2:29 PM Jacob Champion\r\n> <[email protected]> wrote:\r\n> > Attached is a draft patch to illustrate what I mean, but it's\r\n> > incomplete: it only solves the problem for scalar values.\r\n> \r\n> (Attached is a v2 of that patch; in solving a frontend leak I should\r\n> probably not introduce a backend segfault.)\r\n\r\nI understand that the part enclosed in parentheses refers to the \"if\r\n(ptr) pfree(ptr)\" structure. I believe we rely on the behavior of\r\nfree(NULL), which safely does nothing. 
(I couldn't find the related\r\ndiscussion due to a timeout error on the ML search page.)\r\n\r\nAlthough I don't fully understand the entire parser code, it seems\r\nthat the owner transfer only occurs in JSON_TOKEN_STRING cases. That\r\nis, the memory pointed by scalar_val would become dangling in\r\nJSON_TOKEN_NUBMER cases. Even if this is not the case, the ownership\r\ntransition apperas quite callenging to follow.\r\n\r\nIt might be safer or clearer to pstrdup the token in jsonb_in_scalar()\r\nand avoid NULLifying scalar_val after calling callbacks, or to let\r\njsonb_in_sclar() NULLify the pointer.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Tue, 04 Jun 2024 13:32:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_parse_json() should not leak token copies on failure" }, { "msg_contents": "On Mon, Jun 3, 2024 at 9:32 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> Hi,\n\nThanks for the review!\n\n> I understand that the part enclosed in parentheses refers to the \"if\n> (ptr) pfree(ptr)\" structure. I believe we rely on the behavior of\n> free(NULL), which safely does nothing. (I couldn't find the related\n> discussion due to a timeout error on the ML search page.)\n\nFor the frontend, right. For the backend, pfree(NULL) causes a crash.\nI think [1] is a related discussion on the list, maybe the one you\nwere looking for?\n\n> Although I don't fully understand the entire parser code, it seems\n> that the owner transfer only occurs in JSON_TOKEN_STRING cases. That\n> is, the memory pointed by scalar_val would become dangling in\n> JSON_TOKEN_NUBMER cases.\n\nIt should happen for both cases, I think. I see that the\nJSON_SEM_SCALAR_CALL case passes scalar_val into the callback\nregardless of whether we're parsing a string or a number, and\nparse_scalar() does the same thing over in the recursive\nimplementation. Have I missed a code path?\n\n> Even if this is not the case, the ownership\n> transition apperas quite callenging to follow.\n\nI agree!\n\n> It might be safer or clearer to pstrdup the token in jsonb_in_scalar()\n> and avoid NULLifying scalar_val after calling callbacks, or to let\n> jsonb_in_sclar() NULLify the pointer.\n\nMaybe. jsonb_in_scalar isn't the only scalar callback implementation,\nthough. It might be a fairly cruel change for dependent extensions,\nsince there will be no compiler complaint.\n\nIf a compatibility break is acceptable, maybe we should make the API\nzero-copy for the recursive case, and just pass the length of the\nparsed token? Either way, we'd have to change the object field\ncallbacks, too, but maybe an API change makes even more sense there,\ngiven how convoluted it is right now.\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/flat/cf26e970-8e92-59f1-247a-aa265235075b%40enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Jun 2024 10:10:06 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_parse_json() should not leak token copies on failure" }, { "msg_contents": "On Tue, Jun 04, 2024 at 10:10:06AM -0700, Jacob Champion wrote:\n> On Mon, Jun 3, 2024 at 9:32 PM Kyotaro Horiguchi\n> <[email protected]> wrote:\n>> I understand that the part enclosed in parentheses refers to the \"if\n>> (ptr) pfree(ptr)\" structure. I believe we rely on the behavior of\n>> free(NULL), which safely does nothing. 
(I couldn't find the related\n>> discussion due to a timeout error on the ML search page.)\n> \n> For the frontend, right. For the backend, pfree(NULL) causes a crash.\n> I think [1] is a related discussion on the list, maybe the one you\n> were looking for?\n\nFYI, these choices relate also to 805a397db40b, e890ce7a4feb and\n098c703d308f.\n--\nMichael", "msg_date": "Wed, 5 Jun 2024 14:15:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_parse_json() should not leak token copies on failure" } ]
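A self-contained sketch of the ownership rule debated in this thread -- every name in it (parse_scalar, scalar_cb, print_cb) is invented for illustration, and it is not the json parser's actual code. The rule it demonstrates: once a semantic callback is invoked it owns the token copy, so the parser should free the copy itself only when the callback was never reached, and that free has to be guarded with a NULL check because the backend's pfree(), unlike free(), does not accept NULL.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* callback takes ownership of the token copy passed to it */
typedef int (*scalar_cb) (void *state, char *token);

static int
parse_scalar(const char *lexeme, int lex_failed, scalar_cb cb, void *state)
{
    char   *copy = strdup(lexeme);   /* stand-in for the palloc'd token copy */
    int     rc = 0;

    if (copy == NULL)
        return -1;

    if (lex_failed)
    {
        /* parsing failed before the callback ran: parser still owns the copy */
        if (copy != NULL)
            free(copy);
        return -1;
    }

    if (cb != NULL)
    {
        rc = cb(state, copy);        /* ownership transferred, even if cb errors */
        copy = NULL;
    }

    if (copy != NULL)                /* no callback registered: parser cleans up */
        free(copy);
    return rc;
}

static int
print_cb(void *state, char *token)
{
    (void) state;
    printf("scalar: %s\n", token);
    free(token);                     /* the callback decides when to release it */
    return 0;
}

int
main(void)
{
    parse_scalar("42", 0, print_cb, NULL);   /* callback owns and frees */
    parse_scalar("42", 1, print_cb, NULL);   /* failure path: parser frees */
    parse_scalar("42", 0, NULL, NULL);       /* no callback: parser frees */
    return 0;
}

The object-field case discussed above is harder precisely because this pattern breaks down when a single allocation is handed to two callbacks (object_field_start and object_field_end) that may or may not both run.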
[ { "msg_contents": "Hello, I've recently joined the list on a tip from one of the maintainers\nof jdbc-postgres as I wanted to discuss an issue we've run into and find\nout if the fix we've worked out is the right thing to do, or if there is\nactually a bug that needs to be fixed.\n\nThe full details can be found at github.com/pgjdbc/pgjdbc/discussions/3236\n- in summary, both jdbc-postgres and the psql cli seem to be affected by an\nissue validating the certificate chain up to a publicly trusted root\ncertificate that has cross-signed an intermediate certificate coming from a\nPostgres server in Azure, when using sslmode=verify-full and trying to rely\non the default path for sslrootcert.\n\nThe workaround we came up with is to add the original root cert, not the\nroot that cross-signed the intermediate, to root.crt, in order to avoid\nneeding to specify sslrootcert=<the default path>. This allows the full\nchain to be verified.\n\nI believe that either one should be able to be placed there without me\nneeding to explicitly specify sslrootcert=<the default path>, but if I use\nthe CA that cross-signed the intermediate cert, and don't specify\nsslrootcert=<some path, either default or not> the chain validation fails.\n\nThank you,\n\nThomas\n\nHello, I've recently joined the list on a tip from one of the maintainers of jdbc-postgres as I wanted to discuss an issue we've run into and find out if the fix we've worked out is the right thing to do, or if there is actually a bug that needs to be fixed.The full details can be found at github.com/pgjdbc/pgjdbc/discussions/3236 - in summary, both jdbc-postgres and the psql cli seem to be affected by an issue validating the certificate chain up to a publicly trusted root certificate that has cross-signed an intermediate certificate coming from a Postgres server in Azure, when using sslmode=verify-full and trying to rely on the default path for sslrootcert.The workaround we came up with is to add the original root cert, not the root that cross-signed the intermediate, to root.crt, in order to avoid needing to specify sslrootcert=<the default path>. This allows the full chain to be verified.I believe that either one should be able to be placed there without me needing to explicitly specify sslrootcert=<the default path>, but if I use the CA that cross-signed the intermediate cert, and don't specify sslrootcert=<some path, either default or not> the chain validation fails.Thank you,Thomas", "msg_date": "Tue, 30 Apr 2024 16:40:50 -0500", "msg_from": "Thomas Spear <[email protected]>", "msg_from_op": true, "msg_subject": "TLS certificate alternate trust paths issue in libpq - certificate\n chain validation failing" }, { "msg_contents": "On Tue, Apr 30, 2024 at 2:41 PM Thomas Spear <[email protected]> wrote:\n> The full details can be found at github.com/pgjdbc/pgjdbc/discussions/3236 - in summary, both jdbc-postgres and the psql cli seem to be affected by an issue validating the certificate chain up to a publicly trusted root certificate that has cross-signed an intermediate certificate coming from a Postgres server in Azure, when using sslmode=verify-full and trying to rely on the default path for sslrootcert.\n\nHopefully someone more familiar with the Azure cross-signing setup\nsees something obvious and chimes in, but in the meantime there are a\ncouple things I can think to ask:\n\n1. Are you sure that the server is actually putting the cross-signed\nintermediate in the chain it's serving to the client?\n\n2. What version of OpenSSL? 
There used to be validation bugs with\nalternate trust paths; hopefully you're not using any of those (I\nthink they're old as dirt), but it doesn't hurt to know.\n\n3. Can you provide a sample public certificate chain that should\nvalidate and doesn't?\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 30 Apr 2024 15:19:37 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TLS certificate alternate trust paths issue in libpq -\n certificate chain validation failing" }, { "msg_contents": "On Tue, Apr 30, 2024 at 5:19 PM Jacob Champion <\[email protected]> wrote:\n\nOn Tue, Apr 30, 2024 at 2:41 PM Thomas Spear <[email protected]> wrote:\n> The full details can be found at github.com/pgjdbc/pgjdbc/discussions/3236 -\nin summary, both jdbc-postgres and the psql cli seem to be affected by an\nissue validating the certificate chain up to a publicly trusted root\ncertificate that has cross-signed an intermediate certificate coming from a\nPostgres server in Azure, when using sslmode=verify-full and trying to rely\non the default path for sslrootcert.\n\nHopefully someone more familiar with the Azure cross-signing setup\nsees something obvious and chimes in, but in the meantime there are a\ncouple things I can think to ask:\n\n1. Are you sure that the server is actually putting the cross-signed\nintermediate in the chain it's serving to the client?\n\n\nHello Jacob, thanks for your reply.\n\nI can say I'm reasonably certain. I dumped out the certificates presented\nby the server using openssl, and the chain that gets output includes\n\"Microsoft Azure RSA TLS Issuing CA 08\".\nOn https://www.microsoft.com/pkiops/docs/repository.htm the page says that\nthat cert was cross-signed by the DigiCert RSA G2 root.\nThe postgres server appears to send the Microsoft root certificate instead\nof the DigiCert one, which should be fine. The server sends the \"Microsoft\nRSA Root Certificate Authority 2017\" root.\nAs far as I understand, a server sending a root certificate along with the\nintermediate is a big no-no, but that's a topic for a different thread and\naudience most likely. :)\n\n2. What version of OpenSSL? There used to be validation bugs with\nalternate trust paths; hopefully you're not using any of those (I\nthink they're old as dirt), but it doesn't hurt to know.\n\n\nThe openssl version in my Windows test system is 3.0.7. It's running\nAlmalinux 9 in WSL2, so openssl is from the package manager. The container\nimage I'm using has an old-as-dirt openssl 1.1.1k. It's built using a RHEL\nUBI8 image as the base layer, so it doesn't surprise me that the package\nmanager-provided version of openssl here is old as dirt, so I'll have to\nlook at making a build of 3.x for this container or maybe switching out the\nbase layer to ubuntu temporarily to test if we need to.\n\n\n3. Can you provide a sample public certificate chain that should\nvalidate and doesn't?\n\n\nI'll get back to you on this one. 
I'll have to check one of our public\ncloud postgres instances to see if I can reproduce the issue there in order\nto get a chain that I can share because the system where I'm testing is a\nlocked down jump host to our Azure GovCloud infrastructure, and I can't\ncopy anything out from it.\n\nThanks again\n\n--Thomas\n\nOn Tue, Apr 30, 2024 at 5:19 PM Jacob Champion <[email protected]> wrote:On Tue, Apr 30, 2024 at 2:41 PM Thomas Spear <[email protected]> wrote:> The full details can be found at github.com/pgjdbc/pgjdbc/discussions/3236 - in summary, both jdbc-postgres and the psql cli seem to be affected by an issue validating the certificate chain up to a publicly trusted root certificate that has cross-signed an intermediate certificate coming from a Postgres server in Azure, when using sslmode=verify-full and trying to rely on the default path for sslrootcert.Hopefully someone more familiar with the Azure cross-signing setupsees something obvious and chimes in, but in the meantime there are acouple things I can think to ask:1. Are you sure that the server is actually putting the cross-signedintermediate in the chain it's serving to the client?Hello Jacob, thanks for your reply.I can say I'm reasonably certain. I dumped out the certificates presented by the server using openssl, and the chain that gets output includes \"Microsoft Azure RSA TLS Issuing CA 08\".On https://www.microsoft.com/pkiops/docs/repository.htm the page says that that cert was cross-signed by the DigiCert RSA G2 root.The postgres server appears to send the Microsoft root certificate instead of the DigiCert one, which should be fine. The server sends the \"Microsoft RSA Root Certificate Authority 2017\" root.As far as I understand, a server sending a root certificate along with the intermediate is a big no-no, but that's a topic for a different thread and audience most likely. :)2. What version of OpenSSL? There used to be validation bugs withalternate trust paths; hopefully you're not using any of those (Ithink they're old as dirt), but it doesn't hurt to know.The openssl version in my Windows test system is 3.0.7. It's running Almalinux 9 in WSL2, so openssl is from the package manager. The container image I'm using has an old-as-dirt openssl 1.1.1k. It's built using a RHEL UBI8 image as the base layer, so it doesn't surprise me that the package manager-provided version of openssl here is old as dirt, so I'll have to look at making a build of 3.x for this container or maybe switching out the base layer to ubuntu temporarily to test if we need to. 3. Can you provide a sample public certificate chain that shouldvalidate and doesn't?I'll get back to you on this one. 
I'll have to check one of our public cloud postgres instances to see if I can reproduce the issue there in order to get a chain that I can share because the system where I'm testing is a locked down jump host to our Azure GovCloud infrastructure, and I can't copy anything out from it.Thanks again--Thomas", "msg_date": "Wed, 1 May 2024 08:48:06 -0500", "msg_from": "Thomas Spear <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TLS certificate alternate trust paths issue in libpq -\n certificate chain validation failing" }, { "msg_contents": "On Wed, May 1, 2024 at 6:48 AM Thomas Spear <[email protected]> wrote:\n> I dumped out the certificates presented by the server using openssl, and the chain that gets output includes \"Microsoft Azure RSA TLS Issuing CA 08\".\n> On https://www.microsoft.com/pkiops/docs/repository.htm the page says that that cert was cross-signed by the DigiCert RSA G2 root.\n\nIt's been a while since I've looked at cross-signing, but that may not\nbe enough information to prove that it's the \"correct\" version of the\nintermediate. You'd need to know the Issuer, not just the Subject, for\nall the intermediates that were given to the client. (It may not match\nthe one they have linked on their support page.)\n\n> The postgres server appears to send the Microsoft root certificate instead of the DigiCert one, which should be fine. The server sends the \"Microsoft RSA Root Certificate Authority 2017\" root.\n> As far as I understand, a server sending a root certificate along with the intermediate is a big no-no, but that's a topic for a different thread and audience most likely. :)\n\nTo me, that only makes me more suspicious that the chain the server is\nsending you may not be the chain you're expecting. Especially since\nyou mentioned on the other thread that the MS root is working and the\nDigiCert root is not.\n\n> The openssl version in my Windows test system is 3.0.7. It's running Almalinux 9 in WSL2, so openssl is from the package manager. The container image I'm using has an old-as-dirt openssl 1.1.1k.\n\nI'm not aware of any validation issues with 1.1.1k, for what it's\nworth. If upgrading helps, great! -- but I wouldn't be surprised if it\ndidn't.\n\n> I'll have to check one of our public cloud postgres instances to see if I can reproduce the issue there in order to get a chain that I can share because the system where I'm testing is a locked down jump host to our Azure GovCloud infrastructure, and I can't copy anything out from it.\n\nYeah, if at all possible, that'd make it easier to point at any\nglaring problems.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 1 May 2024 07:23:48 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TLS certificate alternate trust paths issue in libpq -\n certificate chain validation failing" }, { "msg_contents": "On Wed, May 1, 2024 at 9:23 AM Jacob Champion <\[email protected]> wrote:\n\n> On Wed, May 1, 2024 at 6:48 AM Thomas Spear <[email protected]> wrote:\n> > I dumped out the certificates presented by the server using openssl, and\n> the chain that gets output includes \"Microsoft Azure RSA TLS Issuing CA 08\".\n> > On https://www.microsoft.com/pkiops/docs/repository.htm the page says\n> that that cert was cross-signed by the DigiCert RSA G2 root.\n>\n> It's been a while since I've looked at cross-signing, but that may not\n> be enough information to prove that it's the \"correct\" version of the\n> intermediate. 
You'd need to know the Issuer, not just the Subject, for\n> all the intermediates that were given to the client. (It may not match\n> the one they have linked on their support page.)\n>\n>\nFair enough. The server issuer is C=US,O=Microsoft Corporation,CN=Microsoft\nAzure RSA TLS Issuing CA 08\nThe intermediate's issuer is C=US,O=Microsoft Corporation,CN=Microsoft RSA\nRoot Certificate Authority 2017 so I think that you're absolutely correct.\nThe intermediate on the support page reflects the DigiCert issuer, but the\none from the server reflects the Microsoft issuer.\n\nCircling back to my original question, why is there a difference in\nbehavior?\n\nWhat I believe should be happening isn't what's happening:\n1. If ~/.postgresql/root.crt contains the MS root, and I don't specify\nsslrootcert= -- successful validation\n2. If ~/.postgresql/root.crt contains the MS root, and I specify\nsslrootcert= -- successful validation\n3. If ~/.postgresql/root.crt contains the DigiCert root, and I don't\nspecify sslrootcert= -- validation fails\n4. If ~/.postgresql/root.crt contains the DigiCert root, and I specify\nsslrootcert= -- successful validation\n\nCase 3 should succeed IMHO since case 4 does.\n\n\n> > The postgres server appears to send the Microsoft root certificate\n> instead of the DigiCert one, which should be fine. The server sends the\n> \"Microsoft RSA Root Certificate Authority 2017\" root.\n> > As far as I understand, a server sending a root certificate along with\n> the intermediate is a big no-no, but that's a topic for a different thread\n> and audience most likely. :)\n>\n> To me, that only makes me more suspicious that the chain the server is\n> sending you may not be the chain you're expecting. Especially since\n> you mentioned on the other thread that the MS root is working and the\n> DigiCert root is not.\n>\n>\nYeah, I agree. So then I need to talk to MS about why the portal is giving\nus the wrong root -- and I'll open a support ticket with them for this. I\nstill don't understand why the above difference in behavior happens though.\nIs that specifically because the server is sending the MS root? Still\ndoesn't seem to make a whole lot of sense. If the DigiCert root can\nvalidate the chain when it's explicitly passed, it should be able to\nvalidate the chain when it's implicitly the only root cert available to a\npostgres client.\n\n\n> > The openssl version in my Windows test system is 3.0.7. It's running\n> Almalinux 9 in WSL2, so openssl is from the package manager. The container\n> image I'm using has an old-as-dirt openssl 1.1.1k.\n>\n> I'm not aware of any validation issues with 1.1.1k, for what it's\n> worth. If upgrading helps, great! -- but I wouldn't be surprised if it\n> didn't.\n>\n>\nI was thinking the same honestly. 
If it breaks for jdbc-postgres on 1.1.1k\nand psql cli on 3.0.7 then it's likely not an issue there.\n\n> I'll have to check one of our public cloud postgres instances to see if I\n> can reproduce the issue there in order to get a chain that I can share\n> because the system where I'm testing is a locked down jump host to our\n> Azure GovCloud infrastructure, and I can't copy anything out from it.\n>\n> Yeah, if at all possible, that'd make it easier to point at any\n> glaring problems.\n>\n>\nI should be able to do this today.\n\nThanks again!\n\n--Thomas\n\nOn Wed, May 1, 2024 at 9:23 AM Jacob Champion <[email protected]> wrote:On Wed, May 1, 2024 at 6:48 AM Thomas Spear <[email protected]> wrote:\n> I dumped out the certificates presented by the server using openssl, and the chain that gets output includes \"Microsoft Azure RSA TLS Issuing CA 08\".\n> On https://www.microsoft.com/pkiops/docs/repository.htm the page says that that cert was cross-signed by the DigiCert RSA G2 root.\n\nIt's been a while since I've looked at cross-signing, but that may not\nbe enough information to prove that it's the \"correct\" version of the\nintermediate. You'd need to know the Issuer, not just the Subject, for\nall the intermediates that were given to the client. (It may not match\nthe one they have linked on their support page.)\nFair enough. The server issuer is C=US,O=Microsoft Corporation,CN=Microsoft Azure RSA TLS Issuing CA 08The intermediate's issuer is C=US,O=Microsoft Corporation,CN=Microsoft RSA Root Certificate Authority 2017 so I think that you're absolutely correct. The intermediate on the support page reflects the DigiCert issuer, but the one from the server reflects the Microsoft issuer.Circling back to my original question, why is there a difference in behavior?What I believe should be happening isn't what's happening:1. If ~/.postgresql/root.crt contains the MS root, and I don't specify sslrootcert= -- successful validation2. If ~/.postgresql/root.crt contains the MS root, and I specify sslrootcert= -- successful validation3. If ~/.postgresql/root.crt contains the DigiCert root, and I don't specify sslrootcert= -- validation fails4. If ~/.postgresql/root.crt contains the DigiCert root, and I specify sslrootcert= -- successful validationCase 3 should succeed IMHO since case 4 does. \n> The postgres server appears to send the Microsoft root certificate instead of the DigiCert one, which should be fine. The server sends the \"Microsoft RSA Root Certificate Authority 2017\" root.\n> As far as I understand, a server sending a root certificate along with the intermediate is a big no-no, but that's a topic for a different thread and audience most likely. :)\n\nTo me, that only makes me more suspicious that the chain the server is\nsending you may not be the chain you're expecting. Especially since\nyou mentioned on the other thread that the MS root is working and the\nDigiCert root is not.\nYeah, I agree. So then I need to talk to MS about why the portal is giving us the wrong root -- and I'll open a support ticket with them for this. I still don't understand why the above difference in behavior happens though. Is that specifically because the server is sending the MS root? Still doesn't seem to make a whole lot of sense. If the DigiCert root can validate the chain when it's explicitly passed, it should be able to validate the chain when it's implicitly the only root cert available to a postgres client. \n> The openssl version in my Windows test system is 3.0.7. 
It's running Almalinux 9 in WSL2, so openssl is from the package manager. The container image I'm using has an old-as-dirt openssl 1.1.1k.\n\nI'm not aware of any validation issues with 1.1.1k, for what it's\nworth. If upgrading helps, great! -- but I wouldn't be surprised if it\ndidn't.\n I was thinking the same honestly. If it breaks for jdbc-postgres on 1.1.1k and psql cli on 3.0.7 then it's likely not an issue there.\n> I'll have to check one of our public cloud postgres instances to see if I can reproduce the issue there in order to get a chain that I can share because the system where I'm testing is a locked down jump host to our Azure GovCloud infrastructure, and I can't copy anything out from it.\n\nYeah, if at all possible, that'd make it easier to point at any\nglaring problems.\nI should be able to do this today.Thanks again!--Thomas", "msg_date": "Wed, 1 May 2024 10:17:13 -0500", "msg_from": "Thomas Spear <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TLS certificate alternate trust paths issue in libpq -\n certificate chain validation failing" }, { "msg_contents": "On Wed, May 1, 2024 at 8:17 AM Thomas Spear <[email protected]> wrote:\n> Circling back to my original question, why is there a difference in behavior?\n>\n> What I believe should be happening isn't what's happening:\n> 1. If ~/.postgresql/root.crt contains the MS root, and I don't specify sslrootcert= -- successful validation\n> 2. If ~/.postgresql/root.crt contains the MS root, and I specify sslrootcert= -- successful validation\n> 3. If ~/.postgresql/root.crt contains the DigiCert root, and I don't specify sslrootcert= -- validation fails\n> 4. If ~/.postgresql/root.crt contains the DigiCert root, and I specify sslrootcert= -- successful validation\n\nNumber 4 is the only one that seems off to me given what we know. If\nyou're saying that the server's chain never mentions DigiCert as an\nissuer, then I see no reason that the DigiCert root should ever\nsuccessfully validate the chain. You mentioned on the other thread\nthat\n\n> We eventually found the intermediate cert was missing from the system trust, so we tried adding that without success\n\nand that has me a little worried. Why would the intermediate need to\nbe explicitly trusted?\n\nI also notice from the other thread that sometimes you're testing on\nLinux and sometimes you're testing on Windows, and that you've mixed\nin a couple of different sslmodes during debugging. So I want to make\nabsolutely sure: are you _certain_ that case number 4 is a true\nstatement? In other words, there's nothing in your default root.crt\nexcept the DigiCert root, you've specified exactly the same path in\nsslrootcert as the one that's loaded by default, and your experiments\nare all using verify-full?\n\nThe default can also be modified by a bunch of environmental factors,\nincluding $PGSSLROOTCERT, $HOME, the effective user ID, etc. (On\nWindows I don't know what the %APPDATA% conventions are.) 
If you empty\nout your root.crt file, you should get a clear message that libpq\ntried to load certificates from it and couldn't; otherwise, it's\nfinding the default somewhere else.\n\n--Jacob\n\n\n", "msg_date": "Wed, 1 May 2024 10:30:56 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TLS certificate alternate trust paths issue in libpq -\n certificate chain validation failing" }, { "msg_contents": "On Wed, May 1, 2024 at 12:31 PM Jacob Champion <\[email protected]> wrote:\n\n> On Wed, May 1, 2024 at 8:17 AM Thomas Spear <[email protected]> wrote:\n> > Circling back to my original question, why is there a difference in\n> behavior?\n> >\n> > What I believe should be happening isn't what's happening:\n> > 1. If ~/.postgresql/root.crt contains the MS root, and I don't specify\n> sslrootcert= -- successful validation\n> > 2. If ~/.postgresql/root.crt contains the MS root, and I specify\n> sslrootcert= -- successful validation\n> > 3. If ~/.postgresql/root.crt contains the DigiCert root, and I don't\n> specify sslrootcert= -- validation fails\n> > 4. If ~/.postgresql/root.crt contains the DigiCert root, and I specify\n> sslrootcert= -- successful validation\n>\n> Number 4 is the only one that seems off to me given what we know.\n\n\nI see how that could be true.\n\n\n> If you're saying that the server's chain never mentions DigiCert as an\n> issuer, then I see no reason that the DigiCert root should ever\n> successfully validate the chain. You mentioned on the other thread\n> that\n>\n> > We eventually found the intermediate cert was missing from the system\n> trust, so we tried adding that without success\n>\n> and that has me a little worried. Why would the intermediate need to\n> be explicitly trusted?\n>\n>\nRight, so just to be clear, all of the details from the other thread was\ntesting done in a container running on Kubernetes, so when adding the\nintermediate to the \"system trust\" it was the container's java trust store.\nWhen that didn't work, we removed it from the Dockerfile again so the\nfuture builds didn't include the trust for that cert.\n\n\n> I also notice from the other thread that sometimes you're testing on\n> Linux and sometimes you're testing on Windows, and that you've mixed\n> in a couple of different sslmodes during debugging. So I want to make\n> absolutely sure: are you _certain_ that case number 4 is a true\n> statement? In other words, there's nothing in your default root.crt\n> except the DigiCert root, you've specified exactly the same path in\n> sslrootcert as the one that's loaded by default, and your experiments\n> are all using verify-full?\n>\n\n> The default can also be modified by a bunch of environmental factors,\n> including $PGSSLROOTCERT, $HOME, the effective user ID, etc. (On\n> Windows I don't know what the %APPDATA% conventions are.) If you empty\n> out your root.crt file, you should get a clear message that libpq\n> tried to load certificates from it and couldn't; otherwise, it's\n> finding the default somewhere else.\n>\n>\nI redid the command line tests to be sure, from Windows command prompt so\nthat I can't rely on my bash command history from AlmaLinux and instead had\nto type everything out by hand.\nIt does fail to validate for case 4 after all. 
I must have had a copy/paste\nerror during past tests.\n\nWith no root.crt file present, the psql command complains that root.crt is\nmissing as well.\n\nSo then it sounds like putting the MS root in root.crt (as we have done to\nfix this) is the correct thing to do, and there's no issue. It doesn't seem\nlibpq will use the trusted roots that are typically located in either\n/etc/ssl or /etc/pki so we have to provide the root in the path where libpq\nexpects it to be to get verify-full to work properly.\n\nThanks for helping me to confirm this. I'll get a case open with MS\nregarding the wrong root download from the portal in GovCloud.\n\n--Thomas\n\nOn Wed, May 1, 2024 at 12:31 PM Jacob Champion <[email protected]> wrote:On Wed, May 1, 2024 at 8:17 AM Thomas Spear <[email protected]> wrote:\n> Circling back to my original question, why is there a difference in behavior?\n>\n> What I believe should be happening isn't what's happening:\n> 1. If ~/.postgresql/root.crt contains the MS root, and I don't specify sslrootcert= -- successful validation\n> 2. If ~/.postgresql/root.crt contains the MS root, and I specify sslrootcert= -- successful validation\n> 3. If ~/.postgresql/root.crt contains the DigiCert root, and I don't specify sslrootcert= -- validation fails\n> 4. If ~/.postgresql/root.crt contains the DigiCert root, and I specify sslrootcert= -- successful validation\n\nNumber 4 is the only one that seems off to me given what we know. I see how that could be true. If you're saying that the server's chain never mentions DigiCert as anissuer, then I see no reason that the DigiCert root should eversuccessfully validate the chain. You mentioned on the other threadthat> We eventually found the intermediate cert was missing from the system trust, so we tried adding that without successand that has me a little worried. Why would the intermediate need tobe explicitly trusted?Right, so just to be clear, all of the details from the other thread was testing done in a container running on Kubernetes, so when adding the intermediate to the \"system trust\" it was the container's java trust store. When that didn't work, we removed it from the Dockerfile again so the future builds didn't include the trust for that cert. I also notice from the other thread that sometimes you're testing on\nLinux and sometimes you're testing on Windows, and that you've mixed\nin a couple of different sslmodes during debugging. So I want to make\nabsolutely sure: are you _certain_ that case number 4 is a true\nstatement? In other words, there's nothing in your default root.crt\nexcept the DigiCert root, you've specified exactly the same path in\nsslrootcert as the one that's loaded by default, and your experiments\nare all using verify-full?\nThe default can also be modified by a bunch of environmental factors,\nincluding $PGSSLROOTCERT, $HOME, the effective user ID, etc. (On\nWindows I don't know what the %APPDATA% conventions are.) If you empty\nout your root.crt file, you should get a clear message that libpq\ntried to load certificates from it and couldn't; otherwise, it's\nfinding the default somewhere else.I redid the command line tests to be sure, from Windows command prompt so that I can't rely on my bash command history from AlmaLinux and instead had to type everything out by hand.It does fail to validate for case 4 after all. 
I must have had a copy/paste error during past tests.With no root.crt file present, the psql command complains that root.crt is missing as well.So then it sounds like putting the MS root in root.crt (as we have done to fix this) is the correct thing to do, and there's no issue. It doesn't seem libpq will use the trusted roots that are typically located in either /etc/ssl or /etc/pki so we have to provide the root in the path where libpq expects it to be to get verify-full to work properly.Thanks for helping me to confirm this. I'll get a case open with MS regarding the wrong root download from the portal in GovCloud.--Thomas", "msg_date": "Wed, 1 May 2024 13:56:26 -0500", "msg_from": "Thomas Spear <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TLS certificate alternate trust paths issue in libpq -\n certificate chain validation failing" }, { "msg_contents": "On Wed, May 1, 2024 at 11:57 AM Thomas Spear <[email protected]> wrote:\n> It does fail to validate for case 4 after all. I must have had a copy/paste error during past tests.\n\nOkay, good. Glad it's behaving as expected!\n\n> So then it sounds like putting the MS root in root.crt (as we have done to fix this) is the correct thing to do, and there's no issue. It doesn't seem libpq will use the trusted roots that are typically located in either /etc/ssl or /etc/pki so we have to provide the root in the path where libpq expects it to be to get verify-full to work properly.\n\nRight. Versions 16 and later will let you use `sslrootcert=system` to\nload those /etc locations more easily, but if the MS root isn't in the\nsystem PKI stores and the server isn't sending the DigiCert chain then\nthat probably doesn't help you.\n\n> Thanks for helping me to confirm this. I'll get a case open with MS regarding the wrong root download from the portal in GovCloud.\n\nHappy to help!\n\nHave a good one,\n--Jacob\n\n\n", "msg_date": "Wed, 1 May 2024 12:18:41 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TLS certificate alternate trust paths issue in libpq -\n certificate chain validation failing" } ]
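For anyone wanting to reproduce the four cases Thomas enumerated, a minimal libpq program along these lines exercises the verify-full paths; the host name and certificate path are placeholders, and the second conninfo deliberately omits sslrootcert so that the default ~/.postgresql/root.crt is used.

#include <stdio.h>
#include <libpq-fe.h>

static void
try_connect(const char *conninfo)
{
    PGconn *conn = PQconnectdb(conninfo);

    if (PQstatus(conn) == CONNECTION_OK)
        printf("OK:   %s\n", conninfo);
    else
        printf("FAIL: %s\n  %s", conninfo, PQerrorMessage(conn));
    PQfinish(conn);
}

int
main(void)
{
    /* explicit root store (cases 2 and 4 in the list above) */
    try_connect("host=example.postgres.database.azure.com dbname=postgres "
                "sslmode=verify-full sslrootcert=/path/to/root.crt");

    /* default ~/.postgresql/root.crt (cases 1 and 3) */
    try_connect("host=example.postgres.database.azure.com dbname=postgres "
                "sslmode=verify-full");
    return 0;
}

Swapping which root certificate sits in root.crt between runs makes the difference in chain building easy to observe, and since psql itself goes through libpq, PQerrorMessage() reports the same verification error text.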
[ { "msg_contents": "If you create an unlogged sequence on a primary, pg_sequence_last_value()\nfor that sequence on a standby will error like so:\n\n\tpostgres=# select pg_sequence_last_value('test'::regclass);\n\tERROR: could not open file \"base/5/16388\": No such file or directory\n\nThis function is used by the pg_sequences system view, which fails with the\nsame error on standbys. The two options I see are:\n\n* Return a better ERROR and adjust pg_sequences to avoid calling this\n function for unlogged sequences on standbys.\n* Return NULL from pg_sequence_last_value() if called for an unlogged\n sequence on a standby.\n\nAs pointed out a few years ago [0], this function is undocumented, so\nthere's no stated contract to uphold. I lean towards just returning NULL\nbecause that's what we'll have to put in the relevant pg_sequences field\nanyway, but I can see an argument for fixing the ERROR to align with what\nyou see when you try to access unlogged relations on a standby (i.e.,\n\"cannot access temporary or unlogged relations during recovery\").\n\nThoughts?\n\n[0] https://postgr.es/m/CAAaqYe8JL8Et2DoO0RRjGaMvy7-C6eDH-2wHXK-gp3dOssvBkQ%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 30 Apr 2024 19:57:30 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> If you create an unlogged sequence on a primary, pg_sequence_last_value()\n> for that sequence on a standby will error like so:\n> \tpostgres=# select pg_sequence_last_value('test'::regclass);\n> \tERROR: could not open file \"base/5/16388\": No such file or directory\n\n> As pointed out a few years ago [0], this function is undocumented, so\n> there's no stated contract to uphold. I lean towards just returning NULL\n> because that's what we'll have to put in the relevant pg_sequences field\n> anyway, but I can see an argument for fixing the ERROR to align with what\n> you see when you try to access unlogged relations on a standby (i.e.,\n> \"cannot access temporary or unlogged relations during recovery\").\n\nYeah, I agree with putting that logic into the function. Putting\nsuch conditions into the SQL of a system view is risky because it\nis really, really painful to adjust the SQL in a released version.\nYou could back-patch a fix for this if done at the C level, but\nI doubt we'd go to the trouble if it's done in the view.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Apr 2024 21:06:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Tue, Apr 30, 2024 at 09:06:04PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> If you create an unlogged sequence on a primary, pg_sequence_last_value()\n>> for that sequence on a standby will error like so:\n>> \tpostgres=# select pg_sequence_last_value('test'::regclass);\n>> \tERROR: could not open file \"base/5/16388\": No such file or directory\n> \n>> As pointed out a few years ago [0], this function is undocumented, so\n>> there's no stated contract to uphold. 
I lean towards just returning NULL\n>> because that's what we'll have to put in the relevant pg_sequences field\n>> anyway, but I can see an argument for fixing the ERROR to align with what\n>> you see when you try to access unlogged relations on a standby (i.e.,\n>> \"cannot access temporary or unlogged relations during recovery\").\n> \n> Yeah, I agree with putting that logic into the function. Putting\n> such conditions into the SQL of a system view is risky because it\n> is really, really painful to adjust the SQL in a released version.\n> You could back-patch a fix for this if done at the C level, but\n> I doubt we'd go to the trouble if it's done in the view.\n\nGood point. I'll work on a patch along these lines, then.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 30 Apr 2024 20:13:17 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Tue, Apr 30, 2024 at 08:13:17PM -0500, Nathan Bossart wrote:\n> Good point. I'll work on a patch along these lines, then.\n\nThis ended up being easier than I expected. While unlogged sequences are\nonly supported on v15 and above, temporary sequences have been around since\nv7.2, so this will probably need to be back-patched to all supported\nversions. The added test case won't work for v12-v14 since it uses an\nunlogged sequence. I'm not sure it's worth constructing a test case for\ntemporary sequences.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 30 Apr 2024 21:05:31 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Tue, Apr 30, 2024 at 09:05:31PM -0500, Nathan Bossart wrote:\n> This ended up being easier than I expected. While unlogged sequences are\n> only supported on v15 and above, temporary sequences have been around since\n> v7.2, so this will probably need to be back-patched to all supported\n> versions.\n\nUnlogged and temporary relations cannot be accessed during recovery,\nso I'm OK with your change to force a NULL for both relpersistences.\nHowever, it seems to me that you should also drop the\npg_is_other_temp_schema() in system_views.sql for the definition of\npg_sequences. Doing that on HEAD now would be OK, but there's nothing\nurgent to it so it may be better done once v18 opens up. Note that\npg_is_other_temp_schema() is only used for this sequence view, which\nis a nice cleanup.\n\nBy the way, shouldn't we also change the function to return NULL for a\nfailed permission check? It would be possible to remove the\nhas_sequence_privilege() as well, thanks to that, and a duplication\nbetween the code and the function view. I've been looking around a\nbit, noticing one use of this function in check_pgactivity (nagios\nagent), and its query also has a has_sequence_privilege() so returning\nNULL would simplify its definition in the long-run. 
I'd suspect other\nmonitoring queries to do something similar to bypass permission\nerrors.\n\n> The added test case won't work for v12-v14 since it uses an\n> unlogged sequence.\n\nThat would require a BackgroundPsql to maintain the connection to the\nprimary, so not having a test is OK by me.\n--\nMichael", "msg_date": "Wed, 1 May 2024 12:39:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Wed, May 01, 2024 at 12:39:53PM +0900, Michael Paquier wrote:\n> However, it seems to me that you should also drop the\n> pg_is_other_temp_schema() in system_views.sql for the definition of\n> pg_sequences. Doing that on HEAD now would be OK, but there's nothing\n> urgent to it so it may be better done once v18 opens up. Note that\n> pg_is_other_temp_schema() is only used for this sequence view, which\n> is a nice cleanup.\n\nIIUC this would cause other sessions' temporary sequences to appear in the\nview. Is that desirable?\n\n> By the way, shouldn't we also change the function to return NULL for a\n> failed permission check? It would be possible to remove the\n> has_sequence_privilege() as well, thanks to that, and a duplication\n> between the code and the function view. I've been looking around a\n> bit, noticing one use of this function in check_pgactivity (nagios\n> agent), and its query also has a has_sequence_privilege() so returning\n> NULL would simplify its definition in the long-run. I'd suspect other\n> monitoring queries to do something similar to bypass permission\n> errors.\n\nI'm okay with that, but it would be v18 material that I'd track separately\nfrom the back-patchable fix proposed in this thread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 3 May 2024 15:49:08 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Wed, May 01, 2024 at 12:39:53PM +0900, Michael Paquier wrote:\n>> However, it seems to me that you should also drop the\n>> pg_is_other_temp_schema() in system_views.sql for the definition of\n>> pg_sequences. Doing that on HEAD now would be OK, but there's nothing\n>> urgent to it so it may be better done once v18 opens up. Note that\n>> pg_is_other_temp_schema() is only used for this sequence view, which\n>> is a nice cleanup.\n\n> IIUC this would cause other sessions' temporary sequences to appear in the\n> view. Is that desirable?\n\nI assume Michael meant to move the test into the C code, not drop\nit entirely --- I agree we don't want that.\n\nMoving it has some attraction, but pg_is_other_temp_schema() is also\nused in a lot of information_schema views, so we couldn't get rid of\nit without a lot of further hacking. Not sure we want to relocate\nthat filter responsibility in just one view.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 May 2024 17:22:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Fri, May 03, 2024 at 05:22:06PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> IIUC this would cause other sessions' temporary sequences to appear in the\n>> view. 
Is that desirable?\n> \n> I assume Michael meant to move the test into the C code, not drop\n> it entirely --- I agree we don't want that.\n\nYup. I meant to remove it from the script and keep only something in\nthe C code to avoid the duplication, but you're right that the temp\nsequences would create more noise than now.\n\n> Moving it has some attraction, but pg_is_other_temp_schema() is also\n> used in a lot of information_schema views, so we couldn't get rid of\n> it without a lot of further hacking. Not sure we want to relocate\n> that filter responsibility in just one view.\n\nOkay.\n--\nMichael", "msg_date": "Sat, 4 May 2024 18:45:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Fri, May 03, 2024 at 03:49:08PM -0500, Nathan Bossart wrote:\n> On Wed, May 01, 2024 at 12:39:53PM +0900, Michael Paquier wrote:\n>> By the way, shouldn't we also change the function to return NULL for a\n>> failed permission check? It would be possible to remove the\n>> has_sequence_privilege() as well, thanks to that, and a duplication\n>> between the code and the function view. I've been looking around a\n>> bit, noticing one use of this function in check_pgactivity (nagios\n>> agent), and its query also has a has_sequence_privilege() so returning\n>> NULL would simplify its definition in the long-run. I'd suspect other\n>> monitoring queries to do something similar to bypass permission\n>> errors.\n> \n> I'm okay with that, but it would be v18 material that I'd track separately\n> from the back-patchable fix proposed in this thread.\n\nOf course. I mean only the permission check simplification for HEAD.\nMy apologies if my words were unclear.\n--\nMichael", "msg_date": "Sat, 4 May 2024 18:47:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Sat, May 04, 2024 at 06:45:32PM +0900, Michael Paquier wrote:\n> On Fri, May 03, 2024 at 05:22:06PM -0400, Tom Lane wrote:\n>> Nathan Bossart <[email protected]> writes:\n>>> IIUC this would cause other sessions' temporary sequences to appear in the\n>>> view. Is that desirable?\n>> \n>> I assume Michael meant to move the test into the C code, not drop\n>> it entirely --- I agree we don't want that.\n> \n> Yup. I meant to remove it from the script and keep only something in\n> the C code to avoid the duplication, but you're right that the temp\n> sequences would create more noise than now.\n> \n>> Moving it has some attraction, but pg_is_other_temp_schema() is also\n>> used in a lot of information_schema views, so we couldn't get rid of\n>> it without a lot of further hacking. Not sure we want to relocate\n>> that filter responsibility in just one view.\n> \n> Okay.\n\nOkay, so are we okay to back-patch something like v1? Or should we also\nreturn NULL for other sessions' temporary schemas on primaries? 
That would\nchange the condition to something like\n\n\tchar relpersist = seqrel->rd_rel->relpersistence;\n\n\tif (relpersist == RELPERSISTENCE_PERMANENT ||\n\t\t(relpersist == RELPERSISTENCE_UNLOGGED && !RecoveryInProgress()) ||\n\t\t!RELATION_IS_OTHER_TEMP(seqrel))\n\t{\n\t\t...\n\t}\n\nI personally think that would be fine to back-patch since pg_sequences\nalready filters it out anyway.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 7 May 2024 12:10:33 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Okay, so are we okay to back-patch something like v1? Or should we also\n> return NULL for other sessions' temporary schemas on primaries? That would\n> change the condition to something like\n\n> \tchar relpersist = seqrel->rd_rel->relpersistence;\n\n> \tif (relpersist == RELPERSISTENCE_PERMANENT ||\n> \t\t(relpersist == RELPERSISTENCE_UNLOGGED && !RecoveryInProgress()) ||\n> \t\t!RELATION_IS_OTHER_TEMP(seqrel))\n> \t{\n> \t\t...\n> \t}\n\nShould be AND'ing not OR'ing the !TEMP condition, no? Also I liked\nyour other formulation of the persistence check better.\n\n> I personally think that would be fine to back-patch since pg_sequences\n> already filters it out anyway.\n\n+1 to include that, as it offers a defense if someone invokes this\nfunction directly. In HEAD we could then rip out the test in the\nview.\n\nBTW, I think you also need something like\n\n-\tint64\t\tresult;\n+\tint64\t\tresult = 0;\n\nYour compiler may not complain about result being possibly\nuninitialized, but IME others will.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 May 2024 13:44:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Tue, May 07, 2024 at 01:44:16PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> \tchar relpersist = seqrel->rd_rel->relpersistence;\n> \n>> \tif (relpersist == RELPERSISTENCE_PERMANENT ||\n>> \t\t(relpersist == RELPERSISTENCE_UNLOGGED && !RecoveryInProgress()) ||\n>> \t\t!RELATION_IS_OTHER_TEMP(seqrel))\n>> \t{\n>> \t\t...\n>> \t}\n> \n> Should be AND'ing not OR'ing the !TEMP condition, no? Also I liked\n> your other formulation of the persistence check better.\n\nYes, that's a silly mistake on my part. I changed it to\n\n\tif ((RelationIsPermanent(seqrel) || !RecoveryInProgress()) &&\n\t\t!RELATION_IS_OTHER_TEMP(seqrel))\n\t{\n\t\t...\n\t}\n\nin the attached v2.\n\n>> I personally think that would be fine to back-patch since pg_sequences\n>> already filters it out anyway.\n> \n> +1 to include that, as it offers a defense if someone invokes this\n> function directly. In HEAD we could then rip out the test in the\n> view.\n\nI apologize for belaboring this point, but I don't see how we would be\ncomfortable removing that check unless we are okay with other sessions'\ntemporary sequences appearing in the view, albeit with a NULL last_value.\nThis check lives in the WHERE clause today, so if we remove it, we'd no\nlonger exclude those sequences. 
Michael and you seem united on this, so I\nhave a sinking feeling that I'm missing something terribly obvious.\n\n> BTW, I think you also need something like\n> \n> -\tint64\t\tresult;\n> +\tint64\t\tresult = 0;\n> \n> Your compiler may not complain about result being possibly\n> uninitialized, but IME others will.\n\nGood call.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 7 May 2024 13:40:51 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Tue, May 07, 2024 at 01:44:16PM -0400, Tom Lane wrote:\n>> +1 to include that, as it offers a defense if someone invokes this\n>> function directly. In HEAD we could then rip out the test in the\n>> view.\n\n> I apologize for belaboring this point, but I don't see how we would be\n> comfortable removing that check unless we are okay with other sessions'\n> temporary sequences appearing in the view, albeit with a NULL last_value.\n\nOh! You're right, I'm wrong. I was looking at the CASE filter, which\nwe could get rid of -- but the \"WHERE NOT pg_is_other_temp_schema(N.oid)\"\npart has to stay.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 May 2024 15:02:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Tue, May 07, 2024 at 03:02:01PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> On Tue, May 07, 2024 at 01:44:16PM -0400, Tom Lane wrote:\n>>> +1 to include that, as it offers a defense if someone invokes this\n>>> function directly. In HEAD we could then rip out the test in the\n>>> view.\n> \n>> I apologize for belaboring this point, but I don't see how we would be\n>> comfortable removing that check unless we are okay with other sessions'\n>> temporary sequences appearing in the view, albeit with a NULL last_value.\n> \n> Oh! You're right, I'm wrong. I was looking at the CASE filter, which\n> we could get rid of -- but the \"WHERE NOT pg_is_other_temp_schema(N.oid)\"\n> part has to stay.\n\nOkay, phew. We can still do something like v3-0002 for v18. I'll give\nMichael a chance to comment on 0001 before committing/back-patching that\none.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 7 May 2024 14:39:42 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Tue, May 07, 2024 at 02:39:42PM -0500, Nathan Bossart wrote:\n> Okay, phew. We can still do something like v3-0002 for v18. I'll give\n> Michael a chance to comment on 0001 before committing/back-patching that\n> one.\n\nWhat you are doing in 0001, and 0002 for v18 sounds fine to me.\n--\nMichael", "msg_date": "Wed, 8 May 2024 11:01:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Wed, May 08, 2024 at 11:01:01AM +0900, Michael Paquier wrote:\n> On Tue, May 07, 2024 at 02:39:42PM -0500, Nathan Bossart wrote:\n>> Okay, phew. We can still do something like v3-0002 for v18. 
I'll give\n>> Michael a chance to comment on 0001 before committing/back-patching that\n>> one.\n> \n> What you are doing in 0001, and 0002 for v18 sounds fine to me.\n\nGreat. Rather than commit this on a Friday afternoon, I'll just post what\nI have staged for commit early next week.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 10 May 2024 16:00:55 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "Committed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 13 May 2024 16:01:07 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "Here is a rebased version of 0002, which I intend to commit once v18\ndevelopment begins.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 16 May 2024 20:33:35 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" }, { "msg_contents": "On Thu, May 16, 2024 at 08:33:35PM -0500, Nathan Bossart wrote:\n> Here is a rebased version of 0002, which I intend to commit once v18\n> development begins.\n\nCommitted.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 1 Jul 2024 11:51:58 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_sequence_last_value() for unlogged sequences on standbys" } ]
[ { "msg_contents": "Hi,\n\nOver in [1] it was rediscovered that our documentation assumes the reader\nis familiar with NULL. It seems worthwhile to provide both an introduction\nto the topic and an overview of how this special value gets handled\nthroughout the system.\n\nAttached is a very rough draft attempting this, based on my own thoughts\nand those expressed by Tom in [1], which largely align with mine.\n\nI'll flesh this out some more once I get support for the goal, content, and\nplacement. On that point, NULL is a fundamental part of the SQL language\nand so having it be a section in a Chapter titled \"SQL Language\" seems to\nfit well, even if that falls into our tutorial. Framing this up as\ntutorial content won't be that hard, though I've skipped on examples and\nsuch pending feedback. It really doesn't fit as a top-level chapter under\npart II nor really under any of the other chapters there. The main issue\nwith the tutorial is the forward references to concepts not yet discussed\nbut problem points there can be addressed.\n\nI do plan to remove the entity reference and place the content into\nquery.sgml directly in the final version. It is just much easier to write\nan entire new section in its own file.\n\nDavid J.\n\n[1] https://www.postgresql.org/message-id/1859814.1714532025%40sss.pgh.pa.us", "msg_date": "Wed, 1 May 2024 08:12:09 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Document NULL" }, { "msg_contents": "On Wed, May 1, 2024, 16:13 David G. Johnston <[email protected]>\nwrote:\n\n> Hi,\n>\n> Over in [1] it was rediscovered that our documentation assumes the reader\n> is familiar with NULL. It seems worthwhile to provide both an introduction\n> to the topic and an overview of how this special value gets handled\n> throughout the system.\n>\n> Attached is a very rough draft attempting this, based on my own thoughts\n> and those expressed by Tom in [1], which largely align with mine.\n>\n> I'll flesh this out some more once I get support for the goal, content,\n> and placement. On that point, NULL is a fundamental part of the SQL\n> language and so having it be a section in a Chapter titled \"SQL Language\"\n> seems to fit well, even if that falls into our tutorial. Framing this up\n> as tutorial content won't be that hard, though I've skipped on examples and\n> such pending feedback. It really doesn't fit as a top-level chapter under\n> part II nor really under any of the other chapters there. The main issue\n> with the tutorial is the forward references to concepts not yet discussed\n> but problem points there can be addressed.\n>\n> I do plan to remove the entity reference and place the content into\n> query.sgml directly in the final version. It is just much easier to write\n> an entire new section in its own file.\n>\n> David J.\n>\n> [1]\n> https://www.postgresql.org/message-id/1859814.1714532025%40sss.pgh.pa.us\n>\n\n\"The cardinal rule, NULL is never equal or unequal to any non-null value.\"\n\nThis implies that a NULL is generally equal or unequal to another NULL.\nWhile this can be true (e.g. in aggregates), in general it is not. Perhaps\nimmediately follow it with something along the lines of \"In most cases NULL\nis also not considered equal or unequal to any other NULL (i.e. NULL = NULL\nwill return NULL), but there are occasional exceptions, which will be\nexplained further on.\"\n\nRegards\n\nThom\n\nOn Wed, May 1, 2024, 16:13 David G. 
Johnston <[email protected]> wrote:Hi,Over in [1] it was rediscovered that our documentation assumes the reader is familiar with NULL.  It seems worthwhile to provide both an introduction to the topic and an overview of how this special value gets handled throughout the system.Attached is a very rough draft attempting this, based on my own thoughts and those expressed by Tom in [1], which largely align with mine.I'll flesh this out some more once I get support for the goal, content, and placement.  On that point, NULL is a fundamental part of the SQL language and so having it be a section in a Chapter titled \"SQL Language\" seems to fit well, even if that falls into our tutorial.  Framing this up as tutorial content won't be that hard, though I've skipped on examples and such pending feedback.  It really doesn't fit as a top-level chapter under part II nor really under any of the other chapters there.  The main issue with the tutorial is the forward references to concepts not yet discussed but problem points there can be addressed.I do plan to remove the entity reference and place the content into query.sgml directly in the final version.  It is just much easier to write an entire new section in its own file.David J.[1] https://www.postgresql.org/message-id/1859814.1714532025%40sss.pgh.pa.us\"The cardinal rule, NULL is never equal or unequal to any non-null value.\"This implies that a NULL is generally equal or unequal to another NULL. While this can be true (e.g. in aggregates), in general it is not. Perhaps immediately follow it with something along the lines of \"In most cases NULL is also not considered equal or unequal to any other NULL (i.e. NULL = NULL will return NULL), but there are occasional exceptions, which will be explained further on.\"RegardsThom", "msg_date": "Wed, 1 May 2024 16:48:45 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Wed, May 1, 2024 at 8:12 PM David G. Johnston <[email protected]>\nwrote:\n\n> Hi,\n>\n> Over in [1] it was rediscovered that our documentation assumes the reader\n> is familiar with NULL. It seems worthwhile to provide both an introduction\n> to the topic and an overview of how this special value gets handled\n> throughout the system.\n>\n> Attached is a very rough draft attempting this, based on my own thoughts\n> and those expressed by Tom in [1], which largely align with mine.\n>\n> I'll flesh this out some more once I get support for the goal, content,\n> and placement. On that point, NULL is a fundamental part of the SQL\n> language and so having it be a section in a Chapter titled \"SQL Language\"\n> seems to fit well, even if that falls into our tutorial. Framing this up\n> as tutorial content won't be that hard, though I've skipped on examples and\n> such pending feedback. It really doesn't fit as a top-level chapter under\n> part II nor really under any of the other chapters there. The main issue\n> with the tutorial is the forward references to concepts not yet discussed\n> but problem points there can be addressed.\n>\n> I do plan to remove the entity reference and place the content into\n> query.sgml directly in the final version. 
It is just much easier to write\n> an entire new section in its own file.\n>\n\nReviewed the documentation update and it's quite extensive, but I think\nit's better to include some examples as well.\n\nRegards\nKashif Zeeshan\n\n>\n> David J.\n>\n> [1]\n> https://www.postgresql.org/message-id/1859814.1714532025%40sss.pgh.pa.us\n>\n>\n\nOn Wed, May 1, 2024 at 8:12 PM David G. Johnston <[email protected]> wrote:Hi,Over in [1] it was rediscovered that our documentation assumes the reader is familiar with NULL.  It seems worthwhile to provide both an introduction to the topic and an overview of how this special value gets handled throughout the system.Attached is a very rough draft attempting this, based on my own thoughts and those expressed by Tom in [1], which largely align with mine.I'll flesh this out some more once I get support for the goal, content, and placement.  On that point, NULL is a fundamental part of the SQL language and so having it be a section in a Chapter titled \"SQL Language\" seems to fit well, even if that falls into our tutorial.  Framing this up as tutorial content won't be that hard, though I've skipped on examples and such pending feedback.  It really doesn't fit as a top-level chapter under part II nor really under any of the other chapters there.  The main issue with the tutorial is the forward references to concepts not yet discussed but problem points there can be addressed.I do plan to remove the entity reference and place the content into query.sgml directly in the final version.  It is just much easier to write an entire new section in its own file.Reviewed the documentation update and it's quite extensive, but I think it's better to include some examples as well.RegardsKashif ZeeshanDavid J.[1] https://www.postgresql.org/message-id/1859814.1714532025%40sss.pgh.pa.us", "msg_date": "Thu, 2 May 2024 08:42:41 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Thu, 2 May 2024 at 03:12, David G. Johnston\n<[email protected]> wrote:\n> Attached is a very rough draft attempting this, based on my own thoughts and those expressed by Tom in [1], which largely align with mine.\n\nThanks for picking this up. I agree that we should have something to\nimprove this.\n\nIt would be good to see some subtitles in this e.g \"Three-valued\nboolean logic\" and document about NULL being unknown, therefore false.\nGiving a few examples would be good to, which I think is useful as it\nat least demonstrates a simple way of testing these things using a\nsimple FROMless SELECT, e.g. \"SELECT NULL = NULL;\". 
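Something along these lines, say (untested, just sketching the shape of what the section could show):\n\nSELECT NULL = NULL;   -- yields NULL rather than true\nSELECT NULL <> NULL;  -- also NULL\nSELECT NULL IS NULL;  -- true\nSELECT 1 = NULL;      -- NULL, so a WHERE clause built on it filters the row out\n\n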
You could link to\nthis section from where we document WHERE clauses.\n\nMaybe another subtitle would be \"GROUP BY / DISTINCT clauses with NULL\nvalues\", and then explain that including some other examples using\n\"SELECT 1 IS NOT DISTINCT FROM NULL;\" to allow the reader to\nexperiment and learn by running queries.\n\nYou likely skipped them due to draft status, but if not, references\nback to other sections likely could do with links back to that\nsection, e.g \"amount of precipitation Hayward\" is not on that page.\nWithout that you're assuming the reader is reading the documents\nlinearly.\n\nAnother section might briefly explain about disallowing NULLs in\ncolumns with NOT NULL constraints, then link to wherever we properly\ndocument those.\n\ntypo:\n\n+ <title>Handling Unkowns (NULL)</title>\n\nMaybe inject \"Values\" after Unknown.\n\nLet's bash it into shape a bit more before going any further on actual wording.\n\nDavid\n\n\n", "msg_date": "Thu, 2 May 2024 16:15:47 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> Let's bash it into shape a bit more before going any further on actual wording.\n\nFWIW, I want to push back on the idea of making it a tutorial section.\nI too considered that, but in the end I think it's a better idea to\nput it into the \"main\" docs, for two reasons:\n\n1. I want this to be a fairly official/formal statement about how we\ntreat nulls; not that it has to be written in dry academic style or\nwhatever, but it has to be citable as The Reasons Why We Act Like That,\nso the tutorial seems like the wrong place.\n\n2. I think we'll soon be cross-referencing it from other places in the\ndocs, even if we don't actually move existing bits of text into it.\nSo again, cross-ref'ing the tutorial doesn't feel quite right.\n\nThose arguments don't directly say where it should go, but after\nsurveying things a bit I think it could become section 5.2 in\nddl.sgml, between \"Table Basics\" and \"Default Values\". Another\nangle could be to put it after \"Default Values\" --- except that\nthat section already assumes you know what a null is.\n\nI've not read any of David's text in detail yet, but that's my\ntwo cents on where to place it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 May 2024 00:47:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Wed, May 1, 2024 at 9:47 PM Tom Lane <[email protected]> wrote:\n\n> David Rowley <[email protected]> writes:\n> > Let's bash it into shape a bit more before going any further on actual\n> wording.\n>\n> FWIW, I want to push back on the idea of making it a tutorial section.\n> I too considered that, but in the end I think it's a better idea to\n> put it into the \"main\" docs, for two reasons:\n>\n>\nVersion 2 attached. Still a draft, focused on topic picking and overall\nstructure. Examples and links planned plus the usual semantic markup stuff.\n\nI chose to add a new sect1 in the user guide (The SQL Language) chapter,\n\"Data\". Don't tell Robert.\n\nThe \"Data Basics\" sub-section lets us readily slide this Chapter into the\nmain flow and here the NULL discussion feels like a natural fit. In\nhindsight, the lack of a Data chapter in a Database manual seems like an\noversight. 
One easily made because we assume if you are here you \"know\"\nwhat data is, but there is still stuff to be discussed, if nothing else to\nestablish a common understanding between us and our users.\n\nDavid J.", "msg_date": "Thu, 2 May 2024 08:23:44 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document NULL" }, { "msg_contents": "Hi David\n\nI reviewed the documentation and it's very detailed.\n\nThanks\nKashif Zeeshan\nBitnine Global\n\nOn Thu, May 2, 2024 at 8:24 PM David G. Johnston <[email protected]>\nwrote:\n\n> On Wed, May 1, 2024 at 9:47 PM Tom Lane <[email protected]> wrote:\n>\n>> David Rowley <[email protected]> writes:\n>> > Let's bash it into shape a bit more before going any further on actual\n>> wording.\n>>\n>> FWIW, I want to push back on the idea of making it a tutorial section.\n>> I too considered that, but in the end I think it's a better idea to\n>> put it into the \"main\" docs, for two reasons:\n>>\n>>\n> Version 2 attached. Still a draft, focused on topic picking and overall\n> structure. Examples and links planned plus the usual semantic markup stuff.\n>\n> I chose to add a new sect1 in the user guide (The SQL Language) chapter,\n> \"Data\". Don't tell Robert.\n>\n> The \"Data Basics\" sub-section lets us readily slide this Chapter into the\n> main flow and here the NULL discussion feels like a natural fit. In\n> hindsight, the lack of a Data chapter in a Database manual seems like an\n> oversight. One easily made because we assume if you are here you \"know\"\n> what data is, but there is still stuff to be discussed, if nothing else to\n> establish a common understanding between us and our users.\n>\n> David J.\n>\n>\n>\n\nHi DavidI reviewed the documentation and it's very detailed.ThanksKashif ZeeshanBitnine GlobalOn Thu, May 2, 2024 at 8:24 PM David G. Johnston <[email protected]> wrote:On Wed, May 1, 2024 at 9:47 PM Tom Lane <[email protected]> wrote:David Rowley <[email protected]> writes:\n> Let's bash it into shape a bit more before going any further on actual wording.\n\nFWIW, I want to push back on the idea of making it a tutorial section.\nI too considered that, but in the end I think it's a better idea to\nput it into the \"main\" docs, for two reasons:Version 2 attached.  Still a draft, focused on topic picking and overall structure.  Examples and links planned plus the usual semantic markup stuff.I chose to add a new sect1 in the user guide (The SQL Language) chapter, \"Data\".  Don't tell Robert.The \"Data Basics\" sub-section lets us readily slide this Chapter into the main flow and here the NULL discussion feels like a natural fit.  In hindsight, the lack of a Data chapter in a Database manual seems like an oversight.  One easily made because we assume if you are here you \"know\" what data is, but there is still stuff to be discussed, if nothing else to establish a common understanding between us and our users.David J.", "msg_date": "Fri, 3 May 2024 10:20:32 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Thu, 2024-05-02 at 08:23 -0700, David G. Johnston wrote:\n> Version 2 attached.  
Still a draft, focused on topic picking and overall structure.\n\nI'm fine with most of the material (ignoring ellipses and typos), except this:\n\n+ The NOT NULL column constraint is largely syntax sugar for the corresponding\n+ column IS NOT NULL check constraint, though there are metadata differences\n+ described in create table.\n\nI see a substantial difference there:\n\n SELECT conname, contype,\n pg_get_expr(conbin, 'not_null'::regclass)\n FROM pg_constraint\n WHERE conrelid = 'not_null'::regclass;\n\n conname │ contype │ pg_get_expr \n ══════════════════════╪═════════╪══════════════════\n check_null │ c │ (id IS NOT NULL)\n not_null_id_not_null │ n │ ∅\n (2 rows)\n\nThere is also the \"attnotnull\" column in \"pg_attribute\".\n\nI didn't try it, but I guess that the performance difference will be measurable.\nSo I wouldn't call it \"syntactic sugar\".\n\nPerhaps: The behavior of the NOT NULL constraint is like that of a check\nconstraint with IS NOT NULL.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 03 May 2024 08:47:20 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Fri, May 3, 2024 at 2:47 PM Laurenz Albe <[email protected]> wrote:\n>\n> On Thu, 2024-05-02 at 08:23 -0700, David G. Johnston wrote:\n> > Version 2 attached. Still a draft, focused on topic picking and overall structure.\n>\n> I'm fine with most of the material (ignoring ellipses and typos), except this:\n>\n> + The NOT NULL column constraint is largely syntax sugar for the corresponding\n> + column IS NOT NULL check constraint, though there are metadata differences\n> + described in create table.\n>\n\nthe system does not translate (check constraint column IS NOT NULL)\nto NOT NULL constraint,\nat least in domain.\n\nfor example:\ncreate domain connotnull integer;\nalter domain connotnull add not null;\n\\dD connotnull\n\ndrop domain connotnull cascade;\ncreate domain connotnull integer;\nalter domain connotnull add check (value is not null);\n\\dD\n\n\n", "msg_date": "Fri, 3 May 2024 16:14:27 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Fri, May 3, 2024 at 1:14 AM jian he <[email protected]> wrote:\n\n> On Fri, May 3, 2024 at 2:47 PM Laurenz Albe <[email protected]>\n> wrote:\n> >\n> > On Thu, 2024-05-02 at 08:23 -0700, David G. Johnston wrote:\n> > > Version 2 attached. Still a draft, focused on topic picking and\n> overall structure.\n> >\n> > I'm fine with most of the material (ignoring ellipses and typos), except\n> this:\n> >\n> > + The NOT NULL column constraint is largely syntax sugar for the\n> corresponding\n> > + column IS NOT NULL check constraint, though there are metadata\n> differences\n> > + described in create table.\n> >\n>\n> the system does not translate (check constraint column IS NOT NULL)\n> to NOT NULL constraint,\n> at least in domain.\n>\n>\nI'll change this but I was focusing on the fact you get identical\nuser-visible behavior with not null and a check(col is not null). Chain of\nthought being we discuss the is not null operator (indirectly) already and\nso not null, which is syntax as opposed to an operation/expression, can\nleverage that explanation as opposed to getting its own special case. 
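A rough illustration of that equivalent user-facing behavior, using a pair of throwaway tables (the names here are just for the example):\n\ncreate table t_notnull (a int not null);\ncreate table t_check   (a int check (a is not null));\ninsert into t_notnull values (null);  -- rejected with a not-null violation\ninsert into t_check   values (null);  -- rejected with a check constraint violation\n\n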
I'll\nconsider this some more and maybe mention the catalog dynamics a bit as\nwell, or at least point to them.\n\n\n> drop domain connotnull cascade;\n> create domain connotnull integer;\n> alter domain connotnull add check (value is not null);\n> \\dD\n>\n\nThis reminds me, I forgot to add commentary regarding defining a not null\nconstraint on a domain but the domain type surviving a left join but having\na null value.\n\nDavid J.\n\nOn Fri, May 3, 2024 at 1:14 AM jian he <[email protected]> wrote:On Fri, May 3, 2024 at 2:47 PM Laurenz Albe <[email protected]> wrote:\n>\n> On Thu, 2024-05-02 at 08:23 -0700, David G. Johnston wrote:\n> > Version 2 attached.  Still a draft, focused on topic picking and overall structure.\n>\n> I'm fine with most of the material (ignoring ellipses and typos), except this:\n>\n> +    The NOT NULL column constraint is largely syntax sugar for the corresponding\n> +    column IS NOT NULL check constraint, though there are metadata differences\n> +    described in create table.\n>\n\nthe system does not translate (check constraint column IS NOT NULL)\nto NOT NULL constraint,\nat least in domain.\nI'll change this but I was focusing on the fact you get identical user-visible behavior with not null and a check(col is not null).  Chain of thought being we discuss the is not null operator (indirectly) already and so not null, which is syntax as opposed to an operation/expression, can leverage that explanation as opposed to getting its own special case.  I'll consider this some more and maybe mention the catalog dynamics a bit as well, or at least point to them. \ndrop domain connotnull cascade;\ncreate domain connotnull integer;\nalter domain connotnull add check (value is not null);\n\\dDThis reminds me, I forgot to add commentary regarding defining a not null constraint on a domain but the domain type surviving a left join but having a null value.David J.", "msg_date": "Fri, 3 May 2024 06:58:50 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On 02.05.24 17:23, David G. Johnston wrote:\n> Version 2 attached.  Still a draft, focused on topic picking and overall \n> structure.  Examples and links planned plus the usual semantic markup stuff.\n> \n> I chose to add a new sect1 in the user guide (The SQL Language) chapter, \n> \"Data\".\n\nPlease, let's not.\n\nA stylistic note: \"null\" is an adjective. You can talk about a \"null \nvalue\" or a value \"is null\". These are lower-cased (or maybe \ntitle-cased). You can use upper-case when referring to SQL syntax \nelements (in which case also tag it with something like <literal>), and \nalso to the C-language symbol (tagged with <symbol>). We had recently \ncleaned this up, so I think the rest of the documentation should be \npretty consistent about this.\n\n\n", "msg_date": "Fri, 3 May 2024 16:09:58 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Fri, May 3, 2024 at 7:10 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 02.05.24 17:23, David G. Johnston wrote:\n> > Version 2 attached. Still a draft, focused on topic picking and overall\n> > structure. 
Examples and links planned plus the usual semantic markup\n> stuff.\n> >\n> > I chose to add a new sect1 in the user guide (The SQL Language) chapter,\n> > \"Data\".\n>\n> Please, let's not.\n>\n\nIf a committer wants to state the single place in the documentation to put\nthis I'm content to put it there while leaving my reasoning of choices in\nplace for future bike-shedding. My next options to decide between are the\nappendix or the lead chapter in Data Types. It really doesn't fit inside\nDDL IMO which is the only other suggestion I've seen (and an uncertain, or\nat least unsubstantiated, one) and a new chapter meets both criteria Tom\nlaid out, so long as this is framed as more than just having to document\nnull values.\n\n\n> A stylistic note: \"null\" is an adjective. You can talk about a \"null\n> value\" or a value \"is null\".\n>\n\nWill do.\n\nDavid J.\n\nOn Fri, May 3, 2024 at 7:10 AM Peter Eisentraut <[email protected]> wrote:On 02.05.24 17:23, David G. Johnston wrote:\n> Version 2 attached.  Still a draft, focused on topic picking and overall \n> structure.  Examples and links planned plus the usual semantic markup stuff.\n> \n> I chose to add a new sect1 in the user guide (The SQL Language) chapter, \n> \"Data\".\n\nPlease, let's not.If a committer wants to state the single place in the documentation to put this I'm content to put it there while leaving my reasoning of choices in place for future bike-shedding.  My next options to decide between are the appendix or the lead chapter in Data Types. It really doesn't fit inside DDL IMO which is the only other suggestion I've seen (and an uncertain, or at least unsubstantiated, one) and a new chapter meets both criteria Tom laid out, so long as this is framed as more than just having to document null values.\n\nA stylistic note: \"null\" is an adjective.  You can talk about a \"null \nvalue\" or a value \"is null\".Will do.David J.", "msg_date": "Fri, 3 May 2024 08:16:26 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document NULL" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Fri, May 3, 2024 at 7:10 AM Peter Eisentraut <[email protected]>\n> wrote:\n>> On 02.05.24 17:23, David G. Johnston wrote:\n>>> I chose to add a new sect1 in the user guide (The SQL Language) chapter,\n>>> \"Data\".\n\n>> Please, let's not.\n\n> If a committer wants to state the single place in the documentation to put\n> this I'm content to put it there while leaving my reasoning of choices in\n> place for future bike-shedding. My next options to decide between are the\n> appendix or the lead chapter in Data Types. It really doesn't fit inside\n> DDL IMO which is the only other suggestion I've seen (and an uncertain, or\n> at least unsubstantiated, one) and a new chapter meets both criteria Tom\n> laid out, so long as this is framed as more than just having to document\n> null values.\n\nI could see going that route if we actually had a chapter's worth of\nmaterial to put into \"Data\". But we don't, there's really only one\nnot-very-long section. Robert has justifiably complained about that\nsort of thing elsewhere in the docs, and I don't want to argue with\nhim about why it'd be OK here.\n\nHaving said that, I reiterate my proposal that we make it a new\n<sect1> under DDL, before 5.2 Default Values which is the first\nplace in ddl.sgml that assumes you have heard of nulls. Sure,\nit's not totally ideal, but noplace is going to be entirely\nperfect. 
I can see some attraction in dropping it under Data Types,\nbut (a) null is a data-type-independent concept, and (b) the\nchapters before that are just full of places that assume you have\nheard of nulls. Putting it in an appendix is similarly throwing\nto the wind any idea that you can read the documentation in order.\n\nReally, even the syntax chapter has some mentions of nulls.\nIf we did have a \"Data\" chapter there would be a case for\nputting it as the *first* chapter of Part II.\n\nI suppose we could address the nonlinearity gripe with a bunch\nof cross-reference links, in which case maybe something under\nData Types is the least bad approach.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 May 2024 11:44:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Fri, May 3, 2024 at 8:44 AM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Fri, May 3, 2024 at 7:10 AM Peter Eisentraut <[email protected]>\n> > wrote:\n> >> On 02.05.24 17:23, David G. Johnston wrote:\n> >>> I chose to add a new sect1 in the user guide (The SQL Language)\n> chapter,\n> >>> \"Data\".\n>\n> >> Please, let's not.\n>\n> > If a committer wants to state the single place in the documentation to\n> put\n> > this I'm content to put it there while leaving my reasoning of choices in\n> > place for future bike-shedding. My next options to decide between are\n> the\n> > appendix or the lead chapter in Data Types. It really doesn't fit inside\n> > DDL IMO which is the only other suggestion I've seen (and an uncertain,\n> or\n> > at least unsubstantiated, one) and a new chapter meets both criteria Tom\n> > laid out, so long as this is framed as more than just having to document\n> > null values.\n>\n> I could see going that route if we actually had a chapter's worth of\n> material to put into \"Data\". But we don't, there's really only one\n> not-very-long section. Robert has justifiably complained about that\n> sort of thing elsewhere in the docs, and I don't want to argue with\n> him about why it'd be OK here.\n>\n\nOK. I was hopeful that once the Chapter existed the annoyance of it being\nshort would be solved by making it longer. If we ever do that, moving this\nsection under there at that point would be an option.\n\n\n> Having said that, I reiterate my proposal that we make it a new\n> <sect1> under DDL, before 5.2 Default Values which is the first\n> place in ddl.sgml that assumes you have heard of nulls.\n\n\nI will go with this and remove the \"Data Basics\" section I wrote, leaving\nit to be just a discussion about null values. The tutorial is the only\nsection that really needs unique wording to fit in. No matter where we\ndecide to place it otherwise the core content will be the same, with maybe\na different section preface to tie it in.\n\nPutting it in an appendix is similarly throwing\n> to the wind any idea that you can read the documentation in order.\n>\n\nI think we can keep the entire camel out of the tent while letting it get a\nwhiff of what is inside. 
It would be a summary reference linked to from\nthe various places that mention null values.\nhttps://en.wikipedia.org/wiki/Camel%27s_nose\n\n\n> I suppose we could address the nonlinearity gripe with a bunch\n> of cross-reference links, in which case maybe something under\n> Data Types is the least bad approach.\n>\n>\nYeah, there is circularity here that is probably impossible to\ncompletely resolve.\n\nDavid J.\n\nOn Fri, May 3, 2024 at 8:44 AM Tom Lane <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> On Fri, May 3, 2024 at 7:10 AM Peter Eisentraut <[email protected]>\n> wrote:\n>> On 02.05.24 17:23, David G. Johnston wrote:\n>>> I chose to add a new sect1 in the user guide (The SQL Language) chapter,\n>>> \"Data\".\n\n>> Please, let's not.\n\n> If a committer wants to state the single place in the documentation to put\n> this I'm content to put it there while leaving my reasoning of choices in\n> place for future bike-shedding.  My next options to decide between are the\n> appendix or the lead chapter in Data Types. It really doesn't fit inside\n> DDL IMO which is the only other suggestion I've seen (and an uncertain, or\n> at least unsubstantiated, one) and a new chapter meets both criteria Tom\n> laid out, so long as this is framed as more than just having to document\n> null values.\n\nI could see going that route if we actually had a chapter's worth of\nmaterial to put into \"Data\".  But we don't, there's really only one\nnot-very-long section.  Robert has justifiably complained about that\nsort of thing elsewhere in the docs, and I don't want to argue with\nhim about why it'd be OK here.OK.  I was hopeful that once the Chapter existed the annoyance of it being short would be solved by making it longer.  If we ever do that, moving this section under there at that point would be an option.\n\nHaving said that, I reiterate my proposal that we make it a new\n<sect1> under DDL, before 5.2 Default Values which is the first\nplace in ddl.sgml that assumes you have heard of nulls.I will go with this and remove the \"Data Basics\" section I wrote, leaving it to be just a discussion about null values.  The tutorial is the only section that really needs unique wording to fit in.  No matter where we decide to place it otherwise the core content will be the same, with maybe a different section preface to tie it in.Putting it in an appendix is similarly throwing\nto the wind any idea that you can read the documentation in order.I think we can keep the entire camel out of the tent while letting it get a whiff of what is inside.  It would be a summary reference linked to from the various places that mention null values.https://en.wikipedia.org/wiki/Camel%27s_nose\n\nI suppose we could address the nonlinearity gripe with a bunch\nof cross-reference links, in which case maybe something under\nData Types is the least bad approach.Yeah, there is circularity here that is probably impossible to completely resolve.David J.", "msg_date": "Fri, 3 May 2024 09:00:12 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Fri, May 3, 2024 at 9:00 AM David G. 
Johnston <[email protected]>\nwrote:\n\n> On Fri, May 3, 2024 at 8:44 AM Tom Lane <[email protected]> wrote:\n>\n>> Having said that, I reiterate my proposal that we make it a new\n>>\n> <sect1> under DDL, before 5.2 Default Values which is the first\n>> place in ddl.sgml that assumes you have heard of nulls.\n>\n>\n> I will go with this and remove the \"Data Basics\" section I wrote, leaving\n> it to be just a discussion about null values. The tutorial is the only\n> section that really needs unique wording to fit in. No matter where we\n> decide to place it otherwise the core content will be the same, with maybe\n> a different section preface to tie it in.\n>\n>\nv3 Attached.\n\nProbably at the 90% complete mark. Minimal index entries, not as thorough\na look-about of the existing documentation as I'd like. Probably some\nwording and style choices to tweak. Figured better to get feedback now\nbefore I go into polish mode. In particular, tweaking and re-running the\nexamples.\n\nYes, I am aware of my improper indentation for programlisting and screen. I\nwanted to be able to use the code folding features of my editor. Those can\nbe readily un-indented in the final version.\n\nThe changes to func.sgml is basically one change repeated something like 20\ntimes with tweaks for true/false. Plus moving the discussion regarding the\nSQL specification into the new null handling section.\n\nIt took me doing this to really understand the difference between row\nconstructors and composite typed values, especially since array\nconstructors produce array typed values and the constructor is just an\nunimportant implementation option while row constructors introduce\nmeaningfully different behaviors when used.\n\nMy plan is to have a v4 out next week, without or without a review of this\ndraft, but then the subsequent few weeks will probably be a bit quiet.\n\nDavid J.", "msg_date": "Sat, 11 May 2024 08:33:27 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Sat, May 11, 2024, 16:34 David G. Johnston <[email protected]>\nwrote:\n\n> On Fri, May 3, 2024 at 9:00 AM David G. Johnston <\n> [email protected]> wrote:\n>\n>> On Fri, May 3, 2024 at 8:44 AM Tom Lane <[email protected]> wrote:\n>>\n>>> Having said that, I reiterate my proposal that we make it a new\n>>>\n>> <sect1> under DDL, before 5.2 Default Values which is the first\n>>> place in ddl.sgml that assumes you have heard of nulls.\n>>\n>>\n>> I will go with this and remove the \"Data Basics\" section I wrote, leaving\n>> it to be just a discussion about null values. The tutorial is the only\n>> section that really needs unique wording to fit in. No matter where we\n>> decide to place it otherwise the core content will be the same, with maybe\n>> a different section preface to tie it in.\n>>\n>>\n> v3 Attached.\n>\n> Probably at the 90% complete mark. Minimal index entries, not as thorough\n> a look-about of the existing documentation as I'd like. Probably some\n> wording and style choices to tweak. Figured better to get feedback now\n> before I go into polish mode. In particular, tweaking and re-running the\n> examples.\n>\n> Yes, I am aware of my improper indentation for programlisting and screen.\n> I wanted to be able to use the code folding features of my editor. Those\n> can be readily un-indented in the final version.\n>\n> The changes to func.sgml is basically one change repeated something like\n> 20 times with tweaks for true/false. 
Plus moving the discussion regarding\n> the SQL specification into the new null handling section.\n>\n> It took me doing this to really understand the difference between row\n> constructors and composite typed values, especially since array\n> constructors produce array typed values and the constructor is just an\n> unimportant implementation option while row constructors introduce\n> meaningfully different behaviors when used.\n>\n> My plan is to have a v4 out next week, without or without a review of this\n> draft, but then the subsequent few weeks will probably be a bit quiet.\n>\n\n+ The cardinal rule, a given null value is never\n+ <link linkend=\"functions-comparison-op-table\">equal or unequal</link>\n+ to any other non-null.\n\nAgain, doesn't this imply it tends to be equal to another null by its\nomission?\n\nThom\n\n>\n\nOn Sat, May 11, 2024, 16:34 David G. Johnston <[email protected]> wrote:On Fri, May 3, 2024 at 9:00 AM David G. Johnston <[email protected]> wrote:On Fri, May 3, 2024 at 8:44 AM Tom Lane <[email protected]> wrote:Having said that, I reiterate my proposal that we make it a new\n<sect1> under DDL, before 5.2 Default Values which is the first\nplace in ddl.sgml that assumes you have heard of nulls.I will go with this and remove the \"Data Basics\" section I wrote, leaving it to be just a discussion about null values.  The tutorial is the only section that really needs unique wording to fit in.  No matter where we decide to place it otherwise the core content will be the same, with maybe a different section preface to tie it in.v3 Attached.Probably at the 90% complete mark.  Minimal index entries, not as thorough a look-about of the existing documentation as I'd like.  Probably some wording and style choices to tweak.  Figured better to get feedback now before I go into polish mode.  In particular, tweaking and re-running the examples.Yes, I am aware of my improper indentation for programlisting and screen. I wanted to be able to use the code folding features of my editor.  Those can be readily un-indented in the final version.The changes to func.sgml is basically one change repeated something like 20 times with tweaks for true/false.  Plus moving the discussion regarding the SQL specification into the new null handling section.It took me doing this to really understand the difference between row constructors and composite typed values, especially since array constructors produce array typed values and the constructor is just an unimportant implementation option while row constructors introduce meaningfully different behaviors when used.My plan is to have a v4 out next week, without or without a review of this draft, but then the subsequent few weeks will probably be a bit quiet.+   The cardinal rule, a given null value is never+   <link linkend=\"functions-comparison-op-table\">equal or unequal</link>+   to any other non-null.Again, doesn't this imply it tends to be equal to another null by its omission?Thom", "msg_date": "Sat, 11 May 2024 18:51:37 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Saturday, May 11, 2024, Thom Brown <[email protected]> wrote:\n\n>\n> Sat, May 11, 2024, 16:34 David G. 
Johnston <[email protected]>\n> wrote:\n>\n\n\n> My plan is to have a v4 out next week, without or without a review of this\n>> draft, but then the subsequent few weeks will probably be a bit quiet.\n>>\n>\n> + The cardinal rule, a given null value is never\n> + <link linkend=\"functions-comparison-op-table\">equal or unequal</link>\n> + to any other non-null.\n>\n> Again, doesn't this imply it tends to be equal to another null by its\n> omission?\n>\n>\nI still agree, it’s just a typo now…\n\n…is never equal or unequal to any value.\n\nThough I haven’t settled on a phrasing I really like. But I’m trying to\navoid a parenthetical.\n\nDavid J.\n\nOn Saturday, May 11, 2024, Thom Brown <[email protected]> wrote: Sat, May 11, 2024, 16:34 David G. Johnston <[email protected]> wrote: My plan is to have a v4 out next week, without or without a review of this draft, but then the subsequent few weeks will probably be a bit quiet.+   The cardinal rule, a given null value is never+   <link linkend=\"functions-comparison-op-table\">equal or unequal</link>+   to any other non-null.Again, doesn't this imply it tends to be equal to another null by its omission?I still agree, it’s just a typo now……is never equal or unequal to any value.Though I haven’t settled on a phrasing I really like.  But I’m trying to avoid a parenthetical.David J.", "msg_date": "Sat, 11 May 2024 11:00:47 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Sat, May 11, 2024 at 11:00 AM David G. Johnston <\[email protected]> wrote:\n\n> Though I haven’t settled on a phrasing I really like. But I’m trying to\n> avoid a parenthetical.\n>\n>\nSettled on this:\n\nThe cardinal rule, a null value is neither\n <link linkend=\"functions-comparison-op-table\">equal nor unequal</link>\n to any value, including other null values.\n\nI've been tempted to just say, \"to any value.\", but cannot quite bring\nmyself to do it...\n\nDavid J.\n\nOn Sat, May 11, 2024 at 11:00 AM David G. Johnston <[email protected]> wrote:Though I haven’t settled on a phrasing I really like.  But I’m trying to avoid a parenthetical.Settled on this:The cardinal rule, a null value is neither   <link linkend=\"functions-comparison-op-table\">equal nor unequal</link>   to any value, including other null values.I've been tempted to just say, \"to any value.\", but cannot quite bring myself to do it...David J.", "msg_date": "Mon, 17 Jun 2024 11:27:09 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Sat, 11 May 2024 08:33:27 -0700\n\"David G. Johnston\" <[email protected]> wrote:\n\n> On Fri, May 3, 2024 at 9:00 AM David G. Johnston <[email protected]>\n> wrote:\n> \n> > On Fri, May 3, 2024 at 8:44 AM Tom Lane <[email protected]> wrote:\n> >\n> >> Having said that, I reiterate my proposal that we make it a new\n> >>\n> > <sect1> under DDL, before 5.2 Default Values which is the first\n> >> place in ddl.sgml that assumes you have heard of nulls.\n> >\n> >\n> > I will go with this and remove the \"Data Basics\" section I wrote, leaving\n> > it to be just a discussion about null values. The tutorial is the only\n> > section that really needs unique wording to fit in. No matter where we\n> > decide to place it otherwise the core content will be the same, with maybe\n> > a different section preface to tie it in.\n> >\n> >\n> v3 Attached.\n> \n> Probably at the 90% complete mark. 
Minimal index entries, not as thorough\n> a look-about of the existing documentation as I'd like. Probably some\n> wording and style choices to tweak. Figured better to get feedback now\n> before I go into polish mode. In particular, tweaking and re-running the\n> examples.\n> \n> Yes, I am aware of my improper indentation for programlisting and screen. I\n> wanted to be able to use the code folding features of my editor. Those can\n> be readily un-indented in the final version.\n> \n> The changes to func.sgml is basically one change repeated something like 20\n> times with tweaks for true/false. Plus moving the discussion regarding the\n> SQL specification into the new null handling section.\n> \n> It took me doing this to really understand the difference between row\n> constructors and composite typed values, especially since array\n> constructors produce array typed values and the constructor is just an\n> unimportant implementation option while row constructors introduce\n> meaningfully different behaviors when used.\n> \n> My plan is to have a v4 out next week, without or without a review of this\n> draft, but then the subsequent few weeks will probably be a bit quiet.\n\n+ A null value literal is written as unquoted, case insensitive, NULL.\n...(snip)...\n+ <programlisting>\n+ SELECT\n+ NULL,\n+ pg_typeof(null),\n+ pg_typeof(NuLl::text),\n+ cast(null as text);\n+ </programlisting>\n\nIt may be a trivial thing but I am not sure we need to mention case insensitivity\nhere, because all keywords and unquoted identifiers are case-insensitive in\nPostgreSQL and it is not specific to NULL.\n\nAlso, I found the other parts of the documentation use \"case-insensitive\" in which\nwords are joined with hyphen, so I wonder it is better to use the same form if we\nleave the description.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Wed, 19 Jun 2024 12:34:12 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Tue, Jun 18, 2024 at 8:34 PM Yugo NAGATA <[email protected]> wrote:\n\n>\n> It may be a trivial thing but I am not sure we need to mention case\n> insensitivity\n> here, because all keywords and unquoted identifiers are case-insensitive in\n> PostgreSQL and it is not specific to NULL.\n>\n\nBut it is neither a keyword nor an identifier. It behaves more like:\nSELECT 1 as one; A constant, which have no implied rules - mainly because\nnumbers don't have case. Which suggests adding some specific mention there\n- and also probably need to bring up it and its \"untyped\" nature in the\nsyntax chapter, probably here:\n\nhttps://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS-GENERIC\n\n\n> Also, I found the other parts of the documentation use \"case-insensitive\"\n> in which\n> words are joined with hyphen, so I wonder it is better to use the same\n> form if we\n> leave the description.\n>\n>\nTypo on my part, fixed.\n\nI'm not totally against just letting this content be assumed to be learned\nfrom elsewhere in the documentation but it also seems reasonable to\ninclude. I'm going to leave it for now.\n\nDavid J.\n\nOn Tue, Jun 18, 2024 at 8:34 PM Yugo NAGATA <[email protected]> wrote:\nIt may be a trivial thing but I am not sure we need to mention case insensitivity\nhere, because all keywords and unquoted identifiers are case-insensitive in\nPostgreSQL and it is not specific to NULL.But it is neither a keyword nor an identifier.  
It behaves more like: SELECT 1 as one;  A constant, which have no implied rules - mainly because numbers don't have case.  Which suggests adding some specific mention there - and also probably need to bring up it and its \"untyped\" nature in the syntax chapter, probably here:https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS-GENERIC\n\nAlso, I found the other parts of the documentation use \"case-insensitive\" in which\nwords are joined with hyphen, so I wonder it is better to use the same form if we\nleave the description.\nTypo on my part, fixed.I'm not totally against just letting this content be assumed to be learned from elsewhere in the documentation but it also seems reasonable to include.  I'm going to leave it for now.David J.", "msg_date": "Tue, 18 Jun 2024 20:56:58 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Tue, 18 Jun 2024 20:56:58 -0700\n\"David G. Johnston\" <[email protected]> wrote:\n\n> On Tue, Jun 18, 2024 at 8:34 PM Yugo NAGATA <[email protected]> wrote:\n> \n> >\n> > It may be a trivial thing but I am not sure we need to mention case\n> > insensitivity\n> > here, because all keywords and unquoted identifiers are case-insensitive in\n> > PostgreSQL and it is not specific to NULL.\n> >\n> \n> But it is neither a keyword nor an identifier. It behaves more like:\n> SELECT 1 as one; A constant, which have no implied rules - mainly because\n> numbers don't have case. Which suggests adding some specific mention there\n\nThank you for your explanation. This makes a bit clear for me why the description\nmentions 'string' syntax there. I just thought NULL is a keyword representing\na null constant.\n\n> - and also probably need to bring up it and its \"untyped\" nature in the\n> syntax chapter, probably here:\n> \n> https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS-GENERIC\n> \n> \n> > Also, I found the other parts of the documentation use \"case-insensitive\"\n> > in which\n> > words are joined with hyphen, so I wonder it is better to use the same\n> > form if we\n> > leave the description.\n> >\n> >\n> Typo on my part, fixed.\n> \n> I'm not totally against just letting this content be assumed to be learned\n> from elsewhere in the documentation but it also seems reasonable to\n> include. I'm going to leave it for now.\n> \n> David J.\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Wed, 19 Jun 2024 14:15:20 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "Yugo NAGATA <[email protected]> writes:\n> On Tue, 18 Jun 2024 20:56:58 -0700\n> \"David G. Johnston\" <[email protected]> wrote:\n>> But it is neither a keyword nor an identifier.\n\nThe lexer would be quite surprised by your claim that NULL isn't\na keyword. Per src/include/parser/kwlist.h, NULL is a keyword,\nand a fully reserved one at that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jun 2024 01:45:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Tuesday, June 18, 2024, Tom Lane <[email protected]> wrote:\n\n> Yugo NAGATA <[email protected]> writes:\n> > On Tue, 18 Jun 2024 20:56:58 -0700\n> > \"David G. 
Johnston\" <[email protected]> wrote:\n> >> But it is neither a keyword nor an identifier.\n>\n> The lexer would be quite surprised by your claim that NULL isn't\n> a keyword. Per src/include/parser/kwlist.h, NULL is a keyword,\n> and a fully reserved one at that.\n>\n>\n>\n\nCan’t it be both a value and a keyword? I figured the not null constraint\nand is null predicates are why it’s a keyword but the existence of those\ndoesn’t cover its usage as a literal value that can be stuck anywhere you\nhave an expression.\n\nDavid J.\n\nOn Tuesday, June 18, 2024, Tom Lane <[email protected]> wrote:Yugo NAGATA <[email protected]> writes:\n> On Tue, 18 Jun 2024 20:56:58 -0700\n> \"David G. Johnston\" <[email protected]> wrote:\n>> But it is neither a keyword nor an identifier.\n\nThe lexer would be quite surprised by your claim that NULL isn't\na keyword.  Per src/include/parser/kwlist.h, NULL is a keyword,\nand a fully reserved one at that.\n                        \nCan’t it be both a value and a keyword?  I figured the not null constraint and is null predicates are why it’s a keyword but the existence of those doesn’t cover its usage as a literal value that can be stuck anywhere you have an expression.David J.", "msg_date": "Tue, 18 Jun 2024 23:02:14 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Tue, 18 Jun 2024 23:02:14 -0700\n\"David G. Johnston\" <[email protected]> wrote:\n\n> On Tuesday, June 18, 2024, Tom Lane <[email protected]> wrote:\n> \n> > Yugo NAGATA <[email protected]> writes:\n> > > On Tue, 18 Jun 2024 20:56:58 -0700\n> > > \"David G. Johnston\" <[email protected]> wrote:\n> > >> But it is neither a keyword nor an identifier.\n> >\n> > The lexer would be quite surprised by your claim that NULL isn't\n> > a keyword. Per src/include/parser/kwlist.h, NULL is a keyword,\n> > and a fully reserved one at that.\n> >\n> >\n> >\n> \n> Can’t it be both a value and a keyword? I figured the not null constraint\n> and is null predicates are why it’s a keyword but the existence of those\n> doesn’t cover its usage as a literal value that can be stuck anywhere you\n> have an expression.\n\nI still wonder it whould be unnecessary to mention the case-insensitivity here\nif we can say NULL is *also* a keyword.\n\nRegards,\nYugo Nagata\n\n\n> David J.\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Thu, 27 Jun 2024 12:14:25 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document NULL" }, { "msg_contents": "On Wed, Jun 26, 2024 at 8:14 PM Yugo NAGATA <[email protected]> wrote:\n\n>\n> I still wonder it whould be unnecessary to mention the case-insensitivity\n> here\n> if we can say NULL is *also* a keyword.\n>\n>\nI went with wording that includes mentioning its keyword status.\n\nThe attached are complete and ready for review. I did some file structure\nreformatting at the end and left that as the second patch. The first\ncontains all of the content.\n\nI'm adding this to the commitfest.\n\nThanks!\n\nDavid J.", "msg_date": "Fri, 28 Jun 2024 13:39:40 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document NULL" } ]
[ { "msg_contents": "Hi,\n\nComparing the current SSE4.2 implementation of the CRC32C algorithm in Postgres, to an optimized AVX-512 algorithm [0] we observed significant gains. The result was a ~6.6X average multiplier of increased performance measured on 3 different Intel products. Details below. The AVX-512 algorithm in C is a port of the ISA-L library [1] assembler code.\n\nWorkload call size distribution details (write heavy):\n * Average was approximately around 1,010 bytes per call\n * ~80% of the calls were under 256 bytes\n * ~20% of the calls were greater than or equal to 256 bytes up to the max buffer size of 8192\n\nThe 256 bytes is important because if the buffer is smaller, it makes sense fallback to the existing implementation. This is because the AVX-512 algorithm needs a minimum of 256 bytes to operate.\n\nUsing the above workload data distribution, \nat 0% calls < 256 bytes, a 841% improvement on average for crc32c functionality was observed.\nat 50% calls < 256 bytes, a 758% improvement on average for crc32c functionality was observed.\nat 90% calls < 256 bytes, a 44% improvement on average for crc32c functionality was observed. \nat 97.6% calls < 256 bytes, the workload's crc32c performance breaks-even.\nat 100% calls < 256 bytes, a 14% regression is seen when using AVX-512 implementation. \n\nThe results above are averages over 3 machines, and were measured on: Intel Saphire Rapids bare metal, and using EC2 on AWS cloud: Intel Saphire Rapids (m7i.2xlarge) and Intel Ice Lake (m6i.2xlarge).\n\nSummary Data (Saphire Rapids bare metal, AWS m7i-2xl, and AWS m6i-2xl):\n+---------------------+-------------------+-------------------+-------------------+--------------------+\n| Rates in Bytes/us | Bare Metal | AWS m6i-2xl | AWS m7i-2xl | |\n| (Larger is Better) +---------+---------+---------+---------+---------+---------+ Overall Multiplier |\n| | SSE 4.2 | AVX-512 | SSE 4.2 | AVX-512 | SSE 4.2 | AVX-512 | |\n+---------------------+---------+---------+---------+---------+---------+---------+--------------------+\n| Numbers 256-8192 | 12,046 | 83,196 | 7,471 | 39,965 | 11,867 | 84,589 | 6.62 |\n+---------------------+---------+---------+---------+---------+---------+---------+--------------------+\n| Numbers 64 - 255 | 16,865 | 15,909 | 9,209 | 7,363 | 12,496 | 10,046 | 0.86 |\n+---------------------+---------+---------+---------+---------+---------+---------+--------------------+\n | Weighted Multiplier [*] | 1.44 |\n +-----------------------------+--------------------+\nThere was no evidence of AVX-512 frequency throttling from perf data, which stayed steady during the test.\n\nFeedback on this proposed improvement is appreciated. Some questions: \n1) This AVX-512 ISA-L derived code uses BSD-3 license [2]. Is this compatible with the PostgreSQL License [3]? They both appear to be very permissive licenses, but I am not an expert on licenses. \n2) Is there a preferred benchmark I should run to test this change? 
\n\nIf licensing is a non-issue, I can post the initial patch along with my Postgres benchmark function patch for further review.\n\nThanks,\nPaul\n\n[0] https://www.researchgate.net/publication/263424619_Fast_CRC_computation#full-text\n[1] https://github.com/intel/isa-l\n[2] https://opensource.org/license/bsd-3-clause\n[3] https://opensource.org/license/postgresql\n \n[*] Weights used were 90% of requests less than 256 bytes, 10% greater than or equal to 256 bytes.\n\n\n", "msg_date": "Wed, 1 May 2024 15:56:08 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "Hi, forgive the top-post but I have not seen any response to this post?\n\nThanks,\nPaul\n\n> -----Original Message-----\n> From: Amonson, Paul D\n> Sent: Wednesday, May 1, 2024 8:56 AM\n> To: [email protected]\n> Cc: Nathan Bossart <[email protected]>; Shankaran, Akash\n> <[email protected]>\n> Subject: Proposal for Updating CRC32C with AVX-512 Algorithm.\n> \n> Hi,\n> \n> Comparing the current SSE4.2 implementation of the CRC32C algorithm in\n> Postgres, to an optimized AVX-512 algorithm [0] we observed significant\n> gains. The result was a ~6.6X average multiplier of increased performance\n> measured on 3 different Intel products. Details below. The AVX-512 algorithm\n> in C is a port of the ISA-L library [1] assembler code.\n> \n> Workload call size distribution details (write heavy):\n> * Average was approximately around 1,010 bytes per call\n> * ~80% of the calls were under 256 bytes\n> * ~20% of the calls were greater than or equal to 256 bytes up to the max\n> buffer size of 8192\n> \n> The 256 bytes is important because if the buffer is smaller, it makes sense\n> fallback to the existing implementation. 
This is because the AVX-512 algorithm\n> needs a minimum of 256 bytes to operate.\n> \n> Using the above workload data distribution,\n> at 0% calls < 256 bytes, a 841% improvement on average for crc32c\n> functionality was observed.\n> at 50% calls < 256 bytes, a 758% improvement on average for crc32c\n> functionality was observed.\n> at 90% calls < 256 bytes, a 44% improvement on average for crc32c\n> functionality was observed.\n> at 97.6% calls < 256 bytes, the workload's crc32c performance breaks-even.\n> at 100% calls < 256 bytes, a 14% regression is seen when using AVX-512\n> implementation.\n> \n> The results above are averages over 3 machines, and were measured on: Intel\n> Saphire Rapids bare metal, and using EC2 on AWS cloud: Intel Saphire Rapids\n> (m7i.2xlarge) and Intel Ice Lake (m6i.2xlarge).\n> \n> Summary Data (Saphire Rapids bare metal, AWS m7i-2xl, and AWS m6i-2xl):\n> +---------------------+-------------------+-------------------+-------------------+---------\n> -----------+\n> | Rates in Bytes/us | Bare Metal | AWS m6i-2xl | AWS m7i-2xl |\n> |\n> | (Larger is Better) +---------+---------+---------+---------+---------+---------+\n> Overall Multiplier |\n> | | SSE 4.2 | AVX-512 | SSE 4.2 | AVX-512 | SSE 4.2 | AVX-512 |\n> |\n> +---------------------+---------+---------+---------+---------+---------+---------+-------\n> -------------+\n> | Numbers 256-8192 | 12,046 | 83,196 | 7,471 | 39,965 | 11,867 |\n> 84,589 | 6.62 |\n> +---------------------+---------+---------+---------+---------+---------+---------+-------\n> -------------+\n> | Numbers 64 - 255 | 16,865 | 15,909 | 9,209 | 7,363 | 12,496 |\n> 10,046 | 0.86 |\n> +---------------------+---------+---------+---------+---------+---------+---------+-------\n> -------------+\n> | Weighted Multiplier [*] | 1.44 |\n> +-----------------------------+--------------------+\n> There was no evidence of AVX-512 frequency throttling from perf data, which\n> stayed steady during the test.\n> \n> Feedback on this proposed improvement is appreciated. Some questions:\n> 1) This AVX-512 ISA-L derived code uses BSD-3 license [2]. Is this compatible\n> with the PostgreSQL License [3]? They both appear to be very permissive\n> licenses, but I am not an expert on licenses.\n> 2) Is there a preferred benchmark I should run to test this change?\n> \n> If licensing is a non-issue, I can post the initial patch along with my Postgres\n> benchmark function patch for further review.\n> \n> Thanks,\n> Paul\n> \n> [0]\n> https://www.researchgate.net/publication/263424619_Fast_CRC_computati\n> on#full-text\n> [1] https://github.com/intel/isa-l\n> [2] https://opensource.org/license/bsd-3-clause\n> [3] https://opensource.org/license/postgresql\n> \n> [*] Weights used were 90% of requests less than 256 bytes, 10% greater than\n> or equal to 256 bytes.\n\n\n", "msg_date": "Fri, 17 May 2024 16:21:19 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> On 17 May 2024, at 18:21, Amonson, Paul D <[email protected]> wrote:\n\n> Hi, forgive the top-post but I have not seen any response to this post?\n\nThe project is currently in feature-freeze in preparation for the next major\nrelease so new development and ideas are not the top priority right now.\nAdditionally there is a large developer meeting shortly which many are busy\npreparing for. 
Excercise some patience, and I'm sure there will be follow-ups\nto this once development of postgres v18 picks up.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 20 May 2024 10:03:20 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> The project is currently in feature-freeze in preparation for the next major\n> release so new development and ideas are not the top priority right now.\n> Additionally there is a large developer meeting shortly which many are busy\n> preparing for. Excercise some patience, and I'm sure there will be follow-ups\n> to this once development of postgres v18 picks up.\n\nThanks, understood.\n\nI had our OSS internal team, who are experts in OSS licensing, review possible conflicts between the PostgreSQL license and the BSD-Clause 3-like license for the CRC32C AVX-512 code, and they found no issues. Therefore, including the new license into the PostgreSQL codebase should be acceptable.\n\nI am attaching the first official patches. The second patch is a simple test function in PostgreSQL SQL, which I used for testing and benchmarking. It will not be merged.\n\nCode Structure Question: While working on this code, I noticed overlaps with runtime CPU checks done in the previous POPCNT merged code. I was considering that these checks should perhaps be formalized and consolidated into a single source/header file pair. If this is desirable, where should I place these files? Should it be in \"src/port\" where they are used, or in \"src/common\" where they are available to all (not just the \"src/port\" tree)?\n\nThanks,\nPaul", "msg_date": "Wed, 12 Jun 2024 16:43:41 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "\"Amonson, Paul D\" <[email protected]> writes:\n> I had our OSS internal team, who are experts in OSS licensing, review possible conflicts between the PostgreSQL license and the BSD-Clause 3-like license for the CRC32C AVX-512 code, and they found no issues. Therefore, including the new license into the PostgreSQL codebase should be acceptable.\n\nMaybe you should get some actual lawyers to answer this type of\nquestion. The Chromium license this code cites is 3-clause-BSD\nstyle, which is NOT compatible: the \"advertising\" clause is\nsignificant.\n\nIn any case, writing copyright notices that are pointers to\nexternal web pages is not how it's done around here. We generally\noperate on the assumption that the Postgres source code will\noutlive any specific web site. Dead links to incidental material\nmight be okay, but legally relevant stuff not so much.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:08:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "On Wed, Jun 12, 2024 at 02:08:02PM -0400, Tom Lane wrote:\n> \"Amonson, Paul D\" <[email protected]> writes:\n> > I had our OSS internal team, who are experts in OSS licensing, review possible conflicts between the PostgreSQL license and the BSD-Clause 3-like license for the CRC32C AVX-512 code, and they found no issues. Therefore, including the new license into the PostgreSQL codebase should be acceptable.\n> \n> Maybe you should get some actual lawyers to answer this type of\n> question. 
The Chromium license this code cites is 3-clause-BSD\n> style, which is NOT compatible: the \"advertising\" clause is\n> significant.\n> \n> In any case, writing copyright notices that are pointers to\n> external web pages is not how it's done around here. We generally\n> operate on the assumption that the Postgres source code will\n> outlive any specific web site. Dead links to incidental material\n> might be okay, but legally relevant stuff not so much.\n\nAgreed. The licenses are compatible in the sense that they can be\ncombined to create a unified work, but they cannot be combined without\nmodifying the license of the combined work. You would need to combine\nthe Postgres and Chrome license for this, and I highly doubt we are\ngoing to be modifying the Postgres for this.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:24:57 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "Hi,\n\nI'm wonder if this isn't going in the wrong direction. We're using CRCs for\nsomething they're not well suited for in my understanding - and are paying a\nreasonably high price for it, given that even hardware accelerated CRCs aren't\nblazingly fast.\n\nCRCs are used for things like ethernet, iSCSI because they are good at\ndetecting the kinds of errors encountered, namely short bursts of\nbitflips. And the covered data is limited to a fairly small limit.\n\nWhich imo makes CRCs a bad choice for WAL. For one, we don't actually expect a\nshort burst of bitflips, the most likely case is all bits after some point\nchanging (because only one part of the record made it to disk). For another,\nWAL records are *not* limited to a small size, and if anything error detection\nbecomes more important with longer records (they're likely to be span more\npages / segments).\n\n\nIt's hard to understand, but a nonetheless helpful page is\nhttps://users.ece.cmu.edu/~koopman/crc/crc32.html which lists properties for\ncrc32c:\nhttps://users.ece.cmu.edu/~koopman/crc/c32/0x8f6e37a0_len.txt\nwhich lists\n(0x8f6e37a0; 0x11edc6f41) <=> (0x82f63b78; 0x105ec76f1) {2147483615,2147483615,5243,5243,177,177,47,47,20,20,8,8,6,6,1,1} | gold | (*op) iSCSI; CRC-32C; CRC-32/4\n\nThis cryptic notion AFAIU indicates that for our polynomial we can detect 2bit\nerrors up to a length of 2147483615 bytes, 3 bit errors up to 2147483615, 3\nand 4 bit errors up to 5243, 5 and 6 bit errors up to 177, 7/8 bit errors up\nto 47.\n\nIMO for our purposes just about all errors are going to be at least at sector\nboundaries, i.e. 512 bytes and thus are at least 8 bit large. At that point we\nare only guaranteed to find a single-byte error (it'll be common to have\nmuch more) up to a lenght of 47bits. Which isn't a useful guarantee.\n\n\nWith that I perhaps have established that CRC guarantees aren't useful for us.\nBut not yet why we should use something else: Given that we already aren't\nrelying on hard guarantees, we could instead just use a fast hash like xxh3.\nhttps://github.com/Cyan4973/xxHash which is fast both for large and small\namounts of data.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Jun 2024 12:37:46 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." 
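For readers unfamiliar with the xxHash library suggested above, its one-shot API is small. The following stand-alone sketch is not anything PostgreSQL does today; it only shows what calling the suggested library looks like, assuming libxxhash is installed and the program is linked with -lxxhash:

/* Sketch only: one-shot XXH3 digest over a WAL-record-sized buffer. */
#include <stdio.h>
#include <string.h>
#include <xxhash.h>

int
main(void)
{
    char        buf[8192];
    XXH64_hash_t digest;

    memset(buf, 'x', sizeof(buf));

    /* One-shot form; a streaming API (XXH3_createState etc.) also exists. */
    digest = XXH3_64bits(buf, sizeof(buf));

    printf("xxh3 = %016llx\n", (unsigned long long) digest);
    return 0;
}

Whether such a hash is an acceptable replacement for CRC-32C in the WAL format is exactly the open question raised in the message above; the sketch makes no claim either way.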
}, { "msg_contents": "Hi,\n\nOn 2024-05-01 15:56:08 +0000, Amonson, Paul D wrote:\n> Comparing the current SSE4.2 implementation of the CRC32C algorithm in\n> Postgres, to an optimized AVX-512 algorithm [0] we observed significant\n> gains. The result was a ~6.6X average multiplier of increased performance\n> measured on 3 different Intel products. Details below. The AVX-512 algorithm\n> in C is a port of the ISA-L library [1] assembler code.\n>\n> Workload call size distribution details (write heavy):\n> * Average was approximately around 1,010 bytes per call\n> * ~80% of the calls were under 256 bytes\n> * ~20% of the calls were greater than or equal to 256 bytes up to the max buffer size of 8192\n\nThis is extremely workload dependent, it's not hard to find workloads with\nlots of very small record and very few big ones... What you observed might\nhave \"just\" been the warmup behaviour where more full page writes have to be\nwritten.\n\nThere a very frequent call computing COMP_CRC32C over just 20 bytes, while\nholding a crucial lock. If we were to do introduce something like this\nAVX-512 algorithm, it'd probably be worth to dispatch differently in case of\ncompile-time known small lengths.\n\n\nHow does the latency of the AVX-512 algorithm compare to just using the CRC32C\ninstruction?\n\n\nFWIW, I tried the v2 patch on my Xeon Gold 5215 workstation, and dies early on\nwith SIGILL:\n\nProgram terminated with signal SIGILL, Illegal instruction.\n#0 0x0000000000d5946c in _mm512_clmulepi64_epi128 (__A=..., __B=..., __C=0)\n at /home/andres/build/gcc/master/install/lib/gcc/x86_64-pc-linux-gnu/15/include/vpclmulqdqintrin.h:42\n42\t return (__m512i) __builtin_ia32_vpclmulqdq_v8di ((__v8di)__A,\n(gdb) bt\n#0 0x0000000000d5946c in _mm512_clmulepi64_epi128 (__A=..., __B=..., __C=0)\n at /home/andres/build/gcc/master/install/lib/gcc/x86_64-pc-linux-gnu/15/include/vpclmulqdqintrin.h:42\n#1 pg_comp_crc32c_avx512 (crc=<optimized out>, data=<optimized out>, length=<optimized out>)\n at ../../../../../home/andres/src/postgresql/src/port/pg_crc32c_avx512.c:163\n#2 0x0000000000819343 in ReadControlFile () at ../../../../../home/andres/src/postgresql/src/backend/access/transam/xlog.c:4375\n#3 0x000000000081c4ac in LocalProcessControlFile (reset=<optimized out>) at ../../../../../home/andres/src/postgresql/src/backend/access/transam/xlog.c:4817\n#4 0x0000000000a8131d in PostmasterMain (argc=argc@entry=85, argv=argv@entry=0x341b08f0)\n at ../../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:902\n#5 0x00000000009b53fe in main (argc=85, argv=0x341b08f0) at ../../../../../home/andres/src/postgresql/src/backend/main/main.c:197\n\n\nCascade lake doesn't have vpclmulqdq, so we shouldn't be getting here...\n\nThis is on an optimied build with meson, with -march=native included in\nc_flags.\n\nRelevant configure output:\n\nChecking if \"XSAVE intrinsics without -mxsave\" : links: NO (cached)\nChecking if \"XSAVE intrinsics with -mxsave\" : links: YES (cached)\nChecking if \"AVX-512 popcount without -mavx512vpopcntdq -mavx512bw\" : links: NO (cached)\nChecking if \"AVX-512 popcount with -mavx512vpopcntdq -mavx512bw\" : links: YES (cached)\nChecking if \"_mm512_clmulepi64_epi128 ... 
with -msse4.2 -mavx512vl -mvpclmulqdq\" : links: YES\nChecking if \"x86_64: popcntq instruction\" compiles: YES (cached)\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Jun 2024 13:11:35 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> -----Original Message-----\n> From: Andres Freund <[email protected]>\n> Sent: Wednesday, June 12, 2024 1:12 PM\n> To: Amonson, Paul D <[email protected]>\n\n> FWIW, I tried the v2 patch on my Xeon Gold 5215 workstation, and dies early\n> on with SIGILL:\n\nNice catch!!! I was testing the bit for the vpclmulqdq in EBX instead of the correct ECX register. New Patch attached. I added defines to make that easier to see those types of bugs rather than a simple index number. I double checked the others as well.\n\nPaul", "msg_date": "Wed, 12 Jun 2024 21:46:19 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> This is extremely workload dependent, it's not hard to find workloads with\n> lots of very small record and very few big ones... What you observed might\n> have \"just\" been the warmup behaviour where more full page writes have to\n> be written.\n\nCan you tell me how to avoid capturing this \"warm-up\" so that the numbers are more accurate?\n \n> There a very frequent call computing COMP_CRC32C over just 20 bytes, while\n> holding a crucial lock. If we were to do introduce something like this\n> AVX-512 algorithm, it'd probably be worth to dispatch differently in case of\n> compile-time known small lengths.\n\nSo are you suggesting that we be able to directly call into the 64/32 bit based algorithm directly from these known small byte cases in the code? I think that we can do that with a separate API being exposed.\n\n> How does the latency of the AVX-512 algorithm compare to just using the\n> CRC32C instruction?\n\nI think I need more information on this one as I am not sure I understand the use case? The same function pointer indirect methods are used with or without the AVX-512 algorithm?\n\nPaul\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 22:42:54 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "On 2024-Jun-12, Amonson, Paul D wrote:\n\n> +/*-------------------------------------------------------------------------\n> + *\n> + * pg_crc32c_avx512.c\n> + *\t Compute CRC-32C checksum using Intel AVX-512 instructions.\n> + *\n> + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group\n> + * Portions Copyright (c) 1994, Regents of the University of California\n> + * Portions Copyright (c) 2024, Intel(r) Corporation\n> + *\n> + * IDENTIFICATION\n> + *\t src/port/pg_crc32c_avx512.c\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n\nHmm, I wonder if the \"(c) 2024 Intel\" line is going to bring us trouble.\n(I bet it's not really necessary anyway.)\n\n> +/*******************************************************************\n> + * pg_crc32c_avx512(): compute the crc32c of the buffer, where the\n> + * buffer length must be at least 256, and a multiple of 64. Based\n> + * on:\n> + *\n> + * \"Fast CRC Computation for Generic Polynomials Using PCLMULQDQ\n> + * Instruction\"\n> + * V. Gopal, E. 
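Since the fix described above came down to reading the wrong CPUID output register, it may help to spell the check out. This is a rough illustration, not the patch's actual code, of detecting the VPCLMULQDQ bit with the <cpuid.h> helper shipped by recent GCC and Clang; per the CPUID documentation, leaf 7 subleaf 0 reports VPCLMULQDQ in ECX bit 10, while most AVX-512 bits such as AVX512F live in EBX, which is how the two registers got mixed up:

/* Illustrative only: run-time detection of VPCLMULQDQ. */
#include <stdbool.h>
#include <stdio.h>
#include <cpuid.h>

static bool
have_vpclmulqdq(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return false;               /* CPUID leaf 7 not available */

    return (ecx & (1u << 10)) != 0; /* VPCLMULQDQ is ECX bit 10 */
}

int
main(void)
{
    printf("VPCLMULQDQ: %s\n", have_vpclmulqdq() ? "yes" : "no");
    return 0;
}

A production check, like the consolidated one being discussed for src/port or src/common, would also need to confirm OS support for the wider register state (OSXSAVE plus an XGETBV check) before selecting the AVX-512 path; that part is omitted here for brevity.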
Ozturk, et al., 2009,\n> + * https://www.researchgate.net/publication/263424619_Fast_CRC_computation#full-text\n> + *\n> + * This Function:\n> + * Copyright 2017 The Chromium Authors\n> + * Copyright (c) 2024, Intel(r) Corporation\n> + *\n> + * Use of this source code is governed by a BSD-style license that can be\n> + * found in the Chromium source repository LICENSE file.\n> + * https://chromium.googlesource.com/chromium/src/+/refs/heads/main/LICENSE\n> + */\n\nAnd this bit doesn't look good. The LICENSE file says:\n\n> // Redistribution and use in source and binary forms, with or without\n> // modification, are permitted provided that the following conditions are\n> // met:\n> //\n> // * Redistributions of source code must retain the above copyright\n> // notice, this list of conditions and the following disclaimer.\n> // * Redistributions in binary form must reproduce the above\n> // copyright notice, this list of conditions and the following disclaimer\n> // in the documentation and/or other materials provided with the\n> // distribution.\n> // * Neither the name of Google LLC nor the names of its\n> // contributors may be used to endorse or promote products derived from\n> // this software without specific prior written permission.\n\nThe second clause essentially says we would have to add a page to our\n\"documentation and/or other materials\" with the contents of the license\nfile.\n\nThere's good reasons for UCB to have stopped using the old BSD license,\nbut apparently Google (or more precisely the Chromium authors) didn't\nget the memo.\n\n\nOur fork distributors spent a lot of time scouring out source cleaning\nup copyrights, a decade ago or two. I bet they won't be happy to see\nthis sort of thing crop up now.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No nos atrevemos a muchas cosas porque son difíciles,\npero son difíciles porque no nos atrevemos a hacerlas\" (Séneca)\n\n\n", "msg_date": "Tue, 18 Jun 2024 09:57:44 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> Hmm, I wonder if the \"(c) 2024 Intel\" line is going to bring us trouble.\r\n> (I bet it's not really necessary anyway.)\r\n\r\nOur lawyer agrees, copyright is covered by the \"PostgreSQL Global Development Group\" copyright line as a contributor.\r\n\r\n> And this bit doesn't look good. The LICENSE file says:\r\n...\r\n> > // * Redistributions in binary form must reproduce the above\r\n> > // copyright notice, this list of conditions and the following\r\n> > disclaimer // in the documentation and/or other materials provided\r\n> > with the // distribution.\r\n...\r\n> The second clause essentially says we would have to add a page to our\r\n> \"documentation and/or other materials\" with the contents of the license file.\r\n\r\nAccording to one of Intel’s lawyers, 55 instances of this clause was found when they searched in the PostgreSQL repository. Therefore, I assume that this obligation has either been satisfied or determined not to apply, given that the second BSD clause already appears in the PostgreSQL source tree. I might have misunderstood the concern, but the lawyer believes this is a non-issue. 
Could you please provide more clarifying details about the concern?\r\n\r\nThanks,\r\nPaul", "msg_date": "Tue, 18 Jun 2024 17:14:08 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "On Tue, Jun 18, 2024 at 05:14:08PM +0000, Amonson, Paul D wrote:\n> > And this bit doesn't look good. The LICENSE file says:\n> ...\n> > > // * Redistributions in binary form must reproduce the above\n> > > // copyright notice, this list of conditions and the following\n> > > disclaimer // in the documentation and/or other materials provided\n> > > with the // distribution.\n> ...\n> > The second clause essentially says we would have to add a page to our\n> > \"documentation and/or other materials\" with the contents of the license file.\n> \n> According to one of Intel’s lawyers, 55 instances of this clause was found when they searched in the PostgreSQL repository. Therefore, I assume that this obligation has either been satisfied or determined not to apply, given that the second BSD clause already appears in the PostgreSQL source tree. I might have misunderstood the concern, but the lawyer believes this is a non-issue. Could you please provide more clarifying details about the concern?\n\nYes, I can confirm that:\n\n\tgrep -Rl 'Redistributions in binary form must reproduce' . | wc -l\n\nreports 54; file list attached.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Tue, 18 Jun 2024 13:20:50 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "On Tue, Jun 18, 2024 at 01:20:50PM -0400, Bruce Momjian wrote:\n> On Tue, Jun 18, 2024 at 05:14:08PM +0000, Amonson, Paul D wrote:\n> > > And this bit doesn't look good. The LICENSE file says:\n> > ...\n> > > > // * Redistributions in binary form must reproduce the above\n> > > > // copyright notice, this list of conditions and the following\n> > > > disclaimer // in the documentation and/or other materials provided\n> > > > with the // distribution.\n> > ...\n> > > The second clause essentially says we would have to add a page to our\n> > > \"documentation and/or other materials\" with the contents of the license file.\n> > \n> > According to one of Intel’s lawyers, 55 instances of this clause was found when they searched in the PostgreSQL repository. Therefore, I assume that this obligation has either been satisfied or determined not to apply, given that the second BSD clause already appears in the PostgreSQL source tree. I might have misunderstood the concern, but the lawyer believes this is a non-issue. Could you please provide more clarifying details about the concern?\n> \n> Yes, I can confirm that:\n> \n> \tgrep -Rl 'Redistributions in binary form must reproduce' . | wc -l\n> \n> reports 54; file list attached.\n\nI am somewhat embarrassed by this since we made the Intel lawyers find\nsomething that was in our own source code.\n\nFirst, the \"advertizing clause\" in the 4-clause license:\n\n\t 3. 
All advertising materials mentioning features or use of this\n\tsoftware must display the following acknowledgement: This product\n\tincludes software developed by the University of California,\n\tBerkeley and its contributors.\n\nand was disavowed by Berkeley on July 22nd, 1999:\n\n\thttps://elrc-share.eu/static/metashare/licences/BSD-3-Clause.pdf\n\nWhile the license we are concerned about does not have this clause, it\ndoes have:\n\n\t 2. Redistributions in binary form must reproduce the above\n\tcopyright notice, this list of conditions and the following\n\tdisclaimer in the documentation and/or other materials provided\n\twith the distribution.\n\nI assume that must also include the name of the copyright holder.\n\nI think that means we need to mention The Regents of the University of\nCalifornia in our copyright notice, which we do. However several\nnon-Regents of the University of California copyright holder licenses\nexist in our source tree, and accepting this AVX-512 patch would add\nanother one. Specifically, I see existing entries for:\n\n\tAaron D. Gifford\n\tBoard of Trustees of the University of Illinois\n\tDavid Burren\n\tEric P. Allman\n\tJens Schweikhardt\n\tMarko Kreen\n\tSun Microsystems, Inc.\n\tWIDE Project\n\t\nNow, some of these are these names plus Berkeley, and some are just the\nnames above.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 18 Jun 2024 14:00:34 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "On Tue, Jun 18, 2024 at 02:00:34PM -0400, Bruce Momjian wrote:\n> While the license we are concerned about does not have this clause, it\n> does have:\n> \n> \t 2. Redistributions in binary form must reproduce the above\n> \tcopyright notice, this list of conditions and the following\n> \tdisclaimer in the documentation and/or other materials provided\n> \twith the distribution.\n> \n> I assume that must also include the name of the copyright holder.\n> \n> I think that means we need to mention The Regents of the University of\n> California in our copyright notice, which we do. However several\n> non-Regents of the University of California copyright holder licenses\n> exist in our source tree, and accepting this AVX-512 patch would add\n> another one. Specifically, I see existing entries for:\n> \n> \tAaron D. Gifford\n> \tBoard of Trustees of the University of Illinois\n> \tDavid Burren\n> \tEric P. 
Allman\n> \tJens Schweikhardt\n> \tMarko Kreen\n> \tSun Microsystems, Inc.\n> \tWIDE Project\n> \t\n> Now, some of these are these names plus Berkeley, and some are just the\n> names above.\n\nIn summary, either we are doing something wrong in how we list\ncopyrights in our documentation, or we don't need to make any changes for\nthis Intel patch.\n\nOur license is at:\n\n\thttps://www.postgresql.org/about/licence/\n\nThe Intel copyright in the source code is:\n\n\t * Copyright 2017 The Chromium Authors\n\t * Copyright (c) 2024, Intel(r) Corporation\n\t *\n\t * Use of this source code is governed by a BSD-style license that can be\n\t * found in the Chromium source repository LICENSE file.\n\t * https://chromium.googlesource.com/chromium/src/+/refs/heads/main/LICENSE\n\nand the URL contents are:\n\n\t// Copyright 2015 The Chromium Authors\n\t//\n\t// Redistribution and use in source and binary forms, with or without\n\t// modification, are permitted provided that the following conditions are\n\t// met:\n\t//\n\t// * Redistributions of source code must retain the above copyright\n\t// notice, this list of conditions and the following disclaimer.\n\t// * Redistributions in binary form must reproduce the above\n\t// copyright notice, this list of conditions and the following disclaimer\n\t// in the documentation and/or other materials provided with the\n\t// distribution.\n\t// * Neither the name of Google LLC nor the names of its\n\t// contributors may be used to endorse or promote products derived from\n\t// this software without specific prior written permission.\n\t//\n\t// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\t// \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n\t// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n\t// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n\t// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n\t// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n\t// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n\t// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n\t// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n\t// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\t// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nGoogle LLC is added to clause three, and I assume Intel is also covered\nby this because it is considered \"the names of its contributors\", maybe?\n\nIt would be good to know exactly what, if any, changes the Intel lawyers\nwant us to make to our license if we accept this patch.\n\nThere are also different versions of clause three in our source tree.\nThe Postgres license only lists the University of California in our\nequivalent of clause three, meaning that there are three-clause BSD\nlicenses in our source tree that reference entities that we don't\nreference in the Postgres license. Oddly, the Postgres license doesn't\neven disclaim warranties for the PostgreSQL Global Development Group,\nonly for Berkeley.\n\nAn even bigger issue is that we are distributing 3-clause BSD licensed\nsoftware under the Postgres license, which is not the 3-clause BSD\nlicense. 
I think we were functioning under the assuption that the\nlicenses are compatibile, so can be combined, which is true, but I don't\nthink we can assume the individual licenses can be covered by our one\nlicense, can we?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 19 Jun 2024 09:43:12 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> It would be good to know exactly what, if any, changes the Intel lawyers want\n> us to make to our license if we accept this patch.\n\nI asked about this and there is nothing Intel requires here license wise. They believe that there is nothing wrong with including Clause-3 BSD like licenses under the PostgreSQL license. They only specified that for the source file, the applying license need to be present either as a link (which was previously discouraged in this thread) or the full text. Please note that I checked and for this specific Chromium license there is not SPDX codename so the entire text is required.\n\nThanks,\nPaul\n\n\n\n", "msg_date": "Tue, 25 Jun 2024 17:41:12 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "On Tue, Jun 25, 2024 at 05:41:12PM +0000, Amonson, Paul D wrote:\n> > It would be good to know exactly what, if any, changes the Intel\n> > lawyers want us to make to our license if we accept this patch.\n>\n> I asked about this and there is nothing Intel requires here license\n> wise. They believe that there is nothing wrong with including Clause-3\n> BSD like licenses under the PostgreSQL license. They only specified\n> that for the source file, the applying license need to be present\n> either as a link (which was previously discouraged in this thread)\n> or the full text. Please note that I checked and for this specific\n> Chromium license there is not SPDX codename so the entire text is\n> required.\n\nOkay, that is very interesting. Yes, we will have no problem\nreproducing the exact license text in the source code. I think we can\nremove the license issue as a blocker for this patch.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 25 Jun 2024 13:48:43 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> Okay, that is very interesting. Yes, we will have no problem reproducing the\n> exact license text in the source code. I think we can remove the license issue\n> as a blocker for this patch.\n\nHi,\n\nI was wondering if I can I get a review please. I am interested in the refactor question for the HW capability tests as well as an actual implementation review. I create a commit fest entry for this thread.\n\nThanks,\nPaul\n\n\n", "msg_date": "Thu, 18 Jul 2024 16:33:22 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "On Wed, Jun 12, 2024 at 12:37:46PM -0700, Andres Freund wrote:\n> I'm wonder if this isn't going in the wrong direction. 
We're using CRCs for\n> something they're not well suited for in my understanding - and are paying a\n> reasonably high price for it, given that even hardware accelerated CRCs aren't\n> blazingly fast.\n\nI tend to agree, especially that we should be more concerned about all\nbytes after a certain point being garbage than bit flips. (I think we\nshould also care about bit flips, but I hope those are much less common\nthan half-written WAL records.)\n\n> With that I perhaps have established that CRC guarantees aren't useful for us.\n> But not yet why we should use something else: Given that we already aren't\n> relying on hard guarantees, we could instead just use a fast hash like xxh3.\n> https://github.com/Cyan4973/xxHash which is fast both for large and small\n> amounts of data.\n\nWould it be out of the question to reuse the page checksum code (i.e., an\nFNV-1a derivative)? The chart in your link claims that xxh3 is\nsubstantially faster than \"FNV64\", but I wonder if the latter was\nvectorized. I don't know how our CRC-32C implementations (and proposed\nimplementations) compare, either.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 8 Aug 2024 14:28:31 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "Hi,\n\nHere are the latest patches for the accelerated CRC32c algorithm. I did the following to create these refactored patches:\n\n1) From the main branch I moved all x86_64 hardware checks from the various locations into a single location. I did not move any ARM tests as I would have no way to test them for validity. However, an ARM section could be added to my consolidated source files.\n\nOnce I had this working and verified that there were no regressions....\n\n2) I ported the AVX-512 crc32c code as a second patch adding the new HW checks into the previously created file for HW checks from patch 0001.\n\nI reran all the basic tests again to make sure that the performance numbers were within the margin of error when compared to my original finding. This step showed similar numbers (see origin post) around 1.45X on average. I also made sure that if compiled with the AVX-512 features and ran on HW without these features the Postgres server still worked without throwing illegal instruction exceptions.\n\nPlease review the attached patches.\n\nThanks,\nPaul", "msg_date": "Thu, 22 Aug 2024 15:14:32 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "Thanks for the new patches.\n\nOn Thu, Aug 22, 2024 at 03:14:32PM +0000, Amonson, Paul D wrote:\n> I reran all the basic tests again to make sure that the performance\n> numbers were within the margin of error when compared to my original\n> finding. This step showed similar numbers (see origin post) around 1.45X\n> on average. I also made sure that if compiled with the AVX-512 features\n> and ran on HW without these features the Postgres server still worked\n> without throwing illegal instruction exceptions.\n\nUpthread [0], Andres suggested dispatching to a different implementation\nfor compile-time-known small lengths. Have you looked into that? In your\noriginal post, you noted a 14% regression for records smaller than 256\nbytes, which is not an uncommon case for Postgres. 
IMO we should try to\nmitigate that as much as possible.\n\n[0] https://postgr.es/m/20240612201135.kk77tiqcux77lgev%40awork3.anarazel.de\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 22 Aug 2024 10:29:00 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> Upthread [0], Andres suggested dispatching to a different implementation for\n> compile-time-known small lengths. Have you looked into that? In your\n> original post, you noted a 14% regression for records smaller than 256 bytes,\n> which is not an uncommon case for Postgres. IMO we should try to mitigate\n> that as much as possible.\n\nSo, without adding even more conditional tests (causing more latency), I can expose a new macro called COMP_CRC32C_SMALL that can be called from known locations where the size is known to be 20bytes or less (or any fixed size less than 256). Other than that, there is no method I know of to pre-decide calling a function based on input size. Is there any concrete thought on this?\n\nPaul\n\n\n\n", "msg_date": "Thu, 22 Aug 2024 16:19:20 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> Upthread [0], Andres suggested dispatching to a different implementation for\n> compile-time-known small lengths. Have you looked into that? In your\n> original post, you noted a 14% regression for records smaller than 256 bytes,\n> which is not an uncommon case for Postgres. IMO we should try to mitigate\n> that as much as possible.\n\nHi,\n\nOk I added a patch that exposed a new macro CRC32C_COMP_SMALL for targeted fixed size < 256 use cases in Postgres. As for mitigating the regression in general, I have not been able to work up a fallback (i.e. <256 bytes) that doesn't involve runtime checks which cause latency. I also attempted to change the AVX512 fallback from the current algorithm in the avx512 implementation to the SSE original implementation, but I am not seeing any real difference for this use case in performance.\n\nI am open to any other suggestions.\n\nPaul", "msg_date": "Mon, 26 Aug 2024 17:09:35 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "On Mon, Aug 26, 2024 at 05:09:35PM +0000, Amonson, Paul D wrote:\n> Ok I added a patch that exposed a new macro CRC32C_COMP_SMALL for\n> targeted fixed size < 256 use cases in Postgres. As for mitigating the\n> regression in general, I have not been able to work up a fallback (i.e.\n> <256 bytes) that doesn't involve runtime checks which cause latency. I\n> also attempted to change the AVX512 fallback from the current algorithm\n> in the avx512 implementation to the SSE original implementation, but I am\n> not seeing any real difference for this use case in performance.\n\nI'm curious about where exactly the regression is coming from. Is it\npossible that your build for the SSE 4.2 tests was using it\nunconditionally, i.e., optimizing away the function pointer?\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 26 Aug 2024 13:38:30 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> I'm curious about where exactly the regression is coming from. 
Is it possible\n> that your build for the SSE 4.2 tests was using it unconditionally, i.e.,\n> optimizing away the function pointer?\n\nI am calling the SSE 4.2 implementation directly; I am not even building the pg_sse42_*_choose.c file with the AVX512 choice. As best I can tell there is one extra function call and one extra int64 conditional test when bytes are <256 and a of course a JMP instruction to skip the AVX512 implementation.\n\nPaul\n\n\n\n", "msg_date": "Mon, 26 Aug 2024 18:44:55 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "On Mon, Aug 26, 2024 at 06:44:55PM +0000, Amonson, Paul D wrote:\n>> I'm curious about where exactly the regression is coming from. Is it possible\n>> that your build for the SSE 4.2 tests was using it unconditionally, i.e.,\n>> optimizing away the function pointer?\n> \n> I am calling the SSE 4.2 implementation directly; I am not even building\n> the pg_sse42_*_choose.c file with the AVX512 choice. As best I can tell\n> there is one extra function call and one extra int64 conditional test\n> when bytes are <256 and a of course a JMP instruction to skip the AVX512\n> implementation.\n\nAnd this still shows the ~14% regression in your original post?\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 26 Aug 2024 13:50:00 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> And this still shows the ~14% regression in your original post?\n\nAt the small buffer sizes the margin of error or \"noise\" is larger, 7-11%. My average could be just bad luck. It will take me a while to re-setup for full data collection runs but I can try it again if you like.\n\nPaul\n\n\n\n", "msg_date": "Mon, 26 Aug 2024 18:54:58 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "On Mon, Aug 26, 2024 at 06:54:58PM +0000, Amonson, Paul D wrote:\n>> And this still shows the ~14% regression in your original post?\n> \n> At the small buffer sizes the margin of error or \"noise\" is larger,\n> 7-11%. My average could be just bad luck. It will take me a while to\n> re-setup for full data collection runs but I can try it again if you\n> like.\n\nIMHO that would be useful to establish the current state of the patch set\nfrom a performance standpoint, especially since you've added code intended\nto mitigate the regression.\n\n+#define COMP_CRC32C_SMALL(crc, data, len) \\\n+\t((crc) = pg_comp_crc32c_sse42((crc), (data), (len)))\n\nMy interpretation of Andres's upthread suggestion is that we'd add the\nlength check within the macro instead of introducing a separate one. We'd\nexpect the compiler to optimize out comparisons for small lengths known at\ncompile time and always call the existing implementation (which may still\ninvolve a function pointer in most cases).\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 26 Aug 2024 14:08:05 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." 
}, { "msg_contents": "> IMHO that would be useful to establish the current state of the patch set from\n> a performance standpoint, especially since you've added code intended to\n> mitigate the regression.\n\nOk.\n\n> +#define COMP_CRC32C_SMALL(crc, data, len) \\\n> +\t((crc) = pg_comp_crc32c_sse42((crc), (data), (len)))\n> \n> My interpretation of Andres's upthread suggestion is that we'd add the length\n> check within the macro instead of introducing a separate one. We'd expect\n> the compiler to optimize out comparisons for small lengths known at compile\n> time and always call the existing implementation (which may still involve a\n> function pointer in most cases).\n\nHow does the m4/compiler know the difference between a const \"len\" and a dynamic \"len\"? I already when the code and changed constant sizes (structure sizes) to the new macro. Can you give an example of how this could work?\n\nPaul\n\n\n\n", "msg_date": "Mon, 26 Aug 2024 19:15:47 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "On Mon, Aug 26, 2024 at 07:15:47PM +0000, Amonson, Paul D wrote:\n>> +#define COMP_CRC32C_SMALL(crc, data, len) \\\n>> +\t((crc) = pg_comp_crc32c_sse42((crc), (data), (len)))\n>> \n>> My interpretation of Andres's upthread suggestion is that we'd add the length\n>> check within the macro instead of introducing a separate one. We'd expect\n>> the compiler to optimize out comparisons for small lengths known at compile\n>> time and always call the existing implementation (which may still involve a\n>> function pointer in most cases).\n> \n> How does the m4/compiler know the difference between a const \"len\" and a\n> dynamic \"len\"? I already when the code and changed constant sizes\n> (structure sizes) to the new macro. Can you give an example of how this\n> could work?\n\nThings like sizeof() and offsetof() are known at compile time, so the\ncompiler will recognize when a condition is always true or false and\noptimize it out accordingly. In cases where the value cannot be known at\ncompile time, checking the length in the macro and dispatching to a\ndifferent implementation may still be advantageous, especially when the\ndifferent implementation doesn't involve function pointers.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 26 Aug 2024 14:32:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "> Things like sizeof() and offsetof() are known at compile time, so the compiler\n> will recognize when a condition is always true or false and optimize it out\n> accordingly. In cases where the value cannot be known at compile time,\n> checking the length in the macro and dispatching to a different\n> implementation may still be advantageous, especially when the different\n> implementation doesn't involve function pointers.\n\nOk, multiple issues resolved and have new numbers:\n\n1) Implemented the new COMP_CRC32 macro with the comparison and choice of avx512 vs. SSE42 at compile time for static structures.\n2) You were right about the baseline numbers, it seems that the binaries were compiled with the direct call version of the SSE 4.2 CRC implementation thus avoiding the function pointer. 
I rebuilt with USE_SSE42_CRC32C_WITH_RUNTIME_CHECK for the numbers below.\n3) ran through all the tests again and ended up with no regression (meaning run sets would fall either 0.5% below or 1.5% above the baseline and the margin of error was MUCH tighter this time at ~3%. :)\n\nNew Table of Rates (looks correct with fixed font width) below:\n\n+------------------+----------------+----------------+------------------+-------+------+\n| Rate in bytes/us | SDP (SPR) | m6i | m7i | | |\n+------------------+----------------+----------------+------------------+ Multi-| |\n| higher is better | SSE42 | AVX512 | SSE42 | AVX512 | SSE42 | AVX512 | plier | % |\n+==================+=================+=======+========+========+========+=======+======+\n| AVG Rate 64-8192 | 10,095 | 82,101 | 8,591 | 38,652 | 11,867 | 83,194 | 6.68 | 568% |\n+------------------+--------+--------+-------+--------+--------+--------+-------+------+\n| AVG Rate 64-255 | 9,034 | 9,136 | 7,619 | 7,437 | 9,030 | 9,293 | 1.01 | 1% |\n+------------------+--------+--------+-------+--------+--------+--------+-------+------+\n\n* With a data profile of 99% buffer sizes <256 bytes the improvement is still 6% and will not regress (except withing the margin of error)!\n* There is not a regression anymore (previously showing a 14% regression).\n\nThanks for the pointers!!!\nPaul", "msg_date": "Tue, 27 Aug 2024 20:42:14 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." }, { "msg_contents": "Hi all,\n\nI will be retiring from Intel at the end of this week. I wanted to introduce the engineer who will be taking over the CRC32c proposal and commit fest entry.\n\nDevulapalli, Raghuveer <[email protected]>\n\nI have brought him up to speed and he will be the go-to for technical review comments and questions. Please welcome him into the community.\n\nThanks,\nPaul\n\n\n\n", "msg_date": "Tue, 24 Sep 2024 16:06:16 +0000", "msg_from": "\"Amonson, Paul D\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Proposal for Updating CRC32C with AVX-512 Algorithm." } ]
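To make the compile-time dispatch idea from the preceding messages concrete, here is a self-contained sketch. The function names, the plain bytewise CRC-32C standing in for both hardware paths, and the 256-byte cutoff are all assumptions for illustration and not the actual patch:

/* Sketch of length-based dispatch; names and cutoff are made up. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t pg_crc32c;

/* Plain bytewise CRC-32C (Castagnoli, reflected poly 0x82F63B78),
 * standing in for both the "small" and "large" hardware paths. */
static pg_crc32c
demo_crc32c(pg_crc32c crc, const void *data, size_t len)
{
    const unsigned char *p = data;

    while (len-- > 0)
    {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F63B78U & (0U - (crc & 1)));
    }
    return crc;
}

static pg_crc32c
demo_crc32c_small(pg_crc32c crc, const void *data, size_t len)
{
    return demo_crc32c(crc, data, len);     /* think: SSE4.2 path */
}

static pg_crc32c
demo_crc32c_large(pg_crc32c crc, const void *data, size_t len)
{
    return demo_crc32c(crc, data, len);     /* think: AVX-512 path */
}

/*
 * When "len" is a compile-time constant (sizeof(...), offsetof(...)),
 * the comparison folds away and only one direct call remains; only
 * variable-length call sites pay for a single predictable branch.
 */
#define DEMO_COMP_CRC32C(crc, data, len) \
    ((crc) = ((len) < 256 ? \
              demo_crc32c_small((crc), (data), (len)) : \
              demo_crc32c_large((crc), (data), (len))))

int
main(void)
{
    struct { uint64_t a; uint64_t b; } hdr = {1, 2};
    char        big[8192];
    pg_crc32c   crc = 0xFFFFFFFF;

    memset(big, 'x', sizeof(big));
    DEMO_COMP_CRC32C(crc, &hdr, sizeof(hdr));   /* constant length: small path */
    DEMO_COMP_CRC32C(crc, big, sizeof(big));    /* constant length: large path */
    printf("crc = %08x\n", crc ^ 0xFFFFFFFF);
    return 0;
}

The point is the macro: for call sites with a length such as sizeof(hdr) the branch disappears at compile time, which is how the small fixed-size cases that showed the earlier regression avoid paying for the extra check.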
[ { "msg_contents": "Hi PostgreSQL Community,\nI'm currently delving into Postgres HLL (HyperLogLog) functionality and\nhave encountered an unexpected behavior while executing queries from the \"\ncumulative_add_sparse_edge.sql\n<https://github.com/citusdata/postgresql-hll/blob/master/sql/cumulative_add_sparse_edge.sql#L28-L36>\"\nregress test. This particular test data file\n<https://github.com/citusdata/postgresql-hll/blob/master/sql/data/cumulative_add_sparse_edge.csv#L515-L516>\ninvolves\nthree columns, with the last column representing an HLL (HyperLogLog) value\nderived from the previous HLL value and the current raw value.\n\nUpon manual inspection of the query responsible for deriving the last row's\nHLL value, I noticed a discrepancy. When executing the query:\n\"\"\"\n-- '\\x148B481002....' is second last rows hll value\nSELECT hll_add('\\x148B481002.....', hll_hashval(2561));\n\"\"\"\ninstead of obtaining the expected value (''\\x148B481002....''), I received\na different output which is ('\\x138b48000200410061008100a1 ........').\n\nI am using\npostgres=> select version();\n version\n\n-------------------------------------------------------------------------------------------------------------\n PostgreSQL 16.1 on aarch64-unknown-linux-gnu, compiled by\naarch64-unknown-linux-gnu-gcc (GCC) 9.5.0, 64-bit\n\nMy initial assumption is that this could potentially be attributed to a\nprecision error. However, I'm reaching out to seek clarity on why this\ndisparity is occurring and to explore potential strategies for mitigating\nit (as I want the behaviour to be consistent to regress test file).\n\nRegards\nAyush Vatsa\n\nHi PostgreSQL Community,I'm currently delving into Postgres HLL (HyperLogLog) functionality and have encountered an unexpected behavior while executing queries from the \"cumulative_add_sparse_edge.sql\" regress test. This particular test data file involves three columns, with the last column representing an HLL (HyperLogLog) value derived from the previous HLL value and the current raw value.Upon manual inspection of the query responsible for deriving the last row's HLL value, I noticed a discrepancy. When executing the query:\"\"\"-- '\\x148B481002....' is second last rows hll valueSELECT hll_add('\\x148B481002.....', hll_hashval(2561));\"\"\"instead of obtaining the expected value (''\\x148B481002....''), I received a different output which is ('\\x138b48000200410061008100a1 ........').I am usingpostgres=> select version();                                                   version                                                   ------------------------------------------------------------------------------------------------------------- PostgreSQL 16.1 on aarch64-unknown-linux-gnu, compiled by aarch64-unknown-linux-gnu-gcc (GCC) 9.5.0, 64-bitMy initial assumption is that this could potentially be attributed to a precision error. 
However, I'm reaching out to seek clarity on why this disparity is occurring and to explore potential strategies for mitigating it (as I want the behaviour to be consistent to regress test file).RegardsAyush Vatsa", "msg_date": "Wed, 1 May 2024 22:39:49 +0530", "msg_from": "Ayush Vatsa <[email protected]>", "msg_from_op": true, "msg_subject": "Query Discrepancy in Postgres HLL Test" }, { "msg_contents": "On Wed, May 1, 2024 at 1:10 PM Ayush Vatsa <[email protected]> wrote:\n> I'm currently delving into Postgres HLL (HyperLogLog) functionality and have encountered an unexpected behavior while executing queries from the \"cumulative_add_sparse_edge.sql\" regress test. This particular test data file involves three columns, with the last column representing an HLL (HyperLogLog) value derived from the previous HLL value and the current raw value.\n>\n> Upon manual inspection of the query responsible for deriving the last row's HLL value, I noticed a discrepancy. When executing the query:\n> \"\"\"\n> -- '\\x148B481002....' is second last rows hll value\n> SELECT hll_add('\\x148B481002.....', hll_hashval(2561));\n> \"\"\"\n> instead of obtaining the expected value (''\\x148B481002....''), I received a different output which is ('\\x138b48000200410061008100a1 ........').\n\nPostgreSQL has no function called hll_add or hll_hashval, and no\nregression test file called cumulative_add_sparse_edge.sql. A quick\nGoogle search suggests that these things are part of citusdata's fork\nof PostgreSQL, so you might want to contact them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 May 2024 15:07:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Discrepancy in Postgres HLL Test" } ]
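One hedged observation for anyone hitting the same symptom: in the citusdata extension the leading byte of the serialized hll value appears to encode its storage mode (explicit, sparse or full), so two values that differ byte-for-byte, as \x14... and \x13... do above, may still describe the same distinct set. If that is what is happening here, comparing estimates rather than raw bytes is the more meaningful check. A minimal sketch, assuming the postgresql-hll extension is installed (hll_empty, hll_add, hll_hashval and hll_cardinality are extension functions, not core PostgreSQL):

-- Compare what the operations estimate rather than how they are serialized.
SELECT hll_cardinality(hll_add(hll_empty(), hll_hashval(2561)));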
[ { "msg_contents": "Hi hackers,\n\nIt seems the get_actual_variable_range function has a long history of\nfixes attempting to improve its worst-case behaviour, most recently in\n9c6ad5eaa95, which limited the number of heap page fetches to 100.\nThere's currently no limit on the number of index pages fetched.\n\nWe managed to get into trouble after deleting a large number of rows\n(~8 million) from the start of an index, which caused planning time to\nblow up on a hot (~250 calls/s) query. During the incident `explain\n(analyze, buffers)` looked like this:\nPlanning:\n Buffers: shared hit=88311\nPlanning Time: 249.902 ms\nExecution Time: 0.066 ms\n\nThe planner was burning a huge amount of CPU time looking through\nindex pages for the first visible tuple. The problem eventually\nresolved when the affected index was vacuumed, but that took several\nhours to complete.\n\nThere's a reproduction with a smaller dataset below.\n\nOur current workaround to safely bulk delete from these large tables\ninvolves delaying deletion of the minimum row until after a vacuum has\nrun, so there's always a visible tuple near the start of the index.\nIt's not realistic for us to run vacuums more frequently (ie. after\ndeleting a smaller number of rows) because they're so time-consuming.\n\nThe previous discussion [1] touched on the idea of also limiting the\nnumber of index page fetches, but there were doubts about the safety\nof back-patching and the ugliness of modifying the index AM API to\nsupport this.\n\nI would like to submit our experience as evidence that the lack of\nlimit on index page fetches is a real problem. Even if a fix for this\ndoesn't get back-patched, it would be nice to see it in a major\nversion.\n\nAs a starting point, I've updated the WIP index page limit patch from\nSimon Riggs [2] to apply cleanly to master.\n\nRegards,\nRian\n\n[1] https://www.postgresql.org/message-id/flat/CAKZiRmznOwi0oaV%3D4PHOCM4ygcH4MgSvt8%3D5cu_vNCfc8FSUug%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CANbhV-GUAo5cOw6XiqBjsLVBQsg%2B%3DkpcCCWYjdTyWzLP28ZX-Q%40mail.gmail.com\n\n=# create table test (id bigint primary key) with (autovacuum_enabled = 'off');\n=# insert into test select generate_series(1,10000000);\n=# analyze test;\n\nAn explain with no dead tuples looks like this:\n=# explain (analyze, buffers) select id from test where id in (select\nid from test order by id desc limit 1);\nPlanning:\n Buffers: shared hit=8\nPlanning Time: 0.244 ms\nExecution Time: 0.067 ms\n\nBut if we delete a large number of rows from the start of the index:\n=# delete from test where id <= 4000000;\n\nThe performance doesn't become unreasonable immediately - it's limited\nto visiting 100 heap pages while it's setting killed bits on the index\ntuples:\n=# explain (analyze, buffers) select id from test where id in (select\nid from test order by id desc limit 1);\nPlanning:\n Buffers: shared hit=1 read=168 dirtied=163\nPlanning Time: 5.910 ms\nExecution Time: 0.107 ms\n\nBut the number of index buffers visited increases on each query, and\neventually all the killed bits are set:\n$ for i in {1..500}; do psql test -c 'select id from test where id in\n(select id from test order by id desc limit 1)' >/dev/null; done\n=# explain (analyze, buffers) select id from test where id in (select\nid from test order by id desc limit 1);\nPlanning:\n Buffers: shared hit=11015\nPlanning Time: 35.772 ms\nExecution Time: 0.070 ms\n\nWith the patch:\n=# explain (analyze, buffers) select id from test where id in (select\nid from 
test order by id desc limit 1);\nPlanning:\n Buffers: shared hit=107\nPlanning Time: 0.377 ms\nExecution Time: 0.045 ms", "msg_date": "Thu, 2 May 2024 16:12:33 +1000", "msg_from": "Rian McGuire <[email protected]>", "msg_from_op": true, "msg_subject": "Limit index pages visited in planner's get_actual_variable_range" }, { "msg_contents": "On Thu, May 2, 2024 at 2:12 AM Rian McGuire <[email protected]> wrote:\n> The planner was burning a huge amount of CPU time looking through\n> index pages for the first visible tuple. The problem eventually\n> resolved when the affected index was vacuumed, but that took several\n> hours to complete.\n\nThis is exactly the same problem recently that Mark Callaghan recently\nencountered when benchmarking Postgres using a variant of his insert\nbenchmark:\n\nhttps://smalldatum.blogspot.com/2024/01/updated-insert-benchmark-postgres-9x-to_10.html\nhttps://smalldatum.blogspot.com/2024/01/updated-insert-benchmark-postgres-9x-to_27.html\nhttps://smalldatum.blogspot.com/2024/03/trying-to-tune-postgres-for-insert.html\n\nThis is a pretty nasty sharp edge. I bet many users encounter this\nproblem without ever understanding it.\n\n> The previous discussion [1] touched on the idea of also limiting the\n> number of index page fetches, but there were doubts about the safety\n> of back-patching and the ugliness of modifying the index AM API to\n> support this.\n\nFundamentally, the problem is that the heuristics that we have don't\ncare about the cost of reading index leaf pages. All that it takes is\na workload where that becomes the dominant cost -- such a workload\nwon't be helped at all by the existing heap-focussed heuristic. This\nseems obvious to me.\n\nIt seems natural to fix the problem by teaching the heuristics to give\nat least some consideration to the cost of reading index leaf pages --\nmore than zero. The patch from Simon seems like the right general\napproach to me, since it precisely targets the underlying problem.\nWhat other approach is there, really?\n\n> I would like to submit our experience as evidence that the lack of\n> limit on index page fetches is a real problem. Even if a fix for this\n> doesn't get back-patched, it would be nice to see it in a major\n> version.\n\nI find that very easy to believe.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 May 2024 11:05:25 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limit index pages visited in planner's get_actual_variable_range" } ]
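For anyone who needs the workaround described in the first message before a proper fix is available, a schematic version of the bulk-delete pattern, reusing the "test" table and cutoff from the reproduction above (the names are only placeholders for a real workload), might look like this:

-- Keep the current minimum row alive so the planner always finds a visible
-- tuple near the start of the index, vacuum away the dead index entries,
-- and only then remove the sentinel row.
DELETE FROM test
WHERE id <= 4000000
  AND id > (SELECT min(id) FROM test);

VACUUM test;

DELETE FROM test
WHERE id = (SELECT min(id) FROM test);

This only mitigates the symptom; the attached patch aims to bound the work inside get_actual_variable_range() itself.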
[ { "msg_contents": "I tried to initialize a table with values for smallint columns.\n\nThe final goal is to get mask values for logical operations.\n\n\nThe insert failed with ERROR: smallint out of range.\n\n\nthe same occurs when using a simple select statement like:\n\nselect -32768::smallint;\nselect -2147483648::int;\nselect -9223372036854775808::bigint;\n\nThese limit values are taken from the documentation 8.1.1 Integer Types\n\nThis occurs on PG16.2 on Windows or Linux (64bit).\n\nThis prevents me to enter the binary value 0b10000000_00000000 into a smallint column\n(I could use some other tricks, but this is ugly!)\n\n-----------\n\npostgres=# select version ();\n version\n----------------------------------------------------------------------------------------------------------\n PostgreSQL 16.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 14.0.1 20240411 (Red Hat 14.0.1-0), 64-bit\n(1 Zeile)\n\npostgres=# select -32768::smallint;\nERROR: smallint out of range\n\nThank you for looking\n\n\nHans Buschmann\n\n\n\n\n\n\n\n\nI tried to initialize a table with values for smallint columns.\nThe final goal is to get mask values for logical operations.\n\n\nThe insert failed with ERROR: smallint out of range.\n\n\n\nthe same occurs when using a simple select statement like:\n\n\nselect -32768::smallint;\n\nselect -2147483648::int;\n\nselect -9223372036854775808::bigint;\n\n\n\nThese limit values are taken from the documentation 8.1.1\n Integer Types\n\n\nThis occurs on PG16.2 on Windows or Linux (64bit).\n\n\nThis prevents me to enter the binary value 0b10000000_00000000\n into a smallint column\n(I could use some other tricks, but this is ugly!)\n\n\n-----------\n\n\npostgres=# select version ();\n                                                 version\n----------------------------------------------------------------------------------------------------------\n PostgreSQL 16.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 14.0.1 20240411 (Red Hat 14.0.1-0), 64-bit\n(1 Zeile)\n\n\npostgres=# select -32768::smallint;\nERROR:  smallint out of range\n\n\nThank you for looking\n\n\n\nHans Buschmann", "msg_date": "Thu, 2 May 2024 11:25:32 +0000", "msg_from": "Hans Buschmann <[email protected]>", "msg_from_op": true, "msg_subject": "Type and CAST error on lowest negative integer values for smallint,\n int and bigint" }, { "msg_contents": "On Thu, 2 May 2024 at 23:25, Hans Buschmann <[email protected]> wrote:\n> postgres=# select -32768::smallint;\n> ERROR: smallint out of range\n\nThe precedence order of operations applies the cast before the unary\nminus operator.\n\nAny of the following will work:\n\npostgres=# select cast(-32768 as smallint), (-32768)::smallint,\n'-32768'::smallint;\n int2 | int2 | int2\n--------+--------+--------\n -32768 | -32768 | -32768\n(1 row)\n\nDavid\n\n\n", "msg_date": "Thu, 2 May 2024 23:33:55 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Type and CAST error on lowest negative integer values for\n smallint, int and bigint" }, { "msg_contents": "Thank you for your quick response.\n\n\nThis was very helpfull and resolved my problem.\n\n\nBut for me it is a bit counterintuitive that -32768 is not treated as a negative constant but as a unary operator to a positive constant.\n\n\nIt could be helpfull to remind the user of the nature of negative constants and tthe highest precedence of casts in the numeric type section of the doc.\n\n\nPerhaps someone could add an index topic \"precedence of operators\", since this 
is a very important information for every computer language.\n\n(I just found out that a topic of \"operator precedence\" exists already in the index)\n\n\nI just sent this message for the case this could be a problem to be resolved before the next minor versions scheduled for next week.\n\n\nRegards\n\n\nHans Buschmann\n\n\n________________________________\nVon: David Rowley <[email protected]>\nGesendet: Donnerstag, 2. Mai 2024 13:33\nAn: Hans Buschmann\nCc: [email protected]\nBetreff: Re: Type and CAST error on lowest negative integer values for smallint, int and bigint\n\nOn Thu, 2 May 2024 at 23:25, Hans Buschmann <[email protected]> wrote:\n> postgres=# select -32768::smallint;\n> ERROR: smallint out of range\n\nThe precedence order of operations applies the cast before the unary\nminus operator.\n\nAny of the following will work:\n\npostgres=# select cast(-32768 as smallint), (-32768)::smallint,\n'-32768'::smallint;\n int2 | int2 | int2\n--------+--------+--------\n -32768 | -32768 | -32768\n(1 row)\n\nDavid\n\n\n\n\n\n\n\n\nThank you for your quick response.\n\n\nThis was very helpfull and resolved my problem.\n\n\nBut for me it is a bit counterintuitive that -32768 is not treated as a negative constant but as a unary operator to a positive constant.\n\n\nIt could be helpfull to remind the user of the nature of negative constants and tthe highest precedence of casts in the numeric type section of the doc.\n\n\nPerhaps someone could add an index topic \"precedence of operators\", since this is a very important information for every computer language.\n(I just found out that a topic of \"operator precedence\" exists already in the index)\n\n\nI just sent this message for the case this could be a problem to be resolved before the next minor versions scheduled for next week.\n\n\nRegards\n\n\nHans Buschmann\n\n\n\n\n\nVon: David Rowley <[email protected]>\nGesendet: Donnerstag, 2. Mai 2024 13:33\nAn: Hans Buschmann\nCc: [email protected]\nBetreff: Re: Type and CAST error on lowest negative integer values for smallint, int and bigint\n \n\n\n\nOn Thu, 2 May 2024 at 23:25, Hans Buschmann <[email protected]> wrote:\n> postgres=# select -32768::smallint;\n> ERROR:  smallint out of range\n\nThe precedence order of operations applies the cast before the unary\nminus operator.\n\nAny of the following will work:\n\npostgres=# select cast(-32768 as smallint), (-32768)::smallint,\n'-32768'::smallint;\n  int2  |  int2  |  int2\n--------+--------+--------\n -32768 | -32768 | -32768\n(1 row)\n\nDavid", "msg_date": "Thu, 2 May 2024 12:14:08 +0000", "msg_from": "Hans Buschmann <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Type and CAST error on lowest negative integer values for\n smallint, int and bigint" } ]
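Since the original goal above was building bit masks, it is perhaps worth adding that the same parenthesization applies to the binary literals mentioned in the first message. On PostgreSQL 16, which accepts 0b literals and underscore digit separators, a sketch along these lines should work; it is shown only to illustrate the precedence rule, not taken from the thread:

-- The cast binds tighter than the unary minus, so parenthesize the value.
SELECT (-0b1000_0000_0000_0000)::smallint AS min_smallint,   -- -32768
       0b0000_0000_1111_1111::smallint    AS low_byte_mask;  --    255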
[ { "msg_contents": "hi.\n\njust found out we can:\nexplain (verbose, verbose off, analyze on, analyze off, analyze on)\nselect count(*) from tenk1;\n\nsimilar to COPY, do we want to error out these redundant options?\n\n\n", "msg_date": "Thu, 2 May 2024 21:16:42 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "EXPLAN redundant options" }, { "msg_contents": "On Thu, May 2, 2024 at 6:17 AM jian he <[email protected]> wrote:\n\n> explain (verbose, verbose off, analyze on, analyze off, analyze on)\n>\n>\nI would just update this paragraph to note the last one wins behavior.\n\n\"When the option list is surrounded by parentheses, the options can be\nwritten in any order. However, if options are repeated the last one listed\nis used.\"\n\nI have no desire to introduce breakage here. The implemented concept is\nactually quite common. The inconsistency with COPY seems like a minor\npoint. It would influence my green field choice but not enough for\nchanging long-standing behavior.\n\nDavid J.\n\nOn Thu, May 2, 2024 at 6:17 AM jian he <[email protected]> wrote:explain (verbose, verbose off, analyze on, analyze off, analyze on)I would just update this paragraph to note the last one wins behavior.\"When the option list is surrounded by parentheses, the options can be written in any order.  However, if options are repeated the last one listed is used.\"I have no desire to introduce breakage here.  The implemented concept is actually quite common.  The inconsistency with COPY seems like a minor point.  It would influence my green field choice but not enough for changing long-standing behavior.David J.", "msg_date": "Thu, 2 May 2024 06:36:34 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXPLAN redundant options" }, { "msg_contents": "On Thu, May 2, 2024, at 10:36 AM, David G. Johnston wrote:\n> On Thu, May 2, 2024 at 6:17 AM jian he <[email protected]> wrote:\n>> explain (verbose, verbose off, analyze on, analyze off, analyze on)\n> \n> I would just update this paragraph to note the last one wins behavior.\n> \n> \"When the option list is surrounded by parentheses, the options can be written in any order. However, if options are repeated the last one listed is used.\"\n> \n> I have no desire to introduce breakage here. The implemented concept is actually quite common. The inconsistency with COPY seems like a minor point. It would influence my green field choice but not enough for changing long-standing behavior.\n> \n\nThere is no policy saying we cannot introduce incompatibility changes in major\nreleases. If you check for \"conflicting or redundant options\" or\n\"errorConflictingDefElem\", you will notice that various SQL commands prevent\nyou to inform redundant options. IMO avoid redundant options is a good goal\nbecause (i) it keeps the command short and (b) it doesn't require you to check\nall options to figure out what's the current option value. 
If the application\nis already avoiding redundant options for other commands, the probability of\nallowing it just for EXPLAIN is low.\n\npostgres=# create database foo with owner = bar owner = xpto;\nERROR: conflicting or redundant options\nLINE 1: create database foo with owner = bar owner = xpto;\n ^\npostgres=# create user foo with createdb login createdb;\nERROR: conflicting or redundant options\nLINE 1: create user foo with createdb login createdb;\n ^\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\n", "msg_date": "Thu, 02 May 2024 10:58:40 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXPLAN redundant options" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Thu, May 2, 2024 at 6:17 AM jian he <[email protected]> wrote:\n>> explain (verbose, verbose off, analyze on, analyze off, analyze on)\n\n> I have no desire to introduce breakage here. The implemented concept is\n> actually quite common. The inconsistency with COPY seems like a minor\n> point. It would influence my green field choice but not enough for\n> changing long-standing behavior.\n\nThe argument for changing this would be consistency, but if you want\nto argue for it on those grounds, you'd need to change *every* command\nthat acts that way. I really doubt EXPLAIN is the only one.\n\nThere's also a theological argument to be had about which\nbehavior is preferable. For my own taste, I like last-one-wins.\nThat's extremely common with command line switches, for instance.\nSo maybe we should be making our commands consistent in the other\ndirection.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 May 2024 10:21:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXPLAN redundant options" } ]
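A rough sketch of the behaviour compared in the thread above (assuming a recent PostgreSQL server and the regression database's tenk1 table; EXPLAIN accepts repeated options and the last setting of each option wins, while several other commands reject redundant options outright):

explain (verbose, verbose off, analyze on, analyze off, analyze on)
select count(*) from tenk1;                         -- accepted: effectively analyze on, verbose off
create database foo with owner = bar owner = xpto;  -- ERROR: conflicting or redundant options
create user foo with createdb login createdb;       -- ERROR: conflicting or redundant options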
[ { "msg_contents": "Hi,\n\nIn PG17 we shall have parallel CREATE INDEX for BRIN indexes, and back\nwhen working on that I was thinking how difficult would it be to do\nsomething similar to do that for other index types, like GIN. I even had\nthat on my list of ideas to pitch to potential contributors, as I was\nfairly sure it's doable and reasonably isolated / well-defined.\n\nHowever, I was not aware of any takers, so a couple days ago on a slow\nweekend I took a stab at it. And yes, it's doable - attached is a fairly\ncomplete, tested and polished version of the feature, I think. It turned\nout to be a bit more complex than I expected, for reasons that I'll get\ninto when discussing the patches.\n\nFirst, let's talk about the benefits - how much faster is that than the\nsingle-process build we have for GIN indexes? I do have a table with the\narchive of all our mailing lists - it's ~1.5M messages, table is ~21GB\n(raw dump is about 28GB). This does include simple text data (message\nbody), JSONB (headers) and tsvector (full-text on message body).\n\nIf I do CREATE index with different number of workers (0 means serial\nbuild), I get this timings (in seconds):\n\n workers trgm tsvector jsonb jsonb (hash)\n -----------------------------------------------------\n 0 1240 378 104 57\n 1 773 196 59 85\n 2 548 163 51 78\n 3 423 153 45 75\n 4 362 142 43 75\n 5 323 134 40 70\n 6 295 130 39 73\n\nPerhaps an easier to understand result is this table with relative\ntiming compared to serial build:\n\n workers trgm tsvector jsonb jsonb (hash)\n -----------------------------------------------------\n 1 62% 52% 57% 149%\n 2 44% 43% 49% 136%\n 3 34% 40% 43% 132%\n 4 29% 38% 41% 131%\n 5 26% 35% 39% 123%\n 6 24% 34% 38% 129%\n\nThis shows the benefits are pretty nice, depending on the opclass. For\nmost indexes it's maybe ~3-4x faster, which is nice, and I don't think\nit's possible to do much better - the actual index inserts can happen\nfrom a single process only, which is the main limit.\n\nFor some of the opclasses it can regress (like the jsonb_path_ops). I\ndon't think that's a major issue. Or more precisely, I'm not surprised\nby it. It'd be nice to be able to disable the parallel builds in these\ncases somehow, but I haven't thought about that.\n\nI do plan to do some tests with btree_gin, but I don't expect that to\nbehave significantly differently.\n\nThere are small variations in the index size, when built in the serial\nway and the parallel way. It's generally within ~5-10%, and I believe\nit's due to the serial build adding the TIDs incrementally, while the\nbuild adds them in much larger chunks (possibly even in one chunk with\nall the TIDs for the key). I believe the same size variation can happen\nif the index gets built in a different way, e.g. by inserting the data\nin a different order, etc. I did a number of tests to check if the index\nproduces the correct results, and I haven't found any issues. 
So I think\nthis is OK, and neither a problem nor an advantage of the patch.\n\n\nNow, let's talk about the code - the series has 7 patches, with 6\nnon-trivial parts doing changes in focused and easier to understand\npieces (I hope so).\n\n\n1) v20240502-0001-Allow-parallel-create-for-GIN-indexes.patch\n\nThis is the initial feature, adding the \"basic\" version, implemented as\npretty much 1:1 copy of the BRIN parallel build and minimal changes to\nmake it work for GIN (mostly about how to store intermediate results).\n\nThe basic idea is that the workers do the regular build, but instead of\nflushing the data into the index after hitting the memory limit, it gets\nwritten into a shared tuplesort and sorted by the index key. And the\nleader then reads this sorted data, accumulates the TID for a given key\nand inserts that into the index in one go.\n\n\n2) v20240502-0002-Use-mergesort-in-the-leader-process.patch\n\nThe approach implemented by 0001 works, but there's a little bit of\nissue - if there are many distinct keys (e.g. for trigrams that can\nhappen very easily), the workers will hit the memory limit with only\nvery short TID lists for most keys. For serial build that means merging\nthe data into a lot of random places, and in parallel build it means the\nleader will have to merge a lot of tiny lists from many sorted rows.\n\nWhich can be quite annoying and expensive, because the leader does so\nusing qsort() in the serial part. It'd be better to ensure most of the\nsorting happens in the workers, and the leader can do a mergesort. But\nthe mergesort must not happen too often - merging many small lists is\nnot cheaper than a single qsort (especially when the lists overlap).\n\nSo this patch changes the workers to process the data in two phases. The\nfirst works as before, but the data is flushed into a local tuplesort.\nAnd then each workers sorts the results it produced, and combines them\ninto results with much larger TID lists, and those results are written\nto the shared tuplesort. So the leader only gets very few lists to\ncombine for a given key - usually just one list per worker.\n\n\n3) v20240502-0003-Remove-the-explicit-pg_qsort-in-workers.patch\n\nIn 0002 the workers still do an explicit qsort() on the TID list before\nwriting the data into the shared tuplesort. But we can do better - the\nworkers can do a merge sort too. To help with this, we add the first TID\nto the tuplesort tuple, and sort by that too - it helps the workers to\nprocess the data in an order that allows simple concatenation instead of\nthe full mergesort.\n\nNote: There's a non-obvious issue due to parallel scans always being\n\"sync scans\", which may lead to very \"wide\" TID ranges when the scan\nwraps around. More about that later.\n\n\n4) v20240502-0004-Compress-TID-lists-before-writing-tuples-t.patch\n\nThe parallel build passes data between processes using temporary files,\nwhich means it may need significant amount of disk space. For BRIN this\nwas not a major concern, because the summaries tend to be pretty small.\n\nBut for GIN that's not the case, and the two-phase processing introduced\nby 0002 make it worse, because the worker essentially creates another\ncopy of the intermediate data. 
It does not need to copy the key, so\nmaybe it's not exactly 2x the space requirement, but in the worst case\nit's not far from that.\n\nBut there's a simple way how to improve this - the TID lists tend to be\nvery compressible, and GIN already implements a very light-weight TID\ncompression, so this patch does just that - when building the tuple to\nbe written into the tuplesort, we just compress the TIDs.\n\n\n5) v20240502-0005-Collect-and-print-compression-stats.patch\n\nThis patch simply collects some statistics about the compression, to\nshow how much it reduces the amounts of data in the various phases. The\ndata I've seen so far usually show ~75% compression in the first phase,\nand ~30% compression in the second phase.\n\nThat is, in the first phase we save ~25% of space, in the second phase\nwe save ~70% of space. An example of the log messages from this patch,\nfor one worker (of two) in the trigram phase says:\n\nLOG: _gin_parallel_scan_and_build raw 10158870494 compressed 7519211584\n ratio 74.02%\nLOG: _gin_process_worker_data raw 4593563782 compressed 1314800758\n ratio 28.62%\n\nPut differently, a single-phase version without compression (as in 0001)\nwould need ~10GB of disk space per worker. With compression, we need\nonly about ~8.8GB for both phases (or ~7.5GB for the first phase alone).\n\nI do think these numbers look pretty good. The numbers are different for\nother opclasses (trigrams are rather extreme in how much space they\nneed), but the overall behavior is the same.\n\n\n6) v20240502-0006-Enforce-memory-limit-when-combining-tuples.patch\n\nUntil this part, there's no limit on memory used by combining results\nfor a single index key - it'll simply use as much memory as needed to\ncombine all the TID lists. Which may not be a huge issue because each\nTID is only 6B, and we can accumulate a lot of those in a couple MB. And\na parallel CREATE INDEX usually runs with a fairly significant values of\nmaintenance_work_mem (in fact it requires it to even allow parallel).\nBut still, there should be some memory limit.\n\nIt however is not as simple as dumping current state into the index,\nbecause the TID lists produced by the workers may overlap, so the tail\nof the list may still receive TIDs from some future TID list. And that's\na problem because ginEntryInsert() expects to receive TIDs in order, and\nif that's not the case it may fail with \"could not split GIN page\".\n\nBut we already have the first TID for each sort tuple (and we consider\nit when sorting the data), and this is useful for deducing how far we\ncan flush the data, and keep just the minimal part of the TID list that\nmay change by merging.\n\nSo this patch implements that - it introduces the concept of \"freezing\"\nthe head of the TID list up to \"first TID\" from the next tuple, and uses\nthat to write data into index if needed because of memory limit.\n\nWe don't want to do that too often, so it only happens if we hit the\nmemory limit and there's at least a certain number (1024) of TIDs.\n\n\n7) v20240502-0007-Detect-wrap-around-in-parallel-callback.patch\n\nThere's one more efficiency problem - the parallel scans are required to\nbe synchronized, i.e. the scan may start half-way through the table, and\nthen wrap around. 
Which however means the TID list will have a very wide\nrange of TID values, essentially the min and max of for the key.\n\nWithout 0006 this would cause frequent failures of the index build, with\nthe error I already mentioned:\n\n ERROR: could not split GIN page; all old items didn't fit\n\ntracking the \"safe\" TID horizon addresses that. But there's still an\nissue with efficiency - having such a wide TID list forces the mergesort\nto actually walk the lists, because this wide list overlaps with every\nother list produced by the worker. And that's much more expensive than\njust simply concatenating them, which is what happens without the wrap\naround (because in that case the worker produces non-overlapping lists).\n\nOne way to fix this would be to allow parallel scans to not be sync\nscans, but that seems fairly tricky and I'm not sure if that can be\ndone. The BRIN parallel build had a similar issue, and it was just\nsimpler to deal with this in the build code.\n\nSo 0007 does something similar - it tracks if the TID value goes\nbackward in the callback, and if it does it dumps the state into the\ntuplesort before processing the first tuple from the beginning of the\ntable. Which means we end up with two separate \"narrow\" TID list, not\none very wide one.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 2 May 2024 17:19:03 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On Thu, 2 May 2024 at 17:19, Tomas Vondra <[email protected]> wrote:\n>\n> Hi,\n>\n> In PG17 we shall have parallel CREATE INDEX for BRIN indexes, and back\n> when working on that I was thinking how difficult would it be to do\n> something similar to do that for other index types, like GIN. I even had\n> that on my list of ideas to pitch to potential contributors, as I was\n> fairly sure it's doable and reasonably isolated / well-defined.\n>\n> However, I was not aware of any takers, so a couple days ago on a slow\n> weekend I took a stab at it. And yes, it's doable - attached is a fairly\n> complete, tested and polished version of the feature, I think. It turned\n> out to be a bit more complex than I expected, for reasons that I'll get\n> into when discussing the patches.\n\nThis is great. I've been thinking about approximately the same issue\nrecently, too, but haven't had time to discuss/implement any of this\nyet. I think some solutions may even be portable to the btree parallel\nbuild: it also has key deduplication (though to a much smaller degree)\nand could benefit from deduplication during the scan/ssup load phase,\nrather than only during insertion.\n\n> First, let's talk about the benefits - how much faster is that than the\n> single-process build we have for GIN indexes? I do have a table with the\n> archive of all our mailing lists - it's ~1.5M messages, table is ~21GB\n> (raw dump is about 28GB). This does include simple text data (message\n> body), JSONB (headers) and tsvector (full-text on message body).\n\nSidenote: Did you include the tsvector in the table to reduce time\nspent during index creation? I would have used an expression in the\nindex definition, rather than a direct column.\n\n> If I do CREATE index with different number of workers (0 means serial\n> build), I get this timings (in seconds):\n\n[...]\n\n> This shows the benefits are pretty nice, depending on the opclass. 
For\n> most indexes it's maybe ~3-4x faster, which is nice, and I don't think\n> it's possible to do much better - the actual index inserts can happen\n> from a single process only, which is the main limit.\n\nCan we really not insert with multiple processes? It seems to me that\nGIN could be very suitable for that purpose, with its clear double\ntree structure distinction that should result in few buffer conflicts\nif different backends work on known-to-be-very-different keys.\nWe'd probably need multiple read heads on the shared tuplesort, and a\nway to join the generated top-level subtrees, but I don't think that\nis impossible. Maybe it's work for later effort though.\n\nHave you tested and/or benchmarked this with multi-column GIN indexes?\n\n> For some of the opclasses it can regress (like the jsonb_path_ops). I\n> don't think that's a major issue. Or more precisely, I'm not surprised\n> by it. It'd be nice to be able to disable the parallel builds in these\n> cases somehow, but I haven't thought about that.\n\nDo you know why it regresses?\n\n> I do plan to do some tests with btree_gin, but I don't expect that to\n> behave significantly differently.\n>\n> There are small variations in the index size, when built in the serial\n> way and the parallel way. It's generally within ~5-10%, and I believe\n> it's due to the serial build adding the TIDs incrementally, while the\n> build adds them in much larger chunks (possibly even in one chunk with\n> all the TIDs for the key).\n\nI assume that was '[...] while the [parallel] build adds them [...]', right?\n\n> I believe the same size variation can happen\n> if the index gets built in a different way, e.g. by inserting the data\n> in a different order, etc. I did a number of tests to check if the index\n> produces the correct results, and I haven't found any issues. So I think\n> this is OK, and neither a problem nor an advantage of the patch.\n>\n>\n> Now, let's talk about the code - the series has 7 patches, with 6\n> non-trivial parts doing changes in focused and easier to understand\n> pieces (I hope so).\n\nThe following comments are generally based on the descriptions; I\nhaven't really looked much at the patches yet except to validate some\nassumptions.\n\n> 1) v20240502-0001-Allow-parallel-create-for-GIN-indexes.patch\n>\n> This is the initial feature, adding the \"basic\" version, implemented as\n> pretty much 1:1 copy of the BRIN parallel build and minimal changes to\n> make it work for GIN (mostly about how to store intermediate results).\n>\n> The basic idea is that the workers do the regular build, but instead of\n> flushing the data into the index after hitting the memory limit, it gets\n> written into a shared tuplesort and sorted by the index key. And the\n> leader then reads this sorted data, accumulates the TID for a given key\n> and inserts that into the index in one go.\n\nIn the code, GIN insertions are still basically single btree\ninsertions, all starting from the top (but with many same-valued\ntuples at once). Now that we have a tuplesort with the full table's\ndata, couldn't the code be adapted to do more efficient btree loading,\nsuch as that seen in the nbtree code, where the rightmost pages are\ncached and filled sequentially without requiring repeated searches\ndown the tree? 
I suspect we can gain a lot of time there.\n\nI don't need you to do that, but what's your opinion on this?\n\n> 2) v20240502-0002-Use-mergesort-in-the-leader-process.patch\n>\n> The approach implemented by 0001 works, but there's a little bit of\n> issue - if there are many distinct keys (e.g. for trigrams that can\n> happen very easily), the workers will hit the memory limit with only\n> very short TID lists for most keys. For serial build that means merging\n> the data into a lot of random places, and in parallel build it means the\n> leader will have to merge a lot of tiny lists from many sorted rows.\n>\n> Which can be quite annoying and expensive, because the leader does so\n> using qsort() in the serial part. It'd be better to ensure most of the\n> sorting happens in the workers, and the leader can do a mergesort. But\n> the mergesort must not happen too often - merging many small lists is\n> not cheaper than a single qsort (especially when the lists overlap).\n>\n> So this patch changes the workers to process the data in two phases. The\n> first works as before, but the data is flushed into a local tuplesort.\n> And then each workers sorts the results it produced, and combines them\n> into results with much larger TID lists, and those results are written\n> to the shared tuplesort. So the leader only gets very few lists to\n> combine for a given key - usually just one list per worker.\n\nHmm, I was hoping we could implement the merging inside the tuplesort\nitself during its own flush phase, as it could save significantly on\nIO, and could help other users of tuplesort with deduplication, too.\n\n> 3) v20240502-0003-Remove-the-explicit-pg_qsort-in-workers.patch\n>\n> In 0002 the workers still do an explicit qsort() on the TID list before\n> writing the data into the shared tuplesort. But we can do better - the\n> workers can do a merge sort too. To help with this, we add the first TID\n> to the tuplesort tuple, and sort by that too - it helps the workers to\n> process the data in an order that allows simple concatenation instead of\n> the full mergesort.\n>\n> Note: There's a non-obvious issue due to parallel scans always being\n> \"sync scans\", which may lead to very \"wide\" TID ranges when the scan\n> wraps around. More about that later.\n\nAs this note seems to imply, this seems to have a strong assumption\nthat data received in parallel workers is always in TID order, with\none optional wraparound. Non-HEAP TAMs may break with this assumption,\nso what's the plan on that?\n\n> 4) v20240502-0004-Compress-TID-lists-before-writing-tuples-t.patch\n>\n> The parallel build passes data between processes using temporary files,\n> which means it may need significant amount of disk space. For BRIN this\n> was not a major concern, because the summaries tend to be pretty small.\n>\n> But for GIN that's not the case, and the two-phase processing introduced\n> by 0002 make it worse, because the worker essentially creates another\n> copy of the intermediate data. 
It does not need to copy the key, so\n> maybe it's not exactly 2x the space requirement, but in the worst case\n> it's not far from that.\n>\n> But there's a simple way how to improve this - the TID lists tend to be\n> very compressible, and GIN already implements a very light-weight TID\n> compression, so this patch does just that - when building the tuple to\n> be written into the tuplesort, we just compress the TIDs.\n\nSee note on 0002: Could we do this in the tuplesort writeback, rather\nthan by moving the data around multiple times?\n\n[...]\n> So 0007 does something similar - it tracks if the TID value goes\n> backward in the callback, and if it does it dumps the state into the\n> tuplesort before processing the first tuple from the beginning of the\n> table. Which means we end up with two separate \"narrow\" TID list, not\n> one very wide one.\n\nSee note above: We may still need a merge phase, just to make sure we\nhandle all TAM parallel scans correctly, even if that merge join phase\nwouldn't get hit in vanilla PostgreSQL.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 2 May 2024 19:12:27 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "\n\nOn 5/2/24 19:12, Matthias van de Meent wrote:\n> On Thu, 2 May 2024 at 17:19, Tomas Vondra <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> In PG17 we shall have parallel CREATE INDEX for BRIN indexes, and back\n>> when working on that I was thinking how difficult would it be to do\n>> something similar to do that for other index types, like GIN. I even had\n>> that on my list of ideas to pitch to potential contributors, as I was\n>> fairly sure it's doable and reasonably isolated / well-defined.\n>>\n>> However, I was not aware of any takers, so a couple days ago on a slow\n>> weekend I took a stab at it. And yes, it's doable - attached is a fairly\n>> complete, tested and polished version of the feature, I think. It turned\n>> out to be a bit more complex than I expected, for reasons that I'll get\n>> into when discussing the patches.\n> \n> This is great. I've been thinking about approximately the same issue\n> recently, too, but haven't had time to discuss/implement any of this\n> yet. I think some solutions may even be portable to the btree parallel\n> build: it also has key deduplication (though to a much smaller degree)\n> and could benefit from deduplication during the scan/ssup load phase,\n> rather than only during insertion.\n> \n\nPerhaps, although I'm not that familiar with the details of btree\nbuilds, and I haven't thought about it when working on this over the\npast couple days.\n\n>> First, let's talk about the benefits - how much faster is that than the\n>> single-process build we have for GIN indexes? I do have a table with the\n>> archive of all our mailing lists - it's ~1.5M messages, table is ~21GB\n>> (raw dump is about 28GB). This does include simple text data (message\n>> body), JSONB (headers) and tsvector (full-text on message body).\n> \n> Sidenote: Did you include the tsvector in the table to reduce time\n> spent during index creation? 
I would have used an expression in the\n> index definition, rather than a direct column.\n> \n\nYes, it's a materialized column, not computed during index creation.\n\n>> If I do CREATE index with different number of workers (0 means serial\n>> build), I get this timings (in seconds):\n> \n> [...]\n> \n>> This shows the benefits are pretty nice, depending on the opclass. For\n>> most indexes it's maybe ~3-4x faster, which is nice, and I don't think\n>> it's possible to do much better - the actual index inserts can happen\n>> from a single process only, which is the main limit.\n> \n> Can we really not insert with multiple processes? It seems to me that\n> GIN could be very suitable for that purpose, with its clear double\n> tree structure distinction that should result in few buffer conflicts\n> if different backends work on known-to-be-very-different keys.\n> We'd probably need multiple read heads on the shared tuplesort, and a\n> way to join the generated top-level subtrees, but I don't think that\n> is impossible. Maybe it's work for later effort though.\n> \n\nMaybe, but I took it as a restriction and it seemed too difficult to\nrelax (or at least I assume that).\n\n> Have you tested and/or benchmarked this with multi-column GIN indexes?\n> \n\nI did test that, and I'm not aware of any bugs/issues. Performance-wise\nit depends on which opclasses are used by the columns - if you take the\nspeedup for each of them independently, the speedup for the whole index\nis roughly the average of that.\n\n>> For some of the opclasses it can regress (like the jsonb_path_ops). I\n>> don't think that's a major issue. Or more precisely, I'm not surprised\n>> by it. It'd be nice to be able to disable the parallel builds in these\n>> cases somehow, but I haven't thought about that.\n> \n> Do you know why it regresses?\n> \n\nNo, but one thing that stands out is that the index is much smaller than\nthe other columns/opclasses, and the compression does not save much\n(only about 5% for both phases). So I assume it's the overhead of\nwriting writing and reading a bunch of GB of data without really gaining\nmuch from doing that.\n\n>> I do plan to do some tests with btree_gin, but I don't expect that to\n>> behave significantly differently.\n>>\n>> There are small variations in the index size, when built in the serial\n>> way and the parallel way. It's generally within ~5-10%, and I believe\n>> it's due to the serial build adding the TIDs incrementally, while the\n>> build adds them in much larger chunks (possibly even in one chunk with\n>> all the TIDs for the key).\n> \n> I assume that was '[...] while the [parallel] build adds them [...]', right?\n> \n\nRight. The parallel build adds them in larger chunks.\n\n>> I believe the same size variation can happen\n>> if the index gets built in a different way, e.g. by inserting the data\n>> in a different order, etc. I did a number of tests to check if the index\n>> produces the correct results, and I haven't found any issues. 
So I think\n>> this is OK, and neither a problem nor an advantage of the patch.\n>>\n>>\n>> Now, let's talk about the code - the series has 7 patches, with 6\n>> non-trivial parts doing changes in focused and easier to understand\n>> pieces (I hope so).\n> \n> The following comments are generally based on the descriptions; I\n> haven't really looked much at the patches yet except to validate some\n> assumptions.\n> \n\nOK\n\n>> 1) v20240502-0001-Allow-parallel-create-for-GIN-indexes.patch\n>>\n>> This is the initial feature, adding the \"basic\" version, implemented as\n>> pretty much 1:1 copy of the BRIN parallel build and minimal changes to\n>> make it work for GIN (mostly about how to store intermediate results).\n>>\n>> The basic idea is that the workers do the regular build, but instead of\n>> flushing the data into the index after hitting the memory limit, it gets\n>> written into a shared tuplesort and sorted by the index key. And the\n>> leader then reads this sorted data, accumulates the TID for a given key\n>> and inserts that into the index in one go.\n> \n> In the code, GIN insertions are still basically single btree\n> insertions, all starting from the top (but with many same-valued\n> tuples at once). Now that we have a tuplesort with the full table's\n> data, couldn't the code be adapted to do more efficient btree loading,\n> such as that seen in the nbtree code, where the rightmost pages are\n> cached and filled sequentially without requiring repeated searches\n> down the tree? I suspect we can gain a lot of time there.\n> \n> I don't need you to do that, but what's your opinion on this?\n> \n\nI have no idea. I started working on this with only very basic idea of\nhow GIN works / is structured, so I simply leveraged the existing\ncallback and massaged it to work in the parallel case too.\n\n>> 2) v20240502-0002-Use-mergesort-in-the-leader-process.patch\n>>\n>> The approach implemented by 0001 works, but there's a little bit of\n>> issue - if there are many distinct keys (e.g. for trigrams that can\n>> happen very easily), the workers will hit the memory limit with only\n>> very short TID lists for most keys. For serial build that means merging\n>> the data into a lot of random places, and in parallel build it means the\n>> leader will have to merge a lot of tiny lists from many sorted rows.\n>>\n>> Which can be quite annoying and expensive, because the leader does so\n>> using qsort() in the serial part. It'd be better to ensure most of the\n>> sorting happens in the workers, and the leader can do a mergesort. But\n>> the mergesort must not happen too often - merging many small lists is\n>> not cheaper than a single qsort (especially when the lists overlap).\n>>\n>> So this patch changes the workers to process the data in two phases. The\n>> first works as before, but the data is flushed into a local tuplesort.\n>> And then each workers sorts the results it produced, and combines them\n>> into results with much larger TID lists, and those results are written\n>> to the shared tuplesort. So the leader only gets very few lists to\n>> combine for a given key - usually just one list per worker.\n> \n> Hmm, I was hoping we could implement the merging inside the tuplesort\n> itself during its own flush phase, as it could save significantly on\n> IO, and could help other users of tuplesort with deduplication, too.\n> \n\nWould that happen in the worker or leader process? 
Because my goal was\nto do the expensive part in the worker, because that's what helps with\nthe parallelization.\n\n>> 3) v20240502-0003-Remove-the-explicit-pg_qsort-in-workers.patch\n>>\n>> In 0002 the workers still do an explicit qsort() on the TID list before\n>> writing the data into the shared tuplesort. But we can do better - the\n>> workers can do a merge sort too. To help with this, we add the first TID\n>> to the tuplesort tuple, and sort by that too - it helps the workers to\n>> process the data in an order that allows simple concatenation instead of\n>> the full mergesort.\n>>\n>> Note: There's a non-obvious issue due to parallel scans always being\n>> \"sync scans\", which may lead to very \"wide\" TID ranges when the scan\n>> wraps around. More about that later.\n> \n> As this note seems to imply, this seems to have a strong assumption\n> that data received in parallel workers is always in TID order, with\n> one optional wraparound. Non-HEAP TAMs may break with this assumption,\n> so what's the plan on that?\n> \n\nWell, that would break the serial build too, right? Anyway, the way this\npatch works can be extended to deal with that by actually sorting the\nTIDs when serializing the tuplestore tuple. The consequence of that is\nthe combining will be more expensive, because it'll require a proper\nmergesort, instead of just appending the lists.\n\n>> 4) v20240502-0004-Compress-TID-lists-before-writing-tuples-t.patch\n>>\n>> The parallel build passes data between processes using temporary files,\n>> which means it may need significant amount of disk space. For BRIN this\n>> was not a major concern, because the summaries tend to be pretty small.\n>>\n>> But for GIN that's not the case, and the two-phase processing introduced\n>> by 0002 make it worse, because the worker essentially creates another\n>> copy of the intermediate data. It does not need to copy the key, so\n>> maybe it's not exactly 2x the space requirement, but in the worst case\n>> it's not far from that.\n>>\n>> But there's a simple way how to improve this - the TID lists tend to be\n>> very compressible, and GIN already implements a very light-weight TID\n>> compression, so this patch does just that - when building the tuple to\n>> be written into the tuplesort, we just compress the TIDs.\n> \n> See note on 0002: Could we do this in the tuplesort writeback, rather\n> than by moving the data around multiple times?\n> \n\nNo idea, I've never done that ...\n\n> [...]\n>> So 0007 does something similar - it tracks if the TID value goes\n>> backward in the callback, and if it does it dumps the state into the\n>> tuplesort before processing the first tuple from the beginning of the\n>> table. Which means we end up with two separate \"narrow\" TID list, not\n>> one very wide one.\n> \n> See note above: We may still need a merge phase, just to make sure we\n> handle all TAM parallel scans correctly, even if that merge join phase\n> wouldn't get hit in vanilla PostgreSQL.\n> \n\nWell, yeah. But in fact the parallel code already does that, while the\nexisting serial code may fail with the \"data don't fit\" error.\n\nThe parallel code will do the mergesort correctly, and only emit TIDs\nthat we know are safe to write to the index (i.e. no future TIDs will go\nbefore the \"TID horizon\").\n\nBut the serial build has nothing like that - it will sort the TIDs that\nfit into the memory limit, but it also relies on not processing data out\nof order (and disables sync scans to not have wrap around issues). 
But\nif the TAM does something funny, this may break.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 2 May 2024 20:22:23 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "Hi,\n\nHere's a slightly improved version, fixing a couple bugs in handling\nbyval/byref values, causing issues on 32-bit machines (but not only).\nAnd also a couple compiler warnings about string formatting.\n\nOther than that, no changes.\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 5 May 2024 20:49:03 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "\nHello Tomas,\n\n>>> 2) v20240502-0002-Use-mergesort-in-the-leader-process.patch\n>>>\n>>> The approach implemented by 0001 works, but there's a little bit of\n>>> issue - if there are many distinct keys (e.g. for trigrams that can\n>>> happen very easily), the workers will hit the memory limit with only\n>>> very short TID lists for most keys. For serial build that means merging\n>>> the data into a lot of random places, and in parallel build it means the\n>>> leader will have to merge a lot of tiny lists from many sorted rows.\n>>>\n>>> Which can be quite annoying and expensive, because the leader does so\n>>> using qsort() in the serial part. It'd be better to ensure most of the\n>>> sorting happens in the workers, and the leader can do a mergesort. But\n>>> the mergesort must not happen too often - merging many small lists is\n>>> not cheaper than a single qsort (especially when the lists overlap).\n>>>\n>>> So this patch changes the workers to process the data in two phases. The\n>>> first works as before, but the data is flushed into a local tuplesort.\n>>> And then each workers sorts the results it produced, and combines them\n>>> into results with much larger TID lists, and those results are written\n>>> to the shared tuplesort. So the leader only gets very few lists to\n>>> combine for a given key - usually just one list per worker.\n>> \n>> Hmm, I was hoping we could implement the merging inside the tuplesort\n>> itself during its own flush phase, as it could save significantly on\n>> IO, and could help other users of tuplesort with deduplication, too.\n>> \n>\n> Would that happen in the worker or leader process? Because my goal was\n> to do the expensive part in the worker, because that's what helps with\n> the parallelization.\n\nI guess both of you are talking about worker process, if here are\nsomething in my mind:\n\n*btbuild* also let the WORKER dump the tuples into Sharedsort struct\nand let the LEADER merge them directly. I think this aim of this design\nis it is potential to save a mergeruns. In the current patch, worker dump\nto local tuplesort and mergeruns it and then leader run the merges\nagain. I admit the goal of this patch is reasonable, but I'm feeling we\nneed to adapt this way conditionally somehow. and if we find the way, we\ncan apply it to btbuild as well. 
\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 09 May 2024 17:44:49 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "\nTomas Vondra <[email protected]> writes:\n\n> 3) v20240502-0003-Remove-the-explicit-pg_qsort-in-workers.patch\n>\n> In 0002 the workers still do an explicit qsort() on the TID list before\n> writing the data into the shared tuplesort. But we can do better - the\n> workers can do a merge sort too. To help with this, we add the first TID\n> to the tuplesort tuple, and sort by that too - it helps the workers to\n> process the data in an order that allows simple concatenation instead of\n> the full mergesort.\n>\n> Note: There's a non-obvious issue due to parallel scans always being\n> \"sync scans\", which may lead to very \"wide\" TID ranges when the scan\n> wraps around. More about that later.\n\nThis is really amazing.\n\n> 7) v20240502-0007-Detect-wrap-around-in-parallel-callback.patch\n>\n> There's one more efficiency problem - the parallel scans are required to\n> be synchronized, i.e. the scan may start half-way through the table, and\n> then wrap around. Which however means the TID list will have a very wide\n> range of TID values, essentially the min and max of for the key.\n>\n> Without 0006 this would cause frequent failures of the index build, with\n> the error I already mentioned:\n>\n> ERROR: could not split GIN page; all old items didn't fit\n\nI have two questions here and both of them are generall gin index questions\nrather than the patch here.\n\n1. What does the \"wrap around\" mean in the \"the scan may start half-way\nthrough the table, and then wrap around\". Searching \"wrap\" in\ngin/README gets nothing. \n\n2. I can't understand the below error.\n\n> ERROR: could not split GIN page; all old items didn't fit\n\nWhen the posting list is too long, we have posting tree strategy. so in\nwhich sistuation we could get this ERROR. \n\n> issue with efficiency - having such a wide TID list forces the mergesort\n> to actually walk the lists, because this wide list overlaps with every\n> other list produced by the worker.\n\nIf we split the blocks among worker 1-block by 1-block, we will have a\nserious issue like here. If we can have N-block by N-block, and N-block\nis somehow fill the work_mem which makes the dedicated temp file, we\ncan make things much better, can we? \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 09 May 2024 18:14:40 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On 5/9/24 12:14, Andy Fan wrote:\n> \n> Tomas Vondra <[email protected]> writes:\n> \n>> 3) v20240502-0003-Remove-the-explicit-pg_qsort-in-workers.patch\n>>\n>> In 0002 the workers still do an explicit qsort() on the TID list before\n>> writing the data into the shared tuplesort. But we can do better - the\n>> workers can do a merge sort too. To help with this, we add the first TID\n>> to the tuplesort tuple, and sort by that too - it helps the workers to\n>> process the data in an order that allows simple concatenation instead of\n>> the full mergesort.\n>>\n>> Note: There's a non-obvious issue due to parallel scans always being\n>> \"sync scans\", which may lead to very \"wide\" TID ranges when the scan\n>> wraps around. 
More about that later.\n> \n> This is really amazing.\n> \n>> 7) v20240502-0007-Detect-wrap-around-in-parallel-callback.patch\n>>\n>> There's one more efficiency problem - the parallel scans are required to\n>> be synchronized, i.e. the scan may start half-way through the table, and\n>> then wrap around. Which however means the TID list will have a very wide\n>> range of TID values, essentially the min and max of for the key.\n>>\n>> Without 0006 this would cause frequent failures of the index build, with\n>> the error I already mentioned:\n>>\n>> ERROR: could not split GIN page; all old items didn't fit\n> \n> I have two questions here and both of them are generall gin index questions\n> rather than the patch here.\n> \n> 1. What does the \"wrap around\" mean in the \"the scan may start half-way\n> through the table, and then wrap around\". Searching \"wrap\" in\n> gin/README gets nothing. \n> \n\nThe \"wrap around\" is about the scan used to read data from the table\nwhen building the index. A \"sync scan\" may start e.g. at TID (1000,0)\nand read till the end of the table, and then wraps and returns the\nremaining part at the beginning of the table for blocks 0-999.\n\nThis means the callback would not see a monotonically increasing\nsequence of TIDs.\n\nWhich is why the serial build disables sync scans, allowing simply\nappending values to the sorted list, and even with regular flushes of\ndata into the index we can simply append data to the posting lists.\n\n> 2. I can't understand the below error.\n> \n>> ERROR: could not split GIN page; all old items didn't fit\n> \n> When the posting list is too long, we have posting tree strategy. so in\n> which sistuation we could get this ERROR. \n> \n\nAFAICS the index build relies on the assumption that we only append data\nto the TID list on a leaf page, and when the page gets split, the \"old\"\npart will always fit. Which may not be true, if there was a wrap around\nand we're adding low TID values to the list on the leaf page.\n\nFWIW the error in dataBeginPlaceToPageLeaf looks like this:\n\n if (!append || ItemPointerCompare(&maxOldItem, &remaining) >= 0)\n elog(ERROR, \"could not split GIN page; all old items didn't fit\");\n\nIt can fail simply because of the !append part.\n\nI'm not sure why dataBeginPlaceToPageLeaf() relies on this assumption,\nor with GIN details in general, and I haven't found any explanation. But\nAFAIK this is why the serial build disables sync scans.\n\n>> issue with efficiency - having such a wide TID list forces the mergesort\n>> to actually walk the lists, because this wide list overlaps with every\n>> other list produced by the worker.\n> \n> If we split the blocks among worker 1-block by 1-block, we will have a\n> serious issue like here. If we can have N-block by N-block, and N-block\n> is somehow fill the work_mem which makes the dedicated temp file, we\n> can make things much better, can we? \n> \n\nI don't understand the question. The blocks are distributed to workers\nby the parallel table scan, and it certainly does not do that block by\nblock. But even it it did, that's not a problem for this code.\n\nThe problem is that if the scan wraps around, then one of the TID lists\nfor a given worker will have the min TID and max TID, so it will overlap\nwith every other TID list for the same key in that worker. 
And when the\nworker does the merging, this list will force a \"full\" merge sort for\nall TID lists (for that key), which is very expensive.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 May 2024 14:19:23 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "\n\nOn 5/9/24 11:44, Andy Fan wrote:\n> \n> Hello Tomas,\n> \n>>>> 2) v20240502-0002-Use-mergesort-in-the-leader-process.patch\n>>>>\n>>>> The approach implemented by 0001 works, but there's a little bit of\n>>>> issue - if there are many distinct keys (e.g. for trigrams that can\n>>>> happen very easily), the workers will hit the memory limit with only\n>>>> very short TID lists for most keys. For serial build that means merging\n>>>> the data into a lot of random places, and in parallel build it means the\n>>>> leader will have to merge a lot of tiny lists from many sorted rows.\n>>>>\n>>>> Which can be quite annoying and expensive, because the leader does so\n>>>> using qsort() in the serial part. It'd be better to ensure most of the\n>>>> sorting happens in the workers, and the leader can do a mergesort. But\n>>>> the mergesort must not happen too often - merging many small lists is\n>>>> not cheaper than a single qsort (especially when the lists overlap).\n>>>>\n>>>> So this patch changes the workers to process the data in two phases. The\n>>>> first works as before, but the data is flushed into a local tuplesort.\n>>>> And then each workers sorts the results it produced, and combines them\n>>>> into results with much larger TID lists, and those results are written\n>>>> to the shared tuplesort. So the leader only gets very few lists to\n>>>> combine for a given key - usually just one list per worker.\n>>>\n>>> Hmm, I was hoping we could implement the merging inside the tuplesort\n>>> itself during its own flush phase, as it could save significantly on\n>>> IO, and could help other users of tuplesort with deduplication, too.\n>>>\n>>\n>> Would that happen in the worker or leader process? Because my goal was\n>> to do the expensive part in the worker, because that's what helps with\n>> the parallelization.\n> \n> I guess both of you are talking about worker process, if here are\n> something in my mind:\n> \n> *btbuild* also let the WORKER dump the tuples into Sharedsort struct\n> and let the LEADER merge them directly. I think this aim of this design\n> is it is potential to save a mergeruns. In the current patch, worker dump\n> to local tuplesort and mergeruns it and then leader run the merges\n> again. I admit the goal of this patch is reasonable, but I'm feeling we\n> need to adapt this way conditionally somehow. and if we find the way, we\n> can apply it to btbuild as well. \n> \n\nI'm a bit confused about what you're proposing here, or how is that\nrelated to what this patch is doing and/or to the what Matthias\nmentioned in his e-mail from last week.\n\nLet me explain the relevant part of the patch, and how I understand the\nimprovement suggested by Matthias. The patch does the work in three phases:\n\n\n1) Worker gets data from table, split that into index items and add\nthose into a \"private\" tuplesort, and finally sorts that. 
So a worker\nmay see a key many times, with different TIDs, so the tuplesort may\ncontain many items for the same key, with distinct TID lists:\n\n key1: 1, 2, 3, 4\n key1: 5, 6, 7\n key1: 8, 9, 10\n key2: 1, 2, 3\n ...\n\n\n2) Worker reads the sorted data, and combines TIDs for the same key into\nlarger TID lists, depending on work_mem etc. and writes the result into\na shared tuplesort. So the worker may write this:\n\n key1: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10\n key2: 1, 2, 3\n\n\n3) Leader reads this, combines TID lists from all workers (using a\nhigher memory limit, probably), and writes the result into the index.\n\n\nThe step (2) is optional - it would work without it. But it helps, as it\nmoves a potentially expensive sort into the workers (and thus the\nparallel part of the build), and it also makes it cheaper, because in a\nsingle worker the lists do not overlap and thus can be simply appended.\nWhich in the leader is not the case, forcing an expensive mergesort.\n\nThe trouble with (2) is that it \"just copies\" data from one tuplesort\ninto another, increasing the disk space requirements. In an extreme\ncase, when nothing can be combined, it pretty much doubles the amount of\ndisk space, and makes the build longer.\n\nWhat I think Matthias is suggesting, is that this \"TID list merging\"\ncould be done directly as part of the tuplesort in step (1). So instead\nof just moving the \"sort tuples\" from the appropriate runs, it could\nalso do an optional step of combining the tuples and writing this\ncombined tuple into the tuplesort result (for that worker).\n\nMatthias also mentioned this might be useful when building btree indexes\nwith key deduplication.\n\nAFAICS this might work, although it probably requires for the \"combined\"\ntuple to be smaller than the sum of the combined tuples (in order to fit\ninto the space). But at least in the GIN build in the workers this is\nlikely true, because the TID lists do not overlap (and thus not hurting\nthe compressibility).\n\nThat being said, I still see this more as an optimization than something\nrequired for the patch, and I don't think I'll have time to work on this\nanytime soon. The patch is not extremely complex, but it's not trivial\neither. But if someone wants to take a stab at extending tuplesort to\nallow this, I won't object ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 May 2024 15:13:16 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On 5/2/24 20:22, Tomas Vondra wrote:\n>> \n>>> For some of the opclasses it can regress (like the jsonb_path_ops). I\n>>> don't think that's a major issue. Or more precisely, I'm not surprised\n>>> by it. It'd be nice to be able to disable the parallel builds in these\n>>> cases somehow, but I haven't thought about that.\n>>\n>> Do you know why it regresses?\n>>\n> \n> No, but one thing that stands out is that the index is much smaller than\n> the other columns/opclasses, and the compression does not save much\n> (only about 5% for both phases). So I assume it's the overhead of\n> writing writing and reading a bunch of GB of data without really gaining\n> much from doing that.\n> \n\nI finally got to look into this regression, but I think I must have done\nsomething wrong before because I can't reproduce it. 
This is the timings\nI get now, if I rerun the benchmark:\n\n workers trgm tsvector jsonb jsonb (hash)\n -------------------------------------------------------\n 0 1225 404 104 56\n 1 772 180 57 60\n 2 549 143 47 52\n 3 426 127 43 50\n 4 364 116 40 48\n 5 323 111 38 46\n 6 292 111 37 45\n\nand the speedup, relative to serial build:\n\n\n workers trgm tsvector jsonb jsonb (hash)\n --------------------------------------------------------\n 1 63% 45% 54% 108%\n 2 45% 35% 45% 94%\n 3 35% 31% 41% 89%\n 4 30% 29% 38% 86%\n 5 26% 28% 37% 83%\n 6 24% 28% 35% 81%\n\nSo there's a small regression for the jsonb_path_ops opclass, but only\nwith one worker. After that, it gets a bit faster than serial build.\nWhile not a great speedup, it's far better than the earlier results that\nshowed maybe 40% regression.\n\nI don't know what I did wrong before - maybe I had a build with an extra\ndebug info or something like that? No idea why would that affect only\none of the opclasses. But this time I made doubly sure the results are\ncorrect etc.\n\nAnyway, I'm fairly happy with these results. I don't think it's\nsurprising there are cases where parallel build does not help much.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 May 2024 16:28:36 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On Thu, 9 May 2024 at 15:13, Tomas Vondra <[email protected]> wrote:\n> Let me explain the relevant part of the patch, and how I understand the\n> improvement suggested by Matthias. The patch does the work in three phases:\n>\n>\n> 1) Worker gets data from table, split that into index items and add\n> those into a \"private\" tuplesort, and finally sorts that. So a worker\n> may see a key many times, with different TIDs, so the tuplesort may\n> contain many items for the same key, with distinct TID lists:\n>\n> key1: 1, 2, 3, 4\n> key1: 5, 6, 7\n> key1: 8, 9, 10\n> key2: 1, 2, 3\n> ...\n\nThis step is actually split in several components/phases, too.\nAs opposed to btree, which directly puts each tuple's data into the\ntuplesort, this GIN approach actually buffers the tuples in memory to\ngenerate these TID lists for data keys, and flushes these pairs of Key\n+ TID list into the tuplesort when its own memory limit is exceeded.\nThat means we essentially double the memory used for this data: One\nGIN deform buffer, and one in-memory sort buffer in the tuplesort.\nThis is fine for now, but feels duplicative, hence my \"let's allow\ntuplesort to merge the key+TID pairs into pairs of key+TID list\"\ncomment.\n\n> The trouble with (2) is that it \"just copies\" data from one tuplesort\n> into another, increasing the disk space requirements. In an extreme\n> case, when nothing can be combined, it pretty much doubles the amount of\n> disk space, and makes the build longer.\n>\n> What I think Matthias is suggesting, is that this \"TID list merging\"\n> could be done directly as part of the tuplesort in step (1). 
So instead\n> of just moving the \"sort tuples\" from the appropriate runs, it could\n> also do an optional step of combining the tuples and writing this\n> combined tuple into the tuplesort result (for that worker).\n\nYes, but with a slightly more extensive approach than that even, see above.\n\n> Matthias also mentioned this might be useful when building btree indexes\n> with key deduplication.\n>\n> AFAICS this might work, although it probably requires for the \"combined\"\n> tuple to be smaller than the sum of the combined tuples (in order to fit\n> into the space).\n\n*must not be larger than the sum; not \"must be smaller than the sum\" [^0].\nFor btree tuples with posting lists this is guaranteed to be true: The\nadded size of a btree tuple with a posting list (containing at least 2\nvalues) vs one without is the maxaligned size of 2 TIDs, or 16 bytes\n(12 on 32-bit systems). The smallest btree tuple with data is also 16\nbytes (or 12 bytes on 32-bit systems), so this works out nicely.\n\n> But at least in the GIN build in the workers this is\n> likely true, because the TID lists do not overlap (and thus not hurting\n> the compressibility).\n>\n> That being said, I still see this more as an optimization than something\n> required for the patch,\n\nAgreed.\n\n> and I don't think I'll have time to work on this\n> anytime soon. The patch is not extremely complex, but it's not trivial\n> either. But if someone wants to take a stab at extending tuplesort to\n> allow this, I won't object ...\n\nSame here: While I do have some ideas on where and how to implement\nthis, I'm not planning on working on that soon.\n\n\nKind regards,\n\nMatthias van de Meent\n\n[^0] There's some overhead in the tuplesort serialization too, so\nthere is some leeway there, too.\n\n\n", "msg_date": "Thu, 9 May 2024 17:51:30 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "\n\nOn 5/9/24 17:51, Matthias van de Meent wrote:\n> On Thu, 9 May 2024 at 15:13, Tomas Vondra <[email protected]> wrote:\n>> Let me explain the relevant part of the patch, and how I understand the\n>> improvement suggested by Matthias. The patch does the work in three phases:\n>>\n>>\n>> 1) Worker gets data from table, split that into index items and add\n>> those into a \"private\" tuplesort, and finally sorts that. So a worker\n>> may see a key many times, with different TIDs, so the tuplesort may\n>> contain many items for the same key, with distinct TID lists:\n>>\n>> key1: 1, 2, 3, 4\n>> key1: 5, 6, 7\n>> key1: 8, 9, 10\n>> key2: 1, 2, 3\n>> ...\n> \n> This step is actually split in several components/phases, too.\n> As opposed to btree, which directly puts each tuple's data into the\n> tuplesort, this GIN approach actually buffers the tuples in memory to\n> generate these TID lists for data keys, and flushes these pairs of Key\n> + TID list into the tuplesort when its own memory limit is exceeded.\n> That means we essentially double the memory used for this data: One\n> GIN deform buffer, and one in-memory sort buffer in the tuplesort.\n> This is fine for now, but feels duplicative, hence my \"let's allow\n> tuplesort to merge the key+TID pairs into pairs of key+TID list\"\n> comment.\n> \n\nTrue, although the \"GIN deform buffer\" (flushed by the callback if using\ntoo much memory) likely does most of the merging already. 
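To make that accumulate-and-flush pattern concrete, here is a minimal standalone
sketch. It is not the actual BuildAccumulator / deform buffer code - the names
(KeyBuffer, add, flush, MAX_KEYS) are invented for the illustration, and plain
ints stand in for keys and TIDs:

/*
 * Standalone sketch of "collect TIDs per key in memory, flush when the
 * budget is exceeded".  Not PostgreSQL code; everything here is a
 * simplified stand-in for the real deform buffer / callback logic.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
	int		key;			/* stand-in for the extracted index key */
	int		ntids;			/* TIDs collected for this key so far */
	int		tids[8];		/* the real code grows this as needed */
} KeyBuffer;

#define MAX_KEYS	4		/* pretend memory budget: at most 4 keys */

static KeyBuffer buffer[MAX_KEYS];
static int	nkeys = 0;

static int
cmp_key(const void *a, const void *b)
{
	return ((const KeyBuffer *) a)->key - ((const KeyBuffer *) b)->key;
}

/* emit the accumulated (key, TID list) pairs in key order, then reset */
static void
flush(void)
{
	qsort(buffer, nkeys, sizeof(KeyBuffer), cmp_key);
	for (int i = 0; i < nkeys; i++)
	{
		printf("key %d:", buffer[i].key);
		for (int j = 0; j < buffer[i].ntids; j++)
			printf(" %d", buffer[i].tids[j]);
		printf("\n");
	}
	nkeys = 0;
}

/* add one (key, tid) pair, merging into an existing entry if possible */
static void
add(int key, int tid)
{
	for (int i = 0; i < nkeys; i++)
	{
		if (buffer[i].key == key)
		{
			buffer[i].tids[buffer[i].ntids++] = tid;
			return;
		}
	}
	if (nkeys == MAX_KEYS)		/* over the pretend budget */
		flush();
	buffer[nkeys].key = key;
	buffer[nkeys].ntids = 1;
	buffer[nkeys].tids[0] = tid;
	nkeys++;
}

int
main(void)
{
	/* keys arrive in heap order, so the same key shows up repeatedly */
	int			keys[] = {1, 2, 1, 3, 2, 4, 5, 1};

	for (int tid = 0; tid < 8; tid++)
		add(keys[tid], tid);
	flush();					/* final flush at the end of the scan */
	return 0;
}

Note how key 1 gets emitted twice with different TID lists, once per flush -
which is exactly why the later per-worker / leader merge steps exist at all.
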
If it only\nhappened in the tuplesort merge, we'd likely have far more tuples and\noverhead associated with that. So we certainly won't get rid of either\nof these things.\n\nYou're right the memory limits are a bit unclear, and need more thought.\nI certainly have not thought very much about not using more than the\nspecified maintenance_work_mem amount. This includes the planner code\ndetermining the number of workers to use - right now it simply does the\nsame thing as for btree/brin, i.e. assumes each workers uses 32MB of\nmemory and checks how many workers fit into maintenance_work_mem.\n\nThat was a bit bogus even for BRIN, because BRIN sorts only summaries,\nwhich is typically tiny - perhaps a couple kB, much less than 32MB. But\nit's still just one sort, and some opclasses may be much larger (like\nbloom, for example). So I just went with it.\n\nBut for GIN it's more complicated, because we have two tuplesorts (not\nsure if both can use the memory at the same time) and the GIN deform\nbuffer. Which probably means we need to have a per-worker allowance\nconsidering all these buffers.\n\n>> The trouble with (2) is that it \"just copies\" data from one tuplesort\n>> into another, increasing the disk space requirements. In an extreme\n>> case, when nothing can be combined, it pretty much doubles the amount of\n>> disk space, and makes the build longer.\n>>\n>> What I think Matthias is suggesting, is that this \"TID list merging\"\n>> could be done directly as part of the tuplesort in step (1). So instead\n>> of just moving the \"sort tuples\" from the appropriate runs, it could\n>> also do an optional step of combining the tuples and writing this\n>> combined tuple into the tuplesort result (for that worker).\n> \n> Yes, but with a slightly more extensive approach than that even, see above.\n> \n>> Matthias also mentioned this might be useful when building btree indexes\n>> with key deduplication.\n>>\n>> AFAICS this might work, although it probably requires for the \"combined\"\n>> tuple to be smaller than the sum of the combined tuples (in order to fit\n>> into the space).\n> \n> *must not be larger than the sum; not \"must be smaller than the sum\" [^0].\n\nYeah, I wrote that wrong.\n\n> For btree tuples with posting lists this is guaranteed to be true: The\n> added size of a btree tuple with a posting list (containing at least 2\n> values) vs one without is the maxaligned size of 2 TIDs, or 16 bytes\n> (12 on 32-bit systems). The smallest btree tuple with data is also 16\n> bytes (or 12 bytes on 32-bit systems), so this works out nicely.\n> \n>> But at least in the GIN build in the workers this is\n>> likely true, because the TID lists do not overlap (and thus not hurting\n>> the compressibility).\n>>\n>> That being said, I still see this more as an optimization than something\n>> required for the patch,\n> \n> Agreed.\n> \n\nOK\n\n>> and I don't think I'll have time to work on this\n>> anytime soon. The patch is not extremely complex, but it's not trivial\n>> either. But if someone wants to take a stab at extending tuplesort to\n>> allow this, I won't object ...\n> \n> Same here: While I do have some ideas on where and how to implement\n> this, I'm not planning on working on that soon.\n> \n\nUnderstood. 
I don't have a very good intuition on how significant the\nbenefit could be, which is one of the reasons why I have not prioritized\nthis very much.\n\nI did a quick experiment, to measure how expensive it is to build the\nsecond worker tuplesort - for the pg_trgm index build with 2 workers, it\ntakes ~30seconds. The index build takes ~550s in total, so 30s is ~5%.\nIf we eliminated all of this work we'd save this, but in reality some of\nit will still be necessary.\n\nPerhaps it's more significant for other indexes / slower storage, but it\ndoes not seem like a *must have* for v1.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 May 2024 21:26:04 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "\nTomas Vondra <[email protected]> writes:\n\n>> I guess both of you are talking about worker process, if here are\n>> something in my mind:\n>> \n>> *btbuild* also let the WORKER dump the tuples into Sharedsort struct\n>> and let the LEADER merge them directly. I think this aim of this design\n>> is it is potential to save a mergeruns. In the current patch, worker dump\n>> to local tuplesort and mergeruns it and then leader run the merges\n>> again. I admit the goal of this patch is reasonable, but I'm feeling we\n>> need to adapt this way conditionally somehow. and if we find the way, we\n>> can apply it to btbuild as well. \n>> \n>\n> I'm a bit confused about what you're proposing here, or how is that\n> related to what this patch is doing and/or to the what Matthias\n> mentioned in his e-mail from last week.\n>\n> Let me explain the relevant part of the patch, and how I understand the\n> improvement suggested by Matthias. The patch does the work in three phases:\n\nWhat's in my mind is:\n\n1. WORKER-1\n\nTempfile 1:\n\nkey1: 1\nkey3: 2\n...\n\nTempfile 2:\n\nkey5: 3\nkey7: 4\n...\n\n2. WORKER-2\n\nTempfile 1:\n\nKey2: 1\nKey6: 2\n...\n\nTempfile 2:\nKey3: 3\nKey6: 4\n..\n\nIn the above example: if we do the the merge in LEADER, only 1 mergerun\nis needed. reading 4 tempfile 8 tuples in total and write 8 tuples.\n\nIf we adds another mergerun into WORKER, the result will be:\n\nWORKER1: reading 2 tempfile 4 tuples and write 1 tempfile (called X) 4\ntuples. \nWORKER2: reading 2 tempfile 4 tuples and write 1 tempfile (called Y) 4\ntuples. \n\nLEADER: reading 2 tempfiles (X & Y) including 8 tuples and write it\ninto final tempfile.\n\nSo the intermedia result X & Y requires some extra effort. so I think\nthe \"extra mergerun in worker\" is *not always* a win, and my proposal is\nif we need to distinguish the cases in which one we should add the\n\"extra mergerun in worker\" step.\n\n> The trouble with (2) is that it \"just copies\" data from one tuplesort\n> into another, increasing the disk space requirements. In an extreme\n> case, when nothing can be combined, it pretty much doubles the amount of\n> disk space, and makes the build longer.\n\nThis sounds like the same question as I talk above, However my proposal\nis to distinguish which cost is bigger between \"the cost saving from\nmerging TIDs in WORKERS\" and \"cost paid because of the extra copy\",\nthen we do that only when we are sure we can benefits from it, but I\nknow it is hard and not sure if it is doable. 
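For what it is worth, the trade-off can be put into rough numbers. The sketch
below is nothing more than back-of-the-envelope arithmetic over the example
above - the inputs (workers, runs, tuples) are hypothetical, not measured:

/*
 * Rough comparison of the two strategies: (A) leader merges all worker
 * runs directly vs. (B) each worker first merges its own runs, then the
 * leader merges one run per worker.  Illustrative arithmetic only.
 */
#include <stdio.h>

int
main(void)
{
	long		workers = 2;
	long		runs_per_worker = 2;
	long		tuples_per_run = 4;
	long		total = workers * runs_per_worker * tuples_per_run;

	/* A: every tuple is read once and written once by the leader */
	long		a_read = total;
	long		a_write = total;

	/* B: one extra read+write of everything for the intermediate runs */
	long		b_read = 2 * total;
	long		b_write = 2 * total;

	printf("A: read %ld, write %ld, runs merged by the leader: %ld\n",
		   a_read, a_write, workers * runs_per_worker);
	printf("B: read %ld, write %ld, runs merged by the leader: %ld\n",
		   b_read, b_write, workers);
	return 0;
}

The extra volume in (B) is the price paid for the leader having far fewer runs
to merge, which is exactly the cost balance in question.
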
\n\n> What I think Matthias is suggesting, is that this \"TID list merging\"\n> could be done directly as part of the tuplesort in step (1). So instead\n> of just moving the \"sort tuples\" from the appropriate runs, it could\n> also do an optional step of combining the tuples and writing this\n> combined tuple into the tuplesort result (for that worker).\n\nOK, I get it now. So we talked about lots of merge so far at different\nstage and for different sets of tuples.\n\n1. \"GIN deform buffer\" did the TIDs merge for the same key for the tuples\nin one \"deform buffer\" batch, as what the current master is doing.\n\n2. \"in memory buffer sort\" stage, currently there is no TID merge so\nfar and Matthias suggest that. \n\n3. Merge the TIDs for the same keys in LEADER vs in WORKER first +\nLEADER then. this is what your 0002 commit does now and I raised some\nconcerns as above.\n\n> Matthias also mentioned this might be useful when building btree indexes\n> with key deduplication.\n\n> AFAICS this might work, although it probably requires for the \"combined\"\n> tuple to be smaller than the sum of the combined tuples (in order to fit\n> into the space). But at least in the GIN build in the workers this is\n> likely true, because the TID lists do not overlap (and thus not hurting\n> the compressibility).\n>\n> That being said, I still see this more as an optimization than something\n> required for the patch,\n\nIf GIN deform buffer is big enough (like greater than the in memory\nbuffer sort) shall we have any gain because of this, since the\nscope is the tuples in in-memory-buffer-sort. \n\n> and I don't think I'll have time to work on this\n> anytime soon. The patch is not extremely complex, but it's not trivial\n> either. But if someone wants to take a stab at extending tuplesort to\n> allow this, I won't object ...\n\nAgree with this. I am more interested with understanding the whole\ndesign and the scope to fix in this patch, and then I can do some code\nreview and testing, as for now, I still in the \"understanding design and\nscope\" stage. If I'm too slow about this patch, please feel free to\ncommit it any time and I don't expect I can find any valueable\nimprovement and bugs. I probably needs another 1 ~ 2 weeks to study\nthis patch.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Fri, 10 May 2024 13:53:27 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On 5/10/24 07:53, Andy Fan wrote:\n> \n> Tomas Vondra <[email protected]> writes:\n> \n>>> I guess both of you are talking about worker process, if here are\n>>> something in my mind:\n>>>\n>>> *btbuild* also let the WORKER dump the tuples into Sharedsort struct\n>>> and let the LEADER merge them directly. I think this aim of this design\n>>> is it is potential to save a mergeruns. In the current patch, worker dump\n>>> to local tuplesort and mergeruns it and then leader run the merges\n>>> again. I admit the goal of this patch is reasonable, but I'm feeling we\n>>> need to adapt this way conditionally somehow. and if we find the way, we\n>>> can apply it to btbuild as well. \n>>>\n>>\n>> I'm a bit confused about what you're proposing here, or how is that\n>> related to what this patch is doing and/or to the what Matthias\n>> mentioned in his e-mail from last week.\n>>\n>> Let me explain the relevant part of the patch, and how I understand the\n>> improvement suggested by Matthias. 
The patch does the work in three phases:\n> \n> What's in my mind is:\n> \n> 1. WORKER-1\n> \n> Tempfile 1:\n> \n> key1: 1\n> key3: 2\n> ...\n> \n> Tempfile 2:\n> \n> key5: 3\n> key7: 4\n> ...\n> \n> 2. WORKER-2\n> \n> Tempfile 1:\n> \n> Key2: 1\n> Key6: 2\n> ...\n> \n> Tempfile 2:\n> Key3: 3\n> Key6: 4\n> ..\n> \n> In the above example: if we do the the merge in LEADER, only 1 mergerun\n> is needed. reading 4 tempfile 8 tuples in total and write 8 tuples.\n> \n> If we adds another mergerun into WORKER, the result will be:\n> \n> WORKER1: reading 2 tempfile 4 tuples and write 1 tempfile (called X) 4\n> tuples. \n> WORKER2: reading 2 tempfile 4 tuples and write 1 tempfile (called Y) 4\n> tuples. \n> \n> LEADER: reading 2 tempfiles (X & Y) including 8 tuples and write it\n> into final tempfile.\n> \n> So the intermedia result X & Y requires some extra effort. so I think\n> the \"extra mergerun in worker\" is *not always* a win, and my proposal is\n> if we need to distinguish the cases in which one we should add the\n> \"extra mergerun in worker\" step.\n> \n\nThe thing you're forgetting is that the mergesort in the worker is\n*always* a simple append, because the lists are guaranteed to be\nnon-overlapping, so it's very cheap. The lists from different workers\nare however very likely to overlap, and hence a \"full\" mergesort is\nneeded, which is way more expensive.\n\nAnd not only that - without the intermediate merge, there will be very\nmany of those lists the leader would have to merge.\n\nIf we do the append-only merges in the workers first, we still need to\nmerge them in the leader, of course, but we have few lists to merge\n(only about one per worker).\n\nOf course, this means extra I/O on the intermediate tuplesort, and it's\nnot difficult to imagine cases with no benefit, or perhaps even a\nregression. For example, if the keys are unique, the in-worker merge\nstep can't really do anything. But that seems quite unlikely IMHO.\n\nAlso, if this overhead was really significant, we would not see the nice\nspeedups I measured during testing.\n\n>> The trouble with (2) is that it \"just copies\" data from one tuplesort\n>> into another, increasing the disk space requirements. In an extreme\n>> case, when nothing can be combined, it pretty much doubles the amount of\n>> disk space, and makes the build longer.\n> \n> This sounds like the same question as I talk above, However my proposal\n> is to distinguish which cost is bigger between \"the cost saving from\n> merging TIDs in WORKERS\" and \"cost paid because of the extra copy\",\n> then we do that only when we are sure we can benefits from it, but I\n> know it is hard and not sure if it is doable. \n> \n\nYeah. I'm not against picking the right execution strategy during the\nindex build, but it's going to be difficult, because we really don't\nhave the information to make a reliable decision.\n\nWe can't even use the per-column stats, because it does not say much\nabout the keys extracted by GIN, I think. And we need to do the decision\nat the very beginning, before we write the first batch of data either to\nthe local or shared tuplesort.\n\nBut maybe we could wait until we need to flush the first batch of data\n(in the callback), and make the decision then? 
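As a side note, here is a minimal standalone sketch of that append-vs-mergesort
distinction, with plain integers standing in for TIDs; this is not the actual
ginMergeItemPointers code, just the general idea:

/*
 * Sketch of why combining TID lists within one worker is a plain append
 * while lists from different workers need a real merge.  Plain integers
 * stand in for TIDs; this is not the PostgreSQL implementation.
 */
#include <stdio.h>

/* valid only when every value in a is smaller than every value in b */
static int
append_lists(const int *a, int na, const int *b, int nb, int *out)
{
	int			n = 0;

	for (int i = 0; i < na; i++)
		out[n++] = a[i];
	for (int i = 0; i < nb; i++)
		out[n++] = b[i];
	return n;
}

/* classic two-way merge, needed when the ranges interleave */
static int
merge_lists(const int *a, int na, const int *b, int nb, int *out)
{
	int			i = 0,
				j = 0,
				n = 0;

	while (i < na && j < nb)
		out[n++] = (a[i] <= b[j]) ? a[i++] : b[j++];
	while (i < na)
		out[n++] = a[i++];
	while (j < nb)
		out[n++] = b[j++];
	return n;
}

static void
print_list(const char *label, const int *v, int n)
{
	printf("%s:", label);
	for (int i = 0; i < n; i++)
		printf(" %d", v[i]);
	printf("\n");
}

int
main(void)
{
	/* two runs from the same worker: the TID ranges do not overlap */
	int			run1[] = {1, 2, 3};
	int			run2[] = {7, 8, 9};
	int			worker1[6];
	int			n1 = append_lists(run1, 3, run2, 3, worker1);

	/* list from another worker: interleaves with worker 1's range */
	int			worker2[] = {4, 5, 6, 10};
	int			final[10];
	int			nf = merge_lists(worker1, n1, worker2, 4, final);

	print_list("worker 1 (append)", worker1, n1);
	print_list("leader (merge)", final, nf);
	return 0;
}

The append case is what a single worker sees; the interleaved case is what the
leader has to deal with, which is why collapsing the per-worker lists first
pays off.
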
In principle, if we only\nflush once at the end, the intermediate sort is not needed at all (fairy\nunlikely for large data sets, though).\n\nWell, in principle, maybe we could even start writing into the local\ntuplesort, and then \"rethink\" after a while and switch to the shared\none. We'd still need to copy data we've already written to the local\ntuplesort, but hopefully that'd be just a fraction compared to doing\nthat for the whole table.\n\n\n>> What I think Matthias is suggesting, is that this \"TID list merging\"\n>> could be done directly as part of the tuplesort in step (1). So instead\n>> of just moving the \"sort tuples\" from the appropriate runs, it could\n>> also do an optional step of combining the tuples and writing this\n>> combined tuple into the tuplesort result (for that worker).\n> \n> OK, I get it now. So we talked about lots of merge so far at different\n> stage and for different sets of tuples.\n> \n> 1. \"GIN deform buffer\" did the TIDs merge for the same key for the tuples\n> in one \"deform buffer\" batch, as what the current master is doing.\n> \n> 2. \"in memory buffer sort\" stage, currently there is no TID merge so\n> far and Matthias suggest that. \n> \n> 3. Merge the TIDs for the same keys in LEADER vs in WORKER first +\n> LEADER then. this is what your 0002 commit does now and I raised some\n> concerns as above.\n> \n\nOK\n\n>> Matthias also mentioned this might be useful when building btree indexes\n>> with key deduplication.\n> \n>> AFAICS this might work, although it probably requires for the \"combined\"\n>> tuple to be smaller than the sum of the combined tuples (in order to fit\n>> into the space). But at least in the GIN build in the workers this is\n>> likely true, because the TID lists do not overlap (and thus not hurting\n>> the compressibility).\n>>\n>> That being said, I still see this more as an optimization than something\n>> required for the patch,\n> \n> If GIN deform buffer is big enough (like greater than the in memory\n> buffer sort) shall we have any gain because of this, since the\n> scope is the tuples in in-memory-buffer-sort. \n> \n\nI don't think this is very likely. The only case when the GIN deform\ntuple is \"big enough\" is when we don't need to flush in the callback,\nbut that is going to happen only for \"small\" tables. And for those we\nshould not really do parallel builds. And even if we do, the overhead\nwould be pretty insignificant.\n\n>> and I don't think I'll have time to work on this\n>> anytime soon. The patch is not extremely complex, but it's not trivial\n>> either. But if someone wants to take a stab at extending tuplesort to\n>> allow this, I won't object ...\n> \n> Agree with this. I am more interested with understanding the whole\n> design and the scope to fix in this patch, and then I can do some code\n> review and testing, as for now, I still in the \"understanding design and\n> scope\" stage. If I'm too slow about this patch, please feel free to\n> commit it any time and I don't expect I can find any valueable\n> improvement and bugs. 
I probably needs another 1 ~ 2 weeks to study\n> this patch.\n> \n\nSure, happy to discuss and answer questions.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 10 May 2024 13:29:12 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "\nTomas Vondra <[email protected]> writes:\n\n>>> 7) v20240502-0007-Detect-wrap-around-in-parallel-callback.patch\n>>>\n>>> There's one more efficiency problem - the parallel scans are required to\n>>> be synchronized, i.e. the scan may start half-way through the table, and\n>>> then wrap around. Which however means the TID list will have a very wide\n>>> range of TID values, essentially the min and max of for the key.\n>> \n>> I have two questions here and both of them are generall gin index questions\n>> rather than the patch here.\n>> \n>> 1. What does the \"wrap around\" mean in the \"the scan may start half-way\n>> through the table, and then wrap around\". Searching \"wrap\" in\n>> gin/README gets nothing. \n>> \n>\n> The \"wrap around\" is about the scan used to read data from the table\n> when building the index. A \"sync scan\" may start e.g. at TID (1000,0)\n> and read till the end of the table, and then wraps and returns the\n> remaining part at the beginning of the table for blocks 0-999.\n>\n> This means the callback would not see a monotonically increasing\n> sequence of TIDs.\n>\n> Which is why the serial build disables sync scans, allowing simply\n> appending values to the sorted list, and even with regular flushes of\n> data into the index we can simply append data to the posting lists.\n\nThanks for the hints, I know the sync strategy comes from syncscan.c\nnow. \n\n>>> Without 0006 this would cause frequent failures of the index build, with\n>>> the error I already mentioned:\n>>>\n>>> ERROR: could not split GIN page; all old items didn't fit\n\n>> 2. I can't understand the below error.\n>> \n>>> ERROR: could not split GIN page; all old items didn't fit\n\n> if (!append || ItemPointerCompare(&maxOldItem, &remaining) >= 0)\n> elog(ERROR, \"could not split GIN page; all old items didn't fit\");\n>\n> It can fail simply because of the !append part.\n\nGot it, Thanks!\n\n>> If we split the blocks among worker 1-block by 1-block, we will have a\n>> serious issue like here. If we can have N-block by N-block, and N-block\n>> is somehow fill the work_mem which makes the dedicated temp file, we\n>> can make things much better, can we? \n\n> I don't understand the question. The blocks are distributed to workers\n> by the parallel table scan, and it certainly does not do that block by\n> block. But even it it did, that's not a problem for this code.\n\nOK, I get ParallelBlockTableScanWorkerData.phsw_chunk_size is designed\nfor this.\n\n> The problem is that if the scan wraps around, then one of the TID lists\n> for a given worker will have the min TID and max TID, so it will overlap\n> with every other TID list for the same key in that worker. 
And when the\n> worker does the merging, this list will force a \"full\" merge sort for\n> all TID lists (for that key), which is very expensive.\n\nOK.\n\nThanks for all the answers, they are pretty instructive!\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Mon, 13 May 2024 16:19:43 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On 5/13/24 10:19, Andy Fan wrote:\n> \n> Tomas Vondra <[email protected]> writes:\n> \n>> ...\n>>\n>> I don't understand the question. The blocks are distributed to workers\n>> by the parallel table scan, and it certainly does not do that block by\n>> block. But even it it did, that's not a problem for this code.\n> \n> OK, I get ParallelBlockTableScanWorkerData.phsw_chunk_size is designed\n> for this.\n> \n>> The problem is that if the scan wraps around, then one of the TID lists\n>> for a given worker will have the min TID and max TID, so it will overlap\n>> with every other TID list for the same key in that worker. And when the\n>> worker does the merging, this list will force a \"full\" merge sort for\n>> all TID lists (for that key), which is very expensive.\n> \n> OK.\n> \n> Thanks for all the answers, they are pretty instructive!\n> \n\nThanks for the questions, it forces me to articulate the arguments more\nclearly. I guess it'd be good to put some of this into a README or at\nleast a comment at the beginning of gininsert.c or somewhere close.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 May 2024 11:19:08 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "\nHi Tomas,\n\nI have completed my first round of review, generally it looks good to\nme, more testing need to be done in the next days. Here are some tiny\ncomments from my side, just FYI. \n\n1. Comments about GinBuildState.bs_leader looks not good for me. \n \n\t/*\n\t * bs_leader is only present when a parallel index build is performed, and\n\t * only in the leader process. (Actually, only the leader process has a\n\t * GinBuildState.)\n\t */\n\tGinLeader *bs_leader;\n\nIn the worker function _gin_parallel_build_main:\ninitGinState(&buildstate.ginstate, indexRel); is called, and the\nfollowing members in workers at least: buildstate.funcCtx,\nbuildstate.accum and so on. So is the comment \"only the leader process\nhas a GinBuildState\" correct?\n\n2. progress argument is not used?\n_gin_parallel_scan_and_build(GinBuildState *state,\n\t\t\t\t\t\t\t GinShared *ginshared, Sharedsort *sharedsort,\n\t\t\t\t\t\t\t Relation heap, Relation index,\n\t\t\t\t\t\t\t int sortmem, bool progress)\n\n\n3. In function tuplesort_begin_index_gin, comments about nKeys takes me\nsome time to think about why 1 is correct(rather than\nIndexRelationGetNumberOfKeyAttributes) and what does the \"only the index\nkey\" means. \n\nbase->nKeys = 1;\t\t\t/* Only the index key */\n\nfinally I think it is because gin index stores each attribute value into\nan individual index entry for a multi-column index, so each index entry\nhas only 1 key. So we can comment it as the following? \n\n\"Gin Index stores the value of each attribute into different index entry\nfor mutli-column index, so each index entry has only 1 key all the\ntime.\" This probably makes it easier to understand.\n\n\n4. 
GinBuffer: The comment \"Similar purpose to BuildAccumulator, but much\nsimpler.\" makes me think why do we need a simpler but\nsimilar structure, After some thoughts, they are similar at accumulating\nTIDs only. GinBuffer is designed for \"same key value\" (hence\nGinBufferCanAddKey). so IMO, the first comment is good enough and the 2 \ncomments introduce confuses for green hand and is potential to remove\nit. \n\n/*\n * State used to combine accumulate TIDs from multiple GinTuples for the same\n * key value.\n *\n * XXX Similar purpose to BuildAccumulator, but much simpler.\n */\ntypedef struct GinBuffer\n\n\n5. GinBuffer: ginMergeItemPointers always allocate new memory for the\nnew items and hence we have to pfree old memory each time. However it is\nnot necessary in some places, for example the new items can be appended\nto Buffer->items (and this should be a common case). So could we\npre-allocate some spaces for items and reduce the number of pfree/palloc\nand save some TID items copy in the desired case?\n\n6. GinTuple.ItemPointerData first;\t\t/* first TID in the array */\n\nis ItemPointerData.ip_blkid good enough for its purpose? If so, we can\nsave the memory for OffsetNumber for each GinTuple.\n\nItem 5) and 6) needs some coding and testing. If it is OK to do, I'd\nlike to take it as an exercise in this area. (also including the item\n1~4.)\n\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 28 May 2024 09:29:48 +0000", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "Hi Andy,\n\nThanks for the review. Here's an updated patch series, addressing most\nof the points you've raised - I've kept them in \"fixup\" patches for now,\nshould be merged into 0001.\n\nMore detailed responses below.\n\nOn 5/28/24 11:29, Andy Fan wrote:\n> \n> Hi Tomas,\n> \n> I have completed my first round of review, generally it looks good to\n> me, more testing need to be done in the next days. Here are some tiny\n> comments from my side, just FYI. \n> \n> 1. Comments about GinBuildState.bs_leader looks not good for me. \n> \n> \t/*\n> \t * bs_leader is only present when a parallel index build is performed, and\n> \t * only in the leader process. (Actually, only the leader process has a\n> \t * GinBuildState.)\n> \t */\n> \tGinLeader *bs_leader;\n> \n> In the worker function _gin_parallel_build_main:\n> initGinState(&buildstate.ginstate, indexRel); is called, and the\n> following members in workers at least: buildstate.funcCtx,\n> buildstate.accum and so on. So is the comment \"only the leader process\n> has a GinBuildState\" correct?\n> \n\nYeah, this is misleading. I don't remember what exactly was my reasoning\nfor this wording, I've removed the comment.\n\n> 2. progress argument is not used?\n> _gin_parallel_scan_and_build(GinBuildState *state,\n> \t\t\t\t\t\t\t GinShared *ginshared, Sharedsort *sharedsort,\n> \t\t\t\t\t\t\t Relation heap, Relation index,\n> \t\t\t\t\t\t\t int sortmem, bool progress)\n> \n\nI've modified the code to use the progress flag, but now that I look at\nit I'm a bit unsure I understand the purpose of this. I've modeled this\nafter what the btree does, and I see that there are two places calling\n_bt_parallel_scan_and_sort:\n\n1) _bt_leader_participate_as_worker: progress=true\n\n2) _bt_parallel_build_main: progress=false\n\nIsn't that a bit weird? AFAIU the progress will be updated only by the\nleader, but will that progress be correct? 
And doesn't that means the if\nthe leader does not participate as a worker, the progress won't be updated?\n\nFWIW The parallel BRIN code has the same issue - it's not using the\nprogress flag in _brin_parallel_scan_and_build.\n\n> \n> 3. In function tuplesort_begin_index_gin, comments about nKeys takes me\n> some time to think about why 1 is correct(rather than\n> IndexRelationGetNumberOfKeyAttributes) and what does the \"only the index\n> key\" means. \n> \n> base->nKeys = 1;\t\t\t/* Only the index key */\n> \n> finally I think it is because gin index stores each attribute value into\n> an individual index entry for a multi-column index, so each index entry\n> has only 1 key. So we can comment it as the following? \n> \n> \"Gin Index stores the value of each attribute into different index entry\n> for mutli-column index, so each index entry has only 1 key all the\n> time.\" This probably makes it easier to understand.\n> \n\nOK, I see what you mean. The other tuplesort_begin_ functions nearby\nhave similar comments, but you're right GIN is a bit special in that it\n\"splits\" multi-column indexes into individual index entries. I've added\na comment (hopefully) clarifying this.\n\n> \n> 4. GinBuffer: The comment \"Similar purpose to BuildAccumulator, but much\n> simpler.\" makes me think why do we need a simpler but\n> similar structure, After some thoughts, they are similar at accumulating\n> TIDs only. GinBuffer is designed for \"same key value\" (hence\n> GinBufferCanAddKey). so IMO, the first comment is good enough and the 2 \n> comments introduce confuses for green hand and is potential to remove\n> it. \n> \n> /*\n> * State used to combine accumulate TIDs from multiple GinTuples for the same\n> * key value.\n> *\n> * XXX Similar purpose to BuildAccumulator, but much simpler.\n> */\n> typedef struct GinBuffer\n> \n\nI've updated the comment explaining the differences a bit clearer.\n\n> \n> 5. GinBuffer: ginMergeItemPointers always allocate new memory for the\n> new items and hence we have to pfree old memory each time. However it is\n> not necessary in some places, for example the new items can be appended\n> to Buffer->items (and this should be a common case). So could we\n> pre-allocate some spaces for items and reduce the number of pfree/palloc\n> and save some TID items copy in the desired case?\n> \n\nPerhaps, but that seems rather independent of this patch.\n\nAlso, I'm not sure how much would this optimization matter in practice.\nThe merge should happens fairly rarely, when we decide to store the TIDs\ninto the index. And then it's also subject to the caching built into the\nmemory contexts, limiting the malloc costs. We'll still pay for the\nmemcpy, of course.\n\nAnyway, it's an optimization that would affect existing callers of\nginMergeItemPointers. I don't plan to tweak this in this patch.\n\n\n> 6. GinTuple.ItemPointerData first;\t\t/* first TID in the array */\n> \n> is ItemPointerData.ip_blkid good enough for its purpose? If so, we can\n> save the memory for OffsetNumber for each GinTuple.\n> \n> Item 5) and 6) needs some coding and testing. If it is OK to do, I'd\n> like to take it as an exercise in this area. 
(also including the item\n> 1~4.)\n> \n\nIt might save 2 bytes in the struct, but that's negligible compared to\nthe memory usage overall (we only keep one GinTuple, but many TIDs and\nso on), and we allocate the space in power-of-2 pattern anyway (which\nmeans the 2B won't matter).\n\nMoreover, using just the block number would make it harder to compare\nthe TIDs (now we can just call ItemPointerCompare).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 19 Jun 2024 13:55:35 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "Here's a cleaned up patch series, merging the fixup patches into 0001.\n\nI've also removed the memset() from ginInsertBAEntry(). This was meant\nto fix valgrind reports, but I believe this was just a symptom of\nincorrect handling of byref data types, which was fixed in 2024/05/02\npatch version.\n\nThe other thing I did is cleanup of FIXME and XXX comments. There were a\ncouple stale/obsolete comments, discussing issues that have been already\nfixed (like the scan wrapping around).\n\nA couple things to fix remain, but all of them are minor. And there's\nalso a couple XXX comments, often describing thing that is then done in\none of the following patches.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 20 Jun 2024 23:19:43 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "Here's a bit more cleaned up version, clarifying a lot of comments,\nremoving a bunch of obsolete comments, or comments speculating about\npossible solutions, that sort of thing. I've also removed couple more\nXXX comments, etc.\n\nThe main change however is that the sorting no longer relies on memcmp()\nto compare the values. I did that because it was enough for the initial\nWIP patches, and it worked till now - but the comments explained this\nmay not be a good idea if the data type allows the same value to have\nmultiple binary representations, or something like that.\n\nI don't have a practical example to show an issue, but I guess if using\nmemcmp() was safe we'd be doing it in a bunch of places already, and\nAFAIK we're not. And even if it happened to be OK, this is a probably\nnot the place where to start doing it.\n\nSo I've switched this to use the regular data-type comparisons, with\nSortSupport etc. There's a bit more cleanup remaining and testing\nneeded, but I'm not aware of any bugs.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 24 Jun 2024 02:58:16 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n\nHi Tomas,\n\nI am in a incompleted review process but I probably doesn't have much\ntime on this because of my internal tasks. So I just shared what I\ndid and the non-good-result patch.\n\nWhat I tried to do is:\n1) remove all the \"sort\" effort for the state->bs_sort_state tuples since\nits input comes from state->bs_worker_state which is sorted already.\n\n2). 
remove *partial* \"sort\" operations between accum.rbtree to\nstate->bs_worker_state because the tuple in accum.rbtree is sorted\nalready. \n\nBoth of them can depend on the same API changes. \n\n1. \nstruct Tuplesortstate\n{\n..\n+ bool input_presorted; /* the tuples are presorted. */\n+ new_tapes; // writes the tuples in memory into a new 'run'. \n}\n\nand user can set it during tuplesort_begin_xx(, presorte=true);\n\n\n2. in tuplesort_puttuple, if memory is full but presorted is\ntrue, we can (a) avoid the sort. (b) resuse the existing 'runs'\nto reduce the effort of 'mergeruns' unless new_tapes is set to\ntrue. once it switch to a new tapes, the set state->new_tapes to false\nand wait 3) to change it to true again.\n\n3. tuplesort_dumptuples(..); // dump the tuples in memory and set\nnew_tapes=true to tell the *this batch of input is presorted but they\nare done, the next batch is just presort in its own batch*.\n\nIn the gin-parallel-build case, for the case 1), we can just use\n\nfor tuple in bs_worker_sort: \n\ttuplesort_putgintuple(state->bs_sortstate, ..);\ntuplesort_dumptuples(..);\n\nAt last we can get a). only 1 run in the worker so that the leader can\nhave merge less runs in mergeruns. b). reduce the sort both in\nperform_sort_tuplesort and in sortstate_puttuple_common.\n\nfor the case 2). we can have:\n\n for tuple in RBTree.tuples:\n \t tuplesort_puttuples(tuple) ; \n // this may cause a dumptuples internally when the memory is full,\n // but it is OK.\n tuplesort_dumptuples(..);\n\nwe can just remove the \"sort\" into sortstate_puttuple_common but\nprobably increase the 'runs' in sortstate which will increase the effort\nof mergeruns at last.\n\nBut the test result is not good, maybe the 'sort' is not a key factor of\nthis. I do missed the perf step before doing this. or maybe my test data\nis too small. \n\nHere is the patch I used for the above activity. and I used the\nfollowing sql to test. \n\nCREATE TABLE t(a int[], b numeric[]);\n\n-- generate 1000 * 1000 rows.\ninsert into t select i, n\nfrom normal_rand_array(1000, 90, 1::int4, 10000::int4) i,\nnormal_rand_array(1000, 90, '1.00233234'::numeric, '8.239241989134'::numeric) n;\n\nalter table t set (parallel_workers=4);\nset debug_parallel_query to on;\nset max_parallel_maintenance_workers to 4;\n\ncreate index on t using gin(a);\ncreate index on t using gin(b);\n\nI found normal_rand_array is handy to use in this case and I\nregister it into https://commitfest.postgresql.org/48/5061/.\n\nBesides the above stuff, I didn't find anything wrong in the currrent\npatch, and the above stuff can be categoried into \"furture improvement\"\neven it is worthy to. \n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 02 Jul 2024 08:07:57 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On 7/2/24 02:07, Andy Fan wrote:\n> Tomas Vondra <[email protected]> writes:\n> \n> Hi Tomas,\n> \n> I am in a incompleted review process but I probably doesn't have much\n> time on this because of my internal tasks. So I just shared what I\n> did and the non-good-result patch.\n> \n> What I tried to do is:\n> 1) remove all the \"sort\" effort for the state->bs_sort_state tuples since\n> its input comes from state->bs_worker_state which is sorted already.\n> \n> 2). remove *partial* \"sort\" operations between accum.rbtree to\n> state->bs_worker_state because the tuple in accum.rbtree is sorted\n> already. 
\n> \n> Both of them can depend on the same API changes. \n> \n> 1. \n> struct Tuplesortstate\n> {\n> ..\n> + bool input_presorted; /* the tuples are presorted. */\n> + new_tapes; // writes the tuples in memory into a new 'run'. \n> }\n> \n> and user can set it during tuplesort_begin_xx(, presorte=true);\n> \n> \n> 2. in tuplesort_puttuple, if memory is full but presorted is\n> true, we can (a) avoid the sort. (b) resuse the existing 'runs'\n> to reduce the effort of 'mergeruns' unless new_tapes is set to\n> true. once it switch to a new tapes, the set state->new_tapes to false\n> and wait 3) to change it to true again.\n> \n> 3. tuplesort_dumptuples(..); // dump the tuples in memory and set\n> new_tapes=true to tell the *this batch of input is presorted but they\n> are done, the next batch is just presort in its own batch*.\n> \n> In the gin-parallel-build case, for the case 1), we can just use\n> \n> for tuple in bs_worker_sort: \n> \ttuplesort_putgintuple(state->bs_sortstate, ..);\n> tuplesort_dumptuples(..);\n> \n> At last we can get a). only 1 run in the worker so that the leader can\n> have merge less runs in mergeruns. b). reduce the sort both in\n> perform_sort_tuplesort and in sortstate_puttuple_common.\n> \n> for the case 2). we can have:\n> \n> for tuple in RBTree.tuples:\n> \t tuplesort_puttuples(tuple) ; \n> // this may cause a dumptuples internally when the memory is full,\n> // but it is OK.\n> tuplesort_dumptuples(..);\n> \n> we can just remove the \"sort\" into sortstate_puttuple_common but\n> probably increase the 'runs' in sortstate which will increase the effort\n> of mergeruns at last.\n> \n> But the test result is not good, maybe the 'sort' is not a key factor of\n> this. I do missed the perf step before doing this. or maybe my test data\n> is too small. \n> \n\nIf I understand the idea correctly, you're saying that we write the data\nfrom BuildAccumulator already sorted, so if we do that only once, it's\nalready sorted and we don't actually need the in-worker tuplesort.\n\nI think that's a good idea in principle, but maybe the simplest way to\nhandle this is by remembering if we already flushed any data, and if we\ndo that for the first time at the very end of the scan, we can write\nstuff directly to the shared tuplesort. That seems much simpler than\ndoing this inside the tuplesort code.\n\nOr did I get the idea wrong?\n\nFWIW I'm not sure how much this will help in practice. We only really\nwant to do parallel index build for fairly large tables, which makes it\nless likely the data will fit into the buffer (and if we flush during\nthe scan, that disables the optimization).\n\n> Here is the patch I used for the above activity. and I used the\n> following sql to test. 
\n> \n> CREATE TABLE t(a int[], b numeric[]);\n> \n> -- generate 1000 * 1000 rows.\n> insert into t select i, n\n> from normal_rand_array(1000, 90, 1::int4, 10000::int4) i,\n> normal_rand_array(1000, 90, '1.00233234'::numeric, '8.239241989134'::numeric) n;\n> \n> alter table t set (parallel_workers=4);\n> set debug_parallel_query to on;\n\nI don't think this forces parallel index builds - this GUC only affects\nqueries that go through the regular planner, but index build does not do\nthat, it just scans the table directly.\n\nSo maybe your testing did not actually do any parallel index builds?\nThat might explain why you didn't see any improvements.\n\nMaybe try this to \"force\" parallel index builds:\n\nset min_parallel_table_scan = '64kB';\nset maintenance_work_mem = '256MB';\n\n> set max_parallel_maintenance_workers to 4;\n> \n> create index on t using gin(a);\n> create index on t using gin(b);\n> \n> I found normal_rand_array is handy to use in this case and I\n> register it into https://commitfest.postgresql.org/48/5061/.\n> \n> Besides the above stuff, I didn't find anything wrong in the currrent\n> patch, and the above stuff can be categoried into \"furture improvement\"\n> even it is worthy to. \n> \n\nThanks for the review!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 2 Jul 2024 17:11:52 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On Mon, 24 Jun 2024 at 02:58, Tomas Vondra\n<[email protected]> wrote:\n>\n> Here's a bit more cleaned up version, clarifying a lot of comments,\n> removing a bunch of obsolete comments, or comments speculating about\n> possible solutions, that sort of thing. I've also removed couple more\n> XXX comments, etc.\n>\n> The main change however is that the sorting no longer relies on memcmp()\n> to compare the values. I did that because it was enough for the initial\n> WIP patches, and it worked till now - but the comments explained this\n> may not be a good idea if the data type allows the same value to have\n> multiple binary representations, or something like that.\n>\n> I don't have a practical example to show an issue, but I guess if using\n> memcmp() was safe we'd be doing it in a bunch of places already, and\n> AFAIK we're not. And even if it happened to be OK, this is a probably\n> not the place where to start doing it.\n\nI think one such example would be the values '5.00'::jsonb and\n'5'::jsonb when indexed using GIN's jsonb_ops, though I'm not sure if\nthey're treated as having the same value inside the opclass' ordering.\n\n> So I've switched this to use the regular data-type comparisons, with\n> SortSupport etc. There's a bit more cleanup remaining and testing\n> needed, but I'm not aware of any bugs.\n\nA review of patch 0001:\n\n---\n\n> src/backend/access/gin/gininsert.c | 1449 +++++++++++++++++++-\n\nThe nbtree code has `nbtsort.c` for its sort- and (parallel) build\nstate handling, which is exclusively used during index creation. As\nthe changes here seem to be largely related to bulk insertion, how\nmuch effort would it be to split the bulk insertion code path into a\nseparate file?\n\nI noticed that new fields in GinBuildState do get to have a\nbs_*-prefix, but none of the other added or previous fields of the\nmodified structs in gininsert.c have such prefixes. 
Could this be\nunified?\n\n> +/* Magic numbers for parallel state sharing */\n> +#define PARALLEL_KEY_GIN_SHARED UINT64CONST(0xB000000000000001)\n> ...\n\nThese overlap with BRIN's keys; can we make them unique while we're at it?\n\n> + * mutex protects all fields before heapdesc.\n\nI can't find the field that this `heapdesc` might refer to.\n\n> +_gin_begin_parallel(GinBuildState *buildstate, Relation heap, Relation index,\n> ...\n> + if (!isconcurrent)\n> + snapshot = SnapshotAny;\n> + else\n> + snapshot = RegisterSnapshot(GetTransactionSnapshot());\n\ngrumble: I know this is required from the index with the current APIs,\nbut I'm kind of annoyed that each index AM has to construct the table\nscan and snapshot in their own code. I mean, this shouldn't be\nmeaningfully different across AMs, so every AM implementing this same\ncode makes me feel like we've got the wrong abstraction.\n\nI'm not asking you to change this, but it's one more case where I'm\nannoyed by the state of the system, but not quite enough yet to change\nit.\n\n---\n> +++ b/src/backend/utils/sort/tuplesortvariants.c\n\nI was thinking some more about merging tuples inside the tuplesort. I\nrealized that this could be implemented by allowing buffering of tuple\nwrites in writetup. This would require adding a flush operation at the\nend of mergeonerun to store the final unflushed tuple on the tape, but\nthat shouldn't be too expensive. This buffering, when implemented\nthrough e.g. a GinBuffer in TuplesortPublic->arg, could allow us to\nmerge the TID lists of same-valued GIN tuples while they're getting\nstored and re-sorted, thus reducing the temporary space usage of the\ntuplesort by some amount with limited overhead for other\nnon-deduplicating tuplesorts.\n\nI've not yet spent the time to get this to work though, but I'm fairly\nsure it'd use less temporary space than the current approach with the\n2 tuplesorts, and could have lower overall CPU overhead as well\nbecause the number of sortable items gets reduced much earlier in the\nprocess.\n\n---\n\n> +++ b/src/include/access/gin_tuple.h\n> + typedef struct GinTuple\n\nI think this needs some more care: currently, each GinTuple is at\nleast 36 bytes in size on 64-bit systems. By using int instead of Size\n(no normal indexable tuple can be larger than MaxAllocSize), and\npacking the fields better we can shave off 10 bytes; or 12 bytes if\nGinTuple.keylen is further adjusted to (u)int16: a key needs to fit on\na page, so we can probably safely assume that the key size fits in\n(u)int16.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 3 Jul 2024 20:36:26 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On Wed, 3 Jul 2024 at 20:36, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Mon, 24 Jun 2024 at 02:58, Tomas Vondra\n> <[email protected]> wrote:\n> > So I've switched this to use the regular data-type comparisons, with\n> > SortSupport etc. There's a bit more cleanup remaining and testing\n> > needed, but I'm not aware of any bugs.\n\nI've hit assertion failures in my testing of the combined patches, in\nAssertCheckItemPointers: it assumes it's never called when the buffer\nis empty and uninitialized, but that's wrong: we don't initialize the\nitems array until the first tuple, which will cause the assertion to\nfire. 
By updating the first 2 assertions of AssertCheckItemPointers, I\ncould get it working.\n\n> ---\n> > +++ b/src/backend/utils/sort/tuplesortvariants.c\n>\n> I was thinking some more about merging tuples inside the tuplesort. I\n> realized that this could be implemented by allowing buffering of tuple\n> writes in writetup. This would require adding a flush operation at the\n> end of mergeonerun to store the final unflushed tuple on the tape, but\n> that shouldn't be too expensive. This buffering, when implemented\n> through e.g. a GinBuffer in TuplesortPublic->arg, could allow us to\n> merge the TID lists of same-valued GIN tuples while they're getting\n> stored and re-sorted, thus reducing the temporary space usage of the\n> tuplesort by some amount with limited overhead for other\n> non-deduplicating tuplesorts.\n>\n> I've not yet spent the time to get this to work though, but I'm fairly\n> sure it'd use less temporary space than the current approach with the\n> 2 tuplesorts, and could have lower overall CPU overhead as well\n> because the number of sortable items gets reduced much earlier in the\n> process.\n\nI've now spent some time on this. Attached the original patchset, plus\n2 incremental patches, the first of which implement the above design\n(patch no. 8).\n\nLocal tests show it's significantly faster: for the below test case\nI've seen reindex time reduced from 777455ms to 626217ms, or ~20%\nimprovement.\n\nAfter applying the 'reduce the size of GinTuple' patch, index creation\ntime is down to 551514ms, or about 29% faster total. This all was\ntested with a fresh stock postgres configuration.\n\n\"\"\"\nCREATE UNLOGGED TABLE testdata\nAS SELECT sha256(i::text::bytea)::text\n FROM generate_series(1, 15000000) i;\nCREATE INDEX ON testdata USING gin (sha256 gin_trgm_ops);\n\"\"\"\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Fri, 5 Jul 2024 21:45:31 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "\n\nOn 7/3/24 20:36, Matthias van de Meent wrote:\n> On Mon, 24 Jun 2024 at 02:58, Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> Here's a bit more cleaned up version, clarifying a lot of comments,\n>> removing a bunch of obsolete comments, or comments speculating about\n>> possible solutions, that sort of thing. I've also removed couple more\n>> XXX comments, etc.\n>>\n>> The main change however is that the sorting no longer relies on memcmp()\n>> to compare the values. I did that because it was enough for the initial\n>> WIP patches, and it worked till now - but the comments explained this\n>> may not be a good idea if the data type allows the same value to have\n>> multiple binary representations, or something like that.\n>>\n>> I don't have a practical example to show an issue, but I guess if using\n>> memcmp() was safe we'd be doing it in a bunch of places already, and\n>> AFAIK we're not. And even if it happened to be OK, this is a probably\n>> not the place where to start doing it.\n> \n> I think one such example would be the values '5.00'::jsonb and\n> '5'::jsonb when indexed using GIN's jsonb_ops, though I'm not sure if\n> they're treated as having the same value inside the opclass' ordering.\n> \n\nYeah, possibly. 
But doing the comparison the \"proper\" way seems to be\nworking pretty well, so I don't plan to investigate this further.\n\n>> So I've switched this to use the regular data-type comparisons, with\n>> SortSupport etc. There's a bit more cleanup remaining and testing\n>> needed, but I'm not aware of any bugs.\n> \n> A review of patch 0001:\n> \n> ---\n> \n>> src/backend/access/gin/gininsert.c | 1449 +++++++++++++++++++-\n> \n> The nbtree code has `nbtsort.c` for its sort- and (parallel) build\n> state handling, which is exclusively used during index creation. As\n> the changes here seem to be largely related to bulk insertion, how\n> much effort would it be to split the bulk insertion code path into a\n> separate file?\n> \n\nHmmm. I haven't tried doing that, but I guess it's doable. I assume we'd\nwant to do the move first, because it involves pre-existing code, and\nthen do the patch on top of that.\n\nBut what would be the benefit of doing that? I'm not sure doing it just\nto make it look more like btree code is really worth it. Do you expect\nthe result to be clearer?\n\n> I noticed that new fields in GinBuildState do get to have a\n> bs_*-prefix, but none of the other added or previous fields of the\n> modified structs in gininsert.c have such prefixes. Could this be\n> unified?\n> \n\nYeah, these are inconsistencies from copying the infrastructure code to\nmake the parallel builds work, etc.\n\n>> +/* Magic numbers for parallel state sharing */\n>> +#define PARALLEL_KEY_GIN_SHARED UINT64CONST(0xB000000000000001)\n>> ...\n> \n> These overlap with BRIN's keys; can we make them unique while we're at it?\n> \n\nWe could, and I recall we had a similar discussion in the parallel BRIN\nthread, right?. But I'm somewhat unsure why would we actually want/need\nthese keys to be unique. Surely, we don't need to mix those keys in the\nsingle shm segment, right? So it seems more like an aesthetic thing. Or\nis there some policy to have unique values for these keys?\n\n>> + * mutex protects all fields before heapdesc.\n> \n> I can't find the field that this `heapdesc` might refer to.\n> \n\nYeah, likely a leftover from copying a bunch of code and then removing\nit without updating the comment. Will fix.\n\n>> +_gin_begin_parallel(GinBuildState *buildstate, Relation heap, Relation index,\n>> ...\n>> + if (!isconcurrent)\n>> + snapshot = SnapshotAny;\n>> + else\n>> + snapshot = RegisterSnapshot(GetTransactionSnapshot());\n> \n> grumble: I know this is required from the index with the current APIs,\n> but I'm kind of annoyed that each index AM has to construct the table\n> scan and snapshot in their own code. I mean, this shouldn't be\n> meaningfully different across AMs, so every AM implementing this same\n> code makes me feel like we've got the wrong abstraction.\n> \n> I'm not asking you to change this, but it's one more case where I'm\n> annoyed by the state of the system, but not quite enough yet to change\n> it.\n> \n\nYeah, it's not great, but not something I intend to rework.\n\n> ---\n>> +++ b/src/backend/utils/sort/tuplesortvariants.c\n> \n> I was thinking some more about merging tuples inside the tuplesort. I\n> realized that this could be implemented by allowing buffering of tuple\n> writes in writetup. This would require adding a flush operation at the\n> end of mergeonerun to store the final unflushed tuple on the tape, but\n> that shouldn't be too expensive. This buffering, when implemented\n> through e.g. 
a GinBuffer in TuplesortPublic->arg, could allow us to\n> merge the TID lists of same-valued GIN tuples while they're getting\n> stored and re-sorted, thus reducing the temporary space usage of the\n> tuplesort by some amount with limited overhead for other\n> non-deduplicating tuplesorts.\n> \n> I've not yet spent the time to get this to work though, but I'm fairly\n> sure it'd use less temporary space than the current approach with the\n> 2 tuplesorts, and could have lower overall CPU overhead as well\n> because the number of sortable items gets reduced much earlier in the\n> process.\n> \n\nWill respond to your later message about this.\n\n> ---\n> \n>> +++ b/src/include/access/gin_tuple.h\n>> + typedef struct GinTuple\n> \n> I think this needs some more care: currently, each GinTuple is at\n> least 36 bytes in size on 64-bit systems. By using int instead of Size\n> (no normal indexable tuple can be larger than MaxAllocSize), and\n> packing the fields better we can shave off 10 bytes; or 12 bytes if\n> GinTuple.keylen is further adjusted to (u)int16: a key needs to fit on\n> a page, so we can probably safely assume that the key size fits in\n> (u)int16.\n> \n\nYeah, I guess using int64 is a bit excessive - you're right about that.\nI'm not sure this is necessarily about \"indexable tuples\" (GinTuple is\nnot indexed, it's more an intermediate representation). But if we can\nmake it smaller, that probably can't hurt.\n\nI don't have a great intuition on how beneficial this might be. For\ncases with many TIDs per index key, it probably won't matter much. But\nif there's many keys (so that GinTuples store only very few TIDs), it\nmight make a difference.\n\nI'll try to measure the impact on the same \"realistic\" cases I used for\nthe earlier steps.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 7 Jul 2024 18:10:49 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On 7/5/24 21:45, Matthias van de Meent wrote:\n> On Wed, 3 Jul 2024 at 20:36, Matthias van de Meent\n> <[email protected]> wrote:\n>>\n>> On Mon, 24 Jun 2024 at 02:58, Tomas Vondra\n>> <[email protected]> wrote:\n>>> So I've switched this to use the regular data-type comparisons, with\n>>> SortSupport etc. There's a bit more cleanup remaining and testing\n>>> needed, but I'm not aware of any bugs.\n> \n> I've hit assertion failures in my testing of the combined patches, in\n> AssertCheckItemPointers: it assumes it's never called when the buffer\n> is empty and uninitialized, but that's wrong: we don't initialize the\n> items array until the first tuple, which will cause the assertion to\n> fire. By updating the first 2 assertions of AssertCheckItemPointers, I\n> could get it working.\n> \n\nYeah, sorry. I think I broke this assert while doing the recent\ncleanups. The assert used to be called only when doing a sort, and then\nit certainly can't be empty. But then I moved it to be called from the\ngeneric GinTuple assert function, and that broke this assumption.\n\n>> ---\n>>> +++ b/src/backend/utils/sort/tuplesortvariants.c\n>>\n>> I was thinking some more about merging tuples inside the tuplesort. I\n>> realized that this could be implemented by allowing buffering of tuple\n>> writes in writetup. 
This would require adding a flush operation at the\n>> end of mergeonerun to store the final unflushed tuple on the tape, but\n>> that shouldn't be too expensive. This buffering, when implemented\n>> through e.g. a GinBuffer in TuplesortPublic->arg, could allow us to\n>> merge the TID lists of same-valued GIN tuples while they're getting\n>> stored and re-sorted, thus reducing the temporary space usage of the\n>> tuplesort by some amount with limited overhead for other\n>> non-deduplicating tuplesorts.\n>>\n>> I've not yet spent the time to get this to work though, but I'm fairly\n>> sure it'd use less temporary space than the current approach with the\n>> 2 tuplesorts, and could have lower overall CPU overhead as well\n>> because the number of sortable items gets reduced much earlier in the\n>> process.\n> \n> I've now spent some time on this. Attached the original patchset, plus\n> 2 incremental patches, the first of which implement the above design\n> (patch no. 8).\n> \n> Local tests show it's significantly faster: for the below test case\n> I've seen reindex time reduced from 777455ms to 626217ms, or ~20%\n> improvement.\n> \n> After applying the 'reduce the size of GinTuple' patch, index creation\n> time is down to 551514ms, or about 29% faster total. This all was\n> tested with a fresh stock postgres configuration.\n> \n> \"\"\"\n> CREATE UNLOGGED TABLE testdata\n> AS SELECT sha256(i::text::bytea)::text\n> FROM generate_series(1, 15000000) i;\n> CREATE INDEX ON testdata USING gin (sha256 gin_trgm_ops);\n> \"\"\"\n> \n\nThose results look really promising. I certainly would not have expected\nsuch improvements - 20-30% speedup on top of the already parallel run\nseems great. I'll do a bit more testing to see how much this helps for\nthe \"more realistic\" data sets.\n\nI can't say I 100% understand how the new stuff in tuplesortvariants.c\nworks, but that's mostly because my knowledge of tuplesort internals is\nfairly limited (and I managed to survive without that until now).\n\nWhat makes me a bit unsure/uneasy is that this seems to \"inject\" custom\ncode fairly deep into the tuplesort logic. I'm not quite sure if this is\na good idea ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 7 Jul 2024 18:26:03 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On Sun, 7 Jul 2024, 18:11 Tomas Vondra, <[email protected]> wrote:\n>\n> On 7/3/24 20:36, Matthias van de Meent wrote:\n>> On Mon, 24 Jun 2024 at 02:58, Tomas Vondra\n>> <[email protected]> wrote:\n>>> So I've switched this to use the regular data-type comparisons, with\n>>> SortSupport etc. There's a bit more cleanup remaining and testing\n>>> needed, but I'm not aware of any bugs.\n>>\n>> A review of patch 0001:\n>>\n>> ---\n>>\n>>> src/backend/access/gin/gininsert.c | 1449 +++++++++++++++++++-\n>>\n>> The nbtree code has `nbtsort.c` for its sort- and (parallel) build\n>> state handling, which is exclusively used during index creation. As\n>> the changes here seem to be largely related to bulk insertion, how\n>> much effort would it be to split the bulk insertion code path into a\n>> separate file?\n>>\n>\n> Hmmm. I haven't tried doing that, but I guess it's doable. 
I assume we'd\n> want to do the move first, because it involves pre-existing code, and\n> then do the patch on top of that.\n>\n> But what would be the benefit of doing that? I'm not sure doing it just\n> to make it look more like btree code is really worth it. Do you expect\n> the result to be clearer?\n\nIt was mostly a consideration of file size and separation of concerns.\nThe sorted build path is quite different from the unsorted build,\nafter all.\n\n>> I noticed that new fields in GinBuildState do get to have a\n>> bs_*-prefix, but none of the other added or previous fields of the\n>> modified structs in gininsert.c have such prefixes. Could this be\n>> unified?\n>>\n>\n> Yeah, these are inconsistencies from copying the infrastructure code to\n> make the parallel builds work, etc.\n>\n>>> +/* Magic numbers for parallel state sharing */\n>>> +#define PARALLEL_KEY_GIN_SHARED UINT64CONST(0xB000000000000001)\n>>> ...\n>>\n>> These overlap with BRIN's keys; can we make them unique while we're at it?\n>>\n>\n> We could, and I recall we had a similar discussion in the parallel BRIN\n> thread, right?. But I'm somewhat unsure why would we actually want/need\n> these keys to be unique. Surely, we don't need to mix those keys in the\n> single shm segment, right? So it seems more like an aesthetic thing. Or\n> is there some policy to have unique values for these keys?\n\nUniqueness would be mostly useful for debugging shared memory issues,\nbut indeed, in a correctly working system we wouldn't have to worry\nabout parallel state key type confusion.\n\n>> ---\n>>\n>>> +++ b/src/include/access/gin_tuple.h\n>>> + typedef struct GinTuple\n>>\n>> I think this needs some more care: currently, each GinTuple is at\n>> least 36 bytes in size on 64-bit systems. By using int instead of Size\n>> (no normal indexable tuple can be larger than MaxAllocSize), and\n>> packing the fields better we can shave off 10 bytes; or 12 bytes if\n>> GinTuple.keylen is further adjusted to (u)int16: a key needs to fit on\n>> a page, so we can probably safely assume that the key size fits in\n>> (u)int16.\n>\n> Yeah, I guess using int64 is a bit excessive - you're right about that.\n> I'm not sure this is necessarily about \"indexable tuples\" (GinTuple is\n> not indexed, it's more an intermediate representation).\n\nYes, but even if the GinTuple itself isn't stored on disk in the\nindex, a GinTuple's key *is* part of the the primary GIN btree's keys\nsomewhere down the line, and thus must fit on a page somewhere. That's\nwhat I was referring to.\n\n> But if we can make it smaller, that probably can't hurt.\n>\n> I don't have a great intuition on how beneficial this might be. For\n> cases with many TIDs per index key, it probably won't matter much. But\n> if there's many keys (so that GinTuples store only very few TIDs), it\n> might make a difference.\n\nThis will indeed matter most when small TID lists are common. 
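To make that concrete, the sort of packed layout I have in mind looks roughly like the sketch below. Field names and ordering are purely illustrative (which fields actually have to stay is a separate discussion); the point is just that none of the length fields needs a full Size:

/*
 * Illustrative sketch only -- not the exact struct from the patch.
 * All lengths are bounded by the page size / MaxAllocSize, so narrow
 * integer types are enough.
 */
typedef struct GinTuple
{
	int			tuplen;		/* total tuple length; fits in int */
	OffsetNumber	attrnum;	/* index attribute number */
	uint16		keylen;		/* key length; a key must fit on a page */
	int16		typlen;		/* typlen of the key type */
	bool		typbyval;	/* typbyval of the key type */
	signed char	category;	/* normal key, NULL key, ... */
	int			nitems;		/* number of TIDs in data */
	char		data[FLEXIBLE_ARRAY_MEMBER];	/* key bytes + TIDs */
} GinTuple;

How much we actually shave off depends on which of the current fields have to stay, but the 8-byte length fields are clearly the low-hanging fruit.
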
I\nsuspect it's not uncommon to find us such a in situation when users\nhave a low maintenance_work_mem (and thus don't have much space to\nbuffer and combine index tuples before they're flushed), or when the\ngenerated index keys can't be store in the available memory (such as\nin my example; it didn't tune m_w_m at all, and the table I tested had\n~15GB of data).\n\n> I'll try to measure the impact on the same \"realistic\" cases I used for\n> the earlier steps.\n\nThat would be greatly appreciated, thanks!\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 8 Jul 2024 10:05:38 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On Sun, 7 Jul 2024, 18:26 Tomas Vondra, <[email protected]> wrote:\n>\n> On 7/5/24 21:45, Matthias van de Meent wrote:\n>> On Wed, 3 Jul 2024 at 20:36, Matthias van de Meent\n>> <[email protected]> wrote:\n>>>\n>>> On Mon, 24 Jun 2024 at 02:58, Tomas Vondra\n>>> <[email protected]> wrote:\n>>>> So I've switched this to use the regular data-type comparisons, with\n>>>> SortSupport etc. There's a bit more cleanup remaining and testing\n>>>> needed, but I'm not aware of any bugs.\n>>> ---\n>>>> +++ b/src/backend/utils/sort/tuplesortvariants.c\n>>>\n>>> I was thinking some more about merging tuples inside the tuplesort. I\n>>> realized that this could be implemented by allowing buffering of tuple\n>>> writes in writetup. This would require adding a flush operation at the\n>>> end of mergeonerun to store the final unflushed tuple on the tape, but\n>>> that shouldn't be too expensive. This buffering, when implemented\n>>> through e.g. a GinBuffer in TuplesortPublic->arg, could allow us to\n>>> merge the TID lists of same-valued GIN tuples while they're getting\n>>> stored and re-sorted, thus reducing the temporary space usage of the\n>>> tuplesort by some amount with limited overhead for other\n>>> non-deduplicating tuplesorts.\n>>>\n>>> I've not yet spent the time to get this to work though, but I'm fairly\n>>> sure it'd use less temporary space than the current approach with the\n>>> 2 tuplesorts, and could have lower overall CPU overhead as well\n>>> because the number of sortable items gets reduced much earlier in the\n>>> process.\n>>\n>> I've now spent some time on this. Attached the original patchset, plus\n>> 2 incremental patches, the first of which implement the above design\n>> (patch no. 8).\n>>\n>> Local tests show it's significantly faster: for the below test case\n>> I've seen reindex time reduced from 777455ms to 626217ms, or ~20%\n>> improvement.\n>>\n>> After applying the 'reduce the size of GinTuple' patch, index creation\n>> time is down to 551514ms, or about 29% faster total. This all was\n>> tested with a fresh stock postgres configuration.\n>>\n>> \"\"\"\n>> CREATE UNLOGGED TABLE testdata\n>> AS SELECT sha256(i::text::bytea)::text\n>> FROM generate_series(1, 15000000) i;\n>> CREATE INDEX ON testdata USING gin (sha256 gin_trgm_ops);\n>> \"\"\"\n>>\n>\n> Those results look really promising. I certainly would not have expected\n> such improvements - 20-30% speedup on top of the already parallel run\n> seems great. 
I'll do a bit more testing to see how much this helps for\n> the \"more realistic\" data sets.\n>\n> I can't say I 100% understand how the new stuff in tuplesortvariants.c\n> works, but that's mostly because my knowledge of tuplesort internals is\n> fairly limited (and I managed to survive without that until now).\n>\n> What makes me a bit unsure/uneasy is that this seems to \"inject\" custom\n> code fairly deep into the tuplesort logic. I'm not quite sure if this is\n> a good idea ...\n\nI thought it was still fairly high-level: it adds (what seems to me) a\nnatural extension to the pre-existing \"write a tuple to the tape\" API,\nallowing the Tuplesort (TS) implementation to further optimize its\nordered tape writes through buffering (and combining) of tuple writes.\nWhile it does remove the current 1:1 relation of TS tape writes to\ntape reads for the GIN case, there is AFAIK no code in TS that relies\non that 1:1 relation.\n\nAs to the GIN TS code itself; yes it's more complicated, mainly\nbecause it uses several optimizations to reduce unnecessary\nallocations and (de)serializations of GinTuples, and I'm aware of even\nmore such optimizations that can be added at some point.\n\nAs an example: I suspect the use of GinBuffer in writetup_index_gin to\nbe a significant resource drain in my patch because it lacks\n\"freezing\" in the tuplesort buffer. When no duplicate key values are\nencountered, the code is nearly optimal (except for a full tuple copy\nto get the data into the GinBuffer's memory context), but if more than\none GinTuple has the same key in the merge phase we deserialize both\ntuple's posting lists and merge the two. I suspect that merge to be\nmore expensive than operating on the compressed posting lists of the\nGinTuples themselves, so that's something I think could be improved. I\nsuspect/guess it could save another 10% in select cases, and will\ndefinitely reduce the memory footprint of the buffer.\nAnother thing that can be optimized is the current approach of\ninserting data into the index: I think it's kind of wasteful to\ndecompress and later re-compress the posting lists once we start\nstoring the tuples on disk.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 8 Jul 2024 11:45:55 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "\n\nOn 7/8/24 11:45, Matthias van de Meent wrote:\n> On Sun, 7 Jul 2024, 18:26 Tomas Vondra, <[email protected]> wrote:\n>>\n>> On 7/5/24 21:45, Matthias van de Meent wrote:\n>>> On Wed, 3 Jul 2024 at 20:36, Matthias van de Meent\n>>> <[email protected]> wrote:\n>>>>\n>>>> On Mon, 24 Jun 2024 at 02:58, Tomas Vondra\n>>>> <[email protected]> wrote:\n>>>>> So I've switched this to use the regular data-type comparisons, with\n>>>>> SortSupport etc. There's a bit more cleanup remaining and testing\n>>>>> needed, but I'm not aware of any bugs.\n>>>> ---\n>>>>> +++ b/src/backend/utils/sort/tuplesortvariants.c\n>>>>\n>>>> I was thinking some more about merging tuples inside the tuplesort. I\n>>>> realized that this could be implemented by allowing buffering of tuple\n>>>> writes in writetup. This would require adding a flush operation at the\n>>>> end of mergeonerun to store the final unflushed tuple on the tape, but\n>>>> that shouldn't be too expensive. This buffering, when implemented\n>>>> through e.g. 
a GinBuffer in TuplesortPublic->arg, could allow us to\n>>>> merge the TID lists of same-valued GIN tuples while they're getting\n>>>> stored and re-sorted, thus reducing the temporary space usage of the\n>>>> tuplesort by some amount with limited overhead for other\n>>>> non-deduplicating tuplesorts.\n>>>>\n>>>> I've not yet spent the time to get this to work though, but I'm fairly\n>>>> sure it'd use less temporary space than the current approach with the\n>>>> 2 tuplesorts, and could have lower overall CPU overhead as well\n>>>> because the number of sortable items gets reduced much earlier in the\n>>>> process.\n>>>\n>>> I've now spent some time on this. Attached the original patchset, plus\n>>> 2 incremental patches, the first of which implement the above design\n>>> (patch no. 8).\n>>>\n>>> Local tests show it's significantly faster: for the below test case\n>>> I've seen reindex time reduced from 777455ms to 626217ms, or ~20%\n>>> improvement.\n>>>\n>>> After applying the 'reduce the size of GinTuple' patch, index creation\n>>> time is down to 551514ms, or about 29% faster total. This all was\n>>> tested with a fresh stock postgres configuration.\n>>>\n>>> \"\"\"\n>>> CREATE UNLOGGED TABLE testdata\n>>> AS SELECT sha256(i::text::bytea)::text\n>>> FROM generate_series(1, 15000000) i;\n>>> CREATE INDEX ON testdata USING gin (sha256 gin_trgm_ops);\n>>> \"\"\"\n>>>\n>>\n>> Those results look really promising. I certainly would not have expected\n>> such improvements - 20-30% speedup on top of the already parallel run\n>> seems great. I'll do a bit more testing to see how much this helps for\n>> the \"more realistic\" data sets.\n>>\n>> I can't say I 100% understand how the new stuff in tuplesortvariants.c\n>> works, but that's mostly because my knowledge of tuplesort internals is\n>> fairly limited (and I managed to survive without that until now).\n>>\n>> What makes me a bit unsure/uneasy is that this seems to \"inject\" custom\n>> code fairly deep into the tuplesort logic. I'm not quite sure if this is\n>> a good idea ...\n> \n> I thought it was still fairly high-level: it adds (what seems to me) a\n> natural extension to the pre-existing \"write a tuple to the tape\" API,\n> allowing the Tuplesort (TS) implementation to further optimize its\n> ordered tape writes through buffering (and combining) of tuple writes.\n> While it does remove the current 1:1 relation of TS tape writes to\n> tape reads for the GIN case, there is AFAIK no code in TS that relies\n> on that 1:1 relation.\n> \n> As to the GIN TS code itself; yes it's more complicated, mainly\n> because it uses several optimizations to reduce unnecessary\n> allocations and (de)serializations of GinTuples, and I'm aware of even\n> more such optimizations that can be added at some point.\n> \n> As an example: I suspect the use of GinBuffer in writetup_index_gin to\n> be a significant resource drain in my patch because it lacks\n> \"freezing\" in the tuplesort buffer. When no duplicate key values are\n> encountered, the code is nearly optimal (except for a full tuple copy\n> to get the data into the GinBuffer's memory context), but if more than\n> one GinTuple has the same key in the merge phase we deserialize both\n> tuple's posting lists and merge the two. I suspect that merge to be\n> more expensive than operating on the compressed posting lists of the\n> GinTuples themselves, so that's something I think could be improved. 
I\n> suspect/guess it could save another 10% in select cases, and will\n> definitely reduce the memory footprint of the buffer.\n> Another thing that can be optimized is the current approach of\n> inserting data into the index: I think it's kind of wasteful to\n> decompress and later re-compress the posting lists once we start\n> storing the tuples on disk.\n> \n\nI need to experiment with this a bit more, to better understand the\nbehavior and pros/cons. But one thing that's not clear to me is why\nwould this be better than simply increasing the amount of memory for the\ninitial BuildAccumulator buffer ...\n\nWouldn't that have pretty much the same effect?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 8 Jul 2024 13:38:50 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On Mon, 8 Jul 2024, 13:38 Tomas Vondra, <[email protected]> wrote:\n>\n> On 7/8/24 11:45, Matthias van de Meent wrote:\n> > As to the GIN TS code itself; yes it's more complicated, mainly\n> > because it uses several optimizations to reduce unnecessary\n> > allocations and (de)serializations of GinTuples, and I'm aware of even\n> > more such optimizations that can be added at some point.\n> >\n> > As an example: I suspect the use of GinBuffer in writetup_index_gin to\n> > be a significant resource drain in my patch because it lacks\n> > \"freezing\" in the tuplesort buffer. When no duplicate key values are\n> > encountered, the code is nearly optimal (except for a full tuple copy\n> > to get the data into the GinBuffer's memory context), but if more than\n> > one GinTuple has the same key in the merge phase we deserialize both\n> > tuple's posting lists and merge the two. I suspect that merge to be\n> > more expensive than operating on the compressed posting lists of the\n> > GinTuples themselves, so that's something I think could be improved. I\n> > suspect/guess it could save another 10% in select cases, and will\n> > definitely reduce the memory footprint of the buffer.\n> > Another thing that can be optimized is the current approach of\n> > inserting data into the index: I think it's kind of wasteful to\n> > decompress and later re-compress the posting lists once we start\n> > storing the tuples on disk.\n> >\n>\n> I need to experiment with this a bit more, to better understand the\n> behavior and pros/cons. But one thing that's not clear to me is why\n> would this be better than simply increasing the amount of memory for the\n> initial BuildAccumulator buffer ...\n>\n> Wouldn't that have pretty much the same effect?\n\nI don't think so:\n\nThe BuildAccumulator buffer will probably never be guaranteed to have\nspace for all index entries, though it does use memory more\nefficiently than Tuplesort. Therefore, it will likely have to flush\nkeys multiple times into sorted runs, with likely duplicate keys\nexisting in the tuplesort.\n\nMy patch 0008 targets the reduction of IO and CPU once\nBuildAccumulator has exceeded its memory limits. It reduces the IO and\ncomputational requirement of Tuplesort's sorted-run merging by merging\nthe tuples in those sorted runs in that merge process, reducing the\nnumber of tuples, bytes stored, and number of compares required in\nlater operations. 
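In rough pseudo-code, the buffered write path looks like the sketch below. The GinBuffer* helpers are placeholders for whatever the patch ends up calling them (assume they handle an initially empty buffer sanely); the Tuplesort-facing parts are just the existing writetup hook plus the extra flush call at the end of mergeonerun:

static void
writetup_index_gin(Tuplesortstate *state, LogicalTape *tape, SortTuple *stup)
{
	TuplesortPublic *base = TuplesortstateGetPublic(state);
	GinBuffer  *buffer = (GinBuffer *) base->arg;
	GinTuple   *tup = (GinTuple *) stup->tuple;

	/* same key as the buffered tuple: just merge the TID lists */
	if (GinBufferKeyEquals(buffer, tup))
	{
		GinBufferMergeTuple(buffer, tup);
		return;
	}

	/* key changed: write out the merged tuple we have, buffer the new one */
	GinBufferFlushToTape(buffer, tape);
	GinBufferStoreTuple(buffer, tup);
}

/* extra flush at the end of mergeonerun, so the last buffered tuple isn't lost */
static void
flushwrites_index_gin(Tuplesortstate *state, LogicalTape *tape)
{
	TuplesortPublic *base = TuplesortstateGetPublic(state);

	GinBufferFlushToTape((GinBuffer *) base->arg, tape);
}

Non-deduplicating tuplesorts simply keep writing tuples one by one and never flush a buffer, so they don't pay for any of this.
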
It enables some BuildAccumulator-like behaviour\ninside the tuplesort, without needing to assign huge amounts of memory\nto the BuildAccumulator by allowing efficient spilling to disk of the\nincomplete index data. And, during merging, it can work with\nO(n_merge_tapes * tupsz) of memory, rather than O(n_distinct_keys *\ntupsz). This doesn't make BuildAccumulator totally useless, but it\ndoes reduce the relative impact of assigning more memory.\n\nOne significant difference between the modified Tuplesort and\nBuildAccumulator is that the modified Tuplesort only merges the\nentries once they're getting written, i.e. flushed from the in-memory\nstructure; while BuildAccumulator merges entries as they're being\nadded to the in-memory structure.\n\nNote that this difference causes BuildAccumulator to use memory more\nefficiently during in-memory workloads (it doesn't duplicate keys in\nmemory), but as BuildAccumulator doesn't have spilling it doesn't\nhandle full indexes' worth of data (it does duplciate keys on disk).\n\nI hope this clarifies things a bit. I'd be thrilled if we'd be able to\nput BuildAccumulator-like behaviour into the in-memory portion of\nTuplesort, but that'd require a significantly deeper understanding of\nthe Tuplesort internals than what I currently have, especially in the\narea of its memory management.\n\n\nKind regards\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 8 Jul 2024 17:35:42 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "Andy Fan <[email protected]> writes:\n\nI just realize all my replies is replied to sender only recently,\nprobably because I upgraded the email cient and the short-cut changed\nsliently, resent the lastest one only.... \n\n>>> Suppose RBTree's output is:\n>>> \n>>> batch-1 at RBTree:\n>>> 1 [tid1, tid8, tid100]\n>>> 2 [tid1, tid9, tid800]\n>>> ...\n>>> 78 [tid23, tid99, tid800]\n>>> \n>>> batch-2 at RBTree\n>>> 1 [tid1001, tid1203, tid1991]\n>>> ...\n>>> ...\n>>> 97 [tid1023, tid1099, tid1800]\n>>> \n>>> Since all the tuples in each batch (1, 2, .. 78) are sorted already, we\n>>> can just flush them into tuplesort as a 'run' *without any sorts*,\n>>> however within this way, it is possible to produce more 'runs' than what\n>>> you did in your patch.\n>>> \n>>\n>> Oh! Now I think I understand what you were proposing - you're saying\n>> that when dumping the RBTree to the tuplesort, we could tell the\n>> tuplesort this batch of tuples is already sorted, and tuplesort might\n>> skip some of the work when doing the sort later.\n>>\n>> I guess that's true, and maybe it'd be useful elsewhere, I still think\n>> this could be left as a future improvement. Allowing it seems far from\n>> trivial, and it's not quite clear if it'd be a win (it might interfere\n>> with the existing sort code in unexpected ways).\n>\n> Yes, and I agree that can be done later and I'm thinking Matthias's\n> proposal is more promising now. \n>\n>>> new way: the No. of batch depends on size of RBTree's batch size.\n>>> existing way: the No. of batch depends on size of work_mem in tuplesort.\n>>> Usually the new way would cause more no. of runs which is harmful for\n>>> mergeruns. so I can't say it is an improve of not and not include it in\n>>> my previous patch. 
\n>>> \n>>> however case 1 sounds a good canidiates for this method.\n>>> \n>>> Tuples from state->bs_worker_state after the perform_sort and ctid\n>>> merge: \n>>> \n>>> 1 [tid1, tid8, tid100, tid1001, tid1203, tid1991]\n>>> 2 [tid1, tid9, tid800]\n>>> 78 [tid23, tid99, tid800]\n>>> 97 [tid1023, tid1099, tid1800]\n>>> \n>>> then when we move tuples to bs_sort_state, a). we don't need to sort at\n>>> all. b). we can merge all of them into 1 run which is good for mergerun\n>>> on leader as well. That's the thing I did in the previous patch.\n>>> \n>>\n>> I'm sorry, I don't understand what you're proposing. Could you maybe\n>> elaborate in more detail?\n>\n> After we called \"tuplesort_performsort(state->bs_worker_sort);\" in\n> _gin_process_worker_data, all the tuples in bs_worker_sort are sorted\n> already, and in the same function _gin_process_worker_data, we have\n> code:\n>\n> while ((tup = tuplesort_getgintuple(worker_sort, &tuplen, true)) != NULL)\n> {\n>\n> ....(1)\n> \n> tuplesort_putgintuple(state->bs_sortstate, ntup, ntuplen);\n>\n> }\n>\n> and later we called 'tuplesort_performsort(state->bs_sortstate);'. Even\n> we have some CTID merges activity in '....(1)', the tuples are still\n> ordered, so the sort (in both tuplesort_putgintuple and\n> 'tuplesort_performsort) are not necessary, what's more, in the each of\n> 'flush-memory-to-disk' in tuplesort, it create a 'sorted-run', and in\n> this case, acutally we only need 1 run only since all the input tuples\n> in the worker is sorted. The reduction of 'sort-runs' in worker will be\n> helpful to leader's final mergeruns. the 'sorted-run' benefit doesn't\n> exist for the case-1 (RBTree -> worker_state). \n>\n> If Matthias's proposal is adopted, my optimization will not be useful\n> anymore and Matthias's porposal looks like a more natural and effecient\n> way.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 09 Jul 2024 09:18:11 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "Hi,\n\nI got to do the detailed benchmarking on the latest version of the patch\nseries, so here's the results. My goal was to better understand the\nimpact of each patch individually - especially the two parts introduced\nby Matthias, but not only - so I ran the test on a build with each fo\nthe 0001-0009 patches.\n\nThis is the same test I did at the very beginning, but the basic details\nare that I have a 22GB table with archives of our mailing lists (1.6M\nmessages, roughly), and I build a couple different GIN indexes on that:\n\ncreate index trgm on messages using gin (msg_body gin_trgm_ops);\ncreate index tsvector on messages using gin (msg_body_tsvector);\ncreate index jsonb on messages using gin (msg_headers);\ncreate index jsonb_hash on messages using gin (msg_headers jsonb_path_ops);\n\nThe indexes are 700MB-3GB, so not huge, but also not tiny. 
I did the\ntest with a varying number of parallel workers for each patch, measuring\nthe execution time and a couple more metrics (using pg_stat_statements).\nSee the attached scripts for details, and also conf/results from the two\nmachines I use for these tests.\n\nAttached is also a PDF with a summary of the tests - there are four\nsections with results in total, two for each machine with different\nwork_mem values (more on this later).\n\nFor each configuration, there are tables/charts for three metrics:\n\n- total CREATE INDEX duration\n- relative CREATE INDEX duration (relative to serial build)\n- amount of temporary files written\n\nHopefully it's easy to understand/interpret, but feel free to ask.\nThere's also CSVs with raw results, in case you choose to do your own\nanalysis (there's more metrics than presented here).\n\nWhile doing these tests, I realized there's a bug in how the patches\nhandle collations - it simply grabbed the value for the indexed column,\nbut if that's missing (e.g. for tsvector), it fell over. Instead the\npatch needs to use the default collation, so that's fixed in 0001.\n\nThe other thing I realized while working on this is that it's probably\nwrong to tie parallel callback to work_mem - both conceptually, but also\nfor performance reasons. I did the first run with the default work_mem\n(4MB), and that showed some serious regressions with the 0002 patch\n(where it took ~3.5x longer than serial build). It seemed to be due to a\nlot of merges of small TID lists, so I tried re-running the tests with\nwork_mem=32MB, and the regression pretty much disappeared.\n\nAlso, with 4MB there were almost no benefits of parallelism on the\nsmaller indexes (jsonb and jsonb_hash) - that's probably not unexpected,\nbut 32MB did improve that a little bit (still not great, though).\n\nIn practice this would not be a huge issue, because the later patches\nmake the regression go away - so unless we commit only the first couple\npatches, the users would not be affected by this. But it's annoying, and\nmore importantly it's a bit bogus to use work_mem here - why should that\nbe appropriate? It was more a temporary hack because I didn't have a\nbetter idea, and the comment in ginBuildCallbackParallel() questions\nthis too, after all.\n\nMy plan is to derive this from maintenance_work_mem, or rather the\nfraction we \"allocate\" for each worker. The planner logic caps the\nnumber of workers to maintenance_work_mem / 32MB, which means each\nworker has >=32MB of maintenance_work_mem at it's disposal. The worker\nneeds to do the BuildAccumulator thing, and also the tuplesort. So it\nseems reasonable to use 1/2 of the budget (>=16MB) for each of those.\nWhich seems good enough, IMHO. It's significantly more than 4MB, and the\n32MB I used for the second round was rather arbitrary.\n\nSo for further discussion, let's focus on results in the two sections\nfor 32MB ...\n\nAnd let's talk about the improvement by Matthias, namely:\n\n* 0008 Use a single GIN tuplesort\n* 0009 Reduce the size of GinTuple by 12 bytes\n\nI haven't really seen any impact on duration - it seems more or less\nwithin noise. Maybe it would be different on machines with less RAM, but\non my two systems it didn't really make a difference.\n\nIt did significantly reduce the amount of temporary data written, by\n~40% or so. This is pretty nicely visible on the \"trgm\" case, which\ngenerates the most temp files of the four indexes. 
An example from the\ni5/32MB section looks like this:\n\nlabel 0000 0001 0002 0003 0004 0005 0006 0007 0008 0009 0010\n------------------------------------------------------------------------\ntrgm / 3 0 2635 3690 3715 1177 1177 1179 1179 696 682 1016\n\nSo we start with patches producing 2.6GB - 3.7GB of temp files. Then the\ncompression of TID lists cuts that down to ~1.2GB, and the 0008 patch\ncuts that to just 700MB. That's pretty nice, even if it doesn't speed\nthings up. The 0009 (GinTuple reduction) improves that a little bit, but\nthe difference is smaller.\n\nI'm still a bit unsure about the tuplesort changes, but producing less\ntemporary files seems like a good thing.\n\n\nNow, what's the 0010 patch about?\n\nFor some indexes (e.g. trgm), the parallel builds help a lot, because\nthey produce a lot of temporary data and the parallel sort is a\nsubstantial part of the work. But for other indexes (especially the\n\"smaller\" indexes on jsonb headers), it's not that great. For example\nfor \"jsonb\", having 3 workers shaves off only ~25% of the time, not 75%.\n\nClearly, this happens because a lot of time is spent outside the sort,\nactually inserting data into the index. So I was wondering if we might\nparallelize that too, and how much time would it save - 0010 is an\nexperimental patch doing that. It splits the processing into 3 phases:\n\n1. workers feeding data into tuplesort\n2. leader finishes sort and \"repartitions\" the data\n3. workers inserting their partition into index\n\nThe patch is far from perfect (more a PoC) - it implements these phases\nby introducing a barrier to coordinate the processes. Workers feed the\ndata into the tuplesort as now, but instead of terminating they wait on\na barrier.\n\nThe leader reads data from the tuplesort, and partitions them evenly\ninto the a SharedFileSet with one file per worker. And then wakes up the\nworkers through the barrier again, and they do the inserts.\n\nThis does help a little bit, reducing the duration by ~15-25%. I wonder\nif this might be improved by partitioning the data differently - not by\nshuffling everything from the tuplesort into fileset (it increases the\namount of temporary data in the charts). And also by by distributing the\ndata differently - right now it's a bit of a round robin, because it\nwasn't clear we know how many entries are there.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 12 Jul 2024 17:34:25 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On Tue, 9 Jul 2024 at 03:18, Andy Fan <[email protected]> wrote:\n>> and later we called 'tuplesort_performsort(state->bs_sortstate);'. Even\n>> we have some CTID merges activity in '....(1)', the tuples are still\n>> ordered, so the sort (in both tuplesort_putgintuple and\n>> 'tuplesort_performsort) are not necessary, what's more, in the each of\n>> 'flush-memory-to-disk' in tuplesort, it create a 'sorted-run', and in\n>> this case, acutally we only need 1 run only since all the input tuples\n>> in the worker is sorted. The reduction of 'sort-runs' in worker will be\n>> helpful to leader's final mergeruns. 
the 'sorted-run' benefit doesn't\n>> exist for the case-1 (RBTree -> worker_state).\n>>\n>> If Matthias's proposal is adopted, my optimization will not be useful\n>> anymore and Matthias's porposal looks like a more natural and effecient\n>> way.\n\nI think they might be complementary. I don't think it's reasonable to\nexpect GIN's BuildAccumulator to buffer all the index tuples at the\nsame time (as I mentioned upthread: we are or should be limited by\nwork memory), but the BuildAccumulator will do a much better job at\ncombining tuples than the in-memory sort + merge-write done by\nTuplesort (because BA will use (much?) less memory for the same number\nof stored values). So, the idea of making BuildAccumulator responsible\nfor providing the initial sorted runs does resonate with me, and can\nalso be worth pursuing.\n\nI think it would indeed save time otherwise spent comparing if tuples\ncan be merged before they're first spilled to disk, when we already\nhave knowledge about which tuples are a sorted run. Afterwards, only\nthe phases where we merge sorted runs from disk would require my\nbuffered write approach that merges Gin tuples.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 2 Aug 2024 17:37:17 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n\n> Hi,\n>\n> I got to do the detailed benchmarking on the latest version of the patch\n> series, so here's the results. My goal was to better understand the\n> impact of each patch individually - especially the two parts introduced\n> by Matthias, but not only - so I ran the test on a build with each fo\n> the 0001-0009 patches.\n>\n> This is the same test I did at the very beginning, but the basic details\n> are that I have a 22GB table with archives of our mailing lists (1.6M\n> messages, roughly), and I build a couple different GIN indexes on\n> that:\n..\n>\n\nVery impresive testing!\n\n> And let's talk about the improvement by Matthias, namely:\n>\n> * 0008 Use a single GIN tuplesort\n> * 0009 Reduce the size of GinTuple by 12 bytes\n>\n> I haven't really seen any impact on duration - it seems more or less\n> within noise. Maybe it would be different on machines with less RAM, but\n> on my two systems it didn't really make a difference.\n>\n> It did significantly reduce the amount of temporary data written, by\n> ~40% or so. This is pretty nicely visible on the \"trgm\" case, which\n> generates the most temp files of the four indexes. An example from the\n> i5/32MB section looks like this:\n>\n> label 0000 0001 0002 0003 0004 0005 0006 0007 0008 0009 0010\n> ------------------------------------------------------------------------\n> trgm / 3 0 2635 3690 3715 1177 1177 1179 1179 696 682\n> 1016\n\nAfter seeing the above data, I want to know where the time is spent and\nwhy the ~40% IO doesn't make a measurable duration improvement. 
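To see where the time goes, I added some throwaway timing around the three stages in the worker, roughly like this (not meant for commit):

	instr_time	start,
				duration;

	INSTR_TIME_SET_CURRENT(start);

	/* stage 1: heap scan plus flushing the accumulated entries */
	reltuples = table_index_build_scan(heap, index, indexInfo, true, progress,
									   ginBuildCallbackParallel, state, scan);
	ginFlushBuildState(state, index);

	INSTR_TIME_SET_CURRENT(duration);
	INSTR_TIME_SUBTRACT(duration, start);
	elog(INFO, "pid: %d, stage 1 took %ld ms",
		 MyProcPid, (long) INSTR_TIME_GET_MILLISEC(duration));

	/* same pattern around stage 2 (_gin_process_worker_data) and
	 * stage 3 (tuplesort_performsort(state->bs_sortstate)) */
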
then I\ndid the following test.\n\ncreate table gin_t (a int[]);\ninsert into gin_t select * from rand_array(30000000, 0, 100, 0, 50); [1]\nselect pg_prewarm('gin_t');\n\npostgres=# create index on gin_t using gin(a);\nINFO: pid: 145078, stage 1 took 44476 ms\nINFO: pid: 145177, stage 1 took 44474 ms\nINFO: pid: 145078, stage 2 took 2662 ms\nINFO: pid: 145177, stage 2 took 2611 ms\nINFO: pid: 145177, stage 3 took 240 ms\nINFO: pid: 145078, stage 3 took 239 ms\n\nCREATE INDEX\nTime: 79472.135 ms (01:19.472)\n\nThen we can see stage 1 take 56% execution time. stage 2 + stage 3 take\n3% execution time. and the leader's work takes the rest 41% execution\ntime. I think that's why we didn't see much performance improvement of\n0008 since it improves the \"stage 2 and stage 3\".\n\n==== Here is my definition for stage 1/2/3. \nstage 1:\n \treltuples = table_index_build_scan(heap, index, indexInfo, true, progress,\n\t\t\t\t\t\t\t\t\t\t ginBuildCallbackParallel, state, scan);\n\n\t\t/* write remaining accumulated entries */\n\t\tginFlushBuildState(state, index);\n\t\nstage 2:\n\t\t_gin_process_worker_data(state, state->bs_worker_sort)\n\nstage 3:\n\t\ttuplesort_performsort(state->bs_sortstate);\n\n\nBut 0008 still does many good stuff:\n1. Reduce the IO usage, this would be more useful on some heavily IO\n workload. \n2. Simplify the building logic by removing one stage.\n3. Add the 'buffer-writetup' to tuplesort.c, I don't have other user\n case for now, but it looks like a reasonable design.\n\nI think the current blocker is if it is safe to hack the tuplesort.c.\nWith my current knowledge, It looks good to me, but it would be better\nopen a dedicated thread to discuss this specially, the review would not\ntake a long time if a people who is experienced on this area would take\na look. \n\n> Now, what's the 0010 patch about?\n>\n> For some indexes (e.g. trgm), the parallel builds help a lot, because\n> they produce a lot of temporary data and the parallel sort is a\n> substantial part of the work. But for other indexes (especially the\n> \"smaller\" indexes on jsonb headers), it's not that great. For example\n> for \"jsonb\", having 3 workers shaves off only ~25% of the time, not 75%.\n>\n> Clearly, this happens because a lot of time is spent outside the sort,\n> actually inserting data into the index.\n\nYou can always foucs on the most important part which inpires me a lot,\neven with my simple testing, the \"inserting data into index\" stage take\n40% time.\n\n> So I was wondering if we might\n> parallelize that too, and how much time would it save - 0010 is an\n> experimental patch doing that. It splits the processing into 3 phases:\n>\n> 1. workers feeding data into tuplesort\n> 2. leader finishes sort and \"repartitions\" the data\n> 3. workers inserting their partition into index\n>\n> The patch is far from perfect (more a PoC) ..\n>\n> This does help a little bit, reducing the duration by ~15-25%. I wonder\n> if this might be improved by partitioning the data differently - not by\n> shuffling everything from the tuplesort into fileset (it increases the\n> amount of temporary data in the charts). And also by by distributing the\n> data differently - right now it's a bit of a round robin, because it\n> wasn't clear we know how many entries are there.\n\nDue to the complexity of the existing code, I would like to foucs on\nexisting patch first. So I vote for this optimization as a dedeciated\npatch. \n\n>>> and later we called 'tuplesort_performsort(state->bs_sortstate);'. 
Even\n>>> we have some CTID merges activity in '....(1)', the tuples are still\n>>> ordered, so the sort (in both tuplesort_putgintuple and\n>>> 'tuplesort_performsort) are not necessary,\n>>> ..\n>>> If Matthias's proposal is adopted, my optimization will not be useful\n>>> anymore and Matthias's porposal looks like a more natural and effecient\n>>> way.\n>\n> I think they might be complementary. I don't think it's reasonable to\n> expect GIN's BuildAccumulator to buffer all the index tuples at the\n> same time (as I mentioned upthread: we are or should be limited by\n> work memory), but the BuildAccumulator will do a much better job at\n> combining tuples than the in-memory sort + merge-write done by\n> Tuplesort (because BA will use (much?) less memory for the same number\n> of stored values).\n\nThank you Matthias for valuing my point! and thanks for highlighting the\nbenefit that BuildAccumulator can do a better job for sorting in memory\n(I think it is mainly because BuildAccumulator can do run-time merge\nwhen accept more tuples). but I still not willing to go further at this\ndirection. Reasons are: a). It probably can't make a big difference at\nthe final result. b). The best implementation of this idea would be\nallowing the user of tuplesort.c to insert the pre-sort tuples into tape\ndirectly rather than inserting them into tuplesort's memory and dump\nthem into tape without a sort. However I can't define a clean API for\nthe former case. c). create-index is a maintenance work, improving it by\n30% would be good, but if we just improve it by <3, it looks not very\ncharming in practice.\n\nSo my option is if we can have agreement on 0008, then we can final\nreview/test on the existing code (including 0009), and leave further\nimprovement as a dedicated patch. \n\nWhat do you think?\n\n[1] https://www.postgresql.org/message-id/87le0iqrsu.fsf%40163.com\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 27 Aug 2024 18:14:46 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "\n\nOn 8/27/24 12:14, Andy Fan wrote:\n> Tomas Vondra <[email protected]> writes:\n> \n>> Hi,\n>>\n>> I got to do the detailed benchmarking on the latest version of the patch\n>> series, so here's the results. My goal was to better understand the\n>> impact of each patch individually - especially the two parts introduced\n>> by Matthias, but not only - so I ran the test on a build with each fo\n>> the 0001-0009 patches.\n>>\n>> This is the same test I did at the very beginning, but the basic details\n>> are that I have a 22GB table with archives of our mailing lists (1.6M\n>> messages, roughly), and I build a couple different GIN indexes on\n>> that:\n> ..\n>>\n> \n> Very impresive testing!\n> \n>> And let's talk about the improvement by Matthias, namely:\n>>\n>> * 0008 Use a single GIN tuplesort\n>> * 0009 Reduce the size of GinTuple by 12 bytes\n>>\n>> I haven't really seen any impact on duration - it seems more or less\n>> within noise. Maybe it would be different on machines with less RAM, but\n>> on my two systems it didn't really make a difference.\n>>\n>> It did significantly reduce the amount of temporary data written, by\n>> ~40% or so. This is pretty nicely visible on the \"trgm\" case, which\n>> generates the most temp files of the four indexes. 
An example from the\n>> i5/32MB section looks like this:\n>>\n>> label 0000 0001 0002 0003 0004 0005 0006 0007 0008 0009 0010\n>> ------------------------------------------------------------------------\n>> trgm / 3 0 2635 3690 3715 1177 1177 1179 1179 696 682\n>> 1016\n> \n> After seeing the above data, I want to know where the time is spent and\n> why the ~40% IO doesn't make a measurable duration improvement. then I\n> did the following test.\n> \n> create table gin_t (a int[]);\n> insert into gin_t select * from rand_array(30000000, 0, 100, 0, 50); [1]\n> select pg_prewarm('gin_t');\n> \n> postgres=# create index on gin_t using gin(a);\n> INFO: pid: 145078, stage 1 took 44476 ms\n> INFO: pid: 145177, stage 1 took 44474 ms\n> INFO: pid: 145078, stage 2 took 2662 ms\n> INFO: pid: 145177, stage 2 took 2611 ms\n> INFO: pid: 145177, stage 3 took 240 ms\n> INFO: pid: 145078, stage 3 took 239 ms\n> \n> CREATE INDEX\n> Time: 79472.135 ms (01:19.472)\n> \n> Then we can see stage 1 take 56% execution time. stage 2 + stage 3 take\n> 3% execution time. and the leader's work takes the rest 41% execution\n> time. I think that's why we didn't see much performance improvement of\n> 0008 since it improves the \"stage 2 and stage 3\".\n> \n\nYes, that makes sense. It's so small fraction of the computation that it\ncan't translate to a meaningful speed.\n\n> ==== Here is my definition for stage 1/2/3. \n> stage 1:\n> \treltuples = table_index_build_scan(heap, index, indexInfo, true, progress,\n> \t\t\t\t\t\t\t\t\t\t ginBuildCallbackParallel, state, scan);\n> \n> \t\t/* write remaining accumulated entries */\n> \t\tginFlushBuildState(state, index);\n> \t\n> stage 2:\n> \t\t_gin_process_worker_data(state, state->bs_worker_sort)\n> \n> stage 3:\n> \t\ttuplesort_performsort(state->bs_sortstate);\n> \n> \n> But 0008 still does many good stuff:\n> 1. Reduce the IO usage, this would be more useful on some heavily IO\n> workload. \n> 2. Simplify the building logic by removing one stage.\n> 3. Add the 'buffer-writetup' to tuplesort.c, I don't have other user\n> case for now, but it looks like a reasonable design.\n> \n> I think the current blocker is if it is safe to hack the tuplesort.c.\n> With my current knowledge, It looks good to me, but it would be better\n> open a dedicated thread to discuss this specially, the review would not\n> take a long time if a people who is experienced on this area would take\n> a look. \n> \n\nI agree. I expressed the same impression earlier in this thread, IIRC.\n\n>> Now, what's the 0010 patch about?\n>>\n>> For some indexes (e.g. trgm), the parallel builds help a lot, because\n>> they produce a lot of temporary data and the parallel sort is a\n>> substantial part of the work. But for other indexes (especially the\n>> \"smaller\" indexes on jsonb headers), it's not that great. For example\n>> for \"jsonb\", having 3 workers shaves off only ~25% of the time, not 75%.\n>>\n>> Clearly, this happens because a lot of time is spent outside the sort,\n>> actually inserting data into the index.\n> \n> You can always foucs on the most important part which inpires me a lot,\n> even with my simple testing, the \"inserting data into index\" stage take\n> 40% time.\n>>> So I was wondering if we might\n>> parallelize that too, and how much time would it save - 0010 is an\n>> experimental patch doing that. It splits the processing into 3 phases:\n>>\n>> 1. workers feeding data into tuplesort\n>> 2. leader finishes sort and \"repartitions\" the data\n>> 3. 
workers inserting their partition into index\n>>\n>> The patch is far from perfect (more a PoC) ..\n>>\n>> This does help a little bit, reducing the duration by ~15-25%. I wonder\n>> if this might be improved by partitioning the data differently - not by\n>> shuffling everything from the tuplesort into fileset (it increases the\n>> amount of temporary data in the charts). And also by by distributing the\n>> data differently - right now it's a bit of a round robin, because it\n>> wasn't clear we know how many entries are there.\n> \n> Due to the complexity of the existing code, I would like to foucs on\n> existing patch first. So I vote for this optimization as a dedeciated\n> patch. \n> \n\nI agree. Even if we decide to do these parallel inserts, it relies on\ndoing the parallel sort first. So it makes sense to leave that for\nlater, as an additional improvement.\n\n>>>> and later we called 'tuplesort_performsort(state->bs_sortstate);'. Even\n>>>> we have some CTID merges activity in '....(1)', the tuples are still\n>>>> ordered, so the sort (in both tuplesort_putgintuple and\n>>>> 'tuplesort_performsort) are not necessary,\n>>>> ..\n>>>> If Matthias's proposal is adopted, my optimization will not be useful\n>>>> anymore and Matthias's porposal looks like a more natural and effecient\n>>>> way.\n>>\n>> I think they might be complementary. I don't think it's reasonable to\n>> expect GIN's BuildAccumulator to buffer all the index tuples at the\n>> same time (as I mentioned upthread: we are or should be limited by\n>> work memory), but the BuildAccumulator will do a much better job at\n>> combining tuples than the in-memory sort + merge-write done by\n>> Tuplesort (because BA will use (much?) less memory for the same number\n>> of stored values).\n> \n> Thank you Matthias for valuing my point! and thanks for highlighting the\n> benefit that BuildAccumulator can do a better job for sorting in memory\n> (I think it is mainly because BuildAccumulator can do run-time merge\n> when accept more tuples). but I still not willing to go further at this\n> direction. Reasons are: a). It probably can't make a big difference at\n> the final result. b). The best implementation of this idea would be\n> allowing the user of tuplesort.c to insert the pre-sort tuples into tape\n> directly rather than inserting them into tuplesort's memory and dump\n> them into tape without a sort. However I can't define a clean API for\n> the former case. c). create-index is a maintenance work, improving it by\n> 30% would be good, but if we just improve it by <3, it looks not very\n> charming in practice.\n> \n> So my option is if we can have agreement on 0008, then we can final\n> review/test on the existing code (including 0009), and leave further\n> improvement as a dedicated patch. \n> \n> What do you think?\n> \n\nYeah. I think we have agreement on 0001-0007. I'm a bit torn about 0008,\nI have not expected changing tuplesort like this when I started working\non the patch, but I can't deny it's a massive speedup for some cases\n(where the patch doesn't help otherwise). But then in other cases it\ndoesn't help at all, and 0010 helps. 
I wonder if maybe there's a good\nway to \"flip\" between those two approaches, by some heuristics.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Tue, 27 Aug 2024 13:16:25 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On Tue, 27 Aug 2024 at 12:15, Andy Fan <[email protected]> wrote:\n>\n> Tomas Vondra <[email protected]> writes:\n> > And let's talk about the improvement by Matthias, namely:\n> >\n> > * 0008 Use a single GIN tuplesort\n> > * 0009 Reduce the size of GinTuple by 12 bytes\n> >\n> > I haven't really seen any impact on duration - it seems more or less\n> > within noise. Maybe it would be different on machines with less RAM, but\n> > on my two systems it didn't really make a difference.\n> >\n> > It did significantly reduce the amount of temporary data written, by\n> > ~40% or so. This is pretty nicely visible on the \"trgm\" case, which\n> > generates the most temp files of the four indexes. An example from the\n> > i5/32MB section looks like this:\n> >\n> > label 0000 0001 0002 0003 0004 0005 0006 0007 0008 0009 0010\n> > ------------------------------------------------------------------------\n> > trgm / 3 0 2635 3690 3715 1177 1177 1179 1179 696 682\n> > 1016\n>\n> After seeing the above data, I want to know where the time is spent and\n> why the ~40% IO doesn't make a measurable duration improvement. then I\n> did the following test.\n[...]\n> ==== Here is my definition for stage 1/2/3.\n> stage 1:\n> reltuples = table_index_build_scan(heap, index, indexInfo, true, progress,\n> ginBuildCallbackParallel, state, scan);\n>\n> /* write remaining accumulated entries */\n> ginFlushBuildState(state, index);\n>\n> stage 2:\n> _gin_process_worker_data(state, state->bs_worker_sort)\n>\n> stage 3:\n> tuplesort_performsort(state->bs_sortstate);\n\nNote that tuplesort does most of its sort and IO work while receiving\ntuples, which in this case would be during table_index_build_scan.\ntuplesort_performsort usually only needs to flush the last elements of\na sort that it still has in memory, which is thus generally a cheap\noperation bound by maintenance work memory, and definitely not\nrepresentative of the total cost of sorting data. In certain rare\ncases it may take a longer time as it may have to merge sorted runs,\nbut those cases are quite rare in my experience.\n\n> But 0008 still does many good stuff:\n> 1. Reduce the IO usage, this would be more useful on some heavily IO\n> workload.\n> 2. Simplify the building logic by removing one stage.\n> 3. Add the 'buffer-writetup' to tuplesort.c, I don't have other user\n> case for now, but it looks like a reasonable design.\n\nI'd imagine nbtree would like to use this too, for applying some\ndeduplication in the sort stage. The IO benefits are quite likely to\nbe worth it; a minimum space saving of 25% on duplicated key values in\ntuple sorts sounds real great. 
And it doesn't even have to merge all\nduplicates: even if you only merge 10 tuples at a time, the space\nsaving on those duplicates would be at least 47% on 64-bit systems.\n\n> I think the current blocker is if it is safe to hack the tuplesort.c.\n> With my current knowledge, It looks good to me, but it would be better\n> open a dedicated thread to discuss this specially, the review would not\n> take a long time if a people who is experienced on this area would take\n> a look.\n\nI could adapt the patch for nbtree use, to see if anyone's willing to\nreview that?\n\n> > Now, what's the 0010 patch about?\n> >\n> > For some indexes (e.g. trgm), the parallel builds help a lot, because\n> > they produce a lot of temporary data and the parallel sort is a\n> > substantial part of the work. But for other indexes (especially the\n> > \"smaller\" indexes on jsonb headers), it's not that great. For example\n> > for \"jsonb\", having 3 workers shaves off only ~25% of the time, not 75%.\n> >\n> > Clearly, this happens because a lot of time is spent outside the sort,\n> > actually inserting data into the index.\n>\n> You can always foucs on the most important part which inpires me a lot,\n> even with my simple testing, the \"inserting data into index\" stage take\n> 40% time.\n\nnbtree does sorted insertions into the tree, constructing leaf pages\none at a time and adding separator keys in the page above when the\nleaf page was filled, thus removing the need to descend the btree. I\nimagine we can save some performance by mirroring that in GIN too,\nwith as additional bonus that we'd be free to start logging completed\npages before we're done with the full index, reducing max WAL\nthroughput in GIN index creation.\n\n> > I think they might be complementary. I don't think it's reasonable to\n> > expect GIN's BuildAccumulator to buffer all the index tuples at the\n> > same time (as I mentioned upthread: we are or should be limited by\n> > work memory), but the BuildAccumulator will do a much better job at\n> > combining tuples than the in-memory sort + merge-write done by\n> > Tuplesort (because BA will use (much?) less memory for the same number\n> > of stored values).\n>\n> Thank you Matthias for valuing my point! and thanks for highlighting the\n> benefit that BuildAccumulator can do a better job for sorting in memory\n> (I think it is mainly because BuildAccumulator can do run-time merge\n> when accept more tuples). but I still not willing to go further at this\n> direction. Reasons are: a). It probably can't make a big difference at\n> the final result. b). The best implementation of this idea would be\n> allowing the user of tuplesort.c to insert the pre-sort tuples into tape\n> directly rather than inserting them into tuplesort's memory and dump\n> them into tape without a sort.\n\nYou'd still need to keep track of sorted runs on those tapes, which is what\ntuplesort.c does for us.\n\n> However I can't define a clean API for\n> the former case.\n\nI imagine a pair of tuplesort_beginsortedrun();\ntuplesort_endsortedrun() -functions to help this, but I'm not 100%\nsure if we'd want to expose Tuplesort to non-PG sorting algorithms, as\nit would be one easy way to create incorrect results if the sort used\nin tuplesort isn't exactly equivalent to the sort used by the provider\nof the tuples.\n\n> c). 
create-index is a maintenance work, improving it by\n> 30% would be good, but if we just improve it by <3, it looks not very\n> charming in practice.\n\nI think improving 3% on reindex operations can be well worth the effort.\n\nAlso, do note that the current patch does (still) not correctly handle\n[maintenance_]work_mem: Every backend's BuildAccumulator uses up to\nwork_mem of memory here, while the launched tuplesorts use an\nadditional maintenance_work_mem of memory, for a total of (workers +\n1) * work_mem + m_w_m of memory usage. The available memory should\ninstead be allocated between tuplesort and BuildAccumulator, but can\nprobably mostly be allocated to just BuildAccumulator if we can dump\nthe data into the tuplesort directly, as it'd reduce the overall\nnumber of operations and memory allocations for the tuplesort. I think\nthat once we correctly account for memory allocations (and an improved\nwrite path) we'll be able to see a meaningfully larger performance\nimprovement.\n\n> So my option is if we can have agreement on 0008, then we can final\n> review/test on the existing code (including 0009), and leave further\n> improvement as a dedicated patch.\n\nAs mentioned above, I think I could update the patch for a btree\nimplementation that also has immediate benefits, if so desired?\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 27 Aug 2024 15:28:12 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "Matthias van de Meent <[email protected]> writes:\n\n\n> tuplesort_performsort usually only needs to flush the last elements of\n> ... In certain rare\n> cases it may take a longer time as it may have to merge sorted runs,\n> but those cases are quite rare in my experience.\n\nOK, I am expecting such cases are not rare, Suppose we have hundreds of\nGB heap tuple, and have the 64MB or 1GB maintenance_work_mem setup, it\nprobably hit this sistuation. I'm not mean at which experience is the\nfact, but I am just to highlights the gap in our minds. and thanks for\nsharing this, I can pay more attention in this direction in my future\nwork. To be clearer, my setup hit the 'mergeruns' case. \n\n>\n>> But 0008 still does many good stuff:\n>> 1. Reduce the IO usage, this would be more useful on some heavily IO\n>> workload.\n>> 2. Simplify the building logic by removing one stage.\n>> 3. 
Add the 'buffer-writetup' to tuplesort.c, I don't have other user\n>> case for now, but it looks like a reasonable design.\n>\n> I'd imagine nbtree would like to use this too, for applying some\n> deduplication in the sort stage.\n\nThe current ntbtree do the deduplication during insert into the nbtree\nIIUC, in your new strategy, we can move it the \"sort\" stage, which looks\ngood to me [to confirm my understanding].\n\n> The IO benefits are quite likely to\n> be worth it; a minimum space saving of 25% on duplicated key values in\n> tuple sorts sounds real great.\n\nJust be clearer on the knowledge, the IO benefits can be only gained\nwhen the tuplesort's memory can't hold all the tuples, and in such case,\ntuplesort_performsort would run the 'mergeruns', or else we can't get\nany benefit?\n\n>\n>> I think the current blocker is if it is safe to hack the tuplesort.c.\n>> With my current knowledge, It looks good to me, but it would be better\n>> open a dedicated thread to discuss this specially, the review would not\n>> take a long time if a people who is experienced on this area would take\n>> a look.\n>\n> I could adapt the patch for nbtree use, to see if anyone's willing to\n> review that?\n\nI'm interested with it and can do some review & testing. But the\nkeypoint would be we need some authorities are willing to review it, to\nmake it happen to a bigger extent, a dedicated thread would be helpful.\n\n>> > Now, what's the 0010 patch about?\n>> >\n>> > For some indexes (e.g. trgm), the parallel builds help a lot, because\n>> > they produce a lot of temporary data and the parallel sort is a\n>> > substantial part of the work. But for other indexes (especially the\n>> > \"smaller\" indexes on jsonb headers), it's not that great. For example\n>> > for \"jsonb\", having 3 workers shaves off only ~25% of the time, not 75%.\n>> >\n>> > Clearly, this happens because a lot of time is spent outside the sort,\n>> > actually inserting data into the index.\n>>\n>> You can always foucs on the most important part which inpires me a lot,\n>> even with my simple testing, the \"inserting data into index\" stage take\n>> 40% time.\n>\n> nbtree does sorted insertions into the tree, constructing leaf pages\n> one at a time and adding separator keys in the page above when the\n> leaf page was filled, thus removing the need to descend the btree. I\n> imagine we can save some performance by mirroring that in GIN too,\n> with as additional bonus that we'd be free to start logging completed\n> pages before we're done with the full index, reducing max WAL\n> throughput in GIN index creation.\n\nI agree this is a promising direction as well.\n\n>> > I think they might be complementary. I don't think it's reasonable to\n>> > expect GIN's BuildAccumulator to buffer all the index tuples at the\n>> > same time (as I mentioned upthread: we are or should be limited by\n>> > work memory), but the BuildAccumulator will do a much better job at\n>> > combining tuples than the in-memory sort + merge-write done by\n>> > Tuplesort (because BA will use (much?) less memory for the same number\n>> > of stored values).\n>>\n>> Thank you Matthias for valuing my point! and thanks for highlighting the\n>> benefit that BuildAccumulator can do a better job for sorting in memory\n>> (I think it is mainly because BuildAccumulator can do run-time merge\n>> when accept more tuples). but I still not willing to go further at this\n>> direction. Reasons are: a). It probably can't make a big difference at\n>> the final result. b). 
The best implementation of this idea would be\n>> allowing the user of tuplesort.c to insert the pre-sort tuples into tape\n>> directly rather than inserting them into tuplesort's memory and dump\n>> them into tape without a sort.\n>\n> You'd still need to keep track of sorted runs on those tapes, which is what\n> tuplesort.c does for us.\n>\n>> However I can't define a clean API for\n>> the former case.\n>\n> I imagine a pair of tuplesort_beginsortedrun();\n> tuplesort_endsortedrun() -functions to help this.\n\nThis APIs do are better than the ones in my mind:) during the range\nbetween tuplesort_beginsortedrun and tuplesort_endsortedrun(), we can\nbypass the tuplessort's memory. \n\n> but I'm not 100%\n> sure if we'd want to expose Tuplesort to non-PG sorting algorithms, as\n> it would be one easy way to create incorrect results if the sort used\n> in tuplesort isn't exactly equivalent to the sort used by the provider\n> of the tuples.\n\nOK.\n\n>\n>> c). create-index is a maintenance work, improving it by\n>> 30% would be good, but if we just improve it by <3, it looks not very\n>> charming in practice.\n>\n> I think improving 3% on reindex operations can be well worth the effort.\n\n> Also, do note that the current patch does (still) not correctly handle\n> [maintenance_]work_mem: Every backend's BuildAccumulator uses up to\n> work_mem of memory here, while the launched tuplesorts use an\n> additional maintenance_work_mem of memory, for a total of (workers +\n> 1) * work_mem + m_w_m of memory usage. The available memory should\n> instead be allocated between tuplesort and BuildAccumulator, but can\n> probably mostly be allocated to just BuildAccumulator if we can dump\n> the data into the tuplesort directly, as it'd reduce the overall\n> number of operations and memory allocations for the tuplesort. I think\n> that once we correctly account for memory allocations (and an improved\n> write path) we'll be able to see a meaningfully larger performance\n> improvement.\n\nPersonally I am more fans of your \"buffer writetup\" idea, but not the\nsame interests with the tuplesort_beginsortedrun /\ntuplesort_endsortedrun. I said the '3%' is for the later one and I\nguess you understand it as the former one. \n>\n>> So my option is if we can have agreement on 0008, then we can final\n>> review/test on the existing code (including 0009), and leave further\n>> improvement as a dedicated patch.\n>\n> As mentioned above, I think I could update the patch for a btree\n> implementation that also has immediate benefits, if so desired?\n\nIf you are saying about the buffered-writetup in tuplesort, then I think\nit is great, and in a dedicated thread for better exposure.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Wed, 28 Aug 2024 08:37:55 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "On Wed, 28 Aug 2024 at 02:38, Andy Fan <[email protected]> wrote:\n>\n> Matthias van de Meent <[email protected]> writes:\n> > tuplesort_performsort usually only needs to flush the last elements of\n> > ... In certain rare\n> > cases it may take a longer time as it may have to merge sorted runs,\n> > but those cases are quite rare in my experience.\n>\n> OK, I am expecting such cases are not rare, Suppose we have hundreds of\n> GB heap tuple, and have the 64MB or 1GB maintenance_work_mem setup, it\n> probably hit this sistuation. 
I'm not mean at which experience is the\n> fact, but I am just to highlights the gap in our minds. and thanks for\n> sharing this, I can pay more attention in this direction in my future\n> work. To be clearer, my setup hit the 'mergeruns' case.\n\nHuh, I've never noticed the performsort phase of btree index creation\n(as seen in pg_stat_progress_create_index) take much time if any,\nespecially when compared to the tuple loading phase, so I assumed it\ndidn't happen often. Hmm, maybe I've just been quite lucky.\n\n> >\n> >> But 0008 still does many good stuff:\n> >> 1. Reduce the IO usage, this would be more useful on some heavily IO\n> >> workload.\n> >> 2. Simplify the building logic by removing one stage.\n> >> 3. Add the 'buffer-writetup' to tuplesort.c, I don't have other user\n> >> case for now, but it looks like a reasonable design.\n> >\n> > I'd imagine nbtree would like to use this too, for applying some\n> > deduplication in the sort stage.\n>\n> The current ntbtree do the deduplication during insert into the nbtree\n> IIUC, in your new strategy, we can move it the \"sort\" stage, which looks\n> good to me [to confirm my understanding].\n\nCorrect: We can do at least some deduplication in the sort stage. Not\nall, because tuples need to fit on pages and we don't want to make the\ntuples so large that we'd cause unnecessary splits while loading the\ntree, but merging runs of 10-30 tuples should reduce IO requirements\nby some margin for indexes where deduplication is important.\n\n> > The IO benefits are quite likely to\n> > be worth it; a minimum space saving of 25% on duplicated key values in\n> > tuple sorts sounds real great.\n>\n> Just be clearer on the knowledge, the IO benefits can be only gained\n> when the tuplesort's memory can't hold all the tuples, and in such case,\n> tuplesort_performsort would run the 'mergeruns', or else we can't get\n> any benefit?\n\nIt'd be when the tuplesort's memory can't hold all tuples, but\nmergeruns isn't strictly required here, as dumptuples() would already\nallow some tuple merging.\n\n> >> I think the current blocker is if it is safe to hack the tuplesort.c.\n> >> With my current knowledge, It looks good to me, but it would be better\n> >> open a dedicated thread to discuss this specially, the review would not\n> >> take a long time if a people who is experienced on this area would take\n> >> a look.\n> >\n> > I could adapt the patch for nbtree use, to see if anyone's willing to\n> > review that?\n>\n> I'm interested with it and can do some review & testing. But the\n> keypoint would be we need some authorities are willing to review it, to\n> make it happen to a bigger extent, a dedicated thread would be helpful.\n\nThen I'll split it off into a new thread sometime later this week.\n\n> > nbtree does sorted insertions into the tree, constructing leaf pages\n> > one at a time and adding separator keys in the page above when the\n> > leaf page was filled, thus removing the need to descend the btree. 
I\n> > imagine we can save some performance by mirroring that in GIN too,\n> > with as additional bonus that we'd be free to start logging completed\n> > pages before we're done with the full index, reducing max WAL\n> > throughput in GIN index creation.\n>\n> I agree this is a promising direction as well.\n\nIt'd be valuable to see if the current patch's \"parallel sorted\"\ninsertion is faster even than the current GIN insertion path even if\nwe use only the primary process, as it could be competative.\nBtree-like bulk tree loading might even be meaningfully faster than\nGIN's current index creation process. However, as I mentioned\nsignificantly upthread, I don't expect that change to happen in this\npatch series.\n\n> > I imagine a pair of tuplesort_beginsortedrun();\n> > tuplesort_endsortedrun() -functions to help this.\n>\n> This APIs do are better than the ones in my mind:) during the range\n> between tuplesort_beginsortedrun and tuplesort_endsortedrun(), we can\n> bypass the tuplessort's memory.\n\nExactly, we'd have the user call tuplesort_beginsortedrun(); then\niteratively insert its sorted tuples using the usual\ntuplesort_putYYY() api, and then call _endsortedrun() when the sorted\nrun is complete. It does need some work in tuplesort state handling\nand internals, but I think that's quite achievable.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 28 Aug 2024 03:15:33 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n\nHi Tomas,\n\n> Yeah. I think we have agreement on 0001-0007.\n\nYes, the design of 0001-0007 looks good to me and because of the\nexisting compexitity, I want to foucs on this part for now. I am doing\ncode review from yesterday, and now my work is done. Just some small\nquestions: \n\n1. In GinBufferStoreTuple,\n\n\t/*\n\t * Check if the last TID in the current list is frozen. This is the case\n\t * when merging non-overlapping lists, e.g. in each parallel worker.\n\t */\n\tif ((buffer->nitems > 0) &&\n\t\t(ItemPointerCompare(&buffer->items[buffer->nitems - 1], &tup->first) == 0))\n\t\tbuffer->nfrozen = buffer->nitems;\n\nshould we do (ItemPointerCompare(&buffer->items[buffer->nitems - 1],\n&tup->first) \"<=\" 0), rather than \"==\"? \n\n2. Given the \"non-overlap\" case should be the major case\nGinBufferStoreTuple , does it deserve a fastpath for it before calling\nginMergeItemPointers since ginMergeItemPointers have a unconditionally\nmemory allocation directly, and later we pfree it?\n\nnew = ginMergeItemPointers(&buffer->items[buffer->nfrozen], /* first unfronzen */\n\t\t\t\t (buffer->nitems - buffer->nfrozen),\t/* num of unfrozen */\n\t\t\t items, tup->nitems, &nnew);\n\n\n3. The following comment in index_build is out-of-date now :)\n\n\t/*\n\t * Determine worker process details for parallel CREATE INDEX. Currently,\n\t * only btree has support for parallel builds.\n\t *\n\n4. Comments - Buffer is not empty and it's storing \"a different key\"\nlooks wrong to me. the key may be same and we just need to flush them\nbecause of memory usage. There is the same issue in both\n_gin_process_worker_data and _gin_parallel_merge. 
\n\n\t\tif (GinBufferShouldTrim(buffer, tup))\n\t\t{\n\t\t\tAssert(buffer->nfrozen > 0);\n\n\t\t\tstate->buildStats.nTrims++;\n\n\t\t\t/*\n\t\t\t * Buffer is not empty and it's storing a different key - flush\n\t\t\t * the data into the insert, and start a new entry for current\n\t\t\t * GinTuple.\n\t\t\t */\n\t\t\tAssertCheckItemPointers(buffer, true);\n\nI also run valgrind testing with some testcase, no memory issue is\nfound. \n\n> I'm a bit torn about 0008, I have not expected changing tuplesort like\n> this when I started working \n> on the patch, but I can't deny it's a massive speedup for some cases\n> (where the patch doesn't help otherwise). But then in other cases it\n> doesn't help at all, and 0010 helps.\n\nYes, I'd like to see these improvements both 0008 and 0010 as a\ndedicated improvement. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 29 Aug 2024 12:30:44 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel CREATE INDEX for GIN indexes" } ]
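To make question 2 in the review above concrete, here is a minimal sketch of such a fast path, written against the names used in the quoted snippets (buffer->items, buffer->nitems, buffer->nfrozen, tup, and "items" as the decoded TIDs of the incoming GinTuple, as in the ginMergeItemPointers() call quoted above). These names come from the patch under review, so treat the details as assumptions rather than committed code. The idea is simply that when the incoming tuple's TIDs all sort after everything already buffered (the common case when parallel workers produce non-overlapping ranges), the data can be appended in place, skipping ginMergeItemPointers() and its unconditional palloc/pfree:

    if (buffer->nitems == 0 ||
        ItemPointerCompare(&buffer->items[buffer->nitems - 1], &tup->first) < 0)
    {
        /* non-overlapping ranges: append directly (assumes items[] has room) */
        memcpy(&buffer->items[buffer->nitems], items,
               tup->nitems * sizeof(ItemPointerData));
        buffer->nitems += tup->nitems;
    }
    else
    {
        /* overlapping ranges: fall back to the generic merge */
        new = ginMergeItemPointers(&buffer->items[buffer->nfrozen],
                                   buffer->nitems - buffer->nfrozen,
                                   items, tup->nitems, &nnew);
        /* existing code then replaces the unfrozen tail with 'new' */
    }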
[ { "msg_contents": "I thought that this might be a small quality of life improvement for \npeople scrolling through logs wondering which tranche name wasn't \nregistered.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Thu, 02 May 2024 11:23:06 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Specify tranch name in error when not registered" } ]
[ { "msg_contents": "Sorry to have to ask for help here, but no amount of stepping through code\nis giving me the answer.\n\nI'm building an index access method which supports an ordering operator:\n\n CREATE OPERATOR pg_catalog.<<=>> (\n FUNCTION = rdb.rank_match,\n LEFTARG = record,\n RIGHTARG = rdb.RankSpec\n );\n\n CREATE OPERATOR CLASS rdb_ops DEFAULT FOR TYPE record USING rdb AS\n OPERATOR 1 pg_catalog.<===> (record, rdb.userqueryspec),\n OPERATOR 2 pg_catalog.<<=>> (record, rdb.rankspec) FOR ORDER BY\npg_catalog.float_ops;\n\nHere is the supporting code (in Rust):\n\n #[derive(Serialize, Deserialize, PostgresType, Debug)]\n pub struct RankSpec {\n pub fastrank: String,\n pub slowrank: String,\n pub candidates: i32,\n }\n\n #[pg_extern(\n sql = \"CREATE FUNCTION rdb.rank_match(rec record, rankspec\nrdb.RankSpec) \\\n RETURNS real IMMUTABLE STRICT PARALLEL SAFE LANGUAGE c \\\n AS 'MODULE_PATHNAME', 'rank_match_wrapper';\",\n requires = [RankSpec],\n no_guard\n )]\n fn rank_match(_fcinfo: pg_sys::FunctionCallInfo) -> f32 {\n // todo make this an error\n pgrx::log!(\"Error -- this ranking method can only be called when\nthere is an RDB full-text search in the WHERE clause.\");\n 42.0\n }\n\n #[pg_extern(immutable, strict, parallel_safe)]\n fn rank(fastrank: String, slowrank: String, candidates: i32) ->\nRankSpec {\n RankSpec {\n fastrank,\n slowrank,\n candidates,\n }\n }\n\nThe index access method works fine. It successfully gets the keys and the\norderbys in the amrescan() method, and it dutifully returns the appropriate\nscan.xs_heaptid in the amgettuple() method. It returns the tids in the\nproper order. It also sets:\n\nscandesc.xs_recheck = false;\nscandesc.xs_recheckorderby = false;\n\nso there's no reason the system needs to know the actual float value\nreturned by rank_match(), the ordering operator distance function. In any\ncase, that value can only be calculated based on information in the index\nitself, and can't be calculated by rank_match().\n\nNevertheless, the system calls rank_match() after every call to\namgettuple(), and I can't figure out why. I've stepped through the code,\nand it looks like it has something to do with ScanState.ps.ps.ps_ProjInfo,\nbut I can't figure out where or why it's getting set.\n\nHere's a sample query. 
I have not found a query that does *not* call\nrank_match():\n\nSELECT *\nFROM products\nWHERE products <===> rdb.userquery('teddy')\nORDER BY products <<=>> rdb.rank();\n\nI'd be grateful for any help or insights.\n\n\n-- \nChris Cleveland\n312-339-2677 mobile\n\nSorry to have to ask for help here, but no amount of stepping through code is giving me the answer.I'm building an index access method which supports an ordering operator:    CREATE OPERATOR pg_catalog.<<=>> (        FUNCTION = rdb.rank_match,        LEFTARG = record,        RIGHTARG = rdb.RankSpec    );    CREATE OPERATOR CLASS rdb_ops DEFAULT FOR TYPE record USING rdb AS        OPERATOR 1 pg_catalog.<===> (record, rdb.userqueryspec),        OPERATOR 2 pg_catalog.<<=>> (record, rdb.rankspec) FOR ORDER BY pg_catalog.float_ops;Here is the supporting code (in Rust):    #[derive(Serialize, Deserialize, PostgresType, Debug)]    pub struct RankSpec {        pub fastrank: String,        pub slowrank: String,        pub candidates: i32,    }    #[pg_extern(    sql = \"CREATE FUNCTION rdb.rank_match(rec record, rankspec rdb.RankSpec) \\    RETURNS real IMMUTABLE STRICT PARALLEL SAFE LANGUAGE c \\    AS 'MODULE_PATHNAME', 'rank_match_wrapper';\",    requires = [RankSpec],    no_guard    )]    fn rank_match(_fcinfo: pg_sys::FunctionCallInfo) -> f32 {        // todo make this an error        pgrx::log!(\"Error -- this ranking method can only be called when there is an RDB full-text search in the WHERE clause.\");        42.0    }    #[pg_extern(immutable, strict, parallel_safe)]    fn rank(fastrank: String, slowrank: String, candidates: i32) -> RankSpec {        RankSpec {            fastrank,            slowrank,            candidates,        }    }The index access method works fine. It successfully gets the keys and the orderbys in the amrescan() method, and it dutifully returns the appropriate scan.xs_heaptid in the amgettuple() method. It returns the tids in the proper order. It also sets:scandesc.xs_recheck = false; scandesc.xs_recheckorderby = false;so there's no reason the system needs to know the actual float value returned by rank_match(), the ordering operator distance function. In any case, that value can only be calculated based on information in the index itself, and can't be calculated by rank_match().Nevertheless, the system calls rank_match() after every call to amgettuple(), and I can't figure out why. I've stepped through the code, and it looks like it has something to do with ScanState.ps.ps.ps_ProjInfo, but I can't figure out where or why it's getting set.Here's a sample query. I have not found a query that does *not* call rank_match():SELECT * FROM products WHERE products <===> rdb.userquery('teddy') ORDER BY products <<=>> rdb.rank();I'd be grateful for any help or insights.-- Chris Cleveland312-339-2677 mobile", "msg_date": "Thu, 2 May 2024 11:49:53 -0500", "msg_from": "Chris Cleveland <[email protected]>", "msg_from_op": true, "msg_subject": "Why is FOR ORDER BY function getting called when the index is\n handling ordering?" }, { "msg_contents": "On Thu, 2 May 2024 at 18:50, Chris Cleveland <[email protected]> wrote:\n>\n> Sorry to have to ask for help here, but no amount of stepping through code is giving me the answer.\n>\n> I'm building an index access method which supports an ordering operator:\n[...]\n> so there's no reason the system needs to know the actual float value returned by rank_match(), the ordering operator distance function. 
In any case, that value can only be calculated based on information in the index itself, and can't be calculated by rank_match().\n>\n> Nevertheless, the system calls rank_match() after every call to amgettuple(), and I can't figure out why. I've stepped through the code, and it looks like it has something to do with ScanState.ps.ps.ps_ProjInfo, but I can't figure out where or why it's getting set.\n>\n> Here's a sample query. I have not found a query that does *not* call rank_match():\n[...]\n> I'd be grateful for any help or insights.\n\nThe ordering clause produces a junk column that's used to keep track\nof the ordering, and because it's a projected column (not the indexed\nvalue, but an expression over that column) the executor will execute\nthat projection. This happens regardless of it's use in downstream\nnodes due to planner or executor limitations.\n\nSee also Heikki's thread over at [0], and a comment of me about the\nsame issue over at pgvector's issue board at [1].\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://www.postgresql.org/message-id/2ca5865b-4693-40e5-8f78-f3b45d5378fb%40iki.fi\n[1] https://github.com/pgvector/pgvector/issues/359#issuecomment-1840786021\n\n\n", "msg_date": "Thu, 2 May 2024 19:20:30 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is FOR ORDER BY function getting called when the index is\n handling ordering?" }, { "msg_contents": "Chris Cleveland <[email protected]> writes:\n> I'm building an index access method which supports an ordering operator:\n\n> CREATE OPERATOR pg_catalog.<<=>> (\n> FUNCTION = rdb.rank_match,\n> LEFTARG = record,\n> RIGHTARG = rdb.RankSpec\n> );\n\nOkay ...\n\n> ... there's no reason the system needs to know the actual float value\n> returned by rank_match(), the ordering operator distance function. In any\n> case, that value can only be calculated based on information in the index\n> itself, and can't be calculated by rank_match().\n\nThis seems to me to be a very poorly designed concept. An index\nordering operator is an optimization that the planner may or may\nnot choose to employ. If you've designed your code on the assumption\nthat that's the only possible plan, it will break for any but the\nmost trivial queries.\n\n> Nevertheless, the system calls rank_match() after every call to\n> amgettuple(), and I can't figure out why.\n\nThe ORDER BY value is included in the set of values that the plan\nis expected to output. This is so because it's set up to still\nwork if the planner needs to use an explicit sort step. For\ninstance, using a trivial table with a couple of bigint columns:\n\nregression=# explain verbose select * from int8_tbl order by q1/2;\n QUERY PLAN \n----------------------------------------------------------------------\n Sort (cost=1.12..1.13 rows=5 width=24)\n Output: q1, q2, ((q1 / 2))\n Sort Key: ((int8_tbl.q1 / 2))\n -> Seq Scan on public.int8_tbl (cost=0.00..1.06 rows=5 width=24)\n Output: q1, q2, (q1 / 2)\n(5 rows)\n\nThe q1/2 column is marked \"resjunk\", so it doesn't actually get\nsent to the client, but it's computed so that the sort step can\nuse it. 
Even if I do this:\n\nregression=# create index on int8_tbl ((q1/2));\nCREATE INDEX\nregression=# set enable_seqscan TO 0;\nSET\nregression=# explain verbose select * from int8_tbl order by q1/2;\n QUERY PLAN \n-------------------------------------------------------------------------------------------\n Index Scan using int8_tbl_expr_idx on public.int8_tbl (cost=0.13..12.22 rows=5 width=24)\n Output: q1, q2, (q1 / 2)\n(2 rows)\n\n... it's still there, because the set of columns that the scan node is\nexpected to emit doesn't change based on the plan type. We could make\nit change perhaps, if we tried hard enough, but so far nobody has\nwanted to invest work in that. Note that even if this indexscan\nis chosen, that doesn't ensure that we won't need an explicit sort\nlater, since the query might need joins or aggregation on top of\nthe scan node. So it's far from trivial to decide that the scan\nnode doesn't need to emit the sort column.\n\nIn any case, I'm uninterested in making the world safe for a\ndesign that's going to fail if the planner doesn't choose an\nindexscan on a specific index. That's too fragile.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 May 2024 13:21:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is FOR ORDER BY function getting called when the index is\n handling ordering?" }, { "msg_contents": "> > ... there's no reason the system needs to know the actual float value\n> > returned by rank_match(), the ordering operator distance function. In any\n> > case, that value can only be calculated based on information in the index\n> > itself, and can't be calculated by rank_match().\n>\n> This seems to me to be a very poorly designed concept. An index\n> ordering operator is an optimization that the planner may or may\n> not choose to employ. If you've designed your code on the assumption\n> that that's the only possible plan, it will break for any but the\n> most trivial queries.\n>\n\nSo the use-case here is an enterprise search engine built using an index\naccess method. A search engine ranks its result set based on many, many\nfactors. Among them: token-specific weights, token statistics calculated\nacross the entire index, field lens, average field lens calculated across\nthe index, various kinds of linguistic analysis (verbs? nouns?), additional\nterms added to the query drawn from other parts of the index, fuzzy terms\nbased on ngrams from across the index, and a great many other magic tricks.\nThere are also index-specific parameters that are specified at index time,\nboth as parameters to the op classes attached to each column, and options\nspecified by CREATE INDEX ... WITH (...).\n\nIf the system must generate a score for ranking using a simple ORDER BY\ncolumn_op_constant, it can't, because all that information within the index\nitself isn't available.\n\nIn any case, I'm uninterested in making the world safe for a\n> design that's going to fail if the planner doesn't choose an\n> indexscan on a specific index. That's too fragile.\n>\n>\nBut that's the reality of search engines. It's also the reason that the\nbuilt-in pg full-text search has real limitations.\n\nThis isn't just a search engine problem. *Any* index type that depends on\nwhole-table statistics, or index options, or knowledge about other items in\nthe index will not be able to calculate a proper score without access to\nthe index. This applies to certain types of vector search. 
It could also\napply to a recommendation engine.\n\nIn general, it hasn't been a problem for my project because I adjust costs\nto force the use of the index. (Yes, I know that doing this is\ncontroversial, but I have little choice, and it works.)\n\nThe only area where it has become a problem is in the creation of the junk\ncolumn.\n\nI do understand the need for the index to report the value of the sort key\nup the food chain, because yes, all kinds of arbitrary re-ordering could\noccur. We already have a mechanism for that, though:\nIndexScanDesc.xs_orderbyvals. If there were a way for the system to use\nthat instead of a call to the ordering function, we'd be all set.\n\nIt would also be nice if the orderbyval could be made available in the\nprojection. That way we could report the score() in the result set.\n\nMatthias' response and links touch on some of these issues.\n\n-- \nChris Cleveland\n312-339-2677 mobile\n\n\n> ... there's no reason the system needs to know the actual float value\n> returned by rank_match(), the ordering operator distance function. In any\n> case, that value can only be calculated based on information in the index\n> itself, and can't be calculated by rank_match().\n\nThis seems to me to be a very poorly designed concept.  An index\nordering operator is an optimization that the planner may or may\nnot choose to employ.  If you've designed your code on the assumption\nthat that's the only possible plan, it will break for any but the\nmost trivial queries.So the use-case here is an enterprise search engine built using an index access method. A search engine ranks its result set based on many, many factors. Among them: token-specific weights, token statistics calculated across the entire index, field lens, average field lens calculated across the index, various kinds of linguistic analysis (verbs? nouns?), additional terms added to the query drawn from other parts of the index, fuzzy terms based on ngrams from across the index, and a great many other magic tricks. There are also index-specific parameters that are specified at index time, both as parameters to the op classes attached to each column, and options specified by CREATE INDEX ... WITH (...).If the system must generate a score for ranking using a simple ORDER BY column_op_constant, it can't, because all that information within the index itself isn't available. \nIn any case, I'm uninterested in making the world safe for a\ndesign that's going to fail if the planner doesn't choose an\nindexscan on a specific index.  That's too fragile.\nBut that's the reality of search engines. It's also the reason that the built-in pg full-text search has real limitations.This isn't just a search engine problem. *Any* index type that depends on whole-table statistics, or index options, or knowledge about other items in the index will not be able to calculate a proper score without access to the index. This applies to certain types of vector search. It could also apply to a recommendation engine.In general, it hasn't been a problem for my project because I adjust costs to force the use of the index. (Yes, I know that doing this is controversial, but I have little choice, and it works.)The only area where it has become a problem is in the creation of the junk column.I do understand the need for the index to report the value of the sort key up the food chain, because yes, all kinds of arbitrary re-ordering could occur. We already have a mechanism for that, though: IndexScanDesc.xs_orderbyvals. 
If there were a way for the system to use that instead of a call to the ordering function, we'd be all set.It would also be nice if the orderbyval could be made available in the projection. That way we could report the score() in the result set.Matthias' response and links touch on some of these issues.-- Chris Cleveland312-339-2677 mobile", "msg_date": "Thu, 2 May 2024 15:42:05 -0500", "msg_from": "Chris Cleveland <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is FOR ORDER BY function getting called when the index is\n handling ordering?" } ]
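For reference, the xs_orderbyvals mechanism Chris refers to is the same one KNN-GiST uses: the access method hands back the computed distance alongside each TID, so nothing has to re-run the distance function to verify or restore the ordering. A sketch of what that looks like inside an amgettuple implementation (the scan->xs_* fields are the real IndexScanDesc members; "item" and item->rank are hypothetical per-result fields for this particular AM):

    /* inside amgettuple, after choosing the next match (sketch only) */
    scan->xs_heaptid = item->tid;
    if (scan->numberOfOrderBys > 0)
    {
        /* the ordering operator here returns real, hence float4 datums */
        scan->xs_orderbyvals[0] = Float4GetDatum(item->rank);
        scan->xs_orderbynulls[0] = false;
    }
    scan->xs_recheck = false;
    scan->xs_recheckorderby = false;
    return true;

As Tom and Matthias explain above, though, this only covers rechecking and re-sorting; the resjunk column for the ORDER BY expression is still evaluated by the projection independently of these fields, which is why rank_match() keeps getting called.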
[ { "msg_contents": "Hi Hackers,\n\nThere is a comment like below in src/include/libpq/libpq.h,\n\n  /*\n   * prototypes for functions in be-secure.c\n   */\nextern PGDLLIMPORT char *ssl_library;\nextern PGDLLIMPORT char *ssl_cert_file;\n\n...\n\nHowever, 'ssl_library', 'ssl_cert_file' and the rest are global \nparameter settings, not functions. To address this confusion, it would \nbe better to move all global configuration settings to the proper \nsection, such as /* GUCs */, to maintain consistency.\n\nI have attached an attempt to help address this issue.\n\n\nThank you,\n\nDavid", "msg_date": "Thu, 2 May 2024 15:37:08 -0700", "msg_from": "David Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "wrong comment in libpq.h" }, { "msg_contents": "On 03.05.24 00:37, David Zhang wrote:\n> Hi Hackers,\n> \n> There is a comment like below in src/include/libpq/libpq.h,\n> \n>  /*\n>   * prototypes for functions in be-secure.c\n>   */\n> extern PGDLLIMPORT char *ssl_library;\n> extern PGDLLIMPORT char *ssl_cert_file;\n> \n> ...\n> \n> However, 'ssl_library', 'ssl_cert_file' and the rest are global \n> parameter settings, not functions. To address this confusion, it would \n> be better to move all global configuration settings to the proper \n> section, such as /* GUCs */, to maintain consistency.\n\nMaybe it's easier if we just replaced\n\n prototypes for functions in xxx.c\n\nwith\n\n declarations for xxx.c\n\nthroughout src/include/libpq/libpq.h.\n\n\n\n", "msg_date": "Fri, 3 May 2024 13:48:20 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wrong comment in libpq.h" }, { "msg_contents": "> On 3 May 2024, at 13:48, Peter Eisentraut <[email protected]> wrote:\n\n> Maybe it's easier if we just replaced\n> \n> prototypes for functions in xxx.c\n> \n> with\n> \n> declarations for xxx.c\n> \n> throughout src/include/libpq/libpq.h.\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 3 May 2024 13:53:28 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wrong comment in libpq.h" }, { "msg_contents": "On 2024-05-03 4:53 a.m., Daniel Gustafsson wrote:\n>> On 3 May 2024, at 13:48, Peter Eisentraut <[email protected]> wrote:\n>> Maybe it's easier if we just replaced\n>>\n>> prototypes for functions in xxx.c\n>>\n>> with\n>>\n>> declarations for xxx.c\n>>\n>> throughout src/include/libpq/libpq.h.\n> +1\n+1\n>\n> --\n> Daniel Gustafsson\n>\nDavid\n\n\n", "msg_date": "Fri, 3 May 2024 15:29:45 -0700", "msg_from": "David Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: wrong comment in libpq.h" }, { "msg_contents": "On 04.05.24 00:29, David Zhang wrote:\n> On 2024-05-03 4:53 a.m., Daniel Gustafsson wrote:\n>>> On 3 May 2024, at 13:48, Peter Eisentraut <[email protected]> wrote:\n>>> Maybe it's easier if we just replaced\n>>>\n>>>     prototypes for functions in xxx.c\n>>>\n>>> with\n>>>\n>>>     declarations for xxx.c\n>>>\n>>> throughout src/include/libpq/libpq.h.\n>> +1\n> +1\n\nIt looks like this wording \"prototypes for functions in\" is used many \ntimes in src/include/, in many cases equally inaccurately, so I would \nsuggest creating a more comprehensive patch for this.\n\n\n\n", "msg_date": "Mon, 6 May 2024 11:48:42 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wrong comment in libpq.h" }, { "msg_contents": "\n> It looks like this wording \"prototypes for functions in\" is used many \n> times in 
src/include/, in many cases equally inaccurately, so I would \n> suggest creating a more comprehensive patch for this.\n\nI noticed this \"prototypes for functions in\" in many header files and \nbriefly checked them. It kind of all make sense except the bufmgr.h has \nsomething like below.\n\n/* in buf_init.c */\nextern void InitBufferPool(void);\nextern Size BufferShmemSize(void);\n\n/* in localbuf.c */\nextern void AtProcExit_LocalBuffers(void);\n\n/* in freelist.c */\n\nwhich doesn't say \"prototypes for functions xxx\", but it still make \nsense for me.\n\n\nThe main confusion part is in libpq.h.\n\n/*\n  * prototypes for functions in be-secure.c\n  */\nextern PGDLLIMPORT char *ssl_library;\nextern PGDLLIMPORT char *ssl_cert_file;\nextern PGDLLIMPORT char *ssl_key_file;\nextern PGDLLIMPORT char *ssl_ca_file;\nextern PGDLLIMPORT char *ssl_crl_file;\nextern PGDLLIMPORT char *ssl_crl_dir;\nextern PGDLLIMPORT char *ssl_dh_params_file;\nextern PGDLLIMPORT char *ssl_passphrase_command;\nextern PGDLLIMPORT bool ssl_passphrase_command_supports_reload;\n#ifdef USE_SSL\nextern PGDLLIMPORT bool ssl_loaded_verify_locations;\n#endif\n\nIf we can delete the comment and move the variables declarations to /* \nGUCs */ section, then it should be more consistent.\n\n/* GUCs */\nextern PGDLLIMPORT char *SSLCipherSuites;\nextern PGDLLIMPORT char *SSLECDHCurve;\nextern PGDLLIMPORT bool SSLPreferServerCiphers;\nextern PGDLLIMPORT int ssl_min_protocol_version;\nextern PGDLLIMPORT int ssl_max_protocol_version;\n\nOne more argument for my previous patch is that with this minor change \nit can better align with the parameters in postgresql.conf.\n\n# - SSL -\n\n#ssl = off\n#ssl_ca_file = ''\n#ssl_cert_file = 'server.crt'\n#ssl_crl_file = ''\n#ssl_crl_dir = ''\n#ssl_key_file = 'server.key'\n#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'    # allowed SSL ciphers\n#ssl_prefer_server_ciphers = on\n#ssl_ecdh_curve = 'prime256v1'\n#ssl_min_protocol_version = 'TLSv1.2'\n#ssl_max_protocol_version = ''\n#ssl_dh_params_file = ''\n#ssl_passphrase_command = ''\n#ssl_passphrase_command_supports_reload = off\n\n\nbest regards,\n\nDavid\n\n\n\n\n", "msg_date": "Fri, 10 May 2024 11:14:51 -0700", "msg_from": "David Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: wrong comment in libpq.h" } ]
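Putting Peter's wording fix and David's regrouping together, the affected part of libpq.h could end up looking roughly like the sketch below; the attached patches are not quoted in the thread, so the exact grouping is an assumption:

    /*
     * declarations for be-secure.c
     */
    #ifdef USE_SSL
    extern PGDLLIMPORT bool ssl_loaded_verify_locations;   /* status, not a GUC */
    #endif

    /* GUCs */
    extern PGDLLIMPORT char *SSLCipherSuites;
    extern PGDLLIMPORT char *SSLECDHCurve;
    extern PGDLLIMPORT bool SSLPreferServerCiphers;
    extern PGDLLIMPORT int ssl_min_protocol_version;
    extern PGDLLIMPORT int ssl_max_protocol_version;
    extern PGDLLIMPORT char *ssl_library;
    extern PGDLLIMPORT char *ssl_cert_file;
    /* ... and the remaining ssl_* settings listed above ... */

That arrangement also lines up with the # - SSL - block of postgresql.conf that David quotes.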
[ { "msg_contents": "\nHi,\n\npg_ugprade from v15 to v16 failed in an environment. Often we get a\nreasonable message, but this time it was a bit weird. First, error\nmessage:\n\n=====================================================================================\npg_restore: creating TYPE \"foobar._packagestoptemp\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 2418; 1247 20529 TYPE _packagestoptemp developer\npg_restore: error: could not execute query: ERROR: type \"_packagestoptemp\" does not exist\nHINT: Create the type as a shell type, then create its I/O functions, then do a full CREATE TYPE.\nCommand was: \n-- For binary upgrade, must preserve pg_type oid\nSELECT pg_catalog.binary_upgrade_set_next_pg_type_oid('20529'::pg_catalog.oid);\n\nCREATE TYPE \"foobar\".\"_packagestoptemp\" (\n INTERNALLENGTH = variable,\n INPUT = \"array_in\",\n OUTPUT = \"array_out\",\n RECEIVE = \"array_recv\",\n SEND = \"array_send\",\n ANALYZE = \"array_typanalyze\",\n SUBSCRIPT = \"array_subscript_handler\",\n ELEMENT = ???,\n CATEGORY = 'A',\n ALIGNMENT = double,\n STORAGE = extended\n);\n=====================================================================================\n\nWe noticed that this data type was actually not used, so decided to drop it:\n\n===================\nDROP TYPE \"foobar\".\"_packagestoptemp\";\nERROR: cannot drop (null) because (null) requires it\nHINT: You can drop (null) instead.\n\n===================\n\nWhat do these \"null\" mean here? Any hints?,\n\nThanks!\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n", "msg_date": "Fri, 03 May 2024 00:36:33 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>", "msg_from_op": true, "msg_subject": "Weird \"null\" errors during DROP TYPE (pg_upgrade)" }, { "msg_contents": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]> writes:\n> pg_ugprade from v15 to v16 failed in an environment. Often we get a\n> reasonable message, but this time it was a bit weird. First, error\n> message:\n\nIt seems like the source database must have been in quite a corrupt\nstate already --- here we have the array type _packagestoptemp, but\nthere's apparently no underlying type packagestoptemp, because\nformat_type seems to have failed for it:\n\n> ELEMENT = ???,\n\nCan you show a way to get into this state without manual catalog\nhacking?\n\n> ===================\n> DROP TYPE \"foobar\".\"_packagestoptemp\";\n> ERROR: cannot drop (null) because (null) requires it\n> HINT: You can drop (null) instead.\n> ===================\n\n> What do these \"null\" mean here? Any hints?,\n\nProbably some catalog lookup function failed and returned NULL,\nand then sprintf decided to print \"(null)\" instead of dumping\ncore (cf 3779ac62d). This is more evidence in favor of catalog\ncorruption, but it doesn't tell us exactly what kind.\n\nMaybe reindexing pg_type would improve matters?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 May 2024 20:18:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird \"null\" errors during DROP TYPE (pg_upgrade)" } ]
[ { "msg_contents": "Similar to 'pg_dump --binary-upgrade' [0], we can speed up pg_dump with\nmany sequences by gathering the required information in a single query\ninstead of two queries per sequence. The attached patches are\nworks-in-progress, but here are the results I see on my machine for\n'pg_dump --schema-only --binary-upgrade' with a million sequences:\n\n HEAD : 6m22.809s\n [0] : 1m54.701s\n[0] + attached : 0m38.233s\n\nI'm not sure I have all the details correct in 0003, and we might want to\nseparate the table into two tables which are only populated when the\nrelevant section is dumped. Furthermore, the query in 0003 is a bit goofy\nbecause it needs to dance around a bug reported elsewhere [1].\n\n[0] https://postgr.es/m/20240418041712.GA3441570%40nathanxps13\n[1] https://postgr.es/m/20240501005730.GA594666%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 2 May 2024 21:51:40 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "improve performance of pg_dump with many sequences" }, { "msg_contents": "rebased\n\n-- \nnathan", "msg_date": "Tue, 9 Jul 2024 14:11:51 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "On Tue, Jul 9, 2024, at 4:11 PM, Nathan Bossart wrote:\n> rebased\n\nNice improvement. The numbers for a realistic scenario (10k sequences) are\n\nfor i in `seq 1 10000`; do echo \"CREATE SEQUENCE s$i;\"; done > /tmp/s.sql\n\nmaster:\nreal 0m1,141s\nuser 0m0,056s\nsys 0m0,147s\n\npatched:\nreal 0m0,410s\nuser 0m0,045s\nsys 0m0,103s\n\nYou are changing internal representation from char to int64. Is the main goal to\nvalidate catalog data? What if there is a new sequence data type whose\nrepresentation is not an integer?\n\nThis code path is adding zero byte to the last position of the fixed string. I\nsuggest that the zero byte is added to the position after the string length.\n\nAssert(strlen(PQgetvalue(res, 0, 0)) < sizeof(seqtype));\nstrncpy(seqtype, PQgetvalue(res, 0, 0), sizeof(seqtype));\nseqtype[sizeof(seqtype) - 1] = '\\0';\n\nSomething like\n\nl = strlen(PQgetvalue(res, 0, 0));\nAssert(l < sizeof(seqtype));\nstrncpy(seqtype, PQgetvalue(res, 0, 0), l);\nseqtype[l] = '\\0';\n\nAnother suggestion is to use a constant for seqtype\n\nchar seqtype[MAX_SEQNAME_LEN];\n\nand simplify the expression:\n\nsize_t seqtype_sz = sizeof(((SequenceItem *) 0)->seqtype);\n\nIf you are not planning to apply 0003, make sure you fix collectSequences() to\navoid versions less than 10. Move this part to 0002.\n\n@@ -17233,11 +17235,24 @@ collectSequences(Archive *fout) \n PGresult *res; \n const char *query; \n \n+ if (fout->remoteVersion < 100000) \n+ return; \n+ \n\nSince you apply a fix for pg_sequence_last_value function, you can simplify the\nquery in 0003. CASE is not required.\n\nI repeated the same test but not applying 0003.\n\npatched (0001 and 0002):\nreal 0m0,290s\nuser 0m0,038s\nsys 0m0,104s\n\nI'm not sure if 0003 is worth. Maybe if you have another table like you\nsuggested.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Tue, Jul 9, 2024, at 4:11 PM, Nathan Bossart wrote:rebasedNice improvement. 
The numbers for a realistic scenario (10k sequences) arefor i in `seq 1 10000`; do echo \"CREATE SEQUENCE s$i;\"; done > /tmp/s.sqlmaster:real\t0m1,141suser\t0m0,056ssys\t\t0m0,147spatched:real\t0m0,410suser\t0m0,045ssys\t\t0m0,103sYou are changing internal representation from char to int64. Is the main goal tovalidate catalog data? What if there is a new sequence data type whoserepresentation is not an integer?This code path is adding zero byte to the last position of the fixed string. Isuggest that the zero byte is added to the position after the string length.Assert(strlen(PQgetvalue(res, 0, 0)) < sizeof(seqtype));strncpy(seqtype, PQgetvalue(res, 0, 0), sizeof(seqtype));seqtype[sizeof(seqtype) - 1] = '\\0';Something likel = strlen(PQgetvalue(res, 0, 0));Assert(l < sizeof(seqtype));strncpy(seqtype, PQgetvalue(res, 0, 0), l);seqtype[l] = '\\0';Another suggestion is to use a constant for seqtypechar seqtype[MAX_SEQNAME_LEN];and simplify the expression:size_t      seqtype_sz = sizeof(((SequenceItem *) 0)->seqtype);If you are not planning to apply 0003, make sure you fix collectSequences() toavoid versions less than 10. Move this part to 0002.@@ -17233,11 +17235,24 @@ collectSequences(Archive *fout)                           PGresult   *res;                                                                const char *query;                                                                                                                                          +   if (fout->remoteVersion < 100000)                                           +       return;                                                                 + Since you apply a fix for pg_sequence_last_value function, you can simplify thequery in 0003. CASE is not required.I repeated the same test but not applying 0003.patched (0001 and 0002):real\t0m0,290suser\t0m0,038ssys\t\t0m0,104sI'm not sure if 0003 is worth. Maybe if you have another table like yousuggested.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Wed, 10 Jul 2024 17:08:56 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "On Wed, Jul 10, 2024 at 05:08:56PM -0300, Euler Taveira wrote:\n> Nice improvement. The numbers for a realistic scenario (10k sequences) are\n\nThanks for taking a look!\n\n> You are changing internal representation from char to int64. Is the main goal to\n> validate catalog data? What if there is a new sequence data type whose\n> representation is not an integer?\n\nIIRC 0001 was primarily intended to reduce the amount of memory needed for\nthe sorted table. Regarding a new sequence data type, I'm assuming we'll\nhave much bigger fish to fry if we do that (e.g., pg_sequence uses int8 for\nthe values), and I'd hope that adjusting this code wouldn't be too\ndifficult, anyway.\n\n> This code path is adding zero byte to the last position of the fixed string. I\n> suggest that the zero byte is added to the position after the string length.\n\nI'm not following why that would be a better approach. 
strncpy() will add\na NUL to the end of the string unless it doesn't fit in the buffer, in\nwhich case we'll add our own via \"seqtype[sizeof(seqtype) - 1] = '\\0'\".\nFurthermore, the compiler can determine the position where the NUL should\nbe placed, whereas placing it at the end of the copied string requires a\nruntime strlen().\n\n> l = strlen(PQgetvalue(res, 0, 0));\n> Assert(l < sizeof(seqtype));\n> strncpy(seqtype, PQgetvalue(res, 0, 0), l);\n> seqtype[l] = '\\0';\n\nI think the strncpy() should really be limited to the size of the seqtype\nbuffer. IMHO an Assert is not sufficient.\n\n> If you are not planning to apply 0003, make sure you fix collectSequences() to\n> avoid versions less than 10. Move this part to 0002.\n\nYeah, no need to create the table if we aren't going to use it.\n\n> Since you apply a fix for pg_sequence_last_value function, you can simplify the\n> query in 0003. CASE is not required.\n\nUnfortunately, I think we have to keep this workaround since older minor\nreleases of PostgreSQL don't have the fix.\n\n> patched (0001 and 0002):\n> real 0m0,290s\n> user 0m0,038s\n> sys 0m0,104s\n> \n> I'm not sure if 0003 is worth. Maybe if you have another table like you\n> suggested.\n\nWhat pg_dump command did you test here? Did you dump the sequence data, or\nwas this --schema-only?\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 10 Jul 2024 17:05:11 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "On Wed, Jul 10, 2024, at 7:05 PM, Nathan Bossart wrote:\n> I'm not following why that would be a better approach. strncpy() will add\n> a NUL to the end of the string unless it doesn't fit in the buffer, in\n> which case we'll add our own via \"seqtype[sizeof(seqtype) - 1] = '\\0'\".\n> Furthermore, the compiler can determine the position where the NUL should\n> be placed, whereas placing it at the end of the copied string requires a\n> runtime strlen().\n\nNevermind, you are copying the whole buffer (n = sizeof(seqtype)).\n\n> Unfortunately, I think we have to keep this workaround since older minor\n> releases of PostgreSQL don't have the fix.\n\nHmm. Right.\n\n> What pg_dump command did you test here? Did you dump the sequence data, or\n> was this --schema-only?\n\ntime pg_dump -f - -s -d postgres\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Jul 10, 2024, at 7:05 PM, Nathan Bossart wrote:I'm not following why that would be a better approach.  strncpy() will adda NUL to the end of the string unless it doesn't fit in the buffer, inwhich case we'll add our own via \"seqtype[sizeof(seqtype) - 1] = '\\0'\".Furthermore, the compiler can determine the position where the NUL shouldbe placed, whereas placing it at the end of the copied string requires aruntime strlen().Nevermind, you are copying the whole buffer (n = sizeof(seqtype)).Unfortunately, I think we have to keep this workaround since older minorreleases of PostgreSQL don't have the fix.Hmm. Right.What pg_dump command did you test here?  
Did you dump the sequence data, orwas this --schema-only?time pg_dump -f - -s -d postgres--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Wed, 10 Jul 2024 23:52:33 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "On Wed, Jul 10, 2024 at 11:52:33PM -0300, Euler Taveira wrote:\n> On Wed, Jul 10, 2024, at 7:05 PM, Nathan Bossart wrote:\n>> Unfortunately, I think we have to keep this workaround since older minor\n>> releases of PostgreSQL don't have the fix.\n> \n> Hmm. Right.\n\nOn second thought, maybe we should just limit this improvement to the minor\nreleases with the fix so that we _can_ get rid of the workaround. Or we\ncould use the hacky workaround only for versions with the bug.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 11 Jul 2024 21:09:17 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "On Thu, Jul 11, 2024 at 09:09:17PM -0500, Nathan Bossart wrote:\n> On second thought, maybe we should just limit this improvement to the minor\n> releases with the fix so that we _can_ get rid of the workaround. Or we\n> could use the hacky workaround only for versions with the bug.\n\nHere is a new version of the patch set. The main differences are 1) we no\nlonger gather the sequence data for schema-only dumps and 2) 0003 uses a\nsimplified query for dumps on v18 and newer. I considered also using a\nslightly simplified query for dumps on versions with the\nunlogged-sequences-on-standbys fix, but I felt that wasn't worth the extra\ncode.\n\nUnfortunately, I've also discovered a problem with 0003.\npg_sequence_last_value() returns NULL when is_called is false, in which\ncase we assume last_value == seqstart, which is, sadly, bogus due to\ncommands like ALTER SEQUENCE [RE]START WITH. AFAICT there isn't an easy\nway around this. We could either create a giant query that gathers the\ninformation from all sequences in the database, or we could introduce a new\nfunction in v18 that returns everything we need (which would only help for\nupgrades _from_ v18). Assuming I'm not missing a better option, I think\nthe latter is the better choice, and I still think it's worth doing even\nthough it probably won't help anyone for ~2.5 years.\n\n-- \nnathan", "msg_date": "Tue, 16 Jul 2024 16:36:15 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "On Tue, Jul 16, 2024 at 04:36:15PM -0500, Nathan Bossart wrote:\n> Unfortunately, I've also discovered a problem with 0003.\n> pg_sequence_last_value() returns NULL when is_called is false, in which\n> case we assume last_value == seqstart, which is, sadly, bogus due to\n> commands like ALTER SEQUENCE [RE]START WITH. AFAICT there isn't an easy\n> way around this. We could either create a giant query that gathers the\n> information from all sequences in the database, or we could introduce a new\n> function in v18 that returns everything we need (which would only help for\n> upgrades _from_ v18). Assuming I'm not missing a better option, I think\n> the latter is the better choice, and I still think it's worth doing even\n> though it probably won't help anyone for ~2.5 years.\n\nYeah, I have bumped on the same issue. 
In the long term, I also think\nthat we'd better have pg_sequence_last_value() return a row with\nis_called and the value scanned. As you say, it won't help except\nwhen upgrading from versions of Postgres that are at least to v18,\nassuming that this change gets in the tree, but that would be much\nbetter in the long term and time flies fast.\n\nSee 0001 as of this area:\nhttps://www.postgresql.org/message-id/ZnPIUPMmp5TzBPC2%40paquier.xyz\n--\nMichael", "msg_date": "Wed, 17 Jul 2024 11:30:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "On Wed, Jul 17, 2024 at 11:30:04AM +0900, Michael Paquier wrote:\n> Yeah, I have bumped on the same issue. In the long term, I also think\n> that we'd better have pg_sequence_last_value() return a row with\n> is_called and the value scanned. As you say, it won't help except\n> when upgrading from versions of Postgres that are at least to v18,\n> assuming that this change gets in the tree, but that would be much\n> better in the long term and time flies fast.\n\nAFAICT pg_sequence_last_value() is basically an undocumented internal\nfunction only really intended for use by the pg_sequences system view, so\nchanging the function like this for v18 might not be out of the question.\nOtherwise, I think we'd have to create a strikingly similar function with\nslightly different behavior, which would be a bizarre place to end up.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 16 Jul 2024 22:23:08 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "On Tue, Jul 16, 2024 at 10:23:08PM -0500, Nathan Bossart wrote:\n> On Wed, Jul 17, 2024 at 11:30:04AM +0900, Michael Paquier wrote:\n>> Yeah, I have bumped on the same issue. In the long term, I also think\n>> that we'd better have pg_sequence_last_value() return a row with\n>> is_called and the value scanned. As you say, it won't help except\n>> when upgrading from versions of Postgres that are at least to v18,\n>> assuming that this change gets in the tree, but that would be much\n>> better in the long term and time flies fast.\n> \n> AFAICT pg_sequence_last_value() is basically an undocumented internal\n> function only really intended for use by the pg_sequences system view, so\n> changing the function like this for v18 might not be out of the question.\n> Otherwise, I think we'd have to create a strikingly similar function with\n> slightly different behavior, which would be a bizarre place to end up.\n\nOn second thought, I worry that this change might needlessly complicate the\npg_sequences system view. Maybe we should just add a\npg_sequence_get_tuple() function that returns everything in\nFormData_pg_sequence_data for a given sequence OID...\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 17 Jul 2024 13:48:00 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On second thought, I worry that this change might needlessly complicate the\n> pg_sequences system view. Maybe we should just add a\n> pg_sequence_get_tuple() function that returns everything in\n> FormData_pg_sequence_data for a given sequence OID...\n\nUh ... 
why do we need a function, rather than just\n\nselect * from pg_sequence\n\n?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Jul 2024 14:59:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "On Wed, Jul 17, 2024 at 02:59:26PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> On second thought, I worry that this change might needlessly complicate the\n>> pg_sequences system view. Maybe we should just add a\n>> pg_sequence_get_tuple() function that returns everything in\n>> FormData_pg_sequence_data for a given sequence OID...\n> \n> Uh ... why do we need a function, rather than just\n> \n> select * from pg_sequence\n\nWe can use that for dumpSequence(), but dumpSequenceData() requires\ninformation from the sequence tuple itself. Right now, we query each\nsequence relation individually for that data, and I'm trying to find a way\nto cut down on those round trips.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 17 Jul 2024 14:05:03 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Wed, Jul 17, 2024 at 02:59:26PM -0400, Tom Lane wrote:\n>> Uh ... why do we need a function, rather than just\n>> select * from pg_sequence\n\n> We can use that for dumpSequence(), but dumpSequenceData() requires\n> information from the sequence tuple itself. Right now, we query each\n> sequence relation individually for that data, and I'm trying to find a way\n> to cut down on those round trips.\n\nAh, I confused FormData_pg_sequence_data with FormData_pg_sequence.\nSorry for the noise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Jul 2024 15:11:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "Here is an attempt at adding a new function that returns the sequence tuple\nand using that to avoid querying each sequence relation individually in\ndumpSequenceData().\n\nIf we instead wanted to change pg_sequence_last_value() to return both\nis_called and last_value, I think we could modify the pg_sequences system\nview to use a LATERAL subquery, i.e.,\n\n SELECT\n ...\n CASE\n WHEN L.is_called THEN L.last_value\n ELSE NULL\n END AS last_value\n FROM pg_sequence S\n ...\n JOIN LATERAL pg_sequence_last_value(S.seqrelid) L ON true\n ...\n\nThat doesn't seem so bad, and it'd avoid an extra pg_proc entry, but it\nwould probably break anything that calls pg_sequence_last_value() directly.\nThoughts?\n\n-- \nnathan", "msg_date": "Wed, 17 Jul 2024 22:45:32 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Here is an attempt at adding a new function that returns the sequence tuple\n> and using that to avoid querying each sequence relation individually in\n> dumpSequenceData().\n\nDidn't read the patch yet, but ...\n\n> If we instead wanted to change pg_sequence_last_value() to return both\n> is_called and last_value, I think we could modify the pg_sequences system\n> view to use a LATERAL subquery, i.e.,\n> ...\n> That doesn't seem so bad, and it'd avoid an extra pg_proc entry, but it\n> would probably break anything that 
calls pg_sequence_last_value() directly.\n> Thoughts?\n\n... one more pg_proc entry is pretty cheap. I think we should leave\npg_sequence_last_value alone. We don't know if anyone is depending\non it, and de-optimizing the pg_sequences view doesn't seem like a\nwin either.\n\n... okay, I lied, I looked at the patch. Why are you testing\n\n+\tif (pg_class_aclcheck(relid, GetUserId(), ACL_SELECT | ACL_USAGE) == ACLCHECK_OK &&\n\n? This is a substitute for a SELECT from the sequence and it seems\nlike it ought to demand exactly the same privilege as SELECT.\n(If you want to get more technical, USAGE allows nextval() which\ngives strictly less information than what this exposes; that's why\nwe're here after all.) So there is a difference in the privilege\nlevels, which is another reason for not combining this with\npg_sequence_last_value.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Jul 2024 23:58:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "On Wed, Jul 17, 2024 at 11:58:21PM -0400, Tom Lane wrote:\n> ... okay, I lied, I looked at the patch. Why are you testing\n> \n> +\tif (pg_class_aclcheck(relid, GetUserId(), ACL_SELECT | ACL_USAGE) == ACLCHECK_OK &&\n> \n> ? This is a substitute for a SELECT from the sequence and it seems\n> like it ought to demand exactly the same privilege as SELECT.\n> (If you want to get more technical, USAGE allows nextval() which\n> gives strictly less information than what this exposes; that's why\n> we're here after all.) So there is a difference in the privilege\n> levels, which is another reason for not combining this with\n> pg_sequence_last_value.\n\nOh, that's a good point. I wrongly assumed the privilege checks would be\nthe same as pg_sequence_last_value(). I fixed this in v5.\n\nI also polished the rest of the patches a bit. Among other things, I\ncreated an enum for the sequence data types to avoid the hacky strncpy()\nstuff, which was causing weird CI failures [0].\n\n[0] https://cirrus-ci.com/task/4614801962303488\n\n-- \nnathan", "msg_date": "Thu, 18 Jul 2024 13:22:10 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "I fixed a compiler warning on Windows in v6 of the patch set. Sorry for\nthe noise.\n\n-- \nnathan", "msg_date": "Thu, 18 Jul 2024 15:29:19 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "I ran Euler's tests again on the v6 patch set.\n\n\tfor i in `seq 1 10000`; do psql postgres -c \"CREATE SEQUENCE s$i;\"; done\n\ttime pg_dump -f - -s -d postgres > /dev/null\n\n\tHEAD: 0.607s\n\t0001 + 0002: 0.094s\n\tall patches: 0.094s\n\nBarring additional feedback, I am planning to commit these patches early\nnext week.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 24 Jul 2024 13:36:34 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" }, { "msg_contents": "Committed.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 31 Jul 2024 10:19:01 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve performance of pg_dump with many sequences" } ]
[ { "msg_contents": "I'm trying to figure out why BufFileSize() Asserts that file->fileset\nisn't NULL, per 1a990b207.\n\nThe discussion for that commit is in [1], but I don't see any\nexplanation of the Assert in the discussion or commit message and\nthere's no comment explaining why it's there.\n\nThe code that comes after the Assert does not look at the fileset\nfield. With the code as it is, it doesn't seem possible to get the\nfile size of a non-shared BufFile and I don't see any reason for that.\n\nShould the Assert be checking file->files != NULL?\n\nDavid\n\n[1] https://postgr.es/m/CAH2-Wzn0ZNLZs3DhCYdLMv4xn1fnM8ugVHPvWz67dSUh1s_%3D2Q%40mail.gmail.com\n\n\n", "msg_date": "Fri, 3 May 2024 16:03:45 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Incorrect Assert in BufFileSize()?" }, { "msg_contents": "On Fri, 3 May 2024 at 16:03, David Rowley <[email protected]> wrote:\n> I'm trying to figure out why BufFileSize() Asserts that file->fileset\n> isn't NULL, per 1a990b207.\n\nI was hoping to get some feedback here.\n\nHere's a patch to remove the Assert(). Changing it to\nAssert(file->files != NULL); doesn't do anything useful.\n\nDavid", "msg_date": "Tue, 14 May 2024 22:58:29 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect Assert in BufFileSize()?" }, { "msg_contents": "On Tue, May 14, 2024 at 6:58 AM David Rowley <[email protected]> wrote:\n> On Fri, 3 May 2024 at 16:03, David Rowley <[email protected]> wrote:\n> > I'm trying to figure out why BufFileSize() Asserts that file->fileset\n> > isn't NULL, per 1a990b207.\n>\n> I was hoping to get some feedback here.\n\nNotice that comments above BufFileSize() say \"Return the current\nfileset based BufFile size\". There are numerous identical assertions\nat the start of several other functions within the same file.\n\nI'm not sure what it would take for this function to support\nOpenTemporaryFile()-based BufFiles -- possibly not very much at all. I\nassume that that's what you're actually interested in doing here (you\ndidn't say). If it is, you'll need to update the function's contract\n-- just removing the assertion isn't enough.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 15 May 2024 15:19:39 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect Assert in BufFileSize()?" }, { "msg_contents": "On Thu, 16 May 2024 at 07:20, Peter Geoghegan <[email protected]> wrote:\n>\n> On Tue, May 14, 2024 at 6:58 AM David Rowley <[email protected]> wrote:\n> > On Fri, 3 May 2024 at 16:03, David Rowley <[email protected]> wrote:\n> > > I'm trying to figure out why BufFileSize() Asserts that file->fileset\n> > > isn't NULL, per 1a990b207.\n> >\n> > I was hoping to get some feedback here.\n>\n> Notice that comments above BufFileSize() say \"Return the current\n> fileset based BufFile size\". There are numerous identical assertions\n> at the start of several other functions within the same file.\n\nhmm, unfortunately the comment and existence of numerous other\nassertions does not answer my question. It just leads to more. 
The\nonly Assert I see that looks like it might be useful is\nBufFileExportFileSet() as fileset is looked at inside extendBufFile().\nIt kinda looks to me that it was left over fragments from the\ndevelopment of a patch when it was written some other way?\n\nLooking at the other similar Asserts in BufFileAppend(), I can't\nfigure out what those ones are for either.\n\n> I'm not sure what it would take for this function to support\n> OpenTemporaryFile()-based BufFiles -- possibly not very much at all. I\n> assume that that's what you're actually interested in doing here (you\n> didn't say). If it is, you'll need to update the function's contract\n> -- just removing the assertion isn't enough.\n\nI should have explained, I just wasn't quite done with what I was\nworking on when I sent the original email. In [1] I was working on\nadding additional output in EXPLAIN for the \"Material\" node to show\nthe memory or disk space used by the tuplestore. The use case there\nuses a BufFile with no fileset. Calling BufFileSize(state->myfile)\nfrom tuplestore.c results in an Assert failure.\n\nDavid\n\n[1] https://commitfest.postgresql.org/48/4968/\n\n\n", "msg_date": "Fri, 17 May 2024 19:19:20 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect Assert in BufFileSize()?" }, { "msg_contents": "On Fri, 17 May 2024 at 19:19, David Rowley <[email protected]> wrote:\n>\n> On Thu, 16 May 2024 at 07:20, Peter Geoghegan <[email protected]> wrote:\n> > Notice that comments above BufFileSize() say \"Return the current\n> > fileset based BufFile size\". There are numerous identical assertions\n> > at the start of several other functions within the same file.\n>\n> hmm, unfortunately the comment and existence of numerous other\n> assertions does not answer my question. It just leads to more. The\n> only Assert I see that looks like it might be useful is\n> BufFileExportFileSet() as fileset is looked at inside extendBufFile().\n> It kinda looks to me that it was left over fragments from the\n> development of a patch when it was written some other way?\n>\n> Looking at the other similar Asserts in BufFileAppend(), I can't\n> figure out what those ones are for either.\n\nI've attached an updated patch which updates the comments and also\nremoves the misplaced Asserts from BufFileAppend.\n\nIf there are no objections or additional feedback, I'll push this patch soon.\n\nDavid", "msg_date": "Wed, 3 Jul 2024 18:05:50 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect Assert in BufFileSize()?" }, { "msg_contents": "On Wed, 3 Jul 2024 at 08:06, David Rowley <[email protected]> wrote:\n>\n> On Fri, 17 May 2024 at 19:19, David Rowley <[email protected]> wrote:\n> >\n> > On Thu, 16 May 2024 at 07:20, Peter Geoghegan <[email protected]> wrote:\n> > > Notice that comments above BufFileSize() say \"Return the current\n> > > fileset based BufFile size\". There are numerous identical assertions\n> > > at the start of several other functions within the same file.\n> >\n> > hmm, unfortunately the comment and existence of numerous other\n> > assertions does not answer my question. It just leads to more. 
The\n> > only Assert I see that looks like it might be useful is\n> > BufFileExportFileSet() as fileset is looked at inside extendBufFile().\n> > It kinda looks to me that it was left over fragments from the\n> > development of a patch when it was written some other way?\n> >\n> > Looking at the other similar Asserts in BufFileAppend(), I can't\n> > figure out what those ones are for either.\n>\n> I've attached an updated patch which updates the comments and also\n> removes the misplaced Asserts from BufFileAppend.\n>\n> If there are no objections or additional feedback, I'll push this patch soon.\n\nFinding this thread after reviewing [0], this does explain why the\nAssert was changed in that patch.\n\nLGTM.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://www.postgresql.org/message-id/flat/CAApHDvp5Py9g4Rjq7_inL3-MCK1Co2CRt_YWFwTU2zfQix0p4A%40mail.gmail.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 10:50:21 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect Assert in BufFileSize()?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I've attached an updated patch which updates the comments and also\n> removes the misplaced Asserts from BufFileAppend.\n> If there are no objections or additional feedback, I'll push this patch soon.\n\n- * Return the current fileset based BufFile size.\n+ * Returns the size if the given BufFile in bytes.\n\n\"Returns the size of\", no doubt?\n\nA shade less nit-pickily, I wonder if \"size\" is sufficient.\nIt's not really obvious that this means the amount of data\nin the file, rather than say sizeof(BufFile). How about\n\n+ * Returns the amount of data in the given BufFile, in bytes.\n\nLGTM other than that point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2024 12:19:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect Assert in BufFileSize()?" }, { "msg_contents": "On Thu, 4 Jul 2024 at 04:19, Tom Lane <[email protected]> wrote:\n> - * Return the current fileset based BufFile size.\n> + * Returns the size if the given BufFile in bytes.\n>\n> \"Returns the size of\", no doubt?\n\nYes, thanks.\n\n> A shade less nit-pickily, I wonder if \"size\" is sufficient.\n> It's not really obvious that this means the amount of data\n> in the file, rather than say sizeof(BufFile). How about\n>\n> + * Returns the amount of data in the given BufFile, in bytes.\n\nI've used that. Thank you.\n\n> LGTM other than that point.\n\nPushed.\n\nDavid\n\n\n", "msg_date": "Thu, 4 Jul 2024 09:46:24 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect Assert in BufFileSize()?" } ]
[ { "msg_contents": "Hi\n\nI found https://github.com/vnmakarov/mir?tab=readme-ov-file\n\nIt should be significantly faster than llvm (compilation).\n\nRegards\n\nPavel\n\nHiI found https://github.com/vnmakarov/mir?tab=readme-ov-fileIt should be significantly faster than llvm (compilation).RegardsPavel", "msg_date": "Fri, 3 May 2024 10:07:45 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "different engine for JIT" }, { "msg_contents": "On 5/3/24 10:07 AM, Pavel Stehule wrote:\n> Hi\n> \n> I found https://github.com/vnmakarov/mir?tab=readme-ov-file \n> <https://github.com/vnmakarov/mir?tab=readme-ov-file>\n> \n> It should be significantly faster than llvm (compilation).\n> \n> Regards\n> \n> Pavel\n\nHello,\n\nI can't tell about JIT, it is too complicated for me ^^\n\nFwiw, Pierre Ducroquet wrote an article about another (faster) \nimplementation:\nhttps://www.pinaraf.info/2024/03/look-ma-i-wrote-a-new-jit-compiler-for-postgresql/\n\nXing Guo replied to his article by mentioning another two prototypes :\n\n– https://github.com/higuoxing/pg_slowjit\n– https://github.com/higuoxing/pg_asmjit\n\nIt is good to see interest in this area. Current implementation is quite \nexpensive and costing model risky. If often recommend disabling it and \nenabling it when it is really needed. (It is disabled by default on AWS).\n\nRegards,\n\n-- \nAdrien NAYRAT\n\n\n\n\n", "msg_date": "Fri, 3 May 2024 11:21:19 +0200", "msg_from": "Adrien Nayrat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: different engine for JIT" } ]
[ { "msg_contents": "(40af10b57 did this for tuplesort.c, this is the same, but for tuplestore.c)\n\nI was looking at the tuplestore.c code a few days ago and noticed that\nit allocates tuples in the memory context that tuplestore_begin_heap()\nis called in, which for nodeMaterial.c, is ExecutorState.\n\nI didn't think this was great because:\n1. Allocating many chunks in ExecutorState can bloat the context with\nmany blocks worth of free'd chunks, stored on freelists that might\nnever be reused for anything.\n2. To clean up the memory, pfree must be meticulously called on each\nallocated tuple\n3. ExecutorState is an aset.c context which isn't the most efficient\nallocator for this purpose.\n\nI've attached 2 patches:\n\n0001: Adds memory tracking to Materialize nodes, which looks like:\n\n -> Materialize (actual time=0.033..9.157 rows=10000 loops=2)\n Storage: Memory Maximum Storage: 10441kB\n\n0002: Creates a Generation MemoryContext for storing tuples in tuplestore.\n\nUsing generation has the following advantages:\n\n1. It does not round allocations up to the next power of 2. Using\ngeneration will save an average of 25% memory for tuplestores or allow\nan average of 25% more tuples before going to disk.\n2. Allocation patterns in tuplestore.c are FIFO, which is exactly what\ngeneration was designed to handle best.\n3. Generation is faster to palloc/pfree than aset. (See [1]. Compare\nthe 4-bit times between aset_palloc_pfree.png and\ngeneration_palloc_pfree.png)\n4. tuplestore_clear() and tuplestore_end() can reset or delete the\ntuple context instead of pfreeing every tuple one by one.\n5. Higher likelihood of neighbouring tuples being stored consecutively\nin memory, resulting in better CPU memory prefetching.\n6. Generation has a page-level freelist, so is able to reuse pages\ninstead of freeing and mallocing another if tuplestore_trim() is used\nto continually remove no longer needed tuples. aset.c can only\nefficiently do this if the tuples are all in the same size class.\n\nThe attached bench.sh.txt tests the performance of this change and\nresult_chart.png shows the results I got when running on an AMD 3990x\nmaster @ 8f0a97dff vs patched.\nThe script runs benchmarks for various tuple counts stored in the\ntuplestore -- 1 to 8192 in power-2 increments.\n\nThe script does output the memory consumed by the tuplestore for each\nquery. Here are the results for the 8192 tuple test:\n\nmaster @ 8f0a97dff\nStorage: Memory Maximum Storage: 16577kB\n\npatched:\nStorage: Memory Maximum Storage: 8577kB\n\nWhich is roughly half, but I did pad the tuple to just over 1024\nbytes, so the alloc set allocations would have rounded up to 2048\nbytes.\n\nSome things I've *not* done:\n\n1. Gone over other executor nodes which use tuplestore to add the same\nadditional EXPLAIN output. CTE Scan, Recursive Union, Window Agg\ncould get similar treatment.\n2. Given much consideration for the block sizes to use for\nGenerationContextCreate(). (Maybe using ALLOCSET_SMALL_INITSIZE for\nthe start size is a good idea.)\n3. 
A great deal of testing.\n\nI'll park this here until we branch for v18.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvqUWhOMkUjYXzq95idAwpiPdJLCxxRbf8kV6PYcW5y=Cg@mail.gmail.com", "msg_date": "Sat, 4 May 2024 01:55:22 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Use generation memory context for tuplestore.c" }, { "msg_contents": "On Fri, 3 May 2024 at 15:55, David Rowley <[email protected]> wrote:\n>\n> (40af10b57 did this for tuplesort.c, this is the same, but for tuplestore.c)\n>\n> I was looking at the tuplestore.c code a few days ago and noticed that\n> it allocates tuples in the memory context that tuplestore_begin_heap()\n> is called in, which for nodeMaterial.c, is ExecutorState.\n>\n> I didn't think this was great because:\n> 1. Allocating many chunks in ExecutorState can bloat the context with\n> many blocks worth of free'd chunks, stored on freelists that might\n> never be reused for anything.\n> 2. To clean up the memory, pfree must be meticulously called on each\n> allocated tuple\n> 3. ExecutorState is an aset.c context which isn't the most efficient\n> allocator for this purpose.\n\nAgreed on all counts.\n\n> I've attached 2 patches:\n>\n> 0001: Adds memory tracking to Materialize nodes, which looks like:\n>\n> -> Materialize (actual time=0.033..9.157 rows=10000 loops=2)\n> Storage: Memory Maximum Storage: 10441kB\n>\n> 0002: Creates a Generation MemoryContext for storing tuples in tuplestore.\n>\n> Using generation has the following advantages:\n\n[...]\n> 6. Generation has a page-level freelist, so is able to reuse pages\n> instead of freeing and mallocing another if tuplestore_trim() is used\n> to continually remove no longer needed tuples. aset.c can only\n> efficiently do this if the tuples are all in the same size class.\n\nWas a bump context considered? If so, why didn't it make the cut?\nIf tuplestore_trim is the only reason why the type of context in patch\n2 is a generation context, then couldn't we make the type of context\nconditional on state->eflags & EXEC_FLAG_REWIND, and use a bump\ncontext if we require rewind capabilities (i.e. where _trim is never\neffectively executed)?\n\n> master @ 8f0a97dff\n> Storage: Memory Maximum Storage: 16577kB\n>\n> patched:\n> Storage: Memory Maximum Storage: 8577kB\n\nThose are some impressive numbers.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 3 May 2024 17:51:29 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "On Sat, 4 May 2024 at 03:51, Matthias van de Meent\n<[email protected]> wrote:\n> Was a bump context considered? If so, why didn't it make the cut?\n> If tuplestore_trim is the only reason why the type of context in patch\n> 2 is a generation context, then couldn't we make the type of context\n> conditional on state->eflags & EXEC_FLAG_REWIND, and use a bump\n> context if we require rewind capabilities (i.e. 
where _trim is never\n> effectively executed)?\n\nI didn't really want to raise all this here, but to answer why I\ndidn't use bump...\n\nThere's a bit more that would need to be done to allow bump to work in\nuse-cases where no trim support is needed. Namely, if you look at\nwritetup_heap(), you'll see a heap_free_minimal_tuple(), which is\npfreeing the memory that was allocated for the tuple in either\ntuplestore_puttupleslot(), tuplestore_puttuple() or\ntuplestore_putvalues(). So basically, what happens if we're still\nloading the tuplestore and we've spilled to disk, once the\ntuplestore_put* function is called, we allocate memory for the tuple\nthat might get stored in RAM (we don't know yet), but then call\ntuplestore_puttuple_common() which decides if the tuple goes to RAM or\ndisk, then because we're spilling to disk, the write function pfree's\nthe memory we allocate in the tuplestore_put function after the tuple\nis safely written to the disk buffer.\n\nThis is a fairly inefficient design. While, we do need to still form\na tuple and store it somewhere for tuplestore_putvalues(), we don't\nneed to do that for a heap tuple. I think it should be possible to\nwrite directly from the table's page.\n\nOverall tuplestore.c seems quite confused as to how it's meant to\nwork. You have tuplestore_begin_heap() function, which is the *only*\nexternal function to create a tuplestore. We then pretend we're\nagnostic about how we store tuples that won't fit in memory by\nproviding callbacks for read/write/copy, but we only have 1 set of\nfunctions for those and instead of having some other begin method we\nuse when not dealing with heap tuples, we use some other\ntuplestore_put* function.\n\nIt seems like another pass is required to fix all this and I think\nthat should be:\n\n1. Get rid of the function pointers and just hardcode which static\nfunctions we call to read/write/copy.\n2. Rename tuplestore_begin_heap() to tuplestore_begin().\n3. See if we can rearrange the code so that the copying to the tuple\ncontext is only done when we are in TSS_INMEM. I'm not sure what that\nwould look like, but it's required before we could use bump so we\ndon't pfree a bump allocated chunk.\n\nOr maybe there's a way to fix this by adding other begin functions and\nmaking it work more like tuplesort. I've not really looked into that.\n\nI'd rather tackle these problems independently and I believe there are\nmuch bigger wins to moving from aset to generation than generation to\nbump, so that's where I've started.\n\nThanks for having a look at the patch.\n\nDavid\n\n\n", "msg_date": "Sat, 4 May 2024 14:01:48 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "> On Sat, May 04, 2024 at 01:55:22AM +1200, David Rowley wrote:\n> (40af10b57 did this for tuplesort.c, this is the same, but for tuplestore.c)\n\nAn interesting idea, thanks. I was able to reproduce the results of your\nbenchmark and get similar conclusions from the results.\n\n> Using generation has the following advantages:\n>\n> [...]\n>\n> 2. Allocation patterns in tuplestore.c are FIFO, which is exactly what\n> generation was designed to handle best.\n\nDo I understand correctly, that the efficiency of generation memory\ncontext could be measured directly via counting number of malloc/free\ncalls? 
In those experiments I've conducted the numbers were indeed\nvisibly lower for the patched version (~30%), but I would like to\nconfirm my interpretation of this difference.\n\n> 5. Higher likelihood of neighbouring tuples being stored consecutively\n> in memory, resulting in better CPU memory prefetching.\n\nI guess this roughly translates into better CPU cache utilization.\nMeasuring cache hit ratio for unmodified vs patched versions in my case\nindeed shows about 10% less cache misses.\n\n> The attached bench.sh.txt tests the performance of this change and\n> result_chart.png shows the results I got when running on an AMD 3990x\n> master @ 8f0a97dff vs patched.\n\nThe query you use in the benchmark, is it something like a \"best-case\"\nscenario for generation memory context? I was experimenting with\ndifferent size of the b column, lower values seems to produce less\ndifference between generation and aset (although still generation\ncontext is distinctly faster regarding final query latencies, see the\nattached PDF estimation, ran for 8192 rows). I'm curious what could be a\n\"worst-case\" workload type for this patch?\n\nI've also noticed the first patch disables materialization in some tests.\n\n --- a/src/test/regress/sql/partition_prune.sql\n +++ b/src/test/regress/sql/partition_prune.sql\n\n +set enable_material = 0;\n +\n -- UPDATE on a partition subtree has been seen to have problems.\n insert into ab values (1,2);\n explain (analyze, costs off, summary off, timing off)\n\nIs it due to the volatility of Maximum Storage values? If yes, could it\nbe covered with something similar to COSTS OFF instead?", "msg_date": "Fri, 10 May 2024 18:34:36 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "On Sat, 4 May 2024 at 04:02, David Rowley <[email protected]> wrote:\n>\n> On Sat, 4 May 2024 at 03:51, Matthias van de Meent\n> <[email protected]> wrote:\n> > Was a bump context considered? If so, why didn't it make the cut?\n> > If tuplestore_trim is the only reason why the type of context in patch\n> > 2 is a generation context, then couldn't we make the type of context\n> > conditional on state->eflags & EXEC_FLAG_REWIND, and use a bump\n> > context if we require rewind capabilities (i.e. where _trim is never\n> > effectively executed)?\n>\n> I didn't really want to raise all this here, but to answer why I\n> didn't use bump...\n>\n> There's a bit more that would need to be done to allow bump to work in\n> use-cases where no trim support is needed. Namely, if you look at\n> writetup_heap(), you'll see a heap_free_minimal_tuple(), which is\n> pfreeing the memory that was allocated for the tuple in either\n> tuplestore_puttupleslot(), tuplestore_puttuple() or\n> tuplestore_putvalues(). So basically, what happens if we're still\n> loading the tuplestore and we've spilled to disk, once the\n> tuplestore_put* function is called, we allocate memory for the tuple\n> that might get stored in RAM (we don't know yet), but then call\n> tuplestore_puttuple_common() which decides if the tuple goes to RAM or\n> disk, then because we're spilling to disk, the write function pfree's\n> the memory we allocate in the tuplestore_put function after the tuple\n> is safely written to the disk buffer.\n\nThanks, that's exactly the non-obvious issue I was looking for, but\ncouldn't find immediately.\n\n> This is a fairly inefficient design. 
While, we do need to still form\n> a tuple and store it somewhere for tuplestore_putvalues(), we don't\n> need to do that for a heap tuple. I think it should be possible to\n> write directly from the table's page.\n\n[...]\n\n> I'd rather tackle these problems independently and I believe there are\n> much bigger wins to moving from aset to generation than generation to\n> bump, so that's where I've started.\n\nThat's entirely reasonable, and I wouldn't ask otherwise.\n\nThanks,\n\nMatthias van de Meent\n\n\n", "msg_date": "Fri, 10 May 2024 18:36:14 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "Thanks for having a look at this.\n\nOn Sat, 11 May 2024 at 04:34, Dmitry Dolgov <[email protected]> wrote:\n> Do I understand correctly, that the efficiency of generation memory\n> context could be measured directly via counting number of malloc/free\n> calls? In those experiments I've conducted the numbers were indeed\n> visibly lower for the patched version (~30%), but I would like to\n> confirm my interpretation of this difference.\n\nFor the test case I had in the script, I imagine the reduction in\nmalloc/free is just because of the elimination of the power-of-2\nroundup. If the test case had resulted in tuplestore_trim() being\nused, e.g if Material was used to allow mark and restore for a Merge\nJoin or WindowAgg, then the tuplestore_trim() would result in\npfree()ing of tuples and further reduce malloc of new blocks. This is\ntrue because AllocSet records pfree'd non-large chunks in a freelist\nelement and makes no attempt to free() blocks.\n\nIn the tuplestore_trim() scenario with the patched version, the\npfree'ing of chunks is in a first-in-first-out order which allows an\nentire block to become empty of palloc'd chunks which allows it to\nbecome the freeblock and be reused rather than malloc'ing another\nblock when there's not enough space on the current block. If the\nscenario is that tuplestore_trim() always manages to keep the number\nof tuples down to something that can fit on one GenerationBlock, then\nwe'll only malloc() 2 blocks and the generation code will just\nalternate between the two, reusing the empty one when it needs a new\nblock rather than calling malloc.\n\n> > 5. Higher likelihood of neighbouring tuples being stored consecutively\n> > in memory, resulting in better CPU memory prefetching.\n>\n> I guess this roughly translates into better CPU cache utilization.\n> Measuring cache hit ratio for unmodified vs patched versions in my case\n> indeed shows about 10% less cache misses.\n\nThanks for testing that. In simple test cases that's likely to come\nfrom removing the power-of-2 round-up behaviour of AllocSet allowing\nthe allocations to be more dense. In more complex cases when\nallocations are making occasional use of chunks from AllowSet's\nfreelist[], the access pattern will be more predictable and allow the\nCPU to prefetch memory more efficiently, resulting in a better CPU\ncache hit ratio.\n\n> The query you use in the benchmark, is it something like a \"best-case\"\n> scenario for generation memory context? I was experimenting with\n> different size of the b column, lower values seems to produce less\n> difference between generation and aset (although still generation\n> context is distinctly faster regarding final query latencies, see the\n> attached PDF estimation, ran for 8192 rows). 
I'm curious what could be a\n> \"worst-case\" workload type for this patch?\n\nIt's not purposefully \"best-case\". Likely there'd be more improvement\nif I'd scanned the Material node more than twice. However, the tuple\nsize I picked is close to the best case as it's just over 1024 bytes.\nGeneration will just round the size up to the next 8-byte alignment\nboundary, whereas AllocSet will round that up to 2048 bytes.\n\nA case where the patched version would be even better would be where\nthe unpatched version spilled to disk but the patched version did not.\nI imagine there's room for hundreds of percent speedup for that case.\n\n> I've also noticed the first patch disables materialization in some tests.\n>\n> --- a/src/test/regress/sql/partition_prune.sql\n> +++ b/src/test/regress/sql/partition_prune.sql\n>\n> +set enable_material = 0;\n> +\n> -- UPDATE on a partition subtree has been seen to have problems.\n> insert into ab values (1,2);\n> explain (analyze, costs off, summary off, timing off)\n>\n> Is it due to the volatility of Maximum Storage values? If yes, could it\n> be covered with something similar to COSTS OFF instead?\n\nI don't think not showing this with COSTS OFF is very consistent with\nSort and Memoize's memory stats output. I opted to disable the\nMaterial node for that plan as it seemed like the easiest way to make\nthe output stable. There are other ways that could be done. See\nexplain_memoize() in memoize.sql. I thought about doing that, but it\nseemed like overkill when there was a much more simple way to get what\nI wanted. I'd certainly live to regret that if disabling Material put\nthe Nested Loop costs dangerously close to the costs of some other\njoin method and the plan became unstable on the buildfarm.\n\nDavid\n\n\n", "msg_date": "Sat, 11 May 2024 09:56:26 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "On Sat, 4 May 2024 at 03:51, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Fri, 3 May 2024 at 15:55, David Rowley <[email protected]> wrote:\n> > master @ 8f0a97dff\n> > Storage: Memory Maximum Storage: 16577kB\n> >\n> > patched:\n> > Storage: Memory Maximum Storage: 8577kB\n>\n> Those are some impressive numbers.\n\nThis patch needed to be rebased, so updated patches are attached.\n\nI was also reflecting on what Bruce wrote in [1] about having to parse\nperformance numbers from the commit messages, so I decided to adjust\nthe placeholder commit message I'd written to make performance numbers\nmore clear to Bruce, or whoever does the next major version release\nnotes. That caused me to experiment with finding the best case for\nthis patch. I could scale the improvement much further than I have,\nbut here's something I came up with that's easy to reproduce.\n\ncreate table winagg (a int, b text);\ninsert into winagg select a,repeat('a', 1024) from generate_series(1,10000000)a;\nset work_mem = '1MB';\nset jit=0;\nexplain (analyze, timing off) select sum(l1),sum(l2) from (select\nlength(b) l1,length(lag(b, 800) over ()) as l2 from winagg limit\n1600);\n\nmaster:\nExecution Time: 6585.685 ms\n\npatched:\nExecution Time: 4.159 ms\n\n1583x faster.\n\nI've effectively just exploited the spool_tuples() behaviour of what\nit does when the tuplestore goes to disk to have it spool the entire\nremainder of the partition, which is 10 million rows. I'm just taking\na tiny portion of those with the LIMIT 1600. 
I just set work_mem to\nsomething that the patched version won't have the tuplestore spill to\ndisk so that spool_tuples() only spools what's needed in the patched\nversion. So, artificial is a word you could use, but certainly,\nsomeone could find this performance cliff in the wild and be prevented\nfrom falling off it by this patch.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/Zk5r2XyI0BhXLF8h%40momjian.us", "msg_date": "Fri, 31 May 2024 15:25:58 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "On Fri, 31 May 2024 at 05:26, David Rowley <[email protected]> wrote:\n>\n> On Sat, 4 May 2024 at 03:51, Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > On Fri, 3 May 2024 at 15:55, David Rowley <[email protected]> wrote:\n> > > master @ 8f0a97dff\n> > > Storage: Memory Maximum Storage: 16577kB\n> > >\n> > > patched:\n> > > Storage: Memory Maximum Storage: 8577kB\n> >\n> > Those are some impressive numbers.\n>\n> This patch needed to be rebased, so updated patches are attached.\n\nHere's a review for the V2 patch:\n\nI noticed this change to buffile.c shows up in both v1 and v2 of the\npatchset, but I can't trace the reasoning why it changed with this\npatch (rather than a separate bugfix):\n\n> +++ b/src/backend/storage/file/buffile.c\n> @@ -867,7 +867,7 @@ BufFileSize(BufFile *file)\n> {\n> int64 lastFileSize;\n>\n> - Assert(file->fileset != NULL);\n> + Assert(file->files != NULL);\n\n> +++ b/src/backend/commands/explain.c\n> [...]\n> +show_material_info(MaterialState *mstate, ExplainState *es)\n> + spaceUsedKB = (tuplestore_space_used(tupstore) + 1023) / 1024;\n\nI think this should use the BYTES_TO_KILOBYTES macro for consistency.\n\nLastly, I think this would benefit from a test in\nregress/sql/explain.sql, as the test changes that were included\nremoved the only occurrance of a Materialize node from the regression\ntests' EXPLAIN outputs.\n\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 2 Jul 2024 14:20:00 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "Thank you for having another look at this.\n\nOn Wed, 3 Jul 2024 at 00:20, Matthias van de Meent\n<[email protected]> wrote:\n> I noticed this change to buffile.c shows up in both v1 and v2 of the\n> patchset, but I can't trace the reasoning why it changed with this\n> patch (rather than a separate bugfix):\n\n I didn't explain that very well here, did I. I took up the discussion\nabout that in [1]. (Which you've now seen)\n\n> > +show_material_info(MaterialState *mstate, ExplainState *es)\n> > + spaceUsedKB = (tuplestore_space_used(tupstore) + 1023) / 1024;\n>\n> I think this should use the BYTES_TO_KILOBYTES macro for consistency.\n\nYes, that needs to be done. Thank you.\n\n> Lastly, I think this would benefit from a test in\n> regress/sql/explain.sql, as the test changes that were included\n> removed the only occurrance of a Materialize node from the regression\n> tests' EXPLAIN outputs.\n\nI've modified the tests where the previous patch disabled\nenable_material to enable it again and mask out the possibly unstable\npart. 
Do you think that's an ok level of testing?\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvofgZT0VzydhyGH5MMb-XZzNDqqAbzf1eBZV5HDm3%2BosQ%40mail.gmail.com", "msg_date": "Wed, 3 Jul 2024 22:41:56 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "Hello David,\n\n03.07.2024 13:41, David Rowley wrote:\n>\n>> Lastly, I think this would benefit from a test in\n>> regress/sql/explain.sql, as the test changes that were included\n>> removed the only occurrance of a Materialize node from the regression\n>> tests' EXPLAIN outputs.\n> I've modified the tests where the previous patch disabled\n> enable_material to enable it again and mask out the possibly unstable\n> part. Do you think that's an ok level of testing?\n\nPlease look at a segfault crash introduced with 1eff8279d:\nCREATE TABLE t1(i int);\nCREATE TABLE t2(i int) PARTITION BY RANGE (i);\nCREATE TABLE t2p PARTITION OF t2 FOR VALUES FROM (1) TO (2);\n\nEXPLAIN ANALYZE SELECT * FROM t1 JOIN t2 ON t1.i > t2.i;\n\nLeads to:\nCore was generated by `postgres: law regression [local] EXPLAIN                                      '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n\n#0  0x000055c36dbac2f7 in tuplestore_storage_type_name (state=0x0) at tuplestore.c:1476\n1476            if (state->status == TSS_INMEM)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 5 Jul 2024 07:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "On Fri, 5 Jul 2024 at 16:00, Alexander Lakhin <[email protected]> wrote:\n> Please look at a segfault crash introduced with 1eff8279d:\n> CREATE TABLE t1(i int);\n> CREATE TABLE t2(i int) PARTITION BY RANGE (i);\n> CREATE TABLE t2p PARTITION OF t2 FOR VALUES FROM (1) TO (2);\n>\n> EXPLAIN ANALYZE SELECT * FROM t1 JOIN t2 ON t1.i > t2.i;\n>\n> Leads to:\n> Core was generated by `postgres: law regression [local] EXPLAIN '.\n> Program terminated with signal SIGSEGV, Segmentation fault.\n>\n> #0 0x000055c36dbac2f7 in tuplestore_storage_type_name (state=0x0) at tuplestore.c:1476\n> 1476 if (state->status == TSS_INMEM)\n\nThanks for the report. I've just pushed a fix in 53abb1e0e.\n\nDavid\n\n\n", "msg_date": "Fri, 5 Jul 2024 16:57:26 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "On Wed, 3 Jul 2024 at 22:41, David Rowley <[email protected]> wrote:\n>\n> On Wed, 3 Jul 2024 at 00:20, Matthias van de Meent\n> > Lastly, I think this would benefit from a test in\n> > regress/sql/explain.sql, as the test changes that were included\n> > removed the only occurrance of a Materialize node from the regression\n> > tests' EXPLAIN outputs.\n>\n> I've modified the tests where the previous patch disabled\n> enable_material to enable it again and mask out the possibly unstable\n> part. Do you think that's an ok level of testing?\n\nI spent quite a bit more time today looking at these patches. 
Mostly\nthat amounted to polishing comments more.\n\nI've pushed them both now.\n\nThank you both of you for reviewing these.\n\nDavid\n\n\n", "msg_date": "Fri, 5 Jul 2024 17:54:38 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "05.07.2024 07:57, David Rowley wrote:\n> Thanks for the report. I've just pushed a fix in 53abb1e0e.\n\nThank you, David!\n\nPlease look at another anomaly introduced with 590b045c3:\necho \"\nCREATE TABLE t(f int, t int);\nINSERT INTO t VALUES (1, 1);\n\nWITH RECURSIVE sg(f, t) AS (\nSELECT * FROM t t1\nUNION ALL\nSELECT t2.* FROM t t2, sg WHERE t2.f = sg.t\n) SEARCH DEPTH FIRST BY f, t SET seq\nSELECT * FROM sg;\n\" | timeout 60 psql\n\ntriggers\nTRAP: failed Assert(\"chunk->requested_size < oldsize\"), File: \"generation.c\", Line: 842, PID: 830294\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 5 Jul 2024 17:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "On Sat, 6 Jul 2024 at 02:00, Alexander Lakhin <[email protected]> wrote:\n> CREATE TABLE t(f int, t int);\n> INSERT INTO t VALUES (1, 1);\n>\n> WITH RECURSIVE sg(f, t) AS (\n> SELECT * FROM t t1\n> UNION ALL\n> SELECT t2.* FROM t t2, sg WHERE t2.f = sg.t\n> ) SEARCH DEPTH FIRST BY f, t SET seq\n> SELECT * FROM sg;\n> \" | timeout 60 psql\n>\n> triggers\n> TRAP: failed Assert(\"chunk->requested_size < oldsize\"), File: \"generation.c\", Line: 842, PID: 830294\n\nThis seems to be a bug in GenerationRealloc().\n\nWhat's happening is when we palloc(4) for the files array in\nmakeBufFile(), that palloc uses GenerationAlloc() and since we have\nMEMORY_CONTEXT_CHECKING, the code does:\n\n/* ensure there's always space for the sentinel byte */\nchunk_size = MAXALIGN(size + 1);\n\nresulting in chunk_size == 8. When extendBufFile() effectively does\nthe repalloc(file->files, 8), we call GenerationRealloc() with those 8\nbytes and go into the \"if (oldsize >= size)\" path thinking we have\nenough space already. Here both values are 8, which would be fine on\nnon-MEMORY_CONTEXT_CHECKING builds, but there's no space for the\nsentinel byte here. set_sentinel(pointer, size) stomps on some memory,\nbut no crash from that. It's only a problem when extendBufFile() asks\nfor 12 bytes that we come back into GenerationRealloc() and trigger\nthe Assert(chunk->requested_size < oldsize). Both of these values are\n8. The oldsize should have been large enough to store the sentinel\nbyte, it isn't due to the problem caused during GenerationRealloc with\n8 bytes.\n\nI also had not intended that the buffile.c stuff would use the\ngeneration context. I'll need to fix that too, but I think I'll fix\nthe GenerationRealloc() first.\n\nThe attached fixes the issue. I'll stare at it a bit more and try to\ndecide if that's the best way to fix it.\n\nDavid", "msg_date": "Sat, 6 Jul 2024 12:08:50 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "On Sat, 6 Jul 2024 at 12:08, David Rowley <[email protected]> wrote:\n> The attached fixes the issue. I'll stare at it a bit more and try to\n> decide if that's the best way to fix it.\n\nI did that staring work and thought about arranging the code to reduce\nthe number of #ifdefs. 
In the end, I decided it was better to keep the\ntwo \"if\" checks close together so that it's easy to see that one does\n> and the other >=. Someone else's tastes may vary.\n\nI adjusted the comment to remove the >= mention and pushed the\nresulting patch and backpatched to 16, where I broke this in\n0e480385e.\n\nThanks for the report.\n\nDavid\n\nDavid\n\n\n", "msg_date": "Sat, 6 Jul 2024 14:05:46 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use generation memory context for tuplestore.c" }, { "msg_contents": "On Sat, 6 Jul 2024 at 12:08, David Rowley <[email protected]> wrote:\n> I also had not intended that the buffile.c stuff would use the\n> generation context. I'll need to fix that too, but I think I'll fix\n> the GenerationRealloc() first.\n\nI've pushed a fix for that now too.\n\nDavid\n\n\n", "msg_date": "Sat, 6 Jul 2024 17:41:21 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use generation memory context for tuplestore.c" } ]
[ { "msg_contents": "This is likely small potatoes compared to some of the other\npg_upgrade-related improvements I've proposed [0] [1] or plan to propose,\nbut this is easy enough, and I already wrote the patch, so here it is.\nAFAICT there's no reason to bother syncing these dump files to disk. If\nsomeone pulls the plug during pg_upgrade, it's not like you can resume\npg_upgrade from where it left off. Also, I think we skipped syncing before\nv10, anyway, as the --no-sync flag was only added in commit 96a7128, which\nadded the code to sync dump files, too.\n\n[0] https://postgr.es/m/20240418041712.GA3441570%40nathanxps13\n[1] https://postgr.es/m/20240503025140.GA1227404%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 3 May 2024 12:13:48 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "add --no-sync to pg_upgrade's calls to pg_dump and pg_dumpall" }, { "msg_contents": "On 03.05.24 19:13, Nathan Bossart wrote:\n> This is likely small potatoes compared to some of the other\n> pg_upgrade-related improvements I've proposed [0] [1] or plan to propose,\n> but this is easy enough, and I already wrote the patch, so here it is.\n> AFAICT there's no reason to bother syncing these dump files to disk. If\n> someone pulls the plug during pg_upgrade, it's not like you can resume\n> pg_upgrade from where it left off. Also, I think we skipped syncing before\n> v10, anyway, as the --no-sync flag was only added in commit 96a7128, which\n> added the code to sync dump files, too.\n\nLooks good to me.\n\n\n\n", "msg_date": "Wed, 8 May 2024 10:09:46 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add --no-sync to pg_upgrade's calls to pg_dump and pg_dumpall" }, { "msg_contents": "On Wed, May 08, 2024 at 10:09:46AM +0200, Peter Eisentraut wrote:\n> On 03.05.24 19:13, Nathan Bossart wrote:\n>> This is likely small potatoes compared to some of the other\n>> pg_upgrade-related improvements I've proposed [0] [1] or plan to propose,\n>> but this is easy enough, and I already wrote the patch, so here it is.\n>> AFAICT there's no reason to bother syncing these dump files to disk. If\n>> someone pulls the plug during pg_upgrade, it's not like you can resume\n>> pg_upgrade from where it left off. Also, I think we skipped syncing before\n>> v10, anyway, as the --no-sync flag was only added in commit 96a7128, which\n>> added the code to sync dump files, too.\n> \n> Looks good to me.\n\nThanks for looking. I noticed that the version check is unnecessary since\nwe always use the new binary's pg_dump[all], so I removed that in v2.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 8 May 2024 13:38:50 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add --no-sync to pg_upgrade's calls to pg_dump and pg_dumpall" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Thanks for looking. 
I noticed that the version check is unnecessary since\n> we always use the new binary's pg_dump[all], so I removed that in v2.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 May 2024 14:49:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add --no-sync to pg_upgrade's calls to pg_dump and pg_dumpall" }, { "msg_contents": "On Wed, May 08, 2024 at 02:49:58PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> Thanks for looking. I noticed that the version check is unnecessary since\n>> we always use the new binary's pg_dump[all], so I removed that in v2.\n> \n> +1\n\n+1. Could there be an argument in favor of a backpatch? This is a\nperformance improvement, but one could also side that the addition of\nsync support in pg_dump[all] has made that a regression that we'd\nbetter fix because the flushes don't matter in this context. They\nalso bring costs for no gain.\n--\nMichael", "msg_date": "Thu, 9 May 2024 09:03:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add --no-sync to pg_upgrade's calls to pg_dump and pg_dumpall" }, { "msg_contents": "On Thu, May 09, 2024 at 09:03:56AM +0900, Michael Paquier wrote:\n> +1. Could there be an argument in favor of a backpatch? This is a\n> performance improvement, but one could also side that the addition of\n> sync support in pg_dump[all] has made that a regression that we'd\n> better fix because the flushes don't matter in this context. They\n> also bring costs for no gain.\n\nI don't see a strong need to back-patch this, if for no other reason than\nit seems to have gone unnoticed for 7 major versions. Plus, based on my\nadmittedly limited testing, this is unlikely to provide significant\nimprovements.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 9 May 2024 14:34:25 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add --no-sync to pg_upgrade's calls to pg_dump and pg_dumpall" }, { "msg_contents": "> On 9 May 2024, at 21:34, Nathan Bossart <[email protected]> wrote:\n> \n> On Thu, May 09, 2024 at 09:03:56AM +0900, Michael Paquier wrote:\n>> +1. Could there be an argument in favor of a backpatch? This is a\n>> performance improvement, but one could also side that the addition of\n>> sync support in pg_dump[all] has made that a regression that we'd\n>> better fix because the flushes don't matter in this context. They\n>> also bring costs for no gain.\n> \n> I don't see a strong need to back-patch this, if for no other reason than\n> it seems to have gone unnoticed for 7 major versions. Plus, based on my\n> admittedly limited testing, this is unlikely to provide significant\n> improvements.\n\nAgreed, this is a nice little improvement but it's unlikely to be enough of a\nspeedup to warrant changing the backbranches.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 9 May 2024 21:58:37 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add --no-sync to pg_upgrade's calls to pg_dump and pg_dumpall" }, { "msg_contents": "Committed.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 1 Jul 2024 10:22:59 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add --no-sync to pg_upgrade's calls to pg_dump and pg_dumpall" } ]
[ { "msg_contents": "See\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7155cc4a60e7bfc837233b2dea2563a2edc673fd\n\nAs usual, please send any corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 May 2024 14:12:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "First-draft release notes for back branches are done" } ]
[ { "msg_contents": "Dear pgsql-hackers,\n\nOne-line Summary:\nProposal to introduce the CREATE OR REPLACE syntax for EVENT TRIGGER in\nPostgreSQL.\n\nBusiness Use-case:\nCurrently, to modify an EVENT TRIGGER, one must drop and recreate it. This\nproposal aims to introduce a CREATE OR REPLACE syntax for EVENT TRIGGER,\nsimilar to other database objects in PostgreSQL, to simplify this process\nand improve usability.\n\nFor example, suppose you would like to stop people from creating tables\nwithout primary keys. You might run something like this.\nCREATE OR REPLACE FUNCTION test_event_trigger_table_have_primary_key ()\nRETURNS event_trigger LANGUAGE plpgsql AS $$\nDECLARE\n obj record;\n object_types text[];\n table_name text;\nBEGIN\n FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands () LOOP\n RAISE NOTICE 'classid: % objid: %,object_type: % object_identity: %\nschema_name: % command_tag: %' , obj.classid , obj.objid , obj.object_type\n, obj.object_identity , obj.schema_name , obj.command_tag;\n IF obj.object_type ~ 'table' THEN\n table_name := obj.object_identity;\n END IF;\n object_types := object_types || obj.object_type;\n END LOOP;\n RAISE NOTICE 'table name: %' , table_name;\n IF EXISTS ( SELECT FROM pg_index i JOIN pg_attribute a ON a.attrelid =\ni.indrelid AND a.attnum = ANY (i.indkey) WHERE i.indisprimary AND\ni.indrelid = table_name::regclass) IS FALSE THEN\n RAISE EXCEPTION 'This table needs a primary key. Add a primary key\nto create the table.';\n END IF;\nEND;\n$$;\n\nCREATE EVENT TRIGGER trig_test_event_trigger_table_have_primary_key ON\nddl_command_end WHEN TAG IN ('CREATE TABLE') EXECUTE FUNCTION\ntest_event_trigger_table_have_primary_key ();\nIf you run this a second time, you will get an error. You can resolve this\nwith\nDROP EVENT TRIGGER trig_test_event_trigger_table_have_primary_key;\nCREATE EVENT TRIGGER trig_test_event_trigger_table_have_primary_key ON\nddl_command_end WHEN TAG IN ('CREATE TABLE') EXECUTE FUNCTION\ntest_event_trigger_table_have_primary_key ();\nMy suggestion is to have it so this would work.\nCREATE OR REPLACE EVENT TRIGGER\ntrig_test_event_trigger_table_have_primary_key ON ddl_command_end WHEN TAG\nIN ('CREATE TABLE') EXECUTE FUNCTION\ntest_event_trigger_table_have_primary_key ();\nThis would change the syntax from CREATE EVENT TRIGGER name\n ON event\n [ WHEN filter_variable IN (filter_value [, ... ]) [ AND ... ] ]\n EXECUTE { FUNCTION | PROCEDURE } function_name() to CREATE [OR REPLACE]\nEVENT TRIGGER name\n ON event\n [ WHEN filter_variable IN (filter_value [, ... ]) [ AND ... ] ]\n EXECUTE { FUNCTION | PROCEDURE } function_name() at\nhttps://www.postgresql.org/docs/current/sql-createeventtrigger.html.\nUser impact with the change:\nThis change will provide a more convenient and intuitive way for users to\nmodify EVENT TRIGGERS. It will eliminate the need to manually drop and\nrecreate the trigger when changes are needed.\n\nImplementation details:\nThe implementation would involve modifying the parser to recognize the\nCREATE OR REPLACE syntax for EVENT TRIGGER and appropriately handle the\nrecreation of the trigger.\n\nEstimated Development Time:\nUnknown at this time. Further analysis is required to provide an accurate\nestimate.\n\nOpportunity Window Period:\nNo specific end date. 
However, the sooner this feature is implemented, the\nsooner users can benefit from the improved usability.\n\nBudget Money:\nOpen to discussion.\n\nContact Information:\nPeter Burbery\[email protected]\n\nI look forward to your feedback on this proposal.\n\nBest regards,\nPeter Burbery\n\nDear pgsql-hackers,One-line Summary:Proposal to introduce the CREATE OR REPLACE syntax for EVENT TRIGGER in PostgreSQL.Business Use-case:Currently, to modify an EVENT TRIGGER, one must drop and recreate it. This proposal aims to introduce a CREATE OR REPLACE syntax for EVENT TRIGGER, similar to other database objects in PostgreSQL, to simplify this process and improve usability.For example, suppose you would like to stop people from creating tables without primary keys. You might run something like this.CREATE OR REPLACE FUNCTION test_event_trigger_table_have_primary_key () RETURNS event_trigger LANGUAGE plpgsql AS $$DECLARE     obj record;     object_types text[];     table_name text; BEGIN     FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands () LOOP         RAISE NOTICE 'classid: % objid: %,object_type: % object_identity: % schema_name: % command_tag: %' , obj.classid , obj.objid , obj.object_type , obj.object_identity , obj.schema_name , obj.command_tag;         IF obj.object_type ~ 'table' THEN             table_name := obj.object_identity;         END IF;         object_types := object_types || obj.object_type;     END LOOP;     RAISE NOTICE 'table name: %' , table_name;     IF EXISTS ( SELECT FROM pg_index i JOIN pg_attribute a ON a.attrelid = i.indrelid AND a.attnum = ANY (i.indkey) WHERE i.indisprimary AND i.indrelid = table_name::regclass) IS FALSE THEN         RAISE EXCEPTION 'This table needs a primary key. Add a primary key to create the table.';     END IF; END; $$; CREATE EVENT TRIGGER trig_test_event_trigger_table_have_primary_key ON ddl_command_end WHEN TAG IN ('CREATE TABLE') EXECUTE FUNCTION test_event_trigger_table_have_primary_key ();If you run this a second time, you will get an error. You can resolve this withDROP EVENT TRIGGER trig_test_event_trigger_table_have_primary_key;CREATE EVENT TRIGGER trig_test_event_trigger_table_have_primary_key ON ddl_command_end WHEN TAG IN ('CREATE TABLE') EXECUTE FUNCTION test_event_trigger_table_have_primary_key ();My suggestion is to have it so this would work.CREATE OR REPLACE EVENT TRIGGER trig_test_event_trigger_table_have_primary_key ON ddl_command_end WHEN TAG IN ('CREATE TABLE') EXECUTE FUNCTION test_event_trigger_table_have_primary_key ();This would change the syntax from CREATE EVENT TRIGGER name    ON event    [ WHEN filter_variable IN (filter_value [, ... ]) [ AND ... ] ]    EXECUTE { FUNCTION | PROCEDURE } function_name() to CREATE [OR REPLACE] EVENT TRIGGER name    ON event    [ WHEN filter_variable IN (filter_value [, ... ]) [ AND ... ] ]    EXECUTE { FUNCTION | PROCEDURE } function_name() at https://www.postgresql.org/docs/current/sql-createeventtrigger.html.User impact with the change:This change will provide a more convenient and intuitive way for users to modify EVENT TRIGGERS. It will eliminate the need to manually drop and recreate the trigger when changes are needed.Implementation details:The implementation would involve modifying the parser to recognize the CREATE OR REPLACE syntax for EVENT TRIGGER and appropriately handle the recreation of the trigger.Estimated Development Time:Unknown at this time. Further analysis is required to provide an accurate estimate.Opportunity Window Period:No specific end date. 
However, the sooner this feature is implemented, the sooner users can benefit from the improved usability.Budget Money:Open to discussion.Contact Information:Peter [email protected] look forward to your feedback on this proposal.Best regards,Peter Burbery", "msg_date": "Fri, 3 May 2024 21:49:38 -0400", "msg_from": "Peter Burbery <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal for CREATE OR REPLACE EVENT TRIGGER in PostgreSQL" }, { "msg_contents": "On Friday, May 3, 2024, Peter Burbery <[email protected]> wrote:\n\n> Dear pgsql-hackers,\n>\n> One-line Summary:\n> Proposal to introduce the CREATE OR REPLACE syntax for EVENT TRIGGER in\n> PostgreSQL.\n>\n> Business Use-case:\n> Currently, to modify an EVENT TRIGGER, one must drop and recreate it. This\n> proposal aims to introduce a CREATE OR REPLACE syntax for EVENT TRIGGER,\n> similar to other database objects in PostgreSQL, to simplify this process\n> and improve usability.\n>\n> Contact Information:\n> Peter Burbery\n> [email protected]\n>\n> I look forward to your feedback on this proposal.\n>\n\nIts doesn’t seem like that big an omission, especially since unlike views\nand functions you can’t actually affect dependencies on an event trigger.\nBut the same can be said of normal triggers and they already have an “or\nreplace” option. So, sure, if you come along and offer a patch for this it\nseems probable it would be accepted. I’d just be a bit sad it probably\nwould take away from time that could spent on more desirable features.\n\nAs for your proposal format, it is a quite confusing presentation to send\nto an open source project. Also, your examples are overly complex, and\ncrammed together, for the purpose and not enough effort is spent actually\ntrying to demonstrate why this is needed when “drop if exists” already is\npresent. A much better flow is to conditionally drop the existing object\nand create the new one all in the same transaction.\n\nDavid J.\n\nOn Friday, May 3, 2024, Peter Burbery <[email protected]> wrote:Dear pgsql-hackers,One-line Summary:Proposal to introduce the CREATE OR REPLACE syntax for EVENT TRIGGER in PostgreSQL.Business Use-case:Currently, to modify an EVENT TRIGGER, one must drop and recreate it. This proposal aims to introduce a CREATE OR REPLACE syntax for EVENT TRIGGER, similar to other database objects in PostgreSQL, to simplify this process and improve usability.Contact Information:Peter [email protected] look forward to your feedback on this proposal.Its doesn’t seem like that big an omission, especially since unlike views and functions you can’t actually affect dependencies on an event trigger.  But the same can be said of normal triggers and they already have an “or replace” option.  So, sure, if you come along and offer a patch for this it seems probable it would be accepted.  I’d just be a bit sad it probably would take away from time that could spent on more desirable features.As for your proposal format, it is a quite confusing presentation to send to an open source project.  Also, your examples are overly complex, and crammed together, for the purpose and not enough effort is spent actually trying to demonstrate why this is needed when “drop if exists” already is present.  A much better flow is to conditionally drop the existing object and create the new one all in the same transaction.David J.", "msg_date": "Fri, 3 May 2024 19:22:18 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal for CREATE OR REPLACE EVENT TRIGGER in PostgreSQL" } ]
[ { "msg_contents": "Hi all,\n\nThere's a rare edge case in `alter table` that can prevent users from\ndropping a column as shown below\n\n # create table atacc1(a int, \"........pg.dropped.1........\" int);\n CREATE TABLE\n # alter table atacc1 drop column a;\n ERROR: duplicate key value violates unique constraint\n\"pg_attribute_relid_attnam_index\"\n DETAIL: Key (attrelid, attname)=(16407, ........pg.dropped.1........)\nalready exists.\n\nIt seems a bit silly and unlikely that anyone would ever find\nthemselves in this scenario, but it also seems easy enough to fix as\nshown by the attached patch.\n\nDoes anyone think this is worth fixing? If so I can submit it to the\ncurrent commitfest.\n\nThanks,\nJoe Koshakow", "msg_date": "Sat, 4 May 2024 10:37:34 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "drop column name conflict" }, { "msg_contents": "Joseph Koshakow <[email protected]> writes:\n> There's a rare edge case in `alter table` that can prevent users from\n> dropping a column as shown below\n\n> # create table atacc1(a int, \"........pg.dropped.1........\" int);\n> CREATE TABLE\n> # alter table atacc1 drop column a;\n> ERROR: duplicate key value violates unique constraint\n> \"pg_attribute_relid_attnam_index\"\n> DETAIL: Key (attrelid, attname)=(16407, ........pg.dropped.1........)\n> already exists.\n\nI think we intentionally did not bother with preventing this,\non the grounds that if you were silly enough to name a column\nthat way then you deserve any ensuing problems.\n\nIf we were going to expend any code on the scenario, I'd prefer\nto make it be checks in column addition/renaming that disallow\nnaming a column this way. What you propose here doesn't remove\nthe fundamental tension about whether this is valid user namespace\nor not, it just makes things less predictable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 May 2024 11:29:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: drop column name conflict" }, { "msg_contents": "On Sat, May 4, 2024 at 11:29 AM Tom Lane <[email protected]> wrote:\n> I think we intentionally did not bother with preventing this,\n> on the grounds that if you were silly enough to name a column\n> that way then you deserve any ensuing problems.\n\nFair enough.\n\n> If we were going to expend any code on the scenario, I'd prefer\n> to make it be checks in column addition/renaming that disallow\n> naming a column this way.\n\nIs there any interest in making this change? The attached patch could\nuse some cleanup, but seems to accomplish what's described. It's\ndefinitely more involved than the previous one and may not be worth the\neffort. If you feel that it's worth it I can clean it up, otherwise\nI'll drop it.\n\nThanks,\nJoe Koshakow", "msg_date": "Sat, 4 May 2024 15:38:14 -0400", "msg_from": "Joseph Koshakow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: drop column name conflict" } ]