[ { "msg_contents": "Hello.\n\nBefore the advent of procedures in PostgreSQL 11 that can manage \ntransactions, there could only be one transaction\nin one statement. Hence the end of the transaction also meant the end of \nthe statement. Apparently, this is why\nthe corresponding restriction is described differently in different \nplaces of the documentation:\n\nhttps://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-VIEWS\n\"...so a query or transaction still in progress does not affect the \ndisplayed totals...\"\n\"...counts actions taken so far within the current transaction...\"\n\nBut now it's possible that several transactions are performed within one \nSQL statement call.\nAt the same time, the current implementation transfers the accumulated \nstatistics to the shared memory only\nat the end of the statement. These statistics data is used by automatic \nvacuum. Thus, in a situation\nwhere some procedure that changes data is running for a long time (e.g. \nan infinite event processing loop,\nincluding implementing any queues), the changes made and committed in it \nwill not affect statistics in shared memory\nuntil the CALL statement is finished. This will not allow the autovacuum \nto make the right cleaning decision in time.\nTo illustrate the described feature, I suggest to consider the example \nbelow.\n\nExample.\n\nWe process the data in the 'test' table. The 'changes' column will show \nthe number of row updates:\n\n CREATE TABLE test (changes int);\n\nLet's insert a row into the table:\n\n INSERT INTO test VALUES (0);\n\nAt each processing step, the value of the 'changes' column will be \nincremented. The processing will be performed\nin a long-running loop within the 'process' procedure (see below). The \nactions of each loop step are committed.\n\n CREATE PROCEDURE process() AS $$\n DECLARE\n l_chs int;\n BEGIN\n LOOP\n UPDATE test SET changes = changes + 1 RETURNING changes INTO \nl_chs;\n COMMIT;\n RAISE NOTICE 'changes % -- upd_shared = %, upd_local = %', l_chs,\n (SELECT n_tup_upd FROM pg_stat_all_tables\n WHERE relname = 'test'), -- statistics in shared \nmemory (considered by autovacuum)\n (SELECT n_tup_upd FROM pg_stat_xact_all_tables\n WHERE relname = 'test'); -- statistics within the \noperation (transaction)\n END LOOP;\n END\n $$ LANGUAGE plpgsql\n\nLet's call the procedure:\n\n CALL process();\n\nNOTICE: changes 1 -- upd_shared = 0, upd_local = 1\nNOTICE: changes 2 -- upd_shared = 0, upd_local = 2\nNOTICE: changes 3 -- upd_shared = 0, upd_local = 3\nNOTICE: changes 4 -- upd_shared = 0, upd_local = 4\nNOTICE: changes 5 -- upd_shared = 0, upd_local = 5\nNOTICE: changes 6 -- upd_shared = 0, upd_local = 6\nNOTICE: changes 7 -- upd_shared = 0, upd_local = 7\nNOTICE: changes 8 -- upd_shared = 0, upd_local = 8\n...\n\nIf we now observe the cumulative statistics on the 'test' table from \nanother session, we will see\nthat despite the fact that there are updates and dead tuples appear, \nthis information does not get into the shared memory:\n\nSELECT n_tup_upd, n_dead_tup, n_ins_since_vacuum, vacuum_count, \nautovacuum_count FROM pg_stat_all_tables WHERE relname = 'test'\n | n_tup_upd | 0\n | n_dead_tup | 0\n | n_ins_since_vacuum | 1\n | vacuum_count | 0\n | autovacuum_count | 0\n\nIt would be logical to remove the existing restriction, that is, to \nupdate statistics data precisely\nafter transaction completion, even if the operator is still working.\n\n-- \nRegards, Igor Gnatyuk\nPostgres Professional https://postgrespro.com\n\n\n", 
"msg_date": "Wed, 12 Jun 2024 20:13:19 +0300", "msg_from": "\"Igor V.Gnatyuk\" <[email protected]>", "msg_from_op": true, "msg_subject": "Multi-transactional statements and statistics for autovacuum" }, { "msg_contents": "Hello everybody,\n\nOn 12.06.2024 20:13, Igor V.Gnatyuk wrote:\n> Hello.\n>\n> Before the advent of procedures in PostgreSQL 11 that can manage \n> transactions, there could only be one transaction\n> in one statement. Hence the end of the transaction also meant the end \n> of the statement. Apparently, this is why\n> the corresponding restriction is described differently in different \n> places of the documentation:\n>\n> https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-VIEWS \n>\n> \"...so a query or transaction still in progress does not affect the \n> displayed totals...\"\n> \"...counts actions taken so far within the current transaction...\"\n>\n> But now it's possible that several transactions are performed within \n> one SQL statement call.\n> At the same time, the current implementation transfers the accumulated \n> statistics to the shared memory only\n> at the end of the statement. These statistics data is used by \n> automatic vacuum. Thus, in a situation\n> where some procedure that changes data is running for a long time \n> (e.g. an infinite event processing loop,\n> including implementing any queues), the changes made and committed in \n> it will not affect statistics in shared memory\n> until the CALL statement is finished. This will not allow the \n> autovacuum to make the right cleaning decision in time.\n> To illustrate the described feature, I suggest to consider the example \n> below.\n\n\nIt would be nice to know if this is considered desired behavior or an \noversight.\n\nIf it's OK that transaction(s) statistics are not accumulated in shared \nmemory until the end of the SQL statement, we should at least improve \ndocumentation to better reflect this.\n\nAlthough, from my POV, statistics should be send to shared memory after \nthe end of each transaction, regardless of the boundaries of SQL \nstatements. With the current implementation, it's not possible to build \nan infinite-loop query processing routine entirely in Postgres; we have \nto rely on external tools either to implement a processing loop (to \nissue separate SQL statements for each event) or to schedule vacuum.\n\nWhat's your opinion on this?\n\n\n>\n> Example.\n>\n> We process the data in the 'test' table. The 'changes' column will \n> show the number of row updates:\n>\n>   CREATE TABLE test (changes int);\n>\n> Let's insert a row into the table:\n>\n>   INSERT INTO test VALUES (0);\n>\n> At each processing step, the value of the 'changes' column will be \n> incremented. The processing will be performed\n> in a long-running loop within the 'process' procedure (see below). 
The \n> actions of each loop step are committed.\n>\n>   CREATE PROCEDURE process() AS $$\n>   DECLARE\n>     l_chs int;\n>   BEGIN\n>     LOOP\n>       UPDATE test SET changes = changes + 1 RETURNING changes INTO l_chs;\n>       COMMIT;\n>       RAISE NOTICE 'changes % -- upd_shared = %, upd_local = %', l_chs,\n>                    (SELECT n_tup_upd FROM pg_stat_all_tables\n>                     WHERE relname = 'test'),  -- statistics in shared \n> memory (considered by autovacuum)\n>                    (SELECT n_tup_upd FROM pg_stat_xact_all_tables\n>                     WHERE relname = 'test');   -- statistics within \n> the operation (transaction)\n>     END LOOP;\n>   END\n>   $$ LANGUAGE plpgsql\n>\n> Let's call the procedure:\n>\n>   CALL process();\n>\n> NOTICE:  changes 1 -- upd_shared = 0, upd_local = 1\n> NOTICE:  changes 2 -- upd_shared = 0, upd_local = 2\n> NOTICE:  changes 3 -- upd_shared = 0, upd_local = 3\n> NOTICE:  changes 4 -- upd_shared = 0, upd_local = 4\n> NOTICE:  changes 5 -- upd_shared = 0, upd_local = 5\n> NOTICE:  changes 6 -- upd_shared = 0, upd_local = 6\n> NOTICE:  changes 7 -- upd_shared = 0, upd_local = 7\n> NOTICE:  changes 8 -- upd_shared = 0, upd_local = 8\n> ...\n>\n> If we now observe the cumulative statistics on the 'test' table from \n> another session, we will see\n> that despite the fact that there are updates and dead tuples appear, \n> this information does not get into the shared memory:\n>\n> SELECT n_tup_upd, n_dead_tup, n_ins_since_vacuum, vacuum_count, \n> autovacuum_count FROM pg_stat_all_tables WHERE relname = 'test'\n>     |  n_tup_upd          | 0\n>     |  n_dead_tup         | 0\n>     |  n_ins_since_vacuum | 1\n>     |  vacuum_count       | 0\n>     |  autovacuum_count   | 0\n>\n> It would be logical to remove the existing restriction, that is, to \n> update statistics data precisely\n> after transaction completion, even if the operator is still working.\n>\n\n\n", "msg_date": "Sun, 7 Jul 2024 15:21:51 +0300", "msg_from": "Egor Rogov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multi-transactional statements and statistics for autovacuum" } ]
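A minimal illustration of the client-driven workaround mentioned in the thread above: when the loop is driven from the client, each iteration is its own SQL statement, so its statistics reach shared memory as usual once that statement finishes. The \watch interval is arbitrary and the table is the one from the example in the thread; this is only a sketch of the approach, not a recommendation.

    -- in psql, against the example table from the thread
    UPDATE test SET changes = changes + 1;
    \watch 1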
[ { "msg_contents": "Attached patch harmonizes pg_bsd_indent's function parameter names, so\nthat they match the names used in corresponding function definitions.\n\nI have been putting this off because I wasn't sure that the policy\nshould be the same for pg_bsd_indent. Is there any reason to think\nthat this will create more work down the line? It seems like it might,\ndue to some kind of need to keep pg_bsd_indent's consistent with\nupstream BSD indent. But probably not. The patch is pretty small, in\nany case.\n\nI'd like to push this patch now. It's generally easier to be strict\nabout these inconsistencies. My clang-tidy workflow doesn't\nautomatically filter out various special cases requiring subjective\njudgement, so leaving stuff like this unfixed creates more work down\nthe road.\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 12 Jun 2024 17:14:44 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Harmonizing pg_bsd_indent parameter names" }, { "msg_contents": "On Wed, Jun 12, 2024 at 05:14:44PM -0400, Peter Geoghegan wrote:\n> I have been putting this off because I wasn't sure that the policy\n> should be the same for pg_bsd_indent. Is there any reason to think\n> that this will create more work down the line? It seems like it might,\n> due to some kind of need to keep pg_bsd_indent's consistent with\n> upstream BSD indent. But probably not. The patch is pretty small, in\n> any case.\n\nI would be surprised if this 2-line patch created anything approaching a\nsignificant amount of work down the road. FWIW commit 10d34fe already\nchanged one line in indent.c.\n\n> I'd like to push this patch now. It's generally easier to be strict\n> about these inconsistencies. My clang-tidy workflow doesn't\n> automatically filter out various special cases requiring subjective\n> judgement, so leaving stuff like this unfixed creates more work down\n> the road.\n\nAre these the only remaining inconsistencies?\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 12 Jun 2024 16:32:54 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Harmonizing pg_bsd_indent parameter names" }, { "msg_contents": "Peter Geoghegan <[email protected]> writes:\n> Attached patch harmonizes pg_bsd_indent's function parameter names, so\n> that they match the names used in corresponding function definitions.\n\nHmm, these aren't really harmonizing inconsistencies, but overruling\nsomebody's style decision to leave parameter names out of the extern\ndeclarations. That's a style I don't like personally, but some do.\n\n> I have been putting this off because I wasn't sure that the policy\n> should be the same for pg_bsd_indent. Is there any reason to think\n> that this will create more work down the line? It seems like it might,\n> due to some kind of need to keep pg_bsd_indent's consistent with\n> upstream BSD indent.\n\nWe are, at least in theory, trying to stay within hailing distance\nof the upstream; that's the primary reason why we've not touched\nthe indentation style of pg_bsd_indent itself. 
Still, two lines\nis not going to make much of a difference in whether patches can\nbe passed back and forth (whereas reindentation would kill that\nsomewhat thoroughly).\n\nAnyway, after chewing on it for a few minutes, no objection here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jun 2024 17:33:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Harmonizing pg_bsd_indent parameter names" }, { "msg_contents": "On Wed, Jun 12, 2024 at 5:33 PM Tom Lane <[email protected]> wrote:\n> Peter Geoghegan <[email protected]> writes:\n> > Attached patch harmonizes pg_bsd_indent's function parameter names, so\n> > that they match the names used in corresponding function definitions.\n>\n> Hmm, these aren't really harmonizing inconsistencies, but overruling\n> somebody's style decision to leave parameter names out of the extern\n> declarations. That's a style I don't like personally, but some do.\n\nIt was my understanding that that was considered just as bad as (or\nequivalent to) a mechanical inconsistency. At least for code that was\nwritten from scratch for Postgres (as opposed to vendored in Postgres)\nwas concerned. We dealt with this during the initial work on bulk\nharmonizing code. For example, we made Henry Spencer's regex code\nfollow Postgres coding standards (in commit bc2187ed).\n\nThe regex code was a little different to pg_bsd_indent. There wasn't the same\nneed to keep up with an upstream. That's why I thought I'd ask about\nit before acting.\n\n> We are, at least in theory, trying to stay within hailing distance\n> of the upstream; that's the primary reason why we've not touched\n> the indentation style of pg_bsd_indent itself. Still, two lines\n> is not going to make much of a difference in whether patches can\n> be passed back and forth (whereas reindentation would kill that\n> somewhat thoroughly).\n\nGot it.\n\n> Anyway, after chewing on it for a few minutes, no objection here.\n\nCool. Will push this shortly, then.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 12 Jun 2024 17:42:29 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Harmonizing pg_bsd_indent parameter names" }, { "msg_contents": "On Wed, Jun 12, 2024 at 5:32 PM Nathan Bossart <[email protected]> wrote:\n> I would be surprised if this 2-line patch created anything approaching a\n> significant amount of work down the road. FWIW commit 10d34fe already\n> changed one line in indent.c.\n\nI missed that.\n\n> > I'd like to push this patch now. It's generally easier to be strict\n> > about these inconsistencies. My clang-tidy workflow doesn't\n> > automatically filter out various special cases requiring subjective\n> > judgement, so leaving stuff like this unfixed creates more work down\n> > the road.\n>\n> Are these the only remaining inconsistencies?\n\nNo, but they're just about the only remaining inconsistencies that seem fixable.\n\nThe vast majority of the remaining inconsistencies (reported by\nrun-clang-tidy.py, using only its\nreadability-inconsistent-declaration-parameter-name and\nreadability-named-parameter checks) fit into one of two categories\n(similar categories):\n\n1. Functions such as GUC_yy_scan_string. These are functions where the\nonly way to fix the complained-about inconsistency is to change the\nway that upstream Gnu flex generates its scanner C code.\n\nThere doesn't seem to be a built-in option that influences this behavior.\n\n2. 
Postgres C code that uses the C preprocessor in the style of C++ templates.\n\nIn practice category 2 just means users of simplehash.h. There are a\ncouple of those, and I get at least one warning for each of them.\n\nThere is also one oddball case, not quite in either category. This\ninvolves zic.c's declaration of\nlink(), when it should actually just be using the #include <unistd.h>\ndeclaration. That's another weird upstream code thing -- this isn't\nexactly fully under our control. I've avoided doing anything about\nthat, but perhaps I should have proposed a patch for that, too (it's\nfairly similar to pg_bsd_indent). What do you think of that idea?\n\nPersonally I don't care all that much about the machine-generated code\n(I'd fix it if there was a straightforward way to do so, but there\ndoesn't seem to be, so\nmeh). I use clangd's clang-tidy integration. It won't complain about\nthese cases anyway (it doesn't recognize that .l files and .y files\ncontain some C code).\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 12 Jun 2024 17:59:14 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Harmonizing pg_bsd_indent parameter names" }, { "msg_contents": "On Wed, Jun 12, 2024 at 05:59:14PM -0400, Peter Geoghegan wrote:\n> There is also one oddball case, not quite in either category. This\n> involves zic.c's declaration of\n> link(), when it should actually just be using the #include <unistd.h>\n> declaration. That's another weird upstream code thing -- this isn't\n> exactly fully under our control. I've avoided doing anything about\n> that, but perhaps I should have proposed a patch for that, too (it's\n> fairly similar to pg_bsd_indent). What do you think of that idea?\n\nThat one seems to be synchronized somewhat regularly, and I haven't been\nthe one doing the synchronizing, so we might want to be a little more\ncautious there. But I do see a couple of commits that have touched it\n(e.g., 235c0f6, c4f8e89, 0245f8d). At a glance, it looks like the link()\nstuff might be intended for Windows. I see we have our own version in\nwin32link.c, so your idea to remove it in favor of the unistd.h declaration\nseems like it ought to work.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 12 Jun 2024 20:35:26 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Harmonizing pg_bsd_indent parameter names" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Wed, Jun 12, 2024 at 05:59:14PM -0400, Peter Geoghegan wrote:\n>> There is also one oddball case, not quite in either category. This\n>> involves zic.c's declaration of\n>> link(), when it should actually just be using the #include <unistd.h>\n>> declaration.\n\n> That one seems to be synchronized somewhat regularly, and I haven't been\n> the one doing the synchronizing, so we might want to be a little more\n> cautious there.\n\nYeah. I'm overdue for another sync with upstream --- I'm dreading\nthat a little bit because they've been aggressively \"modernizing\"\ntheir code and I fear it will be painful.\n\n[ ... click click ... git pull ... 
] It looks like the way that\nreads now in upstream is\n\n#if !HAVE_POSIX_DECLS\nextern int\tgetopt(int argc, char * const argv[],\n\t\t\tconst char * options);\nextern int\tlink(const char * target, const char * linkname);\nextern char *\toptarg;\nextern int\toptind;\n#endif\n\nWe could probably assume that we'll treat their code as though\nHAVE_POSIX_DECLS is true and so this whole stanza goes away.\nBut I'd just as soon not think about it until I have the energy\nto do that sync. Unless somebody else is hot to do it (if so,\nsee the notes at src/timezone/README), let's leave this be\nfor now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jun 2024 21:58:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Harmonizing pg_bsd_indent parameter names" }, { "msg_contents": "On Wed, Jun 12, 2024 at 9:58 PM Tom Lane <[email protected]> wrote:\n> We could probably assume that we'll treat their code as though\n> HAVE_POSIX_DECLS is true and so this whole stanza goes away.\n> But I'd just as soon not think about it until I have the energy\n> to do that sync. Unless somebody else is hot to do it (if so,\n> see the notes at src/timezone/README), let's leave this be\n> for now.\n\nUnderstood. Thanks.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 12 Jun 2024 22:04:04 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Harmonizing pg_bsd_indent parameter names" } ]
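For readers unfamiliar with the workflow Peter describes in the thread above, the two checks he names can be run roughly like this; the build-directory path is hypothetical and option spelling varies somewhat across LLVM releases, so treat this as a sketch rather than the exact command used.

    # -p must point at a build tree containing compile_commands.json
    run-clang-tidy.py -p build \
        -checks='-*,readability-inconsistent-declaration-parameter-name,readability-named-parameter'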
[ { "msg_contents": "Hi,\n\nDavid R and I were discussing vectorisation and microarchitectures and\nwhat you can expect the target microarchitecture to be these days, and\nit seemed like some of our choices are not really very\nforward-looking.\n\nDistros targeting x86-64 traditionally assumed the original AMD64 K8\ninstruction set, so if we want to use newer instructions we use\nvarious configure or runtime checks to see if that's safe.\n\nRecent GCC and Clang versions understand -march=x86-64-v{2,3,4}[1].\nRHEL9 and similar and SUSE tumbleweed now require x86-64-v2, and IIUC\nthey changed the -march default to -v2 in their build of GCC, and I\nthink Ubuntu has something in the works perhaps for -v3[2].\n\nSome of our current tricks won't won't take proper advantage of that:\nwe'll still access POPCNT through a function pointer! I was wondering\nhow to do it. One idea that seems kinda weird is to try $(CC) -S\ntest_builtin_popcount.c, and then grepping for POPCNT in\ntest_builtin_popcount.s! I assume we don't want to use\n__builtin_popcount() if it doesn't generate the instruction (using the\ncompiler flags provided or default otherwise), because on a more\nconservative distro we'll use GCC/Clang's fallback code, instead of\nour own runtime-checked POPCNT-instruction-through-a-function-pointer.\n(Presumably we believed that to be better.) Being able to use\n__builtin_popcount() directly without any function pointer nonsense is\nobviously faster, but also automatically vectorisable.\n\nThat's not like the CRC32 instruction checks we have, because those\neither work or don't work with default compiler flags, but for POPCNT\nit always works but might general fallback code instead of the desired\ninstruction so you have to inspect what it generates.\n\nFWIW Windows 11 on x86 requires the POPCNT instruction to boot.\nWindows 10 EOL is October next year so we can use MSVC's intrinsic\nwithout a function pointer if we just wait :-)\n\nAll ARM64 bit systems have CNT, but we don't use it! Likewise for all\nmodern POWER (8+) and SPARC chips that any OS can actually run on\nthese days. 
For RISCV it's part of the bit manipulation option, but\nwe're already relying on that by detecting and using other\npg_bitutils.h builtins.\n\nSo I think we should probably just use the builtin directly\neverywhere, except on x86 where we should either check if it generates\nthe instruction we want, OR, if we can determine that the modern\nGCC/Clangfallback code is actually faster than our function pointer\nhop, then maybe we should just always use it even there, after\nchecking that it exists.\n\n[1] https://en.wikipedia.org/wiki/X86-64#Microarchitecture_levels\n[2] https://ubuntu.com/blog/optimising-ubuntu-performance-on-amd64-architecture\n\n\n", "msg_date": "Thu, 13 Jun 2024 11:11:56 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Changing default -march landscape" }, { "msg_contents": "On Thu, Jun 13, 2024 at 11:11:56AM +1200, Thomas Munro wrote:\n> David R and I were discussing vectorisation and microarchitectures and\n> what you can expect the target microarchitecture to be these days, and\n> it seemed like some of our choices are not really very\n> forward-looking.\n> \n> Distros targeting x86-64 traditionally assumed the original AMD64 K8\n> instruction set, so if we want to use newer instructions we use\n> various configure or runtime checks to see if that's safe.\n> \n> Recent GCC and Clang versions understand -march=x86-64-v{2,3,4}[1].\n> RHEL9 and similar and SUSE tumbleweed now require x86-64-v2, and IIUC\n> they changed the -march default to -v2 in their build of GCC, and I\n> think Ubuntu has something in the works perhaps for -v3[2].\n> \n> Some of our current tricks won't won't take proper advantage of that:\n> we'll still access POPCNT through a function pointer!\n\nThis is perhaps only tangentially related, but I've found it really\ndifficult to avoid painting ourselves into a corner with this stuff. Let's\nuse the SSE 4.2 CRC32C code as an example. Right now, if your default\ncompiler flags indicate support for SSE 4.2 (which I'll note can be assumed\nwith x86-64-v2), we'll use it unconditionally, no function pointer\nrequired. If additional compiler flags happen to convince the compiler to\ngenerate SSE 4.2 code, we'll instead build both a fallback version and the\nSSE version, and then we'll use a function pointer to direct to whatever we\ndetect is available on the CPU when the server starts.\n\nNow, let's say we require x86-64-v2. Once we have that, we can avoid the\nfunction pointer on many more x86 machines. While that sounds great, now\nwe have a different problem. If someone wants to add, say, AVX-512 support\n[0], which is a much newer instruction set, we'll need to use the function\npointer again. And we're back where we started. We could instead just ask\nfolks to compile with --march=native, but then these optimizations are only\navailable for a subset of users until we decide the instructions are\nstandard enough 20 years from now.\n\nThe idea that's been floating around recently is to build a bunch of\ndifferent versions of Postgres and to choose one on startup based on what\nthe CPU supports. 
That seems like quite a lot of work, and it'll increase\nthe size of the builds quite a bit, but it at least doesn't have the\naforementioned problem.\n\nSorry if I just rambled on about something unrelated, but your message had\nenough keywords to get me thinking about this again.\n\n[0] https://postgr.es/m/BL1PR11MB530401FA7E9B1CA432CF9DC3DC192%40BL1PR11MB5304.namprd11.prod.outlook.com\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 12 Jun 2024 20:09:45 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing default -march landscape" }, { "msg_contents": "On Thu, Jun 13, 2024 at 1:09 PM Nathan Bossart <[email protected]> wrote:\n> Now, let's say we require x86-64-v2. Once we have that, we can avoid the\n> function pointer on many more x86 machines. While that sounds great, now\n> we have a different problem. If someone wants to add, say, AVX-512 support\n> [0], which is a much newer instruction set, we'll need to use the function\n> pointer again. And we're back where we started. We could instead just ask\n> folks to compile with --march=native, but then these optimizations are only\n> available for a subset of users until we decide the instructions are\n> standard enough 20 years from now.\n\nThe way I think about it, it's not our place to require anything (I\nmean, we can't literally put -march=XXX into our build files, or if we\ndo the Debian et al maintainers will have to remove them by local\npatch because they are in charge of what the baseline is for their\ndistro), but we should do the best thing possible when people DO build\nwith modern -march. I would assume for example that Amazon Linux is\nset up to use a default -march that targets the actual minimum\nmicroarch level on AWS hosts. I guess what I'm pointing out here is\nthat the baseline is (finally!) moving on common distributions, and\nyet we've coded things in a way that doesn't benefit...\n\n> The idea that's been floating around recently is to build a bunch of\n> different versions of Postgres and to choose one on startup based on what\n> the CPU supports. That seems like quite a lot of work, and it'll increase\n> the size of the builds quite a bit, but it at least doesn't have the\n> aforementioned problem.\n\nI guess another idea would be for the PGDG packagers or someone else\ninterested in performance to create repos with binaries built for\nthese microarch levels and users can research what they need. The new\n-v2 etc levels are a lot more practical than the microarch names and\nindividual features...\n\n\n", "msg_date": "Thu, 13 Jun 2024 13:20:17 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Changing default -march landscape" }, { "msg_contents": "On Thu, Jun 13, 2024 at 01:20:17PM +1200, Thomas Munro wrote:\n> The way I think about it, it's not our place to require anything (I\n> mean, we can't literally put -march=XXX into our build files, or if we\n> do the Debian et al maintainers will have to remove them by local\n> patch because they are in charge of what the baseline is for their\n> distro), but we should do the best thing possible when people DO build\n> with modern -march. I would assume for example that Amazon Linux is\n> set up to use a default -march that targets the actual minimum\n> microarch level on AWS hosts. I guess what I'm pointing out here is\n> that the baseline is (finally!) 
moving on common distributions, and\n> yet we've coded things in a way that doesn't benefit...\n\nThat's true, but my point is that as soon as we start avoiding function\npointers more commonly, it becomes difficult to justify adding them back in\norder to support new instruction sets. Should we just compile in the SSE\n4.2 version, or should we take a chance on AVX-512 with the function\npointer?\n\n>> The idea that's been floating around recently is to build a bunch of\n>> different versions of Postgres and to choose one on startup based on what\n>> the CPU supports. That seems like quite a lot of work, and it'll increase\n>> the size of the builds quite a bit, but it at least doesn't have the\n>> aforementioned problem.\n> \n> I guess another idea would be for the PGDG packagers or someone else\n> interested in performance to create repos with binaries built for\n> these microarch levels and users can research what they need. The new\n> -v2 etc levels are a lot more practical than the microarch names and\n> individual features...\n\nHeartily agreed.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 12 Jun 2024 21:00:41 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing default -march landscape" }, { "msg_contents": "On 13.06.24 04:00, Nathan Bossart wrote:\n> That's true, but my point is that as soon as we start avoiding function\n> pointers more commonly, it becomes difficult to justify adding them back in\n> order to support new instruction sets. Should we just compile in the SSE\n> 4.2 version, or should we take a chance on AVX-512 with the function\n> pointer?\n> \n>>> The idea that's been floating around recently is to build a bunch of\n>>> different versions of Postgres and to choose one on startup based on what\n>>> the CPU supports. That seems like quite a lot of work, and it'll increase\n>>> the size of the builds quite a bit, but it at least doesn't have the\n>>> aforementioned problem.\n>>\n>> I guess another idea would be for the PGDG packagers or someone else\n>> interested in performance to create repos with binaries built for\n>> these microarch levels and users can research what they need. The new\n>> -v2 etc levels are a lot more practical than the microarch names and\n>> individual features...\n> \n> Heartily agreed.\n\nOne thing that is perhaps not clear (to me?) is how much this matters \nand how much of it matters. Obviously, we know that it matters some, \notherwise we'd not be working on it. But does it, like, matter only \nwith checksums, or with thousands of partitions, or with many CPUs, or \ncertain types of indexes, etc.?\n\nIf we're going to, say, create some recommendations for packagers around \nthis, how are they supposed to determine the tradeoffs? It might be \neasy for a packager to set some slightly-higher -march flag that is in \nline with their distro's policies, but it would probably be a lot more \nwork to create separate binaries or a separate repository for, say, \nmoving from SSE-something to AVX-something. And how are they supposed \nto decide that, and how are they supposed to communicate that to their \nusers? (And how can we get different packagers to make somewhat \nconsistent decisions around this?)\n\nWe have in a somewhat similar case quite clearly documented that without \nnative spinlock support everything will be terrible. And there is \nprobably some information out there that without certain CPU support \nchecksum performance will be terrible. 
But beyond that we probably \ndon't have much.\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 09:41:33 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing default -march landscape" }, { "msg_contents": "On Thu, Jun 13, 2024 at 9:41 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 13.06.24 04:00, Nathan Bossart wrote:\n> > That's true, but my point is that as soon as we start avoiding function\n> > pointers more commonly, it becomes difficult to justify adding them back\n> in\n> > order to support new instruction sets. Should we just compile in the SSE\n> > 4.2 version, or should we take a chance on AVX-512 with the function\n> > pointer?\n> >\n> >>> The idea that's been floating around recently is to build a bunch of\n> >>> different versions of Postgres and to choose one on startup based on\n> what\n> >>> the CPU supports. That seems like quite a lot of work, and it'll\n> increase\n> >>> the size of the builds quite a bit, but it at least doesn't have the\n> >>> aforementioned problem.\n> >>\n> >> I guess another idea would be for the PGDG packagers or someone else\n> >> interested in performance to create repos with binaries built for\n> >> these microarch levels and users can research what they need. The new\n> >> -v2 etc levels are a lot more practical than the microarch names and\n> >> individual features...\n> >\n> > Heartily agreed.\n>\n> One thing that is perhaps not clear (to me?) is how much this matters\n> and how much of it matters. Obviously, we know that it matters some,\n> otherwise we'd not be working on it. But does it, like, matter only\n> with checksums, or with thousands of partitions, or with many CPUs, or\n> certain types of indexes, etc.?\n>\n> If we're going to, say, create some recommendations for packagers around\n> this, how are they supposed to determine the tradeoffs? It might be\n> easy for a packager to set some slightly-higher -march flag that is in\n> line with their distro's policies, but it would probably be a lot more\n> work to create separate binaries or a separate repository for, say,\n> moving from SSE-something to AVX-something. And how are they supposed\n> to decide that, and how are they supposed to communicate that to their\n> users? (And how can we get different packagers to make somewhat\n> consistent decisions around this?)\n>\n> We have in a somewhat similar case quite clearly documented that without\n> native spinlock support everything will be terrible. And there is\n> probably some information out there that without certain CPU support\n> checksum performance will be terrible. But beyond that we probably\n> don't have much.\n>\n\nYeah, I think it's completely unreasonable to push this on packagers and\njust say \"this is your problem now\". If we do that, we can assume the only\npeople to get any benefit from these optimizations are those that use a\nfully managed cloud service like azure or RDS.\n\nThey can do it, but we need to tell them how and when. 
And if we intend for\npackagers to be part of the solution we need to explicitly bring them into\nthe discussion of how to do it, at a fairly early stage (and no, we can't\nexpect them to follow every thread on hackers).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Thu, 13 Jun 2024 10:14:52 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing default -march landscape" },
{ "msg_contents": "On Thu, Jun 13, 2024 at 8:15 PM Magnus Hagander <[email protected]> wrote:\n> Yeah, I think it's completely unreasonable to push this on packagers and just say \"this is your problem now\". If we do that, we can assume the only people to get any benefit from these optimizations are those that use a fully managed cloud service like azure or RDS.\n\nIt would also benefit distros that have decided to move their baseline\nmicro-arch level right now, probably without any additional action\nfrom the maintainers assuming that gcc defaults to -march=*-v2 etc.\nThe cloud vendors' internal distros aren't really special in that\nregard are they?\n\nHmm, among Linux distros, maybe it's really only Debian that isn't\nmoving or talking about moving the baseline yet? (Are they?)\n\n> They can do it, but we need to tell them how and when. And if we intend for packagers to be part of the solution we need to explicitly bring them into the discussion of how to do it, at a fairly early stage (and no, we can't expect them to follow every thread on hackers).\n\nOK let me CC Christoph and ask the question this way: hypothetically,\nif RHEL users' PostgreSQL packages became automatically faster than\nDebian users' packages because of default -march choice in the\ntoolchain, what would the Debian project think about that, and what\nshould we do about it? The options discussed so far were:\n\n1. 
Consider a community apt repo that is identical to the one except\n> targeting the higher microarch levels, that users can point a machine\n> to if they want to.\n\nThere are sub-variations of that: Don't use -march in Debian for\nstrict baseline compatiblity, but use -march=something in\napt.postgresql.org; bump to -march=x86-64-v2 for the server build (but\nnot libpq and psql) saying that PG servers need better hardware; ...\n\nI'd rather want to avoid adding yet another axis to the matrix we\ntarget on apt.postgresql.org, it's already complex enough. So ideally\nthere should be only one server-build. Or if we decide it's worth to\nhave several, extension builds should still be compatible with either.\n\n> 2. Various ideas for shipping multiple versions of the code at a\n> higher granularity than we're doing to day (a callback for a single\n> instruction! the opposite extreme being the whole executable +\n> libraries), probably using some of techniques mentioned at\n> https://wiki.debian.org/InstructionSelection.\n\nI don't have enough experience with that to say anything about the\ntrade-offs, or if the online instruction selectors themselves add too\nmuch overhead.\n\nNot to mention that testing things against all instruction variants is\nprobably next to impossible in practice.\n\n> I would guess that 1 is about 100x easier but I haven't looked into it.\n\nAre there any numbers about what kind of speedup we could expect?\n\nIf the online selection isn't feasible/worthwhile, I'd be willing to\nbump the baseline for the server packages. There are already packages\ndoing that, and there's even infrastructure in the \"isa-support\"\npackage that lets packages declare a dependency on CPU features.\n\nOr Debian might just bump the baseline. PostgreSQL asking for it might\njust be the reason we wanted to hear to make it happen.\n\nChristoph\n\n\n", "msg_date": "Mon, 24 Jun 2024 14:16:28 +0200", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing default -march landscape" }, { "msg_contents": "Re: To Thomas Munro\n> Or Debian might just bump the baseline. PostgreSQL asking for it might\n> just be the reason we wanted to hear to make it happen.\n\nWhich level would PostgreSQL specifically want? x86-64-v2 or even\nx86-64-v3?\n\nChristoph\n\n\n", "msg_date": "Mon, 24 Jun 2024 14:24:25 +0200", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing default -march landscape" }, { "msg_contents": "Hi,\n\nReply triggered by https://postgr.es/m/ZqmRYh3iikm1Kh3D%40nathan\n\n\nOn 2024-06-12 20:09:45 -0500, Nathan Bossart wrote:\n> This is perhaps only tangentially related, but I've found it really\n> difficult to avoid painting ourselves into a corner with this stuff. Let's\n> use the SSE 4.2 CRC32C code as an example. Right now, if your default\n> compiler flags indicate support for SSE 4.2 (which I'll note can be assumed\n> with x86-64-v2), we'll use it unconditionally, no function pointer\n> required. If additional compiler flags happen to convince the compiler to\n> generate SSE 4.2 code, we'll instead build both a fallback version and the\n> SSE version, and then we'll use a function pointer to direct to whatever we\n> detect is available on the CPU when the server starts.\n\n> Now, let's say we require x86-64-v2. Once we have that, we can avoid the\n> function pointer on many more x86 machines. While that sounds great, now\n> we have a different problem. 
If someone wants to add, say, AVX-512 support\n> [0], which is a much newer instruction set, we'll need to use the function\n> pointer again. And we're back where we started. We could instead just ask\n> folks to compile with --march=native, but then these optimizations are only\n> available for a subset of users until we decide the instructions are\n> standard enough 20 years from now.\n\nI don't think this is that big an issue:\n\nSo much of the avx512 stuff is only available on an almost arbitrary seeming\nset of platforms, so we'll need runtime tests approximately forever. Which\nalso means we need a fairly standard pattern for deciding whether the runtime\ndispatch is worth the cost. That's pretty independent of whether we e.g. want\nto implement pg_popcount64 without runtime dispatch, it'd not make sense to\nuse avx512 for that anyway.\n\n\n> The idea that's been floating around recently is to build a bunch of\n> different versions of Postgres and to choose one on startup based on what\n> the CPU supports. That seems like quite a lot of work, and it'll increase\n> the size of the builds quite a bit, but it at least doesn't have the\n> aforementioned problem.\n\nI still think that's a good idea - but I don't think it's gonna deal well with\nthe various avx512 \"features\". The landscape of what's supported on what CPU\nis so comically complicated that there's no way to build separate versions for\neverything.\n\n\n\nWe can hide most of the dispatch cost in a static inline function that only\ndoes the runtime test if size is large enough - the size is a compile time\nconstant most of the time, which optimizes away the dispatch cost most of the\ntime. And even if not, an inlined runtime branch is a *lot* cheaper than an\nindirect function call.\n\n\nI think this should be something *roughly* like this:\n\n\n/* shared between all cpu-type dependent code */\n\n#define PGCPUCAP_INIT\t\t(1 << 0)\n#define PGCPUCAP_POPCNT\t\t(1 << 1)\n#define PGCPUCAP_VPOPCNT\t(1 << 2)\n#define PGCPUCAP_CRC32C\t\t(1 << 3)\n...\n\nextern uint32 pg_cpucap; /* possibly an bool array would be better */\nextern void pg_cpucap_initialize(void);\n\n\n\nstatic inline int\npg_popcount64(uint64 word)\n{\n Assert(pg_cpucap & PGCPUCAP_INIT);\n\n#if defined(HAVE_POPCNT64_COMPILETIME) || defined(HAVE_POPCNT64_RUNTIME)\n\n#if defined(HAVE_POPCNT64_RUNTIME)\n if (pg_cpucap & PGCPUCAP_POPCNT64)\n#endif\n {\n return pg_popcount64_fast(buf, bytes);\n }\n /* fall through if not available */\n#else\n return pg_popcount64_slow(word);\n#endif\n}\n\n/* badly guessed */\n#define AVX512_THRESHOLD 64\n\nstatic inline uint64\npg_popcount(const char *buf, int bytes)\n{\n uint64 popcnt = 0;\n\n Assert(pg_cpucap & PGCPUCAP_INIT);\n\n /*\n * Most of the times `bytes` will be a compile time constant and the\n * branches below can be optimized out. 
Even if they can't, a branch or\n * three here is a lot cheaper than an indirect function call.\n */\n\n#if defined(HAVE_AVX512_VPOPCNT_COMPILETIME) || defined(HAVE_AVX512_VPOPCNT_RUNTIME)\n if (unlikely(bytes >= AVX512_THRESHOLD))\n {\n#if defined(HAVE_AVX512_VPOPCNT_RUNTIME)\n if (pg_cpucap & PGCPUCAP_VPOPCNT)\n#else\n {\n return pg_popcount_avx512(buf, bytes);\n }\n /* if not available we fall through to the unrolled implementation */\n#endif\n }\n#endif /* HAVE_AVX512_VPOPCNT_* */\n\n /* XXX: should probably be implemented in separate function */\n if (bytes > 8)\n {\n\twhile (bytes >= 8)\n {\n uint64 word;\n\n /*\n * Address could be unaligned, compiler will optimize this to a\n * plain [unaligned] load on most architectures.\n */\n memcpy(&word, buf, sizeof(uint64));\n\n /*\n * TODO: if compilers can't hoist the pg_cpucap check\n * automatically, we should do so \"manually\".\n */\n popcnt += pg_popcount64(word);\n\n buf += sizeof(uint64);\n bytes -= sizeof(uint64);\n }\n }\n\n /*\n * Handle tail, we can use the 64bit version by just loading the relevant\n * portion of the data into a wider register.\n */\n if (bytes > 0)\n {\n uint64 word = 0;\n\n Assert(bytes < 8);\n\n memcpy(&word, buf, bytes);\n\n popcnt += pg_popcount64_fast(word);\n }\n\n return popcnt;\n}\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 30 Jul 2024 19:39:18 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing default -march landscape" }, { "msg_contents": "On Tue, Jul 30, 2024 at 07:39:18PM -0700, Andres Freund wrote:\n> On 2024-06-12 20:09:45 -0500, Nathan Bossart wrote:\n>> The idea that's been floating around recently is to build a bunch of\n>> different versions of Postgres and to choose one on startup based on what\n>> the CPU supports. That seems like quite a lot of work, and it'll increase\n>> the size of the builds quite a bit, but it at least doesn't have the\n>> aforementioned problem.\n> \n> I still think that's a good idea - but I don't think it's gonna deal well with\n> the various avx512 \"features\". The landscape of what's supported on what CPU\n> is so comically complicated that there's no way to build separate versions for\n> everything.\n\nGood point.\n\n> We can hide most of the dispatch cost in a static inline function that only\n> does the runtime test if size is large enough - the size is a compile time\n> constant most of the time, which optimizes away the dispatch cost most of the\n> time. And even if not, an inlined runtime branch is a *lot* cheaper than an\n> indirect function call.\n\nI ended up doing precisely this for pg_popcount()/pg_popcount_masked(),\nalthough not quite as sophisticated as what you propose below. I'll look\ninto expanding on this strategy in v18.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 30 Jul 2024 21:59:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing default -march landscape" }, { "msg_contents": "Hi,\n\nOn 2024-07-30 21:59:44 -0500, Nathan Bossart wrote:\n> On Tue, Jul 30, 2024 at 07:39:18PM -0700, Andres Freund wrote:\n> > We can hide most of the dispatch cost in a static inline function that only\n> > does the runtime test if size is large enough - the size is a compile time\n> > constant most of the time, which optimizes away the dispatch cost most of the\n> > time. 
And even if not, an inlined runtime branch is a *lot* cheaper than an\n> > indirect function call.\n> \n> I ended up doing precisely this for pg_popcount()/pg_popcount_masked(),\n> although not quite as sophisticated as what you propose below. I'll look\n> into expanding on this strategy in v18.\n\nI think you subsume that under \"not quite as sophisticated\", but just to make\nclear: The most important bits are to\n\na) do the dispatch in a header, without an indirect function call\n\nb) implement intrinsic using stuff in a header if it's using a size argument\n or such, because that allows to compiler to optimize away size checks in\n the very common case of such an argument being a compile time constant.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Jul 2024 14:05:28 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing default -march landscape" } ]
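As a concrete version of the probe Thomas sketches at the top of the thread above (compile a test file with the chosen flags and grep the generated assembly), something along these lines could work; the file name follows his example, and the grep is only a heuristic, though the common fallback paths (inline bit-twiddling or a call to __popcountdi2) do not contain the string "popcnt".

    printf 'int test_builtin_popcount(unsigned long long x)\n{\n    return __builtin_popcountll(x);\n}\n' > test_builtin_popcount.c
    ${CC:-cc} ${CFLAGS} -O2 -S test_builtin_popcount.c -o test_builtin_popcount.s
    grep -qi popcnt test_builtin_popcount.s && echo "compiler emits POPCNT directly"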
[ { "msg_contents": "Hi All,\r\nFirst of all, I really appreciate the review of my patch. I've separated the patch into two: patch_support_tls1.3 for TLS 1.3 support and patch_support_curves_group for a set of curves list.\r\n\r\nTLS 1.3 support - patch_support_tls1.3\r\nI agree with Jelte that it's better to have different options for TLS 1.2 and lower (cipher list) and TLS 1.3 (cipher suites), since OpenSSL provides different APIs for each. As for users not familiar with TLS (they don't care about TLS), we can still keep the default value as described here https://www.postgresql.org/docs/devel/runtime-config-connection.html. If TLS is critical to them (they should have figured out the different options in TLS 1.2 and TLS 1.3), then they can set the values on demand. Moreover, we can add some description of these two options, e.g.:\r\nssl_ciphers               | HIGH:MEDIUM:+3DES:!aNULL | Sets the list of allowed SSL ciphers for TLS 1.2 and lower.\r\nssl_ciphers_suites        | HIGH:MEDIUM:+3DES:!aNULL | Sets the list of allowed SSL cipher suites for TLS 1.3.\r\n\r\nCurves groups - patch_support_curves_group\r\nIndeed, SSL_CTX_set1_curves_list is deprecated, so it's better to change to SSL_CTX_set1_groups_list instead.\r\n\r\nOriginal Email\r\n\r\nSender: \"Jelte Fennema-Nio\" <[email protected]>;\r\nSent Time: 2024/6/12 16:51\r\nTo: \"Daniel Gustafsson\" <[email protected]>;\r\nCc recipient: \"Erica Zhang\" <[email protected]>; \"Jacob Champion\" <[email protected]>; \"Peter Eisentraut\" <[email protected]>; \"pgsql-hackers\" <[email protected]>;\r\nSubject: Re: Add support to TLS 1.3 cipher suites and curves lists\r\n\r\nOn Mon, 10 Jun 2024 at 12:31, Daniel Gustafsson wrote:\r\n> Regarding the ciphersuites portion of the patch. I'm not particularly thrilled\r\n> about having a GUC for TLSv1.2 ciphers and one for TLSv1.3 ciphersuites, users\r\n> not all that familiar with TLS will likely find it confusing to figure out what\r\n> to do.\r\n\r\nI don't think it's easy to create a single GUC because OpenSSL has\r\ndifferent APIs for both. So we'd have to add some custom parsing for\r\nthe combined string, which is likely to cause some problems imho. I\r\nthink separating them is the best option from the options we have and\r\nI don't think it matters much practice for users. Users not familiar\r\nwith TLS might indeed be confused, but those users shouldn't touch\r\nthese settings anyway, and just use the defaults. 
The users that care\r\nabout this probably already get two cipher strings from their\r\ncompliance teams, because many other applications also have two\r\nseparate options for specifying both.", "msg_date": "Thu, 13 Jun 2024 14:34:27 +0800", "msg_from": "\"=?utf-8?B?RXJpY2EgWmhhbmc=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re:Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "Hi,\n\nThis thread was referenced by https://www.postgresql.org/message-id/48F0A1F8-E0B4-41F8-990F-41E6BA2A6185%40yesql.se\n\nOn 2024-06-13 14:34:27 +0800, Erica Zhang wrote:\n\n> diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c\n> index 39b1a66236..d097e81407 100644\n> --- a/src/backend/libpq/be-secure-openssl.c\n> +++ b/src/backend/libpq/be-secure-openssl.c\n> @@ -1402,30 +1402,30 @@ static bool\n> initialize_ecdh(SSL_CTX *context, bool isServerStart)\n> {\n> #ifndef OPENSSL_NO_ECDH\n> -\tEC_KEY\t *ecdh;\n> -\tint\t\t\tnid;\n> +\tchar *curve_list = strdup(SSLECDHCurve);\n\nISTM we'd want to eventually rename the GUC variable to indicate it's a list?\nI think the \"ecdh\" portion is actually not accurate anymore either, it's used\noutside of ecdh if I understand correctly (probably I am not)?\n\n\n> +\tchar *saveptr;\n> +\tchar *token = strtok_r(curve_list, \":\", &saveptr);\n> +\tint nid;\n> \n> -\tnid = OBJ_sn2nid(SSLECDHCurve);\n> -\tif (!nid)\n> +\twhile (token != NULL)\n\nIt'd be good to have a comment explaining why we're parsing the list ourselves\ninstead of using just the builtin SSL_CTX_set1_groups_list().\n\n> \t{\n> -\t\tereport(isServerStart ? FATAL : LOG,\n> +\t\tnid = OBJ_sn2nid(token);\n> +\t\tif (!nid)\n> +\t\t{ereport(isServerStart ? FATAL : LOG,\n> \t\t\t\t(errcode(ERRCODE_CONFIG_FILE_ERROR),\n> -\t\t\t\t errmsg(\"ECDH: unrecognized curve name: %s\", SSLECDHCurve)));\n> +\t\t\t\t errmsg(\"ECDH: unrecognized curve name: %s\", token)));\n> \t\treturn false;\n> +\t\t}\n> +\t\ttoken = strtok_r(NULL, \":\", &saveptr);\n> \t}\n> \n> -\tecdh = EC_KEY_new_by_curve_name(nid);\n> -\tif (!ecdh)\n> +\tif(SSL_CTX_set1_groups_list(context, SSLECDHCurve) !=1)\n> \t{\n> \t\tereport(isServerStart ? FATAL : LOG,\n> \t\t\t\t(errcode(ERRCODE_CONFIG_FILE_ERROR),\n> -\t\t\t\t errmsg(\"ECDH: could not create key\")));\n> +\t\t\t\t errmsg(\"ECDH: failed to set curve names\")));\n\nProbably worth including the value of the GUC here?\n\n\n\nThis also needs to change the documentation for the GUC.\n\n\n\nOnce we have this parameter we probably should add at least x25519 to the\nallowed list, as that's the client side default these days.\n\nBut that can be done in a separate patch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:48:22 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" } ]
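To make the shape of the proposal concrete, a hypothetical postgresql.conf fragment matching the split-GUC approach discussed above; ssl_ciphers_suites is only the name floated in this thread (not a released parameter), and the colon-separated group list is an arbitrary example of what the reviewed patch would allow.

    ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'                              # TLSv1.2 and lower (OpenSSL cipher list)
    ssl_ciphers_suites = 'TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256'  # TLSv1.3 (OpenSSL ciphersuites)
    ssl_ecdh_curve = 'X25519:prime256v1:secp384r1'                        # colon-separated group list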
[ { "msg_contents": "Hi Jelte and Daniel,\r\n\r\n\r\nBased on my understanding currently there is no setting that controls the cipher choices used by TLS version 1.3 connections but the default value(HIGH:MEDIUM:+3DES:!aNULL) is used. So if I want to connect to Postgres (eg. Postgres 14) with different TLS versions of customized ciphers instead of default one like below:\r\n\r\n\r\neg.&nbsp;\r\n\r\nTLS1.2 of ciphers \r\nECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:AES256-SHA:AES128-SHA\r\n\r\nTLS1.3 of ciphers\r\nTLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256\r\n\r\nFor TLS1.2 connection, we can set the configuration in postgresql.conf as:\r\nssl_ciphers = 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:AES256-SHA:AES128-SHA'\r\nHow can I achieve the value for TLS1.3? Do you mean I can set the Ciphersuites in openssl.conf, then Postgres will pick up and use this value accordingly?\r\n\r\neg. I can run below command to set ciphersuites of TLS1.3 on my appliance:\r\nopenssl&nbsp;ciphers -ciphersuites TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256\r\n\r\nthen Postgres will use 'TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256\" as ciphers for TLS1.3 connection?\r\nThanks,\r\nErica Zhang\r\n\r\n\r\n\r\n\r\n\r\n\r\n \r\nOriginal Email\r\n \r\n \r\n\r\nSender:\"Jelte Fennema-Nio\"< [email protected] &gt;;\r\n\r\nSent Time:2024/6/12 16:51\r\n\r\nTo:\"Erica Zhang\"< [email protected] &gt;;\r\n\r\nCc recipient:\"Michael Paquier\"< [email protected] &gt;;\"Peter Eisentraut\"< [email protected] &gt;;\"pgsql-hackers\"< [email protected] &gt;;\r\n\r\nSubject:Re: Re: Re: Add support to TLS 1.3 cipher suites and curves lists\r\n\r\n\r\nOn Wed, 12 Jun 2024 at 04:32, Erica Zhang wrote:\r\n&gt; There are certain government, financial and other enterprise organizations that have very strict requirements about the encrypted communication and more specifically about fine grained params like the TLS ciphers and curves that they use. The default ones for those customers are not acceptable. Any products that integrate Postgres and requires encrypted communication with the Postgres would have to fulfil those requirements.\r\n\r\nYeah, I ran into such requirements before too. So I do think it makes\r\nsense to have such a feature in Postgres.\r\n\r\n&gt; So if we can have this patch in the upcoming new major version, that means Postgres users who have similar requirements can upgrade to PG17.\r\n\r\nAs Daniel mentioned you can already achieve the same using the\r\n\"Ciphersuites\" directive in openssl.conf. Also you could of course\r\nalways disable TLSv1.3 support.\nHi Jelte and Daniel,Based on my understanding currently there is no setting that controls the cipher choices used by TLS version 1.3 connections but the default value(HIGH:MEDIUM:+3DES:!aNULL) is used. So if I want to connect to Postgres (eg. Postgres 14) with different TLS versions of customized ciphers instead of default one like below:eg. 
TLS1.2 of ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:AES256-SHA:AES128-SHATLS1.3 of ciphersTLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256For TLS1.2 connection, we can set the configuration in postgresql.conf as:ssl_ciphers = 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:AES256-SHA:AES128-SHA'How can I achieve the value for TLS1.3? Do you mean I can set the Ciphersuites in openssl.conf, then Postgres will pick up and use this value accordingly?eg. I can run below command to set ciphersuites of TLS1.3 on my appliance:openssl ciphers -ciphersuites TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256then Postgres will use 'TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256\" as ciphers for TLS1.3 connection?Thanks,Erica Zhang\nOriginal Email\n\nSender:\"Jelte Fennema-Nio\"< [email protected] >;Sent Time:2024/6/12 16:51To:\"Erica Zhang\"< [email protected] >;Cc recipient:\"Michael Paquier\"< [email protected] >;\"Peter Eisentraut\"< [email protected] >;\"pgsql-hackers\"< [email protected] >;Subject:Re: Re: Re: Add support to TLS 1.3 cipher suites and curves listsOn Wed, 12 Jun 2024 at 04:32, Erica Zhang wrote:> There are certain government, financial and other enterprise organizations that have very strict requirements about the encrypted communication and more specifically about fine grained params like the TLS ciphers and curves that they use. The default ones for those customers are not acceptable. Any products that integrate Postgres and requires encrypted communication with the Postgres would have to fulfil those requirements.Yeah, I ran into such requirements before too. So I do think it makessense to have such a feature in Postgres.> So if we can have this patch in the upcoming new major version, that means Postgres users who have similar requirements can upgrade to PG17.As Daniel mentioned you can already achieve the same using the\"Ciphersuites\" directive in openssl.conf. Also you could of coursealways disable TLSv1.3 support.", "msg_date": "Thu, 13 Jun 2024 15:07:38 +0800", "msg_from": "\"=?utf-8?B?RXJpY2EgWmhhbmc=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re:Re: Re: Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "> On 13 Jun 2024, at 09:07, Erica Zhang <[email protected]> wrote:\n\n> How can I achieve the value for TLS1.3? Do you mean I can set the Ciphersuites in openssl.conf, then Postgres will pick up and use this value accordingly?\n\nYes, you should be able to restrict the ciphersuites for TLSv1.3 with\nopenssl.conf on your system.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 13:56:22 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" } ]
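For readers following Daniel's suggestion above: on an OpenSSL 1.1.1 or newer installation, the usual way to pin the TLSv1.3 suites system-wide is the system default section of openssl.cnf, which libssl loads for every application (PostgreSQL included) unless the build disables it. The fragment below is an illustration using the stock section names, not something taken from this thread; whether it takes effect depends on the OpenSSL build and on the openssl_conf line appearing before any section header.

    # openssl.cnf -- system-wide defaults (illustrative)
    openssl_conf = default_conf

    [default_conf]
    ssl_conf = ssl_sect

    [ssl_sect]
    system_default = system_default_sect

    [system_default_sect]
    Ciphersuites = TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256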
[ { "msg_contents": "Hi all,\n\nWhile looking at ways to make pg_stat_statements more scalable and\ndynamically manageable (no more PGC_POSTMASTER for the max number of\nentries), which came out as using a dshash, Andres has mentioned me\noff-list (on twitter/X) that we'd better plug in it to the shmem\npgstats facility, moving the text file that holds the query strings\ninto memory (with size restrictions for the query strings, for\nexample). This has challenges on its own (query ID is 8 bytes\nincompatible with the dboid/objid hash key used by pgstats, discard of\nentries when maximum). Anyway, this won't happen if we don't do one\nof these two things: \n1) Move pg_stat_statements into core, adapting pgstats for its\nrequirements.\n2) Make the shmem pgstats pluggable so as it is possible for extensions\nto register their own stats kinds.\n\n1) may have its advantages, still I am not sure if we want to do that.\nAnd 2) is actually something that can be used for more things than\njust pg_stat_statements, because people love extensions and\nstatistics (spoiler: I do). The idea is simple: any extension\ndefining a custom stats kind would be able to rely on all the in-core\nfacilities we use for the existing in-core kinds:\na) Snapshotting and caching of the stats, via stats_fetch_consistency.\nb) Native handling and persistency of the custom stats data.\nc) Reuse stats after a crash, pointing at this comment in xlog.c:\n * TODO: With a bit of extra work we could just start with a pgstat file\n * associated with the checkpoint redo location we're starting from.\nThis means that we always remove the stats after a crash. That's\nsomething I have a patch for, not for this thread, but the idea is\nthat custom stats would also benefit from this property.\n\nThe implementation is based on the following ideas:\n\n* A structure in shared memory that tracks the IDs of the custom stats\nkinds with their names. These are incremented starting from\nPGSTAT_KIND_LAST.\n\n* Processes use a local array cache that keeps tracks of all the\ncustom PgStat_KindInfos, indexed by (kind_id - PGSTAT_KIND_LAST).\n\n* The kind IDs may change across restarts, meaning that any stats data \nassociated to a custom kind is stored with the *name* of the custom\nstats kind. Depending on the discussion happening here, I'd be open\nto use the same concept as custom RMGRs, where custom kind IDs are\n\"reserved\", fixed in time, and tracked in the Postgres wiki. It is\ncheaper to store the stats this way, as well, while managing conflicts\nacross extensions available in the community ecosystem.\n\n* Custom stats can be added without shared_preload_libraries,\nloading them from a shmem startup hook with shared_preload_libraries\nis also possible.\n\n* The shmem pgstats defines two types of statistics: the ones in a\ndshash and what's called a \"fixed\" type like for archiver, WAL, etc.\npointing to areas of shared memory. All the fixed types are linked to \nstructures for snapshotting and shmem tracking. As a matter of\nsimplification and because I could not really see a case where I'd\nwant to plug in a fixed stats kind, the patch forbids this case. 
This\ncase could be allowed, but I'd rather refactor the structures of\npgstat_internal.h so as we don't have traces of the \"fixed\" stats\nstructures in so many areas.\n\n* Making custom stats data persistent is an interesting problem, and\nthere are a couple of approaches I've considered:\n** Allow custom kinds to define callbacks to read and write data from\na source they'd want, like their own file through a fd. This has the\ndisadvantage to remove the benefit of c) above.\n** Store everything in the existing stats file, adding one type of\nentry like 'S' and 'N' with a \"custom\" type, where the *name* of the\ncustom stats kind is stored instead of its ID computed from shared\nmemory.\nA mix of both? The patch attached has used the second approach. If\nthe process reading/writing the stats does not know about the custom\nstats data, the data is discarded.\n\n* pgstat.c has a big array called pgstat_kind_infos to define all the\nexisting stats kinds. Perhaps the code should be refactored to use\nthis new API? That would make the code more consistent with what we\ndo for resource managers, for one, while moving the KindInfos into\ntheir own file. With that in mind, storing the kind ID in KindInfos\nfeels intuitive.\n\nWhile thinking about a use case to show what these APIs can do, I have\ndecided to add statistics to the existing module injection_points\nrather than implement a new test module, gathering data about them and\nhave tests that could use this data (like tracking the number of times\na point is taken). This is simple enough that it can be used as a\ntemplate, as well. There is a TAP test checking the data persistence\nacross restarts, so I did not mess up this part much, hopefully.\n\nPlease find attached a patch set implementing these ideas:\n- 0001 switches PgStat_Kind from an enum to a uint32, for the internal\ncounters.\n- 0002 is some cleanup for the hardcoded S, N and E in pgstat.c.\n- 0003 introduces the backend-side APIs, with the shmem table counter\nand the routine to give code paths a way to register their own stats\nkind (see pgstat_add_kind).\n- 0004 implements an example of how to use these APIs, see\ninjection_stats.c in src/test/modules/injection_points/.\n- 0005 adds some docs.\n- 0006 is an idea of how to make this custom stats data persistent.\n\nThis will hopefully spark a discussion, and I was looking for answers\nregarding these questions:\n- Should the pgstat_kind_infos array in pgstat.c be refactored to use\nsomething similar to pgstat_add_kind?\n- How should the persistence of the custom stats be achieved?\nCallbacks to give custom stats kinds a way to write/read their data,\npush everything into a single file, or support both?\n- Should this do like custom RMGRs and assign to each stats kinds ID\nthat are set in stone rather than dynamic ones?\n\nThanks for reading.\n--\nMichael", "msg_date": "Thu, 13 Jun 2024 16:59:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Pluggable cumulative statistics" }, { "msg_contents": "On Thu, Jun 13, 2024 at 04:59:50PM +0900, Michael Paquier wrote:\n> - How should the persistence of the custom stats be achieved?\n> Callbacks to give custom stats kinds a way to write/read their data,\n> push everything into a single file, or support both?\n> - Should this do like custom RMGRs and assign to each stats kinds ID\n> that are set in stone rather than dynamic ones?\n\nThese two questions have been itching me in terms of how it would work\nfor extension developers, after 
noticing that custom RMGRs are used\nmore than I thought:\nhttps://wiki.postgresql.org/wiki/CustomWALResourceManagers\n\nThe result is proving to be nicer, shorter by 300 lines in total and\nmuch simpler when it comes to think about the way stats are flushed\nbecause it is possible to achieve the same result as the first patch\nset without manipulating any of the code paths doing the read and\nwrite of the pgstats file.\n\nIn terms of implementation, pgstat.c's KindInfo data is divided into\ntwo parts, for efficiency:\n- The exiting in-core stats with designated initializers, renamed as\nbuilt-in stats kinds.\n- The custom stats kinds are saved in TopMemoryContext, and can only\nbe registered with shared_preload_libraries. The patch reserves a set\nof 128 harcoded slots for all the custom kinds making the lookups for\nthe KindInfos quite cheap. Upon registration, a custom stats kind\nneeds to assign a unique ID, with uniqueness on the names and IDs\nchecked at registration.\n\nThe backend code does ID -> information lookups in the hotter paths,\nmeaning that the code only checks if an ID is built-in or custom, then\nredirects to the correct array where the information is stored.\nThere is one code path that does a name -> information lookup for the\nundocumented SQL function pg_stat_have_stats() used in the tests,\nwhich is a bit less efficient now, but that does not strike me as an\nissue.\n\nmodules/injection_points/ works as previously as a template to show\nhow to use these APIs, with tests for the whole.\n\nWith that in mind, the patch set is more pleasant to the eye, and the\nattached v2 consists of:\n- 0001 and 0002 are some cleanups, same as previously to prepare for\nthe backend-side APIs.\n- 0003 adds the backend support to plug-in custom stats.\n- 0004 includes documentation.\n- 0005 is an example of how to use them, with a TAP test providing\ncoverage.\n\nNote that the patch I've proposed to make stats persistent at\ncheckpoint so as we don't discard everything after a crash is able to\nwork with the custom stats proposed on this thread:\nhttps://commitfest.postgresql.org/48/5047/\n--\nMichael", "msg_date": "Thu, 20 Jun 2024 09:46:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 13, 2024 at 04:59:50PM +0900, Michael Paquier wrote:\n> Hi all,\n> \n> 2) Make the shmem pgstats pluggable so as it is possible for extensions\n> to register their own stats kinds.\n\nThanks for the patch! I like the idea of having custom stats (it has also been\nsomehow mentioned in [1]).\n\n> 2) is actually something that can be used for more things than\n> just pg_stat_statements, because people love extensions and\n> statistics (spoiler: I do).\n\n+1\n\n> * Making custom stats data persistent is an interesting problem, and\n> there are a couple of approaches I've considered:\n> ** Allow custom kinds to define callbacks to read and write data from\n> a source they'd want, like their own file through a fd. This has the\n> disadvantage to remove the benefit of c) above.\n> ** Store everything in the existing stats file, adding one type of\n> entry like 'S' and 'N' with a \"custom\" type, where the *name* of the\n> custom stats kind is stored instead of its ID computed from shared\n> memory.\n\nWhat about having 2 files?\n\n- One is the existing stats file\n- One \"predefined\" for all the custom stats (so what you've done minus the\nin-core stats). 
This one would not be configurable and the extensions will\nnot need to know about it.\n\nWould that remove the benefit from c) that you mentioned up-thread?\n\n[1]: https://www.postgresql.org/message-id/20220818195124.c7ipzf6c5v7vxymc%40awork3.anarazel.de\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 20 Jun 2024 13:05:42 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 20, 2024 at 09:46:30AM +0900, Michael Paquier wrote:\n> On Thu, Jun 13, 2024 at 04:59:50PM +0900, Michael Paquier wrote:\n> > - How should the persistence of the custom stats be achieved?\n> > Callbacks to give custom stats kinds a way to write/read their data,\n> > push everything into a single file, or support both?\n> > - Should this do like custom RMGRs and assign to each stats kinds ID\n> > that are set in stone rather than dynamic ones?\n\n> These two questions have been itching me in terms of how it would work\n> for extension developers, after noticing that custom RMGRs are used\n> more than I thought:\n> https://wiki.postgresql.org/wiki/CustomWALResourceManagers\n> \n> The result is proving to be nicer, shorter by 300 lines in total and\n> much simpler when it comes to think about the way stats are flushed\n> because it is possible to achieve the same result as the first patch\n> set without manipulating any of the code paths doing the read and\n> write of the pgstats file.\n\nI think it makes sense to follow the same \"behavior\" as the custom\nwal resource managers. That, indeed, looks much more simpler than v1.\n\n> In terms of implementation, pgstat.c's KindInfo data is divided into\n> two parts, for efficiency:\n> - The exiting in-core stats with designated initializers, renamed as\n> built-in stats kinds.\n> - The custom stats kinds are saved in TopMemoryContext,\n\nAgree that a backend lifetime memory area is fine for that purpose.\n\n> and can only\n> be registered with shared_preload_libraries. The patch reserves a set\n> of 128 harcoded slots for all the custom kinds making the lookups for\n> the KindInfos quite cheap.\n\n+ MemoryContextAllocZero(TopMemoryContext,\n+ sizeof(PgStat_KindInfo *) * PGSTAT_KIND_CUSTOM_SIZE);\n\nand that's only 8 * PGSTAT_KIND_CUSTOM_SIZE bytes in total.\n\nI had a quick look at the patches (have in mind to do more):\n\n> With that in mind, the patch set is more pleasant to the eye, and the\n> attached v2 consists of:\n> - 0001 and 0002 are some cleanups, same as previously to prepare for\n> the backend-side APIs.\n\n0001 and 0002 look pretty straightforward at a quick look.\n\n> - 0003 adds the backend support to plug-in custom stats.\n\n1 ===\n\nIt looks to me that there is a mix of \"in core\" and \"built-in\" to name the\nnon custom stats. 
Maybe it's worth to just use one?\n \nAs I can see (and as you said above) this is mainly inspired by the custom\nresource manager and 2 === and 3 === are probably copy/paste consequences.\n\n2 ===\n\n+ if (pgstat_kind_custom_infos[idx] != NULL &&\n+ pgstat_kind_custom_infos[idx]->name != NULL)\n+ ereport(ERROR,\n+ (errmsg(\"failed to register custom cumulative statistics \\\"%s\\\" with ID %u\", kind_info->name, kind),\n+ errdetail(\"Custom resource manager \\\"%s\\\" already registered with the same ID.\",\n+ pgstat_kind_custom_infos[idx]->name)));\n\ns/Custom resource manager/Custom cumulative statistics/\n\n3 ===\n\n+ ereport(LOG,\n+ (errmsg(\"registered custom resource manager \\\"%s\\\" with ID %u\",\n+ kind_info->name, kind)));\n\ns/custom resource manager/custom cumulative statistics/\n\n> - 0004 includes documentation.\n\nDid not look yet.\n\n> - 0005 is an example of how to use them, with a TAP test providing\n> coverage.\n\nDid not look yet.\n\nAs I said, I've in mind to do a more in depth review. I've noted the above while\ndoing a quick read of the patches so thought it makes sense to share them\nnow while at it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 20 Jun 2024 14:27:14 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Thu, Jun 20, 2024 at 01:05:42PM +0000, Bertrand Drouvot wrote:\n> On Thu, Jun 13, 2024 at 04:59:50PM +0900, Michael Paquier wrote:\n>> * Making custom stats data persistent is an interesting problem, and\n>> there are a couple of approaches I've considered:\n>> ** Allow custom kinds to define callbacks to read and write data from\n>> a source they'd want, like their own file through a fd. This has the\n>> disadvantage to remove the benefit of c) above.\n>> ** Store everything in the existing stats file, adding one type of\n>> entry like 'S' and 'N' with a \"custom\" type, where the *name* of the\n>> custom stats kind is stored instead of its ID computed from shared\n>> memory.\n> \n> What about having 2 files?\n> \n> - One is the existing stats file\n> - One \"predefined\" for all the custom stats (so what you've done minus the\n> in-core stats). This one would not be configurable and the extensions will\n> not need to know about it.\n\nAnother thing that can be done here is to add a few callbacks to\ncontrol how an entry should be written out when the dshash is scanned\nor read when the dshash is populated depending on the KindInfo.\nThat's not really complicated to do as the populate part could have a\ncleanup phase if an error is found. I just did not do it yet because\nthis patch set is already covering a lot, just to get the basics in.\n\n> Would that remove the benefit from c) that you mentioned up-thread?\n\nYes, that can be slightly annoying. Splitting the stats across\nmultiple files would mean that each stats file would have to store the\nredo LSN. That's not really complicated to implement, but really easy\nto miss. Perhaps folks implementing their own stats kinds would be\naware anyway because we are going to need a callback to initialize the\nfile to write if we do that, and the redo LSN should be provided in\ninput of it. 
Giving more control to extension developers here would\nbe OK for me, especially since they could use their own format for\ntheir output file(s).\n--\nMichael", "msg_date": "Fri, 21 Jun 2024 08:08:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Thu, Jun 20, 2024 at 02:27:14PM +0000, Bertrand Drouvot wrote:\n> On Thu, Jun 20, 2024 at 09:46:30AM +0900, Michael Paquier wrote:\n> I think it makes sense to follow the same \"behavior\" as the custom\n> wal resource managers. That, indeed, looks much more simpler than v1.\n\nThanks for the feedback.\n\n>> and can only\n>> be registered with shared_preload_libraries. The patch reserves a set\n>> of 128 harcoded slots for all the custom kinds making the lookups for\n>> the KindInfos quite cheap.\n> \n> + MemoryContextAllocZero(TopMemoryContext,\n> + sizeof(PgStat_KindInfo *) * PGSTAT_KIND_CUSTOM_SIZE);\n> \n> and that's only 8 * PGSTAT_KIND_CUSTOM_SIZE bytes in total.\n\nEnlarging that does not worry me much. Just not too much.\n\n>> With that in mind, the patch set is more pleasant to the eye, and the\n>> attached v2 consists of:\n>> - 0001 and 0002 are some cleanups, same as previously to prepare for\n>> the backend-side APIs.\n> \n> 0001 and 0002 look pretty straightforward at a quick look.\n\n0002 is quite independentn. Still, 0001 depends a bit on the rest.\nAnyway, the Kind is already 4 bytes and it cleans up some APIs that\nused int for the Kind, so enforcing signedness is just cleaner IMO.\n\n>> - 0003 adds the backend support to plug-in custom stats.\n> \n> 1 ===\n> \n> It looks to me that there is a mix of \"in core\" and \"built-in\" to name the\n> non custom stats. Maybe it's worth to just use one?\n\nRight. Perhaps better to remove \"in core\" and stick to \"builtin\", as\nI've used the latter for the variables and such.\n\n> As I can see (and as you said above) this is mainly inspired by the custom\n> resource manager and 2 === and 3 === are probably copy/paste consequences.\n> \n> 2 ===\n> \n> + if (pgstat_kind_custom_infos[idx] != NULL &&\n> + pgstat_kind_custom_infos[idx]->name != NULL)\n> + ereport(ERROR,\n> + (errmsg(\"failed to register custom cumulative statistics \\\"%s\\\" with ID %u\", kind_info->name, kind),\n> + errdetail(\"Custom resource manager \\\"%s\\\" already registered with the same ID.\",\n> + pgstat_kind_custom_infos[idx]->name)));\n> \n> s/Custom resource manager/Custom cumulative statistics/\n> \n> 3 ===\n> \n> + ereport(LOG,\n> + (errmsg(\"registered custom resource manager \\\"%s\\\" with ID %u\",\n> + kind_info->name, kind)));\n> \n> s/custom resource manager/custom cumulative statistics/\n\nOops. Will fix.\n--\nMichael", "msg_date": "Fri, 21 Jun 2024 08:13:15 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "At Thu, 13 Jun 2024 16:59:50 +0900, Michael Paquier <[email protected]> wrote in \n> * The kind IDs may change across restarts, meaning that any stats data \n> associated to a custom kind is stored with the *name* of the custom\n> stats kind. Depending on the discussion happening here, I'd be open\n> to use the same concept as custom RMGRs, where custom kind IDs are\n> \"reserved\", fixed in time, and tracked in the Postgres wiki. 
It is\n> cheaper to store the stats this way, as well, while managing conflicts\n> across extensions available in the community ecosystem.\n\nI prefer to avoid having a central database if possible.\n\nIf we don't intend to move stats data alone out of a cluster for use\nin another one, can't we store the relationship between stats names\nand numeric IDs (or index numbers) in a separate file, which is loaded\njust before and synced just after extension preloading finishes?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 21 Jun 2024 13:09:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Fri, Jun 21, 2024 at 01:09:10PM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 13 Jun 2024 16:59:50 +0900, Michael Paquier <[email protected]> wrote in \n>> * The kind IDs may change across restarts, meaning that any stats data \n>> associated to a custom kind is stored with the *name* of the custom\n>> stats kind. Depending on the discussion happening here, I'd be open\n>> to use the same concept as custom RMGRs, where custom kind IDs are\n>> \"reserved\", fixed in time, and tracked in the Postgres wiki. It is\n>> cheaper to store the stats this way, as well, while managing conflicts\n>> across extensions available in the community ecosystem.\n> \n> I prefer to avoid having a central database if possible.\n\nI was thinking the same originally, but the experience with custom\nRMGRs has made me change my mind. There are more of these than I\nthought originally:\nhttps://wiki.postgresql.org/wiki/CustomWALResourceManagers\n\n> If we don't intend to move stats data alone out of a cluster for use\n> in another one, can't we store the relationship between stats names\n> and numeric IDs (or index numbers) in a separate file, which is loaded\n> just before and synced just after extension preloading finishes?\n\nYeah, I've implemented a prototype that does exactly something like\nthat with a restriction on the stats name to NAMEDATALEN, except that\nI've added the kind ID <-> kind name mapping at the beginning of the\nmain stats file. At the end, it still felt weird and over-engineered\nto me, like the v1 prototype of upthread, because we finish with a\nstrange mix when reloading the dshash where the builtin ID are handled\nwith fixed values, with more code paths required when doing the\nserialize callback dance for stats kinds like replication slots,\nbecause the custom kinds need to update their hash keys to the new\nvalues based on the ID/name mapping stored at the beginning of the\nfile itself.\n\nThe equation complicates itself a bit more once you'd try to add more\nways to write some stats kinds to other places, depending on what a\ncustom kind wants to achieve. 
I can see the benefits of both\napproaches, still fixing the IDs in time leads to a lot of simplicity\nin this infra, which is very appealing on its own before tackling the\nnext issues where I would rely on the proposed APIs.\n--\nMichael", "msg_date": "Fri, 21 Jun 2024 13:28:11 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Fri, Jun 21, 2024 at 08:13:15AM +0900, Michael Paquier wrote:\n> On Thu, Jun 20, 2024 at 02:27:14PM +0000, Bertrand Drouvot wrote:\n>> On Thu, Jun 20, 2024 at 09:46:30AM +0900, Michael Paquier wrote:\n>> I think it makes sense to follow the same \"behavior\" as the custom\n>> wal resource managers. That, indeed, looks much more simpler than v1.\n> \n> Thanks for the feedback.\n\nWhile looking at a different patch from Tristan in this area at [1], I\nstill got annoyed that this patch set was not able to support the case\nof custom fixed-numbered stats, so as it is possible to plug in\npgstats things similar to the archiver, the checkpointer, WAL, etc.\nThese are plugged in shared memory, and are handled with copies in the\nstats snapshots. After a good night of sleep, I have come up with a\ngood solution for that, among the following lines:\n- PgStat_ShmemControl holds an array of void* indexed by\nPGSTAT_NUM_KINDS, pointing to shared memory areas allocated for each\nfixed-numbered stats. Each entry is allocated a size corresponding to\nPgStat_KindInfo->shared_size.\n- PgStat_Snapshot holds an array of void* also indexed by\nPGSTAT_NUM_KINDS, pointing to the fixed stats stored in the\nsnapshots. These have a size of PgStat_KindInfo->shared_data_len, set\nup when stats are initialized at process startup, so this reflects\neverywhere.\n- Fixed numbered stats now set shared_size, and we use this number to\ndetermine the size to allocate for each fixed-numbered stats in shmem.\n- A callback is added to initialize the shared memory assigned to each\nfixed-numbered stats, consisting of LWLock initializations for the\ncurrent types of stats. So this initialization step is moved out of\npgstat.c into each stats kind file.\n\nAll that has been done in the rebased patch set as of 0001, which is\nkind of a nice cleanup overall because it removes all the dependencies\nto the fixed-numbered stats structures from the \"main\" pgstats code in\npgstat.c and pgstat_shmem.c.\n\nThe remaining patches consist of:\n- 0002, Switch PgStat_Kind to a uint32. Cleanup.\n- 0003 introduces the pluggable stats facility. Feeding on the\nrefactoring for the fixed-numbered stats in 0001, it is actually\npossible to get support for these in the pluggable APIs by just\nremoving the restriction in the registration path. This extends the\nvoid* arrays to store references that cover the range of custom kind\nIDs.\n- 0004 has some docs.\n- 0005 includes an example of implementation for variable-numbered\nstats with the injection_points module.\n- 0006 is new for this thread, implementing an example for\nfixed-numbered stats, using again the injection_points module. This\nstuff gathers stats about the number of times points are run, attached\nand detached. 
Perhaps that's useful in itself, I don't know, but it\nprovides the coverage I want for this facility.\n\nWhile on it, I have applied one of the cleanup patches as\n9fd02525793f.\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Wed, 3 Jul 2024 18:47:15 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On 6/13/24 14:59, Michael Paquier wrote:\n> This will hopefully spark a discussion, and I was looking for answers\n> regarding these questions:\n> - Should the pgstat_kind_infos array in pgstat.c be refactored to use\n> something similar to pgstat_add_kind?\n> - How should the persistence of the custom stats be achieved?\n> Callbacks to give custom stats kinds a way to write/read their data,\n> push everything into a single file, or support both?\n> - Should this do like custom RMGRs and assign to each stats kinds ID\n> that are set in stone rather than dynamic ones?\nIt is a feature my extensions (which usually change planning behaviour) \ndefinitely need. It is a problem to show the user if the extension does \nsomething or not because TPS smooths the execution time of a single \nquery and performance cliffs.\nBTW, we have 'labelled DSM segments', which allowed extensions to be \n'lightweight' - not necessarily be loaded on startup, stay backend-local \nand utilise shared resources. It was a tremendous win for me. Is it \npossible to design this extension in the same way?\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Thu, 4 Jul 2024 10:11:02 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Thu, Jul 04, 2024 at 10:11:02AM +0700, Andrei Lepikhov wrote:\n> It is a feature my extensions (which usually change planning behaviour)\n> definitely need. It is a problem to show the user if the extension does\n> something or not because TPS smooths the execution time of a single query\n> and performance cliffs.\n\nYeah, I can get that. pgstat.c is quite good regarding that as it\ndelays stats flushes until commit by holding pending entries (see the\npgStatPending business for variable-size stats). Custom stats kinds\nregistered would just rely on these facilities, including snapshot\nAPIs, etc.\n\n> BTW, we have 'labelled DSM segments', which allowed extensions to be\n> 'lightweight' - not necessarily be loaded on startup, stay backend-local and\n> utilise shared resources. It was a tremendous win for me.\n>\n> Is it possible to design this extension in the same way?\n\nI am not sure how this would be useful when it comes to cumulative\nstatistics, TBH. These stats are global by design, and especially\nsince these most likely need to be flushed at shutdown (as of HEAD)\nand read at startup, the simplest way to achieve that to let the\ncheckpointer and the startup process know about them is to restrict\nthe registration of custom stats types via _PG_init when loading\nshared libraries. That's what we do for custom WAL RMGRs, for\nexample.\n\nI would not be against a new flag in KindInfo to state that a given\nstats type should not be flushed, as much as a set of callbacks that\noffers the possibility to redirect some stats kinds to somewhere else\nthan pgstat.stat, like pg_stat_statements. 
That would be a separate\npatch than what's proposed here.\n--\nMichael", "msg_date": "Thu, 4 Jul 2024 13:25:14 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn Wed, Jul 03, 2024 at 06:47:15PM +0900, Michael Paquier wrote:\n> While looking at a different patch from Tristan in this area at [1], I\n> still got annoyed that this patch set was not able to support the case\n> of custom fixed-numbered stats, so as it is possible to plug in\n> pgstats things similar to the archiver, the checkpointer, WAL, etc.\n> These are plugged in shared memory, and are handled with copies in the\n> stats snapshots. After a good night of sleep, I have come up with a\n> good solution for that,\n\nGreat!\n\n> among the following lines:\n> - PgStat_ShmemControl holds an array of void* indexed by\n> PGSTAT_NUM_KINDS, pointing to shared memory areas allocated for each\n> fixed-numbered stats. Each entry is allocated a size corresponding to\n> PgStat_KindInfo->shared_size.\n\nThat makes sense to me, and that's just a 96 bytes overhead (8 * PGSTAT_NUM_KINDS)\nas compared to now.\n\n> - PgStat_Snapshot holds an array of void* also indexed by\n> PGSTAT_NUM_KINDS, pointing to the fixed stats stored in the\n> snapshots.\n\nSame, that's just a 96 bytes overhead (8 * PGSTAT_NUM_KINDS) as compared to now.\n\n> These have a size of PgStat_KindInfo->shared_data_len, set\n> up when stats are initialized at process startup, so this reflects\n> everywhere.\n\nYeah.\n\n> - Fixed numbered stats now set shared_size, and we use this number to\n> determine the size to allocate for each fixed-numbered stats in shmem.\n> - A callback is added to initialize the shared memory assigned to each\n> fixed-numbered stats, consisting of LWLock initializations for the\n> current types of stats. So this initialization step is moved out of\n> pgstat.c into each stats kind file.\n\nThat looks a reasonable approach to me.\n\n> All that has been done in the rebased patch set as of 0001, which is\n> kind of a nice cleanup overall because it removes all the dependencies\n> to the fixed-numbered stats structures from the \"main\" pgstats code in\n> pgstat.c and pgstat_shmem.c.\n\nLooking at 0001:\n\n1 ===\n\nIn the commit message:\n\n - Fixed numbered stats now set shared_size, so as\n\nIs something missing in that sentence?\n\n2 ===\n\n@@ -425,14 +427,12 @@ typedef struct PgStat_ShmemControl\n pg_atomic_uint64 gc_request_count;\n\n /*\n- * Stats data for fixed-numbered objects.\n+ * Stats data for fixed-numbered objects, indexed by PgStat_Kind.\n+ *\n+ * Each entry has a size of PgStat_KindInfo->shared_size.\n */\n- PgStatShared_Archiver archiver;\n- PgStatShared_BgWriter bgwriter;\n- PgStatShared_Checkpointer checkpointer;\n- PgStatShared_IO io;\n- PgStatShared_SLRU slru;\n- PgStatShared_Wal wal;\n+ void *fixed_data[PGSTAT_NUM_KINDS];\n\nCan we move from PGSTAT_NUM_KINDS to the exact number of fixed stats? (add\na new define PGSTAT_NUM_FIXED_KINDS for example). That's not a big deal but we\nare allocating some space for pointers that we won't use. Would need to change\nthe \"indexing\" logic though.\n\n3 ===\n\nSame as 2 === but for PgStat_Snapshot.\n\n4 ===\n\n+static void pgstat_init_snapshot(void);\n\nwhat about pgstat_init_snapshot_fixed? (as it is for fixed-numbered statistics\nonly).\n\n5 ===\n\n+ /* Write various stats structs with fixed number of objects */\n\ns/Write various stats/Write the stats/? 
(not coming from your patch but they\nall were listed before though).\n\n6 ===\n\n+ for (int kind = PGSTAT_KIND_FIRST_VALID; kind <= PGSTAT_KIND_LAST; kind++)\n+ {\n+ char *ptr;\n+ const PgStat_KindInfo *info = pgstat_get_kind_info(kind);\n+\n+ if (!info->fixed_amount)\n+ continue;\n\nNit: Move the \"ptr\" declaration into an extra else? (useless to declare it\nif it's not a fixed number stat)\n\n7 ===\n\n+ /* prepare snapshot data and write it */\n+ pgstat_build_snapshot_fixed(kind);\n\nWhat about changing pgstat_build_snapshot_fixed() to accept a PgStat_KindInfo\nparameter (instead of the current PgStat_Kind one)? Reason is that\npgstat_get_kind_info() is already called/known in pgstat_snapshot_fixed(),\npgstat_build_snapshot() and pgstat_write_statsfile(). That would avoid\npgstat_build_snapshot_fixed() to retrieve (again) the kind_info.\n\n8 ===\n\n/*\n * Reads in existing statistics file into the shared stats hash.\n\nThis comment above pgstat_read_statsfile() is not correct, fixed stats\nare not going to the hash (was there before your patch though).\n\n9 ===\n\n+pgstat_archiver_init_shmem_cb(void *stats)\n+{\n+ PgStatShared_Archiver *stats_shmem = (PgStatShared_Archiver *) stats;\n+\n+ LWLockInitialize(&stats_shmem->lock, LWTRANCHE_PGSTATS_DATA);\n\nNit: Almost all the pgstat_XXX_init_shmem_cb() look very similar, I wonder if we\ncould use a macro to avoid code duplication.\n\n10 ===\n\nRemark not related to this patch: I think we could get rid of the shared_data_off\nfor the fixed stats (by moving the \"stats\" part at the header of their dedicated\nstruct). That would mean having things like:\n\n\"\ntypedef struct PgStatShared_Archiver\n{\n PgStat_ArchiverStats stats;\n /* lock protects ->reset_offset as well as stats->stat_reset_timestamp */\n LWLock lock;\n uint32 changecount;\n PgStat_ArchiverStats reset_offset;\n} PgStatShared_Archiver;\n\"\n\nNot sure that's worth it though.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 Jul 2024 11:30:17 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn 2024-07-03 18:47:15 +0900, Michael Paquier wrote:\n> While looking at a different patch from Tristan in this area at [1], I\n> still got annoyed that this patch set was not able to support the case\n> of custom fixed-numbered stats, so as it is possible to plug in\n> pgstats things similar to the archiver, the checkpointer, WAL, etc.\n> These are plugged in shared memory, and are handled with copies in the\n> stats snapshots. After a good night of sleep, I have come up with a\n> good solution for that, among the following lines:\n\n> - PgStat_ShmemControl holds an array of void* indexed by\n> PGSTAT_NUM_KINDS, pointing to shared memory areas allocated for each\n> fixed-numbered stats. Each entry is allocated a size corresponding to\n> PgStat_KindInfo->shared_size.\n\nI am dubious this is a good idea. The more indirection you add, the more\nexpensive it gets to count stuff, the more likely it is that we end up with\nbackend-local \"caching\" in front of the stats system.\n\nIOW, I am against making builtin stats pay the price for pluggable\nfixed-numbered stats.\n\nIt also substantially reduces type-safety, making it harder to refactor. 
Note\nthat you had to add static casts in a good number of additional places.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jul 2024 13:56:52 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn 2024-06-13 16:59:50 +0900, Michael Paquier wrote:\n> * Making custom stats data persistent is an interesting problem, and\n> there are a couple of approaches I've considered:\n> ** Allow custom kinds to define callbacks to read and write data from\n> a source they'd want, like their own file through a fd. This has the\n> disadvantage to remove the benefit of c) above.\n\nI am *strongly* against this. That'll make it much harder to do stuff like not\nresetting stats after crashes and just generally will make it harder to\nimprove the stats facility further.\n\nI think that pluggable users of the stats facility should only have control\nover how data is stored via quite generic means.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jul 2024 14:00:47 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn 2024-07-04 14:00:47 -0700, Andres Freund wrote:\n> On 2024-06-13 16:59:50 +0900, Michael Paquier wrote:\n> > * Making custom stats data persistent is an interesting problem, and\n> > there are a couple of approaches I've considered:\n> > ** Allow custom kinds to define callbacks to read and write data from\n> > a source they'd want, like their own file through a fd. This has the\n> > disadvantage to remove the benefit of c) above.\n> \n> I am *strongly* against this. That'll make it much harder to do stuff like not\n> resetting stats after crashes and just generally will make it harder to\n> improve the stats facility further.\n> \n> I think that pluggable users of the stats facility should only have control\n> over how data is stored via quite generic means.\n\n\nI forgot to say: In general I am highly supportive of this effort and thankful\nto Michael for tackling it. The above was just about that one aspect.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jul 2024 14:08:25 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Thu, Jul 04, 2024 at 02:00:47PM -0700, Andres Freund wrote:\n> On 2024-06-13 16:59:50 +0900, Michael Paquier wrote:\n>> * Making custom stats data persistent is an interesting problem, and\n>> there are a couple of approaches I've considered:\n>> ** Allow custom kinds to define callbacks to read and write data from\n>> a source they'd want, like their own file through a fd. This has the\n>> disadvantage to remove the benefit of c) above.\n> \n> I am *strongly* against this. That'll make it much harder to do stuff like not\n> resetting stats after crashes and just generally will make it harder to\n> improve the stats facility further.\n> \n> I think that pluggable users of the stats facility should only have control\n> over how data is stored via quite generic means.\n\nI'm pretty much on the same line here, I think. If the redo logic is\nchanged, then any stats kinds pushing their stats into their own file\nwould need to copy/paste the same logic as the main file. And that's\nmore error prone.\n\nI can get why some people would get that they don't want some stats\nkinds to never be flushed at shutdown or even read at startup. 
Adding\nmore callbacks in this area is a separate discussion.\n--\nMichael", "msg_date": "Fri, 5 Jul 2024 08:27:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Thu, Jul 04, 2024 at 02:08:25PM -0700, Andres Freund wrote:\n> I forgot to say: In general I am highly supportive of this effort and thankful\n> to Michael for tackling it. The above was just about that one aspect.\n\nThanks. Let's discuss how people want this stuff to be shaped, and\nhow much we want to cover. Better to do it one small step at a time.\n--\nMichael", "msg_date": "Fri, 5 Jul 2024 08:28:33 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Thu, Jul 04, 2024 at 01:56:52PM -0700, Andres Freund wrote:\n> On 2024-07-03 18:47:15 +0900, Michael Paquier wrote:\n>> - PgStat_ShmemControl holds an array of void* indexed by\n>> PGSTAT_NUM_KINDS, pointing to shared memory areas allocated for each\n>> fixed-numbered stats. Each entry is allocated a size corresponding to\n>> PgStat_KindInfo->shared_size.\n> \n> I am dubious this is a good idea. The more indirection you add, the more\n> expensive it gets to count stuff, the more likely it is that we end up with\n> backend-local \"caching\" in front of the stats system.\n> \n> IOW, I am against making builtin stats pay the price for pluggable\n> fixed-numbered stats.\n\nOkay, noted. So, if I get that right, you would prefer an approach\nwhere we add an extra member in the snapshot and shmem control area\ndedicated only to the custom kind IDs, indexed based on the range\nof the custom kind IDs, leaving the built-in fixed structures in\nPgStat_ShmemControl and PgStat_Snapshot?\n\nI was feeling a bit uncomfortable with the extra redirection for the\nbuilt-in fixed kinds, still the temptation of making that more generic\nwas here, so..\n\nHaving the custom fixed types point to their own array in the snapshot\nand ShmemControl adds a couple more null-ness checks depending on if\nyou're dealing with a builtin or custom ID range. That's mostly the\npath in charge of retrieving the KindInfos.\n\n> It also substantially reduces type-safety, making it harder to refactor. Note\n> that you had to add static casts in a good number of additional places.\n\nNot sure on this one, because that's the same issue as\nvariable-numbered stats, no? The central dshash only knows about the\nsize of the shared stats entries for each kind, with an offset to the\nstats data that gets copied to the snapshots. So I don't quite get\nthe worry here.\n\nSeparately from that, I think that read/write of the fixed-numbered\nstats would gain in clarity if we update them to be closer to the\nvariable-numbers by storing entries with a specific character ('F' in\n0001). If we keep track of the fixed-numbered structures in\nPgStat_Snapshot, that means adding an extra field in PgStat_KindInfo\nto point to the offset in PgStat_Snapshot for the write part. 
Note\nthat the addition of the init_shmem callback simplifies shmem init,\nand it is also required for the fixed-numbered pluggable part.\n--\nMichael", "msg_date": "Fri, 5 Jul 2024 08:44:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Thu, Jul 04, 2024 at 11:30:17AM +0000, Bertrand Drouvot wrote:\n> On Wed, Jul 03, 2024 at 06:47:15PM +0900, Michael Paquier wrote:\n>> among the following lines:\n>> - PgStat_ShmemControl holds an array of void* indexed by\n>> PGSTAT_NUM_KINDS, pointing to shared memory areas allocated for each\n>> fixed-numbered stats. Each entry is allocated a size corresponding to\n>> PgStat_KindInfo->shared_size.\n> \n> That makes sense to me, and that's just a 96 bytes overhead (8 * PGSTAT_NUM_KINDS)\n> as compared to now.\n\npgstat_io.c is by far the largest chunk.\n\n>> - PgStat_Snapshot holds an array of void* also indexed by\n>> PGSTAT_NUM_KINDS, pointing to the fixed stats stored in the\n>> snapshots.\n> \n> Same, that's just a 96 bytes overhead (8 * PGSTAT_NUM_KINDS) as compared to now.\n\nStill Andres does not seem to like that much, well ;)\n\n> Looking at 0001:\n> \n> 1 ===\n> \n> In the commit message:\n> \n> - Fixed numbered stats now set shared_size, so as\n> \n> Is something missing in that sentence?\n\nRight. This is missing a piece.\n\n> - PgStatShared_Archiver archiver;\n> - PgStatShared_BgWriter bgwriter;\n> - PgStatShared_Checkpointer checkpointer;\n> - PgStatShared_IO io;\n> - PgStatShared_SLRU slru;\n> - PgStatShared_Wal wal;\n> + void *fixed_data[PGSTAT_NUM_KINDS];\n> \n> Can we move from PGSTAT_NUM_KINDS to the exact number of fixed stats? (add\n> a new define PGSTAT_NUM_FIXED_KINDS for example). That's not a big deal but we\n> are allocating some space for pointers that we won't use. Would need to change\n> the \"indexing\" logic though.\n>\n> 3 ===\n> \n> Same as 2 === but for PgStat_Snapshot.\n> \n\nTrue for both. Based on the first inputs I got from Andres, the\nbuilt-in fixed stats structures would be kept as they are now, and we\ncould just add an extra member here for the custom fixed stats. That\nstill results in a few bytes wasted as not all custom stats want fixed\nstats, but that's much cheaper.\n\n> 4 ===\n> \n> +static void pgstat_init_snapshot(void);\n> \n> what about pgstat_init_snapshot_fixed? (as it is for fixed-numbered statistics\n> only).\n\nSure.\n\n> 5 ===\n> \n> + /* Write various stats structs with fixed number of objects */\n> \n> s/Write various stats/Write the stats/? (not coming from your patch but they\n> all were listed before though).\n\nYes, there are a few more things about that.\n\n> 6 ===\n> \n> + for (int kind = PGSTAT_KIND_FIRST_VALID; kind <= PGSTAT_KIND_LAST; kind++)\n> + {\n> + char *ptr;\n> + const PgStat_KindInfo *info = pgstat_get_kind_info(kind);\n> +\n> + if (!info->fixed_amount)\n> + continue;\n> \n> Nit: Move the \"ptr\" declaration into an extra else? (useless to declare it\n> if it's not a fixed number stat)\n\nComes down to one's taste. I think that this is OK as-is, but that's\nmy taste.\n\n> 7 ===\n> \n> + /* prepare snapshot data and write it */\n> + pgstat_build_snapshot_fixed(kind);\n> \n> What about changing pgstat_build_snapshot_fixed() to accept a PgStat_KindInfo\n> parameter (instead of the current PgStat_Kind one)? Reason is that\n> pgstat_get_kind_info() is already called/known in pgstat_snapshot_fixed(),\n> pgstat_build_snapshot() and pgstat_write_statsfile(). 
That would avoid\n> pgstat_build_snapshot_fixed() to retrieve (again) the kind_info.\n\npgstat_snapshot_fixed() only calls pgstat_get_kind_info() with\nassertions enabled. Perhaps we could do that, just that it does not\nseem that critical to me.\n\n> 8 ===\n> \n> /*\n> * Reads in existing statistics file into the shared stats hash.\n> \n> This comment above pgstat_read_statsfile() is not correct, fixed stats\n> are not going to the hash (was there before your patch though).\n\nGood catch. Let's adjust that separately.\n\n> 9 ===\n> \n> +pgstat_archiver_init_shmem_cb(void *stats)\n> +{\n> + PgStatShared_Archiver *stats_shmem = (PgStatShared_Archiver *) stats;\n> +\n> + LWLockInitialize(&stats_shmem->lock, LWTRANCHE_PGSTATS_DATA);\n> \n> Nit: Almost all the pgstat_XXX_init_shmem_cb() look very similar, I wonder if we\n> could use a macro to avoid code duplication.\n\nThey are very similar, still can do different things like pgstat_io.\nI am not sure that the macro would bring more readability.\n\n> Remark not related to this patch: I think we could get rid of the shared_data_off\n> for the fixed stats (by moving the \"stats\" part at the header of their dedicated\n> struct). That would mean having things like:\n> \n> \"\n> typedef struct PgStatShared_Archiver\n> {\n> PgStat_ArchiverStats stats;\n> /* lock protects ->reset_offset as well as stats->stat_reset_timestamp */\n> LWLock lock;\n> uint32 changecount;\n> PgStat_ArchiverStats reset_offset;\n> } PgStatShared_Archiver;\n> \"\n\nI'm not really convinced that it is a good idea to force the ordering\nof the members in the shared structures for the fixed-numbered stats,\nrequiring these \"stats\" fields to always be first.\n--\nMichael", "msg_date": "Fri, 5 Jul 2024 09:35:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "> On Fri, Jun 21, 2024 at 01:28:11PM +0900, Michael Paquier wrote:\n> On Fri, Jun 21, 2024 at 01:09:10PM +0900, Kyotaro Horiguchi wrote:\n> > At Thu, 13 Jun 2024 16:59:50 +0900, Michael Paquier <[email protected]> wrote in\n> >> * The kind IDs may change across restarts, meaning that any stats data\n> >> associated to a custom kind is stored with the *name* of the custom\n> >> stats kind. Depending on the discussion happening here, I'd be open\n> >> to use the same concept as custom RMGRs, where custom kind IDs are\n> >> \"reserved\", fixed in time, and tracked in the Postgres wiki. It is\n> >> cheaper to store the stats this way, as well, while managing conflicts\n> >> across extensions available in the community ecosystem.\n> >\n> > I prefer to avoid having a central database if possible.\n>\n> I was thinking the same originally, but the experience with custom\n> RMGRs has made me change my mind. There are more of these than I\n> thought originally:\n> https://wiki.postgresql.org/wiki/CustomWALResourceManagers\n\n From what I understand, coordinating custom RmgrIds via a wiki page was\nmade under the assumption that implementing a table AM with custom WAL\nrequires significant efforts, which limits the demand for ids. This\nmight not be same for custom stats -- I've got an impression it's easier\nto create one, and there could be multiple kinds of stats per an\nextension (one per component), right? 
This would mean more kind Ids to\nmanage and more efforts required to do that.\n\nI agree though that it makes sense to start this way, it's just simpler.\nBut maybe it's worth thinking about some other solution in the long\nterm, taking the over-engineered prototype as a sign that more\nrefactoring is needed.\n\n\n", "msg_date": "Sun, 7 Jul 2024 12:21:26 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Sun, Jul 07, 2024 at 12:21:26PM +0200, Dmitry Dolgov wrote:\n> From what I understand, coordinating custom RmgrIds via a wiki page was\n> made under the assumption that implementing a table AM with custom WAL\n> requires significant efforts, which limits the demand for ids. This\n> might not be same for custom stats -- I've got an impression it's easier\n> to create one, and there could be multiple kinds of stats per an\n> extension (one per component), right? This would mean more kind Ids to\n> manage and more efforts required to do that.\n\nA given module will likely have one single RMGR because it is possible\nto divide the RMGR into multiple records. Yes, this cannot really be\nsaid for stats, and a set of stats kinds in one module may want\ndifferent kinds because these could have different properties.\n\nMy guess is that a combination of one fixed-numbered to track a global\nstate and one variable-numbered would be the combination most likely\nto happen. Also, my impression about pg_stat_statements is that we'd\nneed this combination, actually, to track the number of entries in a\ntighter way because scanning all the partitions of the central dshash\nfor entries with a specific KindInfo would have a high concurrency\ncost.\n\n> I agree though that it makes sense to start this way, it's just simpler.\n> But maybe it's worth thinking about some other solution in the long\n> term, taking the over-engineered prototype as a sign that more\n> refactoring is needed.\n\nThe three possible methods I can think of here are, knowing that we\nuse a central, unique, file to store the stats (per se the arguments\non the redo thread for the stats):\n- Store the name of the stats kinds with each entry. This is very\ncostly with many entries, and complicates the read-write paths because\ncurrently we rely on the KindInfo.\n- Store a mapping between the stats kind name and the KindInfo in the\nfile at write, then use the mapping at read and compare it reassemble\nthe entries stored. KindInfos are assigned at startup with a unique\ncounter in shmem. As mentioned upthread, I've implemented something\nlike that while making the custom stats being registered in the\nshmem_startup_hook with requests in shmem_request_hook. That felt\nover-engineered considering that the startup process needs to know the\nstats kinds very early anyway, so we need _PG_init() and should\nencourage its use.\n- Fix the KindInfos in time and centralize the values assigned. This\neases the error control and can force the custom stats kinds to be\nregistered when shared_preload_libraries is loaded. The read is\nfaster as there is no need to re-check the mapping to reassemble\nthe stats entries. 
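(As an aside, a rough sketch of what the third option looks like from an extension's point of view; everything below that is not shown in this thread is an assumption, in particular the ID value, the struct names, and the exact signature of the registration routine, which follows the pgstat_add_kind() mentioned upthread.)

    /* Sketch only -- ID value, struct names and signature are assumptions. */
    #define PGSTAT_KIND_INJECTION   129     /* ID fixed in time, out of the custom range */

    static const PgStat_KindInfo injection_stats_kind = {
        .name = "injection_points",
        .fixed_amount = false,              /* variable-numbered entries */
        .shared_size = sizeof(PgStatShared_InjectionPoint),
        .shared_data_len = sizeof(PgStat_StatInjEntry),
    };

    void
    _PG_init(void)
    {
        /* custom kinds have to be claimed while shared_preload_libraries loads */
        if (!process_shared_preload_libraries_in_progress)
            return;

        pgstat_add_kind(PGSTAT_KIND_INJECTION, &injection_stats_kind);
    }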
\n\nAt the end, fixing the KindInfos in time is the most reliable method\nhere (debugging could be slightly easier, less complicated than with\nthe mapping stored, still doable for all three methods).\n--\nMichael", "msg_date": "Mon, 8 Jul 2024 08:14:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Fri, Jul 05, 2024 at 09:35:19AM +0900, Michael Paquier wrote:\n> On Thu, Jul 04, 2024 at 11:30:17AM +0000, Bertrand Drouvot wrote:\n>> On Wed, Jul 03, 2024 at 06:47:15PM +0900, Michael Paquier wrote:\n>>> - PgStat_Snapshot holds an array of void* also indexed by\n>>> PGSTAT_NUM_KINDS, pointing to the fixed stats stored in the\n>>> snapshots.\n>> \n>> Same, that's just a 96 bytes overhead (8 * PGSTAT_NUM_KINDS) as compared to now.\n> \n> Still Andres does not seem to like that much, well ;)\n\nPlease find attached a rebased patch set labelled v4. Built-in\nfixed-numbered stats are still attached to the snapshot and shmem\ncontrol structures, and custom fixed stats kinds are tracked in the\nsame way as v3 with new members tracking data stored in\nTopMemoryContext for the snapshots and shmem for the control data.\nSo, the custom and built-in stats kinds are separated into separate\nparts of the structures, including the \"valid\" flags for the\nsnapshots. And this avoids any redirection when looking at the\nbuilt-in fixed-numbered stats.\n\nI've tried at address all the previous comments (there could be stuff\nI've missed while rebasing, of course).\n\nThe first three patches are refactoring pieces to make the rest more\nedible, while 0004~ implement the main logic with templates in\nmodules/injection_points:\n- 0001 refactors pgstat_write_statsfile() so as a loop om\nPgStat_KindInfo is used to write the data. 
This is done with the\naddition of snapshot_ctl_off in PgStat_KindInfo, to point to the area\nin PgStat_Snapshot where the data is located for fixed stats.\n9004abf6206e has done the same for the read part.\n- 0002 adds an init_shmem callback, to let stats kinds initialize\nstates based on what's been allocated.\n- 0003 refactors the read/write to use a new entry type in the stats\nfile for fixed-numbered stats.\n- 0004 switches PgStat_Kind from an enum to uint32, adding a better\ntype for pluggability.\n- 0005 is the main implementation.\n- 0006 adds some docs.\n- 0007 (variable-numbered stats) and 0008 (fixed-numbered stats) are\nthe examples demonstrating how to make pluggable stats for both types,\nwith tests of their own.\n--\nMichael", "msg_date": "Mon, 8 Jul 2024 14:30:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 08, 2024 at 02:30:23PM +0900, Michael Paquier wrote:\n> On Fri, Jul 05, 2024 at 09:35:19AM +0900, Michael Paquier wrote:\n> > On Thu, Jul 04, 2024 at 11:30:17AM +0000, Bertrand Drouvot wrote:\n> >> On Wed, Jul 03, 2024 at 06:47:15PM +0900, Michael Paquier wrote:\n> >>> - PgStat_Snapshot holds an array of void* also indexed by\n> >>> PGSTAT_NUM_KINDS, pointing to the fixed stats stored in the\n> >>> snapshots.\n> >> \n> >> Same, that's just a 96 bytes overhead (8 * PGSTAT_NUM_KINDS) as compared to now.\n> > \n> > Still Andres does not seem to like that much, well ;)\n> \n> Please find attached a rebased patch set labelled v4.\n\nThanks!\n\n> Built-in\n> fixed-numbered stats are still attached to the snapshot and shmem\n> control structures, and custom fixed stats kinds are tracked in the\n> same way as v3 with new members tracking data stored in\n> TopMemoryContext for the snapshots and shmem for the control data.\n> So, the custom and built-in stats kinds are separated into separate\n> parts of the structures, including the \"valid\" flags for the\n> snapshots. And this avoids any redirection when looking at the\n> built-in fixed-numbered stats.\n\nYeap.\n\n> I've tried at address all the previous comments (there could be stuff\n> I've missed while rebasing, of course).\n\nThanks!\n\n> The first three patches are refactoring pieces to make the rest more\n> edible, while 0004~ implement the main logic with templates in\n> modules/injection_points:\n> - 0001 refactors pgstat_write_statsfile() so as a loop om\n> PgStat_KindInfo is used to write the data. This is done with the\n> addition of snapshot_ctl_off in PgStat_KindInfo, to point to the area\n> in PgStat_Snapshot where the data is located for fixed stats.\n> 9004abf6206e has done the same for the read part.\n\nLooking at 0001:\n\n1 ==\n\n+ for (int kind = PGSTAT_KIND_FIRST_VALID; kind <= PGSTAT_KIND_LAST; kind++)\n+ {\n+ char *ptr;\n+ const PgStat_KindInfo *info = pgstat_get_kind_info(kind);\n\nI wonder if we could avoid going through stats that are not fixed ones. What about\ndoing something like?\n\n\"\nfor (int kind = <first fixed>; kind <= <last fixed>; kind++);\n\"\n\nWould probably need to change the indexing logic though.\n\nand then we could replace:\n\n+ if (!info->fixed_amount)\n+ continue;\n\nwith an assert instead. 
\n\nSame would apply for the read part added in 9004abf6206e.\n\n2 ===\n\n+ pgstat_build_snapshot_fixed(kind);\n+ ptr = ((char *) &pgStatLocal.snapshot) + info->snapshot_ctl_off;\n+ write_chunk(fpout, ptr, info->shared_data_len);\n\nI think that using \"shared_data_len\" is confusing here (was not the case in the\ncontext of 9004abf6206e). I mean it is perfectly correct but the wording \"shared\"\nlooks weird to me when being used here. What it really is, is the size of the\nstats. What about renaming shared_data_len with stats_data_len?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jul 2024 06:39:56 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Mon, Jul 08, 2024 at 06:39:56AM +0000, Bertrand Drouvot wrote:\n> + for (int kind = PGSTAT_KIND_FIRST_VALID; kind <= PGSTAT_KIND_LAST; kind++)\n> + {\n> + char *ptr;\n> + const PgStat_KindInfo *info = pgstat_get_kind_info(kind);\n> \n> I wonder if we could avoid going through stats that are not fixed ones. What about\n> doing something like?\n> Same would apply for the read part added in 9004abf6206e.\n\nThis becomes more relevant when the custom stats are added, as this\nperforms a scan across the full range of IDs supported. So this\nchoice is here for consistency, and to ease the pluggability.\n\n> 2 ===\n> \n> + pgstat_build_snapshot_fixed(kind);\n> + ptr = ((char *) &pgStatLocal.snapshot) + info->snapshot_ctl_off;\n> + write_chunk(fpout, ptr, info->shared_data_len);\n> \n> I think that using \"shared_data_len\" is confusing here (was not the case in the\n> context of 9004abf6206e). I mean it is perfectly correct but the wording \"shared\"\n> looks weird to me when being used here. What it really is, is the size of the\n> stats. What about renaming shared_data_len with stats_data_len?\n\nIt is the stats data associated to a shared entry. I think that's OK,\nbut perhaps I'm just used to it as I've been staring at this area for\ndays.\n--\nMichael", "msg_date": "Mon, 8 Jul 2024 15:49:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 08, 2024 at 03:49:34PM +0900, Michael Paquier wrote:\n> On Mon, Jul 08, 2024 at 06:39:56AM +0000, Bertrand Drouvot wrote:\n> > + for (int kind = PGSTAT_KIND_FIRST_VALID; kind <= PGSTAT_KIND_LAST; kind++)\n> > + {\n> > + char *ptr;\n> > + const PgStat_KindInfo *info = pgstat_get_kind_info(kind);\n> > \n> > I wonder if we could avoid going through stats that are not fixed ones. What about\n> > doing something like?\n> > Same would apply for the read part added in 9004abf6206e.\n> \n> This becomes more relevant when the custom stats are added, as this\n> performs a scan across the full range of IDs supported. So this\n> choice is here for consistency, and to ease the pluggability.\n\nGotcha.\n\n> \n> > 2 ===\n> > \n> > + pgstat_build_snapshot_fixed(kind);\n> > + ptr = ((char *) &pgStatLocal.snapshot) + info->snapshot_ctl_off;\n> > + write_chunk(fpout, ptr, info->shared_data_len);\n> > \n> > I think that using \"shared_data_len\" is confusing here (was not the case in the\n> > context of 9004abf6206e). I mean it is perfectly correct but the wording \"shared\"\n> > looks weird to me when being used here. What it really is, is the size of the\n> > stats. 
What about renaming shared_data_len with stats_data_len?\n> \n> It is the stats data associated to a shared entry. I think that's OK,\n> but perhaps I'm just used to it as I've been staring at this area for\n> days.\n\nYeah, what I meant to say is that one could think for example that's the\nPgStatShared_Archiver size while in fact it's the PgStat_ArchiverStats size.\nI think it's more confusing when writing the stats. Here we are manipulating\n\"snapshot\" and \"snapshot\" offsets. It was not that confusing when reading as we\nare manipulating \"shmem\" and \"shared\" offsets.\n\nAs I said, the code is fully correct, that's just the wording here that sounds\nweird to me in the \"snapshot\" context.\n\nExcept the above (which is just a Nit), 0001 LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jul 2024 07:22:32 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 08, 2024 at 07:22:32AM +0000, Bertrand Drouvot wrote:\n> Except the above (which is just a Nit), 0001 LGTM.\n> \n\nLooking at 0002:\n\nIt looks pretty straightforward, just one comment:\n\n+ ptr = ((char *) ctl) + kind_info->shared_ctl_off;\n+ kind_info->init_shmem_cb((void *) ptr);\n\nI don't think we need to cast ptr to void when calling init_shmem_cb(). Looking\nat some examples in the code, it does not look like we cast the argument to void\nwhen a function has (void *) as parameter (also there is examples in 0003 where\nit's not done, see next comments for 0003).\n\nSo I think removing the cast here would be more consistent.\n\nLooking at 0003:\n\nIt looks pretty straightforward. Also for example, here:\n\n+ fputc(PGSTAT_FILE_ENTRY_FIXED, fpout);\n+ write_chunk_s(fpout, &kind);\n write_chunk(fpout, ptr, info->shared_data_len);\n\nptr is not casted to void when calling write_chunk() while its second parameter\nis a \"void *\".\n\n+ ptr = ((char *) shmem) + info->shared_ctl_off +\n+ info->shared_data_off;\n+\n+ if (!read_chunk(fpin, ptr,\n\nSame here for read_chunk().\n\nI think that's perfectly fine and that we should do the same in 0002 when\ncalling init_shmem_cb() for consistency.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jul 2024 14:07:58 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Mon, Jul 08, 2024 at 07:22:32AM +0000, Bertrand Drouvot wrote:\n> Yeah, what I meant to say is that one could think for example that's the\n> PgStatShared_Archiver size while in fact it's the PgStat_ArchiverStats size.\n> I think it's more confusing when writing the stats. Here we are manipulating\n> \"snapshot\" and \"snapshot\" offsets. It was not that confusing when reading as we\n> are manipulating \"shmem\" and \"shared\" offsets.\n> \n> As I said, the code is fully correct, that's just the wording here that sounds\n> weird to me in the \"snapshot\" context.\n\nAfter sleeping on it, I can see your point. 
If we were to do the\n(shared_data_len -> stats_data_len) switch, could it make sense to\nrename shared_data_off to stats_data_off to have a better symmetry?\nThis one is the offset of the stats data in a shmem entry, so perhaps\nshared_data_off is OK, but it feels a bit inconsistent as well.\n\n> Except the above (which is just a Nit), 0001 LGTM.\n\nThanks, I've applied 0001 for now to improve the serialization of this code.\n--\nMichael", "msg_date": "Tue, 9 Jul 2024 10:45:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn Tue, Jul 09, 2024 at 10:45:05AM +0900, Michael Paquier wrote:\n> On Mon, Jul 08, 2024 at 07:22:32AM +0000, Bertrand Drouvot wrote:\n> > Yeah, what I meant to say is that one could think for example that's the\n> > PgStatShared_Archiver size while in fact it's the PgStat_ArchiverStats size.\n> > I think it's more confusing when writing the stats. Here we are manipulating\n> > \"snapshot\" and \"snapshot\" offsets. It was not that confusing when reading as we\n> > are manipulating \"shmem\" and \"shared\" offsets.\n> > \n> > As I said, the code is fully correct, that's just the wording here that sounds\n> > weird to me in the \"snapshot\" context.\n> \n> After sleeping on it, I can see your point. If we were to do the\n> (shared_data_len -> stats_data_len) switch, could it make sense to\n> rename shared_data_off to stats_data_off to have a better symmetry?\n> This one is the offset of the stats data in a shmem entry, so perhaps\n> shared_data_off is OK, but it feels a bit inconsistent as well.\n\nAgree that if we were to rename one of them then the second one should be\nrenamed to.\n\nI gave a second thought on it, and I think that this is the \"data\" part that lead\nto the confusion (as too generic), what about?\n\nshared_data_len -> shared_stats_len\nshared_data_off -> shared_stats_off\n\nThat looks ok to me even in the snapshot context (shared is fine after all\nbecause that's where the stats come from).\n\nAttached a patch proposal doing so.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 9 Jul 2024 05:23:03 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Mon, Jul 08, 2024 at 02:07:58PM +0000, Bertrand Drouvot wrote:\n> It looks pretty straightforward, just one comment:\n> \n> + ptr = ((char *) ctl) + kind_info->shared_ctl_off;\n> + kind_info->init_shmem_cb((void *) ptr);\n> \n> I don't think we need to cast ptr to void when calling init_shmem_cb(). Looking\n> at some examples in the code, it does not look like we cast the argument to void\n> when a function has (void *) as parameter (also there is examples in 0003 where\n> it's not done, see next comments for 0003).\n\nYep. 
Fine by me.\n\nPlease find attached a rebased patch set for now, to make the\nCF bot happy.\n--\nMichael", "msg_date": "Tue, 9 Jul 2024 15:54:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Tue, Jul 09, 2024 at 05:23:03AM +0000, Bertrand Drouvot wrote:\n> I gave a second thought on it, and I think that this is the \"data\" part that lead\n> to the confusion (as too generic), what about?\n> \n> shared_data_len -> shared_stats_len\n> shared_data_off -> shared_stats_off\n> \n> That looks ok to me even in the snapshot context (shared is fine after all\n> because that's where the stats come from).\n\nI'd tend to prefer the original suggestion because of the snapshot\ncontext, actually, as the fixed-numbered stats in a snapshot are a\ncopy of what's in shmem, and that's not shared at all.\n\nThe rename is not the most important part, still if others have an\nopinion, feel free.\n--\nMichael", "msg_date": "Tue, 9 Jul 2024 16:32:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn Tue, Jul 09, 2024 at 03:54:37PM +0900, Michael Paquier wrote:\n> On Mon, Jul 08, 2024 at 02:07:58PM +0000, Bertrand Drouvot wrote:\n> > It looks pretty straightforward, just one comment:\n> > \n> > + ptr = ((char *) ctl) + kind_info->shared_ctl_off;\n> > + kind_info->init_shmem_cb((void *) ptr);\n> > \n> > I don't think we need to cast ptr to void when calling init_shmem_cb(). Looking\n> > at some examples in the code, it does not look like we cast the argument to void\n> > when a function has (void *) as parameter (also there is examples in 0003 where\n> > it's not done, see next comments for 0003).\n> \n> Yep. Fine by me.\n\nThanks!\n\n> \n> Please find attached a rebased patch set for now, to make the\n> CF bot happy.\n\nv5-0001 LGTM.\n\nAs far v5-0002:\n\n+ goto error;\n+ info = pgstat_get_kind_info(kind);\n\nNit: add an empty line between the two?\n\nExcept this Nit, v5-0002 LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jul 2024 08:28:56 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn Wed, Jul 10, 2024 at 08:28:56AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Tue, Jul 09, 2024 at 03:54:37PM +0900, Michael Paquier wrote:\n> > On Mon, Jul 08, 2024 at 02:07:58PM +0000, Bertrand Drouvot wrote:\n> > > It looks pretty straightforward, just one comment:\n> > > \n> > > + ptr = ((char *) ctl) + kind_info->shared_ctl_off;\n> > > + kind_info->init_shmem_cb((void *) ptr);\n> > > \n> > > I don't think we need to cast ptr to void when calling init_shmem_cb(). Looking\n> > > at some examples in the code, it does not look like we cast the argument to void\n> > > when a function has (void *) as parameter (also there is examples in 0003 where\n> > > it's not done, see next comments for 0003).\n> > \n> > Yep. 
Fine by me.\n> \n> Thanks!\n> \n> > \n> > Please find attached a rebased patch set for now, to make the\n> > CF bot happy.\n> \n> v5-0001 LGTM.\n> \n> As far v5-0002:\n> \n> + goto error;\n> + info = pgstat_get_kind_info(kind);\n> \n> Nit: add an empty line between the two?\n> \n> Except this Nit, v5-0002 LGTM.\n\nOh, and also due to this change in 0002:\n\n switch (t)\n {\n+ case PGSTAT_FILE_ENTRY_FIXED:\n+ {\n\nThen this comment:\n\n /*\n * We found an existing statistics file. Read it and put all the hash\n * table entries into place.\n */\n for (;;)\n {\n int t = fgetc(fpin);\n\n switch (t)\n {\n case PGSTAT_FILE_ENTRY_FIXED:\n {\n\nis not correct anymore (as we're not reading the stats only into the hash table\nanymore).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jul 2024 09:00:31 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Wed, Jul 10, 2024 at 09:00:31AM +0000, Bertrand Drouvot wrote:\n> On Wed, Jul 10, 2024 at 08:28:56AM +0000, Bertrand Drouvot wrote:\n>> v5-0001 LGTM.\n\nThanks. I've applied this refactoring piece.\n\n> /*\n> * We found an existing statistics file. Read it and put all the hash\n> * table entries into place.\n> */\n\nIndeed. Reworded that slightly and applied it as well.\n\nSo we are down to the remaining parts of the patch, and this is going\nto need a consensus about a few things because this impacts the\ndeveloper experience when implementing one's own custom stats:\n- Are folks OK with the point of fixing the kind IDs in time like\nRMGRs with a control in the wiki? Or should a more artistic approach\nbe used like what I am mentioning at the bottom of [1]. The patch\nallows a range of IDs to be used, to make the access to the stats\nfaster even if some area of memory may not be used.\n- The fixed-numbered custom stats kinds are stored in an array in\nPgStat_Snapshot and PgStat_ShmemControl, so as we have something\nconsistent with the built-in kinds. This makes the tracking of the\nvalidity of the data in the snapshots split into parts of the\nstructure for builtin and custom kinds. Perhaps there are better\nideas than that? The built-in fixed-numbered kinds have no\nredirection.\n- The handling of both built-in and custom kinds touches some areas of\npgstat.c and pgstat_shmem.c, which is the minimal I could come up\nwith.\n\nAttached is a rebased patch set with the remaining pieces.\n\n[1]: https://www.postgresql.org/message-id/ZoshTO9K7O7Z1wrX%40paquier.xyz\n--\nMichael", "msg_date": "Thu, 11 Jul 2024 16:42:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "Hi,\n\nOn Fri, Jul 05, 2024 at 09:35:19AM +0900, Michael Paquier wrote:\n> On Thu, Jul 04, 2024 at 11:30:17AM +0000, Bertrand Drouvot wrote:\n> > \n> > /*\n> > * Reads in existing statistics file into the shared stats hash.\n> > \n> > This comment above pgstat_read_statsfile() is not correct, fixed stats\n> > are not going to the hash (was there before your patch though).\n> \n> Good catch. 
Let's adjust that separately.\n\nPlease find attached a patch to do so (attached as .txt to not perturb the\ncfbot).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 11 Jul 2024 13:29:08 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Thu, Jul 11, 2024 at 01:29:08PM +0000, Bertrand Drouvot wrote:\n> Please find attached a patch to do so (attached as .txt to not perturb the\n> cfbot).\n\n+ * Reads in existing statistics file into the shared stats hash (for non fixed\n+ * amount stats) or into the fixed amount stats.\n\nThanks. I have applied a simplified version of that, not mentioning\nthe details of what happens depending on the kinds of stats dealt\nwith.\n--\nMichael", "msg_date": "Fri, 12 Jul 2024 09:38:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "> On Thu, Jul 11, 2024 at 04:42:22PM GMT, Michael Paquier wrote:\n>\n> So we are down to the remaining parts of the patch, and this is going\n> to need a consensus about a few things because this impacts the\n> developer experience when implementing one's own custom stats:\n> - Are folks OK with the point of fixing the kind IDs in time like\n> RMGRs with a control in the wiki? Or should a more artistic approach\n> be used like what I am mentioning at the bottom of [1]. The patch\n> allows a range of IDs to be used, to make the access to the stats\n> faster even if some area of memory may not be used.\n\nI think it's fine. Although this solution feels a bit uncomfortable,\nafter thinking back and forth I don't see any significantly better\noption. Worth noting that since the main goal is to maintain uniqueness,\nfixing the kind IDs could be accomplished in more than one way, with\nvarying amount of control over the list of custom IDs:\n\n* One coud say \"lets keep it in wiki and let the community organize\n itself somehow\", and it's done.\n\n* Another way would be to keep it in wiki, and introduce some\n maintenance rules, e.g. once per release someone is going to cleanup\n the list from old unmaintained extensions, correct errors if needed,\n etc. Not sure if such cleanup would be needed, but it's not impossible\n to image.\n\n* Even more closed option would be to keep the kind IDs in some separate\n git repository, and let committers add new records on demand,\n expressed via some request form.\n\nAs far as I understand the current proposal is about the first option,\non one side of the spectrum.\n\n> - The fixed-numbered custom stats kinds are stored in an array in\n> PgStat_Snapshot and PgStat_ShmemControl, so as we have something\n> consistent with the built-in kinds. This makes the tracking of the\n> validity of the data in the snapshots split into parts of the\n> structure for builtin and custom kinds. Perhaps there are better\n> ideas than that? 
The built-in fixed-numbered kinds have no\n> redirection.\n\nAre you talking about this pattern?\n\n if (pgstat_is_kind_builtin(kind))\n ptr = // get something from a snapshot/shmem by offset\n else\n ptr = // get something from a custom_data by kind\n\nMaybe it would be possible to hide it behind some macros or an inlinable\nfunction with the offset and kind as arguments (and one of them will not\nbe used)?\n\n\n", "msg_date": "Fri, 12 Jul 2024 15:44:26 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Fri, Jul 12, 2024 at 03:44:26PM +0200, Dmitry Dolgov wrote:\n> I think it's fine. Although this solution feels a bit uncomfortable,\n> after thinking back and forth I don't see any significantly better\n> option. Worth noting that since the main goal is to maintain uniqueness,\n> fixing the kind IDs could be accomplished in more than one way, with\n> varying amount of control over the list of custom IDs:\n> \n> * One coud say \"lets keep it in wiki and let the community organize\n> itself somehow\", and it's done.\n> * Another way would be to keep it in wiki, and introduce some\n> maintenance rules, e.g. once per release someone is going to cleanup\n> the list from old unmaintained extensions, correct errors if needed,\n> etc. Not sure if such cleanup would be needed, but it's not impossible\n> to image.\n> * Even more closed option would be to keep the kind IDs in some separate\n> git repository, and let committers add new records on demand,\n> expressed via some request form.\n\nRMGRs have been taking the wiki page approach to control the source of\ntruth, that still sounds like the simplest option to me. I'm OK to be\noutvoted, but this simplifies the read/write pgstats paths a lot, and\nthese would get more complicated if we add more options because of new\nentry types (more things like serialized names I cannot think of,\netc). Extra point is that this makes future entensibility a bit\neasier to work on.\n\n> As far as I understand the current proposal is about the first option,\n> on one side of the spectrum.\n\nYes.\n\n>> - The fixed-numbered custom stats kinds are stored in an array in\n>> PgStat_Snapshot and PgStat_ShmemControl, so as we have something\n>> consistent with the built-in kinds. This makes the tracking of the\n>> validity of the data in the snapshots split into parts of the\n>> structure for builtin and custom kinds. Perhaps there are better\n>> ideas than that? The built-in fixed-numbered kinds have no\n>> redirection.\n> \n> Are you talking about this pattern?\n> \n> if (pgstat_is_kind_builtin(kind))\n> ptr = // get something from a snapshot/shmem by offset\n> else\n> ptr = // get something from a custom_data by kind\n> \n> Maybe it would be possible to hide it behind some macros or an inlinable\n> function with the offset and kind as arguments (and one of them will not\n> be used)?\n\nKind of. All the code paths calling pgstat_is_kind_builtin() in the\npatch manipulate different areas of the snapshot and/or the shmem\ncontrol structures, so a macro makes little sense.\n\nPerhaps we should have a few more inline functions like\npgstat_get_entry_len() to retrieve the parts of the custom data in the\nsnapshot and shmem control structures for fixed-numbered stats. 
That\nwould limit what extensions need to know about\npgStatLocal.shmem->custom_data[] and\npgStatLocal.snapshot.custom_data[], which is easy to use incorrectly.\nThey don't need to know about pgStatLocal at all, either.\n\nThinking over the weekend on this patch, splitting injection_stats.c\ninto two separate files to act as two templates for the variable and\nfixed-numbered cases would be more friendly to developers, as well.\n--\nMichael", "msg_date": "Tue, 16 Jul 2024 10:27:25 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Tue, Jul 16, 2024 at 10:27:25AM +0900, Michael Paquier wrote:\n> Perhaps we should have a few more inline functions like\n> pgstat_get_entry_len() to retrieve the parts of the custom data in the\n> snapshot and shmem control structures for fixed-numbered stats. That\n> would limit what extensions need to know about\n> pgStatLocal.shmem->custom_data[] and\n> pgStatLocal.snapshot.custom_data[], which is easy to use incorrectly.\n> They don't need to know about pgStatLocal at all, either.\n> \n> Thinking over the weekend on this patch, splitting injection_stats.c\n> into two separate files to act as two templates for the variable and\n> fixed-numbered cases would be more friendly to developers, as well.\n\nI've been toying a bit with these two ideas, and the result is\nactually neater:\n- The example for fixed-numbered stats is now in its own new file,\ncalled injection_stats_fixed.c.\n- Stats in the dshash are at the same location, injection_stats.c.\n- pgstat_internal.h gains two inline routines called\npgstat_get_custom_shmem_data and pgstat_get_custom_snapshot_data that\nhide completely the snapshot structure for extensions when it comes to\ncustom fixed-numbered stats, see the new injection_stats_fixed.c that\nuses them.\n--\nMichael", "msg_date": "Thu, 18 Jul 2024 14:56:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "> On Thu, Jul 18, 2024 at 02:56:20PM GMT, Michael Paquier wrote:\n> On Tue, Jul 16, 2024 at 10:27:25AM +0900, Michael Paquier wrote:\n> > Perhaps we should have a few more inline functions like\n> > pgstat_get_entry_len() to retrieve the parts of the custom data in the\n> > snapshot and shmem control structures for fixed-numbered stats. That\n> > would limit what extensions need to know about\n> > pgStatLocal.shmem->custom_data[] and\n> > pgStatLocal.snapshot.custom_data[], which is easy to use incorrectly.\n> > They don't need to know about pgStatLocal at all, either.\n> >\n> > Thinking over the weekend on this patch, splitting injection_stats.c\n> > into two separate files to act as two templates for the variable and\n> > fixed-numbered cases would be more friendly to developers, as well.\n>\n> I've been toying a bit with these two ideas, and the result is\n> actually neater:\n> - The example for fixed-numbered stats is now in its own new file,\n> called injection_stats_fixed.c.\n> - Stats in the dshash are at the same location, injection_stats.c.\n> - pgstat_internal.h gains two inline routines called\n> pgstat_get_custom_shmem_data and pgstat_get_custom_snapshot_data that\n> hide completely the snapshot structure for extensions when it comes to\n> custom fixed-numbered stats, see the new injection_stats_fixed.c that\n> uses them.\n\nAgree, looks good. 
I've tried to quickly sketch out such a fixed\nstatistic for some another extension, everything was fine and pretty\nstraightforward. One question, why don't you use\npgstat_get_custom_shmem_data & pgstat_get_custom_snapshot_data outside\nof the injection points code? There seems to be a couple of possible\nplaces in pgstats itself.\n\n\n", "msg_date": "Sat, 27 Jul 2024 15:49:42 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Sat, Jul 27, 2024 at 03:49:42PM +0200, Dmitry Dolgov wrote:\n> Agree, looks good. I've tried to quickly sketch out such a fixed\n> statistic for some another extension, everything was fine and pretty\n> straightforward.\n\nThat's my hope. Thanks a lot for the feedback.\n\n> One question, why don't you use\n> pgstat_get_custom_shmem_data & pgstat_get_custom_snapshot_data outside\n> of the injection points code? There seems to be a couple of possible\n> places in pgstats itself.\n\nBecause these two helper routines are only able to fetch the fixed\ndata area in the snapshot and the control shmem structures for the\ncustom kinds, not the in-core ones. We could, but the current code is\nOK as well. My point was just to ease the pluggability effort.\n\nI would like to apply this new infrastructure stuff and move on to the\nproblems related to the scability of pg_stat_statements. So, are\nthere any objections with all that?\n--\nMichael", "msg_date": "Sun, 28 Jul 2024 22:20:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "> On Sun, Jul 28, 2024 at 10:20:45PM GMT, Michael Paquier wrote:\n> I would like to apply this new infrastructure stuff and move on to the\n> problems related to the scability of pg_stat_statements. So, are\n> there any objections with all that?\n\nSo far I've got nothing against :)\n\n\n", "msg_date": "Sun, 28 Jul 2024 22:03:56 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Sun, Jul 28, 2024 at 10:03:56PM +0200, Dmitry Dolgov wrote:\n> So far I've got nothing against :)\n\nI've looked again at the first patch of this series, and applied the\nfirst one. Another last-minute edit I have done is to use more\nconsistently PgStat_Kind in the loops for the stats kinds across all\nthe pgstats code.\n\nAttached is a rebased set of the rest, with 0001 now introducing the\npluggable core part.\n--\nMichael", "msg_date": "Fri, 2 Aug 2024 05:53:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" }, { "msg_contents": "On Fri, Aug 02, 2024 at 05:53:31AM +0900, Michael Paquier wrote:\n> Attached is a rebased set of the rest, with 0001 now introducing the\n> pluggable core part.\n\nSo, I have been able to spend a few more days on all that while\ntravelling across three continents, and I have applied the core patch\nfollowed by the template parts after more polishing. The core part\nhas been tweaked a bit more in terms of variable and structure names,\nto bring the builtin and custom stats parts more consistent with each\nother. There were also a bunch of loops that did not use the\nPgStat_Kind, but an int with an index on the custom_data arrays. I\nhave uniformized the whole.\n\nI am keeping an eye on the buildfarm and it is currently green. 
My\nmachines don't seem to have run the new tests with injection points\nyet, the CI on the CF app is not reporting any failure caused by that,\nand my CI runs have all been stable.\n--\nMichael", "msg_date": "Mon, 5 Aug 2024 15:23:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pluggable cumulative statistics" } ]
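To make the registration flow discussed in the thread above concrete, the following is a minimal sketch of how an extension might declare a custom fixed-numbered stats kind from _PG_init(), loosely modeled on the injection_stats_fixed.c template mentioned upthread. The kind ID value, the pgstat_register_kind() entry point, and the exact PgStat_KindInfo field layout are assumptions for illustration, not a verbatim copy of the committed API; the files under src/test/modules/injection_points are the authoritative templates.

    /* Illustrative sketch only; names marked "assumed" are not taken from the thread. */
    #include "postgres.h"

    #include "fmgr.h"
    #include "storage/lwlock.h"
    #include "utils/pgstat_internal.h"

    PG_MODULE_MAGIC;

    /* Kind ID in the range reserved for custom stats kinds (value assumed;
     * real assignments would be coordinated on the wiki, like custom RMGRs). */
    #define PGSTAT_KIND_MYSTATS 130

    /* The stats payload, and its shared-memory wrapper. */
    typedef struct MyStats
    {
        PgStat_Counter calls;
    } MyStats;

    typedef struct MyStatsShared
    {
        LWLock      lock;
        uint32      changecount;
        MyStats     stats;
        MyStats     reset_offset;
    } MyStatsShared;

    static void
    mystats_init_shmem_cb(void *stats)
    {
        MyStatsShared *shmem = (MyStatsShared *) stats;

        LWLockInitialize(&shmem->lock, LWTRANCHE_PGSTATS_DATA);
    }

    static const PgStat_KindInfo mystats_info = {
        .name = "mystats",
        .fixed_amount = true,
        .shared_size = sizeof(MyStatsShared),
        .shared_data_off = offsetof(MyStatsShared, stats),
        .shared_data_len = sizeof(((MyStatsShared *) 0)->stats),
        .init_shmem_cb = mystats_init_shmem_cb,
        /* A real kind would also provide snapshot/reset callbacks, omitted here. */
    };

    void
    _PG_init(void)
    {
        /* Custom kinds must be known very early, hence shared_preload_libraries. */
        pgstat_register_kind(PGSTAT_KIND_MYSTATS, &mystats_info);  /* name assumed */
    }

Reading such a kind's data back would then go through the pgstat_get_custom_shmem_data() / pgstat_get_custom_snapshot_data() helpers added in the last revisions of the patch set, keeping pgStatLocal out of extension code.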
[ { "msg_contents": "The purpose of MultiXactMemberFreezeThreshold() is to make the effective \nautovacuum_multixact_freeze_max_age smaller, if the multixact members \nSLRU is approaching wraparound. Per comment there:\n\n> * To prevent that, if more than a threshold portion of the members space is\n> * used, we effectively reduce autovacuum_multixact_freeze_max_age and\n> * to a value just less than the number of multixacts in use. We hope that\n> * this will quickly trigger autovacuuming on the table or tables with the\n> * oldest relminmxid, thus allowing datminmxid values to advance and removing\n> * some members.\n\nHowever, the value that the function calculates can sometimes be \n*greater* than autovacuum_multixact_freeze_max_age. To get an overview \nof how it behaves, I wrote the attached stand-alone C program to test it \nwith different inputs:\n\nIf members < MULTIXACT_MEMBER_SAFE_THRESHOLD, it just returns \nautovacuum_multixact_freeze_max_age, which is 200 million by default:\n\nmultixacts: 1000000, members 1000000000 -> 200000000\nmultixacts: 1000000, members 2000000000 -> 200000000\nmultixacts: 1000000, members 2100000000 -> 200000000\n\nAbove MULTIXACT_MEMBER_SAFE_THRESHOLD, the members-based calculated \nkicks in:\n\nmultixacts: 1000000, members 2200000000 -> 951091\nmultixacts: 1000000, members 2300000000 -> 857959\nmultixacts: 1000000, members 2500000000 -> 671694\nmultixacts: 1000000, members 3000000000 -> 206033\nmultixacts: 1000000, members 3100000000 -> 112901\nmultixacts: 1000000, members 3500000000 -> 0\nmultixacts: 1000000, members 4000000000 -> 0\n\nHowever, if multixacts is also large the returned value is also quite large:\n\nmultixacts: 1000000000, members 2200000000 -> 951090335\n\nThat's larger than the default autovacuum_multixact_freeze_max_age! If \nyou had set it to a lower non-default value, it's even worse.\n\nI noticed this after I used pg_resetwal to reset next-multixid and \nnext-mxoffset to a high value for testing purposes. Not sure how easy it \nis to reach that situation normally. 
In any case, I think the function \nshould clamp the result to autovacuum_multixact_freeze_max_age, per \nattached.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Thu, 13 Jun 2024 15:28:57 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "MultiXactMemberFreezeThreshold can make autovacuum *less* aggressive" }, { "msg_contents": "On Thu, Jun 13, 2024 at 8:29 AM Heikki Linnakangas <[email protected]> wrote:\n> However, the value that the function calculates can sometimes be\n> *greater* than autovacuum_multixact_freeze_max_age.\n\nThat was definitely not what I intended and is definitely bad.\n\n> In any case, I think the function\n> should clamp the result to autovacuum_multixact_freeze_max_age, per\n> attached.\n\nLGTM.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jun 2024 10:40:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MultiXactMemberFreezeThreshold can make autovacuum *less*\n aggressive" }, { "msg_contents": "On 13/06/2024 17:40, Robert Haas wrote:\n> On Thu, Jun 13, 2024 at 8:29 AM Heikki Linnakangas <[email protected]> wrote:\n>> However, the value that the function calculates can sometimes be\n>> *greater* than autovacuum_multixact_freeze_max_age.\n> \n> That was definitely not what I intended and is definitely bad.\n> \n>> In any case, I think the function\n>> should clamp the result to autovacuum_multixact_freeze_max_age, per\n>> attached.\n> \n> LGTM.\n\nCommitted and backpatched to all supported versions. Thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 19:04:19 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MultiXactMemberFreezeThreshold can make autovacuum *less*\n aggressive" } ]
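For readers who want to reproduce Heikki's table, here is a rough standalone rendering of the calculation with the proposed clamp applied. The threshold constants below are approximations of the ones derived from MaxMultiXactOffset in multixact.c, so treat this as an illustration of the clamp rather than a copy of the committed code.

    /* Simplified sketch; thresholds are approximate, not the real macros. */
    #include <stdint.h>
    #include <stdio.h>

    #define MEMBER_SAFE_THRESHOLD    2147483647U   /* ~ half of the member space */
    #define MEMBER_DANGER_THRESHOLD  3221225472U   /* ~ three quarters of it */

    static int
    member_freeze_threshold(uint32_t multixacts, uint64_t members,
                            int freeze_max_age)
    {
        double      fraction;
        double      victim;
        uint32_t    remaining;

        /* Low member-space usage: keep the configured value as-is. */
        if (members <= MEMBER_SAFE_THRESHOLD)
            return freeze_max_age;

        /* Scale the number of multixacts to eliminate by how far past the
         * safe threshold the member space has grown. */
        fraction = (double) (members - MEMBER_SAFE_THRESHOLD) /
            (MEMBER_DANGER_THRESHOLD - MEMBER_SAFE_THRESHOLD);
        victim = multixacts * fraction;

        if (victim >= multixacts)
            return 0;
        remaining = multixacts - (uint32_t) victim;

        /* The point of the fix: never exceed the configured ceiling, so a large
         * next-multixid can only make autovacuum more aggressive, not less. */
        return (remaining < (uint32_t) freeze_max_age) ?
            (int) remaining : freeze_max_age;
    }

    int
    main(void)
    {
        /* Input pairs taken from the table earlier in the thread. */
        printf("%d\n", member_freeze_threshold(1000000, 2200000000ULL, 200000000));
        printf("%d\n", member_freeze_threshold(1000000000, 2200000000ULL, 200000000));
        return 0;
    }

With the second input pair (a large next-multixid), the unclamped result would exceed the default autovacuum_multixact_freeze_max_age, which is exactly the case the clamp addresses.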
[ { "msg_contents": "Hackers,\n\nAnother apparent inconsistency I’ve noticed in jsonpath queries is the treatment of the && and || operators: They can’t operate on scalar functions, only on other expressions. Some examples:\n\ndavid=# select jsonb_path_query('true', '$ && $');\nERROR: syntax error at or near \"&&\" of jsonpath input\nLINE 1: select jsonb_path_query('true', '$ && $');\n ^\ndavid=# select jsonb_path_query('true', '$.boolean() && $.boolean()');\nERROR: syntax error at or near \"&&\" of jsonpath input\nLINE 1: select jsonb_path_query('true', '$.boolean() && $.boolean()'...\n ^\nThe only place I’ve seen them work is inside filters with binary or unary operands:\n\njsonb_path_query('[1, 3, 7]', '$[*] ? (@ > 1 && @ < 5)');\n jsonb_path_query \n------------------\n 3\n\nIt doesn’t even work with boolean methods!\n\ndavid=# select jsonb_path_query('[1, 3, 7]', '$[*] ? (@.boolean() && @.boolean())');\nERROR: syntax error at or near \"&&\" of jsonpath input\nLINE 1: select jsonb_path_query('[1, 3, 7]', '$[*] ? (@.boolean() &&...\n ^\nOther binary operators work just fine in these sorts of contexts:\n\ndavid=# select jsonb_path_query('1', '$ >= 1');\n jsonb_path_query \n------------------\n true\n(1 row)\n\ndavid=# select jsonb_path_query('[1, 3, 7]', '$[*] ? (@ > 1)');\n jsonb_path_query \n------------------\n 3\n 7\n(2 rows)\n\nShould && and || not also work on scalar operands?\n\nBest,\n\nDavid\n\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 11:32:38 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "jsonpath: Missing Binary Execution Path?" }, { "msg_contents": "On Jun 13, 2024, at 11:32, David E. Wheeler <[email protected]> wrote:\n\n> Should && and || not also work on scalar operands?\n\nI see the same issue for unary !, too:\n\ndavid=# select jsonb_path_query('true', '!$');\nERROR: syntax error at or near \"$\" of jsonpath input\nLINE 1: select jsonb_path_query('true', '!$');\n ^\ndavid=# select jsonb_path_query('[1, 3, 7]', '$[*] ? (!true)');\nERROR: syntax error at end of jsonpath input\nLINE 1: select jsonb_path_query('[1, 3, 7]', '$[*] ? (!true)');\n ^\ndavid=# select jsonb_path_query('[1, 3, 7]', '$[*] ? ([email protected]())');\nERROR: syntax error at or near \"@\" of jsonpath input\nLINE 1: select jsonb_path_query('[1, 3, 7]', '$[*] ? ([email protected]())'...\n ^\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 11:37:00 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" }, { "msg_contents": "On 2024-06-13 Th 11:37, David E. Wheeler wrote:\n> On Jun 13, 2024, at 11:32, David E. Wheeler<[email protected]> wrote:\n>\n>> Should && and || not also work on scalar operands?\n> I see the same issue for unary !, too:\n\n\nWhat does the spec say about these? What do other implementations do?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-06-13 Th 11:37, David E.\n Wheeler wrote:\n\n\nOn Jun 13, 2024, at 11:32, David E. Wheeler <[email protected]> wrote:\n\n\n\nShould && and || not also work on scalar operands?\n\n\n\nI see the same issue for unary !, too:\n\n\n\nWhat does the spec say about these? 
What do other implementations\n do?\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 13 Jun 2024 15:33:50 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" }, { "msg_contents": "On Jun 13, 2024, at 3:33 PM, Andrew Dunstan <[email protected]> wrote:\n\n> What does the spec say about these? What do other implementations do?\n\nPaging Mr. Eisentraut!\n\n:-)\n\nD\n\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 16:43:09 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" }, { "msg_contents": "On 06/13/24 16:43, David E. Wheeler wrote:\n> Paging Mr. Eisentraut!\n\nI'm not Mr. Eisentraut, but I have at last talked my way into some\naccess to the standard, so ...\n\nNote 487 emphasizes that JSON path predicates \"are not expressions;\ninstead they form a separate language that can only be invoked within\na <JSON filter expression>\".\n\nThe only operators usable in a general expression (that is, a\n<JSON path wff> are binary + - and binary * / % and unary + -\nover a <JSON accessor expression>.\n\nInside a filter, you get to use a <JSON path predicate>. That's where\nyou can use ! and && and ||. But ! can only be applied to a\n<JSON delimited predicate>: either a <JSON exists path predicate>,\nor any other <JSON path predicate> wrapped in parentheses.\n\nOn 06/13/24 11:32, David E. Wheeler wrote:\n> david=# select jsonb_path_query('true', '$ && $');\n> david=# select jsonb_path_query('true', '$.boolean() && $.boolean()');\n\nThose don't work because, as you recognized, they're not inside filters.\n\n> david=# select jsonb_path_query('[1, 3, 7]', '$[*] ? (@.boolean() && @.boolean())');\n\nThat doesn't work because the operands of && or || must have the grammatical\nform of predicates; it's not enough that they be expressions of boolean\ntype. '$[*] ? (@.boolean() == true && @.boolean() == true)' ought to work\n(though in any other context you'd probably call it a code smell!) because\neach operand is now a <JSON comparison predicate>.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 13 Jun 2024 21:09:54 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" }, { "msg_contents": "On Thu, Jun 13, 2024 at 6:10 PM Chapman Flack <[email protected]> wrote:\n\n> On 06/13/24 16:43, David E. Wheeler wrote:\n> > Paging Mr. Eisentraut!\n>\n> I'm not Mr. Eisentraut, but I have at last talked my way into some\n> access to the standard, so ...\n>\n> Note 487 emphasizes that JSON path predicates \"are not expressions;\n> instead they form a separate language that can only be invoked within\n> a <JSON filter expression>\".\n>\n> The only operators usable in a general expression (that is, a\n> <JSON path wff> are binary + - and binary * / % and unary + -\n> over a <JSON accessor expression>.\n>\n> Inside a filter, you get to use a <JSON path predicate>. That's where\n> you can use ! and && and ||. But ! can only be applied to a\n> <JSON delimited predicate>: either a <JSON exists path predicate>,\n> or any other <JSON path predicate> wrapped in parentheses.\n>\n> On 06/13/24 11:32, David E. 
Wheeler wrote:\n> > david=# select jsonb_path_query('true', '$ && $');\n> > david=# select jsonb_path_query('true', '$.boolean() && $.boolean()');\n>\n> Those don't work because, as you recognized, they're not inside filters.\n>\n\nI'm content that the operators in the 'filter operators' table need to be\nwithin filter but then I cannot reconcile why this example worked:\n\ndavid=# select jsonb_path_query('1', '$ >= 1');\n jsonb_path_query\n------------------\n true\n(1 row)\n\nDavid J.\n\nOn Thu, Jun 13, 2024 at 6:10 PM Chapman Flack <[email protected]> wrote:On 06/13/24 16:43, David E. Wheeler wrote:\n> Paging Mr. Eisentraut!\n\nI'm not Mr. Eisentraut, but I have at last talked my way into some\naccess to the standard, so ...\n\nNote 487 emphasizes that JSON path predicates \"are not expressions;\ninstead they form a separate language that can only be invoked within\na <JSON filter expression>\".\n\nThe only operators usable in a general expression (that is, a\n<JSON path wff> are binary + - and binary * / % and unary + -\nover a <JSON accessor expression>.\n\nInside a filter, you get to use a <JSON path predicate>. That's where\nyou can use ! and && and ||. But ! can only be applied to a\n<JSON delimited predicate>: either a <JSON exists path predicate>,\nor any other <JSON path predicate> wrapped in parentheses.\n\nOn 06/13/24 11:32, David E. Wheeler wrote:\n> david=# select jsonb_path_query('true', '$ && $');\n> david=# select jsonb_path_query('true', '$.boolean() && $.boolean()');\n\nThose don't work because, as you recognized, they're not inside filters.I'm content that the operators in the 'filter operators' table need to be within filter but then I cannot reconcile why this example worked:david=# select jsonb_path_query('1', '$ >= 1'); jsonb_path_query------------------ true(1 row)David J.", "msg_date": "Thu, 13 Jun 2024 18:24:23 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" }, { "msg_contents": "On 06/13/24 21:24, David G. Johnston wrote:\n> I'm content that the operators in the 'filter operators' table need to be\n> within filter but then I cannot reconcile why this example worked:\n> \n> david=# select jsonb_path_query('1', '$ >= 1');\n\nGood point. I can't either. No way I can see to parse that as\na <JSON path wff>.\n\nRegards,\n-Chap\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 21:40:11 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" }, { "msg_contents": "On Thursday, June 13, 2024, Chapman Flack <[email protected]> wrote:\n\n> On 06/13/24 21:24, David G. Johnston wrote:\n> > I'm content that the operators in the 'filter operators' table need to be\n> > within filter but then I cannot reconcile why this example worked:\n> >\n> > david=# select jsonb_path_query('1', '$ >= 1');\n>\n> Good point. I can't either. No way I can see to parse that as\n> a <JSON path wff>.\n>\n\n\nWhether we note it as non-standard or not is an open question then, but it\ndoes work and opens up a documentation question. It seems like it needs to\nappear in table T9.50. Whether it also should appear in T9.51 is the\nquestion. 
It seems like anything in T9.50 is allowed in a filter while the\nstuff in T9.51 should be limited to those things only allowed in a filter.\nWhich suggests moving it from T9.51 to T9.50\n\nDavid J.\n\nOn Thursday, June 13, 2024, Chapman Flack <[email protected]> wrote:On 06/13/24 21:24, David G. Johnston wrote:\n> I'm content that the operators in the 'filter operators' table need to be\n> within filter but then I cannot reconcile why this example worked:\n> \n> david=# select jsonb_path_query('1', '$ >= 1');\n\nGood point. I can't either. No way I can see to parse that as\na <JSON path wff>.\nWhether we note it as non-standard or not is an open question then, but it does work and opens up a documentation question.  It seems like it needs to appear in table T9.50.  Whether it also should appear in T9.51 is the question.  It seems like anything in T9.50 is allowed in a filter while the stuff in T9.51 should be limited to those things only allowed in a filter.  Which suggests moving it from T9.51 to T9.50David J.", "msg_date": "Thu, 13 Jun 2024 18:46:15 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" }, { "msg_contents": "On 06/13/24 21:46, David G. Johnston wrote:\n>>> david=# select jsonb_path_query('1', '$ >= 1');\n>>\n>> Good point. I can't either. No way I can see to parse that as\n>> a <JSON path wff>.\n> \n> Whether we note it as non-standard or not is an open question then, but it\n> does work and opens up a documentation question.\n\nDoes the fact that it does work raise any potential concern that our\ngrammar is nonconformant in some way that could present a headache\nsomewhere else, or down the road with a later standard edition?\n\nRegards,\n-Chap\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 21:58:57 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" }, { "msg_contents": "On Thursday, June 13, 2024, Chapman Flack <[email protected]> wrote:\n\n> On 06/13/24 21:46, David G. Johnston wrote:\n> >>> david=# select jsonb_path_query('1', '$ >= 1');\n> >>\n> >> Good point. I can't either. No way I can see to parse that as\n> >> a <JSON path wff>.\n> >\n> > Whether we note it as non-standard or not is an open question then, but\n> it\n> > does work and opens up a documentation question.\n>\n> Does the fact that it does work raise any potential concern that our\n> grammar is nonconformant in some way that could present a headache\n> somewhere else, or down the road with a later standard edition?\n>\n\nThis isn’t new in v17 nor, to my knowledge, has the behavior changed, so I\nthink we just need to live with whatever, likely minimal, chance of\nheadache there is.\n\nI don’t get why the outcome of a boolean producing operation isn’t just\ngenerally allowed to be produced, and would hope the standard would move\ntoward allowing that across the board, and in doing so end up matching what\nwe already have implemented.\n\nDavid J.\n\nOn Thursday, June 13, 2024, Chapman Flack <[email protected]> wrote:On 06/13/24 21:46, David G. Johnston wrote:\n>>> david=# select jsonb_path_query('1', '$ >= 1');\n>>\n>> Good point. I can't either. 
No way I can see to parse that as\n>> a <JSON path wff>.\n> \n> Whether we note it as non-standard or not is an open question then, but it\n> does work and opens up a documentation question.\n\nDoes the fact that it does work raise any potential concern that our\ngrammar is nonconformant in some way that could present a headache\nsomewhere else, or down the road with a later standard edition?\nThis isn’t new in v17 nor, to my knowledge, has the behavior changed, so I think we just need to live with whatever, likely minimal, chance of headache there is.I don’t get why the outcome of a boolean producing operation isn’t just generally allowed to be produced, and would hope the standard would move toward allowing that across the board, and in doing so end up matching what we already have implemented.David J.", "msg_date": "Thu, 13 Jun 2024 19:14:36 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" }, { "msg_contents": "On Jun 13, 2024, at 21:58, Chapman Flack <[email protected]> wrote:\n\n>>>> david=# select jsonb_path_query('1', '$ >= 1');\n>>> \n>>> Good point. I can't either. No way I can see to parse that as\n>>> a <JSON path wff>.\n>> \n>> Whether we note it as non-standard or not is an open question then, but it\n>> does work and opens up a documentation question.\n> \n> Does the fact that it does work raise any potential concern that our\n> grammar is nonconformant in some way that could present a headache\n> somewhere else, or down the road with a later standard edition?\n\nI believe this case is already covered in the docs as a Postgres-specific feature: predicate path expressions.\n\nBut even inside filters I don’t understand why &&, ||, at least, currently only work if their operands are predicate expressions. Seems weird; and your notes above suggest that rule applies only to !, which makes slightly more sense.\n\nD\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 22:16:09 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" }, { "msg_contents": "On 06/13/24 22:16, David E. Wheeler wrote:\n> But even inside filters I don’t understand why &&, ||, at least,\n> currently only work if their operands are predicate expressions.\n> Seems weird; and your notes above suggest that rule applies only to !,\n> which makes slightly more sense.\n\nIt's baked right into the standard grammar: || can only have a\n<JSON boolean conjunction> on its right and a <JSON boolean disjunction>\non its left.\n\n&& can only have a <JSON boolean negation> on its right and a\n<JSON boolean conjunction> on its left.\n\nThe case for ! is even more limiting: it can't be applied to anything\nbut a <JSON delimited predicate>. That can be either the exists predicate,\nor, any other <JSON path predicate> but wrapped in parentheses.\n\nThe language seems sort of gappy in the same way XPath 1.0 was. XPath 2.0\nbecame much more consistent and conceptually unified, only by that time,\nXML was old school, and JSON was cool, and apparently started inventing\na path language.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 13 Jun 2024 22:31:02 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" 
}, { "msg_contents": "On Jun 13, 2024, at 22:31, Chapman Flack <[email protected]> wrote:\n\n> It's baked right into the standard grammar: || can only have a\n> <JSON boolean conjunction> on its right and a <JSON boolean disjunction>\n> on its left.\n> \n> && can only have a <JSON boolean negation> on its right and a\n> <JSON boolean conjunction> on its left.\n\nWow. \n\n> The case for ! is even more limiting: it can't be applied to anything\n> but a <JSON delimited predicate>. That can be either the exists predicate,\n> or, any other <JSON path predicate> but wrapped in parentheses.\n> \n> The language seems sort of gappy in the same way XPath 1.0 was. XPath 2.0\n> became much more consistent and conceptually unified, only by that time,\n> XML was old school, and JSON was cool, and apparently started inventing\n> a path language.\n\nI suppose that’s the reason for this design. But if these sorts of limitations were changed in XPath, perhaps SQL-Next could fix them, too.\n\nThanks for citing the standard; super helpful.\n\nD\n\n\n\n", "msg_date": "Fri, 14 Jun 2024 09:59:45 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Missing Binary Execution Path?" } ]
[ { "msg_contents": "Hi\n\nI am found strange switch:\n\n<--><-->switch (carg->mode)\n<--><-->{\n<--><--><-->case RAW_PARSE_PLPGSQL_EXPR:\n<--><--><--><-->errcontext(\"SQL expression \\\"%s\\\"\", query);\n<--><--><--><-->break;\n<--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN1:\n<--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN2:\n<--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN3:\n<--><--><--><-->errcontext(\"PL/pgSQL assignment \\\"%s\\\"\", query);\n<--><--><--><-->break;\n<--><--><-->default:\n<--><--><--><-->errcontext(\"SQL statement \\\"%s\\\"\", query);\n<--><--><--><-->break;\n<--><-->}\n\nIs the message \"SQL expression ...\" for RAW_PLPGSQL_EXPR correct?\n\nShould there be a \"PL/pgSQL expression\" instead?\n\nRegards\n\nPavel\n\nHiI am found strange switch:<--><-->switch (carg->mode)<--><-->{<--><--><-->case RAW_PARSE_PLPGSQL_EXPR:<--><--><--><-->errcontext(\"SQL expression \\\"%s\\\"\", query);<--><--><--><-->break;<--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN1:<--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN2:<--><--><-->case RAW_PARSE_PLPGSQL_ASSIGN3:<--><--><--><-->errcontext(\"PL/pgSQL assignment \\\"%s\\\"\", query);<--><--><--><-->break;<--><--><-->default:<--><--><--><-->errcontext(\"SQL statement \\\"%s\\\"\", query);<--><--><--><-->break;<--><-->}Is the message \"SQL expression ...\" for RAW_PLPGSQL_EXPR correct?Should there  be a \"PL/pgSQL expression\" instead?RegardsPavel", "msg_date": "Thu, 13 Jun 2024 19:21:57 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "strange context message in spi.c?" }, { "msg_contents": "> On 13 Jun 2024, at 19:21, Pavel Stehule <[email protected]> wrote:\n\n> Is the message \"SQL expression ...\" for RAW_PLPGSQL_EXPR correct?\n\nThat indeed seems incorrect.\n\n> Should there be a \"PL/pgSQL expression\" instead?\n\nI think that would make more sense.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 13 Jun 2024 20:56:20 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange context message in spi.c?" }, { "msg_contents": "Hi\n\nčt 13. 6. 2024 v 20:56 odesílatel Daniel Gustafsson <[email protected]>\nnapsal:\n\n> > On 13 Jun 2024, at 19:21, Pavel Stehule <[email protected]> wrote:\n>\n> > Is the message \"SQL expression ...\" for RAW_PLPGSQL_EXPR correct?\n>\n> That indeed seems incorrect.\n>\n> > Should there be a \"PL/pgSQL expression\" instead?\n>\n> I think that would make more sense.\n>\n\nhere is the patch\n\nRegards\n\nPavel\n\n\n>\n> --\n> Daniel Gustafsson\n>\n>", "msg_date": "Sat, 15 Jun 2024 08:59:49 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange context message in spi.c?" }, { "msg_contents": "Hi! Looks good to me!\nBest regards, Stepan Neretin.\n\nHi! Looks good to me! Best regards, Stepan Neretin.", "msg_date": "Mon, 24 Jun 2024 16:14:05 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange context message in spi.c?" }, { "msg_contents": "> On 24 Jun 2024, at 11:14, Stepan Neretin <[email protected]> wrote:\n> \n> Hi! Looks good to me! \n\nThanks for review. I have this on my TODO for when the tree branches, it\ndoesn't seem like anything worth squeezing in before then.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 21:04:32 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange context message in spi.c?" 
}, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: tested, failed\nDocumentation: tested, failed\n\nAs tree is branched out for PG17, I guess now it's time to commit.\r\n- No need to rebase\r\n- make, make-check , install-check verified\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Sat, 03 Aug 2024 05:43:47 +0000", "msg_from": "Umar Hayat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange context message in spi.c?" }, { "msg_contents": "On 03.08.24 07:43, Umar Hayat wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, failed\n> Implements feature: tested, failed\n> Spec compliant: tested, failed\n> Documentation: tested, failed\n> \n> As tree is branched out for PG17, I guess now it's time to commit.\n> - No need to rebase\n> - make, make-check , install-check verified\n> \n> The new status of this patch is: Ready for Committer\n\ncommitted\n\n\n", "msg_date": "Thu, 5 Sep 2024 15:25:47 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange context message in spi.c?" } ]
[ { "msg_contents": "IMHO there are a couple of opportunities for improving the predefined roles\ndocumentation [0]:\n\n* Several of the roles in the table do not have corresponding descriptions\n in the paragraphs below the table (e.g., pg_read_all_data,\n pg_write_all_data, pg_checkpoint, pg_maintain,\n pg_use_reserved_connections, and pg_create_subscription). Furthermore,\n IMHO it is weird to have some of the information in the table and some\n more in a paragraph down the page.\n\n* The table has grown quite a bit over the years, but the entries are\n basically unordered, requiring readers to perform a linear search (O(n))\n to find information about a specific role.\n\n* Documentation that refers to these roles cannot link to a specific one.\n Currently, we just link to the page or the table.\n\nI think we could improve matters by abandoning the table and instead\ndocumenting these roles more like we document GUCs, i.e., each one has a\nsection below it where we can document it in as much detail as we want.\nSome of these roles should probably be documented together (e.g.,\npg_read_all_data and pg_write_all_data), so the ordering is unlikely to be\nperfect, but I'm hoping it would still be a net improvement.\n\nThoughts?\n\n[0] https://www.postgresql.org/docs/devel/predefined-roles.html\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 13 Jun 2024 14:48:11 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "improve predefined roles documentation" }, { "msg_contents": "On Thu, Jun 13, 2024 at 12:48 PM Nathan Bossart <[email protected]>\nwrote:\n\n> I think we could improve matters by abandoning the table and instead\n> documenting these roles more like we document GUCs, i.e., each one has a\n> section below it where we can document it in as much detail as we want.\n>\n>\nOne of the main attributes for the GUCs is their category. If we want to\nimprove organization we'd need to assign categories first. We already\nimplicitly do so in the description section where we do group them together\nand explain why - but it is all informal. But getting rid of those\ngroupings and descriptions and isolating each role so it can be linked to\nmore easily seems like a net loss in usability.\n\nI'm against getting rid of the table. If we do add authoritative\nsubsection anchors we should just do like we do in System Catalogs and make\nthe existing table name values hyperlinks to those newly added anchors.\nBreaking the one table up into multiple tables along category lines is\nsomething to consider.\n\nDavid J.\n\nOn Thu, Jun 13, 2024 at 12:48 PM Nathan Bossart <[email protected]> wrote:I think we could improve matters by abandoning the table and instead\ndocumenting these roles more like we document GUCs, i.e., each one has a\nsection below it where we can document it in as much detail as we want.One of the main attributes for the GUCs is their category.  If we want to improve organization we'd need to assign categories first.  We already implicitly do so in the description section where we do group them together and explain why - but it is all informal.  But getting rid of those groupings and descriptions and isolating each role so it can be linked to more easily seems like a net loss in usability.I'm against getting rid of the table.  If we do add authoritative subsection anchors we should just do like we do in System Catalogs and make the existing table name values hyperlinks to those newly added anchors.  
Breaking the one table up into multiple tables along category lines is something to consider.David J.", "msg_date": "Thu, 13 Jun 2024 13:05:33 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Thu, Jun 13, 2024 at 01:05:33PM -0700, David G. Johnston wrote:\n> One of the main attributes for the GUCs is their category. If we want to\n> improve organization we'd need to assign categories first. We already\n> implicitly do so in the description section where we do group them together\n> and explain why - but it is all informal. But getting rid of those\n> groupings and descriptions and isolating each role so it can be linked to\n> more easily seems like a net loss in usability.\n\nWhat I had in mind is that we would retain these groupings. I agree that\nisolating roles like pg_read_all_data and pg_write_all_data would be no\ngood.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 13 Jun 2024 15:11:15 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Thu, Jun 13, 2024 at 3:48 PM Nathan Bossart <[email protected]> wrote:\n> I think we could improve matters by abandoning the table and instead\n> documenting these roles more like we document GUCs, i.e., each one has a\n> section below it where we can document it in as much detail as we want.\n> Some of these roles should probably be documented together (e.g.,\n> pg_read_all_data and pg_write_all_data), so the ordering is unlikely to be\n> perfect, but I'm hoping it would still be a net improvement.\n\n+1. I'm not sure about all of the details, but I like the general idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jun 2024 14:10:22 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Mon, Jun 17, 2024 at 02:10:22PM -0400, Robert Haas wrote:\n> On Thu, Jun 13, 2024 at 3:48 PM Nathan Bossart <[email protected]> wrote:\n>> I think we could improve matters by abandoning the table and instead\n>> documenting these roles more like we document GUCs, i.e., each one has a\n>> section below it where we can document it in as much detail as we want.\n>> Some of these roles should probably be documented together (e.g.,\n>> pg_read_all_data and pg_write_all_data), so the ordering is unlikely to be\n>> perfect, but I'm hoping it would still be a net improvement.\n> \n> +1. I'm not sure about all of the details, but I like the general idea.\n\nHere is a first try. I did pretty much exactly what I proposed in the\nquoted text, so I don't have much else to say about it. I didn't see an\neasy way to specify multiple ids and xreflabels for a given entry, so the\nentries that describe multiple roles just use the name of the first role\nlisted. In practice, I think this just means you need to do a little extra\nwork when linking to one of the other roles from elsewhere in the docs,\nwhich doesn't seem too terrible.\n\n-- \nnathan\n\n\n21.5. Predefined Roles21.5. Predefined RolesPrev UpChapter 21. Database RolesHome Next21.5. Predefined Roles #\nPostgreSQL provides a set of predefined roles\n that provide access to certain, commonly needed, privileged capabilities\n and information. 
Administrators (including roles that have the\n CREATEROLE privilege) can GRANT these\n roles to users and/or other roles in their environment, providing those\n users with access to the specified capabilities and information. For\n example:\n\n\nGRANT pg_signal_backend TO admin_user;\n\nWarning\n Care should be taken when granting these roles to ensure they are only used\n where needed and with the understanding that these roles grant access to\n privileged information.\n \n The predefined roles are described below.\n Note that the specific permissions for each of the roles may change in\n the future as additional capabilities are added. Administrators\n should monitor the release notes for changes.\n\n pg_checkpoint #\n Allows executing the\n CHECKPOINT command.\n pg_create_subscription #\n Allows users with CREATE permission on the database to issue\n CREATE SUBSCRIPTION.\n pg_database_owner #\n Membership consists, implicitly, of the current database owner. Like\n any role, it can own objects or receive grants of access privileges.\n Consequently, once pg_database_owner has rights\n within a template database, each owner of a database instantiated from\n that template will exercise those rights.\n pg_database_owner cannot be a member of any role, and\n it cannot have non-implicit members. Initially, this role owns the\n public schema, so each database owner governs local\n use of the schema.\n pg_maintain #\n Allows executing\n VACUUM,\n ANALYZE,\n CLUSTER,\n REFRESH MATERIALIZED VIEW,\n REINDEX,\n and LOCK TABLE on all\n relations, as if having MAINTAIN rights on those\n objects, even without having it explicitly.\n pg_read_all_datapg_write_all_data #\npg_read_all_data allows reading all data (tables,\n views, sequences), as if having SELECT rights on\n those objects, and USAGE rights on all schemas, even without having it\n explicitly. This role does not have the role attribute\n BYPASSRLS set. If RLS is being used, an\n administrator may wish to set BYPASSRLS on roles\n which this role is GRANTed to.\n \npg_write_all_data allows writing all data (tables,\n views, sequences), as if having INSERT,\n UPDATE, and DELETE rights on those\n objects, and USAGE rights on all schemas, even without having it\n explicitly. This role does not have the role attribute\n BYPASSRLS set. If RLS is being used, an\n administrator may wish to set BYPASSRLS on roles\n which this role is GRANTed to.\n pg_read_all_settingspg_read_all_statspg_stat_scan_tablespg_monitor #\n These roles are intended to allow administrators to easily configure a\n role for the purpose of monitoring the database server. They grant a\n set of common privileges allowing the role to read various useful\n configuration settings, statistics, and other system information\n normally restricted to superusers.\n \npg_read_all_settings allows reading all configuration\n variables, even those normally visible only to superusers.\n \npg_read_all_stats allows reading all pg_stat_* views\n and use various statistics related extensions, even those normally\n visible only to superusers.\n \npg_stat_scan_tables allows executing monitoring\n functions that may take ACCESS SHARE locks on tables,\n potentially for a long time.\n \npg_monitor allows reading/executing various\n monitoring views and functions. 
This role is a member of\n pg_read_all_settings,\n pg_read_all_stats and\n pg_stat_scan_tables.\n pg_read_server_filespg_write_server_filespg_execute_server_program #\n These roles are intended to allow administrators to have trusted, but\n non-superuser, roles which are able to access files and run programs on\n the database server as the user the database runs as. As these roles\n are able to access any file on the server file system, they bypass all\n database-level permission checks when accessing files directly and they\n could be used to gain superuser-level access, therefore great care\n should be taken when granting these roles to users.\n \npg_read_server_files allows reading files from any\n location the database can access on the server with COPY and other\n file-access functions.\n \npg_write_server_files allows writing to files in any\n location the database can access on the server with COPY any other\n file-access functions.\n \npg_execute_server_program allows executing programs\n on the database server as the user the database runs as with COPY and\n other functions which allow executing a server-side program.\n pg_signal_backend #\n Allows signaling another backend to cancel a query or terminate its\n session. A user granted this role cannot however send signals to a\n backend owned by a superuser. See\n Section 9.28.2.\n pg_use_reserved_connections #\n Allows use of connection slots reserved via\n reserved_connections.\n \nPrev Up Next21.4. Dropping Roles Home 21.6. Function Security", "msg_date": "Tue, 18 Jun 2024 11:52:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Tue, Jun 18, 2024 at 9:52 AM Nathan Bossart <[email protected]>\nwrote:\n\n> On Mon, Jun 17, 2024 at 02:10:22PM -0400, Robert Haas wrote:\n> > On Thu, Jun 13, 2024 at 3:48 PM Nathan Bossart <[email protected]>\n> wrote:\n> >> I think we could improve matters by abandoning the table and instead\n> >> documenting these roles more like we document GUCs, i.e., each one has a\n> >> section below it where we can document it in as much detail as we want.\n> >> Some of these roles should probably be documented together (e.g.,\n> >> pg_read_all_data and pg_write_all_data), so the ordering is unlikely to\n> be\n> >> perfect, but I'm hoping it would still be a net improvement.\n> >\n> > +1. I'm not sure about all of the details, but I like the general idea.\n>\n> Here is a first try. I did pretty much exactly what I proposed in the\n> quoted text, so I don't have much else to say about it. I didn't see an\n> easy way to specify multiple ids and xreflabels for a given entry, so the\n> entries that describe multiple roles just use the name of the first role\n> listed. In practice, I think this just means you need to do a little extra\n> work when linking to one of the other roles from elsewhere in the docs,\n> which doesn't seem too terrible.\n>\n>\nI like this. Losing the table turned out to be ok. Thank you.\n\nI would probably put pg_monitor first in the list.\n\n+ A user granted this role cannot however send signals to a backend owned\nby a superuser.\n\nRemove \"however\", or put commas around it. 
I prefer the first option.\n\nDo we really need to repeat \"even without having it explicitly\" everywhere?\n\n+ This role does not have the role attribute BYPASSRLS set.\n\nEven if it did, that attribute isn't inherited anyway...\n\n\"This role is still governed by any row level security policies that may be\nin force. Consider setting the BYPASSRLS attribute on member roles.\"\n\n(assuming they intend it to be ALL data then doing the bypassrls even if\nthey are not today using it doesn't hurt)\n\npg_stat_scan_tables - This explanation leaves me wanting more. Maybe give\nan example of such a function? I think the bar is set a bit too high just\ntalking about a specific lock level.\n\n\"As these roles are able to access any file on the server file system,\"\n\nWe forbid running under root so this isn't really true. They do have\noperating system level access logged in as the database process owner.\nThey are able to access all PostgreSQL files on the server file system and\nusually can run a wide-variety of commands on the server.\n\n\"access, therefore great care should be taken\"\n\nI would go with:\n\n\"access. Great care should be taken\"\n\nSeems more impactful as its own sentence then at the end of a long\nmulti-part sentence.\n\n\"server with COPY any other file-access functions.\" - s/with/using/\n\nDavid J.\n\nOn Tue, Jun 18, 2024 at 9:52 AM Nathan Bossart <[email protected]> wrote:On Mon, Jun 17, 2024 at 02:10:22PM -0400, Robert Haas wrote:\n> On Thu, Jun 13, 2024 at 3:48 PM Nathan Bossart <[email protected]> wrote:\n>> I think we could improve matters by abandoning the table and instead\n>> documenting these roles more like we document GUCs, i.e., each one has a\n>> section below it where we can document it in as much detail as we want.\n>> Some of these roles should probably be documented together (e.g.,\n>> pg_read_all_data and pg_write_all_data), so the ordering is unlikely to be\n>> perfect, but I'm hoping it would still be a net improvement.\n> \n> +1. I'm not sure about all of the details, but I like the general idea.\n\nHere is a first try.  I did pretty much exactly what I proposed in the\nquoted text, so I don't have much else to say about it.  I didn't see an\neasy way to specify multiple ids and xreflabels for a given entry, so the\nentries that describe multiple roles just use the name of the first role\nlisted.  In practice, I think this just means you need to do a little extra\nwork when linking to one of the other roles from elsewhere in the docs,\nwhich doesn't seem too terrible.I like this.  Losing the table turned out to be ok.  Thank you.I would probably put pg_monitor first in the list.+ A user granted this role cannot however send signals to a backend owned by a superuser.Remove \"however\", or put commas around it.  I prefer the first option.Do we really need to repeat \"even without having it explicitly\" everywhere?+ This role does not have the role attribute BYPASSRLS set.Even if it did, that attribute isn't inherited anyway...\"This role is still governed by any row level security policies that may be in force.  Consider setting the BYPASSRLS attribute on member roles.\"(assuming they intend it to be ALL data then doing the bypassrls even if they are not today using it doesn't hurt)pg_stat_scan_tables - This explanation leaves me wanting more.  Maybe give an example of such a function?  
I think the bar is set a bit too high just talking about a specific lock level.\"As these roles are able to access any file on the server file system,\"We forbid running under root so this isn't really true.  They do have operating system level access logged in as the database process owner.  They are able to access all PostgreSQL files on the server file system and usually can run a wide-variety of commands on the server.\"access, therefore great care should be taken\"I would go with:\"access.  Great care should be taken\"Seems more impactful as its own sentence then at the end of a long multi-part sentence.\"server with COPY any other file-access functions.\" - s/with/using/David J.", "msg_date": "Thu, 20 Jun 2024 19:57:16 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Thu, Jun 20, 2024 at 07:57:16PM -0700, David G. Johnston wrote:\n> I like this. Losing the table turned out to be ok. Thank you.\n\nAwesome.\n\n> I would probably put pg_monitor first in the list.\n\nDone.\n\n> + A user granted this role cannot however send signals to a backend owned\n> by a superuser.\n> \n> Remove \"however\", or put commas around it. I prefer the first option.\n\nThis sentence caught my eye earlier, too, because it seems to imply that a\nsuperuser granted this role cannot signal superuser-owned backends. I\nchanged it to the following:\n\n\tNote that this role does not permit signaling backends owned by a\n\tsuperuser.\n\nHow does that sound?\n\n> Do we really need to repeat \"even without having it explicitly\" everywhere?\n\nRemoved.\n\n> + This role does not have the role attribute BYPASSRLS set.\n> \n> Even if it did, that attribute isn't inherited anyway...\n> \n> \"This role is still governed by any row level security policies that may be\n> in force. Consider setting the BYPASSRLS attribute on member roles.\"\n> \n> (assuming they intend it to be ALL data then doing the bypassrls even if\n> they are not today using it doesn't hurt)\n\nHow does something like the following sound?\n\n\tThis role does not bypass row-level security (RLS) policies. If RLS is\n\tbeing used, an administrator may wish to set BYPASSRLS on roles which\n\tthis role is granted to.\n\n> pg_stat_scan_tables - This explanation leaves me wanting more. Maybe give\n> an example of such a function? I think the bar is set a bit too high just\n> talking about a specific lock level.\n\nI was surprised to learn that this role only provides privileges for\nfunctions in contrib/ modules. Anyway, added an example.\n\n> \"As these roles are able to access any file on the server file system,\"\n> \n> We forbid running under root so this isn't really true. They do have\n> operating system level access logged in as the database process owner.\n> They are able to access all PostgreSQL files on the server file system and\n> usually can run a wide-variety of commands on the server.\n\nI just deleted this clause.\n\n> \"access, therefore great care should be taken\"\n> \n> I would go with:\n> \n> \"access. 
Great care should be taken\"\n> \n> Seems more impactful as its own sentence then at the end of a long\n> multi-part sentence.\n\nDone.\n\n> \"server with COPY any other file-access functions.\" - s/with/using/\n\nDone.\n\n-- \nnathan", "msg_date": "Fri, 21 Jun 2024 10:40:13 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Fri, Jun 21, 2024 at 11:40 AM Nathan Bossart\n<[email protected]> wrote:\n> Done.\n\nIf you look at how the varlistentries begin, there are three separate patterns:\n\n* Some document a single role and start with \"Allow doing blah blah blah\".\n\n* Some document a couple of rolls so there are several paragraphs,\neach beginning with \"<literal>name_of_role</literal allows doing blah\nblah blah\". This is sometimes preceded by an introductory paragraph\nexplaining why this group of roles exists and what it's intended to\ndo.\n\n* pg_database_owner is completely different from the rest, focusing on\nexplaining who is in the role rather than what the role gets to do.\n\nI think the first two cases could be made more like each other by\nchanging the varlistentires that are just about one setting to use the\nsecond format instead of the first, e.g. pg_checkpoint allows\nexecuting the CHECKPOINT command.\n\nI don't know what to do about pg_database_owner. I almost wonder if\nthat should be moved out of the table and documented as a special\ncase. Or maybe some more wordsmithing would add clarity. Or maybe it's\nfine as-is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jun 2024 14:44:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Mon, Jun 24, 2024 at 02:44:33PM -0400, Robert Haas wrote:\n> I think the first two cases could be made more like each other by\n> changing the varlistentires that are just about one setting to use the\n> second format instead of the first, e.g. pg_checkpoint allows\n> executing the CHECKPOINT command.\n\nDone.\n\n> I don't know what to do about pg_database_owner. I almost wonder if\n> that should be moved out of the table and documented as a special\n> case. Or maybe some more wordsmithing would add clarity. Or maybe it's\n> fine as-is.\n\nI've left it alone for now. I thought about adding something like\n\"pg_database_owner does not provide any special capabilities or access\nout-of-the-box\" to the beginning of the entry, but I don't have time at the\nmoment to properly wordsmith the rest. If anyone else wants to give it a\ntry before I get to it (probably tomorrow), please be my guest. TBH I\nthink the existing content is pretty good, so I'm not opposed to leaving it\nalone, even if the style is different than the other entries.\n\n-- \nnathan", "msg_date": "Mon, 24 Jun 2024 16:53:23 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Mon, Jun 24, 2024 at 2:53 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Mon, Jun 24, 2024 at 02:44:33PM -0400, Robert Haas wrote:\n>\n> > I don't know what to do about pg_database_owner. I almost wonder if\n> > that should be moved out of the table and documented as a special\n> > case. Or maybe some more wordsmithing would add clarity. Or maybe it's\n> > fine as-is.\n>\n> I've left it alone for now. 
I thought about adding something like\n> \"pg_database_owner does not provide any special capabilities or access\n> out-of-the-box\" to the beginning of the entry, but I don't have time at the\n> moment to properly wordsmith the rest. If anyone else wants to give it a\n> try before I get to it (probably tomorrow), please be my guest.\n>\n\nThis feels like a case where why is more important than what, so here's my\nfirst draft suggestion.\n\npg_database_owner owns the initially created public schema and has an\nimplicit membership list of one - the role owning the connected-to database.\nIt exists to encourage and facilitate best practices regarding database\nadministration. The primary rule being to avoid using superuser to own or\ndo things. The bootstrap superuser thus should connect to the postgres\ndatabase and create a login role, with the createdb attribute, and then use\nthat role to create and administer additional databases. In that context,\nthis feature allows the creator of the new database to log into it and\nimmediately begin working in the public schema.\n\nAs a result, in version 14, PostgreSQL no longer initially grants create\nand usage privileges, on the public schema, to the public pseudo-role.\n\nFor technical reasons, pg_database_owner may not participate in explicitly\ngranted role memberships. This is an easily mitigated limitation since the\nrole that owns the database may be a group and any inheriting members of\nthat group will be considered owners as well.\n\nDavid J.\n\nOn Mon, Jun 24, 2024 at 2:53 PM Nathan Bossart <[email protected]> wrote:On Mon, Jun 24, 2024 at 02:44:33PM -0400, Robert Haas wrote:\n> I don't know what to do about pg_database_owner. I almost wonder if\n> that should be moved out of the table and documented as a special\n> case. Or maybe some more wordsmithing would add clarity. Or maybe it's\n> fine as-is.\n\nI've left it alone for now.  I thought about adding something like\n\"pg_database_owner does not provide any special capabilities or access\nout-of-the-box\" to the beginning of the entry, but I don't have time at the\nmoment to properly wordsmith the rest.  If anyone else wants to give it a\ntry before I get to it (probably tomorrow), please be my guest.This feels like a case where why is more important than what, so here's my first draft suggestion.pg_database_owner owns the initially created public schema and has an implicit membership list of one - the role owning the connected-to database.  It exists to encourage and facilitate best practices regarding database administration.  The primary rule being to avoid using superuser to own or do things.  The bootstrap superuser thus should connect to the postgres database and create a login role, with the createdb attribute, and then use that role to create and administer additional databases.  In that context, this feature allows the creator of the new database to log into it and immediately begin working in the public schema.As a result, in version 14, PostgreSQL no longer initially grants create and usage privileges, on the public schema, to the public pseudo-role. For technical reasons, pg_database_owner may not participate in explicitly granted role memberships.  This is an easily mitigated limitation since the role that owns the database may be a group and any inheriting members of that group will be considered owners as well.David J.", "msg_date": "Mon, 24 Jun 2024 15:53:46 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Mon, Jun 24, 2024 at 03:53:46PM -0700, David G. Johnston wrote:\n> pg_database_owner owns the initially created public schema and has an\n> implicit membership list of one - the role owning the connected-to database.\n> It exists to encourage and facilitate best practices regarding database\n> administration. The primary rule being to avoid using superuser to own or\n> do things.\n\nThis part restates much of the existing text in a slightly different order,\nbut I'm not sure it's an improvement. I like that it emphasizes the intent\nof the role, but the basic description of the role is kind-of buried in the\nfirst sentence. IMO the way this role works is confusing enough that we\nought to keep the basic facts at the very top. I might even add a bit of\nfluff in an attempt to make things clearer:\n\n\tThe pg_database_owner role always has exactly one implicit,\n\tsituation-dependent member, namely the owner of the current database.\n\nOne other thing I like about your proposal is that it moves the bit about\nthe role initially owning the public schema much earlier. That seems like\npossibly the most important practical piece of information to convey to\nadministrators. Perhaps that could be the very next thing after the basic\ndescription of the role.\n\n> The bootstrap superuser thus should connect to the postgres\n> database and create a login role, with the createdb attribute, and then use\n> that role to create and administer additional databases. In that context,\n> this feature allows the creator of the new database to log into it and\n> immediately begin working in the public schema.\n\nIMHO the majority of this is too prescriptive, even if it's generally good\nadvice.\n\n> As a result, in version 14, PostgreSQL no longer initially grants create\n> and usage privileges, on the public schema, to the public pseudo-role.\n\nIME we tend to shy away from adding too many historical details in the\ndocumentation, and I'm not sure this information is directly related enough\nto the role to include here.\n\n> For technical reasons, pg_database_owner may not participate in explicitly\n> granted role memberships. 
This is an easily mitigated limitation since the\n> role that owns the database may be a group and any inheriting members of\n> that group will be considered owners as well.\n\nIIUC the intent of this is to expand on the following sentence in the\nexisting docs:\n\n\tpg_database_owner cannot be a member of any role, and it cannot have\n\tnon-implicit members.\n\nMy instinct would be to do something like this:\n\n\tpg_database_owner cannot be granted membership in any role, and no role\n\tmay be granted non-implicit membership in pg_database_owner.\n\nIMHO the part about mitigating this limitation via groups is again too\nprescriptive.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 25 Jun 2024 10:35:51 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Tue, Jun 25, 2024 at 11:35 AM Nathan Bossart\n<[email protected]> wrote:\n> IIUC the intent of this is to expand on the following sentence in the\n> existing docs:\n>\n> pg_database_owner cannot be a member of any role, and it cannot have\n> non-implicit members.\n>\n> My instinct would be to do something like this:\n>\n> pg_database_owner cannot be granted membership in any role, and no role\n> may be granted non-implicit membership in pg_database_owner.\n\nBut you couldn't grant someone implicit membership either, because\nthen it wouldn't be implicit. So maybe something like this:\n\npg_database_owner is a predefined role for which membership consists,\nimplicitly, of the current database owner. It cannot be granted\nmembership in any role, and no role can be granted membership in\npg_database_owner. However, like any role, it can own objects or\nreceive grants of access privileges. Consequently, once\npg_database_owner has rights within a template database, each owner of\na database instantiated from that template will exercise those rights.\nInitially, this role owns the public schema, so each database owner\ngoverns local use of the schema.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 12:16:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Tue, Jun 25, 2024 at 12:16:30PM -0400, Robert Haas wrote:\n> pg_database_owner is a predefined role for which membership consists,\n> implicitly, of the current database owner. It cannot be granted\n> membership in any role, and no role can be granted membership in\n> pg_database_owner. However, like any role, it can own objects or\n> receive grants of access privileges. Consequently, once\n> pg_database_owner has rights within a template database, each owner of\n> a database instantiated from that template will exercise those rights.\n> Initially, this role owns the public schema, so each database owner\n> governs local use of the schema.\n\nThe main difference between this and the existing documentation is that the\nsentence on membership has been rephrased and moved to earlier in the\nparagraph. I think this helps the logical flow a bit. We first talk about\nimplicit membership, then explicit membership, then we talk about\nprivileges and the consequences of those privileges, and finally we talk\nabout the default privileges. 
So, WFM.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 25 Jun 2024 11:28:18 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Tue, Jun 25, 2024 at 11:28:18AM -0500, Nathan Bossart wrote:\n> On Tue, Jun 25, 2024 at 12:16:30PM -0400, Robert Haas wrote:\n>> pg_database_owner is a predefined role for which membership consists,\n>> implicitly, of the current database owner. It cannot be granted\n>> membership in any role, and no role can be granted membership in\n>> pg_database_owner. However, like any role, it can own objects or\n>> receive grants of access privileges. Consequently, once\n>> pg_database_owner has rights within a template database, each owner of\n>> a database instantiated from that template will exercise those rights.\n>> Initially, this role owns the public schema, so each database owner\n>> governs local use of the schema.\n> \n> The main difference between this and the existing documentation is that the\n> sentence on membership has been rephrased and moved to earlier in the\n> paragraph. I think this helps the logical flow a bit. We first talk about\n> implicit membership, then explicit membership, then we talk about\n> privileges and the consequences of those privileges, and finally we talk\n> about the default privileges. So, WFM.\n\nI used this in v4 (with some minor changes). I've copied it here to ease\nreview.\n\n\tpg_database_owner always has exactly one implicit member: the current\n\tdatabase owner. It cannot be granted membership in any role, and no\n\trole can be granted membership in pg_database_owner. However, like any\n\tother role, it can own objects and receive grants of access privileges.\n\tConsequently, once pg_database_owner has rights within a template\n\tdatabase, each owner of a database instantiated from that template will\n\tpossess those rights. Initially, this role owns the public schema, so\n\teach database owner governs local use of that schema.\n\n-- \nnathan", "msg_date": "Tue, 25 Jun 2024 14:26:46 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Tue, Jun 25, 2024 at 3:26 PM Nathan Bossart <[email protected]> wrote:\n> I used this in v4 (with some minor changes).\n\nLooking at this again, how happy are you with the way you've got\nseveral roles per <varlistentry> instead of one for each? I realize\nthat was probably part of the intent of the change, to move the data\nfrom below the table into the table, and I see the merit of that. But\none of your other complaints was the entries in the table were\nunordered, and it's hard for them to really be ordered if you have\ngroups like this, since you can't alphabetize, for example, unless you\nhave just a single entry per <varlistentry>.\n\nI don't have a problem with doing it the way you have here if you\nthink that's good. I'm just asking.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 16:04:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Tue, Jun 25, 2024 at 04:04:03PM -0400, Robert Haas wrote:\n> Looking at this again, how happy are you with the way you've got\n> several roles per <varlistentry> instead of one for each? 
I realize\n> that was probably part of the intent of the change, to move the data\n> from below the table into the table, and I see the merit of that. But\n> one of your other complaints was the entries in the table were\n> unordered, and it's hard for them to really be ordered if you have\n> groups like this, since you can't alphabetize, for example, unless you\n> have just a single entry per <varlistentry>.\n\nYeah, my options were to either separate the roles or to weaken the\nordering, and I guess I felt like the weaker ordering was slightly less\nbad. The extra context in some of the groups seemed worth keeping, and\nthis probably isn't the only page of our docs that might require ctrl+f.\nBut I'll yield to the majority opinion here.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 25 Jun 2024 15:19:40 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Tue, Jun 25, 2024 at 1:19 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Tue, Jun 25, 2024 at 04:04:03PM -0400, Robert Haas wrote:\n> > Looking at this again, how happy are you with the way you've got\n> > several roles per <varlistentry> instead of one for each? I realize\n> > that was probably part of the intent of the change, to move the data\n> > from below the table into the table, and I see the merit of that. But\n> > one of your other complaints was the entries in the table were\n> > unordered, and it's hard for them to really be ordered if you have\n> > groups like this, since you can't alphabetize, for example, unless you\n> > have just a single entry per <varlistentry>.\n>\n> Yeah, my options were to either separate the roles or to weaken the\n> ordering, and I guess I felt like the weaker ordering was slightly less\n> bad. The extra context in some of the groups seemed worth keeping, and\n> this probably isn't the only page of our docs that might require ctrl+f.\n> But I'll yield to the majority opinion here.\n>\n>\nThere are few enough that logical grouping instead of strict alphabetical\nmakes sense.\n\nv4 WFM\n\nDavid J.\n\nOn Tue, Jun 25, 2024 at 1:19 PM Nathan Bossart <[email protected]> wrote:On Tue, Jun 25, 2024 at 04:04:03PM -0400, Robert Haas wrote:\n> Looking at this again, how happy are you with the way you've got\n> several roles per <varlistentry> instead of one for each? I realize\n> that was probably part of the intent of the change, to move the data\n> from below the table into the table, and I see the merit of that. But\n> one of your other complaints was the entries in the table were\n> unordered, and it's hard for them to really be ordered if you have\n> groups like this, since you can't alphabetize, for example, unless you\n> have just a single entry per <varlistentry>.\n\nYeah, my options were to either separate the roles or to weaken the\nordering, and I guess I felt like the weaker ordering was slightly less\nbad.  The extra context in some of the groups seemed worth keeping, and\nthis probably isn't the only page of our docs that might require ctrl+f.\nBut I'll yield to the majority opinion here.There are few enough that logical grouping instead of strict alphabetical makes sense.v4 WFMDavid J.", "msg_date": "Tue, 25 Jun 2024 17:38:15 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Tue, Jun 25, 2024 at 4:19 PM Nathan Bossart <[email protected]> wrote:\n> Yeah, my options were to either separate the roles or to weaken the\n> ordering, and I guess I felt like the weaker ordering was slightly less\n> bad. The extra context in some of the groups seemed worth keeping, and\n> this probably isn't the only page of our docs that might require ctrl+f.\n> But I'll yield to the majority opinion here.\n\nI'm not objecting. I'm just asking.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 Jun 2024 10:40:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "On Wed, Jun 26, 2024 at 10:40:10AM -0400, Robert Haas wrote:\n> On Tue, Jun 25, 2024 at 4:19 PM Nathan Bossart <[email protected]> wrote:\n>> Yeah, my options were to either separate the roles or to weaken the\n>> ordering, and I guess I felt like the weaker ordering was slightly less\n>> bad. The extra context in some of the groups seemed worth keeping, and\n>> this probably isn't the only page of our docs that might require ctrl+f.\n>> But I'll yield to the majority opinion here.\n> \n> I'm not objecting. I'm just asking.\n\nCool. I'll plan on committing this latest version once v18devel hacking\nbegins.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 26 Jun 2024 10:48:30 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "rebased (due to commit ccd3802, which introduced\npg_signal_autovacuum_worker)\n\n-- \nnathan", "msg_date": "Tue, 9 Jul 2024 13:50:56 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve predefined roles documentation" }, { "msg_contents": "Committed. Thank you for reviewing!\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 10 Jul 2024 16:41:01 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improve predefined roles documentation" } ]
[ { "msg_contents": "Hi hackers,\n\n\nI am using gcc version 11.3.0 to compile postgres source code. Gcc complains about the following line:\n\n\n```c\nstrncpy(sqlca->sqlstate, \"YE001\", sizeof(sqlca->sqlstate));\n```\n\n\nwith error as:\n\n\nmisc.c:529:17: error: ‘strncpy’ output truncated before terminating nul copying 5 bytes from a string of the same length [-Werror=stringop-truncation]\n\n\nI find the definition of `sqlca->sqlstate` and it has only 5 bytes. When the statement\n\n\n```c\nstrncpy(sqlca->sqlstate, \"YE001\", sizeof(sqlca->sqlstate));\n```\n\n\nget executed, `sqlca->sqlstate` will have no '\\0' byte which makes me anxious when someone prints that as a string. Indeed, I found the code(in src/interfaces/ecpg/ecpglib/misc.c) does that,\n\n\n```c\nfprintf(debugstream, \"[NO_PID]: sqlca: code: %ld, state: %s\\n\",\nsqlca->sqlcode, sqlca->sqlstate);\n```\n\n\nIs there any chance to fix the code?\nHi hackers,I am using gcc version 11.3.0 to compile postgres source code. Gcc complains about the following line:```cstrncpy(sqlca->sqlstate, \"YE001\", sizeof(sqlca->sqlstate));```with error as:misc.c:529:17: error: ‘strncpy’ output truncated before terminating nul copying 5 bytes from a string of the same length [-Werror=stringop-truncation]I find the definition of `sqlca->sqlstate` and it has only 5 bytes. When the statement```cstrncpy(sqlca->sqlstate, \"YE001\", sizeof(sqlca->sqlstate));```get executed, `sqlca->sqlstate` will have no '\\0' byte which makes me anxious when someone prints that as a string. Indeed, I found the code(in src/interfaces/ecpg/ecpglib/misc.c) does that,```c fprintf(debugstream, \"[NO_PID]: sqlca: code: %ld, state: %s\\n\", sqlca->sqlcode, sqlca->sqlstate);```Is there any chance to fix the code?", "msg_date": "Fri, 14 Jun 2024 15:38:16 +0800 (CST)", "msg_from": "\"Winter Loo\" <[email protected]>", "msg_from_op": true, "msg_subject": "may be a buffer overflow problem" }, { "msg_contents": "> On 14 Jun 2024, at 09:38, Winter Loo <[email protected]> wrote:\n\n> I find the definition of `sqlca->sqlstate` and it has only 5 bytes. When the statement\n> \n> ```c\n> strncpy(sqlca->sqlstate, \"YE001\", sizeof(sqlca->sqlstate));\n> ```\n> \n> get executed, `sqlca->sqlstate` will have no '\\0' byte which makes me anxious when someone prints that as a string.\n\nsqlstate is defined as not being unterminated fixed-length, leaving the callers\nto handle termination.\n\n> Indeed, I found the code(in src/interfaces/ecpg/ecpglib/misc.c) does that,\n> \n> fprintf(debugstream, \"[NO_PID]: sqlca: code: %ld, state: %s\\n\",\n> sqlca->sqlcode, sqlca->sqlstate);\n\nThis is indeed buggy and need to take the length into account, as per the\nattached. This only happens when in the undocumented regression test debug\nmode which may be why it's gone unnoticed.\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 14 Jun 2024 09:55:12 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": "On Fri, 2024-06-14 at 15:38 +0800, Winter Loo wrote:\n> I am using gcc version 11.3.0 to compile postgres source code. Gcc complains about the following line:\n> \n> strncpy(sqlca->sqlstate, \"YE001\", sizeof(sqlca->sqlstate));\n> \n> with error as:\n> \n> misc.c:529:17: error: ‘strncpy’ output truncated before terminating nul\n> copying 5 bytes from a string of the same length [-Werror=stringop-truncation]\n> \n> I find the definition of `sqlca->sqlstate` and it has only 5 bytes. 
When the statement\n> \n> strncpy(sqlca->sqlstate, \"YE001\", sizeof(sqlca->sqlstate));\n> \n> get executed, `sqlca->sqlstate` will have no '\\0' byte which makes me anxious\n> when someone prints that as a string. Indeed, I found the code(in src/interfaces/ecpg/ecpglib/misc.c) does that,\n> \n> \t\tfprintf(debugstream, \"[NO_PID]: sqlca: code: %ld, state: %s\\n\",\n> \t\t\t\tsqlca->sqlcode, sqlca->sqlstate);\n> \n> Is there any chance to fix the code?\n\nI agree that that is wrong.\n\nWe could either use memcpy() to avoid the warning and use a format string\nwith %.*s in fprintf(), or we could make the \"sqlstate\" one byte longer.\n\nI think that the second option would be less error-prone.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 14 Jun 2024 10:04:45 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": "On Fri, 2024-06-14 at 09:55 +0200, Daniel Gustafsson wrote:\n> > On 14 Jun 2024, at 09:38, Winter Loo <[email protected]> wrote:\n> \n> > I find the definition of `sqlca->sqlstate` and it has only 5 bytes. When the statement\n> > \n> > ```c\n> > strncpy(sqlca->sqlstate, \"YE001\", sizeof(sqlca->sqlstate));\n> > ```\n> > \n> > get executed, `sqlca->sqlstate` will have no '\\0' byte which makes me anxious when someone prints that as a string.\n> \n> sqlstate is defined as not being unterminated fixed-length, leaving the callers\n> to handle termination.\n> \n> > Indeed, I found the code(in src/interfaces/ecpg/ecpglib/misc.c) does that,\n> > \n> > fprintf(debugstream, \"[NO_PID]: sqlca: code: %ld, state: %s\\n\",\n> > sqlca->sqlcode, sqlca->sqlstate);\n> \n> This is indeed buggy and need to take the length into account, as per the\n> attached. This only happens when in the undocumented regression test debug\n> mode which may be why it's gone unnoticed.\n\nSo you think we should ignore that compiler warning?\nWhat about using memcpy() instead of strncpy()?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 14 Jun 2024 10:06:58 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": "> On 14 Jun 2024, at 10:06, Laurenz Albe <[email protected]> wrote:\n\n> So you think we should ignore that compiler warning?\n\nWe already do using this in meson.build:\n\n # Similarly disable useless truncation warnings from gcc 8+\n 'format-truncation',\n 'stringop-truncation',\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 14 Jun 2024 10:10:42 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": "On Fri, 2024-06-14 at 10:10 +0200, Daniel Gustafsson wrote:\n> > On 14 Jun 2024, at 10:06, Laurenz Albe <[email protected]> wrote:\n> \n> > So you think we should ignore that compiler warning?\n> \n> We already do using this in meson.build:\n> \n>   # Similarly disable useless truncation warnings from gcc 8+\n>   'format-truncation',\n>   'stringop-truncation',\n\nRight; and I see that -Wno-stringop-truncation is also set if you build\nwith \"make\". 
So your patch is good.\n\nI wonder how Winter Loo got to see that warning...\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 14 Jun 2024 10:29:31 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": "> On 14 Jun 2024, at 10:29, Laurenz Albe <[email protected]> wrote:\n> \n> On Fri, 2024-06-14 at 10:10 +0200, Daniel Gustafsson wrote:\n>>> On 14 Jun 2024, at 10:06, Laurenz Albe <[email protected]> wrote:\n>> \n>>> So you think we should ignore that compiler warning?\n>> \n>> We already do using this in meson.build:\n>> \n>> # Similarly disable useless truncation warnings from gcc 8+\n>> 'format-truncation',\n>> 'stringop-truncation',\n> \n> Right; and I see that -Wno-stringop-truncation is also set if you build\n> with \"make\". So your patch is good.\n\nThanks for looking! I will apply it backpatched all the way down as this has\nbeen wrong since 2006.\n\n> I wonder how Winter Loo got to see that warning...\n\nAnd it would be interesting to know if that was the only warning, since error.c\nin ECPG performs the exact same string copy.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 14 Jun 2024 10:39:12 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": ">Thanks for looking! I will apply it backpatched all the way down as this has\n\n>been wrong since 2006.\n>\n>> I wonder how Winter Loo got to see that warning...\n\n>\nI was compiling source code of postgres version 13 and the building flags is changed in my development environment.\n>And it would be interesting to know if that was the only warning, since error.c >in ECPG performs the exact same string copy.\n>\n\n\nYes, that was the only warning. I searched all `sqlstate` words in ecpg directory, there's no other dubious problems.\n\n\n>Thanks for looking! I will apply it backpatched all the way down as this has>been wrong since 2006.\n>\n>> I wonder how Winter Loo got to see that warning...\n>I was compiling source code of postgres version 13 and the building flags is changed in my development environment.>And it would be interesting to know if that was the only warning, since error.c\n>in ECPG performs the exact same string copy.\n>Yes, that was the only warning. I searched all `sqlstate` words in ecpg directory, there's no other dubious problems.", "msg_date": "Fri, 14 Jun 2024 18:48:29 +0800 (CST)", "msg_from": "\"Winter Loo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re:Re: may be a buffer overflow problem" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> This is indeed buggy and need to take the length into account, as per the\n> attached. 
This only happens when in the undocumented regression test debug\n> mode which may be why it's gone unnoticed.\n\nSeeing that this code is exercised thousands of times a day in the\nregression tests and has had a failure rate of exactly zero (and\nyes, the tests do check the output), there must be some reason\nwhy it's okay.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Jun 2024 10:39:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": "I wrote:\n> Seeing that this code is exercised thousands of times a day in the\n> regression tests and has had a failure rate of exactly zero (and\n> yes, the tests do check the output), there must be some reason\n> why it's okay.\n\nAfter looking a little closer, I think the reason why it works in\npractice is that there's always a few bytes of zero padding at the\nend of struct sqlca_t.\n\nI don't see any value in changing individual code sites that are\ndepending on that, because there are surely many more, both in\nour own code and end users' code. What I suggest we ought to do\nis formalize the existence of that zero pad. Perhaps like this:\n\n \tchar\t\tsqlstate[5];\n+\tchar\t\tsqlstatepad; /* nul terminator for sqlstate */\n };\n\nAnother way could be to change\n\n- \tchar\t\tsqlstate[5];\n+ \tchar\t\tsqlstate[6];\n\nbut I fear that could have unforeseen consequences in code that\nis paying attention to sizeof(sqlstate).\n\nEither way there are probably doc adjustments to be made.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Jun 2024 11:18:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": "> On 14 Jun 2024, at 17:18, Tom Lane <[email protected]> wrote:\n> \n> I wrote:\n>> Seeing that this code is exercised thousands of times a day in the\n>> regression tests and has had a failure rate of exactly zero (and\n>> yes, the tests do check the output), there must be some reason\n>> why it's okay.\n> \n> After looking a little closer, I think the reason why it works in\n> practice is that there's always a few bytes of zero padding at the\n> end of struct sqlca_t.\n> \n> I don't see any value in changing individual code sites that are\n> depending on that, because there are surely many more, both in\n> our own code and end users' code. What I suggest we ought to do\n> is formalize the existence of that zero pad. Perhaps like this:\n> \n> char sqlstate[5];\n> + char sqlstatepad; /* nul terminator for sqlstate */\n> };\n> \n> Another way could be to change\n> \n> - char sqlstate[5];\n> + char sqlstate[6];\n> \n> but I fear that could have unforeseen consequences in code that\n> is paying attention to sizeof(sqlstate).\n\nSince sqlca is, according to our docs, present in other database systems we\nshould probably keep it a 5-char array for portability reasons. Adding a\npadding character should be fine though.\n\nThe attached adds padding and adjust the tests and documentation to match. I\nkept the fprintf using %.*s to match other callers. 
I don't know ECPG well\nenough to have strong feelings wrt this being the right fix or not, and the age\nof incorrect assumptions arounf that fprintf hints at this not being a huge\nproblem in reality (should still be fixed of course).\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 17 Jun 2024 23:52:54 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": "Hi,\n\nOn 2024-06-17 23:52:54 +0200, Daniel Gustafsson wrote:\n> Since sqlca is, according to our docs, present in other database systems we\n> should probably keep it a 5-char array for portability reasons. Adding a\n> padding character should be fine though.\n\nHow about, additionally, adding __attribute__((nonstring))? Wrapped in an\nattribute, of course. That'll trigger warning for many unsafe uses, like\nstrlen().\n\nIt doesn't quite detect the problematic case in ecpg_log() though, seems it\ndoesn't understand fprintf() well enough (it does trigger in simple printf()\ncases, because they get reduced to puts(), which it understands).\n\n\nAdding nonstring possibly allow us to re-enable -Wstringop-truncation, it triggers a\nbunch on\n\n../../../../../home/andres/src/postgresql/src/interfaces/ecpg/ecpglib/misc.c: In function ‘ECPGset_var’:\n../../../../../home/andres/src/postgresql/src/interfaces/ecpg/ecpglib/misc.c:575:17: warning: ‘__builtin_strncpy’ output truncated before terminating nul copying 5 bytes from a string of the same length [-Wstringop-truncation]\n 575 | strncpy(sqlca->sqlstate, \"YE001\", sizeof(sqlca->sqlstate));\n\n\nThe only other -Wstringop-truncation warnings are in ecpg tests and at least\nthe first one doesn't look bogus:\n\n../../../../../home/andres/src/postgresql/src/interfaces/ecpg/test/compat_oracle/char_array.pgc: In function 'main':\n../../../../../home/andres/src/postgresql/src/interfaces/ecpg/test/compat_oracle/char_array.pgc:54:5: warning: '__builtin_strncpy' output truncated before terminating nul copying 5 bytes from a string of the same length [-Wstringop-truncation]\n 54 | strncpy(shortstr, ppppp, sizeof shortstr);\n\nWhich seems like a valid complaint, given that shortstr is a char[5], ppppp\nis \"XXXXX\" and thatshortstr is printed:\n printf(\"\\\"%s\\\": \\\"%s\\\" %d\\n\", bigstr, shortstr, shstr_ind);\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 19:35:32 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-06-17 23:52:54 +0200, Daniel Gustafsson wrote:\n>> Since sqlca is, according to our docs, present in other database systems we\n>> should probably keep it a 5-char array for portability reasons. Adding a\n>> padding character should be fine though.\n\n> How about, additionally, adding __attribute__((nonstring))? Wrapped in an\n> attribute, of course. 
That'll trigger warning for many unsafe uses, like\n> strlen().\n\nWhat I was advocating for is that we make it *safe* for strlen, not\nthat we double down on awkward, non-idiomatic, unsafe coding\npractices.\n\nAdmittedly, I'm not sure how we could persuade compilers that\na char[5] followed by a char field is a normal C string ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Jun 2024 22:42:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": "Hi,\n\nOn 2024-06-17 22:42:41 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2024-06-17 23:52:54 +0200, Daniel Gustafsson wrote:\n> >> Since sqlca is, according to our docs, present in other database systems we\n> >> should probably keep it a 5-char array for portability reasons. Adding a\n> >> padding character should be fine though.\n> \n> > How about, additionally, adding __attribute__((nonstring))? Wrapped in an\n> > attribute, of course. That'll trigger warning for many unsafe uses, like\n> > strlen().\n> \n> What I was advocating for is that we make it *safe* for strlen, not\n> that we double down on awkward, non-idiomatic, unsafe coding\n> practices.\n\nGiven that apparently other platforms have it as a no-trailing-zero-byte\n\"string\", I'm not sure that that is that clearly a win. Also, if they just\ncopy the field onto the stack or such, they'll have the same issue again.\n\nAnd then there is this:\n\n> Admittedly, I'm not sure how we could persuade compilers that\n> a char[5] followed by a char field is a normal C string ...\n\nI think the explicit backstop of a zero byte is a good idea, but I don't think\nwe'd just want to rely on it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2024 07:11:03 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" }, { "msg_contents": "On 18.06.24 04:35, Andres Freund wrote:\n> On 2024-06-17 23:52:54 +0200, Daniel Gustafsson wrote:\n>> Since sqlca is, according to our docs, present in other database systems we\n>> should probably keep it a 5-char array for portability reasons. Adding a\n>> padding character should be fine though.\n> \n> How about, additionally, adding __attribute__((nonstring))? Wrapped in an\n> attribute, of course. That'll trigger warning for many unsafe uses, like\n> strlen().\n> \n> It doesn't quite detect the problematic case in ecpg_log() though, seems it\n> doesn't understand fprintf() well enough (it does trigger in simple printf()\n> cases, because they get reduced to puts(), which it understands).\n\nSee also <https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115513>.\n\n> Adding nonstring possibly allow us to re-enable -Wstringop-truncation,\n\nNote that that would only work because we now always use our own \nsnprintf(), which is not covered by that option. I mean, we could still \ndo it, but it's not like the reasons we originally disabled that option \nhave actually gone away.\n\n\n\n", "msg_date": "Tue, 18 Jun 2024 16:22:01 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: may be a buffer overflow problem" } ]
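The pattern at issue in the thread above, reading a fixed-width field that carries no terminating NUL of its own, can be illustrated with a small stand-alone C sketch. This is not the ECPG source; the struct below is a hypothetical stand-in for the sqlca layout under discussion, and it only shows the two safe idioms mentioned in the thread: bounding the read with a printf precision, and copying into a local buffer that supplies the terminator.

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the sqlca layout discussed above:
 * a 5-byte SQLSTATE field with no trailing NUL of its own. */
struct demo_sqlca
{
	char		sqlstate[5];
};

int
main(void)
{
	struct demo_sqlca sqlca;
	char		buf[sizeof(sqlca.sqlstate) + 1];

	/* Fill the field without a terminator, as the thread describes. */
	memcpy(sqlca.sqlstate, "YE001", sizeof(sqlca.sqlstate));

	/* Safe to print: the precision caps how many bytes are read. */
	printf("sqlstate = %.*s\n",
		   (int) sizeof(sqlca.sqlstate), sqlca.sqlstate);

	/* Safe to hand to string functions: copy into a terminated buffer. */
	memcpy(buf, sqlca.sqlstate, sizeof(sqlca.sqlstate));
	buf[sizeof(sqlca.sqlstate)] = '\0';
	printf("strlen after copy = %zu\n", strlen(buf));

	return 0;
}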
[ { "msg_contents": "Hello!\n\nThe src/backend/access/heap/README.tuplock says about HEAP_XMAX_INVALID bit\nthat \"Any tuple with this bit set does not have a valid value stored in XMAX.\"\n\nFound that FreezeMultiXactId() tries to process such an invalid multi xmax\nand may looks for an update xid in the pg_multixact for it.\n\nMaybe not do this work in FreezeMultiXactId() and exit immediately if the\nbit HEAP_XMAX_INVALID was already set?\n\nFor instance, like that:\n\nmaster\n@@ -6215,6 +6215,15 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,\n /* We should only be called in Multis */\n Assert(t_infomask & HEAP_XMAX_IS_MULTI);\n \n+ /* Xmax is already marked as invalid */\n+ if (MultiXactIdIsValid(multi) &&\n+ (t_infomask & HEAP_XMAX_INVALID))\n+ {\n+ *flags |= FRM_INVALIDATE_XMAX;\n+ pagefrz->freeze_required = true;\n+ return InvalidTransactionId;\n+ }\n+\n if (!MultiXactIdIsValid(multi) ||\n HEAP_LOCKED_UPGRADED(t_infomask))\n\n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 14 Jun 2024 10:45:38 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Don't process multi xmax in FreezeMultiXactId() if it is already\n marked as invalid." }, { "msg_contents": "On 14.06.2024 10:45, Anton A. Melnikov wrote:\n\n> The src/backend/access/heap/README.tuplock says about HEAP_XMAX_INVALID bit\n> that \"Any tuple with this bit set does not have a valid value stored in XMAX.\"\n> \n> Found that FreezeMultiXactId() tries to process such an invalid multi xmax\n> and may looks for an update xid in the pg_multixact for it.\n> \n> Maybe not do this work in FreezeMultiXactId() and exit immediately if the\n> bit HEAP_XMAX_INVALID was already set?\n> \n\nSeems it is important to save the check that multi xmax is not behind relminmxid.\nSo saved it and correct README.tuplock accordingly.\n\nWould be glad if someone take a look at the patch attached.\n\n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 18 Jun 2024 09:57:11 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Maybe don't process multi xmax in FreezeMultiXactId() if it is\n already marked as invalid?" }, { "msg_contents": "Hi!\n\nMaybe, I'm too bold, but looks like a kinda bug to me. At least, I don't\nunderstand why we do not check the HEAP_XMAX_INVALID flag.\nMy guess is nobody noticed, that MultiXactIdIsValid call does not check the\nmentioned flag in the \"first\" condition, but it's all my speculation.\nDoes anyone know if there are reasons to deliberately ignore the HEAP_XMAX\nINVALID flag? Or this is just an unfortunate oversight.\n\nPFA, my approach on this issue.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Tue, 18 Jun 2024 17:29:13 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maybe don't process multi xmax in FreezeMultiXactId() if it is\n already marked as invalid?" }, { "msg_contents": "On Tue, Jun 18, 2024 at 10:29 AM Maxim Orlov <[email protected]> wrote:\n> Maybe, I'm too bold, but looks like a kinda bug to me. 
At least, I don't understand why we do not check the HEAP_XMAX_INVALID flag.\n> My guess is nobody noticed, that MultiXactIdIsValid call does not check the mentioned flag in the \"first\" condition, but it's all my speculation.\n\nA related code path was changed in commit 02d647bbf0. That change made\nthe similar xmax handling that covers XIDs (not MXIDs) *stop* doing\nwhat you're now proposing to do in the Multi path.\n\nWhy do you think this is a bug?\n\n> Does anyone know if there are reasons to deliberately ignore the HEAP_XMAX INVALID flag? Or this is just an unfortunate oversight.\n\nHEAP_XMAX_INVALID is just a hint.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 Jun 2024 11:47:58 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maybe don't process multi xmax in FreezeMultiXactId() if it is\n already marked as invalid?" }, { "msg_contents": "18.06.2024 18:47, Peter Geoghegan пишет:\n> On Tue, Jun 18, 2024 at 10:29 AM Maxim Orlov <[email protected]> wrote:\n>> Maybe, I'm too bold, but looks like a kinda bug to me. At least, I don't understand why we do not check the HEAP_XMAX_INVALID flag.\n>> My guess is nobody noticed, that MultiXactIdIsValid call does not check the mentioned flag in the \"first\" condition, but it's all my speculation.\n> \n> A related code path was changed in commit 02d647bbf0. That change made\n> the similar xmax handling that covers XIDs (not MXIDs) *stop* doing\n> what you're now proposing to do in the Multi path.\n\nI don't agree commit 02d647bbf0 is similar to suggested change.\nCommit 02d647bbf0 fixed decision to set\n\tfreeze_xmax = false;\n\txmax_already_frozen = true;\n\nSuggested change is for decision to set\n\t*flags |= FRM_INVALIDATE_XMAX;\n\tpagefrz->freeze_required = true;\nWhich leads to\n\tfreeze_xmax = true;\n\nSo it is quite different code paths, and one could not be used\nto decline or justify other.\n\nMore over, certainly test on HEAP_XMAX_INVALID could be used\nthere in heap_prepare_freeze_tuple to set\n\tfreeze_xmax = true;\nWhy didn't you do it?\n\n> \n> Why do you think this is a bug?\n\nIt is not a bug per se.\nBut:\n- it is missed opportunity for optimization,\n- it is inconsistency in data handling.\nInconsistency leads to bugs when people attempt to modify code.\n\nYes, we changed completely different place mistakenly relying on \nconsistent reaction on this \"hint\", and that leads to bug in our\npatch.\n\n>> Does anyone know if there are reasons to deliberately ignore the HEAP_XMAX INVALID flag? Or this is just an unfortunate oversight.\n> \n> HEAP_XMAX_INVALID is just a hint.\n> \n\nWTF is \"just a hint\"?\nI thought, hint is \"yep, you can ignore it. But we already did some job \nand stored its result as this hint. And if you don't ignore this hint, \nthen you can skip doing the job we did already\".\n\nSo every time we ignore hint, we miss opportunity for optimization.\nWhy the hell we shouldn't optimize when we safely can?\n\nIf we couldn't rely on hint, then hint is completely meaningless.\n\n--------\n\nhave a nice day\n\nYura Sokolov\n\n\n", "msg_date": "Wed, 19 Jun 2024 20:00:28 +0300", "msg_from": "Yura Sokolov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maybe don't process multi xmax in FreezeMultiXactId() if it is\n already marked as invalid?" }, { "msg_contents": "On 2024-Jun-14, Anton A. 
Melnikov wrote:\n\n> Hello!\n> \n> The src/backend/access/heap/README.tuplock says about HEAP_XMAX_INVALID bit\n> that \"Any tuple with this bit set does not have a valid value stored in XMAX.\"\n> \n> Found that FreezeMultiXactId() tries to process such an invalid multi xmax\n> and may looks for an update xid in the pg_multixact for it.\n> \n> Maybe not do this work in FreezeMultiXactId() and exit immediately if the\n> bit HEAP_XMAX_INVALID was already set?\n> \n> For instance, like that:\n> \n> master\n> @@ -6215,6 +6215,15 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,\n> /* We should only be called in Multis */\n> Assert(t_infomask & HEAP_XMAX_IS_MULTI);\n> + /* Xmax is already marked as invalid */\n> + if (MultiXactIdIsValid(multi) &&\n> + (t_infomask & HEAP_XMAX_INVALID))\n\nHmm, but why are we calling FreezeMultiXactId at all if the\nHEAP_XMAX_INVALID bit is set? We shouldn't do that. I think the fix\nshould appear in heap_prepare_freeze_tuple() to skip work completely if\nHEAP_XMAX_INVALID is set. Then in FreezeMultiXactId you could simply\nAssert() that the given tuple does not have HEAP_XMAX_INVALID set.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"It takes less than 2 seconds to get to 78% complete; that's a good sign.\nA few seconds later it's at 90%, but it seems to have stuck there. Did\nsomebody make percentages logarithmic while I wasn't looking?\"\n http://smylers.hates-software.com/2005/09/08/1995c749.html\n\n\n", "msg_date": "Wed, 19 Jun 2024 19:21:25 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't process multi xmax in FreezeMultiXactId() if it is already\n marked as invalid." }, { "msg_contents": "On Wed, Jun 19, 2024 at 1:00 PM Yura Sokolov <[email protected]> wrote:\n> So it is quite different code paths, and one could not be used\n> to decline or justify other.\n\nThe point is that we shouldn't need to rely on what is formally a\nhint. It might be useful to use the hint to decide whether or not\nfreezing now actually makes sense, but that isn't the same thing as\nrelying on the hint (we could make the same decision for a number of\ndifferent reasons).\n\n> More over, certainly test on HEAP_XMAX_INVALID could be used\n> there in heap_prepare_freeze_tuple to set\n> freeze_xmax = true;\n> Why didn't you do it?\n\nYou might as well ask me why I didn't do any number of other things. I\nactually wrote a patch that made FreezeMultiXactId() more aggressive\nabout this sort of thing (setting HEAP_XMAX_INVALID) that targeted\nPostgres 16. That worked by noticing that every member XID was at\nleast before OldestXmin, even when the MXID itself was >= OldestMxact.\nThat didn't go anywhere, even though it was a perfectly valid\noptimization.\n\nIt's true that FreezeMultiXactId() optimizations like this are\npossible. So what?\n\n> It is not a bug per se.\n> But:\n> - it is missed opportunity for optimization,\n> - it is inconsistency in data handling.\n> Inconsistency leads to bugs when people attempt to modify code.\n\nIn what sense is there an inconsistency?\n\nI think that you must mean that we need to do the same thing for the\n!MultiXactIdIsValid() case and the already-HEAP_XMAX_INVALID case. But\nI don't think that that's any meaningful kind of inconsistency. It's\n*also* what we do with plain XIDs. 
If anything, the problem is that\nwe're *too* consistent (ideally we *would* treat MultiXacts\ndifferently).\n\n> Yes, we changed completely different place mistakenly relying on\n> consistent reaction on this \"hint\", and that leads to bug in our\n> patch.\n\nOooops!\n\n> > HEAP_XMAX_INVALID is just a hint.\n> >\n>\n> WTF is \"just a hint\"?\n> I thought, hint is \"yep, you can ignore it. But we already did some job\n> and stored its result as this hint. And if you don't ignore this hint,\n> then you can skip doing the job we did already\".\n>\n> So every time we ignore hint, we miss opportunity for optimization.\n> Why the hell we shouldn't optimize when we safely can?\n\nThis is the first email that anybody has used the word \"optimization\".\nWe've been discussing this as if it was a bug. You introduced the\ntopic of optimization 3 seconds ago. Remember?\n\n> If we couldn't rely on hint, then hint is completely meaningless.\n\nWe don't actually trust the hints in any way. We always run checks\ninside heap_pre_freeze_checks(), rather than assuming that the hints\nare accurate.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 19 Jun 2024 13:30:44 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maybe don't process multi xmax in FreezeMultiXactId() if it is\n already marked as invalid?" }, { "msg_contents": "On 2024-Jun-19, Peter Geoghegan wrote:\n\n> On Wed, Jun 19, 2024 at 1:00 PM Yura Sokolov <[email protected]> wrote:\n> > So it is quite different code paths, and one could not be used\n> > to decline or justify other.\n> \n> The point is that we shouldn't need to rely on what is formally a\n> hint. It might be useful to use the hint to decide whether or not\n> freezing now actually makes sense, but that isn't the same thing as\n> relying on the hint (we could make the same decision for a number of\n> different reasons).\n\nFWIW I don't think HEAP_XMAX_INVALID as purely a hint.\nHEAP_XMAX_COMMITTED is a hint, for sure, as is HEAP_XMIN_COMMITTED on\nits own; but as far as I recall, the INVALID flags must persist once\nset. Consider the HEAP_XMIN_COMMITTED+ HEAP_XMIN_INVALID combination,\nwhich we use to represent HEAP_XMIN_FROZEN; if that didn't persist, we'd\nhave a pretty serious data corruption issue, because we don't reset the\nXmin field when freezing the tuple. So if we fail to keep the flag, the\ntuple is no longer frozen. (My point here is that some infomask bits\nare hints, but not all them are only hints.) So XMAX_INVALID gives\ncertainty that the Xmax value must not be read. That is to say, I think\nthere are (or there were) situations in which we set the bit but don't\nbother to reset the actual Xmax field. We should never try to read the\nXmax flag if the bit is set.\n\nI think the problem being investigated in this thread is that\nHEAP_XMAX_IS_MULTI is being treated as persistent, that is, it can only\nbe set if the xmax is not invalid, but apparently that's not always the\ncase (or we wouldn't be having this conversation).\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 19 Jun 2024 19:39:42 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maybe don't process multi xmax in FreezeMultiXactId() if it is\n already marked as invalid?" 
}, { "msg_contents": "On Wed, Jun 19, 2024 at 1:39 PM Alvaro Herrera <[email protected]> wrote:\n> FWIW I don't think HEAP_XMAX_INVALID as purely a hint.\n> HEAP_XMAX_COMMITTED is a hint, for sure, as is HEAP_XMIN_COMMITTED on\n> its own; but as far as I recall, the INVALID flags must persist once\n> set.\n\nSeems we disagree on some pretty fundamental things in this area, then.\n\n> Consider the HEAP_XMIN_COMMITTED+ HEAP_XMIN_INVALID combination,\n> which we use to represent HEAP_XMIN_FROZEN; if that didn't persist, we'd\n> have a pretty serious data corruption issue, because we don't reset the\n> Xmin field when freezing the tuple.\n\nThat's definitely true, but that's a case where the setting happens during a\nWAL-logged operation. It doesn't involve HEAP_XMAX_INVALID at all.\n\nFWIW I also don't think that even HEAP_XMIN_INVALID should be\nconsidered anything more than a hint when it appears on its own. Only\nHEAP_XMIN_FROZEN (by which I mean HEAP_XMIN_COMMITTED +\nHEAP_XMIN_INVALID) are non-hint xact status infomask bits.\n\nIt's slightly annoying that we have this HEAP_XMIN_FROZEN case where a\nhint bit isn't just a hint, but that's just a historical detail. And I\nsuppose that the same thing can be said of HEAP_XMAX_IS_MULTI itself\n(we have no other way of distinguishing a Multi from an Xid, so\nclearly that also has to be treated as a persistent non-hint by everybody).\n\n> So if we fail to keep the flag, the\n> tuple is no longer frozen. (My point here is that some infomask bits\n> are hints, but not all them are only hints.) So XMAX_INVALID gives\n> certainty that the Xmax value must not be read.\n\n\"Certainty\" seems far too strong here.\n\n> That is to say, I think\n> there are (or there were) situations in which we set the bit but don't\n> bother to reset the actual Xmax field.\n\nI'm sure that that's true, but that doesn't seem at all equivalent to\nwhat you said about XMAX_INVALID \"giving certainty\" about the tuple.\n\n> We should never try to read the\n> Xmax flag if the bit is set.\n\nBut that's exactly what FreezeMultiXactId does. It doesn't pay\nattention to XMAX_INVALID (only to !MultiXactIdIsValid()).\n\nYura is apparently arguing that FreezeMultiXactId should notice\nXMAX_INVALID and then tell its caller to \"FRM_INVALIDATE_XMAX\". That\ndoes seem like a valid optimization. But if we were to do that then\nwe'd likely do it in a way that still resulted in ordinary processing\nof the multi (it would not work by immediately setting\n\"FRM_INVALIDATE_XMAX\" in the !MultiXactIdIsValid() path). That\napproach to the optimization makes the most sense, because we'd likely\nwant to preserve the existing FreezeMultiXactId sanity checks.\n\n> I think the problem being investigated in this thread is that\n> HEAP_XMAX_IS_MULTI is being treated as persistent, that is, it can only\n> be set if the xmax is not invalid, but apparently that's not always the\n> case (or we wouldn't be having this conversation).\n\nA multixact/HEAP_XMAX_IS_MULTI xmax doesn't start out as invalid.\nTreating HEAP_XMAX_IS_MULTI as persistent doesn't mean that we should\ntreat XMAX_INVALID as consistent. In particular, XMAX_INVALID isn't\nequivalent to !MultiXactIdIsValid() (you can make a similar statement\nabout xmax XIDs).\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 19 Jun 2024 14:06:59 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Maybe don't process multi xmax in FreezeMultiXactId() if it is\n already marked as invalid?" 
}, { "msg_contents": "\nOn 19.06.2024 21:06, Peter Geoghegan wrote:\n> On Wed, Jun 19, 2024 at 1:39 PM Alvaro Herrera <[email protected]> wrote:\n>> FWIW I don't think HEAP_XMAX_INVALID as purely a hint.\n>> HEAP_XMAX_COMMITTED is a hint, for sure, as is HEAP_XMIN_COMMITTED on\n>> its own; but as far as I recall, the INVALID flags must persist once\n>> set.\n> \n> Seems we disagree on some pretty fundamental things in this area, then.\n\nTo resolve this situation seems it is necessary to agree on what\nis a \"hint bit\" exactly means and how to use it.\n\nFor example, in this way:\n\n1) Definition. The \"hint bit\" if it is set represents presence of the property of some object (e.g. tuple).\nThe value of a hint bit can be derived again at any time. So it is acceptable for a hint\nbit to be lost during some operations.\n\n2) Purpose. (It has already been defined by Yura Sokolov in one of the previous letters)\nSome work (e.g CPU + mem usage) must be done to check the property of some object.\nChecking the hint bit, if it is set, saves this work.\nSo the purpose of the hint bit is optimization.\n\n3) Use. By default code that needs to check some property of the object\nmust firstly check the corresponding hint bit. If hint is set, determine that the property\nis present. If hint is not set, do the work to check this property of the object and set\nhint bit if that property is present.\nAlso, non-standard behavior is allowed, when the hint bit is ignored and the work on property\ncheck will be performed unconditionally for some reasons. In this case the code must contain\na comment with an explanation of this reason.\n\nAnd maybe for clarity, explicitly say that some bit is a hint right in its definition?\nFor instance, use HEAP_XMIN_COMMITTED_HINT instead of HEAP_XMIN_COMMITTED.\n\n\nRemarks and concerns are gratefully welcome.\n \n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 29 Jul 2024 05:48:57 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Maybe don't process multi xmax in FreezeMultiXactId() if it is\n already marked as invalid?" } ]
[ { "msg_contents": "I was performing tests around multixid wraparound, when I ran into this \nassertion:\n\n> TRAP: failed Assert(\"CritSectionCount == 0 || (context)->allowInCritSection\"), File: \"../src/backend/utils/mmgr/mcxt.c\", Line: 1353, PID: 920981\n> postgres: autovacuum worker template0(ExceptionalCondition+0x6e)[0x560a501e866e]\n> postgres: autovacuum worker template0(+0x5dce3d)[0x560a50217e3d]\n> postgres: autovacuum worker template0(ForwardSyncRequest+0x8e)[0x560a4ffec95e]\n> postgres: autovacuum worker template0(RegisterSyncRequest+0x2b)[0x560a50091eeb]\n> postgres: autovacuum worker template0(+0x187b0a)[0x560a4fdc2b0a]\n> postgres: autovacuum worker template0(SlruDeleteSegment+0x101)[0x560a4fdc2ab1]\n> postgres: autovacuum worker template0(TruncateMultiXact+0x2fb)[0x560a4fdbde1b]\n> postgres: autovacuum worker template0(vac_update_datfrozenxid+0x4b3)[0x560a4febd2f3]\n> postgres: autovacuum worker template0(+0x3adf66)[0x560a4ffe8f66]\n> postgres: autovacuum worker template0(AutoVacWorkerMain+0x3ed)[0x560a4ffe7c2d]\n> postgres: autovacuum worker template0(+0x3b1ead)[0x560a4ffecead]\n> postgres: autovacuum worker template0(+0x3b620e)[0x560a4fff120e]\n> postgres: autovacuum worker template0(+0x3b3fbb)[0x560a4ffeefbb]\n> postgres: autovacuum worker template0(+0x2f724e)[0x560a4ff3224e]\n> /lib/x86_64-linux-gnu/libc.so.6(+0x27c8a)[0x7f62cc642c8a]\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f62cc642d45]\n> postgres: autovacuum worker template0(_start+0x21)[0x560a4fd16f31]\n> 2024-06-14 13:11:02.025 EEST [920971] LOG: server process (PID 920981) was terminated by signal 6: Aborted\n> 2024-06-14 13:11:02.025 EEST [920971] DETAIL: Failed process was running: autovacuum: VACUUM pg_toast.pg_toast_13407 (to prevent wraparound)\n\nThe attached python script reproduces this pretty reliably. It's a \nreduced version of a larger test script I was working on, it probably \ncould be simplified further for this particular issue.\n\nLooking at the code, it's pretty clear how it happens:\n\n1. TruncateMultiXact does START_CRIT_SECTION();\n\n2. In the critical section, it calls PerformMembersTruncation() -> \nSlruDeleteSegment() -> SlruInternalDeleteSegment() -> \nRegisterSyncRequest() -> ForwardSyncRequest()\n\n3. If the fsync request queue is full, it calls \nCompactCheckpointerRequestQueue(), which calls palloc0. Pallocs are not \nallowed in a critical section.\n\nA straightforward fix is to add a check to \nCompactCheckpointerRequestQueue() to bail out without compacting, if \nit's called in a critical section. That would cover any other cases like \nthis, where RegisterSyncRequest() is called in a critical section. I \nhaven't tried searching if any more cases like this exist.\n\nBut wait there is more!\n\nAfter applying that fix in CompactCheckpointerRequestQueue(), the test \nscript often gets stuck. There's a deadlock between the checkpointer, \nand the autovacuum backend trimming the SLRUs:\n\n1. TruncateMultiXact does this:\n\n \tMyProc->delayChkptFlags |= DELAY_CHKPT_START;\n\n2. It then makes that call to PerformMembersTruncation() and \nRegisterSyncRequest(). If it cannot queue the request, it sleeps a \nlittle and retries. 
But the checkpointer is stuck waiting for the \nautovacuum backend, because of delayChkptFlags, and will never clear the \nqueue.\n\nTo fix, I propose to add AbsorbSyncRequests() calls to the wait-loops in \nCreateCheckPoint().\n\n\nAttached patch fixes both of those issues.\n\nI can't help thinking that TruncateMultiXact() should perhaps not have \nsuch a long critical section. TruncateCLOG() doesn't do that. But it was \nadded for good reasons in commit 4f627f897367, and this fix seems \nappropriate for the stable branches anyway, even if we come up with \nsomething better for master.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Fri, 14 Jun 2024 14:37:35 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "TruncateMultiXact() bugs" }, { "msg_contents": "On 14/06/2024 14:37, Heikki Linnakangas wrote:\n> I was performing tests around multixid wraparound, when I ran into this\n> assertion:\n> \n>> TRAP: failed Assert(\"CritSectionCount == 0 || (context)->allowInCritSection\"), File: \"../src/backend/utils/mmgr/mcxt.c\", Line: 1353, PID: 920981\n>> postgres: autovacuum worker template0(ExceptionalCondition+0x6e)[0x560a501e866e]\n>> postgres: autovacuum worker template0(+0x5dce3d)[0x560a50217e3d]\n>> postgres: autovacuum worker template0(ForwardSyncRequest+0x8e)[0x560a4ffec95e]\n>> postgres: autovacuum worker template0(RegisterSyncRequest+0x2b)[0x560a50091eeb]\n>> postgres: autovacuum worker template0(+0x187b0a)[0x560a4fdc2b0a]\n>> postgres: autovacuum worker template0(SlruDeleteSegment+0x101)[0x560a4fdc2ab1]\n>> postgres: autovacuum worker template0(TruncateMultiXact+0x2fb)[0x560a4fdbde1b]\n>> postgres: autovacuum worker template0(vac_update_datfrozenxid+0x4b3)[0x560a4febd2f3]\n>> postgres: autovacuum worker template0(+0x3adf66)[0x560a4ffe8f66]\n>> postgres: autovacuum worker template0(AutoVacWorkerMain+0x3ed)[0x560a4ffe7c2d]\n>> postgres: autovacuum worker template0(+0x3b1ead)[0x560a4ffecead]\n>> postgres: autovacuum worker template0(+0x3b620e)[0x560a4fff120e]\n>> postgres: autovacuum worker template0(+0x3b3fbb)[0x560a4ffeefbb]\n>> postgres: autovacuum worker template0(+0x2f724e)[0x560a4ff3224e]\n>> /lib/x86_64-linux-gnu/libc.so.6(+0x27c8a)[0x7f62cc642c8a]\n>> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f62cc642d45]\n>> postgres: autovacuum worker template0(_start+0x21)[0x560a4fd16f31]\n>> 2024-06-14 13:11:02.025 EEST [920971] LOG: server process (PID 920981) was terminated by signal 6: Aborted\n>> 2024-06-14 13:11:02.025 EEST [920971] DETAIL: Failed process was running: autovacuum: VACUUM pg_toast.pg_toast_13407 (to prevent wraparound)\n> \n> The attached python script reproduces this pretty reliably. It's a\n> reduced version of a larger test script I was working on, it probably\n> could be simplified further for this particular issue.\n> \n> Looking at the code, it's pretty clear how it happens:\n> \n> 1. TruncateMultiXact does START_CRIT_SECTION();\n> \n> 2. In the critical section, it calls PerformMembersTruncation() ->\n> SlruDeleteSegment() -> SlruInternalDeleteSegment() ->\n> RegisterSyncRequest() -> ForwardSyncRequest()\n> \n> 3. If the fsync request queue is full, it calls\n> CompactCheckpointerRequestQueue(), which calls palloc0. Pallocs are not\n> allowed in a critical section.\n> \n> A straightforward fix is to add a check to\n> CompactCheckpointerRequestQueue() to bail out without compacting, if\n> it's called in a critical section. 
That would cover any other cases like\n> this, where RegisterSyncRequest() is called in a critical section. I\n> haven't tried searching if any more cases like this exist.\n> \n> But wait there is more!\n> \n> After applying that fix in CompactCheckpointerRequestQueue(), the test\n> script often gets stuck. There's a deadlock between the checkpointer,\n> and the autovacuum backend trimming the SLRUs:\n> \n> 1. TruncateMultiXact does this:\n> \n> \tMyProc->delayChkptFlags |= DELAY_CHKPT_START;\n> \n> 2. It then makes that call to PerformMembersTruncation() and\n> RegisterSyncRequest(). If it cannot queue the request, it sleeps a\n> little and retries. But the checkpointer is stuck waiting for the\n> autovacuum backend, because of delayChkptFlags, and will never clear the\n> queue.\n> \n> To fix, I propose to add AbsorbSyncRequests() calls to the wait-loops in\n> CreateCheckPoint().\n> \n> \n> Attached patch fixes both of those issues.\n\nCommitted and backpatched down to v14. This particular scenario cannot \nhappen in older versions because the RegisterFsync() on SLRU truncation \nwas added in v14. In principle, I think older versions might have \nsimilar issues, but given that when assertions are disabled this is only \na problem if you happen to run out of memory in the critical section, it \ndoesn't seem worth backpatching further unless someone reports a \nconcrete case.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 26 Jun 2024 23:55:40 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TruncateMultiXact() bugs" } ]
[ { "msg_contents": "Hello,\n\nI met an assertion failure, and identified the root of the problem, but no\nidea how to fix it.\n\nThe location of the problematic Assert() is at cost_memoize_rescan() to\ncheck 'hit_ratio' is between 0.0 and 1.0.\nThe 'calls' is provided by the caller, and 'ndistinct' is the result\nof estimate_num_groups().\n\n#4 0x000000000084d583 in cost_memoize_rescan (root=0x2e95748,\nmpath=0x30aece8, rescan_startup_cost=0x7ffd72141260,\nrescan_total_cost=0x7ffd72141258) at costsize.c:2564\n/home/kaigai/source/pgsql-16/src/backend/optimizer/path/costsize.c:2564:83932:beg:0x84d583\n(gdb) l\n2559 * how many of those scans we expect to get a cache hit.\n2560 */\n2561 hit_ratio = ((calls - ndistinct) / calls) *\n2562 (est_cache_entries / Max(ndistinct,\nest_cache_entries));\n2563\n2564 Assert(hit_ratio >= 0 && hit_ratio <= 1.0);\n2565\n2566 /*\n2567 * Set the total_cost accounting for the expected cache hit\nratio. We\n2568 * also add on a cpu_operator_cost to account for a cache\nlookup. This\n\n(gdb) bt\n#0 0x00007f3a39aa154c in __pthread_kill_implementation () from\n/lib64/libc.so.6\n#1 0x00007f3a39a54d06 in raise () from /lib64/libc.so.6\n#2 0x00007f3a39a287f3 in abort () from /lib64/libc.so.6\n#3 0x0000000000b6ff2c in ExceptionalCondition (conditionName=0xd28c28\n\"hit_ratio >= 0 && hit_ratio <= 1.0\", fileName=0xd289a4 \"costsize.c\",\nlineNumber=2564) at assert.c:66\n#4 0x000000000084d583 in cost_memoize_rescan (root=0x2e95748,\nmpath=0x30aece8, rescan_startup_cost=0x7ffd72141260,\nrescan_total_cost=0x7ffd72141258) at costsize.c:2564\n#5 0x0000000000850831 in cost_rescan (root=0x2e95748, path=0x30aece8,\nrescan_startup_cost=0x7ffd72141260, rescan_total_cost=0x7ffd72141258) at\ncostsize.c:4350\n#6 0x000000000084e333 in initial_cost_nestloop (root=0x2e95748,\nworkspace=0x7ffd721412d0, jointype=JOIN_INNER, outer_path=0x3090058,\ninner_path=0x30aece8, extra=0x7ffd72141500) at costsize.c:2978\n#7 0x0000000000860f58 in try_partial_nestloop_path (root=0x2e95748,\njoinrel=0x30ae158, outer_path=0x3090058, inner_path=0x30aece8,\npathkeys=0x0, jointype=JOIN_INNER, extra=0x7ffd72141500) at joinpath.c:887\n#8 0x0000000000862a64 in consider_parallel_nestloop (root=0x2e95748,\njoinrel=0x30ae158, outerrel=0x308f428, innerrel=0x2eac390,\njointype=JOIN_INNER, extra=0x7ffd72141500) at joinpath.c:2083\n#9 0x000000000086273d in match_unsorted_outer (root=0x2e95748,\njoinrel=0x30ae158, outerrel=0x308f428, innerrel=0x2eac390,\njointype=JOIN_INNER, extra=0x7ffd72141500) at joinpath.c:1940\n#10 0x00000000008600f0 in add_paths_to_joinrel (root=0x2e95748,\njoinrel=0x30ae158, outerrel=0x308f428, innerrel=0x2eac390,\njointype=JOIN_INNER, sjinfo=0x7ffd721415f0, restrictlist=0x30ae5a8) at\njoinpath.c:296\n#11 0x0000000000864d10 in populate_joinrel_with_paths (root=0x2e95748,\nrel1=0x308f428, rel2=0x2eac390, joinrel=0x30ae158, sjinfo=0x7ffd721415f0,\nrestrictlist=0x30ae5a8) at joinrels.c:925\n#12 0x00000000008649e1 in make_join_rel (root=0x2e95748, rel1=0x308f428,\nrel2=0x2eac390) at joinrels.c:776\n#13 0x0000000000863ec1 in make_rels_by_clause_joins (root=0x2e95748,\nold_rel=0x308f428, other_rels_list=0x3088ed0, other_rels=0x3088ee8) at\njoinrels.c:312\n#14 0x000000000086399a in join_search_one_level (root=0x2e95748, level=3)\nat joinrels.c:123\n#15 0x00000000008463f8 in standard_join_search (root=0x2e95748,\nlevels_needed=4, initial_rels=0x3088ed0) at allpaths.c:3454\n#16 0x000000000084636d in make_rel_from_joinlist (root=0x2e95748,\njoinlist=0x306b4f8) at allpaths.c:3385\n#17 0x0000000000841548 
in make_one_rel (root=0x2e95748, joinlist=0x306b4f8)\nat allpaths.c:229\n#18 0x00000000008806a9 in query_planner (root=0x2e95748,\nqp_callback=0x886bcb <standard_qp_callback>, qp_extra=0x7ffd72141960) at\nplanmain.c:278\n#19 0x0000000000882f5f in grouping_planner (root=0x2e95748,\ntuple_fraction=0) at planner.c:1495\n#20 0x000000000088268c in subquery_planner (glob=0x2e95348,\nparse=0x2e90e98, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at\nplanner.c:1064\n#21 0x0000000000880cdb in standard_planner (parse=0x2e90e98,\n query_string=0x2e3a0e8 \"explain\\nselect sum(lo_revenue), d_year,\np_brand1\\n from lineorder, date1, part, supplier\\n where lo_orderdate =\nd_datekey\\n and lo_partkey = p_partkey\\n and lo_suppkey = s_suppkey\\n\n and p_brand1\"..., cursorOptions=2048,\n boundParams=0x0) at planner.c:413\n\nI tracked the behavior of estimate_num_groups() using gdb line-by-line to\nobserve how 'input_rows' is changed\nand how it affects the result value.\nAccording to the call trace, the problematic estimate_num_groups()\ninvocation is called with \"input_rows=3251872.916666667\",\nthen it was rounded up to 3251873 by the clamp_row_est(). Eventually, its\nresult value was calculated larger than the upper\nlimit, so the return value was suppressed by 3251873, but it is a tiny bit\nlarger than the input value!\n\nBack to the cost_memoize_rescan().\nThe hit_ratio is calculated as follows:\n\n hit_ratio = ((calls - ndistinct) / calls) *\n (est_cache_entries / Max(ndistinct, est_cache_entries));\n\nThe \"calls\" is the \"input_rows\" above, and \"ndistinct\" is the return value\nof the estimate_num_groups().\nWhat happen if \"ndistinct\" is a tiny bit larger than \"calls\"?\nIn the results, the \"hit_ratio\" is calculated as a very small negative\nvalue, then it was terminated by Assert().\n\nHow do we fix the logic? Please some ideas.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <[email protected]>\n\nHello,I met an assertion failure, and identified the root of the problem, but no idea how to fix it.The location of the problematic Assert() is at cost_memoize_rescan() to check 'hit_ratio' is between 0.0 and 1.0.The 'calls' is provided by the caller, and 'ndistinct' is the result of estimate_num_groups().#4  0x000000000084d583 in cost_memoize_rescan (root=0x2e95748, mpath=0x30aece8, rescan_startup_cost=0x7ffd72141260, rescan_total_cost=0x7ffd72141258) at costsize.c:2564/home/kaigai/source/pgsql-16/src/backend/optimizer/path/costsize.c:2564:83932:beg:0x84d583(gdb) l2559             * how many of those scans we expect to get a cache hit.2560             */2561            hit_ratio = ((calls - ndistinct) / calls) *2562                    (est_cache_entries / Max(ndistinct, est_cache_entries));25632564            Assert(hit_ratio >= 0 && hit_ratio <= 1.0);25652566            /*2567             * Set the total_cost accounting for the expected cache hit ratio.  We2568             * also add on a cpu_operator_cost to account for a cache lookup. 
This(gdb) bt#0  0x00007f3a39aa154c in __pthread_kill_implementation () from /lib64/libc.so.6#1  0x00007f3a39a54d06 in raise () from /lib64/libc.so.6#2  0x00007f3a39a287f3 in abort () from /lib64/libc.so.6#3  0x0000000000b6ff2c in ExceptionalCondition (conditionName=0xd28c28 \"hit_ratio >= 0 && hit_ratio <= 1.0\", fileName=0xd289a4 \"costsize.c\", lineNumber=2564) at assert.c:66#4  0x000000000084d583 in cost_memoize_rescan (root=0x2e95748, mpath=0x30aece8, rescan_startup_cost=0x7ffd72141260, rescan_total_cost=0x7ffd72141258) at costsize.c:2564#5  0x0000000000850831 in cost_rescan (root=0x2e95748, path=0x30aece8, rescan_startup_cost=0x7ffd72141260, rescan_total_cost=0x7ffd72141258) at costsize.c:4350#6  0x000000000084e333 in initial_cost_nestloop (root=0x2e95748, workspace=0x7ffd721412d0, jointype=JOIN_INNER, outer_path=0x3090058, inner_path=0x30aece8, extra=0x7ffd72141500) at costsize.c:2978#7  0x0000000000860f58 in try_partial_nestloop_path (root=0x2e95748, joinrel=0x30ae158, outer_path=0x3090058, inner_path=0x30aece8, pathkeys=0x0, jointype=JOIN_INNER, extra=0x7ffd72141500) at joinpath.c:887#8  0x0000000000862a64 in consider_parallel_nestloop (root=0x2e95748, joinrel=0x30ae158, outerrel=0x308f428, innerrel=0x2eac390, jointype=JOIN_INNER, extra=0x7ffd72141500) at joinpath.c:2083#9  0x000000000086273d in match_unsorted_outer (root=0x2e95748, joinrel=0x30ae158, outerrel=0x308f428, innerrel=0x2eac390, jointype=JOIN_INNER, extra=0x7ffd72141500) at joinpath.c:1940#10 0x00000000008600f0 in add_paths_to_joinrel (root=0x2e95748, joinrel=0x30ae158, outerrel=0x308f428, innerrel=0x2eac390, jointype=JOIN_INNER, sjinfo=0x7ffd721415f0, restrictlist=0x30ae5a8) at joinpath.c:296#11 0x0000000000864d10 in populate_joinrel_with_paths (root=0x2e95748, rel1=0x308f428, rel2=0x2eac390, joinrel=0x30ae158, sjinfo=0x7ffd721415f0, restrictlist=0x30ae5a8) at joinrels.c:925#12 0x00000000008649e1 in make_join_rel (root=0x2e95748, rel1=0x308f428, rel2=0x2eac390) at joinrels.c:776#13 0x0000000000863ec1 in make_rels_by_clause_joins (root=0x2e95748, old_rel=0x308f428, other_rels_list=0x3088ed0, other_rels=0x3088ee8) at joinrels.c:312#14 0x000000000086399a in join_search_one_level (root=0x2e95748, level=3) at joinrels.c:123#15 0x00000000008463f8 in standard_join_search (root=0x2e95748, levels_needed=4, initial_rels=0x3088ed0) at allpaths.c:3454#16 0x000000000084636d in make_rel_from_joinlist (root=0x2e95748, joinlist=0x306b4f8) at allpaths.c:3385#17 0x0000000000841548 in make_one_rel (root=0x2e95748, joinlist=0x306b4f8) at allpaths.c:229#18 0x00000000008806a9 in query_planner (root=0x2e95748, qp_callback=0x886bcb <standard_qp_callback>, qp_extra=0x7ffd72141960) at planmain.c:278#19 0x0000000000882f5f in grouping_planner (root=0x2e95748, tuple_fraction=0) at planner.c:1495#20 0x000000000088268c in subquery_planner (glob=0x2e95348, parse=0x2e90e98, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1064#21 0x0000000000880cdb in standard_planner (parse=0x2e90e98,    query_string=0x2e3a0e8 \"explain\\nselect sum(lo_revenue), d_year, p_brand1\\n  from lineorder, date1, part, supplier\\n  where lo_orderdate = d_datekey\\n    and lo_partkey = p_partkey\\n    and lo_suppkey = s_suppkey\\n    and p_brand1\"..., cursorOptions=2048,    boundParams=0x0) at planner.c:413I tracked the behavior of estimate_num_groups() using gdb line-by-line to observe how 'input_rows' is changedand how it affects the result value.According to the call trace, the problematic estimate_num_groups() invocation is called with 
\"input_rows=3251872.916666667\",then it was rounded up to 3251873 by the clamp_row_est(). Eventually, its result value was calculated larger than the upperlimit, so the return value was suppressed by 3251873, but it is a tiny bit larger than the input value!Back to the cost_memoize_rescan().The hit_ratio is calculated as follows:    hit_ratio = ((calls - ndistinct) / calls) *        (est_cache_entries / Max(ndistinct, est_cache_entries));The \"calls\" is the \"input_rows\" above, and \"ndistinct\"  is the return value of the estimate_num_groups().What happen if \"ndistinct\" is a tiny bit larger than \"calls\"?In the results, the \"hit_ratio\" is calculated as a very small negative value, then it was terminated by Assert().How do we fix the logic? Please some ideas.Best regards,-- HeteroDB, Inc / The PG-Strom ProjectKaiGai Kohei <[email protected]>", "msg_date": "Fri, 14 Jun 2024 21:54:34 +0900", "msg_from": "Kohei KaiGai <[email protected]>", "msg_from_op": true, "msg_subject": "assertion failure at cost_memoize_rescan()" }, { "msg_contents": "On 6/14/24 14:54, Kohei KaiGai wrote:\n> ...\n>\n> I tracked the behavior of estimate_num_groups() using gdb line-by-line to\n> observe how 'input_rows' is changed\n> and how it affects the result value.\n> According to the call trace, the problematic estimate_num_groups()\n> invocation is called with \"input_rows=3251872.916666667\",\n> then it was rounded up to 3251873 by the clamp_row_est(). Eventually, its\n> result value was calculated larger than the upper\n> limit, so the return value was suppressed by 3251873, but it is a tiny bit\n> larger than the input value!\n> \n> Back to the cost_memoize_rescan().\n> The hit_ratio is calculated as follows:\n> \n> hit_ratio = ((calls - ndistinct) / calls) *\n> (est_cache_entries / Max(ndistinct, est_cache_entries));\n> \n> The \"calls\" is the \"input_rows\" above, and \"ndistinct\" is the return value\n> of the estimate_num_groups().\n> What happen if \"ndistinct\" is a tiny bit larger than \"calls\"?\n> In the results, the \"hit_ratio\" is calculated as a very small negative\n> value, then it was terminated by Assert().\n> \n> How do we fix the logic? Please some ideas.\n> \n\nInteresting. Seems like a bug due to the two places clamping the values\ninconsistently. It probably does not matter in other contexts because we\ndon't subtract the values like this, but here it triggers the assert.\n\nI guess the simplest fix would be to clamp \"calls\" the same way before\ncalculating hit_ratio. That makes the \">= 0\" part of the assert somewhat\npointless, though.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 17 Jun 2024 00:23:34 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: assertion failure at cost_memoize_rescan()" }, { "msg_contents": "On Mon, 17 Jun 2024 at 10:23, Tomas Vondra\n<[email protected]> wrote:\n> Interesting. Seems like a bug due to the two places clamping the values\n> inconsistently. It probably does not matter in other contexts because we\n> don't subtract the values like this, but here it triggers the assert.\n>\n> I guess the simplest fix would be to clamp \"calls\" the same way before\n> calculating hit_ratio. 
That makes the \">= 0\" part of the assert somewhat\n> pointless, though.\n\n\"calls\" comes from the value passed as the final parameter in\ncreate_memoize_path().\n\nThere's really only one call to that function and that's in get_memoize_path().\n\nreturn (Path *) create_memoize_path(root,\n innerrel,\n inner_path,\n param_exprs,\n hash_operators,\n extra->inner_unique,\n binary_mode,\n outer_path->rows);\n\nIt would be good to know what type of Path outer_path is. Normally\nwe'll clamp_row_est() on that field. I suspect we must have some Path\ntype that isn't doing that.\n\nKaiGai-san, what type of Path is outer_path?\n\nDavid\n\nDavid\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:27:12 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: assertion failure at cost_memoize_rescan()" }, { "msg_contents": "2024年6月17日(月) 8:27 David Rowley <[email protected]>:\n>\n> On Mon, 17 Jun 2024 at 10:23, Tomas Vondra\n> <[email protected]> wrote:\n> > Interesting. Seems like a bug due to the two places clamping the values\n> > inconsistently. It probably does not matter in other contexts because we\n> > don't subtract the values like this, but here it triggers the assert.\n> >\n> > I guess the simplest fix would be to clamp \"calls\" the same way before\n> > calculating hit_ratio. That makes the \">= 0\" part of the assert somewhat\n> > pointless, though.\n>\n> \"calls\" comes from the value passed as the final parameter in\n> create_memoize_path().\n>\n> There's really only one call to that function and that's in get_memoize_path().\n>\n> return (Path *) create_memoize_path(root,\n> innerrel,\n> inner_path,\n> param_exprs,\n> hash_operators,\n> extra->inner_unique,\n> binary_mode,\n> outer_path->rows);\n>\n> It would be good to know what type of Path outer_path is. Normally\n> we'll clamp_row_est() on that field. 
I suspect we must have some Path\n> type that isn't doing that.\n>\n> KaiGai-san, what type of Path is outer_path?\n>\nIt is CustomPath with rows = 3251872.916666667.\n(I'm not certain whether the non-integer value in the estimated rows\nis legal or not.)\n\nAccording to the crash dump, try_partial_nestloop_path() takes this two paths,\n\n(gdb) up\n#7 0x0000000000860fc5 in try_partial_nestloop_path (root=0x133a968,\njoinrel=0x15258d8, outer_path=0x1513c98, inner_path=0x15264c8,\npathkeys=0x0, jointype=JOIN_INNER, extra=0x7ffc494cddf0) at\njoinpath.c:887\n/home/kaigai/source/pgsql-16/src/backend/optimizer/path/joinpath.c:887:30713:beg:0x860fc5\n\n(gdb) p *(CustomPath *)outer_path\n$13 = {path = {type = T_CustomPath, pathtype = T_CustomScan, parent =\n0x1513058, pathtarget = 0x1513268, param_info = 0x0, parallel_aware =\ntrue, parallel_safe = true, parallel_workers = 2, rows =\n3251872.916666667, startup_cost = 41886.752500000002,\n total_cost = 12348693.488611111, pathkeys = 0x0}, flags = 4,\ncustom_paths = 0x1514788, custom_private = 0x1514ee8, methods =\n0x7f45211feca0 <gpujoin_path_methods>}\n\n(gdb) p *(MemoizePath *)inner_path\n$14 = {path = {type = T_MemoizePath, pathtype = T_Memoize, parent =\n0x14dc800, pathtarget = 0x14dca10, param_info = 0x150d8a8,\nparallel_aware = false, parallel_safe = true, parallel_workers = 0,\nrows = 1, startup_cost = 0.44500000000000001,\n total_cost = 8.4284913207446568, pathkeys = 0x150d5c8}, subpath =\n0x150d148, hash_operators = 0x1526428, param_exprs = 0x1526478,\nsinglerow = true, binary_mode = false, calls = 3251872.916666667,\nest_entries = 3251873}\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <[email protected]>\n\n\n", "msg_date": "Tue, 18 Jun 2024 11:23:21 +0900", "msg_from": "Kohei KaiGai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: assertion failure at cost_memoize_rescan()" }, { "msg_contents": "On Tue, 18 Jun 2024 at 14:23, Kohei KaiGai <[email protected]> wrote:\n>\n> 2024年6月17日(月) 8:27 David Rowley <[email protected]>:\n> > It would be good to know what type of Path outer_path is. Normally\n> > we'll clamp_row_est() on that field. I suspect we must have some Path\n> > type that isn't doing that.\n> >\n> > KaiGai-san, what type of Path is outer_path?\n> >\n> It is CustomPath with rows = 3251872.916666667.\n\nI suspected this might have been a CustomPath.\n\n> (I'm not certain whether the non-integer value in the estimated rows\n> is legal or not.)\n\nI guess since it's not documented that Path.rows is always clamped,\nit's probably bad to assume that it is.\n\nSince clamp_row_est() will ensure the value is clamped >= 1.0 && <=\nMAXIMUM_ROWCOUNT (which ensures non-zero), I tried looking around the\ncodebase for anything that divides by Path.rows to see if we ever\nassume that we can divide without first checking if Path.rows != 0.\nOut of the places I saw, it seems we do tend to code things so that we\ndon't assume the value has been clamped. E.g.\nadjust_limit_rows_costs() does if (*rows < 1) *rows = 1;\n\nI think the best solution is to apply the attached. 
I didn't test,\nbut it should fix the issue you reported and also ensure that\nMemoizePath.calls is never zero, which would also cause issues in the\nhit_ratio calculation in cost_memoize_rescan().\n\nDavid", "msg_date": "Tue, 18 Jun 2024 14:53:24 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: assertion failure at cost_memoize_rescan()" }, { "msg_contents": "On Tue, Jun 18, 2024 at 10:53 AM David Rowley <[email protected]> wrote:\n> Out of the places I saw, it seems we do tend to code things so that we\n> don't assume the value has been clamped. E.g.\n> adjust_limit_rows_costs() does if (*rows < 1) *rows = 1;\n\nAgreed. In costsize.c I saw a few instances where we have\n\n /* Protect some assumptions below that rowcounts aren't zero */\n if (inner_path_rows <= 0)\n inner_path_rows = 1;\n\n> I think the best solution is to apply the attached. I didn't test,\n> but it should fix the issue you reported and also ensure that\n> MemoizePath.calls is never zero, which would also cause issues in the\n> hit_ratio calculation in cost_memoize_rescan().\n\n+1.\n\nThanks\nRichard\n\n\n", "msg_date": "Tue, 18 Jun 2024 11:14:02 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: assertion failure at cost_memoize_rescan()" }, { "msg_contents": "On Tue, 18 Jun 2024 at 15:14, Richard Guo <[email protected]> wrote:\n>\n> On Tue, Jun 18, 2024 at 10:53 AM David Rowley <[email protected]> wrote:\n> > I think the best solution is to apply the attached. I didn't test,\n> > but it should fix the issue you reported and also ensure that\n> > MemoizePath.calls is never zero, which would also cause issues in the\n> > hit_ratio calculation in cost_memoize_rescan().\n>\n> +1.\n\nThanks for looking. Pushed.\n\nDavid\n\n\n", "msg_date": "Wed, 19 Jun 2024 10:23:23 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: assertion failure at cost_memoize_rescan()" } ]
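The numbers from the gdb output above make the failure easy to reproduce outside the planner. The sketch below is not the patch from the thread (which, per the discussion, applies the clamping where MemoizePath.calls gets its value); it is a self-contained C illustration, with clamp_rows() as a simplified stand-in for clamp_row_est() and the Max() macro written out locally, showing how a fractional, unclamped calls value lets ndistinct come out a tiny bit larger and drive the hit ratio fractionally below zero, and how clamping the caller-side estimate the same way keeps it in [0, 1].

#include <math.h>
#include <stdio.h>

#define Max(a, b) ((a) > (b) ? (a) : (b))

/* Simplified stand-in for the planner's clamp_row_est(): round the
 * estimate and keep it at least 1.0 (the upper bound is omitted here). */
static double
clamp_rows(double nrows)
{
	return (nrows <= 1.0) ? 1.0 : rint(nrows);
}

int
main(void)
{
	double		calls = 3251872.916666667;		/* outer_path->rows in the report */
	double		est_cache_entries = 3251873.0;	/* est_entries in the report */
	/* estimate_num_groups() clamps internally, so it can return the
	 * rounded value even though the caller's input was fractional. */
	double		ndistinct = clamp_rows(calls);
	double		hit_ratio;

	/* Unclamped caller: ndistinct is a hair larger than calls, so the
	 * first factor is slightly negative, which trips the Assert. */
	hit_ratio = ((calls - ndistinct) / calls) *
		(est_cache_entries / Max(ndistinct, est_cache_entries));
	printf("unclamped calls: hit_ratio = %g\n", hit_ratio);

	/* Clamp calls the same way before dividing and the ratio stays in [0, 1]. */
	calls = clamp_rows(calls);
	hit_ratio = ((calls - ndistinct) / calls) *
		(est_cache_entries / Max(ndistinct, est_cache_entries));
	printf("clamped calls:   hit_ratio = %g\n", hit_ratio);

	return 0;
}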
[ { "msg_contents": "Hi!\n\nWhile working on a making multix xact offsets 64-bit [0] I've discovered a\nminor issue. The\nthing is that type 'xid' is used in all macro, but it doesn't correct.\nAppropriate MultiXactId or\nMultiXactOffset should be used, actually.\n\nAnd the second thing, as Heikki Linnakangas points out, args naming is also\nmisleading.\n\nSince, these problems are not the point of thread [0], I decided to create\nthis discussion.\nAnd here is the patch set addressing mentioned issues (0001 and 0002).\n\nAdditionally, I made an optional patch 0003 to switch from macro to inline\nfunctions. For\nme, personally, use of macro functions is justified if we are dealing with\ndifferent argument\ntypes, to make polymorphic call. Which is not the case here. So, we can\nhave more\ncontrol over types and still generate the same code in terms of speed.\nSee https://godbolt.org/z/KM8voadhs Starting from O1 function is inlined,\nthus no\noverhead is noticeable. Anyway, it's up to the actual commiter to decide\ndoes it worth it\nor not. Again, this particular patch 0003 is completely optional.\n\nAs always, any opinions and reviews are very welcome!\n\n[0]\nhttps://www.postgresql.org/message-id/flat/ff143b24-a093-40da-9833-d36b83726bdf%40iki.fi#61d5a0e1cf6ab94b0e8aae8559bc4cf7\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Fri, 14 Jun 2024 16:56:44 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Bugfix and improvements in multixact.c" }, { "msg_contents": "On 14/06/2024 16:56, Maxim Orlov wrote:\n> Hi!\n> \n> While working on a making multix xact offsets 64-bit [0] I've\n> discovered a minor issue. The thing is that type 'xid' is used in\n> all macro, but it doesn't correct. Appropriate MultiXactId or \n> MultiXactOffset should be used, actually.\n> \n> And the second thing, as Heikki Linnakangas points out, args naming\n> is also misleading.\nThanks!\n\nLooks good to me at a quick glance. I'll try to review and commit these \nproperly by Monday.\n\n> Additionally, I made an optional patch 0003 to switch from macro to \n> inline functions. For me, personally, use of macro functions is \n> justified if we are dealing with different argument types, to make \n> polymorphic call. Which is not the case here. So, we can have more\n> control over types and still generate the same code in terms of \n> speed. See https://godbolt.org/z/KM8voadhs Starting from O1 function\n> is inlined, thus no overhead is noticeable. Anyway, it's up to the \n> actual commiter to decide does it worth it or not. Again, this \n> particular patch 0003 is completely optional.\nI agree static inline functions are generally easier to work with than \nmacros. These particular macros were not too bad, though.\n\nI'll bite the bullet and commit this one too unless someone objects. 
\nIt's late in the v17 release cycle, but these are local to multixact.c \nso there's no risk of breaking extensions, and it seems good to do it \nnow since we're modifying the macros anyway.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 14 Jun 2024 22:53:47 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bugfix and improvements in multixact.c" }, { "msg_contents": "On 14/06/2024 16:56, Maxim Orlov wrote:\n> +static inline int\n> +MXOffsetToFlagsOffset(MultiXactOffset offset)\n> +{\n> +\tint\t\tflagsoff;\n> +\n> +\toffset /= MULTIXACT_MEMBERS_PER_MEMBERGROUP;\n> +\toffset %= MULTIXACT_MEMBERGROUPS_PER_PAGE;\n> +\tflagsoff = offset * MULTIXACT_MEMBERGROUP_SIZE;\n> +\n> +\treturn flagsoff;\n> +}\n\nI found this reuse of the 'offset' variable a bit confusing, so I added \nseparate local variables for each step.\n\nCommitted with that change, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sun, 16 Jun 2024 20:54:16 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bugfix and improvements in multixact.c" } ]
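As a footnote to the macro-to-inline conversion above, the pattern generalizes well beyond multixact.c. The standalone C sketch below is not PostgreSQL source; DemoOffset and the DEMO_* constants are invented stand-ins for the real SLRU layout. It is only meant to show the trade being made: the macro accepts any integer-like argument unchecked, while the static inline variant pins the parameter type and gives each intermediate step its own named variable, with compilers generally expected to inline it from -O1 up, as the godbolt comparison in the thread suggests:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Illustrative only: DemoOffset and the DEMO_* constants are invented
     * stand-ins, not the real multixact SLRU layout.
     */
    typedef uint32_t DemoOffset;

    #define DEMO_MEMBERS_PER_GROUP   4
    #define DEMO_GROUPS_PER_PAGE     409
    #define DEMO_GROUP_SIZE          20

    /* Old style: the macro accepts anything, so a wrong type goes unnoticed */
    #define DemoOffsetToFlagsOffset(xid) \
        ((((xid) / DEMO_MEMBERS_PER_GROUP) % DEMO_GROUPS_PER_PAGE) * DEMO_GROUP_SIZE)

    /* New style: the parameter type is explicit and each step is named */
    static inline int
    DemoOffsetToFlagsOffsetInline(DemoOffset offset)
    {
        DemoOffset group = offset / DEMO_MEMBERS_PER_GROUP;
        DemoOffset group_in_page = group % DEMO_GROUPS_PER_PAGE;

        return (int) (group_in_page * DEMO_GROUP_SIZE);
    }

    int
    main(void)
    {
        DemoOffset off = 12345;

        /* Both forms compute the same value; the inline one is type-checked */
        printf("%d %d\n",
               (int) DemoOffsetToFlagsOffset(off),
               DemoOffsetToFlagsOffsetInline(off));
        return 0;
    }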
[ { "msg_contents": "Over at [1] Andres expressed enthusiasm for enabling TAP tests to call \nLibPQ directly via FFI, and there was some support from others as well. \nAttached is a very rough POC for just that.There are two perl modules, \none which wraps libpq (or almost all of it) in perl, and another which \nuses that module to create a session object that can be used to run SQL. \nAlso in the patch is a modification of one TAP test (arbitrarily chosen \nas src/bin/pg_amcheck/t/004_verify_heapam.p) to use the new interface, \nso it doesn't use psql at all.\n\nThere's a bunch of work to do here, but for a morning's work it's not \ntoo bad :-) Luckily I had most of the first file already to hand.\n\nNext I plan to look at some of the recovery tests and other uses of \nbackground_psql, which might be more challenging,a dn require extension \nof the session API. Also there's a lot of error checking and \ndocumentation that need to be added.\n\n\ncheers\n\n\nandrew\n\n\n[1]  https://postgr.es/m/[email protected]\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 14 Jun 2024 11:09:43 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Using LibPq in TAP tests via FFI" }, { "msg_contents": "On 2024-06-14 Fr 11:09, Andrew Dunstan wrote:\n> Over at [1] Andres expressed enthusiasm for enabling TAP tests to call \n> LibPQ directly via FFI, and there was some support from others as \n> well. Attached is a very rough POC for just that.There are two perl \n> modules, one which wraps libpq (or almost all of it) in perl, and \n> another which uses that module to create a session object that can be \n> used to run SQL. Also in the patch is a modification of one TAP test \n> (arbitrarily chosen as src/bin/pg_amcheck/t/004_verify_heapam.p) to \n> use the new interface, so it doesn't use psql at all.\n>\n> There's a bunch of work to do here, but for a morning's work it's not \n> too bad :-) Luckily I had most of the first file already to hand.\n>\n> Next I plan to look at some of the recovery tests and other uses of \n> background_psql, which might be more challenging,a dn require \n> extension of the session API. Also there's a lot of error checking and \n> documentation that need to be added.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> [1] \n> https://postgr.es/m/[email protected]\n>\n>\n\nAnd here's the patch\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 14 Jun 2024 11:11:38 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On Fri, Jun 14, 2024 at 11:11 AM Andrew Dunstan <[email protected]> wrote:\n> And here's the patch\n\nI haven't reviewed the patch, but a big +1 for the idea. Not only this\nmight cut down on the resource costs of running the tests in CI, as\nAndres has pointed out a few times, but it also could lead to much\nnicer user interfaces. For instance, right now, we have a number of\nTAP tests that are parsing psql output to recover the values returned\nby queries. Perhaps eventually - or maybe already, again I haven't\nlooked at the code - you'll be able to do something like\n$resultset->[0][0] to pull the first column out of the first row. 
That\nkind of thing could substantially improve the readability and\nmaintainability of some of our tests.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 Jun 2024 11:33:18 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "Hi, \n\nOn June 14, 2024 8:09:43 AM PDT, Andrew Dunstan <[email protected]> wrote:\n>Over at [1] Andres expressed enthusiasm for enabling TAP tests to call LibPQ directly via FFI, and there was some support from others as well. Attached is a very rough POC for just that.There are two perl modules, one which wraps libpq (or almost all of it) in perl, and another which uses that module to create a session object that can be used to run SQL. Also in the patch is a modification of one TAP test (arbitrarily chosen as src/bin/pg_amcheck/t/004_verify_heapam.p) to use the new interface, so it doesn't use psql at all.\n>\n>There's a bunch of work to do here, but for a morning's work it's not too bad :-) Luckily I had most of the first file already to hand.\n\nYay!\n\n\n>Next I plan to look at some of the recovery tests and other uses of background_psql, which might be more challenging,a dn require extension of the session API. Also there's a lot of error checking and documentation that need to be added.\n\nI'd suggest trying to convert the various looping constructs first, they're responsible for a large number of spawned shells. And I vaguely recall that there were none/very few that depend on actually being run via psql. \n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 14 Jun 2024 08:40:54 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "Hi,\n\nOn 2024-06-14 11:11:38 -0400, Andrew Dunstan wrote:\n> On 2024-06-14 Fr 11:09, Andrew Dunstan wrote:\n> > Over at [1] Andres expressed enthusiasm for enabling TAP tests to call\n> > LibPQ directly via FFI, and there was some support from others as well.\n> > Attached is a very rough POC for just that.There are two perl modules,\n> > one which wraps libpq (or almost all of it) in perl, and another which\n> > uses that module to create a session object that can be used to run SQL.\n\nWhat are your current thoughts about a fallback for this? It seems possible\nto implement the session module ontop of BackgroundPsql.pm, if necessary. But\nI suspect we'll eventually get to a point where that gets less and less\nconvenient.\n\n\nHow much of a dependency is FFI::Platypus, compared to requiring perl headers\nto be installed? In case FFI::Platypus is a complicted dependency, a small XS\nwrapper could be an alternative.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 14 Jun 2024 09:25:30 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On Sat, Jun 15, 2024 at 3:33 AM Robert Haas <[email protected]> wrote:\n> I haven't reviewed the patch, but a big +1 for the idea. Not only this\n> might cut down on the resource costs of running the tests in CI, as\n\nIt would be good to keep some context between the threads here. 
For\nthe archives' sake, here is where the potential savings were reported,\nand this and other ideas were discussed:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGJoEO33K%3DZynsH%3DxkiEyfBMZjOoqBK%2BgouBdTGW2-woig%40mail.gmail.com\n\n\n", "msg_date": "Sat, 15 Jun 2024 11:41:40 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On Fri, Jun 14, 2024 at 11:40 AM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On June 14, 2024 8:09:43 AM PDT, Andrew Dunstan <[email protected]>\n> wrote:\n> >Over at [1] Andres expressed enthusiasm for enabling TAP tests to call\n> LibPQ directly via FFI, and there was some support from others as well.\n> Attached is a very rough POC for just that.There are two perl modules, one\n> which wraps libpq (or almost all of it) in perl, and another which uses\n> that module to create a session object that can be used to run SQL. Also in\n> the patch is a modification of one TAP test (arbitrarily chosen as\n> src/bin/pg_amcheck/t/004_verify_heapam.p) to use the new interface, so it\n> doesn't use psql at all.\n> >\n> >There's a bunch of work to do here, but for a morning's work it's not too\n> bad :-) Luckily I had most of the first file already to hand.\n>\n> Yay!\n>\n>\n> >Next I plan to look at some of the recovery tests and other uses of\n> background_psql, which might be more challenging,a dn require extension of\n> the session API. Also there's a lot of error checking and documentation\n> that need to be added.\n>\n> I'd suggest trying to convert the various looping constructs first,\n> they're responsible for a large number of spawned shells. And I vaguely\n> recall that there were none/very few that depend on actually being run via\n> psql.\n>\n>\n>\n>\nYeah, here's a new version with a few more scripts modified, and also\npoll_query_until() adjusted. That seems to be the biggest looping construct.\n\nThe biggest remaining unadjusted script users of psql are all in the\nsubscription and recovery tests.\n\ncheers\n\nandrew", "msg_date": "Sun, 16 Jun 2024 10:58:33 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On Fri, Jun 14, 2024 at 7:42 PM Thomas Munro <[email protected]> wrote:\n\n> On Sat, Jun 15, 2024 at 3:33 AM Robert Haas <[email protected]> wrote:\n> > I haven't reviewed the patch, but a big +1 for the idea. Not only this\n> > might cut down on the resource costs of running the tests in CI, as\n>\n> It would be good to keep some context between the threads here. For\n> the archives' sake, here is where the potential savings were reported,\n> and this and other ideas were discussed:\n>\n>\n> https://www.postgresql.org/message-id/flat/CA%2BhUKGJoEO33K%3DZynsH%3DxkiEyfBMZjOoqBK%2BgouBdTGW2-woig%40mail.gmail.com\n\n\nYeah thanks for adding that context.\n\ncheers\n\nandrew\n\nOn Fri, Jun 14, 2024 at 7:42 PM Thomas Munro <[email protected]> wrote:On Sat, Jun 15, 2024 at 3:33 AM Robert Haas <[email protected]> wrote:\n> I haven't reviewed the patch, but a big +1 for the idea. Not only this\n> might cut down on the resource costs of running the tests in CI, as\n\nIt would be good to keep some context between the threads here.  
For\nthe archives' sake, here is where the potential savings were reported,\nand this and other ideas were discussed:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGJoEO33K%3DZynsH%3DxkiEyfBMZjOoqBK%2BgouBdTGW2-woig%40mail.gmail.comYeah thanks for adding that context.cheersandrew", "msg_date": "Sun, 16 Jun 2024 10:59:49 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On Fri, Jun 14, 2024 at 12:25 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-06-14 11:11:38 -0400, Andrew Dunstan wrote:\n> > On 2024-06-14 Fr 11:09, Andrew Dunstan wrote:\n> > > Over at [1] Andres expressed enthusiasm for enabling TAP tests to call\n> > > LibPQ directly via FFI, and there was some support from others as well.\n> > > Attached is a very rough POC for just that.There are two perl modules,\n> > > one which wraps libpq (or almost all of it) in perl, and another which\n> > > uses that module to create a session object that can be used to run\n> SQL.\n>\n> What are your current thoughts about a fallback for this? It seems\n> possible\n> to implement the session module ontop of BackgroundPsql.pm, if necessary.\n> But\n> I suspect we'll eventually get to a point where that gets less and less\n> convenient.\n>\n\nI guess it's a question of how widely available FFI::Platypus is. I know\nit's available pretty much out of the box on Strawberry Perl and Msys2'\nucrt perl. It works fine on my Ubuntu ARM64 instance. On my Mac I had to\ninstall it via cpan, but that worked fine. For the moment CYgwin has me\nbeat, but I believe it's possible to make it work - at least the docs\nsuggest it is. Not sure about other platforms.\n\nI agree with you that falling back on BackgroundPsql is not a terribly\nsatisfactory solution.\n\n\n>\n>\n> How much of a dependency is FFI::Platypus, compared to requiring perl\n> headers\n> to be installed? In case FFI::Platypus is a complicted dependency, a\n> small XS\n> wrapper could be an alternative.\n>\n>\n>\n\nSure we could look at it. I might need to enlist some assistance there :-).\nUsing FFI is really nice because it does so much of the work for you.\n\ncheers\n\nandrew\n\nOn Fri, Jun 14, 2024 at 12:25 PM Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-06-14 11:11:38 -0400, Andrew Dunstan wrote:\n> On 2024-06-14 Fr 11:09, Andrew Dunstan wrote:\n> > Over at [1] Andres expressed enthusiasm for enabling TAP tests to call\n> > LibPQ directly via FFI, and there was some support from others as well.\n> > Attached is a very rough POC for just that.There are two perl modules,\n> > one which wraps libpq (or almost all of it) in perl, and another which\n> > uses that module to create a session object that can be used to run SQL.\n\nWhat are your current thoughts about a fallback for this?  It seems possible\nto implement the session module ontop of BackgroundPsql.pm, if necessary. But\nI suspect we'll eventually get to a point where that gets less and less\nconvenient.I guess it's a question of how widely available FFI::Platypus is. I know it's available pretty much out of the box on Strawberry Perl and Msys2' ucrt perl. It works fine on my Ubuntu ARM64 instance. On my Mac I had to install it via cpan, but that worked fine. For the moment CYgwin has me beat, but I believe it's possible to make it work - at least the docs suggest it is. Not sure about other platforms.I agree with you that falling back on BackgroundPsql is not a terribly satisfactory solution. 
\n\n\nHow much of a dependency is FFI::Platypus, compared to requiring perl headers\nto be installed?  In case FFI::Platypus is a complicted dependency, a small XS\nwrapper could be an alternative.\nSure we could look at it. I might need to enlist some assistance there :-). Using FFI is really nice because it does so much of the work for you.cheersandrew", "msg_date": "Sun, 16 Jun 2024 17:43:05 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "Hi,\n\nOn 2024-06-16 17:43:05 -0400, Andrew Dunstan wrote:\n> On Fri, Jun 14, 2024 at 12:25 PM Andres Freund <[email protected]> wrote:\n> I guess it's a question of how widely available FFI::Platypus is. I know\n> it's available pretty much out of the box on Strawberry Perl and Msys2'\n> ucrt perl.\n\nFWIW I hacked a bit on CI, trying to make it work. Took a bit, partially\nbecause CI uses an older strawberry perl without FFI::Platypus. And\nFFI::Platypus didn't build with that.\n\n\nUpdating that to 5.38 causes some complaints about LANG that I haven't hunted\ndown, just avoided by unsetting LANG.\n\n\nAs-is your patch didn't work, because it has \"systempath => []\", which caused\nlibpq to not load, because it depended on things in the system path...\n\n\nWhat's the reason for that?\n\nAfter commenting that out, all but one tests passed:\n\n[20:21:31.137] ------------------------------------- 8< -------------------------------------\n[20:21:31.137] stderr:\n[20:21:31.137] # Failed test 'psql connect success'\n[20:21:31.137] # at C:/cirrus/src/test/recovery/t/041_checkpoint_at_promote.pl line 161.\n[20:21:31.137] # got: '2'\n[20:21:31.137] # expected: '0'\n[20:21:31.137] # Failed test 'psql select 1'\n[20:21:31.137] # at C:/cirrus/src/test/recovery/t/041_checkpoint_at_promote.pl line 162.\n[20:21:31.137] # got: ''\n[20:21:31.137] # expected: '1'\n[20:21:31.137] # Looks like you failed 2 tests of 6.\n[20:21:31.137]\n[20:21:31.137] (test program exited with status code 2)\n[20:21:31.137] ------------------------------------------------------------------------------\n[20:21:31.137]\n\n\nDue to concurrency and run-to-run variance I wouldn't bet too much on this,\nbut the modified tests do have improved test times:\n\nbefore:\n\n[19:40:47.468] 135/296 postgresql:pg_amcheck / pg_amcheck/004_verify_heapam OK 7.70s 32 subtests passed\n[19:43:40.853] 232/296 postgresql:amcheck / amcheck/001_verify_heapam OK 36.50s 272 subtests passed\n\nafter:\n[20:22:55.495] 133/296 postgresql:pg_amcheck / pg_amcheck/004_verify_heapam OK 4.60s 32 subtests passed\n[20:25:13.641] 212/296 postgresql:amcheck / amcheck/001_verify_heapam OK 4.87s 272 subtests passed\n\n\nI looked at a few past runs and there never were instances of\namcheck/001_verify_heapam that were even close to as fast as this.\n\n\nThe overall tests time did improve some, but that is hard to weigh due to the\ntest failure.\n\n\n> I agree with you that falling back on BackgroundPsql is not a terribly\n> satisfactory solution.\n\nI'm somewhat doubtful we'll just agree on making FFI::Platypus a hard\ndependency, but if we agree to do so...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Jun 2024 15:38:30 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On Mon, Jun 17, 2024 at 10:38 AM Andres Freund <[email protected]> wrote:\n> before:\n>\n> [19:40:47.468] 135/296 postgresql:pg_amcheck / 
pg_amcheck/004_verify_heapam OK 7.70s 32 subtests passed\n> [19:43:40.853] 232/296 postgresql:amcheck / amcheck/001_verify_heapam OK 36.50s 272 subtests passed\n>\n> after:\n> [20:22:55.495] 133/296 postgresql:pg_amcheck / pg_amcheck/004_verify_heapam OK 4.60s 32 subtests passed\n> [20:25:13.641] 212/296 postgresql:amcheck / amcheck/001_verify_heapam OK 4.87s 272 subtests passed\n\nNice!\n\n> > I agree with you that falling back on BackgroundPsql is not a terribly\n> > satisfactory solution.\n>\n> I'm somewhat doubtful we'll just agree on making FFI::Platypus a hard\n> dependency, but if we agree to do so...\n\nWhy can't we just do that? I mean, do we have any concrete reason to\nthink that it'll block a supported platform?\n\nI'm personally willing to test/validate on the full set of non-Linux\nUnixen and write up the install instructions to help eg build farm\nanimal owners adjust. Really this is mostly about libffi, which is\nsuper widely ported, and it is required by Python which we already\nsoft-depend on, and will hard-depend on if we drop autoconf. The rest\nis presumably just Perl xs glue to drive it, which, if it doesn't work\non some niche platform, you'd think should be easy enough to fix...\n\n\n", "msg_date": "Mon, 17 Jun 2024 10:57:35 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Mon, Jun 17, 2024 at 10:38 AM Andres Freund <[email protected]> wrote:\n>> I'm somewhat doubtful we'll just agree on making FFI::Platypus a hard\n>> dependency, but if we agree to do so...\n\n> Why can't we just do that? I mean, do we have any concrete reason to\n> think that it'll block a supported platform?\n\nIIUC, this would only be a hard dependency if you want to run certain\nTAP tests (maybe eventually all of them). Seems like not that much of\na roadblock for somebody that's just trying to build PG for\nthemselves. I agree we'd want it on most buildfarm animals\neventually, but they pretty much all have python installed ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Jun 2024 19:07:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "Hi,\n\nOn 2024-06-16 19:07:49 -0400, Tom Lane wrote:\n> Thomas Munro <[email protected]> writes:\n> > On Mon, Jun 17, 2024 at 10:38 AM Andres Freund <[email protected]> wrote:\n> >> I'm somewhat doubtful we'll just agree on making FFI::Platypus a hard\n> >> dependency, but if we agree to do so...\n> \n> > Why can't we just do that? I mean, do we have any concrete reason to\n> > think that it'll block a supported platform?\n> \n> IIUC, this would only be a hard dependency if you want to run certain\n> TAP tests (maybe eventually all of them).\n\nI think it'd be all of them within a very short timeframe. 
IMO we'd want to\nconvert a bunch of the code in Cluster.pm to use psql-less connections to\nmaximize the benefit across all tests, without needing to modify all of them.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Jun 2024 16:24:35 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On Mon, Jun 17, 2024 at 10:57 AM Thomas Munro <[email protected]> wrote:\n> I'm personally willing to test/validate on the full set of non-Linux\n> Unixen and write up the install instructions to help eg build farm\n> animal owners adjust.\n\nI created a page where we can log \"works/doesn't work\" and \"installed\nhow\" information:\n\nhttps://wiki.postgresql.org/wiki/Platypus\n\nI'll go and test the BSDs and hopefully illumos. And then maybe Macs\nif Tom doesn't beat me to it.\n\n\n", "msg_date": "Mon, 17 Jun 2024 12:03:28 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "Hi,\n\nOn 2024-06-17 12:03:28 +1200, Thomas Munro wrote:\n> And then maybe Macs if Tom doesn't beat me to it.\n\nmacports even has a platypus package, so that should be easy.\n\nFor CI it should suffice to add p5.34-ffi-platypus to the list of packages in\nmacos' setup_additional_packages_script, they then should get automatically\ncached.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Jun 2024 17:23:14 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-06-17 12:03:28 +1200, Thomas Munro wrote:\n>> And then maybe Macs if Tom doesn't beat me to it.\n\n> macports even has a platypus package, so that should be easy.\n\nLess easy if you don't want to depend on macports or homebrew.\nHowever, I see something a bit promising-looking in the base system:\n\n$ ls -l /usr/lib/*ffi*\n-rwxr-xr-x 1 root wheel 100720 May 7 03:01 /usr/lib/libffi-trampolines.dylib\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Jun 2024 20:30:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> Really this is mostly about libffi, which is\n> super widely ported, and it is required by Python\n\nBTW, what form does that \"requirement\" take exactly? I see no\nevidence that the core python3 executable is linked to libffi\non any of the machines I checked.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Jun 2024 20:34:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "I wrote:\n> Andres Freund <[email protected]> writes:\n>> macports even has a platypus package, so that should be easy.\n\n> Less easy if you don't want to depend on macports or homebrew.\n\nI tried \"sudo cpan install FFI::Platypus\" against macOS Sonoma's\nbase-system perl. It seemed to compile all right, but a nontrivial\nfraction of its self-tests fail:\n\nFiles=72, Tests=296, 7 wallclock secs ( 0.10 usr 0.07 sys + 4.96 cusr 1.34 csys = 6.47 CPU)\nResult: FAIL\nFailed 33/72 test programs. 
87/296 subtests failed.\nmake: *** [test_dynamic] Error 3\n PLICEASE/FFI-Platypus-2.08.tar.gz\n /usr/bin/make test -- NOT OK\n\nNo energy for digging deeper tonight.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Jun 2024 20:53:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On Mon, Jun 17, 2024 at 12:34 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > Really this is mostly about libffi, which is\n> > super widely ported, and it is required by Python\n>\n> BTW, what form does that \"requirement\" take exactly? I see no\n> evidence that the core python3 executable is linked to libffi\n> on any of the machines I checked.\n\nThere is another library in between:\n\n$ ldd /usr/local/lib/python3.11/lib-dynload/_ctypes.cpython-311.so\n/usr/local/lib/python3.11/lib-dynload/_ctypes.cpython-311.so:\n libffi.so.8 => /usr/local/lib/libffi.so.8 (0x214865b76000)\n libdl.so.1 => /usr/lib/libdl.so.1 (0x214864bcc000)\n libthr.so.3 => /lib/libthr.so.3 (0x214866862000)\n libc.so.7 => /lib/libc.so.7 (0x214863e03000)\n\nPerhaps it's technically possible to build Python without the ctypes\nmodule, but I'm not sure and I don't see anywhere that describes it\nexplicitly as optional.\n\n\n", "msg_date": "Mon, 17 Jun 2024 13:36:24 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On Mon, Jun 17, 2024 at 1:36 PM Thomas Munro <[email protected]> wrote:\n> Perhaps it's technically possible to build Python without the ctypes\n> module, but I'm not sure and I don't see anywhere that describes it\n> explicitly as optional.\n\nOne clue is that they used to bundle their own copy of libffi before\nPython 3.7. You had a choice of that or --with-system-ffi, but I\ndon't see an option for none. I might be missing something about\ntheir build system, though.\n\n\n", "msg_date": "Mon, 17 Jun 2024 13:41:40 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On 2024-Jun-16, Andrew Dunstan wrote:\n\n\n> +sub query_oneval\n> +{\n> +\tmy $self = shift;\n> +\tmy $sql = shift;\n> +\tmy $missing_ok = shift; # default is not ok\n> +\tmy $conn = $self->{conn};\n> +\tmy $result = PQexec($conn, $sql);\n> +\tmy $ok = $result && (PQresultStatus($result) == PGRES_TUPLES_OK);\n> +\tunless ($ok)\n> +\t{\n> +\t\tPQclear($result) if $result;\n> +\t\treturn undef;\n> +\t}\n> +\tmy $ntuples = PQntuples($result);\n> +\treturn undef if ($missing_ok && !$ntuples);\n> +\tmy $nfields = PQnfields($result);\n> +\tdie \"$ntuples tuples != 1 or $nfields fields != 1\"\n> +\t if $ntuples != 1 || $nfields != 1;\n> +\tmy $val = PQgetvalue($result, 0, 0);\n> +\tif ($val eq \"\")\n> +\t{\n> +\t\t$val = undef if PGgetisnull($result, 0, 0);\n> +\t}\n> +\tPQclear($result);\n> +\treturn $val;\n> +}\n\nHmm, here you use PGgetisnull, is that a typo for PQgetisnull? If it\nis, then I wonder why doesn't this fail in some obvious way? 
Is this\npart dead code maybe?\n\n> +# return tuples like psql's -A -t mode.\n> +\n> +sub query_tuples\n> +{\n> +\tmy $self = shift;\n> +\tmy @results;\n> +\tforeach my $sql (@_)\n> +\t{\n> +\t\tmy $res = $self->query($sql);\n> +\t\t# join will render undef as an empty string here\n> +\t\tno warnings qw(uninitialized);\n> +\t\tmy @tuples = map { join('|', @$_); } @{$res->{rows}};\n> +\t\tpush(@results, join(\"\\n\",@tuples));\n> +\t}\n> +\treturn join(\"\\n\",@results);\n> +}\n\nYou made this function join the tuples from multiple queries together,\nbut the output format doesn't show anything for queries that return\nempty. I think this strategy doesn't cater for the case of comparing\nresults from multiple queries very well, because it might lead to sets\nof queries that return empty result for different queries reported as\nidentical when they aren't. Maybe add a separator line between the\nresults from each query, when there's more than one? (Perhaps just\n\"join('--\\n', @results)\" in that last line does the trick?)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n Quite refreshing in a world of \"weekend drag racer\" developers.\"\n(Scott Marlowe)\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:07:19 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "\nOn 2024-06-17 Mo 5:07 AM, Alvaro Herrera wrote:\n> On 2024-Jun-16, Andrew Dunstan wrote:\n>\n>\n>> +sub query_oneval\n>> +{\n>> +\tmy $self = shift;\n>> +\tmy $sql = shift;\n>> +\tmy $missing_ok = shift; # default is not ok\n>> +\tmy $conn = $self->{conn};\n>> +\tmy $result = PQexec($conn, $sql);\n>> +\tmy $ok = $result && (PQresultStatus($result) == PGRES_TUPLES_OK);\n>> +\tunless ($ok)\n>> +\t{\n>> +\t\tPQclear($result) if $result;\n>> +\t\treturn undef;\n>> +\t}\n>> +\tmy $ntuples = PQntuples($result);\n>> +\treturn undef if ($missing_ok && !$ntuples);\n>> +\tmy $nfields = PQnfields($result);\n>> +\tdie \"$ntuples tuples != 1 or $nfields fields != 1\"\n>> +\t if $ntuples != 1 || $nfields != 1;\n>> +\tmy $val = PQgetvalue($result, 0, 0);\n>> +\tif ($val eq \"\")\n>> +\t{\n>> +\t\t$val = undef if PGgetisnull($result, 0, 0);\n>> +\t}\n>> +\tPQclear($result);\n>> +\treturn $val;\n>> +}\n> Hmm, here you use PGgetisnull, is that a typo for PQgetisnull? If it\n> is, then I wonder why doesn't this fail in some obvious way? Is this\n> part dead code maybe?\n\n\nIt's not dead, just not exercised ATM. I should maybe include a test \nscripts for the two new modules.\n\nAs you rightly suggest, it's a typo. If it had been called it would have \naborted the test.\n\n\n>\n>> +# return tuples like psql's -A -t mode.\n>> +\n>> +sub query_tuples\n>> +{\n>> +\tmy $self = shift;\n>> +\tmy @results;\n>> +\tforeach my $sql (@_)\n>> +\t{\n>> +\t\tmy $res = $self->query($sql);\n>> +\t\t# join will render undef as an empty string here\n>> +\t\tno warnings qw(uninitialized);\n>> +\t\tmy @tuples = map { join('|', @$_); } @{$res->{rows}};\n>> +\t\tpush(@results, join(\"\\n\",@tuples));\n>> +\t}\n>> +\treturn join(\"\\n\",@results);\n>> +}\n> You made this function join the tuples from multiple queries together,\n> but the output format doesn't show anything for queries that return\n> empty. 
I think this strategy doesn't cater for the case of comparing\n> results from multiple queries very well, because it might lead to sets\n> of queries that return empty result for different queries reported as\n> identical when they aren't. Maybe add a separator line between the\n> results from each query, when there's more than one? (Perhaps just\n> \"join('--\\n', @results)\" in that last line does the trick?)\n>\n\npsql doesn't do that, and this is designed to mimic psql's behaviour. We \ncould change that of course. I suspect none of the uses expect empty \nresultsets, so it's probably somewhat moot.\n\n\nThanks for looking.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 08:22:06 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On 2024-Jun-17, Andrew Dunstan wrote:\n\n> On 2024-06-17 Mo 5:07 AM, Alvaro Herrera wrote:\n\n> > You made this function join the tuples from multiple queries together,\n> > but the output format doesn't show anything for queries that return\n> > empty. I think this strategy doesn't cater for the case of comparing\n> > results from multiple queries very well, because it might lead to sets\n> > of queries that return empty result for different queries reported as\n> > identical when they aren't. Maybe add a separator line between the\n> > results from each query, when there's more than one? (Perhaps just\n> > \"join('--\\n', @results)\" in that last line does the trick?)\n> \n> psql doesn't do that, and this is designed to mimic psql's behaviour. We\n> could change that of course. I suspect none of the uses expect empty\n> resultsets, so it's probably somewhat moot.\n\nTrue -- I guess my comment should really be directed to the original\ncoding of the test in test_index_replay. I think adding the separator\nline makes it more trustworthy.\n\nProbably you're right that the current code of in-core tests don't care\nabout this, but if we export this technique to the world, I'm sure\nsomebody somewhere is going to care.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)\n\n\n", "msg_date": "Mon, 17 Jun 2024 15:45:26 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On 2024-06-16 Su 6:38 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2024-06-16 17:43:05 -0400, Andrew Dunstan wrote:\n>> On Fri, Jun 14, 2024 at 12:25 PM Andres Freund <[email protected]> wrote:\n>> I guess it's a question of how widely available FFI::Platypus is. I know\n>> it's available pretty much out of the box on Strawberry Perl and Msys2'\n>> ucrt perl.\n> FWIW I hacked a bit on CI, trying to make it work. Took a bit, partially\n> because CI uses an older strawberry perl without FFI::Platypus. And\n> FFI::Platypus didn't build with that.\n>\n>\n> Updating that to 5.38 causes some complaints about LANG that I haven't hunted\n> down, just avoided by unsetting LANG.\n>\n>\n> As-is your patch didn't work, because it has \"systempath => []\", which caused\n> libpq to not load, because it depended on things in the system path...\n>\n>\n> What's the reason for that?\n\n\nNot sure, that code was written months ago. 
I just checked the \nFFI::CheckLib code and libpath is searched before systempath, so there \nshouldn't be any reason not to use the default load path.\n\n\n>\n> After commenting that out, all but one tests passed:\n>\n> [20:21:31.137] ------------------------------------- 8< -------------------------------------\n> [20:21:31.137] stderr:\n> [20:21:31.137] # Failed test 'psql connect success'\n> [20:21:31.137] # at C:/cirrus/src/test/recovery/t/041_checkpoint_at_promote.pl line 161.\n> [20:21:31.137] # got: '2'\n> [20:21:31.137] # expected: '0'\n> [20:21:31.137] # Failed test 'psql select 1'\n> [20:21:31.137] # at C:/cirrus/src/test/recovery/t/041_checkpoint_at_promote.pl line 162.\n> [20:21:31.137] # got: ''\n> [20:21:31.137] # expected: '1'\n> [20:21:31.137] # Looks like you failed 2 tests of 6.\n> [20:21:31.137]\n> [20:21:31.137] (test program exited with status code 2)\n> [20:21:31.137] ------------------------------------------------------------------------------\n> [20:21:31.137]\n\n\nYeah, the recovery tests were using poll_query_until in a rather funky \nway. That's fixed in this latest version.\n\n\n>\n>\n>\n>> I agree with you that falling back on BackgroundPsql is not a terribly\n>> satisfactory solution.\n> I'm somewhat doubtful we'll just agree on making FFI::Platypus a hard\n> dependency, but if we agree to do so...\n>\n>\n\nMaybe not. If so your other suggestion of a small XS wrapper might make \nsense.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 17 Jun 2024 10:01:27 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On 2024-06-17 Mo 10:01 AM, Andrew Dunstan wrote:\n>\n>>\n>>> I agree with you that falling back on BackgroundPsql is not a terribly\n>>> satisfactory solution.\n>> I'm somewhat doubtful we'll just agree on making FFI::Platypus a hard\n>> dependency, but if we agree to do so...\n>>\n>>\n>\n> Maybe not. If so your other suggestion of a small XS wrapper might \n> make sense.\n\n\nHere's the latest version of this patch. It removes all use of \nbackground_psql(). Instead it uses libpq's async interface, which seems \nto me far more robust. There is one remaining use of interactive_psql(), \nbut that's reasonable as it's used for testing psql itself.\n\nI spent yesterday creating an XS wrapper for just the 19 libpq functions \nused in Session.pm. It's pretty simple. I have it passing a very basic \ntest, but haven't tried plugging it into Session.pm yet.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 16 Jul 2024 10:27:17 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On Wed, Jul 17, 2024 at 2:27 AM Andrew Dunstan <[email protected]> wrote:\n> Here's the latest version of this patch. It removes all use of\n> background_psql(). Instead it uses libpq's async interface, which seems\n> to me far more robust. There is one remaining use of interactive_psql(),\n> but that's reasonable as it's used for testing psql itself.\n\nThis looks really nice! 
Works on my local FBSD machine.\n\nI pushed it to CI, and mostly saw environmental problems unrelated to\nthe patch, but you might be interested in the ASAN failure visible in\nthe cores section:\n\nhttps://cirrus-ci.com/task/6607915962859520\n\nUnfortunately I can't see the interesting log messages, because it\ndetected that the logs were still being appended to and declined to\nupload them. I think that means there must be subprocesses not being\nwaited for somewhere?\n\n> I spent yesterday creating an XS wrapper for just the 19 libpq functions\n> used in Session.pm. It's pretty simple. I have it passing a very basic\n> test, but haven't tried plugging it into Session.pm yet.\n\nNeat. I guess the libpq FFI/XS piece looks the same to the rest of\nthe test framework outside that module. It does sound pretty\nconvenient if the patch just works™ on CI/BF without any environment\nchanges, which I assume must be doable because we already build XS\nstuff in sr/pl/plperl. Looking forward to trying that version.\n\n\n", "msg_date": "Fri, 19 Jul 2024 10:51:51 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using LibPq in TAP tests via FFI" }, { "msg_contents": "On 2024-07-18 Th 6:51 PM, Thomas Munro wrote:\n> On Wed, Jul 17, 2024 at 2:27 AM Andrew Dunstan<[email protected]> wrote:\n>> Here's the latest version of this patch. It removes all use of\n>> background_psql(). Instead it uses libpq's async interface, which seems\n>> to me far more robust. There is one remaining use of interactive_psql(),\n>> but that's reasonable as it's used for testing psql itself.\n> This looks really nice! Works on my local FBSD machine.\n\n\ncool\n\n\n>\n> I pushed it to CI, and mostly saw environmental problems unrelated to\n> the patch, but you might be interested in the ASAN failure visible in\n> the cores section:\n>\n> https://cirrus-ci.com/task/6607915962859520\n>\n> Unfortunately I can't see the interesting log messages, because it\n> detected that the logs were still being appended to and declined to\n> upload them. I think that means there must be subprocesses not being\n> waited for somewhere?\n\n\nI couldn't see anything obvious either.\n\n\n>\n>> I spent yesterday creating an XS wrapper for just the 19 libpq functions\n>> used in Session.pm. It's pretty simple. I have it passing a very basic\n>> test, but haven't tried plugging it into Session.pm yet.\n> Neat. I guess the libpq FFI/XS piece looks the same to the rest of\n> the test framework outside that module.\n\n\nYeah, that's the idea.\n\n\n> It does sound pretty\n> convenient if the patch just works™ on CI/BF without any environment\n> changes, which I assume must be doable because we already build XS\n> stuff in sr/pl/plperl. Looking forward to trying that version.\n\n\nStill working on it. Meanwhile, here's a new version. It has some \ncleanup and also tries to use Session objects instead of psql in simple \ncases for safe_psql().\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 19 Jul 2024 16:08:42 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using LibPq in TAP tests via FFI" } ]
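For readers who have not used FFI::Platypus before, here is a minimal sketch of the general shape of the approach. It is not the patch's PostgreSQL::Test::Session API and attaches only a handful of libpq entry points; it assumes libpq can be located by FFI::CheckLib and that PGCONNSTR (an invented environment variable for this example) holds a usable connection string:

    # Minimal illustration only -- not the patch's PostgreSQL::Test::Session API.
    # Assumes libpq is findable and that $ENV{PGCONNSTR} (invented for this
    # example) holds a usable conninfo string.
    use strict;
    use warnings;
    use FFI::Platypus;
    use FFI::CheckLib qw(find_lib_or_die);

    my $ffi = FFI::Platypus->new(api => 1);
    $ffi->lib(find_lib_or_die(lib => 'pq'));

    # Attach just the libpq entry points this sketch needs.
    $ffi->attach(PQconnectdb    => ['string']               => 'opaque');
    $ffi->attach(PQstatus       => ['opaque']               => 'int');
    $ffi->attach(PQerrorMessage => ['opaque']               => 'string');
    $ffi->attach(PQexec         => ['opaque', 'string']     => 'opaque');
    $ffi->attach(PQresultStatus => ['opaque']               => 'int');
    $ffi->attach(PQgetvalue     => ['opaque', 'int', 'int'] => 'string');
    $ffi->attach(PQclear        => ['opaque']               => 'void');
    $ffi->attach(PQfinish       => ['opaque']               => 'void');

    use constant { CONNECTION_OK => 0, PGRES_TUPLES_OK => 2 };

    my $conn = PQconnectdb($ENV{PGCONNSTR} // 'dbname=postgres');
    die 'connection failed: ' . PQerrorMessage($conn)
      unless PQstatus($conn) == CONNECTION_OK;

    my $res = PQexec($conn, 'SELECT 1');
    die 'query failed: ' . PQerrorMessage($conn)
      unless PQresultStatus($res) == PGRES_TUPLES_OK;

    print PQgetvalue($res, 0, 0), "\n";    # prints 1, with no psql subprocess
    PQclear($res);
    PQfinish($conn);

The patch itself wraps many more libpq functions and layers a session object with helpers such as query(), query_oneval() and query_tuples() on top, but the attach-and-call shape is the same.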
[ { "msg_contents": "Hackers,\n\nI noticed that neither `regex_like` nor `starts with`, the jsonpath operators, raise an error when the operand is not a string (or array of strings):\n\ndavid=# select jsonb_path_query('true', '$ like_regex \"^hi\"');\n jsonb_path_query \n------------------\n null\n(1 row)\n\ndavid=# select jsonb_path_query('{\"x\": \"hi\"}', '$ starts with \"^hi\"');\n jsonb_path_query \n------------------\n null\n(1 row)\n\nThis is true in strict and lax mode, and with verbosity enabled (as in these examples). Most other operators raise an error when they can’t operate on the operand:\n\ndavid=# select jsonb_path_query('{\"x\": \"hi\"}', '$.integer()');\nERROR: jsonpath item method .integer() can only be applied to a string or numeric value\n\ndavid=# select jsonb_path_query('{\"x\": \"hi\"}', '$+$');\nERROR: left operand of jsonpath operator + is not a single numeric value\n\nShould `like_regex` and `starts with` adopt this behavior, too?\n\nI note that filter expressions seem to suppress these sorts of errors, but I assume that’s by design:\n\ndavid=# select jsonb_path_query('{\"x\": \"hi\"}', 'strict $ ?(@ starts with \"^hi\")');\n jsonb_path_query \n------------------\n(0 rows)\n\ndavid=# select jsonb_path_query('{\"x\": \"hi\"}', 'strict $ ?(@ like_regex \"^hi\")');\n jsonb_path_query \n------------------\n(0 rows)\n\ndavid=# select jsonb_path_query('{\"x\": \"hi\"}', 'strict $ ?(@.integer() == 1)');\n jsonb_path_query \n------------------\n(0 rows)\n\nD\n\n\n\n", "msg_date": "Fri, 14 Jun 2024 12:21:23 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "jsonpath: Missing regex_like && starts with Errors?" }, { "msg_contents": "On 06/14/24 12:21, David E. Wheeler wrote:\n> I noticed that neither `regex_like` nor `starts with`, the jsonpath operators, raise an error when the operand is not a string (or array of strings):\n> \n> david=# select jsonb_path_query('true', '$ like_regex \"^hi\"');\n> jsonb_path_query \n> ------------------\n> null\n> (1 row)\n> \n> david=# select jsonb_path_query('{\"x\": \"hi\"}', '$ starts with \"^hi\"');\n> jsonb_path_query \n> ------------------\n> null\n> (1 row)\n\nTo begin with, both of those path queries should have been rejected at\nthe parsing stage, just like the one David Johnson pointed out:\n\nOn 06/13/24 22:14, David G. Johnston wrote:\n> On Thursday, June 13, 2024, Chapman Flack <[email protected]> wrote:\n>> On 06/13/24 21:46, David G. Johnston wrote:\n>>>>> david=# select jsonb_path_query('1', '$ >= 1');\n>>>>\n>>>> Good point. I can't either. No way I can see to parse that as\n>>>> a <JSON path wff>.\n>>>\n>>> Whether we note it as non-standard or not is an open question then, but\n>> it\n>>> does work and opens up a documentation question.\n\nAll of these are <JSON path predicate> appearing where a <JSON path wff>\nis needed, and that's not allowed in the standard. Strictly speaking, the\nonly place <JSON path predicate> can appear is within <JSON filter expression>.\n\nSo I should go look at our code to see what grammar we've implemented,\nexactly. It is beginning to seem as if we have simply added\n<JSON path predicate> as another choice for an expression, not restricted\nto only appearing in a filter. If so, and we add documentation about how\nwe diverge from the standard, that's probably the way to say it.\n\nOn 06/13/24 22:14, David G. 
Johnston wrote:\n> I don’t get why the outcome of a boolean producing operation isn’t just\n> generally allowed to be produced\n\nI understand; after all, what is a 'predicate' but another 'boolean\nproducing operation'? But the committee (at least in this edition) has\nstuck us with this clear division in the grammar: there is no\n<JSON path wff>, boolean as it may be, that can appear as a\n<JSON path predicate>, and there is no <JSON path predicate> that\ncan appear outside of a filter and be treated as a boolean-valued\nexpression.\n\nAs for the error behavior of a <JSON path predicate> (which strictly,\nagain, can only appear inside a <JSON filter expression>), the standard\nsays what seems to be the same thing, in a couple different ways.\n\nIn 4.48.5 Overview of SQL/JSON path language, this is said: \"The SQL/JSON\npath language traps any errors that occur during the evaluation of a\n<JSON filter expression>. Depending on the precise <JSON path predicate> ...\nthe result may be Unknown, True, or False, ...\".\n\nLater in 9.46's General Rules where the specific semantics of the\nvarious predicates are laid out, each predicate has rules spelling out\nwhich of Unknown, True, or False results when an error condition is\nencountered (usually Unknown, except where something already seen allows\nreturning True or False). Finally, the <JSON filter expression> itself\ncollapses the three-valued logic to two; it includes the items for which\nthe predicate returns True, and excludes them for False or Unknown.\n\nSo that's where the errors went.\n\nThe question of what should happen to the errors when a\n<JSON path predicate> appears outside of a <JSON filter expression>\nof course isn't answered in the standard, because that's not supposed\nto be possible. So if we're allowing predicates to appear on their own\nas expressions, it's also up to us to say what should happen with errors\nwhen they do.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 14 Jun 2024 22:29:34 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing regex_like && starts with Errors?" }, { "msg_contents": "On 06/14/24 22:29, Chapman Flack wrote:\n> So I should go look at our code to see what grammar we've implemented,\n> exactly. It is beginning to seem as if we have simply added\n> <JSON path predicate> as another choice for an expression, not restricted\n> to only appearing in a filter. If so, and we add documentation about how\n> we diverge from the standard, that's probably the way to say it.\n\nThat's roughly what we've done:\n\n\n 119 result:\n 120 mode expr_or_predicate {\n 121 ...\n 125 }\n 126 | /* EMPTY */ { *result = NULL; }\n 127 ;\n 128\n 129 expr_or_predicate:\n 130 expr { $$ = $1; }\n 131 | predicate { $$ = $1; }\n 132 ;\n\n\nOddly, that's only at the top-level goal production. Your entire JSON\npath query we'll allow to be a predicate in lieu of an expr. We still\ndon't allow a predicate to appear in place of an expr within any other\nproduction.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 14 Jun 2024 23:21:52 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing regex_like && starts with Errors?" }, { "msg_contents": "On Jun 14, 2024, at 22:29, Chapman Flack <[email protected]> wrote:\n\n> So I should go look at our code to see what grammar we've implemented,\n> exactly. 
It is beginning to seem as if we have simply added\n> <JSON path predicate> as another choice for an expression, not restricted\n> to only appearing in a filter. If so, and we add documentation about how\n> we diverge from the standard, that's probably the way to say it.\n\nYes, if I understand correctly, these are predicate check expressions, supported and documented as an extension to the standard since Postgres 12[1]. I found their behavior quite confusing for a while, and spent some time figuring it out and submitting a doc patch (committed in 7014c9a[2]) to hopefully clarify things in Postgres 17.\n\n> So that's where the errors went.\n\nAh, great, that explains the error suppression in filters. Thank you. I still think the supression of `like_regex` and `starts with` errors in predicate path queries is odd, though.\n\n> The question of what should happen to the errors when a\n> <JSON path predicate> appears outside of a <JSON filter expression>\n> of course isn't answered in the standard, because that's not supposed\n> to be possible. So if we're allowing predicates to appear on their own\n> as expressions, it's also up to us to say what should happen with errors\n> when they do.\n\nRight, and I think there’s an inconsistency right now.\n\nBest,\n\nDavid\n\n[1]: https://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-CHECK-EXPRESSIONS\n[2]: https://github.com/postgres/postgres/commit/7014c9a\n\n\n\n", "msg_date": "Sat, 15 Jun 2024 10:47:07 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Missing regex_like && starts with Errors?" }, { "msg_contents": "On 06/15/24 10:47, David E. Wheeler wrote:\n> these are predicate check expressions, supported and documented\n> as an extension to the standard since Postgres 12[1].\n> ...\n> [1]: https://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-CHECK-EXPRESSIONS\n\nI see. Yes, that documentation now says \"predicate check expressions return\nthe single three-valued result of the predicate: true, false, or unknown\".\n\n(Aside: are all readers of the docs assumed to have learned the habit\nof calling SQL null \"unknown\" when speaking of a boolean? They can flip\nback to 8.6 Boolean Type and see 'a third state, “unknown”, which is\nrepresented by the SQL null value'. But would it save them some page\nflipping to add \" (represented by SQL null)\" to the sentence here?)\n\nAs Unknown is typically what the predicates return within a filter (where\nerrors get trapped) when an error has occurred, the existing docs seem to\nsuggest they behave the same way in a \"predicate check expression\", so a\nchange to that behavior now would be a change to what we've documented.\n\nOTOH, getting Unknown because some error occurred is strictly less\ninformation than seeing the error, so perhaps you would want a way\nto request non-error-trapping behavior for a \"predicate check expression\".\n\nCan't really overload jsonb_path_query's 'silent' parameter for that,\nbecause it is already false by default. 
If predicate check expressions\nwere nonsilent by default, the existing 'silent' parameter would be a\nperfect way to silence them.\n\nNo appetite to add yet another optional boolean parameter to\njsonb_path_query for the sole purpose of controlling the silence of\nour nonstandard syntax extension ....\n\nMaybe just see the nonstandard syntax extension and raise it another one:\n\nexpr_or_predicate\n : expr\n | predicate\n | \"nonsilent\" '(' predicate ')'\n ;\n\nor something like that.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 15 Jun 2024 12:23:08 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing regex_like && starts with Errors?" }, { "msg_contents": "On Jun 15, 2024, at 12:23, Chapman Flack <[email protected]> wrote:\n\n> I see. Yes, that documentation now says \"predicate check expressions return\n> the single three-valued result of the predicate: true, false, or unknown\".\n\nIt has been there since jsonpath was introduced in v12[1]:\n\n> A path expression can be a Boolean predicate, although the SQL/JSON standard allows predicates only in filters. This is necessary for implementation of the @@ operator. For example, the following jsonpath expression is valid in PostgreSQL:\n> \n> '$.track.segments[*].HR < 70'\n\n\n\n> (Aside: are all readers of the docs assumed to have learned the habit\n> of calling SQL null \"unknown\" when speaking of a boolean? They can flip\n> back to 8.6 Boolean Type and see 'a third state, “unknown”, which is\n> represented by the SQL null value'. But would it save them some page\n> flipping to add \" (represented by SQL null)\" to the sentence here?)\n\nIn 9.16.2[2] it says:\n\n> The unknown value plays the same role as SQL NULL and can be tested for with the is unknown predicate.\n\n> As Unknown is typically what the predicates return within a filter (where\n> errors get trapped) when an error has occurred, the existing docs seem to\n> suggest they behave the same way in a \"predicate check expression\", so a\n> change to that behavior now would be a change to what we've documented.\n\nIt’s reasonable to ask, then, whether `starts with` and `like_regex` are correct and the others shouldn’t throw errors in predicate check expressions, yes. I don’t know the answer, but would like it to be consistent.\n\n> Can't really overload jsonb_path_query's 'silent' parameter for that,\n> because it is already false by default. 
If predicate check expressions\n> were nonsilent by default, the existing 'silent' parameter would be a\n> perfect way to silence them.\n\nI think that’s how it should be; I prefer that it raises errors by default but you can silence them:\n\ndavid=# select jsonb_path_query(target => '{\"x\": \"hi\"}', path => '$.integer()', silent => false);\nERROR: jsonpath item method .integer() can only be applied to a string or numeric value\n\ndavid=# select jsonb_path_query(target => '{\"x\": \"hi\"}', path => '$.integer()', silent => true);\n jsonb_path_query \n------------------\n(0 rows)\n\nI suggest that the same behavior be adopted for `like_regex` and `starts with`.\n\n> No appetite to add yet another optional boolean parameter to\n> jsonb_path_query for the sole purpose of controlling the silence of\n> our nonstandard syntax extension ....\n\nYou don’t need it IMO, the existing silent parameter already does it existing error-raising operators.\n\nBest,\n\nDavid\n\n[1]: https://www.postgresql.org/docs/12/functions-json.html#FUNCTIONS-SQLJSON-PATH\n[2]: https://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-PATH\n\n\n\n\n", "msg_date": "Sun, 16 Jun 2024 11:52:23 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Missing regex_like && starts with Errors?" }, { "msg_contents": "On Jun 16, 2024, at 11:52, David E. Wheeler <[email protected]> wrote:\n\n> I think that’s how it should be; I prefer that it raises errors by default but you can silence them:\n> \n> david=# select jsonb_path_query(target => '{\"x\": \"hi\"}', path => '$.integer()', silent => false);\n> ERROR: jsonpath item method .integer() can only be applied to a string or numeric value\n> \n> david=# select jsonb_path_query(target => '{\"x\": \"hi\"}', path => '$.integer()', silent => true);\n> jsonb_path_query \n> ------------------\n> (0 rows)\n> \n> I suggest that the same behavior be adopted for `like_regex` and `starts with`.\n\nOkay, I think I’ve figured this out, and the key is that I am, once again, comparing predicate path queries to SQL standard queries. If I update the first example to use a comparison I no longer get an error:\n\ndavid=# select jsonb_path_query('{\"x\": \"hi\"}', '$.integer() == 1');\n jsonb_path_query \n------------------\n null\n\nSo I think that’s the key: There’s not a difference between the behavior of `like_regex` and `starts with` vs other predicate expressions.\n\nThis dichotomy continues to annoy. I would very much like some way to have jsonb_path_query() raise an error (or even a warning!) if passed a predate expression, and for jsonb_path_match() to raise an error or warning if its path is not a predicate expression. Because I keep confusing TF out of myself.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 18:14:59 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Missing regex_like && starts with Errors?" }, { "msg_contents": "On 06/17/24 18:14, David E. 
Wheeler wrote:\n> So I think that’s the key: There’s not a difference between the behavior of\n> `like_regex` and `starts with` vs other predicate expressions.\n\nThe current implementation seems to have made each of our\n<JSON path predicate>s responsible for swallowing its own errors, which\nis one perfectly cromulent way to satisfy the SQL standard behavior saying\nall errors within a <JSON filter expression> should be swallowed.\n\nThe standard says nothing on how they should behave outside of a\n<JSON filter expression>, because as far as the standard's concerned,\nthey can't appear there.\n\nOurs currently behave the same way, and swallow their errors.\n\nIt would have been possible to write them in such a way as to raise errors,\nbut not when inside a <JSON filter expression>, and that would also satisfy\nthe standard, but it would also give us the errors you would like from our\nnonstandard \"predicate check expressions\". And then you could easily use\nsilent => true if you wanted them silent.\n\nI'd be leery of changing that, though, as we've already documented that\na \"predicate check expression\" returns true, false, or unknown, so having\nit throw by default seems like a change of documented behavior.\n\nThe current situation can't make much use of 'silent', since it's already\nfalse by default; you can't make it any falser to make predicate-check\nerrors show up.\n\nWould it be a thinkable thought to change the 'silent' default to null?\nThat could have the same effect as false for SQL standard expressions, and\nthe same effect seen now for \"predicate check expressions\", and you could\npass it explicitly false if you wanted errors from the predicate checks.\n\nIf that's no good, I don't see an obvious solution other than adding\nanother nonstandard construct to what's nonstandard already, and allowing\nsomething like nonsilent(predicate check expression).\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 17 Jun 2024 18:44:41 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing regex_like && starts with Errors?" }, { "msg_contents": "On Jun 17, 2024, at 6:44 PM, Chapman Flack <[email protected]> wrote:\n\n> The current implementation seems to have made each of our\n> <JSON path predicate>s responsible for swallowing its own errors, which\n> is one perfectly cromulent way to satisfy the SQL standard behavior saying\n> all errors within a <JSON filter expression> should be swallowed.\n\nNaw, executePredicate does it for all of them, as for the left operand here[1].\n\n> The standard says nothing on how they should behave outside of a\n> <JSON filter expression>, because as far as the standard's concerned,\n> they can't appear there.\n> \n> Ours currently behave the same way, and swallow their errors.\n\nYes, and they’re handled consistently, at least.\n\n> It would have been possible to write them in such a way as to raise errors,\n> but not when inside a <JSON filter expression>, and that would also satisfy\n> the standard, but it would also give us the errors you would like from our\n> nonstandard \"predicate check expressions\". And then you could easily use\n> silent => true if you wanted them silent.\n\nI’m okay without the errors, as long as the behaviors are consistent. 
I mean it might be cool to have a way to get them, but the consistency I thought I saw was the bit that seemed like a bug.\n\n> I'd be leery of changing that, though, as we've already documented that\n> a \"predicate check expression\" returns true, false, or unknown, so having\n> it throw by default seems like a change of documented behavior.\n\nRight, same for using jsonb_path_match().\n\n> The current situation can't make much use of 'silent', since it's already\n> false by default; you can't make it any falser to make predicate-check\n> errors show up.\n\nEXTREAMLY FALSE! 😂\n\n> Would it be a thinkable thought to change the 'silent' default to null?\n> That could have the same effect as false for SQL standard expressions, and\n> the same effect seen now for \"predicate check expressions\", and you could\n> pass it explicitly false if you wanted errors from the predicate checks.\n\nThaat seems like it’d be confusing TBH.\n\n> If that's no good, I don't see an obvious solution other than adding\n> another nonstandard construct to what's nonstandard already, and allowing\n> something like nonsilent(predicate check expression).\n\nThe only options I can think of are a GUC to turn on SUPER STRICT mode or something (yuck, action at a distance) or introduce new functions with the new behavior. I advocate for neither (at this point).\n\nBest,\n\nDavid\n\n[1]: https://github.com/postgres/postgres/blob/82ed67a/src/backend/utils/adt/jsonpath_exec.c#L2058-L2059\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 19:17:48 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Missing regex_like && starts with Errors?" }, { "msg_contents": "On 06/17/24 19:17, David E. Wheeler wrote:\n> [1]: https://github.com/postgres/postgres/blob/82ed67a/src/backend/utils/adt/jsonpath_exec.c#L2058-L2059\n\nHuh, I just saw something peculiar, skimming through the code:\n\nhttps://github.com/postgres/postgres/blob/82ed67a/src/backend/utils/adt/jsonpath_exec.c#L1385\n\nWe allow .boolean() applied to a jbvBool, a jbvString (those are the only\ntwo possibilities allowed by the standard), or to a jbvNumeric (!), but\nonly if it can be serialized and then parsed as an int4, otherwise we say\nERRCODE_NON_NUMERIC_SQL_JSON_ITEM, or if it survived all that we call it\ntrue if it isn't zero.\n\nI wonder what that alternative is doing there.\n\nIt also looks like the expected errcode (in the standard, if the item\nwas not boolean or string) would be 2202V \"non-boolean SQL/JSON item\" ...\nwhich isn't in our errcodes.txt.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 17 Jun 2024 22:17:06 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing regex_like && starts with Errors?" }, { "msg_contents": "On 18.06.24 04:17, Chapman Flack wrote:\n> On 06/17/24 19:17, David E. 
Wheeler wrote:\n>> [1]: https://github.com/postgres/postgres/blob/82ed67a/src/backend/utils/adt/jsonpath_exec.c#L2058-L2059\n> \n> Huh, I just saw something peculiar, skimming through the code:\n> \n> https://github.com/postgres/postgres/blob/82ed67a/src/backend/utils/adt/jsonpath_exec.c#L1385\n> \n> We allow .boolean() applied to a jbvBool, a jbvString (those are the only\n> two possibilities allowed by the standard), or to a jbvNumeric (!), but\n> only if it can be serialized and then parsed as an int4, otherwise we say\n> ERRCODE_NON_NUMERIC_SQL_JSON_ITEM, or if it survived all that we call it\n> true if it isn't zero.\n> \n> I wonder what that alternative is doing there.\n\nAre you saying we shouldn't allow .boolean() to be called on a JSON number?\n\nI would concur that that's what the spec says.\n\n\n\n", "msg_date": "Tue, 18 Jun 2024 14:30:55 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing regex_like && starts with Errors?" }, { "msg_contents": "On 06/18/24 08:30, Peter Eisentraut wrote:\n> Are you saying we shouldn't allow .boolean() to be called on a JSON number?\n> \n> I would concur that that's what the spec says.\n\nOr, if we want to extend the spec and allow .boolean() on a JSON number,\nshould it just check that the number is nonzero or zero, rather than\nchecking that it can be serialized then deserialized as an int4 and\notherwise complaining that it isn't a number?\n\nWhich error code to use seems to be a separate issue. Is it possible that\nmore codes like 2202V non-boolean SQL/JSON item were added in a later spec\nthan we developed the code from?\n\nI have not read through all of the code to see in how many other places\nthe error code doesn't match the spec.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 18 Jun 2024 09:40:19 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Missing regex_like && starts with Errors?" } ]
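To make the split discussed above concrete, here is a minimal SQL sketch of the current behavior: a predicate check expression yields a jsonb boolean from jsonb_path_query() and a SQL boolean from jsonb_path_match(), and an error raised inside the predicate is swallowed in both cases, even with silent => false. This assumes the documented signatures with the optional silent parameter; the expected results are noted in comments rather than guaranteed output.

    SELECT jsonb_path_query('{"x": "hi"}', '$.x like_regex "^h"');                -- one row: true
    SELECT jsonb_path_match('{"x": "hi"}', '$.x like_regex "^h"');                -- t
    -- The error from .integer() on a non-numeric value is silenced inside the
    -- predicate, so neither call raises even though silent is false:
    SELECT jsonb_path_query('{"x": "hi"}', '$.integer() == 1', silent => false);  -- one row: null
    SELECT jsonb_path_match('{"x": "hi"}', '$.integer() == 1', silent => false);  -- NULL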
[ { "msg_contents": "The blog post here (thank you depesz!):\n\nhttps://www.depesz.com/2024/06/11/how-much-speed-youre-leaving-at-the-table-if-you-use-default-locale/\n\nshowed an interesting result where the builtin provider is not quite as\nfast as \"C\" for queries like:\n\n SELECT * FROM a WHERE t = '...';\n\nThe reason is that it's calling varstr_cmp() many times, which does a\nlookup in the collation cache for each call. For sorts, it only does a\nlookup in the collation cache once, so the effect is not significant.\n\nThe reason looking up \"C\" is faster is because there's a special check\nfor C_COLLATION_OID, so it doesn't even need to do the hash lookup. If\nyou create an equivalent collation like:\n\n CREATE COLLATION libc_c(PROVIDER = libc, LOCALE = 'C');\n\nit will perform the same as a collation with the builtin provider.\n\nAttached is a patch to use simplehash.h instead, which speeds things up\nenough to make them fairly close (from around 15% slower to around 8%).\n\nThe patch is based on the series here:\n\nhttps://postgr.es/m/[email protected]\n\nwhich does some refactoring in a related area, but I can make them\nindependent.\n\nWe can also consider what to do about those special cases:\n\n * add a special case for PG_C_UTF8?\n * instead of a hardwired set of special collation IDs, have a single-\nelement \"last collation ID\" to check before doing the hash lookup?\n * remove the special cases entirely if we can close the performance\ngap enough that it's not important?\n\n(Note: the special case in lc_ctpye_is_c() is currently required for\ncorrectness because hba.c uses C_COLLATION_OID for regexes before the\nsyscache is initialized. That can be fixed pretty easily a couple\ndifferent ways, though.)\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Fri, 14 Jun 2024 16:46:39 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Speed up collation cache" }, { "msg_contents": "On 15.06.24 01:46, Jeff Davis wrote:\n> * instead of a hardwired set of special collation IDs, have a single-\n> element \"last collation ID\" to check before doing the hash lookup?\n\nI'd imagine that method could be very effective.\n\n\n", "msg_date": "Wed, 19 Jun 2024 10:10:09 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up collation cache" }, { "msg_contents": "On Sat, Jun 15, 2024 at 6:46 AM Jeff Davis <[email protected]> wrote:\n> Attached is a patch to use simplehash.h instead, which speeds things up\n> enough to make them fairly close (from around 15% slower to around 8%).\n\n+#define SH_HASH_KEY(tb, key) hash_uint32((uint32) key)\n\nFor a static inline hash for speed reasons, we can use murmurhash32\nhere, which is also inline.\n\n\n", "msg_date": "Thu, 20 Jun 2024 17:07:23 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up collation cache" }, { "msg_contents": "On Thu, 2024-06-20 at 17:07 +0700, John Naylor wrote:\n> On Sat, Jun 15, 2024 at 6:46 AM Jeff Davis <[email protected]> wrote:\n> > Attached is a patch to use simplehash.h instead, which speeds\n> > things up\n> > enough to make them fairly close (from around 15% slower to around\n> > 8%).\n> \n> +#define SH_HASH_KEY(tb, key)   hash_uint32((uint32) key)\n> \n> For a static inline hash for speed reasons, we can use murmurhash32\n> here, which is also inline.\n\nThank you, that brings it down a few more percentage points.\n\nNew patches attached, still based on 
the setlocale-removal patch\nseries.\n\nSetup:\n\n create collation libc_c (provider=libc, locale='C');\n create table collation_cache_test(t text);\n insert into collation_cache_test\n select g::text||' '||g::text\n from generate_series(1,200000000) g;\n\nQueries:\n\n select * from collation_cache_test where t < '0' collate \"C\";\n select * from collation_cache_test where t < '0' collate libc_c;\n\nThe two collations are identical except that the former benefits from\nthe optimization for C_COLLATION_OID, and the latter does not, so these\nqueries measure the overhead of the collation cache lookup.\n\nResults (in ms):\n\n \"C\" \"libc_c\" overhead\n master: 6350  7855 24%\n v4-0001: 6091 6324 4%\n\n(Note: I don't have an explanation for the difference in performance of\nthe \"C\" locale -- probably just some noise in the test.)\n\nConsidering that simplehash brings the worst case overhead under 5%, I\ndon't see a big reason to use the single-element cache also.\n\nRegards,\n\tJeff Davis", "msg_date": "Fri, 26 Jul 2024 14:00:31 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up collation cache" }, { "msg_contents": "On 7/26/24 11:00 PM, Jeff Davis wrote:\n> Results (in ms):\n> \n> \"C\" \"libc_c\" overhead\n> master: 6350  7855 24%\n> v4-0001: 6091 6324 4%\n\nI got more overhead in my quick benchmarking when I ran the same \nbenchmark. Also tried your idea with caching the last lookup (PoC patch \nattached) and it basically removed all overhead, but I guess it will not \nhelp if you have two different non.default locales in the same query.\n\n \"C\" \"libc_c\" overhead\nbefore: 6695 8376 25%\nafter: 6605 7340 11%\ncache last: 6618 6677 1%\n\nBut even without that extra optimization I think this patch is worth \nmerging and the patch is small, simple and clean and easy to understand \nand a just a clear speed up. Feels like a no brainer. I think that it is \nready for committer.\n\nAnd then we can discuss after committing if an additional cache of the \nlast locale is worth it or not.\n\nAndreas", "msg_date": "Sun, 28 Jul 2024 00:14:56 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up collation cache" }, { "msg_contents": "On Sun, 2024-07-28 at 00:14 +0200, Andreas Karlsson wrote:\n> But even without that extra optimization I think this patch is worth \n> merging and the patch is small, simple and clean and easy to\n> understand \n> and a just a clear speed up. Feels like a no brainer. I think that it\n> is \n> ready for committer.\n\nCommitted, thank you.\n\n> And then we can discuss after committing if an additional cache of\n> the \n> last locale is worth it or not.\n\nYeah, I'm holding off on that until refactoring in the area settles,\nand we'll see if it's still worth it.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Sun, 28 Jul 2024 13:02:20 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up collation cache" } ]
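For anyone wanting to reproduce the effect on a smaller data set, the following is a scaled-down sketch of the benchmark above (assuming a build where a provider=libc, locale='C' collation can be created); the only difference between the two queries is whether the hard-wired C_COLLATION_OID fast path applies or the per-comparison collation cache lookup is taken.

    CREATE COLLATION IF NOT EXISTS libc_c (provider = libc, locale = 'C');
    CREATE TABLE collation_cache_test AS
        SELECT g::text || ' ' || g::text AS t
        FROM generate_series(1, 1000000) g;
    -- Fast path: the comparison uses the hard-wired "C" collation.
    EXPLAIN (ANALYZE, TIMING OFF)
        SELECT count(*) FROM collation_cache_test WHERE t < '0' COLLATE "C";
    -- Equivalent locale, but goes through the collation cache on every comparison.
    EXPLAIN (ANALYZE, TIMING OFF)
        SELECT count(*) FROM collation_cache_test WHERE t < '0' COLLATE libc_c;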
[ { "msg_contents": "Hi Team,\n\nGreetings of the day!!\n\nWe are planning to partition tables using pg_partman. Like we are planning\nfor their backup and restoration process.\n\nGot a few URLs where pg_dump had issues while restoring some data that was\nlost.\n\nkindly guide me the process or steps I need to follow for backing up\npartitioned tables correctly so that while restoration I don't face any\nissue.\n\nAnother question, currently we are using pg_dump for database backup which\nlocks tables and completely puts db transactions on hold. For this I want\ntables shouldnt get locked also the backup process should complete in less\ntime.\n\nThanks in advance!!\n\nThanks & Regards,\nGayatri\n\nHi Team,Greetings of the day!!We are planning to partition tables using pg_partman. Like we are planning for their backup and restoration process. Got a few URLs where pg_dump had issues while restoring some data that was lost.kindly guide me the process or steps I need to follow for backing up partitioned tables correctly so that while restoration I don't face any issue.Another question, currently we are using pg_dump for database backup which locks tables and completely puts db transactions on hold. For this I want tables shouldnt get locked also the backup process should complete in less time.Thanks in advance!!Thanks & Regards,Gayatri", "msg_date": "Sun, 16 Jun 2024 04:39:05 +0530", "msg_from": "Gayatri Singh <[email protected]>", "msg_from_op": true, "msg_subject": "Backup and Restore of Partitioned Table in PG-15" }, { "msg_contents": "Hi Gayatri Singh,\n\nCould you try pgBackRest ?\nIts advantages are speed, support for incremental backups, minimal\nlocking, and robust point in time recovery options besides several advanced\nfeatures.\nBest suites for large-scale and critical PostgreSQL deployments.\n\nRegards,\nMuhammad Ikram\nBitnine Global\n\nOn Sun, Jun 16, 2024 at 4:09 AM Gayatri Singh <[email protected]>\nwrote:\n\n> Hi Team,\n>\n> Greetings of the day!!\n>\n> We are planning to partition tables using pg_partman. Like we are planning\n> for their backup and restoration process.\n>\n> Got a few URLs where pg_dump had issues while restoring some data that was\n> lost.\n>\n> kindly guide me the process or steps I need to follow for backing up\n> partitioned tables correctly so that while restoration I don't face any\n> issue.\n>\n> Another question, currently we are using pg_dump for database backup which\n> locks tables and completely puts db transactions on hold. For this I want\n> tables shouldnt get locked also the backup process should complete in less\n> time.\n>\n> Thanks in advance!!\n>\n> Thanks & Regards,\n> Gayatri\n>\n>\n>\n\n-- \nMuhammad Ikram\n\nHi Gayatri Singh,Could you try pgBackRest ? Its advantages are speed,  support for incremental backups, minimal locking, and robust point in time recovery options besides several advanced features.Best suites for large-scale and critical PostgreSQL deployments.Regards,Muhammad IkramBitnine GlobalOn Sun, Jun 16, 2024 at 4:09 AM Gayatri Singh <[email protected]> wrote:Hi Team,Greetings of the day!!We are planning to partition tables using pg_partman. Like we are planning for their backup and restoration process. 
Got a few URLs where pg_dump had issues while restoring some data that was lost.kindly guide me the process or steps I need to follow for backing up partitioned tables correctly so that while restoration I don't face any issue.Another question, currently we are using pg_dump for database backup which locks tables and completely puts db transactions on hold. For this I want tables shouldnt get locked also the backup process should complete in less time.Thanks in advance!!Thanks & Regards,Gayatri\n-- Muhammad Ikram", "msg_date": "Sun, 16 Jun 2024 16:59:55 +0500", "msg_from": "Muhammad Ikram <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup and Restore of Partitioned Table in PG-15" }, { "msg_contents": "Hi Gayatri,\n\n\nOn Sun, Jun 16, 2024 at 4:39 AM Gayatri Singh <[email protected]>\nwrote:\n\n> Hi Team,\n>\n> Greetings of the day!!\n>\n> We are planning to partition tables using pg_partman. Like we are planning\n> for their backup and restoration process.\n>\n> Got a few URLs where pg_dump had issues while restoring some data that was\n> lost.\n>\n\nThis mailing list is for discussing development topics - bugs and features.\nPlease provide more details about the issues - URL where the issue is\nreported, a reproducer etc. If the issues are already being discussed,\nplease participate in the relevant threads.\n\n\n>\n> kindly guide me the process or steps I need to follow for backing up\n> partitioned tables correctly so that while restoration I don't face any\n> issue.\n>\n> Another question, currently we are using pg_dump for database backup which\n> locks tables and completely puts db transactions on hold. For this I want\n> tables shouldnt get locked also the backup process should complete in less\n> time.\n>\n\nThese questions are appropriate for pgsql-general mailing list.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nHi Gayatri,On Sun, Jun 16, 2024 at 4:39 AM Gayatri Singh <[email protected]> wrote:Hi Team,Greetings of the day!!We are planning to partition tables using pg_partman. Like we are planning for their backup and restoration process. Got a few URLs where pg_dump had issues while restoring some data that was lost.This mailing list is for discussing development topics - bugs and features. Please provide more details about the issues - URL where the issue is reported, a reproducer etc. If the issues are already being discussed, please participate in the relevant threads. kindly guide me the process or steps I need to follow for backing up partitioned tables correctly so that while restoration I don't face any issue.Another question, currently we are using pg_dump for database backup which locks tables and completely puts db transactions on hold. For this I want tables shouldnt get locked also the backup process should complete in less time.These questions are appropriate for pgsql-general mailing list.-- Best Wishes,Ashutosh Bapat", "msg_date": "Tue, 18 Jun 2024 14:25:40 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup and Restore of Partitioned Table in PG-15" } ]
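On the partitioned-table side of the question above, one low-tech sanity check after any restore is to compare the partition tree of the restored parent against the source. pg_partition_tree() is available in PostgreSQL 12 and later; 'foo_partitioned' below is only a stand-in for whatever parent table pg_partman manages.

    SELECT relid, parentrelid, isleaf, level
    FROM pg_partition_tree('foo_partitioned')
    ORDER BY level, relid::text;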
[ { "msg_contents": "Separating this from the pytest thread:\n\nOn Sat, Jun 15, 2024 at 01:26:57PM -0400, Robert Haas wrote:\n> The one\n> thing I know about that *I* think is a pretty big problem about Perl\n> is that IPC::Run is not really maintained.\n\nI don't see in https://github.com/cpan-authors/IPC-Run/issues anything\naffecting PostgreSQL. If you know of IPC::Run defects, please report them.\nIf I knew of an IPC::Run defect affecting PostgreSQL, I likely would work on\nit before absurdity like https://github.com/cpan-authors/IPC-Run/issues/175\nNetBSD-10-specific behavior coping.\n\n\n", "msg_date": "Sat, 15 Jun 2024 16:48:24 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "IPC::Run accepts bug reports" }, { "msg_contents": "On Sat, Jun 15, 2024 at 7:48 PM Noah Misch <[email protected]> wrote:\n> Separating this from the pytest thread:\n>\n> On Sat, Jun 15, 2024 at 01:26:57PM -0400, Robert Haas wrote:\n> > The one\n> > thing I know about that *I* think is a pretty big problem about Perl\n> > is that IPC::Run is not really maintained.\n>\n> I don't see in https://github.com/cpan-authors/IPC-Run/issues anything\n> affecting PostgreSQL. If you know of IPC::Run defects, please report them.\n> If I knew of an IPC::Run defect affecting PostgreSQL, I likely would work on\n> it before absurdity like https://github.com/cpan-authors/IPC-Run/issues/175\n> NetBSD-10-specific behavior coping.\n\nI'm not concerned about any specific open issue; my concern is about\nthe health of that project. https://metacpan.org/pod/IPC::Run says\nthat this module is seeking new maintainers, and it looks like the\npeople listed as current maintainers are mostly inactive. Instead,\nyou're fixing stuff. That's great, but we ideally want PostgreSQL's\ndependencies to be things that are used widely enough that we don't\nend up maintaining them ourselves.\n\nI apologize if my comment came across as disparaging your efforts;\nthat was not my intent.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jun 2024 13:56:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run accepts bug reports" }, { "msg_contents": "Hi,\n\nOn 2024-06-15 16:48:24 -0700, Noah Misch wrote:\n> Separating this from the pytest thread:\n>\n> On Sat, Jun 15, 2024 at 01:26:57PM -0400, Robert Haas wrote:\n> > The one\n> > thing I know about that *I* think is a pretty big problem about Perl\n> > is that IPC::Run is not really maintained.\n>\n> I don't see in https://github.com/cpan-authors/IPC-Run/issues anything\n> affecting PostgreSQL. If you know of IPC::Run defects, please report them.\n> If I knew of an IPC::Run defect affecting PostgreSQL, I likely would work on\n> it before absurdity like https://github.com/cpan-authors/IPC-Run/issues/175\n> NetBSD-10-specific behavior coping.\n\n1) Sometimes hangs hard on windows if started processes have not been shut\n down before script exits. I've mostly encountered this via the buildfarm /\n CI, so I never had a good way of narrowing this down. It's very painful\n because things seem to often just get stuck once that happens.\n\n2) If a subprocess dies in an inopportune moment, IPC::Run dies with \"ack\n Broken pipe:\" (in _do_filters()). There's plenty reports of this on the\n list, and I've hit this several times personally. 
It seems to be timing\n dependent, I've encountered it after seemingly irrelevant ordering changes.\n\n I suspect I could create a reproducer with a bit of time.\n\n3) It's very slow on windows (in addition to the windows process\n slowness). That got to be fixable to some degree.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:11:17 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run accepts bug reports" }, { "msg_contents": "On Mon, Jun 17, 2024 at 11:11:17AM -0700, Andres Freund wrote:\n> On 2024-06-15 16:48:24 -0700, Noah Misch wrote:\n> > On Sat, Jun 15, 2024 at 01:26:57PM -0400, Robert Haas wrote:\n> > > The one\n> > > thing I know about that *I* think is a pretty big problem about Perl\n> > > is that IPC::Run is not really maintained.\n> >\n> > I don't see in https://github.com/cpan-authors/IPC-Run/issues anything\n> > affecting PostgreSQL. If you know of IPC::Run defects, please report them.\n> > If I knew of an IPC::Run defect affecting PostgreSQL, I likely would work on\n> > it before absurdity like https://github.com/cpan-authors/IPC-Run/issues/175\n> > NetBSD-10-specific behavior coping.\n> \n> 1) Sometimes hangs hard on windows if started processes have not been shut\n> down before script exits. I've mostly encountered this via the buildfarm /\n> CI, so I never had a good way of narrowing this down. It's very painful\n> because things seem to often just get stuck once that happens.\n\nThat's bad. Do you have a link to a log, a thread discussing it, or even just\none of the test names experiencing it?\n\n> 2) If a subprocess dies in an inopportune moment, IPC::Run dies with \"ack\n> Broken pipe:\" (in _do_filters()). There's plenty reports of this on the\n> list, and I've hit this several times personally. It seems to be timing\n> dependent, I've encountered it after seemingly irrelevant ordering changes.\n> \n> I suspect I could create a reproducer with a bit of time.\n\nI've seen that one. If the harness has data to write to a child, the child\nexiting before the write is one way to reach that. Perhaps before exec(),\nIPC::Run should do a non-blocking write from each pending IO. That way, small\nwrites never experience the timing-dependent behavior.\n\n> 3) It's very slow on windows (in addition to the windows process\n> slowness). That got to be fixable to some degree.\n\nAgreed. For the next release, today's git has some optimizations. There are\nother known-possible Windows optimizations not implemented.\n\n\n", "msg_date": "Tue, 18 Jun 2024 10:10:17 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IPC::Run accepts bug reports" }, { "msg_contents": "Hi,\n\nOn 2024-06-18 10:10:17 -0700, Noah Misch wrote:\n> On Mon, Jun 17, 2024 at 11:11:17AM -0700, Andres Freund wrote:\n> > On 2024-06-15 16:48:24 -0700, Noah Misch wrote:\n> > > On Sat, Jun 15, 2024 at 01:26:57PM -0400, Robert Haas wrote:\n> > > > The one\n> > > > thing I know about that *I* think is a pretty big problem about Perl\n> > > > is that IPC::Run is not really maintained.\n> > >\n> > > I don't see in https://github.com/cpan-authors/IPC-Run/issues anything\n> > > affecting PostgreSQL. 
If you know of IPC::Run defects, please report them.\n> > > If I knew of an IPC::Run defect affecting PostgreSQL, I likely would work on\n> > > it before absurdity like https://github.com/cpan-authors/IPC-Run/issues/175\n> > > NetBSD-10-specific behavior coping.\n> > \n> > 1) Sometimes hangs hard on windows if started processes have not been shut\n> > down before script exits. I've mostly encountered this via the buildfarm /\n> > CI, so I never had a good way of narrowing this down. It's very painful\n> > because things seem to often just get stuck once that happens.\n> \n> That's bad. Do you have a link to a log, a thread discussing it, or even just\n> one of the test names experiencing it?\n\nI'm unfortunately blanking on the right keyword right now.\n\nI think it basically required not shutting down a process started in the\nbackground with IPC::Run.\n\nI'll try to repro it by removing some ->finish or ->quit calls.\n\nThere's also a bunch of tests that have blocks like\n\n\t# some Windows Perls at least don't like IPC::Run's start/kill_kill regime.\n\tskip \"Test fails on Windows perl\", 2 if $Config{osname} eq 'MSWin32';\n\nSome of them may have been related to this.\n\n\n> > 2) If a subprocess dies in an inopportune moment, IPC::Run dies with \"ack\n> > Broken pipe:\" (in _do_filters()). There's plenty reports of this on the\n> > list, and I've hit this several times personally. It seems to be timing\n> > dependent, I've encountered it after seemingly irrelevant ordering changes.\n> > \n> > I suspect I could create a reproducer with a bit of time.\n> \n> I've seen that one. If the harness has data to write to a child, the child\n> exiting before the write is one way to reach that. Perhaps before exec(),\n> IPC::Run should do a non-blocking write from each pending IO. That way, small\n> writes never experience the timing-dependent behavior.\n\nI think the question is rather, why is ipc run choosing to die in this\nsituation and can that be fixed?\n\n\n> > 3) It's very slow on windows (in addition to the windows process\n> > slowness). That got to be fixable to some degree.\n> \n> Agreed. For the next release, today's git has some optimizations. There are\n> other known-possible Windows optimizations not implemented.\n\nYay!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2024 12:00:13 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run accepts bug reports" }, { "msg_contents": "On 2024-06-18 Tu 3:00 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2024-06-18 10:10:17 -0700, Noah Misch wrote:\n>> On Mon, Jun 17, 2024 at 11:11:17AM -0700, Andres Freund wrote:\n>>> On 2024-06-15 16:48:24 -0700, Noah Misch wrote:\n>>>> On Sat, Jun 15, 2024 at 01:26:57PM -0400, Robert Haas wrote:\n>>>>> The one\n>>>>> thing I know about that *I* think is a pretty big problem about Perl\n>>>>> is that IPC::Run is not really maintained.\n>>>> I don't see inhttps://github.com/cpan-authors/IPC-Run/issues anything\n>>>> affecting PostgreSQL. If you know of IPC::Run defects, please report them.\n>>>> If I knew of an IPC::Run defect affecting PostgreSQL, I likely would work on\n>>>> it before absurdity likehttps://github.com/cpan-authors/IPC-Run/issues/175\n>>>> NetBSD-10-specific behavior coping.\n>>> 1) Sometimes hangs hard on windows if started processes have not been shut\n>>> down before script exits. I've mostly encountered this via the buildfarm /\n>>> CI, so I never had a good way of narrowing this down. 
It's very painful\n>>> because things seem to often just get stuck once that happens.\n>> That's bad. Do you have a link to a log, a thread discussing it, or even just\n>> one of the test names experiencing it?\n> I'm unfortunately blanking on the right keyword right now.\n>\n> I think it basically required not shutting down a process started in the\n> background with IPC::Run.\n>\n> I'll try to repro it by removing some ->finish or ->quit calls.\n>\n> There's also a bunch of tests that have blocks like\n>\n> \t# some Windows Perls at least don't like IPC::Run's start/kill_kill regime.\n> \tskip \"Test fails on Windows perl\", 2 if $Config{osname} eq 'MSWin32';\n>\n> Some of them may have been related to this.\n\n\nI only found one of those, in \nsrc/test/recovery/t/006_logical_decoding.pl, which seems to be the only \nplace we use kill_kill at all. That comment dates back to 2017, so maybe \na more modern perl and/or IPC::Run will improve matters.\n\nIt's not clear to me why that code isn't calling finish() before trying \nkill_kill(). That's what the IPC::Run docs seem to suggest you should do.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-06-18 Tu 3:00 PM, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2024-06-18 10:10:17 -0700, Noah Misch wrote:\n\n\nOn Mon, Jun 17, 2024 at 11:11:17AM -0700, Andres Freund wrote:\n\n\nOn 2024-06-15 16:48:24 -0700, Noah Misch wrote:\n\n\nOn Sat, Jun 15, 2024 at 01:26:57PM -0400, Robert Haas wrote:\n\n\nThe one\nthing I know about that *I* think is a pretty big problem about Perl\nis that IPC::Run is not really maintained.\n\n\n\nI don't see in https://github.com/cpan-authors/IPC-Run/issues anything\naffecting PostgreSQL. If you know of IPC::Run defects, please report them.\nIf I knew of an IPC::Run defect affecting PostgreSQL, I likely would work on\nit before absurdity like https://github.com/cpan-authors/IPC-Run/issues/175\nNetBSD-10-specific behavior coping.\n\n\n\n1) Sometimes hangs hard on windows if started processes have not been shut\n down before script exits. I've mostly encountered this via the buildfarm /\n CI, so I never had a good way of narrowing this down. It's very painful\n because things seem to often just get stuck once that happens.\n\n\n\nThat's bad. Do you have a link to a log, a thread discussing it, or even just\none of the test names experiencing it?\n\n\n\nI'm unfortunately blanking on the right keyword right now.\n\nI think it basically required not shutting down a process started in the\nbackground with IPC::Run.\n\nI'll try to repro it by removing some ->finish or ->quit calls.\n\nThere's also a bunch of tests that have blocks like\n\n\t# some Windows Perls at least don't like IPC::Run's start/kill_kill regime.\n\tskip \"Test fails on Windows perl\", 2 if $Config{osname} eq 'MSWin32';\n\nSome of them may have been related to this.\n\n\n\nI only found one of those, in\n src/test/recovery/t/006_logical_decoding.pl, which seems to be the\n only place we use kill_kill at all. That comment dates back to\n 2017, so maybe a more modern perl and/or IPC::Run will improve\n matters.\nIt's not clear to me why that code isn't calling finish() before\n trying kill_kill(). 
That's what the IPC::Run docs seem to suggest\n you should do.\n\n\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 18 Jun 2024 16:42:04 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run accepts bug reports" }, { "msg_contents": "Hi,\n\nOn 2024-06-18 12:00:13 -0700, Andres Freund wrote:\n> On 2024-06-18 10:10:17 -0700, Noah Misch wrote:\n> > > 1) Sometimes hangs hard on windows if started processes have not been shut\n> > > down before script exits. I've mostly encountered this via the buildfarm /\n> > > CI, so I never had a good way of narrowing this down. It's very painful\n> > > because things seem to often just get stuck once that happens.\n> >\n> > That's bad. Do you have a link to a log, a thread discussing it, or even just\n> > one of the test names experiencing it?\n>\n> I'm unfortunately blanking on the right keyword right now.\n>\n> I think it basically required not shutting down a process started in the\n> background with IPC::Run.\n>\n> I'll try to repro it by removing some ->finish or ->quit calls.\n\nYep, that did it. It reliably reproduces if I comment out\nthe lines below\n # explicitly shut down psql instances gracefully - to avoid hangs\n # or worse on windows\nin 021_row_visibility.pl\n\nThe logfile ends in\nWarning: unable to close filehandle GEN25 properly: Bad file descriptor during global destruction.\nWarning: unable to close filehandle GEN20 properly: Bad file descriptor during global destruction.\n\n\nEven if I cancel the test, I can't rerun it because due to a leftover psql\na) a new temp install can't be made (could be solved by rm -rf)\nb) the test's logfile can't be removed (couldn't even rename the directory)\n\nThe psql instance needs to be found and terminated first.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2024 20:07:27 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run accepts bug reports" }, { "msg_contents": "On Tue, Jun 18, 2024 at 08:07:27PM -0700, Andres Freund wrote:\n> > > > 1) Sometimes hangs hard on windows if started processes have not been shut\n> > > > down before script exits.\n\n> It reliably reproduces if I comment out\n> the lines below\n> # explicitly shut down psql instances gracefully - to avoid hangs\n> # or worse on windows\n> in 021_row_visibility.pl\n> \n> The logfile ends in\n> Warning: unable to close filehandle GEN25 properly: Bad file descriptor during global destruction.\n> Warning: unable to close filehandle GEN20 properly: Bad file descriptor during global destruction.\n> \n> \n> Even if I cancel the test, I can't rerun it because due to a leftover psql\n> a) a new temp install can't be made (could be solved by rm -rf)\n> b) the test's logfile can't be removed (couldn't even rename the directory)\n> \n> The psql instance needs to be found and terminated first.\n\nThanks for that recipe. I've put that in my queue to fix.\n\nOn Tue, Jun 18, 2024 at 12:00:13PM -0700, Andres Freund wrote:\n> On 2024-06-18 10:10:17 -0700, Noah Misch wrote:\n> > On Mon, Jun 17, 2024 at 11:11:17AM -0700, Andres Freund wrote:\n> > > 2) If a subprocess dies in an inopportune moment, IPC::Run dies with \"ack\n> > > Broken pipe:\" (in _do_filters()). There's plenty reports of this on the\n> > > list, and I've hit this several times personally. 
It seems to be timing\n> > > dependent, I've encountered it after seemingly irrelevant ordering changes.\n> > > \n> > > I suspect I could create a reproducer with a bit of time.\n> > \n> > I've seen that one. If the harness has data to write to a child, the child\n> > exiting before the write is one way to reach that. Perhaps before exec(),\n> > IPC::Run should do a non-blocking write from each pending IO. That way, small\n> > writes never experience the timing-dependent behavior.\n> \n> I think the question is rather, why is ipc run choosing to die in this\n> situation and can that be fixed?\n\nWith default signal handling, the process would die to SIGPIPE. Since\nPostgreSQL::Test ignores SIGPIPE, this happens instead. The IPC::Run source\ntree has no discussion of ignoring SIGPIPE, so I bet it didn't get a conscious\ndecision. Perhaps it can do better.\n\n\n", "msg_date": "Wed, 19 Jun 2024 14:53:54 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IPC::Run accepts bug reports" }, { "msg_contents": "On Mon, Jun 17, 2024 at 01:56:46PM -0400, Robert Haas wrote:\n> On Sat, Jun 15, 2024 at 7:48 PM Noah Misch <[email protected]> wrote:\n> > Separating this from the pytest thread:\n> >\n> > On Sat, Jun 15, 2024 at 01:26:57PM -0400, Robert Haas wrote:\n> > > The one\n> > > thing I know about that *I* think is a pretty big problem about Perl\n> > > is that IPC::Run is not really maintained.\n> >\n> > I don't see in https://github.com/cpan-authors/IPC-Run/issues anything\n> > affecting PostgreSQL. If you know of IPC::Run defects, please report them.\n> > If I knew of an IPC::Run defect affecting PostgreSQL, I likely would work on\n> > it before absurdity like https://github.com/cpan-authors/IPC-Run/issues/175\n> > NetBSD-10-specific behavior coping.\n> \n> I'm not concerned about any specific open issue; my concern is about\n> the health of that project. https://metacpan.org/pod/IPC::Run says\n> that this module is seeking new maintainers, and it looks like the\n> people listed as current maintainers are mostly inactive. Instead,\n> you're fixing stuff. That's great, but we ideally want PostgreSQL's\n> dependencies to be things that are used widely enough that we don't\n> end up maintaining them ourselves.\n\nThat's reasonable to want.\n\n> I apologize if my comment came across as disparaging your efforts;\n> that was not my intent.\n\nIt did not come across as disparaging.\n\n\n", "msg_date": "Tue, 25 Jun 2024 16:06:28 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IPC::Run accepts bug reports" } ]
[ { "msg_contents": "While my colleague is working on translating\ndoc/src/sgml/ref/create_role.sgml into Japanese, he has found\nfollowing sentence is hard to parse:\n\n The rules for which initial\n role membership options are enabled described below in the\n <literal>IN ROLE</literal>, <literal>ROLE</literal>, and\n <literal>ADMIN</literal> clauses.\n\nMaybe we need \"are\" in front of \"described\"?\n\nAttached is the patch for that.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Sun, 16 Jun 2024 11:25:23 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "create role manual" }, { "msg_contents": "On Sat, Jun 15, 2024 at 7:25 PM Tatsuo Ishii <[email protected]> wrote:\n\n> The rules for which initial\n> role membership options are enabled described below in the\n> <literal>IN ROLE</literal>, <literal>ROLE</literal>, and\n> <literal>ADMIN</literal> clauses.\n>\n> Maybe we need \"are\" in front of \"described\"?\n>\n>\nAgreed.\n\nDavid J.\n\nOn Sat, Jun 15, 2024 at 7:25 PM Tatsuo Ishii <[email protected]> wrote:   The rules for which initial\n   role membership options are enabled described below in the\n   <literal>IN ROLE</literal>, <literal>ROLE</literal>, and\n   <literal>ADMIN</literal> clauses.\n\nMaybe we need \"are\" in front of \"described\"?Agreed.David J.", "msg_date": "Sat, 15 Jun 2024 19:37:11 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: create role manual" }, { "msg_contents": "> On Sat, Jun 15, 2024 at 7:25 PM Tatsuo Ishii <[email protected]> wrote:\r\n> \r\n>> The rules for which initial\r\n>> role membership options are enabled described below in the\r\n>> <literal>IN ROLE</literal>, <literal>ROLE</literal>, and\r\n>> <literal>ADMIN</literal> clauses.\r\n>>\r\n>> Maybe we need \"are\" in front of \"described\"?\r\n>>\r\n>>\r\n> Agreed.\r\n\r\nThank you for the confirmation. I have pushed the change to master and\r\nv16 stable branches.\r\n\r\nNote that the original report was created by Satoru Koizumi, who is\r\none of the major contributors to PostgreSQL manual translation (to\r\nJapanese) project. So I picked up his name as the author of the\r\ncommit.\r\n\r\nhttps://github.com/pgsql-jp/jpug-doc\r\n\r\nBest reagards,\r\n--\r\nTatsuo Ishii\r\nSRA OSS LLC\r\nEnglish: http://www.sraoss.co.jp/index_en/\r\nJapanese:http://www.sraoss.co.jp\r\n", "msg_date": "Sun, 16 Jun 2024 16:38:46 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: create role manual" } ]
[ { "msg_contents": "Hi,\n\nassigned patch try to solve issue reported by Mor Lehr (Missing semicolon\nin anonymous plpgsql block does not raise syntax error).\n\nhttps://www.postgresql.org/message-id/CALyvM2bp_CXMH_Gyq87pmHJRuZDEhV40u9VP8rX=CAnEt2wUXg@mail.gmail.com\n\nby introducing a new extra error check. With this check only a_expr exprs\nare allowed as plpgsql expressions. This is a small step to behaviour\ndescribed in SQL/PSM standard (although the language is different, the\nexpression syntax and features are almost similar. With this check the\nundocumented (but supported syntax)\n\nvar := column FROM tab\n\nis disallowed. Only ANSI syntax for embedded queries (inside assignment\nstatement) is allowed\n\nvar := (SELECT column FROM tab);\n\nWith this check, the reported issue (by Mor Lehr) is detected\n\ndefault setting\n\nCREATE TABLE foo3(id serial PRIMARY key, txt text);\nINSERT INTO foo3 (txt) VALUES ('aaa'),('bbb');\n\nDO $$\nDECLARE\n l_cnt int;\nBEGIN\n l_cnt := 1\n DELETE FROM foo3 WHERE id=1;\nEND; $$\n\n-- without reaction - just don't work\n\n(2024-06-16 16:05:55) postgres=# set plpgsql.extra_errors to\n'strict_expr_check';\nSET\n(2024-06-16 16:06:43) postgres=# DO $$\n\nDECLARE\n l_cnt int;\nBEGIN\n l_cnt := 1\n DELETE FROM foo3 WHERE id=1;\nEND; $$;\nERROR: syntax error at or near \"DELETE\"\nLINE 11: DELETE FROM foo3 WHERE id=1;\n ^\n\nThis patch has three parts\n\n1. Introduction strict_expr_check\n2. set strict_expr_check as default, and impact on regress tests\n3. revert @2\n\nI don't propose to be strict_expr_check active by default.\n\nComments, notes?\n\nRegards\n\nPavel", "msg_date": "Sun, 16 Jun 2024 16:11:22 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "proposal: plpgsql, new check for extra_errors - strict_expr_check" }, { "msg_contents": "Can you remove or just ignore double ; too ?\n\npostgres=# do $$\ndeclare var_x integer;\nbegin\n var_x = 99;;\n delete from x where x = var_x;\nend; $$;\nERROR: syntax error at or near \";\"\nLINE 1: do $$ declare var_x integer; begin var_x = 99;; delete from ...\n\nAtenciosamente,\n\n\n\n\nEm dom., 16 de jun. de 2024 às 11:12, Pavel Stehule <[email protected]>\nescreveu:\n\n> Hi,\n>\n> assigned patch try to solve issue reported by Mor Lehr (Missing semicolon\n> in anonymous plpgsql block does not raise syntax error).\n>\n>\n> https://www.postgresql.org/message-id/CALyvM2bp_CXMH_Gyq87pmHJRuZDEhV40u9VP8rX=CAnEt2wUXg@mail.gmail.com\n>\n> by introducing a new extra error check. With this check only a_expr exprs\n> are allowed as plpgsql expressions. This is a small step to behaviour\n> described in SQL/PSM standard (although the language is different, the\n> expression syntax and features are almost similar. With this check the\n> undocumented (but supported syntax)\n>\n> var := column FROM tab\n>\n> is disallowed. 
Only ANSI syntax for embedded queries (inside assignment\n> statement) is allowed\n>\n> var := (SELECT column FROM tab);\n>\n> With this check, the reported issue (by Mor Lehr) is detected\n>\n> default setting\n>\n> CREATE TABLE foo3(id serial PRIMARY key, txt text);\n> INSERT INTO foo3 (txt) VALUES ('aaa'),('bbb');\n>\n> DO $$\n> DECLARE\n> l_cnt int;\n> BEGIN\n> l_cnt := 1\n> DELETE FROM foo3 WHERE id=1;\n> END; $$\n>\n> -- without reaction - just don't work\n>\n> (2024-06-16 16:05:55) postgres=# set plpgsql.extra_errors to\n> 'strict_expr_check';\n> SET\n> (2024-06-16 16:06:43) postgres=# DO $$\n>\n> DECLARE\n> l_cnt int;\n> BEGIN\n> l_cnt := 1\n> DELETE FROM foo3 WHERE id=1;\n> END; $$;\n> ERROR: syntax error at or near \"DELETE\"\n> LINE 11: DELETE FROM foo3 WHERE id=1;\n> ^\n>\n> This patch has three parts\n>\n> 1. Introduction strict_expr_check\n> 2. set strict_expr_check as default, and impact on regress tests\n> 3. revert @2\n>\n> I don't propose to be strict_expr_check active by default.\n>\n> Comments, notes?\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n\nCan you remove or just ignore double ; too ?postgres=# do $$ declare var_x integer; begin   var_x = 99;;   delete from x where x = var_x; end; $$;ERROR:  syntax error at or near \";\"LINE 1: do $$ declare var_x integer; begin var_x = 99;; delete from ...Atenciosamente, Em dom., 16 de jun. de 2024 às 11:12, Pavel Stehule <[email protected]> escreveu:Hi,assigned patch try to solve issue reported by Mor Lehr (Missing semicolon in anonymous plpgsql block does not raise syntax error).https://www.postgresql.org/message-id/CALyvM2bp_CXMH_Gyq87pmHJRuZDEhV40u9VP8rX=CAnEt2wUXg@mail.gmail.comby introducing a new extra error check. With this check only a_expr exprs are allowed as plpgsql expressions. This is a small step to behaviour described in SQL/PSM standard (although the language is different, the expression syntax and features are almost similar. With this check the undocumented (but supported syntax) var := column FROM tab is disallowed. Only ANSI syntax for embedded queries (inside assignment statement) is allowedvar := (SELECT column FROM tab);With this check, the reported issue (by Mor Lehr) is detecteddefault settingCREATE TABLE foo3(id serial PRIMARY key, txt text);INSERT INTO foo3 (txt) VALUES ('aaa'),('bbb');DO $$DECLARE    l_cnt int;BEGIN    l_cnt := 1    DELETE FROM foo3 WHERE id=1;END; $$-- without reaction - just don't work(2024-06-16 16:05:55) postgres=# set plpgsql.extra_errors to 'strict_expr_check';SET(2024-06-16 16:06:43) postgres=# DO $$                                           DECLARE    l_cnt int;BEGIN    l_cnt := 1    DELETE FROM foo3 WHERE id=1;END; $$;ERROR:  syntax error at or near \"DELETE\"LINE 11:     DELETE FROM foo3 WHERE id=1;             ^This patch has three parts1. Introduction strict_expr_check2. set strict_expr_check as default, and impact on regress tests3. revert @2I don't propose to be strict_expr_check active  by default.Comments, notes?RegardsPavel", "msg_date": "Sun, 16 Jun 2024 11:22:02 -0300", "msg_from": "Marcos Pegoraro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: proposal: plpgsql, new check for extra_errors - strict_expr_check" }, { "msg_contents": "Hi\n\nne 16. 6. 
2024 v 16:22 odesílatel Marcos Pegoraro <[email protected]>\nnapsal:\n\n> Can you remove or just ignore double ; too ?\n>\n\nI don't know - it is a different issue.\n\nPLpgSQL allows zero statements inside block, so you can write BEGIN END or\nIF 1 THEN END IF but it doesn't allow empty statement\n\nlike ;;\n\nprobably it just needs one more rule in gram.y - but in this case, I am not\nsure if we should support it.\n\nWhat is the expected benefit? Generally PL/pgSQL has very strict syntax -\nand using double semicolons makes no sense.\n\n\n\n\n>\n> postgres=# do $$\n> declare var_x integer;\n> begin\n> var_x = 99;;\n> delete from x where x = var_x;\n> end; $$;\n> ERROR: syntax error at or near \";\"\n> LINE 1: do $$ declare var_x integer; begin var_x = 99;; delete from ...\n>\n> Atenciosamente,\n>\n>\n>\n>\n> Em dom., 16 de jun. de 2024 às 11:12, Pavel Stehule <\n> [email protected]> escreveu:\n>\n>> Hi,\n>>\n>> assigned patch try to solve issue reported by Mor Lehr (Missing semicolon\n>> in anonymous plpgsql block does not raise syntax error).\n>>\n>>\n>> https://www.postgresql.org/message-id/CALyvM2bp_CXMH_Gyq87pmHJRuZDEhV40u9VP8rX=CAnEt2wUXg@mail.gmail.com\n>>\n>> by introducing a new extra error check. With this check only a_expr exprs\n>> are allowed as plpgsql expressions. This is a small step to behaviour\n>> described in SQL/PSM standard (although the language is different, the\n>> expression syntax and features are almost similar. With this check the\n>> undocumented (but supported syntax)\n>>\n>> var := column FROM tab\n>>\n>> is disallowed. Only ANSI syntax for embedded queries (inside assignment\n>> statement) is allowed\n>>\n>> var := (SELECT column FROM tab);\n>>\n>> With this check, the reported issue (by Mor Lehr) is detected\n>>\n>> default setting\n>>\n>> CREATE TABLE foo3(id serial PRIMARY key, txt text);\n>> INSERT INTO foo3 (txt) VALUES ('aaa'),('bbb');\n>>\n>> DO $$\n>> DECLARE\n>> l_cnt int;\n>> BEGIN\n>> l_cnt := 1\n>> DELETE FROM foo3 WHERE id=1;\n>> END; $$\n>>\n>> -- without reaction - just don't work\n>>\n>> (2024-06-16 16:05:55) postgres=# set plpgsql.extra_errors to\n>> 'strict_expr_check';\n>> SET\n>> (2024-06-16 16:06:43) postgres=# DO $$\n>>\n>> DECLARE\n>> l_cnt int;\n>> BEGIN\n>> l_cnt := 1\n>> DELETE FROM foo3 WHERE id=1;\n>> END; $$;\n>> ERROR: syntax error at or near \"DELETE\"\n>> LINE 11: DELETE FROM foo3 WHERE id=1;\n>> ^\n>>\n>> This patch has three parts\n>>\n>> 1. Introduction strict_expr_check\n>> 2. set strict_expr_check as default, and impact on regress tests\n>> 3. revert @2\n>>\n>> I don't propose to be strict_expr_check active by default.\n>>\n>> Comments, notes?\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n\nHine 16. 6. 2024 v 16:22 odesílatel Marcos Pegoraro <[email protected]> napsal:Can you remove or just ignore double ; too ?I don't know - it is a different issue.PLpgSQL allows zero statements inside block, so you can write BEGIN END or IF 1 THEN END IF but it doesn't allow empty statementlike ;; probably it just needs one more rule in gram.y - but in this case, I am not sure if we should support it. What is the expected benefit? Generally PL/pgSQL has very strict syntax - and using double semicolons makes no sense. postgres=# do $$ declare var_x integer; begin   var_x = 99;;   delete from x where x = var_x; end; $$;ERROR:  syntax error at or near \";\"LINE 1: do $$ declare var_x integer; begin var_x = 99;; delete from ...Atenciosamente, Em dom., 16 de jun. 
de 2024 às 11:12, Pavel Stehule <[email protected]> escreveu:Hi,assigned patch try to solve issue reported by Mor Lehr (Missing semicolon in anonymous plpgsql block does not raise syntax error).https://www.postgresql.org/message-id/CALyvM2bp_CXMH_Gyq87pmHJRuZDEhV40u9VP8rX=CAnEt2wUXg@mail.gmail.comby introducing a new extra error check. With this check only a_expr exprs are allowed as plpgsql expressions. This is a small step to behaviour described in SQL/PSM standard (although the language is different, the expression syntax and features are almost similar. With this check the undocumented (but supported syntax) var := column FROM tab is disallowed. Only ANSI syntax for embedded queries (inside assignment statement) is allowedvar := (SELECT column FROM tab);With this check, the reported issue (by Mor Lehr) is detecteddefault settingCREATE TABLE foo3(id serial PRIMARY key, txt text);INSERT INTO foo3 (txt) VALUES ('aaa'),('bbb');DO $$DECLARE    l_cnt int;BEGIN    l_cnt := 1    DELETE FROM foo3 WHERE id=1;END; $$-- without reaction - just don't work(2024-06-16 16:05:55) postgres=# set plpgsql.extra_errors to 'strict_expr_check';SET(2024-06-16 16:06:43) postgres=# DO $$                                           DECLARE    l_cnt int;BEGIN    l_cnt := 1    DELETE FROM foo3 WHERE id=1;END; $$;ERROR:  syntax error at or near \"DELETE\"LINE 11:     DELETE FROM foo3 WHERE id=1;             ^This patch has three parts1. Introduction strict_expr_check2. set strict_expr_check as default, and impact on regress tests3. revert @2I don't propose to be strict_expr_check active  by default.Comments, notes?RegardsPavel", "msg_date": "Sun, 16 Jun 2024 16:36:33 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: proposal: plpgsql, new check for extra_errors - strict_expr_check" }, { "msg_contents": "Em dom., 16 de jun. de 2024 às 11:37, Pavel Stehule <[email protected]>\nescreveu:\n\n>\n> What is the expected benefit? Generally PL/pgSQL has very strict syntax -\n> and using double semicolons makes no sense.\n>\n> exactly, makes no sense. That is because it should be ignored, right ?\nBut ok, if this is a different issue, that´s fine.\n\nregards\nMarcos\n\nEm dom., 16 de jun. de 2024 às 11:37, Pavel Stehule <[email protected]> escreveu:What is the expected benefit? Generally PL/pgSQL has very strict syntax - and using double semicolons makes no sense.exactly, makes no sense. That is because it should be ignored, right ?But ok, if this is a different issue, that´s fine.regardsMarcos", "msg_date": "Sun, 16 Jun 2024 11:42:51 -0300", "msg_from": "Marcos Pegoraro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: proposal: plpgsql, new check for extra_errors - strict_expr_check" }, { "msg_contents": "ne 16. 6. 2024 v 16:43 odesílatel Marcos Pegoraro <[email protected]>\nnapsal:\n\n> Em dom., 16 de jun. de 2024 às 11:37, Pavel Stehule <\n> [email protected]> escreveu:\n>\n>>\n>> What is the expected benefit? Generally PL/pgSQL has very strict syntax -\n>> and using double semicolons makes no sense.\n>>\n>> exactly, makes no sense. That is because it should be ignored, right ?\n> But ok, if this is a different issue, that´s fine.\n>\n\nI don't follow this idea - when it does not make sense, then why do you use\nit? 
It can be a signal of some issue in your code.\n\nThe source code should not contain a code that should be ignored.\n\nBut I am not a authority - can be interesting if this is allowed in PL/SQL\nor Ada\n\nRegards\n\nPavel\n\n\n\n\n\n\n> regards\n> Marcos\n>\n\nne 16. 6. 2024 v 16:43 odesílatel Marcos Pegoraro <[email protected]> napsal:Em dom., 16 de jun. de 2024 às 11:37, Pavel Stehule <[email protected]> escreveu:What is the expected benefit? Generally PL/pgSQL has very strict syntax - and using double semicolons makes no sense.exactly, makes no sense. That is because it should be ignored, right ?But ok, if this is a different issue, that´s fine.I don't follow this idea - when it does not make sense, then why do you use it?  It can be a signal of some issue in your code.The source code should not contain a code that should be ignored.But I am not a authority - can be interesting if this is allowed in PL/SQL or AdaRegardsPavelregardsMarcos", "msg_date": "Sun, 16 Jun 2024 17:10:39 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: proposal: plpgsql, new check for extra_errors - strict_expr_check" }, { "msg_contents": "Em dom., 16 de jun. de 2024 às 12:11, Pavel Stehule <[email protected]>\nescreveu:\n\n> I don't follow this idea - when it does not make sense, then why do you\n> use it? It can be a signal of some issue in your code.\n>\n>>\nI don't use it, but sometimes it occurs, and there are lots of languages\nwhich ignore it, so it would be cool if plpgsql does it too.\n\nIf you do this, works\nset search_path to public;;;\n\nbut if you do the same inside a block, it does not.\n\nregards\nMarcos\n\nEm dom., 16 de jun. de 2024 às 12:11, Pavel Stehule <[email protected]> escreveu:I don't follow this idea - when it does not make sense, then why do you use it?  It can be a signal of some issue in your code.I don't use it, but sometimes it occurs, and there are lots of languages which ignore it, so it would be cool if plpgsql does it too. If you do this, worksset search_path to public;;;but if you do the same inside a block, it does not.regardsMarcos", "msg_date": "Sun, 16 Jun 2024 14:35:38 -0300", "msg_from": "Marcos Pegoraro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: proposal: plpgsql, new check for extra_errors - strict_expr_check" }, { "msg_contents": "ne 16. 6. 2024 v 19:36 odesílatel Marcos Pegoraro <[email protected]>\nnapsal:\n\n> Em dom., 16 de jun. de 2024 às 12:11, Pavel Stehule <\n> [email protected]> escreveu:\n>\n>> I don't follow this idea - when it does not make sense, then why do you\n>> use it? It can be a signal of some issue in your code.\n>>\n>>>\n> I don't use it, but sometimes it occurs, and there are lots of languages\n> which ignore it, so it would be cool if plpgsql does it too.\n>\n> If you do this, works\n> set search_path to public;;;\n>\n\npsql allows it, but it is a shell - not a programming language.\n\n\n>\n> but if you do the same inside a block, it does not.\n>\n\nIt is a different language. I have not too strong an opinion about it - it\nis hard to say what is the correct design when you should work with a mix\nof languages like SQL and Ada (PL/pgSQL), and when related standard SQL/PSM\nis not widely used. Personally, I don't see any nice features that allow it\nto accept dirty code. I have negative experiences when a language is\ntolerant.\n\nRegards\n\nPavel\n\n\n> regards\n> Marcos\n>\n\nne 16. 6. 2024 v 19:36 odesílatel Marcos Pegoraro <[email protected]> napsal:Em dom., 16 de jun. 
de 2024 às 12:11, Pavel Stehule <[email protected]> escreveu:I don't follow this idea - when it does not make sense, then why do you use it?  It can be a signal of some issue in your code.I don't use it, but sometimes it occurs, and there are lots of languages which ignore it, so it would be cool if plpgsql does it too. If you do this, worksset search_path to public;;;psql allows it, but it is a shell - not a programming language.  but if you do the same inside a block, it does not.It is a different language. I have not too strong an opinion about it - it is hard to say what is the correct design when you should work with a mix of languages like SQL and Ada (PL/pgSQL), and when related standard SQL/PSM is not widely used. Personally, I don't see any nice features that allow it to accept dirty code. I have negative experiences when a language is tolerant.RegardsPavelregardsMarcos", "msg_date": "Sun, 16 Jun 2024 20:42:51 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: proposal: plpgsql, new check for extra_errors - strict_expr_check" }, { "msg_contents": "ne 16. 6. 2024 v 16:11 odesílatel Pavel Stehule <[email protected]>\nnapsal:\n\n> Hi,\n>\n> assigned patch try to solve issue reported by Mor Lehr (Missing semicolon\n> in anonymous plpgsql block does not raise syntax error).\n>\n>\n> https://www.postgresql.org/message-id/CALyvM2bp_CXMH_Gyq87pmHJRuZDEhV40u9VP8rX=CAnEt2wUXg@mail.gmail.com\n>\n> by introducing a new extra error check. With this check only a_expr exprs\n> are allowed as plpgsql expressions. This is a small step to behaviour\n> described in SQL/PSM standard (although the language is different, the\n> expression syntax and features are almost similar. With this check the\n> undocumented (but supported syntax)\n>\n> var := column FROM tab\n>\n> is disallowed. Only ANSI syntax for embedded queries (inside assignment\n> statement) is allowed\n>\n> var := (SELECT column FROM tab);\n>\n> With this check, the reported issue (by Mor Lehr) is detected\n>\n> default setting\n>\n> CREATE TABLE foo3(id serial PRIMARY key, txt text);\n> INSERT INTO foo3 (txt) VALUES ('aaa'),('bbb');\n>\n> DO $$\n> DECLARE\n> l_cnt int;\n> BEGIN\n> l_cnt := 1\n> DELETE FROM foo3 WHERE id=1;\n> END; $$\n>\n> -- without reaction - just don't work\n>\n> (2024-06-16 16:05:55) postgres=# set plpgsql.extra_errors to\n> 'strict_expr_check';\n> SET\n> (2024-06-16 16:06:43) postgres=# DO $$\n>\n> DECLARE\n> l_cnt int;\n> BEGIN\n> l_cnt := 1\n> DELETE FROM foo3 WHERE id=1;\n> END; $$;\n> ERROR: syntax error at or near \"DELETE\"\n> LINE 11: DELETE FROM foo3 WHERE id=1;\n> ^\n>\n> This patch has three parts\n>\n> 1. Introduction strict_expr_check\n> 2. set strict_expr_check as default, and impact on regress tests\n> 3. revert @2\n>\n> I don't propose to be strict_expr_check active by default.\n>\n> Comments, notes?\n>\n\nfresh rebase\n\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>", "msg_date": "Thu, 5 Sep 2024 17:24:24 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: proposal: plpgsql, new check for extra_errors - strict_expr_check" } ]
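As background for why the unpatched server accepts the block from the original report without deleting anything: PL/pgSQL reads everything up to the next semicolon as one expression, so the assignment and the following line are parsed together, roughly as the query below (a sketch, reusing the foo3 table from the example above). l_cnt simply ends up as 1 and the DELETE never runs, which is exactly the silent failure strict_expr_check is meant to reject.

    -- Roughly what "l_cnt := 1 <newline> DELETE FROM foo3 WHERE id=1;" evaluates as,
    -- apparently because DELETE is accepted here as a bare column label:
    SELECT 1 AS "delete" FROM foo3 WHERE id = 1;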
[ { "msg_contents": "Hi,\n\nWhen connecting with a libpq based client, the TLS establishment ends up like\nthis in many configurations;\n\nC->S: TLSv1 393 Client Hello\nS->C: TLSv1.3 167 Hello Retry Request, Change Cipher Spec\nC->S: TLSv1.3 432 Change Cipher Spec, Client Hello\nS->C: TLSv1.3 1407 Server Hello, Application Data, Application Data, Application Data, Application Data\n...\n\nI.e. there are two clients hellos, because the server rejects the clients\n\"parameters\".\n\n\nThis appears to be caused by ECDH support. The difference between the two\nClientHellos is\n- Extension: key_share (len=38) x25519\n+ Extension: key_share (len=71) secp256r1\n\nI.e. the clients wanted to use x25519, but the server insists on secp256r1.\n\n\nThis turns out to be due to\n\ncommit 3164721462d547fa2d15e2a2f07eb086a3590fd5\nAuthor: Peter Eisentraut <[email protected]>\nDate: 2013-12-07 15:11:44 -0500\n\n SSL: Support ECDH key exchange\n\n\n\nI don't know if it's good that we're calling SSL_CTX_set_tmp_ecdh at all, but\nif it is, shouldn't we at least do the same in libpq, so we don't introduce\nunnecessary roundtrips?\n\nI did confirm that doing the same thing on the client side removes the\nadditional roundtrip.\n\n\n\nIt seems kind of a shame that we have fewer roundtrips due to\nsslnegotiation=direct, but do completely unnecessary roundtrips all the\ntime...\n\n\nIn a network with ~10ms latency I see an almost 30% increased\nconnections-per-second for a single client if I avoid the added roundtrip.\n\nI think this could almost be considered a small bug...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Jun 2024 16:46:12 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "ecdh support causes unnecessary roundtrips" }, { "msg_contents": "> On 17 Jun 2024, at 01:46, Andres Freund <[email protected]> wrote:\n\n> When connecting with a libpq based client, the TLS establishment ends up like\n> this in many configurations;\n> \n> C->S: TLSv1 393 Client Hello\n> S->C: TLSv1.3 167 Hello Retry Request, Change Cipher Spec\n> C->S: TLSv1.3 432 Change Cipher Spec, Client Hello\n> S->C: TLSv1.3 1407 Server Hello, Application Data, Application Data, Application Data, Application Data\n\nI wonder if this is the result of us still using TLS 1.2 as the default minimum\nprotocol version. In 1.2 the ClientHello doesn't contain the extensions for\ncipher and curves, so the server must reply with a HRR (Hello Retry Request)\nmessage asking the client to set protocol, curve and cipher. The client will\nrespond with a new Client Hello using 1.3 with the extensions.\n\nUsing 1.3 as the default minimum has been on my personal TODO for a while,\nmaybe that should be revisited for 18.\n\n> I.e. there are two clients hellos, because the server rejects the clients\n> \"parameters\".\n\nOr the client didn't send any.\n\n> This appears to be caused by ECDH support. The difference between the two\n> ClientHellos is\n> - Extension: key_share (len=38) x25519\n> + Extension: key_share (len=71) secp256r1\n> \n> I.e. 
the clients wanted to use x25519, but the server insists on secp256r1.\n\nSomewhat related, Erica Zhang has an open patch to make the server-side curves\nconfiguration take a list rather than a single curve [0], and modernizing the\nAPI used as a side effect (SSL_CTX_set_tmp_ecdh is documented as obsoleted by\nOpenSSL, but not deprecated with an API level).\n\n> I don't know if it's good that we're calling SSL_CTX_set_tmp_ecdh at all,\n\nTo set the specified curve in ssl_ecdh_curve we have to don't we?\n\n> but if it is, shouldn't we at least do the same in libpq, so we don't introduce\n> unnecessary roundtrips?\n\nIf we don't set the curve in the client I believe OpenSSL will pass the set of\nsupported curves the client has, which then should allow the server to pick the\none it wants based on ssl_ecdh_curve, so ideally we shouldn't have to I think.\n\n> I did confirm that doing the same thing on the client side removes the\n> additional roundtrip.\n\nThe roundtrip went away because the client was set to use secp256r1? I wonder\nif that made OpenSSL override the min protocol version and switch to a TLS1.3\nClientHello since it otherwise couldn't announce the curve. If you force the\nclient min protocol to 1.3 in an unpatched client, do you see the same speedup?\n\n--\nDaniel Gustafsson\n\n[0] [email protected]\n\n", "msg_date": "Mon, 17 Jun 2024 12:00:30 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ecdh support causes unnecessary roundtrips" }, { "msg_contents": "Hi,\n\nOn 2024-06-17 12:00:30 +0200, Daniel Gustafsson wrote:\n> > On 17 Jun 2024, at 01:46, Andres Freund <[email protected]> wrote:\n>\n> > When connecting with a libpq based client, the TLS establishment ends up like\n> > this in many configurations;\n> >\n> > C->S: TLSv1 393 Client Hello\n> > S->C: TLSv1.3 167 Hello Retry Request, Change Cipher Spec\n> > C->S: TLSv1.3 432 Change Cipher Spec, Client Hello\n> > S->C: TLSv1.3 1407 Server Hello, Application Data, Application Data, Application Data, Application Data\n>\n> I wonder if this is the result of us still using TLS 1.2 as the default minimum\n> protocol version. In 1.2 the ClientHello doesn't contain the extensions for\n> cipher and curves, so the server must reply with a HRR (Hello Retry Request)\n> message asking the client to set protocol, curve and cipher. The client will\n> respond with a new Client Hello using 1.3 with the extensions.\n\nI'm pretty sure it's not that, given that\na) removing the server side SSL_CTX_set_tmp_ecdh() call\nb) adding a client side SSL_CTX_set_tmp_ecdh() call\navoid the roundtrip.\n\nI've now also confirmed that ssl_min_protocol_version=TLSv1.3 doesn't change\nanything (relevant, of course the indicated version support changes).\n\n\n> > This appears to be caused by ECDH support. The difference between the two\n> > ClientHellos is\n> > - Extension: key_share (len=38) x25519\n> > + Extension: key_share (len=71) secp256r1\n> >\n> > I.e. the clients wanted to use x25519, but the server insists on secp256r1.\n>\n> Somewhat related, Erica Zhang has an open patch to make the server-side curves\n> configuration take a list rather than a single curve [0], and modernizing the\n> API used as a side effect (SSL_CTX_set_tmp_ecdh is documented as obsoleted by\n> OpenSSL, but not deprecated with an API level).\n\nThat does seem nicer. 
Fun coincidence in timing.\n\n\n> > I don't know if it's good that we're calling SSL_CTX_set_tmp_ecdh at all,\n>\n> To set the specified curve in ssl_ecdh_curve we have to don't we?\n\nSure, but it's not obvious to me why we actually want to override openssl's\ndefaults here. There's not even a parameter to opt out of forcing a specific\nchoice on the server side.\n\n\n> > but if it is, shouldn't we at least do the same in libpq, so we don't introduce\n> > unnecessary roundtrips?\n>\n> If we don't set the curve in the client I believe OpenSSL will pass the set of\n> supported curves the client has, which then should allow the server to pick the\n> one it wants based on ssl_ecdh_curve, so ideally we shouldn't have to I think.\n\nAfaict the client sends exactly one:\n\nTransport Layer Security\n TLSv1.3 Record Layer: Handshake Protocol: Client Hello\n Content Type: Handshake (22)\n Version: TLS 1.0 (0x0301)\n Length: 320\n Handshake Protocol: Client Hello\n...\n Extension: supported_versions (len=5) TLS 1.3, TLS 1.2\n Type: supported_versions (43)\n Length: 5\n Supported Versions length: 4\n Supported Version: TLS 1.3 (0x0304)\n Supported Version: TLS 1.2 (0x0303)\n Extension: psk_key_exchange_modes (len=2)\n Type: psk_key_exchange_modes (45)\n Length: 2\n PSK Key Exchange Modes Length: 1\n PSK Key Exchange Mode: PSK with (EC)DHE key establishment (psk_dhe_ke) (1)\n Extension: key_share (len=38) x25519\n Type: key_share (51)\n Length: 38\n Key Share extension\n Extension: compress_certificate (len=5)\n Type: compress_certificate (27)\n...\n\nNote key_share being set to x25519.\n\n\nThe HRR says:\n\nExtension: key_share (len=2) secp256r1\n Type: key_share (51)\n Length: 2\n Key Share extension\n Selected Group: secp256r1 (23)\n\n\n> > I did confirm that doing the same thing on the client side removes the\n> > additional roundtrip.\n>\n> The roundtrip went away because the client was set to use secp256r1?\n\nYes. Or if I change the server to not set the ecdh curve.\n\n\n> I wonder if that made OpenSSL override the min protocol version and switch\n> to a TLS1.3 ClientHello since it otherwise couldn't announce the curve.\n\nThe client seems to announce the curve in the initial ClientHello even with\n1.3 as the minimum version.\n\nWhat *does* make the HRR go away is setting ssl_max_protocol_version=TLSv1.2\non the client side.\n\n\nhttps://wiki.openssl.org/index.php/TLS1.3 says:\n\n> In practice most clients will use X25519 or P-256 for their initial\n> key_share. For maximum performance it is recommended that servers are\n> configured to support at least those two groups and clients use one of those\n> two for its initial key_share. This is the default case (OpenSSL clients\n> will use X25519).\n\nWe're not allowing both groups and the client defaults to X25519, hence\nthe HRR.\n\n\n> If you force the client min protocol to 1.3 in an unpatched client, do you\n> see the same speedup?\n\nNope.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 10:01:33 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ecdh support causes unnecessary roundtrips" }, { "msg_contents": "On Mon, Jun 17, 2024 at 10:01 AM Andres Freund <[email protected]> wrote:\n> On 2024-06-17 12:00:30 +0200, Daniel Gustafsson wrote:\n> > To set the specified curve in ssl_ecdh_curve we have to don't we?\n>\n> Sure, but it's not obvious to me why we actually want to override openssl's\n> defaults here. 
There's not even a parameter to opt out of forcing a specific\n> choice on the server side.\n\nI had exactly the same question in the context of the other thread, and found\n\n https://www.openssl.org/blog/blog/2022/10/21/tls-groups-configuration/index.html\n\nMy initial takeaway was that our default is more restrictive than it\nshould be, but the OpenSSL default is more permissive than what they\nrecommend in practice, due to denial of service concerns:\n\n> A general recommendation is to limit the groups to those that meet the\n> required security level and that all the potential TLS clients support.\n\n--Jacob\n\n\n", "msg_date": "Mon, 17 Jun 2024 10:19:23 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ecdh support causes unnecessary roundtrips" }, { "msg_contents": "> On 17 Jun 2024, at 19:01, Andres Freund <[email protected]> wrote:\n> On 2024-06-17 12:00:30 +0200, Daniel Gustafsson wrote:\n>>> On 17 Jun 2024, at 01:46, Andres Freund <[email protected]> wrote:\n\n>>> I don't know if it's good that we're calling SSL_CTX_set_tmp_ecdh at all,\n>> \n>> To set the specified curve in ssl_ecdh_curve we have to don't we?\n> \n> Sure, but it's not obvious to me why we actually want to override openssl's\n> defaults here. There's not even a parameter to opt out of forcing a specific\n> choice on the server side.\n\nI agree that the GUC is a bit rough around the edges, maybe leavint it blank or\nsomething should be defined as \"OpenSSL defaults\". Let's bring that to Erica's\npatch for allowing a list of curves.\n\n>>> I did confirm that doing the same thing on the client side removes the\n>>> additional roundtrip.\n>> \n>> The roundtrip went away because the client was set to use secp256r1?\n> \n> Yes. Or if I change the server to not set the ecdh curve.\n\nConfiguring the server to use x25519 instead of secp256r1 should achieve the\nsame thing.\n\n>> I wonder if that made OpenSSL override the min protocol version and switch\n>> to a TLS1.3 ClientHello since it otherwise couldn't announce the curve.\n> \n> The client seems to announce the curve in the initial ClientHello even with\n> 1.3 as the minimum version.\n\nWith 1.3 it should announce it in ClientHello, do you mean that it's announced\nwhen 1.2 is the minimum version as well? It does make sense since a 1.2 server\nis defined to disregard all extensions.\n\n> What *does* make the HRR go away is setting ssl_max_protocol_version=TLSv1.2\n> on the client side.\n\nMakes sense, that would remove the curve and there is no change required.\n\n> https://wiki.openssl.org/index.php/TLS1.3 says:\n> \n>> In practice most clients will use X25519 or P-256 for their initial\n>> key_share. For maximum performance it is recommended that servers are\n>> configured to support at least those two groups and clients use one of those\n>> two for its initial key_share. 
This is the default case (OpenSSL clients\n>> will use X25519).\n> \n> We're not allowing both groups and the client defaults to X25519, hence\n> the HRR.\n\nSo this would be solved by the curve-list patch referenced above, especially if\nallow it to have an opt-out to use OpenSSL defaults.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 19:29:47 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ecdh support causes unnecessary roundtrips" }, { "msg_contents": "Hi,\n\nOn 2024-06-17 19:29:47 +0200, Daniel Gustafsson wrote:\n> >> I wonder if that made OpenSSL override the min protocol version and switch\n> >> to a TLS1.3 ClientHello since it otherwise couldn't announce the curve.\n> >\n> > The client seems to announce the curve in the initial ClientHello even with\n> > 1.3 as the minimum version.\n>\n> With 1.3 it should announce it in ClientHello, do you mean that it's announced\n> when 1.2 is the minimum version as well? It does make sense since a 1.2 server\n> is defined to disregard all extensions.\n\nYes, it's announced even when 1.2 is the minimum:\n\n Extension: supported_versions (len=5) TLS 1.3, TLS 1.2\n Type: supported_versions (43)\n Length: 5\n Supported Versions length: 4\n Supported Version: TLS 1.3 (0x0304)\n Supported Version: TLS 1.2 (0x0303)\n...\n Extension: key_share (len=38) x25519\n Type: key_share (51)\n Length: 38\n Key Share extension\n\n\n\n> Let's bring that to Erica's patch for allowing a list of curves.\n\nI'm kinda wondering if we ought to do something about this in the\nbackbranches. Forcing unnecessary roundtrips onto everyone for the next five\nyears due to an oversight on our part isn't great. Once you're not local, the\nroundtrip does measurably increase the \"time to first query\".\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 10:44:22 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ecdh support causes unnecessary roundtrips" }, { "msg_contents": "> On 17 Jun 2024, at 19:44, Andres Freund <[email protected]> wrote:\n\n>> Let's bring that to Erica's patch for allowing a list of curves.\n> \n> I'm kinda wondering if we ought to do something about this in the\n> backbranches. Forcing unnecessary roundtrips onto everyone for the next five\n> years due to an oversight on our part isn't great. Once you're not local, the\n> roundtrip does measurably increase the \"time to first query\".\n\nI don't disagree, but wouldn't it be the type of behavioural change which we\ntypically try to avoid in backbranches? Changing the default of the ecdh GUC\nwould perhaps be doable? (assuming that's a working solution to avoid the\nroundtrip). Amending the documentation is the one thing we certainly can do\nbut 99.9% of affected users won't know they are affected so won't look for that\nsection.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 19:51:45 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ecdh support causes unnecessary roundtrips" }, { "msg_contents": "Hi,\n\nOn 2024-06-17 19:51:45 +0200, Daniel Gustafsson wrote:\n> > On 17 Jun 2024, at 19:44, Andres Freund <[email protected]> wrote:\n> \n> >> Let's bring that to Erica's patch for allowing a list of curves.\n> > \n> > I'm kinda wondering if we ought to do something about this in the\n> > backbranches. 
Forcing unnecessary roundtrips onto everyone for the next five\n> > years due to an oversight on our part isn't great. Once you're not local, the\n> > roundtrip does measurably increase the \"time to first query\".\n> \n> I don't disagree, but wouldn't it be the type of behavioural change which we\n> typically try to avoid in backbranches?\n\nYea, it's not great. Not sure what the right thing is here.\n\n\n> Changing the default of the ecdh GUC would perhaps be doable?\n\nI was wondering whether we could change the default so that it accepts both\nx25519 and secp256r1. Unfortunately that seems to requires changing what we\nuse to set the parameter...\n\n\n> (assuming that's a working solution to avoid the roundtrip).\n\nIt is.\n\n\n> Amending the documentation is the one thing we certainly can do but 99.9% of\n> affected users won't know they are affected so won't look for that section.\n\nYea. It's also possible that some other bindings changed their default to\nmatch ours...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 10:56:26 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ecdh support causes unnecessary roundtrips" }, { "msg_contents": "> On 17 Jun 2024, at 19:56, Andres Freund <[email protected]> wrote:\n> On 2024-06-17 19:51:45 +0200, Daniel Gustafsson wrote:\n\n>> Changing the default of the ecdh GUC would perhaps be doable?\n> \n> I was wondering whether we could change the default so that it accepts both\n> x25519 and secp256r1. Unfortunately that seems to requires changing what we\n> use to set the parameter...\n\nRight. The patch in https://commitfest.postgresql.org/48/5025/ does allow for\naccepting both but that's a different discussion.\n\nChanging, and backpatching, the default to at least keep new installations from\nextra roundtrips doesn't seem that far off in terms of scope from what\n860fe27ee1e2 backpatched. Maybe it can be an option.\n\n>> Amending the documentation is the one thing we certainly can do but 99.9% of\n>> affected users won't know they are affected so won't look for that section.\n> \n> Yea. It's also possible that some other bindings changed their default to\n> match ours...\n\nThere is that possibility, though I think we would've heard something about\nthat by now if that had happened.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 30 Jul 2024 00:25:59 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ecdh support causes unnecessary roundtrips" } ]
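To make the fix discussed in the thread above concrete, here is a minimal sketch (not libpq or backend source; the helper name is invented) of how either side could be pointed at a list of key-exchange groups with the newer OpenSSL groups API, assuming OpenSSL 1.1.1 or later. On the client, listing the server's configured curve first keeps the initial TLS 1.3 ClientHello key_share acceptable, so no HelloRetryRequest round trip is needed; on the server, accepting several groups (e.g. "X25519:prime256v1") achieves the same thing without touching clients.

    #include <openssl/ssl.h>

    /*
     * Hypothetical helper: apply a colon-separated group list such as
     * "X25519:prime256v1" to an SSL_CTX.  OpenSSL sends a key_share for the
     * first supported entry in the list, so putting the peer's preferred
     * group first avoids the extra HelloRetryRequest round trip.
     */
    static int
    set_tls_groups(SSL_CTX *ctx, const char *group_list)
    {
        if (SSL_CTX_set1_groups_list(ctx, group_list) != 1)
            return 0;           /* unknown or unsupported group name */
        return 1;
    }

SSL_CTX_set1_groups_list() belongs to the groups API that the thread mentions as superseding the obsoleted SSL_CTX_set_tmp_ecdh(); whether PostgreSQL should expose a list-valued setting built on it is exactly what the referenced curve-list patch proposes.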
[ { "msg_contents": "hi.\nsimlar to https://www.postgresql.org/docs/devel//xfunc-c.html#XFUNC-ADDIN-WAIT-EVENTS\n\n\nin https://www.postgresql.org/docs/devel//xfunc-c.html#XFUNC-ADDIN-INJECTION-POINTS\ndo we need to add a sentence like \" An injections points usage example\ncan be found in src/test/modules/injection_points in the PostgreSQL\nsource tree.\"\nbefore \"Enabling injections points requires --enable-injection-points\nwith configure or -Dinjection_points=true with Meson.\"\n?\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:24:09 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "minor doc change in src/sgml/xfunc.sgml" }, { "msg_contents": "On Mon, Jun 17, 2024 at 11:24:09AM +0800, jian he wrote:\n> in https://www.postgresql.org/docs/devel//xfunc-c.html#XFUNC-ADDIN-INJECTION-POINTS\n> do we need to add a sentence like \" An injections points usage example\n> can be found in src/test/modules/injection_points in the PostgreSQL\n> source tree.\"\n> before \"Enabling injections points requires --enable-injection-points\n> with configure or -Dinjection_points=true with Meson.\"\n> ?\n\nIndeed. I intended originally to add a pointer to the module in the\ndocs, but it somewhat got lost in the rebases of the patch. Will fix,\nthanks for the report!\n--\nMichael", "msg_date": "Mon, 17 Jun 2024 13:37:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: minor doc change in src/sgml/xfunc.sgml" } ]
[ { "msg_contents": "Hi PostgreSQL hackers,\r\n\r\nFor most access methods in PostgreSQL, the implementation of the access method itself and the implementation of its WAL replay logic are organized in separate source files. However, the HEAP access method is an exception. Both the access method and the WAL replay logic are collocated in the same heapam.c. To follow the pattern established by other access methods and to improve maintainability, I made the enclosed patch to separate HEAP’s replay logic into its own file. The changes are straightforward. Move the replay related functions into the new heapam_xlog.c file, push the common heap_execute_freeze_tuple() helper function into the heapam.h header, and adjust the build files.\r\n\r\nI hope people find this straightforward refactoring helpful.\r\n\r\n\r\nYong", "msg_date": "Mon, 17 Jun 2024 06:20:22 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "On Mon, Jun 17, 2024 at 2:20 AM Li, Yong <[email protected]> wrote:\n>\n> Hi PostgreSQL hackers,\n>\n> For most access methods in PostgreSQL, the implementation of the access method itself and the implementation of its WAL replay logic are organized in separate source files. However, the HEAP access method is an exception. Both the access method and the WAL replay logic are collocated in the same heapam.c. To follow the pattern established by other access methods and to improve maintainability, I made the enclosed patch to separate HEAP’s replay logic into its own file. The changes are straightforward. Move the replay related functions into the new heapam_xlog.c file, push the common heap_execute_freeze_tuple() helper function into the heapam.h header, and adjust the build files.\n\nI'm not against this change, but I am curious at what inspired this.\nWere you looking at Postgres code and simply noticed that there isn't\na heapam_xlog.c (like there is a nbtxlog.c etc) and thought that you\nwanted to change that? Or is there some specific reason this would\nhelp you as a Postgres developer, user, or ecosystem member?\n\n- Melanie\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:01:08 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "\r\n\r\n> On Jun 17, 2024, at 23:01, Melanie Plageman <[email protected]> wrote:\r\n> \r\n> External Email\r\n> \r\n> On Mon, Jun 17, 2024 at 2:20 AM Li, Yong <[email protected]> wrote:\r\n>> \r\n>> Hi PostgreSQL hackers,\r\n>> \r\n>> For most access methods in PostgreSQL, the implementation of the access method itself and the implementation of its WAL replay logic are organized in separate source files. However, the HEAP access method is an exception. Both the access method and the WAL replay logic are collocated in the same heapam.c. To follow the pattern established by other access methods and to improve maintainability, I made the enclosed patch to separate HEAP’s replay logic into its own file. The changes are straightforward. 
Move the replay related functions into the new heapam_xlog.c file, push the common heap_execute_freeze_tuple() helper function into the heapam.h header, and adjust the build files.\r\n> \r\n> I'm not against this change, but I am curious at what inspired this.\r\n> Were you looking at Postgres code and simply noticed that there isn't\r\n> a heapam_xlog.c (like there is a nbtxlog.c etc) and thought that you\r\n> wanted to change that? Or is there some specific reason this would\r\n> help you as a Postgres developer, user, or ecosystem member?\r\n> \r\n> - Melanie\r\n\r\nAs a newcomer, when I was walking through the code looking for WAL replay related code, it was relatively easy for me to find them for the B-Tree access method because of the “xlog” hint in the file names. It took me a while to find the same for the heap access method. When I finally found them (via text search), it was a small surprise. Having different file organizations for different access methods gives me this urge to make everything consistent. I think it will make it easier for newcomers, and it will reduce the mental load for everyone to remember that heap replay is inside the heapam.c not some “???xlog.c”.\r\n\r\nYong", "msg_date": "Tue, 18 Jun 2024 01:12:42 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "On Mon, Jun 17, 2024 at 9:12 PM Li, Yong <[email protected]> wrote:\n>\n> As a newcomer, when I was walking through the code looking for WAL replay related code, it was relatively easy for me to find them for the B-Tree access method because of the “xlog” hint in the file names. It took me a while to find the same for the heap access method. When I finally found them (via text search), it was a small surprise. Having different file organizations for different access methods gives me this urge to make everything consistent. I think it will make it easier for newcomers, and it will reduce the mental load for everyone to remember that heap replay is inside the heapam.c not some “???xlog.c”.\n\nThat makes sense. The branch for PG18 has not been cut yet, so I\nrecommend registering this patch for the July commitfest [1] so it\ndoesn't get lost.\n\n- Melanie\n\n[1] https://commitfest.postgresql.org/48/\n\n\n", "msg_date": "Tue, 18 Jun 2024 08:42:39 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "\r\n\r\n> On Jun 18, 2024, at 20:42, Melanie Plageman <[email protected]> wrote:\r\n> \r\n> External Email\r\n> \r\n> On Mon, Jun 17, 2024 at 9:12 PM Li, Yong <[email protected]> wrote:\r\n>> \r\n>> As a newcomer, when I was walking through the code looking for WAL replay related code, it was relatively easy for me to find them for the B-Tree access method because of the “xlog” hint in the file names. It took me a while to find the same for the heap access method. When I finally found them (via text search), it was a small surprise. Having different file organizations for different access methods gives me this urge to make everything consistent. I think it will make it easier for newcomers, and it will reduce the mental load for everyone to remember that heap replay is inside the heapam.c not some “???xlog.c”.\r\n> \r\n> That makes sense. 
The branch for PG18 has not been cut yet, so I\r\n> recommend registering this patch for the July commitfest [1] so it\r\n> doesn't get lost.\r\n> \r\n> - Melanie\r\n> \r\n\r\nThanks for the positive feedback. I’ve added the patch to the July CF.\r\n\r\nYong\r\n\r\n", "msg_date": "Wed, 19 Jun 2024 06:44:58 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "Hi,\n\nI'm reviewing patches in commitfest 2024-07:\nhttps://commitfest.postgresql.org/48/\n\nThis is the 5th patch:\nhttps://commitfest.postgresql.org/48/5054/\n\nFYI: https://commitfest.postgresql.org/48/4681/ is my patch.\n\nIn <[email protected]>\n \"Re: Separate HEAP WAL replay logic into its own file\" on Tue, 18 Jun 2024 01:12:42 +0000,\n \"Li, Yong\" <[email protected]> wrote:\n\n> As a newcomer, when I was walking through the code looking\n> for WAL replay related code, it was relatively easy for me\n> to find them for the B-Tree access method because of the\n> “xlog” hint in the file names. It took me a while to\n> find the same for the heap access method. When I finally\n> found them (via text search), it was a small\n> surprise. Having different file organizations for\n> different access methods gives me this urge to make\n> everything consistent. I think it will make it easier for\n> newcomers, and it will reduce the mental load for everyone\n> to remember that heap replay is inside the heapam.c not\n> some “???xlog.c”.\n\nIt makes sense.\n\n\nHere are my comments for your patch:\n\n1. Could you create your patch by \"git format-patch -vN master\"\n or something? If you create your patch by \"git format-patch\",\n we can apply your patch by \"git am XXX.patch\".\n\n2. I confirmed that all heapam.c -> heapam_xlog.c/heapam.h\n moves don't change implementations. I re-moved moved\n codes to heapam.c and there is no diff in heapam.c.\n\n3. 
Could you remove '#include \"access/heapam_xlog.h\"' from\n heapam.c because it's needless now.\n\n BTW, it seems that we can remove more includes from\n heapam.c:\n\n----\ndiff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\nindex bc6d4868975..f1671072576 100644\n--- a/src/backend/access/heap/heapam.c\n+++ b/src/backend/access/heap/heapam.c\n@@ -31,42 +31,24 @@\n */\n #include \"postgres.h\"\n \n-#include \"access/bufmask.h\"\n #include \"access/heapam.h\"\n-#include \"access/heapam_xlog.h\"\n #include \"access/heaptoast.h\"\n #include \"access/hio.h\"\n #include \"access/multixact.h\"\n-#include \"access/parallel.h\"\n-#include \"access/relscan.h\"\n #include \"access/subtrans.h\"\n #include \"access/syncscan.h\"\n-#include \"access/sysattr.h\"\n-#include \"access/tableam.h\"\n-#include \"access/transam.h\"\n #include \"access/valid.h\"\n #include \"access/visibilitymap.h\"\n-#include \"access/xact.h\"\n-#include \"access/xlog.h\"\n #include \"access/xloginsert.h\"\n-#include \"access/xlogutils.h\"\n-#include \"catalog/catalog.h\"\n #include \"commands/vacuum.h\"\n-#include \"miscadmin.h\"\n #include \"pgstat.h\"\n-#include \"port/atomics.h\"\n #include \"port/pg_bitutils.h\"\n-#include \"storage/bufmgr.h\"\n-#include \"storage/freespace.h\"\n #include \"storage/lmgr.h\"\n #include \"storage/predicate.h\"\n #include \"storage/procarray.h\"\n-#include \"storage/standby.h\"\n #include \"utils/datum.h\"\n #include \"utils/injection_point.h\"\n #include \"utils/inval.h\"\n-#include \"utils/relcache.h\"\n-#include \"utils/snapmgr.h\"\n #include \"utils/spccache.h\"\n---\n\n We may want to work on removing needless includes as a\n separated cleanup task.\n\n4. Could you remove needless includes from heapam_xlog.c? It\n seems that we can remove the following includes:\n\n----\ndiff --git a/src/backend/access/heap/heapam_xlog.c b/src/backend/access/heap/heapam_xlog.c\nindex b372f2b4afc..af4976f382d 100644\n--- a/src/backend/access/heap/heapam_xlog.c\n+++ b/src/backend/access/heap/heapam_xlog.c\n@@ -16,16 +16,11 @@\n \n #include \"access/bufmask.h\"\n #include \"access/heapam.h\"\n-#include \"access/heapam_xlog.h\"\n-#include \"access/transam.h\"\n #include \"access/visibilitymap.h\"\n #include \"access/xlog.h\"\n #include \"access/xlogutils.h\"\n-#include \"port/atomics.h\"\n-#include \"storage/bufmgr.h\"\n #include \"storage/freespace.h\"\n #include \"storage/standby.h\"\n-#include \"utils/relcache.h\"\n----\n\n5. There are still WAL related codes in heapam.c:\n\n 4.1. log_heap_update()\n 4.2. log_heap_new_cid()\n 4.3. if (RelationNeedsWAL()) {...} in heap_insert()\n 4.4. if (needwal) {...} in heap_multi_insert()\n 4.5. if (RelationNeedsWAL()) {...} in heap_delete()\n 4.6. if (RelationNeedsWAL()) {...}s in heap_update()\n 4.7. if (RelationNeedsWAL()) {...} in heap_lock_tuple()\n 4.8. if (RelationNeedsWAL()) {...} in heap_lock_updated_tuple_rec()\n 4.9. if (RelationNeedsWAL()) {...} in heap_finish_speculative()\n 4.10. if (RelationNeedsWAL()) {...} in heap_abort_speculative()\n 4.11. if (RelationNeedsWAL()) {...} in heap_inplace_update()\n 4.12. log_heap_visible()\n\n Should we move them to head_xlog.c too?\n\n If we should do it, separated commits will be easy to\n review. For example, the 0001 patch moves existing codes\n to head_xlog.c as-is. 
The 0002 patch extracts WAL related\n codes in heap_insert() to heap_xlog.c and so on.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Tue, 23 Jul 2024 10:54:55 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "> On Jul 23, 2024, at 09:54, Sutou Kouhei <[email protected]> wrote:\r\n>\r\n>\r\n> Here are my comments for your patch:\r\n>\r\n> 1. Could you create your patch by \"git format-patch -vN master\"\r\n> or something? If you create your patch by \"git format-patch\",\r\n> we can apply your patch by \"git am XXX.patch\".\r\n>\r\n\r\nThanks for your review. I’ve updated the patch to follow your\r\nsuggested format.\r\n\r\n>\r\n> 3. Could you remove '#include \"access/heapam_xlog.h\"' from\r\n> heapam.c because it's needless now.\r\n>\r\n> BTW, it seems that we can remove more includes from\r\n> heapam.c:\r\n>\r\n> 4. Could you remove needless includes from heapam_xlog.c? It\r\n> seems that we can remove the following includes:\r\n\r\nI have removed the redundant includes in the latest patch.\r\n\r\n>\r\n> 5. There are still WAL related codes in heapam.c:\r\n>\r\n> 4.1. log_heap_update()\r\n> 4.2. log_heap_new_cid()\r\n> 4.3. if (RelationNeedsWAL()) {...} in heap_insert()\r\n> 4.4. if (needwal) {...} in heap_multi_insert()\r\n> 4.5. if (RelationNeedsWAL()) {...} in heap_delete()\r\n> 4.6. if (RelationNeedsWAL()) {...}s in heap_update()\r\n> 4.7. if (RelationNeedsWAL()) {...} in heap_lock_tuple()\r\n> 4.8. if (RelationNeedsWAL()) {...} in heap_lock_updated_tuple_rec()\r\n> 4.9. if (RelationNeedsWAL()) {...} in heap_finish_speculative()\r\n> 4.10. if (RelationNeedsWAL()) {...} in heap_abort_speculative()\r\n> 4.11. if (RelationNeedsWAL()) {...} in heap_inplace_update()\r\n> 4.12. log_heap_visible()\r\n>\r\n> Should we move them to head_xlog.c too?\r\n>\r\n> If we should do it, separated commits will be easy to\r\n> review. For example, the 0001 patch moves existing codes\r\n> to head_xlog.c as-is. The 0002 patch extracts WAL related\r\n> codes in heap_insert() to heap_xlog.c and so on.\r\n>\r\n>\r\n> Thanks,\r\n> --\r\n> kou\r\n\r\nI followed the convention of most access methods. The “xlog”\r\nfile includes the WAL replay logic only. The logic that generates\r\nthe log records themselves stays with the code that performs\r\nthe changes. Take nbtree as an example, you can also find\r\nWAL generating code in several _bt_insertxxx() functions inside\r\nthe nbtinsert.c file.\r\n\r\nPlease help review the updated file again. Thanks in advance!\r\n\r\n\r\nYong", "msg_date": "Fri, 26 Jul 2024 07:56:12 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Separate HEAP WAL replay logic into its own file\" on Fri, 26 Jul 2024 07:56:12 +0000,\n \"Li, Yong\" <[email protected]> wrote:\n\n>> 1. Could you create your patch by \"git format-patch -vN master\"\n>> or something? If you create your patch by \"git format-patch\",\n>> we can apply your patch by \"git am XXX.patch\".\n>>\n> \n> Thanks for your review. I’ve updated the patch to follow your\n> suggested format.\n\nThanks. 
I could apply your patch by \"git am\nv2-0001-heapam_refactor.patch\".\n\nCould you use the following format for the commit message\nnext time?\n\n----\n${TITLE}\n\n${DESCRIPTION}\n----\n\nFor example:\n\n----\nSeparate HEAP WAL replay logic into its own file\n\nMost access methods (i.e. nbtree and hash) use a separate\nfile with \"xlog\" in its name for its WAL replay logic. Heap\nis one exception of this convention. To make it easier for\nnewcomers to find the WAL replay logic for the heap access\nmethod, this patch isolates heap's replay logic in a new\nheapam_xlog.c file. This patch is a pure refactoring with no\nchange to the logic.\n----\n\nThis is a commonly used Git's commit message format. See\nalso other commit messages by \"git log\".\n\n>> 5. There are still WAL related codes in heapam.c:\n>>\n>> 4.1. log_heap_update()\n>> 4.2. log_heap_new_cid()\n>> 4.3. if (RelationNeedsWAL()) {...} in heap_insert()\n>> 4.4. if (needwal) {...} in heap_multi_insert()\n>> 4.5. if (RelationNeedsWAL()) {...} in heap_delete()\n>> 4.6. if (RelationNeedsWAL()) {...}s in heap_update()\n>> 4.7. if (RelationNeedsWAL()) {...} in heap_lock_tuple()\n>> 4.8. if (RelationNeedsWAL()) {...} in heap_lock_updated_tuple_rec()\n>> 4.9. if (RelationNeedsWAL()) {...} in heap_finish_speculative()\n>> 4.10. if (RelationNeedsWAL()) {...} in heap_abort_speculative()\n>> 4.11. if (RelationNeedsWAL()) {...} in heap_inplace_update()\n>> 4.12. log_heap_visible()\n>>\n>> Should we move them to head_xlog.c too?\n>>\n>> If we should do it, separated commits will be easy to\n>> review. For example, the 0001 patch moves existing codes\n>> to head_xlog.c as-is. The 0002 patch extracts WAL related\n>> codes in heap_insert() to heap_xlog.c and so on.\n> \n> I followed the convention of most access methods. The “xlog”\n> file includes the WAL replay logic only. The logic that generates\n> the log records themselves stays with the code that performs\n> the changes. Take nbtree as an example, you can also find\n> WAL generating code in several _bt_insertxxx() functions inside\n> the nbtinsert.c file.\n\nYou're right. Sorry.\n\n\nI think that this proposal is reasonable but we need to get\nattention from a committer to move forward this proposal.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Tue, 30 Jul 2024 14:47:34 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "\r\n> I think that this proposal is reasonable but we need to get\r\n> attention from a committer to move forward this proposal.\r\n> \r\n> \r\n> Thanks,\r\n> —\r\n> kou\r\n\r\nThank you Kou for your review. I will move the CF to the next\r\nphase and see what happens.\r\n\r\n\r\nRegards,\r\nYong\r\n\r\n\r\n", "msg_date": "Tue, 30 Jul 2024 06:48:26 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "On Tue, Jul 30, 2024 at 06:48:26AM +0000, Li, Yong wrote:\n> Thank you Kou for your review. I will move the CF to the next\n> phase and see what happens.\n\nQuite a fan of what you are proposing here, knowing that heapam.c is\nstill 8.8k lines of code even after moving the 1.3k lines dedicated to\nWAL records.\n\n+#include \"access/heapam_xlog.h\"\n\nThis is included in heapam.h, but missing from the patch. 
I guess\nthat you fat-fingered a `git add`.\n--\nMichael", "msg_date": "Wed, 11 Sep 2024 16:41:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "On Wed, Sep 11, 2024 at 04:41:49PM +0900, Michael Paquier wrote:\n> +#include \"access/heapam_xlog.h\"\n> \n> This is included in heapam.h, but missing from the patch. I guess\n> that you fat-fingered a `git add`.\n\nIt looks like my mind was wondering away when I wrote this part.\nSorry for the useless noise.\n--\nMichael", "msg_date": "Thu, 12 Sep 2024 08:12:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "On Thu, Sep 12, 2024 at 08:12:30AM +0900, Michael Paquier wrote:\n> It looks like my mind was wondering away when I wrote this part.\n> Sorry for the useless noise.\n\nI was looking at all that, and this is only moving code around. While\nthe part for heap_xlog_logical_rewrite in rewriteheap.c is a bit sad\nbut historical, the header cleanup in heapam.c is nice.\n\nSeeing heap_execute_freeze_tuple in heapam.h due to the dependency to\nXLH_INVALID_XVAC and XLH_FREEZE_XVAC is slightly surprising, but the\nopposite where heap_execute_freeze_tuple() would be in heapam_xlog.h\nwas less interesting. Just to say that I am agreeing with you here\nand I have let this part as you suggested originally.\n\nI was wondering for a bit about the order of the functions for heap\nand heap, but these are ordered in their own, which is also OK. I\nhave added a few more comments at the top of each subroutine for the\nrecords to be more consistent, and applied the result.\n--\nMichael", "msg_date": "Thu, 12 Sep 2024 14:39:11 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "\r\n\r\n> On Sep 12, 2024, at 13:39, Michael Paquier <[email protected]> wrote:\r\n> \r\n> External Email\r\n> \r\n> \r\n> I was looking at all that, and this is only moving code around. While\r\n> the part for heap_xlog_logical_rewrite in rewriteheap.c is a bit sad\r\n> but historical, the header cleanup in heapam.c is nice.\r\n> \r\n> Seeing heap_execute_freeze_tuple in heapam.h due to the dependency to\r\n> XLH_INVALID_XVAC and XLH_FREEZE_XVAC is slightly surprising, but the\r\n> opposite where heap_execute_freeze_tuple() would be in heapam_xlog.h\r\n> was less interesting. Just to say that I am agreeing with you here\r\n> and I have let this part as you suggested originally.\r\n> \r\n> I was wondering for a bit about the order of the functions for heap\r\n> and heap, but these are ordered in their own, which is also OK. I\r\n> have added a few more comments at the top of each subroutine for the\r\n> records to be more consistent, and applied the result.\r\n> —\r\n> Michael\r\n> \r\n\r\nI am so glad to see that my patch got committed. Thank you a lot for it!\r\nThis is my first accepted patch. It really means a lot to me.\r\n\r\nYong", "msg_date": "Wed, 18 Sep 2024 08:40:02 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" }, { "msg_contents": "On Wed, Sep 18, 2024 at 08:40:02AM +0000, Li, Yong wrote:\n> I am so glad to see that my patch got committed. Thank you a lot for it!\n> This is my first accepted patch. 
It really means a lot to me.\n\nNo problem. Thanks for the patch!\n--\nMichael", "msg_date": "Wed, 18 Sep 2024 18:06:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Separate HEAP WAL replay logic into its own file" } ]
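For readers following the refactoring above, the practical effect is that the recovery-side dispatch now lives in src/backend/access/heap/heapam_xlog.c, while the code that emits heap WAL records (log_heap_update(), the RelationNeedsWAL() blocks, and so on) stays beside the operations in heapam.c, matching the nbtree convention. Below is an abridged illustration of the replay entry point that moved; only two record types are shown, the real function handles them all:

    #include "postgres.h"
    #include "access/heapam_xlog.h"
    #include "access/xlogreader.h"

    void
    heap_redo(XLogReaderState *record)
    {
        uint8   info = XLogRecGetInfo(record) & XLOG_HEAP_OPMASK;

        switch (info)
        {
            case XLOG_HEAP_INSERT:
                heap_xlog_insert(record);   /* static helper in heapam_xlog.c */
                break;
            case XLOG_HEAP_DELETE:
                heap_xlog_delete(record);
                break;
            default:
                elog(PANIC, "heap_redo: unknown op code %u", info);
        }
    }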
[ { "msg_contents": "Hi,\n\nWhile looking at the commit\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=29d0a77fa6606f9c01ba17311fc452dabd3f793d,\nI noticed that get_old_cluster_logical_slot_infos gets called for even\ntemplate1 and template0 databases. Which means, pg_upgrade executes\nqueries against the template databases to get replication slot\ninformation. I then realized that postgres allows one to connect to\ntemplate1 database (or any other user-defined template databases for\nthat matter), and create logical replication slots in it. If created,\nall the subsequent database creations will end up adding inactive\nlogical replication slots in the postgres server. This might not be a\nproblem in production servers as I assume the connections to template\ndatabases are typically restricted. Despite the connection\nrestrictions, if at all one gets to connect to template databases in\nany way, it's pretty much possible to load the postgres server with\ninactive replication slots.\n\nThis leads me to think why one would need logical replication slots in\ntemplate databases at all. Can postgres restrict logical replication\nslots creation in template databases? If restricted, it may deviate\nfrom the fundamental principle of template databases in the sense that\neverything in the template database must be copied over to the new\ndatabase created using it. Is it okay to do this? Am I missing\nsomething here?\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 17 Jun 2024 17:49:46 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Is creating logical replication slots in template databases useful at\n all?" }, { "msg_contents": "On Mon, Jun 17, 2024 at 5:50 PM Bharath Rupireddy <\[email protected]> wrote:\n\n> Hi,\n>\n> While looking at the commit\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=29d0a77fa6606f9c01ba17311fc452dabd3f793d\n> ,\n> I noticed that get_old_cluster_logical_slot_infos gets called for even\n> template1 and template0 databases. Which means, pg_upgrade executes\n> queries against the template databases to get replication slot\n> information. I then realized that postgres allows one to connect to\n> template1 database (or any other user-defined template databases for\n> that matter), and create logical replication slots in it. If created,\n> all the subsequent database creations will end up adding inactive\n> logical replication slots in the postgres server. This might not be a\n> problem in production servers as I assume the connections to template\n> databases are typically restricted. Despite the connection\n> restrictions, if at all one gets to connect to template databases in\n> any way, it's pretty much possible to load the postgres server with\n> inactive replication slots.\n>\n\nThe replication slot names are unique across databases [1] Hence\nreplication slots created by connecting to template1 database should not\nget copied over when creating a new database. Is that broken? A logical\nreplication slot is associated with a database but a physical replication\nslot is not. The danger you mention above applies only to logical\nreplication slots I assume.\n\n\n>\n> This leads me to think why one would need logical replication slots in\n> template databases at all. Can postgres restrict logical replication\n> slots creation in template databases? 
If restricted, it may deviate\n> from the fundamental principle of template databases in the sense that\n> everything in the template database must be copied over to the new\n> database created using it. Is it okay to do this? Am I missing\n> something here?\n>\n\nIf applications are using template1, they would want to keep the template1\non primary and replica in sync. Replication slot associated with template1\nwould be useful there.\n\n[1]\nhttps://www.postgresql.org/docs/current/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Mon, Jun 17, 2024 at 5:50 PM Bharath Rupireddy <[email protected]> wrote:Hi,\n\nWhile looking at the commit\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=29d0a77fa6606f9c01ba17311fc452dabd3f793d,\nI noticed that get_old_cluster_logical_slot_infos gets called for even\ntemplate1 and template0 databases. Which means, pg_upgrade executes\nqueries against the template databases to get replication slot\ninformation. I then realized that postgres allows one to connect to\ntemplate1 database (or any other user-defined template databases for\nthat matter), and create logical replication slots in it. If created,\nall the subsequent database creations will end up adding inactive\nlogical replication slots in the postgres server. This might not be a\nproblem in production servers as I assume the connections to template\ndatabases are typically restricted. Despite the connection\nrestrictions, if at all one gets to connect to template databases in\nany way, it's pretty much possible to load the postgres server with\ninactive replication slots.The replication slot names are unique across databases [1] Hence replication slots created by connecting to template1 database should not get copied over when creating a new database. Is that broken? A logical replication slot is associated with a database but a physical replication slot is not. The danger you mention above applies only to logical replication slots I assume. \n\nThis leads me to think why one would need logical replication slots in\ntemplate databases at all. Can postgres restrict logical replication\nslots creation in template databases? If restricted, it may deviate\nfrom the fundamental principle of template databases in the sense that\neverything in the template database must be copied over to the new\ndatabase created using it. Is it okay to do this? Am I missing\nsomething here?If applications are using template1, they would want to keep the template1 on primary and replica in sync. Replication slot associated with template1 would be useful there. [1] https://www.postgresql.org/docs/current/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS-- Best Wishes,Ashutosh Bapat", "msg_date": "Tue, 18 Jun 2024 15:19:41 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is creating logical replication slots in template databases\n useful at all?" }, { "msg_contents": "On Tue, Jun 18, 2024 at 03:19:41PM +0530, Ashutosh Bapat wrote:\n> On Mon, Jun 17, 2024 at 5:50 PM Bharath Rupireddy <\n> [email protected]> wrote:\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=29d0a77fa6606f9c01ba17311fc452dabd3f793d\n>> ,\n>> I noticed that get_old_cluster_logical_slot_infos gets called for even\n>> template1 and template0 databases. Which means, pg_upgrade executes\n>> queries against the template databases to get replication slot\n>> information. 
I then realized that postgres allows one to connect to\n>> template1 database (or any other user-defined template databases for\n>> that matter), and create logical replication slots in it. If created,\n>> all the subsequent database creations will end up adding inactive\n>> logical replication slots in the postgres server. This might not be a\n>> problem in production servers as I assume the connections to template\n>> databases are typically restricted. Despite the connection\n>> restrictions, if at all one gets to connect to template databases in\n>> any way, it's pretty much possible to load the postgres server with\n>> inactive replication slots.\n> \n> The replication slot names are unique across databases [1] Hence\n> replication slots created by connecting to template1 database should not\n> get copied over when creating a new database. Is that broken? A logical\n> replication slot is associated with a database but a physical replication\n> slot is not. The danger you mention above applies only to logical\n> replication slots I assume.\n\nget_old_cluster_logical_slot_infos() on even template0 is still\ncorrect, IMO, even if this template database is not something that\nshould be modified at all, or even have allow_connections enabled. It\nseems to me the correct answer here is that users should not create\nslots where they are not going to use them.\n\n>> This leads me to think why one would need logical replication slots in\n>> template databases at all. Can postgres restrict logical replication\n>> slots creation in template databases? If restricted, it may deviate\n>> from the fundamental principle of template databases in the sense that\n>> everything in the template database must be copied over to the new\n>> database created using it. Is it okay to do this? Am I missing\n>> something here?\n> \n> If applications are using template1, they would want to keep the template1\n> on primary and replica in sync. Replication slot associated with template1\n> would be useful there.\n\nTemplates defined in CREATE DATABASE can be any active database as\nlong as they are in pg_database, so doing logical replication on\ntemplate1 to keep it in sync across nodes is fine.\n\nIn short, I am not quite seeing the problem here.\n--\nMichael", "msg_date": "Wed, 10 Jul 2024 15:29:54 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is creating logical replication slots in template databases\n useful at all?" } ]
[ { "msg_contents": "Hi.\n\nThere's the following inconsistency between try_mergejoin_path() and \ncreate_mergejoin_plan().\nWhen clause operator has no commutator, we can end up with mergejoin \npath.\nLater create_mergejoin_plan() will call get_switched_clauses(). This \nfunction can error out with\n\nERROR: could not find commutator for operator XXX\n\nThe similar behavior seems to be present also for hash join.\n\nAttaching a test case (in patch) and a possible fix.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Mon, 17 Jun 2024 17:51:30 +0300", "msg_from": "Alexander Pyhalov <[email protected]>", "msg_from_op": true, "msg_subject": "Inconsistency between try_mergejoin_path and create_mergejoin_plan" }, { "msg_contents": "On Mon, Jun 17, 2024 at 10:51 PM Alexander Pyhalov\n<[email protected]> wrote:\n> There's the following inconsistency between try_mergejoin_path() and\n> create_mergejoin_plan().\n> When clause operator has no commutator, we can end up with mergejoin\n> path.\n> Later create_mergejoin_plan() will call get_switched_clauses(). This\n> function can error out with\n>\n> ERROR: could not find commutator for operator XXX\n\nInteresting. This error can be reproduced with table 'ec1' from\nsql/equivclass.sql.\n\nset enable_indexscan to off;\n\nexplain select * from ec1 t1 join ec1 t2 on t2.ff = t1.f1;\nERROR: could not find commutator for operator 30450\n\nThe column ec1.f1 has a type of 'int8alias1', a new data type created in\nthis test file. Additionally, there is also a newly created operator\n'int8 = int8alias1' which is mergejoinable but lacks a valid commutator.\nTherefore, there is no problem generating the mergejoin path, but when\nwe create the mergejoin plan, get_switched_clauses would notice the\nabsence of a valid commutator needed to commute the clause.\n\nIt seems to me that the new operator is somewhat artificial, since it is\ndesigned to support a mergejoin but lacks a valid commutator. So before\nwe proceed to discuss the fix, I'd like to know whether this is a valid\nissue that needs fixing.\n\nAny thoughts?\n\nThanks\nRichard\n\n\n", "msg_date": "Wed, 19 Jun 2024 12:15:24 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistency between try_mergejoin_path and\n create_mergejoin_plan" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> On Mon, Jun 17, 2024 at 10:51 PM Alexander Pyhalov\n> <[email protected]> wrote:\n>> ERROR: could not find commutator for operator XXX\n\n> It seems to me that the new operator is somewhat artificial, since it is\n> designed to support a mergejoin but lacks a valid commutator. So before\n> we proceed to discuss the fix, I'd like to know whether this is a valid\n> issue that needs fixing.\n\nWell, there's no doubt that the case is artificial: nobody who knew\nwhat they were doing would install an incomplete opclass like this\nin a production setting. However, there are several parts of the\nplanner that take pains to avoid this type of failure. I am pretty\nsure that we are careful about flipping around candidate indexscan\nquals for instance. And the \"broken equivalence class\" mechanism\nis all about that, which is what equivclass.sql is setting up this\nopclass to test. 
So I find it a bit sad if mergejoin creation is\ntripping over this case.\n\nI do not think we should add a great deal of complexity or extra\nplanner cycles to deal with this; but if it can be fixed at low\ncost, we should.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jun 2024 00:49:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistency between try_mergejoin_path and\n create_mergejoin_plan" }, { "msg_contents": "On Wed, Jun 19, 2024 at 12:49 PM Tom Lane <[email protected]> wrote:\n> Richard Guo <[email protected]> writes:\n> > It seems to me that the new operator is somewhat artificial, since it is\n> > designed to support a mergejoin but lacks a valid commutator. So before\n> > we proceed to discuss the fix, I'd like to know whether this is a valid\n> > issue that needs fixing.\n\n> I do not think we should add a great deal of complexity or extra\n> planner cycles to deal with this; but if it can be fixed at low\n> cost, we should.\n\nI think we can simply verify the validity of commutators for clauses in\nthe form \"inner op outer\" when selecting mergejoin/hash clauses. If a\nclause lacks a commutator, we should consider it unusable for the given\npair of outer and inner rels. Please see the attached patch.\n\nThanks\nRichard", "msg_date": "Wed, 19 Jun 2024 21:30:39 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistency between try_mergejoin_path and\n create_mergejoin_plan" }, { "msg_contents": "Richard Guo писал(а) 2024-06-19 16:30:\n> On Wed, Jun 19, 2024 at 12:49 PM Tom Lane <[email protected]> wrote:\n>> Richard Guo <[email protected]> writes:\n>> > It seems to me that the new operator is somewhat artificial, since it is\n>> > designed to support a mergejoin but lacks a valid commutator. So before\n>> > we proceed to discuss the fix, I'd like to know whether this is a valid\n>> > issue that needs fixing.\n> \n>> I do not think we should add a great deal of complexity or extra\n>> planner cycles to deal with this; but if it can be fixed at low\n>> cost, we should.\n> \n> I think we can simply verify the validity of commutators for clauses in\n> the form \"inner op outer\" when selecting mergejoin/hash clauses. If a\n> clause lacks a commutator, we should consider it unusable for the given\n> pair of outer and inner rels. Please see the attached patch.\n> \n\nThis seems to be working for my test cases.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 19 Jun 2024 17:24:18 +0300", "msg_from": "Alexander Pyhalov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inconsistency between try_mergejoin_path and\n create_mergejoin_plan" }, { "msg_contents": "On Wed, Jun 19, 2024 at 10:24 PM Alexander Pyhalov\n<[email protected]> wrote:\n> Richard Guo писал(а) 2024-06-19 16:30:\n> > I think we can simply verify the validity of commutators for clauses in\n> > the form \"inner op outer\" when selecting mergejoin/hash clauses. If a\n> > clause lacks a commutator, we should consider it unusable for the given\n> > pair of outer and inner rels. Please see the attached patch.\n\n> This seems to be working for my test cases.\n\nThank you for confirming. Here is an updated patch with some tweaks to\nthe comments and commit message. 
I've parked this patch in the July\ncommitfest.\n\nThanks\nRichard", "msg_date": "Mon, 24 Jun 2024 14:29:53 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistency between try_mergejoin_path and\n create_mergejoin_plan" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> Thank you for confirming. Here is an updated patch with some tweaks to\n> the comments and commit message. I've parked this patch in the July\n> commitfest.\n\nI took a brief look at this. I think the basic idea is sound,\nbut I have a couple of nits:\n\n* It's not entirely obvious that the checks preceding these additions\nare sufficient to ensure that the clauses are OpExprs. They probably\nare, since the clauses are marked hashable or mergeable, but that test\nis mighty far away. More to the point, if they ever weren't OpExprs\nthe result would likely be to pass a bogus OID to get_commutator and\nthus silently fail, allowing the problem to go undetected for a long\ntime. I'd suggest using castNode() rather than a hard-wired\nassumption that the clause is an OpExpr.\n\n* Do we really need to construct a whole new set of bogus operators\nand opclasses to test this? As you noted, the regression tests\nalready set up an incomplete opclass for other purposes. Why can't\nwe extend that test, to reduce the amount of cycles wasted forevermore\non this rather trivial point?\n\n(I'm actually wondering whether we really need to memorialize this\nwith a regression test case at all. But I'd settle for minimizing\nthe amount of test cycles added.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Jul 2024 12:07:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistency between try_mergejoin_path and\n create_mergejoin_plan" }, { "msg_contents": "On Thu, Jul 25, 2024 at 12:07 AM Tom Lane <[email protected]> wrote:\n> I took a brief look at this. I think the basic idea is sound,\n> but I have a couple of nits:\n\nThank you for reviewing this patch!\n\n> * It's not entirely obvious that the checks preceding these additions\n> are sufficient to ensure that the clauses are OpExprs. They probably\n> are, since the clauses are marked hashable or mergeable, but that test\n> is mighty far away. More to the point, if they ever weren't OpExprs\n> the result would likely be to pass a bogus OID to get_commutator and\n> thus silently fail, allowing the problem to go undetected for a long\n> time. I'd suggest using castNode() rather than a hard-wired\n> assumption that the clause is an OpExpr.\n\nGood point. I've modified the code to use castNode(), and added\ncomment accordingly.\n\n> * Do we really need to construct a whole new set of bogus operators\n> and opclasses to test this? As you noted, the regression tests\n> already set up an incomplete opclass for other purposes. Why can't\n> we extend that test, to reduce the amount of cycles wasted forevermore\n> on this rather trivial point?\n>\n> (I'm actually wondering whether we really need to memorialize this\n> with a regression test case at all. But I'd settle for minimizing\n> the amount of test cycles added.)\n\nAt first I planned to use the alias type 'int8alias1' created in\nequivclass.sql for this test. However, I found that this type is not\ncreated yet when running join.sql. Perhaps it's more convenient to\nplace this test in equivclass.sql to leverage the bogus operators\ncreated there, but the test does not seem to be related to 'broken'\nECs. 
I'm not sure if equivclass.sql is the appropriate place.\n\nDo you think it works if we place this test in equivclass.sql and\nwrite a comment explaining why it's there, like attached? Now I’m\nalso starting to wonder if this change actually warrants such a test.\n\nBTW, do you think this patch is worth backpatching?\n\nThanks\nRichard", "msg_date": "Fri, 26 Jul 2024 15:56:08 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistency between try_mergejoin_path and\n create_mergejoin_plan" }, { "msg_contents": "On Fri, Jul 26, 2024 at 3:56 PM Richard Guo <[email protected]> wrote:\n> Do you think it works if we place this test in equivclass.sql and\n> write a comment explaining why it's there, like attached? Now I’m\n> also starting to wonder if this change actually warrants such a test.\n\nThe new test case fails starting from adf97c156, and we have to\ninstall a hash opfamily and a hash function for the hacked int8alias1\ntype to make the test case work again.\n\nNow, I'm more dubious about whether we really need to add a test case\nfor this change.\n\nThanks\nRichard", "msg_date": "Tue, 3 Sep 2024 17:51:47 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistency between try_mergejoin_path and\n create_mergejoin_plan" }, { "msg_contents": "On Tue, Sep 3, 2024 at 5:51 PM Richard Guo <[email protected]> wrote:\n> The new test case fails starting from adf97c156, and we have to\n> install a hash opfamily and a hash function for the hacked int8alias1\n> type to make the test case work again.\n>\n> Now, I'm more dubious about whether we really need to add a test case\n> for this change.\n\nI pushed this patch with the test case remaining, as it adds only a\nminimal number of test cycles. I explained in the commit message why\nthe test case is included in equivclass.sql rather than in join.sql.\n\nI did not do backpatch because this bug cannot be reproduced without\ninstalling an incomplete opclass, which is unlikely to happen in\npractice.\n\nThanks for the report and review.\n\nThanks\nRichard\n\n\n", "msg_date": "Wed, 4 Sep 2024 11:50:50 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistency between try_mergejoin_path and\n create_mergejoin_plan" } ]
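To make the shape of the fix concrete: while selecting mergejoin or hash clauses for a given pair of outer and inner rels, a clause that only matches in the form "inner op outer" has to be commuted at plan-creation time, so it must be rejected up front if its operator lacks a commutator; otherwise get_switched_clauses() fails later with "could not find commutator". The sketch below only illustrates that check and is not the committed patch; the function name is invented for the example, while castNode(), get_commutator(), OidIsValid() and the RestrictInfo fields are existing PostgreSQL APIs.

#include "postgres.h"
#include "nodes/pathnodes.h"
#include "utils/lsyscache.h"

/*
 * Illustrative check: return false if the clause would have to be
 * commuted to put the outer rel's expression on the left, but its
 * operator has no commutator.
 */
static bool
clause_usable_for_reversed_join(RestrictInfo *rinfo, Relids outerrelids)
{
	if (bms_is_subset(rinfo->right_relids, outerrelids))
	{
		/*
		 * Clause is "inner op outer".  Use castNode() rather than a
		 * blind cast, so a non-OpExpr fails loudly instead of feeding a
		 * bogus OID to get_commutator().
		 */
		OpExpr	   *opexpr = castNode(OpExpr, rinfo->clause);

		if (!OidIsValid(get_commutator(opexpr->opno)))
			return false;		/* unusable for this outer/inner pair */
	}

	return true;				/* already "outer op inner", or commutable */
}

With a check of this shape applied where merge and hash clauses are chosen, path generation and plan creation agree on which clauses are usable, so the error above can no longer be reached from a merge or hash join path.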
[ { "msg_contents": "Hackers,\n\nFYI, I wanted to try using PostgreSQL with LLVM on my Mac, but the backend repeatedly crashes during `make check`. I found the same issue in master and REL_16_STABLE. The crash message is:\n\nFATAL: fatal llvm error: Unsupported stack probing method\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nconnection to server was lost\n\nSame thing appears in the log output. I poked around and found a similar bug[1] was fixed in LLVM in December, but as I’m getting it with the latest release from 2 weeks ago, something else might be going on. I’ve opened a new LLVM bug report[2] for the issue.\n\nI don’t *think* it’s something that can be fixed in Postgres core. This is mostly in FYI in case anyone else runs into this issue or knows something more about it.\n\nIn the meantime I’ll be building without --with-llvm.\n\nBest,\n\nDavid\n\n[1]: https://github.com/llvm/llvm-project/issues/57171\n[2]: https://github.com/llvm/llvm-project/issues/95804\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:52:10 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "FYI: LLVM Runtime Crash" }, { "msg_contents": "On Jun 17, 2024, at 11:52, David E. Wheeler <[email protected]> wrote:\n\n> I don’t *think* it’s something that can be fixed in Postgres core. This is mostly in FYI in case anyone else runs into this issue or knows something more about it.\n\nOkay, a response to the issue[1] says the bug is in Postgres:\n\n> The error message is LLVM reporting the backend can't handle the particular form of \"probe-stack\" attribute in the input LLVM IR. So this is likely a bug in the way postgres is generating LLVM IR: please file a bug against Postgres. (Feel free to reopen if you have some reason to believe the issue is on the LLVM side.)\n\nWould it be best for me to send a report to pgsql-bugs?\n\nBest,\n\nDavid\n\n[1]: https://github.com/llvm/llvm-project/issues/95804#issuecomment-2174310977\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 16:07:49 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FYI: LLVM Runtime Crash" }, { "msg_contents": "Hi,\n\nOn 2024-06-17 16:07:49 -0400, David E. Wheeler wrote:\n> On Jun 17, 2024, at 11:52, David E. Wheeler <[email protected]> wrote:\n> \n> > I don’t *think* it’s something that can be fixed in Postgres core. This is mostly in FYI in case anyone else runs into this issue or knows something more about it.\n> \n> Okay, a response to the issue[1] says the bug is in Postgres:\n>\n> > The error message is LLVM reporting the backend can't handle the particular form of \"probe-stack\" attribute in the input LLVM IR. So this is likely a bug in the way postgres is generating LLVM IR: please file a bug against Postgres. (Feel free to reopen if you have some reason to believe the issue is on the LLVM side.)\n\nI suspect the issue might be that the version of clang and LLVM are diverging\ntoo far. Does it work if you pass CLANG=/opt/homebrew/opt/llvm/bin/clang to\nconfigure?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 13:37:21 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FYI: LLVM Runtime Crash" }, { "msg_contents": "On Jun 17, 2024, at 16:37, Andres Freund <[email protected]> wrote:\n\n> I suspect the issue might be that the version of clang and LLVM are diverging\n> too far. 
Does it work if you pass CLANG=/opt/homebrew/opt/llvm/bin/clang to\n> configure?\n\nIt does! It didn’t occur to me this would be the issue, but I presumes /usr/bin/clang is not compatible with the latest LLVM installed from Homebrew. Interesting! I’ll update that issue.\n\nThanks,\n\nDavid\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 16:48:29 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FYI: LLVM Runtime Crash" } ]
[ { "msg_contents": "Hello,\n\nWhen currently trying to lock a virtual tuple the returned error\nwill be a misleading `could not read block 0`. This patch adds a\ncheck for the tuple table slot being virtual to produce a clearer\nerror.\n\nThis can be triggered by extensions returning virtual tuples.\nWhile this is of course an error in those extensions the resulting\nerror is very misleading.\n\n\n\n-- \nRegards, Sven Klemm", "msg_date": "Mon, 17 Jun 2024 17:55:37 +0200", "msg_from": "Sven Klemm <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Improve error message when trying to lock virtual tuple." }, { "msg_contents": "Hi,\n\n> When currently trying to lock a virtual tuple the returned error\n> will be a misleading `could not read block 0`. This patch adds a\n> check for the tuple table slot being virtual to produce a clearer\n> error.\n>\n> This can be triggered by extensions returning virtual tuples.\n> While this is of course an error in those extensions the resulting\n> error is very misleading.\n\n```\n+ /*\n+ * If the slot is virtual, we can't lock it. This should never happen, but\n+ * this will lead to a misleading could not read block error\nlater otherwise.\n+ */\n```\n\nI suggest dropping or rephrasing the \"this should never happen\" part.\nIf this never happened we didn't need this check. Maybe \"If the slot\nis virtual, we can't lock it. Fail early in order to provide an\nappropriate error message\", or just \"If the slot is virtual, we can't\nlock it\".\n\n```\nelog(ERROR, \"cannot lock virtual tuple\");\n```\n\nFor some reason I thought that ereport() is the preferred way of\nthrowing errors, but I see elog() used many times in ExecLockRows() so\nthis is probably fine.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 17 Jun 2024 19:12:31 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve error message when trying to lock virtual tuple." }, { "msg_contents": "(now send a copy to -hackers, too)\n\nOn Mon, 17 Jun 2024 at 17:55, Sven Klemm <[email protected]> wrote:\n>\n> Hello,\n>\n> When currently trying to lock a virtual tuple the returned error\n> will be a misleading `could not read block 0`. This patch adds a\n> check for the tuple table slot being virtual to produce a clearer\n> error.\n>\n> This can be triggered by extensions returning virtual tuples.\n> While this is of course an error in those extensions the resulting\n> error is very misleading.\n\nI think you're solving the wrong problem here, as I can't think of a\nplace where both virtual tuple slots and tuple locking are allowed at\nthe same time in core code.\n\nI mean, in which kind of situation could we get a Relation's table\nslot which is not lockable by said relation's AM? Assuming the \"could\nnot read block 0\" error comes from the heap code, why does the\nassertion in heapam_tuple_lock that checks for a\nBufferHeapTupleTableSlot not fire before this `block 0` error? If the\nerror is not in the heapam code, could you show an example of the code\nthat breaks with that error code?\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 17 Jun 2024 22:25:09 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve error message when trying to lock virtual tuple." 
}, { "msg_contents": "On Mon, Jun 17, 2024 at 10:25 PM Matthias van de Meent\n<[email protected]> wrote:\n\n> I think you're solving the wrong problem here, as I can't think of a\n> place where both virtual tuple slots and tuple locking are allowed at\n> the same time in core code.\n>\n> I mean, in which kind of situation could we get a Relation's table\n> slot which is not lockable by said relation's AM? Assuming the \"could\n> not read block 0\" error comes from the heap code, why does the\n> assertion in heapam_tuple_lock that checks for a\n> BufferHeapTupleTableSlot not fire before this `block 0` error? If the\n> error is not in the heapam code, could you show an example of the code\n> that breaks with that error code?\n\nIn assertion enabled builds this will be stopped much earlier and not return\nthe misleading error message. But most packaged postgres versions don't have\nassertions enabled and will produce the misleading `could not read block 0`\nerror.\nI am aware that this not a postgres bug, but i think this error message\nis an improvement over the current situation.\n\n\n-- \nRegards, Sven Klemm\n\n\n", "msg_date": "Tue, 18 Jun 2024 09:32:30 +0200", "msg_from": "Sven Klemm <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve error message when trying to lock virtual tuple." }, { "msg_contents": "On Tue, 18 Jun 2024 at 09:32, Sven Klemm <[email protected]> wrote:\n>\n> On Mon, Jun 17, 2024 at 10:25 PM Matthias van de Meent\n> <[email protected]> wrote:\n>\n> > I think you're solving the wrong problem here, as I can't think of a\n> > place where both virtual tuple slots and tuple locking are allowed at\n> > the same time in core code.\n> >\n> > I mean, in which kind of situation could we get a Relation's table\n> > slot which is not lockable by said relation's AM? Assuming the \"could\n> > not read block 0\" error comes from the heap code, why does the\n> > assertion in heapam_tuple_lock that checks for a\n> > BufferHeapTupleTableSlot not fire before this `block 0` error? If the\n> > error is not in the heapam code, could you show an example of the code\n> > that breaks with that error code?\n>\n> In assertion enabled builds this will be stopped much earlier and not return\n> the misleading error message. But most packaged postgres versions don't have\n> assertions enabled and will produce the misleading `could not read block 0`\n> error.\n> I am aware that this not a postgres bug, but i think this error message\n> is an improvement over the current situation.\n\nExtensions shouldn't cause assertions to trigger, IMO, and I don't\nthink that this check in ExecLockRows is a good way to solve that\nissue. 
In my opinion, authors should test their extension on\nassert-enabled PostgreSQL, so that they're certain they're not doing\n\nIf you're dead-set on having users see less confusing error messages\nwhen assertions should have triggered (but are not enabled, and thus\ndon't), I think the better place to add additional checks & error\nmessages is in the actual heapam_tuple_lock method, just after the\nassertion, rather than in the AM-agnostic ExecLockRows: If or when a\ntableAM decides that VirtualTableTupleSlot is the slot type they want\nto use for passing tuples around, then that shouldn't be broken by\ncode in ExecLockRows that was put there to mimick an assert in the\nheap AM.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 18 Jun 2024 11:14:45 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve error message when trying to lock virtual tuple." } ]
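For reference, the guard under discussion amounts to the sketch below; it is illustrative only and was not committed in this form. The helper name is invented, while TTS_IS_VIRTUAL() and elog() are existing facilities, and the choice between ExecLockRows() and heapam_tuple_lock() as its home is exactly the disagreement above.

#include "postgres.h"
#include "executor/tuptable.h"

static void
check_slot_is_lockable(TupleTableSlot *slot)
{
	/*
	 * A virtual slot carries no buffer or TID, so locking the tuple it
	 * describes would otherwise surface later as the misleading
	 * "could not read block 0" error.  Assert-enabled builds stop much
	 * earlier via heapam's existing TTS_IS_BUFFERTUPLE assertion; this
	 * check only improves the error message in production builds.
	 */
	if (TTS_IS_VIRTUAL(slot))
		elog(ERROR, "cannot lock virtual tuple");
}

Placing such a check inside heapam_tuple_lock(), next to the existing assertion, keeps ExecLockRows() AM-agnostic, which is the argument made in the last message.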
[ { "msg_contents": "Hi,\n\nTo investigate an unrelated issue, I set up key logging in the backend (we\nprobably should build that in) and looked at the decrypted data. And noticed\nthat just after TLS setup finished the server sends three packets in a row:\n\nC->S: TLSv1.3: finished\nC->S: TLSv1.3: application data (startup message)\nS->C: TLSv1.3: new session ticket\nS->C: TLSv1.3: new session ticket\nS->C: TLSv1.3: application data (authentication ok, parameter status+)\n\n\nWe try to turn off session resumption, but not completely enough for 1.3:\n SSL_OP_NO_TICKET\n SSL/TLS supports two mechanisms for resuming sessions: session ids and stateless session tickets.\n\n When using session ids a copy of the session information is cached on the server and a unique id is sent to the client. When the client wishes to\n resume it provides the unique id so that the server can retrieve the session information from its cache.\n\n When using stateless session tickets the server uses a session ticket encryption key to encrypt the session information. This encrypted data is\n sent to the client as a \"ticket\". When the client wishes to resume it sends the encrypted data back to the server. The server uses its key to\n decrypt the data and resume the session. In this way the server can operate statelessly - no session information needs to be cached locally.\n\n The TLSv1.3 protocol only supports tickets and does not directly support session ids. However, OpenSSL allows two modes of ticket operation in\n TLSv1.3: stateful and stateless. Stateless tickets work the same way as in TLSv1.2 and below. Stateful tickets mimic the session id behaviour\n available in TLSv1.2 and below. The session information is cached on the server and the session id is wrapped up in a ticket and sent back to the\n client. When the client wishes to resume, it presents a ticket in the same way as for stateless tickets. The server can then extract the session id\n from the ticket and retrieve the session information from its cache.\n\n By default OpenSSL will use stateless tickets. The SSL_OP_NO_TICKET option will cause stateless tickets to not be issued. In TLSv1.2 and below this\n means no ticket gets sent to the client at all. In TLSv1.3 a stateful ticket will be sent. This is a server-side option only.\n\n In TLSv1.3 it is possible to suppress all tickets (stateful and stateless) from being sent by calling SSL_CTX_set_num_tickets(3) or\n SSL_set_num_tickets(3).\n\n\nNote the second to last paragraph: Because we use SSL_OP_NO_TICKET we trigger\nuse of stateful tickets. 
Which afaict are never going to be useful, because we\ndon't share the necessary state.\n\nI guess openssl really could have inferred this from the fact that we *do*\ncall SSL_CTX_set_session_cache_mode(SSL_SESS_CACHE_OFF), b\n\n\nSeems we ought to use SSL_CTX_set_num_tickets() to prevent issuing the useless\ntickets?\n\n\n\nIt seems like a buglet in openssl that it forces each session tickets to be\nsent in its own packet (it does an explicit BIO_flush(), so even if we\nbuffered between openssl and OS, as I think we should, we'd still send it\nseparately), but I don't really understand most of this stuff.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 10:38:03 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "tls 1.3: sending multiple tickets" }, { "msg_contents": "> On 17 Jun 2024, at 19:38, Andres Freund <[email protected]> wrote:\n\n> Note the second to last paragraph: Because we use SSL_OP_NO_TICKET we trigger\n> use of stateful tickets. Which afaict are never going to be useful, because we\n> don't share the necessary state.\n\nNice catch, I learned something new today. I was under the impression that the\nflag turned of all tickets but clearly not.\n\n> I guess openssl really could have inferred this from the fact that we *do*\n> call SSL_CTX_set_session_cache_mode(SSL_SESS_CACHE_OFF), b\n\nEvery day with the OpenSSL API is an adventure.\n\n> Seems we ought to use SSL_CTX_set_num_tickets() to prevent issuing the useless\n> tickets?\n\nAgreed, in 1.1.1 and above as the API was only introduced then. LibreSSL added\nthe API in 3.5.4 but only for compatibility since it doesn't support TLS\ntickets at all.\n\n> It seems like a buglet in openssl that it forces each session tickets to be\n> sent in its own packet (it does an explicit BIO_flush(), so even if we\n> buffered between openssl and OS, as I think we should, we'd still send it\n> separately), but I don't really understand most of this stuff.\n\nI don't see anything in the RFCs so not sure.\n\nThe attached applies this, and I think this is backpatching material since we\narguably fail to do what we say in the code. AFAIK we don't have a hard rule\nagainst backpatching changes to autoconf/meson?\n\n--\nDaniel Gustafsson", "msg_date": "Tue, 18 Jun 2024 15:11:33 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tls 1.3: sending multiple tickets" }, { "msg_contents": "On 18/06/2024 16:11, Daniel Gustafsson wrote:\n>> On 17 Jun 2024, at 19:38, Andres Freund <[email protected]> wrote:\n>> Seems we ought to use SSL_CTX_set_num_tickets() to prevent issuing the useless\n>> tickets?\n> \n> Agreed, in 1.1.1 and above as the API was only introduced then. LibreSSL added\n> the API in 3.5.4 but only for compatibility since it doesn't support TLS\n> tickets at all.\n\nWow, that's a bizarre API. The OpenSSL docs are not clear on what the \npossible values for SSL_CTX_set_num_tickets() are. 
It talks about 0, and \nmentions that 2 is the default, but what does it mean to set it to 1, or \n5, for example?\n\nAnyway, it's pretty clear that SSL_CTX_set_num_tickets(0) can be used to \ndisable tickets, so that's fine.\n\n>> It seems like a buglet in openssl that it forces each session tickets to be\n>> sent in its own packet (it does an explicit BIO_flush(), so even if we\n>> buffered between openssl and OS, as I think we should, we'd still send it\n>> separately), but I don't really understand most of this stuff.\n> \n> I don't see anything in the RFCs so not sure.\n> \n> The attached applies this, and I think this is backpatching material since we\n> arguably fail to do what we say in the code. AFAIK we don't have a hard rule\n> against backpatching changes to autoconf/meson?\n\nLooks good to me. Backpatching autoconf/meson changes is fine, we've \ndone it before.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 24 Jul 2024 08:44:16 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tls 1.3: sending multiple tickets" }, { "msg_contents": "> On 24 Jul 2024, at 07:44, Heikki Linnakangas <[email protected]> wrote:\n> \n> On 18/06/2024 16:11, Daniel Gustafsson wrote:\n>>> On 17 Jun 2024, at 19:38, Andres Freund <[email protected]> wrote:\n>>> Seems we ought to use SSL_CTX_set_num_tickets() to prevent issuing the useless\n>>> tickets?\n>> Agreed, in 1.1.1 and above as the API was only introduced then. LibreSSL added\n>> the API in 3.5.4 but only for compatibility since it doesn't support TLS\n>> tickets at all.\n> \n> Wow, that's a bizarre API. The OpenSSL docs are not clear on what the possible values for SSL_CTX_set_num_tickets() are. It talks about 0, and mentions that 2 is the default, but what does it mean to set it to 1, or 5, for example?\n\nIt means that 1 or 5 tickets can be sent to the user, OpenSSL accepts an\narbitrary number of tickets (tickets can be issued at other points during the\nconnection than the handshake AFAICT).\n\n> Anyway, it's pretty clear that SSL_CTX_set_num_tickets(0) can be used to disable tickets, so that's fine.\n> \n>>> It seems like a buglet in openssl that it forces each session tickets to be\n>>> sent in its own packet (it does an explicit BIO_flush(), so even if we\n>>> buffered between openssl and OS, as I think we should, we'd still send it\n>>> separately), but I don't really understand most of this stuff.\n>> I don't see anything in the RFCs so not sure.\n>> The attached applies this, and I think this is backpatching material since we\n>> arguably fail to do what we say in the code. AFAIK we don't have a hard rule\n>> against backpatching changes to autoconf/meson?\n> \n> Looks good to me. Backpatching autoconf/meson changes is fine, we've done it before.\n\nThanks for review, I've applied this backpatched all the way.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 26 Jul 2024 13:55:29 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tls 1.3: sending multiple tickets" }, { "msg_contents": "Hello!\n\nOn 2024-07-26 14:55, Daniel Gustafsson wrote:\n> Thanks for review, I've applied this backpatched all the way.\n\nIt looks like the recommended way of using autoheader [1] is now broken. 
\nThe attached patch fixes the master branch for me.\n\n[1] \nhttps://www.postgresql.org/message-id/30511.1546097762%40sss.pgh.pa.us\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 26 Jul 2024 15:03:31 +0300", "msg_from": "Marina Polyakova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tls 1.3: sending multiple tickets" }, { "msg_contents": "> On 26 Jul 2024, at 14:03, Marina Polyakova <[email protected]> wrote:\n> On 2024-07-26 14:55, Daniel Gustafsson wrote:\n>> Thanks for review, I've applied this backpatched all the way.\n> \n> It looks like the recommended way of using autoheader [1] is now broken. The attached patch fixes the master branch for me.\n\nThanks for the report, I'll fix it. Buildfarm animal hamerkop also reminded me\nthat I had managed to stash the old MSVC buildsystem changes (ENOTENOUGHCOFFEE)\nso fixing that at the same time.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 26 Jul 2024 14:27:03 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tls 1.3: sending multiple tickets" }, { "msg_contents": "On 2024-07-26 15:27, Daniel Gustafsson wrote:\n>> On 26 Jul 2024, at 14:03, Marina Polyakova \n>> <[email protected]> wrote:\n>> It looks like the recommended way of using autoheader [1] is now \n>> broken. The attached patch fixes the master branch for me.\n> \n> Thanks for the report, I'll fix it. Buildfarm animal hamerkop also \n> reminded me\n> that I had managed to stash the old MSVC buildsystem changes \n> (ENOTENOUGHCOFFEE)\n> so fixing that at the same time.\n\nThank you!\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 26 Jul 2024 15:47:01 +0300", "msg_from": "Marina Polyakova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tls 1.3: sending multiple tickets" }, { "msg_contents": "On Fri, Jul 26, 2024 at 8:27 AM Daniel Gustafsson <[email protected]> wrote:\n> Thanks for the report, I'll fix it. Buildfarm animal hamerkop also reminded me\n> that I had managed to stash the old MSVC buildsystem changes (ENOTENOUGHCOFFEE)\n> so fixing that at the same time.\n\nI was just looking at this commit and noticing that nothing in the\ncommit message explains why we want to turn off tickets. So then I\nlooked at the comments in the patch and that didn't explain it either.\nSo then I read through this thread and that also didn't explain it.\n\nI don't doubt that you're doing the right thing here but it'd be nice\nto document why it's the right thing someplace.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Jul 2024 10:08:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tls 1.3: sending multiple tickets" }, { "msg_contents": "> On 26 Jul 2024, at 16:08, Robert Haas <[email protected]> wrote:\n> \n> On Fri, Jul 26, 2024 at 8:27 AM Daniel Gustafsson <[email protected]> wrote:\n>> Thanks for the report, I'll fix it. Buildfarm animal hamerkop also reminded me\n>> that I had managed to stash the old MSVC buildsystem changes (ENOTENOUGHCOFFEE)\n>> so fixing that at the same time.\n> \n> I was just looking at this commit and noticing that nothing in the\n> commit message explains why we want to turn off tickets. 
So then I\n> looked at the comments in the patch and that didn't explain it either.\n> So then I read through this thread and that also didn't explain it.\n\nSorry for the lack of detail, I probably Stockholm syndromed myself after\nhaving spent some time in this code.\n\nWe turn off TLS session tickets for two reasons: a) we don't support TLS\nsession resumption, and some resumption capable client libraries can experience\nconnection failures if they try to use tickets received in the setup (Npgsql at\nleast had this problem in the past); b) it's network overhead in the connection\nsetup phase which doesn't give any value due to us not supporting their use.\n\nTLS tickets were disallowed in 2017 in 97d3a0b09 but as Andres found out,\nTLSv1.3 session tickets had a new API which we didn't call and thus issued\ntickets.\n\n> I don't doubt that you're doing the right thing here but it'd be nice\n> to document why it's the right thing someplace.\n\nI can add a summary of the above in the comment for future readers if you think\nthat would be useful.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 26 Jul 2024 16:23:41 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tls 1.3: sending multiple tickets" }, { "msg_contents": "On Fri, Jul 26, 2024 at 10:23 AM Daniel Gustafsson <[email protected]> wrote:\n> We turn off TLS session tickets for two reasons: a) we don't support TLS\n> session resumption, and some resumption capable client libraries can experience\n> connection failures if they try to use tickets received in the setup (Npgsql at\n> least had this problem in the past); b) it's network overhead in the connection\n> setup phase which doesn't give any value due to us not supporting their use.\n>\n> TLS tickets were disallowed in 2017 in 97d3a0b09 but as Andres found out,\n> TLSv1.3 session tickets had a new API which we didn't call and thus issued\n> tickets.\n\nThanks much for this explanation.\n\n> > I don't doubt that you're doing the right thing here but it'd be nice\n> > to document why it's the right thing someplace.\n>\n> I can add a summary of the above in the comment for future readers if you think\n> that would be useful.\n\nI think having (a) and (b) from above at the end of the \"Disallow SSL\nsession tickets\" comment would be helpful.\n\nWhile I'm complaining, the bit about SSL renegotiation could use a\nbetter comment, too. One of my chronic complaints about comments is\nthat they should say why we're doing things, not what we're doing. To\nme, having a comment that says \"Disallow SSL renegotiation\" followed\nimmediately by SSL_CTX_set_options(context, SSL_OP_NO_RENEGOTIATION)\nis a good example of what not to do, because, I mean, I can guess\nwithout the comment what that does. Actually, that comment is quite\nwell-written in terms of explaining why we do it in different ways\ndepending on the OpenSSL version; it just fails to explain why it's\nimportant in the first place. 
I'm pretty sure I know in that case that\nthere are CVEs about that topic, but that's just because I happen to\nremember the mailing list discussion on it, and I don't think every\nhacker is contractually required to remember that.\n\nI don't want to sound like I'm giving orders and I think it's really\nup to you how much effort you want to put in here, but I feel like any\nplace where we are doing X because of some property of a non-PG code\nbase with which a particular reader might not be familiar, we should\nhave a comment explaining why we're doing it. And especially if it's\nsecurity-relevant.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Jul 2024 14:29:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tls 1.3: sending multiple tickets" }, { "msg_contents": "> On 26 Jul 2024, at 20:29, Robert Haas <[email protected]> wrote:\n\n> One of my chronic complaints about comments is\n> that they should say why we're doing things, not what we're doing.\n\nAgreed.\n\n> I feel like any\n> place where we are doing X because of some property of a non-PG code\n> base with which a particular reader might not be familiar, we should\n> have a comment explaining why we're doing it. And especially if it's\n> security-relevant.\n\nI'm sure there are more interactions with OpenSSL, and TLS in general, which\nwarrants better comments but the attached takes a stab at the two examples in\nquestion here to get started (to avoid perfect get in the way of progress). \n\n--\nDaniel Gustafsson", "msg_date": "Mon, 29 Jul 2024 11:56:55 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tls 1.3: sending multiple tickets" }, { "msg_contents": "On Mon, Jul 29, 2024 at 5:57 AM Daniel Gustafsson <[email protected]> wrote:\n> I'm sure there are more interactions with OpenSSL, and TLS in general, which\n> warrants better comments but the attached takes a stab at the two examples in\n> question here to get started (to avoid perfect get in the way of progress).\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jul 2024 11:34:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tls 1.3: sending multiple tickets" }, { "msg_contents": "On 2024-07-26 13:55:29 +0200, Daniel Gustafsson wrote:\n> Thanks for review, I've applied this backpatched all the way.\n\nThanks for working on this!\n\n\n", "msg_date": "Mon, 29 Jul 2024 13:08:53 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tls 1.3: sending multiple tickets" } ]
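Condensed, the change applied in this thread comes down to the sketch below; the committed code and comments differ in detail, and the HAVE_SSL_CTX_SET_NUM_TICKETS guard is assumed to be the symbol produced by the new configure/meson probe. SSL_CTX_set_session_cache_mode(), SSL_CTX_set_options() and SSL_CTX_set_num_tickets() are standard OpenSSL APIs.

#include <openssl/ssl.h>

static void
disable_session_resumption(SSL_CTX *context)
{
	/* The server keeps no resumption state, so don't cache sessions. */
	SSL_CTX_set_session_cache_mode(context, SSL_SESS_CACHE_OFF);

	/* Suppress stateless session tickets (TLSv1.2 and below). */
	SSL_CTX_set_options(context, SSL_OP_NO_TICKET);

	/*
	 * Under TLSv1.3, SSL_OP_NO_TICKET alone makes OpenSSL switch to
	 * stateful tickets, which are still sent to the client and are
	 * useless without shared session state.  The API to suppress them
	 * exists only in OpenSSL 1.1.1+ (and LibreSSL 3.5.4+, where it is a
	 * no-op), hence the guard.
	 */
#ifdef HAVE_SSL_CTX_SET_NUM_TICKETS
	SSL_CTX_set_num_tickets(context, 0);
#endif
}

With this in place, the two "new session ticket" records observed after the Finished message at the top of the thread should no longer be sent.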
[ { "msg_contents": "Hi,\n\nAs I mentioned in my talk at 2024.pgconf.dev, I think that the biggest\nproblem with autovacuum as it exists today is that the cost delay is\nsometimes too low to keep up with the amount of vacuuming that needs\nto be done. I sketched a solution during the talk, but it was very\ncomplicated, so I started to try to think of simpler ideas that might\nstill solve the problem, or at least be better than what we have\ntoday.\n\nI think we might able to get fairly far by observing that if the\nnumber of running autovacuum workers is equal to the maximum allowable\nnumber of running autovacuum workers, that may be a sign of trouble,\nand the longer that situation persists, the more likely it is that\nwe're in trouble. So, a very simple algorithm would be: If the maximum\nnumber of workers have been running continuously for more than, say,\n10 minutes, assume we're falling behind and exempt all workers from\nthe cost limit for as long as the situation persists. One could\ncriticize this approach on the grounds that it causes a very sudden\nbehavior change instead of, say, allowing the rate of vacuuming to\ngradually increase. I'm curious to know whether other people think\nthat would be a problem.\n\nI think it might be OK, for a couple of reasons:\n\n1. I'm unconvinced that the vacuum_cost_delay system actually prevents\nvery many problems. I've fixed a lot of problems by telling users to\nraise the cost limit, but virtually never by lowering it. When we\nlowered the delay by an order of magnitude a few releases ago -\nequivalent to increasing the cost limit by an order of magnitude - I\ndidn't personally hear any complaints about that causing problems. So\ndisabling the delay completely some of the time might just be fine.\n\n1a. Incidentally, when I have seen problems because of vacuum running\n\"too fast\", it's not been because it was using up too much I/O\nbandwidth, but because it's pushed too much data out of cache too\nquickly. A long overnight vacuum can evict a lot of pages from the\nsystem page cache by morning - the ring buffer only protects our\nshared_buffers, not the OS cache. I don't think this can be fixed by\nrate-limiting vacuum, though: to keep the cache eviction at a level\nlow enough that you could be certain of not causing trouble, you'd\nhave to limit it to an extremely low rate which would just cause\nvacuuming not to keep up. The cure would be worse than the disease at\nthat point.\n\n2. If we decided to gradually increase the rate of vacuuming instead\nof just removing the throttling all at once, what formula would we use\nand why would that be the right idea? We'd need a lot of state to\nreally do a calculation of how fast we would need to go in order to\nkeep up, and that starts to rapidly turn into a very complicated\nproject along the lines of what I mooted in Vancouver. Absent that,\nthe only other idea I have is to gradually ramp up the cost limit\nhigher and higher, which we could do, but we would have no idea how\nfast to ramp it up, so anything we do here feels like it's just\npicking random numbers and calling them an algorithm.\n\nIf you like this idea, I'd like to know that, and hear any further\nthoughts you have about how to improve or refine it. 
If you don't, I'd\nlike to know that, too, and any alternatives you can propose,\nespecially alternatives that don't require crazy amounts of new\ninfrastructure to implement.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Jun 2024 15:39:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "cost delay brainstorming" }, { "msg_contents": "Hi,\n\nOn 2024-06-17 15:39:27 -0400, Robert Haas wrote:\n> As I mentioned in my talk at 2024.pgconf.dev, I think that the biggest\n> problem with autovacuum as it exists today is that the cost delay is\n> sometimes too low to keep up with the amount of vacuuming that needs\n> to be done.\n\nI agree it's a big problem, not sure it's *the* problem. But I'm happy to see\nit improved anyway, so it doesn't really matter.\n\nOne issue around all of this is that we pretty much don't have the tools to\nanalyze autovacuum behaviour across a larger number of systems in a realistic\nway :/. I find my own view of what precisely the problem is being heavily\nswayed by the last few problematic cases I've looked t.\n\n\n\n> I think we might able to get fairly far by observing that if the\n> number of running autovacuum workers is equal to the maximum allowable\n> number of running autovacuum workers, that may be a sign of trouble,\n> and the longer that situation persists, the more likely it is that\n> we're in trouble. So, a very simple algorithm would be: If the maximum\n> number of workers have been running continuously for more than, say,\n> 10 minutes, assume we're falling behind and exempt all workers from\n> the cost limit for as long as the situation persists. One could\n> criticize this approach on the grounds that it causes a very sudden\n> behavior change instead of, say, allowing the rate of vacuuming to\n> gradually increase. I'm curious to know whether other people think\n> that would be a problem.\n\nAnother issue is that it's easy to fall behind due to cost limits on systems\nwhere autovacuum_max_workers is smaller than the number of busy tables.\n\nIME one common situation is to have a single table that's being vacuumed too\nslowly due to cost limits, with everything else keeping up easily.\n\n\n> I think it might be OK, for a couple of reasons:\n>\n> 1. I'm unconvinced that the vacuum_cost_delay system actually prevents\n> very many problems. I've fixed a lot of problems by telling users to\n> raise the cost limit, but virtually never by lowering it. When we\n> lowered the delay by an order of magnitude a few releases ago -\n> equivalent to increasing the cost limit by an order of magnitude - I\n> didn't personally hear any complaints about that causing problems. So\n> disabling the delay completely some of the time might just be fine.\n\nI have seen disabling cost limits cause replication setups to fall over\nbecause the amount of WAL increases beyond what can be\nreplicated/archived/replayed. It's very easy to reach the issue when syncrep\nis enabled.\n\n\n\n> 1a. Incidentally, when I have seen problems because of vacuum running\n> \"too fast\", it's not been because it was using up too much I/O\n> bandwidth, but because it's pushed too much data out of cache too\n> quickly. A long overnight vacuum can evict a lot of pages from the\n> system page cache by morning - the ring buffer only protects our\n> shared_buffers, not the OS cache. 
I don't think this can be fixed by\n> rate-limiting vacuum, though: to keep the cache eviction at a level\n> low enough that you could be certain of not causing trouble, you'd\n> have to limit it to an extremely low rate which would just cause\n> vacuuming not to keep up. The cure would be worse than the disease at\n> that point.\n\nI've seen versions of this too. Ironically it's often made way worse by\nringbuffers, because even if there is space is shared buffers, we'll not move\nbuffers there, instead putting a lot of pressure on the OS page cache.\n\n\n> If you like this idea, I'd like to know that, and hear any further\n> thoughts you have about how to improve or refine it. If you don't, I'd\n> like to know that, too, and any alternatives you can propose,\n> especially alternatives that don't require crazy amounts of new\n> infrastructure to implement.\n\nI unfortunately don't know what to do about all of this with just the existing\nset of metrics :/.\n\n\nOne reason that cost limit can be important is that we often schedule\nautovacuums on tables in useless ways, over and over. Without cost limits\nproviding *some* protection against using up all IO bandwidth / WAL volume, we\ncould end up doing even worse.\n\n\nCommon causes of such useless vacuums I've seen:\n\n- Longrunning transaction prevents increasing relfrozenxid, we run autovacuum\n over and over on the same relation, using up the whole cost budget. This is\n particularly bad because often we'll not autovacuum anything else, building\n up a larger and larger backlog of actual work.\n\n- Tables, where on-access pruning works very well, end up being vacuumed far\n too frequently, because our autovacuum scheduling doesn't know about tuples\n having been cleaned up by on-access pruning.\n\n- Larger tables with occasional lock conflicts cause autovacuum to be\n cancelled and restarting from scratch over and over. If that happens before\n the second table scan, this can easily eat up the whole cost budget without\n making forward progress.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 14:37:19 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost delay brainstorming" }, { "msg_contents": "On Mon, Jun 17, 2024 at 3:39 PM Robert Haas <[email protected]> wrote:\n\n> So, a very simple algorithm would be: If the maximum number of workers\n> have been running continuously for more than, say,\n> 10 minutes, assume we're falling behind\n\n\nHmm, I don't know about the validity of this. I've seen plenty of cases\nwhere we hit the max workers but all is just fine. On the other hand, I\ndon't have an alternative trigger point yet. But I do overall like the idea\nof dynamically changing the delay. And agree it is pretty conservative.\n\n\n> 2. If we decided to gradually increase the rate of vacuuming instead of\n> just removing the throttling all at once, what formula would we use\n> and why would that be the right idea?\n\n\nWell, since the idea of disabling the delay is on the table, we could raise\nthe cost every minute by X% until we effectively reach an infinite cost /\nzero delay situation. 
I presume this would only affect currently running\nvacs, and future ones would get the default cost until things get triggered\nagain?\n\nCheers,\nGreg\n\nOn Mon, Jun 17, 2024 at 3:39 PM Robert Haas <[email protected]> wrote:So, a very simple algorithm would be: If the maximum number of workers have been running continuously for more than, say,\n10 minutes, assume we're falling behindHmm, I don't know about the validity of this. I've seen plenty of cases where we hit the max workers but all is just fine. On the other hand, I don't have an alternative trigger point yet. But I do overall like the idea of dynamically changing the delay. And agree it is pretty conservative. 2. If we decided to gradually increase the rate of vacuuming instead of just removing the throttling all at once, what formula would we use\nand why would that be the right idea?Well, since the idea of disabling the delay is on the table, we could raise the cost every minute by X% until we effectively reach an infinite cost / zero delay situation. I presume this would only affect currently running vacs, and future ones would get the default cost until things get triggered again?Cheers,Greg", "msg_date": "Mon, 17 Jun 2024 17:44:08 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost delay brainstorming" }, { "msg_contents": "On Tue, 18 Jun 2024 at 07:39, Robert Haas <[email protected]> wrote:\n> I think we might able to get fairly far by observing that if the\n> number of running autovacuum workers is equal to the maximum allowable\n> number of running autovacuum workers, that may be a sign of trouble,\n> and the longer that situation persists, the more likely it is that\n> we're in trouble. So, a very simple algorithm would be: If the maximum\n> number of workers have been running continuously for more than, say,\n> 10 minutes, assume we're falling behind and exempt all workers from\n> the cost limit for as long as the situation persists. One could\n> criticize this approach on the grounds that it causes a very sudden\n> behavior change instead of, say, allowing the rate of vacuuming to\n> gradually increase. I'm curious to know whether other people think\n> that would be a problem.\n\nI think a nicer solution would implement some sort of unified \"urgency\nlevel\" and slowly ramp up the vacuum_cost_limit according to that\nurgency rather than effectively switching to an infinite\nvacuum_cost_limit when all workers have been going for N mins. 
If\nthere is nothing else that requires a vacuum while all 3 workers have\nbeen busy for an hour or two, it seems strange to hurry them up so\nthey can more quickly start their next task -- being idle.\n\nAn additional feature that having this unified \"urgency level\" will\nprovide is the ability to prioritise auto-vacuum so that it works on\nthe most urgent tables first.\n\nI outlined some ideas in [1] of how this might be done.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvo8DWyt4CWhF=NPeRstz_78SteEuuNDfYO7cjp=7YTK4g@mail.gmail.com\n\n\n", "msg_date": "Tue, 18 Jun 2024 10:43:26 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost delay brainstorming" }, { "msg_contents": "On Mon, Jun 17, 2024 at 03:39:27PM -0400, Robert Haas wrote:\n> I think we might able to get fairly far by observing that if the\n> number of running autovacuum workers is equal to the maximum allowable\n> number of running autovacuum workers, that may be a sign of trouble,\n> and the longer that situation persists, the more likely it is that\n> we're in trouble. So, a very simple algorithm would be: If the maximum\n> number of workers have been running continuously for more than, say,\n> 10 minutes, assume we're falling behind and exempt all workers from\n> the cost limit for as long as the situation persists. One could\n> criticize this approach on the grounds that it causes a very sudden\n> behavior change instead of, say, allowing the rate of vacuuming to\n> gradually increase. I'm curious to know whether other people think\n> that would be a problem.\n> \n> I think it might be OK, for a couple of reasons:\n> \n> 1. I'm unconvinced that the vacuum_cost_delay system actually prevents\n> very many problems. I've fixed a lot of problems by telling users to\n> raise the cost limit, but virtually never by lowering it. When we\n> lowered the delay by an order of magnitude a few releases ago -\n> equivalent to increasing the cost limit by an order of magnitude - I\n> didn't personally hear any complaints about that causing problems. So\n> disabling the delay completely some of the time might just be fine.\n\nHave we ruled out further adjustments to the cost parameters as a first\nstep? If you are still recommending that folks raise it and never\nrecommending that folks lower it, ISTM that our defaults might still not be\nin the right ballpark. The autovacuum_vacuum_cost_delay adjustment you\nreference (commit cbccac3) is already 5 years old, so maybe it's worth\nanother look.\n\nPerhaps only tangentially related, but I feel like the default of 3 for\nautovacuum_max_workers is a bit low, especially for systems with many\ntables. Changing the default for that likely requires changing the default\nfor the delay/limit, too.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 18 Jun 2024 13:50:46 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost delay brainstorming" }, { "msg_contents": "Hi,\n\nOn 2024-06-18 13:50:46 -0500, Nathan Bossart wrote:\n> Have we ruled out further adjustments to the cost parameters as a first\n> step?\n\nI'm not against that, but I it doesn't address the issue that with the current\nlogic one set of values just isn't going to fit a 60MB that's allowed to burst\nto 100 iops and a 60TB database that has multiple 1M iops NVMe drives.\n\n\nThat said, the fact that vacuum_cost_page_hit is 1 and vacuum_cost_page_miss\nis 2 just doesn't make much sense aesthetically. 
There's a far bigger\nmultiplier in actual costs than that...\n\n\n\n> If you are still recommending that folks raise it and never recommending\n> that folks lower it, ISTM that our defaults might still not be in the right\n> ballpark. The autovacuum_vacuum_cost_delay adjustment you reference (commit\n> cbccac3) is already 5 years old, so maybe it's worth another look.\n\nAdjusting cost delay much lower doesn't make much sense imo. It's already only\n2ms on a 1ms granularity variable. We could increase the resolution, but\nsleeping for much shorter often isn't that cheap (you need to set up hardware\ntimers all the time and due to the short time they can't be combined with\nother timers) and/or barely gives time to switch to other tasks.\n\n\nSo we'd have to increase the cost limit.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2024 13:32:38 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost delay brainstorming" }, { "msg_contents": "On Tue, Jun 18, 2024 at 01:32:38PM -0700, Andres Freund wrote:\n> On 2024-06-18 13:50:46 -0500, Nathan Bossart wrote:\n>> Have we ruled out further adjustments to the cost parameters as a first\n>> step?\n> \n> I'm not against that, but I it doesn't address the issue that with the current\n> logic one set of values just isn't going to fit a 60MB that's allowed to burst\n> to 100 iops and a 60TB database that has multiple 1M iops NVMe drives.\n\nTrue.\n\n>> If you are still recommending that folks raise it and never recommending\n>> that folks lower it, ISTM that our defaults might still not be in the right\n>> ballpark. The autovacuum_vacuum_cost_delay adjustment you reference (commit\n>> cbccac3) is already 5 years old, so maybe it's worth another look.\n> \n> Adjusting cost delay much lower doesn't make much sense imo. It's already only\n> 2ms on a 1ms granularity variable. We could increase the resolution, but\n> sleeping for much shorter often isn't that cheap (you need to set up hardware\n> timers all the time and due to the short time they can't be combined with\n> other timers) and/or barely gives time to switch to other tasks.\n> \n> \n> So we'd have to increase the cost limit.\n\nAgreed.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 18 Jun 2024 16:13:08 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost delay brainstorming" }, { "msg_contents": "\nHi, \n> Hi,\n>\n> On 2024-06-17 15:39:27 -0400, Robert Haas wrote:\n>> As I mentioned in my talk at 2024.pgconf.dev, I think that the biggest\n>> problem with autovacuum as it exists today is that the cost delay is\n>> sometimes too low to keep up with the amount of vacuuming that needs\n>> to be done.\n>\n> I agree it's a big problem, not sure it's *the* problem. But I'm happy to see\n> it improved anyway, so it doesn't really matter.\n\nIn my past knowldege, another big problem is the way we triggers an\nautovacuum on a relation. With the current stategy, if we have lots of\nwrites between 9:00 AM ~ 5:00 PM, it is more likely to triggers an\nautovauum at that time which is the peak time of application as well.\n\nIf we can trigger vacuum at off-peak time, like 00:00 am ~ 05:00 am,\neven we use lots of resource, it is unlikly cause any issue.\n\n> One issue around all of this is that we pretty much don't have the tools to\n> analyze autovacuum behaviour across a larger number of systems in a realistic\n> way :/. 
I find my own view of what precisely the problem is being heavily\n> swayed by the last few problematic cases I've looked t.\n>\n>\n>> I think we might able to get fairly far by observing that if the\n>> number of running autovacuum workers is equal to the maximum allowable\n>> number of running autovacuum workers, that may be a sign of trouble,\n>> and the longer that situation persists, the more likely it is that\n>> we're in trouble. So, a very simple algorithm would be: If the maximum\n>> number of workers have been running continuously for more than, say,\n>> 10 minutes, assume we're falling behind and exempt all workers from\n>> the cost limit for as long as the situation persists. One could\n>> criticize this approach on the grounds that it causes a very sudden\n>> behavior change instead of, say, allowing the rate of vacuuming to\n>> gradually increase. I'm curious to know whether other people think\n>> that would be a problem.\n>\n> Another issue is that it's easy to fall behind due to cost limits on systems\n> where autovacuum_max_workers is smaller than the number of busy tables.\n>\n> IME one common situation is to have a single table that's being vacuumed too\n> slowly due to cost limits, with everything else keeping up easily.\n>\n>\n>> I think it might be OK, for a couple of reasons:\n>>\n>> 1. I'm unconvinced that the vacuum_cost_delay system actually prevents\n>> very many problems. I've fixed a lot of problems by telling users to\n>> raise the cost limit, but virtually never by lowering it. When we\n>> lowered the delay by an order of magnitude a few releases ago -\n>> equivalent to increasing the cost limit by an order of magnitude - I\n>> didn't personally hear any complaints about that causing problems. So\n>> disabling the delay completely some of the time might just be fine.\n>\n> I have seen disabling cost limits cause replication setups to fall over\n> because the amount of WAL increases beyond what can be\n> replicated/archived/replayed. It's very easy to reach the issue when syncrep\n> is enabled.\n\nUsually applications have off-peak time, if we can use such character, we\nmight have some good result. But I know it is hard to do in PostgreSQL\ncore, I ever tried it in an external system (external minotor +\ncrontab-like). I can see the CPU / Memory ussage of autovacuum reduced a\nlot at the daytime (application peak time).\n\n\n>> 1a. Incidentally, when I have seen problems because of vacuum running\n>> \"too fast\", it's not been because it was using up too much I/O\n>> bandwidth, but because it's pushed too much data out of cache too\n>> quickly. A long overnight vacuum can evict a lot of pages from the\n>> system page cache by morning - the ring buffer only protects our\n>> shared_buffers, not the OS cache. I don't think this can be fixed by\n>> rate-limiting vacuum, though: to keep the cache eviction at a level\n>> low enough that you could be certain of not causing trouble, you'd\n>> have to limit it to an extremely low rate which would just cause\n>> vacuuming not to keep up. The cure would be worse than the disease at\n>> that point.\n>\n> I've seen versions of this too. Ironically it's often made way worse by\n> ringbuffers, because even if there is space is shared buffers, we'll not move\n> buffers there, instead putting a lot of pressure on the OS page cache.\n\nI can understand the pressure on the OS page cache, but I thought the\nOS page cache can be reused easily for any other purposes. Not sure what\noutstanding issue it can cause. 
\n\n> - Longrunning transaction prevents increasing relfrozenxid, we run autovacuum\n> over and over on the same relation, using up the whole cost budget. This is\n> particularly bad because often we'll not autovacuum anything else, building\n> up a larger and larger backlog of actual work.\n\nCould we maintain a pg_class.last_autovacuum_min_xid during vacuum? So\nif we compare the OldestXminXid with pg_class.last_autovacuum_min_xid\nbefore doing the real work. I think we can use a in-place update on it\nto avoid too many versions of pg_class tuples when updating\npg_class.last_autovacuum_min_xid.\n\n>\n> - Tables, where on-access pruning works very well, end up being vacuumed far\n> too frequently, because our autovacuum scheduling doesn't know about tuples\n> having been cleaned up by on-access pruning.\n\nGood to know this case. if we update the pg_stats_xx metrics when on-access\npruning, would it is helpful on this? \n\n> - Larger tables with occasional lock conflicts cause autovacuum to be\n> cancelled and restarting from scratch over and over. If that happens before\n> the second table scan, this can easily eat up the whole cost budget without\n> making forward progress.\n\nOff-peak time + manual vacuum should be helpful I think.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Sat, 22 Jun 2024 12:10:32 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost delay brainstorming" }, { "msg_contents": "Andy Fan <[email protected]> writes:\n\n>\n>> - Longrunning transaction prevents increasing relfrozenxid, we run autovacuum\n>> over and over on the same relation, using up the whole cost budget. This is\n>> particularly bad because often we'll not autovacuum anything else, building\n>> up a larger and larger backlog of actual work.\n>\n> Could we maintain a pg_class.last_autovacuum_min_xid during vacuum? So\n> if we compare the OldestXminXid with pg_class.last_autovacuum_min_xid\n> before doing the real work.\n\nMaintaining the oldestXminXid on this relation might be expensive. \n\n>>\n>> - Tables, where on-access pruning works very well, end up being vacuumed far\n>> too frequently, because our autovacuum scheduling doesn't know about tuples\n>> having been cleaned up by on-access pruning.\n>\n> Good to know this case. if we update the pg_stats_xx metrics when on-access\n> pruning, would it is helpful on this?\n\nI got the answer myself, it doesn't work because on-access pruing\nworking on per-index level and one relation may has many indexes. and\npg_stats_xx works at relation level. So the above proposal doesn't\nwork. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Wed, 26 Jun 2024 05:39:06 +0000", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost delay brainstorming" } ]
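As a concrete companion to the cost-limit discussion in the thread above, here is a minimal SQL sketch of the knobs involved. The numbers and the table name ("busy_events") are illustrative assumptions only, not values recommended by anyone in the thread.

    -- Raise the cluster-wide autovacuum cost budget; figures are placeholders.
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;
    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';
    SELECT pg_reload_conf();

    -- Per-table override for a hypothetical table that keeps falling behind;
    -- a zero delay disables cost-based throttling for this table only.
    ALTER TABLE busy_events
        SET (autovacuum_vacuum_cost_limit = 10000,
             autovacuum_vacuum_cost_delay = 0);

Per-table storage parameters such as these are often the least intrusive way to experiment before touching the global defaults.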
[ { "msg_contents": "AtCommit_Memory and friends have done $SUBJECT for at least a couple\nof decades, but in the wake of analyzing bug #18512 [1], I'm feeling\nlike that's a really bad idea. There is too much code running\naround the system that assumes that it's fine to leak stuff in\nCurrentMemoryContext. If we execute any such thing between\nAtCommit_Memory and the next AtStart_Memory, presto: we have a\nsession-lifespan memory leak. I'm almost feeling that we should\nhave a policy that CurrentMemoryContext should never point at\nTopMemoryContext.\n\nAs to what to do about it: I'm imagining that instead of resetting\nCurrentMemoryContext to TopMemoryContext, we set it to some special\ncontext that we expect we can reset every so often, like at the start\nof the next transaction. The existing TransactionAbortContext is\na very similar thing, and maybe could be repurposed/shared with this\nusage.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/18512-6e89f654d7da884d%40postgresql.org\n\n\n", "msg_date": "Mon, 17 Jun 2024 16:37:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Xact end leaves CurrentMemoryContext = TopMemoryContext" }, { "msg_contents": "Hi,\n\nOn 2024-06-17 16:37:05 -0400, Tom Lane wrote:\n> As to what to do about it: I'm imagining that instead of resetting\n> CurrentMemoryContext to TopMemoryContext, we set it to some special\n> context that we expect we can reset every so often, like at the start\n> of the next transaction. The existing TransactionAbortContext is\n> a very similar thing, and maybe could be repurposed/shared with this\n> usage.\n\nOne issue is that that could lead to hard to find use-after-free issues in\ncurrently working code. Right now allocations made \"between transactions\"\nlive forever, if we just use a different context it won't anymore.\n\nParticularly if the reset is only occasional, we'd make it hard to find\nbuggy allocations.\n\nI wonder if we ought to set CurrentMemoryContext to NULL in that timeframe,\nforcing code to explicitly choose what lifetime is needed, rather than just\ndefaulting such code into changed semantics.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 14:43:05 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Xact end leaves CurrentMemoryContext = TopMemoryContext" }, { "msg_contents": "On Tue, 18 Jun 2024 at 08:37, Tom Lane <[email protected]> wrote:\n> As to what to do about it: I'm imagining that instead of resetting\n> CurrentMemoryContext to TopMemoryContext, we set it to some special\n> context that we expect we can reset every so often, like at the start\n> of the next transaction. The existing TransactionAbortContext is\n> a very similar thing, and maybe could be repurposed/shared with this\n> usage.\n>\n> Thoughts?\n\nInstead, could we just not delete TopTransactionContext in\nAtCommit_Memory() and instead do MemoryContextReset() on it? Likewise\nin AtCleanup_Memory().\n\nIf we got to a stage where we didn't expect anything to allocate into\nthat context outside of a transaction, we could check if the context\nis still reset in AtStart_Memory() and do something like raise a\nWARNING on debug builds (or Assert()) to alert us that some code that\nbroke our expectations.\n\nIt might also be a very tiny amount more efficient to not delete the\ncontext so we don't have to fetch a new context from the context\nfreelist in AtStart_Memory(). 
Certainly, it wouldn't add any\noverhead. Adding a new special context would and so would the logic\nto reset it every so often.\n\nDavid\n\n\n", "msg_date": "Tue, 18 Jun 2024 16:34:43 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Xact end leaves CurrentMemoryContext = TopMemoryContext" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> Instead, could we just not delete TopTransactionContext in\n> AtCommit_Memory() and instead do MemoryContextReset() on it? Likewise\n> in AtCleanup_Memory().\n\nHmm, that's a nice idea. Maybe reset again in AtStart_Memory, although\nthat seems optional. My first reaction was \"what about memory context\ncallbacks attached to TopTransactionContext?\" ... but those are defined\nto be fired on either reset or delete, so semantically this seems like\nit creates no issues. And you're right that not constantly deleting\nand recreating that context should save some microscopic amount.\n\n> If we got to a stage where we didn't expect anything to allocate into\n> that context outside of a transaction, we could check if the context\n> is still reset in AtStart_Memory() and do something like raise a\n> WARNING on debug builds (or Assert()) to alert us that some code that\n> broke our expectations.\n\nMy point is exactly that I don't think that we can expect that,\nor at least that the cost of guaranteeing it will vastly outweigh\nany possible benefit. (So I wasn't excited about Andres' suggestion.\nBut this one seems promising.)\n\nI'll poke at this tomorrow, unless you're hot to try it right now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jun 2024 00:53:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Xact end leaves CurrentMemoryContext = TopMemoryContext" }, { "msg_contents": "On Tue, 18 Jun 2024 at 16:53, Tom Lane <[email protected]> wrote:\n> I'll poke at this tomorrow, unless you're hot to try it right now.\n\nPlease go ahead. I was just in suggestion mode here.\n\nDavid\n\n\n", "msg_date": "Tue, 18 Jun 2024 17:30:21 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Xact end leaves CurrentMemoryContext = TopMemoryContext" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Tue, 18 Jun 2024 at 16:53, Tom Lane <[email protected]> wrote:\n>> I'll poke at this tomorrow, unless you're hot to try it right now.\n\n> Please go ahead. I was just in suggestion mode here.\n\nSo I tried that, and while it kind of worked, certain parts of the\nsystem (notably logical replication) had acute indigestion. Turns\nout there is a fair amount of code that does\n\n\tStartTransactionCommand();\n\t... some random thing or other ...\n\tCommitTransactionCommand();\n\nand does not stop to think at all about whether that has any effect on\nits memory context. Andres' idea would break every single such place,\nand this idea isn't much better because while it does provide a\ncurrent memory context after CommitTransactionCommand, that context is\neffectively short-lived: the next Start/CommitTransactionCommand will\ntrash it. That broke a lot more places than I'd hoped, mostly in\nobscure ways.\n\nAfter awhile I had an epiphany: what we should do is make\nCommitTransactionCommand restore the memory context that was active\nbefore StartTransactionCommand. 
That's what we want in every place\nthat was cognizant of this issue, and it seems to be the case in every\nplace that wasn't doing anything explicit about it, either.\n\nThe 0001 patch attached does that, and seems to work nicely.\nI made it implement the idea of recycling TopTransactionContext,\ntoo. (Interestingly, src/backend/utils/mmgr/README *already*\nclaims we manage TopTransactionContext this way. Did we do that\nand then change it back in the mists of time?) The core parts\nof the patch are all in xact.c --- the other diffs are just random\ncleanup that I found while surveying use of TopMemoryContext and\nCommitTransactionCommand.\n\nAlso, 0002 is what I propose for the back branches. It just adds\nmemory context save/restore in notify interrupt processing to solve\nthe original bug report, as well as in sinval interrupt processing\nwhich I discovered has the same disease. We don't need this in\nHEAD if we apply 0001.\n\nAt this point I'd be inclined to wait for the branch to be made,\nand then apply 0001 in HEAD/v18 only and 0002 in v17 and before.\nWhile 0001 seems fairly straightforward, it's still a nontrivial\nchange and I'm hesitant to shove it in at this late stage of the\nv17 cycle.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 18 Jun 2024 15:28:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Xact end leaves CurrentMemoryContext = TopMemoryContext" }, { "msg_contents": "Hi,\n\nOn 2024-06-18 15:28:03 -0400, Tom Lane wrote:\n> After awhile I had an epiphany: what we should do is make\n> CommitTransactionCommand restore the memory context that was active\n> before StartTransactionCommand. That's what we want in every place\n> that was cognizant of this issue, and it seems to be the case in every\n> place that wasn't doing anything explicit about it, either.\n\nI like it.\n\nI wonder if there's an argument the \"persistent\" TopTransactionContext should\nlive under a different name outside of transactions, to avoid references to it\nworking in a context where it's not valid? It's probably not worth it, but\nnot sure.\n\n\n> The 0001 patch attached does that, and seems to work nicely.\n> I made it implement the idea of recycling TopTransactionContext,\n> too\n\nNice.\n\nI think there might be some benefit to doing that for some more things,\nlater/separately. E.g. the allocation of TopTransactionResourceOwner shows up\nin profiles for workloads with small transactions. Which got a bit worse with\n17 (largely bought back in other places by the advantages of the new resowner\nsystem).\n\n\n\n> At this point I'd be inclined to wait for the branch to be made,\n> and then apply 0001 in HEAD/v18 only and 0002 in v17 and before.\n> While 0001 seems fairly straightforward, it's still a nontrivial\n> change and I'm hesitant to shove it in at this late stage of the\n> v17 cycle.\n\nSeems reasonable.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2024 13:23:36 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Xact end leaves CurrentMemoryContext = TopMemoryContext" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> I wonder if there's an argument the \"persistent\" TopTransactionContext should\n> live under a different name outside of transactions, to avoid references to it\n> working in a context where it's not valid? It's probably not worth it, but\n> not sure.\n\nHm. 
We could stash the long-lived pointer in a static variable,\nand only install it in TopTransactionContext when you're supposed\nto use it. I tend to agree not worth it, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jun 2024 16:48:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Xact end leaves CurrentMemoryContext = TopMemoryContext" } ]
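To make the hazard discussed in the thread above concrete, the following C sketch shows the defensive idiom for code that runs a transaction block from an arbitrary memory context. It is an illustration only, with a made-up function name; it is not taken from the patches posted in the thread.

    /* Hypothetical example; not code from the patches in this thread. */
    #include "postgres.h"

    #include "access/xact.h"
    #include "utils/memutils.h"

    static void
    run_one_maintenance_step(void)
    {
        /* Remember the caller's context before entering a transaction. */
        MemoryContext oldcontext = CurrentMemoryContext;

        StartTransactionCommand();
        /* ... catalog lookups, palloc() in per-transaction memory ... */
        CommitTransactionCommand();

        /*
         * Historically CurrentMemoryContext would now point at
         * TopMemoryContext, so any later palloc() here would live for the
         * rest of the session.  Switching back to the saved context keeps
         * such allocations bounded; with the proposed patch the restore
         * happens inside CommitTransactionCommand() itself.
         */
        MemoryContextSwitchTo(oldcontext);
    }

The same save-and-restore shape is what the proposed CommitTransactionCommand() change automates for every caller.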
[ { "msg_contents": "This commit added enable_group_by_reordering:\n\n\tcommit 0452b461bc4\n\tAuthor: Alexander Korotkov <[email protected]>\n\tDate: Sun Jan 21 22:21:36 2024 +0200\n\t\n\t Explore alternative orderings of group-by pathkeys during optimization.\n\t\n\t When evaluating a query with a multi-column GROUP BY clause, we can minimize\n\t sort operations or avoid them if we synchronize the order of GROUP BY clauses\n\t with the ORDER BY sort clause or sort order, which comes from the underlying\n\t query tree. Grouping does not imply any ordering, so we can compare\n\t the keys in arbitrary order, and a Hash Agg leverages this. But for Group Agg,\n\t we simply compared keys in the order specified in the query. This commit\n\t explores alternative ordering of the keys, trying to find a cheaper one.\n\t\n\t The ordering of group keys may interact with other parts of the query, some of\n\t which may not be known while planning the grouping. For example, there may be\n\t an explicit ORDER BY clause or some other ordering-dependent operation higher up\n\t in the query, and using the same ordering may allow using either incremental\n\t sort or even eliminating the sort entirely.\n\t\n\t The patch always keeps the ordering specified in the query, assuming the user\n\t might have additional insights.\n\t\n\t This introduces a new GUC enable_group_by_reordering so that the optimization\n\t may be disabled if needed.\n\t\n\t Discussion: https://postgr.es/m/7c79e6a5-8597-74e8-0671-1c39d124c9d6%40sigaev.ru\n\t Author: Andrei Lepikhov, Teodor Sigaev\n\t Reviewed-by: Tomas Vondra, Claudio Freire, Gavin Flower, Dmitry Dolgov\n\t Reviewed-by: Robert Haas, Pavel Borisov, David Rowley, Zhihong Yu\n\t Reviewed-by: Tom Lane, Alexander Korotkov, Richard Guo, Alena Rybakina\n\nIt mentions it was added as a GUC to postgresql.conf, but I see no SGML\ndocs for this new GUC value. Would someone please add docs for this? \nThanks.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 17 Jun 2024 22:32:56 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Missing docs for new enable_group_by_reordering GUC" }, { "msg_contents": "On 6/18/24 09:32, Bruce Momjian wrote:\n> This commit added enable_group_by_reordering:\n> \n> \tcommit 0452b461bc4\n> \tAuthor: Alexander Korotkov <[email protected]>\n> \tDate: Sun Jan 21 22:21:36 2024 +0200\n> It mentions it was added as a GUC to postgresql.conf, but I see no SGML\n> docs for this new GUC value. Would someone please add docs for this?\n> Thanks.\nIt is my mistake, sorry for that. See the patch in attachment.\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Tue, 18 Jun 2024 13:13:54 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs for new enable_group_by_reordering GUC" }, { "msg_contents": "On Tue, Jun 18, 2024 at 9:14 AM Andrei Lepikhov <[email protected]> wrote:\n> On 6/18/24 09:32, Bruce Momjian wrote:\n> > This commit added enable_group_by_reordering:\n> >\n> > commit 0452b461bc4\n> > Author: Alexander Korotkov <[email protected]>\n> > Date: Sun Jan 21 22:21:36 2024 +0200\n> > It mentions it was added as a GUC to postgresql.conf, but I see no SGML\n> > docs for this new GUC value. Would someone please add docs for this?\n> > Thanks.\n> It is my mistake, sorry for that. See the patch in attachment.\n\nBruce, thank for noticing. 
Andrei, thank you for providing a fix.\nPlease, check the revised patch.\n\n------\nRegards,\nAlexander Korotkov\nSupabase", "msg_date": "Tue, 18 Jun 2024 15:13:11 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs for new enable_group_by_reordering GUC" }, { "msg_contents": "Hi, Alexander!\n\nOn Tue, 18 Jun 2024 at 16:13, Alexander Korotkov <[email protected]>\nwrote:\n\n> On Tue, Jun 18, 2024 at 9:14 AM Andrei Lepikhov <[email protected]> wrote:\n> > On 6/18/24 09:32, Bruce Momjian wrote:\n> > > This commit added enable_group_by_reordering:\n> > >\n> > > commit 0452b461bc4\n> > > Author: Alexander Korotkov <[email protected]>\n> > > Date: Sun Jan 21 22:21:36 2024 +0200\n> > > It mentions it was added as a GUC to postgresql.conf, but I see no SGML\n> > > docs for this new GUC value. Would someone please add docs for this?\n> > > Thanks.\n> > It is my mistake, sorry for that. See the patch in attachment.\n>\n> Bruce, thank for noticing. Andrei, thank you for providing a fix.\n> Please, check the revised patch.\n>\nI briefly looked into this docs patch. Planner gucs are arranged\nalphabetically, so enable_group_by_reordering is better to come after\nenable-gathermerge not before.\n\n+ Enables or disables the reordering of keys in a\n+ <literal>GROUP BY</literal> clause to match the ordering keys of a\n+ child node of the plan, such as an index scan. When turned off,\nkeys\n+ in a <literal>GROUP BY</literal> clause are only reordered to match\n+ the <literal>ORDER BY</literal> clause, if any. The default is\n+ <literal>on</literal>.\nI'd also suggest the same style as already exists\nfor enable_presorted_aggregate guc i.e:\n\nControls if the query planner will produce a plan which will provide\n<literal>GROUP BY</literal> keys presorted in the order of keys of a child\nnode of the plan, such as an index scan. When disabled, the query planner\nwill produce a plan with <literal>GROUP BY</literal> keys only reordered to\nmatch\nthe <literal>ORDER BY</literal> clause, if any. When enabled, the planner\nwill try to produce a more efficient plan. The default value is on.\n\nRegards,\nPavel Borisov\nSupabase\n\nHi, Alexander!On Tue, 18 Jun 2024 at 16:13, Alexander Korotkov <[email protected]> wrote:On Tue, Jun 18, 2024 at 9:14 AM Andrei Lepikhov <[email protected]> wrote:\n> On 6/18/24 09:32, Bruce Momjian wrote:\n> > This commit added enable_group_by_reordering:\n> >\n> >       commit 0452b461bc4\n> >       Author: Alexander Korotkov <[email protected]>\n> >       Date:   Sun Jan 21 22:21:36 2024 +0200\n> > It mentions it was added as a GUC to postgresql.conf, but I see no SGML\n> > docs for this new GUC value.  Would someone please add docs for this?\n> > Thanks.\n> It is my mistake, sorry for that. See the patch in attachment.\n\nBruce, thank for noticing.  Andrei, thank you for providing a fix.\nPlease, check the revised patch.I briefly looked into this docs patch. Planner gucs are arranged alphabetically, so enable_group_by_reordering is better to come after enable-gathermerge not before. +  Enables or disables the reordering of keys in a+        <literal>GROUP BY</literal> clause to match the ordering keys of a+        child node of the plan, such as an index scan. When turned off, keys+        in a <literal>GROUP BY</literal> clause are only reordered to match+        the <literal>ORDER BY</literal> clause, if any. 
The default is+        <literal>on</literal>.I'd also suggest the same style as already exists for enable_presorted_aggregate guc i.e:Controls if the query planner will produce a plan which will provide <literal>GROUP BY</literal> keys presorted in the order of keys of a child node of the plan, such as an index scan. When disabled, the query planner will produce a plan with <literal>GROUP BY</literal> keys only reordered to matchthe <literal>ORDER BY</literal> clause, if any. When enabled, the planner will try to produce a more efficient plan. The default value is on.Regards, Pavel BorisovSupabase", "msg_date": "Tue, 18 Jun 2024 16:45:25 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs for new enable_group_by_reordering GUC" }, { "msg_contents": ">\n> Controls if the query planner will produce a plan which will provide\n> <literal>GROUP BY</literal> keys presorted in the order of keys of a child\n> node of the plan, such as an index scan. When disabled, the query planner\n> will produce a plan with <literal>GROUP BY</literal> keys only reordered to\n> match\n> the <literal>ORDER BY</literal> clause, if any. When enabled, the planner\n> will try to produce a more efficient plan. The default value is on.\n>\nA correction of myself: presorted -> sorted, reordered ->sorted\n\nRegards,\nPavel\n\nControls if the query planner will produce a plan which will provide <literal>GROUP BY</literal> keys presorted in the order of keys of a child node of the plan, such as an index scan. When disabled, the query planner will produce a plan with <literal>GROUP BY</literal> keys only reordered to matchthe <literal>ORDER BY</literal> clause, if any. When enabled, the planner will try to produce a more efficient plan. The default value is on.A correction of myself: presorted -> sorted, reordered ->sortedRegards,Pavel", "msg_date": "Tue, 18 Jun 2024 17:13:57 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs for new enable_group_by_reordering GUC" }, { "msg_contents": "On Tue, Jun 18, 2024 at 4:14 PM Pavel Borisov <[email protected]> wrote:\n>> Controls if the query planner will produce a plan which will provide <literal>GROUP BY</literal> keys presorted in the order of keys of a child node of the plan, such as an index scan. When disabled, the query planner will produce a plan with <literal>GROUP BY</literal> keys only reordered to match\n>> the <literal>ORDER BY</literal> clause, if any. When enabled, the planner will try to produce a more efficient plan. The default value is on.\n> A correction of myself: presorted -> sorted, reordered ->sorted\n\nThank you for your review. I think all of this make sense. Please,\ncheck the revised patch attached.\n\n------\nRegards,\nAlexander Korotkov\nSupabase", "msg_date": "Wed, 19 Jun 2024 04:27:31 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs for new enable_group_by_reordering GUC" }, { "msg_contents": "Hi, Alexander!\n\nOn Wed, 19 Jun 2024 at 05:27, Alexander Korotkov <[email protected]>\nwrote:\n\n> On Tue, Jun 18, 2024 at 4:14 PM Pavel Borisov <[email protected]>\n> wrote:\n> >> Controls if the query planner will produce a plan which will provide\n> <literal>GROUP BY</literal> keys presorted in the order of keys of a child\n> node of the plan, such as an index scan. 
When disabled, the query planner\n> will produce a plan with <literal>GROUP BY</literal> keys only reordered to\n> match\n> >> the <literal>ORDER BY</literal> clause, if any. When enabled, the\n> planner will try to produce a more efficient plan. The default value is on.\n> > A correction of myself: presorted -> sorted, reordered ->sorted\n>\n> Thank you for your review. I think all of this make sense. Please,\n> check the revised patch attached.\n>\nTo me patch v3 looks good.\n\nRegards,\nPavel Borisov\nSupabase\n\nHi, Alexander!On Wed, 19 Jun 2024 at 05:27, Alexander Korotkov <[email protected]> wrote:On Tue, Jun 18, 2024 at 4:14 PM Pavel Borisov <[email protected]> wrote:\n>> Controls if the query planner will produce a plan which will provide <literal>GROUP BY</literal> keys presorted in the order of keys of a child node of the plan, such as an index scan. When disabled, the query planner will produce a plan with <literal>GROUP BY</literal> keys only reordered to match\n>> the <literal>ORDER BY</literal> clause, if any. When enabled, the planner will try to produce a more efficient plan. The default value is on.\n> A correction of myself: presorted -> sorted, reordered ->sorted\n\nThank you for your review.  I think all of this make sense.  Please,\ncheck the revised patch attached.To me patch v3 looks good.Regards,Pavel BorisovSupabase", "msg_date": "Wed, 19 Jun 2024 19:02:19 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs for new enable_group_by_reordering GUC" }, { "msg_contents": "On Wed, Jun 19, 2024 at 6:02 PM Pavel Borisov <[email protected]> wrote:\n> On Wed, 19 Jun 2024 at 05:27, Alexander Korotkov <[email protected]> wrote:\n>>\n>> On Tue, Jun 18, 2024 at 4:14 PM Pavel Borisov <[email protected]> wrote:\n>> >> Controls if the query planner will produce a plan which will provide <literal>GROUP BY</literal> keys presorted in the order of keys of a child node of the plan, such as an index scan. When disabled, the query planner will produce a plan with <literal>GROUP BY</literal> keys only reordered to match\n>> >> the <literal>ORDER BY</literal> clause, if any. When enabled, the planner will try to produce a more efficient plan. The default value is on.\n>> > A correction of myself: presorted -> sorted, reordered ->sorted\n>>\n>> Thank you for your review. I think all of this make sense. Please,\n>> check the revised patch attached.\n>\n> To me patch v3 looks good.\n\nOk, thank you. I'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 19 Jun 2024 22:35:49 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs for new enable_group_by_reordering GUC" } ]
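For anyone who wants to see the documented behavior from psql, a small hedged demonstration follows. The table, index and data are invented for this example, and whether the planner actually reorders the grouping keys depends on statistics and costs.

    -- Illustrative objects only.
    CREATE TABLE gb_demo (a int, b int, c int);
    CREATE INDEX gb_demo_a_b_idx ON gb_demo (a, b);
    INSERT INTO gb_demo
        SELECT i % 10, i % 1000, i FROM generate_series(1, 100000) AS i;
    ANALYZE gb_demo;

    SET enable_group_by_reordering = on;
    EXPLAIN (COSTS OFF)
    SELECT b, a, count(*) FROM gb_demo GROUP BY b, a;
    -- The planner may group by (a, b) instead, matching the index order
    -- and avoiding an explicit sort.

    SET enable_group_by_reordering = off;
    EXPLAIN (COSTS OFF)
    SELECT b, a, count(*) FROM gb_demo GROUP BY b, a;
    -- Grouping keys are now only reordered to match an ORDER BY clause,
    -- if any, as described in the documentation patch above.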
[ { "msg_contents": "Hi all,\n\nOn HEAD, xlog.c has the following comment, which has been on my own\nTODO list for a couple of weeks now:\n\n * TODO: With a bit of extra work we could just start with a pgstat file\n * associated with the checkpoint redo location we're starting from.\n\nPlease find a patch series to implement that, giving the possibility\nto keep statistics after a crash rather than discard them. I have\nbeen looking at the code for a while, before settling down on:\n- Forcing the flush of the pgstats file to happen during non-shutdown\ncheckpoint and restart points, after updating the control file's redo\nLSN and the critical sections in the area.\n- Leaving the current before_shmem_exit() callback around, as a matter\nof delaying the flush of the stats for as long as possible in a\nshutdown sequence. This also makes the single-user mode shutdown\nsimpler.\n- Adding the redo LSN to the pgstats file, with a bump of\nPGSTAT_FILE_FORMAT_ID, cross-checked with the redo LSN. This change\nis independently useful on its own when loading stats after a clean\nstartup, as well.\n- The crash recovery case is simplified, as there is no more need for\nthe \"discard\" code path.\n- Not using a logic where I would need to stick a LSN into the stats\nfile name, implying that we would need a potential lookup at the\ncontents of pg_stat/ to clean up past files at crash recovery. These\nshould not be costly, but I'd rather not add more of these.\n\n7ff23c6d277d, that has removed the last call of CreateCheckPoint()\nfrom the startup process, is older than 5891c7a8ed8f, still it seems\nto me that pgstats relies on some areas of the code that don't make\nsense on HEAD (see locking mentioned at the top of the write routine\nfor example). The logic gets straight-forward to think about as\nrestart points and checkpoints always run from the checkpointer,\nimplying that pgstat_write_statsfile() is already called only from the\npostmaster in single-user mode or the checkpointer itself, at\nshutdown.\n\nAttached is a patch set, with the one being the actual feature, with\nsome stuff prior to that:\n- 0001 adds the redo LSN to the pgstats file flushed.\n- 0002 adds an assertion in pgstat_write_statsfile(), to check from\nwhere it is called.\n- 0003 with more debugging.\n- 0004 is the meat of the thread.\n\nI am adding that to the next CF. Thoughts and comments are welcome.\nThanks,\n--\nMichael", "msg_date": "Tue, 18 Jun 2024 15:01:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Flush pgstats file during checkpoints" }, { "msg_contents": "\nOn 18/06/2024 9:01 am, Michael Paquier wrote:\n> Hi all,\n>\n> On HEAD, xlog.c has the following comment, which has been on my own\n> TODO list for a couple of weeks now:\n>\n> * TODO: With a bit of extra work we could just start with a pgstat file\n> * associated with the checkpoint redo location we're starting from.\n>\n> Please find a patch series to implement that, giving the possibility\n> to keep statistics after a crash rather than discard them. I have\n> been looking at the code for a while, before settling down on:\n> - Forcing the flush of the pgstats file to happen during non-shutdown\n> checkpoint and restart points, after updating the control file's redo\n> LSN and the critical sections in the area.\n> - Leaving the current before_shmem_exit() callback around, as a matter\n> of delaying the flush of the stats for as long as possible in a\n> shutdown sequence. 
This also makes the single-user mode shutdown\n> simpler.\n> - Adding the redo LSN to the pgstats file, with a bump of\n> PGSTAT_FILE_FORMAT_ID, cross-checked with the redo LSN. This change\n> is independently useful on its own when loading stats after a clean\n> startup, as well.\n> - The crash recovery case is simplified, as there is no more need for\n> the \"discard\" code path.\n> - Not using a logic where I would need to stick a LSN into the stats\n> file name, implying that we would need a potential lookup at the\n> contents of pg_stat/ to clean up past files at crash recovery. These\n> should not be costly, but I'd rather not add more of these.\n>\n> 7ff23c6d277d, that has removed the last call of CreateCheckPoint()\n> from the startup process, is older than 5891c7a8ed8f, still it seems\n> to me that pgstats relies on some areas of the code that don't make\n> sense on HEAD (see locking mentioned at the top of the write routine\n> for example). The logic gets straight-forward to think about as\n> restart points and checkpoints always run from the checkpointer,\n> implying that pgstat_write_statsfile() is already called only from the\n> postmaster in single-user mode or the checkpointer itself, at\n> shutdown.\n>\n> Attached is a patch set, with the one being the actual feature, with\n> some stuff prior to that:\n> - 0001 adds the redo LSN to the pgstats file flushed.\n> - 0002 adds an assertion in pgstat_write_statsfile(), to check from\n> where it is called.\n> - 0003 with more debugging.\n> - 0004 is the meat of the thread.\n>\n> I am adding that to the next CF. Thoughts and comments are welcome.\n> Thanks,\n> --\n> Michael\n\nHi Michael.\n\nI am working mostly on the same problem - persisting pgstat state in \nNeon (because of separation of storage and compute it has no local files).\nI have two questions concerning this PR and the whole strategy for \nsaving pgstat state between sessions.\n\n1. Right now pgstat.stat is discarded after abnormal Postgres \ntermination. And in your PR we are storing LSN in pgstat.staty to check \nthat it matches checkpoint redo LSN. I wonder if having outdated version \nof pgstat.stat  is worse than not having it at all? Comment in xlog.c \nsays: \"When starting with crash recovery, reset pgstat data - it might \nnot be valid.\" But why it may be invalid? We are writing it first to \ntemp file and then rename. May be fsync() should be added here and \ndurable_rename() should be used instead of rename(). But it seems to be \nbetter than loosing statistics. And should not add some significant \noverhead (because it happens only at shutdown). In your case we are \nchecking LSN of pgstat.stat file. But once again - why it is better to \ndiscard file than load version from previous checkpoint?\n\n2. Second question is also related with 1). So we saved pgstat.stat on \ncheckpoint, then did some updates and then Postgres crashed. As far as I \nunderstand with your patch we will successfully restore pgstats.stat \nfile. But it is not actually up-to-date: it doesn't reflect information \nabout recent updates. 
If it was so critical to keep pgstat.stat \nup-to-date, then why do we allow to restore state on most recent checkpoint?\n\nThanks,\nKonstantin\n\n\n\n", "msg_date": "Fri, 28 Jun 2024 10:32:06 +0300", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "\n\nOn 6/28/24 09:32, Konstantin Knizhnik wrote:\n> \n> On 18/06/2024 9:01 am, Michael Paquier wrote:\n>> Hi all,\n>>\n>> On HEAD, xlog.c has the following comment, which has been on my own\n>> TODO list for a couple of weeks now:\n>>\n>>       * TODO: With a bit of extra work we could just start with a\n>> pgstat file\n>>       * associated with the checkpoint redo location we're starting from.\n>>\n>> Please find a patch series to implement that, giving the possibility\n>> to keep statistics after a crash rather than discard them.  I have\n>> been looking at the code for a while, before settling down on:\n>> - Forcing the flush of the pgstats file to happen during non-shutdown\n>> checkpoint and restart points, after updating the control file's redo\n>> LSN and the critical sections in the area.\n>> - Leaving the current before_shmem_exit() callback around, as a matter\n>> of delaying the flush of the stats for as long as possible in a\n>> shutdown sequence.  This also makes the single-user mode shutdown\n>> simpler.\n>> - Adding the redo LSN to the pgstats file, with a bump of\n>> PGSTAT_FILE_FORMAT_ID, cross-checked with the redo LSN.  This change\n>> is independently useful on its own when loading stats after a clean\n>> startup, as well.\n>> - The crash recovery case is simplified, as there is no more need for\n>> the \"discard\" code path.\n>> - Not using a logic where I would need to stick a LSN into the stats\n>> file name, implying that we would need a potential lookup at the\n>> contents of pg_stat/ to clean up past files at crash recovery.  These\n>> should not be costly, but I'd rather not add more of these.\n>>\n>> 7ff23c6d277d, that has removed the last call of CreateCheckPoint()\n>> from the startup process, is older than 5891c7a8ed8f, still it seems\n>> to me that pgstats relies on some areas of the code that don't make\n>> sense on HEAD (see locking mentioned at the top of the write routine\n>> for example).  The logic gets straight-forward to think about as\n>> restart points and checkpoints always run from the checkpointer,\n>> implying that pgstat_write_statsfile() is already called only from the\n>> postmaster in single-user mode or the checkpointer itself, at\n>> shutdown.\n>>\n>> Attached is a patch set, with the one being the actual feature, with\n>> some stuff prior to that:\n>> - 0001 adds the redo LSN to the pgstats file flushed.\n>> - 0002 adds an assertion in pgstat_write_statsfile(), to check from\n>> where it is called.\n>> - 0003 with more debugging.\n>> - 0004 is the meat of the thread.\n>>\n>> I am adding that to the next CF.  Thoughts and comments are welcome.\n>> Thanks,\n>> -- \n>> Michael\n> \n> Hi Michael.\n> \n> I am working mostly on the same problem - persisting pgstat state in\n> Neon (because of separation of storage and compute it has no local files).\n> I have two questions concerning this PR and the whole strategy for\n> saving pgstat state between sessions.\n> \n> 1. Right now pgstat.stat is discarded after abnormal Postgres\n> termination. And in your PR we are storing LSN in pgstat.staty to check\n> that it matches checkpoint redo LSN. 
I wonder if having outdated version\n> of pgstat.stat  is worse than not having it at all? Comment in xlog.c\n> says: \"When starting with crash recovery, reset pgstat data - it might\n> not be valid.\" But why it may be invalid? We are writing it first to\n> temp file and then rename. May be fsync() should be added here and\n> durable_rename() should be used instead of rename(). But it seems to be\n> better than loosing statistics. And should not add some significant\n> overhead (because it happens only at shutdown). In your case we are\n> checking LSN of pgstat.stat file. But once again - why it is better to\n> discard file than load version from previous checkpoint?\n> \n\nI think those are two independent issues - knowing that the snapshot is\nfrom the last checkpoint, and knowing that it's correct (not corrupted).\nAnd yeah, we should be careful about fsync/durable_rename.\n\n> 2. Second question is also related with 1). So we saved pgstat.stat on\n> checkpoint, then did some updates and then Postgres crashed. As far as I\n> understand with your patch we will successfully restore pgstats.stat\n> file. But it is not actually up-to-date: it doesn't reflect information\n> about recent updates. If it was so critical to keep pgstat.stat\n> up-to-date, then why do we allow to restore state on most recent\n> checkpoint?\n> \n\nYeah, I was wondering about the same thing - can't this mean we fail to\nstart autovacuum? Let's say we delete a significant fraction of a huge\ntable, but then we crash before the next checkpoint. With this patch we\nrestore the last stats snapshot, which can easily have\n\nn_dead_tup | 0\nn_mod_since_analyze | 0\n\nfor the table. And thus we fail to notice the table needs autovacuum.\nAFAIK we run autovacuum on all tables with missing stats (or am I\nwrong?). That's what's happening on replicas after switchover/failover\ntoo, right?\n\nIt'd not be such an issue if we updated stats during recovery, but I\nthink think we're doing that. Perhaps we should, which might also help\non replicas - no idea if it's feasible, though.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 29 Jun 2024 23:13:04 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "On Sat, Jun 29, 2024 at 11:13:04PM +0200, Tomas Vondra wrote:\n> I think those are two independent issues - knowing that the snapshot is\n> from the last checkpoint, and knowing that it's correct (not corrupted).\n> And yeah, we should be careful about fsync/durable_rename.\n\nYeah, that's bugging me as well. I don't really get why we would not\nwant durability at shutdown for this data. So how about switching the\nend of pgstat_write_statsfile() to use durable_rename()? Sounds like\nan independent change, worth on its own.\n\n> Yeah, I was wondering about the same thing - can't this mean we fail to\n> start autovacuum? Let's say we delete a significant fraction of a huge\n> table, but then we crash before the next checkpoint. With this patch we\n> restore the last stats snapshot, which can easily have\n> \n> n_dead_tup | 0\n> n_mod_since_analyze | 0\n> \n> for the table. And thus we fail to notice the table needs autovacuum.\n> AFAIK we run autovacuum on all tables with missing stats (or am I\n> wrong?). 
That's what's happening on replicas after switchover/failover\n> too, right?\n\nThat's the opposite, please look at relation_needs_vacanalyze(). If a\ntable does not have any stats found in pgstats, like on HEAD after a\ncrash, then autoanalyze is skipped and autovacuum happens only for the\nanti-wrap case.\n\nFor the database case, rebuild_database_list() uses\npgstat_fetch_stat_dbentry() three times, discards entirely databases\nthat have no stats. Again, there should be a stats entry post a\ncrash upon a reconnection.\n\nSo there's an argument in saying that the situation does not get worse\nhere and that we actually may improve odds by keeping a trace of the\nprevious stats, no? In most cases, there would be a stats entry\npost-crash as an INSERT or a scan would have re-created it, but the\nstats would reflect the state registered since the last crash recovery\n(even on HEAD, a bulk delete bumping the dead tuple counter would not\nbe detected post-crash). The case of spiky workloads may impact the \ndecision-making, of course, but at least we'd allow autovacuum to take\nsome decision over giving up entirely based on some previous state of\nthe stats. That sounds like a win for me with steady workloads, less\nfor bulby workloads..\n\n> It'd not be such an issue if we updated stats during recovery, but I\n> think think we're doing that. Perhaps we should, which might also help\n> on replicas - no idea if it's feasible, though.\n\nStats on replicas are considered an independent thing AFAIU (scans are\ncounted for example in read-only queries). If we were to do that we\nmay want to split stats handling between nodes in standby state and\ncrash recovery. Not sure if that's worth the complication. First,\nthe stats exist at node-level.\n--\nMichael", "msg_date": "Fri, 5 Jul 2024 13:52:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "On Fri, Jul 05, 2024 at 01:52:31PM +0900, Michael Paquier wrote:\n> On Sat, Jun 29, 2024 at 11:13:04PM +0200, Tomas Vondra wrote:\n>> I think those are two independent issues - knowing that the snapshot is\n>> from the last checkpoint, and knowing that it's correct (not corrupted).\n>> And yeah, we should be careful about fsync/durable_rename.\n> \n> Yeah, that's bugging me as well. I don't really get why we would not\n> want durability at shutdown for this data. So how about switching the\n> end of pgstat_write_statsfile() to use durable_rename()? Sounds like\n> an independent change, worth on its own.\n\nPlease find attached a rebased patch set with the durability point\naddressed in 0001. There were also some conflicts.\n\nNote that I have applied the previous 0002 adding an assert in\npgstat_write_statsfile() as 734c057a8935, as I've managed to break\nagain this assumption while hacking more on this area..\n--\nMichael", "msg_date": "Fri, 12 Jul 2024 15:42:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "Hi,\n\nOn Fri, Jul 05, 2024 at 01:52:31PM +0900, Michael Paquier wrote:\n> On Sat, Jun 29, 2024 at 11:13:04PM +0200, Tomas Vondra wrote:\n> > I think those are two independent issues - knowing that the snapshot is\n> > from the last checkpoint, and knowing that it's correct (not corrupted).\n> > And yeah, we should be careful about fsync/durable_rename.\n> \n> Yeah, that's bugging me as well. 
I don't really get why we would not\n> want durability at shutdown for this data. So how about switching the\n> end of pgstat_write_statsfile() to use durable_rename()? Sounds like\n> an independent change, worth on its own.\n> \n> > Yeah, I was wondering about the same thing - can't this mean we fail to\n> > start autovacuum? Let's say we delete a significant fraction of a huge\n> > table, but then we crash before the next checkpoint. With this patch we\n> > restore the last stats snapshot, which can easily have\n> > \n> > n_dead_tup | 0\n> > n_mod_since_analyze | 0\n> > \n> > for the table. And thus we fail to notice the table needs autovacuum.\n> > AFAIK we run autovacuum on all tables with missing stats (or am I\n> > wrong?). That's what's happening on replicas after switchover/failover\n> > too, right?\n> \n> That's the opposite, please look at relation_needs_vacanalyze(). If a\n> table does not have any stats found in pgstats, like on HEAD after a\n> crash, then autoanalyze is skipped and autovacuum happens only for the\n> anti-wrap case.\n> \n> For the database case, rebuild_database_list() uses\n> pgstat_fetch_stat_dbentry() three times, discards entirely databases\n> that have no stats. Again, there should be a stats entry post a\n> crash upon a reconnection.\n> \n> So there's an argument in saying that the situation does not get worse\n> here and that we actually may improve odds by keeping a trace of the\n> previous stats, no? \n\nI agree, we still could get autoanalyze/autovacuum skipped due to obsolete/stales\nstats being restored from the last checkpoint but that's better than having them\nalways skipped (current HEAD).\n\n> In most cases, there would be a stats entry\n> post-crash as an INSERT or a scan would have re-created it,\n\nYeap.\n\n> but the\n> stats would reflect the state registered since the last crash recovery\n> (even on HEAD, a bulk delete bumping the dead tuple counter would not\n> be detected post-crash).\n\nRight.\n\n> The case of spiky workloads may impact the \n> decision-making, of course, but at least we'd allow autovacuum to take\n> some decision over giving up entirely based on some previous state of\n> the stats. That sounds like a win for me with steady workloads, less\n> for bulby workloads..\n\nI agree and it is not worst (though not ideal) that currently on HEAD.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Jul 2024 10:26:32 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "Hi,\n\nOn Fri, Jul 12, 2024 at 03:42:21PM +0900, Michael Paquier wrote:\n> On Fri, Jul 05, 2024 at 01:52:31PM +0900, Michael Paquier wrote:\n> > On Sat, Jun 29, 2024 at 11:13:04PM +0200, Tomas Vondra wrote:\n> >> I think those are two independent issues - knowing that the snapshot is\n> >> from the last checkpoint, and knowing that it's correct (not corrupted).\n> >> And yeah, we should be careful about fsync/durable_rename.\n> > \n> > Yeah, that's bugging me as well. I don't really get why we would not\n> > want durability at shutdown for this data. So how about switching the\n> > end of pgstat_write_statsfile() to use durable_rename()? Sounds like\n> > an independent change, worth on its own.\n> \n> Please find attached a rebased patch set with the durability point\n> addressed in 0001. 
There were also some conflicts.\n\nThanks!\n\nLooking at 0001:\n\n+ /* error logged already */\n\nMaybe mention it's already logged by durable_rename() (like it's done in\nInstallXLogFileSegment(), BaseBackup() for example).\n\nExcept this nit, 0001 LGTM.\n\nNeed to spend more time and thoughts on 0002+.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Jul 2024 12:10:26 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "Hi,\n\nOn Fri, Jul 12, 2024 at 12:10:26PM +0000, Bertrand Drouvot wrote:\n> Need to spend more time and thoughts on 0002+.\n\nI think there is a corner case, say:\n\n1. shutdown checkpoint at LSN1\n2. startup->reads the stat file (contains LSN1)->all good->read stat file and\nremove it\n3. crash (no checkpoint occured between 2. and 3.) \n4. startup (last checkpoint is still LSN1)->no stat file (as removed in 2.) \n\nIn that case we start with empty stats.\n\nInstead of removing the stat file, should we keep it around until the first call\nto pgstat_write_statsfile()?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Jul 2024 13:01:19 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "On Fri, Jul 12, 2024 at 01:01:19PM +0000, Bertrand Drouvot wrote:\n> Instead of removing the stat file, should we keep it around until the first call\n> to pgstat_write_statsfile()?\n\nOops. You are right, I have somewhat missed the unlink() once we are\ndone reading the stats file with a correct redo location.\n--\nMichael", "msg_date": "Tue, 16 Jul 2024 10:37:39 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "On Fri, Jul 12, 2024 at 12:10:26PM +0000, Bertrand Drouvot wrote:\n> Looking at 0001:\n> \n> + /* error logged already */\n> \n> Maybe mention it's already logged by durable_rename() (like it's done in\n> InstallXLogFileSegment(), BaseBackup() for example).\n> \n> Except this nit, 0001 LGTM.\n\nTweaked the comment, and applied 0001 for durable_rename(). Thanks\nfor the review.\n--\nMichael", "msg_date": "Wed, 17 Jul 2024 12:08:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "On Tue, Jul 16, 2024 at 10:37:39AM +0900, Michael Paquier wrote:\n> On Fri, Jul 12, 2024 at 01:01:19PM +0000, Bertrand Drouvot wrote:\n>> Instead of removing the stat file, should we keep it around until the first call\n>> to pgstat_write_statsfile()?\n>\n> Oops. You are right, I have somewhat missed the unlink() once we are\n> done reading the stats file with a correct redo location.\n\nThe durable_rename() part has been applied. 
Please find attached a\nrebase of the patch set with all the other comments addressed.\n--\nMichael", "msg_date": "Wed, 17 Jul 2024 12:52:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "Hi,\n\nOn Wed, Jul 17, 2024 at 12:52:12PM +0900, Michael Paquier wrote:\n> On Tue, Jul 16, 2024 at 10:37:39AM +0900, Michael Paquier wrote:\n> > On Fri, Jul 12, 2024 at 01:01:19PM +0000, Bertrand Drouvot wrote:\n> >> Instead of removing the stat file, should we keep it around until the first call\n> >> to pgstat_write_statsfile()?\n> >\n> > Oops. You are right, I have somewhat missed the unlink() once we are\n> > done reading the stats file with a correct redo location.\n> \n> The durable_rename() part has been applied. Please find attached a\n> rebase of the patch set with all the other comments addressed.\n\nThanks!\n\nLooking at 0001:\n\n1 ===\n\n- pgstat_write_statsfile();\n+ pgstat_write_statsfile(GetRedoRecPtr());\n\nNot related with your patch but this comment in the GetRedoRecPtr() function:\n\n * grabbed a WAL insertion lock to read the authoritative value in\n * Insert->RedoRecPtr\n\nsounds weird. Should'nt that be s/Insert/XLogCtl/?\n\n2 ===\n\n+ /* Write the redo LSN, used to cross check the file loaded */\n\nNit: s/loaded/read/?\n\n3 ===\n\n+ /*\n+ * Read the redo LSN stored in the file.\n+ */\n+ if (!read_chunk_s(fpin, &file_redo) ||\n+ file_redo != redo)\n+ goto error;\n\nI wonder if it would make sense to have dedicated error messages for\n\"file_redo != redo\" and for \"format_id != PGSTAT_FILE_FORMAT_ID\". That would\nease to diagnose as to why the stat file is discarded.\n\nLooking at 0002:\n\nLGTM\n\nLooking at 0003:\n\n4 ===\n\n@@ -5638,10 +5634,7 @@ StartupXLOG(void)\n * TODO: With a bit of extra work we could just start with a pgstat file\n * associated with the checkpoint redo location we're starting from.\n */\n- if (didCrash)\n- pgstat_discard_stats();\n- else\n- pgstat_restore_stats(checkPoint.redo);\n+ pgstat_restore_stats(checkPoint.redo)\n\nremove the TODO comment?\n\n5 ===\n\n+ * process) if the stats file has a redo LSN that matches with the .\n\nunfinished sentence?\n\n6 ===\n\n- * Should only be called by the startup process or in single user mode.\n+ * This is called by the checkpointer or in single-user mode.\n */\n void\n-pgstat_discard_stats(void)\n+pgstat_flush_stats(XLogRecPtr redo)\n {\n\nWould that make sense to add an Assert in pgstat_flush_stats()? (checking what \nthe above comment states).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Jul 2024 07:01:41 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "On Mon, Jul 22, 2024 at 07:01:41AM +0000, Bertrand Drouvot wrote:\n> 1 ===\n> Not related with your patch but this comment in the GetRedoRecPtr() function:\n> \n> * grabbed a WAL insertion lock to read the authoritative value in\n> * Insert->RedoRecPtr\n> \n> sounds weird. Should'nt that be s/Insert/XLogCtl/?\n\nNo, the comment is right. 
We are retrieving a copy of\nInsert->RedoRecPtr here.\n\n> 2 ===\n> \n> + /* Write the redo LSN, used to cross check the file loaded */\n> \n> Nit: s/loaded/read/?\n\nWFM.\n\n> 3 ===\n> \n> + /*\n> + * Read the redo LSN stored in the file.\n> + */\n> + if (!read_chunk_s(fpin, &file_redo) ||\n> + file_redo != redo)\n> + goto error;\n> \n> I wonder if it would make sense to have dedicated error messages for\n> \"file_redo != redo\" and for \"format_id != PGSTAT_FILE_FORMAT_ID\". That would\n> ease to diagnose as to why the stat file is discarded.\n\nYep. This has been itching me quite a bit, and that's a bit more than\njust the format ID or the redo LSN: it relates to all the read_chunk()\ncallers. I've taken a shot at this with patch 0001, implemented on\ntop of the rest. Adjusted as well the redo LSN read to have more\nerror context, now in 0002.\n\n> Looking at 0003:\n> \n> 4 ===\n> \n> @@ -5638,10 +5634,7 @@ StartupXLOG(void)\n> * TODO: With a bit of extra work we could just start with a pgstat file\n> * associated with the checkpoint redo location we're starting from.\n> */\n> - if (didCrash)\n> - pgstat_discard_stats();\n> - else\n> - pgstat_restore_stats(checkPoint.redo);\n> + pgstat_restore_stats(checkPoint.redo)\n> \n> remove the TODO comment?\n\nPretty sure I've removed that more than one time already, and that\nthis is a rebase accident. Thanks for noticing.\n\n> 5 ===\n> \n> + * process) if the stats file has a redo LSN that matches with the .\n> \n> unfinished sentence?\n\nThis is missing a reference to the control file.\n\n> 6 ===\n> \n> - * Should only be called by the startup process or in single user mode.\n> + * This is called by the checkpointer or in single-user mode.\n> */\n> void\n> -pgstat_discard_stats(void)\n> +pgstat_flush_stats(XLogRecPtr redo)\n> {\n> \n> Would that make sense to add an Assert in pgstat_flush_stats()? (checking what \n> the above comment states).\n\nThere is one in pgstat_write_statsfile(), not sure there is a point in\nduplicating the assertion in both.\n\nAttaching a new v4 series, with all these comments addressed.\n--\nMichael", "msg_date": "Tue, 23 Jul 2024 12:52:11 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "Hi,\n\nOn Tue, Jul 23, 2024 at 12:52:11PM +0900, Michael Paquier wrote:\n> On Mon, Jul 22, 2024 at 07:01:41AM +0000, Bertrand Drouvot wrote:\n> > 3 ===\n> > \n> > + /*\n> > + * Read the redo LSN stored in the file.\n> > + */\n> > + if (!read_chunk_s(fpin, &file_redo) ||\n> > + file_redo != redo)\n> > + goto error;\n> > \n> > I wonder if it would make sense to have dedicated error messages for\n> > \"file_redo != redo\" and for \"format_id != PGSTAT_FILE_FORMAT_ID\". That would\n> > ease to diagnose as to why the stat file is discarded.\n> \n> Yep. This has been itching me quite a bit, and that's a bit more than\n> just the format ID or the redo LSN: it relates to all the read_chunk()\n> callers. I've taken a shot at this with patch 0001, implemented on\n> top of the rest.\n\nThanks! 
0001 attached is v4-0001-Revert-Test-that-vacuum-removes-tuples-older-than.patch\nso I guess you did not attached the right one.\n\n> Attaching a new v4 series, with all these comments addressed.\n\nThanks!\n\nLooking at 0002:\n\n1 ===\n\n if (!read_chunk(fpin, ptr, info->shared_data_len))\n+ {\n+\t\telog(WARNING, \"could not read data of stats kind %d for entry of type %c\",\n+ \t\t\tkind, t);\n\nNit: what about to include the \"info->shared_data_len\" value in the WARNING?\n\n2 ===\n\n if (!read_chunk_s(fpin, &name))\n+ {\n+ \t\telog(WARNING, \"could not read name of stats kind %d for entry of type %c\",\n+ kind, t);\n goto error;\n+ }\n if (!pgstat_is_kind_valid(kind))\n+ {\n+ elog(WARNING, \"invalid stats kind %d for entry of type %c\",\n+ kind, t);\n goto error;\n+ }\n\nShouldn't we swap those 2 tests so that we check that the kind is valid right\nafter this one?\n\n if (!read_chunk_s(fpin, &kind))\n+ {\n+ elog(WARNING, \"could not read stats kind for entry of type %c\", t);\n goto error;\n+ }\n\nLooking at 0003: LGTM\n\nLooking at 0004: LGTM\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 29 Jul 2024 04:46:17 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "On Mon, Jul 29, 2024 at 04:46:17AM +0000, Bertrand Drouvot wrote:\n> Thanks! 0001 attached is v4-0001-Revert-Test-that-vacuum-removes-tuples-older-than.patch\n> so I guess you did not attached the right one.\n\nI did attach the right set of patches, please ignore 0001 entirely:\nthe patch series is made of three patches, beginning with 0002 :)\n\n> Looking at 0002:\n>\n> if (!read_chunk(fpin, ptr, info->shared_data_len))\n> + {\n> +\t\telog(WARNING, \"could not read data of stats kind %d for entry of type %c\",\n> + \t\t\tkind, t);\n> \n> Nit: what about to include the \"info->shared_data_len\" value in the WARNING?\n\nGood idea, so added.\n\n> if (!read_chunk_s(fpin, &name))\n> + {\n> + \t\telog(WARNING, \"could not read name of stats kind %d for entry of type %c\",\n> + kind, t);\n> goto error;\n> + }\n> if (!pgstat_is_kind_valid(kind))\n> + {\n> + elog(WARNING, \"invalid stats kind %d for entry of type %c\",\n> + kind, t);\n> goto error;\n> + }\n> \n> Shouldn't we swap those 2 tests so that we check that the kind is valid right\n> after this one?\n\nHmm. We could, but this order is not buggy either. So I've let it\nas-is for now, just adding the WARNINGs.\n\nBy the way, I have noticed an extra path where a WARNING would not be\nlogged while playing with corrupted pgstats files: when the entry type\nitself is incorrect. I have added an extra elog() in this case, and\napplied 0001. Err.. 0002, sorry ;)\n\n> Looking at 0003: LGTM\n> Looking at 0004: LGTM\n\nThanks. Attached are the two remaining patches, for now. I'm\nplanning to get back to these in a few days.\n--\nMichael", "msg_date": "Tue, 30 Jul 2024 15:25:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "Hi,\n\nOn Tue, Jul 30, 2024 at 03:25:31PM +0900, Michael Paquier wrote:\n> On Mon, Jul 29, 2024 at 04:46:17AM +0000, Bertrand Drouvot wrote:\n> > Thanks! 
0001 attached is v4-0001-Revert-Test-that-vacuum-removes-tuples-older-than.patch\n> > so I guess you did not attached the right one.\n> \n> I did attach the right set of patches, please ignore 0001 entirely:\n> the patch series is made of three patches, beginning with 0002 :)\n\nYeah, saw that ;-)\n\n> > Looking at 0002:\n> >\n> > if (!read_chunk(fpin, ptr, info->shared_data_len))\n> > + {\n> > +\t\telog(WARNING, \"could not read data of stats kind %d for entry of type %c\",\n> > + \t\t\tkind, t);\n> > \n> > Nit: what about to include the \"info->shared_data_len\" value in the WARNING?\n> \n> Good idea, so added.\n\nThanks!\n\n> > Looking at 0003: LGTM\n> > Looking at 0004: LGTM\n> \n> Thanks. Attached are the two remaining patches, for now. I'm\n> planning to get back to these in a few days.\n\nDid a quick check and still LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 30 Jul 2024 08:53:48 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "On Tue, Jul 30, 2024 at 08:53:48AM +0000, Bertrand Drouvot wrote:\n> Did a quick check and still LGTM.\n\nApplied 0003 for now to add the redo LSN to the pgstats file, adding\nthe redo LSN to the two DEBUG2 entries when reading and writing while\non it, that I forgot. (It was not 01:57 where I am now.)\n\nAttached is the last one.\n--\nMichael", "msg_date": "Fri, 2 Aug 2024 02:11:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "On Fri, Aug 02, 2024 at 02:11:34AM +0900, Michael Paquier wrote:\n> Applied 0003 for now to add the redo LSN to the pgstats file, adding\n> the redo LSN to the two DEBUG2 entries when reading and writing while\n> on it, that I forgot. (It was not 01:57 where I am now.)\n> \n> Attached is the last one.\n\nThe CF bot has been complaining in injection_points as an effect of\nthe stats remaining after a crash, so rebased to adapt to that.\n--\nMichael", "msg_date": "Mon, 26 Aug 2024 13:56:40 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "Hi,\n\nOn Mon, Aug 26, 2024 at 01:56:40PM +0900, Michael Paquier wrote:\n> On Fri, Aug 02, 2024 at 02:11:34AM +0900, Michael Paquier wrote:\n> > Applied 0003 for now to add the redo LSN to the pgstats file, adding\n> > the redo LSN to the two DEBUG2 entries when reading and writing while\n> > on it, that I forgot. (It was not 01:57 where I am now.)\n> > \n> > Attached is the last one.\n> \n> The CF bot has been complaining in injection_points as an effect of\n> the stats remaining after a crash, so rebased to adapt to that.\n\nThanks!\n\nChecking the V7 diffs as compared to V4:\n\n1. In pgstat_write_statsfile():\n\n-\telog(DEBUG2, \"writing stats file \\\"%s\\\"\", statfile);\n+\telog(DEBUG2, \"writing stats file \\\"%s\\\" with redo %X/%X\", statfile,\n+\t\t LSN_FORMAT_ARGS(redo));\n\n2. 
and the ones in injection_points/t/001_stats.pl:\n\n +# On crash the stats are still there.\n $node->stop('immediate');\n $node->start;\n $numcalls = $node->safe_psql('postgres',\n \t\"SELECT injection_points_stats_numcalls('stats-notice');\");\n -is($numcalls, '', 'number of stats after crash');\n +is($numcalls, '3', 'number of stats after crash');\n $fixedstats = $node->safe_psql('postgres',\n \t\"SELECT * FROM injection_points_stats_fixed();\");\n -is($fixedstats, '0|0|0|0|0', 'fixed stats after crash');\n +is($fixedstats, '1|0|2|1|1', 'fixed stats after crash');\n\nThey both LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 28 Aug 2024 03:43:43 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": ">> It'd not be such an issue if we updated stats during recovery, but I\n>> think think we're doing that. Perhaps we should, which might also help\n>> on replicas - no idea if it's feasible, though.\n> \n> Stats on replicas are considered an independent thing AFAIU (scans are\n> counted for example in read-only queries). If we were to do that we\n> may want to split stats handling between nodes in standby state and\n> crash recovery. Not sure if that's worth the complication. First,\n> the stats exist at node-level.\n\nHmm, I'm a bit disappointed this doesn't address replication. It makes \nsense that scans are counted separately on a standby, but it would be \nnice if stats like last_vacuum were propagated from primary to standbys. \n I guess that can be handled separately later.\n\n\nReviewing v7-0001-Flush-pgstats-file-during-checkpoints.patch:\n\nThere are various race conditions where a stats entry can be leaked in \nthe pgstats file. I.e. relation is dropped, but its stats entry is \nretained in the stats file after crash. In the worst case, suck leaked \nentries can accumulate until the stats file is manually removed, which \nresets all stats again. Perhaps that's acceptable - it's still possible \nleak the actual relation file for a new relation on crash, after all, \nwhich is much worse (I'm glad Horiguchi-san is working on that [1]).\n\nFor example:\n1. BEGIN; CREATE TABLE foo (); ANALYZE foo;\n2. CHECKPOINT;\n3. pg_ctl restart -m immediate\n\nThis is the same scenario where we leak the relfile, but now you can \nhave it with e.g. function statistics too.\n\nUntil 5891c7a8ed, there was a mechanism to garbage collect such orphaned \nentries (pgstat_vacuum()). How bad would it be to re-introduce that? 
Or \ncan we make it more watertight so that there are no leaks?\n\n\nIf you do this:\n\npg_ctl start -D data\npg_ctl stop -D data -m immediate\npg_ctl start -D data\npg_ctl stop -D data -m immediate\n\nYou get this in the log:\n\n2024-09-02 16:28:37.874 EEST [1397281] WARNING: found incorrect redo \nLSN 0/160A3C8 (expected 0/160A440)\n\nI think it's failing to flush the stats file at the end of recovery \ncheckpoint.\n\n\n[1] \nhttps://www.postgresql.org/message-id/20240901.010925.656452225144636594.horikyota.ntt%40gmail.com\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 2 Sep 2024 17:08:03 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flush pgstats file during checkpoints" }, { "msg_contents": "On Mon, Sep 02, 2024 at 05:08:03PM +0300, Heikki Linnakangas wrote:\n> Hmm, I'm a bit disappointed this doesn't address replication. It makes sense\n> that scans are counted separately on a standby, but it would be nice if\n> stats like last_vacuum were propagated from primary to standbys. I guess\n> that can be handled separately later.\n\nYes, it's not something that I'm planning to tackle for this thread.\nSpeaking of which, the design that I got in mind for this area was not\n\"that\" complicated:\n- Add a new RMGR for all the stats.\n- Add a first callback for stats kinds for WAL inserts, giving to each\nstats the possibility to pass down data inserted to the record, as we\nwant to replicate a portion of the data depending on the kind dealt\nwith.\n- Add a second callback for recovery, called depending on the kind ID.\n\nI have not looked into the details yet, but stats to replicate should\nbe grouped in a single record on transaction commit or depending on\nthe flush timing for fixed-numbered stats. Or we should just add them\nin commit records?\n\n> Reviewing v7-0001-Flush-pgstats-file-during-checkpoints.patch:\n> \n> There are various race conditions where a stats entry can be leaked in the\n> pgstats file. I.e. relation is dropped, but its stats entry is retained in\n> the stats file after crash. In the worst case, suck leaked entries can\n> accumulate until the stats file is manually removed, which resets all stats\n> again. Perhaps that's acceptable - it's still possible leak the actual\n> relation file for a new relation on crash, after all, which is much worse\n> (I'm glad Horiguchi-san is working on that [1]).\n\nYeah, that's not an easy issue. We don't really have a protection\nregarding that as well now. Backends can also refer to stats entries\nin their shutdown callback that have been dropped concurrently. See\nsome details about that at https://commitfest.postgresql.org/49/5045/.\n\n> Until 5891c7a8ed, there was a mechanism to garbage collect such orphaned\n> entries (pgstat_vacuum()). How bad would it be to re-introduce that? Or can\n> we make it more watertight so that there are no leaks?\n\nNot sure about this part, TBH. Doing that again in autovacuum does\nnot excite me much as it has a cost.\n\n> I think it's failing to flush the stats file at the end of recovery\n> checkpoint.\n\nMissed that, oops. I'll double-check this area.\n--\nMichael", "msg_date": "Tue, 3 Sep 2024 10:08:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flush pgstats file during checkpoints" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile reviewing another thread that proposes to include \"generated\ncolumns\" support for logical replication [1] I was looking for any\nexisting PostgreSQL documentation on this topic.\n\nBut, I found almost nothing about it at all -- I only saw one aside\nmention saying that logical replication low-level message information\nis not sent for generated columns [2].\n\n~~\n\nIMO there should be some high-level place in the docs where the\nbehaviour for logical replication w.r.t. generated columns is\ndescribed.\n\nThere are lots of candidate places which could talk about this topic.\n* e.g.1 in \"Generated Columns\" (section 5.4)\n* e.g.2 in LR \"Column-Lists\" docs (section 29.5)\n* e.g.3 in LR \"Restrictions\" docs (section 29.7)\n* e.g.4 in the \"CREATE PUBLICATION\" reference page\n\nFor now, I have provided just a simple patch for the \"Generated\nColumns\" section [3]. Perhaps it is enough.\n\nThoughts?\n\n======\n[1] https://www.postgresql.org/message-id/flat/B80D17B2-2C8E-4C7D-87F2-E5B4BE3C069E%40gmail.com\n[2] https://www.postgresql.org/docs/devel/protocol-logicalrep-message-formats.html\n[3] https://www.postgresql.org/docs/devel/ddl-generated-columns.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 18 Jun 2024 16:40:36 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "DOCS: Generated table columns are skipped by logical replication" }, { "msg_contents": "On Tue, Jun 18, 2024 at 12:11 PM Peter Smith <[email protected]> wrote:\n>\n> While reviewing another thread that proposes to include \"generated\n> columns\" support for logical replication [1] I was looking for any\n> existing PostgreSQL documentation on this topic.\n>\n> But, I found almost nothing about it at all -- I only saw one aside\n> mention saying that logical replication low-level message information\n> is not sent for generated columns [2].\n>\n> ~~\n>\n> IMO there should be some high-level place in the docs where the\n> behaviour for logical replication w.r.t. generated columns is\n> described.\n>\n\n+1.\n\n> There are lots of candidate places which could talk about this topic.\n> * e.g.1 in \"Generated Columns\" (section 5.4)\n> * e.g.2 in LR \"Column-Lists\" docs (section 29.5)\n> * e.g.3 in LR \"Restrictions\" docs (section 29.7)\n> * e.g.4 in the \"CREATE PUBLICATION\" reference page\n>\n> For now, I have provided just a simple patch for the \"Generated\n> Columns\" section [3]. Perhaps it is enough.\n>\n\nCan we try to clarify if their corresponding values are replicated?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 Jun 2024 17:10:16 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: Generated table columns are skipped by logical replication" }, { "msg_contents": "On Tue, Jun 18, 2024 at 9:40 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jun 18, 2024 at 12:11 PM Peter Smith <[email protected]> wrote:\n> >\n> > While reviewing another thread that proposes to include \"generated\n> > columns\" support for logical replication [1] I was looking for any\n> > existing PostgreSQL documentation on this topic.\n> >\n> > But, I found almost nothing about it at all -- I only saw one aside\n> > mention saying that logical replication low-level message information\n> > is not sent for generated columns [2].\n> >\n> > ~~\n> >\n> > IMO there should be some high-level place in the docs where the\n> > behaviour for logical replication w.r.t. 
generated columns is\n> > described.\n> >\n>\n> +1.\n>\n> > There are lots of candidate places which could talk about this topic.\n> > * e.g.1 in \"Generated Columns\" (section 5.4)\n> > * e.g.2 in LR \"Column-Lists\" docs (section 29.5)\n> > * e.g.3 in LR \"Restrictions\" docs (section 29.7)\n> > * e.g.4 in the \"CREATE PUBLICATION\" reference page\n> >\n> > For now, I have provided just a simple patch for the \"Generated\n> > Columns\" section [3]. Perhaps it is enough.\n> >\n>\n> Can we try to clarify if their corresponding values are replicated?\n>\n\nSure. Here are some current PG17 observed behaviours demonstrating\nthat generated columns are not replicated.\n\n======\n\nExample #1\n\nThe generated cols 'b' column is not replicated. Notice the subscriber\nside 'b' has its own computed value which uses a different\ncalculation.\n\nPUB: create table t1 (a int, b int generated always as (a * 2) stored);\nSUB: create table t1 (a int, b int generated always as (a * 20) stored);\n\nPUB:\ninsert into t1 values (1),(2),(3);\ncreate publication pub1 for table t1;\ntest_pub=# select * from t1;\n a | b\n---+---\n 1 | 2\n 2 | 4\n 3 | 6\n(3 rows)\n\nSUB:\ncreate subscription sub1 connection 'dbname=test_pub' publication pub1;\ntest_sub=# select * from t1;\n a | b\n---+----\n 1 | 20\n 2 | 40\n 3 | 60\n(3 rows)\n\n======\n\nExample 2\n\nYou cannot specify a generated column in a CREATE PUBLICATION column-list.\n\nPUB:\ncreate table t2 (a int, b int generated always as (a * 2) stored);\ncreate publication pub2 for table t2(b);\nERROR: cannot use generated column \"b\" in publication column list\n\n======\n\nExample 3\n\nHere the subscriber-side table doesn't even have a column 'b'.\nNormally, a missing column like this would cause subscription errors,\nbut since the publisher-side generated column 'b' is not replicated,\nthis scenario is allowed.\n\nPUB: create table t3 (a int, b int generated always as (a * 2) stored);\nSUB: create table t3 (a int);\n\nPUB:\ncreate publication pub3 for table t3;\ninsert into t3 values (1),(2),(3);\ntest_pub=# select * from t3;\n a | b\n---+---\n 1 | 2\n 2 | 4\n 3 | 6\n(3 rows)\n\nSUB:\ncreate subscription sub3 connection 'dbname=test_pub' publication pub3;\ntest_sub=# select * from t3;\n a\n---\n 1\n 2\n 3\n(3 rows)\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 19 Jun 2024 11:16:27 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DOCS: Generated table columns are skipped by logical replication" }, { "msg_contents": "On Wed, Jun 19, 2024 at 6:46 AM Peter Smith <[email protected]> wrote:\n>\n> On Tue, Jun 18, 2024 at 9:40 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Jun 18, 2024 at 12:11 PM Peter Smith <[email protected]> wrote:\n> > >\n> > > While reviewing another thread that proposes to include \"generated\n> > > columns\" support for logical replication [1] I was looking for any\n> > > existing PostgreSQL documentation on this topic.\n> > >\n> > > But, I found almost nothing about it at all -- I only saw one aside\n> > > mention saying that logical replication low-level message information\n> > > is not sent for generated columns [2].\n> > >\n> > > ~~\n> > >\n> > > IMO there should be some high-level place in the docs where the\n> > > behaviour for logical replication w.r.t. 
generated columns is\n> > > described.\n> > >\n> >\n> > +1.\n> >\n> > > There are lots of candidate places which could talk about this topic.\n> > > * e.g.1 in \"Generated Columns\" (section 5.4)\n> > > * e.g.2 in LR \"Column-Lists\" docs (section 29.5)\n> > > * e.g.3 in LR \"Restrictions\" docs (section 29.7)\n> > > * e.g.4 in the \"CREATE PUBLICATION\" reference page\n> > >\n> > > For now, I have provided just a simple patch for the \"Generated\n> > > Columns\" section [3]. Perhaps it is enough.\n> > >\n> >\n> > Can we try to clarify if their corresponding values are replicated?\n> >\n>\n> Sure. Here are some current PG17 observed behaviours demonstrating\n> that generated columns are not replicated.\n>\n\nThanks for sharing examples. Your proposed patch as-is looks good to\nme. We should back-patch this unless you or someone else thinks\notherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 19 Jun 2024 09:51:05 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: Generated table columns are skipped by logical replication" }, { "msg_contents": "On 18.06.24 08:40, Peter Smith wrote:\n> For now, I have provided just a simple patch for the \"Generated\n> Columns\" section [3]. Perhaps it is enough.\n\nMakes sense.\n\n+ Generated columns are skipped for logical replication, and cannot be\n+ specified in a <command>CREATE PUBLICATION</command> column-list.\n\nMaybe remove the comma, and change \"column-list\" to \"column list\".\n\n\n", "msg_date": "Wed, 19 Jun 2024 17:55:56 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: Generated table columns are skipped by logical replication" }, { "msg_contents": "On Wed, Jun 19, 2024 at 2:21 PM Amit Kapila <[email protected]> wrote:\n>\n...\n>\n> Thanks for sharing examples. Your proposed patch as-is looks good to\n> me. We should back-patch this unless you or someone else thinks\n> otherwise.\n>\n\nHi Amit.\n\nI modified the patch text slightly according to Peter E's suggestion [1].\n\nI also tested the above examples against all older PostgreSQL versions\n12,13,14,15,16,17. The logical replication behaviour of skipping\ngenerated columns is the same for all of them.\n\nNote that CREATE PUBLICATION column lists did not exist until PG15, so\na modified patch is needed for the versions before that.\n\n~\n\nThe attached \"HEAD\" patch is appropriate for HEAD, PG17, PG16, PG15\nThe attached \"PG14\" patch is appropriate for PG14, PG13, PG12\n\n======\n[1] https://www.postgresql.org/message-id/2b291af9-929f-49ab-b378-5cbc029d348f%40eisentraut.org\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 20 Jun 2024 11:05:35 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DOCS: Generated table columns are skipped by logical replication" }, { "msg_contents": "On Thu, Jun 20, 2024 at 6:36 AM Peter Smith <[email protected]> wrote:\n>\n> Hi Amit.\n>\n> I modified the patch text slightly according to Peter E's suggestion [1].\n>\n> I also tested the above examples against all older PostgreSQL versions\n> 12,13,14,15,16,17. 
The logical replication behaviour of skipping\n> generated columns is the same for all of them.\n>\n> Note that CREATE PUBLICATION column lists did not exist until PG15, so\n> a modified patch is needed for the versions before that.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 21 Jun 2024 11:57:15 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DOCS: Generated table columns are skipped by logical replication" } ]
[ { "msg_contents": "Hi,\r\nThanks a lot for the review.\r\nIndeed the original ssl_ecdh_curve is used to set a single value of curve name. If we want to change it to indicate a list of curve names, is there any rule for naming in Postgres? like ssl_curve_groups?\r\n\r\n\r\n\r\n \r\nOriginal Email\r\n \r\n \r\n\r\nFrom:\"Andres Freund\"< [email protected] &gt;;\r\n\r\nSent Time:2024/6/18 2:48\r\n\r\nTo:\"Erica Zhang\"< [email protected] &gt;;\r\n\r\nCc recipient:\"Jelte Fennema-Nio\"< [email protected] &gt;;\"Daniel Gustafsson\"< [email protected] &gt;;\"Jacob Champion\"< [email protected] &gt;;\"Peter Eisentraut\"< [email protected] &gt;;\"pgsql-hackers\"< [email protected] &gt;;\r\n\r\nSubject:Re: Add support to TLS 1.3 cipher suites and curves lists\r\n\r\n\r\nHi,\r\n\r\nThis thread was referenced by https://www.postgresql.org/message-id/48F0A1F8-E0B4-41F8-990F-41E6BA2A6185%40yesql.se\r\n\r\nOn 2024-06-13 14:34:27 +0800, Erica Zhang wrote:\r\n\r\n&gt; diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c\r\n&gt; index 39b1a66236..d097e81407 100644\r\n&gt; --- a/src/backend/libpq/be-secure-openssl.c\r\n&gt; +++ b/src/backend/libpq/be-secure-openssl.c\r\n&gt; @@ -1402,30 +1402,30 @@ static bool\r\n&gt; initialize_ecdh(SSL_CTX *context, bool isServerStart)\r\n&gt; {\r\n&gt; #ifndef OPENSSL_NO_ECDH\r\n&gt; -\tEC_KEY\t *ecdh;\r\n&gt; -\tint\t\t\tnid;\r\n&gt; +\tchar *curve_list = strdup(SSLECDHCurve);\r\n\r\nISTM we'd want to eventually rename the GUC variable to indicate it's a list?\r\nI think the \"ecdh\" portion is actually not accurate anymore either, it's used\r\noutside of ecdh if I understand correctly (probably I am not)?\r\n\r\n\r\n&gt; +\tchar *saveptr;\r\n&gt; +\tchar *token = strtok_r(curve_list, \":\", &amp;saveptr);\r\n&gt; +\tint nid;\r\n&gt; \r\n&gt; -\tnid = OBJ_sn2nid(SSLECDHCurve);\r\n&gt; -\tif (!nid)\r\n&gt; +\twhile (token != NULL)\r\n\r\nIt'd be good to have a comment explaining why we're parsing the list ourselves\r\ninstead of using just the builtin SSL_CTX_set1_groups_list().\r\n\r\n&gt; \t{\r\n&gt; -\t\tereport(isServerStart ? FATAL : LOG,\r\n&gt; +\t\tnid = OBJ_sn2nid(token);\r\n&gt; +\t\tif (!nid)\r\n&gt; +\t\t{ereport(isServerStart ? FATAL : LOG,\r\n&gt; \t\t\t\t(errcode(ERRCODE_CONFIG_FILE_ERROR),\r\n&gt; -\t\t\t\t errmsg(\"ECDH: unrecognized curve name: %s\", SSLECDHCurve)));\r\n&gt; +\t\t\t\t errmsg(\"ECDH: unrecognized curve name: %s\", token)));\r\n&gt; \t\treturn false;\r\n&gt; +\t\t}\r\n&gt; +\t\ttoken = strtok_r(NULL, \":\", &amp;saveptr);\r\n&gt; \t}\r\n&gt; \r\n&gt; -\tecdh = EC_KEY_new_by_curve_name(nid);\r\n&gt; -\tif (!ecdh)\r\n&gt; +\tif(SSL_CTX_set1_groups_list(context, SSLECDHCurve) !=1)\r\n&gt; \t{\r\n&gt; \t\tereport(isServerStart ? FATAL : LOG,\r\n&gt; \t\t\t\t(errcode(ERRCODE_CONFIG_FILE_ERROR),\r\n&gt; -\t\t\t\t errmsg(\"ECDH: could not create key\")));\r\n&gt; +\t\t\t\t errmsg(\"ECDH: failed to set curve names\")));\r\n\r\nProbably worth including the value of the GUC here?\r\n\r\n\r\n\r\nThis also needs to change the documentation for the GUC.\r\n\r\n\r\n\r\nOnce we have this parameter we probably should add at least x25519 to the\r\nallowed list, as that's the client side default these days.\r\n\r\nBut that can be done in a separate patch.\r\n\r\nGreetings,\r\n\r\nAndres Freund\nHi,Thanks a lot for the review.Indeed the original ssl_ecdh_curve is used to set a single value of curve name. If we want to change it to indicate a list of curve names, is there any rule for naming in Postgres? 
like ssl_curve_groups?\nOriginal Email\n\nFrom:\"Andres Freund\"< [email protected] >;Sent Time:2024/6/18 2:48To:\"Erica Zhang\"< [email protected] >;Cc recipient:\"Jelte Fennema-Nio\"< [email protected] >;\"Daniel Gustafsson\"< [email protected] >;\"Jacob Champion\"< [email protected] >;\"Peter Eisentraut\"< [email protected] >;\"pgsql-hackers\"< [email protected] >;Subject:Re: Add support to TLS 1.3 cipher suites and curves listsHi,This thread was referenced by https://www.postgresql.org/message-id/48F0A1F8-E0B4-41F8-990F-41E6BA2A6185%40yesql.seOn 2024-06-13 14:34:27 +0800, Erica Zhang wrote:> diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c> index 39b1a66236..d097e81407 100644> --- a/src/backend/libpq/be-secure-openssl.c> +++ b/src/backend/libpq/be-secure-openssl.c> @@ -1402,30 +1402,30 @@ static bool> initialize_ecdh(SSL_CTX *context, bool isServerStart)> {> #ifndef OPENSSL_NO_ECDH> -\tEC_KEY\t *ecdh;> -\tint\t\t\tnid;> +\tchar *curve_list = strdup(SSLECDHCurve);ISTM we'd want to eventually rename the GUC variable to indicate it's a list?I think the \"ecdh\" portion is actually not accurate anymore either, it's usedoutside of ecdh if I understand correctly (probably I am not)?> +\tchar *saveptr;> +\tchar *token = strtok_r(curve_list, \":\", &saveptr);> +\tint nid;> > -\tnid = OBJ_sn2nid(SSLECDHCurve);> -\tif (!nid)> +\twhile (token != NULL)It'd be good to have a comment explaining why we're parsing the list ourselvesinstead of using just the builtin SSL_CTX_set1_groups_list().> \t{> -\t\tereport(isServerStart ? FATAL : LOG,> +\t\tnid = OBJ_sn2nid(token);> +\t\tif (!nid)> +\t\t{ereport(isServerStart ? FATAL : LOG,> \t\t\t\t(errcode(ERRCODE_CONFIG_FILE_ERROR),> -\t\t\t\t errmsg(\"ECDH: unrecognized curve name: %s\", SSLECDHCurve)));> +\t\t\t\t errmsg(\"ECDH: unrecognized curve name: %s\", token)));> \t\treturn false;> +\t\t}> +\t\ttoken = strtok_r(NULL, \":\", &saveptr);> \t}> > -\tecdh = EC_KEY_new_by_curve_name(nid);> -\tif (!ecdh)> +\tif(SSL_CTX_set1_groups_list(context, SSLECDHCurve) !=1)> \t{> \t\tereport(isServerStart ? FATAL : LOG,> \t\t\t\t(errcode(ERRCODE_CONFIG_FILE_ERROR),> -\t\t\t\t errmsg(\"ECDH: could not create key\")));> +\t\t\t\t errmsg(\"ECDH: failed to set curve names\")));Probably worth including the value of the GUC here?This also needs to change the documentation for the GUC.Once we have this parameter we probably should add at least x25519 to theallowed list, as that's the client side default these days.But that can be done in a separate patch.Greetings,Andres Freund", "msg_date": "Tue, 18 Jun 2024 14:56:49 +0800", "msg_from": "\"=?utf-8?B?RXJpY2EgWmhhbmc=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re:Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "I had a look at this patchset today and I think I've come around to the idea of\nhaving a separate GUC for cipher suites. I don't have strong opinions on\nrenaming ssl_ecdh_curve to reflect that it can take a list of multiple values,\nthere is merit to having descriptive names but it would also be an invasive\nchange for adding suffix 's'.\n\nAfter fiddling a bit with the code and documentation I came up with the\nattached version which also makes the testsuite use the list syntax in order to\ntest it. 
It's essentially just polish and adding comments with the functional\nchanges that a) it parses the entire list of curves so all errors can be\nreported instead of giving up at the first error; b) leaving the cipher suite\nGUC blank will set the suites to the OpenSSL default vale.\n\nThis patch requires OpenSSL 1.1.1 as the minimum version, which in my view is\nfine. Removing support for older OpenSSL versions is being discussed already\nand this makes a good case for requiring 1.1.1. It does however mean that this\npatch cannot be commmitted until that has been done though. I have yet to test\nthis with LibreSSL.\n\nAs was suggested in a related thread I think we should change the default value\nof the ECDH curves parameter, but that's for another patch.\n\n--\nDaniel Gustafsson", "msg_date": "Wed, 3 Jul 2024 18:20:21 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "On 03.07.24 17:20, Daniel Gustafsson wrote:\n> After fiddling a bit with the code and documentation I came up with the\n> attached version which also makes the testsuite use the list syntax in order to\n> test it. It's essentially just polish and adding comments with the functional\n> changes that a) it parses the entire list of curves so all errors can be\n> reported instead of giving up at the first error; b) leaving the cipher suite\n> GUC blank will set the suites to the OpenSSL default vale.\n\nIt would be worth checking the discussion at \n<https://www.postgresql.org/message-id/flat/[email protected]> \nabout strtok()/strtok_r() issues. First, for list parsing, it sometimes \ngives the wrong semantics, which I think might apply here. Maybe it's \nworth comparing this with the semantics that OpenSSL provides natively. \nAnd second, strtok_r() is not available on Windows without the \nworkaround provided in that thread.\n\nI'm doubtful that it's worth replicating all this list parsing logic \ninstead of just letting OpenSSL do it. This is a very marginal feature \nafter all.\n\n\n\n", "msg_date": "Thu, 11 Jul 2024 22:16:37 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "> On 11 Jul 2024, at 23:16, Peter Eisentraut <[email protected]> wrote:\n\n> It would be worth checking the discussion at <https://www.postgresql.org/message-id/flat/[email protected]> about strtok()/strtok_r() issues. First, for list parsing, it sometimes gives the wrong semantics, which I think might apply here. Maybe it's worth comparing this with the semantics that OpenSSL provides natively. And second, strtok_r() is not available on Windows without the workaround provided in that thread.\n> \n> I'm doubtful that it's worth replicating all this list parsing logic instead of just letting OpenSSL do it. This is a very marginal feature after all.\n\nThe original author added the string parsing in order to provide a good error\nmessage in case of an error in the list, and since that seemed like a nice idea\nI kept in my review revision. 
With what you said above I agree it's not worth\nthe extra complexity it brings so the attached revision removes it.\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 12 Jul 2024 22:03:33 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "On Fri, Jul 12, 2024 at 1:03 PM Daniel Gustafsson <[email protected]> wrote:\n> The original author added the string parsing in order to provide a good error\n> message in case of an error in the list, and since that seemed like a nice idea\n> I kept in my review revision. With what you said above I agree it's not worth\n> the extra complexity it brings so the attached revision removes it.\n\nMisspelling a group now leads to the following error message for OpenSSL 3.0:\n\n FATAL: ECDH: failed to set curve names: no SSL error reported\n\nMaybe a HINT would be nice here?:\n\n HINT: Check that each group name is both spelled correctly and\nsupported by the installed version of OpenSSL.\n\nor something.\n\n> I don't have strong opinions on\n> renaming ssl_ecdh_curve to reflect that it can take a list of multiple values,\n> there is merit to having descriptive names but it would also be an invasive\n> change for adding suffix 's'.\n\nCan we just add an entry to map_old_guc_names to handle it? Something\nlike (untested)\n\n static const char *const map_old_guc_names[] = {\n \"sort_mem\", \"work_mem\",\n \"vacuum_mem\", \"maintenance_work_mem\",\n+ \"ssl_ecdh_curve\", \"ssl_groups\",\n NULL\n };\n\nRe: Andres' concern about the ECDH part of the name, we could probably\nkeep the \"dh\" part, but I'd be wary of that changing underneath us\ntoo. IANA changed the registry name to \"TLS Supported Groups\".\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 22 Jul 2024 10:14:34 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "On Wed, Jul 3, 2024 at 9:20 AM Daniel Gustafsson <[email protected]> wrote:\n> It's essentially just polish and adding comments with the functional\n> changes that a) it parses the entire list of curves so all errors can be\n> reported instead of giving up at the first error; b) leaving the cipher suite\n> GUC blank will set the suites to the OpenSSL default vale.\n\nIs there an advantage to setting it to a compile-time default, as\nopposed to just leaving it alone and not setting it at all? With the\ncurrent patch, if you dropped in a more advanced OpenSSL 3.x that\nchanged up the defaults, you wouldn't see any benefit.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 22 Jul 2024 10:54:43 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "> On 22 Jul 2024, at 19:14, Jacob Champion <[email protected]> wrote:\n> \n> On Fri, Jul 12, 2024 at 1:03 PM Daniel Gustafsson <[email protected]> wrote:\n>> The original author added the string parsing in order to provide a good error\n>> message in case of an error in the list, and since that seemed like a nice idea\n>> I kept in my review revision. 
With what you said above I agree it's not worth\n>> the extra complexity it brings so the attached revision removes it.\n> \n> Misspelling a group now leads to the following error message for OpenSSL 3.0:\n> \n> FATAL: ECDH: failed to set curve names: no SSL error reported\n> \n> Maybe a HINT would be nice here?:\n> \n> HINT: Check that each group name is both spelled correctly and\n> supported by the installed version of OpenSSL.\n\nGood catch. OpenSSL 3.2 changed the error message to be a lot more helpful,\nbefore that there is no error added to the queue at all for this processing\n(hence the \"no SSL error reported\"). The attached adds a hint as well as a\nproper error message for OpenSSL versions prior to 3.2. Pushing an error on\nthe queue would've been nice but we can't replicate the OpenSSL level of detail\nin the error until we require OpenSSL 3.0 as the base since that's when _data\nerror reporting was added.\n\n>> I don't have strong opinions on\n>> renaming ssl_ecdh_curve to reflect that it can take a list of multiple values,\n>> there is merit to having descriptive names but it would also be an invasive\n>> change for adding suffix 's'.\n> \n> Can we just add an entry to map_old_guc_names to handle it? Something\n> like (untested)\n> \n> static const char *const map_old_guc_names[] = {\n> \"sort_mem\", \"work_mem\",\n> \"vacuum_mem\", \"maintenance_work_mem\",\n> + \"ssl_ecdh_curve\", \"ssl_groups\",\n> NULL\n> };\n> \n> Re: Andres' concern about the ECDH part of the name, we could probably\n> keep the \"dh\" part, but I'd be wary of that changing underneath us\n> too. IANA changed the registry name to \"TLS Supported Groups\".\n\nFair point, I've renamed to ssl_groups and added a mapping from the old name as\nwell as a note in the docs that the parameter has changed name (and ability to\nhandle more than one).\n\n> Is there an advantage to setting it to a compile-time default, as\n> opposed to just leaving it alone and not setting it at all? With the\n> current patch, if you dropped in a more advanced OpenSSL 3.x that\n> changed up the defaults, you wouldn't see any benefit.\n\n\nNot really, I have changed such that a blank GUC does *no* OpenSSL call at all\nwhich will retain the default from the local OpenSSL installation.\n\nThe attached version also has a new 0001 which bumps the minimum required\nOpenSSL version to 1.1.1 (from 1.1.0) since this patchset requires API's only\npresent in 1.1.1 and onwards. To keep it from being hidden here I will raise a\nseparate thread about it.\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 9 Sep 2024 14:00:17 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "On Mon, Sep 9, 2024 at 5:00 AM Daniel Gustafsson <[email protected]> wrote:\n> Good catch. OpenSSL 3.2 changed the error message to be a lot more helpful,\n> before that there is no error added to the queue at all for this processing\n> (hence the \"no SSL error reported\"). The attached adds a hint as well as a\n> proper error message for OpenSSL versions prior to 3.2.\n\nThanks!\n\n> The attached version also has a new 0001 which bumps the minimum required\n> OpenSSL version to 1.1.1 (from 1.1.0) since this patchset requires API's only\n> present in 1.1.1 and onwards. To keep it from being hidden here I will raise a\n> separate thread about it.\n\nAs implemented, my build matrix is no longer able to compile against\nLibreSSL 3.3 and below (OpenBSD 6.x). 
Has the lower bound on LibreSSL\nfor PG18 been discussed yet?\n\n> +#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed TLSv1.2 ciphers\n> +#ssl_cipher_suites = '' # allowed TLSv1.3 cipher suites, blank for default\n\nAfter marinating on this a bit... I think the naming may result in\nsome \"who's on first\" miscommunications in forums and on the list. \"I\nset the SSL ciphers to <whatever>, but it says there are no valid\nciphers available!\" Should we put TLS 1.3 into the new GUC name\nsomehow?\n\n> + {\"ssl_groups\", PGC_SIGHUP, CONN_AUTH_SSL,\n> + gettext_noop(\"Sets the curve(s) to use for ECDH.\"),\n\nThe ECDH reference should probably be updated/removed. Maybe something\nlike \"Sets the group(s) to use for Diffie-Hellman key exchange.\" ?\n\n> +#if (OPENSSL_VERSION_NUMBER <= 0x30200000L)\n> + /*\n> + * OpenSSL 3.3.0 introduced proper error messages for group\n> + * parsing errors, earlier versions returns \"no SSL error\n> + * reported\" which is far from helpful. For older versions, we\n> + * manually set a better error message. Injecting the error\n> + * into the OpenSSL error queue need APIs from OpenSSL 3.0.\n> + */\n> + errmsg(\"ECDH: failed to set curve names: No valid groups in '%s'\",\n> + SSLECDHCurve),\n\nnit: can we do this only when ERR_get_error() returns zero? It looks\nlike LibreSSL is stuck at OPENSSL_VERSION_NUMBER == 0x20000000, so if\nthey introduce a nice error message at some point it'll still get\nignored.\n\n> + &SSLCipherLists,\n\nnit: a singular \"SSLCipherList\" would be clearer, IMO.\n\n--\n\nLooking at the commit messages:\n\n> Support configuring multiple ECDH curves\n>\n> The ssl_ecdh_curve only GUC accepts a single value, but the TLS\n\n\"GUC\" and \"only\" are transposed here.\n\n> Support configuring TLSv1.3 cipher suites\n>\n> The ssl_ciphers GUC can only set cipher suites for TLSv1.2, and lower,\n> connections. For TLSv1.3 connections a different OpenSSL must be used.\n\n\"a different OpenSSL API\", maybe?\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 18 Sep 2024 13:48:47 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "On 18.09.24 22:48, Jacob Champion wrote:\n>> +#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed TLSv1.2 ciphers\n>> +#ssl_cipher_suites = '' # allowed TLSv1.3 cipher suites, blank for default\n> After marinating on this a bit... I think the naming may result in\n> some \"who's on first\" miscommunications in forums and on the list. \"I\n> set the SSL ciphers to <whatever>, but it says there are no valid\n> ciphers available!\" Should we put TLS 1.3 into the new GUC name\n> somehow?\n\nYeah, I think just\n\nssl_ciphers =\nssl_ciphers_tlsv13 =\n\nwould be clear enough. Just using \"ciphers\" vs. \"cipher suites\" would \nnot be.\n\n\n\n", "msg_date": "Wed, 25 Sep 2024 10:51:05 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "> On 18 Sep 2024, at 22:48, Jacob Champion <[email protected]> wrote:\n> On Mon, Sep 9, 2024 at 5:00 AM Daniel Gustafsson <[email protected]> wrote:\n\n>> The attached version also has a new 0001 which bumps the minimum required\n>> OpenSSL version to 1.1.1 (from 1.1.0) since this patchset requires API's only\n>> present in 1.1.1 and onwards. 
To keep it from being hidden here I will raise a\n>> separate thread about it.\n> \n> As implemented, my build matrix is no longer able to compile against\n> LibreSSL 3.3 and below (OpenBSD 6.x). Has the lower bound on LibreSSL\n> for PG18 been discussed yet?\n\nI can't recall specific bounds for supporting LibreSSL even being discussed,\nthe support is also not documented as an official thing. Requiring TLS 1.3\nAPIs for supporting a library in 2025 (when 18 ships) doesn't seem entirely\nunreasonable so maybe 3.4 is a good cutoff. The fact that LibreSSL trailed \nbehind OpenSSL in adding these APIs shouldn't limit our functionality.\n\n>> +#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed TLSv1.2 ciphers\n>> +#ssl_cipher_suites = '' # allowed TLSv1.3 cipher suites, blank for default\n> \n> After marinating on this a bit... I think the naming may result in\n> some \"who's on first\" miscommunications in forums and on the list. \"I\n> set the SSL ciphers to <whatever>, but it says there are no valid\n> ciphers available!\" Should we put TLS 1.3 into the new GUC name\n> somehow?\n\nYeah, I don't disagree with your concern. Thinking on it a bit I went (to some\ndegree inspired by what we did in curl) with ssl_tls13_ciphers which makes the\nname very similar to the tls12 GUC but with the clear distinction of being\nprotocol specific. It also makes the GUC name more readable to place the\nprotocol before \"ciphers\" I think.\n\n>> + {\"ssl_groups\", PGC_SIGHUP, CONN_AUTH_SSL,\n>> + gettext_noop(\"Sets the curve(s) to use for ECDH.\"),\n> \n> The ECDH reference should probably be updated/removed. Maybe something\n> like \"Sets the group(s) to use for Diffie-Hellman key exchange.\" ?\n\nDone.\n\n>> +#if (OPENSSL_VERSION_NUMBER <= 0x30200000L)\n>> + /*\n>> + * OpenSSL 3.3.0 introduced proper error messages for group\n>> + * parsing errors, earlier versions returns \"no SSL error\n>> + * reported\" which is far from helpful. For older versions, we\n>> + * manually set a better error message. Injecting the error\n>> + * into the OpenSSL error queue need APIs from OpenSSL 3.0.\n>> + */\n>> + errmsg(\"ECDH: failed to set curve names: No valid groups in '%s'\",\n>> + SSLECDHCurve),\n> \n> nit: can we do this only when ERR_get_error() returns zero? It looks\n> like LibreSSL is stuck at OPENSSL_VERSION_NUMBER == 0x20000000, so if\n> they introduce a nice error message at some point it'll still get\n> ignored.\n\nWe can do that, I'm not going to hold my breath on LibreSSL doing that but it\nhas the benefit of using the API and not hardcoded version knowledge. I ended\nup adding a version of SSLerrmessage which takes a replacement string for ecode\n0 (which admittedly is hardcoded version knowledge as well..). This can be\nused for scenarios when it's known that OpenSSL sometimes reports and error and\nsometimes not (I'm sure there are quite a few more).\n\n>> + &SSLCipherLists,\n> \n> nit: a singular \"SSLCipherList\" would be clearer, IMO.\n\nDone.\n\n> Looking at the commit messages:\n> \n>> Support configuring multiple ECDH curves\n>> \n>> The ssl_ecdh_curve only GUC accepts a single value, but the TLS\n> \n> \"GUC\" and \"only\" are transposed here.\n\nFixed.\n\n>> Support configuring TLSv1.3 cipher suites\n>> \n>> The ssl_ciphers GUC can only set cipher suites for TLSv1.2, and lower,\n>> connections. 
For TLSv1.3 connections a different OpenSSL must be used.\n> \n> \"a different OpenSSL API\", maybe?\n\nFixed.\n\n--\nDaniel Gustafsson", "msg_date": "Wed, 25 Sep 2024 15:39:11 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" }, { "msg_contents": "Attached is a v7 which address a test failure in the CI. It turns out that the\ntest_misc module gather GUC names using the :alpha: character class which only\nallows alphabetic whereas GUC names can have digits in them. The 0001 patch\nfixes this by instead using the :alnum: character class which allows all\nalphanumeric characters. This is not directly related to this patch, it just\nhappened to be exposed by it.\n\n--\nDaniel Gustafsson", "msg_date": "Thu, 26 Sep 2024 11:01:35 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support to TLS 1.3 cipher suites and curves lists" } ]
[ { "msg_contents": "Under the topic of getting rid of thread-unsafe functions in the backend \n[0], here is a patch series to deal with strtok().\n\nOf course, strtok() is famously not thread-safe and can be replaced by \nstrtok_r(). But it also has the wrong semantics in some cases, because \nit considers adjacent delimiters to be one delimiter. So if you parse\n\n SCRAM-SHA-256$<iterations>:<salt>$<storedkey>:<serverkey>\n\nwith strtok(), then\n\n SCRAM-SHA-256$$<iterations>::<salt>$$<storedkey>::<serverkey>\n\nparses just the same. In many cases, this is arguably wrong and could \nhide mistakes.\n\nSo I'm suggesting to use strsep() in those places. strsep() is \nnonstandard but widely available.\n\nThere are a few places where strtok() has the right semantics, such as \nparsing tokens separated by whitespace. For those, I'm using strtok_r().\n\nA reviewer job here would be to check whether I made that distinction \ncorrectly in each case.\n\nOn the portability side, I'm including a port/ replacement for strsep() \nand some workaround to get strtok_r() for Windows. I have included \nthese here as separate patches for clarity.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/[email protected]", "msg_date": "Tue, 18 Jun 2024 09:18:28 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "replace strtok()" }, { "msg_contents": "Em ter., 18 de jun. de 2024 às 04:18, Peter Eisentraut <[email protected]>\nescreveu:\n\n> Under the topic of getting rid of thread-unsafe functions in the backend\n> [0], here is a patch series to deal with strtok().\n>\n> Of course, strtok() is famously not thread-safe and can be replaced by\n> strtok_r(). But it also has the wrong semantics in some cases, because\n> it considers adjacent delimiters to be one delimiter. So if you parse\n>\n> SCRAM-SHA-256$<iterations>:<salt>$<storedkey>:<serverkey>\n>\n> with strtok(), then\n>\n> SCRAM-SHA-256$$<iterations>::<salt>$$<storedkey>::<serverkey>\n>\n> parses just the same. In many cases, this is arguably wrong and could\n> hide mistakes.\n>\n> So I'm suggesting to use strsep() in those places. strsep() is\n> nonstandard but widely available.\n>\n> There are a few places where strtok() has the right semantics, such as\n> parsing tokens separated by whitespace. For those, I'm using strtok_r().\n>\n> A reviewer job here would be to check whether I made that distinction\n> correctly in each case.\n>\n> On the portability side, I'm including a port/ replacement for strsep()\n> and some workaround to get strtok_r() for Windows. 
I have included\n> these here as separate patches for clarity.\n>\n+1 For making the code thread-safe.\nBut I would like to see more const char * where this is possible.\n\nFor example, in pg_locale.c\nIMO, the token variable can be const char *.\n\nAt least strchr expects a const char * as the first parameter.\n\nI found another implementation of strsep, it seems lighter to me.\nI will attach it for consideration, however, I have not done any testing.\n\nbest regards,\nRanier Vilela", "msg_date": "Tue, 18 Jun 2024 08:43:23 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: replace strtok()" }, { "msg_contents": "At Tue, 18 Jun 2024 09:18:28 +0200, Peter Eisentraut <[email protected]> wrote in \n> Under the topic of getting rid of thread-unsafe functions in the\n> backend [0], here is a patch series to deal with strtok().\n> \n> Of course, strtok() is famously not thread-safe and can be replaced by\n> strtok_r(). But it also has the wrong semantics in some cases,\n> because it considers adjacent delimiters to be one delimiter. So if\n> you parse\n> \n> SCRAM-SHA-256$<iterations>:<salt>$<storedkey>:<serverkey>\n> \n> with strtok(), then\n> \n> SCRAM-SHA-256$$<iterations>::<salt>$$<storedkey>::<serverkey>\n> \n> parses just the same. In many cases, this is arguably wrong and could\n> hide mistakes.\n> \n> So I'm suggesting to use strsep() in those places. strsep() is\n> nonstandard but widely available.\n> \n> There are a few places where strtok() has the right semantics, such as\n> parsing tokens separated by whitespace. For those, I'm using\n> strtok_r().\n\nI agree with the distinction.\n\n> A reviewer job here would be to check whether I made that distinction\n> correctly in each case.\n\n0001 and 0002 look correct to me regarding that distinction. They\napplied correctly to the master HEAD and all tests passed on Linux.\n\n> On the portability side, I'm including a port/ replacement for\n> strsep() and some workaround to get strtok_r() for Windows. I have\n> included these here as separate patches for clarity.\n\n0003 looks fine and successfully built and seems working on an MSVC\nbuild.\n\nAbout 0004, Cygwin seems to have its own strtok_r, but I haven't\nchecked how that fact affects the build.\n\n> [0]:\n> https://www.postgresql.org/message-id/[email protected]\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 19 Jun 2024 17:30:21 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: replace strtok()" }, { "msg_contents": "On 18.06.24 13:43, Ranier Vilela wrote:\n> But I would like to see more const char * where this is possible.\n> \n> For example, in pg_locale.c\n> IMO, the token variable can be const char *.\n> \n> At least strchr expects a const char * as the first parameter.\n\nThis would not be future-proof. In C23, if you pass a const char * into \nstrchr(), you also get a const char * as a result. And in this case, we \ndo write into the area pointed to by the result. So with a const char \n*token, this whole thing would not compile cleanly under C23.\n\n> I found another implementation of strsep, it seems lighter to me.\n> I will attach it for consideration, however, I have not done any testing.\n\nYeah, surely there are many possible implementations. 
I'm thinking, \nsince we already took other str*() functions from OpenBSD, it makes \nsense to do this here as well, so we have only one source to deal with.\n\n\n\n", "msg_date": "Sat, 22 Jun 2024 17:04:11 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: replace strtok()" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 18.06.24 13:43, Ranier Vilela wrote:\n>> I found another implementation of strsep, it seems lighter to me.\n>> I will attach it for consideration, however, I have not done any testing.\n\n> Yeah, surely there are many possible implementations. I'm thinking, \n> since we already took other str*() functions from OpenBSD, it makes \n> sense to do this here as well, so we have only one source to deal with.\n\nWhy not use strpbrk? That's equally thread-safe, it's been there\nsince C89, and it doesn't have the problem that you can't find out\nwhich of the delimiter characters was found.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 22 Jun 2024 11:48:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: replace strtok()" }, { "msg_contents": "On Sat, Jun 22, 2024 at 11:48:21AM -0400, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > On 18.06.24 13:43, Ranier Vilela wrote:\n> >> I found another implementation of strsep, it seems lighter to me.\n> >> I will attach it for consideration, however, I have not done any testing.\n> \n> > Yeah, surely there are many possible implementations. I'm thinking, \n> > since we already took other str*() functions from OpenBSD, it makes \n> > sense to do this here as well, so we have only one source to deal with.\n> \n> Why not use strpbrk? That's equally thread-safe, it's been there\n> since C89, and it doesn't have the problem that you can't find out\n> which of the delimiter characters was found.\n\nYeah, strpbrk() has been used in the tree as far as 2003 without any \nport/ implementation.\n--\nMichael", "msg_date": "Mon, 24 Jun 2024 09:34:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: replace strtok()" }, { "msg_contents": "On 24.06.24 02:34, Michael Paquier wrote:\n> On Sat, Jun 22, 2024 at 11:48:21AM -0400, Tom Lane wrote:\n>> Peter Eisentraut <[email protected]> writes:\n>>> On 18.06.24 13:43, Ranier Vilela wrote:\n>>>> I found another implementation of strsep, it seems lighter to me.\n>>>> I will attach it for consideration, however, I have not done any testing.\n>>\n>>> Yeah, surely there are many possible implementations. I'm thinking,\n>>> since we already took other str*() functions from OpenBSD, it makes\n>>> sense to do this here as well, so we have only one source to deal with.\n>>\n>> Why not use strpbrk? That's equally thread-safe, it's been there\n>> since C89, and it doesn't have the problem that you can't find out\n>> which of the delimiter characters was found.\n> \n> Yeah, strpbrk() has been used in the tree as far as 2003 without any\n> port/ implementation.\n\nThe existing uses of strpbrk() are really just checking whether some \ncharacters exist in a string, more like an enhanced strchr(). I don't \nsee any uses for tokenizing a string like strtok() or strsep() would do. \n I think that would look quite cumbersome. 
So I think a simpler and \nmore convenient abstraction like strsep() would still be worthwhile.\n\n\n", "msg_date": "Mon, 24 Jun 2024 14:57:27 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: replace strtok()" }, { "msg_contents": "On 6/24/24 19:57, Peter Eisentraut wrote:\n> On 24.06.24 02:34, Michael Paquier wrote:\n>> On Sat, Jun 22, 2024 at 11:48:21AM -0400, Tom Lane wrote:\n>>> Peter Eisentraut <[email protected]> writes:\n>>>> On 18.06.24 13:43, Ranier Vilela wrote:\n>>>>> I found another implementation of strsep, it seems lighter to me.\n>>>>> I will attach it for consideration, however, I have not done any \n>>>>> testing.\n>>>\n>>>> Yeah, surely there are many possible implementations.  I'm thinking,\n>>>> since we already took other str*() functions from OpenBSD, it makes\n>>>> sense to do this here as well, so we have only one source to deal with.\n>>>\n>>> Why not use strpbrk?  That's equally thread-safe, it's been there\n>>> since C89, and it doesn't have the problem that you can't find out\n>>> which of the delimiter characters was found.\n>>\n>> Yeah, strpbrk() has been used in the tree as far as 2003 without any\n>> port/ implementation.\n> \n> The existing uses of strpbrk() are really just checking whether some \n> characters exist in a string, more like an enhanced strchr().  I don't \n> see any uses for tokenizing a string like strtok() or strsep() would do. \n>  I think that would look quite cumbersome.  So I think a simpler and \n> more convenient abstraction like strsep() would still be worthwhile.\n\nI agree that using strsep() in these cases seems more natural. Since \nthis patch provides a default implementation compatibility does not seem \nlike a big issue.\n\nI've also reviewed the rest of the patch and it looks good to me.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 8 Jul 2024 12:45:50 +0700", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: replace strtok()" }, { "msg_contents": "On 08.07.24 07:45, David Steele wrote:\n> On 6/24/24 19:57, Peter Eisentraut wrote:\n>> On 24.06.24 02:34, Michael Paquier wrote:\n>>> On Sat, Jun 22, 2024 at 11:48:21AM -0400, Tom Lane wrote:\n>>>> Peter Eisentraut <[email protected]> writes:\n>>>>> On 18.06.24 13:43, Ranier Vilela wrote:\n>>>>>> I found another implementation of strsep, it seems lighter to me.\n>>>>>> I will attach it for consideration, however, I have not done any \n>>>>>> testing.\n>>>>\n>>>>> Yeah, surely there are many possible implementations.  I'm thinking,\n>>>>> since we already took other str*() functions from OpenBSD, it makes\n>>>>> sense to do this here as well, so we have only one source to deal \n>>>>> with.\n>>>>\n>>>> Why not use strpbrk?  That's equally thread-safe, it's been there\n>>>> since C89, and it doesn't have the problem that you can't find out\n>>>> which of the delimiter characters was found.\n>>>\n>>> Yeah, strpbrk() has been used in the tree as far as 2003 without any\n>>> port/ implementation.\n>>\n>> The existing uses of strpbrk() are really just checking whether some \n>> characters exist in a string, more like an enhanced strchr().  I don't \n>> see any uses for tokenizing a string like strtok() or strsep() would \n>> do.   I think that would look quite cumbersome.  So I think a simpler \n>> and more convenient abstraction like strsep() would still be worthwhile.\n> \n> I agree that using strsep() in these cases seems more natural. 
Since \n> this patch provides a default implementation compatibility does not seem \n> like a big issue.\n> \n> I've also reviewed the rest of the patch and it looks good to me.\n\nThis has been committed. Thanks.\n\n\n\n", "msg_date": "Tue, 23 Jul 2024 14:38:47 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: replace strtok()" } ]
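A minimal standalone sketch of the semantic difference discussed in this thread (the buffer contents, variable names, and output below are invented for illustration and are not taken from the patches). strtok() silently merges the doubled delimiters of a malformed SCRAM-style secret, while strsep() reports an empty field for each one, so the caller can notice the damage. strsep() is nonstandard; on glibc it is exposed via _DEFAULT_SOURCE, and the committed patches add a src/port fallback for platforms that lack it.

#define _DEFAULT_SOURCE         /* for strsep() on glibc */
#include <stdio.h>
#include <string.h>

int
main(void)
{
    /* doubled '$' and ':' delimiters, as in the malformed example above */
    char        buf1[] = "SCRAM-SHA-256$$4096::salt$$stored::server";
    char        buf2[] = "SCRAM-SHA-256$$4096::salt$$stored::server";
    char       *tok;
    char       *rest = buf2;

    /* strtok() treats adjacent delimiters as one: five tokens, no hint of damage */
    for (tok = strtok(buf1, "$:"); tok != NULL; tok = strtok(NULL, "$:"))
        printf("strtok: \"%s\"\n", tok);

    /*
     * strsep() returns an empty string for each doubled delimiter, so the
     * malformed input is detectable by checking for *tok == '\0'.
     */
    while ((tok = strsep(&rest, "$:")) != NULL)
        printf("strsep: \"%s\"\n", tok);

    return 0;
}

The strpbrk() route mentioned by Tom avoids the portability question entirely, at the cost of each caller doing its own pointer bookkeeping between fields.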
[ { "msg_contents": "The CompilerWarnings task on Cirrus CI does not catch warnings in C++ \ncode. It tries to make warnings fatal by passing COPT='-Werror', but \nthat does not apply to C++ compilations.\n\nI suggest that we just add COPT to CXXFLAGS as well. I think passing \n-Werror is just about the only reasonable use of COPT nowadays, so \nmaking that more robust seems useful. I don't think there is a need for \na separate make variable for C++ here.", "msg_date": "Tue, 18 Jun 2024 09:27:02 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "CompilerWarnings task does not catch C++ warnings" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The CompilerWarnings task on Cirrus CI does not catch warnings in C++ \n> code. It tries to make warnings fatal by passing COPT='-Werror', but \n> that does not apply to C++ compilations.\n> I suggest that we just add COPT to CXXFLAGS as well. I think passing \n> -Werror is just about the only reasonable use of COPT nowadays, so \n> making that more robust seems useful. I don't think there is a need for \n> a separate make variable for C++ here.\n\n+1, but what about the meson side of things?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jun 2024 10:08:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CompilerWarnings task does not catch C++ warnings" }, { "msg_contents": "On 18.06.24 16:08, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> The CompilerWarnings task on Cirrus CI does not catch warnings in C++\n>> code. It tries to make warnings fatal by passing COPT='-Werror', but\n>> that does not apply to C++ compilations.\n>> I suggest that we just add COPT to CXXFLAGS as well. I think passing\n>> -Werror is just about the only reasonable use of COPT nowadays, so\n>> making that more robust seems useful. I don't think there is a need for\n>> a separate make variable for C++ here.\n> \n> +1, but what about the meson side of things?\n\nIf you use meson {setup|configure} --werror, that would affect both C \nand C++ compilers.\n\n\n\n", "msg_date": "Tue, 18 Jun 2024 16:22:49 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CompilerWarnings task does not catch C++ warnings" }, { "msg_contents": "Hi,\n\nOn 2024-06-18 09:27:02 +0200, Peter Eisentraut wrote:\n> The CompilerWarnings task on Cirrus CI does not catch warnings in C++ code.\n> It tries to make warnings fatal by passing COPT='-Werror', but that does not\n> apply to C++ compilations.\n> \n> I suggest that we just add COPT to CXXFLAGS as well. I think passing\n> -Werror is just about the only reasonable use of COPT nowadays, so making\n> that more robust seems useful. I don't think there is a need for a separate\n> make variable for C++ here.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2024 07:31:16 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CompilerWarnings task does not catch C++ warnings" } ]
[ { "msg_contents": "I have this patch series that fixes up the types of the new incremental \nJSON API a bit. Specifically, it uses \"const\" throughout so that the \ntop-level entry points such as pg_parse_json_incremental() can declare \ntheir arguments as const char * instead of just char *. This just \nworks, it doesn't require any new casting tricks. In fact, it removes a \nfew unconstify() calls.\n\nAlso, a few arguments and variables that relate to object sizes should \nbe size_t rather than int. At this point, this mainly makes the API \nbetter self-documenting. I don't think it actually works to parse \nlarger than 2 GB chunks (not tested).", "msg_date": "Tue, 18 Jun 2024 13:48:17 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "jsonapi type fixups" }, { "msg_contents": "\nOn 2024-06-18 Tu 7:48 AM, Peter Eisentraut wrote:\n> I have this patch series that fixes up the types of the new \n> incremental JSON API a bit.  Specifically, it uses \"const\" throughout \n> so that the top-level entry points such as pg_parse_json_incremental() \n> can declare their arguments as const char * instead of just char *.  \n> This just works, it doesn't require any new casting tricks.  In fact, \n> it removes a few unconstify() calls.\n>\n> Also, a few arguments and variables that relate to object sizes should \n> be size_t rather than int.  At this point, this mainly makes the API \n> better self-documenting.  I don't think it actually works to parse \n> larger than 2 GB chunks (not tested).\n\n\n\nI think this is mostly OK.\n\nThe change at line 1857 of jsonapi.c looks dubious, though. The pointer \nvariable p looks anything but constant. Perhaps I'm misunderstanding.\n\nIt would also be nice to reword the comment at line 3142 of jsonfuncs.c, \nso it can still fit on one line.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 20 Jun 2024 08:05:14 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonapi type fixups" }, { "msg_contents": "\nOn 2024-06-20 Th 8:05 AM, Andrew Dunstan wrote:\n>\n> On 2024-06-18 Tu 7:48 AM, Peter Eisentraut wrote:\n>> I have this patch series that fixes up the types of the new \n>> incremental JSON API a bit.  Specifically, it uses \"const\" throughout \n>> so that the top-level entry points such as \n>> pg_parse_json_incremental() can declare their arguments as const char \n>> * instead of just char *.  This just works, it doesn't require any \n>> new casting tricks.  In fact, it removes a few unconstify() calls.\n>>\n>> Also, a few arguments and variables that relate to object sizes \n>> should be size_t rather than int.  At this point, this mainly makes \n>> the API better self-documenting.  I don't think it actually works to \n>> parse larger than 2 GB chunks (not tested).\n>\n>\n>\n> I think this is mostly OK.\n>\n> The change at line 1857 of jsonapi.c looks dubious, though. The \n> pointer variable p looks anything but constant. Perhaps I'm \n> misunderstanding.\n\n\nIgnore this comment, moment of brain fade. 
Of course it's the string \nthat's constant, not the pointer.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 20 Jun 2024 08:44:52 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonapi type fixups" }, { "msg_contents": "On 20.06.24 14:05, Andrew Dunstan wrote:\n> On 2024-06-18 Tu 7:48 AM, Peter Eisentraut wrote:\n>> I have this patch series that fixes up the types of the new \n>> incremental JSON API a bit.  Specifically, it uses \"const\" throughout \n>> so that the top-level entry points such as pg_parse_json_incremental() \n>> can declare their arguments as const char * instead of just char *. \n>> This just works, it doesn't require any new casting tricks.  In fact, \n>> it removes a few unconstify() calls.\n>>\n>> Also, a few arguments and variables that relate to object sizes should \n>> be size_t rather than int.  At this point, this mainly makes the API \n>> better self-documenting.  I don't think it actually works to parse \n>> larger than 2 GB chunks (not tested).\n\n> I think this is mostly OK.\n\n> It would also be nice to reword the comment at line 3142 of jsonfuncs.c, \n> so it can still fit on one line.\n\nAgreed. Committed with that fixup.\n\n\n\n", "msg_date": "Fri, 21 Jun 2024 08:01:11 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonapi type fixups" } ]
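A small aside on the const-ness exchange above: the distinction behind the "brain fade" note is the usual one between a pointer to const data and a const pointer. The standalone example below is illustrative only (the buffer and names are made up, not taken from the patch); it shows why declaring the parser's input as const char * still lets the code advance through it while forbidding writes into it:

#include <stdio.h>

int
main(void)
{
    char        buf[] = "json";
    const char *p = buf;    /* data is read-only through p; p itself can move */
    char *const q = buf;    /* q is fixed; the data it points at stays writable */

    p++;                    /* fine: only the pointer advances */
    *q = 'J';               /* fine: writes through a const pointer to non-const data */
    /* *p = 'x'; */         /* would not compile: data behind p is read-only */
    /* q++;      */         /* would not compile: q itself is const */

    printf("%s %c\n", buf, *p);     /* prints: Json s */
    return 0;
}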
[ { "msg_contents": "Hi everyone,\n\nI've noticed that truncating mapped catalogs causes the server to\ncrash due to an assertion failure. Here are the details:\n\nExecuting below commands:\n\n-- set allow_system_table_mods TO on;\n-- truncate table pg_type;\n\nResults into a server crash with below backtrace:\n\n...\n#2 0x000055736767537d in ExceptionalCondition\n(conditionName=0x5573678c5760 \"relation->rd_rel->relkind ==\nRELKIND_INDEX\", fileName=0x5573678c4b28 \"relcache.c\",\n lineNumber=3896) at assert.c:66\n#3 0x0000557367664e31 in RelationSetNewRelfilenumber\n(relation=0x7f68240f1d58, persistence=112 'p') at relcache.c:3896\n#4 0x000055736715b952 in ExecuteTruncateGuts\n(explicit_rels=0x55736989e5b0, relids=0x55736989e600,\nrelids_logged=0x0, behavior=DROP_RESTRICT, restart_seqs=false,\n run_as_table_owner=false) at tablecmds.c:2146\n#5 0x000055736715affa in ExecuteTruncate (stmt=0x55736989f950) at\ntablecmds.c:1877\n#6 0x0000557367493693 in standard_ProcessUtility\n(pstmt=0x55736989fa00, queryString=0x55736989eed0 \"truncate table\npg_type;\", readOnlyTree=false,\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x5573698a0330, qc=0x7ffe19367ac0) at utility.c:728\n\nAs seen from the backtrace above, the assertion failure occurs in\n'RelationSetNewRelfilenumber()' at:\n\nif (RelationIsMapped(relation))\n{\n /* This case is only supported for indexes */\n Assert(relation->rd_rel->relkind == RELKIND_INDEX);\n}\n\nI would like to know why we are only expecting index tables here and\nnot the regular catalog tables. For instance, pg_type is a mapped\nrelation but not of index type, leading to the failure in this case.\nShould we consider changing this Assert condition from RELKIND_INDEX\nto (RELKIND_INDEX || RELKIND_RELATION)?\n\nAdditionally, is it advisable to restrict truncation of the pg_class\ntable? It's like a kind of circular dependency in case of pg_class\nwhich is not applicable in case of other catalog tables.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Tue, 18 Jun 2024 17:39:41 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Truncation of mapped catalogs (whether local or shared) leads to\n server crash" }, { "msg_contents": "On Tue, Jun 18, 2024 at 8:10 AM Ashutosh Sharma <[email protected]> wrote:\n> I've noticed that truncating mapped catalogs causes the server to\n> crash due to an assertion failure. 
Here are the details:\n>\n> Executing below commands:\n>\n> -- set allow_system_table_mods TO on;\n> -- truncate table pg_type;\n\nIf the operation isn't allowed without turning on\nallow_system_table_mods, that means that doing it is probably a bad\nidea and will probably break stuff, as happened here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Jun 2024 10:13:44 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Truncation of mapped catalogs (whether local or shared) leads to\n server crash" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Jun 18, 2024 at 8:10 AM Ashutosh Sharma <[email protected]> wrote:\n>> Executing below commands:\n>> -- set allow_system_table_mods TO on;\n>> -- truncate table pg_type;\n\n> If the operation isn't allowed without turning on\n> allow_system_table_mods, that means that doing it is probably a bad\n> idea and will probably break stuff, as happened here.\n\nNothing good can come of truncating *any* core system catalog --- what\ndo you think you'll still be able to do afterwards?\n\nI think the assertion you noticed is there because the code path gets\ntraversed during REINDEX, which is an operation we do support on\nsystem catalogs. I have zero interest in making truncate work\non them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jun 2024 10:20:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Truncation of mapped catalogs (whether local or shared) leads to\n server crash" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 18, 2024 at 7:50 PM Tom Lane <[email protected]> wrote:\n>\n> Robert Haas <[email protected]> writes:\n> > On Tue, Jun 18, 2024 at 8:10 AM Ashutosh Sharma <[email protected]> wrote:\n> >> Executing below commands:\n> >> -- set allow_system_table_mods TO on;\n> >> -- truncate table pg_type;\n>\n> > If the operation isn't allowed without turning on\n> > allow_system_table_mods, that means that doing it is probably a bad\n> > idea and will probably break stuff, as happened here.\n>\n> Nothing good can come of truncating *any* core system catalog --- what\n> do you think you'll still be able to do afterwards?\n>\n> I think the assertion you noticed is there because the code path gets\n> traversed during REINDEX, which is an operation we do support on\n> system catalogs. I have zero interest in making truncate work\n> on them.\n>\n\nI agree with you on that point. 
How about considering a complete\nrestriction instead?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Tue, 18 Jun 2024 19:58:26 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Truncation of mapped catalogs (whether local or shared) leads to\n server crash" }, { "msg_contents": "Hi,\n\nOn 2024-06-18 19:58:26 +0530, Ashutosh Sharma wrote:\n> On Tue, Jun 18, 2024 at 7:50 PM Tom Lane <[email protected]> wrote:\n> >\n> > Robert Haas <[email protected]> writes:\n> > > On Tue, Jun 18, 2024 at 8:10 AM Ashutosh Sharma <[email protected]> wrote:\n> > >> Executing below commands:\n> > >> -- set allow_system_table_mods TO on;\n> > >> -- truncate table pg_type;\n> >\n> > > If the operation isn't allowed without turning on\n> > > allow_system_table_mods, that means that doing it is probably a bad\n> > > idea and will probably break stuff, as happened here.\n> >\n> > Nothing good can come of truncating *any* core system catalog --- what\n> > do you think you'll still be able to do afterwards?\n> >\n> > I think the assertion you noticed is there because the code path gets\n> > traversed during REINDEX, which is an operation we do support on\n> > system catalogs. I have zero interest in making truncate work\n> > on them.\n> >\n> \n> I agree with you on that point. How about considering a complete\n> restriction instead?\n\nWhat's the point? There are occasional cases where doing something dangerous\nis useful, for debugging or corruption recovery. If we flat out prohibit this\nwe'll just need a allow_system_table_mods_but_for_real option.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2024 07:32:43 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Truncation of mapped catalogs (whether local or shared) leads to\n server crash" }, { "msg_contents": "Ashutosh Sharma <[email protected]> writes:\n> On Tue, Jun 18, 2024 at 7:50 PM Tom Lane <[email protected]> wrote:\n>> I think the assertion you noticed is there because the code path gets\n>> traversed during REINDEX, which is an operation we do support on\n>> system catalogs. I have zero interest in making truncate work\n>> on them.\n\n> I agree with you on that point. How about considering a complete\n> restriction instead?\n\nYou already broke the safety seal by enabling allow_system_table_mods.\nPerhaps the documentation of that is not scary enough?\n\n Allows modification of the structure of system tables as well as\n certain other risky actions on system tables. This is otherwise not\n allowed even for superusers. 
Ill-advised use of this setting can\n cause irretrievable data loss or seriously corrupt the database\n system.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Jun 2024 10:55:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Truncation of mapped catalogs (whether local or shared) leads to\n server crash" }, { "msg_contents": "Hi Robert, Andres, Tom,\n\nThank you for sharing your thoughts.\n\nOn Tue, Jun 18, 2024 at 8:02 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-06-18 19:58:26 +0530, Ashutosh Sharma wrote:\n> > On Tue, Jun 18, 2024 at 7:50 PM Tom Lane <[email protected]> wrote:\n> > >\n> > > Robert Haas <[email protected]> writes:\n> > > > On Tue, Jun 18, 2024 at 8:10 AM Ashutosh Sharma <[email protected]> wrote:\n> > > >> Executing below commands:\n> > > >> -- set allow_system_table_mods TO on;\n> > > >> -- truncate table pg_type;\n> > >\n> > > > If the operation isn't allowed without turning on\n> > > > allow_system_table_mods, that means that doing it is probably a bad\n> > > > idea and will probably break stuff, as happened here.\n> > >\n> > > Nothing good can come of truncating *any* core system catalog --- what\n> > > do you think you'll still be able to do afterwards?\n> > >\n> > > I think the assertion you noticed is there because the code path gets\n> > > traversed during REINDEX, which is an operation we do support on\n> > > system catalogs. I have zero interest in making truncate work\n> > > on them.\n> > >\n> >\n> > I agree with you on that point. How about considering a complete\n> > restriction instead?\n>\n> What's the point? There are occasional cases where doing something dangerous\n> is useful, for debugging or corruption recovery. If we flat out prohibit this\n> we'll just need a allow_system_table_mods_but_for_real option.\n>\n\nThis is specifically about truncation of system catalogs, and does not\nrefer to any other DML operations on system catalogs, which I see are\nnecessary for many extensions that directly update catalogs like\npg_proc and others. Additionally, according to the comments in\ntruncate_check_rel(), we permit truncation of the pg_largeobject\ncatalog specifically during pg_upgrade. So, afaiu truncation of any\ncatalogs other than this can be restricted.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Tue, 18 Jun 2024 20:40:21 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Truncation of mapped catalogs (whether local or shared) leads to\n server crash" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 18, 2024 at 8:25 PM Tom Lane <[email protected]> wrote:\n>\n> Ashutosh Sharma <[email protected]> writes:\n> > On Tue, Jun 18, 2024 at 7:50 PM Tom Lane <[email protected]> wrote:\n> >> I think the assertion you noticed is there because the code path gets\n> >> traversed during REINDEX, which is an operation we do support on\n> >> system catalogs. I have zero interest in making truncate work\n> >> on them.\n>\n> > I agree with you on that point. How about considering a complete\n> > restriction instead?\n>\n> You already broke the safety seal by enabling allow_system_table_mods.\n> Perhaps the documentation of that is not scary enough?\n>\n> Allows modification of the structure of system tables as well as\n> certain other risky actions on system tables. This is otherwise not\n> allowed even for superusers. 
Ill-advised use of this setting can\n> cause irretrievable data loss or seriously corrupt the database\n> system.\n>\n\nI was actually referring to just the truncation part here, not any DML\noperations, as I've observed their usage in certain extensions.\nHowever, truncation is just used for pg_largeobject and that too only\nduring pg_upgrade, so for other catalogs truncation can be avoided.\nBut that is just my perspective; if it's not essential, we can\npossibly stop this discussion here.\n\nThank you to everyone for sharing your valuable insights.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Tue, 18 Jun 2024 20:55:08 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Truncation of mapped catalogs (whether local or shared) leads to\n server crash" } ]
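For readers wondering which relations the discussion above applies to: mapped relations are exactly those whose file number lives in the relation mapper rather than in pg_class, so they show a zero relfilenode there. The query below is purely illustrative (it is not part of any patch); it lists them together with relkind, the field the Assert in RelationSetNewRelfilenumber() inspects, and shows plain catalogs such as pg_type alongside the mapped indexes that the assertion does expect:

SELECT oid::regclass AS relation, relkind, relisshared
  FROM pg_class
 WHERE relfilenode = 0
 ORDER BY relisshared, relkind, relation;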
[ { "msg_contents": "Hi\n\nFurther to my previous report [1] about zlib detection not working with\nMeson on Windows, I found it's similarly or entirely broken for the\nmajority of other dependencies, none of which are tested on the buildfarm\nas far as I can see.\n\nFor convenience, I've put together a number of Github actions [2] that show\nhow to build the various dependencies on Windows, in the most\nstandard/recommended way I can find for each. Another action combines these\ninto a single downloadable archive that people can test with, and another\none uses that archive to build PostgreSQL 12 through 16, all successfully.\n\nYou can see build logs, and download the various builds/artefacts from the\nGithub Workflow pages.\n\nMy next task was to extend that to support PostgreSQL 17 and beyond, which\nis where I started to run into problems. I've attempted builds using Meson\nwith each of the dependencies defined in the old-style config.pl, both with\nand without modifying the INCLUDE/LIBS envvars to include the directories\nfor the dependencies (as was found to work in the previous discussion re\nzlib):\n\nWill not successfully configure at all:\n\nmeson setup --auto-features=disabled\n-Dextra_include_dirs=C:\\build64\\include\n-Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dgssapi=enabled\nbuild-gssapi\n\nmeson setup --auto-features=disabled\n-Dextra_include_dirs=C:\\build64\\include\n-Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dicu=enabled\nbuild-icu\n\nmeson setup --auto-features=disabled\n-Dextra_include_dirs=C:\\build64\\include\n-Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dlibxml=enabled\nbuild-libxml\n\nmeson setup --auto-features=disabled\n-Dextra_include_dirs=C:\\build64\\include\n-Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dlz4=enabled\nbuild-lz4\n\nmeson setup --auto-features=disabled\n-Dextra_include_dirs=C:\\build64\\include\n-Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dnls=enabled\nbuild-nls\n\nmeson setup --auto-features=disabled\n-Dextra_include_dirs=C:\\build64\\include\n-Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Duuid=ossp\nbuild-uuid\n\nmeson setup --auto-features=disabled\n-Dextra_include_dirs=C:\\build64\\include\n-Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dzstd=enabled\nbuild-zstd\n\nConfigured with modified LIBS/INCLUDE:\n\nmeson setup --auto-features=disabled\n-Dextra_include_dirs=C:\\build64\\include\n-Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dlibxslt=enabled\nbuild-libxslt\n\nmeson setup --auto-features=disabled\n-Dextra_include_dirs=C:\\build64\\include\n-Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dssl=openssl\nbuild-openssl\n\nmeson setup --auto-features=disabled\n-Dextra_include_dirs=C:\\build64\\include\n-Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dzlib=enabled\nbuild-zlib\n\nI think it's important to note that Meson largely seems to want to use\npkgconfig and cmake to find dependencies. pkgconfig isn't really a thing on\nWindows (it is available, but isn't commonly used), and even cmake would\ntypically rely on finding things in either known installation directories\nor through lib/include vars. 
There really aren't standard directories like\n/usr/lib or /usr/include as we find on unixes, or pkgconfig files for\neverything.\n\nFor the EDB installers, the team has hand-crafted pkgconfig files for\neverything, which is clearly not a proper solution.\n\nI can provide logs and run tests if anyone wants me to do so. Testing so\nfar has been with the Ninja backend, in a VS2022 x86_64 native environment.\n\n[1]\nhttps://www.postgresql.org/message-id/CA+OCxozrPZx57ue8rmhq6CD1Jic5uqKh80=vTpZurSKESn-dkw@mail.gmail.com\n[2] https://github.com/dpage/winpgbuild/actions\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiFurther to my previous report [1] about zlib detection not working with Meson on Windows, I found it's similarly or entirely broken for the majority of other dependencies, none of which are tested on the buildfarm as far as I can see.For convenience, I've put together a number of Github actions [2] that show how to build the various dependencies on Windows, in the most standard/recommended way I can find for each. Another action combines these into a single downloadable archive that people can test with, and another one uses that archive to build PostgreSQL 12 through 16, all successfully.You can see build logs, and download the various builds/artefacts from the Github Workflow pages. My next task was to extend that to support PostgreSQL 17 and beyond, which is where I started to run into problems. I've attempted builds using Meson with each of the dependencies defined in the old-style config.pl, both with and without modifying the INCLUDE/LIBS envvars to include the directories for the dependencies (as was found to work in the previous discussion re zlib):Will not successfully configure at all:meson setup --auto-features=disabled -Dextra_include_dirs=C:\\build64\\include -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dgssapi=enabled build-gssapimeson setup --auto-features=disabled -Dextra_include_dirs=C:\\build64\\include -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dicu=enabled build-icumeson setup --auto-features=disabled -Dextra_include_dirs=C:\\build64\\include -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dlibxml=enabled build-libxmlmeson setup --auto-features=disabled -Dextra_include_dirs=C:\\build64\\include -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dlz4=enabled build-lz4meson setup --auto-features=disabled -Dextra_include_dirs=C:\\build64\\include -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dnls=enabled build-nlsmeson setup --auto-features=disabled -Dextra_include_dirs=C:\\build64\\include -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Duuid=ossp build-uuidmeson setup --auto-features=disabled -Dextra_include_dirs=C:\\build64\\include -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dzstd=enabled build-zstdConfigured with modified LIBS/INCLUDE:meson setup --auto-features=disabled -Dextra_include_dirs=C:\\build64\\include -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dlibxslt=enabled build-libxsltmeson setup --auto-features=disabled -Dextra_include_dirs=C:\\build64\\include -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dssl=openssl build-opensslmeson setup --auto-features=disabled -Dextra_include_dirs=C:\\build64\\include -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dzlib=enabled build-zlibI think it's important to note that Meson largely 
seems to want to use pkgconfig and cmake to find dependencies. pkgconfig isn't really a thing on Windows (it is available, but isn't commonly used), and even cmake would typically rely on finding things in either known installation directories or through lib/include vars. There really aren't standard directories like /usr/lib or /usr/include as we find on unixes, or pkgconfig files for everything.For the EDB installers, the team has hand-crafted pkgconfig files for everything, which is clearly not a proper solution.I can provide logs and run tests if anyone wants me to do so. Testing so far has been with the Ninja backend, in a VS2022 x86_64 native environment.[1] https://www.postgresql.org/message-id/CA+OCxozrPZx57ue8rmhq6CD1Jic5uqKh80=vTpZurSKESn-dkw@mail.gmail.com[2] https://github.com/dpage/winpgbuild/actions-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 18 Jun 2024 14:53:53 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\nOn 2024-06-18 14:53:53 +0100, Dave Page wrote:\n> My next task was to extend that to support PostgreSQL 17 and beyond, which\n> is where I started to run into problems. I've attempted builds using Meson\n> with each of the dependencies defined in the old-style config.pl, both with\n> and without modifying the INCLUDE/LIBS envvars to include the directories\n> for the dependencies (as was found to work in the previous discussion re\n> zlib):\n> \n> Will not successfully configure at all:\n\nDo you have logs for those failures?\n\n\n> meson setup --auto-features=disabled\n> -Dextra_include_dirs=C:\\build64\\include\n> -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dgssapi=enabled\n> build-gssapi\n\n> meson setup --auto-features=disabled\n> -Dextra_include_dirs=C:\\build64\\include\n> -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dicu=enabled\n> build-icu\n> \n> meson setup --auto-features=disabled\n> -Dextra_include_dirs=C:\\build64\\include\n> -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dlibxml=enabled\n> build-libxml\n> \n> meson setup --auto-features=disabled\n> -Dextra_include_dirs=C:\\build64\\include\n> -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dlz4=enabled\n> build-lz4\n> \n> meson setup --auto-features=disabled\n> -Dextra_include_dirs=C:\\build64\\include\n> -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dnls=enabled\n> build-nls\n> \n> meson setup --auto-features=disabled\n> -Dextra_include_dirs=C:\\build64\\include\n> -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Duuid=ossp\n> build-uuid\n> \n> meson setup --auto-features=disabled\n> -Dextra_include_dirs=C:\\build64\\include\n> -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dzstd=enabled\n> build-zstd\n> \n> Configured with modified LIBS/INCLUDE:\n> \n> meson setup --auto-features=disabled\n> -Dextra_include_dirs=C:\\build64\\include\n> -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dlibxslt=enabled\n> build-libxslt\n> \n> meson setup --auto-features=disabled\n> -Dextra_include_dirs=C:\\build64\\include\n> -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dssl=openssl\n> build-openssl\n> \n> meson setup --auto-features=disabled\n> -Dextra_include_dirs=C:\\build64\\include\n> -Dextra_lib_dirs=C:\\build64\\lib;C:\\build64\\lib64 --wipe -Dzlib=enabled\n> build-zlib\n> \n> I think it's 
important to note that Meson largely seems to want to use\n> pkgconfig and cmake to find dependencies. pkgconfig isn't really a thing on\n> Windows (it is available, but isn't commonly used), and even cmake would\n> typically rely on finding things in either known installation directories\n> or through lib/include vars.\n\nI am not really following what you mean with the cmake bit here?\n\nYou can configure additional places to search for cmake files with\nmeson setup --cmake-prefix-path=...\n\n\n> There really aren't standard directories like\n> /usr/lib or /usr/include as we find on unixes, or pkgconfig files for\n> everything.\n\nYes, and?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2024 07:38:50 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi\n\nOn Tue, 18 Jun 2024 at 15:38, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-06-18 14:53:53 +0100, Dave Page wrote:\n> > My next task was to extend that to support PostgreSQL 17 and beyond,\n> which\n> > is where I started to run into problems. I've attempted builds using\n> Meson\n> > with each of the dependencies defined in the old-style config.pl, both\n> with\n> > and without modifying the INCLUDE/LIBS envvars to include the directories\n> > for the dependencies (as was found to work in the previous discussion re\n> > zlib):\n> >\n> > Will not successfully configure at all:\n>\n> Do you have logs for those failures?\n>\n\nSure - https://developer.pgadmin.org/~dpage/build-logs.zip. Those are all\nwithout any modifications to %LIB% or %INCLUDE%.\n\n\n> I think it's important to note that Meson largely seems to want to use\n> > pkgconfig and cmake to find dependencies. pkgconfig isn't really a thing\n> on\n> > Windows (it is available, but isn't commonly used), and even cmake would\n> > typically rely on finding things in either known installation directories\n> > or through lib/include vars.\n>\n> I am not really following what you mean with the cmake bit here?\n>\n> You can configure additional places to search for cmake files with\n> meson setup --cmake-prefix-path=...\n>\n\nNone of the dependencies include cmake files for distribution on Windows,\nso there are no additional files to tell meson to search for. The same\napplies to pkgconfig files, which is why the EDB team had to manually craft\nthem.\n\n\n>\n>\n> > There really aren't standard directories like\n> > /usr/lib or /usr/include as we find on unixes, or pkgconfig files for\n> > everything.\n>\n> Yes, and?\n>\n\nAnd that's why we really need to be able to locate headers and libraries\neasily by passing paths to meson, as we can't rely on pkgconfig, cmake, or\nthings being in some standard directory on Windows.\n\nThanks.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiOn Tue, 18 Jun 2024 at 15:38, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-06-18 14:53:53 +0100, Dave Page wrote:\n> My next task was to extend that to support PostgreSQL 17 and beyond, which\n> is where I started to run into problems. 
I've attempted builds using Meson\n> with each of the dependencies defined in the old-style config.pl, both with\n> and without modifying the INCLUDE/LIBS envvars to include the directories\n> for the dependencies (as was found to work in the previous discussion re\n> zlib):\n> \n> Will not successfully configure at all:\n\nDo you have logs for those failures?Sure - https://developer.pgadmin.org/~dpage/build-logs.zip. Those are all without any modifications to %LIB% or %INCLUDE%.\n> I think it's important to note that Meson largely seems to want to use\n> pkgconfig and cmake to find dependencies. pkgconfig isn't really a thing on\n> Windows (it is available, but isn't commonly used), and even cmake would\n> typically rely on finding things in either known installation directories\n> or through lib/include vars.\n\nI am not really following what you mean with the cmake bit here?\n\nYou can configure additional places to search for cmake files with\nmeson setup --cmake-prefix-path=...None of the dependencies include cmake files for distribution on Windows, so there are no additional files to tell meson to search for. The same applies to pkgconfig files, which is why the EDB team had to manually craft them. \n\n\n> There really aren't standard directories like\n> /usr/lib or /usr/include as we find on unixes, or pkgconfig files for\n> everything.\n\nYes, and?And that's why we really need to be able to locate headers and libraries easily by passing paths to meson, as we can't rely on pkgconfig, cmake, or things being in some standard directory on Windows.Thanks.-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 18 Jun 2024 15:54:27 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\n\nOn 2024-06-18 15:54:27 +0100, Dave Page wrote:\n> On Tue, 18 Jun 2024 at 15:38, Andres Freund <[email protected]> wrote:\n> > Do you have logs for those failures?\n> >\n>\n> Sure - https://developer.pgadmin.org/~dpage/build-logs.zip. Those are all\n> without any modifications to %LIB% or %INCLUDE%.\n\nThanks.\n\n\n> > I think it's important to note that Meson largely seems to want to use\n> > > pkgconfig and cmake to find dependencies. pkgconfig isn't really a thing\n> > on\n> > > Windows (it is available, but isn't commonly used), and even cmake would\n> > > typically rely on finding things in either known installation directories\n> > > or through lib/include vars.\n> >\n> > I am not really following what you mean with the cmake bit here?\n> >\n> > You can configure additional places to search for cmake files with\n> > meson setup --cmake-prefix-path=...\n> >\n>\n> None of the dependencies include cmake files for distribution on Windows,\n> so there are no additional files to tell meson to search for. The same\n> applies to pkgconfig files, which is why the EDB team had to manually craft\n> them.\n\nMany of them do include at least cmake files on windows if you build them\nthough?\n\n\nBtw, I've been working with Bilal to add a few of the dependencies to the CI\nimages so we can test those automatically. Using vcpkg. We got that nearly\nworking, but he's on vacation this week... That does ensure both cmake and\n.pc files are generated, fwiw.\n\nCurrently builds gettext, icu, libxml2, libxslt, lz4, openssl, pkgconf,\npython3, tcl, zlib, zstd.\n\n\n\nI'm *NOT* sure that vcpkg is the way to go, fwiw. 
It does seem advantageous to\nuse one of the toolkits thats commonly built for building dependencies on\nwindows, which seems to mean vcpkg or conan.\n\n\n> And that's why we really need to be able to locate headers and libraries\n> easily by passing paths to meson, as we can't rely on pkgconfig, cmake, or\n> things being in some standard directory on Windows.\n\nExcept that that often causes hard to diagnose breakages, because that doesn't\nallow including the necessary compiler/linker flags [2]. It's a bad model, we shouldn't\nperpetuate it. If we want to forever make windows a complicated annoying\nstepchild, that's the way to go.\n\nFWIW, at least libzstd, libxml [3], lz4, zlib can generate cmake dependency\nfiles on windows in their upstream code.\n\nI'm *not* against adding \"hardcoded\" dependency lookup stuff for libraries\nwhere other approaches aren't feasible, I just don't think it's a good idea to\nadd fragile stuff that will barely be tested, when not necessary.\n\nGreetings,\n\nAndres Freund\n\n\n[1] Here's a build of PG with the dependencies installed, builds\n https://cirrus-ci.com/task/4953968097361920\n\n[2] E.g.\n https://github.com/postgres/postgres/blob/REL_16_STABLE/src/tools/msvc/Mkvcbuild.pm#L600\n https://github.com/postgres/postgres/blob/REL_16_STABLE/src/tools/msvc/Solution.pm#L1039\n\n[3] Actually, at least your libxml build actually *did* include both .pc and\n cmake files. So just pointing to the relevant path would do the trick.\n\n\n", "msg_date": "Tue, 18 Jun 2024 09:08:45 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi\n\nOn Tue, 18 Jun 2024 at 17:08, Andres Freund <[email protected]> wrote:\n\n> > None of the dependencies include cmake files for distribution on Windows,\n> > so there are no additional files to tell meson to search for. The same\n> > applies to pkgconfig files, which is why the EDB team had to manually\n> craft\n> > them.\n>\n> Many of them do include at least cmake files on windows if you build them\n> though?\n>\n\nThe only one that does is libxml2 as far as I can see. 
And that doesn't\nseem to work even if I use --cmake-prefix-path= as you suggested:\n\nC:\\Users\\dpage\\git\\postgresql>meson setup --auto-features=disabled --wipe\n-Dlibxml=enabled --cmake-prefix-path=C:\\build64\\lib\\cmake\\libxml2-2.11.8\nbuild-libxml\nThe Meson build system\nVersion: 1.4.0\nSource dir: C:\\Users\\dpage\\git\\postgresql\nBuild dir: C:\\Users\\dpage\\git\\postgresql\\build-libxml\nBuild type: native build\nProject name: postgresql\nProject version: 17beta1\nC compiler for the host machine: cl (msvc 19.39.33523 \"Microsoft (R) C/C++\nOptimizing Compiler Version 19.39.33523 for x64\")\nC linker for the host machine: link link 14.39.33523.0\nHost machine cpu family: x86_64\nHost machine cpu: x86_64\nRun-time dependency threads found: YES\nLibrary ws2_32 found: YES\nLibrary secur32 found: YES\nProgram perl found: YES (C:\\Strawberry\\perl\\bin\\perl.EXE)\nProgram python found: YES (C:\\Python312\\python.EXE)\nProgram win_flex found: YES 2.6.4 2.6.4\n(C:\\ProgramData\\chocolatey\\bin\\win_flex.EXE)\nProgram win_bison found: YES 3.7.4 3.7.4\n(C:\\ProgramData\\chocolatey\\bin\\win_bison.EXE)\nProgram sed found: YES (C:\\ProgramData\\chocolatey\\bin\\sed.EXE)\nProgram prove found: YES (C:\\Strawberry\\perl\\bin\\prove.BAT)\nProgram tar found: YES (C:\\Windows\\system32\\tar.EXE)\nProgram gzip found: YES (C:\\ProgramData\\chocolatey\\bin\\gzip.EXE)\nProgram lz4 found: NO\nProgram openssl found: YES (C:\\build64\\bin\\openssl.EXE)\nProgram zstd found: NO\nProgram dtrace skipped: feature dtrace disabled\nProgram config/missing found: YES (sh\nC:\\Users\\dpage\\git\\postgresql\\config/missing)\nProgram cp found: YES (C:\\Program Files (x86)\\GnuWin32\\bin\\cp.EXE)\nProgram xmllint found: YES (C:\\build64\\bin\\xmllint.EXE)\nProgram xsltproc found: YES (C:\\build64\\bin\\xsltproc.EXE)\nProgram wget found: YES (C:\\ProgramData\\chocolatey\\bin\\wget.EXE)\nProgram C:\\Python312\\Scripts\\meson found: YES\n(C:\\Python312\\Scripts\\meson.exe)\nCheck usable header \"bsd_auth.h\" skipped: feature bsd_auth disabled\nCheck usable header \"dns_sd.h\" skipped: feature bonjour disabled\nCompiler for language cpp skipped: feature llvm disabled\nFound pkg-config: YES (C:\\ProgramData\\chocolatey\\bin\\pkg-config.EXE) 0.28\nFound CMake: C:\\Program Files\\Microsoft Visual\nStudio\\2022\\Community\\Common7\\IDE\\CommonExtensions\\Microsoft\\CMake\\CMake\\bin\\cmake.EXE\n(3.28.0)\nRun-time dependency libxml-2.0 found: NO (tried pkgconfig and cmake)\n\nmeson.build:796:11: ERROR: Dependency \"libxml-2.0\" not found, tried\npkgconfig and cmake\n\nA full log can be found at\nC:\\Users\\dpage\\git\\postgresql\\build-libxml\\meson-logs\\meson-log.txt\n\n\n>\n>\n> Btw, I've been working with Bilal to add a few of the dependencies to the\n> CI\n> images so we can test those automatically. Using vcpkg. We got that nearly\n> working, but he's on vacation this week... That does ensure both cmake and\n> .pc files are generated, fwiw.\n>\n> Currently builds gettext, icu, libxml2, libxslt, lz4, openssl, pkgconf,\n> python3, tcl, zlib, zstd.\n\n\nThat appears to be using Mingw/Msys, which is quite different from a VC++\nbuild, in part because it's a full environment with its own package manager\nand packages that people have put a lot of effort into making work as they\ndo on unix.\n\n\n> I'm *NOT* sure that vcpkg is the way to go, fwiw. 
It does seem\n> advantageous to\n> use one of the toolkits thats commonly built for building dependencies on\n> windows, which seems to mean vcpkg or conan.\n>\n\nI don't think requiring or expecting vcpkg or conan is reasonable at all,\nfor a number of reasons:\n\n- Neither supports all the dependencies at present.\n- There are real supply chain verification concerns for vendors.\n- That would be a huge change from what we've required in the last 19\nyears, with no deprecation notices or warnings for packagers etc.\n\n\n> > And that's why we really need to be able to locate headers and libraries\n> > easily by passing paths to meson, as we can't rely on pkgconfig, cmake,\n> or\n> > things being in some standard directory on Windows.\n>\n> Except that that often causes hard to diagnose breakages, because that\n> doesn't\n> allow including the necessary compiler/linker flags [2]. It's a bad\n> model, we shouldn't\n> perpetuate it. If we want to forever make windows a complicated annoying\n> stepchild, that's the way to go.\n>\n\nThat is a good point, though I suspect it wouldn't solve your second\nexample of the Kerberos libraries, as you'll get both 32 and 64 bit libs if\nyou follow their standard process for building on Windows so you still need\nto have code to pick the right ones.\n\n\n>\n> FWIW, at least libzstd, libxml [3], lz4, zlib can generate cmake dependency\n> files on windows in their upstream code.\n>\n\nIn the case of zstd, it does not if you build with VC++, the Makefile, or\nMeson, at least in my testing. It looks like it would if you built it\nwith cmake, but I couldn't get that to work in 10 minutes or so of messing\naround. And that's a perfect example of what I'm bleating about - there are\noften many ways of building things on Windows and there are definitely many\nways of getting things on Windows, and they're not all equal. We've either\ngot to be extremely prescriptive in our docs, telling people precisely what\nthey need to download for each dependency, or how to build it themselves in\nthe way that will work with PostgreSQL, or the build system needs to be\nflexible enough to handle different dependency variations, as the old VC++\nbuild system was.\n\n\n>\n> I'm *not* against adding \"hardcoded\" dependency lookup stuff for libraries\n> where other approaches aren't feasible, I just don't think it's a good\n> idea to\n> add fragile stuff that will barely be tested, when not necessary.\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n> [1] Here's a build of PG with the dependencies installed, builds\n> https://cirrus-ci.com/task/4953968097361920\n>\n> [2] E.g.\n>\n> https://github.com/postgres/postgres/blob/REL_16_STABLE/src/tools/msvc/Mkvcbuild.pm#L600\n>\n> https://github.com/postgres/postgres/blob/REL_16_STABLE/src/tools/msvc/Solution.pm#L1039\n>\n> [3] Actually, at least your libxml build actually *did* include both .pc\n> and\n> cmake files. So just pointing to the relevant path would do the trick.\n>\n\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiOn Tue, 18 Jun 2024 at 17:08, Andres Freund <[email protected]> wrote:\n> None of the dependencies include cmake files for distribution on Windows,\n> so there are no additional files to tell meson to search for. 
The same\n> applies to pkgconfig files, which is why the EDB team had to manually craft\n> them.\n\nMany of them do include at least cmake files on windows if you build them\nthough?The only one that does is libxml2 as far as I can see. And that doesn't seem to work even if I use --cmake-prefix-path= as you suggested:C:\\Users\\dpage\\git\\postgresql>meson setup --auto-features=disabled --wipe -Dlibxml=enabled --cmake-prefix-path=C:\\build64\\lib\\cmake\\libxml2-2.11.8 build-libxmlThe Meson build systemVersion: 1.4.0Source dir: C:\\Users\\dpage\\git\\postgresqlBuild dir: C:\\Users\\dpage\\git\\postgresql\\build-libxmlBuild type: native buildProject name: postgresqlProject version: 17beta1C compiler for the host machine: cl (msvc 19.39.33523 \"Microsoft (R) C/C++ Optimizing Compiler Version 19.39.33523 for x64\")C linker for the host machine: link link 14.39.33523.0Host machine cpu family: x86_64Host machine cpu: x86_64Run-time dependency threads found: YESLibrary ws2_32 found: YESLibrary secur32 found: YESProgram perl found: YES (C:\\Strawberry\\perl\\bin\\perl.EXE)Program python found: YES (C:\\Python312\\python.EXE)Program win_flex found: YES 2.6.4 2.6.4 (C:\\ProgramData\\chocolatey\\bin\\win_flex.EXE)Program win_bison found: YES 3.7.4 3.7.4 (C:\\ProgramData\\chocolatey\\bin\\win_bison.EXE)Program sed found: YES (C:\\ProgramData\\chocolatey\\bin\\sed.EXE)Program prove found: YES (C:\\Strawberry\\perl\\bin\\prove.BAT)Program tar found: YES (C:\\Windows\\system32\\tar.EXE)Program gzip found: YES (C:\\ProgramData\\chocolatey\\bin\\gzip.EXE)Program lz4 found: NOProgram openssl found: YES (C:\\build64\\bin\\openssl.EXE)Program zstd found: NOProgram dtrace skipped: feature dtrace disabledProgram config/missing found: YES (sh C:\\Users\\dpage\\git\\postgresql\\config/missing)Program cp found: YES (C:\\Program Files (x86)\\GnuWin32\\bin\\cp.EXE)Program xmllint found: YES (C:\\build64\\bin\\xmllint.EXE)Program xsltproc found: YES (C:\\build64\\bin\\xsltproc.EXE)Program wget found: YES (C:\\ProgramData\\chocolatey\\bin\\wget.EXE)Program C:\\Python312\\Scripts\\meson found: YES (C:\\Python312\\Scripts\\meson.exe)Check usable header \"bsd_auth.h\" skipped: feature bsd_auth disabledCheck usable header \"dns_sd.h\" skipped: feature bonjour disabledCompiler for language cpp skipped: feature llvm disabledFound pkg-config: YES (C:\\ProgramData\\chocolatey\\bin\\pkg-config.EXE) 0.28Found CMake: C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\Common7\\IDE\\CommonExtensions\\Microsoft\\CMake\\CMake\\bin\\cmake.EXE (3.28.0)Run-time dependency libxml-2.0 found: NO (tried pkgconfig and cmake)meson.build:796:11: ERROR: Dependency \"libxml-2.0\" not found, tried pkgconfig and cmakeA full log can be found at C:\\Users\\dpage\\git\\postgresql\\build-libxml\\meson-logs\\meson-log.txt \n\n\nBtw, I've been working with Bilal to add a few of the dependencies to the CI\nimages so we can test those automatically. Using vcpkg. We got that nearly\nworking, but he's on vacation this week...  That does ensure both cmake and\n.pc files are generated, fwiw.\n\nCurrently builds gettext, icu, libxml2, libxslt, lz4, openssl, pkgconf,\npython3, tcl, zlib, zstd. That appears to be using Mingw/Msys, which is quite different from a VC++ build, in part because it's a full environment with its own package manager and packages that people have put a lot of effort into making work as they do on unix.  \nI'm *NOT* sure that vcpkg is the way to go, fwiw. 
It does seem advantageous to\nuse one of the toolkits thats commonly built for building dependencies on\nwindows, which seems to mean vcpkg or conan.I don't think requiring or expecting vcpkg or conan is reasonable at all, for a number of reasons:- Neither supports all the dependencies at present.- There are real supply chain verification concerns for vendors.- That would be a huge change from what we've required in the last 19 years, with no deprecation notices or warnings for packagers etc. \n> And that's why we really need to be able to locate headers and libraries\n> easily by passing paths to meson, as we can't rely on pkgconfig, cmake, or\n> things being in some standard directory on Windows.\n\nExcept that that often causes hard to diagnose breakages, because that doesn't\nallow including the necessary compiler/linker flags [2].  It's a bad model, we shouldn't\nperpetuate it.  If we want to forever make windows a complicated annoying\nstepchild, that's the way to go.That is a good point, though I suspect it wouldn't solve your second example of the Kerberos libraries, as you'll get both 32 and 64 bit libs if you follow their standard process for building on Windows so you still need to have code to pick the right ones. \n\nFWIW, at least libzstd, libxml [3], lz4, zlib can generate cmake dependency\nfiles on windows in their upstream code.In the case of zstd, it does not if you build with VC++, the Makefile, or Meson, at least in my testing. It looks like it would if you built it with cmake, but I couldn't get that to work in 10 minutes or so of messing around. And that's a perfect example of what I'm bleating about - there are often many ways of building things on Windows and there are definitely many ways of getting things on Windows, and they're not all equal. We've either got to be extremely prescriptive in our docs, telling people precisely what they need to download for each dependency, or how to build it themselves in the way that will work with PostgreSQL, or the build system needs to be flexible enough to handle different dependency variations, as the old VC++ build system was. \n\nI'm *not* against adding \"hardcoded\" dependency lookup stuff for libraries\nwhere other approaches aren't feasible, I just don't think it's a good idea to\nadd fragile stuff that will barely be tested, when not necessary.\n\nGreetings,\n\nAndres Freund\n\n\n[1] Here's a build of PG with the dependencies installed, builds\n    https://cirrus-ci.com/task/4953968097361920\n\n[2] E.g.\n    https://github.com/postgres/postgres/blob/REL_16_STABLE/src/tools/msvc/Mkvcbuild.pm#L600\n    https://github.com/postgres/postgres/blob/REL_16_STABLE/src/tools/msvc/Solution.pm#L1039\n\n[3] Actually, at least your libxml build actually *did* include both .pc and\n    cmake files. So just pointing to the relevant path would do the trick.\n-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Wed, 19 Jun 2024 14:47:50 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\nOn 2024-06-19 14:47:50 +0100, Dave Page wrote:\n\n> > I'm *NOT* sure that vcpkg is the way to go, fwiw. 
It does seem\n> > advantageous to\n> > use one of the toolkits thats commonly built for building dependencies on\n> > windows, which seems to mean vcpkg or conan.\n> >\n>\n> I don't think requiring or expecting vcpkg or conan is reasonable at all,\n> for a number of reasons:\n>\n> - Neither supports all the dependencies at present.\n> - There are real supply chain verification concerns for vendors.\n> - That would be a huge change from what we've required in the last 19\n> years, with no deprecation notices or warnings for packagers etc.\n\nI don't think we should hard-require one specifically. I do think it'd be good\nif we provided an easy recipe for dependencies to be installed though.\n\n\nI think such flexibility acually means it becomes *more* important to abstract\naway some of the concrete ways of using the various dependencies. It doesn't\nmake sense for postgres to understand the internals of each dependency on all\nplatforms to a detail that it can cope with all the different ways of linking\nagainst them.\n\nE.g. libxml can be built with icu, lzma, zlib support. If libxml is built\nstatically, postgres needs to link to all those libraries as well. How can we\nknow which of those dependencies are enabled?\n\n\nEven if we can make that somehow work, it's not reasonable for postgres\ndevelopers adding a dependency to have to figure out how to probe all of this,\nwhen literally no other platform works that way anymore.\n\n\nIf you look around at recipes for building postgres on windows, they all have\nto patch src/tools/msvc (see links at the bottom), because the builtin paths\nand flags just don't work outside of a single way of acquiring the dependency.\n\n\n\nThe fact that this thread started only now is actually a good example for how\nbroken the approach to internalize all knowledge about building against\nlibraries into postgres is. This could all have been figured out 1+ years ago\n- but wasn't.\n\nUnless you want to require postgres devs to get constantly in the muck on\nwindows, we'll never get that right until just before the release.\n\n\n\nI don't particularly care how we abstract away the low level linking details\non windows. We can use pkgconf, we can use cmake, we can invent our own thing.\nBut it has to be something other than hardcoding windows library paths and\ncompiler flags into our buildsystem.\n\n\nAnd yes, that might make it a bit harder for a packager on windows, but\nwindows is already a *massive* drag on PG developers, it has to be somewhat\nmanageable.\n\nI do think we can make the effort of windows dependency management a lot more\nreasonable than it is now though, by providing a recipe for acquiring the\ndependency in some form. It's a lot easier to for packagers and developers to\ncustomize ontop of something like that.\n\n\nFrankly, the fact that there's pretty much no automated testing of the various\ndependencies that's accessible to non-windows devs is not a sustainable\nsituation.\n\n\n> > Btw, I've been working with Bilal to add a few of the dependencies to the\n> > CI\n> > images so we can test those automatically. Using vcpkg. We got that nearly\n> > working, but he's on vacation this week... 
That does ensure both cmake and\n> > .pc files are generated, fwiw.\n> >\n> > Currently builds gettext, icu, libxml2, libxslt, lz4, openssl, pkgconf,\n> > python3, tcl, zlib, zstd.\n>\n>\n> That appears to be using Mingw/Msys, which is quite different from a VC++\n> build, in part because it's a full environment with its own package manager\n> and packages that people have put a lot of effort into making work as they\n> do on unix.\n\nErr, that was a copy-paste mistake on my end and doesn't even use the vcpkg\ngenerated stuff.\n\nHere's an example build with most dependencies enabled (see below for more\ndetails):\n\nhttps://cirrus-ci.com/task/6497321108635648?logs=configure#L323\n\n\nI started hacking a bit further on testing all dependencies, which led me down\na few rabbitholes:\n\n\n- kerberos: When built linking against a debug runtime, it spews *ginormous*\n amounts of information onto stderr. Unfortunately its buildsystem doesn't\n seperate out debugging output and linking against a debug runtime. Argh.\n\n The tests fail even with a non-debug runtime though, due to debugging output\n in some cases, not sure why:\n https://cirrus-ci.com/task/5872684519653376?logs=check_world#L502\n\n Separately, the kerberos tests don't seem to be prepared to work on windows\n :(.\n\n So I disabled using it in CI for now.\n\n\n- Linking the backend dynamically against lz4, icu, ssl, xml, xslt, zstd, zlib\n slows down the tests noticeably (~20%). So I ended up building those\n statically.\n\n I ran into some issue with using a static libintl. I made it work, but for\n now reverted to a dynamic one.\n\n\n- Enabling nls slows down the tests by about 15%, somewhat painful. This is\n when statically linking, it's a bit worse when linked dynamically :(.\n\n\n- readline: Instead of the old issue with a compiler error, now we get a\n compiler crash:\n https://developercommunity.visualstudio.com/t/tab-completec4023:-fatal-error-C1001:/10685868\n\n The issue is fairly trivial to work around, we just need to break the the\n if/else chain into two. Probably deserves a bigger refactoring, but that's\n for another day.\n\n\nI haven't yet looked into a) uuid b) tcl. I think those are the only other\nmissing dependencies.\n\n\n> > Many of them do include at least cmake files on windows if you build them\n> > though?\n\n> The only one that does is libxml2 as far as I can see. And that doesn't\n> seem to work even if I use --cmake-prefix-path= as you suggested:\n\nUgh, that's because they used a different name for their cmake dependency than\nfor pkg-config. We can add the alternative spelling to meson.build.\n\n\n\n>\n> > > And that's why we really need to be able to locate headers and libraries\n> > > easily by passing paths to meson, as we can't rely on pkgconfig, cmake,\n> > or\n> > > things being in some standard directory on Windows.\n> >\n> > Except that that often causes hard to diagnose breakages, because that\n> > doesn't allow including the necessary compiler/linker flags [2]. It's a\n> > bad model, we shouldn't perpetuate it. 
If we want to forever make windows\n> > a complicated annoying stepchild, that's the way to go.\n>\n> That is a good point, though I suspect it wouldn't solve your second\n> example of the Kerberos libraries, as you'll get both 32 and 64 bit libs if\n> you follow their standard process for building on Windows so you still need\n> to have code to pick the right ones.\n\nvcpkg for one does provide .pc files for kerberos.\n\n\n> > FWIW, at least libzstd, libxml [3], lz4, zlib can generate cmake dependency\n> > files on windows in their upstream code.\n> >\n>\n> In the case of zstd, it does not if you build with VC++, the Makefile, or\n> Meson, at least in my testing.\n\nWhen building with meson it does generate a .pc file, which does work with\nPG as-is.\n\n\n> It looks like it would if you built it with cmake, but I couldn't get that\n> to work in 10 minutes or so of messing around.\n\n\n> And that's a perfect example of what I'm bleating about - there are often\n> many ways of building things on Windows and there are definitely many ways\n> of getting things on Windows, and they're not all equal.\n\nRight - but that's precisely which is why it's unreasable for postgres to know\nall the ins and outs of the different file locations and compiler flags for\nall those sources. Hence needing to abstract that.\n\n\n> We've either got to be extremely prescriptive in our docs, telling people\n> precisely what they need to download for each dependency, or how to build it\n> themselves in the way that will work with PostgreSQL, or the build system\n> needs to be flexible enough to handle different dependency variations, as\n> the old VC++ build system was.\n\nI'm confused - the old build system wasn't flexible around this stuff *at\nall*. Everyone had to patch it to get dependencies to work, unless you chose\nexactly the right source to download from - which was often not documented or\noutdated.\n\nFor example:\n- https://github.com/microsoft/vcpkg/blob/master/ports/libpq/windows/msbuild.patch\n- https://github.com/conan-io/conan-center-index/blob/1b24f7c74994ec6573e322b7ae4111c10f620ffa/recipes/libpq/all/conanfile.py#L116-L160\n- https://github.com/conda-forge/postgresql-feedstock/tree/main/recipe/patches\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 20 Jun 2024 13:58:49 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi\n\nOn Thu, 20 Jun 2024 at 21:58, Andres Freund <[email protected]> wrote:\n\n>\n> > I don't think requiring or expecting vcpkg or conan is reasonable at all,\n> > for a number of reasons:\n> >\n> > - Neither supports all the dependencies at present.\n> > - There are real supply chain verification concerns for vendors.\n> > - That would be a huge change from what we've required in the last 19\n> > years, with no deprecation notices or warnings for packagers etc.\n>\n> I don't think we should hard-require one specifically. I do think it'd be\n> good\n> if we provided an easy recipe for dependencies to be installed though.\n>\n\nThat is precisely what https://github.com/dpage/winpgbuild/ was intended\nfor - and it works well for PG <= 16.\n\n\n> I think such flexibility acually means it becomes *more* important to\n> abstract\n> away some of the concrete ways of using the various dependencies. 
It\n> doesn't\n> make sense for postgres to understand the internals of each dependency on\n> all\n> platforms to a detail that it can cope with all the different ways of\n> linking\n> against them.\n>\n> E.g. libxml can be built with icu, lzma, zlib support. If libxml is built\n> statically, postgres needs to link to all those libraries as well. How\n> can we\n> know which of those dependencies are enabled?\n>\n\nI don't think it's unreasonable to not support static linking, but I take\nyour point.\n\n\n> Even if we can make that somehow work, it's not reasonable for postgres\n> developers adding a dependency to have to figure out how to probe all of\n> this,\n> when literally no other platform works that way anymore.\n>\n> If you look around at recipes for building postgres on windows, they all\n> have\n> to patch src/tools/msvc (see links at the bottom), because the builtin\n> paths\n> and flags just don't work outside of a single way of acquiring the\n> dependency.\n>\n\nI've been responsible for the Windows installers at EDB since we started\nwork on them, and prior to that built the original ones with Magnus. Since\nv8.0, I've implemented multiple frameworks for building those packages, and\nfor building PostgreSQL as a dependency of other things (e.g. pgAdmin).\nI've done that using builds of dependencies found at random places on the\ninternet, building them all myself, and a mixture.\n\nI have never once had to patch the MSVC build system. The most I've ever\nhad to do is copy/rename a .lib file - zlib, which for some reason uses\ndifferent naming depending on how you build it. I vaguely recall that\nOpenSSL had a similar issue in the distant past.\n\nMy point is that we seem to be heading from minor hacks to get things\nworking in some corner cases, towards requiring packagers and other people\nbuilding PostgreSQL on Windows having to do significant work to make the\ndependencies look as we now expect. I suspect even more people will end up\npatching the Meson build system as it might actually be easier to get\nthings to work.\n\nThe fact that this thread started only now is actually a good example for\n> how\n> broken the approach to internalize all knowledge about building against\n> libraries into postgres is. This could all have been figured out 1+ years\n> ago\n> - but wasn't.\n>\n> Unless you want to require postgres devs to get constantly in the muck on\n> windows, we'll never get that right until just before the release.\n>\n\n<rant>\nRight now I'd be happy to just have the old MSVC build system back until we\ncan figure out a less complicated way to get to Meson (which I fully\nsupport).\n\nMy assumption all along was that Meson would replace autoconf etc. before\nanything happened with MSVC, precisely because that's the type of\nenvironment all the Postgres devs work in primarily. Instead we seem to\nhave taken what I think is a flawed approach of entirely replacing the\nbuild system on the platform none of the devs use, whilst leaving the new\nsystem as an experimental option on the platforms it will have had orders\nof magnitude more testing.\n\nWhat makes it worse, is that I don't believe anyone was warned about such a\ndrastic change. Packagers were told about the git archive changes to the\ntarball generation, but that's it (and we've said before, we can't expect\npackagers to follow all the activity on -hackers).\n</rant>\n\n\n> I don't particularly care how we abstract away the low level linking\n> details\n> on windows. 
We can use pkgconf, we can use cmake, we can invent our own\n> thing.\n> But it has to be something other than hardcoding windows library paths and\n> compiler flags into our buildsystem.\n>\n>\n> And yes, that might make it a bit harder for a packager on windows, but\n> windows is already a *massive* drag on PG developers, it has to be somewhat\n> manageable.\n>\n> I do think we can make the effort of windows dependency management a lot\n> more\n> reasonable than it is now though, by providing a recipe for acquiring the\n> dependency in some form. It's a lot easier to for packagers and developers\n> to\n> customize ontop of something like that.\n>\n\nWell as I noted, that is the point of my Github repo above. You can just go\ndownload the binaries from the all_deps action - you can even download the\nconfig.pl and buildenv.pl that will work with those dependencies (those\nfiles are artefacts of the postgresql action).\n\nWe/I *could* add cmake/pc file generation to that tool, which would make\nthings work nicely with PostgreSQL 17 of course - however my original aim\nfor the project was to build all the dependencies in their officially\ndocumented way, using MSVC (or UCRT64 if MSVC can't be used) for maximum\ncompatibility with the PG build, specifically eliminating or at least\nminimising any custom build steps/hacks. As it turns out, I think the only\nhack I really have is to avoid having to do an otherwise unnecessary 32bit\nbuild of krb5.\n\nErr, that was a copy-paste mistake on my end and doesn't even use the vcpkg\n> generated stuff.\n>\n> Here's an example build with most dependencies enabled (see below for more\n> details):\n>\n> https://cirrus-ci.com/task/6497321108635648?logs=configure#L323\n\n\nOK.\n\n\n> I started hacking a bit further on testing all dependencies, which led me\n> down\n> a few rabbitholes:\n>\n>\n> - kerberos: When built linking against a debug runtime, it spews\n> *ginormous*\n> amounts of information onto stderr. Unfortunately its buildsystem doesn't\n> seperate out debugging output and linking against a debug runtime. Argh.\n>\n> The tests fail even with a non-debug runtime though, due to debugging\n> output\n> in some cases, not sure why:\n> https://cirrus-ci.com/task/5872684519653376?logs=check_world#L502\n>\n> Separately, the kerberos tests don't seem to be prepared to work on\n> windows\n> :(.\n>\n> So I disabled using it in CI for now.\n>\n\nUrgh, makes sense.\n\n\n>\n>\n> - Linking the backend dynamically against lz4, icu, ssl, xml, xslt, zstd,\n> zlib\n> slows down the tests noticeably (~20%). So I ended up building those\n> statically.\n>\n\nCurious. I wonder if that translates into a general 20% performance hit.\nPresumably it would for anything that looks similar to whatever test/tests\nare affected.\n\n\n>\n> I ran into some issue with using a static libintl. I made it work, but\n> for\n> now reverted to a dynamic one.\n>\n>\n> - Enabling nls slows down the tests by about 15%, somewhat painful. This is\n> when statically linking, it's a bit worse when linked dynamically :(.\n>\n\nThat one I can imagine being in psql, so maybe not a big issue for most\nreal world use cases.\n\n\n>\n>\n> - readline: Instead of the old issue with a compiler error, now we get a\n> compiler crash:\n>\n> https://developercommunity.visualstudio.com/t/tab-completec4023:-fatal-error-C1001:/10685868\n>\n> The issue is fairly trivial to work around, we just need to break the the\n> if/else chain into two. 
Probably deserves a bigger refactoring, but\n> that's\n> for another day.\n\n\n>\n> I haven't yet looked into a) uuid b) tcl. I think those are the only other\n> missing dependencies.\n>\n\nWe really need to replace ossp-uuid on Windows anyway. It's basically\nabandoned these days. I haven't looked to see if the alternatives work on\nWindows now.\n\n\n>\n>\n> > > Many of them do include at least cmake files on windows if you build\n> them\n> > > though?\n>\n> > The only one that does is libxml2 as far as I can see. And that doesn't\n> > seem to work even if I use --cmake-prefix-path= as you suggested:\n>\n> Ugh, that's because they used a different name for their cmake dependency\n> than\n> for pkg-config. We can add the alternative spelling to meson.build.\n>\n>\n>\n> >\n> > > > And that's why we really need to be able to locate headers and\n> libraries\n> > > > easily by passing paths to meson, as we can't rely on pkgconfig,\n> cmake,\n> > > or\n> > > > things being in some standard directory on Windows.\n> > >\n> > > Except that that often causes hard to diagnose breakages, because that\n> > > doesn't allow including the necessary compiler/linker flags [2]. It's\n> a\n> > > bad model, we shouldn't perpetuate it. If we want to forever make\n> windows\n> > > a complicated annoying stepchild, that's the way to go.\n> >\n> > That is a good point, though I suspect it wouldn't solve your second\n> > example of the Kerberos libraries, as you'll get both 32 and 64 bit libs\n> if\n> > you follow their standard process for building on Windows so you still\n> need\n> > to have code to pick the right ones.\n>\n> vcpkg for one does provide .pc files for kerberos.\n>\n\nYes - that's in the vcpkg repo. I suspect they're adding pc and cmake files\nfor a lot of things.\n\n\n> > We've either got to be extremely prescriptive in our docs, telling people\n> > precisely what they need to download for each dependency, or how to\n> build it\n> > themselves in the way that will work with PostgreSQL, or the build system\n> > needs to be flexible enough to handle different dependency variations, as\n> > the old VC++ build system was.\n>\n> I'm confused - the old build system wasn't flexible around this stuff *at\n> all*. Everyone had to patch it to get dependencies to work, unless you\n> chose\n> exactly the right source to download from - which was often not documented\n> or\n> outdated.\n>\n\nAs I noted above - as the \"owner\" of the official packages, I never did\ndespite using a variety of upstream sources.\n\n\n>\n> For example:\n> -\n> https://github.com/microsoft/vcpkg/blob/master/ports/libpq/windows/msbuild.patch\n\n\nThat one looks almost entirely related to making PostgreSQL itself fit into\nvcpkg's view of the world. It's changing the installation footprint, and\npulling some paths from their own variables. If they're changing our\ninstallation footprint, it's likely they're doing the same for other\npackages.\n\n\n>\n> -\n> https://github.com/conan-io/conan-center-index/blob/1b24f7c74994ec6573e322b7ae4111c10f620ffa/recipes/libpq/all/conanfile.py#L116-L160\n\n\nSame for that one. It's making many of those changes for non-Windows\nplatforms as well.\n\n\n>\n> -\n> https://github.com/conda-forge/postgresql-feedstock/tree/main/recipe/patches\n\n\nThat one is interesting. 
It fixes the same zlib and OpenSSL issues I\nmentioned being the one fix I did myself, albeit by renaming libraries\noriginally, and later by actually following the upstream build instructions\ncorrectly.\n\nIt also makes essentially the same fix for krb5 that I hacked into my\nGithub Action, but similarly that isn't actually needed at all if you\nfollow the documented krb5 build process, which produces 32 and 64 bit\nbinaries.\n\n
Additionally, it also fixes a GSSAPI related bug which I reported a week or\ntwo back here and for which there is a patch waiting to be committed, and\nreplaces some setenv calls with _putenv_.\n\nThere are a couple more patches in there, but they're Linux related from a\nquick glance.\n\n\nIn short, I don't really see anything in those examples that are general\nissues (aside from the bugs of course).\n\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\n", "msg_date": "Fri, 21 Jun 2024 12:20:49 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi\n\nOn Fri, 21 Jun 2024 at 12:20, Dave Page <[email protected]> wrote:\n\n>\n> We/I *could* add cmake/pc file generation to that tool, which would make\n> things work nicely with PostgreSQL 17 of course.\n>\n\nFor giggles, I took a crack at doing that, manually creating .pc files for\neverything I've been working with so far. It seems to work as expected,\nexcept that unlike everything else libintl is detected entirely based on\nwhether the header and library can be found. 
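(For reference, each of those hand-written .pc files is only a few lines long. As an\nillustrative sketch - not a verbatim copy, since the prefix, Version and Description\nvalues are simply whatever my local build tree happens to contain - libzstd.pc ends up\nlooking something like this:\n\n
# hand-written pkg-config file for a zstd built under C:/build64 (prefix and version here are assumptions)\nprefix=C:/build64\nexec_prefix=${prefix}\nlibdir=${prefix}/lib\nincludedir=${prefix}/include\n\nName: zstd\nDescription: zstd compression library\nVersion: 1.5.6\nLibs: -L${libdir} -lzstd\nCflags: -I${includedir}\n\n
That should be enough for meson's dependency() lookup for zstd to succeed once\n--pkg-config-path points at C:\\build64\\lib\\pkgconfig.)\n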
I had to pass extra lib and\ninclude dirs:\n\nmeson setup --wipe --pkg-config-path=C:\\build64\\lib\\pkgconfig\n-Dextra_include_dirs=C:\\build64\\include -Dextra_lib_dirs=C:\\build64\\lib\n-Duuid=ossp build-auto\n\nI'm assuming that's an oversight, given your previous comments?\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\n", "msg_date": "Fri, 21 Jun 2024 15:36:56 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "On Fri, Jun 21, 2024 at 7:21 AM Dave Page <[email protected]> wrote:\n> My assumption all along was that Meson would replace autoconf etc. before anything happened with MSVC, precisely because that's the type of environment all the Postgres devs work in primarily. Instead we seem to have taken what I think is a flawed approach of entirely replacing the build system on the platform none of the devs use, whilst leaving the new system as an experimental option on the platforms it will have had orders of magnitude more testing.\n>\n> What makes it worse, is that I don't believe anyone was warned about such a drastic change. Packagers were told about the git archive changes to the tarball generation, but that's it (and we've said before, we can't expect packagers to follow all the activity on -hackers).\n\n
I agree that we should have given a heads up to pgsql-packagers. The\nfact that that wasn't done is a mistake, and inconsiderate. At the\nsame time, I don't quite know who should have done that exactly when.\nNote that, while I believe Andres is on pgsql-packagers, many\ncommitters are not, and we have no written guidelines anywhere for\nwhat kinds of changes require notifying pgsql-packagers.\n\nPrevious threads on this issue:\n\nhttps://postgr.es/m/[email protected]\nhttp://postgr.es/m/[email protected]\n\n
Note that in the second of these threads, which contemplated removing\nMSVC for v16, I actually pointed out that if we went that way, we\nneeded to notify pgsql-packagers ASAP. But, since we didn't do that,\nno email was ever sent to pgsql-packagers about this, or at least not\nthat I can find. Still, MSVC support was removed more than six months\nago, so even if somebody didn't see any of the pgsql-hackers\ndiscussion about this, there's been a fair amount of time (and a beta)\nfor someone to notice that their build process isn't working any more.\nIt seems a bit weird to me to start complaining about this now.\n\nAs a practical matter, I don't think MSVC is coming back. 
The\nbuildfarm was already changed over to use meson, and it would be\npretty disruptive to try to re-add buildfarm coverage for a\nresurrected MSVC on the eve of beta2. I think we should focus on\nimproving whatever isn't quite right in meson -- plenty of other\npeople have also complained about various things there, me included --\nrather than trying to undo over a year's worth of work by lots of\npeople to get things on Windows switched over to MSVC.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Jun 2024 11:15:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "On Fri, 21 Jun 2024 at 16:15, Robert Haas <[email protected]> wrote:\n\n> On Fri, Jun 21, 2024 at 7:21 AM Dave Page <[email protected]> wrote:\n> > My assumption all along was that Meson would replace autoconf etc.\n> before anything happened with MSVC, precisely because that's the type of\n> environment all the Postgres devs work in primarily. Instead we seem to\n> have taken what I think is a flawed approach of entirely replacing the\n> build system on the platform none of the devs use, whilst leaving the new\n> system as an experimental option on the platforms it will have had orders\n> of magnitude more testing.\n> >\n> > What makes it worse, is that I don't believe anyone was warned about\n> such a drastic change. Packagers were told about the git archive changes to\n> the tarball generation, but that's it (and we've said before, we can't\n> expect packagers to follow all the activity on -hackers).\n>\n> I agree that we should have given a heads up to pgsql-packagers. The\n> fact that that wasn't done is a mistake, and inconsiderate. At the\n> same time, I don't quite know who should have done that exactly when.\n> Note that, while I believe Andres is on pgsql-packagers, many\n> committers are not, and we have no written guidelines anywhere for\n> what kinds of changes require notifying pgsql-packagers.\n>\n> Previous threads on this issue:\n>\n> https://postgr.es/m/[email protected]\n> http://postgr.es/m/[email protected]\n>\n> Note that in the second of these threads, which contemplated removing\n> MSVC for v16, I actually pointed out that if we went that way, we\n> needed to notify pgsql-packagers ASAP. But, since we didn't do that,\n> no email was ever sent to pgsql-packagers about this, or at least not\n> that I can find.\n\n\nThat's what I was saying should have been done. I don't think there was a\nrequirement on Andres to tell them that they could use Meson instead.\n\n\n> Still, MSVC support was removed more than six months\n> ago, so even if somebody didn't see any of the pgsql-hackers\n> discussion about this, there's been a fair amount of time (and a beta)\n> for someone to notice that their build process isn't working any more.\n> It seems a bit weird to me to start complaining about this now.\n>\n\nPeople noticed when they started prepping for beta1. Then there was a mad\nrush to get things working under Meson in any way possible.\n\n\n> As a practical matter, I don't think MSVC is coming back. The\n> buildfarm was already changed over to use meson, and it would be\n> pretty disruptive to try to re-add buildfarm coverage for a\n> resurrected MSVC on the eve of beta2. 
I think we should focus on\n> improving whatever isn't quite right in meson -- plenty of other\n> people have also complained about various things there, me included --\n> rather than trying to undo over a year's worth of work by lots of\n> people to get things on Windows switched over to MSVC.\n>\n\nThe buildfarm hasn't switched over - it had support added for Meson. If it\nhad been switched, then the older back branches would have gone red.\n\nAnyway, that's immaterial - I know the old code isn't coming back now. My\nmotivation for this thread is to get Meson to a usable state on Windows,\nthat doesn't require hacking stuff around for the casual builder moving\nforwards - and at present, it requires *significantly* more hacking around\nthan it has in many years.\n\nThe design goals Andres spoke about would clearly be a technical\nimprovement to PostgreSQL, however as we're finding, they rely on the\nupstream dependencies being built with pkgconfig or cmake files which\neither doesn't happen at present, or only happens if you happen to build in\na certain way, or download from some source that has added them. I'm not\nsure how to fix that without re-introducing the old hacks in the build\nsystem, or extending my side project to add .pc files to all the\ndependencies it builds. I will almost certainly do that, as it'll give\nfolks a single place where they can download everything they need, and\nprovide a reference on how everything can be built if they want to do it\nthemselves, but on the other hand, it's far from an ideal solution and I'd\nmuch prefer if I didn't need to do that at all.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Fri, 21 Jun 2024 at 16:15, Robert Haas <[email protected]> wrote:On Fri, Jun 21, 2024 at 7:21 AM Dave Page <[email protected]> wrote:\n> My assumption all along was that Meson would replace autoconf etc. before anything happened with MSVC, precisely because that's the type of environment all the Postgres devs work in primarily. Instead we seem to have taken what I think is a flawed approach of entirely replacing the build system on the platform none of the devs use, whilst leaving the new system as an experimental option on the platforms it will have had orders of magnitude more testing.\n>\n> What makes it worse, is that I don't believe anyone was warned about such a drastic change. Packagers were told about the git archive changes to the tarball generation, but that's it (and we've said before, we can't expect packagers to follow all the activity on -hackers).\n\nI agree that we should have given a heads up to pgsql-packagers. The\nfact that that wasn't done is a mistake, and inconsiderate. At the\nsame time, I don't quite know who should have done that exactly when.\nNote that, while I believe Andres is on pgsql-packagers, many\ncommitters are not, and we have no written guidelines anywhere for\nwhat kinds of changes require notifying pgsql-packagers.\n\nPrevious threads on this issue:\n\nhttps://postgr.es/m/[email protected]\nhttp://postgr.es/m/[email protected]\n\nNote that in the second of these threads, which contemplated removing\nMSVC for v16, I actually pointed out that if we went that way, we\nneeded to notify pgsql-packagers ASAP. But, since we didn't do that,\nno email was ever sent to pgsql-packagers about this, or at least not\nthat I can find. That's what I was saying should have been done. 
I don't think there was a requirement on Andres to tell them that they could use Meson instead. Still, MSVC support was removed more than six months\nago, so even if somebody didn't see any of the pgsql-hackers\ndiscussion about this, there's been a fair amount of time (and a beta)\nfor someone to notice that their build process isn't working any more.\nIt seems a bit weird to me to start complaining about this now.People noticed when they started prepping for beta1. Then there was a mad rush to get things working under Meson in any way possible. \nAs a practical matter, I don't think MSVC is coming back. The\nbuildfarm was already changed over to use meson, and it would be\npretty disruptive to try to re-add buildfarm coverage for a\nresurrected MSVC on the eve of beta2. I think we should focus on\nimproving whatever isn't quite right in meson -- plenty of other\npeople have also complained about various things there, me included --\nrather than trying to undo over a year's worth of work by lots of\npeople to get things on Windows switched over to MSVC.The buildfarm hasn't switched over - it had support added for Meson. If it had been switched, then the older back branches would have gone red.Anyway, that's immaterial - I know the old code isn't coming back now. My motivation for this thread is to get Meson to a usable state on Windows, that doesn't require hacking stuff around for the casual builder moving forwards - and at present, it requires *significantly* more hacking around than it has in many years.The design goals Andres spoke about would clearly be a technical improvement to PostgreSQL, however as we're finding, they rely on the upstream dependencies being built with pkgconfig or cmake files which either doesn't happen at present, or only happens if you happen to build in a certain way, or download from some source that has added them. I'm not sure how to fix that without re-introducing the old hacks in the build system, or extending my side project to add .pc files to all the dependencies it builds. I will almost certainly do that, as it'll give folks a single place where they can download everything they need, and provide a reference on how everything can be built if they want to do it themselves, but on the other hand, it's far from an ideal solution and I'd much prefer if I didn't need to do that at all. -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Fri, 21 Jun 2024 16:46:06 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "On Fri Jun 21, 2024 at 10:46 AM CDT, Dave Page wrote:\n> On Fri, 21 Jun 2024 at 16:15, Robert Haas <[email protected]> wrote:\n> > As a practical matter, I don't think MSVC is coming back. The\n> > buildfarm was already changed over to use meson, and it would be\n> > pretty disruptive to try to re-add buildfarm coverage for a\n> > resurrected MSVC on the eve of beta2. I think we should focus on\n> > improving whatever isn't quite right in meson -- plenty of other\n> > people have also complained about various things there, me included --\n> > rather than trying to undo over a year's worth of work by lots of\n> > people to get things on Windows switched over to MSVC.\n> >\n>\n> The buildfarm hasn't switched over - it had support added for Meson. 
If it\n> had been switched, then the older back branches would have gone red.\n>\n> Anyway, that's immaterial - I know the old code isn't coming back now. My\n> motivation for this thread is to get Meson to a usable state on Windows,\n> that doesn't require hacking stuff around for the casual builder moving\n> forwards - and at present, it requires *significantly* more hacking around\n> than it has in many years.\n>\n> The design goals Andres spoke about would clearly be a technical\n> improvement to PostgreSQL, however as we're finding, they rely on the\n> upstream dependencies being built with pkgconfig or cmake files which\n> either doesn't happen at present, or only happens if you happen to build in\n> a certain way, or download from some source that has added them. I'm not\n> sure how to fix that without re-introducing the old hacks in the build\n> system, or extending my side project to add .pc files to all the\n> dependencies it builds. I will almost certainly do that, as it'll give\n> folks a single place where they can download everything they need, and\n> provide a reference on how everything can be built if they want to do it\n> themselves, but on the other hand, it's far from an ideal solution and I'd\n> much prefer if I didn't need to do that at all.\n\nHey Dave,\n\nI'm a maintainer for Meson, and am happy to help you in any way that \nI reasonably can.\n\nLet's start with the state of Windows support in Meson. If I were to \nrank Meson support for platforms, I would do something like this:\n\n- Linux\n- BSDs\n- Solaris/IllumOS\n- ...\n- Apple\n- Windows\n\nAs you can see Windows is the bottom of the totem pole. We don't have \nWindows people coming along to contribute very often for whatever \nreason. Thus admittedly, Windows support can be very lackluster at \ntimes.\n\nMeson is not a project which sees a lot of funding. (Do any build \ntools?) The projects that support Meson in any way are Mesa and \nGStreamer, which don't have a lot of incentive to do anything with \nWindows, generally.\n\nI'm not even sure any of the regular contributors to Meson have \nWindows development machines. I surely don't have access to a Windows \nmachine.\n\nAll that being said, I would like to help you solve your Windows \ndependencies issue, or at least mitigate them. I think it is probably \nbest to just look at one dependency at a time. Here is how lz4 is picked \nup in the Postgres Meson build:\n\n> lz4opt = get_option('lz4')\n> if not lz4opt.disabled()\n> lz4 = dependency('liblz4', required: lz4opt)\n> \n> if lz4.found()\n> cdata.set('USE_LZ4', 1)\n> cdata.set('HAVE_LIBLZ4', 1)\n> endif\n> \n> else\n> lz4 = not_found_dep\n> endif\n\nAs you are well aware, dependency() looks largely at pkgconfig and cmake \nto find the dependencies. In your case, that is obviously not working. \n\nI think there are two ways to solve your problem. A first solution would \nlook like this:\n\n> lz4opt = get_option('lz4')\n> if not lz4opt.disabled()\n> lz4 = dependency('liblz4', required: false)\n> if not lz4.found()\n> lz4 = cc.find_library('lz4', required: lz4opt, dirs: extra_lib_dirs)\n> endif\n> \n> if lz4.found()\n> cdata.set('USE_LZ4', 1)\n> cdata.set('HAVE_LIBLZ4', 1)\n> end\n> else\n> lz4 = not_found_dep\n> endif\n\nAnother solution that could work alongside the previous suggestion is to \nuse Meson subprojects[0] for managing Postgres dependencies. 
I don't \nknow if we would want this in the Postgres repo or a patch that \ndownstream packagers would need to apply, but essentially, add the wrap \nfile:\n\n> [wrap-file]\n> directory = lz4-1.9.4\n> source_url = https://github.com/lz4/lz4/archive/v1.9.4.tar.gz\n> source_filename = lz4-1.9.4.tgz\n> source_hash = 0b0e3aa07c8c063ddf40b082bdf7e37a1562bda40a0ff5272957f3e987e0e54b\n> patch_filename = lz4_1.9.4-2_patch.zip\n> patch_url = https://wrapdb.mesonbuild.com/v2/lz4_1.9.4-2/get_patch\n> patch_hash = 4f33456cce986167d23faf5d28a128e773746c10789950475d2155a7914630fb\n> wrapdb_version = 1.9.4-2\n> \n> [provide]\n> liblz4 = liblz4_dep\n\ninto subprojects/lz4.wrap, and Meson should be able to automagically \npick up the dependency. Do this for all the projects that Postgres \ndepends on, and you'll have an entire build managed by Meson. Note that \nMeson subprojects don't have to use Meson themselves. They can also use \nCMake[1] or Autotools[2], but your results may vary.\n\nHappy to hear your thoughts. I think if our goal is to enable more \npeople to work on Postgres, we should probably add subproject wraps to \nthe source tree, but we also need to be forgiving like in the Meson DSL \nsnippet above.\n\nLet me know your thoughts!\n\n[0]: https://mesonbuild.com/Wrap-dependency-system-manual.html\n[1]: https://github.com/hse-project/hse/blob/6d5207f88044a3bd9b3539260074395317e276d5/meson.build#L239-L275\n[2]: https://github.com/hse-project/hse/blob/6d5207f88044a3bd9b3539260074395317e276d5/subprojects/packagefiles/libbsd/meson.build\n\n-- \nTristan Partin\nhttps://tristan.partin.io\n\n\n", "msg_date": "Fri, 21 Jun 2024 12:16:06 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\nOn 2024-06-21 12:20:49 +0100, Dave Page wrote:\n> On Thu, 20 Jun 2024 at 21:58, Andres Freund <[email protected]> wrote:\n> That is precisely what https://github.com/dpage/winpgbuild/ was intended\n> for - and it works well for PG <= 16.\n\nIf we develop it into that - I'd be happy. I mostly want to be able to do\nautomated tests on windows with all reasonable dependencies. And occasionally\ndo some interactive investigation, without a lot of setup time.\n\nOne small advantage of something outside of PG is that it's easy to add\nadditional dependencies when developing additional features. Except of course\nall the windows packaging systems seem ... suboptimal.\n\n\n> I don't think it's unreasonable to not support static linking, but I take\n> your point.\n\nSeparately from this thread: ISTM that on windows it'd be quite beneficial to\nlink a few things statically, given how annoying dealing with dlls can be?\nThere's also the perf issue addressed further down.\n\n\n> My assumption all along was that Meson would replace autoconf etc. before\n> anything happened with MSVC, precisely because that's the type of\n> environment all the Postgres devs work in primarily. Instead we seem to\n> have taken what I think is a flawed approach of entirely replacing the\n> build system on the platform none of the devs use, whilst leaving the new\n> system as an experimental option on the platforms it will have had orders\n> of magnitude more testing.\n\nThe old system was a major bottleneck. For one, there was no way to run all\ntests. And even the tests that one could run, would run serially, leading to\nexceedingly long tests times. 
While that could partially be addressed by\nhaving both buildsystems in parallel, the old one would frequently break in a\nway that one couldn't reproduce on other systems. And resource wise it wasn't\nfeasible to test both old and new system for cfbot/CI.\n\n\n> What makes it worse, is that I don't believe anyone was warned about such a\n> drastic change. Packagers were told about the git archive changes to the\n> tarball generation, but that's it (and we've said before, we can't expect\n> packagers to follow all the activity on -hackers).\n\nI'm sure we could have dealt better with it. There certainly was some lack of\nof cohesion because I wasn't able to do drive the src/tools/msvc removal and\nMichael took up the slack.\n\nBut I also don't think it's really fair to say that there was no heads\nup. Several people at EDB participated in the removal and buildfarm\nmaintainers at EDB were repeatedly pinged, to move their buildfarm animals\nover.\n\nAnd of course the meson stuff came out a year earlier and it wouldn't have\nbeen exactly unreasonable\n\n\n> Well as I noted, that is the point of my Github repo above. You can just go\n> download the binaries from the all_deps action - you can even download the\n> config.pl and buildenv.pl that will work with those dependencies (those\n> files are artefacts of the postgresql action).\n\nFor the purpose of CI we'd really need debug builds of most of the libraries -\nthere are compat issues when mixing debug/non-debug runtimes (we've hit them\noccasionally) and not using the debug runtime hides a lot of issues. Of course\nalso not optimal for CI / BF usage.\n\n\n\n> > - Linking the backend dynamically against lz4, icu, ssl, xml, xslt, zstd,\n> > zlib\n> > slows down the tests noticeably (~20%). So I ended up building those\n> > statically.\n\n> Curious. I wonder if that translates into a general 20% performance hit.\n> Presumably it would for anything that looks similar to whatever test/tests\n> are affected.\n\nFWIW, dynamic linking has a noticeable overhead on other platforms too. A\nnon-dependencies-enabled postgres can do about 2x the connections-per-second\nthan a fully kitted out postgres can (basically due to more memory mapping\nmetadata being copied). But on windows the overhead is larger because so much\nmore happens for every new connections, including loading all dlls from\nscratch.\n\nI suspect linking a few libraries statically would be quite worth it on\nwindows. On other platforms it'd be quite inadvisable to statically link\nlibraries, due to security updates, but for stuff like the EDB windows\ninstaller dynamic linking doesn't really help with that afaict?\n\n\n\n> > I ran into some issue with using a static libintl. I made it work, but\n> > for\n> > now reverted to a dynamic one.\n> >\n> >\n> > - Enabling nls slows down the tests by about 15%, somewhat painful. This is\n> > when statically linking, it's a bit worse when linked dynamically :(.\n> >\n> \n> That one I can imagine being in psql, so maybe not a big issue for most\n> real world use cases.\n\nI think it's both psql and backend. I think partially it's just due the\nadditional libraries being linked in everywhere (intl and iconv) and partially\nit's the additinal indirection that happens in a bunch more places. We have a\nbunch of places where we do gettext lookups but never use the result unless\nyou use DEBUG3 or such, and that not free. 
It also triggers additional\nfilesystem lookups (for the translations) and that's not cheap on windows\neither.\n\n> >\n> > I haven't yet looked into a) uuid b) tcl. I think those are the only other\n> > missing dependencies.\n> >\n> \n> We really need to replace ossp-uuid on Windows anyway. It's basically\n> abandoned these days. I haven't looked to see if the alternatives work on\n> Windows now.\n\nYea, once we have *something* for ossp-uuid, I think we should remove support\nfor ossp-uuid. We don't do anyone favors by inducing them to install it.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 22 Jun 2024 09:32:25 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\nOn 2024-06-21 15:36:56 +0100, Dave Page wrote:\n> For giggles, I took a crack at doing that, manually creating .pc files for\n> everything I've been working with so far.\n\nCool!\n\n\n> It seems to work as expected, except that unlike everything else libintl is\n> detected entirely based on whether the header and library can be found. I\n> had to pass extra lib and include dirs:\n\nYea, right now libintl isn't using dependency detection because I didn't see\nany platform where it's distributed with a .pc for or such. It'd be just a\nline or two to make it use one...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 22 Jun 2024 09:35:00 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Andres Freund:\n> FWIW, dynamic linking has a noticeable overhead on other platforms too. A\n> non-dependencies-enabled postgres can do about 2x the connections-per-second\n> than a fully kitted out postgres can (basically due to more memory mapping\n> metadata being copied). But on windows the overhead is larger because so much\n> more happens for every new connections, including loading all dlls from\n> scratch.\n> \n> I suspect linking a few libraries statically would be quite worth it on\n> windows. On other platforms it'd be quite inadvisable to statically link\n> libraries, due to security updates, [...]\n\nThat's not necessarily true. The nix package manager and thus NixOS \ntrack all dependencies for a piece of software. If any of the \ndependencies are updated, all dependents are rebuilt, too. So the \nsecurity concern doesn't apply here. There is a \"static overlay\", which \nbuilds everything linked fully statically. Unfortunately, PostgreSQL \ndoesn't build in that, so far.\n\nLately, I have been looking into building at least libpq in that static \noverlay, via Meson. There are two related config options:\n-Ddefault_library=shared|static|both\n-Dprefer_static\n\nThe first controls which libraries (libpq, ...) to build ourselves. The \nsecond controls linking, IIUC also against external dependencies.\n\nMaybe it would be a first step to support -Dprefer_static?\n\nThen this could be set on Windows.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sat, 22 Jun 2024 19:32:01 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi, \n\nOn June 22, 2024 7:32:01 PM GMT+02:00, [email protected] wrote:\n>Andres Freund:\n>> FWIW, dynamic linking has a noticeable overhead on other platforms too. 
A\n>> non-dependencies-enabled postgres can do about 2x the connections-per-second\n>> than a fully kitted out postgres can (basically due to more memory mapping\n>> metadata being copied). But on windows the overhead is larger because so much\n>> more happens for every new connections, including loading all dlls from\n>> scratch.\n>> \n>> I suspect linking a few libraries statically would be quite worth it on\n>> windows. On other platforms it'd be quite inadvisable to statically link\n>> libraries, due to security updates, [...]\n>That's not necessarily true. The nix package manager and thus NixOS track all dependencies for a piece of software. If any of the dependencies are updated, all dependents are rebuilt, too. So the security concern doesn't apply here. There is a \"static overlay\", which builds everything linked fully statically. \n\nRight. There's definitely some scenario where it's ok, I was simplifying a bit.\n\n> Unfortunately, PostgreSQL doesn't build in that, so far.\n\nI've built mostly statically linked pg without much of a problem, what trouble did you encounter? Think there were some issues with linking Kerberos and openldap statically, but not on postgres' side.\n\nBuilding the postgres backend without support for dynamic linking doesn't make sense though. Extensions are just stop ingrained part of pg.\n\n\n>Lately, I have been looking into building at least libpq in that static overlay, via Meson. There are two related config options:\n>-Ddefault_library=shared|static|both\n>-Dprefer_static\n>\n>The first controls which libraries (libpq, ...) to build ourselves. The second controls linking, IIUC also against external dependencies.\n\nPg by default builds a static libpq on nearly all platforms (not aix I think and maybe not Windows when building with autoconf, not sure about the old msvc system) today?\n\n\n>Maybe it would be a first step to support -Dprefer_static?\n\nThat should work for nearly all dependencies today. Except for libintl, I think. I found that there are a lot of buglets in static link dependencies of various libraries though. \n\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Sat, 22 Jun 2024 23:17:43 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "On 2024-06-21 Fr 11:15 AM, Robert Haas wrote:\n> As a practical matter, I don't think MSVC is coming back. The\n> buildfarm was already changed over to use meson, and it would be\n> pretty disruptive to try to re-add buildfarm coverage for a\n> resurrected MSVC on the eve of beta2. I think we should focus on\n> improving whatever isn't quite right in meson -- plenty of other\n> people have also complained about various things there, me included --\n> rather than trying to undo over a year's worth of work by lots of\n> people to get things on Windows switched over to MSVC.\n>\n\nAs a practical matter, whether the buildfarm client uses meson or not is \na matter of one line in the client's config file. Support for the old \nsystem is still there, of course, as it's required on older branches. So \nthe impact would be pretty minimal if we did decide to re-enable the old \nbuild system. 
There are only two MSVC animals building master right now: \ndrongo (run by me) and hammerkop (run by our friends at SRA OSS).\n\nI am not necessarily advocating it, just setting the record straight \nabout how  easy it would be to switch the buildfarm.\n\n\ncheers\n\n\nandrew\n\n\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-06-21 Fr 11:15 AM, Robert Haas\n wrote:\n\n\n\nAs a practical matter, I don't think MSVC is coming back. The\nbuildfarm was already changed over to use meson, and it would be\npretty disruptive to try to re-add buildfarm coverage for a\nresurrected MSVC on the eve of beta2. I think we should focus on\nimproving whatever isn't quite right in meson -- plenty of other\npeople have also complained about various things there, me included --\nrather than trying to undo over a year's worth of work by lots of\npeople to get things on Windows switched over to MSVC.\n\n\n\n\n\nAs a practical matter, whether the buildfarm client uses meson or\n not is a matter of one line in the client's config file. Support\n for the old system is still there, of course, as it's required on\n older branches. So the impact would be pretty minimal if we did\n decide to re-enable the old build system. There are only two MSVC\n animals building master right now: drongo (run by me) and\n hammerkop (run by our friends at SRA OSS).\nI am not necessarily advocating it, just setting the record\n straight about how  easy it would be to switch the buildfarm.\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 23 Jun 2024 06:18:44 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi Tristan,\n\nOn Fri, 21 Jun 2024 at 18:16, Tristan Partin <[email protected]> wrote:\n\n>\n> Hey Dave,\n>\n> I'm a maintainer for Meson, and am happy to help you in any way that\n> I reasonably can.\n>\n\nThank you!\n\n>\n> Let's start with the state of Windows support in Meson. If I were to\n> rank Meson support for platforms, I would do something like this:\n>\n> - Linux\n> - BSDs\n> - Solaris/IllumOS\n> - ...\n> - Apple\n> - Windows\n>\n> As you can see Windows is the bottom of the totem pole. We don't have\n> Windows people coming along to contribute very often for whatever\n> reason. Thus admittedly, Windows support can be very lackluster at\n> times.\n>\n> Meson is not a project which sees a lot of funding. (Do any build\n> tools?) The projects that support Meson in any way are Mesa and\n> GStreamer, which don't have a lot of incentive to do anything with\n> Windows, generally.\n>\n> I'm not even sure any of the regular contributors to Meson have\n> Windows development machines. I surely don't have access to a Windows\n> machine.\n>\n\nTo be very clear, my comments - in particular the subject line of this\nthread - are not referring to Meson itself, rather our use of it on\nWindows. I've been quite impressed with Meson in general, and am coming to\nlike it a lot.\n\n\n>\n> All that being said, I would like to help you solve your Windows\n> dependencies issue, or at least mitigate them. I think it is probably\n> best to just look at one dependency at a time. 
Here is how lz4 is picked\n> up in the Postgres Meson build:\n>\n> > lz4opt = get_option('lz4')\n> > if not lz4opt.disabled()\n> > lz4 = dependency('liblz4', required: lz4opt)\n> >\n> > if lz4.found()\n> > cdata.set('USE_LZ4', 1)\n> > cdata.set('HAVE_LIBLZ4', 1)\n> > endif\n> >\n> > else\n> > lz4 = not_found_dep\n> > endif\n>\n> As you are well aware, dependency() looks largely at pkgconfig and cmake\n> to find the dependencies. In your case, that is obviously not working.\n>\n> I think there are two ways to solve your problem. A first solution would\n> look like this:\n>\n> > lz4opt = get_option('lz4')\n> > if not lz4opt.disabled()\n> > lz4 = dependency('liblz4', required: false)\n> > if not lz4.found()\n> > lz4 = cc.find_library('lz4', required: lz4opt, dirs: extra_lib_dirs)\n> > endif\n> >\n> > if lz4.found()\n> > cdata.set('USE_LZ4', 1)\n> > cdata.set('HAVE_LIBLZ4', 1)\n> > end\n> > else\n> > lz4 = not_found_dep\n> > endif\n>\n\nYes, that's the pattern I think we should generally be using:\n\n- It supports the design goals, allowing for configurations we don't know\nabout to be communicated through pkgconfig or cmake files.\n- It provides a fallback method to detect the dependencies as we do in the\nold MSVC build system, which should work with most dependencies built with\ntheir \"standard\" configuration on Windows.\n\nTo address Andres' concerns around mis-detection of dependencies, or other\noddities such as required compiler flags not being included, I would\nsuggest that a) that's happened very rarely, if ever, in the past, and b)\nwe can always spit out an obvious warning if we've not been able to use\ncmake or pkgconfig for any particular dependencies.\n\n\n>\n> Another solution that could work alongside the previous suggestion is to\n> use Meson subprojects[0] for managing Postgres dependencies. I don't\n> know if we would want this in the Postgres repo or a patch that\n> downstream packagers would need to apply, but essentially, add the wrap\n> file:\n>\n> > [wrap-file]\n> > directory = lz4-1.9.4\n> > source_url = https://github.com/lz4/lz4/archive/v1.9.4.tar.gz\n> > source_filename = lz4-1.9.4.tgz\n> > source_hash =\n> 0b0e3aa07c8c063ddf40b082bdf7e37a1562bda40a0ff5272957f3e987e0e54b\n> > patch_filename = lz4_1.9.4-2_patch.zip\n> > patch_url = https://wrapdb.mesonbuild.com/v2/lz4_1.9.4-2/get_patch\n> > patch_hash =\n> 4f33456cce986167d23faf5d28a128e773746c10789950475d2155a7914630fb\n> > wrapdb_version = 1.9.4-2\n> >\n> > [provide]\n> > liblz4 = liblz4_dep\n>\n> into subprojects/lz4.wrap, and Meson should be able to automagically\n> pick up the dependency. Do this for all the projects that Postgres\n> depends on, and you'll have an entire build managed by Meson. Note that\n> Meson subprojects don't have to use Meson themselves. They can also use\n> CMake[1] or Autotools[2], but your results may vary.\n>\n\nThat's certainly interesting functionality. I'm not sure we'd want to try\nto use it here, simply because building all of PostgreSQL's dependencies on\nWindows requires multiple different toolchains and environments and is not\ntrivial to setup. That's largely why I started working on\nhttps://github.com/dpage/winpgbuild.\n\nThanks!\n\n\n>\n> Happy to hear your thoughts. 
I think if our goal is to enable more\n> people to work on Postgres, we should probably add subproject wraps to\n> the source tree, but we also need to be forgiving like in the Meson DSL\n> snippet above.\n>\n> Let me know your thoughts!\n>\n> [0]: https://mesonbuild.com/Wrap-dependency-system-manual.html\n> [1]:\n> https://github.com/hse-project/hse/blob/6d5207f88044a3bd9b3539260074395317e276d5/meson.build#L239-L275\n> [2]:\n> https://github.com/hse-project/hse/blob/6d5207f88044a3bd9b3539260074395317e276d5/subprojects/packagefiles/libbsd/meson.build\n>\n> --\n> Tristan Partin\n> https://tristan.partin.io\n>\n\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHi Tristan,On Fri, 21 Jun 2024 at 18:16, Tristan Partin <[email protected]> wrote:\n\nHey Dave,\n\nI'm a maintainer for Meson, and am happy to help you in any way that \nI reasonably can.Thank you! \n\nLet's start with the state of Windows support in Meson. If I were to \nrank Meson support for platforms, I would do something like this:\n\n- Linux\n- BSDs\n- Solaris/IllumOS\n- ...\n- Apple\n- Windows\n\nAs you can see Windows is the bottom of the totem pole. We don't have \nWindows people coming along to contribute very often for whatever \nreason. Thus admittedly, Windows support can be very lackluster at \ntimes.\n\nMeson is not a project which sees a lot of funding. (Do any build \ntools?) The projects that support Meson in any way are Mesa and \nGStreamer, which don't have a lot of incentive to do anything with \nWindows, generally.\n\nI'm not even sure any of the regular contributors to Meson have \nWindows development machines. I surely don't have access to a Windows \nmachine.To be very clear, my comments - in particular the subject line of this thread - are not referring to Meson itself, rather our use of it on Windows. I've been quite impressed with Meson in general, and am coming to like it a lot. \n\nAll that being said, I would like to help you solve your Windows \ndependencies issue, or at least mitigate them. I think it is probably \nbest to just look at one dependency at a time. Here is how lz4 is picked \nup in the Postgres Meson build:\n\n> lz4opt = get_option('lz4')\n> if not lz4opt.disabled()\n>   lz4 = dependency('liblz4', required: lz4opt)\n> \n>   if lz4.found()\n>     cdata.set('USE_LZ4', 1)\n>     cdata.set('HAVE_LIBLZ4', 1)\n>   endif\n> \n> else\n>   lz4 = not_found_dep\n> endif\n\nAs you are well aware, dependency() looks largely at pkgconfig and cmake \nto find the dependencies. In your case, that is obviously not working. \n\nI think there are two ways to solve your problem. 
A first solution would \nlook like this:\n\n> lz4opt = get_option('lz4')\n> if not lz4opt.disabled()\n>   lz4 = dependency('liblz4', required: false)\n>   if not lz4.found()\n>     lz4 = cc.find_library('lz4', required: lz4opt, dirs: extra_lib_dirs)\n>   endif\n> \n>   if lz4.found()\n>     cdata.set('USE_LZ4', 1)\n>     cdata.set('HAVE_LIBLZ4', 1)\n>   end\n> else\n>   lz4 = not_found_dep\n> endifYes, that's the pattern I think we should generally be using:- It supports the design goals, allowing for configurations we don't know about to be communicated through pkgconfig or cmake files.- It provides a fallback method to detect the dependencies as we do in the old MSVC build system, which should work with most dependencies built with their \"standard\" configuration on Windows.To address Andres' concerns around mis-detection of dependencies, or other oddities such as required compiler flags not being included, I would suggest that a) that's happened very rarely, if ever, in the past, and b) we can always spit out an obvious warning if we've not been able to use cmake or pkgconfig for any particular dependencies. \n\nAnother solution that could work alongside the previous suggestion is to \nuse Meson subprojects[0] for managing Postgres dependencies. I don't \nknow if we would want this in the Postgres repo or a patch that \ndownstream packagers would need to apply, but essentially, add the wrap \nfile:\n\n> [wrap-file]\n> directory = lz4-1.9.4\n> source_url = https://github.com/lz4/lz4/archive/v1.9.4.tar.gz\n> source_filename = lz4-1.9.4.tgz\n> source_hash = 0b0e3aa07c8c063ddf40b082bdf7e37a1562bda40a0ff5272957f3e987e0e54b\n> patch_filename = lz4_1.9.4-2_patch.zip\n> patch_url = https://wrapdb.mesonbuild.com/v2/lz4_1.9.4-2/get_patch\n> patch_hash = 4f33456cce986167d23faf5d28a128e773746c10789950475d2155a7914630fb\n> wrapdb_version = 1.9.4-2\n> \n> [provide]\n> liblz4 = liblz4_dep\n\ninto subprojects/lz4.wrap, and Meson should be able to automagically \npick up the dependency. Do this for all the projects that Postgres \ndepends on, and you'll have an entire build managed by Meson. Note that \nMeson subprojects don't have to use Meson themselves. They can also use \nCMake[1] or Autotools[2], but your results may vary.That's certainly interesting functionality. I'm not sure we'd want to try to use it here, simply because building all of PostgreSQL's dependencies on Windows requires multiple different toolchains and environments and is not trivial to setup. That's largely why I started working on https://github.com/dpage/winpgbuild.Thanks! \n\nHappy to hear your thoughts. 
I think if our goal is to enable more \npeople to work on Postgres, we should probably add subproject wraps to \nthe source tree, but we also need to be forgiving like in the Meson DSL \nsnippet above.\n\nLet me know your thoughts!\n\n[0]: https://mesonbuild.com/Wrap-dependency-system-manual.html\n[1]: https://github.com/hse-project/hse/blob/6d5207f88044a3bd9b3539260074395317e276d5/meson.build#L239-L275\n[2]: https://github.com/hse-project/hse/blob/6d5207f88044a3bd9b3539260074395317e276d5/subprojects/packagefiles/libbsd/meson.build\n\n-- \nTristan Partin\nhttps://tristan.partin.io\n-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Mon, 24 Jun 2024 09:44:57 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi\n\nOn Sat, 22 Jun 2024 at 17:32, Andres Freund <[email protected]> wrote:\n\n> > I don't think it's unreasonable to not support static linking, but I take\n> > your point.\n>\n> Separately from this thread: ISTM that on windows it'd be quite beneficial\n> to\n> link a few things statically, given how annoying dealing with dlls can be?\n> There's also the perf issue addressed further down.\n>\n\nDealing with DLLs largely just boils down to copying them into the right\nplace when packaging. The perf issue is a much more compelling reason to\nlook at static linking imho.\n\n\n>\n>\n> > My assumption all along was that Meson would replace autoconf etc. before\n> > anything happened with MSVC, precisely because that's the type of\n> > environment all the Postgres devs work in primarily. Instead we seem to\n> > have taken what I think is a flawed approach of entirely replacing the\n> > build system on the platform none of the devs use, whilst leaving the new\n> > system as an experimental option on the platforms it will have had orders\n> > of magnitude more testing.\n>\n> The old system was a major bottleneck. For one, there was no way to run all\n> tests. And even the tests that one could run, would run serially, leading\n> to\n> exceedingly long tests times. While that could partially be addressed by\n> having both buildsystems in parallel, the old one would frequently break\n> in a\n> way that one couldn't reproduce on other systems. And resource wise it\n> wasn't\n> feasible to test both old and new system for cfbot/CI.\n>\n\nHmm, I've found that running the tests under Meson takes notably longer\nthan the old system - maybe 5 - 10x longer (\"meson test\" vs. \"vcregress\ncheck\"). I haven't yet put any effort into figuring out a cause for that\nyet.\n\n\n> FWIW, dynamic linking has a noticeable overhead on other platforms too. A\n> non-dependencies-enabled postgres can do about 2x the\n> connections-per-second\n> than a fully kitted out postgres can (basically due to more memory mapping\n> metadata being copied). But on windows the overhead is larger because so\n> much\n> more happens for every new connections, including loading all dlls from\n> scratch.\n>\n> I suspect linking a few libraries statically would be quite worth it on\n> windows. 
On other platforms it'd be quite inadvisable to statically link\n> libraries, due to security updates, but for stuff like the EDB windows\n> installer dynamic linking doesn't really help with that afaict?\n>\n\nCorrect - we're shipping the dependencies ourselves, so we have to\nrewrap/retest anyway.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiOn Sat, 22 Jun 2024 at 17:32, Andres Freund <[email protected]> wrote:\n> I don't think it's unreasonable to not support static linking, but I take\n> your point.\n\nSeparately from this thread: ISTM that on windows it'd be quite beneficial to\nlink a few things statically, given how annoying dealing with dlls can be?\nThere's also the perf issue addressed further down.Dealing with DLLs largely just boils down to copying them into the right place when packaging. The perf issue is a much more compelling reason to look at static linking imho. \n\n\n> My assumption all along was that Meson would replace autoconf etc. before\n> anything happened with MSVC, precisely because that's the type of\n> environment all the Postgres devs work in primarily. Instead we seem to\n> have taken what I think is a flawed approach of entirely replacing the\n> build system on the platform none of the devs use, whilst leaving the new\n> system as an experimental option on the platforms it will have had orders\n> of magnitude more testing.\n\nThe old system was a major bottleneck. For one, there was no way to run all\ntests. And even the tests that one could run, would run serially, leading to\nexceedingly long tests times. While that could partially be addressed by\nhaving both buildsystems in parallel, the old one would frequently break in a\nway that one couldn't reproduce on other systems. And resource wise it wasn't\nfeasible to test both old and new system for cfbot/CI.Hmm, I've found that running the tests under Meson takes notably longer than the old system - maybe 5 - 10x longer (\"meson test\" vs. \"vcregress check\"). I haven't yet put any effort into figuring out a cause for that yet. \nFWIW, dynamic linking has a noticeable overhead on other platforms too. A\nnon-dependencies-enabled postgres can do about 2x the connections-per-second\nthan a fully kitted out postgres can (basically due to more memory mapping\nmetadata being copied).  But on windows the overhead is larger because so much\nmore happens for every new connections, including loading all dlls from\nscratch.\n\nI suspect linking a few libraries statically would be quite worth it on\nwindows. On other platforms it'd be quite inadvisable to statically link\nlibraries, due to security updates, but for stuff like the EDB windows\ninstaller dynamic linking doesn't really help with that afaict?Correct - we're shipping the dependencies ourselves, so we have to rewrap/retest anyway. -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Mon, 24 Jun 2024 09:54:51 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\nOn 2024-06-24 09:54:51 +0100, Dave Page wrote:\n> > The old system was a major bottleneck. For one, there was no way to run all\n> > tests. And even the tests that one could run, would run serially, leading\n> > to\n> > exceedingly long tests times. 
While that could partially be addressed by\n> > having both buildsystems in parallel, the old one would frequently break\n> > in a\n> > way that one couldn't reproduce on other systems. And resource wise it\n> > wasn't\n> > feasible to test both old and new system for cfbot/CI.\n> >\n> \n> Hmm, I've found that running the tests under Meson takes notably longer\n> than the old system - maybe 5 - 10x longer (\"meson test\" vs. \"vcregress\n> check\"). I haven't yet put any effort into figuring out a cause for that\n> yet.\n\nThat's because vcregress check only runs a small portion of the tests (just\nthe main pg_regress checks, no tap tests, no extension). Which is pretty much\nmy point.\n\nTo run a decent, but still far from complete, portion of the tests you needed to do\nthis:\nhttps://github.com/postgres/postgres/blob/REL_15_STABLE/.cirrus.tasks.yml#L402-L440\n\nIf you want to run just the main regression tests with meson, you can:\n meson test --suite setup --suite regress\n\nTo see the list of all tests\n meson test --list\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Jun 2024 02:12:02 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "On Sat, 22 Jun 2024 at 17:35, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-06-21 15:36:56 +0100, Dave Page wrote:\n> > For giggles, I took a crack at doing that, manually creating .pc files\n> for\n> > everything I've been working with so far.\n>\n> Cool!\n>\n>\n> > It seems to work as expected, except that unlike everything else libintl\n> is\n> > detected entirely based on whether the header and library can be found. I\n> > had to pass extra lib and include dirs:\n>\n> Yea, right now libintl isn't using dependency detection because I didn't\n> see\n> any platform where it's distributed with a .pc for or such. It'd be just a\n> line or two to make it use one...\n>\n\nI think it should, for consistency if nothing else - especially if we're\nadding our own pc/cmake files to prebuilt dependencies.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Sat, 22 Jun 2024 at 17:35, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-06-21 15:36:56 +0100, Dave Page wrote:\n> For giggles, I took a crack at doing that, manually creating .pc files for\n> everything I've been working with so far.\n\nCool!\n\n\n> It seems to work as expected, except that unlike everything else libintl is\n> detected entirely based on whether the header and library can be found. I\n> had to pass extra lib and include dirs:\n\nYea, right now libintl isn't using dependency detection because I didn't see\nany platform where it's distributed with a .pc for or such. It'd be just a\nline or two to make it use one...I think it should, for consistency if nothing else - especially if we're adding our own pc/cmake files to prebuilt dependencies. -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Mon, 24 Jun 2024 13:24:05 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\nOn 2024-06-21 12:20:49 +0100, Dave Page wrote:\n> > I'm confused - the old build system wasn't flexible around this stuff *at\n> > all*. 
Everyone had to patch it to get dependencies to work, unless you\n> > chose\n> > exactly the right source to download from - which was often not documented\n> > or\n> > outdated.\n> >\n> \n> As I noted above - as the \"owner\" of the official packages, I never did\n> despite using a variety of upstream sources.\n\nFor reference, with 16 and src/tools/msvc:\n- upstream zstd build doesn't work, wrong filename (libzstd.dll.a instead of libzstd.lib)\n- upstream lz4 build doesn't work, wrong filename (liblz4.dll.a instead of liblz4.lib)\n- openssl, from https://slproweb.com/products/Win32OpenSSL.htm , as our\n docs suggest: doesn't work, wrong filenames (openssl.lib instead of\n lib*64.lib, works if you delete lib/VC/sslcrypto64MD.lib)\n- iconv/intl: mismatching library names (lib*.dll.a lib*.lib)\n\n- zlib at least at least from some of the sources (it's hard to tell, because\n everything available is so outdated), wrong filenames\n\nUpstream ICU works.\n\nI gave up at this point, so I don't know if libxml, xslt and uuid work without\npatching the sources.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jun 2024 03:41:45 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "On Tue, 25 Jun 2024 at 11:41, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-06-21 12:20:49 +0100, Dave Page wrote:\n> > > I'm confused - the old build system wasn't flexible around this stuff\n> *at\n> > > all*. Everyone had to patch it to get dependencies to work, unless you\n> > > chose\n> > > exactly the right source to download from - which was often not\n> documented\n> > > or\n> > > outdated.\n> > >\n> >\n> > As I noted above - as the \"owner\" of the official packages, I never did\n> > despite using a variety of upstream sources.\n>\n> For reference, with 16 and src/tools/msvc:\n> - upstream zstd build doesn't work, wrong filename (libzstd.dll.a instead\n> of libzstd.lib)\n> - upstream lz4 build doesn't work, wrong filename (liblz4.dll.a instead of\n> liblz4.lib)\n> - openssl, from https://slproweb.com/products/Win32OpenSSL.htm , as our\n> docs suggest: doesn't work, wrong filenames (openssl.lib instead of\n> lib*64.lib, works if you delete lib/VC/sslcrypto64MD.lib)\n> - iconv/intl: mismatching library names (lib*.dll.a lib*.lib)\n>\n> - zlib at least at least from some of the sources (it's hard to tell,\n> because\n> everything available is so outdated), wrong filenames\n>\n\nhttps://github.com/dpage/winpgbuild proves that the hacks above are not\nrequired *if* you build the dependencies in the recommended way for use\nwith MSVC++ (where documented), otherwise just native Windows.\n\nIf you, for example, build a dependency using Mingw/Msys, then you may get\ndifferent filenames than if you build the same thing using its VC++\nsolution or makefile. 
That's where most, if not all, of these issues come\nfrom.\n\nIt's probably worth noting that \"back in the day\" when most of this stuff\nwas built, there was no UCRT32 compiler option, and it really was a\npotential problem to mix VC++ and Mingw compiled binaries so there was a\nheavy focus on making sure everything was designed around the MSVC++ builds\nwherever they existed.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Tue, 25 Jun 2024 at 11:41, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-06-21 12:20:49 +0100, Dave Page wrote:\n> > I'm confused - the old build system wasn't flexible around this stuff *at\n> > all*. Everyone had to patch it to get dependencies to work, unless you\n> > chose\n> > exactly the right source to download from - which was often not documented\n> > or\n> > outdated.\n> >\n> \n> As I noted above - as the \"owner\" of the official packages, I never did\n> despite using a variety of upstream sources.\n\nFor reference, with 16 and src/tools/msvc:\n- upstream zstd build doesn't work, wrong filename (libzstd.dll.a instead of libzstd.lib)\n- upstream lz4 build doesn't work, wrong filename (liblz4.dll.a instead of liblz4.lib)\n- openssl, from https://slproweb.com/products/Win32OpenSSL.htm , as our\n  docs suggest: doesn't work, wrong filenames (openssl.lib instead of\n  lib*64.lib, works if you delete lib/VC/sslcrypto64MD.lib)\n- iconv/intl: mismatching library names (lib*.dll.a lib*.lib)\n\n- zlib at least at least from some of the sources (it's hard to tell, because\n  everything available is so outdated), wrong filenameshttps://github.com/dpage/winpgbuild proves that the hacks above are not required *if* you build the dependencies in the recommended way for use with MSVC++ (where documented), otherwise just native Windows.If you, for example, build a dependency using Mingw/Msys, then you may get different filenames than if you build the same thing using its VC++ solution or makefile. That's where most, if not all, of these issues come from.It's probably worth noting that \"back in the day\" when most of this stuff was built, there was no UCRT32 compiler option, and it really was a potential problem to mix VC++ and Mingw compiled binaries so there was a heavy focus on making sure everything was designed around the MSVC++ builds wherever they existed.-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 25 Jun 2024 11:54:56 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\nOn 2024-06-24 09:44:57 +0100, Dave Page wrote:\n> To address Andres' concerns around mis-detection of dependencies, or other\n> oddities such as required compiler flags not being included, I would\n> suggest that a) that's happened very rarely, if ever, in the past, and b)\n> we can always spit out an obvious warning if we've not been able to use\n> cmake or pkgconfig for any particular dependencies.\n\nI personally spent quite a few days hunting down issues related to this. Not\nbecause I wanted to, but because it was causing breakage and nobody else was\nlooking. 
For several years postgres didn't build against a modern perl, for\nexample, see the stuff leading up to\n http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=ccc59a83cd97\nbut nobody seemed to care for a prolonged amount of time.\n\nWe have evidence of random build hackery all over the tree - often\nentirely outdated, sometimes even *breaking* builds these days ([1]):\n\nhttps://github.com/postgres/postgres/blob/master/src/interfaces/libpq/Makefile#L80-L88\n(wrong library names for kerberos on 64bit systems, wrong ssl libnames for )\n\nhttps://github.com/postgres/postgres/blob/master/contrib/bool_plperl/Makefile#L30\nhttps://github.com/postgres/postgres/blob/master/src/pl/plpython/Makefile#L59-L72\nhttps://github.com/postgres/postgres/blob/master/src/pl/tcl/Makefile#L35-L51\nhttps://github.com/postgres/postgres/blob/master/config/python.m4#L62-L64x\n\nThere's plenty more, some of the more complicated cases are a bit less trivial\nto search for.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/CAGPVpCSKS9E0An4%3De7ZDnme%2By%3DWOcQFJYJegKO8kE9%3Dgh8NJKQ%40mail.gmail.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 04:23:37 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\nOn 2024-06-25 11:54:56 +0100, Dave Page wrote:\n> https://github.com/dpage/winpgbuild proves that the hacks above are not\n> required *if* you build the dependencies in the recommended way for use\n> with MSVC++ (where documented), otherwise just native Windows.\n\nPartially it just means that some of the hacks are now located in the \"build\ndependencies\" script. E.g. you're renaming libintl.dll.a, libiconv.dll.a,\nlibuuid.a to something that's expected by the buildmethod. And the scripts\nchange the directory structure for several other dependencies (e.g. zstd, krb).\n\n\n> If you, for example, build a dependency using Mingw/Msys, then you may get\n> different filenames than if you build the same thing using its VC++\n> solution or makefile. That's where most, if not all, of these issues come\n> from.\n\nYes, that's precisely my point. The set of correct names / flags depends on\nthings outside of postgres control. Hence they should be handled outside of\npostgres, not as part of postgres. Particularly because several of the\ndependencies can be built in multiple ways, resulting in multiple library\nnames. And it doesn't even just differ by compiler, there's ways to get\ndifferent library names for some of the deps even with the same compiler!\n\n\n> It's probably worth noting that \"back in the day\" when most of this stuff\n> was built, there was no UCRT32 compiler option, and it really was a\n> potential problem to mix VC++ and Mingw compiled binaries so there was a\n> heavy focus on making sure everything was designed around the MSVC++ builds\n> wherever they existed.\n\nAgreed, this luckily got easier. But it also increased the variety of expected\nlibrary names / flags. 
It's entirely reasonable to build postgres with msvc\nagainst an gcc built ICU or whatnot.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jun 2024 04:39:06 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi\n\nOn Tue, 25 Jun 2024 at 12:39, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-06-25 11:54:56 +0100, Dave Page wrote:\n> > https://github.com/dpage/winpgbuild proves that the hacks above are not\n> > required *if* you build the dependencies in the recommended way for use\n> > with MSVC++ (where documented), otherwise just native Windows.\n>\n> Partially it just means that some of the hacks are now located in the\n> \"build\n> dependencies\" script. E.g. you're renaming libintl.dll.a, libiconv.dll.a,\n> libuuid.a to something that's expected by the buildmethod. And the scripts\n> change the directory structure for several other dependencies (e.g. zstd,\n> krb).\n>\n>\n> > If you, for example, build a dependency using Mingw/Msys, then you may\n> get\n> > different filenames than if you build the same thing using its VC++\n> > solution or makefile. That's where most, if not all, of these issues come\n> > from.\n>\n> Yes, that's precisely my point. The set of correct names / flags depends on\n> things outside of postgres control. Hence they should be handled outside of\n> postgres, not as part of postgres. Particularly because several of the\n> dependencies can be built in multiple ways, resulting in multiple library\n> names. And it doesn't even just differ by compiler, there's ways to get\n> different library names for some of the deps even with the same compiler!\n>\n\nReplying to this and your previous email.\n\nI think we're in violent agreement here as to how things *should* be in an\nideal world. The issue for me is that this isn't an ideal world and the\ncurrent solution potentially makes it much harder to get a working\nPostgreSQL build on Windows - not only that, but everyone doing those\nbuilds potentially has to figure out how to get things to work for\nthemselves because we're pushing the knowledge outside of our build system.\n\nI've been building Postgres on Windows for years (and like to think I'm\nreasonably competent), and despite reading the docs I still ended up\nstarting multiple threads on the hackers list to try to understand how to\nget v17 to build. Would we accept the current Meson setup on Linux if\npeople had to hand-craft .pc files, or install dependencies using multiple\nthird-party package managers to get it to work?\n\nAs I previously noted, I think we should default to pkgconfig/cmake\ndetection, but then fall back to what we did previously (with suitably\nobnoxious warnings). Then at least a build environment that worked in the\npast should work in the future.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiOn Tue, 25 Jun 2024 at 12:39, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-06-25 11:54:56 +0100, Dave Page wrote:\n> https://github.com/dpage/winpgbuild proves that the hacks above are not\n> required *if* you build the dependencies in the recommended way for use\n> with MSVC++ (where documented), otherwise just native Windows.\n\nPartially it just means that some of the hacks are now located in the \"build\ndependencies\" script.  E.g. 
you're renaming libintl.dll.a, libiconv.dll.a,\nlibuuid.a to something that's expected by the buildmethod. And the scripts\nchange the directory structure for several other dependencies (e.g. zstd, krb).\n\n\n> If you, for example, build a dependency using Mingw/Msys, then you may get\n> different filenames than if you build the same thing using its VC++\n> solution or makefile. That's where most, if not all, of these issues come\n> from.\n\nYes, that's precisely my point. The set of correct names / flags depends on\nthings outside of postgres control. Hence they should be handled outside of\npostgres, not as part of postgres. Particularly because several of the\ndependencies can be built in multiple ways, resulting in multiple library\nnames. And it doesn't even just differ by compiler, there's ways to get\ndifferent library names for some of the deps even with the same compiler!Replying to this and your previous email. I think we're in violent agreement here as to how things *should* be in an ideal world. The issue for me is that this isn't an ideal world and the current solution potentially makes it much harder to get a working PostgreSQL build on Windows - not only that, but everyone doing those builds potentially has to figure out how to get things to work for themselves because we're pushing the knowledge outside of our build system.I've been building Postgres on Windows for years (and like to think I'm reasonably competent), and despite reading the docs I still ended up starting multiple threads on the hackers list to try to understand how to get v17 to build. Would we accept the current Meson setup on Linux if people had to hand-craft .pc files, or install dependencies using multiple third-party package managers to get it to work?As I previously noted, I think we should default to pkgconfig/cmake detection, but then fall back to what we did previously (with suitably obnoxious warnings). Then at least a build environment that worked in the past should work in the future.-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 25 Jun 2024 13:33:25 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\nI've been hacking on addressing some of the complaints (after having been\noff-work in a somewhat unplanned way for most of the last two weeks). With\nsome already opened and soon-to-be-proposed PRs to Dave's winbuild and the\nattached changes I think the concerns can largely be addressed.\n\n\nHere's the current set of changes:\n\n0001: meson: Add missing argument to gssapi.h check\n Largely independent, but included to avoid conflicts\n\n0002: Don't define HAVE_[GSSAPI_]GSSAPI_EXT_H\n Largely independent, but included to avoid conflicts\n\n0003: meson: Add support for detecting gss without pkg-config\n0004: meson: Add support for detecting ossp-uuid without pkg-config\n Do what it says on the tin. Neither includes dependency information via\n pkg-config or cmake in their upstream repos.\n\n0005: meson: Add dependency lookups via names used by cmake\n\n This adds support for the alternative names used by cmake lookups. 
That\n addresses\n\n0006: meson: nls: Handle intl requiring iconv\n This afaict is only required when dealing with a static libc, so it might be\n considered independent\n\n0007: windows-tab-complete-workaround\n Just included so the build doesn't fail for me with all the dependencies\n installed.\n\n0008: krb-vs-openssl-workaround\n Just included so the build doesn't fail for me with all the dependencies\n installed. There's a separate thread to discuss the right fix.\n\n0009: wip: meson: reduce linker noise for executables\n\n This is mainly a minor quality of life thing for me when hacking on this. If\n a static libintl is used, link.exe outputs a message for every binary, which\n makes it harder to see warnings.\n\n0010: meson: wip: tcl\n This is just some preliminary hacking needs more work.\n\n\nNote that cmake is automatically installed as part of visual studio these\ndays.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 8 Jul 2024 23:51:01 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "> From 9f7c96dfab4d807e668c9d32b44db5f4ff122e15 Mon Sep 17 00:00:00 2001\n> From: Andres Freund <[email protected]>\n> Date: Mon, 8 Jul 2024 15:55:56 -0700\n> Subject: [PATCH v2 02/10] Don't define HAVE_[GSSAPI_]GSSAPI_EXT_H\n> \n> The check for gssapi_ext.h was added in f7431bca8b0. As we require\n> gssapi_ext.h to be present, there's no point in defining symbols for the\n> header presence.\n> \n> While at it, use cc.has_header() instead of cc.check_header(), that's a bit\n> cheaper and it seems improbably that gssapi.h would compile while gssapi_ext.h\n> would not.\n\nimprobable\n\nOther than that, it looks pretty solid. Looks like we could help future \nus out by teaching compiler.find_library() to take a list of names to \nlook at similar to how dependency() works now.\n\nReviewed-by: Tristan Partin <[email protected]>\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 16 Jul 2024 15:53:45 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\nOn 2024-07-16 15:53:45 -0500, Tristan Partin wrote:\n> Other than that, it looks pretty solid.\n\nThanks for looking! I'm thinking of pushing the first few patches soon-ish.\n\nI'm debating between going for 17 + HEAD or also applying it to 16, to keep\nthe trees more similar.\n\n\n> Looks like we could help future us out by teaching compiler.find_library()\n> to take a list of names to look at similar to how dependency() works now.\n\nYep, that'd be useful.\n\nAndres\n\n\n", "msg_date": "Wed, 17 Jul 2024 09:49:47 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Hi,\n\nOn 2024-07-17 09:49:47 -0700, Andres Freund wrote:\n> On 2024-07-16 15:53:45 -0500, Tristan Partin wrote:\n> > Other than that, it looks pretty solid.\n> \n> Thanks for looking! 
I'm thinking of pushing the first few patches soon-ish.\n> \n> I'm debating between going for 17 + HEAD or also applying it to 16, to keep\n> the trees more similar.\n\nPushed a number of these to 16 - HEAD.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 20 Jul 2024 13:56:36 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "On Sat, 20 Jul 2024 at 21:56, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-07-17 09:49:47 -0700, Andres Freund wrote:\n> > On 2024-07-16 15:53:45 -0500, Tristan Partin wrote:\n> > > Other than that, it looks pretty solid.\n> >\n> > Thanks for looking! I'm thinking of pushing the first few patches\n> soon-ish.\n> >\n> > I'm debating between going for 17 + HEAD or also applying it to 16, to\n> keep\n> > the trees more similar.\n>\n> Pushed a number of these to 16 - HEAD.\n>\n\nThanks. I've updated winpgbuild with the additional pkgconfig file needed\nfor ICU now, so it should better match a *nix build.\n\nAny chance you can look at the GSSAPI/OpenSSL X509_NAME conflict one? I'm\nstill having to patch around that to build with all the dependencies.\n\nhttps://www.postgresql.org/message-id/flat/CA%2BOCxoxwsgi8QdzN8A0OPGuGfu_1vEW3ufVBnbwd3gfawVpsXw%40mail.gmail.com\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nPGDay UK 2024, 11th September, London: https://2024.pgday.uk/\n\nOn Sat, 20 Jul 2024 at 21:56, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-07-17 09:49:47 -0700, Andres Freund wrote:\n> On 2024-07-16 15:53:45 -0500, Tristan Partin wrote:\n> > Other than that, it looks pretty solid.\n> \n> Thanks for looking!  I'm thinking of pushing the first few patches soon-ish.\n> \n> I'm debating between going for 17 + HEAD or also applying it to 16, to keep\n> the trees more similar.\n\nPushed a number of these to 16 - HEAD.Thanks. I've updated winpgbuild with the additional pkgconfig file needed for ICU now, so it should better match a *nix build.Any chance you can look at the GSSAPI/OpenSSL X509_NAME conflict one? I'm still having to patch around that to build with all the dependencies. https://www.postgresql.org/message-id/flat/CA%2BOCxoxwsgi8QdzN8A0OPGuGfu_1vEW3ufVBnbwd3gfawVpsXw%40mail.gmail.com-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.comPGDay UK 2024, 11th September, London: https://2024.pgday.uk/", "msg_date": "Wed, 24 Jul 2024 16:13:24 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Meson far from ready on Windows" }, { "msg_contents": "Andres Freund:\n>> That's not necessarily true. The nix package manager and thus NixOS track all dependencies for a piece of software. If any of the dependencies are updated, all dependents are rebuilt, too. So the security concern doesn't apply here. There is a \"static overlay\", which builds everything linked fully statically.\n> \n> Right. There's definitely some scenario where it's ok, I was simplifying a bit.\n> \n>> Unfortunately, PostgreSQL doesn't build in that, so far.\n> \n> I've built mostly statically linked pg without much of a problem, what trouble did you encounter? 
Think there were some issues with linking Kerberos and openldap statically, but not on postgres' side.\n\nMostly the \"can't disable shared libraries / backend builds\" part \nmentioned below.\n\n> Building the postgres backend without support for dynamic linking doesn't make sense though. Extensions are just stop ingrained part of pg.\n\nI think there might be some limited use-cases for a fully-static \npostgres backend without the ability to load extensions: Even if we get \nlibpq to build fine in the fully-static overlay mentioned above, a lot \nof reverse dependencies have to disable tests, because they need a \nrunning postgres server to run their tests against.\n\nProviding a really simple postgres backend, with only minimal \nfunctionality, would allow some basic sanity tests, even in this purely \nstatic environment.\n\n>> Lately, I have been looking into building at least libpq in that static overlay, via Meson. There are two related config options:\n>> -Ddefault_library=shared|static|both\n>> -Dprefer_static\n>>\n>> The first controls which libraries (libpq, ...) to build ourselves. The second controls linking, IIUC also against external dependencies.\n> \n> Pg by default builds a static libpq on nearly all platforms (not aix I think and maybe not Windows when building with autoconf, not sure about the old msvc system) today?\n\nYes, PG builds a static libpq today. But it's hard-to-impossible to \n*disable building the shared library*. In the fully static overlay, this \ncauses the build to fail, because shared libraries can't be build.\n\n>> Maybe it would be a first step to support -Dprefer_static?\n> \n> That should work for nearly all dependencies today. Except for libintl, I think. I found that there are a lot of buglets in static link dependencies of various libraries though.\n\nTo support prefer_static, we'd also have to look at our internal \nlinking, i.e. whether for example psql is linked against libpq \nstatically or dynamically. Once prefer_static controls that, that's \nalready a step forward to be able to build more of the code-base without \nshared libraries available.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sun, 18 Aug 2024 16:30:11 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Meson far from ready on Windows" } ]
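A rough sketch of the -Dprefer_static idea discussed at the end of this thread: Meson can build both library flavours with both_libraries() and let the builtin prefer_static option decide which one frontend executables such as psql link against. This is illustration only; the names pq_sources, frontend_deps and libpq_both are invented here and are not taken from PostgreSQL's actual meson.build files.

```meson
# Sketch only: hypothetical names, not PostgreSQL's real build files.
# both_libraries() produces a shared and a static flavour in one go;
# -Dprefer_static then selects which one executables link against.
libpq_both = both_libraries('pq',
  pq_sources,                    # assumed list of libpq source files
  dependencies: frontend_deps,   # assumed common frontend dependencies
)

if get_option('prefer_static')
  libpq_link_with = libpq_both.get_static_lib()
else
  libpq_link_with = libpq_both.get_shared_lib()
endif

libpq_dep = declare_dependency(
  link_with: libpq_link_with,
  include_directories: include_directories('.'),
)
```

Combined with -Ddefault_library=static, a switch of this kind could also avoid building any shared libpq at all, which is the part that currently fails in a fully static environment such as the Nix static overlay mentioned above.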
[ { "msg_contents": "Hi,\r\n\r\nPostgreSQL 17 Beta 2 is planned to be release on June 27, 2024. Please \r\ncontinue your hard work on closing out open items[1] ahead of the \r\nrelease and have the fixes targeted for the release committed by June \r\n22, 2024.\r\n\r\nThanks!\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items", "msg_date": "Tue, 18 Jun 2024 12:10:50 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 17 Beta 2 release date & commit freeze" }, { "msg_contents": "On Tue, Jun 18, 2024 at 12:10:50PM -0400, Jonathan Katz wrote:\n> \n> Hi,\n> \n> PostgreSQL 17 Beta 2 is planned to be release on June 27, 2024. Please\n> continue your hard work on closing out open items[1] ahead of the release\n> and have the fixes targeted for the release committed by June 22, 2024.\n\nI am adding markup to the PG 17 release notes and will be finished by\nthis Friday.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 18 Jun 2024 12:32:49 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 2 release date & commit freeze" } ]
[ { "msg_contents": "Hi.\n\nNow when planner finds suitable pathkeys in \ngenerate_orderedappend_paths(), it uses them, even if explicit sort of \nthe cheapest child path could be more efficient.\n\nWe encountered this issue on partitioned table with two indexes, where \none is suitable for sorting, and another is good for selecting data. \nMergeAppend was generated\nwith subpaths doing index scan on suitably ordered index and filtering a \nlot of data.\nThe suggested fix allows MergeAppend to consider sorting on cheapest \nchildrel total path as an alternative.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Tue, 18 Jun 2024 19:45:09 +0300", "msg_from": "Alexander Pyhalov <[email protected]>", "msg_from_op": true, "msg_subject": "MergeAppend could consider sorting cheapest child path" } ]
[ { "msg_contents": "I noticed that the \"check\" variable, which is used for \"pg_upgrade\n--check\", is commented as follows:\n\n\tbool\t\tcheck;\t\t\t/* true -> ask user for permission to make\n\t\t\t\t\t\t\t\t * changes */\n\nThis comment was first added when pg_upgrade was introduced (commit\nc2e9b2f288), but I imagine it predates even that. I've attached a patch to\nfix this. Barring objections, I'll probably commit this soon.\n\n-- \nnathan", "msg_date": "Tue, 18 Jun 2024 14:50:05 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "fix pg_upgrade comment" }, { "msg_contents": "> On 18 Jun 2024, at 21:50, Nathan Bossart <[email protected]> wrote:\n> \n> I noticed that the \"check\" variable, which is used for \"pg_upgrade\n> --check\", is commented as follows:\n> \n> bool check; /* true -> ask user for permission to make\n> * changes */\n> \n> This comment was first added when pg_upgrade was introduced (commit\n> c2e9b2f288), but I imagine it predates even that. I've attached a patch to\n> fix this. Barring objections, I'll probably commit this soon.\n\nNice catch, +1 for committing. \n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 18 Jun 2024 22:20:06 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix pg_upgrade comment" }, { "msg_contents": "On Tue, Jun 18, 2024 at 10:20:06PM +0200, Daniel Gustafsson wrote:\n> Nice catch, +1 for committing. \n\nCommitted.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 19 Jun 2024 16:14:10 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fix pg_upgrade comment" } ]
[ { "msg_contents": "Hello hackers,\n\nCurrently, obtaining the Access Control List (ACL) for a database object\nrequires querying specific pg_catalog tables directly, where the user\nneeds to know the name of the ACL column for the object.\n\nConsider:\n\n```\nCREATE USER test_user;\nCREATE USER test_owner;\nCREATE SCHEMA test_schema AUTHORIZATION test_owner;\nSET ROLE TO test_owner;\nCREATE TABLE test_schema.test_table ();\nGRANT SELECT ON TABLE test_schema.test_table TO test_user;\n```\n\nTo get the ACL we can do:\n\n```\nSELECT relacl FROM pg_class WHERE oid = 'test_schema.test_table'::regclass::oid;\n\n relacl\n---------------------------------------------------------\n {test_owner=arwdDxtm/test_owner,test_user=r/test_owner}\n```\n\nAttached patch adds a new SQL-callable functoin `pg_get_acl()`, so we can do:\n\n```\nSELECT pg_get_acl('pg_class'::regclass, 'test_schema.test_table'::regclass::oid);\n pg_get_acl\n---------------------------------------------------------\n {test_owner=arwdDxtm/test_owner,test_user=r/test_owner}\n```\n\nThe original idea for this function came from Alvaro Herrera,\nin this related discussion:\nhttps://postgr.es/m/[email protected]\n\nOn Thu, Mar 25, 2021, at 16:16, Alvaro Herrera wrote:\n> On 2021-Mar-25, Joel Jacobson wrote:\n>\n>> pg_shdepend doesn't contain the aclitem info though,\n>> so it won't work for pg_permissions if we want to expose\n>> privilege_type, is_grantable and grantor.\n>\n> Ah, of course -- the only way to obtain the acl columns is by going\n> through the catalogs individually, so it won't be possible. I think\n> this could be fixed with some very simple, quick function pg_get_acl()\n> that takes a catalog OID and object OID and returns the ACL; then\n> use aclexplode() to obtain all those details.\n\nThe pg_get_acl() function has been implemented by following\nthe guidance from Alvaro in the related dicussion:\n\nOn Fri, Mar 26, 2021, at 13:43, Alvaro Herrera wrote:\n> AFAICS the way to do it is like AlterObjectOwner_internal obtains data\n> -- first do get_catalog_object_by_oid (gives you the HeapTuple that\n> represents the object), then\n> heap_getattr( ..., get_object_attnum_acl(), ..), and there you have the\n> ACL which you can \"explode\" (or maybe just return as-is).\n>\n> AFAICS if you do this, it's just one cache lookups per object, or\n> one indexscan for the cases with no by-OID syscache. It should be much\n> cheaper than the UNION ALL query. And you use pg_shdepend to guide\n> this, so you only do it for the objects that you already know are\n> interesting.\n\nMany thanks Alvaro for the very helpful instructions.\n\nThis function would then allow users to e.g. create a view to show the privileges\nfor all database objects, like the pg_privileges system view suggested in the\nrelated discussion.\n\nTests and docs are added.\n\nBest regards,\nJoel Jakobsson", "msg_date": "Wed, 19 Jun 2024 13:34:31 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "Em qua., 19 de jun. 
de 2024 às 08:35, Joel Jacobson <[email protected]>\nescreveu:\n\n> Hello hackers,\n>\n> Currently, obtaining the Access Control List (ACL) for a database object\n> requires querying specific pg_catalog tables directly, where the user\n> needs to know the name of the ACL column for the object.\n>\n> Consider:\n>\n> ```\n> CREATE USER test_user;\n> CREATE USER test_owner;\n> CREATE SCHEMA test_schema AUTHORIZATION test_owner;\n> SET ROLE TO test_owner;\n> CREATE TABLE test_schema.test_table ();\n> GRANT SELECT ON TABLE test_schema.test_table TO test_user;\n> ```\n>\n> To get the ACL we can do:\n>\n> ```\n> SELECT relacl FROM pg_class WHERE oid =\n> 'test_schema.test_table'::regclass::oid;\n>\n> relacl\n> ---------------------------------------------------------\n> {test_owner=arwdDxtm/test_owner,test_user=r/test_owner}\n> ```\n>\n> Attached patch adds a new SQL-callable functoin `pg_get_acl()`, so we can\n> do:\n>\n> ```\n> SELECT pg_get_acl('pg_class'::regclass,\n> 'test_schema.test_table'::regclass::oid);\n> pg_get_acl\n> ---------------------------------------------------------\n> {test_owner=arwdDxtm/test_owner,test_user=r/test_owner}\n> ```\n>\n> The original idea for this function came from Alvaro Herrera,\n> in this related discussion:\n> https://postgr.es/m/[email protected]\n>\n> On Thu, Mar 25, 2021, at 16:16, Alvaro Herrera wrote:\n> > On 2021-Mar-25, Joel Jacobson wrote:\n> >\n> >> pg_shdepend doesn't contain the aclitem info though,\n> >> so it won't work for pg_permissions if we want to expose\n> >> privilege_type, is_grantable and grantor.\n> >\n> > Ah, of course -- the only way to obtain the acl columns is by going\n> > through the catalogs individually, so it won't be possible. I think\n> > this could be fixed with some very simple, quick function pg_get_acl()\n> > that takes a catalog OID and object OID and returns the ACL; then\n> > use aclexplode() to obtain all those details.\n>\n> The pg_get_acl() function has been implemented by following\n> the guidance from Alvaro in the related dicussion:\n>\n> On Fri, Mar 26, 2021, at 13:43, Alvaro Herrera wrote:\n> > AFAICS the way to do it is like AlterObjectOwner_internal obtains data\n> > -- first do get_catalog_object_by_oid (gives you the HeapTuple that\n> > represents the object), then\n> > heap_getattr( ..., get_object_attnum_acl(), ..), and there you have the\n> > ACL which you can \"explode\" (or maybe just return as-is).\n> >\n> > AFAICS if you do this, it's just one cache lookups per object, or\n> > one indexscan for the cases with no by-OID syscache. It should be much\n> > cheaper than the UNION ALL query. And you use pg_shdepend to guide\n> > this, so you only do it for the objects that you already know are\n> > interesting.\n>\n> Many thanks Alvaro for the very helpful instructions.\n>\n> This function would then allow users to e.g. create a view to show the\n> privileges\n> for all database objects, like the pg_privileges system view suggested in\n> the\n> related discussion.\n>\n> Tests and docs are added.\n>\nHi,\nFor some reason, the function pg_get_acl, does not exist in generated\nfmgrtab.c\n\nSo, when install postgres, the function does not work.\n\npostgres=# SELECT pg_get_acl('pg_class'::regclass, 'atest2'::regclass::oid);\nERROR: function pg_get_acl(regclass, oid) does not exist\nLINE 1: SELECT pg_get_acl('pg_class'::regclass, 'atest2'::regclass::...\n ^\nHINT: No function matches the given name and argument types. 
You might\nneed to add explicit type casts.\n\nbest regards,\nRanier Vilela\n\nEm qua., 19 de jun. de 2024 às 08:35, Joel Jacobson <[email protected]> escreveu:Hello hackers,\n\nCurrently, obtaining the Access Control List (ACL) for a database object\nrequires querying specific pg_catalog tables directly, where the user\nneeds to know the name of the ACL column for the object.\n\nConsider:\n\n```\nCREATE USER test_user;\nCREATE USER test_owner;\nCREATE SCHEMA test_schema AUTHORIZATION test_owner;\nSET ROLE TO test_owner;\nCREATE TABLE test_schema.test_table ();\nGRANT SELECT ON TABLE test_schema.test_table TO test_user;\n```\n\nTo get the ACL we can do:\n\n```\nSELECT relacl FROM pg_class WHERE oid = 'test_schema.test_table'::regclass::oid;\n\n                         relacl\n---------------------------------------------------------\n {test_owner=arwdDxtm/test_owner,test_user=r/test_owner}\n```\n\nAttached patch adds a new SQL-callable functoin `pg_get_acl()`, so we can do:\n\n```\nSELECT pg_get_acl('pg_class'::regclass, 'test_schema.test_table'::regclass::oid);\n                       pg_get_acl\n---------------------------------------------------------\n {test_owner=arwdDxtm/test_owner,test_user=r/test_owner}\n```\n\nThe original idea for this function came from Alvaro Herrera,\nin this related discussion:\nhttps://postgr.es/m/[email protected]\n\nOn Thu, Mar 25, 2021, at 16:16, Alvaro Herrera wrote:\n> On 2021-Mar-25, Joel Jacobson wrote:\n>\n>> pg_shdepend doesn't contain the aclitem info though,\n>> so it won't work for pg_permissions if we want to expose\n>> privilege_type, is_grantable and grantor.\n>\n> Ah, of course -- the only way to obtain the acl columns is by going\n> through the catalogs individually, so it won't be possible.  I think\n> this could be fixed with some very simple, quick function pg_get_acl()\n> that takes a catalog OID and object OID and returns the ACL; then\n> use aclexplode() to obtain all those details.\n\nThe pg_get_acl() function has been implemented by following\nthe guidance from Alvaro in the related dicussion:\n\nOn Fri, Mar 26, 2021, at 13:43, Alvaro Herrera wrote:\n> AFAICS the way to do it is like AlterObjectOwner_internal obtains data\n> -- first do get_catalog_object_by_oid (gives you the HeapTuple that\n> represents the object), then\n> heap_getattr( ..., get_object_attnum_acl(), ..), and there you have the\n> ACL which you can \"explode\" (or maybe just return as-is).\n>\n> AFAICS if you do this, it's just one cache lookups per object, or\n> one indexscan for the cases with no by-OID syscache.  It should be much\n> cheaper than the UNION ALL query.  And you use pg_shdepend to guide\n> this, so you only do it for the objects that you already know are\n> interesting.\n\nMany thanks Alvaro for the very helpful instructions.\n\nThis function would then allow users to e.g. create a view to show the privileges\nfor all database objects, like the pg_privileges system view suggested in the\nrelated discussion.\n\nTests and docs are added.Hi,For some reason, the function pg_get_acl, does not exist in generated fmgrtab.cSo, when install postgres, the function does not work.postgres=# SELECT pg_get_acl('pg_class'::regclass, 'atest2'::regclass::oid);ERROR:  function pg_get_acl(regclass, oid) does not existLINE 1: SELECT pg_get_acl('pg_class'::regclass, 'atest2'::regclass::...               ^HINT:  No function matches the given name and argument types. 
You might need to add explicit type casts.best regards,Ranier Vilela", "msg_date": "Wed, 19 Jun 2024 09:59:23 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "Hi Ranier,\n\nThanks for looking at this.\n\nI've double-checked the patch I sent, and it works fine.\n\nI think I know the cause of your problem:\n\nSince this is a catalog change, you need to run `make clean`, to ensure the catalog is rebuilt,\nfollowed by the usual `make && make install`.\n\nYou also need to run `initdb` to create a new database cluster, with the new catalog version.\n\nLet me know if you need more specific instructions.\n\nBest,\nJoel\n\nOn Wed, Jun 19, 2024, at 14:59, Ranier Vilela wrote:\n> Em qua., 19 de jun. de 2024 às 08:35, Joel Jacobson <[email protected]> \n> escreveu:\n>> Hello hackers,\n>> \n>> Currently, obtaining the Access Control List (ACL) for a database object\n>> requires querying specific pg_catalog tables directly, where the user\n>> needs to know the name of the ACL column for the object.\n>> \n>> Consider:\n>> \n>> ```\n>> CREATE USER test_user;\n>> CREATE USER test_owner;\n>> CREATE SCHEMA test_schema AUTHORIZATION test_owner;\n>> SET ROLE TO test_owner;\n>> CREATE TABLE test_schema.test_table ();\n>> GRANT SELECT ON TABLE test_schema.test_table TO test_user;\n>> ```\n>> \n>> To get the ACL we can do:\n>> \n>> ```\n>> SELECT relacl FROM pg_class WHERE oid = 'test_schema.test_table'::regclass::oid;\n>> \n>> relacl\n>> ---------------------------------------------------------\n>> {test_owner=arwdDxtm/test_owner,test_user=r/test_owner}\n>> ```\n>> \n>> Attached patch adds a new SQL-callable functoin `pg_get_acl()`, so we can do:\n>> \n>> ```\n>> SELECT pg_get_acl('pg_class'::regclass, 'test_schema.test_table'::regclass::oid);\n>> pg_get_acl\n>> ---------------------------------------------------------\n>> {test_owner=arwdDxtm/test_owner,test_user=r/test_owner}\n>> ```\n>> \n>> The original idea for this function came from Alvaro Herrera,\n>> in this related discussion:\n>> https://postgr.es/m/[email protected]\n>> \n>> On Thu, Mar 25, 2021, at 16:16, Alvaro Herrera wrote:\n>> > On 2021-Mar-25, Joel Jacobson wrote:\n>> >\n>> >> pg_shdepend doesn't contain the aclitem info though,\n>> >> so it won't work for pg_permissions if we want to expose\n>> >> privilege_type, is_grantable and grantor.\n>> >\n>> > Ah, of course -- the only way to obtain the acl columns is by going\n>> > through the catalogs individually, so it won't be possible. I think\n>> > this could be fixed with some very simple, quick function pg_get_acl()\n>> > that takes a catalog OID and object OID and returns the ACL; then\n>> > use aclexplode() to obtain all those details.\n>> \n>> The pg_get_acl() function has been implemented by following\n>> the guidance from Alvaro in the related dicussion:\n>> \n>> On Fri, Mar 26, 2021, at 13:43, Alvaro Herrera wrote:\n>> > AFAICS the way to do it is like AlterObjectOwner_internal obtains data\n>> > -- first do get_catalog_object_by_oid (gives you the HeapTuple that\n>> > represents the object), then\n>> > heap_getattr( ..., get_object_attnum_acl(), ..), and there you have the\n>> > ACL which you can \"explode\" (or maybe just return as-is).\n>> >\n>> > AFAICS if you do this, it's just one cache lookups per object, or\n>> > one indexscan for the cases with no by-OID syscache. It should be much\n>> > cheaper than the UNION ALL query. 
And you use pg_shdepend to guide\n>> > this, so you only do it for the objects that you already know are\n>> > interesting.\n>> \n>> Many thanks Alvaro for the very helpful instructions.\n>> \n>> This function would then allow users to e.g. create a view to show the privileges\n>> for all database objects, like the pg_privileges system view suggested in the\n>> related discussion.\n>> \n>> Tests and docs are added.\n> Hi,\n> For some reason, the function pg_get_acl, does not exist in generated fmgrtab.c\n>\n> So, when install postgres, the function does not work.\n>\n> postgres=# SELECT pg_get_acl('pg_class'::regclass, \n> 'atest2'::regclass::oid);\n> ERROR: function pg_get_acl(regclass, oid) does not exist\n> LINE 1: SELECT pg_get_acl('pg_class'::regclass, 'atest2'::regclass::...\n> ^\n> HINT: No function matches the given name and argument types. You might \n> need to add explicit type casts.\n>\n> best regards,\n> Ranier Vilela\n\n-- \nKind regards,\n\nJoel\n\n\n", "msg_date": "Wed, 19 Jun 2024 15:26:03 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "Em qua., 19 de jun. de 2024 às 10:26, Joel Jacobson <[email protected]>\nescreveu:\n\n> Hi Ranier,\n>\n> Thanks for looking at this.\n>\n> I've double-checked the patch I sent, and it works fine.\n>\n> I think I know the cause of your problem:\n>\n> Since this is a catalog change, you need to run `make clean`, to ensure\n> the catalog is rebuilt,\n> followed by the usual `make && make install`.\n>\n> You also need to run `initdb` to create a new database cluster, with the\n> new catalog version.\n>\n> Let me know if you need more specific instructions.\n>\nSorry, sorry but I'm on Windows -> meson.\n\nDouble checked with:\nninja clean\nninja\nninja install\n\nbest regards,\nRanier Vilela\n\nEm qua., 19 de jun. de 2024 às 10:26, Joel Jacobson <[email protected]> escreveu:Hi Ranier,\n\nThanks for looking at this.\n\nI've double-checked the patch I sent, and it works fine.\n\nI think I know the cause of your problem:\n\nSince this is a catalog change, you need to run `make clean`, to ensure the catalog is rebuilt,\nfollowed by the usual `make && make install`.\n\nYou also need to run `initdb` to create a new database cluster, with the new catalog version.\n\nLet me know if you need more specific instructions.Sorry, sorry but I'm on Windows -> meson.Double checked with:ninja cleanninjaninja installbest regards,Ranier Vilela", "msg_date": "Wed, 19 Jun 2024 10:28:37 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "Em qua., 19 de jun. de 2024 às 10:28, Ranier Vilela <[email protected]>\nescreveu:\n\n> Em qua., 19 de jun. 
de 2024 às 10:26, Joel Jacobson <[email protected]>\n> escreveu:\n>\n>> Hi Ranier,\n>>\n>> Thanks for looking at this.\n>>\n>> I've double-checked the patch I sent, and it works fine.\n>>\n>> I think I know the cause of your problem:\n>>\n>> Since this is a catalog change, you need to run `make clean`, to ensure\n>> the catalog is rebuilt,\n>> followed by the usual `make && make install`.\n>>\n>> You also need to run `initdb` to create a new database cluster, with the\n>> new catalog version.\n>>\n>> Let me know if you need more specific instructions.\n>>\n> Sorry, sorry but I'm on Windows -> meson.\n>\n> Double checked with:\n> ninja clean\n> ninja\n> ninja install\n>\nSorry for the noise, now pg_get_acl is shown in the regress test.\n\nRegarding the patch, could it be written in the following style?\n\nDatum\npg_get_acl(PG_FUNCTION_ARGS)\n{\nOid classId = PG_GETARG_OID(0);\nOid objectId = PG_GETARG_OID(1);\nOid catalogId;\nAttrNumber Anum_oid;\nAttrNumber Anum_acl;\n\n/* for \"pinned\" items in pg_depend, return null */\nif (!OidIsValid(classId) && !OidIsValid(objectId))\nPG_RETURN_NULL();\n\ncatalogId = (classId == LargeObjectRelationId) ?\nLargeObjectMetadataRelationId : classId;\nAnum_oid = get_object_attnum_oid(catalogId);\nAnum_acl = get_object_attnum_acl(catalogId);\n\nif (Anum_acl != InvalidAttrNumber)\n{\nRelation rel;\nHeapTuple tup;\nDatum datum;\nbool isnull;\n\nrel = table_open(catalogId, AccessShareLock);\n\ntup = get_catalog_object_by_oid(rel, Anum_oid, objectId);\nif (!HeapTupleIsValid(tup))\nelog(ERROR, \"cache lookup failed for object %u of catalog \\\"%s\\\"\",\nobjectId, RelationGetRelationName(rel));\n\ndatum = heap_getattr(tup, Anum_acl, RelationGetDescr(rel), &isnull);\n\ntable_close(rel, AccessShareLock);\n\nif (!isnull)\nPG_RETURN_DATUM(datum);\n}\n\nPG_RETURN_NULL();\n}\n\nbest regards,\nRanier Vilela\n\nEm qua., 19 de jun. de 2024 às 10:28, Ranier Vilela <[email protected]> escreveu:Em qua., 19 de jun. de 2024 às 10:26, Joel Jacobson <[email protected]> escreveu:Hi Ranier,\n\nThanks for looking at this.\n\nI've double-checked the patch I sent, and it works fine.\n\nI think I know the cause of your problem:\n\nSince this is a catalog change, you need to run `make clean`, to ensure the catalog is rebuilt,\nfollowed by the usual `make && make install`.\n\nYou also need to run `initdb` to create a new database cluster, with the new catalog version.\n\nLet me know if you need more specific instructions.Sorry, sorry but I'm on Windows -> meson.Double checked with:ninja cleanninjaninja installSorry for the noise, now pg_get_acl is shown in the regress test.Regarding the patch, could it be written in the following style?Datumpg_get_acl(PG_FUNCTION_ARGS){\tOid\t\t\tclassId = PG_GETARG_OID(0);\tOid\t\t\tobjectId = PG_GETARG_OID(1);\tOid\t\t\tcatalogId;\tAttrNumber\tAnum_oid;\tAttrNumber\tAnum_acl;\t/* for \"pinned\" items in pg_depend, return null */\tif (!OidIsValid(classId) && !OidIsValid(objectId))\t\tPG_RETURN_NULL();\tcatalogId = (classId == LargeObjectRelationId) ? 
LargeObjectMetadataRelationId : classId;\tAnum_oid = get_object_attnum_oid(catalogId);\tAnum_acl = get_object_attnum_acl(catalogId);\tif (Anum_acl != InvalidAttrNumber)\t{\t\tRelation\trel;\t\tHeapTuple\ttup;\t\tDatum\t\tdatum;\t\tbool\t\tisnull;\t\trel = table_open(catalogId, AccessShareLock);\t\ttup = get_catalog_object_by_oid(rel, Anum_oid, objectId);\t\tif (!HeapTupleIsValid(tup))\t\t\telog(ERROR, \"cache lookup failed for object %u of catalog \\\"%s\\\"\",\t\t\t\tobjectId, RelationGetRelationName(rel));\t\tdatum = heap_getattr(tup, Anum_acl, RelationGetDescr(rel), &isnull);\t\ttable_close(rel, AccessShareLock);\t\tif (!isnull)\t\t\tPG_RETURN_DATUM(datum);\t}\tPG_RETURN_NULL();}best regards,Ranier Vilela", "msg_date": "Wed, 19 Jun 2024 10:51:32 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Wed, Jun 19, 2024, at 15:51, Ranier Vilela wrote:\n> Regarding the patch, could it be written in the following style?\n\nThanks for nice improvement. New version attached.\n\nBest,\nJoel", "msg_date": "Wed, 19 Jun 2024 16:21:43 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Wed, 19 Jun 2024 at 07:35, Joel Jacobson <[email protected]> wrote:\n\n> Hello hackers,\n>\n> Currently, obtaining the Access Control List (ACL) for a database object\n> requires querying specific pg_catalog tables directly, where the user\n> needs to know the name of the ACL column for the object.\n>\n\nI have no idea how often this would be useful, but I wonder if it could\nwork to have overloaded single-parameter versions for each of regprocedure\n(pg_proc.proacl), regclass (pg_class.relacl), …. To call, just cast the OID\nto the appropriate reg* type.\n\nFor example: To get the ACL for table 'example_table', call pg_get_acl\n('example_table'::regclass)\n\nOn Wed, 19 Jun 2024 at 07:35, Joel Jacobson <[email protected]> wrote:Hello hackers,\n\nCurrently, obtaining the Access Control List (ACL) for a database object\nrequires querying specific pg_catalog tables directly, where the user\nneeds to know the name of the ACL column for the object.I have no idea how often this would be useful, but I wonder if it could work to have overloaded single-parameter versions for each of regprocedure (pg_proc.proacl), regclass (pg_class.relacl), …. To call, just cast the OID to the appropriate reg* type.For example: To get the ACL for table 'example_table', call pg_get_acl ('example_table'::regclass)", "msg_date": "Wed, 19 Jun 2024 10:23:45 -0400", "msg_from": "Isaac Morland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Wed, Jun 19, 2024, at 16:23, Isaac Morland wrote:\n> I have no idea how often this would be useful, but I wonder if it could \n> work to have overloaded single-parameter versions for each of \n> regprocedure (pg_proc.proacl), regclass (pg_class.relacl), …. 
To call, \n> just cast the OID to the appropriate reg* type.\n>\n> For example: To get the ACL for table 'example_table', call pg_get_acl \n> ('example_table'::regclass)\n\n+1\n\nNew patch attached.\n\nI've added overloaded versions for regclass and regproc so far:\n\n\\df pg_get_acl\n List of functions\n Schema | Name | Result data type | Argument data types | Type\n------------+------------+------------------+------------------------+------\n pg_catalog | pg_get_acl | aclitem[] | classid oid, objid oid | func\n pg_catalog | pg_get_acl | aclitem[] | objid regclass | func\n pg_catalog | pg_get_acl | aclitem[] | objid regproc | func\n(3 rows)\n\n/Joel", "msg_date": "Thu, 20 Jun 2024 08:32:57 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Thu, Jun 20, 2024 at 08:32:57AM +0200, Joel Jacobson wrote:\n> I've added overloaded versions for regclass and regproc so far:\n> \n> \\df pg_get_acl\n> List of functions\n> Schema | Name | Result data type | Argument data types | Type\n> ------------+------------+------------------+------------------------+------\n> pg_catalog | pg_get_acl | aclitem[] | classid oid, objid oid | func\n> pg_catalog | pg_get_acl | aclitem[] | objid regclass | func\n> pg_catalog | pg_get_acl | aclitem[] | objid regproc | func\n> (3 rows)\n\nInteresting idea.\n\nI am not really convinced that the regproc and regclass overloads are\nreally necessary, considering the fact that one of the goals\nmentioned, as far as I understand, is to be able to get an idea of the\nACLs associated to an object with its dependencies in pg_depend and/or \npg_shdepend. Another one is to reduce the JOIN burden when querying\na set of them, like attribute ACLs.\n\nPerhaps the documentation should add one or two examples to show this\npoint?\n\n+ tup = get_catalog_object_by_oid(rel, Anum_oid, objectId);\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"cache lookup failed for object %u of catalog \\\"%s\\\"\",\n+ objectId, RelationGetRelationName(rel));\n\nget_catalog_object_by_oid() is handled differently here than in\nfunctions line pg_identify_object(). Shouldn't we return NULL for\nthis case? 
That would be more useful when using this function with\none or more large scans.\n--\nMichael", "msg_date": "Fri, 21 Jun 2024 12:25:06 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Jun 20, 2024 at 08:32:57AM +0200, Joel Jacobson wrote:\n>> I've added overloaded versions for regclass and regproc so far:\n>> \n>> \\df pg_get_acl\n>> List of functions\n>> Schema | Name | Result data type | Argument data types | Type\n>> ------------+------------+------------------+------------------------+------\n>> pg_catalog | pg_get_acl | aclitem[] | classid oid, objid oid | func\n>> pg_catalog | pg_get_acl | aclitem[] | objid regclass | func\n>> pg_catalog | pg_get_acl | aclitem[] | objid regproc | func\n>> (3 rows)\n\n> Interesting idea.\n\nDoesn't that result in \"cannot resolve ambiguous function call\"\nfailures?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2024 23:44:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Thu, 20 Jun 2024 at 02:33, Joel Jacobson <[email protected]> wrote:\n\n> On Wed, Jun 19, 2024, at 16:23, Isaac Morland wrote:\n> > I have no idea how often this would be useful, but I wonder if it could\n> > work to have overloaded single-parameter versions for each of\n> > regprocedure (pg_proc.proacl), regclass (pg_class.relacl), …. To call,\n> > just cast the OID to the appropriate reg* type.\n> >\n> > For example: To get the ACL for table 'example_table', call pg_get_acl\n> > ('example_table'::regclass)\n>\n> +1\n>\n> New patch attached.\n>\n> I've added overloaded versions for regclass and regproc so far:\n>\n> \\df pg_get_acl\n> List of functions\n> Schema | Name | Result data type | Argument data types | Type\n>\n> ------------+------------+------------------+------------------------+------\n> pg_catalog | pg_get_acl | aclitem[] | classid oid, objid oid | func\n> pg_catalog | pg_get_acl | aclitem[] | objid regclass | func\n> pg_catalog | pg_get_acl | aclitem[] | objid regproc | func\n> (3 rows)\n\n\nThose were just examples. I think for completeness there should be 5\noverloads:\n\n[input type] → [relation.aclattribute]\nregproc/regprocedure → pg_proc.proacl\nregtype → pg_type.typacl\nregclass → pg_class.relacl\nregnamespace → pg_namespace.nspacl\n\nI believe the remaining reg* types don't correspond to objects with ACLs,\nand the remaining ACL fields are for objects which don't have a\ncorresponding reg* type.\n\nIn general I believe the reg* types are underutilized. All over the place I\nsee examples where people write code to generate SQL statements and they\ntake schema and object name and then format with %I.%I when all that is\nneeded is a reg* value and then format it with a simple %s (of course, need\nto make sure the SQL will execute with the same search_path as when the SQL\nwas generated, or generate with an empty search_path).\n\nOn Thu, 20 Jun 2024 at 02:33, Joel Jacobson <[email protected]> wrote:On Wed, Jun 19, 2024, at 16:23, Isaac Morland wrote:\n> I have no idea how often this would be useful, but I wonder if it could \n> work to have overloaded single-parameter versions for each of \n> regprocedure (pg_proc.proacl), regclass (pg_class.relacl), …. 
To call, \n> just cast the OID to the appropriate reg* type.\n>\n> For example: To get the ACL for table 'example_table', call pg_get_acl \n> ('example_table'::regclass)\n\n+1\n\nNew patch attached.\n\nI've added overloaded versions for regclass and regproc so far:\n\n\\df pg_get_acl\n                             List of functions\n   Schema   |    Name    | Result data type |  Argument data types   | Type\n------------+------------+------------------+------------------------+------\n pg_catalog | pg_get_acl | aclitem[]        | classid oid, objid oid | func\n pg_catalog | pg_get_acl | aclitem[]        | objid regclass         | func\n pg_catalog | pg_get_acl | aclitem[]        | objid regproc          | func\n(3 rows)Those were just examples. I think for completeness there should be 5 overloads:[input type] → [relation.aclattribute]regproc/regprocedure → pg_proc.proaclregtype → pg_type.typaclregclass → pg_class.relaclregnamespace → pg_namespace.nspaclI believe the remaining reg* types don't correspond to objects with ACLs, and the remaining ACL fields are for objects which don't have a corresponding reg* type.In general I believe the reg* types are underutilized. All over the place I see examples where people write code to generate SQL statements and they take schema and object name and then format with %I.%I when all that is needed is a reg* value and then format it with a simple %s (of course, need to make sure the SQL will execute with the same search_path as when the SQL was generated, or generate with an empty search_path).", "msg_date": "Thu, 20 Jun 2024 23:48:19 -0400", "msg_from": "Isaac Morland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Thu, 20 Jun 2024 at 23:44, Tom Lane <[email protected]> wrote:\n\n> Michael Paquier <[email protected]> writes:\n> > On Thu, Jun 20, 2024 at 08:32:57AM +0200, Joel Jacobson wrote:\n> >> I've added overloaded versions for regclass and regproc so far:\n> >>\n> >> \\df pg_get_acl\n> >> List of functions\n> >> Schema | Name | Result data type | Argument data types | Type\n> >>\n> ------------+------------+------------------+------------------------+------\n> >> pg_catalog | pg_get_acl | aclitem[] | classid oid, objid oid |\n> func\n> >> pg_catalog | pg_get_acl | aclitem[] | objid regclass |\n> func\n> >> pg_catalog | pg_get_acl | aclitem[] | objid regproc |\n> func\n> >> (3 rows)\n>\n> > Interesting idea.\n>\n> Doesn't that result in \"cannot resolve ambiguous function call\"\n> failures?\n\n\nIf you try to pass an oid directly, as a value of type oid, you should get\n\"function is not unique\". But if you cast a string or numeric value to the\nappropriate reg* type for the object you are using, it should work fine.\n\nI have functions which reset object permissions on all objects in a\nspecified schema back to the default state as if they had been freshly\ncreated which rely on this. 
They work very well, and allow me to have a\nprivilege-granting script for each project which always fully resets all\nthe privileges back to a known state.\n\nOn Thu, 20 Jun 2024 at 23:44, Tom Lane <[email protected]> wrote:Michael Paquier <[email protected]> writes:\n> On Thu, Jun 20, 2024 at 08:32:57AM +0200, Joel Jacobson wrote:\n>> I've added overloaded versions for regclass and regproc so far:\n>> \n>> \\df pg_get_acl\n>> List of functions\n>> Schema   |    Name    | Result data type |  Argument data types   | Type\n>> ------------+------------+------------------+------------------------+------\n>> pg_catalog | pg_get_acl | aclitem[]        | classid oid, objid oid | func\n>> pg_catalog | pg_get_acl | aclitem[]        | objid regclass         | func\n>> pg_catalog | pg_get_acl | aclitem[]        | objid regproc          | func\n>> (3 rows)\n\n> Interesting idea.\n\nDoesn't that result in \"cannot resolve ambiguous function call\"\nfailures?If you try to pass an oid directly, as a value of type oid, you should get \"function is not unique\". But if you cast a string or numeric value to the appropriate reg* type for the object you are using, it should work fine.I have functions which reset object permissions on all objects in a specified schema back to the default state as if they had been freshly created which rely on this. They work very well, and allow me to have a privilege-granting script for each project which always fully resets all the privileges back to a known state.", "msg_date": "Thu, 20 Jun 2024 23:58:00 -0400", "msg_from": "Isaac Morland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Fri, Jun 21, 2024, at 05:25, Michael Paquier wrote:\n> Interesting idea.\n>\n> I am not really convinced that the regproc and regclass overloads are\n> really necessary, considering the fact that one of the goals\n> mentioned, as far as I understand, is to be able to get an idea of the\n> ACLs associated to an object with its dependencies in pg_depend and/or \n> pg_shdepend. Another one is to reduce the JOIN burden when querying\n> a set of them, like attribute ACLs.\n\nOverloads moved to a second patch, which can be applied\non top of the first one. I think they would be quite nice, but I could\nalso live without them.\n\n> Perhaps the documentation should add one or two examples to show this\n> point?\n\nGood point, added.\n\n>\n> + tup = get_catalog_object_by_oid(rel, Anum_oid, objectId);\n> + if (!HeapTupleIsValid(tup))\n> + elog(ERROR, \"cache lookup failed for object %u of catalog \\\"%s\\\"\",\n> + objectId, RelationGetRelationName(rel));\n>\n> get_catalog_object_by_oid() is handled differently here than in\n> functions line pg_identify_object(). Shouldn't we return NULL for\n> this case? That would be more useful when using this function with\n> one or more large scans.\n\nRight, I've changed the patch accordingly.\n\nOn Fri, Jun 21, 2024, at 05:48, Isaac Morland wrote:\n> Those were just examples. I think for completeness there should be 5 overloads:\n>\n> [input type] → [relation.aclattribute]\n> regproc/regprocedure → pg_proc.proacl\n> regtype → pg_type.typacl\n> regclass → pg_class.relacl\n> regnamespace → pg_namespace.nspacl\n>\n> I believe the remaining reg* types don't correspond to objects with \n> ACLs, and the remaining ACL fields are for objects which don't have a \n> corresponding reg* type.\n>\n> In general I believe the reg* types are underutilized. 
All over the \n> place I see examples where people write code to generate SQL statements \n> and they take schema and object name and then format with %I.%I when \n> all that is needed is a reg* value and then format it with a simple %s \n> (of course, need to make sure the SQL will execute with the same \n> search_path as when the SQL was generated, or generate with an empty \n> search_path).\n\nI've added regtype and regnamespace overloads to the second patch.\n\nOn Fri, Jun 21, 2024, at 05:58, Isaac Morland wrote:\n> On Thu, 20 Jun 2024 at 23:44, Tom Lane <[email protected]> wrote:\n>> Doesn't that result in \"cannot resolve ambiguous function call\"\n>> failures?\n>\n> If you try to pass an oid directly, as a value of type oid, you should \n> get \"function is not unique\". But if you cast a string or numeric value \n> to the appropriate reg* type for the object you are using, it should \n> work fine.\n\nYes, I can confirm that's the case, it works fine when casting a string\nto reg* type.\n\n/Joel", "msg_date": "Sat, 22 Jun 2024 02:54:30 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Sat, Jun 22, 2024, at 02:54, Joel Jacobson wrote:\n> Attachments:\n> * v4-0001-Add-pg_get_acl.patch\n> * 0002-Add-pg_get_acl-overloads.patch\n\nRebase and reduced diff for src/test/regress/sql/privileges.sql between patches.\n\n/Joel", "msg_date": "Sat, 22 Jun 2024 11:44:02 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Sat, Jun 22, 2024, at 11:44, Joel Jacobson wrote:\n> * v5-0001-Add-pg_get_acl.patch\n> * v2-0002-Add-pg_get_acl-overloads.patch\n\nRename files to ensure cfbot applies them in order; both need to have same version prefix.", "msg_date": "Sun, 23 Jun 2024 08:48:46 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Sun, Jun 23, 2024 at 08:48:46AM +0200, Joel Jacobson wrote:\n> On Sat, Jun 22, 2024, at 11:44, Joel Jacobson wrote:\n>> * v5-0001-Add-pg_get_acl.patch\n>> * v2-0002-Add-pg_get_acl-overloads.patch\n> \n> Rename files to ensure cfbot applies them in order; both need to\n> have same version prefix. \n\n+ <para>\n+ Returns the Access Control List (ACL) for a database object,\n+ specified by catalog OID and object OID.\n\nRather unrelated to this patch, still this patch makes the situation\nmore complicated in the docs, but wouldn't it be better to add ACL as\na term in acronyms.sql, and reuse it here? It would be a doc-only\npatch that applies on top of the rest (could be on a new thread of its\nown), with some <acronym> markups added where needed.\n\n+SELECT\n+ (pg_identify_object(s.classid,s.objid,s.objsubid)).*,\n+ pg_catalog.pg_get_acl(s.classid,s.objid)\n+FROM pg_catalog.pg_shdepend AS s\n+JOIN pg_catalog.pg_database AS d ON d.datname = current_database() AND d.oid = s.dbid\n+JOIN pg_catalog.pg_authid AS a ON a.oid = s.refobjid AND s.refclassid = 'pg_authid'::regclass\n+WHERE s.deptype = 'a';\n\nCould be a bit prettier. That's a good addition.\n\n+\tcatalogId = (classId == LargeObjectRelationId) ? LargeObjectMetadataRelationId : classId;\n\nIndeed, and we need to live with this tweak as per the reason in\ninv_api.c related to clients, so that's fine. 
Still a comment is\nadapted for this particular case?\n\n+SELECT pg_get_acl('pg_class'::regclass, 'atest2'::regclass::oid);\n+ pg_get_acl \n+------------\n+ \n+(1 row)\n\nHow about adding a bit more coverage? I'd suggest the following\nadditions:\n- class ID as 0 in input.\n- object ID as 0 in input.\n- Both class and object ID as 0 in input.\n\n+SELECT pg_get_acl('pg_class'::regclass, 'atest2'::regclass::oid);\n+ pg_get_acl \n+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n+ {regress_priv_user1=arwdDxtm/regress_priv_user1,regress_priv_user2=r/regress_priv_user1,regress_priv_user3=w/regress_priv_user1,regress_priv_user4=a/regress_priv_user1,regress_priv_user5=D/regress_priv_user1}\n+(1 row)\n\nThis is hard to parse. I would add an unnest() and order the entries\nso as modifications are easier to catch, with a more predictible\nresult.\n\nFWIW, I'm still a bit meh with the addition of the functions\noverloading the arguments with reg inputs. I'm OK with that when we\nknow that the input would be of a given object type, like\npg_partition_ancestors or pg_partition_tree, but for a case as generic\nas this one this is less appealing to me.\n--\nMichael", "msg_date": "Mon, 24 Jun 2024 08:46:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Mon, Jun 24, 2024, at 01:46, Michael Paquier wrote:\n> Rather unrelated to this patch, still this patch makes the situation\n> more complicated in the docs, but wouldn't it be better to add ACL as\n> a term in acronyms.sql, and reuse it here? It would be a doc-only\n> patch that applies on top of the rest (could be on a new thread of its\n> own), with some <acronym> markups added where needed.\n\nGood idea, I've started a separate thread for this:\n\nhttps://postgr.es/m/9253b872-dbb1-42a6-a79e-b1e96effc857%40app.fastmail.com\n\nThis patch now assumes <acronym>ACL</acronym> will be supported.\n\n> +SELECT\n> + (pg_identify_object(s.classid,s.objid,s.objsubid)).*,\n> + pg_catalog.pg_get_acl(s.classid,s.objid)\n> +FROM pg_catalog.pg_shdepend AS s\n> +JOIN pg_catalog.pg_database AS d ON d.datname = current_database() AND \n> d.oid = s.dbid\n> +JOIN pg_catalog.pg_authid AS a ON a.oid = s.refobjid AND s.refclassid \n> = 'pg_authid'::regclass\n> +WHERE s.deptype = 'a';\n>\n> Could be a bit prettier. That's a good addition.\n\nHow could we make it prettier?\n\n> +\tcatalogId = (classId == LargeObjectRelationId) ? \n> LargeObjectMetadataRelationId : classId;\n>\n> Indeed, and we need to live with this tweak as per the reason in\n> inv_api.c related to clients, so that's fine. Still a comment is\n> adapted for this particular case?\n\nThanks, fixed.\n\n> How about adding a bit more coverage? I'd suggest the following\n> additions:\n\nThanks, good idea. 
I've added the tests,\nbut need some help reasoning if the output is expected:\n\n> - class ID as 0 in input.\n\nSELECT pg_get_acl(0, 'atest2'::regclass::oid);\nERROR: unrecognized class ID: 0\n\nI believe we want an error here, since: an invalid class ID,\nlike 0, or any other invalid OID, should raise an error,\nsince classes can't be dropped, so we should never\nexpect an invalid OID for a class ID.\nPlease correct me if this reasoning is incorrect.\n\n> - object ID as 0 in input.\nSELECT pg_get_acl('pg_class'::regclass, 0);\n\nThis returns null, which I believe it should,\nsince the OID for a database object could\nbe invalid due to having being dropped concurrently.\n\n> - Both class and object ID as 0 in input.\n\nThis returns null, but I'm not sure I think this is correct?\nSince if the class ID is zero, i.e. incorrect, that is unexpected,\nand wouldn't we want to throw an error in that case,\njust like if only the class ID is invalid?\n\n> +SELECT pg_get_acl('pg_class'::regclass, 'atest2'::regclass::oid);\n> + \n> pg_get_acl \n> \n> +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> + \n> {regress_priv_user1=arwdDxtm/regress_priv_user1,regress_priv_user2=r/regress_priv_user1,regress_priv_user3=w/regress_priv_user1,regress_priv_user4=a/regress_priv_user1,regress_priv_user5=D/regress_priv_user1}\n> +(1 row)\n>\n> This is hard to parse. I would add an unnest() and order the entries\n> so as modifications are easier to catch, with a more predictible\n> result.\n\nThanks, much better, fixed.\n\n> FWIW, I'm still a bit meh with the addition of the functions\n> overloading the arguments with reg inputs. I'm OK with that when we\n> know that the input would be of a given object type, like\n> pg_partition_ancestors or pg_partition_tree, but for a case as generic\n> as this one this is less appealing to me.\n\nI've looked at other occurrences of \"<type>reg\" in func.sgml,\nand I now agree with you we should skip the overloads,\nsince adding them would seem unconventional.\n\n/Joel", "msg_date": "Tue, 25 Jun 2024 01:21:14 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Tue, Jun 25, 2024 at 01:21:14AM +0200, Joel Jacobson wrote:\n> Good idea, I've started a separate thread for this:\n> \n> https://postgr.es/m/9253b872-dbb1-42a6-a79e-b1e96effc857%40app.fastmail.com\n> \n> This patch now assumes <acronym>ACL</acronym> will be supported.\n\nThanks for doing that! That helps in making reviews easier to follow\nfor all, attracting the correct audience when necessary.\n\n>> +SELECT\n>> + (pg_identify_object(s.classid,s.objid,s.objsubid)).*,\n>> + pg_catalog.pg_get_acl(s.classid,s.objid)\n>> +FROM pg_catalog.pg_shdepend AS s\n>> +JOIN pg_catalog.pg_database AS d ON d.datname = current_database() AND \n>> d.oid = s.dbid\n>> +JOIN pg_catalog.pg_authid AS a ON a.oid = s.refobjid AND s.refclassid \n>> = 'pg_authid'::regclass\n>> +WHERE s.deptype = 'a';\n>>\n>> Could be a bit prettier. That's a good addition.\n> \n> How could we make it prettier?\n\nPerhaps split the two JOIN conditions into two lines each, with a bit\nmore indentation to make it render better? Usually I handle that on a\ncase-by-case basis while preparing a patch for commit. I'm OK to edit\nthat myself with some final touches, FWIW. 
Depending on the input\nthis shows, I'd also look at some LATERAL business, that can be\ncleaner in some cases for the docs.\n\n>> How about adding a bit more coverage? I'd suggest the following\n>> additions:\n> \n> Thanks, good idea. I've added the tests,\n> but need some help reasoning if the output is expected:\n\nTotal coverage sounds good here.\n\n>> - class ID as 0 in input.\n> \n> SELECT pg_get_acl(0, 'atest2'::regclass::oid);\n> ERROR: unrecognized class ID: 0\n> \n> I believe we want an error here, since: an invalid class ID,\n> like 0, or any other invalid OID, should raise an error,\n> since classes can't be dropped, so we should never\n> expect an invalid OID for a class ID.\n> Please correct me if this reasoning is incorrect.\n\nThis is an internal error, so it should never be visible to the end\nuser via SQL because it is an unexpected state. See for example\n2a10fdc4307a, which is similar to what you are doing here.\n\n>> - object ID as 0 in input.\n> SELECT pg_get_acl('pg_class'::regclass, 0);\n> \n> This returns null, which I believe it should,\n> since the OID for a database object could\n> be invalid due to having being dropped concurrently.\n\nThat's right. It would be sad for monitoring queries doing large\nscans of pg_depend or pg_shdepend to fail in obstructive ways because\nof concurrent object drops, because we'd lose information about all\nthe other objects because of at least one object gone at the moment\nwhere pg_get_acl() is called for its OID retrieved previously.\n\n>> - Both class and object ID as 0 in input.\n> \n> This returns null, but I'm not sure I think this is correct?\n> Since if the class ID is zero, i.e. incorrect, that is unexpected,\n> and wouldn't we want to throw an error in that case,\n> just like if only the class ID is invalid?\n\nNULL is the correct answer for all that, IMO.\n\n>> FWIW, I'm still a bit meh with the addition of the functions\n>> overloading the arguments with reg inputs. I'm OK with that when we\n>> know that the input would be of a given object type, like\n>> pg_partition_ancestors or pg_partition_tree, but for a case as generic\n>> as this one this is less appealing to me.\n> \n> I've looked at other occurrences of \"<type>reg\" in func.sgml,\n> and I now agree with you we should skip the overloads,\n> since adding them would seem unconventional.\n\nOkay. If another committer is interested in that, I'd be OK if there\nis a consensus on this point. The fact that I'm not convinced does\nnot mean that it would show enough value for somebody else.\n--\nMichael", "msg_date": "Tue, 25 Jun 2024 10:57:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Tue, Jun 25, 2024, at 03:57, Michael Paquier wrote:\n> On Tue, Jun 25, 2024 at 01:21:14AM +0200, Joel Jacobson wrote:\n>> Good idea, I've started a separate thread for this:\n>> \n>> https://postgr.es/m/9253b872-dbb1-42a6-a79e-b1e96effc857%40app.fastmail.com\n>> \n>> This patch now assumes <acronym>ACL</acronym> will be supported.\n>\n> Thanks for doing that! 
That helps in making reviews easier to follow\n> for all, attracting the correct audience when necessary.\n>\n>>> +SELECT\n>>> + (pg_identify_object(s.classid,s.objid,s.objsubid)).*,\n>>> + pg_catalog.pg_get_acl(s.classid,s.objid)\n>>> +FROM pg_catalog.pg_shdepend AS s\n>>> +JOIN pg_catalog.pg_database AS d ON d.datname = current_database() AND \n>>> d.oid = s.dbid\n>>> +JOIN pg_catalog.pg_authid AS a ON a.oid = s.refobjid AND s.refclassid \n>>> = 'pg_authid'::regclass\n>>> +WHERE s.deptype = 'a';\n>>>\n>>> Could be a bit prettier. That's a good addition.\n>> \n>> How could we make it prettier?\n>\n> Perhaps split the two JOIN conditions into two lines each, with a bit\n> more indentation to make it render better? Usually I handle that on a\n> case-by-case basis while preparing a patch for commit. I'm OK to edit\n> that myself with some final touches, FWIW. Depending on the input\n> this shows, I'd also look at some LATERAL business, that can be\n> cleaner in some cases for the docs.\n\nThanks, some indentation certainly helped.\nNot sure where LATERAL would help, so leaving that part to you.\n\n>> SELECT pg_get_acl(0, 'atest2'::regclass::oid);\n>> ERROR: unrecognized class ID: 0\n>> \n>> I believe we want an error here, since: an invalid class ID,\n>> like 0, or any other invalid OID, should raise an error,\n>> since classes can't be dropped, so we should never\n>> expect an invalid OID for a class ID.\n>> Please correct me if this reasoning is incorrect.\n>\n> This is an internal error, so it should never be visible to the end\n> user via SQL because it is an unexpected state. See for example\n> 2a10fdc4307a, which is similar to what you are doing here.\n\nThanks for pointing me to that commit, good to learn about missing_ok.\n\nNot sure if I see how to implement it for pg_get_acl() though.\n\nI've had a look at how pg_describe_object() works for this case:\n\nSELECT pg_describe_object(0,'t'::regclass::oid,0);\nERROR: unsupported object class: 0\n\nI suppose this is the error message we want in pg_get_acl() when\nthe class ID is invalid?\n\nIf no, the rest of this email can be skipped.\n\nIf yes, then I suppose we should try to see if there is any existing code\nin objectaddress.c that we could reuse, that can throw this error message\nfor us, for an invalid class OID.\n\nThere are three places in objectaddress.c currently capable of\nthrowing a \"unsupported object class\" error message:\n\nchar *\ngetObjectDescription(const ObjectAddress *object, bool missing_ok)\n\nchar *\ngetObjectTypeDescription(const ObjectAddress *object, bool missing_ok)\n\nchar *\ngetObjectIdentityParts(const ObjectAddress *object,\n\t\t\t\t\t List **objname, List **objargs,\n\t\t\t\t\t bool missing_ok)\n\nAll three of them contain a `switch (object->classId)` statement,\nwhere the default branch contains the code that throws the error:\n\n\t\tdefault:\n\t\t\telog(ERROR, \"unsupported object class: %u\", object->classId);\n\nIt would be nice to avoid having to copy the long switch statement, with noops for each branch,\nexcept the default branch, to throw an error in case of an invalid class OID,\nbut I don't see how we can use any of these three functions in pg_get_acl(), since they\ndo more things than just checking if the class OID is valid?\n\nSo not sure what to do here.\n\nMaybe we want a separate new bool helper function to check if a class OID is valid or not?\n\nThat helper function would not be useful for the existing three cases where this error is thrown\nin objectaddress.c, since they need actual 
switch branches for each class ID, whereas in pg_get_acl()\nwe just need to check if it's valid or not.\n\nI haven't checked, but maybe there are other places in the sources where we just want to check\nif a class OID is valid or not, that could benefit from such a helper function.\n\nOr perhaps one exist already?\n\n/Joel\n\n\n", "msg_date": "Tue, 25 Jun 2024 08:06:41 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Tue, Jun 25, 2024 at 08:06:41AM +0200, Joel Jacobson wrote:\n> Not sure if I see how to implement it for pg_get_acl() though.\n> \n> I've had a look at how pg_describe_object() works for this case:\n> \n> SELECT pg_describe_object(0,'t'::regclass::oid,0);\n> ERROR: unsupported object class: 0\n> \n> I suppose this is the error message we want in pg_get_acl() when\n> the class ID is invalid?\n\nAh, and here I thought that this was also returning NULL. My previous\nwork in this area only focused on the object OIDs, not their classes.\nAt the end, I'm OK to keep your patch as it is, checking only for the\ncase of pinned dependencies in pg_depend as we do for\npg_describe_object().\n\nIt's still a bit confusing, but we've been living with that for years\nnow without anybody complaining except me, so perhaps that's fine at\nthe end to keep that as this is still useful. If we change that,\napplying the same rules across the board would make the most sense.\n--\nMichael", "msg_date": "Tue, 25 Jun 2024 15:42:27 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Tue, Jun 25, 2024, at 08:42, Michael Paquier wrote:\n> On Tue, Jun 25, 2024 at 08:06:41AM +0200, Joel Jacobson wrote:\n>> Not sure if I see how to implement it for pg_get_acl() though.\n>> \n>> I've had a look at how pg_describe_object() works for this case:\n>> \n>> SELECT pg_describe_object(0,'t'::regclass::oid,0);\n>> ERROR: unsupported object class: 0\n>> \n>> I suppose this is the error message we want in pg_get_acl() when\n>> the class ID is invalid?\n>\n> Ah, and here I thought that this was also returning NULL. My previous\n> work in this area only focused on the object OIDs, not their classes.\n> At the end, I'm OK to keep your patch as it is, checking only for the\n> case of pinned dependencies in pg_depend as we do for\n> pg_describe_object().\n>\n> It's still a bit confusing, but we've been living with that for years\n> now without anybody complaining except me, so perhaps that's fine at\n> the end to keep that as this is still useful. 
If we change that,\n> applying the same rules across the board would make the most sense.\n\nOK, cool.\n\nNew version attached that fixes the indentation of the example,\nand uses <literal>NULL</literal> instead of just NULL in the doc.\n\n/Joel", "msg_date": "Tue, 25 Jun 2024 09:13:16 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Tue, Jun 25, 2024, at 09:13, Joel Jacobson wrote:\n> Attachments:\n> * v8-0001-Add-pg_get_acl.patch\n\nRebased version.\nUses ACL acronym added in commit 00d819d46a6f5b7e9d2e02948a1c80d11c4ce260:\n doc: Add ACL acronym for \"Access Control List\"\n\n/Joel", "msg_date": "Tue, 02 Jul 2024 12:38:07 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" }, { "msg_contents": "On Tue, Jul 02, 2024 at 12:38:07PM +0200, Joel Jacobson wrote:\n> Rebased version.\n> Uses ACL acronym added in commit 00d819d46a6f5b7e9d2e02948a1c80d11c4ce260:\n> doc: Add ACL acronym for \"Access Control List\"\n\nForgot to push the send button for this one yesterday, done now..\n\nWhile looking at that, I've finished by applying what you have here as\nit is good enough to retrieve any ACLs for all catalogs that don't use\na subobjid (aka everything except pg_attribute's ACL, for which\ndependencies are stored with pg_class in pg_shdepend so we'd need a\nshortcut in pg_get_acl() or more data in ObjectProperty but I'm not\nmuch a fan of tracking in that the dependency between pg_attribute and\npg_class coming from pg_shdepend), with two tweaks:\n- Slightly reshaped the code to avoid more blocks, even if it means\none more PG_RETURN_NULL().\n- Moved the example outside the main function table as it was rather\ncomplex, with some output provided that should fit in the width of\nthe PDF docs.\n--\nMichael", "msg_date": "Fri, 5 Jul 2024 10:13:06 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pg_get_acl() function get the ACL for a database object" } ]
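
A minimal usage sketch, not taken from the thread above: Alvaro's original suggestion was to pair pg_get_acl() with aclexplode() to break the returned ACL down into individual privileges. Assuming the two-argument pg_get_acl(classid oid, objid oid) form from the patch discussed in the thread, and reusing the test objects created in the first message (test_schema.test_table, test_owner, test_user), a query along these lines should list each grantee and privilege separately:

```sql
-- Sketch only: relies on the pg_get_acl(classid, objid) signature from the
-- thread's patch and on the standard aclexplode() catalog function.
SELECT acl.grantee::regrole AS grantee,
       acl.privilege_type,
       acl.is_grantable
FROM aclexplode(pg_get_acl('pg_class'::regclass,
                           'test_schema.test_table'::regclass::oid)) AS acl;
```

With the grants from the thread's opening example, this should return one row per individual privilege held by test_owner and a single SELECT row for test_user, which is easier to read than the raw aclitem[] array.
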
[ { "msg_contents": "Hi,\n\nI tried running master under valgrind on 64-bit ARM (rpi5 running debian\n12.5), and I got some suspicous reports, all related to the radixtree\ncode used by tidstore. I'm used to valgrind on arm sometimes reporting\nharmless issues, but this seems like it might be an actual issue.\n\nI'm attaching a snippet with a couple example reports. I can provide the\ncomplete report, but AFAIK it's all just repetitions of these cases. If\nneeded, I can probably provide access to the rpi5 machine.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 19 Jun 2024 16:34:52 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> I tried running master under valgrind on 64-bit ARM (rpi5 running debian\n> 12.5), and I got some suspicous reports, all related to the radixtree\n> code used by tidstore.\n\nWhat's the test scenario that triggers this?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jun 2024 11:11:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "\n\nOn 6/19/24 17:11, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> I tried running master under valgrind on 64-bit ARM (rpi5 running debian\n>> 12.5), and I got some suspicous reports, all related to the radixtree\n>> code used by tidstore.\n> \n> What's the test scenario that triggers this?\n> \n\nI haven't investigated that yet, I just ran \"make check\".\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 19 Jun 2024 17:48:47 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 6/19/24 17:11, Tom Lane wrote:\n>> What's the test scenario that triggers this?\n\n> I haven't investigated that yet, I just ran \"make check\".\n\nI've reproduced what looks like about the same thing on\nmy Pi 4 using Fedora 38: just run \"make installcheck-parallel\"\nunder valgrind, and kaboom. Definitely needs investigation.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 19 Jun 2024 17:11:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "I wrote:\n> I've reproduced what looks like about the same thing on\n> my Pi 4 using Fedora 38: just run \"make installcheck-parallel\"\n> under valgrind, and kaboom. Definitely needs investigation.\n\nThe problem appears to be that RT_ALLOC_NODE doesn't bother to\ninitialize the chunks[] array when making a RT_NODE_16 node.\nIf we fill fewer than RT_FANOUT_16_MAX of the chunks[] entries,\nthen when RT_NODE_16_SEARCH_EQ applies vector operations that\nread the entire array, it's operating partially on uninitialized\ndata. Now, that's actually OK because of the \"mask off invalid\nentries\" step, but aarch64 valgrind complains anyway.\n\nI hypothesize that the reason we're not seeing equivalent failures\non x86_64 is one of\n\n1. 
x86_64 valgrind is stupider than aarch64, and fails to track that\nthe contents of the SIMD registers are only partially defined.\n\n2. x86_64 valgrind is smarter than aarch64, and is able to see\nthat the \"mask off invalid entries\" step removes all the\npotentially-uninitialized bits.\n\nThe first attached patch, \"radixtree-fix-minimal.patch\", is enough\nto stop the aarch64 valgrind failure for me. However, I think\nthat the coding here is pretty penny-wise and pound-foolish,\nand that what we really ought to do is the second patch,\n\"radixtree-fix-proposed.patch\". I do not believe that asking\nmemset to zero the three-byte RT_NODE structure produces code\nthat's either shorter or faster than having it zero 8 bytes\n(as for RT_NODE_4) or having it do that and then separately\nzero some more stuff (as for the larger node types). Leaving\nRT_NODE_4's chunks[] array uninitialized is going to bite us\nsomeday, too, even if it doesn't right now. So I think we\nought to just zero the whole fixed-size part of the nodes,\nwhich is what radixtree-fix-proposed.patch does.\n\n(The RT_NODE_48 case could be optimized a bit if we cared\nto swap the order of its slot_idxs[] and isset[] arrays;\nthen the initial zeroing could just go up to slot_idxs[].\nI don't know if there's any reason why the current order\nis preferable.)\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 19 Jun 2024 18:54:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "I wrote:\n> I hypothesize that the reason we're not seeing equivalent failures\n> on x86_64 is one of\n\n> 1. x86_64 valgrind is stupider than aarch64, and fails to track that\n> the contents of the SIMD registers are only partially defined.\n\n> 2. x86_64 valgrind is smarter than aarch64, and is able to see\n> that the \"mask off invalid entries\" step removes all the\n> potentially-uninitialized bits.\n\nSide note: it struck me that this could also be a valgrind version\nskew issue. But the machine I'm seeing the failure on is running\nvalgrind-3.22.0-1.fc38.aarch64, which is the same upstream version\nas valgrind-3.22.0-2.el8.x86_64, where I don't see it. So apparently\nnot. (There is a 3.23.0 out recently, but its release notes don't\nmention anything that seems related.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jun 2024 19:33:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "I wrote:\n> I hypothesize that the reason we're not seeing equivalent failures\n> on x86_64 is one of\n\n> 1. x86_64 valgrind is stupider than aarch64, and fails to track that\n> the contents of the SIMD registers are only partially defined.\n\n> 2. x86_64 valgrind is smarter than aarch64, and is able to see\n> that the \"mask off invalid entries\" step removes all the\n> potentially-uninitialized bits.\n\nHah: it's the second case. If I patch radixtree.h as attached,\nthen x86_64 valgrind complains about \n\n==00:00:00:32.759 247596== Conditional jump or move depends on uninitialised value(s)\n==00:00:00:32.759 247596== at 0x52F668: local_ts_node_16_search_eq (radixtree.h:1018)\n\nshowing that it knows that the result of vector8_highbit_mask is\nonly partly defined. Kind of odd though that aarch64 valgrind\nis getting the hard part right and not the easy(?) 
part.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 19 Jun 2024 19:51:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 20, 2024 at 7:54 AM Tom Lane <[email protected]> wrote:\n>\n> I wrote:\n> > I've reproduced what looks like about the same thing on\n> > my Pi 4 using Fedora 38: just run \"make installcheck-parallel\"\n> > under valgrind, and kaboom. Definitely needs investigation.\n>\n> The problem appears to be that RT_ALLOC_NODE doesn't bother to\n> initialize the chunks[] array when making a RT_NODE_16 node.\n> If we fill fewer than RT_FANOUT_16_MAX of the chunks[] entries,\n> then when RT_NODE_16_SEARCH_EQ applies vector operations that\n> read the entire array, it's operating partially on uninitialized\n> data. Now, that's actually OK because of the \"mask off invalid\n> entries\" step, but aarch64 valgrind complains anyway.\n>\n> I hypothesize that the reason we're not seeing equivalent failures\n> on x86_64 is one of\n>\n> 1. x86_64 valgrind is stupider than aarch64, and fails to track that\n> the contents of the SIMD registers are only partially defined.\n>\n> 2. x86_64 valgrind is smarter than aarch64, and is able to see\n> that the \"mask off invalid entries\" step removes all the\n> potentially-uninitialized bits.\n>\n> The first attached patch, \"radixtree-fix-minimal.patch\", is enough\n> to stop the aarch64 valgrind failure for me. However, I think\n> that the coding here is pretty penny-wise and pound-foolish,\n> and that what we really ought to do is the second patch,\n> \"radixtree-fix-proposed.patch\". I do not believe that asking\n> memset to zero the three-byte RT_NODE structure produces code\n> that's either shorter or faster than having it zero 8 bytes\n> (as for RT_NODE_4) or having it do that and then separately\n> zero some more stuff (as for the larger node types). Leaving\n> RT_NODE_4's chunks[] array uninitialized is going to bite us\n> someday, too, even if it doesn't right now. So I think we\n> ought to just zero the whole fixed-size part of the nodes,\n> which is what radixtree-fix-proposed.patch does.\n\nI agree with radixtree-fix-proposed.patch. Even if we zero more fields\nin the node it would not add noticeable overheads.\n\n>\n> (The RT_NODE_48 case could be optimized a bit if we cared\n> to swap the order of its slot_idxs[] and isset[] arrays;\n> then the initial zeroing could just go up to slot_idxs[].\n> I don't know if there's any reason why the current order\n> is preferable.)\n\nIIUC there is no particular reason for the current order in RT_NODE_48.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 20 Jun 2024 10:11:37 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "On Thu, Jun 20, 2024 at 10:11:37AM +0900, Masahiko Sawada wrote:\n> I agree with radixtree-fix-proposed.patch. 
Even if we zero more fields\n> in the node it would not add noticeable overheads.\n\nThis needs to be tracked as an open item, so I have added one now.\n--\nMichael", "msg_date": "Thu, 20 Jun 2024 13:03:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "On Thu, Jun 20, 2024 at 8:12 AM Masahiko Sawada <[email protected]> wrote:\n\n> On Thu, Jun 20, 2024 at 7:54 AM Tom Lane <[email protected]> wrote:\n> >\n\n> I agree with radixtree-fix-proposed.patch. Even if we zero more fields\n> in the node it would not add noticeable overheads.\n\n+1 in general, although I'm slightly concerned about this part:\n\n> > (The RT_NODE_48 case could be optimized a bit if we cared\n> > to swap the order of its slot_idxs[] and isset[] arrays;\n> > then the initial zeroing could just go up to slot_idxs[].\n\n- memset(n48->isset, 0, sizeof(n48->isset));\n+ memset(n48, 0, offsetof(RT_NODE_48, children));\n memset(n48->slot_idxs, RT_INVALID_SLOT_IDX, sizeof(n48->slot_idxs));\n\nI was a bit surprised that neither gcc 14 nor clang 18 can figure out\nthat they can skip zeroing the slot index array since it's later\nfilled in with \"invalid index\", so they actually zero out 272 bytes\nbefore re-initializing 256 of those bytes. It may not matter in\npractice, but it's also not free, and trivial to avoid.\n\n> > I don't know if there's any reason why the current order\n> > is preferable.)\n>\n> IIUC there is no particular reason for the current order in RT_NODE_48.\n\nYeah. I found that simply swapping them enables clang to avoid\ndouble-initialization, but gcc still can't figure it out and must be\ntold to stop at slot_idxs[]. I'd prefer to do it that way and document\nthat slot_idxs is purposefully the last member of the fixed part of\nthe struct. If that's agreeable I'll commit it that way tomorrow\nunless someone beats me to it.\n\n\n", "msg_date": "Thu, 20 Jun 2024 14:58:28 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "John Naylor <[email protected]> writes:\n> On Thu, Jun 20, 2024 at 8:12 AM Masahiko Sawada <[email protected]> wrote:\n>> IIUC there is no particular reason for the current order in RT_NODE_48.\n\n> Yeah. I found that simply swapping them enables clang to avoid\n> double-initialization, but gcc still can't figure it out and must be\n> told to stop at slot_idxs[]. I'd prefer to do it that way and document\n> that slot_idxs is purposefully the last member of the fixed part of\n> the struct.\n\nWFM.\n\n> If that's agreeable I'll commit it that way tomorrow\n> unless someone beats me to it.\n\nI was going to push it, but feel free.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2024 04:01:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "Em qua., 19 de jun. de 2024 às 20:52, Tom Lane <[email protected]> escreveu:\n\n> I wrote:\n> > I hypothesize that the reason we're not seeing equivalent failures\n> > on x86_64 is one of\n>\n> > 1. x86_64 valgrind is stupider than aarch64, and fails to track that\n> > the contents of the SIMD registers are only partially defined.\n>\n> > 2. 
x86_64 valgrind is smarter than aarch64, and is able to see\n> > that the \"mask off invalid entries\" step removes all the\n> > potentially-uninitialized bits.\n>\n> Hah: it's the second case. If I patch radixtree.h as attached,\n> then x86_64 valgrind complains about\n>\n> ==00:00:00:32.759 247596== Conditional jump or move depends on\n> uninitialised value(s)\n> ==00:00:00:32.759 247596== at 0x52F668: local_ts_node_16_search_eq\n> (radixtree.h:1018)\n>\n> showing that it knows that the result of vector8_highbit_mask is\n> only partly defined.\n\nI wouldn't be surprised if *RT_NODE_16_GET_INSERTPOS*\n(src/include/lib/radixtree.h),\ndoes not suffer from the same problem?\nEven with Assert trying to protect.\n\nDoes the fix not apply here too?\n\nbest regards,\nRanier Vilela", "msg_date": "Thu, 20 Jun 2024 08:50:26 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "On Thu, Jun 20, 2024 at 4:58 PM John Naylor <[email protected]> wrote:\n>\n> On Thu, Jun 20, 2024 at 8:12 AM Masahiko Sawada <[email protected]> wrote:\n>\n> > On Thu, Jun 20, 2024 at 7:54 AM Tom Lane <[email protected]> wrote:\n> > >\n> > > I don't know if there's any reason why the current order\n> > > is preferable.)\n> >\n> > IIUC there is no particular reason for the current order in RT_NODE_48.\n>\n> Yeah. I found that simply swapping them enables clang to avoid\n> double-initialization, but gcc still can't figure it out and must be\n> told to stop at slot_idxs[]. I'd prefer to do it that way and document\n> that slot_idxs is purposefully the last member of the fixed part of\n> the struct.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 20 Jun 2024 21:30:44 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "Ranier Vilela <[email protected]> writes:\n> Em qua., 19 de jun. de 2024 às 20:52, Tom Lane <[email protected]> escreveu:\n>> Hah: it's the second case. 
If I patch radixtree.h as attached,\n>> then x86_64 valgrind complains about\n>> ==00:00:00:32.759 247596== Conditional jump or move depends on\n>> uninitialised value(s)\n>> ==00:00:00:32.759 247596== at 0x52F668: local_ts_node_16_search_eq\n>> (radixtree.h:1018)\n>> showing that it knows that the result of vector8_highbit_mask is\n>> only partly defined.\n\n> I wouldn't be surprised if *RT_NODE_16_GET_INSERTPOS*\n> (src/include/lib/radixtree.h),\n> does not suffer from the same problem?\n\nDunno, I only saw valgrind complaints in local_ts_node_16_search_eq,\nand Tomas reported the same.\n\nIt seems moderately likely to me that this is a bug in aarch64\nvalgrind. Still, if it is that then people will have to deal with it\nfor awhile yet. It won't cost us anything meaningful to work around\nit (he says, without having done actual measurements...)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2024 11:33:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "On Thu, Jun 20, 2024 at 2:58 PM John Naylor <[email protected]> wrote:\n> the struct. If that's agreeable I'll commit it that way tomorrow\n> unless someone beats me to it.\n\nPushed. I'll clear the open item once all buildfarm members have reported in.\n\n\n", "msg_date": "Fri, 21 Jun 2024 18:06:06 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" }, { "msg_contents": "John Naylor <[email protected]> writes:\n> Pushed. I'll clear the open item once all buildfarm members have reported in.\n\nJust to confirm, my raspberry pi 4 got through \"make\ninstallcheck-parallel\" under valgrind after this commit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2024 13:33:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: suspicious valgrind reports about radixtree/tidstore on arm64" } ]
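
The exchange above hinges on RT_NODE_16_SEARCH_EQ comparing the entire fixed-size chunks[] array and only afterwards masking off the slots at or beyond count. The standalone C sketch below is a simplified scalar analog of that pattern, not the actual code in src/include/lib/radixtree.h; the names Node16, FANOUT_16 and node16_search_eq are invented for illustration. It shows why the lookup is only safe if either the mask step hides the unused slots or the whole fixed-size part of the node is zeroed at allocation time, which is the approach the thread settles on.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FANOUT_16 16

typedef struct Node16
{
	uint8_t		count;				/* number of valid entries */
	uint8_t		chunks[FANOUT_16];	/* only chunks[0..count-1] are valid */
} Node16;

/*
 * Scalar stand-in for the SIMD search: compare every slot, then mask off
 * slots >= count.  Without the memset in main(), the comparisons read
 * uninitialized bytes (harmlessly, thanks to the mask), which is the access
 * pattern aarch64 valgrind complained about.
 */
static int
node16_search_eq(const Node16 *node, uint8_t chunk)
{
	uint32_t	cmp_mask = 0;
	uint32_t	valid_mask = (1u << node->count) - 1;

	for (int i = 0; i < FANOUT_16; i++)
		cmp_mask |= (uint32_t) (node->chunks[i] == chunk) << i;

	cmp_mask &= valid_mask;		/* drop any hits in unused slots */
	return cmp_mask ? __builtin_ctz(cmp_mask) : -1;
}

int
main(void)
{
	Node16		node;

	/* zero the whole fixed-size node, as the committed fix does */
	memset(&node, 0, sizeof(node));
	node.chunks[0] = 7;
	node.chunks[1] = 42;
	node.count = 2;

	printf("42 -> slot %d\n", node16_search_eq(&node, 42));
	printf("99 -> slot %d\n", node16_search_eq(&node, 99));
	return 0;
}
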
[ { "msg_contents": "When doing performance hacking on pg_upgrade it's often important to see\nindividual runtimes to isolate changes. I've written versions of the attached\npatch numerous times, and I wouldn't be surprised if others have done the same.\nIs there any interest in adding something like the attached to pg_upgrade? The\npatch needs some cleaning and tidying up but I wanted to to gauge interest\nbefore investing time. I've added it to verbose mode mainly since it's not\nreally all that informative for regular users I think.\n\n--\nDaniel Gustafsson", "msg_date": "Wed, 19 Jun 2024 16:50:59 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Report runtimes in pg_upgrade verbose mode" }, { "msg_contents": "On Wed, Jun 19, 2024 at 04:50:59PM +0200, Daniel Gustafsson wrote:\n> When doing performance hacking on pg_upgrade it's often important to see\n> individual runtimes to isolate changes. I've written versions of the attached\n> patch numerous times, and I wouldn't be surprised if others have done the same.\n\nIndeed: https://postgr.es/m/flat/20230727235134.GA3658499%40nathanxps13\n\n> Is there any interest in adding something like the attached to pg_upgrade? The\n> patch needs some cleaning and tidying up but I wanted to to gauge interest\n> before investing time. I've added it to verbose mode mainly since it's not\n> really all that informative for regular users I think.\n\nI've been using 'ts -i' as Peter suggested [0], and that has worked\ndecently well. One other thing that I've noticed is that some potentially\nlong-running tasks don't have corresponding reports. For example, the\ninitial get_db_rel_and_slot_infos() on the old cluster doesn't report\nanything, but that is often one of the most time-consuming steps.\n\n[0] https://postgr.es/m/32d24bcf-9ac4-b10e-4aa2-da6975312eb2%40eisentraut.org\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 19 Jun 2024 10:09:02 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Report runtimes in pg_upgrade verbose mode" }, { "msg_contents": "> On 19 Jun 2024, at 17:09, Nathan Bossart <[email protected]> wrote:\n\n> I've been using 'ts -i' as Peter suggested\n\nOh nice, I had forgotten about that one, thanks for the reminder!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 19 Jun 2024 20:18:08 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Report runtimes in pg_upgrade verbose mode" } ]
[ { "msg_contents": "The order of json related aggregate functions in the docs is currently \nlike this:\n\n[...]\njson_agg\njson_objectagg\njson_object_agg\njson_object_agg_strict\njson_object_agg_unique\njson_arrayagg\njson_object_agg_unique_strict\nmax\nmin\nrange_agg\nrange_intersect_agg\njson_agg_strict\n[...]\n\njson_arrayagg and json_agg_strict are out of place.\n\nAttached patch puts them in the right spot. This is the same down to v16.\n\nBest,\n\nWolfgang", "msg_date": "Wed, 19 Jun 2024 19:49:53 +0200", "msg_from": "Wolfgang Walther <[email protected]>", "msg_from_op": true, "msg_subject": "Docs: Order of json aggregate functions" }, { "msg_contents": "Am Mo., 22. Juli 2024 um 15:19 Uhr schrieb Wolfgang Walther\n<[email protected]>:\n>\n> The order of json related aggregate functions in the docs is currently\n> like this:\n>\n> [...]\n> json_agg\n> json_objectagg\n> json_object_agg\n> json_object_agg_strict\n> json_object_agg_unique\n> json_arrayagg\n> json_object_agg_unique_strict\n> max\n> min\n> range_agg\n> range_intersect_agg\n> json_agg_strict\n> [...]\n>\n> json_arrayagg and json_agg_strict are out of place.\n>\n> Attached patch puts them in the right spot. This is the same down to v16.\n\n\nI compiled and it worked and didn't throw an error.\n\nThe changes to the patch seem useful in my perspective, for making it\neasier to find the functions in the documentation, so people will find\nthem easier.\n\nThere is another table which isn't sorted too, the \"Hypothetical-Set\nAggregate Functions\". Which would be in need of an alphabetical\nsorting too, if all the tables on this side\nof the documentation should look alike.\n\nRegards\nMarlene\n\n\n", "msg_date": "Tue, 23 Jul 2024 11:45:00 +0200", "msg_from": "Marlene Reiterer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs: Order of json aggregate functions" }, { "msg_contents": "On Tue, 2024-07-23 at 11:45 +0200, Marlene Reiterer wrote:\n> Am Mo., 22. Juli 2024 um 15:19 Uhr schrieb Wolfgang Walther <[email protected]>:\n> > \n> > The order of json related aggregate functions in the docs is currently\n> > like this:\n> > \n> > [...]\n> > json_agg\n> > json_objectagg\n> > json_object_agg\n> > json_object_agg_strict\n> > json_object_agg_unique\n> > json_arrayagg\n> > json_object_agg_unique_strict\n> > max\n> > min\n> > range_agg\n> > range_intersect_agg\n> > json_agg_strict\n> > [...]\n> > \n> > json_arrayagg and json_agg_strict are out of place.\n> > \n> > Attached patch puts them in the right spot. This is the same down to v16.\n> \n> I compiled and it worked and didn't throw an error.\n> \n> The changes to the patch seem useful in my perspective, for making it\n> easier to find the functions in the documentation, so people will find\n> them easier.\n> \n> There is another table which isn't sorted too, the \"Hypothetical-Set\n> Aggregate Functions\". 
Which would be in need of an alphabetical\n> sorting too, if all the tables on this side\n> of the documentation should look alike.\n\nThere are only four hypothetical-set aggregate functions, so it is no problem\nto find a function in that list.\n\nI would say that it makes sense to apply the proposed patch, even if we\ndon't sort that short list.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 31 Jul 2024 10:12:50 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs: Order of json aggregate functions" }, { "msg_contents": "On Thu, 20 Jun 2024 at 05:50, Wolfgang Walther <[email protected]> wrote:\n> json_arrayagg and json_agg_strict are out of place.\n>\n> Attached patch puts them in the right spot. This is the same down to v16.\n\nThank you. I've pushed this and ended up backpatching to 16 too. It's\nquite hard to unsee the broken order once seen.\n\nIt seems worth the backpatch both to reduce pain for any future\nbackpatches and also because the aggregates seemed rather badly\nplaced.\n\nDavid\n\n\n", "msg_date": "Thu, 12 Sep 2024 22:41:23 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs: Order of json aggregate functions" } ]
[ { "msg_contents": "While working on an idea from another thread [0], I noticed that each of\nmax_connections, max_worker_process, max_autovacuum_workers, and\nmax_wal_senders have a check hook that verifies the sum of those GUCs does\nnot exceed a certain value. Then, in InitializeMaxBackends(), we do the\nsame check once more. Not only do the check hooks seem redundant, but I\nthink they might sometimes be inaccurate since some values might not yet be\ninitialized. Furthermore, the error message is not exactly the most\ndescriptive:\n\n\t$ pg_ctl -D . start -o \"-c max_connections=262100 -c max_wal_senders=10000\"\n\n\tFATAL: invalid value for parameter \"max_wal_senders\": 10000\n\nThe attached patch removes these hooks and enhances the error message to\nlook like this:\n\n\tFATAL: too many backends configured\n\tDETAIL: \"max_connections\" (262100) plus \"autovacuum_max_workers\" (3) plus \"max_worker_processes\" (8) plus \"max_wal_senders\" (10000) must be less than 262142.\n\nThe downside of this change is that server startup progresses a little\nfurther before it fails, but that might not be too concerning given this\n_should_ be a relatively rare occurrence.\n\nThoughts?\n\n[0] https://postgr.es/m/20240618213331.ef2spg3nasksisbi%40awork3.anarazel.de\n\n-- \nnathan", "msg_date": "Wed, 19 Jun 2024 14:04:58 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "remove check hooks for GUCs that contribute to MaxBackends" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> While working on an idea from another thread [0], I noticed that each of\n> max_connections, max_worker_process, max_autovacuum_workers, and\n> max_wal_senders have a check hook that verifies the sum of those GUCs does\n> not exceed a certain value. Then, in InitializeMaxBackends(), we do the\n> same check once more. Not only do the check hooks seem redundant, but I\n> think they might sometimes be inaccurate since some values might not yet be\n> initialized.\n\nYeah, these per-variable checks are inherently bogus. If we can get\nof them and make the net user experience actually better, that's a\nwin-win.\n\nIt seems easier to do for these because they can't change after server\nstart, so there can be one well-defined time to apply the consistency\ncheck. IIRC, we have some similar issues in other hooks for variables\nthat aren't PGC_POSTMASTER, so it's harder to see how we might get rid\nof their cross-checks. That doesn't make them less bogus though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jun 2024 15:09:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: remove check hooks for GUCs that contribute to MaxBackends" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> The attached patch removes these hooks and enhances the error message to\n> look like this:\n\n> \tFATAL: too many backends configured\n> \tDETAIL: \"max_connections\" (262100) plus \"autovacuum_max_workers\" (3) plus \"max_worker_processes\" (8) plus \"max_wal_senders\" (10000) must be less than 262142.\n\nBTW, I suggest writing it as \"too many server processes configured\",\nor perhaps \"too many server processes required\". 
\"Backend\" is too\nmuch of an insider term.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jun 2024 15:14:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: remove check hooks for GUCs that contribute to MaxBackends" }, { "msg_contents": "On Wed, Jun 19, 2024 at 03:14:16PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> The attached patch removes these hooks and enhances the error message to\n>> look like this:\n> \n>> \tFATAL: too many backends configured\n>> \tDETAIL: \"max_connections\" (262100) plus \"autovacuum_max_workers\" (3) plus \"max_worker_processes\" (8) plus \"max_wal_senders\" (10000) must be less than 262142.\n> \n> BTW, I suggest writing it as \"too many server processes configured\",\n> or perhaps \"too many server processes required\". \"Backend\" is too\n> much of an insider term.\n\nWill do, thanks for reviewing.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 19 Jun 2024 14:24:08 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: remove check hooks for GUCs that contribute to MaxBackends" }, { "msg_contents": "Committed.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 5 Jul 2024 14:50:39 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: remove check hooks for GUCs that contribute to MaxBackends" } ]
[ { "msg_contents": "https://postgr.es/m/[email protected] wrote:\n> Separable, nontrivial things not fixed in the attached patch stack:\n\n> - Trouble is possible, I bet, if the system crashes between the inplace-update\n> memcpy() and XLogInsert(). See the new XXX comment below the memcpy().\n\nThat comment:\n\n\t/*----------\n\t * XXX A crash here can allow datfrozenxid() to get ahead of relfrozenxid:\n\t *\n\t * [\"D\" is a VACUUM (ONLY_DATABASE_STATS)]\n\t * [\"R\" is a VACUUM tbl]\n\t * D: vac_update_datfrozenid() -> systable_beginscan(pg_class)\n\t * D: systable_getnext() returns pg_class tuple of tbl\n\t * R: memcpy() into pg_class tuple of tbl\n\t * D: raise pg_database.datfrozenxid, XLogInsert(), finish\n\t * [crash]\n\t * [recovery restores datfrozenxid w/o relfrozenxid]\n\t */\n\n> Might solve this by inplace update setting DELAY_CHKPT, writing WAL, and\n> finally issuing memcpy() into the buffer.\n\nThat fix worked. Along with that, I'm attaching a not-for-commit patch with a\ntest case and one with the fix rebased on that test case. Apply on top of the\nv2 patch stack from https://postgr.es/m/[email protected].\nThis gets key testing from 027_stream_regress.pl; when I commented out some\nmemcpy lines of the heapam.c change, that test caught it.\n\nThis resolves the last inplace update defect known to me.\n\nThanks,\nnm", "msg_date": "Wed, 19 Jun 2024 18:29:08 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "datfrozenxid > relfrozenxid w/ crash before XLOG_HEAP_INPLACE" }, { "msg_contents": "\n\n> On 20 Jun 2024, at 06:29, Noah Misch <[email protected]> wrote:\n> \n> This resolves the last inplace update defect known to me.\n\nThat’s a huge amount of work, thank you!\n\nDo I get it right, that inplace updates are catalog-specific and some other OOM corruptions [0] and Standby corruptions [1] are not related to this fix. Bot cases we observed on regular tables.\nOr that might be effect of vacuum deepening corruption after observing wrong datfrozenxid?\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/67EADE8F-AEA6-4B73-8E38-A69E5D48BAFE%40yandex-team.ru#1266dd8b898ba02686c2911e0a50ab47\n[1] https://www.postgresql.org/message-id/flat/CAFj8pRBEFMxxFSCVOSi-4n0jHzSaxh6Ze_cZid5eG%3Dtsnn49-A%40mail.gmail.com\n\n", "msg_date": "Thu, 20 Jun 2024 12:17:44 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: datfrozenxid > relfrozenxid w/ crash before XLOG_HEAP_INPLACE" }, { "msg_contents": "On Thu, Jun 20, 2024 at 12:17:44PM +0500, Andrey M. Borodin wrote:\n> On 20 Jun 2024, at 06:29, Noah Misch <[email protected]> wrote:\n> > This resolves the last inplace update defect known to me.\n> \n> That’s a huge amount of work, thank you!\n> \n> Do I get it right, that inplace updates are catalog-specific and some other OOM corruptions [0] and Standby corruptions [1] are not related to this fix. Bot cases we observed on regular tables.\n\nIn core code, inplace updates are specific to pg_class and pg_database.\nAdding PGXN modules, only the citus extension uses them on some other table.\n[0] definitely looks unrelated.\n\n> Or that might be effect of vacuum deepening corruption after observing wrong datfrozenxid?\n\nWrong datfrozenxid can cause premature clog truncation, which can cause \"could\nnot access status of transaction\". While $SUBJECT could cause that, I think\nit would happen on both primary and standby. 
[1] seems to be about a standby\nlacking clog present on the primary, which is unrelated.\n\n> [0] https://www.postgresql.org/message-id/flat/67EADE8F-AEA6-4B73-8E38-A69E5D48BAFE%40yandex-team.ru#1266dd8b898ba02686c2911e0a50ab47\n> [1] https://www.postgresql.org/message-id/flat/CAFj8pRBEFMxxFSCVOSi-4n0jHzSaxh6Ze_cZid5eG%3Dtsnn49-A%40mail.gmail.com\n\n\n", "msg_date": "Thu, 20 Jun 2024 08:08:43 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: datfrozenxid > relfrozenxid w/ crash before XLOG_HEAP_INPLACE" } ]
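
The fix described above boils down to reordering the tail of the in-place update so the WAL record is inserted before the catalog tuple is overwritten, with checkpoints held off across that window. The fragment below is only a sketch of that ordering, not the actual patch: it assumes the usual XLogBeginInsert()/XLogRegisterData()/XLogRegisterBuffer() calls have already been made, that the buffer is exclusively locked, and that dst, src and newlen are the same quantities heap_inplace_update() already works with.

/* hold off checkpoints across the WAL insert and the buffer change */
Assert((MyProc->delayChkptFlags & DELAY_CHKPT_START) == 0);
MyProc->delayChkptFlags |= DELAY_CHKPT_START;

START_CRIT_SECTION();

/* WAL first ... */
XLogRecPtr	recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_INPLACE);

/* ... and only then the in-place change to the shared buffer */
memcpy(dst, src, newlen);
MarkBufferDirty(buffer);
PageSetLSN(BufferGetPage(buffer), recptr);

END_CRIT_SECTION();

MyProc->delayChkptFlags &= ~DELAY_CHKPT_START;

With that ordering, a crash can no longer leave the raised datfrozenxid durable on disk while the corresponding relfrozenxid update is lost.
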
[ { "msg_contents": "The pg_combinebackup --clone option currently doesn't work at all. Even \non systems where it should it be supported, you'll always get a \"file \ncloning not supported on this platform\" error.\n\nThe reason is this checking code in pg_combinebackup.c:\n\n#if (defined(HAVE_COPYFILE) && defined(COPYFILE_CLONE_FORCE)) || \\\n (defined(__linux__) && defined(FICLONE))\n\n if (opt.dry_run)\n pg_log_debug(\"would use cloning to copy files\");\n else\n pg_log_debug(\"will use cloning to copy files\");\n\n#else\n pg_fatal(\"file cloning not supported on this platform\");\n#endif\n\nThe problem is that this file does not include the appropriate OS header \nfiles that would define COPYFILE_CLONE_FORCE or FICLONE, respectively.\n\nThe same problem also exists in copy_file.c. (That one has the right \nheader file for macOS but still not for Linux.)\n\nThis should be pretty easy to fix up, and we should think about some \nways to refactor this to avoid repeating all these OS-specific things a \nfew times. (The code was copied from pg_upgrade originally.)\n\nBut in the short term, how about some test coverage? You can exercise \nthe different pg_combinebackup copy modes like this:\n\ndiff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm \nb/src/test/perl/PostgreSQL/Test/Cluster.pm\nindex 83f385a4870..7e8dd024c82 100644\n--- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n@@ -848,7 +848,7 @@ sub init_from_backup\n }\n\n local %ENV = $self->_get_env();\n- my @combineargs = ('pg_combinebackup', '-d');\n+ my @combineargs = ('pg_combinebackup', '-d', '--clone');\n if (exists $params{tablespace_map})\n {\n while (my ($olddir, $newdir) = each %{ \n$params{tablespace_map} })\n\nWe could do something like what we have for pg_upgrade, where we can use \nthe environment variable PG_TEST_PG_UPGRADE_MODE to test the different \ncopy modes. We could turn this into a global option. (This might also \nbe useful for future work to use file cloning elsewhere, like in CREATE \nDATABASE?)\n\nAlso, I think it would be useful for consistency if pg_combinebackup had \na --copy option to select the default mode, like pg_upgrade does.\n\n\n", "msg_date": "Thu, 20 Jun 2024 07:55:07 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "pg_combinebackup --clone doesn't work" }, { "msg_contents": "On 6/20/24 07:55, Peter Eisentraut wrote:\n> The pg_combinebackup --clone option currently doesn't work at all.  Even\n> on systems where it should it be supported, you'll always get a \"file\n> cloning not supported on this platform\" error.\n> \n> The reason is this checking code in pg_combinebackup.c:\n> \n> #if (defined(HAVE_COPYFILE) && defined(COPYFILE_CLONE_FORCE)) || \\\n>     (defined(__linux__) && defined(FICLONE))\n> \n>         if (opt.dry_run)\n>             pg_log_debug(\"would use cloning to copy files\");\n>         else\n>             pg_log_debug(\"will use cloning to copy files\");\n> \n> #else\n>         pg_fatal(\"file cloning not supported on this platform\");\n> #endif\n> \n> The problem is that this file does not include the appropriate OS header\n> files that would define COPYFILE_CLONE_FORCE or FICLONE, respectively.\n> \n> The same problem also exists in copy_file.c.  (That one has the right\n> header file for macOS but still not for Linux.)\n> \n\nSeems like my bug, I guess :-( Chances are the original patches had the\ninclude, but it got lost during refactoring or something. 
Anyway, will\nfix shortly.\n\n> This should be pretty easy to fix up, and we should think about some\n> ways to refactor this to avoid repeating all these OS-specific things a\n> few times.  (The code was copied from pg_upgrade originally.)\n> \n\nYeah. The ifdef forest got rather hard to navigate.\n\n> But in the short term, how about some test coverage?  You can exercise\n> the different pg_combinebackup copy modes like this:\n> \n> diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm\n> b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> index 83f385a4870..7e8dd024c82 100644\n> --- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n> +++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> @@ -848,7 +848,7 @@ sub init_from_backup\n>         }\n> \n>         local %ENV = $self->_get_env();\n> -       my @combineargs = ('pg_combinebackup', '-d');\n> +       my @combineargs = ('pg_combinebackup', '-d', '--clone');\n>         if (exists $params{tablespace_map})\n>         {\n>             while (my ($olddir, $newdir) = each %{\n> $params{tablespace_map} })\n> \n\nFor ad hoc testing? Sure, but that won't work on platforms without the\nclone support, right?\n\n> We could do something like what we have for pg_upgrade, where we can use\n> the environment variable PG_TEST_PG_UPGRADE_MODE to test the different\n> copy modes.  We could turn this into a global option.  (This might also\n> be useful for future work to use file cloning elsewhere, like in CREATE\n> DATABASE?)\n> \n\nYeah, this sounds like a good way to do this. Is there a good reason to\nhave multiple different variables, one for each tool, or should we have\na single PG_TEST_COPY_MODE affecting all the places?\n\n> Also, I think it would be useful for consistency if pg_combinebackup had\n> a --copy option to select the default mode, like pg_upgrade does.\n> \n\nI vaguely recall this might have been discussed in the thread about\nadding cloning to pg_combinebackup, but I don't recall the details why\nwe didn't adopt the pg_uprade way. 
But we can revisit that, IMO.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 20 Jun 2024 11:31:09 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup --clone doesn't work" }, { "msg_contents": "Here's a fix adding the missing headers to pg_combinebackup, and fixing\nsome compile-time issues in the ifdef-ed block.\n\nI've done some basic manual testing today - I plan to test this a bit\nmore tomorrow, and I'll also look at integrating this into the existing\ntests.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 21 Jun 2024 00:07:32 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup --clone doesn't work" }, { "msg_contents": "On 6/21/24 00:07, Tomas Vondra wrote:\n> Here's a fix adding the missing headers to pg_combinebackup, and fixing\n> some compile-time issues in the ifdef-ed block.\n> \n> I've done some basic manual testing today - I plan to test this a bit\n> more tomorrow, and I'll also look at integrating this into the existing\n> tests.\n> \n\nHere's a bit more complete / cleaned patch, adding the testing changes\nin separate parts.\n\n0001 adds the missing headers / fixes the now-accessible code a bit\n\n0002 adds the --copy option for consistency with pg_upgrade\n\n0003 adds the PG_TEST_PG_COMBINEBACKUP_MODE, so that we can override the\ncopy method for tests\n\n0004 tweaks two of the Cirrus CI tasks to use --clone/--copy-file-range\n\n\nI believe 0001-0003 are likely non-controversial, although if someone\ncould take a look at the Perl in 0003 that'd be nice. Also, 0002 seems\nnice not only because of consistency with pg_upgrade, but it also makes\n0003 easier as we don't need to special-case the default mode etc.\n\nI'm not sure about 0004 - I initially did this mostly to check we have\nthe right headers on other platforms, but not sure we want to actually\ndo this. Or maybe we want to test a different combination (e.g. also\ntest the --clone on Linux)?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 21 Jun 2024 18:10:38 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup --clone doesn't work" }, { "msg_contents": "On 21.06.24 18:10, Tomas Vondra wrote:\n> On 6/21/24 00:07, Tomas Vondra wrote:\n>> Here's a fix adding the missing headers to pg_combinebackup, and fixing\n>> some compile-time issues in the ifdef-ed block.\n>>\n>> I've done some basic manual testing today - I plan to test this a bit\n>> more tomorrow, and I'll also look at integrating this into the existing\n>> tests.\n>>\n> \n> Here's a bit more complete / cleaned patch, adding the testing changes\n> in separate parts.\n> \n> 0001 adds the missing headers / fixes the now-accessible code a bit\n> \n> 0002 adds the --copy option for consistency with pg_upgrade\n\nThis looks good.\n\n> 0003 adds the PG_TEST_PG_COMBINEBACKUP_MODE, so that we can override the\n> copy method for tests\n\nI had imagined that we combine PG_TEST_PG_UPGRADE_MODE and this new one \ninto one setting. 
But maybe that's something to consider with less time \npressure for PG18.\n\n > I believe 0001-0003 are likely non-controversial, although if someone\n > could take a look at the Perl in 0003 that'd be nice. Also, 0002 seems\n > nice not only because of consistency with pg_upgrade, but it also makes\n > 0003 easier as we don't need to special-case the default mode etc.\n\nRight, that was one of the reasons.\n\n> 0004 tweaks two of the Cirrus CI tasks to use --clone/--copy-file-range\n\n> I'm not sure about 0004 - I initially did this mostly to check we have\n> the right headers on other platforms, but not sure we want to actually\n> do this. Or maybe we want to test a different combination (e.g. also\n> test the --clone on Linux)?\n\nIt's tricky to find the right balance here. We had to figure this out \nfor pg_upgrade as well. I think your solution is good, and we should \nalso add test coverage for pg_upgrade --copy-file-range in the same \nplace, I think.\n\n\n\n", "msg_date": "Tue, 25 Jun 2024 15:21:37 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup --clone doesn't work" }, { "msg_contents": "\n\nOn 6/25/24 15:21, Peter Eisentraut wrote:\n> On 21.06.24 18:10, Tomas Vondra wrote:\n>> On 6/21/24 00:07, Tomas Vondra wrote:\n>>> Here's a fix adding the missing headers to pg_combinebackup, and fixing\n>>> some compile-time issues in the ifdef-ed block.\n>>>\n>>> I've done some basic manual testing today - I plan to test this a bit\n>>> more tomorrow, and I'll also look at integrating this into the existing\n>>> tests.\n>>>\n>>\n>> Here's a bit more complete / cleaned patch, adding the testing changes\n>> in separate parts.\n>>\n>> 0001 adds the missing headers / fixes the now-accessible code a bit\n>>\n>> 0002 adds the --copy option for consistency with pg_upgrade\n> \n> This looks good.\n> \n>> 0003 adds the PG_TEST_PG_COMBINEBACKUP_MODE, so that we can override the\n>> copy method for tests\n> \n> I had imagined that we combine PG_TEST_PG_UPGRADE_MODE and this new one\n> into one setting.  But maybe that's something to consider with less time\n> pressure for PG18.\n> \n\nYeah. I initially planned to combine those options into a single one,\nbecause it seems like it's not very useful to have them set differently,\nand because it's easier to not have different options between releases.\nBut then I realized PG_TEST_PG_UPGRADE_MODE was added in 16, so this\nship already sailed - so no reason to rush this into 18.\n\n>> I believe 0001-0003 are likely non-controversial, although if someone\n>> could take a look at the Perl in 0003 that'd be nice. Also, 0002 seems\n>> nice not only because of consistency with pg_upgrade, but it also makes\n>> 0003 easier as we don't need to special-case the default mode etc.\n> \n> Right, that was one of the reasons.\n> \n>> 0004 tweaks two of the Cirrus CI tasks to use --clone/--copy-file-range\n> \n>> I'm not sure about 0004 - I initially did this mostly to check we have\n>> the right headers on other platforms, but not sure we want to actually\n>> do this. Or maybe we want to test a different combination (e.g. also\n>> test the --clone on Linux)?\n> \n> It's tricky to find the right balance here.  We had to figure this out\n> for pg_upgrade as well.  I think your solution is good, and we should\n> also add test coverage for pg_upgrade --copy-file-range in the same\n> place, I think.\n> \n\nYeah. 
I'm not sure if we need to decide this now, or if we can tweak the\ntesting even for released branches.\n\nIMHO the main challenge is to decide which combinations we actually want\nto test on CI. It'd be nice to test each platform with all modes it\nsupports (I guess for backups that wouldn't be a bad thing). But that'd\nrequire expanding the number of combinations, and I don't think that's\nlikely.\n\nMaybe it'd be possible to have a second CI config, with additional\ncombinations, but not triggered after each push?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 25 Jun 2024 16:33:30 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup --clone doesn't work" }, { "msg_contents": "Hi,\n\nI've pushed the first three patches, fixing the headers, adding the\n--copy option and PG_TEST_PG_COMBINEBACKUP_MODE variable.\n\nI haven't pushed the CI changes, I'm not sure if there's a clear\nconsensus on which combination to test. It's something we can tweak\nlater, I think.\n\nFWIW I've added the patch to the 2024-07 commitfest, but mostly to get\nsome CI runs (runs on private fork fail with some macos issues unrelated\nto the patch).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 30 Jun 2024 20:58:02 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup --clone doesn't work" }, { "msg_contents": "On 30.06.24 20:58, Tomas Vondra wrote:\n> I've pushed the first three patches, fixing the headers, adding the\n> --copy option and PG_TEST_PG_COMBINEBACKUP_MODE variable.\n> \n> I haven't pushed the CI changes, I'm not sure if there's a clear\n> consensus on which combination to test. It's something we can tweak\n> later, I think.\n> \n> FWIW I've added the patch to the 2024-07 commitfest, but mostly to get\n> some CI runs (runs on private fork fail with some macos issues unrelated\n> to the patch).\n\nThis last patch is still pending in the commitfest. Personally, I think \nit's good to commit as is.\n\n\n\n", "msg_date": "Fri, 23 Aug 2024 14:00:48 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup --clone doesn't work" }, { "msg_contents": "\nOn 8/23/24 14:00, Peter Eisentraut wrote:\n> On 30.06.24 20:58, Tomas Vondra wrote:\n>> I've pushed the first three patches, fixing the headers, adding the\n>> --copy option and PG_TEST_PG_COMBINEBACKUP_MODE variable.\n>>\n>> I haven't pushed the CI changes, I'm not sure if there's a clear\n>> consensus on which combination to test. It's something we can tweak\n>> later, I think.\n>>\n>> FWIW I've added the patch to the 2024-07 commitfest, but mostly to get\n>> some CI runs (runs on private fork fail with some macos issues unrelated\n>> to the patch).\n> \n> This last patch is still pending in the commitfest.  Personally, I think\n> it's good to commit as is.\n> \n\nOK, thanks for reminding me. 
I'll take care of it after thinking about\nit a bit more.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Fri, 23 Aug 2024 14:50:41 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup --clone doesn't work" }, { "msg_contents": "On 8/23/24 14:50, Tomas Vondra wrote:\n> \n> On 8/23/24 14:00, Peter Eisentraut wrote:\n>> On 30.06.24 20:58, Tomas Vondra wrote:\n>>> I've pushed the first three patches, fixing the headers, adding the\n>>> --copy option and PG_TEST_PG_COMBINEBACKUP_MODE variable.\n>>>\n>>> I haven't pushed the CI changes, I'm not sure if there's a clear\n>>> consensus on which combination to test. It's something we can tweak\n>>> later, I think.\n>>>\n>>> FWIW I've added the patch to the 2024-07 commitfest, but mostly to get\n>>> some CI runs (runs on private fork fail with some macos issues unrelated\n>>> to the patch).\n>>\n>> This last patch is still pending in the commitfest.  Personally, I think\n>> it's good to commit as is.\n>>\n> \n> OK, thanks for reminding me. I'll take care of it after thinking about\n> it a bit more.\n> \n\nTook me longer than I expected, but pushed (into master only).\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Tue, 10 Sep 2024 16:32:28 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup --clone doesn't work" } ]
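
For reference, the shape of the header fix is simply to pull in the OS definitions next to the existing ifdef tests. The function below is a simplified illustration, not the code now in src/bin/pg_combinebackup/copy_file.c: clone_one_file is an invented name, error handling and fallbacks are omitted, and postgres_fe.h is assumed to have been included already for PG_BINARY.

#ifdef HAVE_COPYFILE_H
#include <copyfile.h>			/* COPYFILE_CLONE_FORCE on macOS */
#endif
#ifdef __linux__
#include <sys/ioctl.h>
#include <linux/fs.h>			/* FICLONE */
#endif
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

#include "common/file_perm.h"

static int
clone_one_file(const char *src, const char *dst)
{
#if defined(HAVE_COPYFILE) && defined(COPYFILE_CLONE_FORCE)
	return copyfile(src, dst, NULL, COPYFILE_CLONE_FORCE);
#elif defined(__linux__) && defined(FICLONE)
	int			src_fd = open(src, O_RDONLY | PG_BINARY, 0);
	int			dst_fd = open(dst, O_RDWR | O_CREAT | O_EXCL | PG_BINARY,
							  pg_file_create_mode);
	int			rc = ioctl(dst_fd, FICLONE, src_fd);

	close(src_fd);
	close(dst_fd);
	return rc;
#else
	errno = ENOTSUP;
	return -1;
#endif
}

Without the two OS includes, FICLONE and COPYFILE_CLONE_FORCE are never defined, so the preprocessor test always falls through to the "not supported" branch, which is exactly the symptom reported at the top of the thread.
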
[ { "msg_contents": "\nHi, hackers\n\nWhen I read [1], I think the \"counted_by\" attribute may also be valuable for\nPostgreSQL.\n\nThe 'counted_by' attribute is used on flexible array members. The argument for\nthe attribute is the name of the field member in the same structure holding\nthe count of elements in the flexible array. This information can be used to\nimprove the results of the array bound sanitizer and the\n'__builtin_dynamic_object_size' builtin [2].\n\nIt was introduced in Clang-18 [3] and will soon be available in GCC-15.\n\n[1] https://embeddedor.com/blog/2024/06/18/how-to-use-the-new-counted_by-attribute-in-c-and-linux/\n[2] https://reviews.llvm.org/D148381\n[3] https://godbolt.org/z/5qKsEhG8o\n\n-- \nRegrads,\nJapin Li\n\n\n", "msg_date": "Thu, 20 Jun 2024 18:02:38 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "How about add counted_by attribute for flexible-array?" } ]
[ { "msg_contents": "Hi,\n\nWhile running valgrind on 32-bit ARM (rpi5 with debian), I got this\nreally strange report:\n\n\n==25520== Use of uninitialised value of size 4\n==25520== at 0x94A550: wrapper_handler (pqsignal.c:108)\n==25520== by 0x4D7826F: ??? (sigrestorer.S:64)\n==25520== Uninitialised value was created by a heap allocation\n==25520== at 0x8FB780: palloc (mcxt.c:1340)\n==25520== by 0x913067: tuplestore_begin_common (tuplestore.c:289)\n==25520== by 0x91310B: tuplestore_begin_heap (tuplestore.c:331)\n==25520== by 0x3EA717: ExecMaterial (nodeMaterial.c:64)\n==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n==25520== by 0x3EF73F: ExecProcNode (executor.h:274)\n==25520== by 0x3F0637: ExecMergeJoin (nodeMergejoin.c:703)\n==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n==25520== by 0x3C47DB: ExecProcNode (executor.h:274)\n==25520== by 0x3C4D4F: fetch_input_tuple (nodeAgg.c:561)\n==25520== by 0x3C8233: agg_retrieve_direct (nodeAgg.c:2364)\n==25520== by 0x3C7E07: ExecAgg (nodeAgg.c:2179)\n==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n==25520== by 0x3A5EC3: ExecProcNode (executor.h:274)\n==25520== by 0x3A8FBF: ExecutePlan (execMain.c:1646)\n==25520== by 0x3A6677: standard_ExecutorRun (execMain.c:363)\n==25520== by 0x3A644B: ExecutorRun (execMain.c:304)\n==25520== by 0x6976D3: PortalRunSelect (pquery.c:924)\n==25520== by 0x6972F7: PortalRun (pquery.c:768)\n==25520== by 0x68FA1F: exec_simple_query (postgres.c:1274)\n==25520==\n{\n <insert_a_suppression_name_here>\n Memcheck:Value4\n fun:wrapper_handler\n obj:/usr/lib/arm-linux-gnueabihf/libc.so.6\n}\n**25520** Valgrind detected 1 error(s) during execution of \"select\ncount(*) from\n**25520** (select * from tenk1 x order by x.thousand, x.twothousand,\nx.fivethous) x\n**25520** left join\n**25520** (select * from tenk1 y order by y.unique2) y\n**25520** on x.thousand = y.unique2 and x.twothousand = y.hundred and\nx.fivethous = y.unique2;\"\n\n\nI'm mostly used to weird valgrind stuff on this platform, but it's\nusually about libarmmmem and (possibly) thinking it might access\nundefined stuff when calculating checksums etc.\n\nThis seems somewhat different, so I wonder if it's something real? But\nalso, at the same time, it's rather weird, because the report says it's\nthis bit in pqsignal.c\n\n (*pqsignal_handlers[postgres_signal_arg]) (postgres_signal_arg);\n\nbut it also says the memory was allocated in tuplestore, and that's\nobviously very unlikely, because it does not do anything with signals.\n\nI've only seen this once, but if it's related to signals, that's not\nsurprising - the window may be pretty narrow.\n\nAnyone saw/investigated a report like this?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 20 Jun 2024 12:28:24 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "confusing valgrind report about tuplestore+wrapper_handler (?) on\n 32-bit arm" }, { "msg_contents": "Em qui., 20 de jun. de 2024 às 07:28, Tomas Vondra <\[email protected]> escreveu:\n\n> Hi,\n>\n> While running valgrind on 32-bit ARM (rpi5 with debian), I got this\n> really strange report:\n>\n>\n> ==25520== Use of uninitialised value of size 4\n> ==25520== at 0x94A550: wrapper_handler (pqsignal.c:108)\n> ==25520== by 0x4D7826F: ??? 
(sigrestorer.S:64)\n> ==25520== Uninitialised value was created by a heap allocation\n> ==25520== at 0x8FB780: palloc (mcxt.c:1340)\n> ==25520== by 0x913067: tuplestore_begin_common (tuplestore.c:289)\n> ==25520== by 0x91310B: tuplestore_begin_heap (tuplestore.c:331)\n> ==25520== by 0x3EA717: ExecMaterial (nodeMaterial.c:64)\n> ==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n> ==25520== by 0x3EF73F: ExecProcNode (executor.h:274)\n> ==25520== by 0x3F0637: ExecMergeJoin (nodeMergejoin.c:703)\n> ==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n> ==25520== by 0x3C47DB: ExecProcNode (executor.h:274)\n> ==25520== by 0x3C4D4F: fetch_input_tuple (nodeAgg.c:561)\n> ==25520== by 0x3C8233: agg_retrieve_direct (nodeAgg.c:2364)\n> ==25520== by 0x3C7E07: ExecAgg (nodeAgg.c:2179)\n> ==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n> ==25520== by 0x3A5EC3: ExecProcNode (executor.h:274)\n> ==25520== by 0x3A8FBF: ExecutePlan (execMain.c:1646)\n> ==25520== by 0x3A6677: standard_ExecutorRun (execMain.c:363)\n> ==25520== by 0x3A644B: ExecutorRun (execMain.c:304)\n> ==25520== by 0x6976D3: PortalRunSelect (pquery.c:924)\n> ==25520== by 0x6972F7: PortalRun (pquery.c:768)\n> ==25520== by 0x68FA1F: exec_simple_query (postgres.c:1274)\n> ==25520==\n> {\n> <insert_a_suppression_name_here>\n> Memcheck:Value4\n> fun:wrapper_handler\n> obj:/usr/lib/arm-linux-gnueabihf/libc.so.6\n> }\n> **25520** Valgrind detected 1 error(s) during execution of \"select\n> count(*) from\n> **25520** (select * from tenk1 x order by x.thousand, x.twothousand,\n> x.fivethous) x\n> **25520** left join\n> **25520** (select * from tenk1 y order by y.unique2) y\n> **25520** on x.thousand = y.unique2 and x.twothousand = y.hundred and\n> x.fivethous = y.unique2;\"\n>\n>\n> I'm mostly used to weird valgrind stuff on this platform, but it's\n> usually about libarmmmem and (possibly) thinking it might access\n> undefined stuff when calculating checksums etc.\n>\n> This seems somewhat different, so I wonder if it's something real?\n\nIt seems like a false positive to me.\n\nAccording to valgrind's documentation:\nhttps://valgrind.org/docs/manual/mc-manual.html#mc-manual.value\n\n\" This can lead to false positive errors, as the shared memory can be\ninitialised via a first mapping, and accessed via another mapping. The\naccess via this other mapping will have its own V bits, which have not been\nchanged when the memory was initialised via the first mapping. The bypass\nfor these false positives is to use Memcheck's client requests\nVALGRIND_MAKE_MEM_DEFINED and VALGRIND_MAKE_MEM_UNDEFINED to inform\nMemcheck about what your program does (or what another process does) to\nthese shared memory mappings. \"\n\nbest regards,\nRanier Vilela\n\nEm qui., 20 de jun. de 2024 às 07:28, Tomas Vondra <[email protected]> escreveu:Hi,\n\nWhile running valgrind on 32-bit ARM (rpi5 with debian), I got this\nreally strange report:\n\n\n==25520== Use of uninitialised value of size 4\n==25520==    at 0x94A550: wrapper_handler (pqsignal.c:108)\n==25520==    by 0x4D7826F: ??? 
(sigrestorer.S:64)\n==25520==  Uninitialised value was created by a heap allocation\n==25520==    at 0x8FB780: palloc (mcxt.c:1340)\n==25520==    by 0x913067: tuplestore_begin_common (tuplestore.c:289)\n==25520==    by 0x91310B: tuplestore_begin_heap (tuplestore.c:331)\n==25520==    by 0x3EA717: ExecMaterial (nodeMaterial.c:64)\n==25520==    by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n==25520==    by 0x3EF73F: ExecProcNode (executor.h:274)\n==25520==    by 0x3F0637: ExecMergeJoin (nodeMergejoin.c:703)\n==25520==    by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n==25520==    by 0x3C47DB: ExecProcNode (executor.h:274)\n==25520==    by 0x3C4D4F: fetch_input_tuple (nodeAgg.c:561)\n==25520==    by 0x3C8233: agg_retrieve_direct (nodeAgg.c:2364)\n==25520==    by 0x3C7E07: ExecAgg (nodeAgg.c:2179)\n==25520==    by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n==25520==    by 0x3A5EC3: ExecProcNode (executor.h:274)\n==25520==    by 0x3A8FBF: ExecutePlan (execMain.c:1646)\n==25520==    by 0x3A6677: standard_ExecutorRun (execMain.c:363)\n==25520==    by 0x3A644B: ExecutorRun (execMain.c:304)\n==25520==    by 0x6976D3: PortalRunSelect (pquery.c:924)\n==25520==    by 0x6972F7: PortalRun (pquery.c:768)\n==25520==    by 0x68FA1F: exec_simple_query (postgres.c:1274)\n==25520==\n{\n   <insert_a_suppression_name_here>\n   Memcheck:Value4\n   fun:wrapper_handler\n   obj:/usr/lib/arm-linux-gnueabihf/libc.so.6\n}\n**25520** Valgrind detected 1 error(s) during execution of \"select\ncount(*) from\n**25520**   (select * from tenk1 x order by x.thousand, x.twothousand,\nx.fivethous) x\n**25520**   left join\n**25520**   (select * from tenk1 y order by y.unique2) y\n**25520**   on x.thousand = y.unique2 and x.twothousand = y.hundred and\nx.fivethous = y.unique2;\"\n\n\nI'm mostly used to weird valgrind stuff on this platform, but it's\nusually about libarmmmem and (possibly) thinking it might access\nundefined stuff when calculating checksums etc.\n\nThis seems somewhat different, so I wonder if it's something real? It seems like a false positive to me.According to valgrind's documentation:https://valgrind.org/docs/manual/mc-manual.html#mc-manual.value\"\nThis can lead to false positive errors, as\nthe shared memory can be initialised via a first mapping, and accessed via\nanother mapping. The access via this other mapping will have its own V bits,\nwhich have not been changed when the memory was initialised via the first\nmapping. The bypass for these false positives is to use Memcheck's client\nrequests VALGRIND_MAKE_MEM_DEFINED and\nVALGRIND_MAKE_MEM_UNDEFINED to inform\nMemcheck about what your program does (or what another process does)\nto these shared memory mappings. \"best regards,Ranier Vilela", "msg_date": "Thu, 20 Jun 2024 08:32:05 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: confusing valgrind report about tuplestore+wrapper_handler (?) on\n 32-bit arm" }, { "msg_contents": "\n\nOn 6/20/24 13:32, Ranier Vilela wrote:\n> Em qui., 20 de jun. de 2024 às 07:28, Tomas Vondra <\n> [email protected]> escreveu:\n> \n>> Hi,\n>>\n>> While running valgrind on 32-bit ARM (rpi5 with debian), I got this\n>> really strange report:\n>>\n>>\n>> ==25520== Use of uninitialised value of size 4\n>> ==25520== at 0x94A550: wrapper_handler (pqsignal.c:108)\n>> ==25520== by 0x4D7826F: ??? 
(sigrestorer.S:64)\n>> ==25520== Uninitialised value was created by a heap allocation\n>> ==25520== at 0x8FB780: palloc (mcxt.c:1340)\n>> ==25520== by 0x913067: tuplestore_begin_common (tuplestore.c:289)\n>> ==25520== by 0x91310B: tuplestore_begin_heap (tuplestore.c:331)\n>> ==25520== by 0x3EA717: ExecMaterial (nodeMaterial.c:64)\n>> ==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n>> ==25520== by 0x3EF73F: ExecProcNode (executor.h:274)\n>> ==25520== by 0x3F0637: ExecMergeJoin (nodeMergejoin.c:703)\n>> ==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n>> ==25520== by 0x3C47DB: ExecProcNode (executor.h:274)\n>> ==25520== by 0x3C4D4F: fetch_input_tuple (nodeAgg.c:561)\n>> ==25520== by 0x3C8233: agg_retrieve_direct (nodeAgg.c:2364)\n>> ==25520== by 0x3C7E07: ExecAgg (nodeAgg.c:2179)\n>> ==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n>> ==25520== by 0x3A5EC3: ExecProcNode (executor.h:274)\n>> ==25520== by 0x3A8FBF: ExecutePlan (execMain.c:1646)\n>> ==25520== by 0x3A6677: standard_ExecutorRun (execMain.c:363)\n>> ==25520== by 0x3A644B: ExecutorRun (execMain.c:304)\n>> ==25520== by 0x6976D3: PortalRunSelect (pquery.c:924)\n>> ==25520== by 0x6972F7: PortalRun (pquery.c:768)\n>> ==25520== by 0x68FA1F: exec_simple_query (postgres.c:1274)\n>> ==25520==\n>> {\n>> <insert_a_suppression_name_here>\n>> Memcheck:Value4\n>> fun:wrapper_handler\n>> obj:/usr/lib/arm-linux-gnueabihf/libc.so.6\n>> }\n>> **25520** Valgrind detected 1 error(s) during execution of \"select\n>> count(*) from\n>> **25520** (select * from tenk1 x order by x.thousand, x.twothousand,\n>> x.fivethous) x\n>> **25520** left join\n>> **25520** (select * from tenk1 y order by y.unique2) y\n>> **25520** on x.thousand = y.unique2 and x.twothousand = y.hundred and\n>> x.fivethous = y.unique2;\"\n>>\n>>\n>> I'm mostly used to weird valgrind stuff on this platform, but it's\n>> usually about libarmmmem and (possibly) thinking it might access\n>> undefined stuff when calculating checksums etc.\n>>\n>> This seems somewhat different, so I wonder if it's something real?\n> \n> It seems like a false positive to me.\n> \n> According to valgrind's documentation:\n> https://valgrind.org/docs/manual/mc-manual.html#mc-manual.value\n> \n> \" This can lead to false positive errors, as the shared memory can be\n> initialised via a first mapping, and accessed via another mapping. The\n> access via this other mapping will have its own V bits, which have not been\n> changed when the memory was initialised via the first mapping. The bypass\n> for these false positives is to use Memcheck's client requests\n> VALGRIND_MAKE_MEM_DEFINED and VALGRIND_MAKE_MEM_UNDEFINED to inform\n> Memcheck about what your program does (or what another process does) to\n> these shared memory mappings. \"\n> \n\nBut that's about shared memory, and the report has nothing to do with\nshared memory AFAICS.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 20 Jun 2024 13:54:08 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: confusing valgrind report about tuplestore+wrapper_handler (?) on\n 32-bit arm" }, { "msg_contents": "Em qui., 20 de jun. de 2024 às 08:54, Tomas Vondra <\[email protected]> escreveu:\n\n>\n>\n> On 6/20/24 13:32, Ranier Vilela wrote:\n> > Em qui., 20 de jun. 
de 2024 às 07:28, Tomas Vondra <\n> > [email protected]> escreveu:\n> >\n> >> Hi,\n> >>\n> >> While running valgrind on 32-bit ARM (rpi5 with debian), I got this\n> >> really strange report:\n> >>\n> >>\n> >> ==25520== Use of uninitialised value of size 4\n> >> ==25520== at 0x94A550: wrapper_handler (pqsignal.c:108)\n> >> ==25520== by 0x4D7826F: ??? (sigrestorer.S:64)\n> >> ==25520== Uninitialised value was created by a heap allocation\n> >> ==25520== at 0x8FB780: palloc (mcxt.c:1340)\n> >> ==25520== by 0x913067: tuplestore_begin_common (tuplestore.c:289)\n> >> ==25520== by 0x91310B: tuplestore_begin_heap (tuplestore.c:331)\n> >> ==25520== by 0x3EA717: ExecMaterial (nodeMaterial.c:64)\n> >> ==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n> >> ==25520== by 0x3EF73F: ExecProcNode (executor.h:274)\n> >> ==25520== by 0x3F0637: ExecMergeJoin (nodeMergejoin.c:703)\n> >> ==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n> >> ==25520== by 0x3C47DB: ExecProcNode (executor.h:274)\n> >> ==25520== by 0x3C4D4F: fetch_input_tuple (nodeAgg.c:561)\n> >> ==25520== by 0x3C8233: agg_retrieve_direct (nodeAgg.c:2364)\n> >> ==25520== by 0x3C7E07: ExecAgg (nodeAgg.c:2179)\n> >> ==25520== by 0x3B2FF7: ExecProcNodeFirst (execProcnode.c:464)\n> >> ==25520== by 0x3A5EC3: ExecProcNode (executor.h:274)\n> >> ==25520== by 0x3A8FBF: ExecutePlan (execMain.c:1646)\n> >> ==25520== by 0x3A6677: standard_ExecutorRun (execMain.c:363)\n> >> ==25520== by 0x3A644B: ExecutorRun (execMain.c:304)\n> >> ==25520== by 0x6976D3: PortalRunSelect (pquery.c:924)\n> >> ==25520== by 0x6972F7: PortalRun (pquery.c:768)\n> >> ==25520== by 0x68FA1F: exec_simple_query (postgres.c:1274)\n> >> ==25520==\n> >> {\n> >> <insert_a_suppression_name_here>\n> >> Memcheck:Value4\n> >> fun:wrapper_handler\n> >> obj:/usr/lib/arm-linux-gnueabihf/libc.so.6\n> >> }\n> >> **25520** Valgrind detected 1 error(s) during execution of \"select\n> >> count(*) from\n> >> **25520** (select * from tenk1 x order by x.thousand, x.twothousand,\n> >> x.fivethous) x\n> >> **25520** left join\n> >> **25520** (select * from tenk1 y order by y.unique2) y\n> >> **25520** on x.thousand = y.unique2 and x.twothousand = y.hundred and\n> >> x.fivethous = y.unique2;\"\n> >>\n> >>\n> >> I'm mostly used to weird valgrind stuff on this platform, but it's\n> >> usually about libarmmmem and (possibly) thinking it might access\n> >> undefined stuff when calculating checksums etc.\n> >>\n> >> This seems somewhat different, so I wonder if it's something real?\n> >\n> > It seems like a false positive to me.\n> >\n> > According to valgrind's documentation:\n> > https://valgrind.org/docs/manual/mc-manual.html#mc-manual.value\n> >\n> > \" This can lead to false positive errors, as the shared memory can be\n> > initialised via a first mapping, and accessed via another mapping. The\n> > access via this other mapping will have its own V bits, which have not\n> been\n> > changed when the memory was initialised via the first mapping. The bypass\n> > for these false positives is to use Memcheck's client requests\n> > VALGRIND_MAKE_MEM_DEFINED and VALGRIND_MAKE_MEM_UNDEFINED to inform\n> > Memcheck about what your program does (or what another process does) to\n> > these shared memory mappings. \"\n> >\n>\n> But that's about shared memory, and the report has nothing to do with\n> shared memory AFAICS.\n>\nYou can try once:\nSelecting --expensive-definedness-checks=yes causes Memcheck to use the\nmost accurate analysis possible. 
This minimises false error rates but can\ncause up to 30% performance degradation.\n\nI did a search through my reports and none refer to this particular source.\n\nbest regards,\nRanier Vilela\n", "msg_date": "Thu, 20 Jun 2024 09:14:17 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: confusing valgrind report about tuplestore+wrapper_handler (?) on\n 32-bit arm" } ]
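Whether or not they turn out to be relevant to this particular report, the Memcheck client requests named in the quoted manual text are plain macros from <valgrind/memcheck.h> that any C program can use. A minimal stand-alone illustration (not PostgreSQL code; the buffer and its size are invented for the example, and it assumes the Valgrind development headers are installed):

#include <stdio.h>
#include <stdlib.h>
#include <valgrind/memcheck.h>

/* build: gcc -g mc_demo.c -o mc_demo ; run: valgrind ./mc_demo */
int
main(void)
{
	size_t		len = 64;
	unsigned char *buf = malloc(len);	/* Memcheck tracks this as undefined */

	/*
	 * Pretend something Memcheck cannot observe (another mapping, another
	 * process, DMA, ...) initialised buf, and tell Memcheck so.  Comment
	 * this line out to see the "uninitialised value" report come back.
	 */
	VALGRIND_MAKE_MEM_DEFINED(buf, len);

	if (buf[0] == 0)	/* no longer reported as depending on uninitialised data */
		printf("first byte is zero\n");

	free(buf);
	return 0;
}

VALGRIND_MAKE_MEM_UNDEFINED works the same way in the opposite direction.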
[ { "msg_contents": "Hello Team,\nGood Day,\n\nI have been working on adding a CustomScanState object in the executor\nstate in my project. As part of CustomScanState, I execute queries and\nstore their results in the Tuplestorestate object. After storing all tuples\nin the Tuplestorestate, I retrieve each tuple and place it in the\nTupleTableSlot using the tuplestore_gettupleslot() function.\n\nHowever, I encounter an error: *\"trying to store a minimal tuple into the\nwrong type of slot.\"* Upon debugging, I discovered that the TupleTableSlot\nonly holds virtual tuples (tupleTableSlot->tts_ops is set to TTSOpsVirtual).\nIn contrast, tuplestore_gettupleslot() calls ExecStoreMinimalTuple(), which\nexpects TupleTableSlotOps of type TTSOpsMinimalTuple.\n\nFurther investigation revealed that in the ExecInitCustomScan() function\nwithin the nodeCustom.c source file, where ScanTupleSlot and\nResultTupleSlots are initialized, users can choose custom slots by setting\nslotOps in CustomScanState. We initialize the ScanTupleSlot based on\nuser-specified slotOps, but for ResultTupleSlot, we proceed with\nTTSOpsVirtual instead of the custom slotOps, which is causing the issue.\n\nIs this behavior expected? Is there a way to store tuples in slots\naccording to the TupleTableSlot type?\n\nI found a function ExecForceStoreMinimalTuple() which can be used in my\ncase. We need to pass the MinimalTuple to this function, but I was unable\nto find a way to fetch the tuple from tuple storestate. We do have\ntuplestore_gettuple()\nfunction to get the minimal tuple but it is a static function, is there any\nother function like that?\n\nBelow is the code snippet of ExecInitCustomScan() , for simplicity I\nremoved some code in the function. I took it from the nodeCustom.c file in\nthe PG source.\nCustomScanState *\nExecInitCustomScan(CustomScan *cscan, EState *estate, int eflags)\n{\nCustomScanState *css;\nconst TupleTableSlotOps *slotOps;\n\ncss = castNode(CustomScanState,\ncscan->methods->CreateCustomScanState(cscan));\n// ------------------------------- CODE STARTED ----------------\n\n/*\n* Use a custom slot if specified in CustomScanState or use virtual slot\n* otherwise.\n*/\nslotOps = css->slotOps;\nif (!slotOps)\nslotOps = &TTSOpsVirtual;\n\nif (cscan->custom_scan_tlist != NIL || scan_rel == NULL)\n{\nExecInitScanTupleSlot(estate, &css->ss, scan_tupdesc, slotOps); // Here we\nare using slotOps provided by user\n}\nelse\n{\nExecInitScanTupleSlot(estate, &css->ss, RelationGetDescr(scan_rel),\nslotOps); // Here we are using slotOps provided by user\n}\n\nExecInitResultTupleSlotTL(&css->ss.ps, &*TTSOpsVirtual*); // Here we have\nhard coded TTSOpsVirtual\n// -------------------------- CODE ENDED -----------------------\n}\n\nHello Team,Good Day,I have been working on adding a CustomScanState object in the executor state in my project. As part of CustomScanState, I execute queries and store their results in the Tuplestorestate object. After storing all tuples in the Tuplestorestate, I retrieve each tuple and place it in the TupleTableSlot using the tuplestore_gettupleslot() function.However, I encounter an error: \"trying to store a minimal tuple into the wrong type of slot.\" Upon debugging, I discovered that the TupleTableSlot only holds virtual tuples (tupleTableSlot->tts_ops is set to TTSOpsVirtual). 
In contrast, tuplestore_gettupleslot() calls ExecStoreMinimalTuple(), which expects TupleTableSlotOps of type TTSOpsMinimalTuple.Further investigation revealed that in the ExecInitCustomScan() function within the nodeCustom.c source file, where ScanTupleSlot and ResultTupleSlots are initialized, users can choose custom slots by setting slotOps in CustomScanState. We initialize the ScanTupleSlot based on user-specified slotOps, but for ResultTupleSlot, we proceed with TTSOpsVirtual instead of the custom slotOps, which is causing the issue.Is this behavior expected? Is there a way to store tuples in slots according to the TupleTableSlot type?I found a function ExecForceStoreMinimalTuple() which can be used in my case. We need to pass the MinimalTuple to this function, but I was unable to find a way to fetch the tuple from tuple storestate. We do have tuplestore_gettuple() function to get the minimal tuple but it is a static function, is there any other function like that?Below is the code snippet of ExecInitCustomScan() , for simplicity I removed some code in the function. I took it from the nodeCustom.c file in the PG source.CustomScanState *ExecInitCustomScan(CustomScan *cscan, EState *estate, int eflags){ CustomScanState *css; const TupleTableSlotOps *slotOps; css = castNode(CustomScanState, cscan->methods->CreateCustomScanState(cscan)); // ------------------------------- CODE STARTED ---------------- /* * Use a custom slot if specified in CustomScanState or use virtual slot * otherwise. */ slotOps = css->slotOps; if (!slotOps) slotOps = &TTSOpsVirtual; if (cscan->custom_scan_tlist != NIL || scan_rel == NULL) { ExecInitScanTupleSlot(estate, &css->ss, scan_tupdesc, slotOps); // Here we are using slotOps provided by user } else { ExecInitScanTupleSlot(estate, &css->ss, RelationGetDescr(scan_rel), slotOps); // Here we are using slotOps provided by user } ExecInitResultTupleSlotTL(&css->ss.ps, &TTSOpsVirtual); // Here we have hard coded TTSOpsVirtual // -------------------------- CODE ENDED -----------------------}", "msg_date": "Thu, 20 Jun 2024 15:58:42 +0530", "msg_from": "V N G Samba Siva Reddy Chinta <[email protected]>", "msg_from_op": true, "msg_subject": "Custom TupleTableSlotOps while Initializing Custom Scan" }, { "msg_contents": "Hi,\n\nOn Thu, Jun 20, 2024 at 5:58 PM V N G Samba Siva Reddy Chinta <\[email protected]> wrote:\n\n> Hello Team,\n> Good Day,\n>\n> I have been working on adding a CustomScanState object in the executor\n> state in my project. As part of CustomScanState, I execute queries and\n> store their results in the Tuplestorestate object. After storing all\n> tuples in the Tuplestorestate, I retrieve each tuple and place it in the\n> TupleTableSlot using the tuplestore_gettupleslot() function.\n>\n> However, I encounter an error: *\"trying to store a minimal tuple into the\n> wrong type of slot.\"* Upon debugging, I discovered that the TupleTableSlot\n> only holds virtual tuples (tupleTableSlot->tts_ops is set to TTSOpsVirtual).\n> In contrast, tuplestore_gettupleslot() calls ExecStoreMinimalTuple(),\n> which expects TupleTableSlotOps of type TTSOpsMinimalTuple.\n>\n> Further investigation revealed that in the ExecInitCustomScan() function\n> within the nodeCustom.c source file, where ScanTupleSlot and\n> ResultTupleSlots are initialized, users can choose custom slots by\n> setting slotOps in CustomScanState. 
We initialize the ScanTupleSlot based\n> on user-specified slotOps, but for ResultTupleSlot, we proceed with\n> TTSOpsVirtual instead of the custom slotOps, which is causing the issue.\n>\n> Is this behavior expected? Is there a way to store tuples in slots\n> according to the TupleTableSlot type?\n>\n> I found a function ExecForceStoreMinimalTuple() which can be used in my\n> case. We need to pass the MinimalTuple to this function, but I was unable\n> to find a way to fetch the tuple from tuple storestate. We do have tuplestore_gettuple()\n> function to get the minimal tuple but it is a static function, is there any\n> other function like that?\n>\n\n From the description I assume that you are passing ResultTupleSlot to\ntuplestore_gettupleslot(). Please confirm.\n\nI think the reason for CustomScan's ResultTupleSlot to be set to virtual\ntuple slot is it's the most general kind of tuple - values and isnulls.\nOther kinds depend upon the heap tuple format. And the rest of the code\nneeds CustomScans to produce a deterministic tuple slot type.\n\nWithout knowing the details of your code, I think what you need to do is\ndeclare a minimal tuple slot, fetch the tuple from store using this slot\nand then store it into ResultTupleSlot using its ttsops. Other types of\nplanstate nodes some times return the slot they get from their subplans as\nis without going through ResultTupleSlot. You may want to do something like\nthat, if the result tuple contents are same as the tuple stored in\ntuplestore\n\n-- \nBest Wishes,\nAshutosh Bapat\n", "msg_date": "Fri, 21 Jun 2024 17:25:51 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom TupleTableSlotOps while Initializing Custom Scan" } ]
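In case a concrete shape for Ashutosh's suggestion is useful, a rough, untested sketch of a CustomScanState exec callback along those lines could look like the following. MyCustomState, css->tstore and css->minimal_slot are invented names standing in for whatever the extension keeps in its own state struct.

#include "postgres.h"
#include "executor/executor.h"
#include "executor/tuptable.h"
#include "nodes/execnodes.h"
#include "utils/tuplestore.h"

typedef struct MyCustomState	/* hypothetical extension state */
{
	CustomScanState css;
	Tuplestorestate *tstore;	/* filled earlier with the query results */
	TupleTableSlot *minimal_slot;	/* work slot matching tuplestore output */
} MyCustomState;

static TupleTableSlot *
my_exec_custom_scan(CustomScanState *node)
{
	MyCustomState *css = (MyCustomState *) node;
	TupleTableSlot *resultslot = node->ss.ps.ps_ResultTupleSlot;

	/* Lazily create a slot whose ops match what the tuplestore hands back. */
	if (css->minimal_slot == NULL)
		css->minimal_slot =
			MakeSingleTupleTableSlot(resultslot->tts_tupleDescriptor,
									 &TTSOpsMinimalTuple);

	/* Fetch the next stored tuple into the minimal-tuple slot. */
	if (!tuplestore_gettupleslot(css->tstore, true, false, css->minimal_slot))
		return ExecClearTuple(resultslot);	/* no more tuples */

	/*
	 * Copy into the node's result slot; ExecCopySlot() converts between
	 * slot types, so the virtual result slot is a fine destination.
	 */
	return ExecCopySlot(resultslot, css->minimal_slot);
}

If the stored tuples already have the same descriptor as the scan's result, simply returning css->minimal_slot itself (the way some planstate nodes hand back their subplan's slot) avoids the copy, as noted above.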
[ { "msg_contents": "Hi,\n\nKonstantin and I found an MVCC bug with:\n\n- a prepared transaction,\n- which has a subtransaction,\n- on a hot standby,\n- after starting the standby from a shutdown checkpoint.\n\nSee the test case in the attached patch to demonstrate this. The last \nquery in the new test returns incorrect result on master, causing the \ntest to fail.\n\nThe problem\n-----------\n\nWhen you shut down a primary with a prepared transaction, and start a \nhot standby server from the shutdown checkpoint, the hot standby server \ngoes through this code at startup:\n\n> \t\t\tif (wasShutdown)\n> \t\t\t\toldestActiveXID = PrescanPreparedTransactions(&xids, &nxids);\n> \t\t\telse\n> \t\t\t\toldestActiveXID = checkPoint.oldestActiveXid;\n> \t\t\tAssert(TransactionIdIsValid(oldestActiveXID));\n> \n> \t\t\t/* Tell procarray about the range of xids it has to deal with */\n> \t\t\tProcArrayInitRecovery(XidFromFullTransactionId(TransamVariables->nextXid));\n> \n> \t\t\t/*\n> \t\t\t * Startup subtrans only. CLOG, MultiXact and commit timestamp\n> \t\t\t * have already been started up and other SLRUs are not maintained\n> \t\t\t * during recovery and need not be started yet.\n> \t\t\t */\n> \t\t\tStartupSUBTRANS(oldestActiveXID);\n> \n> \t\t\t/*\n> \t\t\t * If we're beginning at a shutdown checkpoint, we know that\n> \t\t\t * nothing was running on the primary at this point. So fake-up an\n> \t\t\t * empty running-xacts record and use that here and now. Recover\n> \t\t\t * additional standby state for prepared transactions.\n> \t\t\t */\n> \t\t\tif (wasShutdown)\n> \t\t\t{\n> \t\t\t\tRunningTransactionsData running;\n> \t\t\t\tTransactionId latestCompletedXid;\n> \n> \t\t\t\t/*\n> \t\t\t\t * Construct a RunningTransactions snapshot representing a\n> \t\t\t\t * shut down server, with only prepared transactions still\n> \t\t\t\t * alive. We're never overflowed at this point because all\n> \t\t\t\t * subxids are listed with their parent prepared transactions.\n> \t\t\t\t */\n> \t\t\t\trunning.xcnt = nxids;\n> \t\t\t\trunning.subxcnt = 0;\n> \t\t\t\trunning.subxid_overflow = false;\n> \t\t\t\trunning.nextXid = XidFromFullTransactionId(checkPoint.nextXid);\n> \t\t\t\trunning.oldestRunningXid = oldestActiveXID;\n> \t\t\t\tlatestCompletedXid = XidFromFullTransactionId(checkPoint.nextXid);\n> \t\t\t\tTransactionIdRetreat(latestCompletedXid);\n> \t\t\t\tAssert(TransactionIdIsNormal(latestCompletedXid));\n> \t\t\t\trunning.latestCompletedXid = latestCompletedXid;\n> \t\t\t\trunning.xids = xids;\n> \n> \t\t\t\tProcArrayApplyRecoveryInfo(&running);\n> \n> \t\t\t\tStandbyRecoverPreparedTransactions();\n> \t\t\t}\n\nThe problem is that the RunningTransactions snapshot constructed here \ndoes not include subtransaction XIDs of the prepared transactions, only \nthe main XIDs. Because of that, snapshots taken in the standby will \nconsider the sub-XIDs as aborted rather than in-progress. 
That leads to \ntwo problems if the prepared transaction is later committed:\n\n- We will incorrectly set hint bits on tuples inserted/deleted by the \nsubtransactions, which leads to incorrect query results later if the \nprepared transaction is committed.\n\n- If you acquire an MVCC snapshot and hold to it while the prepared \ntransaction commits, the subtransactions will suddenly become visible to \nthe old snapshot.\n\nHistory\n-------\n\nStandbyRecoverPreparedTransactions has this comment:\n\n> * The lack of calls to SubTransSetParent() calls here is by design;\n> * those calls are made by RecoverPreparedTransactions() at the end of recovery\n> * for those xacts that need this.\n\nI think that's wrong; it really should update pg_subtrans. It used to, a \nlong time ago, but commit 49e92815497 changed it. Reading the \ndiscussions that led to that change, seems that we somehow didn't \nrealize that it's important to distinguish between in-progress and \naborted transactions in a standby. On that thread, Nikhil posted [1] a \ntest case that is almost exactly the same test case that I used to find \nthis, but apparently the visibility in standby in that scenario was not \ntested thoroughly back then.\n\n[1] \nhttps://www.postgresql.org/message-id/CAMGcDxde4XjDyTjGvZCPVQROpXw1opfpC0vjpCkzc1pcQBqvrg%40mail.gmail.com\n\nFix\n---\n\nAttached is a patch to fix this, with a test case. It should be \nbackpatched to all supported versions.\n\nThe patch changes a field in RunningTransactionsData from bool to an \nenum. Could that break extensions on back branches? I think it's OK, I'm \nnot aware of any extensions touching RunningTransactionsData. I did not \nchange the xl_running_xacts WAL record, only the in-memory struct.\n\nAlternatively, we could add a new argument to \nProcArrayApplyRecoveryInfo() to indicate the new case that the xids \narray in RunningTransactionsData does not include all the subxids but \nthey have all been marked in pg_subtrans already. But I think the \nattached is better, as the enum makes the three different states more clear.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Thu, 20 Jun 2024 16:41:21 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Visibility bug with prepared transaction with subtransactions on\n standby" }, { "msg_contents": "On 20/06/2024 16:41, Heikki Linnakangas wrote:\n> Attached is a patch to fix this, with a test case.\n\nThe patch did not compile, thanks to a last-minute change in a field \nname. Here's a fixed version.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Thu, 20 Jun 2024 17:10:17 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Visibility bug with prepared transaction with subtransactions on\n standby" }, { "msg_contents": "On 20/06/2024 17:10, Heikki Linnakangas wrote:\n> On 20/06/2024 16:41, Heikki Linnakangas wrote:\n>> Attached is a patch to fix this, with a test case.\n> \n> The patch did not compile, thanks to a last-minute change in a field\n> name. 
Here's a fixed version.\n\nAll I heard is crickets, so committed and backported to all supported \nversions.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 27 Jun 2024 21:35:53 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Visibility bug with prepared transaction with subtransactions on\n standby" }, { "msg_contents": "Hi,\nI'm trying to run REL_13_STABLE recovey tests for windows and I get the \nerror\n\nwaiting for server to shut down.............. failed\npg_ctl: server does not shut down\n# pg_ctl stop failed: 256\n# Stale postmaster.pid file for node \"paris\": PID 1868 no longer exists\nBail out! pg_ctl stop failed\n\nI noticed that on buildfarm recovey tests are disabled for windows, was \nthis done intentionally?\n\n'invocation_args' => [\n'--config',\n'./bowerbird.conf',\n'--skip-steps',\n'recovery-check',\n'--verbose',\n'REL_13_STABLE'\n],\n\nREL_13_STABLE (071e19a36) - test passed\nREL_13_STABLE(e9c8747ee) - test failed\n\n28.06.2024 1:35, Heikki Linnakangas пишет:\n> On 20/06/2024 17:10, Heikki Linnakangas wrote:\n>> On 20/06/2024 16:41, Heikki Linnakangas wrote:\n>>> Attached is a patch to fix this, with a test case.\n>>\n>> The patch did not compile, thanks to a last-minute change in a field\n>> name. Here's a fixed version.\n>\n> All I heard is crickets, so committed and backported to all supported \n> versions.\n>\n\n\n", "msg_date": "Mon, 29 Jul 2024 14:37:34 +0700", "msg_from": "\"a.kozhemyakin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Visibility bug with prepared transaction with subtransactions on\n standby" } ]
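For readers following along without the patch in front of them: the pg_subtrans bookkeeping discussed above is done with SubTransSetParent(). The committed fix lives in the two-phase/recovery code (see the attached patch); the fragment below is only an illustrative sketch with an invented helper name, assuming the caller already has the prepared transaction's top-level XID and its array of subtransaction XIDs.

#include "postgres.h"
#include "access/subtrans.h"
#include "access/transam.h"

/* hypothetical helper -- not the actual patched code */
static void
record_prepared_subxids(TransactionId topxid,
						int nsubxids, const TransactionId *subxids)
{
	/*
	 * Link each subtransaction XID of the still-prepared transaction to its
	 * top-level XID in pg_subtrans, so that snapshots taken on the standby
	 * treat those subxids as belonging to an in-progress transaction rather
	 * than as aborted -- the problem described at the top of the thread.
	 */
	for (int i = 0; i < nsubxids; i++)
		SubTransSetParent(subxids[i], topxid);
}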
[ { "msg_contents": "Hi,\n\nI'm working to start a mentoring program where code contributors can\nbe mentored by current committers. Applications are now open:\nhttps://forms.gle/dgjmdxtHYXCSg6aB7\n\nNine committers have volunteered to mentor one person each; hence, the\nanticipated number of acceptances is less than or equal to nine. In\nthe future, we may have more mentors, or some mentors may be willing\nto take more than one mentee, or some mentoring relationships may end,\nopening up spots for new people, but right now I have nine slots\nmaximum. Even if less than nine people apply initially, that doesn't\nguarantee that your application will be accepted, because the way this\nworks is you can only be matched to a committer if you want to be\nmatched with them and they want to be matched with you. If you don't\nalready have a significant track record on pgsql-hackers, it is\nprobably unlikely that you will find a mentor in this program at this\ntime. Even if you do, you may not match with a mentor for any number\nof reasons: not enough slots, time zone, language issues, your\nparticular interests as contrasted with those of the mentors, etc.\n\nThe basic expectation around mentorship is that your mentor will have\na voice call with you at least once per month for at least one hour.\nBefore that call, you should give them some idea what you'd like to\ntalk about and they should do some non-zero amount of preparation.\nDuring that call, they'll try to give you some useful advice. Maybe\nthey'll be willing to do other things, too, like review and commit\nyour patches, or email back and forth with you off-list, or chat using\nan instant messaging service, but if they do any of that stuff, that's\nextra. Either the mentor or the mentee is free to end the mentoring\nrelationship at any time for any reason, or for no reason. If that\nhappens, please let me know, whether it's because of an explicit\ndecision on someone's part, or because somehow the monthly voice calls\nhave ceased to occur.\n\nPeriodically, someone -- most likely not me, since a few people have\nbeen kind enough to offer help -- will contact mentors and mentees to\nget feedback on how things are going. We'll use this feedback to\nimprove the program, which might involve adjusting mentoring\nassignments, or might involve taking such other actions as the\nsituation may suggest.\n\nIn the future, I would like to expand this program to include\nnon-committer mentors. The idea would be that committers would most\nlikely want to mentor more senior contributors and senior\nnon-committers could mentor more junior contributors, so that we pay\nit all forward. If this is something you'd be interested in\nparticipating in, whether as a co-organizer, mentor, or mentee, please\nlet me know. It might also be advantageous to expand this program, or\nhave a separate program, to mentor people making non-code\ncontributions e.g. mentoring for conference organizers. I've chosen to\nfocus on mentorship for code contribution because I know enough about\nit to function as an organizer for such an effort.\n\nIf you apply for this program, you can expect to receive an email from\nme in the next couple of weeks letting you know the result of your\napplication. 
If for some reason that does not occur, please feel free\nto email me privately, but note that I'll want to give a bit of time\nfor people to see this email and fill out the form before doing\nanything, and then I'll need to talk over possibilities with the\nmentors before finalizing anything, so it will take a bit of time.\n\nFinally, I would like to extend a special thanks to the mentors for\nvolunteering to mentor, and a more general thanks to everyone who\ncontributes to PostgreSQL in any way or is interested in doing so for\ntheir interest in and hard work on the project.\n\nThanks,\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Jun 2024 13:12:09 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "call for applications: mentoring program for code contributors" }, { "msg_contents": "On Jun 20, 2024, at 13:12, Robert Haas <[email protected]> wrote:\n\n> I'm working to start a mentoring program where code contributors can\n> be mentored by current committers. Applications are now open:\n> https://forms.gle/dgjmdxtHYXCSg6aB7\n\nThis is amazing! Thank you for putting it together, Robert.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Thu, 20 Jun 2024 13:34:11 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: call for applications: mentoring program for code contributors" }, { "msg_contents": "Hi,\n\n> > I'm working to start a mentoring program where code contributors can\n> > be mentored by current committers. Applications are now open:\n> > https://forms.gle/dgjmdxtHYXCSg6aB7\n>\n> This is amazing! Thank you for putting it together, Robert.\n\nGreat initiative! Thanks Rovert and to everyone involved.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 21 Jun 2024 11:25:19 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: call for applications: mentoring program for code contributors" }, { "msg_contents": "On 20/06/2024 19:12, Robert Haas wrote:\n\n>\n> I'm working to start a mentoring program where code contributors can\n> be mentored by current committers. Applications are now open:\n> https://forms.gle/dgjmdxtHYXCSg6aB7\n\n> Periodically, someone -- most likely not me, since a few people have\n> been kind enough to offer help -- will contact mentors and mentees to\n> get feedback on how things are going. We'll use this feedback to\n> improve the program, which might involve adjusting mentoring\n> assignments, or might involve taking such other actions as the\n> situation may suggest.\n\nI'm offering to help with this part.\n\n-- \n\t\t\t\tAndreas 'ads' Scherbaum\nGerman PostgreSQL User Group\nEuropean PostgreSQL User Group - Board of Directors\nVolunteer Regional Contact, Germany - PostgreSQL Project\n\n\n\n", "msg_date": "Sat, 22 Jun 2024 00:42:36 +0200", "msg_from": "Andreas 'ads' Scherbaum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: call for applications: mentoring program for code contributors" }, { "msg_contents": "On Thu, Jun 20, 2024 at 1:12 PM Robert Haas <[email protected]> wrote:\n> I'm working to start a mentoring program where code contributors can\n> be mentored by current committers. Applications are now open:\n> https://forms.gle/dgjmdxtHYXCSg6aB7\n\nApplications are now closed. 
Initially, I had imagined just keeping\nthis form more or less indefinitely, but that looks clearly\nimpractical at this point, so what I'm going to do instead is create a\nnew form at some future point TBD and repeat this process, taking into\naccount what needs we have at that time. Part of the reason it seems\nimpractical to keep the form open is because a significant percentage\nof applications are from people who have posted a total of zero (0)\nemails to pgsql-hackers, and I don't want to waste my time or that of\nother committers by relying to such inquiries one by one. Hence, the\nform is closed for now, but with the intention of having a new one at\nsome point when the time seems opportune. That will also give people\nwho did not find a match this time an opportunity to resubmit if\nthey're still interested.\n\nMatching is largely complete at this point. I expect to send emails to\nall applicants letting them know what happened with their application\nsoon, hopefully tomorrow (my time). In preparation for that, allow me\nto say that I'm very pleased with the number of acceptances that I\nanticipate being able to extend. Some committers ended up deciding to\ntake two mentees, which is really great. More details on that soon.\nNonetheless, I am sure that those who did not find a mentor for one\nreason or another will be disappointed. I hope that no one will be so\ndisappointed that they give up on hacking on PostgreSQL. Remember, if\nyou didn't get matched to a mentor, you're no worse off than you were\nbefore, and your work on PostgreSQL is no less valuable than it was\nbefore! I am also hoping to start up something to provide some more\nlimited support to people who didn't match to a mentor, and I'll tell\nyou more about that when and if I have more to say.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Jul 2024 13:15:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: call for applications: mentoring program for code contributors" }, { "msg_contents": "Hi,\n\nI've now sent acceptance and rejection emails to, I believe, all\napplicants. If you applied and didn't get an email, let me know.\n\nFor those who may be interested in the statistics, I received 34\napplications. Although I initially anticipated being unable to accept\nmore than 9, because we had 9 committers volunteer to mentor, it\nturned out that five of those committers ended up wanting to mentor\ntwo people each, so I ended up being able to send 14 acceptances. I'm\nfairly satisfied with that, especially because 12 or 13 of the people\nwho were rejected have not, to the best of my ability to figure such\nthings out, ever sent an email to the list. Of course, it would be\nnice to do better, but I feel like for the first time around, this\nwent well.\n\nLet's see how things go from here!\n\n...Robert\n\n\n", "msg_date": "Tue, 2 Jul 2024 16:00:48 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: call for applications: mentoring program for code contributors" }, { "msg_contents": "Hi Robert,\n\nI loved this initiative. Please allow me to introduce myself: I have been\nusing Postgres for 10 years both as a backend developer connecting to a\nPostgres cluster, a DBA, and also I studied thoroughly the code of Postgres\nand some plugins. 
I'm currently working on an ambitious plan to have\nlock-free full vacuum and continuous ordering of a clustered index.\nMy first step in this effort is creating a DataGrip plugin that shows\nvarious related stats and most importantly a visual view of the data pages\nwhere we can see the ordering and fragmentation of the pages.\nI do have the complete plan in my head and the DataGrip plugin itself is\n50% done. However, having a mentor/partner would help a lot if that is a\npossibility.\n\nRegards,\nAhmed\n\nOn Tue, Jul 2, 2024 at 5:01 PM Robert Haas <[email protected]> wrote:\n\n> Hi,\n>\n> I've now sent acceptance and rejection emails to, I believe, all\n> applicants. If you applied and didn't get an email, let me know.\n>\n> For those who may be interested in the statistics, I received 34\n> applications. Although I initially anticipated being unable to accept\n> more than 9, because we had 9 committers volunteer to mentor, it\n> turned out that five of those committers ended up wanting to mentor\n> two people each, so I ended up being able to send 14 acceptances. I'm\n> fairly satisfied with that, especially because 12 or 13 of the people\n> who were rejected have not, to the best of my ability to figure such\n> things out, ever sent an email to the list. Of course, it would be\n> nice to do better, but I feel like for the first time around, this\n> went well.\n>\n> Let's see how things go from here!\n>\n> ...Robert\n>\n>\n>\n\nHi Robert,I loved this initiative. Please allow me to introduce myself: I have been using Postgres for 10 years both as a backend developer connecting to a Postgres cluster, a DBA, and also I studied thoroughly the code of Postgres and some plugins. I'm currently working on an ambitious plan to have lock-free full vacuum and continuous ordering of a clustered index.My first step in this effort is creating a DataGrip plugin that shows various related stats and most importantly a visual view of the data pages where we can see the ordering and fragmentation of the pages.I do have the complete plan in my head and the DataGrip plugin itself is 50% done. However, having a mentor/partner would help a lot if that is a possibility.Regards,AhmedOn Tue, Jul 2, 2024 at 5:01 PM Robert Haas <[email protected]> wrote:Hi,\n\nI've now sent acceptance and rejection emails to, I believe, all\napplicants. If you applied and didn't get an email, let me know.\n\nFor those who may be interested in the statistics, I received 34\napplications. Although I initially anticipated being unable to accept\nmore than 9, because we had 9 committers volunteer to mentor, it\nturned out that five of those committers ended up wanting to mentor\ntwo people each, so I ended up being able to send 14 acceptances. I'm\nfairly satisfied with that, especially because 12 or 13 of the people\nwho were rejected have not, to the best of my ability to figure such\nthings out, ever sent an email to the list. Of course, it would be\nnice to do better, but I feel like for the first time around, this\nwent well.\n\nLet's see how things go from here!\n\n...Robert", "msg_date": "Tue, 2 Jul 2024 17:27:44 -0300", "msg_from": "Ahmed Yarub Hani Al Nuaimi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: call for applications: mentoring program for code contributors" } ]
[ { "msg_contents": "While working on skip scan, I stumbled upon a bug on HEAD. This is an\nissue in my commit 5bf748b8, \"Enhance nbtree ScalarArrayOp execution\".\nThe attached test case (repro_wrong_prim.sql) causes an assertion\nfailure on HEAD. Here's the stack trace:\n\nTRAP: failed Assert(\"so->keyData[opsktrig].sk_strategy !=\nBTEqualStrategyNumber\"), File:\n\"../source/src/backend/access/nbtree/nbtutils.c\", Line: 2475, PID:\n1765589\n[0x55942a24db8f] _bt_advance_array_keys:\n/mnt/nvme/postgresql/patch/build_meson_dc/../source/src/backend/access/nbtree/nbtutils.c:2475\n[0x55942a24bf22] _bt_checkkeys:\n/mnt/nvme/postgresql/patch/build_meson_dc/../source/src/backend/access/nbtree/nbtutils.c:3797\n[0x55942a244160] _bt_readpage:\n/mnt/nvme/postgresql/patch/build_meson_dc/../source/src/backend/access/nbtree/nbtsearch.c:2221\n[0x55942a2434ca] _bt_first:\n/mnt/nvme/postgresql/patch/build_meson_dc/../source/src/backend/access/nbtree/nbtsearch.c:1888\n[0x55942a23ef88] btgettuple:\n/mnt/nvme/postgresql/patch/build_meson_dc/../source/src/backend/access/nbtree/nbtree.c:259\n\nThe problem is that _bt_advance_array_keys() doesn't take sufficient\ncare at the point where it decides whether its call to\n_bt_check_compare against finaltup (with the scan direction flipped\naround) indicates that another primitive index scan is required. The\nfinal decision is conditioned on rules about how the scan key offset\nsktrig that triggered the call to _bt_advance_array_keys() relates to\nthe scan key offset that was set by the _bt_check_compare finaltup\ncomparison. This was fragile. It breaks with this test case because of\nfairly subtle conditions around when and how the arrays advance, the\nlayout of the relevant leaf page, and the placement of inequality scan\nkeys.\n\nWhen assertions are disabled, we do multiple primitive index scans\nthat land on the same leaf page, which isn't supposed to be possible\nanymore. The query gives correct answers, but this behavior is\ndefinitely wrong (it is simply supposed to be impossible now, per\n5bf748b8's commit message).\n\nAttached is a draft bug fix patch. It nails down the test by simply\ntesting \"so->keyData[opsktrig].sk_strategy != BTEqualStrategyNumber\"\ndirectly, rather than comparing scan key offsets. This is a far\nsimpler and far more direct approach.\n\nYou might wonder why I didn't do it like this in the first place. It\njust worked out that way. The code in question was written before I\nchanged the design of _bt_check_compare (in the draft patch that\nbecame commit 5bf748b8). Up until not that long before the patch was\ncommitted, _bt_check_compare would set \"continuescan=false\" for\nnon-required arrays. That factor made detecting whether or not the\nrelevant _bt_check_compare call had in fact encountered a required\ninequality of the kind we need to detect (to decide on whether to\nstart another primitive index scan) difficult and messy. However, the\nfinal committed patch simplified _bt_check_compare, making the\napproach I've taken in the bug fix patch possible. I just never made\nthe connection before now.\n\n-- \nPeter Geoghegan", "msg_date": "Thu, 20 Jun 2024 17:43:58 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Issue with \"start another primitive scan\" logic during nbtree array\n advancement" } ]
[ { "msg_contents": "Hi,\n\nIf vacuum fails to remove a tuple with xmax older than\nVacuumCutoffs->OldestXmin and younger than\nGlobalVisState->maybe_needed, it will ERROR out when determining\nwhether or not to freeze the tuple with \"cannot freeze committed\nxmax\".\n\nIn back branches starting with 14, failing to remove tuples older than\nOldestXmin during pruning caused vacuum to infinitely loop in\nlazy_scan_prune(), as investigated on this [1] thread.\n\nOn master, after 1ccc1e05ae removed the retry loop in\nlazy_scan_prune() and stopped comparing tuples to OldestXmin, the hang\ncould no longer happen, but we can still attempt to freeze dead tuples\nvisibly killed before OldestXmin -- resulting in an ERROR.\n\nPruning may fail to remove dead tuples with xmax before OldestXmin if\nthe tuple is not considered removable by GlobalVisState.\n\nFor vacuum, the GlobalVisState is initially calculated at the\nbeginning of vacuuming the relation -- at the same time and with the\nsame value as VacuumCutoffs->OldestXmin.\n\nA backend's GlobalVisState may be updated again when it is accessed if\na new snapshot is taken or if something caused ComputeXidHorizons() to\nbe called.\n\nThis can happen, for example, at the end of a round of index vacuuming\nwhen GetOldestNonRemovableTransactionId() is called.\n\nNormally this may result in GlobalVisState's horizon moving forward --\npotentially allowing more dead tuples to be removed.\n\nHowever, if a disconnected standby with a running transaction older\nthan VacuumCutoffs->OldestXmin reconnects to the primary after vacuum\ninitially calculates GlobalVisState and OldestXmin but before\nGlobalVisState is updated, the value of GlobalVisState->maybe_needed\ncould go backwards.\n\nIf this happens in the middle of vacuum's first pruning and freezing\npass, it is possible that pruning/freezing could then encounter a\ntuple whose xmax is younger than GlobalVisState->maybe_needed and\nolder than VacuumCutoffs->OldestXmin. heap_prune_satisfies_vacuum()\nwould deem the tuple HEAPTUPLE_RECENTLY_DEAD and would not remove it.\nBut the heap_pre_freeze_checks() would ERROR out with \"cannot freeze\ncommitted xmax\". This check is to avoid freezing dead tuples.\n\nWe can fix this by always removing tuples considered dead before\nVacuumCutoffs->OldestXmin. This is okay even if a reconnected standby\nhas a transaction that sees that tuple as alive, because it will\nsimply wait to replay the removal until it would be correct to do so\nor recovery conflict handling will cancel the transaction that sees\nthe tuple as alive and allow replay to continue.\n\nAttached is the suggested fix for master plus a repro. I wrote it as a\nrecovery suite TAP test, but I am _not_ proposing we add it to the\nongoing test suite. It is, amongst other things, definitely prone to\nflaking. 
I also had to use loads of data to force two index vacuuming\npasses now that we have TIDStore, so it is a slow test.\n\nIf you want to run the repro with meson, you'll have to add\n't/099_vacuum_hang.pl' to src/test/recovery/meson.build and then run it with:\n\nmeson test postgresql:recovery / recovery/099_vacuum_hang\n\nIf you use autotools, you can run it with:\nmake check PROVE_TESTS=\"t/099_vacuum_hang.pl\"\n\nThe repro forces a round of index vacuuming after the standby\nreconnects and before pruning a dead tuple whose xmax is older than\nOldestXmin.\n\nAt the end of the round of index vacuuming, _bt_pendingfsm_finalize()\ncalls GetOldestNonRemovableTransactionId(), thereby updating the\nbackend's GlobalVisState and moving maybe_needed backwards.\n\nThen vacuum's first pass will continue with pruning and find our later\ninserted and updated tuple HEAPTUPLE_RECENTLY_DEAD when compared to\nmaybe_needed but HEAPTUPLE_DEAD when compared to OldestXmin.\n\nI make sure that the standby reconnects between vacuum_get_cutoffs()\nand pruning because I have a cursor on the page keeping VACUUM FREEZE\nfrom getting a cleanup lock.\n\nSee the repro for step-by-step explanations of how it works.\n\nI have a modified version of this that repros the infinite loop on\n14-16 with substantially less data. See it here [2]. Also, the repro\nattached to this mail won't work on 14 and 15 because of changes to\nbackground_psql.\n\n- Melanie\n\n[1] https://postgr.es/m/20240415173913.4zyyrwaftujxthf2%40awork3.anarazel.de#1b216b7768b5bd577a3d3d51bd5aadee\n[2] https://www.postgresql.org/message-id/CAAKRu_Y_NJzF4-8gzTTeaOuUL3CcGoXPjXcAHbTTygT8AyVqag%40mail.gmail.com", "msg_date": "Thu, 20 Jun 2024 19:42:07 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Thu, Jun 20, 2024 at 7:42 PM Melanie Plageman\n<[email protected]> wrote:\n> If vacuum fails to remove a tuple with xmax older than\n> VacuumCutoffs->OldestXmin and younger than\n> GlobalVisState->maybe_needed, it will ERROR out when determining\n> whether or not to freeze the tuple with \"cannot freeze committed\n> xmax\".\n>\n> In back branches starting with 14, failing to remove tuples older than\n> OldestXmin during pruning caused vacuum to infinitely loop in\n> lazy_scan_prune(), as investigated on this [1] thread.\n\nThis is a great summary.\n\n> We can fix this by always removing tuples considered dead before\n> VacuumCutoffs->OldestXmin. This is okay even if a reconnected standby\n> has a transaction that sees that tuple as alive, because it will\n> simply wait to replay the removal until it would be correct to do so\n> or recovery conflict handling will cancel the transaction that sees\n> the tuple as alive and allow replay to continue.\n\nI think that this is the right general approach.\n\n> The repro forces a round of index vacuuming after the standby\n> reconnects and before pruning a dead tuple whose xmax is older than\n> OldestXmin.\n>\n> At the end of the round of index vacuuming, _bt_pendingfsm_finalize()\n> calls GetOldestNonRemovableTransactionId(), thereby updating the\n> backend's GlobalVisState and moving maybe_needed backwards.\n\nRight. 
I saw details exactly consistent with this when I used GDB\nagainst a production instance.\n\nI'm glad that you were able to come up with a repro that involves\nexactly the same basic elements, including index page deletion.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 20 Jun 2024 20:02:16 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "Hi, Melanie! I'm glad to hear you that you have found a root case of the \nproblem) Thank you for that!\n\nOn 21.06.2024 02:42, Melanie Plageman wrote:\n> Hi,\n>\n> If vacuum fails to remove a tuple with xmax older than\n> VacuumCutoffs->OldestXmin and younger than\n> GlobalVisState->maybe_needed, it will ERROR out when determining\n> whether or not to freeze the tuple with \"cannot freeze committed\n> xmax\".\n>\n> In back branches starting with 14, failing to remove tuples older than\n> OldestXmin during pruning caused vacuum to infinitely loop in\n> lazy_scan_prune(), as investigated on this [1] thread.\n>\n> On master, after 1ccc1e05ae removed the retry loop in\n> lazy_scan_prune() and stopped comparing tuples to OldestXmin, the hang\n> could no longer happen, but we can still attempt to freeze dead tuples\n> visibly killed before OldestXmin -- resulting in an ERROR.\n>\n> Pruning may fail to remove dead tuples with xmax before OldestXmin if\n> the tuple is not considered removable by GlobalVisState.\n>\n> For vacuum, the GlobalVisState is initially calculated at the\n> beginning of vacuuming the relation -- at the same time and with the\n> same value as VacuumCutoffs->OldestXmin.\n>\n> A backend's GlobalVisState may be updated again when it is accessed if\n> a new snapshot is taken or if something caused ComputeXidHorizons() to\n> be called.\n>\n> This can happen, for example, at the end of a round of index vacuuming\n> when GetOldestNonRemovableTransactionId() is called.\n>\n> Normally this may result in GlobalVisState's horizon moving forward --\n> potentially allowing more dead tuples to be removed.\n>\n> However, if a disconnected standby with a running transaction older\n> than VacuumCutoffs->OldestXmin reconnects to the primary after vacuum\n> initially calculates GlobalVisState and OldestXmin but before\n> GlobalVisState is updated, the value of GlobalVisState->maybe_needed\n> could go backwards.\n>\n> If this happens in the middle of vacuum's first pruning and freezing\n> pass, it is possible that pruning/freezing could then encounter a\n> tuple whose xmax is younger than GlobalVisState->maybe_needed and\n> older than VacuumCutoffs->OldestXmin. heap_prune_satisfies_vacuum()\n> would deem the tuple HEAPTUPLE_RECENTLY_DEAD and would not remove it.\n> But the heap_pre_freeze_checks() would ERROR out with \"cannot freeze\n> committed xmax\". This check is to avoid freezing dead tuples.\n>\n> We can fix this by always removing tuples considered dead before\n> VacuumCutoffs->OldestXmin. 
This is okay even if a reconnected standby\n> has a transaction that sees that tuple as alive, because it will\n> simply wait to replay the removal until it would be correct to do so\n> or recovery conflict handling will cancel the transaction that sees\n> the tuple as alive and allow replay to continue.\n\nThis is an interesting and difficult case) I noticed that when initializing the\ncluster, in my opinion, we provide excessive freezing. Initialization takes a\nlong time, which can lead, for example, to longer test execution. I got rid of this by\nadding a check that OldestMxact is not FirstMultiXactId, and it works fine.\n\nif (prstate->cutoffs &&\nTransactionIdIsValid(prstate->cutoffs->OldestXmin) &&\nprstate->cutoffs->OldestMxact != FirstMultiXactId &&\nNormalTransactionIdPrecedes(dead_after, prstate->cutoffs->OldestXmin))\n     return HEAPTUPLE_DEAD;\n\nCan I keep it?\n\n> Attached is the suggested fix for master plus a repro. I wrote it as a\n> recovery suite TAP test, but I am _not_ proposing we add it to the\n> ongoing test suite. It is, amongst other things, definitely prone to\n> flaking. I also had to use loads of data to force two index vacuuming\n> passes now that we have TIDStore, so it is a slow test.\n>\n> If you want to run the repro with meson, you'll have to add\n> 't/099_vacuum_hang.pl' to src/test/recovery/meson.build and then run it with:\n>\n> meson test postgresql:recovery / recovery/099_vacuum_hang\n>\n> If you use autotools, you can run it with:\n> make check PROVE_TESTS=\"t/099_vacuum_hang.pl\n>\n> The repro forces a round of index vacuuming after the standby\n> reconnects and before pruning a dead tuple whose xmax is older than\n> OldestXmin.\n>\n> At the end of the round of index vacuuming, _bt_pendingfsm_finalize()\n> calls GetOldestNonRemovableTransactionId(), thereby updating the\n> backend's GlobalVisState and moving maybe_needed backwards.\n>\n> Then vacuum's first pass will continue with pruning and find our later\n> inserted and updated tuple HEAPTUPLE_RECENTLY_DEAD when compared to\n> maybe_needed but HEAPTUPLE_DEAD when compared to OldestXmin.\n>\n> I make sure that the standby reconnects between vacuum_get_cutoffs()\n> and pruning because I have a cursor on the page keeping VACUUM FREEZE\n> from getting a cleanup lock.\n>\n> See the repro for step-by-step explanations of how it works.\n>\n> I have a modified version of this that repros the infinite loop on\n> 14-16 with substantially less data. See it here [2]. Also, the repro\n> attached to this mail won't work on 14 and 15 because of changes to\n> background_psql.\n>\n> [1]https://postgr.es/m/20240415173913.4zyyrwaftujxthf2%40awork3.anarazel.de#1b216b7768b5bd577a3d3d51bd5aadee\n> [2]https://www.postgresql.org/message-id/CAAKRu_Y_NJzF4-8gzTTeaOuUL3CcGoXPjXcAHbTTygT8AyVqag%40mail.gmail.com\nTo be honest, the meson test is new for me, but I see its useful\nfeatures. I think I will use it for checking my features)\n\nI couldn't understand why the replica is necessary here. Now I am\ndigging why I got the similar behavior without replica when I have only\none instance. I'm still checking this in my test, but I believe this patch fixes the\noriginal problem because the symptoms were the same.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\nHi, Melanie! 
I'm glad to hear you that you have found a root case\n of the problem) Thank you for that!\n\nOn 21.06.2024 02:42, Melanie Plageman\n wrote:\n\n\nHi,\n\nIf vacuum fails to remove a tuple with xmax older than\nVacuumCutoffs->OldestXmin and younger than\nGlobalVisState->maybe_needed, it will ERROR out when determining\nwhether or not to freeze the tuple with \"cannot freeze committed\nxmax\".\n\nIn back branches starting with 14, failing to remove tuples older than\nOldestXmin during pruning caused vacuum to infinitely loop in\nlazy_scan_prune(), as investigated on this [1] thread.\n\nOn master, after 1ccc1e05ae removed the retry loop in\nlazy_scan_prune() and stopped comparing tuples to OldestXmin, the hang\ncould no longer happen, but we can still attempt to freeze dead tuples\nvisibly killed before OldestXmin -- resulting in an ERROR.\n\nPruning may fail to remove dead tuples with xmax before OldestXmin if\nthe tuple is not considered removable by GlobalVisState.\n\nFor vacuum, the GlobalVisState is initially calculated at the\nbeginning of vacuuming the relation -- at the same time and with the\nsame value as VacuumCutoffs->OldestXmin.\n\nA backend's GlobalVisState may be updated again when it is accessed if\na new snapshot is taken or if something caused ComputeXidHorizons() to\nbe called.\n\nThis can happen, for example, at the end of a round of index vacuuming\nwhen GetOldestNonRemovableTransactionId() is called.\n\nNormally this may result in GlobalVisState's horizon moving forward --\npotentially allowing more dead tuples to be removed.\n\nHowever, if a disconnected standby with a running transaction older\nthan VacuumCutoffs->OldestXmin reconnects to the primary after vacuum\ninitially calculates GlobalVisState and OldestXmin but before\nGlobalVisState is updated, the value of GlobalVisState->maybe_needed\ncould go backwards.\n\nIf this happens in the middle of vacuum's first pruning and freezing\npass, it is possible that pruning/freezing could then encounter a\ntuple whose xmax is younger than GlobalVisState->maybe_needed and\nolder than VacuumCutoffs->OldestXmin. heap_prune_satisfies_vacuum()\nwould deem the tuple HEAPTUPLE_RECENTLY_DEAD and would not remove it.\nBut the heap_pre_freeze_checks() would ERROR out with \"cannot freeze\ncommitted xmax\". This check is to avoid freezing dead tuples.\n\nWe can fix this by always removing tuples considered dead before\nVacuumCutoffs->OldestXmin. This is okay even if a reconnected standby\nhas a transaction that sees that tuple as alive, because it will\nsimply wait to replay the removal until it would be correct to do so\nor recovery conflict handling will cancel the transaction that sees\nthe tuple as alive and allow replay to continue.\n\nThis is an interesting and difficult case) I noticed that when initializing the cluster, in my opinion, we provide excessive freezing. Initialization takes a long time, which can lead, for example, to longer test execution. I got rid of this by adding the OldestMxact checkbox is not FirstMultiXactId, and it works fine.\nif (prstate->cutoffs &&\n TransactionIdIsValid(prstate->cutoffs->OldestXmin)\n &&\n prstate->cutoffs->OldestMxact != FirstMultiXactId &&\n NormalTransactionIdPrecedes(dead_after,\n prstate->cutoffs->OldestXmin))\n     return HEAPTUPLE_DEAD;\n\nCan I keep it?\n\nAttached is the suggested fix for master plus a repro. I wrote it as a\nrecovery suite TAP test, but I am _not_ proposing we add it to the\nongoing test suite. It is, amongst other things, definitely prone to\nflaking. 
", "msg_date": "Mon, 24 Jun 2024 11:10:38 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On 21/06/2024 03:02, Peter Geoghegan wrote:\n> On Thu, Jun 20, 2024 at 7:42 PM Melanie Plageman\n> <[email protected]> wrote:\n>> If vacuum fails to remove a tuple with xmax older than\n>> VacuumCutoffs->OldestXmin and younger than\n>> GlobalVisState->maybe_needed, it will ERROR out when determining\n>> whether or not to freeze the tuple with \"cannot freeze committed\n>> xmax\".\n>>\n>> In back branches starting with 14, failing to remove tuples older than\n>> OldestXmin during pruning caused vacuum to infinitely loop in\n>> lazy_scan_prune(), as investigated on this [1] thread.\n> \n> This is a great summary.\n\n+1\n\n>> We can fix this by always removing tuples considered dead before\n>> VacuumCutoffs->OldestXmin. 
This is okay even if a reconnected standby\n>> has a transaction that sees that tuple as alive, because it will\n>> simply wait to replay the removal until it would be correct to do so\n>> or recovery conflict handling will cancel the transaction that sees\n>> the tuple as alive and allow replay to continue.\n> \n> I think that this is the right general approach.\n\n+1\n\n>> The repro forces a round of index vacuuming after the standby\n>> reconnects and before pruning a dead tuple whose xmax is older than\n>> OldestXmin.\n>>\n>> At the end of the round of index vacuuming, _bt_pendingfsm_finalize()\n>> calls GetOldestNonRemovableTransactionId(), thereby updating the\n>> backend's GlobalVisState and moving maybe_needed backwards.\n> \n> Right. I saw details exactly consistent with this when I used GDB\n> against a production instance.\n> \n> I'm glad that you were able to come up with a repro that involves\n> exactly the same basic elements, including index page deletion.\n\nWould it be possible to make it robust so that we could always run it \nwith \"make check\"? This seems like an important corner case to \nregression test.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 11:27:42 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 4:10 AM Alena Rybakina <[email protected]> wrote:\n>\n> We can fix this by always removing tuples considered dead before\n> VacuumCutoffs->OldestXmin. This is okay even if a reconnected standby\n> has a transaction that sees that tuple as alive, because it will\n> simply wait to replay the removal until it would be correct to do so\n> or recovery conflict handling will cancel the transaction that sees\n> the tuple as alive and allow replay to continue.\n>\n> This is an interesting and difficult case) I noticed that when initializing the cluster, in my opinion, we provide excessive freezing. Initialization takes a long time, which can lead, for example, to longer test execution. I got rid of this by adding the OldestMxact checkbox is not FirstMultiXactId, and it works fine.\n>\n> if (prstate->cutoffs &&\n> TransactionIdIsValid(prstate->cutoffs->OldestXmin) &&\n> prstate->cutoffs->OldestMxact != FirstMultiXactId &&\n> NormalTransactionIdPrecedes(dead_after, prstate->cutoffs->OldestXmin))\n> return HEAPTUPLE_DEAD;\n>\n> Can I keep it?\n\nThis looks like an addition to the new criteria I added to\nheap_prune_satisfies_vacuum(). Is that what you are suggesting? If so,\nit looks like it would only return HEAPTUPLE_DEAD (and thus only\nremove) a subset of the tuples my original criteria would remove. When\nvacuum calculates OldestMxact as FirstMultiXactId, it would not remove\nthose tuples deleted before OldestXmin. It seems like OldestMxact will\nequal FirstMultiXactID sometimes right after initdb and after\ntransaction ID wraparound. I'm not sure I totally understand the\ncriteria.\n\nOne thing I find confusing about this is that this would actually\nremove less tuples than with my criteria -- which could lead to more\nfreezing. When vacuum calculates OldestMxact == FirstMultiXactID, we\nwould not remove tuples deleted before OldestXmin and thus return\nHEAPTUPLE_RECENTLY_DEAD for those tuples. Then we would consider\nfreezing them. 
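As an aside, the prune-then-freeze interaction being described can be sketched in isolation. This is a rough standalone model only (plain C, simplified 64-bit XIDs, no wraparound handling, names that loosely mirror the discussion -- not PostgreSQL source):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t xid64;

/* prune step: true means the dead tuple is removed and never reaches freezing */
static bool
pruned(xid64 dead_after, xid64 maybe_needed, xid64 oldest_xmin,
       bool remove_before_oldest_xmin)
{
    if (dead_after < maybe_needed)
        return true;
    if (remove_before_oldest_xmin && dead_after < oldest_xmin)
        return true;
    return false;
}

/*
 * freeze step: a surviving tuple whose committed xmax precedes OldestXmin is
 * exactly the "cannot freeze committed xmax" situation
 */
static void
check_surviving_tuple(xid64 dead_after, xid64 oldest_xmin)
{
    if (dead_after < oldest_xmin)
        printf("ERROR: cannot freeze committed xmax\n");
    else
        printf("kept as recently dead, no error\n");
}

int
main(void)
{
    xid64 oldest_xmin = 1000;   /* fixed at the start of the heap pass */
    xid64 maybe_needed = 990;   /* moved backwards mid-vacuum */
    xid64 dead_after = 995;     /* committed deleter, older than OldestXmin */

    for (int remove = 0; remove <= 1; remove++)
    {
        printf("remove_before_oldest_xmin=%d: ", remove);
        if (pruned(dead_after, maybe_needed, oldest_xmin, remove))
            printf("tuple removed during pruning\n");
        else
            check_surviving_tuple(dead_after, oldest_xmin);
    }
    return 0;
}

In the model, a tuple that is not removed but has a committed xmax older than OldestXmin is precisely the case that later trips the freeze check.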
So, it seems like we would do more freezing by adding\nthis criteria.\n\nCould you explain more about how the criteria you are suggesting\nworks? Are you saying it does less freezing than master or less\nfreezing than with my patch?\n\n> Attached is the suggested fix for master plus a repro. I wrote it as a\n> recovery suite TAP test, but I am _not_ proposing we add it to the\n> ongoing test suite. It is, amongst other things, definitely prone to\n> flaking. I also had to use loads of data to force two index vacuuming\n> passes now that we have TIDStore, so it is a slow test.\n-- snip --\n> I have a modified version of this that repros the infinite loop on\n> 14-16 with substantially less data. See it here [2]. Also, the repro\n> attached to this mail won't work on 14 and 15 because of changes to\n> background_psql.\n>\n> I couldn't understand why the replica is necessary here. Now I am digging why I got the similar behavior without replica when I have only one instance. I'm still checking this in my test, but I believe this patch fixes the original problem because the symptoms were the same.\n\nDid you get similar behavior on master or on back branches? Was the\nbehavior you observed the infinite loop or the error during\nheap_prepare_freeze_tuple()?\n\nIn my examples, the replica is needed because something has to move\nthe horizon on the primary backwards. When a standby reconnects with\nan older oldest running transaction ID than any of the running\ntransactions on the primary and the vacuuming backend recomputes its\nRecentXmin, the horizon may move backwards when compared to the\nhorizon calculated at the beginning of the vacuum. Vacuum does not\nrecompute cutoffs->OldestXmin during vacuuming a relation but it may\nrecompute the values in the GlobalVisState it uses for pruning.\n\nWe knew of only one other way that the horizon could move backwards\nwhich Matthias describes here [1]. However, this is thought to be its\nown concurrency-related bug in the commit-abort path that should be\nfixed -- as opposed to the standby reconnecting with an older oldest\nrunning transaction ID which can be expected.\n\nDo you know if you were seeing the effects of the scenario Matthias describes?\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAEze2WjMTh4KS0%3DQEQB-Jq%2BtDLPR%2B0%2BzVBMfVwSPK5A%3DWZa95Q%40mail.gmail.com\n\n\n", "msg_date": "Mon, 24 Jun 2024 10:37:08 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 4:27 AM Heikki Linnakangas <[email protected]> wrote:\n>\n> On 21/06/2024 03:02, Peter Geoghegan wrote:\n> > On Thu, Jun 20, 2024 at 7:42 PM Melanie Plageman\n> > <[email protected]> wrote:\n> >\n> >> The repro forces a round of index vacuuming after the standby\n> >> reconnects and before pruning a dead tuple whose xmax is older than\n> >> OldestXmin.\n> >>\n> >> At the end of the round of index vacuuming, _bt_pendingfsm_finalize()\n> >> calls GetOldestNonRemovableTransactionId(), thereby updating the\n> >> backend's GlobalVisState and moving maybe_needed backwards.\n> >\n> > Right. 
I saw details exactly consistent with this when I used GDB\n> > against a production instance.\n> >\n> > I'm glad that you were able to come up with a repro that involves\n> > exactly the same basic elements, including index page deletion.\n>\n> Would it be possible to make it robust so that we could always run it\n> with \"make check\"? This seems like an important corner case to\n> regression test.\n\nI'd have to look into how to ensure I can stabilize some of the parts\nthat seem prone to flaking. I can probably stabilize the vacuum bit\nwith a query of pg_stat_activity making sure it is waiting to acquire\nthe cleanup lock.\n\nI don't, however, see a good way around the large amount of data\nrequired to trigger more than one round of index vacuuming. I could\ngenerate the data more efficiently than I am doing here\n(generate_series() in the from clause). Perhaps with a copy? I know it\nis too slow now to go in an ongoing test, but I don't have an\nintuition around how fast it would have to be to be acceptable. Is\nthere a set of additional tests that are slower that we don't always\nrun? I didn't follow how the wraparound test ended up, but that seems\nlike one that would have been slow.\n\n- Melanie\n\n\n", "msg_date": "Mon, 24 Jun 2024 10:53:28 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "\n\n\nOn 6/24/24 16:53, Melanie Plageman wrote:\n> On Mon, Jun 24, 2024 at 4:27 AM Heikki Linnakangas <[email protected]> wrote:\n>>\n>> On 21/06/2024 03:02, Peter Geoghegan wrote:\n>>> On Thu, Jun 20, 2024 at 7:42 PM Melanie Plageman\n>>> <[email protected]> wrote:\n>>>\n>>>> The repro forces a round of index vacuuming after the standby\n>>>> reconnects and before pruning a dead tuple whose xmax is older than\n>>>> OldestXmin.\n>>>>\n>>>> At the end of the round of index vacuuming, _bt_pendingfsm_finalize()\n>>>> calls GetOldestNonRemovableTransactionId(), thereby updating the\n>>>> backend's GlobalVisState and moving maybe_needed backwards.\n>>>\n>>> Right. I saw details exactly consistent with this when I used GDB\n>>> against a production instance.\n>>>\n>>> I'm glad that you were able to come up with a repro that involves\n>>> exactly the same basic elements, including index page deletion.\n>>\n>> Would it be possible to make it robust so that we could always run it\n>> with \"make check\"? This seems like an important corner case to\n>> regression test.\n> \n> I'd have to look into how to ensure I can stabilize some of the parts\n> that seem prone to flaking. I can probably stabilize the vacuum bit\n> with a query of pg_stat_activity making sure it is waiting to acquire\n> the cleanup lock.\n> \n> I don't, however, see a good way around the large amount of data\n> required to trigger more than one round of index vacuuming. I could\n> generate the data more efficiently than I am doing here\n> (generate_series() in the from clause). Perhaps with a copy? I know it\n> is too slow now to go in an ongoing test, but I don't have an\n> intuition around how fast it would have to be to be acceptable. Is\n> there a set of additional tests that are slower that we don't always\n> run? 
I didn't follow how the wraparound test ended up, but that seems\n> like one that would have been slow.\n> \n\nI think it depends on what is the impact on the 'make check' duration.\nIf it could be added to one of the existing test groups, then it depends\non how long the slowest test in that group is. If the new test needs to\nbe in a separate group, it probably needs to be very fast.\n\nBut I was wondering how much time are we talking about, so I tried\n\ncreating a table, filling it with 300k rows => 250ms\ncreating an index => 180ms\ndelete 90% data => 200ms\nvacuum t => 130ms\n\nwhich with m_w_m=1MB does two rounds of index cleanup. That's ~760ms.\nI'm not sure how much more stuff does the test need to do, but this\nwould be pretty reasonable, if we could add it to an existing group.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Jun 2024 17:14:45 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Thu, Jun 20, 2024 at 7:42 PM Melanie Plageman\n<[email protected]> wrote:\n> We can fix this by always removing tuples considered dead before\n> VacuumCutoffs->OldestXmin.\n\nI don't have a great feeling about this fix. It's not that I think\nit's wrong. It's just that the underlying problem here is that we have\nheap_page_prune_and_freeze() getting both GlobalVisState *vistest and\nstruct VacuumCutoffs *cutoffs, and the vistest wants to be in charge\nof deciding what gets pruned, but that doesn't actually work, because\nas I pointed out in\nhttp://postgr.es/m/CA+Tgmob1BtWcP6R5-toVHB5wqHasPTSR2TJkcDCutMzaUYBaHQ@mail.gmail.com\nit's not properly synchronized with vacrel->cutoffs.OldestXmin. Your\nfix is to consider both variables, which again may be totally correct,\nbut wouldn't it be a lot better if we didn't have two variables\nfighting for control of the same behavior?\n\n(I'm not trying to be a nuisance here -- I think it's great that\nyou've done the work to pin this down and perhaps there is no better\nfix than what you've proposed.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jun 2024 11:43:59 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 11:44 AM Robert Haas <[email protected]> wrote:\n> I don't have a great feeling about this fix. It's not that I think\n> it's wrong. It's just that the underlying problem here is that we have\n> heap_page_prune_and_freeze() getting both GlobalVisState *vistest and\n> struct VacuumCutoffs *cutoffs, and the vistest wants to be in charge\n> of deciding what gets pruned, but that doesn't actually work, because\n> as I pointed out in\n> http://postgr.es/m/CA+Tgmob1BtWcP6R5-toVHB5wqHasPTSR2TJkcDCutMzaUYBaHQ@mail.gmail.com\n> it's not properly synchronized with vacrel->cutoffs.OldestXmin. Your\n> fix is to consider both variables, which again may be totally correct,\n> but wouldn't it be a lot better if we didn't have two variables\n> fighting for control of the same behavior?\n\nWhy would it be better? It's to our advantage to have vistest prune\naway extra tuples when possible. 
Andres placed a lot of emphasis on\nthat during the snapshot scalability work -- vistest can be updated on\nthe fly.\n\nThe problem here is that OldestXmin is supposed to be more\nconservative than vistest, which it almost always is, except in this\none edge case. I don't think that plugging that hole changes the basic\nfact that there is one source of truth about what *needs* to be\npruned. There is such a source of truth: OldestXmin.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 24 Jun 2024 12:42:55 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 12:43 PM Peter Geoghegan <[email protected]> wrote:\n> The problem here is that OldestXmin is supposed to be more\n> conservative than vistest, which it almost always is, except in this\n> one edge case. I don't think that plugging that hole changes the basic\n> fact that there is one source of truth about what *needs* to be\n> pruned. There is such a source of truth: OldestXmin.\n\nWell, another approach could be to make it so that OldestXmin actually\nis always more conservative than vistest rather than almost always.\n\nI agree with you that letting the pruning horizon move forward during\nvacuum is desirable. I'm just wondering if having the vacuum code need\nto know a second horizon is really the best way to address that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jun 2024 13:05:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 1:05 PM Robert Haas <[email protected]> wrote:\n> On Mon, Jun 24, 2024 at 12:43 PM Peter Geoghegan <[email protected]> wrote:\n> > The problem here is that OldestXmin is supposed to be more\n> > conservative than vistest, which it almost always is, except in this\n> > one edge case. I don't think that plugging that hole changes the basic\n> > fact that there is one source of truth about what *needs* to be\n> > pruned. There is such a source of truth: OldestXmin.\n>\n> Well, another approach could be to make it so that OldestXmin actually\n> is always more conservative than vistest rather than almost always.\n\nIf we did things like that then it would still be necessary to write a\npatch like the one Melanie came up with, on the grounds that we'd\nreally need to be paranoid about having missed some subtlety. We might\nas well just rely on the mechanism directly. I just don't think that\nit makes much difference.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 24 Jun 2024 13:33:21 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 1:05 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Jun 24, 2024 at 12:43 PM Peter Geoghegan <[email protected]> wrote:\n> > The problem here is that OldestXmin is supposed to be more\n> > conservative than vistest, which it almost always is, except in this\n> > one edge case. I don't think that plugging that hole changes the basic\n> > fact that there is one source of truth about what *needs* to be\n> > pruned. 
There is such a source of truth: OldestXmin.\n>\n> Well, another approach could be to make it so that OldestXmin actually\n> is always more conservative than vistest rather than almost always.\n\nFor the purposes of pruning, we are effectively always using the more\nconservative of the two with this patch.\n\nAre you more concerned about having a single horizon for pruning or\nabout having a horizon that does not move backwards after being\nestablished at the beginning of vacuuming the relation?\n\nRight now, in master, we do use a single horizon when determining what\nis pruned -- that from GlobalVisState. OldestXmin is only used for\nfreezing and full page visibility determinations. Using a different\nhorizon for pruning by vacuum than freezing is what is causing the\nerror on master.\n\n> I agree with you that letting the pruning horizon move forward during\n> vacuum is desirable. I'm just wondering if having the vacuum code need\n> to know a second horizon is really the best way to address that.\n\nI was thinking about this some more and I realized I don't really get\nwhy we think using GlobalVisState for pruning will let us remove more\ntuples in the common case.\n\nI had always thought it was because the vacuuming backend's\nGlobalVisState will get updated periodically throughout vacuum and so,\nassuming the oldest running transaction changes, our horizon for\nvacuum would change. But, in writing this repro, it is actually quite\nhard to get GlobalVisState to update. Our backend's RecentXmin needs\nto have changed. And there aren't very many places where we take a new\nsnapshot after starting to vacuum a relation. One of those is at the\nend of index vacuuming, but that can only affect the pruning horizon\nif we have to do multiple rounds of index vacuuming. Is that really\nthe case we are thinking of when we say we want the pruning horizon to\nmove forward during vacuum?\n\n- Melanie\n\n\n", "msg_date": "Mon, 24 Jun 2024 15:23:39 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 3:23 PM Melanie Plageman\n<[email protected]> wrote:\n> I had always thought it was because the vacuuming backend's\n> GlobalVisState will get updated periodically throughout vacuum and so,\n> assuming the oldest running transaction changes, our horizon for\n> vacuum would change.\n\nI believe that it's more of an aspirational thing at this point. That\nit is currently aspirational (it happens to some extent but isn't ever\nparticularly useful) shouldn't change the analysis about how to fix\nthis bug.\n\n> One of those is at the\n> end of index vacuuming, but that can only affect the pruning horizon\n> if we have to do multiple rounds of index vacuuming. Is that really\n> the case we are thinking of when we say we want the pruning horizon to\n> move forward during vacuum?\n\nNo, that's definitely not what we were thinking of. 
It's just an\naccident that it's almost the only thing that'll do that.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 24 Jun 2024 15:31:29 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Thu, Jun 20, 2024 at 7:42 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> If vacuum fails to remove a tuple with xmax older than\n> VacuumCutoffs->OldestXmin and younger than\n> GlobalVisState->maybe_needed, it will ERROR out when determining\n> whether or not to freeze the tuple with \"cannot freeze committed\n> xmax\".\n\nOne thing I don't understand is why it is okay to freeze the xmax of a\ndead tuple just because it is from an aborted update.\nheap_prepare_freeze_tuple() is called on HEAPTUPLE_RECENTLY_DEAD\ntuples with normal xmaxes (non-multis) so that it can freeze tuples\nfrom aborted updates. The only case in which we freeze dead tuples\nwith a non-multi xmax is if the xmax is from before OldestXmin and is\nalso not committed (so from an aborted update). Freezing dead tuples\nreplaces their xmax with InvalidTransactionId -- which would make them\nlook alive. So, it makes sense we don't do this for dead tuples in the\ncommon case. But why is it 1) okay and 2) desirable to freeze xmaxes\nof tuples from aborted updates? Won't it make them look alive again?\n\n- Melanie\n\n\n", "msg_date": "Mon, 24 Jun 2024 15:35:57 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 3:23 PM Melanie Plageman\n<[email protected]> wrote:\n> Are you more concerned about having a single horizon for pruning or\n> about having a horizon that does not move backwards after being\n> established at the beginning of vacuuming the relation?\n\nI'm not sure I understand. The most important thing here is fixing the\nbug. But if we have a choice of how to fix the bug, I'd prefer to do\nit by having the pruning code test one horizon that is always correct,\nrather than (as I think the patch does) having it test against two\nhorizons because as a way of covering possible discrepancies between\nthose values.\n\n> Right now, in master, we do use a single horizon when determining what\n> is pruned -- that from GlobalVisState. OldestXmin is only used for\n> freezing and full page visibility determinations. Using a different\n> horizon for pruning by vacuum than freezing is what is causing the\n> error on master.\n\nAgreed.\n\n> I had always thought it was because the vacuuming backend's\n> GlobalVisState will get updated periodically throughout vacuum and so,\n> assuming the oldest running transaction changes, our horizon for\n> vacuum would change. But, in writing this repro, it is actually quite\n> hard to get GlobalVisState to update. Our backend's RecentXmin needs\n> to have changed. And there aren't very many places where we take a new\n> snapshot after starting to vacuum a relation. One of those is at the\n> end of index vacuuming, but that can only affect the pruning horizon\n> if we have to do multiple rounds of index vacuuming. 
Is that really\n> the case we are thinking of when we say we want the pruning horizon to\n> move forward during vacuum?\n\nI thought the idea was that the GlobalVisTest stuff would force a\nrecalculation now and then, but maybe it doesn't actually do that?\n\nSuppose process A begins a transaction, acquires an XID, and then goes\nidle. Process B now begins a giant vacuum. At some point in the middle\nof the vacuum, A ends the transaction. Are you saying that B's\nGlobalVisTest never really notices that this has happened?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jun 2024 16:35:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 3:36 PM Melanie Plageman\n<[email protected]> wrote:\n> One thing I don't understand is why it is okay to freeze the xmax of a\n> dead tuple just because it is from an aborted update.\n\nWe don't do that with XID-based xmaxs. Though perhaps we should, since\nwe'll already prune-away the successor tuple, and so might as well go\none tiny step further and clear the xmax for the original tuple via\nfreezing/setting it InvalidTransactionId. Instead we just leave the\noriginal tuple largely undisturbed, with its original xmax.\n\nWe do something like that with Multi-based xmax fields, though not\nwith the specific goal of cleaning up after aborts in mind (we can\nalso remove lockers that are no longer running, regardless of where\nthey are relative to OldestXmin, stuff like that). The actual goal\nwith that is to enforce MultiXactCutoff, independently of whether or\nnot their member XIDs are < FreezeLimit yet.\n\n> The only case in which we freeze dead tuples\n> with a non-multi xmax is if the xmax is from before OldestXmin and is\n> also not committed (so from an aborted update).\n\nPerhaps I misunderstand, but: we simply don't freeze DEAD (not\nRECENTLY_DEAD) tuples in the first place, because we don't have to\n(pruning removes them instead). It doesn't matter if they're DEAD due\nto being from aborted transactions or DEAD due to being\ndeleted/updated by a transaction that committed (committed and <\nOldestXmin).\n\nThe freezing related code paths in heapam.c don't particularly care\nwhether a tuple xmax is RECENTLY_DEAD or LIVE to HTSV + OldestXmin.\nJust as long as it's not fully DEAD (then it should have been pruned).\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 24 Jun 2024 16:42:11 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 4:36 PM Robert Haas <[email protected]> wrote:\n> I'm not sure I understand. The most important thing here is fixing the\n> bug. But if we have a choice of how to fix the bug, I'd prefer to do\n> it by having the pruning code test one horizon that is always correct,\n> rather than (as I think the patch does) having it test against two\n> horizons because as a way of covering possible discrepancies between\n> those values.\n\nYour characterizing of OldestXmin + vistest as two horizons seems\npretty arbitrary to me. 
I know what you mean, of course, but it seems\nlike a distinction without a difference.\n\n> I thought the idea was that the GlobalVisTest stuff would force a\n> recalculation now and then, but maybe it doesn't actually do that?\n\nIt definitely can do that. Just not in a way that meaningfully\nincreases the number of heap tuples that we can recognize as DEAD and\nremove. At least not currently.\n\n> Suppose process A begins a transaction, acquires an XID, and then goes\n> idle. Process B now begins a giant vacuum. At some point in the middle\n> of the vacuum, A ends the transaction. Are you saying that B's\n> GlobalVisTest never really notices that this has happened?\n\nThat's my understanding, yes. That is, vistest is approximately the\nsame thing as OldestXmin anyway. At least for now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 24 Jun 2024 16:51:24 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 4:42 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Mon, Jun 24, 2024 at 3:36 PM Melanie Plageman\n> <[email protected]> wrote:\n> > One thing I don't understand is why it is okay to freeze the xmax of a\n> > dead tuple just because it is from an aborted update.\n>\n> We don't do that with XID-based xmaxs. Though perhaps we should, since\n> we'll already prune-away the successor tuple, and so might as well go\n> one tiny step further and clear the xmax for the original tuple via\n> freezing/setting it InvalidTransactionId. Instead we just leave the\n> original tuple largely undisturbed, with its original xmax.\n\nI thought that was the case too, but we call\nheap_prepare_freeze_tuple() on HEAPTUPLE_RECENTLY_DEAD tuples and then\n\n else if (TransactionIdIsNormal(xid))\n {\n /* Raw xmax is normal XID */\n freeze_xmax = TransactionIdPrecedes(xid, cutoffs->OldestXmin);\n }\n\nAnd then later we\n\n if (freeze_xmax)\n frz->xmax = InvalidTransactionId;\n\nand then when we execute freezing the tuple in heap_execute_freeze_tuple()\n\n HeapTupleHeaderSetXmax(tuple, frz->xmax);\n\nWhich sets the xmax to InvalidTransactionId. Or am I missing something?\n\n> > The only case in which we freeze dead tuples\n> > with a non-multi xmax is if the xmax is from before OldestXmin and is\n> > also not committed (so from an aborted update).\n>\n> Perhaps I misunderstand, but: we simply don't freeze DEAD (not\n> RECENTLY_DEAD) tuples in the first place, because we don't have to\n> (pruning removes them instead). It doesn't matter if they're DEAD due\n> to being from aborted transactions or DEAD due to being\n> deleted/updated by a transaction that committed (committed and <\n> OldestXmin).\n\nRight, I'm talking about HEAPTUPLE_RECENTLY_DEAD tuples.\nHEAPTUPLE_DEAD tuples are pruned away. But we can't replace the xmax\nof a tuple that has been deleted or updated by a transaction that\ncommitted with InvalidTransactionId. And it seems like the code does\nthat? Why even call heap_prepare_freeze_tuple() on\nHEAPTUPLE_RECENTLY_DEAD tuples? 
Is it mainly to handle MultiXact\nfreezing?\n\n> The freezing related code paths in heapam.c don't particularly care\n> whether a tuple xmax is RECENTLY_DEAD or LIVE to HTSV + OldestXmin.\n> Just as long as it's not fully DEAD (then it should have been pruned).\n\nBut it just seems like we shouldn't freeze RECENTLY_DEAD either.\n\n- Melanie\n\n\n", "msg_date": "Mon, 24 Jun 2024 16:51:38 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 4:51 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Mon, Jun 24, 2024 at 4:36 PM Robert Haas <[email protected]> wrote:\n> > I thought the idea was that the GlobalVisTest stuff would force a\n> > recalculation now and then, but maybe it doesn't actually do that?\n>\n> It definitely can do that. Just not in a way that meaningfully\n> increases the number of heap tuples that we can recognize as DEAD and\n> remove. At least not currently.\n>\n> > Suppose process A begins a transaction, acquires an XID, and then goes\n> > idle. Process B now begins a giant vacuum. At some point in the middle\n> > of the vacuum, A ends the transaction. Are you saying that B's\n> > GlobalVisTest never really notices that this has happened?\n>\n> That's my understanding, yes. That is, vistest is approximately the\n> same thing as OldestXmin anyway. At least for now.\n\nExactly. Something has to cause this backend to update its view of the\nhorizon. At the end of index vacuuming,\nGetOldestNonRemovableTransactionId() will explicitly\nComputeXidHorizons() which will update our backend's GlobalVisStates.\nOtherwise, if our backend's RecentXmin is updated, by taking a new\nsnapshot, then we may update our GlobalVisStates. See\nGlobalVisTestShouldUpdate() for the conditions under which we would\nupdate our GlobalVisStates during the normal visibility checks\nhappening during pruning.\n\nVacuum used to open indexes after calculating horizons before starting\nits first pass. This led to a recomputation of the horizon. But, in\nmaster, there aren't many obvious places where such a thing would be\nhappening.\n\n- Melanie\n\n\n", "msg_date": "Mon, 24 Jun 2024 16:58:09 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 04:51:24PM -0400, Peter Geoghegan wrote:\n> On Mon, Jun 24, 2024 at 4:36 PM Robert Haas <[email protected]> wrote:\n> > I'm not sure I understand. The most important thing here is fixing the\n> > bug. But if we have a choice of how to fix the bug, I'd prefer to do\n> > it by having the pruning code test one horizon that is always correct,\n> > rather than (as I think the patch does) having it test against two\n> > horizons because as a way of covering possible discrepancies between\n> > those values.\n> \n> Your characterizing of OldestXmin + vistest as two horizons seems\n> pretty arbitrary to me. I know what you mean, of course, but it seems\n> like a distinction without a difference.\n\n\"Two horizons\" matches how I model it. If the two were _always_ indicating\nthe same notion of visibility, we wouldn't have this thread.\n\nOn Mon, Jun 24, 2024 at 03:23:39PM -0400, Melanie Plageman wrote:\n> Right now, in master, we do use a single horizon when determining what\n> is pruned -- that from GlobalVisState. 
OldestXmin is only used for\n> freezing and full page visibility determinations. Using a different\n> horizon for pruning by vacuum than freezing is what is causing the\n> error on master.\n\nAgreed, and I think using different sources for pruning and freezing is a\nrecipe for future bugs. Fundamentally, both are about answering \"is\nsnapshot_considers_xid_in_progress(snapshot, xid) false for every snapshot?\"\nThat's not to say this thread shall unify the two, but I suspect that's the\nright long-term direction.\n\n\n", "msg_date": "Mon, 24 Jun 2024 18:30:04 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 9:30 PM Noah Misch <[email protected]> wrote:\n> On Mon, Jun 24, 2024 at 03:23:39PM -0400, Melanie Plageman wrote:\n> > Right now, in master, we do use a single horizon when determining what\n> > is pruned -- that from GlobalVisState. OldestXmin is only used for\n> > freezing and full page visibility determinations. Using a different\n> > horizon for pruning by vacuum than freezing is what is causing the\n> > error on master.\n>\n> Agreed, and I think using different sources for pruning and freezing is a\n> recipe for future bugs. Fundamentally, both are about answering \"is\n> snapshot_considers_xid_in_progress(snapshot, xid) false for every snapshot?\"\n> That's not to say this thread shall unify the two, but I suspect that's the\n> right long-term direction.\n\nWhat does it really mean to unify the two, though?\n\nIf the OldestXmin field was located in struct GlobalVisState (next to\ndefinitely_needed and maybe_needed), but everything worked in\nessentially the same way as it will with Melanie's patch in place,\nwould that count as unifying the two? Why or why not?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 24 Jun 2024 21:49:53 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 09:49:53PM -0400, Peter Geoghegan wrote:\n> On Mon, Jun 24, 2024 at 9:30 PM Noah Misch <[email protected]> wrote:\n> > On Mon, Jun 24, 2024 at 03:23:39PM -0400, Melanie Plageman wrote:\n> > > Right now, in master, we do use a single horizon when determining what\n> > > is pruned -- that from GlobalVisState. OldestXmin is only used for\n> > > freezing and full page visibility determinations. Using a different\n> > > horizon for pruning by vacuum than freezing is what is causing the\n> > > error on master.\n> >\n> > Agreed, and I think using different sources for pruning and freezing is a\n> > recipe for future bugs. Fundamentally, both are about answering \"is\n> > snapshot_considers_xid_in_progress(snapshot, xid) false for every snapshot?\"\n> > That's not to say this thread shall unify the two, but I suspect that's the\n> > right long-term direction.\n> \n> What does it really mean to unify the two, though?\n> \n> If the OldestXmin field was located in struct GlobalVisState (next to\n> definitely_needed and maybe_needed), but everything worked in\n> essentially the same way as it will with Melanie's patch in place,\n> would that count as unifying the two? Why or why not?\n\nTo me, no, unification would mean removing the data redundancy. 
Relocating\nthe redundant field and/or code that updates the redundant field certainly\ncould reduce the risk of bugs, so I'm not opposing everything short of\nremoving the data redundancy. I'm just agreeing with the \"prefer\" from\nhttps://postgr.es/m/CA+TgmoYzS_bkt_MrNxr5QrXDKfedmh4tStn8UBTTBXqv=3JTew@mail.gmail.com\n\n\n", "msg_date": "Mon, 24 Jun 2024 19:09:52 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "Hi,\n\nOn 2024-06-24 16:35:50 -0400, Robert Haas wrote:\n> On Mon, Jun 24, 2024 at 3:23 PM Melanie Plageman\n> <[email protected]> wrote:\n> > Are you more concerned about having a single horizon for pruning or\n> > about having a horizon that does not move backwards after being\n> > established at the beginning of vacuuming the relation?\n> \n> I'm not sure I understand. The most important thing here is fixing the\n> bug. But if we have a choice of how to fix the bug, I'd prefer to do\n> it by having the pruning code test one horizon that is always correct,\n> rather than (as I think the patch does) having it test against two\n> horizons because as a way of covering possible discrepancies between\n> those values.\n\nI think that's going in the wrong direction. We *want* to prune more\naggressively if we can (*), the necessary state is represented by the\nvistest. That's a different thing than *having* to prune tuples beyond a\ncertain xmin (the cutoff determined by vacuum.c/vacuumlazy.c). The problem\nwe're having here is that the two states can get out of sync due to the\nvistest \"moving backwards\", because of hot_standby_feedback (and perhaps also\nan issue around aborts).\n\nTo prevent that we can\na) make sure that we always take the hard cutoff into account\nb) prevent vistest from going backwards\n\n(*) we really ought to become more aggressive, by removing intermediary row\n versions when they're not visible to anyone, but not yet old enough to be\n below the horizon. But that realistically will only be possible in *some*\n cases, e.g. when the predecessor row version is on the same page.\n\n\n\n> > I had always thought it was because the vacuuming backend's\n> > GlobalVisState will get updated periodically throughout vacuum and so,\n> > assuming the oldest running transaction changes, our horizon for\n> > vacuum would change. But, in writing this repro, it is actually quite\n> > hard to get GlobalVisState to update. Our backend's RecentXmin needs\n> > to have changed. And there aren't very many places where we take a new\n> > snapshot after starting to vacuum a relation. One of those is at the\n> > end of index vacuuming, but that can only affect the pruning horizon\n> > if we have to do multiple rounds of index vacuuming. Is that really\n> > the case we are thinking of when we say we want the pruning horizon to\n> > move forward during vacuum?\n> \n> I thought the idea was that the GlobalVisTest stuff would force a\n> recalculation now and then, but maybe it doesn't actually do that?\n\nIt forces an accurate horizon to be determined the first time it would require\nit to determine visibility. The \"first time\" is determined by RecentXmin not\nhaving changed.\n\nThe main goal of the vistest stuff was to not have to determine an accurate\nhorizon in GetSnapshotData(). 
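Very roughly, and glossing over a lot of detail, the shape of that is something like the following standalone sketch (simplified 64-bit XIDs, made-up names, not the real PostgreSQL data structures or functions):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t xid64;

typedef struct vis_test
{
    xid64 definitely_needed;    /* xids >= this are certainly still needed */
    xid64 maybe_needed;         /* xids < this are certainly removable */
} vis_test;

/* stand-in for the expensive part: scanning every backend's xmin */
static xid64
compute_accurate_horizon(void)
{
    return 1200;
}

static bool
xid_removable(vis_test *vt, xid64 xid)
{
    if (xid < vt->maybe_needed)
        return true;            /* cheap: certainly removable */
    if (xid >= vt->definitely_needed)
        return false;           /* cheap: certainly not removable */
    /* only the in-between case pays for an accurate recomputation */
    vt->maybe_needed = vt->definitely_needed = compute_accurate_horizon();
    return xid < vt->maybe_needed;
}

int
main(void)
{
    vis_test vt = { .definitely_needed = 1500, .maybe_needed = 1000 };

    printf("xid  900 removable: %d\n", xid_removable(&vt, 900));    /* 1, fast path */
    printf("xid 1100 removable: %d\n", xid_removable(&vt, 1100));   /* 1, after refresh */
    printf("xid 1600 removable: %d\n", xid_removable(&vt, 1600));   /* 0, fast path */
    return 0;
}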
Determining an accurate horizon requires\naccessing each backends ->xmin, which causes things to scale badly, as ->xmin\nchanges so frequently.\n\nThe cost of determining the accurate horizon is irrelevant for vacuums, but\nit's not at all irrelevant for on-access pruning.\n\n\n> Suppose process A begins a transaction, acquires an XID, and then goes\n> idle. Process B now begins a giant vacuum. At some point in the middle\n> of the vacuum, A ends the transaction. Are you saying that B's\n> GlobalVisTest never really notices that this has happened?\n\nNot at the moment, but we should add heuristics like that.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jun 2024 05:03:15 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Tue, Jun 25, 2024 at 8:03 AM Andres Freund <[email protected]> wrote:\n> I think that's going in the wrong direction. We *want* to prune more\n> aggressively if we can (*), the necessary state is represented by the\n> vistest. That's a different thing than *having* to prune tuples beyond a\n> certain xmin (the cutoff determined by vacuum.c/vacuumlazy.c). The problem\n> we're having here is that the two states can get out of sync due to the\n> vistest \"moving backwards\", because of hot_standby_feedback (and perhaps also\n> an issue around aborts).\n\nI agree that we want to prune more aggressively if we can. I think\nthat fixing this by preventing vistest from going backward is\nreasonable, and I like it better than what Melanie proposed, although\nI like what Melanie proposed much better than not fixing it! I'm not\nsure how to do that cleanly, but one of you may have an idea.\n\nI do think that having a bunch of different XID values that function\nas horizons and a vistest object that holds some more XID horizons\nfloating around in vacuum makes the code hard to understand. The\nrelationships between the various values are not well-documented. For\ninstance, the vistest has to be after vacrel->cutoffs.OldestXmin for\ncorrectness, but I don't think there's a single comment anywhere\nsaying that; meanwhile, the comments for VacuumCutoffs say \"OldestXmin\nis the Xid below which tuples deleted by any xact (that committed)\nshould be considered DEAD, not just RECENTLY_DEAD.\" Surely the reader\ncan be forgiven for thinking that this is the cutoff that will\nactually be used by pruning, but it isn't.\n\nAnd more generally, it seems like a fairly big problem to me that\nLVRelState directly stores NewRelfrozenXid; contains a VacuumCutoffs\nobject that stores relfrozenxid, OldestXmin, and FreezeLimit; and also\npoints to a GlobalVisState object that contains definitely_needed and\nmaybe_needed. That is six different XID cutoffs for one vacuum\noperation. That's a lot. I can't describe how they're all different\nfrom each other or what the necessary relationships between them are\noff-hand, and I bet nobody else could either, at least until recently,\nelse we might not have this bug. I feel like if it were possible to\nhave fewer of them and still have things work, we'd be better off. I'm\nnot sure that's doable. 
But six seems like a lot.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 08:42:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On 2024-06-25 08:42:02 -0400, Robert Haas wrote:\n> On Tue, Jun 25, 2024 at 8:03 AM Andres Freund <[email protected]> wrote:\n> > I think that's going in the wrong direction. We *want* to prune more\n> > aggressively if we can (*), the necessary state is represented by the\n> > vistest. That's a different thing than *having* to prune tuples beyond a\n> > certain xmin (the cutoff determined by vacuum.c/vacuumlazy.c). The problem\n> > we're having here is that the two states can get out of sync due to the\n> > vistest \"moving backwards\", because of hot_standby_feedback (and perhaps also\n> > an issue around aborts).\n> \n> I agree that we want to prune more aggressively if we can. I think\n> that fixing this by preventing vistest from going backward is\n> reasonable, and I like it better than what Melanie proposed, although\n> I like what Melanie proposed much better than not fixing it! I'm not\n> sure how to do that cleanly, but one of you may have an idea.\n\nIt's not hard - but it has downsides. It'll mean that - outside of vacuum -\nwe'll much more often not react to horizons going backwards due to\nhot_standby_feedback. Which means that hot_standby_feedback, when used without\nslots, will prevent fewer conflicts.\n\n\n> I do think that having a bunch of different XID values that function\n> as horizons and a vistest object that holds some more XID horizons\n> floating around in vacuum makes the code hard to understand. The\n> relationships between the various values are not well-documented. For\n> instance, the vistest has to be after vacrel->cutoffs.OldestXmin for\n> correctness, but I don't think there's a single comment anywhere\n> saying that;\n\nIt is somewhat documented:\n\n * Note: the approximate horizons (see definition of GlobalVisState) are\n * updated by the computations done here. That's currently required for\n * correctness and a small optimization. Without doing so it's possible that\n * heap vacuum's call to heap_page_prune_and_freeze() uses a more conservative\n * horizon than later when deciding which tuples can be removed - which the\n * code doesn't expect (breaking HOT).\n\n\n> And more generally, it seems like a fairly big problem to me that\n> LVRelState directly stores NewRelfrozenXid; contains a VacuumCutoffs\n> object that stores relfrozenxid, OldestXmin, and FreezeLimit; and also\n> points to a GlobalVisState object that contains definitely_needed and\n> maybe_needed. That is six different XID cutoffs for one vacuum\n> operation. That's a lot. I can't describe how they're all different\n> from each other or what the necessary relationships between them are\n> off-hand, and I bet nobody else could either, at least until recently,\n> else we might not have this bug. I feel like if it were possible to\n> have fewer of them and still have things work, we'd be better off. I'm\n> not sure that's doable. But six seems like a lot.\n\nAgreed. I don't think you can just unify things though, they actually are all\ndifferent for good, or at least decent, reasons. 
I think improving the naming\nalone could help a good bit though.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jun 2024 06:06:59 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Tue, Jun 25, 2024 at 9:07 AM Andres Freund <[email protected]> wrote:\n> It's not hard - but it has downsides. It'll mean that - outside of vacuum -\n> we'll much more often not react to horizons going backwards due to\n> hot_standby_feedback. Which means that hot_standby_feedback, when used without\n> slots, will prevent fewer conflicts.\n\nCan you explain this in more detail?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 10:31:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On 24.06.2024 17:37, Melanie Plageman wrote:\n> On Mon, Jun 24, 2024 at 4:10 AM Alena Rybakina<[email protected]> wrote:\n>> We can fix this by always removing tuples considered dead before\n>> VacuumCutoffs->OldestXmin. This is okay even if a reconnected standby\n>> has a transaction that sees that tuple as alive, because it will\n>> simply wait to replay the removal until it would be correct to do so\n>> or recovery conflict handling will cancel the transaction that sees\n>> the tuple as alive and allow replay to continue.\n>>\n>> This is an interesting and difficult case) I noticed that when initializing the cluster, in my opinion, we provide excessive freezing. Initialization takes a long time, which can lead, for example, to longer test execution. I got rid of this by adding the OldestMxact checkbox is not FirstMultiXactId, and it works fine.\n>>\n>> if (prstate->cutoffs &&\n>> TransactionIdIsValid(prstate->cutoffs->OldestXmin) &&\n>> prstate->cutoffs->OldestMxact != FirstMultiXactId &&\n>> NormalTransactionIdPrecedes(dead_after, prstate->cutoffs->OldestXmin))\n>> return HEAPTUPLE_DEAD;\n>>\n>> Can I keep it?\n> This looks like an addition to the new criteria I added to\n> heap_prune_satisfies_vacuum(). Is that what you are suggesting? If so,\n> it looks like it would only return HEAPTUPLE_DEAD (and thus only\n> remove) a subset of the tuples my original criteria would remove. When\n> vacuum calculates OldestMxact as FirstMultiXactId, it would not remove\n> those tuples deleted before OldestXmin. It seems like OldestMxact will\n> equal FirstMultiXactID sometimes right after initdb and after\n> transaction ID wraparound. I'm not sure I totally understand the\n> criteria.\n>\n> One thing I find confusing about this is that this would actually\n> remove less tuples than with my criteria -- which could lead to more\n> freezing. When vacuum calculates OldestMxact == FirstMultiXactID, we\n> would not remove tuples deleted before OldestXmin and thus return\n> HEAPTUPLE_RECENTLY_DEAD for those tuples. Then we would consider\n> freezing them. So, it seems like we would do more freezing by adding\n> this criteria.\n>\n> Could you explain more about how the criteria you are suggesting\n> works? 
Are you saying it does less freezing than master or less\n> freezing than with my patch?\n>\n>\nAt first, I noticed that with this patch, vacuum fouls the nodes more \noften than before, and it seemed to me that more time was spent initializing the \ncluster with this patch than before, so I suggested considering this condition. After checking again, \nI found that the problem was with my laptop. So, sorry for the noise.\n\n>> Attached is the suggested fix for master plus a repro. I wrote it as a\n>> recovery suite TAP test, but I am _not_ proposing we add it to the\n>> ongoing test suite. It is, amongst other things, definitely prone to\n>> flaking. I also had to use loads of data to force two index vacuuming\n>> passes now that we have TIDStore, so it is a slow test.\n> -- snip --\n>> I have a modified version of this that repros the infinite loop on\n>> 14-16 with substantially less data. See it here [2]. Also, the repro\n>> attached to this mail won't work on 14 and 15 because of changes to\n>> background_psql.\n>>\n>> I couldn't understand why the replica is necessary here. Now I am digging why I got the similar behavior without replica when I have only one instance. I'm still checking this in my test, but I believe this patch fixes the original problem because the symptoms were the same.\n> Did you get similar behavior on master or on back branches? Was the\n> behavior you observed the infinite loop or the error during\n> heap_prepare_freeze_tuple()?\n>\n> In my examples, the replica is needed because something has to move\n> the horizon on the primary backwards. When a standby reconnects with\n> an older oldest running transaction ID than any of the running\n> transactions on the primary and the vacuuming backend recomputes its\n> RecentXmin, the horizon may move backwards when compared to the\n> horizon calculated at the beginning of the vacuum. Vacuum does not\n> recompute cutoffs->OldestXmin during vacuuming a relation but it may\n> recompute the values in the GlobalVisState it uses for pruning.\n>\n> We knew of only one other way that the horizon could move backwards\n> which Matthias describes here [1]. However, this is thought to be its\n> own concurrency-related bug in the commit-abort path that should be\n> fixed -- as opposed to the standby reconnecting with an older oldest\n> running transaction ID which can be expected.\n>\n> Do you know if you were seeing the effects of the scenario Matthias describes?\n>\n>\n> [1]https://www.postgresql.org/message-id/CAEze2WjMTh4KS0%3DQEQB-Jq%2BtDLPR%2B0%2BzVBMfVwSPK5A%3DWZa95Q%40mail.gmail.com\nI'm sorry, I need a little more time to figure this out. I will answer \nthis question later.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company
However, this is thought to be its\nown concurrency-related bug in the commit-abort path that should be\nfixed -- as opposed to the standby reconnecting with an older oldest\nrunning transaction ID which can be expected.\n\nDo you know if you were seeing the effects of the scenario Matthias describes?\n\n\n[1] https://www.postgresql.org/message-id/CAEze2WjMTh4KS0%3DQEQB-Jq%2BtDLPR%2B0%2BzVBMfVwSPK5A%3DWZa95Q%40mail.gmail.com\n\n\n I'm sorry, I need a little more time to figure this out. I will\n answer this question later.\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 25 Jun 2024 17:37:38 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Tue, Jun 25, 2024 at 10:31 AM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Jun 25, 2024 at 9:07 AM Andres Freund <[email protected]> wrote:\n> > It's not hard - but it has downsides. It'll mean that - outside of vacuum -\n> > we'll much more often not react to horizons going backwards due to\n> > hot_standby_feedback. Which means that hot_standby_feedback, when used without\n> > slots, will prevent fewer conflicts.\n>\n> Can you explain this in more detail?\n\nIf we prevent GlobalVisState from moving backward, then we would less\nfrequently be pushing the horizon backward on the primary in response\nto hot standby feedback. Then, the primary would do more things that\nwould not be safely replayable on the standby -- so the standby could\nend up encountering more recovery conflicts.\n\n- Melanie\n\n\n", "msg_date": "Tue, 25 Jun 2024 11:39:03 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Tue, Jun 25, 2024 at 11:39 AM Melanie Plageman\n<[email protected]> wrote:\n> On Tue, Jun 25, 2024 at 10:31 AM Robert Haas <[email protected]> wrote:\n> > On Tue, Jun 25, 2024 at 9:07 AM Andres Freund <[email protected]> wrote:\n> > > It's not hard - but it has downsides. It'll mean that - outside of vacuum -\n> > > we'll much more often not react to horizons going backwards due to\n> > > hot_standby_feedback. Which means that hot_standby_feedback, when used without\n> > > slots, will prevent fewer conflicts.\n> >\n> > Can you explain this in more detail?\n>\n> If we prevent GlobalVisState from moving backward, then we would less\n> frequently be pushing the horizon backward on the primary in response\n> to hot standby feedback. Then, the primary would do more things that\n> would not be safely replayable on the standby -- so the standby could\n> end up encountering more recovery conflicts.\n\nI don't get it. 
hot_standby_feedback only moves horizons backward on\nthe primary, AFAIK, when it first connects, or when it reconnects.\nWhich I guess could be frequent for some users with flaky networks,\nbut does that really rise to the level of \"much more often\"?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 12:31:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "Hi,\n\nOn 2024-06-25 12:31:11 -0400, Robert Haas wrote:\n> On Tue, Jun 25, 2024 at 11:39 AM Melanie Plageman\n> <[email protected]> wrote:\n> > On Tue, Jun 25, 2024 at 10:31 AM Robert Haas <[email protected]> wrote:\n> > > On Tue, Jun 25, 2024 at 9:07 AM Andres Freund <[email protected]> wrote:\n> > > > It's not hard - but it has downsides. It'll mean that - outside of vacuum -\n> > > > we'll much more often not react to horizons going backwards due to\n> > > > hot_standby_feedback. Which means that hot_standby_feedback, when used without\n> > > > slots, will prevent fewer conflicts.\n> > >\n> > > Can you explain this in more detail?\n> >\n> > If we prevent GlobalVisState from moving backward, then we would less\n> > frequently be pushing the horizon backward on the primary in response\n> > to hot standby feedback. Then, the primary would do more things that\n> > would not be safely replayable on the standby -- so the standby could\n> > end up encountering more recovery conflicts.\n> \n> I don't get it. hot_standby_feedback only moves horizons backward on\n> the primary, AFAIK, when it first connects, or when it reconnects.\n> Which I guess could be frequent for some users with flaky networks,\n> but does that really rise to the level of \"much more often\"?\n\nWell, the thing is that with the \"prevent it from going backwards\" approach,\nonce the horizon is set to something recent in a backend, it's \"sticky\". If a\nreplica is a bit behind or if there's a long-lived snapshot and disconnects,\nthe vistest state will advance beyond where the replica needs it to be. Even\nif the standby later reconnects, the vistest in long-lived sessions will still\nhave the more advanced state. So all future pruning these backends do runs\ninto the risk of performing pruning that removes rows the standby still deems\nvisible and thus causes recovery conflicts.\n\nI.e. you don't even need frequent disconnects, you just need one disconnect\nand sessions that aren't shortlived.\n\nThat said, obviously there will be plenty setups where this won't cause an\nissue. I don't really have a handle on how often it'd be a problem.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jun 2024 10:10:26 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Tue, Jun 25, 2024 at 1:10 PM Andres Freund <[email protected]> wrote:\n> That said, obviously there will be plenty setups where this won't cause an\n> issue. I don't really have a handle on how often it'd be a problem.\n\nFair enough. Even if it's not super-common, it doesn't seem like a\ngreat idea to regress such scenarios in the back-branches.\n\nIs there any way that we could instead tweak things so that we adjust\nthe visibility test object itself? Like can have a GlobalVisTest API\nwhere we can supply the OldestXmin from the VacuumCutoffs and have it\n... 
do something useful with that?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 14:35:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On 2024-06-25 14:35:00 -0400, Robert Haas wrote:\n> Is there any way that we could instead tweak things so that we adjust\n> the visibility test object itself? Like can have a GlobalVisTest API\n> where we can supply the OldestXmin from the VacuumCutoffs and have it\n> ... do something useful with that?\n\nI doubt that's doable in the back branches. And even on HEAD, I don't think\nit's a particularly attractive option - there's just a global vistest for each\nof the types of objects with a specific horizon (they need to be updated\noccasionally, e.g. when taking snapshots). So there's not really a spot to put\nan associated OldestXmin. We could put it there and remove it at the end of\nvacuum / in an exception handler, but that seems substantially worse.\n\n\n", "msg_date": "Tue, 25 Jun 2024 13:41:36 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Tue, Jun 25, 2024 at 4:41 PM Andres Freund <[email protected]> wrote:\n> I doubt that's doable in the back branches. And even on HEAD, I don't think\n> it's a particularly attractive option - there's just a global vistest for each\n> of the types of objects with a specific horizon (they need to be updated\n> occasionally, e.g. when taking snapshots). So there's not really a spot to put\n> an associated OldestXmin. We could put it there and remove it at the end of\n> vacuum / in an exception handler, but that seems substantially worse.\n\nOh, right: I forgot that the visibility test objects were just\npointers to global variables.\n\nWell, I don't know. I guess that doesn't leave any real options but to\nfix it as Melanie proposed. But I still don't like it very much. I\nfeel like having to test against two different thresholds in the\npruning code is surely a sign that we're doing something wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 19:33:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jun 24, 2024 at 4:27 AM Heikki Linnakangas <[email protected]> wrote:\n>\n> Would it be possible to make it robust so that we could always run it\n> with \"make check\"? This seems like an important corner case to\n> regression test.\n\nOkay, I've attached a new version of the patch and a new version of\nthe repro that may be fast and stable enough to commit. It is more\nminimal than the previous version. I made the table as small as I\ncould to still trigger two rounds of index vacuuming. I tried to make\nit as stable as possible. I also removed the cursor on the standby\nthat could trigger recovery conflicts. 
It would be super helpful if\nsomeone could take a look at the test and point out any ways I could\nmake it even more likely to be stable.\n\n- Melanie", "msg_date": "Tue, 2 Jul 2024 19:07:39 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Tue, Jul 2, 2024 at 7:07 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Mon, Jun 24, 2024 at 4:27 AM Heikki Linnakangas <[email protected]> wrote:\n> >\n> > Would it be possible to make it robust so that we could always run it\n> > with \"make check\"? This seems like an important corner case to\n> > regression test.\n>\n> Okay, I've attached a new version of the patch and a new version of\n> the repro that may be fast and stable enough to commit. It is more\n> minimal than the previous version. I made the table as small as I\n> could to still trigger two rounds of index vacuuming. I tried to make\n> it as stable as possible. I also removed the cursor on the standby\n> that could trigger recovery conflicts. It would be super helpful if\n> someone could take a look at the test and point out any ways I could\n> make it even more likely to be stable.\n\nAttached v3 has one important additional component in the test -- I\nuse pg_stat_progress_vacuum to confirm that we actually do more than\none pass of index vacuuming. Otherwise, it would have been trivial for\nthe test to incorrectly pass.\n\nI could still use another pair of eyes on the test (looking out for\nstability enhancing measures I could take). I also would be happy if\nsomeone looked at the commit message and/or comments to let me know if\nthey make sense.\n\nI'll finish with versions of the patch and test targeted at v14-16 and\npropose those before committing this.\n\n- Melanie", "msg_date": "Mon, 8 Jul 2024 14:25:16 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 8, 2024 at 2:25 PM Melanie Plageman\n<[email protected]> wrote:\n> Attached v3 has one important additional component in the test -- I\n> use pg_stat_progress_vacuum to confirm that we actually do more than\n> one pass of index vacuuming. Otherwise, it would have been trivial for\n> the test to incorrectly pass.\n\nThat's a good idea.\n\n> I could still use another pair of eyes on the test (looking out for\n> stability enhancing measures I could take).\n\nFirst, the basics: I found that your test failed reliably without your\nfix, and passed reliably with your fix.\n\nNext, performance: the total test runtime (as indicated by \"time meson\ntest -q recovery/043_vacuum_horizon_floor\") was comparable to other\nrecovery/* TAP tests. The new vacuum_horizon_floor test is a little on\nthe long running side, as these tests go, but I think that's fine.\n\nMinor nitpicking about the comments in your TAP test:\n\n* It is necessary but not sufficient for your test to \"skewer\"\nmaybe_needed, relative to OldestXmin. Obviously, it is not sufficient\nbecause the test can only fail when VACUUM prunes a heap page after\nthe backend's horizons have been \"skewered\" in this sense.\n\nPruning is when we get stuck, and if there's no more pruning then\nthere's no opportunity for VACUUM to get stuck. Perhaps this point\nshould be noted directly in the comments. 
You could add a sentence\nimmediately after the existing sentence \"Then vacuum's first pass will\ncontinue and pruning...\". This new sentence would then add commentary\nsuch as \"Finally, vacuum's second pass over the heap...\".\n\n* Perhaps you should point out that you're using VACUUM FREEZE for\nthis because it'll force the backend to always get a cleanup lock.\nThis is something you rely on to make the repro reliable, but that's\nit.\n\nIn other words, point out to the reader that this bug has nothing to\ndo with freezing; it just so happens to be convenient to use VACUUM\nFREEZE this way, due to implementation details.\n\n* The sentence \"VACUUM proceeds with pruning and does a visibility\ncheck on each tuple...\" describes the bug in terms of the current\nstate of things on Postgres 17, but Postgres 17 hasn't been released\njust yet. Does that really make sense?\n\nIf you want to describe the invariant that caused\nheap_pre_freeze_checks() to error-out on HEAD/Postgres 17, then the\ncommit message of your fix seems like the right place for that. You\ncould reference these errors in passing. The errors seem fairly\nincidental to the real problem, at least to me.\n\nI think that there is some chance that this test will break the build\nfarm in whatever way, since there is a long history of VACUUM not\nquite behaving as expected with these sorts of tests. I think that you\nshould commit the test case separately, first thing in the morning,\nand then keep an eye on the build farm for the rest of the day. I\ndon't think that it's sensible to bend over backwards, just to avoid\nbreaking the build farm in this way.\n\nThanks for working on this\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 15 Jul 2024 18:01:43 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 15, 2024 at 6:02 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Mon, Jul 8, 2024 at 2:25 PM Melanie Plageman\n> <[email protected]> wrote:\n> > I could still use another pair of eyes on the test (looking out for\n> > stability enhancing measures I could take).\n>\n> First, the basics: I found that your test failed reliably without your\n> fix, and passed reliably with your fix.\n\nThanks for the review.\n\n> Minor nitpicking about the comments in your TAP test:\n>\n> * It is necessary but not sufficient for your test to \"skewer\"\n> maybe_needed, relative to OldestXmin. Obviously, it is not sufficient\n> because the test can only fail when VACUUM prunes a heap page after\n> the backend's horizons have been \"skewered\" in this sense.\n>\n> Pruning is when we get stuck, and if there's no more pruning then\n> there's no opportunity for VACUUM to get stuck. Perhaps this point\n> should be noted directly in the comments. You could add a sentence\n> immediately after the existing sentence \"Then vacuum's first pass will\n> continue and pruning...\". 
This new sentence would then add commentary\n> such as \"Finally, vacuum's second pass over the heap...\".\n\nI've added a description to the top of the test of the scenario\nrequired and then reworked the comment you are describing to try and\nmake this more clear.\n\n> * Perhaps you should point out that you're using VACUUM FREEZE for\n> this because it'll force the backend to always get a cleanup lock.\n> This is something you rely on to make the repro reliable, but that's\n> it.\n>\n> In other words, point out to the reader that this bug has nothing to\n> do with freezing; it just so happens to be convenient to use VACUUM\n> FREEZE this way, due to implementation details.\n\nI've mentioned this in a comment.\n\n> * The sentence \"VACUUM proceeds with pruning and does a visibility\n> check on each tuple...\" describes the bug in terms of the current\n> state of things on Postgres 17, but Postgres 17 hasn't been released\n> just yet. Does that really make sense?\n\nIn the patch targeted at master, I think it makes sense to describe\nthe code as it is. In the backpatch versions, I reworked this comment\nto be correct for those versions.\n\n> If you want to describe the invariant that caused\n> heap_pre_freeze_checks() to error-out on HEAD/Postgres 17, then the\n> commit message of your fix seems like the right place for that. You\n> could reference these errors in passing. The errors seem fairly\n> incidental to the real problem, at least to me.\n\nThe errors are mentioned in the fix commit message.\n\n> I think that there is some chance that this test will break the build\n> farm in whatever way, since there is a long history of VACUUM not\n> quite behaving as expected with these sorts of tests. I think that you\n> should commit the test case separately, first thing in the morning,\n> and then keep an eye on the build farm for the rest of the day. I\n> don't think that it's sensible to bend over backwards, just to avoid\n> breaking the build farm in this way.\n\nSounds good.\n\n- Melanie", "msg_date": "Wed, 17 Jul 2024 11:07:16 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Wed, Jul 17, 2024 at 11:07 AM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Mon, Jul 15, 2024 at 6:02 PM Peter Geoghegan <[email protected]> wrote:\n> >\n> > I think that there is some chance that this test will break the build\n> > farm in whatever way, since there is a long history of VACUUM not\n> > quite behaving as expected with these sorts of tests. I think that you\n> > should commit the test case separately, first thing in the morning,\n> > and then keep an eye on the build farm for the rest of the day. I\n> > don't think that it's sensible to bend over backwards, just to avoid\n> > breaking the build farm in this way.\n>\n> Sounds good.\n\nHmm. So, I was just running all the versions through CI again, and I\nnoticed that the test failed on master on CI on Linux - Debian\nBookworm with Meson. 
(This passes locally for me and has passed on\nprevious CI runs).\n\n[15:43:41.547] stderr:\n[15:43:41.547] # poll_query_until timed out executing this query:\n[15:43:41.547] #\n[15:43:41.547] # SELECT index_vacuum_count > 0\n[15:43:41.547] # FROM pg_stat_progress_vacuum\n[15:43:41.547] # WHERE datname='test_db' AND relid::regclass =\n'vac_horizon_floor_table'::regclass;\n[15:43:41.547] #\n[15:43:41.547] # expecting this output:\n[15:43:41.547] # t\n[15:43:41.547] # last actual query output:\n[15:43:41.547] # f\n\nWe didn't end up doing two index vacuum passes. Because it doesn't\nrepro locally for me, I can only assume that the conditions for\nforcing two index vacuuming passes in master just weren't met in this\ncase. I'm unsurprised, as it is much harder since 17 to force two\npasses of index vacuuming. It seems like this might be as unstable as\nI feared. I could add more dead data. Or, I could just commit the test\nto the back branches before 17. What do you think?\n\n- Melanie\n\n\n", "msg_date": "Wed, 17 Jul 2024 12:07:08 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Wed, Jul 17, 2024 at 12:07 PM Melanie Plageman\n<[email protected]> wrote:\n> We didn't end up doing two index vacuum passes. Because it doesn't\n> repro locally for me, I can only assume that the conditions for\n> forcing two index vacuuming passes in master just weren't met in this\n> case. I'm unsurprised, as it is much harder since 17 to force two\n> passes of index vacuuming. It seems like this might be as unstable as\n> I feared. I could add more dead data. Or, I could just commit the test\n> to the back branches before 17. What do you think?\n\nHow much margin of error do you have, in terms of total number of\ndead_items? That is, have you whittled it down to the minimum possible\nthreshold for 2 passes?\n\nSome logging with VACUUM VERBOSE (run on the ci instance) might be illuminating.\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 17 Jul 2024 12:10:42 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Wed, Jul 17, 2024 at 12:11 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Wed, Jul 17, 2024 at 12:07 PM Melanie Plageman\n> <[email protected]> wrote:\n> > We didn't end up doing two index vacuum passes. Because it doesn't\n> > repro locally for me, I can only assume that the conditions for\n> > forcing two index vacuuming passes in master just weren't met in this\n> > case. I'm unsurprised, as it is much harder since 17 to force two\n> > passes of index vacuuming. It seems like this might be as unstable as\n> > I feared. I could add more dead data. Or, I could just commit the test\n> > to the back branches before 17. What do you think?\n>\n> How much margin of error do you have, in terms of total number of\n> dead_items? That is, have you whittled it down to the minimum possible\n> threshold for 2 passes?\n\nWhen I run it on my machine with some added logging, the space taken\nby dead items is about 330 kB more than maintenance_work_mem (which is\nset to 1 MB). I could roughly double the excess by increasing the\nnumber of inserted tuples from 400000 to 600000. 
I'll do this.\n\n> Some logging with VACUUM VERBOSE (run on the ci instance) might be illuminating.\n\nVacuum verbose only will tell us the number of dead tuples and dead\nitem identifiers but not how much space they take up -- which is how\nwe decide whether or not to do index vacuuming.\n\n- Melanie\n\n\n", "msg_date": "Wed, 17 Jul 2024 12:49:35 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "Melanie Plageman <[email protected]> writes:\n> When I run it on my machine with some added logging, the space taken\n> by dead items is about 330 kB more than maintenance_work_mem (which is\n> set to 1 MB). I could roughly double the excess by increasing the\n> number of inserted tuples from 400000 to 600000. I'll do this.\n\nSo, after about two days in the buildfarm, we have failure reports\nfrom this test on gull, mamba, mereswine, and copperhead. mamba\nis mine, and I was able to reproduce the failure in a manual run.\nThe problem seems to be that the test simply takes too long and\nwe hit the default 180-second timeout on one step or another.\nI was able to make it pass by dint of\n\n$ export PG_TEST_TIMEOUT_DEFAULT=1800\n\nHowever, the test then took 908 seconds:\n\n$ time make installcheck PROVE_TESTS=t/043_vacuum_horizon_floor.pl\n...\n# +++ tap install-check in src/test/recovery +++\nt/043_vacuum_horizon_floor.pl .. ok \nAll tests successful.\nFiles=1, Tests=3, 908 wallclock secs ( 0.17 usr 0.01 sys + 21.42 cusr 35.03 csys = 56.63 CPU)\nResult: PASS\n 909.26 real 22.10 user 35.21 sys\n\nThis is even slower than the 027_stream_regress.pl test, which\ncurrently takes around 847 seconds on that machine.\n\nmamba, gull, and mereswine are 32-bit machines, which aside from\nbeing old and slow suffer an immediate 2x size-of-test penalty:\n\n>> # The TIDStore vacuum uses to store dead items is optimized for its target\n>> # system. On a 32-bit system, our example requires twice as many pages with\n>> # the same number of dead items per page to fill the TIDStore and trigger a\n>> # second round of index vacuuming.\n>> my $is_64bit = $node_primary->safe_psql($test_db,\n>> \tqq[SELECT typbyval FROM pg_type WHERE typname = 'int8';]);\n>> \n>> my $nrows = $is_64bit eq 't' ? 400000 : 800000;\n\ncopperhead is 64-bit but is nonetheless even slower than the\nother three, so the fact that it's also timing out isn't\nthat surprising.\n\nI do not think the answer to this is to nag the respective animal\nowners to raise PG_TEST_TIMEOUT_DEFAULT. IMV this test is simply\nnot worth the cycles it takes, at least not for these machines.\nI'm not sure whether to propose reverting it entirely or just\ndisabling it on 32-bit hardware. I don't think we'd lose anything\nmeaningful in test coverage if we did the latter; but that won't be\nenough to make copperhead happy. 
I am also suspicious that we'll\nget bad news from other very slow animals such as dikkop.\n\nI wonder if there is a less expensive way to trigger the test\nsituation than brute-forcing things with a large index.\nMaybe the injection point infrastructure could help?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Jul 2024 12:51:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Sun, Jul 21, 2024 at 12:51 PM Tom Lane <[email protected]> wrote:\n> I do not think the answer to this is to nag the respective animal\n> owners to raise PG_TEST_TIMEOUT_DEFAULT. IMV this test is simply\n> not worth the cycles it takes, at least not for these machines.\n\nCan't we just move it to PG_TEST_EXTRA? Alongside the existing\n\"xid_wraparound\" test?\n\nWe didn't even have basic coverage of multi-pass VACUUMs before now.\nThis new test added that coverage. I think that it will pull its\nweight.\n\nThere will always be a small number of extremely slow buildfarm\nanimals. Optimizing for things like Raspberry pi animals with SD cards\njust doesn't seem like a good use of developer time. I really care\nabout keeping the tests fast, but only on platforms that hackers\nactually use for their development work.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sun, 21 Jul 2024 16:28:33 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On Sun, Jul 21, 2024 at 12:51 PM Tom Lane <[email protected]> wrote:\n>> I do not think the answer to this is to nag the respective animal\n>> owners to raise PG_TEST_TIMEOUT_DEFAULT. IMV this test is simply\n>> not worth the cycles it takes, at least not for these machines.\n\n> Can't we just move it to PG_TEST_EXTRA? Alongside the existing\n> \"xid_wraparound\" test?\n\nPerhaps. xid_wraparound seems entirely too slow for what it's\ntesting as well, if you ask me, and there's a concurrent thread\nabout that test causing problems too.\n\n> There will always be a small number of extremely slow buildfarm\n> animals. Optimizing for things like Raspberry pi animals with SD cards\n> just doesn't seem like a good use of developer time. I really care\n> about keeping the tests fast, but only on platforms that hackers\n> actually use for their development work.\n\nI find this argument completely disingenuous. If a test is slow\nenough to cause timeout failures on slower machines, then it's also\neating a disproportionate number of cycles in every other check-world\nrun --- many of which have humans waiting for them to finish. Caring\nabout the runtime of test cases is good for future-you not just\nobsolete buildfarm animals.\n\nI note also that the PG_TEST_EXTRA approach has caused xid_wraparound\nto get next-to-zero buildfarm coverage. If that test is actually\ncapable of revealing problems, we're unlikely to find out under the\nstatus quo.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Jul 2024 17:04:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Sun, Jul 21, 2024 at 5:04 PM Tom Lane <[email protected]> wrote:\n> > There will always be a small number of extremely slow buildfarm\n> > animals. 
Optimizing for things like Raspberry pi animals with SD cards\n> > just doesn't seem like a good use of developer time. I really care\n> > about keeping the tests fast, but only on platforms that hackers\n> > actually use for their development work.\n>\n> I find this argument completely disingenuous.\n\nDisingenuous? Really?\n\n> If a test is slow\n> enough to cause timeout failures on slower machines, then it's also\n> eating a disproportionate number of cycles in every other check-world\n> run --- many of which have humans waiting for them to finish. Caring\n> about the runtime of test cases is good for future-you not just\n> obsolete buildfarm animals.\n\nThat's not necessarily true, though.\n\nI actually benchmarked this new test. I found that its runtime was a\nlittle on the long side as these recovery TAP tests go, but not to an\nexcessive degree. It wasn't the slowest by any means.\n\nIt's entirely possible that the new test would in fact be far slower\nthan other comparable tests, were I to run a similar benchmark on\nsomething like a Raspberry pi with an SD card -- that would explain\nthe apparent inconsistency here. Obviously Raspberry pi type hardware\nis expected to be much slower than the machine I use day to day, but\nthat isn't the only thing that matters. A Raspberry pi can also have\ncompletely different performance characteristics to high quality\nworkstation hardware. The CPU might be tolerably fast, while I/O is a\nhuge bottleneck.\n\n> I note also that the PG_TEST_EXTRA approach has caused xid_wraparound\n> to get next-to-zero buildfarm coverage. If that test is actually\n> capable of revealing problems, we're unlikely to find out under the\n> status quo.\n\nI saw that.\n\nI think that there is significant value in providing a way for\nindividual developers to test wraparound. Both by providing a TAP\ntest, and providing the associated SQL callable C test functions.\nThere is less value in testing it on every conceivable platform.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 21 Jul 2024 17:23:40 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Sun, Jul 21, 2024 at 12:51 PM Tom Lane <[email protected]> wrote:\n>\n> Melanie Plageman <[email protected]> writes:\n> > When I run it on my machine with some added logging, the space taken\n> > by dead items is about 330 kB more than maintenance_work_mem (which is\n> > set to 1 MB). I could roughly double the excess by increasing the\n> > number of inserted tuples from 400000 to 600000. I'll do this.\n>\n> So, after about two days in the buildfarm, we have failure reports\n> from this test on gull, mamba, mereswine, and copperhead. mamba\n> is mine, and I was able to reproduce the failure in a manual run.\n> The problem seems to be that the test simply takes too long and\n> we hit the default 180-second timeout on one step or another.\n> I was able to make it pass by dint of\n>\n> $ export PG_TEST_TIMEOUT_DEFAULT=1800\n>\n> However, the test then took 908 seconds:\n\nThanks for taking the time to do this. If the test failures can be\nfixed by increasing timeout, that means that at least multiple index\nvacuums are reliably triggered with that number of rows. 
Obviously we\ncan't have a super slow, flakey test, but I was worried the test might\nfail on different platforms because somehow the row count was\ninsufficient to cause multiple index vacuums on some platforms for\nsome reason (due to adaptive radix tree size being dependent on many\nfactors).\n\n> $ time make installcheck PROVE_TESTS=t/043_vacuum_horizon_floor.pl\n> ...\n> # +++ tap install-check in src/test/recovery +++\n> t/043_vacuum_horizon_floor.pl .. ok\n> All tests successful.\n> Files=1, Tests=3, 908 wallclock secs ( 0.17 usr 0.01 sys + 21.42 cusr 35.03 csys = 56.63 CPU)\n> Result: PASS\n> 909.26 real 22.10 user 35.21 sys\n>\n> This is even slower than the 027_stream_regress.pl test, which\n> currently takes around 847 seconds on that machine.\n>\n> mamba, gull, and mereswine are 32-bit machines, which aside from\n> being old and slow suffer an immediate 2x size-of-test penalty:\n>\n> >> # The TIDStore vacuum uses to store dead items is optimized for its target\n> >> # system. On a 32-bit system, our example requires twice as many pages with\n> >> # the same number of dead items per page to fill the TIDStore and trigger a\n> >> # second round of index vacuuming.\n> >> my $is_64bit = $node_primary->safe_psql($test_db,\n> >> qq[SELECT typbyval FROM pg_type WHERE typname = 'int8';]);\n> >>\n> >> my $nrows = $is_64bit eq 't' ? 400000 : 800000;\n>\n> copperhead is 64-bit but is nonetheless even slower than the\n> other three, so the fact that it's also timing out isn't\n> that surprising.\n>\n> I do not think the answer to this is to nag the respective animal\n> owners to raise PG_TEST_TIMEOUT_DEFAULT. IMV this test is simply\n> not worth the cycles it takes, at least not for these machines.\n> I'm not sure whether to propose reverting it entirely or just\n> disabling it on 32-bit hardware. I don't think we'd lose anything\n> meaningful in test coverage if we did the latter; but that won't be\n> enough to make copperhead happy. I am also suspicious that we'll\n> get bad news from other very slow animals such as dikkop.\n\nI am happy to do what Peter suggests and move it to PG_TEST_EXTRA, to\ndisable for 32-bit, or to revert it.\n\n> I wonder if there is a less expensive way to trigger the test\n> situation than brute-forcing things with a large index.\n> Maybe the injection point infrastructure could help?\n\nThe issue with an injection point is that we need more than for the\nvacuuming backend to pause at a specific point, we need a refresh of\nGlobalVisState to be forced at that point. Even if the horizon moves\nbackward on the primary, this backend won't notice unless it has to\nupdate its GlobalVisState -- which often happens due to taking a new\nsnapshot but this also happens at the end of index vacuuming\nexplicitly.\n\n- Melanie\n\n\n", "msg_date": "Mon, 22 Jul 2024 09:25:31 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Sun, Jul 21, 2024 at 5:04 PM Tom Lane <[email protected]> wrote:\n>\n> Peter Geoghegan <[email protected]> writes:\n> > On Sun, Jul 21, 2024 at 12:51 PM Tom Lane <[email protected]> wrote:\n> >> I do not think the answer to this is to nag the respective animal\n> >> owners to raise PG_TEST_TIMEOUT_DEFAULT. IMV this test is simply\n> >> not worth the cycles it takes, at least not for these machines.\n>\n> > Can't we just move it to PG_TEST_EXTRA? 
Alongside the existing\n> > \"xid_wraparound\" test?\n>\n> Perhaps. xid_wraparound seems entirely too slow for what it's\n> testing as well, if you ask me, and there's a concurrent thread\n> about that test causing problems too.\n>\n> > There will always be a small number of extremely slow buildfarm\n> > animals. Optimizing for things like Raspberry pi animals with SD cards\n> > just doesn't seem like a good use of developer time. I really care\n> > about keeping the tests fast, but only on platforms that hackers\n> > actually use for their development work.\n>\n> I find this argument completely disingenuous. If a test is slow\n> enough to cause timeout failures on slower machines, then it's also\n> eating a disproportionate number of cycles in every other check-world\n> run --- many of which have humans waiting for them to finish. Caring\n> about the runtime of test cases is good for future-you not just\n> obsolete buildfarm animals.\n>\n> I note also that the PG_TEST_EXTRA approach has caused xid_wraparound\n> to get next-to-zero buildfarm coverage. If that test is actually\n> capable of revealing problems, we're unlikely to find out under the\n> status quo.\n\nWhat is the argument for PG_TEST_EXTRA if it is not running on almost\nany buildfarm animals? Are some of those tests valuable for other\nreasons than being consistently automatically run (e.g. developer\nunderstanding of how a particular part of code works)?\n\nIf they aren't being run, how do we know that they still work (as test\ninfrastructure changes)? The recovery conflict test is skipped on 15,\nwhich means that backported perl test changes may break it without us\nknowing. I don't know if you mean that PG_TEST_EXTRA tests are never\nrun or just seldom run. If they are never run on the buildfarm, then\nthey could end up silently breaking too.\n\n- Melanie\n\n\n", "msg_date": "Mon, 22 Jul 2024 09:30:24 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Sun, Jul 21, 2024 at 4:29 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Sun, Jul 21, 2024 at 12:51 PM Tom Lane <[email protected]> wrote:\n> > I do not think the answer to this is to nag the respective animal\n> > owners to raise PG_TEST_TIMEOUT_DEFAULT. IMV this test is simply\n> > not worth the cycles it takes, at least not for these machines.\n>\n> Can't we just move it to PG_TEST_EXTRA? Alongside the existing\n> \"xid_wraparound\" test?\n>\n> We didn't even have basic coverage of multi-pass VACUUMs before now.\n> This new test added that coverage. I think that it will pull its\n> weight.\n\nAndres has suggested in the past that we allow maintenance_work_mem be\nset to a lower value or introduce some kind of development GUC so that\nwe can more easily test multiple pass index vacuuming. Do you think\nthis would be worth it?\n\n- Melanie\n\n\n", "msg_date": "Mon, 22 Jul 2024 09:32:12 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On 2024-Jul-22, Melanie Plageman wrote:\n\n> On Sun, Jul 21, 2024 at 5:04 PM Tom Lane <[email protected]> wrote:\n\n> > I note also that the PG_TEST_EXTRA approach has caused xid_wraparound\n> > to get next-to-zero buildfarm coverage. 
If that test is actually\n> > capable of revealing problems, we're unlikely to find out under the\n> > status quo.\n> \n> What is the argument for PG_TEST_EXTRA if it is not running on almost\n> any buildfarm animals? Are some of those tests valuable for other\n> reasons than being consistently automatically run (e.g. developer\n> understanding of how a particular part of code works)?\n\nI think it's a bad idea to require buildfarm owners to edit their config\nfiles as we add tests that depend on PG_TEST_EXTRA. AFAIR we invented\nthat setting so that tests that had security implications could be made\nopt-in instead of opt-out; I think this was a sensible thing to do, to\navoid possibly compromising the machines in some way. But I think these\nnew tests have a different problem, so we shouldn't use the same\nmechanism.\n\nWhat about some brainstorming to improve this?\n\nFor example: have something in the tree that lets committers opt some\ntests out from specific BF machines without having to poke at the BF\nmachines. I imagine two files: one that carries tags for buildfarm\nmembers, something like the /etc/groups file,\n\nsrc/test/tags.lst\n slow: gull,mamba,mereswine,copperhead\n\nand another file that lists tests to skip on members that have certain\ntags,\n\nsrc/tools/buildfarm/do_not_run.lst\n slow:src/test/modules/xid_wraparound\n slow:src/test/recovery/t/043_vacuum_horizon_floor.pl\n\nso that run_build.pl know that if the current member has tag slow, then\nthese two tests are to be skipped.\n\nThen we can have xid_wraparound enabled generally (without requiring\nPG_TEST_EXTRA), and the BF client knows not to run it in the particular\ncases where it's not wanted.\n\nThis proposal has a number of problems (a glaring one being the\nmaintenance of the list of members per tag), but maybe it inspires\nbetter ideas.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)\n\n\n", "msg_date": "Mon, 22 Jul 2024 16:11:55 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 9:32 AM Melanie Plageman\n<[email protected]> wrote:\n> Andres has suggested in the past that we allow maintenance_work_mem be\n> set to a lower value or introduce some kind of development GUC so that\n> we can more easily test multiple pass index vacuuming. Do you think\n> this would be worth it?\n\nNo, I don't.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 22 Jul 2024 11:17:03 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "Melanie Plageman <[email protected]> writes:\n> On Sun, Jul 21, 2024 at 5:04 PM Tom Lane <[email protected]> wrote:\n>> I note also that the PG_TEST_EXTRA approach has caused xid_wraparound\n>> to get next-to-zero buildfarm coverage. If that test is actually\n>> capable of revealing problems, we're unlikely to find out under the\n>> status quo.\n\n> What is the argument for PG_TEST_EXTRA if it is not running on almost\n> any buildfarm animals? Are some of those tests valuable for other\n> reasons than being consistently automatically run (e.g. 
developer\n> understanding of how a particular part of code works)?\n\nThe point of PG_TEST_EXTRA is to make some of the tests be opt-in.\nOriginally it was just used for tests that might have security\nimplications (e.g. the kerberos tests, which involve running a\nnot-terribly-locked-down kerberos server). I'm a little suspicious\nof using it for tests that merely take an unreasonable amount of\ntime --- to me, that indicates laziness on the part of the test\nauthor. It'd be better to get the test runtime down to the point\nwhere it's reasonable to expect all the buildfarm animals to run it.\nAs an example, we're not getting any Valgrind coverage on\nxid_wraparound, and we won't ever get it with the current approach,\nwhich I find bad.\n\n> I don't know if you mean that PG_TEST_EXTRA tests are never\n> run or just seldom run. If they are never run on the buildfarm, then\n> they could end up silently breaking too.\n\nThey are opt-in, meaning that both buildfarm owners and regular\ndevelopers have to take extra action (i.e. set the PG_TEST_EXTRA\nenvironment variable) to run them. There's a reasonable number\nof animals opting into ssl, kerberos, etc, but I see only two\nthat are opting into xid_wraparound. If we change this new test\nto be conditional on PG_TEST_EXTRA, it won't get run unless you\nsuccessfully nag some buildfarm owners to run it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jul 2024 11:48:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On Mon, Jul 22, 2024 at 9:32 AM Melanie Plageman\n> <[email protected]> wrote:\n>> Andres has suggested in the past that we allow maintenance_work_mem be\n>> set to a lower value or introduce some kind of development GUC so that\n>> we can more easily test multiple pass index vacuuming. Do you think\n>> this would be worth it?\n\n> No, I don't.\n\nI don't see why that's not a good idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jul 2024 11:49:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I think it's a bad idea to require buildfarm owners to edit their config\n> files as we add tests that depend on PG_TEST_EXTRA. AFAIR we invented\n> that setting so that tests that had security implications could be made\n> opt-in instead of opt-out; I think this was a sensible thing to do, to\n> avoid possibly compromising the machines in some way. But I think these\n> new tests have a different problem, so we shouldn't use the same\n> mechanism.\n\nThat's my feeling also.\n\n> What about some brainstorming to improve this?\n\n> For example: have something in the tree that lets committers opt some\n> tests out from specific BF machines without having to poke at the BF\n> machines. I imagine two files: one that carries tags for buildfarm\n> members, something like the /etc/groups file,\n\nI'd turn it around, and provide some way for buildfarm owners to\nsay \"this machine is slow\". Maybe make the tests respond to the\npresence of an environment variable PG_TEST_SKIP_SLOW, or some\nsuch thing. 
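Something like this near the top of each known-slow test would do it (an\nuntested sketch; PG_TEST_SKIP_SLOW here is only a strawman name, not an\nexisting variable):\n\n    use Test::More;\n\n    # Hypothetical opt-out: the owner of a slow animal sets PG_TEST_SKIP_SLOW\n    # in the environment to skip tests known to be expensive.\n    if ($ENV{PG_TEST_SKIP_SLOW})\n    {\n        plan skip_all => 'slow test skipped because PG_TEST_SKIP_SLOW is set';\n    }\n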
That particular solution would require no new\ninfrastructure (such as a new buildfarm client release); it'd\nonly require editing the config files of affected animals.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jul 2024 11:54:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 11:49 AM Tom Lane <[email protected]> wrote:\n> >> Andres has suggested in the past that we allow maintenance_work_mem be\n> >> set to a lower value or introduce some kind of development GUC so that\n> >> we can more easily test multiple pass index vacuuming. Do you think\n> >> this would be worth it?\n>\n> > No, I don't.\n>\n> I don't see why that's not a good idea.\n\nI don't think that it's worth going to that trouble. Testing multiple\npasses isn't hard -- not in any real practical sense.\n\nI accept that there needs to be some solution to the problem of the\ntests timing out on slow running buildfarm animals. Your\nPG_TEST_SKIP_SLOW proposal seems like a good approach.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 22 Jul 2024 12:00:51 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "Hi,\n\nOn 2024-07-21 12:51:51 -0400, Tom Lane wrote:\n> Melanie Plageman <[email protected]> writes:\n> > When I run it on my machine with some added logging, the space taken\n> > by dead items is about 330 kB more than maintenance_work_mem (which is\n> > set to 1 MB). I could roughly double the excess by increasing the\n> > number of inserted tuples from 400000 to 600000. I'll do this.\n\n> mamba, gull, and mereswine are 32-bit machines, which aside from\n> being old and slow suffer an immediate 2x size-of-test penalty:\n\nI think what we ought to do here is to lower the lower limit for memory usage\nfor vacuum. With the new state in 17+ it basically has become impossible to\ntest multi-pass vacuums in a way that won't get your test thrown out - that's\nbad.\n\n\n> I do not think the answer to this is to nag the respective animal owners to\n> raise PG_TEST_TIMEOUT_DEFAULT. IMV this test is simply not worth the cycles\n> it takes, at least not for these machines.\n\nThis specific area of the code has a *long* history of bugs, I'd be very loath\nto give up testing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Jul 2024 09:47:45 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 11:48 AM Tom Lane <[email protected]> wrote:\n> I'm a little suspicious\n> of using it for tests that merely take an unreasonable amount of\n> time --- to me, that indicates laziness on the part of the test\n> author.\n\nLaziness would have been not bothering to develop a TAP test for this\nat all. Going to the trouble of creating one and not being able to\nmake it as fast or as stable as everybody would like is just being\nhuman.\n\nI never quite know what to do about TAP testing for issues like this.\nIdeally, we want a test case that runs quickly, is highly stable, is\nperfectly sensitive to the bug being fixed, and has a reasonable\nlikelihood of being sensitive to future bugs of the same ilk. 
But such\na test case need not exist, and even if it does, it need not be the\ncase that any of us are able to find it. Or maybe finding it is\npossible but will take an unreasonable amount of time: if it took a\ncommitter six months to come up with such a test case for this bug,\nwould that be worth it, or just overkill? I'd say overkill: I'd rather\nhave that committer working on other stuff than spending six months\ntrying to craft the perfect test case for a bug that's already fixed.\n\nAlso, this particular bug seems to require a very specific combination\nof circumstances in order to trigger it. So the test gets complicated.\nAs mentioned, that makes it harder to get the test case fast and\nstable, but it also reduces the chances that the test case will ever\nfind anything. I don't think that this will be the last time we make a\nmistake around VACUUM's xmin handling, but the next mistake may well\nrequire an equally baroque but *different* setup to cause a problem. I\nhate to come to the conclusion that we just shouldn't test for this,\nbut I don't think it's fair to send Melanie off on a wild goose chase\nlooking for a perfect test case that may not realistically exist,\neither.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jul 2024 13:16:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On 2024-07-22 13:16:49 -0400, Robert Haas wrote:\n> On Mon, Jul 22, 2024 at 11:48 AM Tom Lane <[email protected]> wrote:\n> > I'm a little suspicious\n> > of using it for tests that merely take an unreasonable amount of\n> > time --- to me, that indicates laziness on the part of the test\n> > author.\n> \n> Laziness would have been not bothering to develop a TAP test for this\n> at all. Going to the trouble of creating one and not being able to\n> make it as fast or as stable as everybody would like is just being\n> human.\n\nYea, I think calling weeks of effort by Melanie lazy is, uhm, not kind.\n\nIt's not like somebody else had a great suggestion for how to do this in a\nbetter way either.\n\n\n", "msg_date": "Mon, 22 Jul 2024 10:37:17 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On 2024-07-22 12:00:51 -0400, Peter Geoghegan wrote:\n> On Mon, Jul 22, 2024 at 11:49 AM Tom Lane <[email protected]> wrote:\n> > >> Andres has suggested in the past that we allow maintenance_work_mem be\n> > >> set to a lower value or introduce some kind of development GUC so that\n> > >> we can more easily test multiple pass index vacuuming. Do you think\n> > >> this would be worth it?\n> >\n> > > No, I don't.\n> >\n> > I don't see why that's not a good idea.\n> \n> I don't think that it's worth going to that trouble. Testing multiple\n> passes isn't hard -- not in any real practical sense.\n\nIt's hard by now (i.e. 17+) because you need substantial amounts of rows to be\nable to trigger it which makes it a hard fight to introduce. 
And the cost of\nsetting the GUC limit lower is essentially zero.\n\nWhat's the point of having such a high lower limit?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Jul 2024 11:13:32 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 1:17 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 11:48 AM Tom Lane <[email protected]> wrote:\n> > I'm a little suspicious\n> > of using it for tests that merely take an unreasonable amount of\n> > time --- to me, that indicates laziness on the part of the test\n> > author.\n>\n> Laziness would have been not bothering to develop a TAP test for this\n> at all. Going to the trouble of creating one and not being able to\n> make it as fast or as stable as everybody would like is just being\n> human.\n>\n> I never quite know what to do about TAP testing for issues like this.\n> Ideally, we want a test case that runs quickly, is highly stable, is\n> perfectly sensitive to the bug being fixed, and has a reasonable\n> likelihood of being sensitive to future bugs of the same ilk. But such\n> a test case need not exist, and even if it does, it need not be the\n> case that any of us are able to find it. Or maybe finding it is\n> possible but will take an unreasonable amount of time: if it took a\n> committer six months to come up with such a test case for this bug,\n> would that be worth it, or just overkill? I'd say overkill: I'd rather\n> have that committer working on other stuff than spending six months\n> trying to craft the perfect test case for a bug that's already fixed.\n>\n> Also, this particular bug seems to require a very specific combination\n> of circumstances in order to trigger it. So the test gets complicated.\n> As mentioned, that makes it harder to get the test case fast and\n> stable, but it also reduces the chances that the test case will ever\n> find anything. I don't think that this will be the last time we make a\n> mistake around VACUUM's xmin handling, but the next mistake may well\n> require an equally baroque but *different* setup to cause a problem. I\n> hate to come to the conclusion that we just shouldn't test for this,\n> but I don't think it's fair to send Melanie off on a wild goose chase\n> looking for a perfect test case that may not realistically exist,\n> either.\n\nSo, I've just gone through all the test failures on master and 17 for\nmamba, gull, mereswine, and copperhead. I wanted to confirm that the\ntest was always failing for the same reason and also if it had any\nfailures pre-TIDStore.\n\nWe've only run tests with this commit on some of the back branches for\nsome of these animals. Of those, I don't see any failures so far. So,\nit seems the test instability is just related to trying to get\nmultiple passes of index vacuuming reliably with TIDStore.\n\nAFAICT, all the 32bit machine failures are timeouts waiting for the\nstandby to catch up (mamba, gull, merswine). Unfortunately, the\nfailures on copperhead (a 64 bit machine) are because we don't\nactually succeed in triggering a second vacuum pass. This would not be\nfixed by a longer timeout.\n\nBecause of this, I'm inclined to revert the test on 17 and master to\navoid distracting folks committing other work and seeing those animals\ngo red.\n\nI wonder if Sawada-san or John have a test case minimally reproducing\na case needing multiple index vacuuming rounds. 
You can't do it with\nmy example and just more dead rows per page. If you just increase the\nnumber of dead tuples, it doesn't increase the size of the TIDStore\nunless those dead tuples are at different offsets. And I couldn't find\nDDL which would cause the TIDStore to be > 1MB without using a low\nfill-factor and many rows. Additionally, the fact that the same number\nof rows does not trigger the multiple passes on two different 64bit\nmachines worries me and makes me think that we will struggle to\ntrigger these conditions without overshooting the minimum by quite a\nbit.\n\n- Melanie\n\n\n", "msg_date": "Mon, 22 Jul 2024 14:17:46 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 2:13 PM Andres Freund <[email protected]> wrote:\n> It's hard by now (i.e. 17+) because you need substantial amounts of rows to be\n> able to trigger it which makes it a hard fight to introduce.\n\nI didn't think that it was particularly hard when I tested the test\nthat Melanie committed.\n\n> And the cost of\n> setting the GUC limit lower is essentially zero.\n\nApparently you know more about TID Store than me.\n\nIf it really is trivial to lower the limit, then I have no objections\nto doing so. That would make it easy to fix the test flappiness issues\nby just using the much lower limit.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 22 Jul 2024 14:22:54 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 2:17 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> So, I've just gone through all the test failures on master and 17 for\n> mamba, gull, mereswine, and copperhead. I wanted to confirm that the\n> test was always failing for the same reason and also if it had any\n> failures pre-TIDStore.\n>\n> We've only run tests with this commit on some of the back branches for\n> some of these animals. Of those, I don't see any failures so far. So,\n> it seems the test instability is just related to trying to get\n> multiple passes of index vacuuming reliably with TIDStore.\n>\n> AFAICT, all the 32bit machine failures are timeouts waiting for the\n> standby to catch up (mamba, gull, merswine). Unfortunately, the\n> failures on copperhead (a 64 bit machine) are because we don't\n> actually succeed in triggering a second vacuum pass. This would not be\n> fixed by a longer timeout.\n>\n> Because of this, I'm inclined to revert the test on 17 and master to\n> avoid distracting folks committing other work and seeing those animals\n> go red.\n\nOkay, I reverted this for now on 17 and master. Adding Sawada-san to\nthe thread to see if he has any ideas for a smaller two-round index\nvacuum example.\n\n- Melanie\n\n\n", "msg_date": "Mon, 22 Jul 2024 17:04:34 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "Melanie Plageman <[email protected]> writes:\n> We've only run tests with this commit on some of the back branches for\n> some of these animals. Of those, I don't see any failures so far. 
So,\n> it seems the test instability is just related to trying to get\n> multiple passes of index vacuuming reliably with TIDStore.\n\n> AFAICT, all the 32bit machine failures are timeouts waiting for the\n> standby to catch up (mamba, gull, merswine). Unfortunately, the\n> failures on copperhead (a 64 bit machine) are because we don't\n> actually succeed in triggering a second vacuum pass. This would not be\n> fixed by a longer timeout.\n\nOuch. This seems to me to raise the importance of getting a better\nway to test multiple-index-vacuum-passes. Peter argued upthread\nthat we don't need a better way, but I don't see how that argument\nholds water if copperhead was not reaching it despite being 64-bit.\n(Did you figure out exactly why it doesn't reach the code?)\n\n> Because of this, I'm inclined to revert the test on 17 and master to\n> avoid distracting folks committing other work and seeing those animals\n> go red.\n\nAgreed as a short-term measure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jul 2024 18:36:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 10:37:17AM -0700, Andres Freund wrote:\n> On 2024-07-22 13:16:49 -0400, Robert Haas wrote:\n>> Laziness would have been not bothering to develop a TAP test for this\n>> at all. Going to the trouble of creating one and not being able to\n>> make it as fast or as stable as everybody would like is just being\n>> human.\n> \n> Yea, I think calling weeks of effort by Melanie lazy is, uhm, not kind.\n\nFWIW, I'm really impressed by what she has achieved here by Melanie,\nfixing a hard bug while hacking a crazily-complicated test to make it\nreproducible. This has an incredible amount of value in the long-run.\n\n> It's not like somebody else had a great suggestion for how to do this in a\n> better way either.\n\nSawada-san and John are the two ones in the best position to answer\nthat. I'm not sure either how to force a second index pass, either.\nHmm. An idea would be to manipulate the TIDStore stack under the\ninjection points switch? This is run by the CI and some buildfarm\nmembers already.\n--\nMichael", "msg_date": "Tue, 23 Jul 2024 09:25:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> Sawada-san and John are the two ones in the best position to answer\n> that. I'm not sure either how to force a second index pass, either.\n\nYeah, I think we've established that having some way to force that,\nwithout using a huge test case, would be really desirable. Maybe\njust provide a way to put an artificial limit on how many tuples\nprocessed per pass?\n\n(And no, I wasn't trying to rag on Melanie. 
My point here is that\nwe've failed to design-in easy testability of this code path, and\nthat's surely not her fault.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jul 2024 20:37:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 2:04 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 2:17 PM Melanie Plageman\n> <[email protected]> wrote:\n> >\n> > So, I've just gone through all the test failures on master and 17 for\n> > mamba, gull, mereswine, and copperhead. I wanted to confirm that the\n> > test was always failing for the same reason and also if it had any\n> > failures pre-TIDStore.\n> >\n> > We've only run tests with this commit on some of the back branches for\n> > some of these animals. Of those, I don't see any failures so far. So,\n> > it seems the test instability is just related to trying to get\n> > multiple passes of index vacuuming reliably with TIDStore.\n> >\n> > AFAICT, all the 32bit machine failures are timeouts waiting for the\n> > standby to catch up (mamba, gull, merswine). Unfortunately, the\n> > failures on copperhead (a 64 bit machine) are because we don't\n> > actually succeed in triggering a second vacuum pass. This would not be\n> > fixed by a longer timeout.\n> >\n> > Because of this, I'm inclined to revert the test on 17 and master to\n> > avoid distracting folks committing other work and seeing those animals\n> > go red.\n>\n> Okay, I reverted this for now on 17 and master. Adding Sawada-san to\n> the thread to see if he has any ideas for a smaller two-round index\n> vacuum example.\n>\n\n+ CREATE TABLE ${table1}(col1 int)\n+ WITH (autovacuum_enabled=false, fillfactor=10);\n+ INSERT INTO $table1 VALUES(7);\n+ INSERT INTO $table1 SELECT generate_series(1, $nrows) % 3;\n+ CREATE INDEX on ${table1}(col1);\n+ UPDATE $table1 SET col1 = 3 WHERE col1 = 0;\n+ INSERT INTO $table1 VALUES(7);\n\nThese queries make sense to me; these make the radix tree wide and use\nmore nodes, instead of fattening lead nodes (i.e. the offset bitmap).\nThe $table1 has 18182 blocks and the statistics of radix tree shows:\n\nmax_val = 65535\nnum_keys = 18182\nheight = 1, n4 = 0, n16 = 1, n32 = 0, n64 = 0, n256 = 72, leaves = 18182\n\nWhich means that the height of the tree is 2 and we use the maximum\nsize node for all nodes except for 1 node.\n\nI don't have any great idea to substantially reduce the total number\nof tuples in the $table1. Probably we can use DELETE instead of UPDATE\nto make garbage tuples (although I'm not sure it's okay for this\ntest). Which reduces the amount of WAL records from 11MB to 4MB and\nwould reduce the time to catch up. But I'm not sure how much it would\nhelp. There might be ideas to trigger a two-round index vacuum with\nfewer tuples but if the tests are too optimized for the current\nTidStore, we will have to re-adjust them if the TidStore changes in\nthe future. 
So I think it's better and reliable to allow\nmaintenance_work_mem to be a lower value or use injection points\nsomehow.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Jul 2024 18:25:28 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 6:36 PM Tom Lane <[email protected]> wrote:\n>\n> Melanie Plageman <[email protected]> writes:\n> > We've only run tests with this commit on some of the back branches for\n> > some of these animals. Of those, I don't see any failures so far. So,\n> > it seems the test instability is just related to trying to get\n> > multiple passes of index vacuuming reliably with TIDStore.\n>\n> > AFAICT, all the 32bit machine failures are timeouts waiting for the\n> > standby to catch up (mamba, gull, merswine). Unfortunately, the\n> > failures on copperhead (a 64 bit machine) are because we don't\n> > actually succeed in triggering a second vacuum pass. This would not be\n> > fixed by a longer timeout.\n>\n> Ouch. This seems to me to raise the importance of getting a better\n> way to test multiple-index-vacuum-passes. Peter argued upthread\n> that we don't need a better way, but I don't see how that argument\n> holds water if copperhead was not reaching it despite being 64-bit.\n> (Did you figure out exactly why it doesn't reach the code?)\n\nI wasn't able to reproduce the failure (failing to do > 1 index vacuum\npass) on my local machine (which is 64 bit) without decreasing the\nnumber of tuples inserted. The copperhead failure confuses me because\nthe speed of the machine should *not* affect how much space the dead\nitem TIDStore takes up. I would have bet money that the same number\nand offsets of dead tuples per page in a relation would take up the\nsame amount of space in a TIDStore on any 64-bit system -- regardless\nof how slowly it runs vacuum.\n\nHere is some background on how I came up with the DDL and tuple count\nfor the test: TIDStore uses 32 BITS_PER_BITMAPWORD on 32 bit systems\nand 64 on 64 bit systems. So, if you only have one bitmapword's worth\nof dead items per page, it was easy to figure out that you would need\ndouble the number of pages with dead items to take up the same amount\nof TIDStore space on a 32 bit system than on a 64 bit system.\n\nI wanted to figure out how to take up double the amount of TIDStore\nspace *without* doubling the number of tuples. This is not\nstraightforward. You can't just delete twice as many dead tuples per\npage. For starters, you can compactly represent many dead tuples in a\nsingle bitmapword. Outside of this, there seems to be some effect on\nthe amount of space the adaptive radix tree takes up if the dead items\non the pages are at the same offsets on all the pages. I thought this\nmight have to do with being able to use the same chunk (in ART terms)?\nI spent some time trying to figure it out, but I gave up once I got\nconfused enough to try and read the adaptive radix tree paper.\n\nI found myself wishing there was some way to visualize the TIDStore. I\ndon't have good ideas how to represent this, but if we found one, we\ncould add a function to the test_tidstore module.\n\nI also think it would be useful to have peak TIDStore usage in bytes\nin the vacuum verbose output. 
I had it on my list to propose something\nlike this after I hacked together a version myself while trying to\ndebug the test locally.\n\n- Melanie\n\n\n", "msg_date": "Mon, 22 Jul 2024 21:26:11 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 6:26 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 6:36 PM Tom Lane <[email protected]> wrote:\n> >\n> > Melanie Plageman <[email protected]> writes:\n> > > We've only run tests with this commit on some of the back branches for\n> > > some of these animals. Of those, I don't see any failures so far. So,\n> > > it seems the test instability is just related to trying to get\n> > > multiple passes of index vacuuming reliably with TIDStore.\n> >\n> > > AFAICT, all the 32bit machine failures are timeouts waiting for the\n> > > standby to catch up (mamba, gull, merswine). Unfortunately, the\n> > > failures on copperhead (a 64 bit machine) are because we don't\n> > > actually succeed in triggering a second vacuum pass. This would not be\n> > > fixed by a longer timeout.\n> >\n> > Ouch. This seems to me to raise the importance of getting a better\n> > way to test multiple-index-vacuum-passes. Peter argued upthread\n> > that we don't need a better way, but I don't see how that argument\n> > holds water if copperhead was not reaching it despite being 64-bit.\n> > (Did you figure out exactly why it doesn't reach the code?)\n>\n> I wasn't able to reproduce the failure (failing to do > 1 index vacuum\n> pass) on my local machine (which is 64 bit) without decreasing the\n> number of tuples inserted. The copperhead failure confuses me because\n> the speed of the machine should *not* affect how much space the dead\n> item TIDStore takes up. I would have bet money that the same number\n> and offsets of dead tuples per page in a relation would take up the\n> same amount of space in a TIDStore on any 64-bit system -- regardless\n> of how slowly it runs vacuum.\n\nLooking at copperhead's failure logs, I could not find that \"VACUUM\n(VERBOSE, FREEZE) vac_horizon_floor_table;\" wrote the number of index\nscans in logs. Is there any clue that made you think the test failed\nto do multiple index vacuum passes?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Jul 2024 19:53:24 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 10:54 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 6:26 PM Melanie Plageman\n> <[email protected]> wrote:\n> >\n> > On Mon, Jul 22, 2024 at 6:36 PM Tom Lane <[email protected]> wrote:\n> > >\n> > > Melanie Plageman <[email protected]> writes:\n> > > > We've only run tests with this commit on some of the back branches for\n> > > > some of these animals. Of those, I don't see any failures so far. So,\n> > > > it seems the test instability is just related to trying to get\n> > > > multiple passes of index vacuuming reliably with TIDStore.\n> > >\n> > > > AFAICT, all the 32bit machine failures are timeouts waiting for the\n> > > > standby to catch up (mamba, gull, merswine). 
Unfortunately, the\n> > > > failures on copperhead (a 64 bit machine) are because we don't\n> > > > actually succeed in triggering a second vacuum pass. This would not be\n> > > > fixed by a longer timeout.\n> > >\n> > > Ouch. This seems to me to raise the importance of getting a better\n> > > way to test multiple-index-vacuum-passes. Peter argued upthread\n> > > that we don't need a better way, but I don't see how that argument\n> > > holds water if copperhead was not reaching it despite being 64-bit.\n> > > (Did you figure out exactly why it doesn't reach the code?)\n> >\n> > I wasn't able to reproduce the failure (failing to do > 1 index vacuum\n> > pass) on my local machine (which is 64 bit) without decreasing the\n> > number of tuples inserted. The copperhead failure confuses me because\n> > the speed of the machine should *not* affect how much space the dead\n> > item TIDStore takes up. I would have bet money that the same number\n> > and offsets of dead tuples per page in a relation would take up the\n> > same amount of space in a TIDStore on any 64-bit system -- regardless\n> > of how slowly it runs vacuum.\n>\n> Looking at copperhead's failure logs, I could not find that \"VACUUM\n> (VERBOSE, FREEZE) vac_horizon_floor_table;\" wrote the number of index\n> scans in logs. Is there any clue that made you think the test failed\n> to do multiple index vacuum passes?\n\nThe vacuum doesn't actually finish because I have a cursor that keeps\nit from finishing and then I query pg_stat_progress_vacuum after the\nfirst index vacuuming round should have happened and it did not do the\nindex vacuum:\n\n[20:39:34.645](351.522s) # poll_query_until timed out executing this query:\n#\n# SELECT index_vacuum_count > 0\n# FROM pg_stat_progress_vacuum\n# WHERE datname='test_db' AND relid::regclass =\n'vac_horizon_floor_table'::regclass;\n#\n# expecting this output:\n# t\n# last actual query output:\n# f\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-07-22%2015%3A00%3A11\n\nI suppose it is possible that it did in fact time out and the index\nvacuum was still in progress. But most of the other \"too slow\"\nfailures were when the standby was trying to catch up. Usually the\npg_stat_progress_vacuum test fails because we didn't actually do that\nindex vacuuming round yet.\n\n- Melanie\n\n\n", "msg_date": "Tue, 23 Jul 2024 08:43:20 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Tue, Jul 23, 2024 at 5:43 AM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 10:54 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Mon, Jul 22, 2024 at 6:26 PM Melanie Plageman\n> > <[email protected]> wrote:\n> > >\n> > > On Mon, Jul 22, 2024 at 6:36 PM Tom Lane <[email protected]> wrote:\n> > > >\n> > > > Melanie Plageman <[email protected]> writes:\n> > > > > We've only run tests with this commit on some of the back branches for\n> > > > > some of these animals. Of those, I don't see any failures so far. So,\n> > > > > it seems the test instability is just related to trying to get\n> > > > > multiple passes of index vacuuming reliably with TIDStore.\n> > > >\n> > > > > AFAICT, all the 32bit machine failures are timeouts waiting for the\n> > > > > standby to catch up (mamba, gull, merswine). 
Unfortunately, the\n> > > > > failures on copperhead (a 64 bit machine) are because we don't\n> > > > > actually succeed in triggering a second vacuum pass. This would not be\n> > > > > fixed by a longer timeout.\n> > > >\n> > > > Ouch. This seems to me to raise the importance of getting a better\n> > > > way to test multiple-index-vacuum-passes. Peter argued upthread\n> > > > that we don't need a better way, but I don't see how that argument\n> > > > holds water if copperhead was not reaching it despite being 64-bit.\n> > > > (Did you figure out exactly why it doesn't reach the code?)\n> > >\n> > > I wasn't able to reproduce the failure (failing to do > 1 index vacuum\n> > > pass) on my local machine (which is 64 bit) without decreasing the\n> > > number of tuples inserted. The copperhead failure confuses me because\n> > > the speed of the machine should *not* affect how much space the dead\n> > > item TIDStore takes up. I would have bet money that the same number\n> > > and offsets of dead tuples per page in a relation would take up the\n> > > same amount of space in a TIDStore on any 64-bit system -- regardless\n> > > of how slowly it runs vacuum.\n> >\n> > Looking at copperhead's failure logs, I could not find that \"VACUUM\n> > (VERBOSE, FREEZE) vac_horizon_floor_table;\" wrote the number of index\n> > scans in logs. Is there any clue that made you think the test failed\n> > to do multiple index vacuum passes?\n>\n> The vacuum doesn't actually finish because I have a cursor that keeps\n> it from finishing and then I query pg_stat_progress_vacuum after the\n> first index vacuuming round should have happened and it did not do the\n> index vacuum:\n>\n> [20:39:34.645](351.522s) # poll_query_until timed out executing this query:\n> #\n> # SELECT index_vacuum_count > 0\n> # FROM pg_stat_progress_vacuum\n> # WHERE datname='test_db' AND relid::regclass =\n> 'vac_horizon_floor_table'::regclass;\n> #\n> # expecting this output:\n> # t\n> # last actual query output:\n> # f\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-07-22%2015%3A00%3A11\n>\n> I suppose it is possible that it did in fact time out and the index\n> vacuum was still in progress. But most of the other \"too slow\"\n> failures were when the standby was trying to catch up. Usually the\n> pg_stat_progress_vacuum test fails because we didn't actually do that\n> index vacuuming round yet.\n\nThank you for your explanation! I understood the test cases.\n\nI figured out why two-round index vacuum was not triggered on\ncopperhead although it's a 64-bit system. In short, this test case\ndepends on MEMORY_CONTEXT_CHECK (or USE_ASSERT_CHECKING) being on.\n\nIn this test case, every BlocktableEntry size would be 16 bytes; the\nheader is 8 bytes and offset bitmap is 8 bytes (covering up to offset\n63). We calculate the memory size (required_size in BumpAlloc()) to\nallocate in a bump memory context as follows:\n\n#ifdef MEMORY_CONTEXT_CHECKING\n /* ensure there's always space for the sentinel byte */\n chunk_size = MAXALIGN(size + 1);\n#else\n chunk_size = MAXALIGN(size);\n#endif\n\n (snip)\n\n required_size = chunk_size + Bump_CHUNKHDRSZ;\n\nWithout MEMORY_CONTEXT_CHECK, if size is 16 bytes, required_size is\nalso 16 bytes as it's already 8-byte aligned and Bump_CHUNKHDRSZ is 0.\nOn the other hand with MEMORY_CONTEXT_CHECK, the requied_size is\nbumped to 40 bytes as chunk_size is 24 bytes and Bump_CHUNKHDRSZ is 16\nbytes. 
Therefore, with MEMORY_CONTEXT_CHECK, we allocate more memory\nand use more Bump memory blocks, resulting in filling up TidStore in\nthe test cases. We can easily reproduce this test failure with\nPostgreSQL server built without --enable-cassert. It seems that\ncopperhead is the sole BF animal that doesn't use --enable-cassert but\nruns recovery-check.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 23 Jul 2024 15:39:49 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Wed, Jul 24, 2024 at 5:40 AM Masahiko Sawada <[email protected]> wrote:\n\n> Without MEMORY_CONTEXT_CHECK, if size is 16 bytes, required_size is\n> also 16 bytes as it's already 8-byte aligned and Bump_CHUNKHDRSZ is 0.\n> On the other hand with MEMORY_CONTEXT_CHECK, the requied_size is\n> bumped to 40 bytes as chunk_size is 24 bytes and Bump_CHUNKHDRSZ is 16\n> bytes. Therefore, with MEMORY_CONTEXT_CHECK, we allocate more memory\n> and use more Bump memory blocks, resulting in filling up TidStore in\n> the test cases. We can easily reproduce this test failure with\n> PostgreSQL server built without --enable-cassert. It seems that\n> copperhead is the sole BF animal that doesn't use --enable-cassert but\n> runs recovery-check.\n\nIt seems we could force the bitmaps to be larger, and also reduce the\nnumber of updated tuples by updating only the last few tuples (say\n5-10) by looking at the ctid's offset. This requires some trickery,\nbut I believe I've done it in the past by casting to text and\nextracting with a regex. (I'm assuming the number of tuples updated is\nmore important than the number of tuples inserted on a newly created\ntable.)\n\nAs for lowering the limit, we've experimented with 256kB here:\n\nhttps://www.postgresql.org/message-id/CANWCAZZUTvZ3LsYpauYQVzcEZXZ7Qe+9ntnHgYZDTWxPuL++zA@mail.gmail.com\n\nAs I mention there, going lower than that would need a small amount of\nreorganization in the radix tree. Not difficult -- the thing I'm\nconcerned about is that we'd likely need to document a separate\nminimum for DSA, since that behaves strangely with 256kB and might not\nwork at all lower than that.\n\n\n", "msg_date": "Wed, 24 Jul 2024 14:42:59 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Wed, Jul 24, 2024 at 2:42 PM John Naylor <[email protected]> wrote:\n> As for lowering the limit, we've experimented with 256kB here:\n>\n> https://www.postgresql.org/message-id/CANWCAZZUTvZ3LsYpauYQVzcEZXZ7Qe+9ntnHgYZDTWxPuL++zA@mail.gmail.com\n>\n> As I mention there, going lower than that would need a small amount of\n> reorganization in the radix tree. 
Not difficult -- the thing I'm\n> concerned about is that we'd likely need to document a separate\n> minimum for DSA, since that behaves strangely with 256kB and might not\n> work at all lower than that.\n\nFor experimentation, here's a rough patch (really two, squashed\ntogether for now) that allows m_w_m to go down to 64kB.\n\ndrop table if exists test;\ncreate table test (a int) with (autovacuum_enabled=false, fillfactor=10);\ninsert into test (a) select i from generate_series(1,2000) i;\ncreate index on test (a);\nupdate test set a = a + 1;\n\nset maintenance_work_mem = '64kB';\nvacuum (verbose) test;\n\nINFO: vacuuming \"john.public.test\"\nINFO: finished vacuuming \"john.public.test\": index scans: 3\npages: 0 removed, 91 remain, 91 scanned (100.00% of total)\n\nThe advantage with this is that we don't need to care about\nMEMORY_CONTEXT_CHECKING or 32/64 bit-ness, since allocating a single\nlarge node will immediately blow the limit, and that will happen\nfairly quickly regardless. I suspect going this low will not work with\ndynamic shared memory and if so would need a warning comment.", "msg_date": "Wed, 24 Jul 2024 19:19:36 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Mon, Jul 22, 2024 at 9:26 PM Masahiko Sawada <[email protected]> wrote:\n>\n> + CREATE TABLE ${table1}(col1 int)\n> + WITH (autovacuum_enabled=false, fillfactor=10);\n> + INSERT INTO $table1 VALUES(7);\n> + INSERT INTO $table1 SELECT generate_series(1, $nrows) % 3;\n> + CREATE INDEX on ${table1}(col1);\n> + UPDATE $table1 SET col1 = 3 WHERE col1 = 0;\n> + INSERT INTO $table1 VALUES(7);\n>\n> These queries make sense to me; these make the radix tree wide and use\n> more nodes, instead of fattening lead nodes (i.e. the offset bitmap).\n> The $table1 has 18182 blocks and the statistics of radix tree shows:\n>\n> max_val = 65535\n> num_keys = 18182\n> height = 1, n4 = 0, n16 = 1, n32 = 0, n64 = 0, n256 = 72, leaves = 18182\n>\n> Which means that the height of the tree is 2 and we use the maximum\n> size node for all nodes except for 1 node.\n\nDo you have some kind of tool that prints this out for you? That would\nbe really handy.\n\n> I don't have any great idea to substantially reduce the total number\n> of tuples in the $table1. Probably we can use DELETE instead of UPDATE\n> to make garbage tuples (although I'm not sure it's okay for this\n> test). Which reduces the amount of WAL records from 11MB to 4MB and\n> would reduce the time to catch up. But I'm not sure how much it would\n> help. There might be ideas to trigger a two-round index vacuum with\n> fewer tuples but if the tests are too optimized for the current\n> TidStore, we will have to re-adjust them if the TidStore changes in\n> the future. So I think it's better and reliable to allow\n> maintenance_work_mem to be a lower value or use injection points\n> somehow.\n\nI think we can make improvements in overall time on master and 17 with\nthe examples John provided later in the thread. However, I realized\nyou are right about using a DELETE instead of an UPDATE. At some point\nin my development, I needed the UPDATE to satisfy some other aspect of\nthe test. But that is no longer true. 
A DELETE works just as well as\nan UPDATE WRT the dead items and, as you point out, much less WAL is\ncreated and replay is much faster.\n\nI also realized I forgot to add 043_vacuum_horizon_floor.pl to\nsrc/test/recovery/meson.build in 16. I will post a patch here this\nweekend which changes the UPDATE to a DELETE in 14-16 (sped up the\ntest by about 20% for me locally) and adds 043_vacuum_horizon_floor.pl\nto src/test/recovery/meson.build in 16. I'll plan to push it on Monday\nto save myself any weekend buildfarm embarrassment.\n\nAs for 17 and master, I'm going to try out John's examples and see if\nit seems like it will be fast enough to commit to 17/master without\nlowering the maintenance_work_mem lower bound.\n\nIf we want to lower it, I wonder if we just halve it -- since it seems\nlike the tests with half the number of tuples were fast enough to\navoid timing out on slow animals on the buildfarm? Or do we need some\nmore meaningful value to decrease it to?\n\n- Melanie\n\n\n", "msg_date": "Fri, 26 Jul 2024 16:27:17 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Wed, Jul 24, 2024 at 8:19 AM John Naylor <[email protected]> wrote:\n>\n> On Wed, Jul 24, 2024 at 2:42 PM John Naylor <[email protected]> wrote:\n> > As for lowering the limit, we've experimented with 256kB here:\n> >\n> > https://www.postgresql.org/message-id/CANWCAZZUTvZ3LsYpauYQVzcEZXZ7Qe+9ntnHgYZDTWxPuL++zA@mail.gmail.com\n> >\n> > As I mention there, going lower than that would need a small amount of\n> > reorganization in the radix tree. Not difficult -- the thing I'm\n> > concerned about is that we'd likely need to document a separate\n> > minimum for DSA, since that behaves strangely with 256kB and might not\n> > work at all lower than that.\n>\n> For experimentation, here's a rough patch (really two, squashed\n> together for now) that allows m_w_m to go down to 64kB.\n\nOh, great, thanks! I didn't read this closely enough before I posed my\nupthread question about how small we should make the minimum. It\nsounds like you've thought a lot about this.\n\nI ran my test with your patch (on my 64-bit system, non-assert build)\nand the result is great:\n\nmaster with my test (slightly modified to now use DELETE instead of\nUPDATE as mentioned upthread)\n 3.09s\n\nmaster with your patch applied, MWM set to 64kB and 9000 rows instead of 800000\n 1.06s\n\n> drop table if exists test;\n> create table test (a int) with (autovacuum_enabled=false, fillfactor=10);\n> insert into test (a) select i from generate_series(1,2000) i;\n> create index on test (a);\n> update test set a = a + 1;\n>\n> set maintenance_work_mem = '64kB';\n> vacuum (verbose) test;\n>\n> INFO: vacuuming \"john.public.test\"\n> INFO: finished vacuuming \"john.public.test\": index scans: 3\n> pages: 0 removed, 91 remain, 91 scanned (100.00% of total)\n>\n> The advantage with this is that we don't need to care about\n> MEMORY_CONTEXT_CHECKING or 32/64 bit-ness, since allocating a single\n> large node will immediately blow the limit, and that will happen\n> fairly quickly regardless. 
I suspect going this low will not work with\n> dynamic shared memory and if so would need a warning comment.\n\nI took a look at the patch, but I can't say I know enough about the\nmemory allocation subsystems and how TIDStore works to meaningfully\nreview it -- nor enough about DSM to comment about the interactions.\n\nI suspect 256kB would also be fast enough to avoid my test timing out\non the buildfarm, but it is appealing to have a minimum for\nmaintenance_work_mem that is the same as work_mem.\n\n- Melanie\n\n\n", "msg_date": "Fri, 26 Jul 2024 17:04:31 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Wed, Jul 24, 2024 at 3:43 AM John Naylor <[email protected]> wrote:\n>\n> On Wed, Jul 24, 2024 at 5:40 AM Masahiko Sawada <[email protected]> wrote:\n>\n> > Without MEMORY_CONTEXT_CHECK, if size is 16 bytes, required_size is\n> > also 16 bytes as it's already 8-byte aligned and Bump_CHUNKHDRSZ is 0.\n> > On the other hand with MEMORY_CONTEXT_CHECK, the requied_size is\n> > bumped to 40 bytes as chunk_size is 24 bytes and Bump_CHUNKHDRSZ is 16\n> > bytes. Therefore, with MEMORY_CONTEXT_CHECK, we allocate more memory\n> > and use more Bump memory blocks, resulting in filling up TidStore in\n> > the test cases. We can easily reproduce this test failure with\n> > PostgreSQL server built without --enable-cassert. It seems that\n> > copperhead is the sole BF animal that doesn't use --enable-cassert but\n> > runs recovery-check.\n>\n> It seems we could force the bitmaps to be larger, and also reduce the\n> number of updated tuples by updating only the last few tuples (say\n> 5-10) by looking at the ctid's offset. This requires some trickery,\n> but I believe I've done it in the past by casting to text and\n> extracting with a regex. (I'm assuming the number of tuples updated is\n> more important than the number of tuples inserted on a newly created\n> table.)\n\nYes, the only thing that is important is having two rounds of index\nvacuuming and having one tuple with a value matching my cursor\ncondition before the first index vacuum and one after. What do you\nmean update only the last few tuples though?\n\n- Melanie\n\n\n", "msg_date": "Fri, 26 Jul 2024 17:07:59 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Saturday, July 27, 2024, Melanie Plageman <[email protected]>\nwrote:\n>\n>\n> Yes, the only thing that is important is having two rounds of index\n> vacuuming and having one tuple with a value matching my cursor\n> condition before the first index vacuum and one after. What do you\n> mean update only the last few tuples though?\n>\n\nI meant we could update tuples with the highest offsets on each page. That\nwould then lead to longer arrays of bitmaps to store offsets during vacuum.\nLowering the minimum memory setting is easier to code and reason about,\nhowever.\n\nOn Saturday, July 27, 2024, Melanie Plageman <[email protected]> wrote:\n\nYes, the only thing that is important is having two rounds of index\nvacuuming and having one tuple with a value matching my cursor\ncondition before the first index vacuum and one after. What do you\nmean update only the last few tuples though?\nI meant we could update tuples with the highest offsets on each page. 
That would then lead to longer arrays of bitmaps to store offsets during vacuum. Lowering the minimum memory setting is easier to code and reason about, however.", "msg_date": "Sun, 28 Jul 2024 10:20:02 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Fri, Jul 26, 2024 at 1:27 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 9:26 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > + CREATE TABLE ${table1}(col1 int)\n> > + WITH (autovacuum_enabled=false, fillfactor=10);\n> > + INSERT INTO $table1 VALUES(7);\n> > + INSERT INTO $table1 SELECT generate_series(1, $nrows) % 3;\n> > + CREATE INDEX on ${table1}(col1);\n> > + UPDATE $table1 SET col1 = 3 WHERE col1 = 0;\n> > + INSERT INTO $table1 VALUES(7);\n> >\n> > These queries make sense to me; these make the radix tree wide and use\n> > more nodes, instead of fattening lead nodes (i.e. the offset bitmap).\n> > The $table1 has 18182 blocks and the statistics of radix tree shows:\n> >\n> > max_val = 65535\n> > num_keys = 18182\n> > height = 1, n4 = 0, n16 = 1, n32 = 0, n64 = 0, n256 = 72, leaves = 18182\n> >\n> > Which means that the height of the tree is 2 and we use the maximum\n> > size node for all nodes except for 1 node.\n>\n> Do you have some kind of tool that prints this out for you? That would\n> be really handy.\n\nYou can add '#define RT_DEBUG' for radix tree used in TidStore and\nthen call RT_STATS (e.g., local_ts_stats()).\n\n>\n> > I don't have any great idea to substantially reduce the total number\n> > of tuples in the $table1. Probably we can use DELETE instead of UPDATE\n> > to make garbage tuples (although I'm not sure it's okay for this\n> > test). Which reduces the amount of WAL records from 11MB to 4MB and\n> > would reduce the time to catch up. But I'm not sure how much it would\n> > help. There might be ideas to trigger a two-round index vacuum with\n> > fewer tuples but if the tests are too optimized for the current\n> > TidStore, we will have to re-adjust them if the TidStore changes in\n> > the future. So I think it's better and reliable to allow\n> > maintenance_work_mem to be a lower value or use injection points\n> > somehow.\n>\n> I think we can make improvements in overall time on master and 17 with\n> the examples John provided later in the thread. However, I realized\n> you are right about using a DELETE instead of an UPDATE. At some point\n> in my development, I needed the UPDATE to satisfy some other aspect of\n> the test. But that is no longer true. A DELETE works just as well as\n> an UPDATE WRT the dead items and, as you point out, much less WAL is\n> created and replay is much faster.\n>\n> I also realized I forgot to add 043_vacuum_horizon_floor.pl to\n> src/test/recovery/meson.build in 16. I will post a patch here this\n> weekend which changes the UPDATE to a DELETE in 14-16 (sped up the\n> test by about 20% for me locally) and adds 043_vacuum_horizon_floor.pl\n> to src/test/recovery/meson.build in 16. I'll plan to push it on Monday\n> to save myself any weekend buildfarm embarrassment.\n>\n> As for 17 and master, I'm going to try out John's examples and see if\n> it seems like it will be fast enough to commit to 17/master without\n> lowering the maintenance_work_mem lower bound.\n\n+1. 
Thanks.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 28 Jul 2024 07:44:46 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Thu, Jun 20, 2024 at 7:42 PM Melanie Plageman\n<[email protected]> wrote:\n> In back branches starting with 14, failing to remove tuples older than\n> OldestXmin during pruning caused vacuum to infinitely loop in\n> lazy_scan_prune(), as investigated on this [1] thread.\n\nShouldn't somebody remove the entry that we have for this issue under\nhttps://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items#Older_bugs_affecting_stable_branches?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 31 Jul 2024 16:37:51 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Wed, Jul 31, 2024 at 4:38 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Thu, Jun 20, 2024 at 7:42 PM Melanie Plageman\n> <[email protected]> wrote:\n> > In back branches starting with 14, failing to remove tuples older than\n> > OldestXmin during pruning caused vacuum to infinitely loop in\n> > lazy_scan_prune(), as investigated on this [1] thread.\n>\n> Shouldn't somebody remove the entry that we have for this issue under\n> https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items#Older_bugs_affecting_stable_branches?\n\nThanks for the reminder. Done!\n\n- Melanie\n\n\n", "msg_date": "Wed, 31 Jul 2024 17:46:54 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Sat, Jul 27, 2024 at 4:04 AM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Wed, Jul 24, 2024 at 8:19 AM John Naylor <[email protected]> wrote:\n\n> I ran my test with your patch (on my 64-bit system, non-assert build)\n> and the result is great:\n>\n> master with my test (slightly modified to now use DELETE instead of\n> UPDATE as mentioned upthread)\n> 3.09s\n>\n> master with your patch applied, MWM set to 64kB and 9000 rows instead of 800000\n> 1.06s\n\nGlad to hear it!\n\n> I took a look at the patch, but I can't say I know enough about the\n> memory allocation subsystems and how TIDStore works to meaningfully\n> review it -- nor enough about DSM to comment about the interactions.\n\nI tried using parallel vacuum with 64kB and it succeeded, but needed\nto perform an index scan for every heap page pruned. It's not hard to\nimagine some code moving around so that it doesn't work anymore, but\nsince this is for testing only, it seems a warning comment is enough.\n\n> I suspect 256kB would also be fast enough to avoid my test timing out\n> on the buildfarm, but it is appealing to have a minimum for\n> maintenance_work_mem that is the same as work_mem.\n\nAgreed on both counts:\n\nI came up with a simple ctid expression to make the bitmap arrays larger:\n\ndelete from test where ctid::text like '%,2__)';\n\nWith that, it still takes between 250k and 300k tuples to force a\nsecond index scan with 256kB m_w_m, default fillfactor, and without\nasserts. (It may need a few more pages for 32-bit but not many more)\nThe table is around 1300 pages, where on v16 it's about 900. 
But with\nfewer tuples deleted, the WAL for deletes should be lower. So it might\nbe comparable to v16's test.\n\nIt also turns out that to support 64kB memory settings, we actually\nwouldn't need to change radix tree to lazily create memory contexts --\nat least currently, SlabCreate doesn't allocate a keeper block, so a\nnewly created slab context reports 0 for \"mem_allocated\". So I'm\ninclined to go ahead change the minimum m_w_m on v17 and master to\n64kB. It's the quickest and (I think) most future-proof way to make\nthis test work. Any objections?\n\n\n", "msg_date": "Tue, 6 Aug 2024 21:58:42 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On Tue, Aug 6, 2024 at 9:58 PM John Naylor <[email protected]> wrote:\n>\n> It also turns out that to support 64kB memory settings, we actually\n> wouldn't need to change radix tree to lazily create memory contexts --\n> at least currently, SlabCreate doesn't allocate a keeper block, so a\n> newly created slab context reports 0 for \"mem_allocated\". So I'm\n> inclined to go ahead change the minimum m_w_m on v17 and master to\n> 64kB. It's the quickest and (I think) most future-proof way to make\n> this test work. Any objections?\n\nThis is done. I also changed autovacuum_work_mem just for the sake of\nconsistency. I did some quick math and found that there shouldn't be a\ndifference between 32- and 64-bit platforms for when they exceed 64kB\nin the tid store. That's because exceeding the limit is caused by\nallocating the first block of one of the slab contexts. That\nindependence may not be stable, so I'm thinking of hard-coding the\nblock sizes in master only, but I've left that for another time.\n\n\n", "msg_date": "Sat, 10 Aug 2024 16:01:18 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" }, { "msg_contents": "On 2024-08-10 16:01:18 +0700, John Naylor wrote:\n> On Tue, Aug 6, 2024 at 9:58 PM John Naylor <[email protected]> wrote:\n> >\n> > It also turns out that to support 64kB memory settings, we actually\n> > wouldn't need to change radix tree to lazily create memory contexts --\n> > at least currently, SlabCreate doesn't allocate a keeper block, so a\n> > newly created slab context reports 0 for \"mem_allocated\". So I'm\n> > inclined to go ahead change the minimum m_w_m on v17 and master to\n> > 64kB. It's the quickest and (I think) most future-proof way to make\n> > this test work. Any objections?\n> \n> This is done. I also changed autovacuum_work_mem just for the sake of\n> consistency. I did some quick math and found that there shouldn't be a\n> difference between 32- and 64-bit platforms for when they exceed 64kB\n> in the tid store. That's because exceeding the limit is caused by\n> allocating the first block of one of the slab contexts. That\n> independence may not be stable, so I'm thinking of hard-coding the\n> block sizes in master only, but I've left that for another time.\n\nThanks a lot!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 10 Aug 2024 14:47:57 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum ERRORs out considering freezing dead tuples from before\n OldestXmin" } ]
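For reference, the reproducer John posted upthread, gathered into one runnable sketch. It assumes a server where maintenance_work_mem may be set to 64kB (v17/master after the lower-bound change above; older branches still enforce a 1MB minimum), and the exact number of index scans reported can vary with build options such as MEMORY_CONTEXT_CHECKING:

    -- small table with low fillfactor so every page gets dead items
    DROP TABLE IF EXISTS test;
    CREATE TABLE test (a int) WITH (autovacuum_enabled = false, fillfactor = 10);
    INSERT INTO test (a) SELECT i FROM generate_series(1, 2000) i;
    CREATE INDEX ON test (a);
    UPDATE test SET a = a + 1;

    -- with the lowered minimum, the dead-item TIDStore overflows quickly
    SET maintenance_work_mem = '64kB';
    VACUUM (VERBOSE) test;   -- John saw "index scans: 3" here, i.e. more than one pass

    -- while a vacuum is held open (e.g. by a cursor, as in the TAP test),
    -- another session can watch for the second pass with:
    SELECT index_vacuum_count FROM pg_stat_progress_vacuum;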
[ { "msg_contents": "FYI, looking at the release notes, I see 15 GUC variables added in this\nrelease, and two removed. That 15 number seemed unusually high so I\nthought I would report it.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 20 Jun 2024 20:01:19 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "PG 17 and GUC variables" }, { "msg_contents": "On Thu, Jun 20, 2024 at 08:01:19PM -0400, Bruce Momjian wrote:\n> FYI, looking at the release notes, I see 15 GUC variables added in this\n> release, and two removed. That 15 number seemed unusually high so I\n> thought I would report it.\n\nScanning pg_settings across the two versions, I'm seeing:\n- Removed GUCs between 16 and 17:\ndb_user_namespace\nold_snapshot_threshold\ntrace_recovery_messages\n\n- Added GUCs between 16 and 17:\nallow_alter_system\ncommit_timestamp_buffers\nenable_group_by_reordering\nevent_triggers\nhuge_pages_status\nio_combine_limit\nmax_notify_queue_pages\nmultixact_member_buffers\nmultixact_offset_buffers\nnotify_buffers\nserializable_buffers\nstandby_slot_names\nsubtransaction_buffers\nsummarize_wal\nsync_replication_slots\ntrace_connection_negotiation\ntransaction_buffers\ntransaction_timeout\nwal_summary_keep_time\n\nSo that makes for 3 removed, 19 additions and a +16.\n--\nMichael", "msg_date": "Fri, 21 Jun 2024 11:03:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 17 and GUC variables" }, { "msg_contents": "On Thu, Jun 20, 2024 at 10:03 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jun 20, 2024 at 08:01:19PM -0400, Bruce Momjian wrote:\n> > FYI, looking at the release notes, I see 15 GUC variables added in this\n> > release, and two removed. That 15 number seemed unusually high so I\n> > thought I would report it.\n>\n> Scanning pg_settings across the two versions, I'm seeing:\n> - Removed GUCs between 16 and 17:\n> db_user_namespace\n> old_snapshot_threshold\n> trace_recovery_messages\n>\n> - Added GUCs between 16 and 17:\n> allow_alter_system\n> commit_timestamp_buffers\n> enable_group_by_reordering\n> event_triggers\n> huge_pages_status\n> io_combine_limit\n> max_notify_queue_pages\n> multixact_member_buffers\n> multixact_offset_buffers\n> notify_buffers\n> serializable_buffers\n> standby_slot_names\n> subtransaction_buffers\n> summarize_wal\n> sync_replication_slots\n> trace_connection_negotiation\n> transaction_buffers\n> transaction_timeout\n> wal_summary_keep_time\n>\n\nI was looking at trace_connection_negotiation and ran across this\ncommit removing it's mention from the release notes because it is\nundocumented: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=95cabf542f04b634303f899600ea62fb256a08c2\n\nWhy is the right solution to remove it from the release notes rather\nthan to document it properly? It's not like people won't notice a new\nGUC has popped up in their configs. 
Also, presumaing I'm unerstanding\nit's purpose correctly, ISTM it would fit along side other trace_*\ngucs in https://www.postgresql.org/docs/current/runtime-config-developer.html#RUNTIME-CONFIG-DEVELOPER.\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Sat, 3 Aug 2024 23:29:59 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 17 and GUC variables" }, { "msg_contents": "On 04/08/2024 06:29, Robert Treat wrote:\n> I was looking at trace_connection_negotiation and ran across this\n> commit removing it's mention from the release notes because it is\n> undocumented: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=95cabf542f04b634303f899600ea62fb256a08c2\n> \n> Why is the right solution to remove it from the release notes rather\n> than to document it properly? It's not like people won't notice a new\n> GUC has popped up in their configs. Also, presumaing I'm unerstanding\n> it's purpose correctly, ISTM it would fit along side other trace_*\n> gucs in https://www.postgresql.org/docs/current/runtime-config-developer.html#RUNTIME-CONFIG-DEVELOPER.\n\nNot sure whether it's worth mentioning in release notes, but I think \nyou're right that it should be listed in that docs section. How about \nthe attached description?\n\nI see that there are two more developer-only GUCs that are not listed in \nthe docs:\n\ntrace_syncscan\noptimize_bounded_sort\n\nThere's a comment on them that says \"/* this is undocumented because not \nexposed in a standard build */\", but that seems like a weak reason, \ngiven that many of the other options in that docs section also require \nadditional build-time options. I think we should add those to the docs \ntoo for the sake of completeness.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Sun, 4 Aug 2024 11:45:27 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 17 and GUC variables" }, { "msg_contents": "On Sun, Aug 4, 2024 at 4:45 AM Heikki Linnakangas <[email protected]> wrote:\n> On 04/08/2024 06:29, Robert Treat wrote:\n> > I was looking at trace_connection_negotiation and ran across this\n> > commit removing it's mention from the release notes because it is\n> > undocumented: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=95cabf542f04b634303f899600ea62fb256a08c2\n> >\n> > Why is the right solution to remove it from the release notes rather\n> > than to document it properly? It's not like people won't notice a new\n> > GUC has popped up in their configs. Also, presumaing I'm unerstanding\n> > it's purpose correctly, ISTM it would fit along side other trace_*\n> > gucs in https://www.postgresql.org/docs/current/runtime-config-developer.html#RUNTIME-CONFIG-DEVELOPER.\n>\n> Not sure whether it's worth mentioning in release notes, but I think\n> you're right that it should be listed in that docs section. How about\n> the attached description?\n>\n\nSlightly modified version attached which I think is a little more succinct.\n\n> I see that there are two more developer-only GUCs that are not listed in\n> the docs:\n>\n> trace_syncscan\n> optimize_bounded_sort\n>\n> There's a comment on them that says \"/* this is undocumented because not\n> exposed in a standard build */\", but that seems like a weak reason,\n> given that many of the other options in that docs section also require\n> additional build-time options. 
I think we should add those to the docs\n> too for the sake of completeness.\n>\n\nAgreed.\n\nRobert Treat\nhttps://xzilla.net", "msg_date": "Sun, 4 Aug 2024 10:28:36 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 17 and GUC variables" } ]
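The added/removed GUC lists earlier in this thread come from scanning pg_settings on both versions. The exact query was not shown, so the following is only one possible way to do the comparison; v16_settings and v17_settings are hypothetical staging tables holding the output of the same query run against each server:

    -- run on each server and capture the result:
    SELECT name FROM pg_settings ORDER BY name;

    -- with the two result sets loaded into v16_settings(name) and v17_settings(name):
    SELECT name FROM v17_settings EXCEPT SELECT name FROM v16_settings ORDER BY name;  -- added in 17
    SELECT name FROM v16_settings EXCEPT SELECT name FROM v17_settings ORDER BY name;  -- removed in 17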
[ { "msg_contents": "\nHi,\n\nI relies on some compiler's check to reduce some simple coding issues, I\nuse clang 18.1.6 for now. however \"CFLAGS='-Wall -Werror ' ./configure\"\nwould fail, and if I run ' ./configure' directly, it is OK. I'm not sure\nwhy it happens. More details is below:\n\n(master)> echo $CC\nclang\n(master)> clang --version\nclang version 18.1.6 (https://gitee.com/mirrors/llvm-project.git 1118c2e05e67a36ed8ca250524525cdb66a55256)\nTarget: x86_64-unknown-linux-gnu\nThread model: posix\nInstalledDir: /usr/local/bin\n\n(master)> CFLAGS='-Wall -Werror ' ./configure\n\nchecking for clang option to accept ISO C89... unsupported\nchecking for clang option to accept ISO C99... unsupported\nconfigure: error: C compiler \"clang\" does not support C99\n\nIn config.log, we can see:\n\nconfigure:4433: clang -qlanglvl=extc89 -c -Wall -Werror conftest.c >&5\nclang: error: unknown argument: '-qlanglvl=extc89'\n\nand clang does doesn't support -qlanglvl.\n\nin 'configure', we can see the related code is:\n\n\"\"\"\nfor ac_arg in '' -std=gnu99 -std=c99 -c99 -AC99 -D_STDC_C99= -qlanglvl=extc99\ndo\n CC=\"$ac_save_CC $ac_arg\"\n if ac_fn_c_try_compile \"$LINENO\"; then :\n ac_cv_prog_cc_c99=$ac_arg\nfi\nrm -f core conftest.err conftest.$ac_objext\n test \"x$ac_cv_prog_cc_c99\" != \"xno\" && break\ndone\nrm -f conftest.$ac_ext\nCC=$ac_save_CC\n\n....\n\n# Error out if the compiler does not support C99, as the codebase\n# relies on that.\nif test \"$ac_cv_prog_cc_c99\" = no; then\n as_fn_error $? \"C compiler \\\"$CC\\\" does not support C99\" \"$LINENO\" 5\nfi\n\"\"\"\n\nSo my questions are:\n1. based on the fact clang doesn't support '-qlanglvl' all the time, why\nremoving the CFLAGS matters.\n\n2. If you are using clang as well, what CFLAGS you use and it works?\nfor example: IIRC, clang doesn't report error when a variable is set\nbut no used by default, we have to add some extra flags to make it.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Fri, 21 Jun 2024 02:26:53 +0000", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "configure error when CFLAGS='-Wall -Werror " }, { "msg_contents": "Andy Fan <[email protected]> writes:\n> I relies on some compiler's check to reduce some simple coding issues, I\n> use clang 18.1.6 for now. however \"CFLAGS='-Wall -Werror ' ./configure\"\n> would fail,\n\nNope, you cannot do that: -Werror breaks many of configure's tests.\nSee\n\nhttps://www.postgresql.org/docs/current/install-make.html#CONFIGURE-ENVVARS\n\nfor the standard workaround.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2024 23:18:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure error when CFLAGS='-Wall -Werror" } ]
[ { "msg_contents": "hi.\n-------------\n9.16.2.1.1. Boolean Predicate Check Expressions\nAs an extension to the SQL standard, a PostgreSQL path expression can\nbe a Boolean predicate, whereas the SQL standard allows predicates\nonly within filters. While SQL-standard path expressions return the\nrelevant element(s) of the queried JSON value, predicate check\nexpressions return the single three-valued result of the predicate:\ntrue, false, or unknown. For example, we could write this SQL-standard\nfilter expression:\n\n-------------\nslight inconsistency, \"SQL-standard\" versus \"SQL standard\"\n\"path expression can be a Boolean predicate\", why capital \"Boolean\"?\n\n\"predicate check expressions return the single three-valued result of\nthe predicate: true, false, or unknown.\"\n\"unknown\" is wrong, because `select 'unknown'::jsonb;` will fail.\nhere \"unknown\" should be \"null\"? see jsonb_path_query doc entry also.\n\n\n", "msg_date": "Fri, 21 Jun 2024 10:30:08 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "minor doc issue in 9.16.2.1.1. Boolean Predicate Check Expressions" }, { "msg_contents": "On Thu, Jun 20, 2024 at 7:30 PM jian he <[email protected]> wrote:\n\n> \"predicate check expressions return the single three-valued result of\n>\nthe predicate: true, false, or unknown.\"\n> \"unknown\" is wrong, because `select 'unknown'::jsonb;` will fail.\n> here \"unknown\" should be \"null\"? see jsonb_path_query doc entry also.\n>\n>\nThe syntax for json_exists belies this claim (assuming our docs are\naccurate there). Its \"on error\" options are true/false/unknown.\nAdditionally, the predicate test operator is named \"is unknown\" not \"is\nnull\".\n\nThe result of the predicate test, which is never produced as a value, only\na concept, is indeed \"unknown\" - which then devolves to false when it is\npractically applied to determining whether to output the path item being\ntested. As it does also when used in a parth expression.\n\npostgres=# select json_value('[null]','$[0] < 1');\n json_value\n------------\n f\n\npostgres=# select json_value('[null]','$[0] == null');\n json_value\n------------\n t\n\nNot sure how to peek inside the jsonpath system here though...\n\npostgres=# select json_value('[null]','($[0] < 1) == null');\nERROR: syntax error at or near \"==\" of jsonpath input\nLINE 1: select json_value('[null]','($[0] < 1) == null');\n\nI am curious if that produces true (the unknown is left as null) or false\n(the unknown becomes false immediately).\n\nDavid J.\n\nOn Thu, Jun 20, 2024 at 7:30 PM jian he <[email protected]> wrote:\"predicate check expressions return the single three-valued result of\nthe predicate: true, false, or unknown.\"\n\"unknown\" is wrong, because `select 'unknown'::jsonb;` will fail.\nhere \"unknown\" should be \"null\"? see jsonb_path_query doc entry also.The syntax for json_exists belies this claim (assuming our docs are accurate there).  Its \"on error\" options are true/false/unknown.  Additionally, the predicate test operator is named \"is unknown\" not \"is null\".The result of the predicate test, which is never produced as a value, only a concept, is indeed \"unknown\" - which then devolves to false when it is practically applied to determining whether to output the path item being tested.  
As it does also when used in a parth expression.postgres=# select json_value('[null]','$[0] < 1'); json_value ------------ fpostgres=# select json_value('[null]','$[0] == null'); json_value ------------ tNot sure how to peek inside the jsonpath system here though...postgres=# select json_value('[null]','($[0] < 1) == null');ERROR:  syntax error at or near \"==\" of jsonpath inputLINE 1: select json_value('[null]','($[0] < 1) == null');I am curious if that produces true (the unknown is left as null) or false (the unknown becomes false immediately).       David J.", "msg_date": "Thu, 20 Jun 2024 20:11:02 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: minor doc issue in 9.16.2.1.1. Boolean Predicate Check\n Expressions" }, { "msg_contents": "On Fri, Jun 21, 2024 at 11:11 AM David G. Johnston\n<[email protected]> wrote:\n>\n> On Thu, Jun 20, 2024 at 7:30 PM jian he <[email protected]> wrote:\n>>\n>> \"predicate check expressions return the single three-valued result of\n>>\n>> the predicate: true, false, or unknown.\"\n>> \"unknown\" is wrong, because `select 'unknown'::jsonb;` will fail.\n>> here \"unknown\" should be \"null\"? see jsonb_path_query doc entry also.\n>>\n>\n> The syntax for json_exists belies this claim (assuming our docs are accurate there). Its \"on error\" options are true/false/unknown. Additionally, the predicate test operator is named \"is unknown\" not \"is null\".\n>\n> The result of the predicate test, which is never produced as a value, only a concept, is indeed \"unknown\" - which then devolves to false when it is practically applied to determining whether to output the path item being tested. As it does also when used in a parth expression.\n>\n\nin [1] says\nThe similar predicate check expression simply returns true, indicating\nthat a match exists:\n\n=> select jsonb_path_query(:'json', '$.track.segments[*].HR > 130');\n jsonb_path_query\n------------------\n true\n\n\n----------------------------------------\nbut in this example\nselect jsonb_path_query('1', '$ == \"1\"');\nreturn null.\n\nI guess here, the match evaluation cannot be applied, thus returning null.\n\n\nSo summary:\nif the boolean predicate check expressions are applicable, return true or false.\n\nthe boolean predicate check expressions are not applicable, return null.\nexample: select jsonb_path_query('1', '$ == \"a\"');\n\n\nbut I found following two examples returning different results,\ni think they should return the same value.\nselect json_value('1', '$ == \"1\"' returning jsonb error on error);\nselect json_query('1', '$ == \"1\"' returning jsonb error on error);\n\n[1] https://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-CHECK-EXPRESSIONS\n\n\n", "msg_date": "Fri, 21 Jun 2024 16:53:55 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: minor doc issue in 9.16.2.1.1. Boolean Predicate Check\n Expressions" } ]
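A small sketch of the distinction discussed above, using jsonb_path_query directly: the first query is a SQL-standard filter and returns the matching element, while the others are predicate check expressions that return the predicate's result instead (the last one reuses the example from this thread, where the comparison cannot be evaluated and SQL NULL comes back):

    SELECT jsonb_path_query('[1, 2, 3]', '$[*] ? (@ > 2)');  -- 3, the matching element
    SELECT jsonb_path_query('[1, 2, 3]', '$[*] > 2');        -- true
    SELECT jsonb_path_query('[1, 2, 3]', '$[*] > 5');        -- false
    SELECT jsonb_path_query('1', '$ == "a"');                 -- null, per the example above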
[ { "msg_contents": "Hi,\n\n\n\nRegarding the multicolumn B-Tree Index, I'm considering\n\nif we can enhance the EXPLAIN output. There have been requests\n\nfor this from our customer.\n\n\n\nAs the document says, we need to use it carefully.\n\n> The exact rule is that equality constraints on leading columns,\n\n> plus any inequality constraints on the first column that does\n\n> not have an equality constraint, will be used to limit the portion\n\n> of the index that is scanned.\n\nhttps://www.postgresql.org/docs/17/indexes-multicolumn.html\n\n\n\nHowever, it's not easy to confirm whether multi-column indexes are\n\nbeing used efficiently because we need to compare the index\n\ndefinitions and query conditions individually.\n\n\n\nFor instance, just by looking at the following EXPLAIN result, we\n\ncan't determine whether the index is being used efficiently or not\n\nat a glance. Indeed, the current index definition is not suitable\n\nfor the query, so the cost is significantly high.\n\n\n\n =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 = 101;\n\n QUERY PLAN\n\n ----------------------------------------------------------------------------------------------------------------------------\n\n Index Scan using test_idx on public.test (cost=0.42..12754.76 rows=1 width=18) (actual time=0.033..54.115 rows=1 loops=1)\n\n Output: id1, id2, id3, value\n\n Index Cond: ((test.id1 = 1) AND (test.id3 = 101)) -- Is it efficient or not?\n\n Planning Time: 0.145 ms\n\n Execution Time: 54.150 ms\n\n (6 rows)\n\n\n\nSo, I'd like to improve the output to be more user-friendly.\n\n\n\n\n\n# Idea\n\n\n\nI'm considering adding new information, \"Index Bound Cond\", which specifies\n\nwhat quals will be used for the boundary condition of the B-Tree index.\n\n(Since this is just my current idea, I'm open to changing the output.)\n\n\n\nHere is an example output.\n\n\n\n-- prepare for the test\n\nCREATE TABLE test (id1 int, id2 int, id3 int, value varchar(32));\n\nCREATE INDEX test_idx ON test(id1, id2, id3); -- multicolumn B-Tree index\n\nINSERT INTO test (SELECT i % 2, i, i, 'hello' FROM generate_series(1,1000000) s(i));\n\n ANALYZE;\n\n\n\n-- explain\n\n\n\n=# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id2 = 101;\n\n QUERY PLAN\n\n -----------------------------------------------------------------------------------------------------------------------\n\n Index Scan using test_idx on public.test (cost=0.42..8.45 rows=1 width=18) (actual time=0.046..0.047 rows=1 loops=1)\n\n Output: id1, id2, id3, value\n\n Index Cond: ((test.id1 = 1) AND (test.id2 = 101))\n\n Index Bound Cond: ((test.id1 = 1) AND (test.id2 = 101)) -- The B-Tree index is used efficiently.\n\n Planning Time: 0.124 ms\n\n Execution Time: 0.076 ms\n\n(6 rows)\n\n =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 = 101;\n\n QUERY PLAN\n\n ----------------------------------------------------------------------------------------------------------------------------\n\n Index Scan using test_idx on public.test (cost=0.42..12754.76 rows=1 width=18) (actual time=0.033..54.115 rows=1 loops=1)\n\n Output: id1, id2, id3, value\n\n Index Cond: ((test.id1 = 1) AND (test.id3 = 101))\n\n Index Bound Cond: (test.id1 = 1) -- The B-tree index is *not* used efficiently\n\n -- compared to the previous execution conditions,\n\n -- because it differs from \"Index Cond\".\n\n Planning Time: 0.145 ms\n\n Execution Time: 54.150 ms\n\n(6 rows)\n\n\n\n\n\n# PoC patch\n\n\n\nThe PoC patch makes the following 
changes:\n\n\n\n* Adds a new variable related to bound conditions\n\n to IndexPath, IndexScan, IndexOnlyScan, and BitmapIndexScan\n\n* Adds quals for bound conditions to IndexPath when estimating cost, since\n\n the B-Tree index considers the boundary condition in btcostestimate()\n\n* Adds quals for bound conditions to the output of EXPLAIN\n\n\n\n\n\n\n\nThank you for reading my suggestion. Please feel free to comment.\n\n\n\n* Is this feature useful? Is there a possibility it will be accepted?\n\n* Are there any other ideas for determining if multicolumn indexes are\n\nbeing used efficiently? Although I considered calculating the efficiency using\n\npg_statio_all_indexes.idx_blks_read and pg_stat_all_indexes.idx_tup_read,\n\n I believe improving the EXPLAIN output is better because it can be output\n\nper query and it's more user-friendly.\n\n* Is \"Index Bound Cond\" the proper term?I also considered changing\n\n\"Index Cond\" to only show quals for the boundary condition and adding\n\na new term \"Index Filter\".\n\n* Would it be better to add new interfaces to Index AM? Is there any case\n\n to output the EXPLAIN for each index context? At least, I think it's worth\n\n considering whether it's good for amcostestimate() to modify the\n\n IndexPath directly as the PoC patch does.\n\n\n\n\n\nRegards,\n\n--\n\nMasahiro Ikeda\n\nNTT DATA CORPORATION", "msg_date": "Fri, 21 Jun 2024 07:12:25 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Fri, Jun 21, 2024 at 12:42 PM <[email protected]> wrote:\n\n> Hi,\n>\n>\n>\n> Regarding the multicolumn B-Tree Index, I'm considering\n>\n> if we can enhance the EXPLAIN output. There have been requests\n>\n> for this from our customer.\n>\n>\n>\n> As the document says, we need to use it carefully.\n>\n> > The exact rule is that equality constraints on leading columns,\n>\n> > plus any inequality constraints on the first column that does\n>\n> > not have an equality constraint, will be used to limit the portion\n>\n> > of the index that is scanned.\n>\n> *https://www.postgresql.org/docs/17/indexes-multicolumn.html\n> <https://www.postgresql.org/docs/17/indexes-multicolumn.html>*\n>\n>\n>\n> However, it's not easy to confirm whether multi-column indexes are\n>\n> being used efficiently because we need to compare the index\n>\n> definitions and query conditions individually.\n>\n>\n>\n> For instance, just by looking at the following EXPLAIN result, we\n>\n> can't determine whether the index is being used efficiently or not\n>\n> at a glance. 
Indeed, the current index definition is not suitable\n>\n> for the query, so the cost is significantly high.\n>\n>\n>\n> =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 =\n> 101;\n>\n> QUERY\n> PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------\n>\n> Index Scan using test_idx on public.test (cost=0.42..12754.76 rows=1\n> width=18) (actual time=0.033..54.115 rows=1 loops=1)\n>\n> Output: id1, id2, id3, value\n>\n> Index Cond: ((test.id1 = 1) AND (test.id3 = 101)) -- Is it\n> efficient or not?\n>\n> Planning Time: 0.145 ms\n>\n> Execution Time: 54.150 ms\n>\n> (6 rows)\n>\n>\n>\n> So, I'd like to improve the output to be more user-friendly.\n>\n>\n>\n>\n>\n> # Idea\n>\n>\n>\n> I'm considering adding new information, \"Index Bound Cond\", which specifies\n>\n> what quals will be used for the boundary condition of the B-Tree index.\n>\n> (Since this is just my current idea, I'm open to changing the output.)\n>\n>\n>\n> Here is an example output.\n>\n>\n>\n> -- prepare for the test\n>\n> CREATE TABLE test (id1 int, id2 int, id3 int, value varchar(32));\n>\n> CREATE INDEX test_idx ON test(id1, id2, id3); --\n> multicolumn B-Tree index\n>\n> INSERT INTO test (SELECT i % 2, i, i, 'hello' FROM\n> generate_series(1,1000000) s(i));\n>\n> ANALYZE;\n>\n>\n>\n> -- explain\n>\n>\n>\n> =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id2 =\n> 101;\n>\n> QUERY\n> PLAN\n>\n>\n> -----------------------------------------------------------------------------------------------------------------------\n>\n> Index Scan using test_idx on public.test (cost=0.42..8.45 rows=1\n> width=18) (actual time=0.046..0.047 rows=1 loops=1)\n>\n> Output: id1, id2, id3, value\n>\n> Index Cond: ((test.id1 = 1) AND (test.id2 = 101))\n>\n> Index Bound Cond: ((test.id1 = 1) AND (test.id2 = 101)) -- The B-Tree\n> index is used efficiently.\n>\n> Planning Time: 0.124 ms\n>\n> Execution Time: 0.076 ms\n>\n> (6 rows)\n>\n> =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 =\n> 101;\n>\n> QUERY\n> PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------\n>\n> Index Scan using test_idx on public.test (cost=0.42..12754.76 rows=1\n> width=18) (actual time=0.033..54.115 rows=1 loops=1)\n>\n> Output: id1, id2, id3, value\n>\n> Index Cond: ((test.id1 = 1) AND (test.id3 = 101))\n>\n> Index Bound Cond: (test.id1 = 1) -- The B-tree\n> index is *not* used efficiently\n>\n> -- compared to\n> the previous execution conditions,\n>\n> -- because it\n> differs from \"Index Cond\".\n>\n> Planning Time: 0.145 ms\n>\n> Execution Time: 54.150 ms\n>\n> (6 rows)\n>\n>\n>\n>\n>\n> # PoC patch\n>\n>\n>\n> The PoC patch makes the following changes:\n>\n>\n>\n> * Adds a new variable related to bound conditions\n>\n> to IndexPath, IndexScan, IndexOnlyScan, and BitmapIndexScan\n>\n> * Adds quals for bound conditions to IndexPath when estimating cost, since\n>\n> the B-Tree index considers the boundary condition in btcostestimate()\n>\n> * Adds quals for bound conditions to the output of EXPLAIN\n>\n>\n>\n>\n>\n>\n>\n> Thank you for reading my suggestion. Please feel free to comment.\n>\n>\n>\n> * Is this feature useful? Is there a possibility it will be accepted?\n>\n> * Are there any other ideas for determining if multicolumn indexes are\n>\n> being used efficiently? 
Although I considered calculating the efficiency\n> using\n>\n> pg_statio_all_indexes.idx_blks_read and pg_stat_all_indexes.idx_tup_read,\n>\n> I believe improving the EXPLAIN output is better because it can be output\n>\n> per query and it's more user-friendly.\n>\n> * Is \"Index Bound Cond\" the proper term?I also considered changing\n>\n> \"Index Cond\" to only show quals for the boundary condition and adding\n>\n> a new term \"Index Filter\".\n>\n> * Would it be better to add new interfaces to Index AM? Is there any case\n>\n> to output the EXPLAIN for each index context? At least, I think it's\n> worth\n>\n> considering whether it's good for amcostestimate() to modify the\n>\n> IndexPath directly as the PoC patch does.\n>\n>\n>\n\nI am unable to decide whether reporting the bound quals is just enough to\ndecide the efficiency of index without knowing the difference in the number\nof index tuples selectivity and heap tuple selectivity. The difference\nseems to be a better indicator of index efficiency whereas the bound quals\nwill help debug the in-efficiency, if any.\n\nAlso, do we want to report bound quals even if they are the same as index\nconditions or just when they are different?\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, Jun 21, 2024 at 12:42 PM <[email protected]> wrote:\n\n\nHi,\n\n \n\nRegarding the multicolumn B-Tree Index, I'm considering\n\nif we can enhance the EXPLAIN output. There have been requests\n\nfor this from our customer.\n\n \n\nAs the document says, we need to use it carefully.\n\n> The exact rule is that equality constraints on leading columns,\n\n> plus any inequality constraints on the first column that does\n\n> not have an equality constraint, will be used to limit the portion\n\n> of the index that is scanned.\n\nhttps://www.postgresql.org/docs/17/indexes-multicolumn.html\n\n \n\nHowever, it's not easy to confirm whether multi-column indexes are\n\nbeing used efficiently because we need to compare the index\n\ndefinitions and query conditions individually.\n\n \n\nFor instance, just by looking at the following EXPLAIN result, we\n\ncan't determine whether the index is being used efficiently or not\n\nat a glance. 
Indeed, the current index definition is not suitable\n\nfor the query, so the cost is significantly high.\n\n \n\n  =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 = 101;\n\n                                                           QUERY PLAN                                                        \n\n  ----------------------------------------------------------------------------------------------------------------------------\n\n   Index Scan using test_idx on public.test  (cost=0.42..12754.76 rows=1 width=18) (actual time=0.033..54.115 rows=1 loops=1)\n\n     Output: id1, id2, id3, value\n\n     Index Cond: ((test.id1 = 1) AND (test.id3 = 101))    -- Is it efficient or not?\n\n   Planning Time: 0.145 ms\n\n   Execution Time: 54.150 ms\n\n  (6 rows)\n\n \n\nSo, I'd like to improve the output to be more user-friendly.\n\n \n\n \n\n# Idea\n\n \n\nI'm considering adding new information, \"Index Bound Cond\", which specifies\n\nwhat quals will be used for the boundary condition of the B-Tree index.\n\n(Since this is just my current idea, I'm open to changing the output.)\n\n \n\nHere is an example output.\n\n \n\n-- prepare for the test\n\nCREATE TABLE test (id1 int, id2 int, id3 int, value varchar(32));\n\nCREATE INDEX test_idx ON test(id1, id2, id3);                -- multicolumn B-Tree index\n\nINSERT INTO test (SELECT i % 2, i, i, 'hello' FROM generate_series(1,1000000) s(i));\n\n ANALYZE;\n\n \n\n-- explain\n\n \n\n=# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id2 = 101;\n\n                                                       QUERY PLAN                                                      \n\n -----------------------------------------------------------------------------------------------------------------------\n\n  Index Scan using test_idx on public.test  (cost=0.42..8.45 rows=1 width=18) (actual time=0.046..0.047 rows=1 loops=1)\n\n    Output: id1, id2, id3, value\n\n    Index Cond: ((test.id1 = 1) AND (test.id2 = 101))\n\n    Index Bound Cond: ((test.id1 = 1) AND (test.id2 = 101))  -- The B-Tree index is used efficiently.\n\n  Planning Time: 0.124 ms\n\n  Execution Time: 0.076 ms\n\n(6 rows)\n\n =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 = 101;\n\n                                                          QUERY PLAN                                                        \n\n ----------------------------------------------------------------------------------------------------------------------------\n\n  Index Scan using test_idx on public.test  (cost=0.42..12754.76 rows=1 width=18) (actual time=0.033..54.115 rows=1 loops=1)\n\n    Output: id1, id2, id3, value\n\n    Index Cond: ((test.id1 = 1) AND (test.id3 = 101))\n\n    Index Bound Cond: (test.id1 = 1)                        -- The B-tree index is *not* used efficiently\n\n                                                          -- compared to the previous execution conditions,\n\n                                                          -- because it differs from \"Index Cond\".\n\n  Planning Time: 0.145 ms\n\n  Execution Time: 54.150 ms\n\n(6 rows)\n\n \n\n \n\n# PoC patch\n\n \n\nThe PoC patch makes the following changes:\n\n \n\n* Adds a new variable related to bound conditions\n\n  to IndexPath, IndexScan, IndexOnlyScan, and BitmapIndexScan\n\n* Adds quals for bound conditions to IndexPath when estimating cost, since\n\n  the B-Tree index considers the boundary condition in btcostestimate()\n\n* Adds quals for bound conditions to the output of EXPLAIN\n\n 
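(As a rough way to see the inefficiency this proposal targets with today's tooling, one can compare buffer usage on the test table and test_idx defined above -- a sketch only, the exact numbers will vary by system:

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM test WHERE id1 = 1 AND id2 = 101;  -- both quals bound the index scan
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM test WHERE id1 = 1 AND id3 = 101;  -- only id1 = 1 bounds the scan

The second plan reads far more index buffers, because every index tuple with id1 = 1 has to be visited and filtered on id3. That buffer-count difference is the indirect signal the proposed "Index Bound Cond" line would make explicit in the plan itself.)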
\n\n \n\n \n\nThank you for reading my suggestion. Please feel free to comment.\n\n \n\n* Is this feature useful? Is there a possibility it will be accepted?\n\n* Are there any other ideas for determining if multicolumn indexes are\n\nbeing used efficiently? Although I considered calculating the efficiency using\n\npg_statio_all_indexes.idx_blks_read and pg_stat_all_indexes.idx_tup_read,\n\n I believe improving the EXPLAIN output is better because it can be output\n\nper query and it's more user-friendly.\n\n* Is \"Index Bound Cond\" the proper term?I also considered changing\n\n\"Index Cond\" to only show quals for the boundary condition and adding\n\na new term \"Index Filter\".\n\n* Would it be better to add new interfaces to Index AM? Is there any case\n\n  to output the EXPLAIN for each index context? At least, I think it's worth\n\n  considering whether it's good for amcostestimate() to modify the\n\n  IndexPath directly as the PoC patch does.\n\n I am unable to decide whether reporting the bound quals is just enough to decide the efficiency of index without knowing the difference in the number of index tuples selectivity and heap tuple selectivity. The difference seems to be a better indicator of index efficiency whereas the bound quals will help debug the in-efficiency, if any. Also, do we want to report bound quals even if they are the same as index conditions or just when they are different?-- Best Wishes,Ashutosh Bapat", "msg_date": "Fri, 21 Jun 2024 17:59:12 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Fri, 21 Jun 2024 07:12:25 +0000\n<[email protected]> wrote:\n\n> * Is this feature useful? Is there a possibility it will be accepted?\n\nI think adding such information to EXPLAIN outputs is useful because it\nwill help users confirm the effect of a multicolumn index on a certain query\nand decide to whether leave, drop, or recreate the index, and so on.\n \n> * Are there any other ideas for determining if multicolumn indexes are\n> \n> being used efficiently? Although I considered calculating the efficiency using\n> \n> pg_statio_all_indexes.idx_blks_read and pg_stat_all_indexes.idx_tup_read,\n> \n> I believe improving the EXPLAIN output is better because it can be output\n> \n> per query and it's more user-friendly.\n\nIt seems for me improving EXPLAIN is a natural way to show information\non query optimization like index scans.\n \n> * Is \"Index Bound Cond\" the proper term?I also considered changing\n> \n> \"Index Cond\" to only show quals for the boundary condition and adding\n> \n> a new term \"Index Filter\".\n\n\"Index Bound Cond\" seems not intuitive for me because I could not find\ndescription explaining what this means from the documentation. I like\n\"Index Filter\" that implies the index has to be scanned.\n \n> * Would it be better to add new interfaces to Index AM? Is there any case\n> \n> to output the EXPLAIN for each index context? 
At least, I think it's worth\n> \n> considering whether it's good for amcostestimate() to modify the\n> \n> IndexPath directly as the PoC patch does.\n\nI am not sure it is the best way to modify IndexPath in amcostestimate(), but\nI don't have better ideas for now.\n\nRegards,\nYugo Nagata\n\n> \n> \n> \n> \n> Regards,\n> \n> --\n> \n> Masahiro Ikeda\n> \n> NTT DATA CORPORATION\n> \n> \n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Sat, 22 Jun 2024 00:31:09 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "> I am unable to decide whether reporting the bound quals is just enough to decide the efficiency of index without knowing the difference in the number of index tuples selectivity and heap tuple selectivity. The difference seems to be a better indicator of index efficiency whereas the bound quals will help debug the in-efficiency, if any. \r\n> Also, do we want to report bound quals even if they are the same as index conditions or just when they are different?\r\n\r\nThank you for your comment. After receiving your comment, I thought it would be better to also report information that would make the difference in selectivity understandable. One idea I had is to output the number of index tuples inefficiently extracted, like “Rows Removed by Filter”. Users can check the selectivity and efficiency by looking at the number.\r\n\r\nAlso, I thought it would be better to change the way bound quals are reported to align with the \"Filter\". I think it would be better to modify it so that it does not output when the bound quals are the same as the index conditions.\r\n\r\nIn my local PoC patch, I have modified the output as follows, what do you think?\r\n\r\n=# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id2 = 101;\r\n QUERY PLAN \r\n-------------------------------------------------------------------------------------------------------------------------\r\n Index Scan using test_idx on ikedamsh.test (cost=0.42..8.45 rows=1 width=18) (actual time=0.082..0.086 rows=1 loops=1)\r\n Output: id1, id2, id3, value\r\n Index Cond: ((test.id1 = 1) AND (test.id2 = 101)) -- If it’s efficient, the output won’t change.\r\n Planning Time: 5.088 ms\r\n Execution Time: 0.162 ms\r\n(5 rows)\r\n\r\n=# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 = 101;\r\n QUERY PLAN \r\n-------------------------------------------------------------------------------------------------------------------------------\r\n Index Scan using test_idx on ikedamsh.test (cost=0.42..12630.10 rows=1 width=18) (actual time=0.175..279.819 rows=1 loops=1)\r\n Output: id1, id2, id3, value\r\n Index Cond: (test.id1 = 1) -- Change the output. Show only the bound quals. \r\n Index Filter: (test.id3 = 101) -- New. Output quals which are not used as the bound quals\r\n Rows Removed by Index Filter: 499999 -- New. Output when ANALYZE option is specified\r\n Planning Time: 0.354 ms\r\n Execution Time: 279.908 ms\r\n(7 rows)\r\n\r\nRegards,\r\n--\r\nMasahiro Ikeda\r\nNTT DATA CORPORATION\r\n", "msg_date": "Mon, 24 Jun 2024 02:38:32 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "> > * Is this feature useful? 
Is there a possibility it will be accepted?\n> \n> I think adding such information to EXPLAIN outputs is useful because it will help users\n> confirm the effect of a multicolumn index on a certain query and decide to whether\n> leave, drop, or recreate the index, and so on.\n\nThank you for your comments and for empathizing with the utility of the approach.\n\n> > * Are there any other ideas for determining if multicolumn indexes are\n> >\n> > being used efficiently? Although I considered calculating the\n> > efficiency using\n> >\n> > pg_statio_all_indexes.idx_blks_read and\n> > pg_stat_all_indexes.idx_tup_read,\n> >\n> > I believe improving the EXPLAIN output is better because it can be\n> > output\n> >\n> > per query and it's more user-friendly.\n> \n> It seems for me improving EXPLAIN is a natural way to show information on query\n> optimization like index scans.\n\nOK, I'll proceed with the way.\n\n> > * Is \"Index Bound Cond\" the proper term?I also considered changing\n> >\n> > \"Index Cond\" to only show quals for the boundary condition and adding\n> >\n> > a new term \"Index Filter\".\n> \n> \"Index Bound Cond\" seems not intuitive for me because I could not find description\n> explaining what this means from the documentation. I like \"Index Filter\" that implies the\n> index has to be scanned.\n\nOK, I think you are right. Even at this point, there are things like ‘Filter’ and\n‘Rows Removed by Filter’, so it seems natural to align with them. I described a\nnew output example in the previous email, how about that?\n\n> > * Would it be better to add new interfaces to Index AM? Is there any\n> > case\n> >\n> > to output the EXPLAIN for each index context? At least, I think it's\n> > worth\n> >\n> > considering whether it's good for amcostestimate() to modify the\n> >\n> > IndexPath directly as the PoC patch does.\n> \n> I am not sure it is the best way to modify IndexPath in amcostestimate(), but I don't\n> have better ideas for now.\n\nOK, I’ll consider what the best way to change is. 
In addition, if we add\n\"Rows Removed by Index Filter\", we might need to consider a method to receive the\nnumber of filtered tuples at execution time from Index AM.\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 24 Jun 2024 02:52:10 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Mon, 24 Jun 2024 at 04:38, <[email protected]> wrote:\n>\n> In my local PoC patch, I have modified the output as follows, what do you think?\n>\n> =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id2 = 101;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_idx on ikedamsh.test (cost=0.42..8.45 rows=1 width=18) (actual time=0.082..0.086 rows=1 loops=1)\n> Output: id1, id2, id3, value\n> Index Cond: ((test.id1 = 1) AND (test.id2 = 101)) -- If it’s efficient, the output won’t change.\n> Planning Time: 5.088 ms\n> Execution Time: 0.162 ms\n> (5 rows)\n>\n> =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 = 101;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_idx on ikedamsh.test (cost=0.42..12630.10 rows=1 width=18) (actual time=0.175..279.819 rows=1 loops=1)\n> Output: id1, id2, id3, value\n> Index Cond: (test.id1 = 1) -- Change the output. Show only the bound quals.\n> Index Filter: (test.id3 = 101) -- New. Output quals which are not used as the bound quals\n\nI think this is too easy to confuse with the pre-existing 'Filter'\ncondition, which you'll find on indexes with INCLUDE-d columns or\nfilters on non-index columns.\nFurthermore, I think this is probably not helpful (maybe even harmful)\nfor index types like GIN and BRIN, where index searchkey order is\nmostly irrelevant to the index shape and performance.\nFinally, does this change the index AM API? Does this add another\nscankey argument to ->amrescan?\n\n> Rows Removed by Index Filter: 499999 -- New. Output when ANALYZE option is specified\n\nSeparate from the changes to Index Cond/Index Filter output changes I\nthink this can be useful output, though I'd probably let the AM\nspecify what kind of filter data to display.\nE.g. BRIN may well want to display how many ranges matched the\npredicate, vs how many ranges were unsummarized and thus returned; two\nconditions which aren't as easy to differentiate but can be important\ndebugging query performance.\n\n> Planning Time: 0.354 ms\n> Execution Time: 279.908 ms\n> (7 rows)\n\nWas this a test against the same dataset as the one you'd posted your\nmeasurements of your first patchset with? The execution time seems to\nhave slown down quite significantly, so if the testset is the same\nthen this doesn't bode well for your patchset.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 24 Jun 2024 11:11:00 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "+1 for the idea.\n\nOn Mon, 24 Jun 2024 at 11:11, Matthias van de Meent\n<[email protected]> wrote:\n> I think this is too easy to confuse with the pre-existing 'Filter'\n> condition, which you'll find on indexes with INCLUDE-d columns or\n> filters on non-index columns.\n\nWhy not combine them? 
And both call them Filter? In a sense this\nfiltering acts very similar to INCLUDE based filtering (for btrees at\nleast). Although I might be wrong about that, because when I try to\nconfirm the same perf using the following script I do get quite\ndifferent timings (maybe you have an idea what's going on here). But\neven if it does mean something slightly different perf wise, I think\nusing Filter for both is unlikely to confuse anyone. Since, while\nallowed, it seems extremely unlikely in practice that someone will use\nthe same column as part of the indexed columns and as part of the\nINCLUDE-d columns (why would you store the same info twice).\n\nCREATE TABLE test (id1 int, id2 int, id3 int, value varchar(32));\nINSERT INTO test (SELECT i % 10, i % 1000, i, 'hello' FROM\ngenerate_series(1,1000000) s(i));\nvacuum freeze test;\nCREATE INDEX test_idx_include ON test(id1, id2) INCLUDE (id3);\nANALYZE test;\nEXPLAIN (VERBOSE, ANALYZE, BUFFERS) SELECT id1, id3 FROM test WHERE\nid1 = 1 AND id3 = 101;\nCREATE INDEX test_idx ON test(id1, id2, id3);\nANALYZE test;\nEXPLAIN (VERBOSE, ANALYZE, BUFFERS) SELECT id1, id3 FROM test WHERE\nid1 = 1 AND id3 = 101;\n\n QUERY PLAN\n───────────────────────────────────────\n Index Only Scan using test_idx_include on public.test\n(cost=0.42..3557.09 rows=1 width=8) (actual time=0.708..6.639 rows=1\nloops=1)\n Output: id1, id3\n Index Cond: (test.id1 = 1)\n Filter: (test.id3 = 101)\n Rows Removed by Filter: 99999\n Heap Fetches: 0\n Buffers: shared hit=1 read=386\n Query Identifier: 471139784017641093\n Planning:\n Buffers: shared hit=8 read=1\n Planning Time: 0.091 ms\n Execution Time: 6.656 ms\n(12 rows)\n\nTime: 7.139 ms\n QUERY PLAN\n─────────────────────────────────────\n Index Only Scan using test_idx on public.test (cost=0.42..2591.77\nrows=1 width=8) (actual time=0.238..2.110 rows=1 loops=1)\n Output: id1, id3\n Index Cond: ((test.id1 = 1) AND (test.id3 = 101))\n Heap Fetches: 0\n Buffers: shared hit=1 read=386\n Query Identifier: 471139784017641093\n Planning:\n Buffers: shared hit=10 read=1\n Planning Time: 0.129 ms\n Execution Time: 2.128 ms\n(10 rows)\n\nTime: 2.645 ms\n\n\n", "msg_date": "Mon, 24 Jun 2024 11:57:55 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Mon, 24 Jun 2024 at 11:58, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> +1 for the idea.\n>\n> On Mon, 24 Jun 2024 at 11:11, Matthias van de Meent\n> <[email protected]> wrote:\n> > I think this is too easy to confuse with the pre-existing 'Filter'\n> > condition, which you'll find on indexes with INCLUDE-d columns or\n> > filters on non-index columns.\n>\n> Why not combine them? And both call them Filter? In a sense this\n> filtering acts very similar to INCLUDE based filtering (for btrees at\n> least).\n\nIt does not really behave similar: index scan keys (such as the\nid3=101 scankey) don't require visibility checks in the btree code,\nwhile the Filter condition _does_ require a visibility check, and\ndelegates the check to the table AM if the scan isn't Index-Only, or\nif the VM didn't show all-visible during the check.\n\nFurthermore, the index could use the scankey to improve the number of\nkeys to scan using \"skip scans\"; by realising during a forward scan\nthat if you've reached tuple (1, 2, 3) and looking for (1, _, 1) you\ncan skip forward to (1, 3, _), rather than having to go through tuples\n(1, 2, 4), (1, 2, 5), ... (1, 2, n). 
This is not possible for\nINCLUDE-d columns, because their datatypes and structure are opaque to\nthe index AM; the AM cannot assume anything about or do anything with\nthose values.\n\n> Although I might be wrong about that, because when I try to\n> confirm the same perf using the following script I do get quite\n> different timings (maybe you have an idea what's going on here). But\n> even if it does mean something slightly different perf wise, I think\n> using Filter for both is unlikely to confuse anyone.\n\nI don't want A to to be the plan, while showing B' to the user, as the\nperformance picture for the two may be completely different. And, as I\nmentioned upthread, the differences between AMs in the (lack of)\nmeaning in index column order also makes it quite wrong to generally\nseparate prefixes equalities from the rest of the keys.\n\n> Since, while\n> allowed, it seems extremely unlikely in practice that someone will use\n> the same column as part of the indexed columns and as part of the\n> INCLUDE-d columns (why would you store the same info twice).\n\nYeah, people don't generally include the same index column more than\nonce in the same index.\n\n> CREATE INDEX test_idx_include ON test(id1, id2) INCLUDE (id3);\n> CREATE INDEX test_idx ON test(id1, id2, id3);\n>\n> QUERY PLAN\n> ───────────────────────────────────────\n> Index Only Scan using test_idx_include on public.test\n[...]\n> Time: 7.139 ms\n> QUERY PLAN\n> ─────────────────────────────────────\n> Index Only Scan using test_idx on public.test (cost=0.42..2591.77\n[...]\n> Time: 2.645 ms\n\nAs you can see, there's a huge difference in performance. Putting both\nnon-bound and \"normal\" filter clauses in the same Filter clause will\nmake it more difficult to explain performance issues based on only the\nexplain output.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 24 Jun 2024 13:02:26 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Mon, 24 Jun 2024 at 13:02, Matthias van de Meent\n<[email protected]> wrote:\n> It does not really behave similar: index scan keys (such as the\n> id3=101 scankey) don't require visibility checks in the btree code,\n> while the Filter condition _does_ require a visibility check, and\n> delegates the check to the table AM if the scan isn't Index-Only, or\n> if the VM didn't show all-visible during the check.\n\nAny chance you could point me in the right direction for the\ncode/docs/comment about this? I'd like to learn a bit more about why\nthat is the case, because I didn't realize visibility checks worked\ndifferently for index scan keys and Filter keys.\n\n> Furthermore, the index could use the scankey to improve the number of\n> keys to scan using \"skip scans\"; by realising during a forward scan\n> that if you've reached tuple (1, 2, 3) and looking for (1, _, 1) you\n> can skip forward to (1, 3, _), rather than having to go through tuples\n> (1, 2, 4), (1, 2, 5), ... (1, 2, n). This is not possible for\n> INCLUDE-d columns, because their datatypes and structure are opaque to\n> the index AM; the AM cannot assume anything about or do anything with\n> those values.\n\nDoes Postgres actually support this currently? I thought skip scans\nwere not available (yet).\n\n> I don't want A to to be the plan, while showing B' to the user, as the\n> performance picture for the two may be completely different. 
And, as I\n> mentioned upthread, the differences between AMs in the (lack of)\n> meaning in index column order also makes it quite wrong to generally\n> separate prefixes equalities from the rest of the keys.\n\nYeah, that makes sense. These specific explain lines probably\nonly/mostly make sense for btree. So yeah we'd want the index AM to be\nable to add some stuff to the explain plan.\n\n> As you can see, there's a huge difference in performance. Putting both\n> non-bound and \"normal\" filter clauses in the same Filter clause will\n> make it more difficult to explain performance issues based on only the\n> explain output.\n\nFair enough, that's of course the main point of this patch in the\nfirst place: being able to better interpret the explain plan when you\ndon't have access to the schema. Still I think Filter is the correct\nkeyword for both, so how about we make it less confusing by making the\ncurrent \"Filter\" more specific by calling it something like \"Non-key\nFilter\" or \"INCLUDE Filter\" and then call the other something like\n\"Index Filter\" or \"Secondary Bound Filter\".\n\n\n", "msg_date": "Mon, 24 Jun 2024 14:41:53 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Mon, Jun 24, 2024 at 8:08 AM <[email protected]> wrote:\n\n> > I am unable to decide whether reporting the bound quals is just enough\n> to decide the efficiency of index without knowing the difference in the\n> number of index tuples selectivity and heap tuple selectivity. The\n> difference seems to be a better indicator of index efficiency whereas the\n> bound quals will help debug the in-efficiency, if any.\n> > Also, do we want to report bound quals even if they are the same as\n> index conditions or just when they are different?\n>\n> Thank you for your comment. After receiving your comment, I thought it\n> would be better to also report information that would make the difference\n> in selectivity understandable. One idea I had is to output the number of\n> index tuples inefficiently extracted, like “Rows Removed by Filter”. Users\n> can check the selectivity and efficiency by looking at the number.\n>\n> Also, I thought it would be better to change the way bound quals are\n> reported to align with the \"Filter\". I think it would be better to modify\n> it so that it does not output when the bound quals are the same as the\n> index conditions.\n>\n> In my local PoC patch, I have modified the output as follows, what do you\n> think?\n>\n> =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id2 =\n> 101;\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_idx on ikedamsh.test (cost=0.42..8.45 rows=1\n> width=18) (actual time=0.082..0.086 rows=1 loops=1)\n> Output: id1, id2, id3, value\n> Index Cond: ((test.id1 = 1) AND (test.id2 = 101)) -- If it’s\n> efficient, the output won’t change.\n> Planning Time: 5.088 ms\n> Execution Time: 0.162 ms\n> (5 rows)\n>\n\nThis looks fine. 
We may highlight in the documentation that lack of Index\nbound quals in EXPLAIN output indicate that they are same as Index Cond:.\nOther idea is to use Index Cond and bound quals as property name but that's\ntoo long.\n\n\n>\n> =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 =\n> 101;\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using test_idx on ikedamsh.test (cost=0.42..12630.10 rows=1\n> width=18) (actual time=0.175..279.819 rows=1 loops=1)\n> Output: id1, id2, id3, value\n> Index Cond: (test.id1 = 1) -- Change the output. Show\n> only the bound quals.\n> Index Filter: (test.id3 = 101) -- New. Output quals which\n> are not used as the bound quals\n> Rows Removed by Index Filter: 499999 -- New. Output when ANALYZE\n> option is specified\n> Planning Time: 0.354 ms\n> Execution Time: 279.908 ms\n> (7 rows)\n>\n\nI don't think we want to split these clauses. Index Cond should indicate\nthe conditions applied to the index scan. Bound quals should be listed\nseparately even though they will have an intersection with Index Cond. I am\nnot sure whether Index Filter is the right name, maybe Index Bound Cond:\nBut I don't know this area enough to make a final call.\n\nAbout Rows Removed by Index Filter: it's good to provide a number when\nANALYZE is specified, but it will be also better to specify what was\nestimated. We do that for (cost snd rows etc.) but doing that somewhere in\nthe plan output may not have a precedent. I think we should try that and\nsee what others think.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Mon, Jun 24, 2024 at 8:08 AM <[email protected]> wrote:> I am unable to decide whether reporting the bound quals is just enough to decide the efficiency of index without knowing the difference in the number of index tuples selectivity and heap tuple selectivity. The difference seems to be a better indicator of index efficiency whereas the bound quals will help debug the in-efficiency, if any. \n> Also, do we want to report bound quals even if they are the same as index conditions or just when they are different?\n\nThank you for your comment. After receiving your comment, I thought it would be better to also report information that would make the difference in selectivity understandable. One idea I had is to output the number of index tuples inefficiently extracted, like “Rows Removed by Filter”. Users can check the selectivity and efficiency by looking at the number.\n\nAlso, I thought it would be better to change the way bound quals are reported to align with the \"Filter\". I think it would be better to modify it so that it does not output when the bound quals are the same as the index conditions.\n\nIn my local PoC patch, I have modified the output as follows, what do you think?\n\n=# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id2 = 101;\n                                                       QUERY PLAN                                                        \n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_idx on ikedamsh.test  (cost=0.42..8.45 rows=1 width=18) (actual time=0.082..0.086 rows=1 loops=1)\n   Output: id1, id2, id3, value\n   Index Cond: ((test.id1 = 1) AND (test.id2 = 101))  -- If it’s efficient, the output won’t change.\n Planning Time: 5.088 ms\n Execution Time: 0.162 ms\n(5 rows)This looks fine. 
We may highlight in the documentation that lack of Index bound quals in EXPLAIN output indicate that they are same as Index Cond:. Other idea is to use Index Cond and bound quals as property name but that's too long. \n\n=# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 = 101;\n                                                          QUERY PLAN                                                           \n-------------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_idx on ikedamsh.test  (cost=0.42..12630.10 rows=1 width=18) (actual time=0.175..279.819 rows=1 loops=1)\n   Output: id1, id2, id3, value\n   Index Cond: (test.id1 = 1)                 -- Change the output. Show only the bound quals. \n   Index Filter: (test.id3 = 101)              -- New. Output quals which are not used as the bound quals\n   Rows Removed by Index Filter: 499999    -- New. Output when ANALYZE option is specified\n Planning Time: 0.354 ms\n Execution Time: 279.908 ms\n(7 rows)I don't think we want to split these clauses. Index Cond should indicate the conditions applied to the index scan. Bound quals should be listed separately even though they will have an intersection with Index Cond. I am not sure whether Index Filter is the right name, maybe Index Bound Cond: But I don't know this area enough to make a final call.About Rows Removed by Index Filter: it's good to provide a number when ANALYZE is specified, but it will be also better to specify what was estimated. We do that for (cost snd rows etc.) but doing that somewhere in the plan output may not have a precedent. I think we should try that and see what others think.-- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 24 Jun 2024 18:25:35 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Mon, 24 Jun 2024 at 14:42, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Mon, 24 Jun 2024 at 13:02, Matthias van de Meent\n> <[email protected]> wrote:\n> > It does not really behave similar: index scan keys (such as the\n> > id3=101 scankey) don't require visibility checks in the btree code,\n> > while the Filter condition _does_ require a visibility check, and\n> > delegates the check to the table AM if the scan isn't Index-Only, or\n> > if the VM didn't show all-visible during the check.\n>\n> Any chance you could point me in the right direction for the\n> code/docs/comment about this? I'd like to learn a bit more about why\n> that is the case, because I didn't realize visibility checks worked\n> differently for index scan keys and Filter keys.\n\nThis can be derived by combining how Filter works (it only filters the\nreturned live tuples) and how Index-Only scans work (return the index\ntuple, unless !ALL_VISIBLE, in which case the heap tuple is\nprojected). There have been several threads more or less recently that\nalso touch this topic and closely related topics, e.g. [0][1].\n\n> > Furthermore, the index could use the scankey to improve the number of\n> > keys to scan using \"skip scans\"; by realising during a forward scan\n> > that if you've reached tuple (1, 2, 3) and looking for (1, _, 1) you\n> > can skip forward to (1, 3, _), rather than having to go through tuples\n> > (1, 2, 4), (1, 2, 5), ... (1, 2, n). 
This is not possible for\n> > INCLUDE-d columns, because their datatypes and structure are opaque to\n> > the index AM; the AM cannot assume anything about or do anything with\n> > those values.\n>\n> Does Postgres actually support this currently? I thought skip scans\n> were not available (yet).\n\nPeter Geoghegan has been working on it as project after PG17's\nIN()-list improvements were committed, and I hear he has the basics\nworking but the further details need fleshing out.\n\n> > As you can see, there's a huge difference in performance. Putting both\n> > non-bound and \"normal\" filter clauses in the same Filter clause will\n> > make it more difficult to explain performance issues based on only the\n> > explain output.\n>\n> Fair enough, that's of course the main point of this patch in the\n> first place: being able to better interpret the explain plan when you\n> don't have access to the schema. Still I think Filter is the correct\n> keyword for both, so how about we make it less confusing by making the\n> current \"Filter\" more specific by calling it something like \"Non-key\n> Filter\" or \"INCLUDE Filter\" and then call the other something like\n> \"Index Filter\" or \"Secondary Bound Filter\".\n\nI'm not sure how debuggable explain plans are without access to the\nschema, especially when VERBOSE isn't configured, so I would be\nhesitant to accept that as an argument here.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://www.postgresql.org/message-id/flat/N1xaIrU29uk5YxLyW55MGk5fz9s6V2FNtj54JRaVlFbPixD5z8sJ07Ite5CvbWwik8ZvDG07oSTN-usENLVMq2UAcizVTEd5b-o16ZGDIIU%3D%40yamlcoder.me\n[1] https://www.postgresql.org/message-id/flat/cf85f46f-b02f-05b2-5248-5000b894ebab%40enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jun 2024 17:56:35 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "> On Mon, 24 Jun 2024 at 04:38, <[email protected]> wrote:\r\n> >\r\n> > In my local PoC patch, I have modified the output as follows, what do you think?\r\n> >\r\n> > =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id2 =\r\n> 101;\r\n> > QUERY PLAN\r\n> > ----------------------------------------------------------------------\r\n> > ---------------------------------------------------\r\n> > Index Scan using test_idx on ikedamsh.test (cost=0.42..8.45 rows=1 width=18)\r\n> (actual time=0.082..0.086 rows=1 loops=1)\r\n> > Output: id1, id2, id3, value\r\n> > Index Cond: ((test.id1 = 1) AND (test.id2 = 101)) -- If it’s efficient, the output\r\n> won’t change.\r\n> > Planning Time: 5.088 ms\r\n> > Execution Time: 0.162 ms\r\n> > (5 rows)\r\n> >\r\n> > =# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 =\r\n> 101;\r\n> > QUERY PLAN\r\n> > ----------------------------------------------------------------------\r\n> > ---------------------------------------------------------\r\n> > Index Scan using test_idx on ikedamsh.test (cost=0.42..12630.10 rows=1\r\n> width=18) (actual time=0.175..279.819 rows=1 loops=1)\r\n> > Output: id1, id2, id3, value\r\n> > Index Cond: (test.id1 = 1) -- Change the output. Show only the\r\n> bound quals.\r\n> > Index Filter: (test.id3 = 101) -- New. 
Output quals which are not\r\n> used as the bound quals\r\n> \r\n> I think this is too easy to confuse with the pre-existing 'Filter'\r\n> condition, which you'll find on indexes with INCLUDE-d columns or filters on non-index\r\n> columns.\r\n\r\nThanks for your comment. I forgot the case.\r\n\r\n> Furthermore, I think this is probably not helpful (maybe even harmful) for index types\r\n> like GIN and BRIN, where index searchkey order is mostly irrelevant to the index shape\r\n> and performance.\r\n\r\nYes, I expected that only B-Tree index support the feature.\r\n\r\n> Finally, does this change the index AM API? Does this add another scankey argument to\r\n> ->amrescan?\r\n\r\nYes, I think so. But since I'd like to make users know the index scan will happen without\r\nANALYZE, I planned to change amcostestimate for \"Index Filter\" and amrescan() for \r\n\"Rows Removed by Index Filter\".\r\n\r\n> > Rows Removed by Index Filter: 499999 -- New. Output when ANALYZE option\r\n> is specified\r\n> \r\n> Separate from the changes to Index Cond/Index Filter output changes I think this can\r\n> be useful output, though I'd probably let the AM specify what kind of filter data to\r\n> display.\r\n> E.g. BRIN may well want to display how many ranges matched the predicate, vs how\r\n> many ranges were unsummarized and thus returned; two conditions which aren't as\r\n> easy to differentiate but can be important debugging query performance.\r\n\r\nOK, thanks. I understood that it would be nice if we could customize to output information\r\nspecific to other indexes like BRIN.\r\n\r\n> > Planning Time: 0.354 ms\r\n> > Execution Time: 279.908 ms\r\n> > (7 rows)\r\n> \r\n> Was this a test against the same dataset as the one you'd posted your measurements of\r\n> your first patchset with? The execution time seems to have slown down quite\r\n> significantly, so if the testset is the same then this doesn't bode well for your patchset.\r\n\r\nYes, the reason is that the cache hit ratio is very low since I tested after I restarted the \r\nmachine. I had to add BUFFERS option.\r\n\r\nRegards,\r\n--\r\nMasahiro Ikeda\r\nNTT DATA CORPORATION\r\n", "msg_date": "Wed, 26 Jun 2024 06:45:38 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "> +1 for the idea.\r\n\r\nThanks! I was interested in the test result that you shared.\r\n\r\nRegards,\r\n--\r\nMasahiro Ikeda\r\nNTT DATA CORPORATION\r\n", "msg_date": "Wed, 26 Jun 2024 06:51:39 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": ">>=# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM test WHERE id1 = 1 AND id3 = 101;\r\n>>                                                          QUERY PLAN                                                           \r\n>>-------------------------------------------------------------------------------------------------------------------------------\r\n>> Index Scan using test_idx on ikedamsh.test  (cost=0.42..12630.10 rows=1 width=18) (actual time=0.175..279.819 rows=1 loops=1)\r\n>>   Output: id1, id2, id3, value\r\n>>   Index Cond: (test.id1 = 1)                 -- Change the output. Show only the bound quals. \r\n>>   Index Filter: (test.id3 = 101)              -- New. Output quals which are not used as the bound quals\r\n>>   Rows Removed by Index Filter: 499999    -- New. 
Output when ANALYZE option is specified\r\n>> Planning Time: 0.354 ms\r\n>> Execution Time: 279.908 ms\r\n>> (7 rows)\r\n>\r\n> I don't think we want to split these clauses. Index Cond should indicate the conditions applied\r\n> to the index scan. Bound quals should be listed separately even though they will have an\r\n> intersection with Index Cond. I am not sure whether Index Filter is the right name, \r\n> maybe Index Bound Cond: But I don't know this area enough to make a final call.\r\n\r\nOK, I understood that it's better to only add new ones. I think \"Index Filter\" fits other than \"Index\r\nBound Cond\" if we introduce \"Rows Removed By Index Filter\".\r\n\r\n> About Rows Removed by Index Filter: it's good to provide a number when ANALYZE is\r\n> specified, but it will be also better to specify what was estimated. We do that for (cost snd rows etc.)\r\n> but doing that somewhere in the plan output may not have a precedent. I think we should try that\r\n> and see what others think.\r\n\r\nIt's interesting! It’s an idea that can be applied not only to multi-column indexes, right?\r\nI will consider the implementation and discuss it in a new thread. However, I would like to\r\nfocus on the feature to output information about multi-column indexes at first.\r\n\r\nRegards,\r\n--\r\nMasahiro Ikeda\r\nNTT DATA CORPORATION\r\n", "msg_date": "Wed, 26 Jun 2024 07:25:26 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "> On Mon, 24 Jun 2024 at 14:42, Jelte Fennema-Nio <[email protected]> wrote:\r\n> >\r\n> > On Mon, 24 Jun 2024 at 13:02, Matthias van de Meent\r\n> > <[email protected]> wrote:\r\n> > > It does not really behave similar: index scan keys (such as the\r\n> > > id3=101 scankey) don't require visibility checks in the btree code,\r\n> > > while the Filter condition _does_ require a visibility check, and\r\n> > > delegates the check to the table AM if the scan isn't Index-Only, or\r\n> > > if the VM didn't show all-visible during the check.\r\n> >\r\n> > Any chance you could point me in the right direction for the\r\n> > code/docs/comment about this? I'd like to learn a bit more about why\r\n> > that is the case, because I didn't realize visibility checks worked\r\n> > differently for index scan keys and Filter keys.\r\n> \r\n> This can be derived by combining how Filter works (it only filters the returned live tuples)\r\n> and how Index-Only scans work (return the index tuple, unless !ALL_VISIBLE, in which\r\n> case the heap tuple is projected). There have been several threads more or less\r\n> recently that also touch this topic and closely related topics, e.g. [0][1].\r\n\r\nThanks! I could understand what is difference between INCLUDE based filter and index filter.\r\n\r\n> > > As you can see, there's a huge difference in performance. Putting\r\n> > > both non-bound and \"normal\" filter clauses in the same Filter clause\r\n> > > will make it more difficult to explain performance issues based on\r\n> > > only the explain output.\r\n> >\r\n> > Fair enough, that's of course the main point of this patch in the\r\n> > first place: being able to better interpret the explain plan when you\r\n> > don't have access to the schema. 
Still I think Filter is the correct\r\n> > keyword for both, so how about we make it less confusing by making the\r\n> > current \"Filter\" more specific by calling it something like \"Non-key\r\n> > Filter\" or \"INCLUDE Filter\" and then call the other something like\r\n> > \"Index Filter\" or \"Secondary Bound Filter\".\r\n> \r\n> I'm not sure how debuggable explain plans are without access to the schema, especially\r\n> when VERBOSE isn't configured, so I would be hesitant to accept that as an argument\r\n> here.\r\n\r\nIMHO, it's nice to be able to understand the differences between each\r\nFILTER even without the VERBOSE option. (+1 for Jelte Fennema-Nio's idea)\r\n\r\nEven without access to the schema, it would be possible to quickly know if\r\nthe plan is not as expected, and I believe there are virtually no disadvantages\r\nto having multiple \"XXX FILTER\" outputs.\r\n\r\nIf it's better to output such information only with the VERBOSE option, \r\nWhat do you think about the following idea?\r\n* When the VERBOSE option is not specified, output as \"Filter\" in all cases\r\n* When the VERBOSE option is specified, output as \"Non-key Filter\", \"INCLUDE Filter\" \r\n and \"Index Filter\".\r\n\r\nIn addition, I think it would be good to mention the differences between each filter in \r\nthe documentation.\r\n\r\nRegards,\r\n--\r\nMasahiro Ikeda\r\nNTT DATA CORPORATION\r\n", "msg_date": "Wed, 26 Jun 2024 07:44:47 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Fri, Jun 21, 2024 at 3:12 AM <[email protected]> wrote:\n> Regarding the multicolumn B-Tree Index, I'm considering\n> if we can enhance the EXPLAIN output. There have been requests\n> for this from our customer.\n\nI agree that this is a real problem -- I'm not surprised to hear that\nyour customer asked about it.\n\nIn the past, we've heard complaints about this problem from Markus Winand, too:\n\nhttps://use-the-index-luke.com/sql/explain-plan/postgresql/filter-predicates\n\nAs it happens I have been thinking about this problem a lot recently.\nSpecifically the user-facing aspects, what we show in EXPLAIN. It is\nrelevant to my ongoing work on skip scan:\n\nhttps://commitfest.postgresql.org/48/5081/\nhttps://www.postgresql.org/message-id/flat/CAH2-Wzmn1YsLzOGgjAQZdn1STSG_y8qP__vggTaPAYXJP+G4bw@mail.gmail.com\n\nUnfortunately, my patch will make the situation more complicated for\nyour patch. I would like to resolve the tension between the two\npatches, but I'm not sure how to do that.\n\nIf you look at the example query that I included in my introductory\nemail on the skip scan thread (the query against the sales_mdam_paper\ntable), you'll see that my patch makes it go much faster. My patch\nwill effectively \"convert\" nbtree scan keys that would traditionally\nhave to use non-index-bound conditions/filter predicates, into\nindex-bound conditions/access predicates. This all happens at runtime,\nduring nbtree preprocessing (not during planning).\n\nThis may mean that your patch's approach of determining which\ncolumns/scan keys are in which category (bound vs. non-bound) cannot\nrely on its current approach of placing each type of clause into one\nof two categories inside btcostestimate() -- the view that we see from\nbtcostestimate() will be made less authoritative by skip scan. 
What\nactually matters in what happens during nbtree preprocessing, inside\n_bt_preprocess_keys().\n\nUnfortunately, this is even more complicated than it sounds. It would\nbe a good idea if we moved _bt_preprocess_keys() to plan time, so that\nbtcostestimate() operated off of authoritative information, rather\nthan independently figuring out the same details for the purposes of\ncosting. We've talked about this before, even [1]. That way your patch\ncould just work off of this authoritative information. But even that\ndoesn't necessarily fix the problem.\n\nNote that the skip scan patch makes _bt_preprocess_keys()\n*indiscriminately* \"convert\" *all* scan keys to index bound conditions\n-- at least where that's possible at all. There are minor\nimplementation restrictions that mean that we can't always do that.\nBut overall, the patch more or less eliminates non-bound index\nconditions. That is, it'll be rare to non-existent for nbtree to fail\nto mark *any* scan key as SK_BT_REQFWD/SK_BT_REQBKWD. Technically\nspeaking, non-bound conditions mostly won't exist anymore.\n\nOf course, this doesn't mean that the problem that your patch is\nsolving will actually go away. I fully expect that the skip scan patch\nwill merely make some scan keys \"required-by-scan/index bound\ncondition scan keys in name only\". Technically they won't be the\nproblematic kind of index condition, but that won't actually be true\nin any practical sense. Because users (like your customer) will still\nget full index scans, and be surprised, just like today.\n\nAs I explain in my email on the skip scan thread, I believe that the\npatch's aggressive approach to \"converting\" scan keys is an advantage.\nThe amount of skipping that actually takes place should be decided\ndynamically, at runtime. It is a decision that should be made at the\nlevel of individual leaf pages (or small groups of leaf pages), not at\nthe level of the whole scan. The distinction between index bound\nconditions and non-bound conditions becomes much more \"squishy\", which\nis mostly (though not entirely) a good thing.\n\nI really don't know what to do about this. As I said, I agree with the\ngeneral idea of this patch -- this is definitely a real problem. And,\nI don't pretend that my skip scan patch will actually define the\nproblem out of existence (except perhaps in those cases that it\nactually makes it much faster). Maybe we could make a guess (based on\nstatistics) whether or not any skip attributes will leave the\nlower-order clauses as useful index bound conditions at runtime. But I\ndon't know...that condition is actually a \"continuous\" condition now\n-- it is not a strict dichotomy (it is not either/or, but rather a\nquestion of degree, perhaps on a scale of 0.0 - 1.0).\n\nIt's also possible that we should just do something simple, like your\npatch, even though technically it won't really be accurate in cases\nwhere skip scan is used to good effect. Maybe showing the \"default\nworking assumption\" about how the scan keys/clauses will behave at\nruntime is actually the right thing to do. 
Maybe I am just\noverthinking it.\n\n[1] https://www.postgresql.org/message-id/[email protected]\n-- the final full paragraph mentions moving _bt_preprocess_keys() into\nthe planner\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 27 Jun 2024 16:01:56 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Thu, 27 Jun 2024 at 22:02, Peter Geoghegan <[email protected]> wrote:\n> It's also possible that we should just do something simple, like your\n> patch, even though technically it won't really be accurate in cases\n> where skip scan is used to good effect. Maybe showing the \"default\n> working assumption\" about how the scan keys/clauses will behave at\n> runtime is actually the right thing to do. Maybe I am just\n> overthinking it.\n\nIIUC, you're saying that your skip scan will improve the situation\nMasahiro describes dramatically in some/most cases. But it still won't\nbe as good as a pure index \"prefix\" scan.\n\nIf that's the case then I do think you're overthinking this a bit.\nBecause then you'd still want to see this difference between the\nprefix-scan keys and the skip-scan keys. I think the main thing that\nthe introduction of the skip scan changes is the name that we should\nshow, e.g. instead of \"Non-key Filter\" we might want to call it \"Skip\nScan Cond\"\n\nI do think though that in addition to a \"Skip Scan Filtered\" count for\nANALYZE, it would be very nice to also get a \"Skip Scan Skipped\" count\n(if that's possible to measure/estimate somehow). This would allow\nusers to determine how effective the skip scan was, i.e. were they\nable to skip over large swaths of the index? Or did they skip over\nnothing because the second column of the index (on which there was no\nfilter) was unique within the table\n\n\n", "msg_date": "Thu, 27 Jun 2024 22:46:34 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Thu, Jun 27, 2024 at 4:46 PM Jelte Fennema-Nio <[email protected]> wrote:\n> On Thu, 27 Jun 2024 at 22:02, Peter Geoghegan <[email protected]> wrote:\n> > It's also possible that we should just do something simple, like your\n> > patch, even though technically it won't really be accurate in cases\n> > where skip scan is used to good effect. Maybe showing the \"default\n> > working assumption\" about how the scan keys/clauses will behave at\n> > runtime is actually the right thing to do. Maybe I am just\n> > overthinking it.\n>\n> IIUC, you're saying that your skip scan will improve the situation\n> Masahiro describes dramatically in some/most cases.\n\n\"Most cases\" seems likely to be overstating it. Overall, I doubt that\nit makes sense to try to generalize like that.\n\nThe breakdown of the cases that we see in the field right now\n(whatever it really is) is bound to be strongly influenced by the\ncurrent capabilities of Postgres. If something is intolerably slow,\nthen it just isn't tolerated. If something works adequately, then\nusers don't usually care why it is so.\n\n> But it still won't\n> be as good as a pure index \"prefix\" scan.\n\nTypically, no, it won't be. 
But there's really no telling for sure.\nThe access patterns for a composite index on '(a, b)' with a qual\n\"WHERE b = 5\" are identical to a qual explicitly written \"WHERE a =\nany(<every possible value in 'a'>) AND b = 5\".\n\n> If that's the case then I do think you're overthinking this a bit.\n> Because then you'd still want to see this difference between the\n> prefix-scan keys and the skip-scan keys. I think the main thing that\n> the introduction of the skip scan changes is the name that we should\n> show, e.g. instead of \"Non-key Filter\" we might want to call it \"Skip\n> Scan Cond\"\n\nWhat about cases where we legitimately have to vary our strategy\nduring the same index scan? We might very well be able to skip over\nmany leaf pages when scanning through a low cardinality subset of the\nindex (low cardinality in respect of a leading column 'a'). Then we\nmight find that there are long runs on leaf pages where no skipping is\npossible.\n\nI don't expect this to be uncommon. I do expect it to happen when the\noptimizer wasn't particularly expecting it. Like when a full index\nscan was the fastest plan anyway. Or when a skip scan wasn't quite as\ngood as expected, but nevertheless turned out to be the fastest plan.\n\n> I do think though that in addition to a \"Skip Scan Filtered\" count for\n> ANALYZE, it would be very nice to also get a \"Skip Scan Skipped\" count\n> (if that's possible to measure/estimate somehow). This would allow\n> users to determine how effective the skip scan was, i.e. were they\n> able to skip over large swaths of the index? Or did they skip over\n> nothing because the second column of the index (on which there was no\n> filter) was unique within the table\n\nYeah, EXPLAIN ANALYZE should probably be showing something about\nskipping. That provides us with a way of telling the user what really\nhappened, which could help when EXPLAIN output alone turns out to be\nquite misleading.\n\nIn fact, that'd make sense even today, without skip scan (just with\nthe 17 work on nbtree SAOP scans). Even with regular SAOP nbtree index\nscans, the number of primitive scans is hard to predict, and quite\nindicative of what's really going on with the scan.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 27 Jun 2024 18:40:54 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Thu, 27 Jun 2024 at 22:02, Peter Geoghegan <[email protected]> wrote:\n> Unfortunately, my patch will make the situation more complicated\n> for your patch. I would like to resolve the tension between the\n> two patches, but I'm not sure how to do that.\n\nOK. I would like to understand more about your proposed patch. I\nhave also registered as a reviewer in the commitfests entry.\n\nOn 2024-06-28 07:40, Peter Geoghegan wrote:\n> On Thu, Jun 27, 2024 at 4:46 PM Jelte Fennema-Nio <[email protected]> wrote:\n>> I do think though that in addition to a \"Skip Scan Filtered\" count for\n>> ANALYZE, it would be very nice to also get a \"Skip Scan Skipped\" count\n>> (if that's possible to measure/estimate somehow). This would allow\n>> users to determine how effective the skip scan was, i.e. were they\n>> able to skip over large swaths of the index? Or did they skip overx\n>> nothing because the second column of the index (on which there was no\n>> filter) was unique within the table\n> \n> Yeah, EXPLAIN ANALYZE should probably be showing something about\n> skipping. 
That provides us with a way of telling the user what really\n> happened, which could help when EXPLAIN output alone turns out to be\n> quite misleading.\n> \n> In fact, that'd make sense even today, without skip scan (just with\n> the 17 work on nbtree SAOP scans). Even with regular SAOP nbtree index\n> scans, the number of primitive scans is hard to predict, and quite\n> indicative of what's really going on with the scan.\n\nI agree as well.\n\nAlthough I haven't looked on your patch yet, if it's difficult to know\nhow it can optimize during the planning phase, it's enough for me to just\nshow \"Skip Scan Cond (or Non-Key Filter)\". This is because users can\nunderstand that inefficient index scans *may* occur.\n\nIf users want more detail, they can execute \"EXPLAIN ANALYZE\". This will\nallow them to understand the execution effectively and determine if there\nis any room to optimize the plan by looking at the counter of\n\"Skip Scan Filtered (or Skip Scan Skipped)\".\n\nIn terms of the concept of EXPLAIN output, I thought that runtime partition\npruning is similar. \"EXPLAIN without ANALYZE\" only shows the possibilities and\n\"EXPLAIN ANALYZE\" shows the actual results.\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION\n", "msg_date": "Fri, 28 Jun 2024 03:05:57 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Fri, 28 Jun 2024 at 00:41, Peter Geoghegan <[email protected]> wrote:\n> Typically, no, it won't be. But there's really no telling for sure.\n> The access patterns for a composite index on '(a, b)' with a qual\n> \"WHERE b = 5\" are identical to a qual explicitly written \"WHERE a =\n> any(<every possible value in 'a'>) AND b = 5\".\n\nHmm, that's true. But in that case the explain plan gives a clear hint\nthat something like that might be going on, because you'll see:\n\nIndex Cond: a = any(<every possible value in 'a'>) AND b = 5\n\nThat does make me think of another way, and maybe more \"correct\" way,\nof representing Masahiros situation than adding a new \"Skip Scan Cond\"\nrow to the EXPLAIN output. We could explicitly include a comparison to\nall prefix columns in the Index Cond:\n\nIndex Cond: ((test.id1 = 1) AND (test.id2 = ANY) AND (test.id3 = 101))\n\nOr if you want to go with syntactically correct SQL we could do the following:\n\nIndex Cond: ((test.id1 = 1) AND ((test.id2 IS NULL) OR (test.id2 IS\nNOT NULL)) AND (test.id3 = 101))\n\nAn additional benefit this provides is that you now know which\nadditional column you should use a more specific filter on to speed up\nthe query. 
In this case test.id2\n\nOTOH, maybe it's not a great way because actually running that puts\nthe IS NULL+ IS NOT NULL query in the Filter clause (which honestly\nsurprises me because I had expected this \"always true expression\"\nwould have been optimized away by the planner).\n\n> EXPLAIN (VERBOSE, ANALYZE) SELECT id1, id2, id3 FROM test WHERE id1 = 1 AND (id2 IS NULL OR id2 IS NOT NULL) AND id3 = 101;\n QUERY PLAN\n─────────────────────────────────────────────────────\n Index Only Scan using test_idx on public.test (cost=0.42..12809.10\nrows=1 width=12) (actual time=0.027..11.234 rows=1 loops=1)\n Output: id1, id2, id3\n Index Cond: ((test.id1 = 1) AND (test.id3 = 101))\n Filter: ((test.id2 IS NULL) OR (test.id2 IS NOT NULL))\n\n> What about cases where we legitimately have to vary our strategy\n> during the same index scan?\n\nWould my new suggestion above address this?\n\n> In fact, that'd make sense even today, without skip scan (just with\n> the 17 work on nbtree SAOP scans). Even with regular SAOP nbtree index\n> scans, the number of primitive scans is hard to predict, and quite\n> indicative of what's really going on with the scan.\n\n*googles nbtree SAOP scans and finds the very helpful[1]*\n\nYes, I feel like this should definitely be part of the ANALYZE output.\nSeeing how Lukas has to look at pg_stat_user_tables to get this\ninformation seems quite annoying[2] and only possible on systems that\nhave no concurrent queries.\n\nSo it sounds like we'd want a \"Primitive Index Scans\" counter in\nANALYZE too. In addition to the number of filtered rows by, which if\nwe go with my proposal above should probably be called \"Rows Removed\nby Index Cond\".\n\n[1]: https://www.youtube.com/watch?v=jg2KeSB5DR8\n[2]: https://youtu.be/jg2KeSB5DR8?t=188\n\n\n", "msg_date": "Fri, 28 Jun 2024 10:59:24 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Fri, 28 Jun 2024 at 10:59, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Fri, 28 Jun 2024 at 00:41, Peter Geoghegan <[email protected]> wrote:\n> > Typically, no, it won't be. But there's really no telling for sure.\n> > The access patterns for a composite index on '(a, b)' with a qual\n> > \"WHERE b = 5\" are identical to a qual explicitly written \"WHERE a =\n> > any(<every possible value in 'a'>) AND b = 5\".\n>\n> Hmm, that's true. But in that case the explain plan gives a clear hint\n> that something like that might be going on, because you'll see:\n>\n> Index Cond: a = any(<every possible value in 'a'>) AND b = 5\n>\n> That does make me think of another way, and maybe more \"correct\" way,\n> of representing Masahiros situation than adding a new \"Skip Scan Cond\"\n> row to the EXPLAIN output. We could explicitly include a comparison to\n> all prefix columns in the Index Cond:\n>\n> Index Cond: ((test.id1 = 1) AND (test.id2 = ANY) AND (test.id3 = 101))\n>\n> Or if you want to go with syntactically correct SQL we could do the following:\n>\n> Index Cond: ((test.id1 = 1) AND ((test.id2 IS NULL) OR (test.id2 IS\n> NOT NULL)) AND (test.id3 = 101))\n>\n> An additional benefit this provides is that you now know which\n> additional column you should use a more specific filter on to speed up\n> the query. 
In this case test.id2\n>\n> OTOH, maybe it's not a great way because actually running that puts\n> the IS NULL+ IS NOT NULL query in the Filter clause (which honestly\n> surprises me because I had expected this \"always true expression\"\n> would have been optimized away by the planner).\n>\n> > EXPLAIN (VERBOSE, ANALYZE) SELECT id1, id2, id3 FROM test WHERE id1 = 1 AND (id2 IS NULL OR id2 IS NOT NULL) AND id3 = 101;\n> QUERY PLAN\n> ─────────────────────────────────────────────────────\n> Index Only Scan using test_idx on public.test (cost=0.42..12809.10\n> rows=1 width=12) (actual time=0.027..11.234 rows=1 loops=1)\n> Output: id1, id2, id3\n> Index Cond: ((test.id1 = 1) AND (test.id3 = 101))\n> Filter: ((test.id2 IS NULL) OR (test.id2 IS NOT NULL))\n>\n> > What about cases where we legitimately have to vary our strategy\n> > during the same index scan?\n>\n> Would my new suggestion above address this?\n>\n> > In fact, that'd make sense even today, without skip scan (just with\n> > the 17 work on nbtree SAOP scans). Even with regular SAOP nbtree index\n> > scans, the number of primitive scans is hard to predict, and quite\n> > indicative of what's really going on with the scan.\n>\n> *googles nbtree SAOP scans and finds the very helpful[1]*\n>\n> Yes, I feel like this should definitely be part of the ANALYZE output.\n> Seeing how Lukas has to look at pg_stat_user_tables to get this\n> information seems quite annoying[2] and only possible on systems that\n> have no concurrent queries.\n>\n> So it sounds like we'd want a \"Primitive Index Scans\" counter in\n> ANALYZE too. In addition to the number of filtered rows by, which if\n> we go with my proposal above should probably be called \"Rows Removed\n> by Index Cond\".\n\nThis all just made me more confident that this shows a need to enable\nindex AMs to provide output for EXPLAIN: The knowledge about how index\nscankeys are actually used is exclusively known to the index AM,\nbecause the strategies are often unique to the index AM (or even\nchosen operator classes), and sometimes can only be applied at\nruntime: while the index scankeys' sizes, attribute numbers and\noperators are known in advance (even if not all arguments are filled\nin; `FROM a JOIN b ON b.id = ANY (a.ref_array)`), the AM can at least\nshow what strategy it is likely going to choose, and how (in)efficient\nthat strategy could be.\n\nRight now, Masahiro-san's patch tries to do that with an additional\nfield in IndexPath populated (by proxy) exclusively in btcostestimate.\nI think that design is wrong, because it wires explain- and\nbtree-specific data through the planner, adding overhead everywhere\nwhich is only used for btree- and btree-compatible indexes.\n\nI think the better choice would be adding an IndexAmRoutine->amexplain\nsupport function, which would get called in e.g. explain.c's\nExplainIndexScanDetails to populate a new \"Index Scan Details\" (name\nto be bikeshed) subsection of explain plans. This would certainly be\npossible, as the essentials for outputting things to EXPLAIN are\nreadily available in the explain.h header.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 28 Jun 2024 14:05:23 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On Thu, Jun 27, 2024 at 11:06 PM <[email protected]> wrote:\n> OK. I would like to understand more about your proposed patch. 
I\n> have also registered as a reviewer in the commitfests entry.\n\nGreat!\n\n> Although I haven't looked on your patch yet, if it's difficult to know\n> how it can optimize during the planning phase, it's enough for me to just\n> show \"Skip Scan Cond (or Non-Key Filter)\". This is because users can\n> understand that inefficient index scans *may* occur.\n\nThat makes sense.\n\nThe goal of your patch is to highlight when an index scan is using an\nindex that is suboptimal for a particular query (a query that the user\nruns through EXPLAIN or EXPLAIN ANALYZE). The underlying rules that\ndetermine \"access predicate vs. filter predicate\" are not very\ncomplicated -- they're intuitive, even. But even an expert can easily\nmake a mistake on a bad day.\n\nIt seems to me that all your patch really needs to do is to give the\nuser a friendly nudge in that direction, when it makes sense to. You\nwant to subtly suggest to the user \"hey, are you sure that the index\nthe plan uses is exactly what you expected?\". Fortunately, even when\nskip scan works well that should still be a useful nudge. If we assume\nthat the query that the user is looking at is much more important than\nother queries, then the user really shouldn't be using skip scan in\nthe first place. Even a good skip scan is a little suspicious (it's\nokay if it \"stands out\" a bit).\n\n> In terms of the concept of EXPLAIN output, I thought that runtime partition\n> pruning is similar. \"EXPLAIN without ANALYZE\" only shows the possibilities and\n> \"EXPLAIN ANALYZE\" shows the actual results.\n\nThat seems logical.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 28 Jun 2024 14:27:46 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On 2024-06-28 21:05, Matthias van de Meent wrote:\r\n> On Fri, 28 Jun 2024 at 10:59, Jelte Fennema-Nio <[email protected]> wrote:\r\n>>\r\n>> On Fri, 28 Jun 2024 at 00:41, Peter Geoghegan <[email protected]> wrote:\r\n>> > Typically, no, it won't be. But there's really no telling for sure.\r\n>> > The access patterns for a composite index on '(a, b)' with a qual\r\n>> > \"WHERE b = 5\" are identical to a qual explicitly written \"WHERE a =\r\n>> > any(<every possible value in 'a'>) AND b = 5\".\r\n>>\r\n>> Hmm, that's true. But in that case the explain plan gives a clear hint\r\n>> that something like that might be going on, because you'll see:\r\n>>\r\n>> Index Cond: a = any(<every possible value in 'a'>) AND b = 5\r\n>>\r\n>> That does make me think of another way, and maybe more \"correct\" way,\r\n>> of representing Masahiros situation than adding a new \"Skip Scan Cond\"\r\n>> row to the EXPLAIN output. We could explicitly include a comparison to\r\n>> all prefix columns in the Index Cond:\r\n>>\r\n>> Index Cond: ((test.id1 = 1) AND (test.id2 = ANY) AND (test.id3 = 101))\r\n>>\r\n>> Or if you want to go with syntactically correct SQL we could do the following:\r\n>>\r\n>> Index Cond: ((test.id1 = 1) AND ((test.id2 IS NULL) OR (test.id2 IS\r\n>> NOT NULL)) AND (test.id3 = 101))\r\n>>\r\n>> An additional benefit this provides is that you now know which\r\n>> additional column you should use a more specific filter on to speed up\r\n>> the query. 
In this case test.id2\r\n>>\r\n>> OTOH, maybe it's not a great way because actually running that puts\r\n>> the IS NULL+ IS NOT NULL query in the Filter clause (which honestly\r\n>> surprises me because I had expected this \"always true expression\"\r\n>> would have been optimized away by the planner).\r\n>>\r\n>> > EXPLAIN (VERBOSE, ANALYZE) SELECT id1, id2, id3 FROM test WHERE id1 = 1 AND (id2 IS NULL OR id2 IS NOT NULL) AND id3 = 101;\r\n>> QUERY PLAN\r\n>> ─────────────────────────────────────────────────────\r\n>> Index Only Scan using test_idx on public.test (cost=0.42..12809.10\r\n>> rows=1 width=12) (actual time=0.027..11.234 rows=1 loops=1)\r\n>> Output: id1, id2, id3\r\n>> Index Cond: ((test.id1 = 1) AND (test.id3 = 101))\r\n>> Filter: ((test.id2 IS NULL) OR (test.id2 IS NOT NULL))\r\n>>\r\n>> > What about cases where we legitimately have to vary our strategy\r\n>> > during the same index scan?\r\n>>\r\n>> Would my new suggestion above address this?\r\n>>\r\n>> > In fact, that'd make sense even today, without skip scan (just with\r\n>> > the 17 work on nbtree SAOP scans). Even with regular SAOP nbtree index\r\n>> > scans, the number of primitive scans is hard to predict, and quite\r\n>> > indicative of what's really going on with the scan.\r\n>>\r\n>> *googles nbtree SAOP scans and finds the very helpful[1]*\r\n>>\r\n>> Yes, I feel like this should definitely be part of the ANALYZE output.\r\n>> Seeing how Lukas has to look at pg_stat_user_tables to get this\r\n>> information seems quite annoying[2] and only possible on systems that\r\n>> have no concurrent queries.\r\n>>\r\n>> So it sounds like we'd want a \"Primitive Index Scans\" counter in\r\n>> ANALYZE too. In addition to the number of filtered rows by, which if\r\n>> we go with my proposal above should probably be called \"Rows Removed\r\n>> by Index Cond\".\r\n> \r\n> This all just made me more confident that this shows a need to enable\r\n> index AMs to provide output for EXPLAIN: The knowledge about how index\r\n> scankeys are actually used is exclusively known to the index AM,\r\n> because the strategies are often unique to the index AM (or even\r\n> chosen operator classes), and sometimes can only be applied at\r\n> runtime: while the index scankeys' sizes, attribute numbers and\r\n> operators are known in advance (even if not all arguments are filled\r\n> in; `FROM a JOIN b ON b.id = ANY (a.ref_array)`), the AM can at least\r\n> show what strategy it is likely going to choose, and how (in)efficient\r\n> that strategy could be.\r\n> \r\n> Right now, Masahiro-san's patch tries to do that with an additional\r\n> field in IndexPath populated (by proxy) exclusively in btcostestimate.\r\n> I think that design is wrong, because it wires explain- and\r\n> btree-specific data through the planner, adding overhead everywhere\r\n> which is only used for btree- and btree-compatible indexes.\r\n> \r\n> I think the better choice would be adding an IndexAmRoutine->amexplain\r\n> support function, which would get called in e.g. explain.c's\r\n> ExplainIndexScanDetails to populate a new \"Index Scan Details\" (name\r\n> to be bikeshed) subsection of explain plans. This would certainly be\r\n> possible, as the essentials for outputting things to EXPLAIN are\r\n> readily available in the explain.h header.\r\n\r\nYes, that's one of my concerns. 
I agree to add IndexAmRoutine->amexplain\r\nis better because we can support several use cases.\r\n\r\nAlthough I'm not confident to add only IndexAmRoutine->amexplain is enough\r\nnow, I'll make a PoC patch to confirm it.\r\n\r\nRegards,\r\n-- \r\nMasahiro Ikeda\r\nNTT DATA CORPORATION\r\n", "msg_date": "Mon, 1 Jul 2024 02:53:51 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "On 2024-06-29 03:27, Peter Geoghegan wrote:\r\n> On Thu, Jun 27, 2024 at 11:06 PM <[email protected]> wrote:\r\n>> Although I haven't looked on your patch yet, if it's difficult to know\r\n>> how it can optimize during the planning phase, it's enough for me to just\r\n>> show \"Skip Scan Cond (or Non-Key Filter)\". This is because users can\r\n>> understand that inefficient index scans *may* occur.\r\n> \r\n> That makes sense.\r\n> \r\n> The goal of your patch is to highlight when an index scan is using an\r\n> index that is suboptimal for a particular query (a query that the user\r\n> runs through EXPLAIN or EXPLAIN ANALYZE). The underlying rules that\r\n> determine \"access predicate vs. filter predicate\" are not very\r\n> complicated -- they're intuitive, even. But even an expert can easily\r\n> make a mistake on a bad day.\r\n> \r\n> It seems to me that all your patch really needs to do is to give the\r\n> user a friendly nudge in that direction, when it makes sense to. You\r\n> want to subtly suggest to the user \"hey, are you sure that the index\r\n> the plan uses is exactly what you expected?\". Fortunately, even when\r\n> skip scan works well that should still be a useful nudge. If we assume\r\n> that the query that the user is looking at is much more important than\r\n> other queries, then the user really shouldn't be using skip scan in\r\n> the first place. Even a good skip scan is a little suspicious (it's\r\n> okay if it \"stands out\" a bit).\r\n\r\nYes, you're right. I'd like users to take the chance easily.\r\n\r\n-- \r\nMasahiro Ikeda\r\nNTT DATA CORPORATION\r\n", "msg_date": "Mon, 1 Jul 2024 02:55:54 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Improve EXPLAIN output for multicolumn B-Tree Index" }, { "msg_contents": "> > I think the better choice would be adding an IndexAmRoutine->amexplain\n> > support function, which would get called in e.g. explain.c's\n> > ExplainIndexScanDetails to populate a new \"Index Scan Details\" (name\n> > to be bikeshed) subsection of explain plans. This would certainly be\n> > possible, as the essentials for outputting things to EXPLAIN are\n> > readily available in the explain.h header.\n> \n> Yes, that's one of my concerns. I agree to add IndexAmRoutine->amexplain is better\n> because we can support several use cases.\n> \n> Although I'm not confident to add only IndexAmRoutine->amexplain is enough now, I'll\n> make a PoC patch to confirm it.\n\nI attached the patch adding an IndexAmRoutine->amexplain.\n\nThis patch changes following.\n* add a new index AM function \"amexplain_function()\" and it's called in ExplainNode()\n Although I tried to add it in ExplainIndexScanDetails(), I think it's not the proper place to \n show quals. So, amexplain_function() will call after calling show_scanqual() in the patch. 
\n* add \"amexplain_function\" for B-Tree index and show \"Non Key Filter\" if VERBOSE is specified\n To avoid confusion with INCLUDE-d columns and non-index column \"Filter\", I've decided to\n output only with the VERBOSE option. However, I'm not sure if this is the appropriate solution.\n It might be a good idea to include words like 'b-tree' to make it clear that it's an output specific\n to b-tree index.\n\n-- Example dataset\nCREATE TABLE test (id1 int, id2 int, id3 int, value varchar(32));\nCREATE INDEX test_idx ON test(id1, id2, id3); -- multicolumn B-Tree index\nINSERT INTO test (SELECT i % 2, i, i, 'hello' FROM generate_series(1,1000000) s(i));\nANALYZE;\n\n-- The output is same as without this patch if it can search efficiently\n=# EXPLAIN (VERBOSE, ANALYZE, BUFFERS, MEMORY, SERIALIZE) SELECT id3 FROM test WHERE id1 = 1 AND id2 = 101;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using test_idx on public.test (cost=0.42..4.44 rows=1 width=4) (actual time=0.058..0.060 rows=1 loops=1)\n Output: id3\n Index Cond: ((test.id1 = 1) AND (test.id2 = 101))\n Heap Fetches: 0\n Buffers: shared hit=4\n Planning:\n Memory: used=14kB allocated=16kB\n Planning Time: 0.166 ms\n Serialization: time=0.009 ms output=1kB format=text\n Execution Time: 0.095 ms\n(10 rows)\n\n-- \"Non Key Filter\" will be displayed if it will scan index tuples and filter them\n=# EXPLAIN (VERBOSE, ANALYZE, BUFFERS, MEMORY, SERIALIZE) SELECT id3 FROM test WHERE id1 = 1 AND id3 = 101;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using test_idx on public.test (cost=0.42..12724.10 rows=1 width=4) (actual time=0.055..69.446 rows=1 loops=1)\n Output: id3\n Index Cond: ((test.id1 = 1) AND (test.id3 = 101))\n Heap Fetches: 0\n Non Key Filter: (test.id3 = 101)\n Buffers: shared hit=1920\n Planning:\n Memory: used=14kB allocated=16kB\n Planning Time: 0.113 ms\n Serialization: time=0.004 ms output=1kB format=text\n Execution Time: 69.491 ms\n(11 rows)\n\nAlthough I plan to support \"Rows Removed by Non Key Filtered\"(or \"Skip Scan Filtered\"),\nI'd like to know whether the current direction is good. One of my concerns is there might\nbe a better way to exact quals for boundary conditions in btexplain().\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 2 Jul 2024 03:44:01 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Improve EXPLAIN output for multicolumn B-Tree Index" } ]
[ { "msg_contents": "Hi hackers,\nCc people involved in the original thread[1].\n\nI am starting a new thread to share and discuss the implementation of\nconflict detection and logging in logical replication, as well as the\ncollection of statistics related to these conflicts.\n\nIn the original conflict resolution thread[1], we have decided to\nsplit this work into multiple patches to facilitate incremental progress\ntowards supporting conflict resolution in logical replication. This phased\napproach will allow us to address simpler tasks first. The overall work\nplan involves: 1. conflict detection (detect and log conflicts like\n'insert_exists', 'update_differ', 'update_missing', and 'delete_missing')\n2. implement simple built-in resolution strategies like\n'apply(remote_apply)' and 'skip(keep_local)'. 3. monitor capability for\nconflicts and resolutions in statistics or history table.\n\nFollowing the feedback received from PGconf.dev and discussions in the\nconflict resolution thread, features 1 and 3 are important independently.\nSo, we start a separate thread for them.\n\nHere are the basic designs for the detection and statistics:\n\n- The detail of the conflict detection\n\nWe add a new parameter detect_conflict for CREATE and ALTER subscription\ncommands. This new parameter will decide if subscription will go for\nconfict detection. By default, conflict detection will be off for a\nsubscription.\n\nWhen conflict detection is enabled, additional logging is triggered in the\nfollowing conflict scenarios:\ninsert_exists: Inserting a row that violates a NOT DEFERRABLE unique constraint.\nupdate_differ: updating a row that was previously modified by another origin.\nupdate_missing: The tuple to be updated is missing.\ndelete_missing: The tuple to be deleted is missing.\n\nFor insert_exists conflict, the log can include origin and commit\ntimestamp details of the conflicting key with track_commit_timestamp\nenabled. And update_differ conflict can only be detected when\ntrack_commit_timestamp is enabled.\n\nRegarding insert_exists conflicts, the current design is to pass\nnoDupErr=true in ExecInsertIndexTuples() to prevent immediate error\nhandling on duplicate key violation. After calling\nExecInsertIndexTuples(), if there was any potential conflict in the\nunique indexes, we report an ERROR for the insert_exists conflict along\nwith additional information (origin, committs, key value) for the\nconflicting row. Another way for this is to conduct a pre-check for\nduplicate key violation before applying the INSERT operation, but this\ncould introduce overhead for each INSERT even in the absence of conflicts.\nWe welcome any alternative viewpoints on this matter.\n\n- The detail of statistics collection\n\nWe add columns(insert_exists_count, update_differ_count,\nupdate_missing_count, delete_missing_count) in view\npg_stat_subscription_workers to shows information about the conflict which\noccur during the application of logical replication changes.\n\nThe conflicts will be tracked when track_conflict option of the\nsubscription is enabled. 
Additionally, update_differ can be detected only\nwhen track_commit_timestamp is enabled.\n\n\nThe patches for above features are attached.\nSuggestions and comments are highly appreciated.\n\n[1] https://www.postgresql.org/message-id/CAA4eK1LgPyzPr_Vrvvr4syrde4hyT%3DQQnGjdRUNP-tz3eYa%3DGQ%40mail.gmail.com\n\nBest Regards,\nHou Zhijie", "msg_date": "Fri, 21 Jun 2024 07:47:20 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Conflict detection and logging in logical replication" }, { "msg_contents": "On Friday, June 21, 2024 3:47 PM Zhijie Hou (Fujitsu) <[email protected]> wrote:\n> \n> - The detail of the conflict detection\n> \n> We add a new parameter detect_conflict for CREATE and ALTER subscription\n> commands. This new parameter will decide if subscription will go for\n> confict detection. By default, conflict detection will be off for a\n> subscription.\n> \n> When conflict detection is enabled, additional logging is triggered in the\n> following conflict scenarios:\n> insert_exists: Inserting a row that violates a NOT DEFERRABLE unique\n> constraint.\n> update_differ: updating a row that was previously modified by another origin.\n> update_missing: The tuple to be updated is missing.\n> delete_missing: The tuple to be deleted is missing.\n> \n> For insert_exists conflict, the log can include origin and commit\n> timestamp details of the conflicting key with track_commit_timestamp\n> enabled. And update_differ conflict can only be detected when\n> track_commit_timestamp is enabled.\n> \n> Regarding insert_exists conflicts, the current design is to pass\n> noDupErr=true in ExecInsertIndexTuples() to prevent immediate error\n> handling on duplicate key violation. After calling\n> ExecInsertIndexTuples(), if there was any potential conflict in the\n> unique indexes, we report an ERROR for the insert_exists conflict along\n> with additional information (origin, committs, key value) for the\n> conflicting row. Another way for this is to conduct a pre-check for\n> duplicate key violation before applying the INSERT operation, but this\n> could introduce overhead for each INSERT even in the absence of conflicts.\n> We welcome any alternative viewpoints on this matter.\n\nWhen testing the patch, I noticed a bug that when reporting the conflict\nafter calling ExecInsertIndexTuples(), we might find the tuple that we\njust inserted and report it.(we should only report conflict if there are\nother conflict tuples which are not inserted by us) Here is a new patch\nwhich fixed this and fixed a compile warning reported by CFbot.\n\nBest Regards,\nHou zj", "msg_date": "Mon, 24 Jun 2024 02:09:27 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Jun 24, 2024 at 7:39 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> When testing the patch, I noticed a bug that when reporting the conflict\n> after calling ExecInsertIndexTuples(), we might find the tuple that we\n> just inserted and report it.(we should only report conflict if there are\n> other conflict tuples which are not inserted by us) Here is a new patch\n> which fixed this and fixed a compile warning reported by CFbot.\n>\n\nThanks for the patch. 
Few comments:\n\n1) Few typos:\nCommit msg of patch001: iolates--> violates\nexecIndexing.c: ingored --> ignored\n\n2) Commit msg of stats patch: \"The commit adds columns in view\npg_stat_subscription_workers to shows\"\n--\"pg_stat_subscription_workers\" --> \"pg_stat_subscription_stats\"\n\n3) I feel, chapter '31.5. Conflicts' in docs should also mention about\ndetection or point to the page where it is already mentioned.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 24 Jun 2024 17:13:25 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Jun 24, 2024 at 7:39 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> When testing the patch, I noticed a bug that when reporting the conflict\n> after calling ExecInsertIndexTuples(), we might find the tuple that we\n> just inserted and report it.(we should only report conflict if there are\n> other conflict tuples which are not inserted by us) Here is a new patch\n> which fixed this and fixed a compile warning reported by CFbot.\n>\nThank you for the patch!\nA review comment: The patch does not detect 'update_differ' conflicts\nwhen the Publisher has a non-partitioned table and the Subscriber has\na partitioned version.\n\nHere’s a simple failing test case:\nPub: create table tab (a int primary key, b int not null, c varchar(5));\n\nSub: create table tab (a int not null, b int not null, c varchar(5))\npartition by range (b);\nalter table tab add constraint tab_pk primary key (a, b);\ncreate table tab_1 partition of tab for values from (minvalue) to (100);\ncreate table tab_2 partition of tab for values from (100) to (maxvalue);\n\nWith the above setup, in case the Subscriber table has a tuple with\nits own origin, the incoming remote update from the Publisher fails to\ndetect the 'update_differ' conflict.\n\n--\nThanks,\nNisha\n\n\n", "msg_date": "Mon, 24 Jun 2024 18:05:07 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Monday, June 24, 2024 8:35 PM Nisha Moond <[email protected]> wrote:\r\n> \r\n> On Mon, Jun 24, 2024 at 7:39 AM Zhijie Hou (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> > When testing the patch, I noticed a bug that when reporting the\r\n> > conflict after calling ExecInsertIndexTuples(), we might find the\r\n> > tuple that we just inserted and report it.(we should only report\r\n> > conflict if there are other conflict tuples which are not inserted by\r\n> > us) Here is a new patch which fixed this and fixed a compile warning\r\n> reported by CFbot.\r\n> >\r\n> Thank you for the patch!\r\n> A review comment: The patch does not detect 'update_differ' conflicts when\r\n> the Publisher has a non-partitioned table and the Subscriber has a partitioned\r\n> version.\r\n\r\nThanks for reporting the issue !\r\n\r\nHere is the new version patch set which fixed this issue. 
I also fixed\r\nsome typos and improved the doc in logical replication conflict based\r\non the comments from Shveta[1].\r\n\r\n[1] https://www.postgresql.org/message-id/CAJpy0uABSf15E%2BbMDBRCpbFYo0dh4N%3DEtpv%2BSNw6RMy8ohyrcQ%40mail.gmail.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 26 Jun 2024 02:57:47 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, June 26, 2024 10:58 AM Zhijie Hou (Fujitsu) <[email protected]> wrote:\r\n>\r\n\r\nHi,\r\n\r\nAs suggested by Sawada-san in another thread[1].\r\n\r\nI am attaching the V4 patch set which tracks the delete_differ\r\nconflict in logical replication.\r\n\r\ndelete_differ means that the replicated DELETE is deleting a row\r\nthat was modified by a different origin.\r\n\r\n[1] https://www.postgresql.org/message-id/CAD21AoDzo8ck57nvRVFWOCsjWBCjQMzqTFLY4cCeFeQZ3V_oQg%40mail.gmail.com\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Wed, 3 Jul 2024 03:00:50 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Jul 3, 2024 at 8:31 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Wednesday, June 26, 2024 10:58 AM Zhijie Hou (Fujitsu) <[email protected]> wrote:\n> >\n>\n> Hi,\n>\n> As suggested by Sawada-san in another thread[1].\n>\n> I am attaching the V4 patch set which tracks the delete_differ\n> conflict in logical replication.\n>\n> delete_differ means that the replicated DELETE is deleting a row\n> that was modified by a different origin.\n>\n\nThanks for the patch. I am still in process of review but please find\nfew comments:\n\n1) When I try to *insert* primary/unique key on pub, which already\nexists on sub, conflict gets detected. But when I try to *update*\nprimary/unique key to a value on pub which already exists on sub,\nconflict is not detected. I get the error:\n\n2024-07-10 14:21:09.976 IST [647678] ERROR: duplicate key value\nviolates unique constraint \"t1_pkey\"\n2024-07-10 14:21:09.976 IST [647678] DETAIL: Key (pk)=(4) already exists.\n\nThis is because such conflict detection needs detection of constraint\nviolation using the *new value* rather than *existing* value during\nUPDATE. INSERT conflict detection takes care of this case i.e. the\ncolumns of incoming row are considered as new values and it tries to\nsee if all unique indexes are okay to digest such new values (all\nincoming columns) but update's logic is different. It searches based\non oldTuple *only* and thus above detection is missing.\n\nShall we support such detection? If not, is it worth docuementing? 
It\nbasically falls in 'pkey_exists' conflict category but to user it\nmight seem like any ordinary update leading to 'unique key constraint\nviolation'.\n\n\n2)\nAnother case which might confuse user:\n\nCREATE TABLE t1 (pk integer primary key, val1 integer, val2 integer);\n\nOn PUB: insert into t1 values(1,10,10); insert into t1 values(2,20,20);\n\nOn SUB: update t1 set pk=3 where pk=2;\n\nData on PUB: {1,10,10}, {2,20,20}\nData on SUB: {1,10,10}, {3,20,20}\n\nNow on PUB: update t1 set val1=200 where val1=20;\n\nOn Sub, I get this:\n2024-07-10 14:44:00.160 IST [648287] LOG: conflict update_missing\ndetected on relation \"public.t1\"\n2024-07-10 14:44:00.160 IST [648287] DETAIL: Did not find the row to\nbe updated.\n2024-07-10 14:44:00.160 IST [648287] CONTEXT: processing remote data\nfor replication origin \"pg_16389\" during message type \"UPDATE\" for\nreplication target relation \"public.t1\" in transaction 760, finished\nat 0/156D658\n\nTo user, it could be quite confusing, as val1=20 exists on sub but\nstill he gets update_missing conflict and the 'DETAIL' is not\nsufficient to give the clarity. I think on HEAD as well (have not\ntested), we will get same behavior i.e. update will be ignored as we\nmake search based on RI (pk in this case). So we are not worsening the\nsituation, but now since we are detecting conflict, is it possible to\ngive better details in 'DETAIL' section indicating what is actually\nmissing?\n\n\n thanks\nShveta\n\n\n", "msg_date": "Wed, 10 Jul 2024 15:09:17 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, July 10, 2024 5:39 PM shveta malik <[email protected]> wrote:\r\n> \r\n> On Wed, Jul 3, 2024 at 8:31 AM Zhijie Hou (Fujitsu) <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Wednesday, June 26, 2024 10:58 AM Zhijie Hou (Fujitsu)\r\n> <[email protected]> wrote:\r\n> > >\r\n> >\r\n> > Hi,\r\n> >\r\n> > As suggested by Sawada-san in another thread[1].\r\n> >\r\n> > I am attaching the V4 patch set which tracks the delete_differ\r\n> > conflict in logical replication.\r\n> >\r\n> > delete_differ means that the replicated DELETE is deleting a row that\r\n> > was modified by a different origin.\r\n> >\r\n> \r\n> Thanks for the patch. I am still in process of review but please find few\r\n> comments:\r\n\r\nThanks for the comments!\r\n\r\n> 1) When I try to *insert* primary/unique key on pub, which already exists on\r\n> sub, conflict gets detected. But when I try to *update* primary/unique key to a\r\n> value on pub which already exists on sub, conflict is not detected. I get the\r\n> error:\r\n> \r\n> 2024-07-10 14:21:09.976 IST [647678] ERROR: duplicate key value violates\r\n> unique constraint \"t1_pkey\"\r\n> 2024-07-10 14:21:09.976 IST [647678] DETAIL: Key (pk)=(4) already exists.\r\n\r\nYes, I think the detection of this conflict is not added with the\r\nintention to control the size of the patch in the first version.\r\n\r\n> \r\n> This is because such conflict detection needs detection of constraint violation\r\n> using the *new value* rather than *existing* value during UPDATE. INSERT\r\n> conflict detection takes care of this case i.e. 
the columns of incoming row are\r\n> considered as new values and it tries to see if all unique indexes are okay to\r\n> digest such new values (all incoming columns) but update's logic is different.\r\n> It searches based on oldTuple *only* and thus above detection is missing.\r\n\r\nI think the logic is the same if we want to detect the unique violation\r\nfor UDPATE, we need to check if the new value of the UPDATE violates any\r\nunique constraints same as the detection of insert_exists (e.g. check\r\nthe conflict around ExecInsertIndexTuples())\r\n\r\n> \r\n> Shall we support such detection? If not, is it worth docuementing?\r\n\r\nI am personally OK to support this detection. And\r\nI think it's already documented that we only detect unique violation for\r\ninsert which mean update conflict is not detected.\r\n\r\n> 2)\r\n> Another case which might confuse user:\r\n> \r\n> CREATE TABLE t1 (pk integer primary key, val1 integer, val2 integer);\r\n> \r\n> On PUB: insert into t1 values(1,10,10); insert into t1 values(2,20,20);\r\n> \r\n> On SUB: update t1 set pk=3 where pk=2;\r\n> \r\n> Data on PUB: {1,10,10}, {2,20,20}\r\n> Data on SUB: {1,10,10}, {3,20,20}\r\n> \r\n> Now on PUB: update t1 set val1=200 where val1=20;\r\n> \r\n> On Sub, I get this:\r\n> 2024-07-10 14:44:00.160 IST [648287] LOG: conflict update_missing detected\r\n> on relation \"public.t1\"\r\n> 2024-07-10 14:44:00.160 IST [648287] DETAIL: Did not find the row to be\r\n> updated.\r\n> 2024-07-10 14:44:00.160 IST [648287] CONTEXT: processing remote data for\r\n> replication origin \"pg_16389\" during message type \"UPDATE\" for replication\r\n> target relation \"public.t1\" in transaction 760, finished at 0/156D658\r\n> \r\n> To user, it could be quite confusing, as val1=20 exists on sub but still he gets\r\n> update_missing conflict and the 'DETAIL' is not sufficient to give the clarity. I\r\n> think on HEAD as well (have not tested), we will get same behavior i.e. update\r\n> will be ignored as we make search based on RI (pk in this case). So we are not\r\n> worsening the situation, but now since we are detecting conflict, is it possible\r\n> to give better details in 'DETAIL' section indicating what is actually missing?\r\n\r\nI think It's doable to report the row value that cannot be found in the local\r\nrelation, but the concern is the potential risk of exposing some\r\nsensitive data in the log. 
This may be OK, as we are already reporting the\r\nkey value for constraints violation, so if others also agree, we can add\r\nthe row value in the DETAIL as well.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n", "msg_date": "Thu, 11 Jul 2024 02:17:17 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Jul 11, 2024 at 7:47 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Wednesday, July 10, 2024 5:39 PM shveta malik <[email protected]> wrote:\n> >\n> > On Wed, Jul 3, 2024 at 8:31 AM Zhijie Hou (Fujitsu) <[email protected]>\n> > wrote:\n> > >\n> > > On Wednesday, June 26, 2024 10:58 AM Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> > > >\n> > >\n> > > Hi,\n> > >\n> > > As suggested by Sawada-san in another thread[1].\n> > >\n> > > I am attaching the V4 patch set which tracks the delete_differ\n> > > conflict in logical replication.\n> > >\n> > > delete_differ means that the replicated DELETE is deleting a row that\n> > > was modified by a different origin.\n> > >\n> >\n> > Thanks for the patch. I am still in process of review but please find few\n> > comments:\n>\n> Thanks for the comments!\n>\n> > 1) When I try to *insert* primary/unique key on pub, which already exists on\n> > sub, conflict gets detected. But when I try to *update* primary/unique key to a\n> > value on pub which already exists on sub, conflict is not detected. I get the\n> > error:\n> >\n> > 2024-07-10 14:21:09.976 IST [647678] ERROR: duplicate key value violates\n> > unique constraint \"t1_pkey\"\n> > 2024-07-10 14:21:09.976 IST [647678] DETAIL: Key (pk)=(4) already exists.\n>\n> Yes, I think the detection of this conflict is not added with the\n> intention to control the size of the patch in the first version.\n>\n> >\n> > This is because such conflict detection needs detection of constraint violation\n> > using the *new value* rather than *existing* value during UPDATE. INSERT\n> > conflict detection takes care of this case i.e. the columns of incoming row are\n> > considered as new values and it tries to see if all unique indexes are okay to\n> > digest such new values (all incoming columns) but update's logic is different.\n> > It searches based on oldTuple *only* and thus above detection is missing.\n>\n> I think the logic is the same if we want to detect the unique violation\n> for UDPATE, we need to check if the new value of the UPDATE violates any\n> unique constraints same as the detection of insert_exists (e.g. check\n> the conflict around ExecInsertIndexTuples())\n>\n> >\n> > Shall we support such detection? If not, is it worth docuementing?\n>\n> I am personally OK to support this detection.\n\n+1. 
I think it should not be a complex or too big change.\n\n> And\n> I think it's already documented that we only detect unique violation for\n> insert which mean update conflict is not detected.\n>\n> > 2)\n> > Another case which might confuse user:\n> >\n> > CREATE TABLE t1 (pk integer primary key, val1 integer, val2 integer);\n> >\n> > On PUB: insert into t1 values(1,10,10); insert into t1 values(2,20,20);\n> >\n> > On SUB: update t1 set pk=3 where pk=2;\n> >\n> > Data on PUB: {1,10,10}, {2,20,20}\n> > Data on SUB: {1,10,10}, {3,20,20}\n> >\n> > Now on PUB: update t1 set val1=200 where val1=20;\n> >\n> > On Sub, I get this:\n> > 2024-07-10 14:44:00.160 IST [648287] LOG: conflict update_missing detected\n> > on relation \"public.t1\"\n> > 2024-07-10 14:44:00.160 IST [648287] DETAIL: Did not find the row to be\n> > updated.\n> > 2024-07-10 14:44:00.160 IST [648287] CONTEXT: processing remote data for\n> > replication origin \"pg_16389\" during message type \"UPDATE\" for replication\n> > target relation \"public.t1\" in transaction 760, finished at 0/156D658\n> >\n> > To user, it could be quite confusing, as val1=20 exists on sub but still he gets\n> > update_missing conflict and the 'DETAIL' is not sufficient to give the clarity. I\n> > think on HEAD as well (have not tested), we will get same behavior i.e. update\n> > will be ignored as we make search based on RI (pk in this case). So we are not\n> > worsening the situation, but now since we are detecting conflict, is it possible\n> > to give better details in 'DETAIL' section indicating what is actually missing?\n>\n> I think It's doable to report the row value that cannot be found in the local\n> relation, but the concern is the potential risk of exposing some\n> sensitive data in the log. This may be OK, as we are already reporting the\n> key value for constraints violation, so if others also agree, we can add\n> the row value in the DETAIL as well.\n\nOkay, let's see what others say. JFYI, the same situation holds valid\nfor delete_missing case.\n\nI have one concern about how we deal with conflicts. As for\ninsert_exists, we keep on erroring out while raising conflict, until\nit is manually resolved:\nERROR: conflict insert_exists detected\n\nBut for other cases, we just log conflict and either skip or apply the\noperation. I\nLOG: conflict update_differ detected\nDETAIL: Updating a row that was modified by a different origin\n\nI know that it is no different than HEAD. But now since we are logging\nconflicts explicitly, we should call out default behavior on each\nconflict. I see some incomplete and scattered info in '31.5.\nConflicts' section saying that:\n \"When replicating UPDATE or DELETE operations, missing data will not\nproduce a conflict and such operations will simply be skipped.\"\n(lets say it as pt a)\n\nAlso some more info in a later section saying (pt b):\n:A conflict will produce an error and will stop the replication; it\nmust be resolved manually by the user.\"\n\nMy suggestions:\n1) in point a above, shall we have:\nmissing data or differing data (i.e. 
somehow reword to accommodate\nupdate_differ and delete_differ cases)\n\n2) Now since we have a section explaining conflicts detected and\nlogged with detect_conflict=true, shall we mention default behaviour\nwith each?\n\ninsert_exists: error will be raised until resolved manually.\nupdate_differ: update will be applied\nupdate_missing: update will be skipped\ndelete_missing: delete will be skipped\ndelete_differ: delete will be applied.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 11 Jul 2024 10:33:02 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Jul 10, 2024 at 3:09 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Jul 3, 2024 at 8:31 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > On Wednesday, June 26, 2024 10:58 AM Zhijie Hou (Fujitsu) <[email protected]> wrote:\n> > >\n> >\n> > Hi,\n> >\n> > As suggested by Sawada-san in another thread[1].\n> >\n> > I am attaching the V4 patch set which tracks the delete_differ\n> > conflict in logical replication.\n> >\n> > delete_differ means that the replicated DELETE is deleting a row\n> > that was modified by a different origin.\n> >\n\nThanks for the patch. please find few comments for patch002:\n\n1)\nCommit msg says: The conflicts will be tracked only when\ntrack_conflict option of the subscription is enabled.\n\ntrack_conflict --> detect_conflict\n\n2)\nmonitoring.sgml: Below are my suggestions, please change if you feel apt.\n\n2a) insert_exists_count : Number of times inserting a row that\nviolates a NOT DEFERRABLE unique constraint while applying changes.\nSuggestion: Number of times a row insertion violated a NOT DEFERRABLE\nunique constraint while applying changes.\n\n2b) update_differ_count : Number of times updating a row that was\npreviously modified by another source while applying changes.\nSuggestion: Number of times update was performed on a row that was\npreviously modified by another source while applying changes.\n\n2c) delete_differ_count: Number of times deleting a row that was\npreviously modified by another source while applying changes.\nSuggestion: Number of times delete was performed on a row that was\npreviously modified by another source while applying changes.\n\n2d) To be consistent, we can change 'is not found' to 'was not found'\nin update_missing_count , delete_missing_count cases as well.\n\n\n3)\ncreate_subscription.sgml has:\nWhen conflict detection is enabled, additional logging is triggered\nand the conflict statistics are collected in the following scenarios:\n\n--Can we rephrase a little and link pg_stat_subscription_stats\nstructure here for reference.\n\n4)\nIIUC, conflict_count array (in pgstat.h) maps directly to ConflictType\nenum. So if the order of entries ever changes in this enum, without\nchanging it in pg_stat_subscription_stats and pg_proc, we may get\nwrong values under each column when querying\npg_stat_subscription_stats. 
If so, then perhaps it is good to add a\ncomment atop ConflictType that if someone changes this order, order in\nother files too needs to be changed.\n\n5)\nconflict.h:CONFLICT_NUM_TYPES\n\n--Shall the macro be CONFLICT_TYPES_NUM instead?\n\n6)\npgstatsfuncs.c\n\n-----\n/* conflict count */\nfor (int i = 0; i < CONFLICT_NUM_TYPES; i++)\nvalues[3 + i] = Int64GetDatum(subentry->conflict_count[i]);\n\n/* stats_reset */\nif (subentry->stat_reset_timestamp == 0)\nnulls[8] = true;\nelse\nvalues[8] = TimestampTzGetDatum(subentry->stat_reset_timestamp);\n-----\n\nAfter setting values for [3+i], we abruptly had [8]. I think it will\nbe better to use i++ to increment values' index. And at the end, we\ncan check if it reached 'PG_STAT_GET_SUBSCRIPTION_STATS_COLS'.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 11 Jul 2024 14:36:13 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thursday, July 11, 2024 1:03 PM shveta malik <[email protected]> wrote:\r\n\r\nHi,\r\n\r\nThanks for the comments!\r\n\r\n> \r\n> I have one concern about how we deal with conflicts. As for insert_exists, we\r\n> keep on erroring out while raising conflict, until it is manually resolved:\r\n> ERROR: conflict insert_exists detected\r\n> \r\n> But for other cases, we just log conflict and either skip or apply the operation. I\r\n> LOG: conflict update_differ detected\r\n> DETAIL: Updating a row that was modified by a different origin\r\n> \r\n> I know that it is no different than HEAD. But now since we are logging conflicts\r\n> explicitly, we should call out default behavior on each conflict. I see some\r\n> incomplete and scattered info in '31.5.\r\n> Conflicts' section saying that:\r\n> \"When replicating UPDATE or DELETE operations, missing data will not\r\n> produce a conflict and such operations will simply be skipped.\"\r\n> (lets say it as pt a)\r\n> \r\n> Also some more info in a later section saying (pt b):\r\n> :A conflict will produce an error and will stop the replication; it must be resolved\r\n> manually by the user.\"\r\n> \r\n> My suggestions:\r\n> 1) in point a above, shall we have:\r\n> missing data or differing data (i.e. somehow reword to accommodate\r\n> update_differ and delete_differ cases)\r\n\r\nI am not sure if rewording existing words is better. I feel adding a link to\r\nlet user refer to the detect_conflict section for the all the\r\nconflicts is sufficient, so did like that.\r\n\r\n>\r\n> 2)\r\n> monitoring.sgml: Below are my suggestions, please change if you feel apt.\r\n> \r\n> 2a) insert_exists_count : Number of times inserting a row that violates a NOT\r\n> DEFERRABLE unique constraint while applying changes. Suggestion: Number of\r\n> times a row insertion violated a NOT DEFERRABLE unique constraint while\r\n> applying changes.\r\n> \r\n> 2b) update_differ_count : Number of times updating a row that was previously\r\n> modified by another source while applying changes. Suggestion: Number of times\r\n> update was performed on a row that was previously modified by another source\r\n> while applying changes.\r\n> \r\n> 2c) delete_differ_count: Number of times deleting a row that was previously\r\n> modified by another source while applying changes. 
Suggestion: Number of times\r\n> delete was performed on a row that was previously modified by another source\r\n> while applying changes.\r\n\r\nI am a bit unsure which one is better, so I didn't change in this version.\r\n\r\n> \r\n> 5)\r\n> conflict.h:CONFLICT_NUM_TYPES\r\n> \r\n> --Shall the macro be CONFLICT_TYPES_NUM instead?\r\n\r\nI think the current style followed existing ones(e.g. IOOP_NUM_TYPES,\r\nBACKEND_NUM_TYPES, IOOBJECT_NUM_TYPES ...), so I didn't change this.\r\n\r\nAttach the V5 patch set which changed the following:\r\n1. addressed shveta's comments which are not mentioned above.\r\n2. support update_exists conflict which indicates\r\nthat the updated value of a row violates the unique constraint.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Thu, 18 Jul 2024 02:22:16 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Jul 18, 2024 at 7:52 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Thursday, July 11, 2024 1:03 PM shveta malik <[email protected]> wrote:\n>\n> Hi,\n>\n> Thanks for the comments!\n>\n> >\n> > I have one concern about how we deal with conflicts. As for insert_exists, we\n> > keep on erroring out while raising conflict, until it is manually resolved:\n> > ERROR: conflict insert_exists detected\n> >\n> > But for other cases, we just log conflict and either skip or apply the operation. I\n> > LOG: conflict update_differ detected\n> > DETAIL: Updating a row that was modified by a different origin\n> >\n> > I know that it is no different than HEAD. But now since we are logging conflicts\n> > explicitly, we should call out default behavior on each conflict. I see some\n> > incomplete and scattered info in '31.5.\n> > Conflicts' section saying that:\n> > \"When replicating UPDATE or DELETE operations, missing data will not\n> > produce a conflict and such operations will simply be skipped.\"\n> > (lets say it as pt a)\n> >\n> > Also some more info in a later section saying (pt b):\n> > :A conflict will produce an error and will stop the replication; it must be resolved\n> > manually by the user.\"\n> >\n> > My suggestions:\n> > 1) in point a above, shall we have:\n> > missing data or differing data (i.e. somehow reword to accommodate\n> > update_differ and delete_differ cases)\n>\n> I am not sure if rewording existing words is better. I feel adding a link to\n> let user refer to the detect_conflict section for the all the\n> conflicts is sufficient, so did like that.\n\nAgree, it looks better with detect_conflict link.\n\n> >\n> > 2)\n> > monitoring.sgml: Below are my suggestions, please change if you feel apt.\n> >\n> > 2a) insert_exists_count : Number of times inserting a row that violates a NOT\n> > DEFERRABLE unique constraint while applying changes. Suggestion: Number of\n> > times a row insertion violated a NOT DEFERRABLE unique constraint while\n> > applying changes.\n> >\n> > 2b) update_differ_count : Number of times updating a row that was previously\n> > modified by another source while applying changes. Suggestion: Number of times\n> > update was performed on a row that was previously modified by another source\n> > while applying changes.\n> >\n> > 2c) delete_differ_count: Number of times deleting a row that was previously\n> > modified by another source while applying changes. 
Suggestion: Number of times\n> > delete was performed on a row that was previously modified by another source\n> > while applying changes.\n>\n> I am a bit unsure which one is better, so I didn't change in this version.\n\nI still feel the wording is bit unclear/incomplete Also to be\nconsistent with previous fields (see sync_error_count:Number of times\nan error occurred during the initial table synchronization), we should\nat-least have it in past tense. Another way of writing could be:\n\n'Number of times inserting a row violated a NOT DEFERRABLE unique\nconstraint while applying changes.' and likewise for each conflict\nfield.\n\n\n> >\n> > 5)\n> > conflict.h:CONFLICT_NUM_TYPES\n> >\n> > --Shall the macro be CONFLICT_TYPES_NUM instead?\n>\n> I think the current style followed existing ones(e.g. IOOP_NUM_TYPES,\n> BACKEND_NUM_TYPES, IOOBJECT_NUM_TYPES ...), so I didn't change this.\n>\n> Attach the V5 patch set which changed the following:\n> 1. addressed shveta's comments which are not mentioned above.\n> 2. support update_exists conflict which indicates\n> that the updated value of a row violates the unique constraint.\n\nThanks for making the changes.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 18 Jul 2024 11:20:22 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Jul 18, 2024 at 7:52 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Attach the V5 patch set which changed the following.\n\nThanks for the patch. Tested that previous reported issues are fixed.\nPlease have a look at below scenario, I was expecting it to raise\n'update_differ' but it raised both 'update_differ' and 'delete_differ'\ntogether:\n\n-------------------------\nPub:\ncreate table tab (a int not null, b int primary key);\ncreate publication pub1 for table tab;\n\nSub (partitioned table):\ncreate table tab (a int not null, b int primary key) partition by\nrange (b);\ncreate table tab_1 partition of tab for values from (minvalue) to\n(100);\ncreate table tab_2 partition of tab for values from (100) to\n(maxvalue);\ncreate subscription sub1 connection '.......' publication pub1 WITH\n(detect_conflict=true);\n\nPub - insert into tab values (1,1);\nSub - update tab set b=1000 where a=1;\nPub - update tab set b=1000 where a=1; -->update_missing detected\ncorrectly as b=1 will not be found on sub.\nPub - update tab set b=1 where b=1000; -->update_differ expected, but\nit gives both update_differ and delete_differ.\n-------------------------\n\nFew trivial comments:\n\n1)\nCommit msg:\nFor insert_exists conflict, the log can include origin and commit\ntimestamp details of the conflicting key with track_commit_timestamp\nenabled.\n\n--Please add update_exists as well.\n\n2)\nexecReplication.c:\nReturn false if there is no conflict and *conflictslot is set to NULL.\n\n--This gives a feeling that this function will return false if both\nthe conditions are true. But instead first one is the condition, while\nthe second is action. Better to rephrase to:\n\nReturns false if there is no conflict. Sets *conflictslot to NULL in\nsuch a case.\nOr\nSets *conflictslot to NULL and returns false in case of no conflict.\n\n3)\nFindConflictTuple() shares some code parts with\nRelationFindReplTupleByIndex() and RelationFindReplTupleSeq() for\nchecking status in 'res'. 
Is it worth making a function to be used in\nall three.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 19 Jul 2024 14:06:34 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Jul 19, 2024 at 2:06 PM shveta malik <[email protected]> wrote:\n>\n> On Thu, Jul 18, 2024 at 7:52 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Attach the V5 patch set which changed the following.\n>\n\nPlease find last batch of comments on v5:\n\npatch001:\n1)\ncreate subscription sub1 ... (detect_conflict=true);\nI think it will be good to give WARNING here indicating that\ndetect_conflict is enabled but track_commit_timestamp is disabled and\nthus few conflicts detection may not work. (Rephrase as apt)\n\n2)\n013_partition.pl: Since we have added update_differ testcase here, we\nshall add delete_differ as well. And in some file where appropriate,\nwe shall add update_exists test as well.\n\n3)\n013_partition.pl (#799,802):\nFor update_missing and delete_missing, we have log verification format\nas 'qr/conflict delete_missing/update_missing detected on relation '.\nBut for update_differ, we do not capture \"conflict update_differ\ndetected on relation ...\". We capture only the DETAIL.\nI think we should be consistent and capture conflict name here as well.\n\n\npatch002:\n\n4)\npg_stat_get_subscription_stats():\n\n---------\n/* conflict count */\nfor (int nconflict = 0; nconflict < CONFLICT_NUM_TYPES; nconflict++)\nvalues[i + nconflict] = Int64GetDatum(subentry->conflict_count[nconflict]);\n\ni += CONFLICT_NUM_TYPES;\n---------\n\nCan't we do values[i++] here as well (instead of values[i +\nnconflict])? Then we don't need to do 'i += CONFLICT_NUM_TYPES'.\n\n5)\n026_stats.pl:\nWherever we are checking this: 'Apply and Sync errors are > 0 and\nreset timestamp is NULL', we need to check update_exssts_count as well\nalong with other counts.\n\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 22 Jul 2024 14:33:07 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Jul 18, 2024 at 7:52 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Attach the V5 patch set which changed the following:\n>\n\nTested v5-0001 patch, and it fails to detect the update_exists\nconflict for a setup where Pub has a non-partitioned table and Sub has\nthe same table partitioned.\nBelow is a testcase showcasing the issue:\n\nSetup:\nPub:\ncreate table tab (a int not null, b int not null);\nalter table tab add constraint tab_pk primary key (a,b);\n\nSub:\ncreate table tab (a int not null, b int not null) partition by range (b);\nalter table tab add constraint tab_pk primary key (a,b);\nCREATE TABLE tab_1 PARTITION OF tab FOR VALUES FROM (MINVALUE) TO (100);\nCREATE TABLE tab_2 PARTITION OF tab FOR VALUES FROM (101) TO (MAXVALUE);\n\nTest:\nPub: insert into tab values (1,1);\nSub: insert into tab values (2,1);\nPub: update tab set a=2 where a=1; --> After this update on Pub,\n'update_exists' is expected on Sub, but it fails to detect the\nconflict and logs the key violation error -\n\nERROR: duplicate key value violates unique constraint \"tab_1_pkey\"\nDETAIL: Key (a, b)=(2, 1) already exists.\n\nThanks,\nNisha\n\n\n", "msg_date": "Wed, 24 Jul 2024 10:20:24 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection 
and logging in logical replication" }, { "msg_contents": "On Monday, July 22, 2024 5:03 PM shveta malik <[email protected]> wrote:\r\n> \r\n> On Fri, Jul 19, 2024 at 2:06 PM shveta malik <[email protected]> wrote:\r\n> >\r\n> > On Thu, Jul 18, 2024 at 7:52 AM Zhijie Hou (Fujitsu)\r\n> > <[email protected]> wrote:\r\n> > >\r\n> > > Attach the V5 patch set which changed the following.\r\n> >\r\n> \r\n> Please find last batch of comments on v5:\r\n\r\nThanks Shveta and Nisha for giving comments!\r\n\r\n> \r\n> \r\n> 2)\r\n> 013_partition.pl: Since we have added update_differ testcase here, we shall\r\n> add delete_differ as well. \r\n\r\nI didn't add tests for delete_differ in partition test, because I think the main\r\ncodes and functionality of delete_differ have been tested in 030_origin.pl.\r\nThe test for update_differ is needed because the patch adds new codes in\r\npartition code path to report this conflict.\r\n\r\nHere is the V6 patch set which addressed Shveta and Nisha's comments\r\nin [1][2][3][4].\r\n\r\n[1] https://www.postgresql.org/message-id/CAJpy0uDWdw2W-S8boFU0KOcZjw0%2BsFFgLrHYrr1TROtrcTPZMg%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/CAJpy0uDGJXdVCGoaRHP-5G0pL0zhuZaRJSqxOxs%3DCNsSwc%2BSJQ%40mail.gmail.com\r\n[3] https://www.postgresql.org/message-id/CAJpy0uC%2B1puapWdOnAMSS%3DQUp_1jj3GfAeivE0JRWbpqrUy%3Dug%40mail.gmail.com\r\n[4] https://www.postgresql.org/message-id/CABdArM6%2BN1Xy_%2BtK%2Bu-H%3DsCB%2B92rAUh8qH6GDsB%2B1naKzgGKzQ%40mail.gmail.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Thu, 25 Jul 2024 06:34:08 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Jul 25, 2024 at 12:04 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is the V6 patch set which addressed Shveta and Nisha's comments\n> in [1][2][3][4].\n>\n\nDo we need an option detect_conflict for logging conflicts? The\npossible reason to include such an option is to avoid any overhead\nduring apply due to conflict detection. IIUC, to detect some of the\nconflicts like update_differ and delete_differ, we would need to fetch\ncommit_ts information which could be costly but we do that only when\nGUC track_commit_timestamp is enabled which would anyway have overhead\non its own. Can we do performance testing to see how much additional\noverhead we have due to fetching commit_ts information during conflict\ndetection?\n\nThe other time we need to enquire commit_ts is to log the conflict\ndetection information which is an ERROR path, so performance shouldn't\nmatter in this case.\n\nIn general, it would be good to enable conflict detection/logging by\ndefault but if it has overhead then we can consider adding this new\noption. Anyway, adding an option could be a separate patch (at least\nfor review), let the first patch be the core code of conflict\ndetection and logging.\n\nminor cosmetic comments:\n1.\n+static void\n+check_conflict_detection(void)\n+{\n+ if (!track_commit_timestamp)\n+ ereport(WARNING,\n+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"conflict detection could be incomplete due to disabled\ntrack_commit_timestamp\"),\n+ errdetail(\"Conflicts update_differ and delete_differ cannot be\ndetected, and the origin and commit timestamp for the local row will\nnot be logged.\"));\n+}\n\nThe errdetail string is too long. 
It would be better to split it into\nmultiple rows.\n\n2.\n-\n+static void check_conflict_detection(void);\n\nSpurious line removal.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 25 Jul 2024 16:12:15 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Jul 11, 2024 at 7:47 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Wednesday, July 10, 2024 5:39 PM shveta malik <[email protected]> wrote:\n> >\n\n> > 2)\n> > Another case which might confuse user:\n> >\n> > CREATE TABLE t1 (pk integer primary key, val1 integer, val2 integer);\n> >\n> > On PUB: insert into t1 values(1,10,10); insert into t1 values(2,20,20);\n> >\n> > On SUB: update t1 set pk=3 where pk=2;\n> >\n> > Data on PUB: {1,10,10}, {2,20,20}\n> > Data on SUB: {1,10,10}, {3,20,20}\n> >\n> > Now on PUB: update t1 set val1=200 where val1=20;\n> >\n> > On Sub, I get this:\n> > 2024-07-10 14:44:00.160 IST [648287] LOG: conflict update_missing detected\n> > on relation \"public.t1\"\n> > 2024-07-10 14:44:00.160 IST [648287] DETAIL: Did not find the row to be\n> > updated.\n> > 2024-07-10 14:44:00.160 IST [648287] CONTEXT: processing remote data for\n> > replication origin \"pg_16389\" during message type \"UPDATE\" for replication\n> > target relation \"public.t1\" in transaction 760, finished at 0/156D658\n> >\n> > To user, it could be quite confusing, as val1=20 exists on sub but still he gets\n> > update_missing conflict and the 'DETAIL' is not sufficient to give the clarity. I\n> > think on HEAD as well (have not tested), we will get same behavior i.e. update\n> > will be ignored as we make search based on RI (pk in this case). So we are not\n> > worsening the situation, but now since we are detecting conflict, is it possible\n> > to give better details in 'DETAIL' section indicating what is actually missing?\n>\n> I think It's doable to report the row value that cannot be found in the local\n> relation, but the concern is the potential risk of exposing some\n> sensitive data in the log. This may be OK, as we are already reporting the\n> key value for constraints violation, so if others also agree, we can add\n> the row value in the DETAIL as well.\n\nThis is still awaiting some feedback. 
I feel it will be good to add\nsome pk value at-least in DETAIL section, like we add for other\nconflict types.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 26 Jul 2024 09:39:22 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Jul 25, 2024 at 12:04 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Monday, July 22, 2024 5:03 PM shveta malik <[email protected]> wrote:\n> >\n> > On Fri, Jul 19, 2024 at 2:06 PM shveta malik <[email protected]> wrote:\n> > >\n> > > On Thu, Jul 18, 2024 at 7:52 AM Zhijie Hou (Fujitsu)\n> > > <[email protected]> wrote:\n> > > >\n> > > > Attach the V5 patch set which changed the following.\n> > >\n> >\n> > Please find last batch of comments on v5:\n>\n> Thanks Shveta and Nisha for giving comments!\n>\n> >\n> >\n> > 2)\n> > 013_partition.pl: Since we have added update_differ testcase here, we shall\n> > add delete_differ as well.\n>\n> I didn't add tests for delete_differ in partition test, because I think the main\n> codes and functionality of delete_differ have been tested in 030_origin.pl.\n> The test for update_differ is needed because the patch adds new codes in\n> partition code path to report this conflict.\n>\n> Here is the V6 patch set which addressed Shveta and Nisha's comments\n> in [1][2][3][4].\n\nThanks for addressing the comments.\n\n> [1] https://www.postgresql.org/message-id/CAJpy0uDWdw2W-S8boFU0KOcZjw0%2BsFFgLrHYrr1TROtrcTPZMg%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CAJpy0uDGJXdVCGoaRHP-5G0pL0zhuZaRJSqxOxs%3DCNsSwc%2BSJQ%40mail.gmail.com\n> [3] https://www.postgresql.org/message-id/CAJpy0uC%2B1puapWdOnAMSS%3DQUp_1jj3GfAeivE0JRWbpqrUy%3Dug%40mail.gmail.com\n> [4] https://www.postgresql.org/message-id/CABdArM6%2BN1Xy_%2BtK%2Bu-H%3DsCB%2B92rAUh8qH6GDsB%2B1naKzgGKzQ%40mail.gmail.com\n\nI was re-testing all the issues reported so far. I think the issue\nreported in [4] above is not fixed yet.\n\nPlease find a few more comments:\n\npatch001:\n\n1)\n030_origin.pl:\nI feel tests added in this file may fail. Since there are 3 nodes here\nand if the actual order of replication is not as per the expected\norder by your test, it will fail.\n\nExample:\n---------\n$node_B->safe_psql('postgres', \"DELETE FROM tab;\");\n$node_A->safe_psql('postgres', \"INSERT INTO tab VALUES (33);\");\n\n# The delete should remove the row on node B that was inserted by node A.\n$node_C->safe_psql('postgres', \"DELETE FROM tab WHERE a = 33;\");\n\n$node_B->wait_for_log(\nqr/conflict delete_differ detected..);\n---------\n\nThe third line assumes Node A's change is replicated to Node B already\nbefore Node C's change reaches NodeB, but it may not be true. 
Should\nwe do wait_for_catchup and have a verification step that Node A data\nis replicated to Node B before we execute Node C query?\nSame for the rest of the tests.\n\n2) 013_partition.pl:\n---------\n$logfile = slurp_file($node_subscriber1->logfile(), $log_location);\nok( $logfile =~\n qr/Updating a row that was modified by a different origin [0-9]+ in\ntransaction [0-9]+ at .*/,\n'updating a tuple that was modified by a different origin');\n---------\n\nTo be consistent, here as well, we can have 'conflict update_differ\ndetected on relation ....'\n\n\npatch002:\n3) monitoring.sgml:\n'Number of times that the updated value of a row violates a NOT\nDEFERRABLE unique constraint while applying changes.'\n\nTo be consistent, we can change: 'violates' --> 'violated'\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 26 Jul 2024 11:54:45 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Jul 26, 2024 at 9:39 AM shveta malik <[email protected]> wrote:\n>\n> On Thu, Jul 11, 2024 at 7:47 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > On Wednesday, July 10, 2024 5:39 PM shveta malik <[email protected]> wrote:\n> > >\n>\n> > > 2)\n> > > Another case which might confuse user:\n> > >\n> > > CREATE TABLE t1 (pk integer primary key, val1 integer, val2 integer);\n> > >\n> > > On PUB: insert into t1 values(1,10,10); insert into t1 values(2,20,20);\n> > >\n> > > On SUB: update t1 set pk=3 where pk=2;\n> > >\n> > > Data on PUB: {1,10,10}, {2,20,20}\n> > > Data on SUB: {1,10,10}, {3,20,20}\n> > >\n> > > Now on PUB: update t1 set val1=200 where val1=20;\n> > >\n> > > On Sub, I get this:\n> > > 2024-07-10 14:44:00.160 IST [648287] LOG: conflict update_missing detected\n> > > on relation \"public.t1\"\n> > > 2024-07-10 14:44:00.160 IST [648287] DETAIL: Did not find the row to be\n> > > updated.\n> > > 2024-07-10 14:44:00.160 IST [648287] CONTEXT: processing remote data for\n> > > replication origin \"pg_16389\" during message type \"UPDATE\" for replication\n> > > target relation \"public.t1\" in transaction 760, finished at 0/156D658\n> > >\n> > > To user, it could be quite confusing, as val1=20 exists on sub but still he gets\n> > > update_missing conflict and the 'DETAIL' is not sufficient to give the clarity. I\n> > > think on HEAD as well (have not tested), we will get same behavior i.e. update\n> > > will be ignored as we make search based on RI (pk in this case). So we are not\n> > > worsening the situation, but now since we are detecting conflict, is it possible\n> > > to give better details in 'DETAIL' section indicating what is actually missing?\n> >\n> > I think It's doable to report the row value that cannot be found in the local\n> > relation, but the concern is the potential risk of exposing some\n> > sensitive data in the log. This may be OK, as we are already reporting the\n> > key value for constraints violation, so if others also agree, we can add\n> > the row value in the DETAIL as well.\n>\n> This is still awaiting some feedback. I feel it will be good to add\n> some pk value at-least in DETAIL section, like we add for other\n> conflict types.\n>\n\nI agree that displaying pk where applicable should be okay as we\ndisplay it at other places but the same won't be possible when we do\nsequence scan to fetch the required tuple. 
So, the message will be\ndifferent in that case, right?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 26 Jul 2024 11:56:03 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Jul 25, 2024 at 12:04 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n> Here is the V6 patch set which addressed Shveta and Nisha's comments\n> in [1][2][3][4].\n\nThanks for the patch.\nI tested the v6-0001 patch with partition table scenarios. Please\nreview the following scenario where Pub updates a tuple, causing it to\nmove from one partition to another on Sub.\n\nSetup:\nPub:\n create table tab (a int not null, b int not null);\n alter table tab add constraint tab_pk primary key (a,b);\nSub:\n create table tab (a int not null, b int not null) partition by range (b);\n alter table tab add constraint tab_pk primary key (a,b);\n create table tab_1 partition of tab FOR values from (MINVALUE) TO (100);\n create table tab_2 partition of tab FOR values from (101) TO (MAXVALUE);\n\nTest:\n Pub: insert into tab values (1,1);\n Sub: update tab set a=1 where a=1; > just to make it Sub's origin\n Sub: insert into tab values (1,101);\n Pub: update b=101 where b=1; --> Both 'update_differ' and\n'insert_exists' are detected.\n\nFor non-partitioned tables, a similar update results in\n'update_differ' and 'update_exists' conflicts. After detecting\n'update_differ', the apply worker proceeds to apply the remote update\nand if a tuple with the updated key already exists, it raises\n'update_exists'.\nThis same behavior is expected for partitioned tables too.\n\nThanks,\nNisha\n\n\n", "msg_date": "Fri, 26 Jul 2024 15:03:44 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Jul 26, 2024 at 3:03 PM Nisha Moond <[email protected]> wrote:\n>\n> On Thu, Jul 25, 2024 at 12:04 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> > Here is the V6 patch set which addressed Shveta and Nisha's comments\n> > in [1][2][3][4].\n>\n> Thanks for the patch.\n> I tested the v6-0001 patch with partition table scenarios. Please\n> review the following scenario where Pub updates a tuple, causing it to\n> move from one partition to another on Sub.\n>\n> Setup:\n> Pub:\n> create table tab (a int not null, b int not null);\n> alter table tab add constraint tab_pk primary key (a,b);\n> Sub:\n> create table tab (a int not null, b int not null) partition by range (b);\n> alter table tab add constraint tab_pk primary key (a,b);\n> create table tab_1 partition of tab FOR values from (MINVALUE) TO (100);\n> create table tab_2 partition of tab FOR values from (101) TO (MAXVALUE);\n>\n> Test:\n> Pub: insert into tab values (1,1);\n> Sub: update tab set a=1 where a=1; > just to make it Sub's origin\n> Sub: insert into tab values (1,101);\n> Pub: update b=101 where b=1; --> Both 'update_differ' and\n> 'insert_exists' are detected.\n>\n> For non-partitioned tables, a similar update results in\n> 'update_differ' and 'update_exists' conflicts. After detecting\n> 'update_differ', the apply worker proceeds to apply the remote update\n> and if a tuple with the updated key already exists, it raises\n> 'update_exists'.\n> This same behavior is expected for partitioned tables too.\n\nGood catch. 
Yes, from the user's perspective, an update_* conflict\nshould be raised when performing an update operation. But internally\nsince we are deleting from one partition and inserting to another, we\nare hitting insert_exist. To convert this insert_exist to udpate_exist\nconflict, perhaps we need to change insert-operation to\nupdate-operation as the default resolver is 'always apply update' in\ncase of update_differ. But not sure how much complexity it will add to\n the code. If it makes the code too complex, I think we can retain the\nexisting behaviour but document this multi-partition case. And in the\nresolver patch, we can handle the resolution of insert_exists by\nconverting it to update. Thoughts?\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 26 Jul 2024 15:37:24 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Jul 26, 2024 at 3:37 PM shveta malik <[email protected]> wrote:\n>\n> On Fri, Jul 26, 2024 at 3:03 PM Nisha Moond <[email protected]> wrote:\n> >\n> > On Thu, Jul 25, 2024 at 12:04 PM Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> > > Here is the V6 patch set which addressed Shveta and Nisha's comments\n> > > in [1][2][3][4].\n> >\n> > Thanks for the patch.\n> > I tested the v6-0001 patch with partition table scenarios. Please\n> > review the following scenario where Pub updates a tuple, causing it to\n> > move from one partition to another on Sub.\n> >\n> > Setup:\n> > Pub:\n> > create table tab (a int not null, b int not null);\n> > alter table tab add constraint tab_pk primary key (a,b);\n> > Sub:\n> > create table tab (a int not null, b int not null) partition by range (b);\n> > alter table tab add constraint tab_pk primary key (a,b);\n> > create table tab_1 partition of tab FOR values from (MINVALUE) TO (100);\n> > create table tab_2 partition of tab FOR values from (101) TO (MAXVALUE);\n> >\n> > Test:\n> > Pub: insert into tab values (1,1);\n> > Sub: update tab set a=1 where a=1; > just to make it Sub's origin\n> > Sub: insert into tab values (1,101);\n> > Pub: update b=101 where b=1; --> Both 'update_differ' and\n> > 'insert_exists' are detected.\n> >\n> > For non-partitioned tables, a similar update results in\n> > 'update_differ' and 'update_exists' conflicts. After detecting\n> > 'update_differ', the apply worker proceeds to apply the remote update\n> > and if a tuple with the updated key already exists, it raises\n> > 'update_exists'.\n> > This same behavior is expected for partitioned tables too.\n>\n> Good catch. Yes, from the user's perspective, an update_* conflict\n> should be raised when performing an update operation. But internally\n> since we are deleting from one partition and inserting to another, we\n> are hitting insert_exist. To convert this insert_exist to udpate_exist\n> conflict, perhaps we need to change insert-operation to\n> update-operation as the default resolver is 'always apply update' in\n> case of update_differ.\n>\n\nBut we already document that behind the scenes such an update is a\nDELETE+INSERT operation [1]. Also, all the privilege checks or before\nrow triggers are of type insert, so, I think it is okay to consider\nthis as insert_exists conflict and document it. Later, resolver should\nalso fire for insert_exists conflict.\n\nOne more thing we need to consider is whether we should LOG or ERROR\nfor update/delete_differ conflicts. 
If we LOG as the patch is doing\nthen we are intentionally overwriting the row when the user may not\nexpect it. OTOH, without a patch anyway we are overwriting, so there\nis an argument that logging by default is what the user will expect.\nWhat do you think?\n\n[1] - https://www.postgresql.org/docs/devel/sql-update.html (See ...\nBehind the scenes, the row movement is actually a DELETE and INSERT\noperation.)\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 26 Jul 2024 15:55:51 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Jul 26, 2024 at 3:56 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Jul 26, 2024 at 3:37 PM shveta malik <[email protected]> wrote:\n> >\n> > On Fri, Jul 26, 2024 at 3:03 PM Nisha Moond <[email protected]> wrote:\n> > >\n> > > On Thu, Jul 25, 2024 at 12:04 PM Zhijie Hou (Fujitsu)\n> > > <[email protected]> wrote:\n> > > > Here is the V6 patch set which addressed Shveta and Nisha's comments\n> > > > in [1][2][3][4].\n> > >\n> > > Thanks for the patch.\n> > > I tested the v6-0001 patch with partition table scenarios. Please\n> > > review the following scenario where Pub updates a tuple, causing it to\n> > > move from one partition to another on Sub.\n> > >\n> > > Setup:\n> > > Pub:\n> > > create table tab (a int not null, b int not null);\n> > > alter table tab add constraint tab_pk primary key (a,b);\n> > > Sub:\n> > > create table tab (a int not null, b int not null) partition by range (b);\n> > > alter table tab add constraint tab_pk primary key (a,b);\n> > > create table tab_1 partition of tab FOR values from (MINVALUE) TO (100);\n> > > create table tab_2 partition of tab FOR values from (101) TO (MAXVALUE);\n> > >\n> > > Test:\n> > > Pub: insert into tab values (1,1);\n> > > Sub: update tab set a=1 where a=1; > just to make it Sub's origin\n> > > Sub: insert into tab values (1,101);\n> > > Pub: update b=101 where b=1; --> Both 'update_differ' and\n> > > 'insert_exists' are detected.\n> > >\n> > > For non-partitioned tables, a similar update results in\n> > > 'update_differ' and 'update_exists' conflicts. After detecting\n> > > 'update_differ', the apply worker proceeds to apply the remote update\n> > > and if a tuple with the updated key already exists, it raises\n> > > 'update_exists'.\n> > > This same behavior is expected for partitioned tables too.\n> >\n> > Good catch. Yes, from the user's perspective, an update_* conflict\n> > should be raised when performing an update operation. But internally\n> > since we are deleting from one partition and inserting to another, we\n> > are hitting insert_exist. To convert this insert_exist to udpate_exist\n> > conflict, perhaps we need to change insert-operation to\n> > update-operation as the default resolver is 'always apply update' in\n> > case of update_differ.\n> >\n>\n> But we already document that behind the scenes such an update is a\n> DELETE+INSERT operation [1]. Also, all the privilege checks or before\n> row triggers are of type insert, so, I think it is okay to consider\n> this as insert_exists conflict and document it. Later, resolver should\n> also fire for insert_exists conflict.\n\nThanks for the link. +1 on existing behaviour of insert_exists conflict.\n\n> One more thing we need to consider is whether we should LOG or ERROR\n> for update/delete_differ conflicts. 
If we LOG as the patch is doing\n> then we are intentionally overwriting the row when the user may not\n> expect it. OTOH, without a patch anyway we are overwriting, so there\n> is an argument that logging by default is what the user will expect.\n> What do you think?\n\nI was under the impression that in this patch we do not intend to\nchange behaviour of HEAD and thus only LOG the conflict wherever\npossible. And in the next patch of resolver, based on the user's input\nof error/skip/or resolve, we take the action. I still think it is\nbetter to stick to the said behaviour. Only if we commit the resolver\npatch in the same version where we commit the detection patch, then we\ncan take the risk of changing this default behaviour to 'always\nerror'. Otherwise users will be left with conflicts arising but no\nautomatic way to resolve those. But for users who really want their\napplication to error out, we can provide an additional GUC in this\npatch itself which changes the behaviour to 'always ERROR on\nconflict'. Thoughts?\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 26 Jul 2024 16:27:55 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Jul 25, 2024 at 4:12 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jul 25, 2024 at 12:04 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Here is the V6 patch set which addressed Shveta and Nisha's comments\n> > in [1][2][3][4].\n> >\n>\n> Do we need an option detect_conflict for logging conflicts? The\n> possible reason to include such an option is to avoid any overhead\n> during apply due to conflict detection. IIUC, to detect some of the\n> conflicts like update_differ and delete_differ, we would need to fetch\n> commit_ts information which could be costly but we do that only when\n> GUC track_commit_timestamp is enabled which would anyway have overhead\n> on its own. Can we do performance testing to see how much additional\n> overhead we have due to fetching commit_ts information during conflict\n> detection?\n>\n> The other time we need to enquire commit_ts is to log the conflict\n> detection information which is an ERROR path, so performance shouldn't\n> matter in this case.\n>\n> In general, it would be good to enable conflict detection/logging by\n> default but if it has overhead then we can consider adding this new\n> option. Anyway, adding an option could be a separate patch (at least\n> for review), let the first patch be the core code of conflict\n> detection and logging.\n>\n> minor cosmetic comments:\n> 1.\n> +static void\n> +check_conflict_detection(void)\n> +{\n> + if (!track_commit_timestamp)\n> + ereport(WARNING,\n> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"conflict detection could be incomplete due to disabled\n> track_commit_timestamp\"),\n> + errdetail(\"Conflicts update_differ and delete_differ cannot be\n> detected, and the origin and commit timestamp for the local row will\n> not be logged.\"));\n> +}\n>\n> The errdetail string is too long. 
It would be better to split it into\n> multiple rows.\n>\n> 2.\n> -\n> +static void check_conflict_detection(void);\n>\n> Spurious line removal.\n>\n\nA few more comments:\n1.\nFor duplicate key, the patch reports conflict as following:\nERROR: conflict insert_exists detected on relation \"public.t1\"\n2024-07-26 11:06:34.570 IST [27800] DETAIL: Key (c1)=(1) already\nexists in unique index \"t1_pkey\", which was modified by origin 1 in\ntransaction 770 at 2024-07-26 09:16:47.79805+05:30.\n2024-07-26 11:06:34.570 IST [27800] CONTEXT: processing remote data\nfor replication origin \"pg_16387\" during message type \"INSERT\" for\nreplication target relation \"public.t1\" in transaction 742, finished\nat 0/151A108\n\nIn detail, it is better to display the origin name instead of the\norigin id. This will be similar to what we do in CONTEXT information.\n\n2.\nif (resultRelInfo->ri_NumIndices > 0)\n recheckIndexes = ExecInsertIndexTuples(resultRelInfo,\n- slot, estate, false, false,\n- NULL, NIL, false);\n+ slot, estate, false,\n+ conflictindexes, &conflict,\n\nIt is better to use true/false for the bool parameter (something like\nconflictindexes ? true : false). That will make the code easier to\nfollow.\n\n3. The need for ReCheckConflictIndexes() is not clear from comments.\nCan you please add a few comments to explain this?\n\n4.\n- will simply be skipped.\n+ will simply be skipped. Please refer to <link\nlinkend=\"sql-createsubscription-params-with-detect-conflict\"><literal>detect_conflict</literal></link>\n+ for all the conflicts that will be logged when enabling\n<literal>detect_conflict</literal>.\n </para>\n\nIt would be easier to read the patch if you move <link .. to the next line.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 26 Jul 2024 17:03:41 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Jul 26, 2024 at 4:28 PM shveta malik <[email protected]> wrote:\n>\n> On Fri, Jul 26, 2024 at 3:56 PM Amit Kapila <[email protected]> wrote:\n> >\n>\n> > One more thing we need to consider is whether we should LOG or ERROR\n> > for update/delete_differ conflicts. If we LOG as the patch is doing\n> > then we are intentionally overwriting the row when the user may not\n> > expect it. OTOH, without a patch anyway we are overwriting, so there\n> > is an argument that logging by default is what the user will expect.\n> > What do you think?\n>\n> I was under the impression that in this patch we do not intend to\n> change behaviour of HEAD and thus only LOG the conflict wherever\n> possible.\n>\n\nEarlier, I thought it was good to keep LOGGING the conflict where\nthere is no chance of wrong data update but for cases where we can do\nsomething wrong, it is better to ERROR out. For example, for an\nupdate_differ case where the apply worker can overwrite the data from\na different origin, it is better to ERROR out. I thought this case was\ncomparable to an existing ERROR case like a unique constraint\nviolation. But I see your point as well that one might expect the\nexisting behavior where we are silently overwriting the different\norigin data. 
The one idea to address this concern is to suggest users\nset the detect_conflict subscription option as off but I guess that\nwould make this feature unusable for users who don't want to ERROR out\nfor different origin update cases.\n\n> And in the next patch of resolver, based on the user's input\n> of error/skip/or resolve, we take the action. I still think it is\n> better to stick to the said behaviour. Only if we commit the resolver\n> patch in the same version where we commit the detection patch, then we\n> can take the risk of changing this default behaviour to 'always\n> error'. Otherwise users will be left with conflicts arising but no\n> automatic way to resolve those. But for users who really want their\n> application to error out, we can provide an additional GUC in this\n> patch itself which changes the behaviour to 'always ERROR on\n> conflict'.\n>\n\nI don't see a need of GUC here, even if we want we can have a\nsubscription option such conflict_log_level. But users may want to\neither LOG or ERROR based on conflict type. For example, there won't\nbe any data inconsistency in two node replication for delete_missing\ncase as one is trying to delete already deleted data, so LOGGING such\na case should be sufficient whereas update_differ could lead to\ndifferent data on two nodes, so the user may want to ERROR out in such\na case.\n\nWe can keep the current behavior as default for the purpose of\nconflict detection but can have a separate patch to decide whether to\nLOG/ERROR based on conflict_type.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 29 Jul 2024 09:31:07 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Friday, July 26, 2024 7:34 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Thu, Jul 25, 2024 at 4:12 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > > \r\n> A few more comments:\r\n\r\nThanks for the comments.\r\n\r\n> 1.\r\n> For duplicate key, the patch reports conflict as following:\r\n> ERROR: conflict insert_exists detected on relation \"public.t1\"\r\n> 2024-07-26 11:06:34.570 IST [27800] DETAIL: Key (c1)=(1) already exists in\r\n> unique index \"t1_pkey\", which was modified by origin 1 in transaction 770 at\r\n> 2024-07-26 09:16:47.79805+05:30.\r\n> 2024-07-26 11:06:34.570 IST [27800] CONTEXT: processing remote data for\r\n> replication origin \"pg_16387\" during message type \"INSERT\" for replication\r\n> target relation \"public.t1\" in transaction 742, finished at 0/151A108\r\n> \r\n> In detail, it is better to display the origin name instead of the origin id. This will\r\n> be similar to what we do in CONTEXT information.\r\n\r\n\r\nAgreed. Before modifying this, I'd like to confirm the message style in the\r\ncases where origin id may not have a corresponding origin name (e.g., if the\r\ndata was modified locally (id = 0), or if the origin that modified the data has\r\nbeen dropped). I thought of two styles:\r\n\r\n1)\r\n- for local change: \"xxx was modified by a different origin \\\"(local)\\\" in transaction 123 at 2024..\"\r\n- for dropped origin: \"xxx was modified by a different origin \\\"(unknown)\\\" in transaction 123 at 2024..\"\r\n\r\nOne issue for this style is that user may create an origin with the same name\r\nhere (e.g. 
\"(local)\" and \"(unknown)\").\r\n\r\n2) \r\n- for local change: \"xxx was modified locally in transaction 123 at 2024..\"\r\n- for dropped origin: \"xxx was modified by an unknown different origin 1234 in transaction 123 at 2024..\"\r\n\r\nThis style slightly modifies the message format. I personally feel 2) maybe\r\nbetter but am OK for other options as well.\r\n\r\nWhat do you think ?\r\n\r\nHere is the V7 patch set that addressed all the comments so far[1][2][3].\r\nThe subscription option part is splitted into the separate patch 0002 and\r\nwe will decide whether to drop this patch after finishing the perf tests.\r\nNote that I didn't display the tuple value in the message as the discussion\r\nis still ongoing[4].\r\n\r\n[1] https://www.postgresql.org/message-id/CAJpy0uDhCnzvNHVYwse%3DKxmOB%3DqtXr6twnDP9xqdzT-oU0OWEQ%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/CAA4eK1%2BCJXKK34zJdEJZf2Mpn5QyMyaZiPDSNS6%3Dkvewr0-pdg%40mail.gmail.com\r\n[3] https://www.postgresql.org/message-id/CAA4eK1Lmu%3DoVySfGjxEUykCT3FPnL1YFDHKr1ZMwFy7WUgfc6g%40mail.gmail.com\r\n[4] https://www.postgresql.org/message-id/CAA4eK1%2BaK4MLxbfLtp%3DEV5bpvJozKhxGDRS6T9q8sz_s%2BLK3vw%40mail.gmail.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Mon, 29 Jul 2024 06:14:30 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Jul 29, 2024 at 11:44 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Friday, July 26, 2024 7:34 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Jul 25, 2024 at 4:12 PM Amit Kapila <[email protected]>\n> > wrote:\n> > > >\n> > A few more comments:\n>\n> Thanks for the comments.\n>\n> > 1.\n> > For duplicate key, the patch reports conflict as following:\n> > ERROR: conflict insert_exists detected on relation \"public.t1\"\n> > 2024-07-26 11:06:34.570 IST [27800] DETAIL: Key (c1)=(1) already exists in\n> > unique index \"t1_pkey\", which was modified by origin 1 in transaction 770 at\n> > 2024-07-26 09:16:47.79805+05:30.\n> > 2024-07-26 11:06:34.570 IST [27800] CONTEXT: processing remote data for\n> > replication origin \"pg_16387\" during message type \"INSERT\" for replication\n> > target relation \"public.t1\" in transaction 742, finished at 0/151A108\n> >\n> > In detail, it is better to display the origin name instead of the origin id. This will\n> > be similar to what we do in CONTEXT information.\n>\n>\n> Agreed. Before modifying this, I'd like to confirm the message style in the\n> cases where origin id may not have a corresponding origin name (e.g., if the\n> data was modified locally (id = 0), or if the origin that modified the data has\n> been dropped). I thought of two styles:\n>\n> 1)\n> - for local change: \"xxx was modified by a different origin \\\"(local)\\\" in transaction 123 at 2024..\"\n> - for dropped origin: \"xxx was modified by a different origin \\\"(unknown)\\\" in transaction 123 at 2024..\"\n>\n> One issue for this style is that user may create an origin with the same name\n> here (e.g. 
\"(local)\" and \"(unknown)\").\n>\n> 2)\n> - for local change: \"xxx was modified locally in transaction 123 at 2024..\"\n>\n\nThis sounds good.\n\n> - for dropped origin: \"xxx was modified by an unknown different origin 1234 in transaction 123 at 2024..\"\n>\n\nFor this one, how about: \"xxx was modified by a non-existent origin in\ntransaction 123 at 2024..\"?\n\nAlso, in code please do write comments when each of these two can occur.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 29 Jul 2024 14:54:57 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Jul 29, 2024 at 11:44 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n\nI was going through v7-0001, and I have some initial comments.\n\n@@ -536,11 +542,9 @@ ExecCheckIndexConstraints(ResultRelInfo\n*resultRelInfo, TupleTableSlot *slot,\n ExprContext *econtext;\n Datum values[INDEX_MAX_KEYS];\n bool isnull[INDEX_MAX_KEYS];\n- ItemPointerData invalidItemPtr;\n bool checkedIndex = false;\n\n ItemPointerSetInvalid(conflictTid);\n- ItemPointerSetInvalid(&invalidItemPtr);\n\n /*\n * Get information from the result relation info structure.\n@@ -629,7 +633,7 @@ ExecCheckIndexConstraints(ResultRelInfo\n*resultRelInfo, TupleTableSlot *slot,\n\n satisfiesConstraint =\n check_exclusion_or_unique_constraint(heapRelation, indexRelation,\n- indexInfo, &invalidItemPtr,\n+ indexInfo, &slot->tts_tid,\n values, isnull, estate, false,\n CEOUC_WAIT, true,\n conflictTid);\n\nWhat is the purpose of this change? I mean\n'check_exclusion_or_unique_constraint' says that 'tupleid'\nshould be invalidItemPtr if the tuple is not yet inserted and\nExecCheckIndexConstraints is called by ExecInsert\nbefore inserting the tuple. So what is this change? Would this change\nExecInsert's behavior as well?\n\n----\n----\n\n+ReCheckConflictIndexes(ResultRelInfo *resultRelInfo, EState *estate,\n+ ConflictType type, List *recheckIndexes,\n+ TupleTableSlot *slot)\n+{\n+ /* Re-check all the unique indexes for potential conflicts */\n+ foreach_oid(uniqueidx, resultRelInfo->ri_onConflictArbiterIndexes)\n+ {\n+ TupleTableSlot *conflictslot;\n+\n+ if (list_member_oid(recheckIndexes, uniqueidx) &&\n+ FindConflictTuple(resultRelInfo, estate, uniqueidx, slot, &conflictslot))\n+ {\n+ RepOriginId origin;\n+ TimestampTz committs;\n+ TransactionId xmin;\n+\n+ GetTupleCommitTs(conflictslot, &xmin, &origin, &committs);\n+ ReportApplyConflict(ERROR, type, resultRelInfo->ri_RelationDesc, uniqueidx,\n+ xmin, origin, committs, conflictslot);\n+ }\n+ }\n+}\n and\n\n+ conflictindexes = resultRelInfo->ri_onConflictArbiterIndexes;\n+\n if (resultRelInfo->ri_NumIndices > 0)\n recheckIndexes = ExecInsertIndexTuples(resultRelInfo,\n- slot, estate, false, false,\n- NULL, NIL, false);\n+ slot, estate, false,\n+ conflictindexes ? true : false,\n+ &conflict,\n+ conflictindexes, false);\n+\n+ /*\n+ * Rechecks the conflict indexes to fetch the conflicting local tuple\n+ * and reports the conflict. We perform this check here, instead of\n+ * perform an additional index scan before the actual insertion and\n+ * reporting the conflict if any conflicting tuples are found. 
This is\n+ * to avoid the overhead of executing the extra scan for each INSERT\n+ * operation, even when no conflict arises, which could introduce\n+ * significant overhead to replication, particularly in cases where\n+ * conflicts are rare.\n+ */\n+ if (conflict)\n+ ReCheckConflictIndexes(resultRelInfo, estate, CT_INSERT_EXISTS,\n+ recheckIndexes, slot);\n\n\n This logic is confusing, first, you are calling\nExecInsertIndexTuples() with no duplicate error for the indexes\npresent in 'ri_onConflictArbiterIndexes' which means\n the indexes returned by the function must be a subset of\n'ri_onConflictArbiterIndexes' and later in ReCheckConflictIndexes()\nyou are again processing all the\n indexes of 'ri_onConflictArbiterIndexes' and checking if any of these\nis a subset of the indexes that is returned by\nExecInsertIndexTuples().\n\n Why are we doing that, I think we can directly use the\n'recheckIndexes' which is returned by ExecInsertIndexTuples(), and\nthose indexes are guaranteed to be a subset of\n ri_onConflictArbiterIndexes. No?\n\n ---\n ---\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jul 2024 16:29:11 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Jul 29, 2024 at 9:31 AM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Jul 26, 2024 at 4:28 PM shveta malik <[email protected]> wrote:\n> >\n> > On Fri, Jul 26, 2024 at 3:56 PM Amit Kapila <[email protected]> wrote:\n> > >\n> >\n> > > One more thing we need to consider is whether we should LOG or ERROR\n> > > for update/delete_differ conflicts. If we LOG as the patch is doing\n> > > then we are intentionally overwriting the row when the user may not\n> > > expect it. OTOH, without a patch anyway we are overwriting, so there\n> > > is an argument that logging by default is what the user will expect.\n> > > What do you think?\n> >\n> > I was under the impression that in this patch we do not intend to\n> > change behaviour of HEAD and thus only LOG the conflict wherever\n> > possible.\n> >\n>\n> Earlier, I thought it was good to keep LOGGING the conflict where\n> there is no chance of wrong data update but for cases where we can do\n> something wrong, it is better to ERROR out. For example, for an\n> update_differ case where the apply worker can overwrite the data from\n> a different origin, it is better to ERROR out. I thought this case was\n> comparable to an existing ERROR case like a unique constraint\n> violation. But I see your point as well that one might expect the\n> existing behavior where we are silently overwriting the different\n> origin data. The one idea to address this concern is to suggest users\n> set the detect_conflict subscription option as off but I guess that\n> would make this feature unusable for users who don't want to ERROR out\n> for different origin update cases.\n>\n> > And in the next patch of resolver, based on the user's input\n> > of error/skip/or resolve, we take the action. I still think it is\n> > better to stick to the said behaviour. Only if we commit the resolver\n> > patch in the same version where we commit the detection patch, then we\n> > can take the risk of changing this default behaviour to 'always\n> > error'. Otherwise users will be left with conflicts arising but no\n> > automatic way to resolve those. 
But for users who really want their\n> > application to error out, we can provide an additional GUC in this\n> > patch itself which changes the behaviour to 'always ERROR on\n> > conflict'.\n> >\n>\n> I don't see a need of GUC here, even if we want we can have a\n> subscription option such conflict_log_level. But users may want to\n> either LOG or ERROR based on conflict type. For example, there won't\n> be any data inconsistency in two node replication for delete_missing\n> case as one is trying to delete already deleted data, so LOGGING such\n> a case should be sufficient whereas update_differ could lead to\n> different data on two nodes, so the user may want to ERROR out in such\n> a case.\n>\n> We can keep the current behavior as default for the purpose of\n> conflict detection but can have a separate patch to decide whether to\n> LOG/ERROR based on conflict_type.\n\n+1 on the idea of giving an option to the user to choose either ERROR\nor LOG for each conflict type separately.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 29 Jul 2024 17:25:22 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Monday, July 29, 2024 6:59 PM Dilip Kumar <[email protected]> wrote:\r\n> \r\n> On Mon, Jul 29, 2024 at 11:44 AM Zhijie Hou (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> \r\n> I was going through v7-0001, and I have some initial comments.\r\n\r\nThanks for the comments !\r\n\r\n> \r\n> @@ -536,11 +542,9 @@ ExecCheckIndexConstraints(ResultRelInfo\r\n> *resultRelInfo, TupleTableSlot *slot,\r\n> ExprContext *econtext;\r\n> Datum values[INDEX_MAX_KEYS];\r\n> bool isnull[INDEX_MAX_KEYS];\r\n> - ItemPointerData invalidItemPtr;\r\n> bool checkedIndex = false;\r\n> \r\n> ItemPointerSetInvalid(conflictTid);\r\n> - ItemPointerSetInvalid(&invalidItemPtr);\r\n> \r\n> /*\r\n> * Get information from the result relation info structure.\r\n> @@ -629,7 +633,7 @@ ExecCheckIndexConstraints(ResultRelInfo\r\n> *resultRelInfo, TupleTableSlot *slot,\r\n> \r\n> satisfiesConstraint =\r\n> check_exclusion_or_unique_constraint(heapRelation, indexRelation,\r\n> - indexInfo, &invalidItemPtr,\r\n> + indexInfo, &slot->tts_tid,\r\n> values, isnull, estate, false,\r\n> CEOUC_WAIT, true,\r\n> conflictTid);\r\n> \r\n> What is the purpose of this change? I mean\r\n> 'check_exclusion_or_unique_constraint' says that 'tupleid'\r\n> should be invalidItemPtr if the tuple is not yet inserted and\r\n> ExecCheckIndexConstraints is called by ExecInsert before inserting the tuple.\r\n> So what is this change?\r\n\r\nBecause the function ExecCheckIndexConstraints() is now invoked after inserting\r\na tuple (in the patch). 
So, we need to ignore the newly inserted tuple when\r\nchecking conflict in check_exclusion_or_unique_constraint().\r\n\r\n> Would this change ExecInsert's behavior as well?\r\n\r\nThanks for pointing it out, I will check and reply.\r\n\r\n> \r\n> ----\r\n> ----\r\n> \r\n> +ReCheckConflictIndexes(ResultRelInfo *resultRelInfo, EState *estate,\r\n> + ConflictType type, List *recheckIndexes,\r\n> + TupleTableSlot *slot)\r\n> +{\r\n> + /* Re-check all the unique indexes for potential conflicts */\r\n> +foreach_oid(uniqueidx, resultRelInfo->ri_onConflictArbiterIndexes)\r\n> + {\r\n> + TupleTableSlot *conflictslot;\r\n> +\r\n> + if (list_member_oid(recheckIndexes, uniqueidx) &&\r\n> + FindConflictTuple(resultRelInfo, estate, uniqueidx, slot,\r\n> + &conflictslot)) { RepOriginId origin; TimestampTz committs;\r\n> + TransactionId xmin;\r\n> +\r\n> + GetTupleCommitTs(conflictslot, &xmin, &origin, &committs);\r\n> +ReportApplyConflict(ERROR, type, resultRelInfo->ri_RelationDesc,\r\n> +uniqueidx, xmin, origin, committs, conflictslot); } } }\r\n> and\r\n> \r\n> + conflictindexes = resultRelInfo->ri_onConflictArbiterIndexes;\r\n> +\r\n> if (resultRelInfo->ri_NumIndices > 0)\r\n> recheckIndexes = ExecInsertIndexTuples(resultRelInfo,\r\n> - slot, estate, false, false,\r\n> - NULL, NIL, false);\r\n> + slot, estate, false,\r\n> + conflictindexes ? true : false,\r\n> + &conflict,\r\n> + conflictindexes, false);\r\n> +\r\n> + /*\r\n> + * Rechecks the conflict indexes to fetch the conflicting local tuple\r\n> + * and reports the conflict. We perform this check here, instead of\r\n> + * perform an additional index scan before the actual insertion and\r\n> + * reporting the conflict if any conflicting tuples are found. This is\r\n> + * to avoid the overhead of executing the extra scan for each INSERT\r\n> + * operation, even when no conflict arises, which could introduce\r\n> + * significant overhead to replication, particularly in cases where\r\n> + * conflicts are rare.\r\n> + */\r\n> + if (conflict)\r\n> + ReCheckConflictIndexes(resultRelInfo, estate, CT_INSERT_EXISTS,\r\n> + recheckIndexes, slot);\r\n> \r\n> \r\n> This logic is confusing, first, you are calling\r\n> ExecInsertIndexTuples() with no duplicate error for the indexes\r\n> present in 'ri_onConflictArbiterIndexes' which means\r\n> the indexes returned by the function must be a subset of\r\n> 'ri_onConflictArbiterIndexes' and later in ReCheckConflictIndexes()\r\n> you are again processing all the\r\n> indexes of 'ri_onConflictArbiterIndexes' and checking if any of these\r\n> is a subset of the indexes that is returned by\r\n> ExecInsertIndexTuples().\r\n\r\nI think that's not always true. The indexes returned by the function *may not*\r\nbe a subset of 'ri_onConflictArbiterIndexes'. Based on the comments atop of the\r\nExecInsertIndexTuples, it returns a list of index OIDs for any unique or\r\nexclusion constraints that are deferred, and in addition to that, it will\r\ninclude the indexes in 'arbiterIndexes' if noDupErr == true.\r\n\r\n> \r\n> Why are we doing that, I think we can directly use the\r\n> 'recheckIndexes' which is returned by ExecInsertIndexTuples(), and\r\n> those indexes are guaranteed to be a subset of\r\n> ri_onConflictArbiterIndexes. 
No?\r\n\r\nBased on above, we need to filter the deferred indexes or exclusion constraints\r\nin the 'ri_onConflictArbiterIndexes'.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n", "msg_date": "Mon, 29 Jul 2024 12:14:48 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "> On Monday, July 29, 2024 6:59 PM Dilip Kumar <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Mon, Jul 29, 2024 at 11:44 AM Zhijie Hou (Fujitsu)\r\n> > <[email protected]> wrote:\r\n> > >\r\n> >\r\n> > I was going through v7-0001, and I have some initial comments.\r\n> \r\n> Thanks for the comments !\r\n> \r\n> >\r\n> > @@ -536,11 +542,9 @@ ExecCheckIndexConstraints(ResultRelInfo\r\n> > *resultRelInfo, TupleTableSlot *slot,\r\n> > ExprContext *econtext;\r\n> > Datum values[INDEX_MAX_KEYS];\r\n> > bool isnull[INDEX_MAX_KEYS];\r\n> > - ItemPointerData invalidItemPtr;\r\n> > bool checkedIndex = false;\r\n> >\r\n> > ItemPointerSetInvalid(conflictTid);\r\n> > - ItemPointerSetInvalid(&invalidItemPtr);\r\n> >\r\n> > /*\r\n> > * Get information from the result relation info structure.\r\n> > @@ -629,7 +633,7 @@ ExecCheckIndexConstraints(ResultRelInfo\r\n> > *resultRelInfo, TupleTableSlot *slot,\r\n> >\r\n> > satisfiesConstraint =\r\n> > check_exclusion_or_unique_constraint(heapRelation, indexRelation,\r\n> > - indexInfo, &invalidItemPtr,\r\n> > + indexInfo, &slot->tts_tid,\r\n> > values, isnull, estate, false,\r\n> > CEOUC_WAIT, true,\r\n> > conflictTid);\r\n> >\r\n> > What is the purpose of this change? I mean\r\n> > 'check_exclusion_or_unique_constraint' says that 'tupleid'\r\n> > should be invalidItemPtr if the tuple is not yet inserted and\r\n> > ExecCheckIndexConstraints is called by ExecInsert before inserting the\r\n> tuple.\r\n> > So what is this change?\r\n> \r\n> Because the function ExecCheckIndexConstraints() is now invoked after\r\n> inserting a tuple (in the patch). So, we need to ignore the newly inserted tuple\r\n> when checking conflict in check_exclusion_or_unique_constraint().\r\n> \r\n> > Would this change ExecInsert's behavior as well?\r\n> \r\n> Thanks for pointing it out, I will check and reply.\r\n\r\nAfter checking, I think it may affect ExecInsert's behavior if the slot passed\r\nto ExecCheckIndexConstraints() comes from other tables (e.g. when executing\r\nINSERT INTO SELECT FROM othertbl), because the slot->tts_tid points to a valid\r\nposition from another table in this case, which can cause the\r\ncheck_exclusion_or_unique_constraint to skip a tuple unexpectedly).\r\n\r\nI thought about two ideas to fix this: One is to reset the slot->tts_tid before\r\ncalling ExecCheckIndexConstraints() in ExecInsert(), but I feel a bit\r\nuncomfortable to this since it is touching existing logic. So, another idea is to\r\njust add a new parameter 'tupletid' in ExecCheckIndexConstraints(), then pass\r\ntupletid=InvalidOffsetNumber in when invoke the function in ExecInsert() and\r\npass a valid tupletid in the new code paths in the patch. The new\r\n'tupletid' will be passed to check_exclusion_or_unique_constraint to\r\nskip the target tuple. 
I feel the second one maybe better.\r\n\r\nWhat do you think ?\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n", "msg_date": "Tue, 30 Jul 2024 08:19:33 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Tue, Jul 30, 2024 at 1:49 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> > On Monday, July 29, 2024 6:59 PM Dilip Kumar <[email protected]>\n> > wrote:\n> > >\n> > > On Mon, Jul 29, 2024 at 11:44 AM Zhijie Hou (Fujitsu)\n> > > <[email protected]> wrote:\n> > > >\n> > >\n> > > I was going through v7-0001, and I have some initial comments.\n> >\n> > Thanks for the comments !\n> >\n> > >\n> > > @@ -536,11 +542,9 @@ ExecCheckIndexConstraints(ResultRelInfo\n> > > *resultRelInfo, TupleTableSlot *slot,\n> > > ExprContext *econtext;\n> > > Datum values[INDEX_MAX_KEYS];\n> > > bool isnull[INDEX_MAX_KEYS];\n> > > - ItemPointerData invalidItemPtr;\n> > > bool checkedIndex = false;\n> > >\n> > > ItemPointerSetInvalid(conflictTid);\n> > > - ItemPointerSetInvalid(&invalidItemPtr);\n> > >\n> > > /*\n> > > * Get information from the result relation info structure.\n> > > @@ -629,7 +633,7 @@ ExecCheckIndexConstraints(ResultRelInfo\n> > > *resultRelInfo, TupleTableSlot *slot,\n> > >\n> > > satisfiesConstraint =\n> > > check_exclusion_or_unique_constraint(heapRelation, indexRelation,\n> > > - indexInfo, &invalidItemPtr,\n> > > + indexInfo, &slot->tts_tid,\n> > > values, isnull, estate, false,\n> > > CEOUC_WAIT, true,\n> > > conflictTid);\n> > >\n> > > What is the purpose of this change? I mean\n> > > 'check_exclusion_or_unique_constraint' says that 'tupleid'\n> > > should be invalidItemPtr if the tuple is not yet inserted and\n> > > ExecCheckIndexConstraints is called by ExecInsert before inserting the\n> > tuple.\n> > > So what is this change?\n> >\n> > Because the function ExecCheckIndexConstraints() is now invoked after\n> > inserting a tuple (in the patch). So, we need to ignore the newly inserted tuple\n> > when checking conflict in check_exclusion_or_unique_constraint().\n> >\n> > > Would this change ExecInsert's behavior as well?\n> >\n> > Thanks for pointing it out, I will check and reply.\n>\n> After checking, I think it may affect ExecInsert's behavior if the slot passed\n> to ExecCheckIndexConstraints() comes from other tables (e.g. when executing\n> INSERT INTO SELECT FROM othertbl), because the slot->tts_tid points to a valid\n> position from another table in this case, which can cause the\n> check_exclusion_or_unique_constraint to skip a tuple unexpectedly).\n>\n> I thought about two ideas to fix this: One is to reset the slot->tts_tid before\n> calling ExecCheckIndexConstraints() in ExecInsert(), but I feel a bit\n> uncomfortable to this since it is touching existing logic. So, another idea is to\n> just add a new parameter 'tupletid' in ExecCheckIndexConstraints(), then pass\n> tupletid=InvalidOffsetNumber in when invoke the function in ExecInsert() and\n> pass a valid tupletid in the new code paths in the patch. The new\n> 'tupletid' will be passed to check_exclusion_or_unique_constraint to\n> skip the target tuple. 
I feel the second one maybe better.\n>\n> What do you think ?\n\nYes, the second approach seems good to me.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Jul 2024 14:11:40 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Jul 29, 2024 at 11:44 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is the V7 patch set that addressed all the comments so far[1][2][3].\n\nThanks for the patch, few comments:\n\n1)\nbuild_index_value_desc()\n /* Assume the index has been locked */\n indexDesc = index_open(indexoid, NoLock);\n\n-- Comment is not very informative. Can we explain in the header if\nthe caller is supposed to lock it?\n\n2)\napply_handle_delete_internal()\n\n--Do we need to check \"(!edata->mtstate || edata->mtstate->operation\n!= CMD_UPDATE)\" in the else part as well? Can there be a scenario\nwhere during update flow, it is trying to delete from a partition and\ncomes here, but till then that row is deleted already and we end up\nraising 'delete_missing' additionally instead of 'update_missing'\nalone?\n\n3)\nerrdetail_apply_conflict(): Bulid the index value string.\n-- Bulid->Build\n\n4)\npatch003: create_subscription.sgml\nthe conflict statistics are collected(displayed in the\n\n-- collected (displayed in the -->space before '(' is needed.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 30 Jul 2024 14:36:24 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Tuesday, July 30, 2024 5:06 PM shveta malik <[email protected]> wrote:\r\n> \r\n> On Mon, Jul 29, 2024 at 11:44 AM Zhijie Hou (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> > Here is the V7 patch set that addressed all the comments so far[1][2][3].\r\n> \r\n> Thanks for the patch, few comments:\r\n\r\nThanks for the comments !\r\n\r\n> \r\n> 2)\r\n> apply_handle_delete_internal()\r\n> \r\n> --Do we need to check \"(!edata->mtstate || edata->mtstate->operation !=\r\n> CMD_UPDATE)\" in the else part as well? Can there be a scenario where during\r\n> update flow, it is trying to delete from a partition and comes here, but till then\r\n> that row is deleted already and we end up raising 'delete_missing' additionally\r\n> instead of 'update_missing'\r\n> alone?\r\n\r\nI think this shouldn't happen because the row to be deleted should have been\r\nlocked before entering the apply_handle_delete_internal(). Actually, calling\r\napply_handle_delete_internal() for cross-partition update is a big buggy because the\r\nrow to be deleted has already been found in apply_handle_tuple_routing(), so we\r\ncould have avoid scanning the tuple again. I have posted another patch to fix\r\nthis issue in thread[1].\r\n\r\n\r\nHere is the V8 patch set. 
It includes the following changes:\r\n\r\n* Addressed the comments from Shveta.\r\n* Reported the origin name in the DETAIL instead of the origin id.\r\n* fixed the issue Dilip pointed[2].\r\n* fixed one old issue[3] Nisha pointed that I missed to fix in previous version.\r\n* Improved the document a bit.\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1JsNPzFE8dgFOm-Tfk_CDZyg1R3zuuQWkUnef-N-vTkoA%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/CAFiTN-tYdN63U%3Dd8V8rBfRtFmhZ%3DQQX7jEmj1cdWMe_NM%2B7%3DTQ%40mail.gmail.com\r\n[3] https://www.postgresql.org/message-id/CABdArM6%2BN1Xy_%2BtK%2Bu-H%3DsCB%2B92rAUh8qH6GDsB%2B1naKzgGKzQ%40mail.gmail.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 31 Jul 2024 02:10:24 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Jul 31, 2024 at 7:40 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> >\n> > 2)\n> > apply_handle_delete_internal()\n> >\n> > --Do we need to check \"(!edata->mtstate || edata->mtstate->operation !=\n> > CMD_UPDATE)\" in the else part as well? Can there be a scenario where during\n> > update flow, it is trying to delete from a partition and comes here, but till then\n> > that row is deleted already and we end up raising 'delete_missing' additionally\n> > instead of 'update_missing'\n> > alone?\n>\n> I think this shouldn't happen because the row to be deleted should have been\n> locked before entering the apply_handle_delete_internal(). Actually, calling\n> apply_handle_delete_internal() for cross-partition update is a big buggy because the\n> row to be deleted has already been found in apply_handle_tuple_routing(), so we\n> could have avoid scanning the tuple again. I have posted another patch to fix\n> this issue in thread[1].\n\nThanks for the details.\n\n>\n> Here is the V8 patch set. It includes the following changes:\n>\n\nThanks for the patch. I verified that all the bugs reported so far are\naddressed. Few trivial comments:\n\n1)\n029_on_error.pl:\n--I did not understand the intent of this change. The existing insert\nwould also have resulted in conflict (insert_exists) and we would have\nidentified and skipped that. Why change to UPDATE?\n\n $node_publisher->safe_psql(\n 'postgres',\n qq[\n BEGIN;\n-INSERT INTO tbl VALUES (1, NULL);\n+UPDATE tbl SET i = 2;\n PREPARE TRANSACTION 'gtx';\n COMMIT PREPARED 'gtx';\n ]);\n\n\n2)\nlogical-replication.sgml\n--In doc, shall we have 'delete_differ' first and then\n'delete_missing', similar to what we have for update (first\n'update_differ' and then 'update_missing')\n\n3)\nlogical-replication.sgml: \"For instance, the origin in the above log\nindicates that the existing row was modified by a local change.\"\n\n--This clarification about origin was required when we had 'origin 0'\nin 'DETAILS'. Now we have \"locally\":\n\"Key (c)=(1) already exists in unique index \"t_pkey\", which was\nmodified locally in transaction 740\".\n\nAnd thus shall we rephrase the concerned line ?\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 31 Jul 2024 11:05:36 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Jul 31, 2024 at 7:40 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is the V8 patch set. It includes the following changes:\n>\n\nA few more comments:\n1. 
I think in FindConflictTuple() the patch is locking the tuple so\nthat after finding a conflict if there is a concurrent delete, it can\nretry to find the tuple. If there is no concurrent delete then we can\nsuccessfully report the conflict. Is that correct? If so, it is better\nto explain this somewhere in the comments.\n\n2.\n* Note that this doesn't lock the values in any way, so it's\n * possible that a conflicting tuple is inserted immediately\n * after this returns. But this can be used for a pre-check\n * before insertion.\n..\nExecCheckIndexConstraints()\n\nThese comments indicate that this function can be used before\ninserting the tuple, however, this patch uses it after inserting the\ntuple as well. So, I think the comments should be updated accordingly.\n\n3.\n * For unique indexes, we usually don't want to add info to the IndexInfo for\n * checking uniqueness, since the B-Tree AM handles that directly. However,\n * in the case of speculative insertion, additional support is required.\n...\nBuildSpeculativeIndexInfo(){...}\n\nThis additional support is now required even for logical replication\nto detect conflicts. So the comments atop this function should reflect\nthe same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 31 Jul 2024 16:23:14 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, July 31, 2024 1:36 PM shveta malik <[email protected]> wrote:\r\n> \r\n> On Wed, Jul 31, 2024 at 7:40 AM Zhijie Hou (Fujitsu) \r\n> <[email protected]>\r\n> wrote:\r\n> >\r\n> > >\r\n> > > 2)\r\n> > > apply_handle_delete_internal()\r\n> > >\r\n> > > --Do we need to check \"(!edata->mtstate || \r\n> > > edata->mtstate->operation != CMD_UPDATE)\" in the else part as \r\n> > > well? Can there be a scenario where during update flow, it is \r\n> > > trying to delete from a partition and comes here, but till then \r\n> > > that row is deleted already and we end up raising 'delete_missing' additionally instead of 'update_missing'\r\n> > > alone?\r\n> >\r\n> > I think this shouldn't happen because the row to be deleted should \r\n> > have been locked before entering the apply_handle_delete_internal().\r\n> > Actually, calling\r\n> > apply_handle_delete_internal() for cross-partition update is a big \r\n> > buggy because the row to be deleted has already been found in \r\n> > apply_handle_tuple_routing(), so we could have avoid scanning the \r\n> > tuple again. I have posted another patch to fix this issue in thread[1].\r\n> \r\n> Thanks for the details.\r\n> \r\n> >\r\n> > Here is the V8 patch set. It includes the following changes:\r\n> >\r\n> \r\n> Thanks for the patch. I verified that all the bugs reported so far are addressed.\r\n> Few trivial comments:\r\n\r\nThanks for the comments!\r\n\r\n> \r\n> 1)\r\n> 029_on_error.pl:\r\n> --I did not understand the intent of this change. The existing insert \r\n> would also have resulted in conflict (insert_exists) and we would have \r\n> identified and skipped that. 
Why change to UPDATE?\r\n> \r\n> $node_publisher->safe_psql(\r\n> 'postgres',\r\n> qq[\r\n> BEGIN;\r\n> -INSERT INTO tbl VALUES (1, NULL);\r\n> +UPDATE tbl SET i = 2;\r\n> PREPARE TRANSACTION 'gtx';\r\n> COMMIT PREPARED 'gtx';\r\n> ]);\r\n> \r\n\r\nThe intention of this change is to cover the code path of update_exists.\r\nThe original test only tested the code of insert_exists.\r\n\r\n> \r\n> 2)\r\n> logical-replication.sgml\r\n> --In doc, shall we have 'delete_differ' first and then \r\n> 'delete_missing', similar to what we have for update (first \r\n> 'update_differ' and then 'update_missing')\r\n> \r\n> 3)\r\n> logical-replication.sgml: \"For instance, the origin in the above log \r\n> indicates that the existing row was modified by a local change.\"\r\n> \r\n> --This clarification about origin was required when we had 'origin 0'\r\n> in 'DETAILS'. Now we have \"locally\":\r\n> \"Key (c)=(1) already exists in unique index \"t_pkey\", which was \r\n> modified locally in transaction 740\".\r\n> \r\n> And thus shall we rephrase the concerned line ?\r\n\r\nFixed in the V9 patch set.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Thu, 1 Aug 2024 03:39:56 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, July 31, 2024 6:53 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Wed, Jul 31, 2024 at 7:40 AM Zhijie Hou (Fujitsu) <[email protected]>\r\n> wrote:\r\n> >\r\n> > Here is the V8 patch set. It includes the following changes:\r\n> >\r\n> \r\n> A few more comments:\r\n> 1. I think in FindConflictTuple() the patch is locking the tuple so that after\r\n> finding a conflict if there is a concurrent delete, it can retry to find the tuple. If\r\n> there is no concurrent delete then we can successfully report the conflict. Is\r\n> that correct? If so, it is better to explain this somewhere in the comments.\r\n> \r\n> 2.\r\n> * Note that this doesn't lock the values in any way, so it's\r\n> * possible that a conflicting tuple is inserted immediately\r\n> * after this returns. But this can be used for a pre-check\r\n> * before insertion.\r\n> ..\r\n> ExecCheckIndexConstraints()\r\n> \r\n> These comments indicate that this function can be used before inserting the\r\n> tuple, however, this patch uses it after inserting the tuple as well. So, I think the\r\n> comments should be updated accordingly.\r\n> \r\n> 3.\r\n> * For unique indexes, we usually don't want to add info to the IndexInfo for\r\n> * checking uniqueness, since the B-Tree AM handles that directly. However,\r\n> * in the case of speculative insertion, additional support is required.\r\n> ...\r\n> BuildSpeculativeIndexInfo(){...}\r\n> \r\n> This additional support is now required even for logical replication to detect\r\n> conflicts. 
So the comments atop this function should reflect the same.\r\n\r\nThanks for the comments.\r\n\r\nHere is the V9 patch set which addressed above and Shveta's comments.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Thu, 1 Aug 2024 03:40:09 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thursday, August 1, 2024 11:40 AM Zhijie Hou (Fujitsu) <[email protected]>\r\n> Here is the V9 patch set which addressed above and Shveta's comments.\r\n> \r\n\r\nThe patch conflict with a recent commit a67da49, so here is rebased V10 patch set. \r\n\r\nThanks to the commit a67da49, I have removed the special check for\r\ncross-partition update in apply_handle_delete_internal() because this function\r\nwill not be called in cross-update anymore.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Thu, 1 Aug 2024 06:09:41 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "Dear Hou,\r\n\r\nLet me contribute the great feature. I read only the 0001 patch and here are initial comments.\r\n\r\n01. logical-replication.sgml\r\n\r\ntrack_commit_timestamp must be specified only on the subscriber, but it is not clarified.\r\nCan you write down that?\r\n\r\n02. logical-replication.sgml\r\n\r\nI felt that the ordering of {exists, differ,missing} should be fixed, but not done.\r\nFor update \"differ\" is listerd after the \"missing\", but for delete, \"differ\"\r\nlocates before the \"missing\". The inconsistency exists on souce code as well.\r\n\r\n03. conflict.h\r\n\r\nThe copyright seems wrong. 2012 is not needed.\r\n\r\n04. general\r\n\r\nAccording to the documentation [1], there is another constraint \"exclude\", which\r\ncan cause another type of conflict. But this pattern cannot be logged in detail.\r\nI tested below workload as an example.\r\n\r\n=====\r\npublisher=# create table tab (a int, EXCLUDE (a WITH =));\r\npublisher=# create publication pub for all tables;\r\n\r\nsubscriber=# create table tab (a int, EXCLUDE (a WITH =));\r\nsubscriber=# create subscription sub...;\r\nsubscriber=# insert into tab values (1);\r\n\r\npublisher=# insert into tab values (1);\r\n\r\n-> Got conflict with below log lines:\r\n```\r\nERROR: conflicting key value violates exclusion constraint \"tab_a_excl\"\r\nDETAIL: Key (a)=(1) conflicts with existing key (a)=(1).\r\nCONTEXT: processing remote data for replication origin \"pg_16389\" during message type \"INSERT\"\r\nfor replication target relation \"public.tab\" in transaction 740, finished at 0/1543940\r\n```\r\n=====\r\n\r\nCan we support the type of conflict?\r\n\r\n[1]: https://www.postgresql.org/docs/devel/sql-createtable.html#SQL-CREATETABLE-EXCLUDE\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 1 Aug 2024 08:56:13 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Aug 1, 2024 at 2:26 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> 04. general\n>\n> According to the documentation [1], there is another constraint \"exclude\", which\n> can cause another type of conflict. 
But this pattern cannot be logged in detail.\n>\n\nAs per docs, \"exclusion constraints can specify constraints that are\nmore general than simple equality\", so I don't think it satisfies the\nkind of conflicts we are trying to LOG and then in the future patch\nallows automatic resolution for the same. For example, when we have\nlast_update_wins strategy, we will replace the rows with remote rows\nwhen the key column values match which shouldn't be true in general\nfor exclusion constraints. Similarly, we don't want to consider other\nconstraint violations like CHECK to consider as conflicts. We can\nalways extend the basic functionality for more conflicts if required\nbut let's go with reporting straight-forward stuff first.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 1 Aug 2024 17:23:16 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Aug 1, 2024 at 5:23 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 1, 2024 at 2:26 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > 04. general\n> >\n> > According to the documentation [1], there is another constraint \"exclude\", which\n> > can cause another type of conflict. But this pattern cannot be logged in detail.\n> >\n>\n> As per docs, \"exclusion constraints can specify constraints that are\n> more general than simple equality\", so I don't think it satisfies the\n> kind of conflicts we are trying to LOG and then in the future patch\n> allows automatic resolution for the same. For example, when we have\n> last_update_wins strategy, we will replace the rows with remote rows\n> when the key column values match which shouldn't be true in general\n> for exclusion constraints. Similarly, we don't want to consider other\n> constraint violations like CHECK to consider as conflicts. We can\n> always extend the basic functionality for more conflicts if required\n> but let's go with reporting straight-forward stuff first.\n>\n\nIt is better to document that exclusion constraints won't be\nsupported. We can even write a comment in the code and or commit\nmessage that we can extend it in the future.\n\n*\n+ * Return true if the commit timestamp data was found, false otherwise.\n+ */\n+bool\n+GetTupleCommitTs(TupleTableSlot *localslot, TransactionId *xmin,\n+ RepOriginId *localorigin, TimestampTz *localts)\n\nThis API returns both xmin and commit timestamp, so wouldn't it be\nbetter to name it as GetTupleTransactionInfo or something like that?\n\nI have made several changes in the attached top-up patch. These\ninclude changes in the comments, docs, function names, etc.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 2 Aug 2024 16:33:06 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "Performance tests done on the v8-0001 and v8-0002 patches, available at [1].\n\nThe purpose of the performance tests is to measure the impact on\nlogical replication with track_commit_timestamp enabled, as this\ninvolves fetching the commit_ts data to determine\ndelete_differ/update_differ conflicts.\n\nFortunately, we did not see any noticeable overhead from the new\ncommit_ts fetch and comparison logic. 
The only notable impact is\npotential overhead from logging conflicts if they occur frequently.\nTherefore, enabling conflict detection by default seems feasible, and\nintroducing a new detect_conflict option may not be necessary.\n\nPlease refer to the following for detailed test results:\n\nFor all the tests, created two server nodes, one publisher and one as\nsubscriber. Both the nodes are configured with below settings -\n wal_level = logical\n shared_buffers = 40GB\n max_worker_processes = 32\n max_parallel_maintenance_workers = 24\n max_parallel_workers = 32\n synchronous_commit = off\n checkpoint_timeout = 1d\n max_wal_size = 24GB\n min_wal_size = 15GB\n autovacuum = off\n~~~~\n\nTest 1: create conflicts on Sub using pgbench.\n----------------------------------------------------------------\nSetup:\n - Both publisher and subscriber have pgbench tables created as-\n pgbench -p $node1_port postgres -qis 1\n - At Sub, a subscription created for all the changes from Pub node.\n\nTest Run:\n - To test, ran pgbench for 15 minutes on both nodes simultaneously,\nwhich led to concurrent updates and update_differ conflicts on the\nSubscriber node.\n Command used to run pgbench on both nodes-\n ./pgbench postgres -p 8833 -c 10 -j 3 -T 300 -P 20\n\nResults:\nFor each case, note the “tps” and total time taken by the apply-worker\non Sub to apply the changes coming from Pub.\n\nCase1: track_commit_timestamp = off, detect_conflict = off\n Pub-tps = 9139.556405\n Sub-tps = 8456.787967\n Time of replicating all the changes: 19min 28s\nCase 2 : track_commit_timestamp = on, detect_conflict = on\n Pub-tps = 8833.016548\n Sub-tps = 8389.763739\n Time of replicating all the changes: 20min 20s\nCase3: track_commit_timestamp = on, detect_conflict = off\n Pub-tps = 8886.101726\n Sub-tps = 8374.508017\n Time of replicating all the changes: 19min 35s\nCase 4: track_commit_timestamp = off, detect_conflict = on\n Pub-tps = 8981.924596\n Sub-tps = 8411.120808\n Time of replicating all the changes: 19min 27s\n\n**The difference of TPS between each case is small. While I can see a\nslight increase of the replication time (about 5%), when enabling both\ntrack_commit_timestamp and detect_conflict.\n\nTest2: create conflict using a manual script\n----------------------------------------------------------------\n - To measure the precise time taken by the apply-worker in all cases,\ncreate a test with a table having 10 million rows.\n - To record the total time taken by the apply-worker, dump the\ncurrent time in the logfile for apply_handle_begin() and\napply_handle_commit().\n\nSetup:\nPub : has a table ‘perf’ with 10 million rows.\nSub : has the same table ‘perf’ with its own 10 million rows (inserted\nby 1000 different transactions). This table is subscribed for all\nchanges from Pub.\n\nTest Run:\nAt Pub: run UPDATE on the table ‘perf’ to update all its rows in a\nsingle transaction. 
(this will lead to update_differ conflict for all\nrows on Sub when enabled).\nAt Sub: record the time(from log file) taken by the apply-worker to\napply all updates coming from Pub.\n\nResults:\nBelow table shows the total time taken by the apply-worker\n(apply_handle_commit Time - apply_handle_begin Time ).\n(Two test runs for each of the four cases)\n\nCase1: track_commit_timestamp = off, detect_conflict = off\n Run1 - 2min 42sec 579ms\n Run2 - 2min 41sec 75ms\nCase 2 : track_commit_timestamp = on, detect_conflict = on\n Run1 - 6min 11sec 602ms\n Run2 - 6min 25sec 179ms\nCase3: track_commit_timestamp = on, detect_conflict = off\n Run1 - 2min 34sec 223ms\n Run2 - 2min 33sec 482ms\nCase 4: track_commit_timestamp = off, detect_conflict = on\n Run1 - 2min 35sec 276ms\n Run2 - 2min 38sec 745ms\n\n** In the case-2 when both track_commit_timestamp and detect_conflict\nare enabled, the time taken by the apply-worker is ~140% higher.\n\nTest3: Case when no conflict is detected.\n----------------------------------------------------------------\nTo measure the time taken by the apply-worker when there is no\nconflict detected. This test is to confirm if the time overhead in\nTest1-Case2 is due to the new function GetTupleCommitTs() which\nfetches the origin and timestamp information for each row in the table\nbefore applying the update.\n\nSetup:\n - The Publisher and Subscriber both have an empty table to start with.\n - At Sub, the table is subscribed for all changes from Pub.\n - At Pub: Insert 10 million rows and the same will be replicated to\nthe Sub table as well.\n\nTest Run:\nAt Pub: run an UPDATE on the table to update all rows in a single\ntransaction. (This will NOT hit the update_differ on Sub because now\nall the tuples have the Pub’s origin).\n\nResults:\nCase1: track_commit_timestamp = off, detect_conflict = off\n Run1 - 2min 39sec 261ms\n Run2 - 2min 30sec 95ms\nCase 2 : track_commit_timestamp = on, detect_conflict = on\n Run1 - 2min 38sec 985ms\n Run2 - 2min 46sec 624ms\nCase3: track_commit_timestamp = on, detect_conflict = off\n Run1 - 2min 59sec 887ms\n Run2 - 2min 34sec 336ms\nCase 4: track_commit_timestamp = off, detect_conflict = on\n Run1 - 2min 33sec 477min\n Run2 - 2min 37sec 677ms\n\nTest Summary -\n-- The duration for case-2 was reduced to 2-3 minutes, matching the\ntimes of the other cases.\n-- The test revealed that the overhead in case-2 was not due to\ncommit_ts fetching (GetTupleCommitTs).\n-- The additional action in case-2 was the error logging of all 10\nmillion update_differ conflicts.\n-- To confirm that the additional time was due to logging, I conducted\nanother test. I removed the \"ReportApplyConflict()\" call for\nupdate_differ in the code and re-ran test1-case2 (which initially took\n~6 minutes). Without conflict logging, the duration was reduced to\n\"2min 56sec 758 ms\".\n\nTest4 - Code Profiling\n----------------------------------------------------------------\nTo narrow down the cause of the time overhead in Test2-case2, did code\nprofiling patches. Used same setup and test script as Test2.\nThe overhead in (track_committs=on and detect_conflict=on) case is not\nintroduced by the commit timestamp fetching(e.g. 
GetTupleCommitTs).\nThe main overhead comes from the log reporting which happens when\napplying each change:\n\n|--16.57%--ReportApplyConflict\n |--13.17%--errfinish\n --11.53%--EmitErrorReport\n --11.41%--send_message_to_server_log ...\n ...\n...\n|--0.74%--GetTupleCommitTs\"\n\nThank you Hou-San for helping in Test1 and conducting Test4.\n\n[1] https://www.postgresql.org/message-id/OS0PR01MB57162919F1D6C55D82D4D89D94B12%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nThanks,\n\nNisha\n\n\n", "msg_date": "Fri, 2 Aug 2024 18:28:13 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Friday, July 26, 2024 2:26 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Fri, Jul 26, 2024 at 9:39 AM shveta malik <[email protected]> wrote:\r\n> >\r\n> > On Thu, Jul 11, 2024 at 7:47 AM Zhijie Hou (Fujitsu)\r\n> > <[email protected]> wrote:\r\n> > >\r\n> > > On Wednesday, July 10, 2024 5:39 PM shveta malik\r\n> <[email protected]> wrote:\r\n> > > >\r\n> >\r\n> > > > 2)\r\n> > > > Another case which might confuse user:\r\n> > > >\r\n> > > > CREATE TABLE t1 (pk integer primary key, val1 integer, val2\r\n> > > > integer);\r\n> > > >\r\n> > > > On PUB: insert into t1 values(1,10,10); insert into t1\r\n> > > > values(2,20,20);\r\n> > > >\r\n> > > > On SUB: update t1 set pk=3 where pk=2;\r\n> > > >\r\n> > > > Data on PUB: {1,10,10}, {2,20,20}\r\n> > > > Data on SUB: {1,10,10}, {3,20,20}\r\n> > > >\r\n> > > > Now on PUB: update t1 set val1=200 where val1=20;\r\n> > > >\r\n> > > > On Sub, I get this:\r\n> > > > 2024-07-10 14:44:00.160 IST [648287] LOG: conflict update_missing\r\n> > > > detected on relation \"public.t1\"\r\n> > > > 2024-07-10 14:44:00.160 IST [648287] DETAIL: Did not find the row\r\n> > > > to be updated.\r\n> > > > 2024-07-10 14:44:00.160 IST [648287] CONTEXT: processing remote\r\n> > > > data for replication origin \"pg_16389\" during message type\r\n> > > > \"UPDATE\" for replication target relation \"public.t1\" in\r\n> > > > transaction 760, finished at 0/156D658\r\n> > > >\r\n> > > > To user, it could be quite confusing, as val1=20 exists on sub but\r\n> > > > still he gets update_missing conflict and the 'DETAIL' is not\r\n> > > > sufficient to give the clarity. I think on HEAD as well (have not\r\n> > > > tested), we will get same behavior i.e. update will be ignored as\r\n> > > > we make search based on RI (pk in this case). So we are not\r\n> > > > worsening the situation, but now since we are detecting conflict, is it\r\n> possible to give better details in 'DETAIL' section indicating what is actually\r\n> missing?\r\n> > >\r\n> > > I think It's doable to report the row value that cannot be found in\r\n> > > the local relation, but the concern is the potential risk of\r\n> > > exposing some sensitive data in the log. This may be OK, as we are\r\n> > > already reporting the key value for constraints violation, so if\r\n> > > others also agree, we can add the row value in the DETAIL as well.\r\n> >\r\n> > This is still awaiting some feedback. I feel it will be good to add\r\n> > some pk value at-least in DETAIL section, like we add for other\r\n> > conflict types.\r\n> >\r\n> \r\n> I agree that displaying pk where applicable should be okay as we display it at\r\n> other places but the same won't be possible when we do sequence scan to\r\n> fetch the required tuple. 
So, the message will be different in that case, right?\r\n\r\nAfter some research, I think we can report the key values in DETAIL if the\r\napply worker uses any unique indexes to find the tuple to update/delete.\r\nOtherwise, we can try to output all column values in DETAIL if the current user\r\nof apply worker has SELECT access to these columns.\r\n\r\nThis is consistent with what we do when reporting table constraint violation\r\n(e.g. when violating a check constraint, it could output all the column value\r\nif the current has access to all the column):\r\n\r\n- First, use super user to create a table.\r\nCREATE TABLE t1 (c1 int, c2 int, c3 int check (c3 < 5));\r\n\r\n- 1) using super user to insert a row that violates the constraint. We should\r\nsee all the column value.\r\n\r\nINSERT INTO t1(c3) VALUES (6);\r\n\tERROR: new row for relation \"t1\" violates check constraint \"t1_c3_check\"\r\n\tDETAIL: Failing row contains (null, null, 6).\r\n\r\n- 2) use a user without access to all the columns. We can only see the inserted column and \r\nCREATE USER regress_priv_user2;\r\nGRANT INSERT (c1, c2, c3) ON t1 TO regress_priv_user2;\r\n\r\nSET SESSION AUTHORIZATION regress_priv_user2;\r\nINSERT INTO t1 (c3) VALUES (6);\r\n\r\n\tERROR: new row for relation \"t1\" violates check constraint \"t1_c3_check\"\r\n\tDETAIL: Failing row contains (c3) = (6).\r\n\r\nTo achieve this, I think we can expose the ExecBuildSlotValueDescription\r\nfunction and use it in conflict reporting. What do you think ?\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Sun, 4 Aug 2024 07:34:21 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Friday, August 2, 2024 7:03 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Thu, Aug 1, 2024 at 5:23 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Thu, Aug 1, 2024 at 2:26 PM Hayato Kuroda (Fujitsu)\r\n> > <[email protected]> wrote:\r\n> > >\r\n> > > 04. general\r\n> > >\r\n> > > According to the documentation [1], there is another constraint\r\n> > > \"exclude\", which can cause another type of conflict. But this pattern\r\n> cannot be logged in detail.\r\n> > >\r\n> >\r\n> > As per docs, \"exclusion constraints can specify constraints that are\r\n> > more general than simple equality\", so I don't think it satisfies the\r\n> > kind of conflicts we are trying to LOG and then in the future patch\r\n> > allows automatic resolution for the same. For example, when we have\r\n> > last_update_wins strategy, we will replace the rows with remote rows\r\n> > when the key column values match which shouldn't be true in general\r\n> > for exclusion constraints. Similarly, we don't want to consider other\r\n> > constraint violations like CHECK to consider as conflicts. We can\r\n> > always extend the basic functionality for more conflicts if required\r\n> > but let's go with reporting straight-forward stuff first.\r\n> >\r\n> \r\n> It is better to document that exclusion constraints won't be supported. 
We can\r\n> even write a comment in the code and or commit message that we can extend it\r\n> in the future.\r\n\r\nAdded.\r\n\r\n> \r\n> *\r\n> + * Return true if the commit timestamp data was found, false otherwise.\r\n> + */\r\n> +bool\r\n> +GetTupleCommitTs(TupleTableSlot *localslot, TransactionId *xmin,\r\n> +RepOriginId *localorigin, TimestampTz *localts)\r\n> \r\n> This API returns both xmin and commit timestamp, so wouldn't it be better to\r\n> name it as GetTupleTransactionInfo or something like that?\r\n\r\nThe suggested name looks better. Addressed in the patch.\r\n\r\n> \r\n> I have made several changes in the attached top-up patch. These include\r\n> changes in the comments, docs, function names, etc.\r\n\r\nThanks! I have reviewed and merged them in the patch.\r\n\r\nHere is the V11 patch set which addressed above and Kuroda-san[1] comments.\r\n\r\nNote that we may remove the 0002 patch in the next version as we didn't see\r\nperformance effect from the detection logic.\r\n\r\n[1] https://www.postgresql.org/message-id/TYAPR01MB569224262F44875973FAF344F5B22%40TYAPR01MB5692.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Sun, 4 Aug 2024 07:52:09 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Aug 2, 2024 at 6:28 PM Nisha Moond <[email protected]> wrote:\n>\n> Performance tests done on the v8-0001 and v8-0002 patches, available at [1].\n>\n\nThanks for doing the detailed tests for this patch.\n\n> The purpose of the performance tests is to measure the impact on\n> logical replication with track_commit_timestamp enabled, as this\n> involves fetching the commit_ts data to determine\n> delete_differ/update_differ conflicts.\n>\n> Fortunately, we did not see any noticeable overhead from the new\n> commit_ts fetch and comparison logic. The only notable impact is\n> potential overhead from logging conflicts if they occur frequently.\n> Therefore, enabling conflict detection by default seems feasible, and\n> introducing a new detect_conflict option may not be necessary.\n>\n...\n>\n> Test 1: create conflicts on Sub using pgbench.\n> ----------------------------------------------------------------\n> Setup:\n> - Both publisher and subscriber have pgbench tables created as-\n> pgbench -p $node1_port postgres -qis 1\n> - At Sub, a subscription created for all the changes from Pub node.\n>\n> Test Run:\n> - To test, ran pgbench for 15 minutes on both nodes simultaneously,\n> which led to concurrent updates and update_differ conflicts on the\n> Subscriber node.\n> Command used to run pgbench on both nodes-\n> ./pgbench postgres -p 8833 -c 10 -j 3 -T 300 -P 20\n>\n> Results:\n> For each case, note the “tps” and total time taken by the apply-worker\n> on Sub to apply the changes coming from Pub.\n>\n> Case1: track_commit_timestamp = off, detect_conflict = off\n> Pub-tps = 9139.556405\n> Sub-tps = 8456.787967\n> Time of replicating all the changes: 19min 28s\n> Case 2 : track_commit_timestamp = on, detect_conflict = on\n> Pub-tps = 8833.016548\n> Sub-tps = 8389.763739\n> Time of replicating all the changes: 20min 20s\n>\n\nWhy is there a noticeable tps (~3%) reduction in publisher TPS? 
Is it\nthe impact of track_commit_timestamp = on or something else?\n\n> Case3: track_commit_timestamp = on, detect_conflict = off\n> Pub-tps = 8886.101726\n> Sub-tps = 8374.508017\n> Time of replicating all the changes: 19min 35s\n> Case 4: track_commit_timestamp = off, detect_conflict = on\n> Pub-tps = 8981.924596\n> Sub-tps = 8411.120808\n> Time of replicating all the changes: 19min 27s\n>\n> **The difference of TPS between each case is small. While I can see a\n> slight increase of the replication time (about 5%), when enabling both\n> track_commit_timestamp and detect_conflict.\n>\n\nThe difference in TPS between case 1 and case 2 is quite visible.\nIIUC, the replication time difference is due to the logging of\nconflicts, right?\n\n> Test2: create conflict using a manual script\n> ----------------------------------------------------------------\n> - To measure the precise time taken by the apply-worker in all cases,\n> create a test with a table having 10 million rows.\n> - To record the total time taken by the apply-worker, dump the\n> current time in the logfile for apply_handle_begin() and\n> apply_handle_commit().\n>\n> Setup:\n> Pub : has a table ‘perf’ with 10 million rows.\n> Sub : has the same table ‘perf’ with its own 10 million rows (inserted\n> by 1000 different transactions). This table is subscribed for all\n> changes from Pub.\n>\n> Test Run:\n> At Pub: run UPDATE on the table ‘perf’ to update all its rows in a\n> single transaction. (this will lead to update_differ conflict for all\n> rows on Sub when enabled).\n> At Sub: record the time(from log file) taken by the apply-worker to\n> apply all updates coming from Pub.\n>\n> Results:\n> Below table shows the total time taken by the apply-worker\n> (apply_handle_commit Time - apply_handle_begin Time ).\n> (Two test runs for each of the four cases)\n>\n> Case1: track_commit_timestamp = off, detect_conflict = off\n> Run1 - 2min 42sec 579ms\n> Run2 - 2min 41sec 75ms\n> Case 2 : track_commit_timestamp = on, detect_conflict = on\n> Run1 - 6min 11sec 602ms\n> Run2 - 6min 25sec 179ms\n> Case3: track_commit_timestamp = on, detect_conflict = off\n> Run1 - 2min 34sec 223ms\n> Run2 - 2min 33sec 482ms\n> Case 4: track_commit_timestamp = off, detect_conflict = on\n> Run1 - 2min 35sec 276ms\n> Run2 - 2min 38sec 745ms\n>\n> ** In the case-2 when both track_commit_timestamp and detect_conflict\n> are enabled, the time taken by the apply-worker is ~140% higher.\n>\n> Test3: Case when no conflict is detected.\n> ----------------------------------------------------------------\n> To measure the time taken by the apply-worker when there is no\n> conflict detected. This test is to confirm if the time overhead in\n> Test1-Case2 is due to the new function GetTupleCommitTs() which\n> fetches the origin and timestamp information for each row in the table\n> before applying the update.\n>\n> Setup:\n> - The Publisher and Subscriber both have an empty table to start with.\n> - At Sub, the table is subscribed for all changes from Pub.\n> - At Pub: Insert 10 million rows and the same will be replicated to\n> the Sub table as well.\n>\n> Test Run:\n> At Pub: run an UPDATE on the table to update all rows in a single\n> transaction. 
(This will NOT hit the update_differ on Sub because now\n> all the tuples have the Pub’s origin).\n>\n> Results:\n> Case1: track_commit_timestamp = off, detect_conflict = off\n> Run1 - 2min 39sec 261ms\n> Run2 - 2min 30sec 95ms\n> Case 2 : track_commit_timestamp = on, detect_conflict = on\n> Run1 - 2min 38sec 985ms\n> Run2 - 2min 46sec 624ms\n> Case3: track_commit_timestamp = on, detect_conflict = off\n> Run1 - 2min 59sec 887ms\n> Run2 - 2min 34sec 336ms\n> Case 4: track_commit_timestamp = off, detect_conflict = on\n> Run1 - 2min 33sec 477min\n> Run2 - 2min 37sec 677ms\n>\n> Test Summary -\n> -- The duration for case-2 was reduced to 2-3 minutes, matching the\n> times of the other cases.\n> -- The test revealed that the overhead in case-2 was not due to\n> commit_ts fetching (GetTupleCommitTs).\n> -- The additional action in case-2 was the error logging of all 10\n> million update_differ conflicts.\n>\n\nAccording to me, this last point is key among all tests which will\ndecide whether we should have a new subscription option like\ndetect_conflict or not. I feel this is the worst case where all the\nrow updates have conflicts and the majority of time is spent writing\nLOG messages. Now, for this specific case, if one wouldn't have\nenabled track_commit_timestamp then there would be no difference as\nseen in case-4. So, I don't see this as a reason to introduce a new\nsubscription option like detect_conflicts, if one wants to avoid such\nan overhead, she shouldn't have enabled track_commit_timestamp in the\nfirst place to detect conflicts. Also, even without this, we would see\nsimilar overhead in the case of update/delete_missing where we LOG\nwhen the tuple to modify is not found.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Aug 2024 09:18:55 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Sun, Aug 4, 2024 at 1:22 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is the V11 patch set which addressed above and Kuroda-san[1] comments.\n>\n\nThanks for the patch. Few comments:\n\n1)\nCan you please recheck conflict.h inclusion. I think, these are not required:\n#include \"access/xlogdefs.h\"\n#include \"executor/tuptable.h\"\n#include \"utils/relcache.h\"\n\nOnly these should suffice:\n#include \"nodes/execnodes.h\"\n#include \"utils/timestamp.h\"\n\n2) create_subscription.sgml:\nFor 'insert_exists' as well, we can mention that\ntrack_commit_timestamp should be enabled *on the susbcriber*.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 5 Aug 2024 09:45:23 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Sun, Aug 4, 2024 at 1:04 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Friday, July 26, 2024 2:26 PM Amit Kapila <[email protected]> wrote:\n> >\n> >\n> > I agree that displaying pk where applicable should be okay as we display it at\n> > other places but the same won't be possible when we do sequence scan to\n> > fetch the required tuple. 
So, the message will be different in that case, right?\n>\n> After some research, I think we can report the key values in DETAIL if the\n> apply worker uses any unique indexes to find the tuple to update/delete.\n> Otherwise, we can try to output all column values in DETAIL if the current user\n> of apply worker has SELECT access to these columns.\n>\n\nI don't see any problem with displaying the column values in the LOG\nmessage when the user can access it. Also, we do the same in other\nplaces to further strengthen this idea.\n\n> This is consistent with what we do when reporting table constraint violation\n> (e.g. when violating a check constraint, it could output all the column value\n> if the current has access to all the column):\n>\n> - First, use super user to create a table.\n> CREATE TABLE t1 (c1 int, c2 int, c3 int check (c3 < 5));\n>\n> - 1) using super user to insert a row that violates the constraint. We should\n> see all the column value.\n>\n> INSERT INTO t1(c3) VALUES (6);\n> ERROR: new row for relation \"t1\" violates check constraint \"t1_c3_check\"\n> DETAIL: Failing row contains (null, null, 6).\n>\n> - 2) use a user without access to all the columns. We can only see the inserted column and\n> CREATE USER regress_priv_user2;\n> GRANT INSERT (c1, c2, c3) ON t1 TO regress_priv_user2;\n>\n> SET SESSION AUTHORIZATION regress_priv_user2;\n> INSERT INTO t1 (c3) VALUES (6);\n>\n> ERROR: new row for relation \"t1\" violates check constraint \"t1_c3_check\"\n> DETAIL: Failing row contains (c3) = (6).\n>\n> To achieve this, I think we can expose the ExecBuildSlotValueDescription\n> function and use it in conflict reporting. What do you think ?\n>\n\nAgreed. We should also consider displaying both the local and remote\nrows in case of update/delete_differ conflicts. Do, we have any case\nduring conflict reporting where we won't have access to any of the\ncolumns? If so, we need to see what to display in such a case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Aug 2024 09:50:38 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 5, 2024 at 9:19 AM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Aug 2, 2024 at 6:28 PM Nisha Moond <[email protected]> wrote:\n> >\n> > Performance tests done on the v8-0001 and v8-0002 patches, available at [1].\n> >\n>\n> Thanks for doing the detailed tests for this patch.\n>\n> > The purpose of the performance tests is to measure the impact on\n> > logical replication with track_commit_timestamp enabled, as this\n> > involves fetching the commit_ts data to determine\n> > delete_differ/update_differ conflicts.\n> >\n> > Fortunately, we did not see any noticeable overhead from the new\n> > commit_ts fetch and comparison logic. 
The only notable impact is\n> > potential overhead from logging conflicts if they occur frequently.\n> > Therefore, enabling conflict detection by default seems feasible, and\n> > introducing a new detect_conflict option may not be necessary.\n> >\n> ...\n> >\n> > Test 1: create conflicts on Sub using pgbench.\n> > ----------------------------------------------------------------\n> > Setup:\n> > - Both publisher and subscriber have pgbench tables created as-\n> > pgbench -p $node1_port postgres -qis 1\n> > - At Sub, a subscription created for all the changes from Pub node.\n> >\n> > Test Run:\n> > - To test, ran pgbench for 15 minutes on both nodes simultaneously,\n> > which led to concurrent updates and update_differ conflicts on the\n> > Subscriber node.\n> > Command used to run pgbench on both nodes-\n> > ./pgbench postgres -p 8833 -c 10 -j 3 -T 300 -P 20\n> >\n> > Results:\n> > For each case, note the “tps” and total time taken by the apply-worker\n> > on Sub to apply the changes coming from Pub.\n> >\n> > Case1: track_commit_timestamp = off, detect_conflict = off\n> > Pub-tps = 9139.556405\n> > Sub-tps = 8456.787967\n> > Time of replicating all the changes: 19min 28s\n> > Case 2 : track_commit_timestamp = on, detect_conflict = on\n> > Pub-tps = 8833.016548\n> > Sub-tps = 8389.763739\n> > Time of replicating all the changes: 20min 20s\n> >\n>\n> Why is there a noticeable tps (~3%) reduction in publisher TPS? Is it\n> the impact of track_commit_timestamp = on or something else?\n\nWas track_commit_timestamp enabled only on subscriber (as needed) or\non both publisher and subscriber? Nisha, can you please confirm from\nyour logs?\n\n> > Case3: track_commit_timestamp = on, detect_conflict = off\n> > Pub-tps = 8886.101726\n> > Sub-tps = 8374.508017\n> > Time of replicating all the changes: 19min 35s\n> > Case 4: track_commit_timestamp = off, detect_conflict = on\n> > Pub-tps = 8981.924596\n> > Sub-tps = 8411.120808\n> > Time of replicating all the changes: 19min 27s\n> >\n> > **The difference of TPS between each case is small. While I can see a\n> > slight increase of the replication time (about 5%), when enabling both\n> > track_commit_timestamp and detect_conflict.\n> >\n>\n> The difference in TPS between case 1 and case 2 is quite visible.\n> IIUC, the replication time difference is due to the logging of\n> conflicts, right?\n>\n> > Test2: create conflict using a manual script\n> > ----------------------------------------------------------------\n> > - To measure the precise time taken by the apply-worker in all cases,\n> > create a test with a table having 10 million rows.\n> > - To record the total time taken by the apply-worker, dump the\n> > current time in the logfile for apply_handle_begin() and\n> > apply_handle_commit().\n> >\n> > Setup:\n> > Pub : has a table ‘perf’ with 10 million rows.\n> > Sub : has the same table ‘perf’ with its own 10 million rows (inserted\n> > by 1000 different transactions). This table is subscribed for all\n> > changes from Pub.\n> >\n> > Test Run:\n> > At Pub: run UPDATE on the table ‘perf’ to update all its rows in a\n> > single transaction. 
(this will lead to update_differ conflict for all\n> > rows on Sub when enabled).\n> > At Sub: record the time(from log file) taken by the apply-worker to\n> > apply all updates coming from Pub.\n> >\n> > Results:\n> > Below table shows the total time taken by the apply-worker\n> > (apply_handle_commit Time - apply_handle_begin Time ).\n> > (Two test runs for each of the four cases)\n> >\n> > Case1: track_commit_timestamp = off, detect_conflict = off\n> > Run1 - 2min 42sec 579ms\n> > Run2 - 2min 41sec 75ms\n> > Case 2 : track_commit_timestamp = on, detect_conflict = on\n> > Run1 - 6min 11sec 602ms\n> > Run2 - 6min 25sec 179ms\n> > Case3: track_commit_timestamp = on, detect_conflict = off\n> > Run1 - 2min 34sec 223ms\n> > Run2 - 2min 33sec 482ms\n> > Case 4: track_commit_timestamp = off, detect_conflict = on\n> > Run1 - 2min 35sec 276ms\n> > Run2 - 2min 38sec 745ms\n> >\n> > ** In the case-2 when both track_commit_timestamp and detect_conflict\n> > are enabled, the time taken by the apply-worker is ~140% higher.\n> >\n> > Test3: Case when no conflict is detected.\n> > ----------------------------------------------------------------\n> > To measure the time taken by the apply-worker when there is no\n> > conflict detected. This test is to confirm if the time overhead in\n> > Test1-Case2 is due to the new function GetTupleCommitTs() which\n> > fetches the origin and timestamp information for each row in the table\n> > before applying the update.\n> >\n> > Setup:\n> > - The Publisher and Subscriber both have an empty table to start with.\n> > - At Sub, the table is subscribed for all changes from Pub.\n> > - At Pub: Insert 10 million rows and the same will be replicated to\n> > the Sub table as well.\n> >\n> > Test Run:\n> > At Pub: run an UPDATE on the table to update all rows in a single\n> > transaction. (This will NOT hit the update_differ on Sub because now\n> > all the tuples have the Pub’s origin).\n> >\n> > Results:\n> > Case1: track_commit_timestamp = off, detect_conflict = off\n> > Run1 - 2min 39sec 261ms\n> > Run2 - 2min 30sec 95ms\n> > Case 2 : track_commit_timestamp = on, detect_conflict = on\n> > Run1 - 2min 38sec 985ms\n> > Run2 - 2min 46sec 624ms\n> > Case3: track_commit_timestamp = on, detect_conflict = off\n> > Run1 - 2min 59sec 887ms\n> > Run2 - 2min 34sec 336ms\n> > Case 4: track_commit_timestamp = off, detect_conflict = on\n> > Run1 - 2min 33sec 477min\n> > Run2 - 2min 37sec 677ms\n> >\n> > Test Summary -\n> > -- The duration for case-2 was reduced to 2-3 minutes, matching the\n> > times of the other cases.\n> > -- The test revealed that the overhead in case-2 was not due to\n> > commit_ts fetching (GetTupleCommitTs).\n> > -- The additional action in case-2 was the error logging of all 10\n> > million update_differ conflicts.\n> >\n>\n> According to me, this last point is key among all tests which will\n> decide whether we should have a new subscription option like\n> detect_conflict or not. I feel this is the worst case where all the\n> row updates have conflicts and the majority of time is spent writing\n> LOG messages. Now, for this specific case, if one wouldn't have\n> enabled track_commit_timestamp then there would be no difference as\n> seen in case-4. So, I don't see this as a reason to introduce a new\n> subscription option like detect_conflicts, if one wants to avoid such\n> an overhead, she shouldn't have enabled track_commit_timestamp in the\n> first place to detect conflicts. 
Also, even without this, we would see\n> similar overhead in the case of update/delete_missing where we LOG\n> when the tuple to modify is not found.\n>\n\nOverall, it looks okay to get rid of the 'detect_conflict' parameter.\nMy only concern here is the purpose/use-cases of\n'track_commit_timestamp'. Is the only purpose of enabling\n'track_commit_timestamp' is to detect conflicts? I couldn't find much\nin the doc on this. Can there be a case where a user wants to enable\n'track_commit_timestamp' for any other purpose without enabling\nsubscription's conflict detection?\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 5 Aug 2024 10:05:01 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 5, 2024 at 10:05 AM shveta malik <[email protected]> wrote:\n>\n> On Mon, Aug 5, 2024 at 9:19 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Aug 2, 2024 at 6:28 PM Nisha Moond <[email protected]> wrote:\n> > >\n> > > Test Summary -\n> > > -- The duration for case-2 was reduced to 2-3 minutes, matching the\n> > > times of the other cases.\n> > > -- The test revealed that the overhead in case-2 was not due to\n> > > commit_ts fetching (GetTupleCommitTs).\n> > > -- The additional action in case-2 was the error logging of all 10\n> > > million update_differ conflicts.\n> > >\n> >\n> > According to me, this last point is key among all tests which will\n> > decide whether we should have a new subscription option like\n> > detect_conflict or not. I feel this is the worst case where all the\n> > row updates have conflicts and the majority of time is spent writing\n> > LOG messages. Now, for this specific case, if one wouldn't have\n> > enabled track_commit_timestamp then there would be no difference as\n> > seen in case-4. So, I don't see this as a reason to introduce a new\n> > subscription option like detect_conflicts, if one wants to avoid such\n> > an overhead, she shouldn't have enabled track_commit_timestamp in the\n> > first place to detect conflicts. Also, even without this, we would see\n> > similar overhead in the case of update/delete_missing where we LOG\n> > when the tuple to modify is not found.\n> >\n>\n> Overall, it looks okay to get rid of the 'detect_conflict' parameter.\n> My only concern here is the purpose/use-cases of\n> 'track_commit_timestamp'. Is the only purpose of enabling\n> 'track_commit_timestamp' is to detect conflicts? I couldn't find much\n> in the doc on this. Can there be a case where a user wants to enable\n> 'track_commit_timestamp' for any other purpose without enabling\n> subscription's conflict detection?\n>\n\nI am not aware of any other use case for 'track_commit_timestamp' GUC.\nAs per my understanding, commit timestamp is primarily required for\nconflict detection and resolution. 
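As a reference, the user-visible surface of that GUC is quite small: one
enables it (a server restart is needed, since it can only be set at server
start) and reads the recorded data back through the commit-timestamp
functions. A minimal sketch, where the table name is only a placeholder:

  -- on the subscriber; takes effect only after a restart
  ALTER SYSTEM SET track_commit_timestamp = on;

  -- after the restart, the recorded data can be read back, for example:
  SELECT * FROM pg_last_committed_xact();
  SELECT xmin, pg_xact_commit_timestamp(xmin) FROM some_table LIMIT 1;
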
We can probably add a description\nin 'track_commit_timestamp' GUC about its usage along with this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Aug 2024 10:31:46 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Sun, Aug 4, 2024 at 1:22 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Friday, August 2, 2024 7:03 PM Amit Kapila <[email protected]> wrote:\n> >\n>\n> Here is the V11 patch set which addressed above and Kuroda-san[1] comments.\n>\n\nA few design-level points:\n\n*\n@@ -525,10 +602,33 @@ ExecSimpleRelationInsert(ResultRelInfo *resultRelInfo,\n /* OK, store the tuple and create index entries for it */\n simple_table_tuple_insert(resultRelInfo->ri_RelationDesc, slot);\n\n+ conflictindexes = resultRelInfo->ri_onConflictArbiterIndexes;\n+\n if (resultRelInfo->ri_NumIndices > 0)\n recheckIndexes = ExecInsertIndexTuples(resultRelInfo,\n- slot, estate, false, false,\n- NULL, NIL, false);\n+ slot, estate, false,\n+ conflictindexes ? true : false,\n+ &conflict,\n+ conflictindexes, false);\n+\n+ /*\n+ * Checks the conflict indexes to fetch the conflicting local tuple\n+ * and reports the conflict. We perform this check here, instead of\n+ * performing an additional index scan before the actual insertion and\n+ * reporting the conflict if any conflicting tuples are found. This is\n+ * to avoid the overhead of executing the extra scan for each INSERT\n+ * operation, even when no conflict arises, which could introduce\n+ * significant overhead to replication, particularly in cases where\n+ * conflicts are rare.\n+ *\n+ * XXX OTOH, this could lead to clean-up effort for dead tuples added\n+ * in heap and index in case of conflicts. But as conflicts shouldn't\n+ * be a frequent thing so we preferred to save the performance overhead\n+ * of extra scan before each insertion.\n+ */\n+ if (conflict)\n+ CheckAndReportConflict(resultRelInfo, estate, CT_INSERT_EXISTS,\n+ recheckIndexes, slot);\n\nI was thinking about this case where we have some pros and cons of\ndoing additional scans only after we found the conflict. I was\nwondering how we will handle the resolution strategy for this when we\nhave to remote_apply the tuple for insert_exists/update_exists cases.\nWe would have already inserted the remote tuple in the heap and index\nbefore we found the conflict which means we have to roll back that\nchange and then start a forest transaction to perform remote_apply\nwhich probably has to update the existing tuple. We may have to\nperform something like speculative insertion and then abort it. That\ndoesn't sound cheap either. Do you have any better ideas?\n\n*\n-ERROR: duplicate key value violates unique constraint \"test_pkey\"\n-DETAIL: Key (c)=(1) already exists.\n+ERROR: conflict insert_exists detected on relation \"public.test\"\n+DETAIL: Key (c)=(1) already exists in unique index \"t_pkey\", which\nwas modified locally in transaction 740 at 2024-06-26\n10:47:04.727375+08.\n\nI think the format to display conflicts is not very clear. The\nconflict should be apparent just by seeing the LOG/ERROR message. I am\nthinking of something like below:\n\nLOG: CONFLICT: <insert_exisits or whatever names we document>;\nDESCRIPTION: If any .. 
; RESOLUTION: (This one can be added later)\nDEATAIL: remote_tuple (tuple values); local_tuple (tuple values);\n\nWith the above, one can easily identify the conflict's reason and\naction taken by apply worker.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Aug 2024 16:22:20 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Monday, August 5, 2024 6:52 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Sun, Aug 4, 2024 at 1:22 PM Zhijie Hou (Fujitsu) <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Friday, August 2, 2024 7:03 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > >\r\n> >\r\n> > Here is the V11 patch set which addressed above and Kuroda-san[1]\r\n> comments.\r\n> >\r\n> \r\n> A few design-level points:\r\n> \r\n> *\r\n> @@ -525,10 +602,33 @@ ExecSimpleRelationInsert(ResultRelInfo\r\n> *resultRelInfo,\r\n> /* OK, store the tuple and create index entries for it */\r\n> simple_table_tuple_insert(resultRelInfo->ri_RelationDesc, slot);\r\n> \r\n> + conflictindexes = resultRelInfo->ri_onConflictArbiterIndexes;\r\n> +\r\n> if (resultRelInfo->ri_NumIndices > 0)\r\n> recheckIndexes = ExecInsertIndexTuples(resultRelInfo,\r\n> - slot, estate, false, false,\r\n> - NULL, NIL, false);\r\n> + slot, estate, false,\r\n> + conflictindexes ? true : false,\r\n> + &conflict,\r\n> + conflictindexes, false);\r\n> +\r\n> + /*\r\n> + * Checks the conflict indexes to fetch the conflicting local tuple\r\n> + * and reports the conflict. We perform this check here, instead of\r\n> + * performing an additional index scan before the actual insertion and\r\n> + * reporting the conflict if any conflicting tuples are found. This is\r\n> + * to avoid the overhead of executing the extra scan for each INSERT\r\n> + * operation, even when no conflict arises, which could introduce\r\n> + * significant overhead to replication, particularly in cases where\r\n> + * conflicts are rare.\r\n> + *\r\n> + * XXX OTOH, this could lead to clean-up effort for dead tuples added\r\n> + * in heap and index in case of conflicts. But as conflicts shouldn't\r\n> + * be a frequent thing so we preferred to save the performance overhead\r\n> + * of extra scan before each insertion.\r\n> + */\r\n> + if (conflict)\r\n> + CheckAndReportConflict(resultRelInfo, estate, CT_INSERT_EXISTS,\r\n> + recheckIndexes, slot);\r\n> \r\n> I was thinking about this case where we have some pros and cons of doing\r\n> additional scans only after we found the conflict. I was wondering how we will\r\n> handle the resolution strategy for this when we have to remote_apply the tuple\r\n> for insert_exists/update_exists cases.\r\n> We would have already inserted the remote tuple in the heap and index before\r\n> we found the conflict which means we have to roll back that change and then\r\n> start a forest transaction to perform remote_apply which probably has to\r\n> update the existing tuple. We may have to perform something like speculative\r\n> insertion and then abort it. That doesn't sound cheap either. Do you have any\r\n> better ideas?\r\n\r\nSince most of the codes of conflict detection can be reused in the later\r\nresolution patch. I am thinking we can go for re-scan after insertion approach\r\nfor detection patch. Then in resolution patch we can probably have a check in\r\nthe patch that if the resolver is remote_apply/last_update_win we detect\r\nconflict before, otherwise detect it after. 
This way we can save an\r\nsubscription option in the detection patch because we are not introducing overhead\r\nfor the detection. And we can also save some overhead in the resolution patch\r\nif there is no need to do a prior check. There could be a few duplicate codes\r\nin resolution patch as have codes for both prior check and after check, but it\r\nseems acceptable.\r\n\r\n\r\n> \r\n> *\r\n> -ERROR: duplicate key value violates unique constraint \"test_pkey\"\r\n> -DETAIL: Key (c)=(1) already exists.\r\n> +ERROR: conflict insert_exists detected on relation \"public.test\"\r\n> +DETAIL: Key (c)=(1) already exists in unique index \"t_pkey\", which\r\n> was modified locally in transaction 740 at 2024-06-26 10:47:04.727375+08.\r\n> \r\n> I think the format to display conflicts is not very clear. The conflict should be\r\n> apparent just by seeing the LOG/ERROR message. I am thinking of something\r\n> like below:\r\n> \r\n> LOG: CONFLICT: <insert_exisits or whatever names we document>;\r\n> DESCRIPTION: If any .. ; RESOLUTION: (This one can be added later)\r\n> DEATAIL: remote_tuple (tuple values); local_tuple (tuple values);\r\n> \r\n> With the above, one can easily identify the conflict's reason and action taken by\r\n> apply worker.\r\n\r\nThanks for the idea! I thought about few styles based on the suggested format,\r\nwhat do you think about the following ?\r\n\r\n---\r\nVersion 1\r\n---\r\nLOG: CONFLICT: insert_exists; DESCRIPTION: remote INSERT violates unique constraint \"uniqueindex\" on relation \"public.test\".\r\nDETAIL: Existing local tuple (a, b, c) = (2, 3, 4) xid=123,origin=\"pub\",timestamp=xxx; remote tuple (a, b, c) = (2, 4, 5).\r\n\r\nLOG: CONFLICT: update_differ; DESCRIPTION: updating a row with key (a, b) = (2, 4) on relation \"public.test\" was modified by a different source.\r\nDETAIL: Existing local tuple (a, b, c) = (2, 3, 4) xid=123,origin=\"pub\",timestamp=xxx; remote tuple (a, b, c) = (2, 4, 5).\r\n\r\nLOG: CONFLICT: update_missing; DESCRIPTION: did not find the row with key (a, b) = (2, 4) on \"public.test\" to update.\r\nDETAIL: remote tuple (a, b, c) = (2, 4, 5).\r\n\r\n---\r\nVersion 2\r\nIt moves most the details to the DETAIL line compared to version 1.\r\n--- \r\nLOG: CONFLICT: insert_exists on relation \"public.test\".\r\nDETAIL: Key (a)=(1) already exists in unique index \"uniqueindex\", which was modified by origin \"pub\" in transaction 123 at 2024xxx;\r\n\t\tExisting local tuple (a, b, c) = (1, 3, 4), remote tuple (a, b, c) = (1, 4, 5).\r\n\r\nLOG: CONFLICT: update_differ on relation \"public.test\".\r\nDETAIL: Updating a row with key (a, b) = (2, 4) that was modified by a different origin \"pub\" in transaction 123 at 2024xxx;\r\n\t\tExisting local tuple (a, b, c) = (2, 3, 4); remote tuple (a, b, c) = (2, 4, 5).\r\n\r\nLOG: CONFLICT: update_missing on relation \"public.test\".\r\nDETAIL: Did not find the row with key (a, b) = (2, 4) to update;\r\n\t\tRemote tuple (a, b, c) = (2, 4, 5).\r\n\r\n---\r\nVersion 3\r\nIt is similar to the style in the current patch, I only added the key value for\r\ndiffer and missing conflicts without outputting the complete\r\nremote/local tuple value.\r\n--- \r\nLOG: conflict insert_exists detected on relation \"public.test\".\r\nDETAIL: Key (a)=(1) already exists in unique index \"uniqueindex\", which was modified by origin \"pub\" in transaction 123 at 2024xxx.\r\n\r\nLOG: conflict update_differ detected on relation \"public.test\".\r\nDETAIL: Updating a row with key (a, b) = (2, 4), which was modified by a 
different origin \"pub\" in transaction 123 at 2024xxx.\r\n\r\nLOG: conflict update_missing detected on relation \"public.test\"\r\nDETAIL: Did not find the row with key (a, b) = (2, 4) to update.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Tue, 6 Aug 2024 08:15:27 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "Dear Hou,\r\n\r\n> \r\n> Here is the V11 patch set which addressed above and Kuroda-san[1] comments.\r\n>\r\n\r\nThanks for updating the patch. I read 0001 again and I don't have critical comments for now.\r\nI found some cosmetic issues (e.g., lines should be shorter than 80 columns) and\r\nattached the fix patch. Feel free to include in the next version.\r\n\t\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Wed, 7 Aug 2024 04:01:28 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Tue, Aug 6, 2024 at 1:45 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Monday, August 5, 2024 6:52 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Sun, Aug 4, 2024 at 1:22 PM Zhijie Hou (Fujitsu) <[email protected]>\n> > wrote:\n> > >\n> > > On Friday, August 2, 2024 7:03 PM Amit Kapila <[email protected]>\n> > wrote:\n> > > >\n> > >\n> > > Here is the V11 patch set which addressed above and Kuroda-san[1]\n> > comments.\n> > >\n> >\n> > A few design-level points:\n> >\n> > *\n> > @@ -525,10 +602,33 @@ ExecSimpleRelationInsert(ResultRelInfo\n> > *resultRelInfo,\n> > /* OK, store the tuple and create index entries for it */\n> > simple_table_tuple_insert(resultRelInfo->ri_RelationDesc, slot);\n> >\n> > + conflictindexes = resultRelInfo->ri_onConflictArbiterIndexes;\n> > +\n> > if (resultRelInfo->ri_NumIndices > 0)\n> > recheckIndexes = ExecInsertIndexTuples(resultRelInfo,\n> > - slot, estate, false, false,\n> > - NULL, NIL, false);\n> > + slot, estate, false,\n> > + conflictindexes ? true : false,\n> > + &conflict,\n> > + conflictindexes, false);\n> > +\n> > + /*\n> > + * Checks the conflict indexes to fetch the conflicting local tuple\n> > + * and reports the conflict. We perform this check here, instead of\n> > + * performing an additional index scan before the actual insertion and\n> > + * reporting the conflict if any conflicting tuples are found. This is\n> > + * to avoid the overhead of executing the extra scan for each INSERT\n> > + * operation, even when no conflict arises, which could introduce\n> > + * significant overhead to replication, particularly in cases where\n> > + * conflicts are rare.\n> > + *\n> > + * XXX OTOH, this could lead to clean-up effort for dead tuples added\n> > + * in heap and index in case of conflicts. But as conflicts shouldn't\n> > + * be a frequent thing so we preferred to save the performance overhead\n> > + * of extra scan before each insertion.\n> > + */\n> > + if (conflict)\n> > + CheckAndReportConflict(resultRelInfo, estate, CT_INSERT_EXISTS,\n> > + recheckIndexes, slot);\n> >\n> > I was thinking about this case where we have some pros and cons of doing\n> > additional scans only after we found the conflict. 
I was wondering how we will\n> > handle the resolution strategy for this when we have to remote_apply the tuple\n> > for insert_exists/update_exists cases.\n> > We would have already inserted the remote tuple in the heap and index before\n> > we found the conflict which means we have to roll back that change and then\n> > start a forest transaction to perform remote_apply which probably has to\n> > update the existing tuple. We may have to perform something like speculative\n> > insertion and then abort it. That doesn't sound cheap either. Do you have any\n> > better ideas?\n>\n> Since most of the codes of conflict detection can be reused in the later\n> resolution patch. I am thinking we can go for re-scan after insertion approach\n> for detection patch. Then in resolution patch we can probably have a check in\n> the patch that if the resolver is remote_apply/last_update_win we detect\n> conflict before, otherwise detect it after. This way we can save an\n> subscription option in the detection patch because we are not introducing overhead\n> for the detection.\n>\n\nSounds reasonable to me.\n\n>\n>\n> >\n> > *\n> > -ERROR: duplicate key value violates unique constraint \"test_pkey\"\n> > -DETAIL: Key (c)=(1) already exists.\n> > +ERROR: conflict insert_exists detected on relation \"public.test\"\n> > +DETAIL: Key (c)=(1) already exists in unique index \"t_pkey\", which\n> > was modified locally in transaction 740 at 2024-06-26 10:47:04.727375+08.\n> >\n> > I think the format to display conflicts is not very clear. The conflict should be\n> > apparent just by seeing the LOG/ERROR message. I am thinking of something\n> > like below:\n> >\n> > LOG: CONFLICT: <insert_exisits or whatever names we document>;\n> > DESCRIPTION: If any .. ; RESOLUTION: (This one can be added later)\n> > DEATAIL: remote_tuple (tuple values); local_tuple (tuple values);\n> >\n> > With the above, one can easily identify the conflict's reason and action taken by\n> > apply worker.\n>\n> Thanks for the idea! I thought about few styles based on the suggested format,\n> what do you think about the following ?\n>\n> ---\n> Version 1\n> ---\n> LOG: CONFLICT: insert_exists; DESCRIPTION: remote INSERT violates unique constraint \"uniqueindex\" on relation \"public.test\".\n> DETAIL: Existing local tuple (a, b, c) = (2, 3, 4) xid=123,origin=\"pub\",timestamp=xxx; remote tuple (a, b, c) = (2, 4, 5).\n>\n\nWon't this case be ERROR? 
If so, the error message format like the\nabove appears odd to me because in some cases, the user may want to\nadd some filter based on the error message though that is not ideal.\nAlso, the primary error message starts with a small case letter and\nshould be short.\n\n> LOG: CONFLICT: update_differ; DESCRIPTION: updating a row with key (a, b) = (2, 4) on relation \"public.test\" was modified by a different source.\n> DETAIL: Existing local tuple (a, b, c) = (2, 3, 4) xid=123,origin=\"pub\",timestamp=xxx; remote tuple (a, b, c) = (2, 4, 5).\n>\n> LOG: CONFLICT: update_missing; DESCRIPTION: did not find the row with key (a, b) = (2, 4) on \"public.test\" to update.\n> DETAIL: remote tuple (a, b, c) = (2, 4, 5).\n>\n> ---\n> Version 2\n> It moves most the details to the DETAIL line compared to version 1.\n> ---\n> LOG: CONFLICT: insert_exists on relation \"public.test\".\n> DETAIL: Key (a)=(1) already exists in unique index \"uniqueindex\", which was modified by origin \"pub\" in transaction 123 at 2024xxx;\n> Existing local tuple (a, b, c) = (1, 3, 4), remote tuple (a, b, c) = (1, 4, 5).\n>\n> LOG: CONFLICT: update_differ on relation \"public.test\".\n> DETAIL: Updating a row with key (a, b) = (2, 4) that was modified by a different origin \"pub\" in transaction 123 at 2024xxx;\n> Existing local tuple (a, b, c) = (2, 3, 4); remote tuple (a, b, c) = (2, 4, 5).\n>\n> LOG: CONFLICT: update_missing on relation \"public.test\".\n> DETAIL: Did not find the row with key (a, b) = (2, 4) to update;\n> Remote tuple (a, b, c) = (2, 4, 5).\n>\n\nI think we can combine sentences with full stop.\n\n...\n> ---\n> Version 3\n> It is similar to the style in the current patch, I only added the key value for\n> differ and missing conflicts without outputting the complete\n> remote/local tuple value.\n> ---\n> LOG: conflict insert_exists detected on relation \"public.test\".\n> DETAIL: Key (a)=(1) already exists in unique index \"uniqueindex\", which was modified by origin \"pub\" in transaction 123 at 2024xxx.\n>\n\nFor ERROR messages this appears suitable.\n\nConsidering all the above points, I propose yet another version:\n\nLOG: conflict detected for relation \"public.test\": conflict=insert_exists\nDETAIL: Key (a)=(1) already exists in unique index \"uniqueindex\",\nwhich was modified by the origin \"pub\" in transaction 123 at 2024xxx.\nExisting local tuple (a, b, c) = (1, 3, 4), remote tuple (a, b, c) =\n(1, 4, 5).\n\nLOG: conflict detected for relation \"public.test\": conflict=update_differ\nDETAIL: Updating a row with key (a, b) = (2, 4) that was modified by a\ndifferent origin \"pub\" in transaction 123 at 2024xxx. 
Existing local\ntuple (a, b, c) = (2, 3, 4); remote tuple (a, b, c) = (2, 4, 5).\n\nLOG: conflict detected for relation \"public.test\": conflict=update_missing\nDETAIL: Could not find the row with key (a, b) = (2, 4) to update.\nRemote tuple (a, b, c) = (2, 4, 5).\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 7 Aug 2024 10:53:36 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "Dear Hou,\r\n\r\nWhile playing with the 0003 patch (the patch may not be ready), I found that\r\nwhen the insert_exists event occurred, both apply_error_count and insert_exists_count\r\nwas counted.\r\n\r\n```\r\n-- insert a tuple on the subscriber\r\nsubscriber =# INSERT INTO tab VALUES (1);\r\n\r\n-- insert the same tuple on the publisher, which causes insert_exists conflict\r\npublisher =# INSERT INTO tab VALUES (1);\r\n\r\n-- after some time...\r\nsubscriber =# SELECT * FROM pg_stat_subscription_stats;\r\n-[ RECORD 1 ]--------+------\r\nsubid | 16389\r\nsubname | sub\r\napply_error_count | 16\r\nsync_error_count | 0\r\ninsert_exists_count | 16\r\nupdate_differ_count | 0\r\nupdate_exists_count | 0\r\nupdate_missing_count | 0\r\ndelete_differ_count | 0\r\ndelete_missing_count | 0\r\nstats_reset |\r\n```\r\n\r\nNot tested, but I think this could also happen for the update_exists_count case,\r\nor sync_error_count may be counted when the tablesync worker detects the conflict.\r\n\r\nIIUC, the reason is that pgstat_report_subscription_error() is called in the\r\nPG_CATCH part in start_apply() even after ReportApplyConflict(ERROR) is called.\r\n\r\nWhat do you think of the current behavior? I wouldn't say I like that the same\r\nphenomenon is counted as several events. E.g., in the case of vacuum, the entry\r\nseemed to be separated based on the process by backends or autovacuum.\r\nI feel the spec is unfamiliar in that only insert_exists and update_exists are\r\ncounted duplicated with the apply_error_count.\r\n\r\nAn easy fix is to introduce a global variable which is turned on when the conflict\r\nis found.\r\n\r\nThought?\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Wed, 7 Aug 2024 06:59:59 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, August 7, 2024 3:00 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> \r\n> While playing with the 0003 patch (the patch may not be ready), I found that\r\n> when the insert_exists event occurred, both apply_error_count and\r\n> insert_exists_count was counted.\r\n\r\nThanks for testing. 
0003 is a separate feature which we might review\r\nafter the 0001 is in a good shape or committed.\r\n\r\n> \r\n> ```\r\n> -- insert a tuple on the subscriber\r\n> subscriber =# INSERT INTO tab VALUES (1);\r\n> \r\n> -- insert the same tuple on the publisher, which causes insert_exists conflict\r\n> publisher =# INSERT INTO tab VALUES (1);\r\n> \r\n> -- after some time...\r\n> subscriber =# SELECT * FROM pg_stat_subscription_stats; -[ RECORD\r\n> 1 ]--------+------\r\n> subid | 16389\r\n> subname | sub\r\n> apply_error_count | 16\r\n> sync_error_count | 0\r\n> insert_exists_count | 16\r\n> update_differ_count | 0\r\n> update_exists_count | 0\r\n> update_missing_count | 0\r\n> delete_differ_count | 0\r\n> delete_missing_count | 0\r\n> stats_reset |\r\n> ```\r\n> \r\n> Not tested, but I think this could also happen for the update_exists_count case,\r\n> or sync_error_count may be counted when the tablesync worker detects the\r\n> conflict.\r\n> \r\n> IIUC, the reason is that pgstat_report_subscription_error() is called in the\r\n> PG_CATCH part in start_apply() even after ReportApplyConflict(ERROR) is\r\n> called.\r\n> \r\n> What do you think of the current behavior? I wouldn't say I like that the same\r\n> phenomenon is counted as several events. E.g., in the case of vacuum, the\r\n> entry seemed to be separated based on the process by backends or\r\n> autovacuum.\r\n\r\nI think this is as expected. When the insert conflicts, it will report an ERROR\r\nso both the conflict count and error out are incremented which looks reasonable\r\nto me. The default behavior for each conflict could be different and is\r\ndocumented, I think It's clear that insert_exists will cause an ERROR while\r\ndelete_missing or .. will not.\r\n\r\nIn addition, we might support a resolution called \"error\" which is to report an\r\nERROR When facing the specified conflict, it would be a bit confusing to me if\r\nthe apply_error_count Is not incremented on the specified conflict, when I set\r\nresolution to ERROR.\r\n\r\n> I feel the spec is unfamiliar in that only insert_exists and update_exists are\r\n> counted duplicated with the apply_error_count.\r\n> \r\n> An easy fix is to introduce a global variable which is turned on when the conflict\r\n> is found.\r\n\r\nI am not sure about the benefit of changing the current behavior in the patch.\r\nAnd it will change the existing behavior, because before the conflict detection\r\npatch, the apply_error_count is incremented on each unique key violation, while\r\nafter the detection patch, it stops incrementing the apply_error and only\r\nconflict_count is incremented.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Wed, 7 Aug 2024 07:38:08 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Aug 7, 2024 at 1:08 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Wednesday, August 7, 2024 3:00 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\n> >\n> > While playing with the 0003 patch (the patch may not be ready), I found that\n> > when the insert_exists event occurred, both apply_error_count and\n> > insert_exists_count was counted.\n>\n> Thanks for testing. 
0003 is a separate feature which we might review\n> after the 0001 is in a good shape or committed.\n>\n> >\n> > ```\n> > -- insert a tuple on the subscriber\n> > subscriber =# INSERT INTO tab VALUES (1);\n> >\n> > -- insert the same tuple on the publisher, which causes insert_exists conflict\n> > publisher =# INSERT INTO tab VALUES (1);\n> >\n> > -- after some time...\n> > subscriber =# SELECT * FROM pg_stat_subscription_stats; -[ RECORD\n> > 1 ]--------+------\n> > subid | 16389\n> > subname | sub\n> > apply_error_count | 16\n> > sync_error_count | 0\n> > insert_exists_count | 16\n> > update_differ_count | 0\n> > update_exists_count | 0\n> > update_missing_count | 0\n> > delete_differ_count | 0\n> > delete_missing_count | 0\n> > stats_reset |\n> > ```\n> >\n> > Not tested, but I think this could also happen for the update_exists_count case,\n> > or sync_error_count may be counted when the tablesync worker detects the\n> > conflict.\n> >\n> > IIUC, the reason is that pgstat_report_subscription_error() is called in the\n> > PG_CATCH part in start_apply() even after ReportApplyConflict(ERROR) is\n> > called.\n> >\n> > What do you think of the current behavior? I wouldn't say I like that the same\n> > phenomenon is counted as several events. E.g., in the case of vacuum, the\n> > entry seemed to be separated based on the process by backends or\n> > autovacuum.\n>\n> I think this is as expected. When the insert conflicts, it will report an ERROR\n> so both the conflict count and error out are incremented which looks reasonable\n> to me. The default behavior for each conflict could be different and is\n> documented, I think It's clear that insert_exists will cause an ERROR while\n> delete_missing or .. will not.\n>\n\nI had also observed this behaviour during my testing of stats patch.\nBut I found this behaviour to be okay. IMO, apply_error_count should\naccount any error caused during applying and thus should incorporate\ninsert-exists error-count too.\n\n> In addition, we might support a resolution called \"error\" which is to report an\n> ERROR When facing the specified conflict, it would be a bit confusing to me if\n> the apply_error_count Is not incremented on the specified conflict, when I set\n> resolution to ERROR.\n>\n> > I feel the spec is unfamiliar in that only insert_exists and update_exists are\n> > counted duplicated with the apply_error_count.\n> >\n> > An easy fix is to introduce a global variable which is turned on when the conflict\n> > is found.\n>\n> I am not sure about the benefit of changing the current behavior in the patch.\n> And it will change the existing behavior, because before the conflict detection\n> patch, the apply_error_count is incremented on each unique key violation, while\n> after the detection patch, it stops incrementing the apply_error and only\n> conflict_count is incremented.\n>\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 7 Aug 2024 14:11:43 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, August 7, 2024 1:24 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Tue, Aug 6, 2024 at 1:45 PM Zhijie Hou (Fujitsu)\r\n> > Thanks for the idea! 
I thought about few styles based on the suggested\r\n> format,\r\n> > what do you think about the following ?\r\n> >\r\n> > ---\r\n> > Version 1\r\n> > ---\r\n> > LOG: CONFLICT: insert_exists; DESCRIPTION: remote INSERT violates\r\n> unique constraint \"uniqueindex\" on relation \"public.test\".\r\n> > DETAIL: Existing local tuple (a, b, c) = (2, 3, 4)\r\n> xid=123,origin=\"pub\",timestamp=xxx; remote tuple (a, b, c) = (2, 4, 5).\r\n> >\r\n> \r\n> Won't this case be ERROR? If so, the error message format like the\r\n> above appears odd to me because in some cases, the user may want to\r\n> add some filter based on the error message though that is not ideal.\r\n> Also, the primary error message starts with a small case letter and\r\n> should be short.\r\n> \r\n> > LOG: CONFLICT: update_differ; DESCRIPTION: updating a row with key (a,\r\n> b) = (2, 4) on relation \"public.test\" was modified by a different source.\r\n> > DETAIL: Existing local tuple (a, b, c) = (2, 3, 4)\r\n> xid=123,origin=\"pub\",timestamp=xxx; remote tuple (a, b, c) = (2, 4, 5).\r\n> >\r\n> > LOG: CONFLICT: update_missing; DESCRIPTION: did not find the row with\r\n> key (a, b) = (2, 4) on \"public.test\" to update.\r\n> > DETAIL: remote tuple (a, b, c) = (2, 4, 5).\r\n> >\r\n> > ---\r\n> > Version 2\r\n> > It moves most the details to the DETAIL line compared to version 1.\r\n> > ---\r\n> > LOG: CONFLICT: insert_exists on relation \"public.test\".\r\n> > DETAIL: Key (a)=(1) already exists in unique index \"uniqueindex\", which\r\n> was modified by origin \"pub\" in transaction 123 at 2024xxx;\r\n> > Existing local tuple (a, b, c) = (1, 3, 4), remote tuple (a, b, c)\r\n> = (1, 4, 5).\r\n> >\r\n> > LOG: CONFLICT: update_differ on relation \"public.test\".\r\n> > DETAIL: Updating a row with key (a, b) = (2, 4) that was modified by a\r\n> different origin \"pub\" in transaction 123 at 2024xxx;\r\n> > Existing local tuple (a, b, c) = (2, 3, 4); remote tuple (a, b, c)\r\n> = (2, 4, 5).\r\n> >\r\n> > LOG: CONFLICT: update_missing on relation \"public.test\".\r\n> > DETAIL: Did not find the row with key (a, b) = (2, 4) to update;\r\n> > Remote tuple (a, b, c) = (2, 4, 5).\r\n> >\r\n> \r\n> I think we can combine sentences with full stop.\r\n> \r\n> ...\r\n> > ---\r\n> > Version 3\r\n> > It is similar to the style in the current patch, I only added the key value for\r\n> > differ and missing conflicts without outputting the complete\r\n> > remote/local tuple value.\r\n> > ---\r\n> > LOG: conflict insert_exists detected on relation \"public.test\".\r\n> > DETAIL: Key (a)=(1) already exists in unique index \"uniqueindex\", which\r\n> was modified by origin \"pub\" in transaction 123 at 2024xxx.\r\n> >\r\n> \r\n> For ERROR messages this appears suitable.\r\n> \r\n> Considering all the above points, I propose yet another version:\r\n> \r\n> LOG: conflict detected for relation \"public.test\": conflict=insert_exists\r\n> DETAIL: Key (a)=(1) already exists in unique index \"uniqueindex\",\r\n> which was modified by the origin \"pub\" in transaction 123 at 2024xxx.\r\n> Existing local tuple (a, b, c) = (1, 3, 4), remote tuple (a, b, c) =\r\n> (1, 4, 5).\r\n> \r\n> LOG: conflict detected for relation \"public.test\": conflict=update_differ\r\n> DETAIL: Updating a row with key (a, b) = (2, 4) that was modified by a\r\n> different origin \"pub\" in transaction 123 at 2024xxx. 
Existing local\r\n> tuple (a, b, c) = (2, 3, 4); remote tuple (a, b, c) = (2, 4, 5).\r\n> \r\n> LOG: conflict detected for relation \"public.test\": conflict=update_missing\r\n> DETAIL: Could not find the row with key (a, b) = (2, 4) to update.\r\n> Remote tuple (a, b, c) = (2, 4, 5).\r\n\r\nHere is the V12 patch that improved the log format as discussed. I also fixed a\r\nbug in previous version where it reported the wrong column value in the DETAIL\r\nmessage.\r\n\r\nIn the latest patch, the DETAIL line comprises two parts: 1. Explanation of the\r\nconflict type, including the tuple value used to search the existing local\r\ntuple provided for update or deletion, or the tuple value causing the unique\r\nconstraint violation. 2. Display of the complete existing local tuple and the\r\nremote tuple, if any.\r\n\r\nI also addressed Shveta's comments and tried to merge Kuroda-san's changes[2] to\r\nthe new codes.\r\n\r\nAnd the 0002(new sub option) patch is removed as discussed. The 0003(stats\r\ncollection) patch is also removed temporarily, we can bring it back After\r\nfinishing the 0001 work.\r\n\r\n[1] https://www.postgresql.org/message-id/CAJpy0uAjJci%2BOtm4ANU0__-2qqhH2cALp8hQw5pBjNZyREF7rg%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/TYAPR01MB5692224DB472AA3FA58E1D1AF5B82%40TYAPR01MB5692.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Fri, 9 Aug 2024 06:59:29 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "Hello, everyone.\n\nThere are some comments on this patch related to issue [0].\nIn short: any DirtySnapshot index scan may fail to find an existing tuple\nin the case of a concurrent update.\n\n- FindConflictTuple may return false negative result in the case of\nconcurrent update because ExecCheckIndexConstraints uses SnapshotDirty.\n- As a result, CheckAndReportConflict may fail to report the conflict.\n- In apply_handle_update_internal we may get an CT_UPDATE_MISSING instead\nof CT_UPDATE_DIFFER\n- In apply_handle_update_internal we may get an CT_DELETE_MISSING instead\nof CT_DELETE_DIFFER\n- In apply_handle_tuple_routing we may get an CT_UPDATE_MISSING instead of\nCT_UPDATE_DIFFER\n\nIf you're interested, I could create a test to reproduce the issue within\nthe context of logical replication. 
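Roughly, the shape of such a test would be the following (schema and values
are made up, and hitting the window reliably needs the apply worker to be
paused inside the tuple lookup, e.g. with an injection point or a debugger,
so please treat this only as an outline):

  -- subscriber side; 'tab' has a primary key on (a) and already contains (1, 'old');
  -- the publisher concurrently sends an UPDATE/DELETE for the row with a = 1
  UPDATE tab SET a = 2 WHERE a = 1;  -- local update: new row version, new index entry

  -- if this commits at the wrong moment relative to the apply worker's index
  -- lookup, the DirtySnapshot scan can miss both the old and the new row
  -- version, and the conflict is then reported as update_missing or
  -- delete_missing although the row still exists locally.
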
Issue [0] itself includes a test case\nto replicate the problem.\n\nIt also seems possible that a conflict could be resolved by a concurrent\nupdate before the call to CheckAndReportConflict, which means there's no\nguarantee that the conflict will be reported correctly.\nShould we be concerned about this?\n\n[0]: https://commitfest.postgresql.org/49/5151/\n\nHello, everyone.There are some comments on this patch related to issue [0].In short: any DirtySnapshot index scan may fail to find an existing tuple in the case of a concurrent update.- FindConflictTuple may return false negative result in the case of concurrent update because ExecCheckIndexConstraints uses SnapshotDirty.- As a result, CheckAndReportConflict may fail to report the conflict.- In apply_handle_update_internal we may get an CT_UPDATE_MISSING instead of CT_UPDATE_DIFFER- In apply_handle_update_internal we may get an CT_DELETE_MISSING instead of CT_DELETE_DIFFER- In apply_handle_tuple_routing we may get an CT_UPDATE_MISSING instead of CT_UPDATE_DIFFERIf you're interested, I could create a test to reproduce the issue within the context of logical replication. Issue [0] itself includes a test case to replicate the problem.It also seems possible that a conflict could be resolved by a concurrent update before the call to CheckAndReportConflict, which means there's no guarantee that the conflict will be reported correctly.Should we be concerned about this?[0]: https://commitfest.postgresql.org/49/5151/", "msg_date": "Fri, 9 Aug 2024 13:45:12 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Friday, August 9, 2024 7:45 PM Michail Nikolaev <[email protected]> wrote:\r\n> There are some comments on this patch related to issue [0]. In short: any\r\n> DirtySnapshot index scan may fail to find an existing tuple in the case of a\r\n> concurrent update.\r\n> \r\n> - FindConflictTuple may return false negative result in the case of concurrent update because > ExecCheckIndexConstraints uses SnapshotDirty.\r\n> - As a result, CheckAndReportConflict may fail to report the conflict.\r\n> - In apply_handle_update_internal we may get an CT_UPDATE_MISSING instead of CT_UPDATE_DIFFER\r\n> - In apply_handle_update_internal we may get an CT_DELETE_MISSING instead of CT_DELETE_DIFFER\r\n> - In apply_handle_tuple_routing we may get an CT_UPDATE_MISSING instead of CT_UPDATE_DIFFER\r\n> \r\n> If you're interested, I could create a test to reproduce the issue within the\r\n> context of logical replication. Issue [0] itself includes a test case to\r\n> replicate the problem.\r\n> \r\n> It also seems possible that a conflict could be resolved by a concurrent update\r\n> before the call to CheckAndReportConflict, which means there's no guarantee\r\n> that the conflict will be reported correctly. 
Should we be concerned about\r\n> this?\r\n\r\nThanks for reporting.\r\n\r\nI think this is an independent issue which can be discussed separately in the\r\noriginal thread[1], and I have replied to that thread.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Mon, 12 Aug 2024 03:33:05 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Aug 9, 2024 at 12:29 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is the V12 patch that improved the log format as discussed.\n>\n\n*\ndiff --git a/src/test/subscription/out b/src/test/subscription/out\nnew file mode 100644\nindex 0000000000..2b68e9264a\n--- /dev/null\n+++ b/src/test/subscription/out\n@@ -0,0 +1,29 @@\n+make -C ../../../src/backend generated-headers\n+make[1]: Entering directory '/home/houzj/postgresql/src/backend'\n+make -C ../include/catalog generated-headers\n+make[2]: Entering directory '/home/houzj/postgresql/src/include/catalog'\n+make[2]: Nothing to be done for 'generated-headers'.\n+make[2]: Leaving directory '/home/houzj/postgresql/src/include/catalog'\n+make -C nodes generated-header-symlinks\n+make[2]: Entering directory '/home/houzj/postgresql/src/backend/nodes'\n+make[2]: Nothing to be done for 'generated-header-symlinks'.\n+make[2]: Leaving directory '/home/houzj/postgresql/src/backend/nodes'\n+make -C utils generated-header-symlinks\n+make[2]: Entering directory '/home/houzj/postgresql/src/backend/utils'\n+make -C adt jsonpath_gram.h\n+make[3]: Entering directory '/home/houzj/postgresql/src/backend/utils/adt'\n+make[3]: 'jsonpath_gram.h' is up to date.\n+make[3]: Leaving directory '/home/houzj/postgresql/src/backend/utils/adt'\n+make[2]: Leaving directory '/home/houzj/postgresql/src/backend/utils'\n+make[1]: Leaving directory '/home/houzj/postgresql/src/backend'\n+rm -rf '/home/houzj/postgresql'/tmp_install\n+/usr/bin/mkdir -p '/home/houzj/postgresql'/tmp_install/log\n+make -C '../../..' DESTDIR='/home/houzj/postgresql'/tmp_install\ninstall >'/home/houzj/postgresql'/tmp_install/log/install.log 2>&1\n+make -j1 checkprep >>'/home/houzj/postgresql'/tmp_install/log/install.log 2>&1\n+PATH=\"/home/houzj/postgresql/tmp_install/home/houzj/pgsql/bin:/home/houzj/postgresql/src/test/subscription:$PATH\"\nLD_LIBRARY_PATH=\"/home/houzj/postgresql/tmp_install/home/houzj/pgsql/lib\"\nINITDB_TEMPLATE='/home/houzj/postgresql'/tmp_install/initdb-template\ninitdb --auth trust --no-sync --no-instructions --lc-messages=C\n--no-clean '/home/houzj/postgresql'/tmp_install/initdb-template\n>>'/home/houzj/postgresql'/tmp_install/log/initdb-template.log 2>&1\n+echo \"# +++ tap check in src/test/subscription +++\" && rm -rf\n'/home/houzj/postgresql/src/test/subscription'/tmp_check &&\n/usr/bin/mkdir -p\n'/home/houzj/postgresql/src/test/subscription'/tmp_check && cd . &&\nTESTLOGDIR='/home/houzj/postgresql/src/test/subscription/tmp_check/log'\nTESTDATADIR='/home/houzj/postgresql/src/test/subscription/tmp_check'\nPATH=\"/home/houzj/postgresql/tmp_install/home/houzj/pgsql/bin:/home/houzj/postgresql/src/test/subscription:$PATH\"\nLD_LIBRARY_PATH=\"/home/houzj/postgresql/tmp_install/home/houzj/pgsql/lib\"\nINITDB_TEMPLATE='/home/houzj/postgresql'/tmp_install/initdb-template\nPGPORT='65432' top_builddir='/home/houzj/postgresql/src/test/subscription/../../..'\nPG_REGRESS='/home/houzj/postgresql/src/test/subscription/../../../src/test/regress/pg_regress'\n/usr/bin/prove -I ../../../src/test/perl/ -I . 
t/013_partition.pl\n+# +++ tap check in src/test/subscription +++\n+t/013_partition.pl .. ok\n+All tests successful.\n+Files=1, Tests=73, 4 wallclock secs ( 0.02 usr 0.00 sys + 0.59\ncusr 0.21 csys = 0.82 CPU)\n+Result: PASS\n\nThe above is added to the patch by mistake. Can you please remove it\nfrom the patch unless there is a reason?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 12 Aug 2024 09:30:34 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Aug 9, 2024 at 12:29 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is the V12 patch that improved the log format as discussed.\n>\n\nReview comments:\n===============\n1. The patch doesn't display the remote tuple for delete_differ case.\nHowever, it shows the remote tuple correctly for update_differ. Is\nthere a reason for the same? See below messages:\n\nupdate_differ:\n--------------\nLOG: conflict detected on relation \"public.t1\": conflict=update_differ\nDETAIL: Updating the row containing (c1)=(1) that was modified\nlocally in transaction 806 at 2024-08-12 11:48:14.970002+05:30.\n Existing local tuple (1, 3, arun ); remote tuple (1, 3,\najay ).\n...\n\ndelete_differ\n--------------\nLOG: conflict detected on relation \"public.t1\": conflict=delete_differ\nDETAIL: Deleting the row containing (c1)=(1) that was modified by\nlocally in transaction 809 at 2024-08-12 14:15:41.966467+05:30.\n Existing local tuple (1, 3, arun ).\n\nNote this happens when the publisher table has a REPLICA IDENTITY FULL\nand the subscriber table has primary_key. It would be better to keep\nthe messages consistent. One possibility is that we remove\nkey/old_tuple from the first line of the DETAIL message and display it\nin the second line as Existing local tuple <local_tuple>; remote tuple\n<..>; key <...>\n\n2. Similar to above, the remote tuple is not displayed in\ndelete_missing but displayed in updated_missing type of conflict. If\nwe follow the style mentioned in the previous point then the DETAIL\nmessage: \"DETAIL: Did not find the row containing (c1)=(1) to be\nupdated.\" can also be changed to: \"DETAIL: Could not find the row to\nbe updated.\" followed by other detail.\n\n3. The detail of insert_exists is confusing.\n\nERROR: conflict detected on relation \"public.t1\": conflict=insert_exists\nDETAIL: Key (c1)=(1) already exists in unique index \"t1_pkey\", which\nwas modified locally in transaction 802 at 2024-08-12\n11:11:31.252148+05:30.\n\nIt sounds like the key value \"(c1)=(1)\" in the index is modified. How\nabout changing slightly as: \"Key (c1)=(1) already exists in unique\nindex \"t1_pkey\", modified locally in transaction 802 at 2024-08-12\n11:11:31.252148+05:30.\"? Feel free to propose if anything better comes\nto your mind.\n\n4.\nif (localorigin == InvalidRepOriginId)\n+ appendStringInfo(&err_detail, _(\"Deleting the row containing %s that\nwas modified by locally in transaction %u at %s.\"),\n+ val_desc, localxmin, timestamptz_to_str(localts));\n\nTypo in the above message. 
/modified by locally/modified locally\n\n5.\n@@ -2661,6 +2662,29 @@ apply_handle_update_internal(ApplyExecutionData *edata,\n{\n...\nfound = FindReplTupleInLocalRel(edata, localrel,\n&relmapentry->remoterel,\nlocalindexoid,\nremoteslot, &localslot);\n...\n...\n+\n+ ReportApplyConflict(LOG, CT_UPDATE_DIFFER, relinfo,\n+ GetRelationIdentityOrPK(localrel),\n\nTo find the tuple, we may have used an index other than Replica\nIdentity or PK (see IsIndexUsableForReplicaIdentityFull), but while\nreporting conflict we don't consider such an index. I think the reason\nis that such an index scan wouldn't have resulted in a unique tuple\nand that is why we always compare the complete tuple in such cases. Is\nthat the reason? Can we write a comment to make it clear?\n\n6.\nvoid ReportApplyConflict(int elevel, ConflictType type,\n+ ResultRelInfo *relinfo, Oid indexoid,\n+ TransactionId localxmin,\n+ RepOriginId localorigin,\n+ TimestampTz localts,\n+ TupleTableSlot *searchslot,\n+ TupleTableSlot *localslot,\n+ TupleTableSlot *remoteslot,\n+ EState *estate);\n\nThe prototype looks odd with pointers and non-pointer variables in\nmixed order. How about arranging parameters in the following order:\nEstate, ResultRelInfo, TupleTableSlot *searchslot, TupleTableSlot\n*localslot, TupleTableSlot *remoteslot, Oid indexoid, TransactionId\nlocalxmin, RepOriginId localorigin, TimestampTz localts?\n\n7. Like above, check the parameters of other functions like\nerrdetail_apply_conflict, build_index_value_desc,\nbuild_tuple_value_details, etc.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 12 Aug 2024 17:10:47 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 5, 2024 at 10:05 AM shveta malik <[email protected]> wrote:\n>\n> On Mon, Aug 5, 2024 at 9:19 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Aug 2, 2024 at 6:28 PM Nisha Moond <[email protected]> wrote:\n> > >\n> > > Performance tests done on the v8-0001 and v8-0002 patches, available at [1].\n> > >\n> >\n> > Thanks for doing the detailed tests for this patch.\n> >\n> > > The purpose of the performance tests is to measure the impact on\n> > > logical replication with track_commit_timestamp enabled, as this\n> > > involves fetching the commit_ts data to determine\n> > > delete_differ/update_differ conflicts.\n> > >\n> > > Fortunately, we did not see any noticeable overhead from the new\n> > > commit_ts fetch and comparison logic. 
The only notable impact is\n> > > potential overhead from logging conflicts if they occur frequently.\n> > > Therefore, enabling conflict detection by default seems feasible, and\n> > > introducing a new detect_conflict option may not be necessary.\n> > >\n> > ...\n> > >\n> > > Test 1: create conflicts on Sub using pgbench.\n> > > ----------------------------------------------------------------\n> > > Setup:\n> > > - Both publisher and subscriber have pgbench tables created as-\n> > > pgbench -p $node1_port postgres -qis 1\n> > > - At Sub, a subscription created for all the changes from Pub node.\n> > >\n> > > Test Run:\n> > > - To test, ran pgbench for 15 minutes on both nodes simultaneously,\n> > > which led to concurrent updates and update_differ conflicts on the\n> > > Subscriber node.\n> > > Command used to run pgbench on both nodes-\n> > > ./pgbench postgres -p 8833 -c 10 -j 3 -T 300 -P 20\n> > >\n> > > Results:\n> > > For each case, note the “tps” and total time taken by the apply-worker\n> > > on Sub to apply the changes coming from Pub.\n> > >\n> > > Case1: track_commit_timestamp = off, detect_conflict = off\n> > > Pub-tps = 9139.556405\n> > > Sub-tps = 8456.787967\n> > > Time of replicating all the changes: 19min 28s\n> > > Case 2 : track_commit_timestamp = on, detect_conflict = on\n> > > Pub-tps = 8833.016548\n> > > Sub-tps = 8389.763739\n> > > Time of replicating all the changes: 20min 20s\n> > >\n> >\n> > Why is there a noticeable tps (~3%) reduction in publisher TPS? Is it\n> > the impact of track_commit_timestamp = on or something else?\n\nWhen both the publisher and subscriber nodes are on the same machine,\nwe observe a decrease in the publisher's TPS in case when\n'track_commit_timestamp' is ON for the subscriber. Testing on pgHead\n(without the patch) also showed a similar reduction in the publisher's\nTPS.\n\nTest Setup: The test was conducted with the same setup as Test-1.\n\nResults:\nCase 1: pgHead - 'track_commit_timestamp' = OFF\n - Pub TPS: 9306.25\n - Sub TPS: 8848.91\nCase 2: pgHead - 'track_commit_timestamp' = ON\n - Pub TPS: 8915.75\n - Sub TPS: 8667.12\n\nOn pgHead too, there was a ~400tps reduction in the publisher when\n'track_commit_timestamp' was enabled on the subscriber.\n\nAdditionally, code profiling of the walsender on the publisher showed\nthat the overhead in Case-2 was mainly in the DecodeCommit() call\nstack, causing slower write operations, especially in\nlogicalrep_write_update() and OutputPluginWrite().\n\ncase1 : 'track_commit_timestamp' = OFF\n--11.57%--xact_decode\n| | DecodeCommit\n| | ReorderBufferCommit\n...\n| | --6.10%--pgoutput_change\n| | |\n| | |--3.09%--logicalrep_write_update\n| | ....\n| | |--2.01%--OutputPluginWrite\n| | |--1.97%--WalSndWriteData\n\ncase2: 'track_commit_timestamp' = ON\n|--53.19%--xact_decode\n| | DecodeCommit\n| | ReorderBufferCommit\n...\n| | --30.25%--pgoutput_change\n| | |\n| | |--15.23%--logicalrep_write_update\n| | ....\n| | |--9.82%--OutputPluginWrite\n| | |--9.57%--WalSndWriteData\n\n-- In Case 2, the subscriber's process of writing timestamp data for\nmillions of rows appears to have impacted all write operations on the\nmachine.\n\nTo confirm the profiling results, we conducted the same test with the\npublisher and subscriber on separate machines.\n\nResults:\nCase 1: 'track_commit_timestamp' = OFF\n - Run 1: Pub TPS: 2144.10, Sub TPS: 2216.02\n - Run 2: Pub TPS: 2159.41, Sub TPS: 2233.82\n\nCase 2: 'track_commit_timestamp' = ON\n - Run 1: Pub TPS: 2174.39, Sub TPS: 2226.89\n - Run 2: Pub TPS: 2148.92, 
Sub TPS: 2224.80\n\nNote: The machines used in this test were not as powerful as the one\nused in the earlier tests, resulting in lower overall TPS (~2k vs.\n~8-9k).\nHowever, the results show no significant reduction in the publisher's\nTPS, indicating minimal impact when the nodes are run on separate\nmachines.\n\n> Was track_commit_timestamp enabled only on subscriber (as needed) or\n> on both publisher and subscriber? Nisha, can you please confirm from\n> your logs?\n\nYes, track_commit_timestamp was enabled only on the subscriber.\n\n> > > Case3: track_commit_timestamp = on, detect_conflict = off\n> > > Pub-tps = 8886.101726\n> > > Sub-tps = 8374.508017\n> > > Time of replicating all the changes: 19min 35s\n> > > Case 4: track_commit_timestamp = off, detect_conflict = on\n> > > Pub-tps = 8981.924596\n> > > Sub-tps = 8411.120808\n> > > Time of replicating all the changes: 19min 27s\n> > >\n> > > **The difference of TPS between each case is small. While I can see a\n> > > slight increase of the replication time (about 5%), when enabling both\n> > > track_commit_timestamp and detect_conflict.\n> > >\n> >\n> > The difference in TPS between case 1 and case 2 is quite visible.\n> > IIUC, the replication time difference is due to the logging of\n> > conflicts, right?\n> >\n\nRight, the major difference is due to the logging of conflicts.\n\n--\nThanks,\nNisha\n\n\n", "msg_date": "Tue, 13 Aug 2024 09:27:23 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Monday, August 12, 2024 7:41 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Fri, Aug 9, 2024 at 12:29 PM Zhijie Hou (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> > Here is the V12 patch that improved the log format as discussed.\r\n> >\r\n> \r\n> Review comments:\r\n\r\nThanks for the comments.\r\n\r\n> ===============\r\n> 1. The patch doesn't display the remote tuple for delete_differ case.\r\n> However, it shows the remote tuple correctly for update_differ. Is\r\n> there a reason for the same? See below messages:\r\n> \r\n> update_differ:\r\n> --------------\r\n> LOG: conflict detected on relation \"public.t1\": conflict=update_differ\r\n> DETAIL: Updating the row containing (c1)=(1) that was modified\r\n> locally in transaction 806 at 2024-08-12 11:48:14.970002+05:30.\r\n> Existing local tuple (1, 3, arun ); remote tuple (1, 3,\r\n> ajay ).\r\n> ...\r\n> \r\n> delete_differ\r\n> --------------\r\n> LOG: conflict detected on relation \"public.t1\": conflict=delete_differ\r\n> DETAIL: Deleting the row containing (c1)=(1) that was modified by\r\n> locally in transaction 809 at 2024-08-12 14:15:41.966467+05:30.\r\n> Existing local tuple (1, 3, arun ).\r\n> \r\n> Note this happens when the publisher table has a REPLICA IDENTITY FULL\r\n> and the subscriber table has primary_key. It would be better to keep\r\n> the messages consistent. One possibility is that we remove\r\n> key/old_tuple from the first line of the DETAIL message and display it\r\n> in the second line as Existing local tuple <local_tuple>; remote tuple\r\n> <..>; key <...>\r\n\r\nAgreed. I thought that in delete_differ/missing cases, the remote tuple is covered\r\nIn the key values in the first sentence. 
To be consistent, I have moved the column-values\r\nfrom the first sentence to the second sentence including the insert_exists conflict.\r\n\r\nThe new format looks like:\r\n\r\nLOG: xxx\r\nDETAIL: Key %s; existing local tuple %s; remote new tuple %s; replica identity %s\r\n\r\nThe Key will include the conflicting key for xxx_exists conflicts. And the replica identity part\r\nwill include the replica identity keys or the full tuple value in replica identity FULL case.\r\n\r\n> \r\n> 2. Similar to above, the remote tuple is not displayed in\r\n> delete_missing but displayed in updated_missing type of conflict. If\r\n> we follow the style mentioned in the previous point then the DETAIL\r\n> message: \"DETAIL: Did not find the row containing (c1)=(1) to be\r\n> updated.\" can also be changed to: \"DETAIL: Could not find the row to\r\n> be updated.\" followed by other detail.\r\n\r\nSame as above.\r\n\r\n\r\n> 3. The detail of insert_exists is confusing.\r\n> \r\n> ERROR: conflict detected on relation \"public.t1\": conflict=insert_exists\r\n> DETAIL: Key (c1)=(1) already exists in unique index \"t1_pkey\", which\r\n> was modified locally in transaction 802 at 2024-08-12\r\n> 11:11:31.252148+05:30.\r\n> \r\n> It sounds like the key value \"(c1)=(1)\" in the index is modified. How\r\n> about changing slightly as: \"Key (c1)=(1) already exists in unique\r\n> index \"t1_pkey\", modified locally in transaction 802 at 2024-08-12\r\n> 11:11:31.252148+05:30.\"? Feel free to propose if anything better comes\r\n> to your mind.\r\n\r\nThe suggested message looks good to me.\r\n\r\n> \r\n> 4.\r\n> if (localorigin == InvalidRepOriginId)\r\n> + appendStringInfo(&err_detail, _(\"Deleting the row containing %s that\r\n> was modified by locally in transaction %u at %s.\"),\r\n> + val_desc, localxmin, timestamptz_to_str(localts));\r\n> \r\n> Typo in the above message. /modified by locally/modified locally\r\n\r\nFixed.\r\n\r\n> \r\n> 5.\r\n> @@ -2661,6 +2662,29 @@ apply_handle_update_internal(ApplyExecutionData\r\n> *edata,\r\n> {\r\n> ...\r\n> found = FindReplTupleInLocalRel(edata, localrel,\r\n> &relmapentry->remoterel,\r\n> localindexoid,\r\n> remoteslot, &localslot);\r\n> ...\r\n> ...\r\n> +\r\n> + ReportApplyConflict(LOG, CT_UPDATE_DIFFER, relinfo,\r\n> + GetRelationIdentityOrPK(localrel),\r\n> \r\n> To find the tuple, we may have used an index other than Replica\r\n> Identity or PK (see IsIndexUsableForReplicaIdentityFull), but while\r\n> reporting conflict we don't consider such an index. I think the reason\r\n> is that such an index scan wouldn't have resulted in a unique tuple\r\n> and that is why we always compare the complete tuple in such cases. Is\r\n> that the reason? Can we write a comment to make it clear?\r\n\r\nAdded comments atop of ReportApplyConflict for the 'indexoid' parameter.\r\n\r\n> \r\n> 6.\r\n> void ReportApplyConflict(int elevel, ConflictType type,\r\n> + ResultRelInfo *relinfo, Oid indexoid,\r\n> + TransactionId localxmin,\r\n> + RepOriginId localorigin,\r\n> + TimestampTz localts,\r\n> + TupleTableSlot *searchslot,\r\n> + TupleTableSlot *localslot,\r\n> + TupleTableSlot *remoteslot,\r\n> + EState *estate);\r\n> \r\n> The prototype looks odd with pointers and non-pointer variables in\r\n> mixed order. How about arranging parameters in the following order:\r\n> Estate, ResultRelInfo, TupleTableSlot *searchslot, TupleTableSlot\r\n> *localslot, TupleTableSlot *remoteslot, Oid indexoid, TransactionId\r\n> localxmin, RepOriginId localorigin, TimestampTz localts?\r\n> \r\n> 7. 
Like above, check the parameters of other functions like\r\n> errdetail_apply_conflict, build_index_value_desc,\r\n> build_tuple_value_details, etc.\r\n\r\nChanged as suggested.\r\n\r\nHere is V13 patch set which addressed above comments.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Tue, 13 Aug 2024 04:39:15 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Tue, Aug 13, 2024 at 10:09 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is V13 patch set which addressed above comments.\n>\n\n1.\n+ReportApplyConflict(int elevel, ConflictType type, EState *estate,\n+ ResultRelInfo *relinfo,\n\nThe change looks better but it would still be better to keep elevel\nand type after relinfo. The same applies to other places as well.\n\n2.\n+ * The caller should ensure that the index with the OID 'indexoid' is locked.\n+ *\n+ * Refer to errdetail_apply_conflict for the content that will be included in\n+ * the DETAIL line.\n+ */\n+void\n+ReportApplyConflict(int elevel, ConflictType type, EState *estate,\n\nIs it possible to add an assert to ensure that the index is locked by\nthe caller?\n\n3.\n+static char *\n+build_tuple_value_details(EState *estate, ResultRelInfo *relinfo,\n+ TupleTableSlot *searchslot,\n+ TupleTableSlot *localslot,\n+ TupleTableSlot *remoteslot,\n+ Oid indexoid)\n{\n...\n...\n+ /*\n+ * If 'searchslot' is NULL and 'indexoid' is valid, it indicates that we\n+ * are reporting the unique constraint violation conflict, in which case\n+ * the conflicting key values will be reported.\n+ */\n+ if (OidIsValid(indexoid) && !searchslot)\n+ {\n...\n...\n}\n\nThis indirect way of inferencing constraint violation looks fragile.\nThe caller should pass the required information explicitly and then\nyou can have the required assertions here.\n\nApart from the above, I have made quite a few changes in the code\ncomments and LOG messages in the attached.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 13 Aug 2024 16:33:43 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "Hello!\n\n> I think this is an independent issue which can be discussed separately in\nthe\n> original thread[1], and I have replied to that thread.\n\nThanks! But it seems like this part is still relevant to the current thread:\n\n> It also seems possible that a conflict could be resolved by a concurrent\nupdate\n> before the call to CheckAndReportConflict, which means there's no\nguarantee\n> that the conflict will be reported correctly. Should we be concerned about\n> this?\n\nBest regards,\nMikhail.\n\nHello!> I think this is an independent issue which can be discussed separately in the> original thread[1], and I have replied to that thread.Thanks! But it seems like this part is still relevant to the current thread:> It also seems possible that a conflict could be resolved by a concurrent update> before the call to CheckAndReportConflict, which means there's no guarantee> that the conflict will be reported correctly. 
Should we be concerned about> this?Best regards,Mikhail.", "msg_date": "Tue, 13 Aug 2024 13:32:33 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Tuesday, August 13, 2024 7:33 PM Michail Nikolaev <[email protected]> wrote:\r\n\r\n> I think this is an independent issue which can be discussed separately in the\r\n> original thread[1], and I have replied to that thread.\r\n\r\n>Thanks! But it seems like this part is still relevant to the current thread:\r\n\r\n> > It also seems possible that a conflict could be resolved by a concurrent update\r\n> > before the call to CheckAndReportConflict, which means there's no guarantee\r\n> > that the conflict will be reported correctly. Should we be concerned about\r\n> > this?\r\n\r\nThis is as expected, and we have documented this in the code comments. We don't\r\nneed to report a conflict if the conflicting tuple has been removed or updated\r\ndue to concurrent transaction. The same is true if the transaction that\r\ninserted the conflicting tuple is rolled back before CheckAndReportConflict().\r\nWe don't consider such cases as a conflict.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Wed, 14 Aug 2024 02:31:10 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Tuesday, August 13, 2024 7:04 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Tue, Aug 13, 2024 at 10:09 AM Zhijie Hou (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> > Here is V13 patch set which addressed above comments.\r\n> >\r\n> \r\n> 1.\r\n> +ReportApplyConflict(int elevel, ConflictType type, EState *estate,\r\n> +ResultRelInfo *relinfo,\r\n> \r\n> The change looks better but it would still be better to keep elevel and type after\r\n> relinfo. The same applies to other places as well.\r\n\r\nChanged.\r\n\r\n> \r\n> 2.\r\n> + * The caller should ensure that the index with the OID 'indexoid' is locked.\r\n> + *\r\n> + * Refer to errdetail_apply_conflict for the content that will be\r\n> +included in\r\n> + * the DETAIL line.\r\n> + */\r\n> +void\r\n> +ReportApplyConflict(int elevel, ConflictType type, EState *estate,\r\n> \r\n> Is it possible to add an assert to ensure that the index is locked by the caller?\r\n\r\nAdded.\r\n\r\n> \r\n> 3.\r\n> +static char *\r\n> +build_tuple_value_details(EState *estate, ResultRelInfo *relinfo,\r\n> + TupleTableSlot *searchslot,\r\n> + TupleTableSlot *localslot,\r\n> + TupleTableSlot *remoteslot,\r\n> + Oid indexoid)\r\n> {\r\n> ...\r\n> ...\r\n> + /*\r\n> + * If 'searchslot' is NULL and 'indexoid' is valid, it indicates that\r\n> + we\r\n> + * are reporting the unique constraint violation conflict, in which\r\n> + case\r\n> + * the conflicting key values will be reported.\r\n> + */\r\n> + if (OidIsValid(indexoid) && !searchslot) {\r\n> ...\r\n> ...\r\n> }\r\n> \r\n> This indirect way of inferencing constraint violation looks fragile.\r\n> The caller should pass the required information explicitly and then you can\r\n> have the required assertions here.\r\n> \r\n> Apart from the above, I have made quite a few changes in the code comments\r\n> and LOG messages in the attached.\r\n\r\nThanks. 
I have addressed above comments and merged the changes.\r\n\r\nHere is the V14 patch.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 14 Aug 2024 02:35:34 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Aug 14, 2024 at 8:05 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is the V14 patch.\n>\n\nReview comments:\n1.\nReportApplyConflict()\n{\n...\n+ ereport(elevel,\n+ errcode(ERRCODE_INTEGRITY_CONSTRAINT_VIOLATION),\n+ errmsg(\"conflict detected on relation \\\"%s.%s\\\": conflict=%s\",\n+ get_namespace_name(RelationGetNamespace(localrel)),\n...\n\nIs it a good idea to use ERRCODE_INTEGRITY_CONSTRAINT_VIOLATION for\nall conflicts? I think it is okay to use for insert_exists and\nupdate_exists. The other error codes to consider for conflicts other\nthan insert_exists and update_exists are\nERRCODE_T_R_SERIALIZATION_FAILURE, ERRCODE_CARDINALITY_VIOLATION,\nERRCODE_NO_DATA, ERRCODE_NO_DATA_FOUND,\nERRCODE_TRIGGERED_DATA_CHANGE_VIOLATION,\nERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE.\n\nBTW, even for insert/update_exists, won't it better to use\nERRCODE_UNIQUE_VIOLATION?\n\n2.\n+build_tuple_value_details()\n{\n...\n+ if (searchslot)\n+ {\n+ /*\n+ * If a valid index OID is provided, build the replica identity key\n+ * value string. Otherwise, construct the full tuple value for REPLICA\n+ * IDENTITY FULL cases.\n+ */\n\nAFAICU, this can't happen for insert/update_exists. If so, we should\nadd an assert for those two conflict types.\n\n3.\n+build_tuple_value_details()\n{\n...\n+    /*\n+     * Although logical replication doesn't maintain the bitmap for the\n+     * columns being inserted, we still use it to create 'modifiedCols'\n+     * for consistency with other calls to ExecBuildSlotValueDescription.\n+     */\n+    modifiedCols = bms_union(ExecGetInsertedCols(relinfo, estate),\n+                 ExecGetUpdatedCols(relinfo, estate));\n+    desc = ExecBuildSlotValueDescription(relid, remoteslot, tupdesc,\n+                       modifiedCols, 64);\n\nCan we mention in the comments the reason for not including generated columns?\n\nApart from the above, the attached contains some cosmetic changes.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 14 Aug 2024 16:31:56 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "Hello, Hou!\n\n> This is as expected, and we have documented this in the code comments. We\ndon't\n> need to report a conflict if the conflicting tuple has been removed or\nupdated\n> due to concurrent transaction. The same is true if the transaction that\n> inserted the conflicting tuple is rolled back before\nCheckAndReportConflict().\n> We don't consider such cases as a conflict.\n\nThat seems a little bit strange to me.\n\n From the perspective of a user, I expect that if a change from publisher is\nnot applied - I need to know about it from the logs.\nBut in that case, I will not see any information about conflict in the logs\nin SOME cases. But in OTHER cases I will see it.\nHowever, in both cases the change from publisher was not applied.\nAnd these cases are just random and depend on the timing of race\nconditions. 
It is not something I am expecting from the database.\n\nMaybe it is better to report about the fact that event from publisher was\nnot applied because of conflict and then try to\nprovide additional information about the conflict itself?\n\nOr possibly in case we were unable to apply the event and not able to find\nthe conflict, we should retry the event processing?\nEspecially, this seems to be a good idea with future [1] in mind.\n\nOr we may add ExecInsertIndexTuples ability to return information about\nconflicts (or ItemPointer of conflicting tuple) and then\nreport about the conflict in a more consistent way?\n\nBest regards,\nMikhail.\n\n[1]: https://commitfest.postgresql.org/49/5021/\n\nHello, Hou!> This is as expected, and we have documented this in the code comments. We don't> need to report a conflict if the conflicting tuple has been removed or updated> due to concurrent transaction. The same is true if the transaction that> inserted the conflicting tuple is rolled back before CheckAndReportConflict().> We don't consider such cases as a conflict.That seems a little bit strange to me.From the perspective of a user, I expect that if a change from publisher is not applied - I need to know about it from the logs.But in that case, I will not see any information about conflict in the logs in SOME cases. But in OTHER cases I will see it.However, in both cases the change from publisher was not applied.And these cases are just random and depend on the timing of race conditions. It is not something I am expecting from the database.Maybe it is better to report about the fact that event from publisher was not applied because of conflict and then try toprovide additional information about the conflict itself?Or possibly in case we were unable to apply the event and not able to find the conflict, we should retry the event processing?Especially, this seems to be a good idea with future [1] in mind.Or we may add ExecInsertIndexTuples ability to return information about conflicts (or ItemPointer of conflicting tuple) and thenreport about the conflict in a more consistent way?Best regards,Mikhail.[1]: https://commitfest.postgresql.org/49/5021/", "msg_date": "Wed, 14 Aug 2024 16:15:10 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, August 14, 2024 7:02 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Wed, Aug 14, 2024 at 8:05 AM Zhijie Hou (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> > Here is the V14 patch.\r\n> >\r\n> \r\n> Review comments:\r\n> 1.\r\n> ReportApplyConflict()\r\n> {\r\n> ...\r\n> + ereport(elevel,\r\n> + errcode(ERRCODE_INTEGRITY_CONSTRAINT_VIOLATION),\r\n> + errmsg(\"conflict detected on relation \\\"%s.%s\\\": conflict=%s\",\r\n> + get_namespace_name(RelationGetNamespace(localrel)),\r\n> ...\r\n> \r\n> Is it a good idea to use ERRCODE_INTEGRITY_CONSTRAINT_VIOLATION for\r\n> all conflicts? I think it is okay to use for insert_exists and update_exists. The\r\n> other error codes to consider for conflicts other than insert_exists and\r\n> update_exists are ERRCODE_T_R_SERIALIZATION_FAILURE,\r\n> ERRCODE_CARDINALITY_VIOLATION, ERRCODE_NO_DATA,\r\n> ERRCODE_NO_DATA_FOUND,\r\n> ERRCODE_TRIGGERED_DATA_CHANGE_VIOLATION,\r\n> ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE.\r\n> \r\n> BTW, even for insert/update_exists, won't it better to use\r\n> ERRCODE_UNIQUE_VIOLATION ?\r\n\r\nAgreed. 
I changed the patch to use ERRCODE_UNIQUE_VIOLATION for\r\nInsert,update_exists, and ERRCODE_T_R_SERIALIZATION_FAILURE for\r\nother conflicts.\r\n\r\n> \r\n> Apart from the above, the attached contains some cosmetic changes.\r\n\r\nThanks. I have checked and merged the changes. Here is the V15 patch\r\nwhich addressed above comments.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Thu, 15 Aug 2024 07:17:50 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, August 14, 2024 10:15 PM Michail Nikolaev <[email protected]> wrote:\r\n> > This is as expected, and we have documented this in the code comments. We don't\r\n> > need to report a conflict if the conflicting tuple has been removed or updated\r\n> > due to concurrent transaction. The same is true if the transaction that\r\n> > inserted the conflicting tuple is rolled back before CheckAndReportConflict().\r\n> > We don't consider such cases as a conflict.\r\n> \r\n> That seems a little bit strange to me.\r\n> \r\n> From the perspective of a user, I expect that if a change from publisher is not\r\n> applied - I need to know about it from the logs. \r\n\r\nI think this is exactly the current behavior in the patch. In the race\r\ncondition we discussed, the insert will be applied if the conflicting tuple is\r\nremoved concurrently before CheckAndReportConflict().\r\n\r\n> But in that case, I will not see any information about conflict in the logs\r\n> in SOME cases. But in OTHER cases I will see it. However, in both cases the\r\n> change from publisher was not applied. And these cases are just random and\r\n> depend on the timing of race conditions. It is not something I am expecting\r\n> from the database.\r\n\r\nI think you might misunderstand the behavior of CheckAndReportConflict(), even\r\nif it found a conflict, it still inserts the tuple into the index which means\r\nthe change is anyway applied.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Thu, 15 Aug 2024 07:18:42 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Aug 14, 2024 at 7:45 PM Michail Nikolaev\n<[email protected]> wrote:\n>\n> > This is as expected, and we have documented this in the code comments. We don't\n> > need to report a conflict if the conflicting tuple has been removed or updated\n> > due to concurrent transaction. The same is true if the transaction that\n> > inserted the conflicting tuple is rolled back before CheckAndReportConflict().\n> > We don't consider such cases as a conflict.\n>\n> That seems a little bit strange to me.\n>\n> From the perspective of a user, I expect that if a change from publisher is not applied - I need to know about it from the logs.\n>\n\nIn the above conditions where a concurrent tuple insertion is removed\nor rolled back before CheckAndReportConflict, the tuple inserted by\napply will remain. There is no need to report anything in such cases\nas apply was successful.\n\n> But in that case, I will not see any information about conflict in the logs in SOME cases. But in OTHER cases I will see it.\n> However, in both cases the change from publisher was not applied.\n> And these cases are just random and depend on the timing of race conditions. 
It is not something I am expecting from the database.\n>\n> Maybe it is better to report about the fact that event from publisher was not applied because of conflict and then try to\n> provide additional information about the conflict itself?\n>\n> Or possibly in case we were unable to apply the event and not able to find the conflict, we should retry the event processing?\n>\n\nPer my understanding, we will apply or the conflict will be logged and\nretried where required (unique key violation).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Aug 2024 09:53:25 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Aug 15, 2024 at 12:47 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Thanks. I have checked and merged the changes. Here is the V15 patch\n> which addressed above comments.\n\nThanks for the patch. Please find few comments and queries:\n\n1)\nFor various conflicts , we have these in Logs:\nReplica identity (val1)=(30). (for RI on 1 column)\nReplica identity (pk, val1)=(200, 20). (for RI on 2 columns)\nReplica identity (40, 40, 11). (for RI full)\n\nShall we have have column list in last case as well, or can simply\nhave *full* keyword i.e. Replica identity full (40, 40, 11)\n\n\n2)\nFor toast column, we dump null in remote-tuple. I know that the toast\ncolumn is not sent in new-tuple from the publisher and thus the\nbehaviour, but it could be misleading for users. Perhaps document\nthis?\n\nSee 'null' in all these examples in remote tuple:\n\nupdate_differ With PK:\nLOG: conflict detected on relation \"public.t1\": conflict=update_differ\nDETAIL: Updating the row that was modified locally in transaction 831\nat 2024-08-16 09:59:26.566012+05:30.\nExisting local tuple (30, 30, 30,\nyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy...);\nremote tuple (30, 30, 300, null); replica identity (pk, val1)=(30,\n30).\n\nupdate_missing With PK:\nLOG: conflict detected on relation \"public.t1\": conflict=update_missing\nDETAIL: Could not find the row to be updated.\nRemote tuple (10, 10, 100, null); replica identity (pk, val1)=(10, 10).\n\n\nupdate_missing with RI full:\nLOG: conflict detected on relation \"public.t1\": conflict=update_missing\nDETAIL: Could not find the row to be updated.\nRemote tuple (20, 20, 2000, null); replica identity (20, 20, 10,\nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...).\n\n3)\nFor update_exists(), we dump:\nKey (a, b)=(2, 1)\n\nFor delete_missing, update_missing, update_differ, we dump:\nReplica identity (a, b)=(2, 1).\n\nFor update_exists as well, shouldn't we dump 'Replica identity'? Only\nfor insert case, it should be referred as 'Key'.\n\n\n4)\nWhy delete_missing is not having remote_tuple. 
Is it because we dump\nnew tuple as 'remote tuple', which is not relevant for delete_missing?\n2024-08-16 09:13:33.174 IST [419839] LOG: conflict detected on\nrelation \"public.t1\": conflict=delete_missing\n2024-08-16 09:13:33.174 IST [419839] DETAIL: Could not find the row\nto be deleted.\nReplica identity (val1)=(30).\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 16 Aug 2024 10:46:43 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Aug 16, 2024 at 10:46 AM shveta malik <[email protected]> wrote:\n>\n> 3)\n> For update_exists(), we dump:\n> Key (a, b)=(2, 1)\n>\n> For delete_missing, update_missing, update_differ, we dump:\n> Replica identity (a, b)=(2, 1).\n>\n> For update_exists as well, shouldn't we dump 'Replica identity'? Only\n> for insert case, it should be referred as 'Key'.\n>\n\nOn rethinking, is it because for update_exists case 'Key' dumped is\nnot the one used to search the row to be updated? Instead it is the\none used to search the conflicting row. Unlike update_differ, the row\nto be updated and the row currently conflicting will be different for\nupdate_exists case. I earlier thought that 'KEY' and 'Existing local\ntuple' dumped always belong to the row currently being\nupdated/deleted/inserted. But for 'update_eixsts', that is not the\ncase. We are dumping 'Existing local tuple' and 'Key' for the row\nwhich is conflicting and not the one being updated. Example:\n\nERROR: conflict detected on relation \"public.tab_1\": conflict=update_exists\nKey (a, b)=(2, 1); existing local tuple (2, 1); remote tuple (2, 1).\n\nOperations performed were:\nPub: insert into tab values (1,1);\nSub: insert into tab values (2,1);\nPub: update tab set a=2 where a=1;\n\nHere Key and local tuple are both 2,1 instead of 1,1. While replica\nidentity value (used to search original row) will be 1,1 only.\n\nIt may be slightly confusing or say tricky to understand when compared\nto other conflicts' LOGs. But not sure what better we can do here.\n\n--------------------\n\nOne more comment:\n\n5)\nFor insert/update_exists, the sequence is:\nKey .. ; existing local tuple .. ; remote tuple ...\n\nFor rest of the conflicts, sequence is:\n Existing local tuple .. ; remote tuple .. ; replica identity ..\n\nIs it intentional? Shall the 'Key' or 'Replica Identity' be the first\none to come in all conflicts?\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 16 Aug 2024 11:48:14 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Aug 16, 2024 at 10:46 AM shveta malik <[email protected]> wrote:\n>\n> On Thu, Aug 15, 2024 at 12:47 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Thanks. I have checked and merged the changes. Here is the V15 patch\n> > which addressed above comments.\n>\n> Thanks for the patch. Please find few comments and queries:\n>\n> 1)\n> For various conflicts , we have these in Logs:\n> Replica identity (val1)=(30). (for RI on 1 column)\n> Replica identity (pk, val1)=(200, 20). (for RI on 2 columns)\n> Replica identity (40, 40, 11). (for RI full)\n>\n> Shall we have have column list in last case as well, or can simply\n> have *full* keyword i.e. 
Replica identity full (40, 40, 11)\n>\n\nI would prefer 'full' instead of the entire column list as the\ncomplete column list could be long and may not much sense.\n\n>\n> 2)\n> For toast column, we dump null in remote-tuple. I know that the toast\n> column is not sent in new-tuple from the publisher and thus the\n> behaviour, but it could be misleading for users. Perhaps document\n> this?\n>\n\nAgreed that we should document this. I suggest that we can have a doc\npatch that explains the conflict logging format and in that, we can\nmention this behavior as well.\n\n> 3)\n> For update_exists(), we dump:\n> Key (a, b)=(2, 1)\n>\n> For delete_missing, update_missing, update_differ, we dump:\n> Replica identity (a, b)=(2, 1).\n>\n> For update_exists as well, shouldn't we dump 'Replica identity'? Only\n> for insert case, it should be referred as 'Key'.\n>\n\nI think update_exists is quite similar to insert_exists and both\nhappen due to unique key violation. So, it seems okay to display the\nKey for update_exists.\n\n>\n> 4)\n> Why delete_missing is not having remote_tuple. Is it because we dump\n> new tuple as 'remote tuple', which is not relevant for delete_missing?\n>\n\nRight.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Aug 2024 12:01:10 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Aug 16, 2024 at 11:48 AM shveta malik <[email protected]> wrote:\n>\n> On Fri, Aug 16, 2024 at 10:46 AM shveta malik <[email protected]> wrote:\n> >\n> > 3)\n> > For update_exists(), we dump:\n> > Key (a, b)=(2, 1)\n> >\n> > For delete_missing, update_missing, update_differ, we dump:\n> > Replica identity (a, b)=(2, 1).\n> >\n> > For update_exists as well, shouldn't we dump 'Replica identity'? Only\n> > for insert case, it should be referred as 'Key'.\n> >\n>\n> On rethinking, is it because for update_exists case 'Key' dumped is\n> not the one used to search the row to be updated? Instead it is the\n> one used to search the conflicting row. Unlike update_differ, the row\n> to be updated and the row currently conflicting will be different for\n> update_exists case. I earlier thought that 'KEY' and 'Existing local\n> tuple' dumped always belong to the row currently being\n> updated/deleted/inserted. But for 'update_eixsts', that is not the\n> case. We are dumping 'Existing local tuple' and 'Key' for the row\n> which is conflicting and not the one being updated. Example:\n>\n> ERROR: conflict detected on relation \"public.tab_1\": conflict=update_exists\n> Key (a, b)=(2, 1); existing local tuple (2, 1); remote tuple (2, 1).\n>\n> Operations performed were:\n> Pub: insert into tab values (1,1);\n> Sub: insert into tab values (2,1);\n> Pub: update tab set a=2 where a=1;\n>\n> Here Key and local tuple are both 2,1 instead of 1,1. While replica\n> identity value (used to search original row) will be 1,1 only.\n>\n> It may be slightly confusing or say tricky to understand when compared\n> to other conflicts' LOGs. But not sure what better we can do here.\n>\n\nThe update_exists behaves more like insert_exists as we detect that\nonly while inserting into index. It is also not clear to me if we can\ndo better than to clarify this in docs.\n\n> --------------------\n>\n> One more comment:\n>\n> 5)\n> For insert/update_exists, the sequence is:\n> Key .. ; existing local tuple .. ; remote tuple ...\n>\n> For rest of the conflicts, sequence is:\n> Existing local tuple .. 
; remote tuple .. ; replica identity ..\n>\n> Is it intentional? Shall the 'Key' or 'Replica Identity' be the first\n> one to come in all conflicts?\n>\n\nThis is worth considering but Replica Identity signifies the old tuple\nvalues, that is why it is probably kept at the end. But let's see what\nHou-San or others think about this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Aug 2024 12:19:14 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Fri, Aug 16, 2024 at 12:19 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Aug 16, 2024 at 11:48 AM shveta malik <[email protected]> wrote:\n> >\n> > On Fri, Aug 16, 2024 at 10:46 AM shveta malik <[email protected]> wrote:\n> > >\n> > > 3)\n> > > For update_exists(), we dump:\n> > > Key (a, b)=(2, 1)\n> > >\n> > > For delete_missing, update_missing, update_differ, we dump:\n> > > Replica identity (a, b)=(2, 1).\n> > >\n> > > For update_exists as well, shouldn't we dump 'Replica identity'? Only\n> > > for insert case, it should be referred as 'Key'.\n> > >\n> >\n> > On rethinking, is it because for update_exists case 'Key' dumped is\n> > not the one used to search the row to be updated? Instead it is the\n> > one used to search the conflicting row. Unlike update_differ, the row\n> > to be updated and the row currently conflicting will be different for\n> > update_exists case. I earlier thought that 'KEY' and 'Existing local\n> > tuple' dumped always belong to the row currently being\n> > updated/deleted/inserted. But for 'update_eixsts', that is not the\n> > case. We are dumping 'Existing local tuple' and 'Key' for the row\n> > which is conflicting and not the one being updated. Example:\n> >\n> > ERROR: conflict detected on relation \"public.tab_1\": conflict=update_exists\n> > Key (a, b)=(2, 1); existing local tuple (2, 1); remote tuple (2, 1).\n> >\n> > Operations performed were:\n> > Pub: insert into tab values (1,1);\n> > Sub: insert into tab values (2,1);\n> > Pub: update tab set a=2 where a=1;\n> >\n> > Here Key and local tuple are both 2,1 instead of 1,1. While replica\n> > identity value (used to search original row) will be 1,1 only.\n> >\n> > It may be slightly confusing or say tricky to understand when compared\n> > to other conflicts' LOGs. But not sure what better we can do here.\n> >\n>\n> The update_exists behaves more like insert_exists as we detect that\n> only while inserting into index. It is also not clear to me if we can\n> do better than to clarify this in docs.\n>\n\nInstead of 'existing local tuple', will it be slightly better to have\n'conflicting local tuple'?\n\nFew trivial comments:\n\n1)\nerrdetail_apply_conflict() header says:\n\n * 2. Display of conflicting key, existing local tuple, remote new tuple, and\n * replica identity columns, if any.\n\nWe may mention that existing *conflicting* local tuple.\n\nLooking at build_tuple_value_details(), the cases where we display\n'KEY 'and the ones where we display 'replica identity' are mutually\nexclusives (we have ASSERTs like that). Shall we add this info in\nheader that either Key or 'replica identity' is displayed. 
Or if we\ndon't want to make it mutually exclusive then update_exists is one\nsuch casw where we can have both Key and 'Replica Identity cols'.\n\n\n2)\nBuildIndexValueDescription() header comment says:\n\n * This is currently used\n * for building unique-constraint, exclusion-constraint and logical replication\n * tuple missing conflict error messages\n\nIs it being used only for 'tuple missing conflict' flow? I thought, it\nwill be hit for other flows as well.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 16 Aug 2024 14:54:41 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "Hello!\n\n> I think you might misunderstand the behavior of CheckAndReportConflict(),\neven\n> if it found a conflict, it still inserts the tuple into the index which\nmeans\n> the change is anyway applied.\n\n> In the above conditions where a concurrent tuple insertion is removed\n> or rolled back before CheckAndReportConflict, the tuple inserted by\n> apply will remain. There is no need to report anything in such cases\n> as apply was successful.\n\nYes, thank you for explanation, I was thinking UNIQUE_CHECK_PARTIAL works\ndifferently.\n\nBut now I think DirtySnapshot-related bug is a blocker for this feature\nthen, I'll reply into original after rechecking it.\n\nBest regards,\nMikhail.\n\nHello!> I think you might misunderstand the behavior of CheckAndReportConflict(), even> if it found a conflict, it still inserts the tuple into the index which means> the change is anyway applied.> In the above conditions where a concurrent tuple insertion is removed> or rolled back before CheckAndReportConflict, the tuple inserted by> apply will remain. There is no need to report anything in such cases> as apply was successful.Yes, thank you for explanation, I was thinking UNIQUE_CHECK_PARTIAL works differently.But now I think DirtySnapshot-related bug is a blocker for this feature then, I'll reply into original after rechecking it.Best regards,Mikhail.", "msg_date": "Fri, 16 Aug 2024 13:46:45 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Friday, August 16, 2024 2:31 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Fri, Aug 16, 2024 at 10:46 AM shveta malik <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Thu, Aug 15, 2024 at 12:47 PM Zhijie Hou (Fujitsu)\r\n> > <[email protected]> wrote:\r\n> > >\r\n> > > Thanks. I have checked and merged the changes. Here is the V15 patch\r\n> > > which addressed above comments.\r\n> >\r\n> > Thanks for the patch. Please find few comments and queries:\r\n> >\r\n> > 1)\r\n> > For various conflicts , we have these in Logs:\r\n> > Replica identity (val1)=(30). (for RI on 1 column)\r\n> > Replica identity (pk, val1)=(200, 20). (for RI on 2 columns)\r\n> > Replica identity (40, 40, 11). (for RI full)\r\n> >\r\n> > Shall we have have column list in last case as well, or can simply\r\n> > have *full* keyword i.e. 
Replica identity full (40, 40, 11)\r\n> >\r\n> \r\n> I would prefer 'full' instead of the entire column list as the complete column list\r\n> could be long and may not much sense.\r\n\r\n+1 and will change in V16 patch.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Sun, 18 Aug 2024 08:49:12 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Friday, August 16, 2024 2:49 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> \r\n> > --------------------\r\n> >\r\n> > One more comment:\r\n> >\r\n> > 5)\r\n> > For insert/update_exists, the sequence is:\r\n> > Key .. ; existing local tuple .. ; remote tuple ...\r\n> >\r\n> > For rest of the conflicts, sequence is:\r\n> > Existing local tuple .. ; remote tuple .. ; replica identity ..\r\n> >\r\n> > Is it intentional? Shall the 'Key' or 'Replica Identity' be the first\r\n> > one to come in all conflicts?\r\n> >\r\n> \r\n> This is worth considering but Replica Identity signifies the old tuple values,\r\n> that is why it is probably kept at the end. \r\n\r\nRight. I personally think the current position is ok.\r\n\r\nBest Regards,\r\nHou zj \r\n\r\n\r\n", "msg_date": "Sun, 18 Aug 2024 08:49:37 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Friday, August 16, 2024 5:25 PM shveta malik <[email protected]> wrote:\r\n> \r\n> On Fri, Aug 16, 2024 at 12:19 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Fri, Aug 16, 2024 at 11:48 AM shveta malik <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > On Fri, Aug 16, 2024 at 10:46 AM shveta malik <[email protected]>\r\n> wrote:\r\n> > > >\r\n> > > > 3)\r\n> > > > For update_exists(), we dump:\r\n> > > > Key (a, b)=(2, 1)\r\n> > > >\r\n> > > > For delete_missing, update_missing, update_differ, we dump:\r\n> > > > Replica identity (a, b)=(2, 1).\r\n> > > >\r\n> > > > For update_exists as well, shouldn't we dump 'Replica identity'?\r\n> > > > Only for insert case, it should be referred as 'Key'.\r\n> > > >\r\n> > >\r\n> > > On rethinking, is it because for update_exists case 'Key' dumped is\r\n> > > not the one used to search the row to be updated? Instead it is the\r\n> > > one used to search the conflicting row. Unlike update_differ, the\r\n> > > row to be updated and the row currently conflicting will be\r\n> > > different for update_exists case. I earlier thought that 'KEY' and\r\n> > > 'Existing local tuple' dumped always belong to the row currently\r\n> > > being updated/deleted/inserted. But for 'update_eixsts', that is not\r\n> > > the case. We are dumping 'Existing local tuple' and 'Key' for the\r\n> > > row which is conflicting and not the one being updated. Example:\r\n> > >\r\n> > > ERROR: conflict detected on relation \"public.tab_1\":\r\n> > > conflict=update_exists Key (a, b)=(2, 1); existing local tuple (2, 1); remote\r\n> tuple (2, 1).\r\n> > >\r\n> > > Operations performed were:\r\n> > > Pub: insert into tab values (1,1);\r\n> > > Sub: insert into tab values (2,1);\r\n> > > Pub: update tab set a=2 where a=1;\r\n> > >\r\n> > > Here Key and local tuple are both 2,1 instead of 1,1. While replica\r\n> > > identity value (used to search original row) will be 1,1 only.\r\n> > >\r\n> > > It may be slightly confusing or say tricky to understand when\r\n> > > compared to other conflicts' LOGs. 
But not sure what better we can do\r\n> here.\r\n> > >\r\n> >\r\n> > The update_exists behaves more like insert_exists as we detect that\r\n> > only while inserting into index. It is also not clear to me if we can\r\n> > do better than to clarify this in docs.\r\n> >\r\n> \r\n> Instead of 'existing local tuple', will it be slightly better to have 'conflicting local\r\n> tuple'?\r\n\r\nI am slightly not sure about adding one more variety to describe the \"existing\r\nlocal tuple\". I think we’d better use a consistent word. But if others feel otherwise,\r\nI can change it in next version.\r\n\r\n> \r\n> Few trivial comments:\r\n> \r\n> 1)\r\n> errdetail_apply_conflict() header says:\r\n> \r\n> * 2. Display of conflicting key, existing local tuple, remote new tuple, and\r\n> * replica identity columns, if any.\r\n> \r\n> We may mention that existing *conflicting* local tuple.\r\n\r\nLike above, I think that would duplicate the \"existing local tuple\" word.\r\n\r\n> \r\n> Looking at build_tuple_value_details(), the cases where we display 'KEY 'and\r\n> the ones where we display 'replica identity' are mutually exclusives (we have\r\n> ASSERTs like that). Shall we add this info in\r\n> header that either Key or 'replica identity' is displayed. Or if we\r\n> don't want to make it mutually exclusive then update_exists is one such casw\r\n> where we can have both Key and 'Replica Identity cols'.\r\n\r\nI think it’s fine to display replica identity for update_exists, so added.\r\n\r\n> \r\n> \r\n> 2)\r\n> BuildIndexValueDescription() header comment says:\r\n> \r\n> * This is currently used\r\n> * for building unique-constraint, exclusion-constraint and logical replication\r\n> * tuple missing conflict error messages\r\n> \r\n> Is it being used only for 'tuple missing conflict' flow? I thought, it will be hit for\r\n> other flows as well.\r\n\r\nRemoved the \"tuple missing\".\r\n\r\nAttach the V16 patch which addressed the comments we agreed on.\r\nI will add a doc patch to explain the log format after the 0001 is RFC.\r\n\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Sun, 18 Aug 2024 08:56:52 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Sun, Aug 18, 2024 at 2:27 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Attach the V16 patch which addressed the comments we agreed on.\n> I will add a doc patch to explain the log format after the 0001 is RFC.\n>\n\nThank You for addressing comments. Please see this scenario:\n\ncreate table tab1(pk int primary key, val1 int unique, val2 int);\n\npub: insert into tab1 values(1,1,1);\nsub: insert into tab1 values(2,2,3);\npub: update tab1 set val1=2 where pk=1;\n\nWrong 'replica identity' column logged? 
shouldn't it be pk?\n\nERROR: conflict detected on relation \"public.tab1\": conflict=update_exists\nDETAIL: Key already exists in unique index \"tab1_val1_key\", modified\nlocally in transaction 801 at 2024-08-19 08:50:47.974815+05:30.\nKey (val1)=(2); existing local tuple (2, 2, 3); remote tuple (1, 2,\n1); replica identity (val1)=(1).\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 19 Aug 2024 09:07:51 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Friday, August 16, 2024 7:47 PM Michail Nikolaev <[email protected]> wrote:\r\n> > I think you might misunderstand the behavior of CheckAndReportConflict(),\r\n> > even if it found a conflict, it still inserts the tuple into the index which\r\n> > means the change is anyway applied.\r\n> \r\n> > In the above conditions where a concurrent tuple insertion is removed or\r\n> > rolled back before CheckAndReportConflict, the tuple inserted by apply will\r\n> > remain. There is no need to report anything in such cases as apply was\r\n> > successful.\r\n> \r\n> Yes, thank you for explanation, I was thinking UNIQUE_CHECK_PARTIAL works\r\n> differently.\r\n> \r\n> But now I think DirtySnapshot-related bug is a blocker for this feature then,\r\n> I'll reply into original after rechecking it.\r\n\r\nBased on your response in the original thread[1], where you confirmed that the\r\ndirty snapshot bug does not impact the detection of insert_exists conflicts, I\r\nassume we are in agreement that this bug is not a blocker for the detection\r\nfeature. If you think otherwise, please feel free to let me know.\r\n\r\n[1] https://www.postgresql.org/message-id/CANtu0oh69b%2BVCiASX86dF_eY%3D9%3DA2RmMQ_%2B0%2BuxZ_Zir%2BoNhhw%40mail.gmail.com\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Mon, 19 Aug 2024 03:38:47 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 19, 2024 at 9:07 AM shveta malik <[email protected]> wrote:\n>\n> On Sun, Aug 18, 2024 at 2:27 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Attach the V16 patch which addressed the comments we agreed on.\n> > I will add a doc patch to explain the log format after the 0001 is RFC.\n> >\n>\n> Thank You for addressing comments. Please see this scenario:\n>\n> create table tab1(pk int primary key, val1 int unique, val2 int);\n>\n> pub: insert into tab1 values(1,1,1);\n> sub: insert into tab1 values(2,2,3);\n> pub: update tab1 set val1=2 where pk=1;\n>\n> Wrong 'replica identity' column logged? 
shouldn't it be pk?\n>\n> ERROR: conflict detected on relation \"public.tab1\": conflict=update_exists\n> DETAIL: Key already exists in unique index \"tab1_val1_key\", modified\n> locally in transaction 801 at 2024-08-19 08:50:47.974815+05:30.\n> Key (val1)=(2); existing local tuple (2, 2, 3); remote tuple (1, 2,\n> 1); replica identity (val1)=(1).\n\nApart from this one, I have no further comments on v16.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 19 Aug 2024 10:19:43 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 19, 2024 at 9:08 AM shveta malik <[email protected]> wrote:\n>\n> On Sun, Aug 18, 2024 at 2:27 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Attach the V16 patch which addressed the comments we agreed on.\n> > I will add a doc patch to explain the log format after the 0001 is RFC.\n> >\n>\n> Thank You for addressing comments. Please see this scenario:\n>\n> create table tab1(pk int primary key, val1 int unique, val2 int);\n>\n> pub: insert into tab1 values(1,1,1);\n> sub: insert into tab1 values(2,2,3);\n> pub: update tab1 set val1=2 where pk=1;\n>\n> Wrong 'replica identity' column logged? shouldn't it be pk?\n>\n> ERROR: conflict detected on relation \"public.tab1\": conflict=update_exists\n> DETAIL: Key already exists in unique index \"tab1_val1_key\", modified\n> locally in transaction 801 at 2024-08-19 08:50:47.974815+05:30.\n> Key (val1)=(2); existing local tuple (2, 2, 3); remote tuple (1, 2,\n> 1); replica identity (val1)=(1).\n>\n\nThe docs say that by default replica identity is primary_key [1] (see\nREPLICA IDENTITY), [2] (see pg_class.relreplident). So, using the same\nformat to display PK seems reasonable. I don't think adding additional\ncode to distinguish these two cases in the LOG message is worth it. We\ncan always change such things later if that is what users and or\nothers prefer.\n\n[1] - https://www.postgresql.org/docs/devel/sql-altertable.html\n[2] - https://www.postgresql.org/docs/devel/catalog-pg-class.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Aug 2024 11:37:44 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 19, 2024 at 11:37 AM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Aug 19, 2024 at 9:08 AM shveta malik <[email protected]> wrote:\n> >\n> > On Sun, Aug 18, 2024 at 2:27 PM Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > Attach the V16 patch which addressed the comments we agreed on.\n> > > I will add a doc patch to explain the log format after the 0001 is RFC.\n> > >\n> >\n> > Thank You for addressing comments. Please see this scenario:\n> >\n> > create table tab1(pk int primary key, val1 int unique, val2 int);\n> >\n> > pub: insert into tab1 values(1,1,1);\n> > sub: insert into tab1 values(2,2,3);\n> > pub: update tab1 set val1=2 where pk=1;\n> >\n> > Wrong 'replica identity' column logged? 
shouldn't it be pk?\n> >\n> > ERROR: conflict detected on relation \"public.tab1\": conflict=update_exists\n> > DETAIL: Key already exists in unique index \"tab1_val1_key\", modified\n> > locally in transaction 801 at 2024-08-19 08:50:47.974815+05:30.\n> > Key (val1)=(2); existing local tuple (2, 2, 3); remote tuple (1, 2,\n> > 1); replica identity (val1)=(1).\n> >\n>\n> The docs say that by default replica identity is primary_key [1] (see\n> REPLICA IDENTITY),\n\nyes, I agree. But here the importance of dumping it was to know the\nvalue of RI as well which is being used as an identification of row\nbeing updated rather than row being conflicted. Value is logged\ncorrectly.\n\n>[2] (see pg_class.relreplident). So, using the same\n> format to display PK seems reasonable. I don't think adding additional\n> code to distinguish these two cases in the LOG message is worth it.\n\nI don't see any additional code added for this case except getting an\nexisting logic being used for update_exists.\n\n>We\n> can always change such things later if that is what users and or\n> others prefer.\n>\n\nSure, if fixing this issue (where we are reporting the wrong col name)\nneeds additional logic, then I am okay to skip it for the time being.\nWe can address later if/when needed.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 19 Aug 2024 11:53:50 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 19, 2024 at 11:54 AM shveta malik <[email protected]> wrote:\n>\n> On Mon, Aug 19, 2024 at 11:37 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Aug 19, 2024 at 9:08 AM shveta malik <[email protected]> wrote:\n> > >\n> > > On Sun, Aug 18, 2024 at 2:27 PM Zhijie Hou (Fujitsu)\n> > > <[email protected]> wrote:\n> > > >\n> > > > Attach the V16 patch which addressed the comments we agreed on.\n> > > > I will add a doc patch to explain the log format after the 0001 is RFC.\n> > > >\n> > >\n> > > Thank You for addressing comments. Please see this scenario:\n> > >\n> > > create table tab1(pk int primary key, val1 int unique, val2 int);\n> > >\n> > > pub: insert into tab1 values(1,1,1);\n> > > sub: insert into tab1 values(2,2,3);\n> > > pub: update tab1 set val1=2 where pk=1;\n> > >\n> > > Wrong 'replica identity' column logged? shouldn't it be pk?\n> > >\n> > > ERROR: conflict detected on relation \"public.tab1\": conflict=update_exists\n> > > DETAIL: Key already exists in unique index \"tab1_val1_key\", modified\n> > > locally in transaction 801 at 2024-08-19 08:50:47.974815+05:30.\n> > > Key (val1)=(2); existing local tuple (2, 2, 3); remote tuple (1, 2,\n> > > 1); replica identity (val1)=(1).\n> > >\n> >\n> > The docs say that by default replica identity is primary_key [1] (see\n> > REPLICA IDENTITY),\n>\n> yes, I agree. But here the importance of dumping it was to know the\n> value of RI as well which is being used as an identification of row\n> being updated rather than row being conflicted. Value is logged\n> correctly.\n>\n\nAgreed, sorry, I misunderstood the problem reported. 
I thought the\nsuggestion was to use 'primary key' instead of 'replica identity' but\nyou pointed out that the column used in 'replica identity' was wrong.\nWe should fix this one.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Aug 2024 12:09:43 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Monday, August 19, 2024 2:40 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Mon, Aug 19, 2024 at 11:54 AM shveta malik <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Mon, Aug 19, 2024 at 11:37 AM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > On Mon, Aug 19, 2024 at 9:08 AM shveta malik <[email protected]>\r\n> wrote:\r\n> > > >\r\n> > > > On Sun, Aug 18, 2024 at 2:27 PM Zhijie Hou (Fujitsu)\r\n> > > > <[email protected]> wrote:\r\n> > > > >\r\n> > > > > Attach the V16 patch which addressed the comments we agreed on.\r\n> > > > > I will add a doc patch to explain the log format after the 0001 is RFC.\r\n> > > > >\r\n> > > >\r\n> > > > Thank You for addressing comments. Please see this scenario:\r\n> > > >\r\n> > > > create table tab1(pk int primary key, val1 int unique, val2 int);\r\n> > > >\r\n> > > > pub: insert into tab1 values(1,1,1);\r\n> > > > sub: insert into tab1 values(2,2,3);\r\n> > > > pub: update tab1 set val1=2 where pk=1;\r\n> > > >\r\n> > > > Wrong 'replica identity' column logged? shouldn't it be pk?\r\n> > > >\r\n> > > > ERROR: conflict detected on relation \"public.tab1\":\r\n> > > > conflict=update_exists\r\n> > > > DETAIL: Key already exists in unique index \"tab1_val1_key\",\r\n> > > > modified locally in transaction 801 at 2024-08-19 08:50:47.974815+05:30.\r\n> > > > Key (val1)=(2); existing local tuple (2, 2, 3); remote tuple (1,\r\n> > > > 2, 1); replica identity (val1)=(1).\r\n> > > >\r\n> > >\r\n> > > The docs say that by default replica identity is primary_key [1]\r\n> > > (see REPLICA IDENTITY),\r\n> >\r\n> > yes, I agree. But here the importance of dumping it was to know the\r\n> > value of RI as well which is being used as an identification of row\r\n> > being updated rather than row being conflicted. Value is logged\r\n> > correctly.\r\n> >\r\n> \r\n> Agreed, sorry, I misunderstood the problem reported. I thought the suggestion\r\n> was to use 'primary key' instead of 'replica identity' but you pointed out that the\r\n> column used in 'replica identity' was wrong.\r\n> We should fix this one.\r\n\r\nThanks for reporting the bug. 
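For reference, the report can be reproduced end to end with a sketch along
these lines (this assumes two local clusters with wal_level = logical; the
port, connection string and object names are only illustrative):

    -- On both publisher and subscriber:
    CREATE TABLE tab1 (pk int PRIMARY KEY, val1 int UNIQUE, val2 int);

    -- On the publisher:
    CREATE PUBLICATION pub FOR TABLE tab1;
    INSERT INTO tab1 VALUES (1, 1, 1);

    -- On the subscriber (wait for the initial sync to copy (1, 1, 1)):
    CREATE SUBSCRIPTION sub
        CONNECTION 'host=localhost port=5432 dbname=postgres'
        PUBLICATION pub;
    INSERT INTO tab1 VALUES (2, 2, 3);

    -- On the publisher again; applying this UPDATE on the subscriber raises
    -- update_exists, because the new val1 = 2 collides with the locally
    -- inserted row (2, 2, 3):
    UPDATE tab1 SET val1 = 2 WHERE pk = 1;

As you pointed out, the replica identity part of the DETAIL line should name
the key actually used to locate the row being updated, i.e. the primary key
pk, rather than val1.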
I have fixed it and ran pgindent in V17 patch.\r\nI also adjusted few comments and fixed a typo.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Mon, 19 Aug 2024 07:02:19 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 19, 2024 at 12:09 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Aug 19, 2024 at 11:54 AM shveta malik <[email protected]> wrote:\n> >\n> > On Mon, Aug 19, 2024 at 11:37 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Mon, Aug 19, 2024 at 9:08 AM shveta malik <[email protected]> wrote:\n> > > >\n> > > > On Sun, Aug 18, 2024 at 2:27 PM Zhijie Hou (Fujitsu)\n> > > > <[email protected]> wrote:\n> > > > >\n> > > > > Attach the V16 patch which addressed the comments we agreed on.\n> > > > > I will add a doc patch to explain the log format after the 0001 is RFC.\n> > > > >\n> > > >\n> > > > Thank You for addressing comments. Please see this scenario:\n> > > >\n> > > > create table tab1(pk int primary key, val1 int unique, val2 int);\n> > > >\n> > > > pub: insert into tab1 values(1,1,1);\n> > > > sub: insert into tab1 values(2,2,3);\n> > > > pub: update tab1 set val1=2 where pk=1;\n> > > >\n> > > > Wrong 'replica identity' column logged? shouldn't it be pk?\n> > > >\n> > > > ERROR: conflict detected on relation \"public.tab1\": conflict=update_exists\n> > > > DETAIL: Key already exists in unique index \"tab1_val1_key\", modified\n> > > > locally in transaction 801 at 2024-08-19 08:50:47.974815+05:30.\n> > > > Key (val1)=(2); existing local tuple (2, 2, 3); remote tuple (1, 2,\n> > > > 1); replica identity (val1)=(1).\n> > > >\n> > >\n> > > The docs say that by default replica identity is primary_key [1] (see\n> > > REPLICA IDENTITY),\n> >\n> > yes, I agree. But here the importance of dumping it was to know the\n> > value of RI as well which is being used as an identification of row\n> > being updated rather than row being conflicted. Value is logged\n> > correctly.\n> >\n>\n> Agreed, sorry, I misunderstood the problem reported.\n\nno problem.\n\n>I thought the\n> suggestion was to use 'primary key' instead of 'replica identity' but\n> you pointed out that the column used in 'replica identity' was wrong.\n> We should fix this one.\n>\n\nYes, that is what I pointed out. But let me remind you that this logic\nto display both 'Key' and 'RI' is done in the latest patch. Earlier\neither 'Key' or 'RI' was logged. But since for 'update_exists', both\nmake sense, thus I suggested to dump both. 'RI' column is dumped\ncorrectly in all other cases, except this new one. So if fixing this\nwrong column name for update_exists adds more complexity, then I am\nokay with skipping the 'RI' dump for this case. We’re fine with just\n'Key' for now, and we can address this later.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 19 Aug 2024 12:33:04 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 19, 2024 at 12:32 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n>\n> Thanks for reporting the bug. I have fixed it and ran pgindent in V17 patch.\n> I also adjusted few comments and fixed a typo.\n>\n\nThanks for the patch. 
Re-tested it, all scenarios seem to work well now.\n\nI see that this version has new header inclusion in conflict.c, due to\nwhich I think \"catalog/index.h\" inclusion is now redundant. Please\nrecheck and remove if so.\nAlso, there are few long lines in conflict.c (see line 408, 410).\n\nRest looks good.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 19 Aug 2024 15:03:03 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 19, 2024 at 3:03 PM shveta malik <[email protected]> wrote:\n>\n> On Mon, Aug 19, 2024 at 12:32 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> >\n> > Thanks for reporting the bug. I have fixed it and ran pgindent in V17 patch.\n> > I also adjusted few comments and fixed a typo.\n> >\n>\n> Thanks for the patch. Re-tested it, all scenarios seem to work well now.\n>\n> I see that this version has new header inclusion in conflict.c, due to\n> which I think \"catalog/index.h\" inclusion is now redundant. Please\n> recheck and remove if so.\n>\n\nThis is an extra include, so removed in the attached. Additionally, I\nhave modified a few comments and commit message.\n\n> Also, there are few long lines in conflict.c (see line 408, 410).\n>\n\nI have left these as it is because pgindent doesn't complain about them.\n\n> Rest looks good.\n>\n\nThanks for the review and testing.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 19 Aug 2024 16:16:37 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 19, 2024 at 4:16 PM Amit Kapila <[email protected]> wrote:\n>\n> > Rest looks good.\n> >\n>\n> Thanks for the review and testing.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 20 Aug 2024 10:06:35 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Tuesday, August 20, 2024 12:37 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Mon, Aug 19, 2024 at 4:16 PM Amit Kapila <[email protected]>\r\n> Pushed.\r\n\r\nThanks for pushing.\r\n\r\nHere are the remaining patches.\r\n\r\n0001 adds additional doc to explain the log format.\r\n0002 collects statistics about conflicts in logical replication.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Tue, 20 Aug 2024 11:15:24 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On 8/6/24 4:15 AM, Zhijie Hou (Fujitsu) wrote:\r\n\r\n> Thanks for the idea! I thought about few styles based on the suggested format,\r\n> what do you think about the following ?\r\n\r\nThanks for proposing formats. Before commenting on the specifics, I do \r\nwant to ensure that we're thinking about the following for the log formats:\r\n\r\n1. For the PostgreSQL logs, we'll want to ensure we do it in a way \r\nthat's as convenient as possible for people to parse the context from \r\nscripts.\r\n\r\n2. Semi-related, I still think the simplest way to surface this info to \r\na user is through a \"pg_stat_...\" view or similar catalog mechanism (I'm \r\nless opinionated on the how outside of we should make it available via SQL).\r\n\r\n3. 
We should ensure we're able to convey to the user these details about \r\nthe conflict:\r\n\r\n* What time it occurred on the local server (which we'd have in the logs)\r\n* What kind of conflict it is\r\n* What table the conflict occurred on\r\n* What action caused the conflict\r\n* How the conflict was resolved (ability to include source/origin info)\r\n\r\nWith that said:\r\n\r\n> ---\r\n> Version 1\r\n> ---\r\n> LOG: CONFLICT: insert_exists; DESCRIPTION: remote INSERT violates unique constraint \"uniqueindex\" on relation \"public.test\".\r\n> DETAIL: Existing local tuple (a, b, c) = (2, 3, 4) xid=123,origin=\"pub\",timestamp=xxx; remote tuple (a, b, c) = (2, 4, 5).\r\n> \r\n> LOG: CONFLICT: update_differ; DESCRIPTION: updating a row with key (a, b) = (2, 4) on relation \"public.test\" was modified by a different source.\r\n> DETAIL: Existing local tuple (a, b, c) = (2, 3, 4) xid=123,origin=\"pub\",timestamp=xxx; remote tuple (a, b, c) = (2, 4, 5).\r\n> \r\n> LOG: CONFLICT: update_missing; DESCRIPTION: did not find the row with key (a, b) = (2, 4) on \"public.test\" to update.\r\n> DETAIL: remote tuple (a, b, c) = (2, 4, 5).\r\n\r\nI agree with Amit's downthread comment, I think this tries to put much \r\ntoo much info on the LOG line, and it could be challenging to parse.\r\n\r\n> ---\r\n> Version 2\r\n> It moves most the details to the DETAIL line compared to version 1.\r\n> ---\r\n> LOG: CONFLICT: insert_exists on relation \"public.test\".\r\n> DETAIL: Key (a)=(1) already exists in unique index \"uniqueindex\", which was modified by origin \"pub\" in transaction 123 at 2024xxx;\r\n> \t\tExisting local tuple (a, b, c) = (1, 3, 4), remote tuple (a, b, c) = (1, 4, 5).\r\n> \r\n> LOG: CONFLICT: update_differ on relation \"public.test\".\r\n> DETAIL: Updating a row with key (a, b) = (2, 4) that was modified by a different origin \"pub\" in transaction 123 at 2024xxx;\r\n> \t\tExisting local tuple (a, b, c) = (2, 3, 4); remote tuple (a, b, c) = (2, 4, 5).\r\n> \r\n> LOG: CONFLICT: update_missing on relation \"public.test\".\r\n> DETAIL: Did not find the row with key (a, b) = (2, 4) to update;\r\n> \t\tRemote tuple (a, b, c) = (2, 4, 5).\r\n\r\nI like the brevity of the LOG line, while it still provides a lot of \r\ninfo. I think we should choose the words/punctuation in the DETAIL \r\ncarefully so it's easy for scripts to ultimately parse (even if we \r\nexpose info in pg_stat, people may need to refer to older log files to \r\nunderstand how a conflict resolved).\r\n\r\n> ---\r\n> Version 3\r\n> It is similar to the style in the current patch, I only added the key value for\r\n> differ and missing conflicts without outputting the complete\r\n> remote/local tuple value.\r\n> ---\r\n> LOG: conflict insert_exists detected on relation \"public.test\".\r\n> DETAIL: Key (a)=(1) already exists in unique index \"uniqueindex\", which was modified by origin \"pub\" in transaction 123 at 2024xxx.\r\n> \r\n> LOG: conflict update_differ detected on relation \"public.test\".\r\n> DETAIL: Updating a row with key (a, b) = (2, 4), which was modified by a different origin \"pub\" in transaction 123 at 2024xxx.\r\n> \r\n> LOG: conflict update_missing detected on relation \"public.test\"\r\n> DETAIL: Did not find the row with key (a, b) = (2, 4) to update.\r\n\r\nI think outputting the remote/local tuple value may be a parameter we \r\nneed to think about (with the desired outcome of trying to avoid another \r\nparameter). 
I have a concern about unintentionally leaking data (and I \r\nunderstand that someone with access to the logs does have a broad \r\nability to view data); I'm less concerned about the size of the logs, as \r\nconflicts in a well-designed system should be rare (though a conflict \r\nstorm could fill up the logs, likely there are other issues to content \r\nwith at that point).\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Tue, 20 Aug 2024 21:32:30 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, August 21, 2024 9:33 AM Jonathan S. Katz <[email protected]> wrote:\r\n> On 8/6/24 4:15 AM, Zhijie Hou (Fujitsu) wrote:\r\n> \r\n> > Thanks for the idea! I thought about few styles based on the suggested\r\n> > format, what do you think about the following ?\r\n> \r\n> Thanks for proposing formats. Before commenting on the specifics, I do want to\r\n> ensure that we're thinking about the following for the log formats:\r\n> \r\n> 1. For the PostgreSQL logs, we'll want to ensure we do it in a way that's as\r\n> convenient as possible for people to parse the context from scripts.\r\n\r\nYeah. And I personally think the current log format is OK for parsing purposes.\r\n\r\n> \r\n> 2. Semi-related, I still think the simplest way to surface this info to a user is\r\n> through a \"pg_stat_...\" view or similar catalog mechanism (I'm less opinionated\r\n> on the how outside of we should make it available via SQL).\r\n\r\nWe have a patch(v19-0002) in this thread to collect conflict stats and display\r\nthem in the view, and the patch is under review.\r\n\r\nStoring it into a catalog needs more analysis as we may need to add addition\r\nlogic to clean up old conflict data in that catalog table. I think we can\r\nconsider it as a future improvement.\r\n\r\n> \r\n> 3. We should ensure we're able to convey to the user these details about the\r\n> conflict:\r\n> \r\n> * What time it occurred on the local server (which we'd have in the logs)\r\n> * What kind of conflict it is\r\n> * What table the conflict occurred on\r\n> * What action caused the conflict\r\n> * How the conflict was resolved (ability to include source/origin info)\r\n\r\nI think all above are already covered in the current conflict log. Except that\r\nwe have not support resolving the conflict, so we don't log the resolution.\r\n\r\n> \r\n> \r\n> I think outputting the remote/local tuple value may be a parameter we need to\r\n> think about (with the desired outcome of trying to avoid another parameter). I\r\n> have a concern about unintentionally leaking data (and I understand that\r\n> someone with access to the logs does have a broad ability to view data); I'm\r\n> less concerned about the size of the logs, as conflicts in a well-designed\r\n> system should be rare (though a conflict storm could fill up the logs, likely there\r\n> are other issues to content with at that point).\r\n\r\nWe could use an option to control, but the tuple value is already output in some\r\nexisting cases (e.g. partition check, table constraints check, view with check\r\nconstraints, unique violation), and it would test the current user's\r\nprivileges to decide whether to output the tuple or not. 
So, I think it's OK\r\nto display the tuple for conflicts.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Wed, 21 Aug 2024 03:05:47 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Tue, Aug 20, 2024 at 4:45 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here are the remaining patches.\n>\n> 0001 adds additional doc to explain the log format.\n\nThanks for the patch. Please find few comments on 001:\n\n1)\n+<literal>Key</literal> (column_name, ...)=(column_name, ...);\nexisting local tuple (column_name, ...)=(column_name, ...); remote\ntuple (column_name, ...)=(column_name, ...); replica identity\n(column_name, ...)=(column_name, ...).\n\n-- column_name --> column_value everywhere in right to '='?\n\n2)\n+ Note that for an\n+ update operation, the column value of the new tuple may be NULL if the\n+ value is unchanged.\n\n-- Shall we mention the toast value here? In all other cases, we get a\nfull new tuple.\n\n3)\n+ The key section in the second sentence of the DETAIL line\nincludes the key values of the tuple that already exists in the local\nrelation for insert_exists or update_exists conflicts.\n\n-- Shall we mention the key is the column value violating a unique\nconstraint? Something like this:\nThe key section in the second sentence of the DETAIL line includes the\nkey values of the local tuple that violates unique constraint for\ninsert_exists or update_exists conflicts.\n\n4)\nShall we give an example LOG message in the end?\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 21 Aug 2024 09:09:32 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "Dear Hou,\r\n\r\nThanks for updating the patch! I think the patch is mostly good.\r\nHere are minor comments.\r\n\r\n0001:\r\n\r\n01.\r\n```\r\n+<screen>\r\n+LOG: conflict detected on relation \"schemaname.tablename\": conflict=<literal>conflict_type</literal>\r\n+DETAIL: <literal>detailed explaination</literal>.\r\n...\r\n+</screen>\r\n```\r\n\r\nI don't think the label is correct. <screen> label should be used for the actual\r\nexample output, not for explaining the format. I checked several files like\r\namcheck.sgml and auto-exlain.sgml and func.sgml and they seemed to follow the\r\nrule.\r\n\r\n02.\r\n```\r\n+ <para>\r\n+ The <literal>key</literal> section in the second sentence of the\r\n...\r\n```\r\n\r\nI preferred that section name is quoted.\r\n\r\n0002:\r\n\r\n03.\r\n```\r\n-#include \"replication/logicalrelation.h\"\r\n```\r\n\r\nJust to confirm - this removal is not related with the feature but just the\r\nimprovement, right?\r\n\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n", "msg_date": "Wed, 21 Aug 2024 05:30:43 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Tue, Aug 20, 2024 at 4:45 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here are the remaining patches.\n>\n> 0001 adds additional doc to explain the log format.\n> 0002 collects statistics about conflicts in logical replication.\n>\n\n0002 has not changed since I last reviewed it. It seems all my old\ncomments are addressed. 
One trivial thing:\n\nI feel in doc, we shall mention that any of the conflicts resulting in\napply-error will be counted in both apply_error_count and the\ncorresponding <conflict>_count. What do you think?\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 21 Aug 2024 11:45:37 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "Here are some review comments for the v19-0001 docs patch.\n\nThe content seemed reasonable, but IMO it should be presented quite differently.\n\n~~~~\n\n1. Use sub-sections\n\nI expect this logical replication \"Conflicts\" section is going to\nevolve into something much bigger. Surely, it's not going to be one\nhumongous page of details, so it will be a section with lots of\nsubsections like all the other in Chapter 29.\n\nIMO, you should be writing the docs in that kind of structure from the\nbeginning.\n\nFor example, I'm thinking something like below (this is just an\nexample - surely lots more subsections will be needed for this topic):\n\n29.6 Conflicts\n29.6.1. Conflict types\n29.6.2. Logging format\n29.6.3. Examples\n\nSpecifically, this v19-0001 patch information should be put into a\nsubsection like the 29.6.2 shown above.\n\n~~~\n\n2. Markup\n\n+<screen>\n+LOG: conflict detected on relation \"schemaname.tablename\":\nconflict=<literal>conflict_type</literal>\n+DETAIL: <literal>detailed explaination</literal>.\n+<literal>Key</literal> (column_name, ...)=(column_name, ...);\n<literal>existing local tuple</literal> (column_name,\n...)=(column_name, ...); <literal>remote tuple</literal> (column_name,\n...)=(column_name, ...); <literal>replica identity</literal>\n(column_name, ...)=(column_name, ...).\n+</screen>\n\nIMO this should be using markup more like the SQL syntax references.\n- e.g. I suggest <synopsis> instead of <screen>\n- e.g. I suggest all the substitution parameters (e.g. detailed\nexplanation, conflict_type, column_name, ...) in the log should use\n<replaceable class=\"parameter\"> and use those markups again later in\nthese docs instead of <literal>\n\n~\n\nnit - typo /explaination/explanation/\n\n~\n\nnit - The amount of scrolling needed makes this LOG format too hard to\nsee. Try to wrap it better so it can fit without being so wide.\n\n~~~\n\n3. Restructure the list\n\n+ <itemizedlist>\n+ <listitem>\n\nI suggest restructuring all this to use a nested list like:\n\nLOG\n- conflict_type\nDETAIL\n- detailed_explanation\n- key\n- existing_local_tuple\n- remote_tuple\n- replica_identity\n\nDoing this means you can remove a great deal of the unnecessary junk\nwords like \"of the first sentence in the DETAIL\", and \"sentence of the\nDETAIL line\" etc. 
The result will be much less text but much simpler\ntext too.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 21 Aug 2024 16:44:42 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, August 21, 2024 11:40 AM shveta malik <[email protected]> wrote:\r\n> \r\n> On Tue, Aug 20, 2024 at 4:45 PM Zhijie Hou (Fujitsu) <[email protected]>\r\n> wrote:> \r\n\r\nThanks for the comments!\r\n\r\n> 4)\r\n> Shall we give an example LOG message in the end?\r\n\r\nI feel the current insert_exists log in conflict section seems\r\nsufficient as an example to show the real conflict log.\r\n\r\nOther comments look good, and I will address.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Wed, 21 Aug 2024 09:32:23 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, August 21, 2024 1:31 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> \r\n> Dear Hou,\r\n> \r\n> Thanks for updating the patch! I think the patch is mostly good.\r\n> Here are minor comments.\r\n\r\nThanks for the comments !\r\n\r\n> \r\n> 02.\r\n> ```\r\n> + <para>\r\n> + The <literal>key</literal> section in the second sentence of the\r\n> ...\r\n> ```\r\n> \r\n> I preferred that section name is quoted.\r\n\r\nI thought about this. But I feel the 'key' here is not a real string, so I chose not to\r\nadd quote for it.\r\n\r\n> \r\n> 0002:\r\n> \r\n> 03.\r\n> ```\r\n> -#include \"replication/logicalrelation.h\"\r\n> ```\r\n> \r\n> Just to confirm - this removal is not related with the feature but just the\r\n> improvement, right?\r\n\r\nThe logicalrelation.h becomes unnecessary after adding worker_intenral.h, so I\r\nthink it's this patch's job to remove this.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n", "msg_date": "Wed, 21 Aug 2024 09:33:02 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, August 21, 2024 2:45 PM Peter Smith <[email protected]> wrote:\r\n> \r\n> Here are some review comments for the v19-0001 docs patch.\r\n> \r\n> The content seemed reasonable, but IMO it should be presented quite\r\n> differently.\r\n> \r\n> ~~~~\r\n> \r\n> 1. Use sub-sections\r\n> \r\n> I expect this logical replication \"Conflicts\" section is going to evolve into\r\n> something much bigger. Surely, it's not going to be one humongous page of\r\n> details, so it will be a section with lots of subsections like all the other in\r\n> Chapter 29.\r\n> \r\n> IMO, you should be writing the docs in that kind of structure from the\r\n> beginning.\r\n> \r\n> For example, I'm thinking something like below (this is just an example - surely\r\n> lots more subsections will be needed for this topic):\r\n> \r\n> 29.6 Conflicts\r\n> 29.6.1. Conflict types\r\n> 29.6.2. Logging format\r\n> 29.6.3. Examples\r\n> \r\n> Specifically, this v19-0001 patch information should be put into a subsection\r\n> like the 29.6.2 shown above.\r\n\r\nI think that's a good idea. But I preferred to do that in a separate\r\npatch(maybe a third patch after the first and second are RFC), because AFAICS\r\nwe would need to adjust some existing docs which falls outside the scope of\r\nthe current patch.\r\n\r\n> \r\n> ~~~\r\n> \r\n> 2. 
Markup\r\n> \r\n> +<screen>\r\n> +LOG: conflict detected on relation \"schemaname.tablename\":\r\n> conflict=<literal>conflict_type</literal>\r\n> +DETAIL: <literal>detailed explaination</literal>.\r\n> +<literal>Key</literal> (column_name, ...)=(column_name, ...);\r\n> <literal>existing local tuple</literal> (column_name, ...)=(column_name, ...);\r\n> <literal>remote tuple</literal> (column_name, ...)=(column_name, ...);\r\n> <literal>replica identity</literal> (column_name, ...)=(column_name, ...).\r\n> +</screen>\r\n> \r\n> IMO this should be using markup more like the SQL syntax references.\r\n> - e.g. I suggest <synopsis> instead of <screen>\r\n> - e.g. I suggest all the substitution parameters (e.g. detailed explanation,\r\n> conflict_type, column_name, ...) in the log should use <replaceable\r\n> class=\"parameter\"> and use those markups again later in these docs instead\r\n> of <literal>\r\n\r\nAgreed. I have changed to use <synopsis> and <replaceable>. But for static\r\nwords like \"Key\" or \"replica identity\" it doesn't look appropriate to use\r\n<replaceable>, so I kept using <literal> for them.\r\n\r\n> nit - The amount of scrolling needed makes this LOG format too hard to see.\r\n> Try to wrap it better so it can fit without being so wide.\r\n\r\nI thought about this, but wrapping the sentence would cause the words\r\nto be displayed in different lines after compiling. I think that's inconsistent\r\nwith the real log which display the tuples in one line.\r\n\r\nOther comments not mentioned above look good to me.\r\n\r\nAttach the V20 patch set which addressed above, Shveta[1][2] and Kuroda-san's[3]\r\ncomments.\r\n\r\n[1] https://www.postgresql.org/message-id/CAJpy0uDUNigg86KRnk4A0KjwY8-pPaXzZ2eCjnb1ydCH48VzJQ%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/CAJpy0uARh2RRDBF6mJ7d807DsNXuCNQmEXZUn__fw4KZv8qEMg%40mail.gmail.com\r\n[3] https://www.postgresql.org/message-id/TYAPR01MB5692C4EDD8B86760496A993AF58E2%40TYAPR01MB5692.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 21 Aug 2024 09:34:45 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Aug 21, 2024 at 8:35 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Wednesday, August 21, 2024 9:33 AM Jonathan S. Katz <[email protected]> wrote:\n> > On 8/6/24 4:15 AM, Zhijie Hou (Fujitsu) wrote:\n> >\n> > > Thanks for the idea! I thought about few styles based on the suggested\n> > > format, what do you think about the following ?\n> >\n> > Thanks for proposing formats. Before commenting on the specifics, I do want to\n> > ensure that we're thinking about the following for the log formats:\n> >\n> > 1. For the PostgreSQL logs, we'll want to ensure we do it in a way that's as\n> > convenient as possible for people to parse the context from scripts.\n>\n> Yeah. And I personally think the current log format is OK for parsing purposes.\n>\n> >\n> > 2. Semi-related, I still think the simplest way to surface this info to a user is\n> > through a \"pg_stat_...\" view or similar catalog mechanism (I'm less opinionated\n> > on the how outside of we should make it available via SQL).\n>\n> We have a patch(v19-0002) in this thread to collect conflict stats and display\n> them in the view, and the patch is under review.\n>\n\nIIUC, Jonathan is asking to store the conflict information (the one we\ndisplay in LOGs). 
We can do that separately as that is useful.\n\n> Storing it into a catalog needs more analysis as we may need to add addition\n> logic to clean up old conflict data in that catalog table. I think we can\n> consider it as a future improvement.\n>\n\nAgreed. The cleanup part needs more consideration.\n\n> >\n> > 3. We should ensure we're able to convey to the user these details about the\n> > conflict:\n> >\n> > * What time it occurred on the local server (which we'd have in the logs)\n> > * What kind of conflict it is\n> > * What table the conflict occurred on\n> > * What action caused the conflict\n> > * How the conflict was resolved (ability to include source/origin info)\n>\n> I think all above are already covered in the current conflict log. Except that\n> we have not support resolving the conflict, so we don't log the resolution.\n>\n> >\n> >\n> > I think outputting the remote/local tuple value may be a parameter we need to\n> > think about (with the desired outcome of trying to avoid another parameter). I\n> > have a concern about unintentionally leaking data (and I understand that\n> > someone with access to the logs does have a broad ability to view data); I'm\n> > less concerned about the size of the logs, as conflicts in a well-designed\n> > system should be rare (though a conflict storm could fill up the logs, likely there\n> > are other issues to content with at that point).\n>\n> We could use an option to control, but the tuple value is already output in some\n> existing cases (e.g. partition check, table constraints check, view with check\n> constraints, unique violation), and it would test the current user's\n> privileges to decide whether to output the tuple or not. So, I think it's OK\n> to display the tuple for conflicts.\n>\n\nThe current information is displayed keeping in mind that users should\nbe able to use that to manually resolve conflicts if required. If we\nthink there is a leak of information (either from a security angle or\notherwise) like tuple data then we can re-consider. However, as we are\ndisplaying tuple information in other places as pointed out by\nHou-San, we thought it is also okay to display in this case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Aug 2024 16:05:21 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "HI Hous-San,. Here is my review of the v20-0001 docs patch.\n\n1. Restructure into sections\n\n> I think that's a good idea. But I preferred to do that in a separate\n> patch(maybe a third patch after the first and second are RFC), because AFAICS\n> we would need to adjust some existing docs which falls outside the scope of\n> the current patch.\n\nOK. I thought deferring it would only make extra work/churn, given you\nalready know up-front that everything will require restructuring later\nanyway.\n\n~~~\n\n2. Synopsis\n\n2.1 synopsis wrapping.\n\n> I thought about this, but wrapping the sentence would cause the words\n> to be displayed in different lines after compiling. I think that's inconsistent\n> with the real log which display the tuples in one line.\n\nIMO the readability of the format is the most important objective for\nthe documentation. 
And, as you told Shveta, there is already a real\nexample where people can see the newlines if they want to.\n\nnit - Anyway, FYI there is are newline rendering problems here in v20.\nRemoved newlines to make all these optional parts appear on the same\nline.\n\n2.2 other stuff\n\nnit - Add underscore to /detailed explanation/detailed_explanation/,\nto make it more obvious this is a replacement parameter\n\nnit - Added newline after </synopsis> for readabilty of the SGML file.\n\n~~~\n\n3. Case of literals\n\nIt's not apparent to me why the optional \"Key\" part should be\nuppercase in the LOG but other (equally important?) literals of other\nparts like \"replica identity\" are not.\n\nIt seems inconsistent.\n\n~~~\n\n4. LOG parts\n\nnit - IMO the \"schema.tablename\" and the \"conflict_type\" deserved to\nhave separate listitems.\n\nnit - The \"conflict_type\" should have <replaceable> markup.\n\n~~~\n\n5. DETAIL parts\n\nnit - added newline above this <varlistentry> for readability of the SGML.\n\nnit - Add underscore to detailed_explanation, and rearrange wording to\nname the parameter up-front same as the other bullets do.\n\nnit - change the case /key/Key/ to match the synopsis.\n\n~~~\n\n6.\n+ <para>\n+ The <literal>replica identity</literal> section includes the replica\n+ identity key values that used to search for the existing\nlocal tuple to\n+ be updated or deleted. This may include the full tuple value\nif the local\n+ relation is marked with <literal>REPLICA IDENTITY FULL</literal>.\n+ </para>\n\nIt might be good to also provide a link for that REPLICA IDENTITY\nFULL. (I did this already in the attachment as an example)\n\n~~~\n\n7. Replacement parameters - column_name, column_value\n\nI've included these for completeness. I think it is useful.\n\nBTW, the column names seem sometimes optional but I did not know the\nrules. It should be documented what makes these names be shown or not\nshown.\n\n~~~\n\nPlease see the attachment which implements most of the items mentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 22 Aug 2024 12:58:10 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Aug 21, 2024 at 3:04 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n>\n> Attach the V20 patch set which addressed above, Shveta[1][2] and Kuroda-san's[3]\n> comments.\n>\n\nThank You for the patch. Few comments:\n\n1)\n+ The key section includes the key values of the local tuple that\nviolated a unique constraint insert_exists or update_exists conflicts.\n\n --I think something is missing in this line. 
Maybe add a 'for' or\n*in case of*:\n The key section includes the key values of the local tuple that\nviolated a unique constraint *in case of*/*for* insert_exists or\nupdate_exists conflicts.\n\n2)\n+ The replica identity section includes the replica identity key\nvalues that used to search for the existing local tuple to be updated\nor deleted.\n\n--that *were* used to\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 22 Aug 2024 08:55:24 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thursday, August 22, 2024 11:25 AM shveta malik <[email protected]> wrote:\r\n> \r\n> On Wed, Aug 21, 2024 at 3:04 PM Zhijie Hou (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> >\r\n> > Attach the V20 patch set which addressed above, Shveta[1][2] and\r\n> > Kuroda-san's[3] comments.\r\n> >\r\n> \r\n> Thank You for the patch. Few comments:\r\n\r\nThanks for the patches. Here is V21 patch which addressed\r\nPeter's and your comments.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Thu, 22 Aug 2024 07:11:04 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "Hi Hou-san.\n\nI was experimenting with some conflict logging and found that large\ncolumn values are truncated in the log DETAIL.\n\nE.g. Below I have a table where I inserted a 3000 character text value\n'bigbigbig...\"\n\nThen I caused a replication conflict.\n\ntest_sub=# delete fr2024-08-22 17:50:17.181 AEST [14901] LOG: logical\nreplication apply worker for subscription \"sub1\" has started\n2024-08-22 17:50:17.193 AEST [14901] ERROR: conflict detected on\nrelation \"public.t1\": conflict=insert_exists\n2024-08-22 17:50:17.193 AEST [14901] DETAIL: Key already exists in\nunique index \"t1_pkey\", modified in transaction 780.\nKey (a)=(k3); existing local tuple (k3,\nbigbigbigbigbigbigbigbigbigbigbigbigbigbigbigbigbigbigbigbigbigb...);\nremote tuple (k3, this will clash).\n\n~\n\nDo you think the documentation for the 'column_value' parameter of the\nconflict logging should say that the displayed value might be\ntruncated?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 22 Aug 2024 18:02:52 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Aug 22, 2024 at 1:33 PM Peter Smith <[email protected]> wrote:\n>\n> Do you think the documentation for the 'column_value' parameter of the\n> conflict logging should say that the displayed value might be\n> truncated?\n>\n\nI updated the patch to mention this and pushed it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Aug 2024 14:21:42 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Thu, Aug 22, 2024 at 2:21 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 22, 2024 at 1:33 PM Peter Smith <[email protected]> wrote:\n> >\n> > Do you think the documentation for the 'column_value' parameter of the\n> > conflict logging should say that the displayed value might be\n> > truncated?\n> >\n>\n> I updated the patch to mention this and pushed it.\n>\n\nPeter Smith mentioned to me off-list that the names of conflict types\n'update_differ' and 
'delete_differ' are not intuitive as compared to\nall other conflict types like insert_exists, update_missing, etc. The\nother alternative that comes to mind for those conflicts is to name\nthem as 'update_origin_differ'/''delete_origin_differ'.\n\nThe description in docs for 'update_differ' is as follows: Updating a\nrow that was previously modified by another origin. Note that this\nconflict can only be detected when track_commit_timestamp is enabled\non the subscriber. Currently, the update is always applied regardless\nof the origin of the local row.\n\nDoes anyone else have any thoughts on the naming of these conflicts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Aug 2024 15:22:01 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 26, 2024 at 3:22 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 22, 2024 at 2:21 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Aug 22, 2024 at 1:33 PM Peter Smith <[email protected]> wrote:\n> > >\n> > > Do you think the documentation for the 'column_value' parameter of the\n> > > conflict logging should say that the displayed value might be\n> > > truncated?\n> > >\n> >\n> > I updated the patch to mention this and pushed it.\n> >\n>\n> Peter Smith mentioned to me off-list that the names of conflict types\n> 'update_differ' and 'delete_differ' are not intuitive as compared to\n> all other conflict types like insert_exists, update_missing, etc. The\n> other alternative that comes to mind for those conflicts is to name\n> them as 'update_origin_differ'/''delete_origin_differ'.\n\n+1 on 'update_origin_differ'/''delete_origin_differ'. Gives more clarity.\n\n> The description in docs for 'update_differ' is as follows: Updating a\n> row that was previously modified by another origin. Note that this\n> conflict can only be detected when track_commit_timestamp is enabled\n> on the subscriber. Currently, the update is always applied regardless\n> of the origin of the local row.\n>\n> Does anyone else have any thoughts on the naming of these conflicts?\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\n", "msg_date": "Mon, 26 Aug 2024 16:06:29 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Monday, August 26, 2024 6:36 PM shveta malik <[email protected]> wrote:\r\n> \r\n> On Mon, Aug 26, 2024 at 3:22 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Thu, Aug 22, 2024 at 2:21 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > On Thu, Aug 22, 2024 at 1:33 PM Peter Smith <[email protected]>\r\n> wrote:\r\n> > > >\r\n> > > > Do you think the documentation for the 'column_value' parameter of\r\n> > > > the conflict logging should say that the displayed value might be\r\n> > > > truncated?\r\n> > > >\r\n> > >\r\n> > > I updated the patch to mention this and pushed it.\r\n> > >\r\n> >\r\n> > Peter Smith mentioned to me off-list that the names of conflict types\r\n> > 'update_differ' and 'delete_differ' are not intuitive as compared to\r\n> > all other conflict types like insert_exists, update_missing, etc. The\r\n> > other alternative that comes to mind for those conflicts is to name\r\n> > them as 'update_origin_differ'/''delete_origin_differ'.\r\n> \r\n> +1 on 'update_origin_differ'/''delete_origin_differ'. 
Gives more clarity.\r\n\r\n+1\r\n\r\n> \r\n> > The description in docs for 'update_differ' is as follows: Updating a\r\n> > row that was previously modified by another origin. Note that this\r\n> > conflict can only be detected when track_commit_timestamp is enabled\r\n> > on the subscriber. Currently, the update is always applied regardless\r\n> > of the origin of the local row.\r\n> >\r\n> > Does anyone else have any thoughts on the naming of these conflicts?\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Mon, 26 Aug 2024 12:14:19 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Mon, Aug 26, 2024 at 7:52 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 22, 2024 at 2:21 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Aug 22, 2024 at 1:33 PM Peter Smith <[email protected]> wrote:\n> > >\n> > > Do you think the documentation for the 'column_value' parameter of the\n> > > conflict logging should say that the displayed value might be\n> > > truncated?\n> > >\n> >\n> > I updated the patch to mention this and pushed it.\n> >\n>\n> Peter Smith mentioned to me off-list that the names of conflict types\n> 'update_differ' and 'delete_differ' are not intuitive as compared to\n> all other conflict types like insert_exists, update_missing, etc. The\n> other alternative that comes to mind for those conflicts is to name\n> them as 'update_origin_differ'/''delete_origin_differ'.\n>\n\nFor things to \"differ\" there must be more than one them. The plural of\norigin is origins.\n\ne.g. 'update_origins_differ'/''delete_origins_differ'.\n\nOTOH, you could say \"differs\" instead of differ:\n\ne.g. 'update_origin_differs'/''delete_origin_differs'.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 27 Aug 2024 09:07:10 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Tue, Aug 27, 2024 at 4:37 AM Peter Smith <[email protected]> wrote:\n>\n> On Mon, Aug 26, 2024 at 7:52 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Aug 22, 2024 at 2:21 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Thu, Aug 22, 2024 at 1:33 PM Peter Smith <[email protected]> wrote:\n> > > >\n> > > > Do you think the documentation for the 'column_value' parameter of the\n> > > > conflict logging should say that the displayed value might be\n> > > > truncated?\n> > > >\n> > >\n> > > I updated the patch to mention this and pushed it.\n> > >\n> >\n> > Peter Smith mentioned to me off-list that the names of conflict types\n> > 'update_differ' and 'delete_differ' are not intuitive as compared to\n> > all other conflict types like insert_exists, update_missing, etc. The\n> > other alternative that comes to mind for those conflicts is to name\n> > them as 'update_origin_differ'/''delete_origin_differ'.\n> >\n>\n> For things to \"differ\" there must be more than one them. The plural of\n> origin is origins.\n>\n> e.g. 'update_origins_differ'/''delete_origins_differ'.\n>\n> OTOH, you could say \"differs\" instead of differ:\n>\n> e.g. 
'update_origin_differs'/''delete_origin_differs'.\n>\n\n+1 on 'update_origin_differs' instead of 'update_origins_differ' as\nthe former is somewhat similar to other conflict names 'insert_exists'\nand 'update_exists'.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 28 Aug 2024 09:00:17 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, August 28, 2024 11:30 AM shveta malik <[email protected]> wrote:\r\n> \r\n> On Tue, Aug 27, 2024 at 4:37 AM Peter Smith <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Mon, Aug 26, 2024 at 7:52 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > On Thu, Aug 22, 2024 at 2:21 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > > >\r\n> > > > On Thu, Aug 22, 2024 at 1:33 PM Peter Smith\r\n> <[email protected]> wrote:\r\n> > > > >\r\n> > > > > Do you think the documentation for the 'column_value' parameter\r\n> > > > > of the conflict logging should say that the displayed value\r\n> > > > > might be truncated?\r\n> > > > >\r\n> > > >\r\n> > > > I updated the patch to mention this and pushed it.\r\n> > > >\r\n> > >\r\n> > > Peter Smith mentioned to me off-list that the names of conflict\r\n> > > types 'update_differ' and 'delete_differ' are not intuitive as\r\n> > > compared to all other conflict types like insert_exists,\r\n> > > update_missing, etc. The other alternative that comes to mind for\r\n> > > those conflicts is to name them as\r\n> 'update_origin_differ'/''delete_origin_differ'.\r\n> > >\r\n> >\r\n> > For things to \"differ\" there must be more than one them. The plural of\r\n> > origin is origins.\r\n> >\r\n> > e.g. 'update_origins_differ'/''delete_origins_differ'.\r\n> >\r\n> > OTOH, you could say \"differs\" instead of differ:\r\n> >\r\n> > e.g. 'update_origin_differs'/''delete_origin_differs'.\r\n> >\r\n> \r\n> +1 on 'update_origin_differs' instead of 'update_origins_differ' as\r\n> the former is somewhat similar to other conflict names 'insert_exists'\r\n> and 'update_exists'.\r\n\r\nSince we reached a consensus on this, I am attaching a small patch\r\nto rename as suggested.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 28 Aug 2024 04:11:07 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wednesday, August 28, 2024 12:11 PM Zhijie Hou (Fujitsu) <[email protected]> wrote:\r\n> > > > Peter Smith mentioned to me off-list that the names of conflict\r\n> > > > types 'update_differ' and 'delete_differ' are not intuitive as\r\n> > > > compared to all other conflict types like insert_exists,\r\n> > > > update_missing, etc. The other alternative that comes to mind for\r\n> > > > those conflicts is to name them as\r\n> > 'update_origin_differ'/''delete_origin_differ'.\r\n> > > >\r\n> > >\r\n> > > For things to \"differ\" there must be more than one them. The plural\r\n> > > of origin is origins.\r\n> > >\r\n> > > e.g. 'update_origins_differ'/''delete_origins_differ'.\r\n> > >\r\n> > > OTOH, you could say \"differs\" instead of differ:\r\n> > >\r\n> > > e.g. 
'update_origin_differs'/''delete_origin_differs'.\r\n> > >\r\n> >\r\n> > +1 on 'update_origin_differs' instead of 'update_origins_differ' as\r\n> > the former is somewhat similar to other conflict names 'insert_exists'\r\n> > and 'update_exists'.\r\n> \r\n> Since we reached a consensus on this, I am attaching a small patch to rename\r\n> as suggested.\r\n\r\nSorry, I attached the wrong patch. Here is correct one.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 28 Aug 2024 04:14:11 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Aug 28, 2024 at 9:44 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> > > +1 on 'update_origin_differs' instead of 'update_origins_differ' as\n> > > the former is somewhat similar to other conflict names 'insert_exists'\n> > > and 'update_exists'.\n> >\n> > Since we reached a consensus on this, I am attaching a small patch to rename\n> > as suggested.\n>\n> Sorry, I attached the wrong patch. Here is correct one.\n>\n\nLGTM.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 28 Aug 2024 11:23:01 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Aug 28, 2024 at 3:53 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Aug 28, 2024 at 9:44 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > > > +1 on 'update_origin_differs' instead of 'update_origins_differ' as\n> > > > the former is somewhat similar to other conflict names 'insert_exists'\n> > > > and 'update_exists'.\n> > >\n> > > Since we reached a consensus on this, I am attaching a small patch to rename\n> > > as suggested.\n> >\n> > Sorry, I attached the wrong patch. Here is correct one.\n> >\n>\n> LGTM.\n>\n\nLGTM.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 28 Aug 2024 16:54:27 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" }, { "msg_contents": "On Wed, Aug 28, 2024 at 12:24 PM Peter Smith <[email protected]> wrote:\n>\n> On Wed, Aug 28, 2024 at 3:53 PM shveta malik <[email protected]> wrote:\n> >\n> > On Wed, Aug 28, 2024 at 9:44 AM Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > > > +1 on 'update_origin_differs' instead of 'update_origins_differ' as\n> > > > > the former is somewhat similar to other conflict names 'insert_exists'\n> > > > > and 'update_exists'.\n> > > >\n> > > > Since we reached a consensus on this, I am attaching a small patch to rename\n> > > > as suggested.\n> > >\n> > > Sorry, I attached the wrong patch. Here is correct one.\n> > >\n> >\n> > LGTM.\n> >\n>\n> LGTM.\n>\n\nI'll push this patch tomorrow unless there are any suggestions or comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Aug 2024 15:07:04 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Conflict detection and logging in logical replication" } ]
[ { "msg_contents": "In libpq, in the code of the lo_import function, for each piece of a file\nof 8KB in size, lo_write is called, which greatly slows down the work of\nlo_import because lo_write sends a request and waits for a response. The\nsize of 8KB is specified in define LO_BUFSIZE which changed from 1KB to 8KB\n24 years ago.\nWhy not increase the buffer size?", "msg_date": "Fri, 21 Jun 2024 11:46:36 +0300", "msg_from": "=?UTF-8?B?0JTQvNC40YLRgNC40Lkg0J/QuNGC0LDQutC+0LI=?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Small LO_BUFSIZE slows down lo_import and lo_export in libpq" }, { "msg_contents": "On Fri, 21 Jun 2024 at 10:46, Дмитрий Питаков <[email protected]> wrote:\n> Why not increase the buffer size?\n\nI think changing the buffer size sounds like a reasonable idea, if\nthat speeds stuff up. But I think it would greatly help your case if\nyou showed the perf increase using a simple benchmark, especially if\npeople could run this benchmark on their own machines to reproduce.\n\n\n", "msg_date": "Fri, 21 Jun 2024 14:07:21 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Small LO_BUFSIZE slows down lo_import and lo_export in libpq" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> On Fri, 21 Jun 2024 at 10:46, Дмитрий Питаков <[email protected]> wrote:\n>> Why not increase the buffer size?\n\n> I think changing the buffer size sounds like a reasonable idea, if\n> that speeds stuff up. But I think it would greatly help your case if\n> you showed the perf increase using a simple benchmark, especially if\n> people could run this benchmark on their own machines to reproduce.\n\nYeah. \"Why not\" is not a patch proposal, mainly because the correct\nquestion is \"what other size are you proposing?\"\n\nThis is not something that we can just randomly whack around, either.\nBoth lo_import_internal and lo_export assume they can allocate the\nbuffer on the stack, which means you have to worry about available\nstack space. As a concrete example, I believe that musl still\ndefaults to 128kB thread stack size, which means that a threaded\nclient program on that platform would definitely fail with \nLO_BUFSIZE >= 128kB, and even 64kB would be not without risk.\n\nWe could dodge that objection by malloc'ing the buffer, which might\nbe a good thing to do anyway because it'd improve the odds of getting\na nicely-aligned buffer. But then you have to make the case that the\nextra malloc and free isn't a net loss, which it could be for\nnot-very-large transfers.\n\nSo bottom line is that you absolutely need a test case whose\nperformance can be measured under different conditions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2024 15:43:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Small LO_BUFSIZE slows down lo_import and lo_export in libpq" } ]
[ { "msg_contents": "After initially trying to add trace support for\nStartupMessage/SSLRequest/GSSENCRequest[1] I realized there were many\nmore cases where PQtrace was not correctly tracing messages, or not\neven tracing them at all. These patches fix all of the issues that I\nwas able to find.\n\n0001 is some cleanup after f4b54e1ed9\n0002 does some preparatory changes for 0004 & 0007\n\nAll the others improve the tracing, and apart from 0004 and 0007\ndepending on 0002, none of these patches depend on each other.\nAlthough you could argue that 0007 and 0008 depend on 0006, because\nwithout 0006 the code added by 0007 and 0008 won't ever really be\nexecuted.\n\nTo test you can add a PQreset(conn) call to the start of the\ntest_cancel function in:\nsrc/test/modules/libpq_pipeline/libpq_pipeline.c.\n\nAnd then run:\nninja -C build all install-quiet &&\nbuild/src/test/modules/libpq_pipeline/libpq_pipeline cancel\n'port=5432' -t test.trace\n\nAnd then look at the top of test.trace\n\n[1]: https://www.postgresql.org/message-id/CAGECzQTTN5aGqtDaRifJXPyd_O5qHWQcOxsHJsDSVNqMugGNEA%40mail.gmail.com", "msg_date": "Fri, 21 Jun 2024 11:22:05 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "On Fri, Jun 21, 2024 at 11:22:05AM +0200, Jelte Fennema-Nio wrote:\n> 0001 is some cleanup after f4b54e1ed9\n\nOops. I'll plan on committing this after the 17beta2 release freeze is\nlifted.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 21 Jun 2024 16:01:55 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "On Fri, Jun 21, 2024 at 04:01:55PM -0500, Nathan Bossart wrote:\n> On Fri, Jun 21, 2024 at 11:22:05AM +0200, Jelte Fennema-Nio wrote:\n>> 0001 is some cleanup after f4b54e1ed9\n> \n> Oops. I'll plan on committing this after the 17beta2 release freeze is\n> lifted.\n\nCommitted 0001.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 26 Jun 2024 11:28:23 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "On 2024-Jun-26, Nathan Bossart wrote:\n\n> On Fri, Jun 21, 2024 at 04:01:55PM -0500, Nathan Bossart wrote:\n> > On Fri, Jun 21, 2024 at 11:22:05AM +0200, Jelte Fennema-Nio wrote:\n> >> 0001 is some cleanup after f4b54e1ed9\n> > \n> > Oops. I'll plan on committing this after the 17beta2 release freeze is\n> > lifted.\n> \n> Committed 0001.\n\nThanks, Nathan. I'm holding myself responsible for the rest ... will\nhandle soon after the branch.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The problem with the future is that it keeps turning into the present\"\n(Hobbes)\n\n\n", "msg_date": "Wed, 26 Jun 2024 18:36:17 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "On Wed, 26 Jun 2024 at 18:36, Alvaro Herrera <[email protected]> wrote:\n> Thanks, Nathan. I'm holding myself responsible for the rest ... will\n> handle soon after the branch.\n\nSounds great. Out of curiosity, what is the backpatching policy for\nsomething like this? Honestly most of these patches could be\nconsidered bugfixes in PQtrace, so backpatching might make sense. 
OTOH\nI don't think PQtrace is used very much in practice, so backpatching\nmight carry more risk than it's worth.\n\n\n", "msg_date": "Wed, 26 Jun 2024 22:02:08 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "On Wed, Jun 26, 2024 at 10:02:08PM +0200, Jelte Fennema-Nio wrote:\n> On Wed, 26 Jun 2024 at 18:36, Alvaro Herrera <[email protected]> wrote:\n> > Thanks, Nathan. I'm holding myself responsible for the rest ... will\n> > handle soon after the branch.\n> \n> Sounds great. Out of curiosity, what is the backpatching policy for\n> something like this? Honestly most of these patches could be\n> considered bugfixes in PQtrace, so backpatching might make sense. OTOH\n> I don't think PQtrace is used very much in practice, so backpatching\n> might carry more risk than it's worth.\n\n0001 getting on HEAD after the feature freeze as a cleanup piece\ncleanup is no big deal. That's cosmetic, still OK.\n\nLooking at the whole, the rest of the patch set qualifies as a new\nfeature, even if they're aimed at closing existing gaps.\nParticularly, you have bits of new infrastructure introduced in libpq\nlike the current_auth_response business in 0004, making it a new\nfeature by structure.\n\n+\tconn->current_auth_response = AUTH_RESP_PASSWORD;\n \tret = pqPacketSend(conn, PqMsg_PasswordMessage, pwd_to_send, strlen(pwd_to_send) + 1);\n+\tconn->current_auth_response = AUTH_RESP_NONE;\n\nIt's a surprising approach. Future callers of pqPacketSend() and\npqPutMsgEnd() would easily miss that this flag should be set, as much\nas reset. Isn't that something that should be added in input of these\nfunctions?\n\n+\tAuthResponseType current_auth_response;\nI'd recommend to document what this flag is here for, with a comment.\n--\nMichael", "msg_date": "Thu, 27 Jun 2024 14:39:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "On Thu, 27 Jun 2024 at 07:39, Michael Paquier <[email protected]> wrote:\n> Looking at the whole, the rest of the patch set qualifies as a new\n> feature, even if they're aimed at closing existing gaps.\n\nAlright, seems reasonable. To be clear, I don't care at all about this\nbeing backported personally.\n\n> Particularly, you have bits of new infrastructure introduced in libpq\n> like the current_auth_response business in 0004, making it a new\n> feature by structure.\n>\n> + conn->current_auth_response = AUTH_RESP_PASSWORD;\n> ret = pqPacketSend(conn, PqMsg_PasswordMessage, pwd_to_send, strlen(pwd_to_send) + 1);\n> + conn->current_auth_response = AUTH_RESP_NONE;\n>\n> It's a surprising approach. Future callers of pqPacketSend() and\n> pqPutMsgEnd() would easily miss that this flag should be set, as much\n> as reset. Isn't that something that should be added in input of these\n> functions?\n\nYeah, I'm not entirely happy about it either. But adding an argument\nto pqPutMsgEnd and pqPutPacketSend would mean all the existing call\nsites would need to change, even though only 4 of them would care\nabout the new argument. You could argue that it's the better solution,\nbut it would at least greatly increase the size of the diff. Of course\nto reduce the diff size you could make the old functions a wrapper\naround a new one with the extra argument, but I couldn't think of a\ngood name for those functions. 
Overall I went for the chosen approach\nhere, because it only impacted code at the call sites for these auth\npackets (which are the only v3 packets in the protocol that you cannot\ninterpret based on their contents alone).\n\nI think your worry about easily missing to set/clear the flag is not a\nhuge problem in practice. We almost never add new authentication\nmessages and it's only needed there. Also the clearing is not even\nstrictly necessary for the tracing to behave correctly, but it seemed\nlike the right thing to do.\n\n> + AuthResponseType current_auth_response;\n> I'd recommend to document what this flag is here for, with a comment.\n\nOops, yeah I forgot about that. Done now.", "msg_date": "Thu, 27 Jun 2024 10:03:58 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "Pushed 0002 and 0003. On the latter: I decided against using int32 to\nprint the request identifiers; by splitting into two int16's, we see\nthat the numbers match the values in the PG_PROTOCOL() declarations:\n\n2024-08-09 17:37:38.364622\tF\t8\tSSLRequest\t 1234 5679\nand\n2024-08-09 17:37:38.422109\tF\t16\tCancelRequest\t 1234 5678 NNNN NNNN\n\n(I didn't verify GSSEncRequest directly.)\n\nI also verified that in non-regress mode, the values printed by\nCancelRequest match those in the BackendKeyData message,\n2024-08-09 17:34:27.544686\tB\t12\tBackendKeyData\t NNNN NNNN\n\nI also added suppression in regress mode for the backend key in the\nCancelRequest message, since they would be different each time.\n\nThere are no tests for this code. We could add a trace file for the\nconnection packet in libpq_pipeline by changing PQconnectdb() to\nPQconnectStart() and then do PQtrace before polling until the connection\nis ready; we would have to have it match for the TAP test. Not sure\nthis is worth the effort. But doing this in a very crude way allowed me\nto verify that, at least on my machine, this code is doing what's\nexpected.\n\nThank you,\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 9 Aug 2024 18:08:02 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "Regarding 0004:\n\nI don't want to add 4 bytes to struct pg_conn for tracing support. I'm\ntempted to make the new struct member a plain 'char' to reduce overhead\nfor a feature that almost nobody is going to use. According to pahole\nwe have a 3 bytes hole in that position of the struct, so if we make it\na 1- or 2-byte member, there's no storage overhead whatsoever.\n\nAlso, why not have pqTraceOutputMessage() responsible for resetting the\nbyte after printing the message? It seems to cause less undesirable\ndetritus.\n\nI propose something like the attached, but it's as yet untested. 
What\ndo you think?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El sentido de las cosas no viene de las cosas, sino de\nlas inteligencias que las aplican a sus problemas diarios\nen busca del progreso.\" (Ernesto Hernández-Novich)", "msg_date": "Fri, 9 Aug 2024 19:08:40 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "On Sat, 10 Aug 2024 at 01:08, Alvaro Herrera <[email protected]> wrote:\n> I don't want to add 4 bytes to struct pg_conn for tracing support. I'm\n> tempted to make the new struct member a plain 'char' to reduce overhead\n> for a feature that almost nobody is going to use. According to pahole\n> we have a 3 bytes hole in that position of the struct, so if we make it\n> a 1- or 2-byte member, there's no storage overhead whatsoever.\n\nSounds fine to me.\n\n> Also, why not have pqTraceOutputMessage() responsible for resetting the\n> byte after printing the message? It seems to cause less undesirable\n> detritus.\n\nYeah, that's indeed much nicer.\n\n> I propose something like the attached, but it's as yet untested. What\n> do you think?\n\nLooks good, but I haven't tested it yet either.\n\n\n", "msg_date": "Sat, 10 Aug 2024 16:27:02 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "On 2024-Aug-10, Jelte Fennema-Nio wrote:\n\n> On Sat, 10 Aug 2024 at 01:08, Alvaro Herrera <[email protected]> wrote:\n\n> > I propose something like the attached, but it's as yet untested. What\n> > do you think?\n> \n> Looks good, but I haven't tested it yet either.\n\nI tested the SASL exchange and it looks OK. Didn't test the other ones.\n\nThanks!\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 12 Aug 2024 19:15:22 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "I gave another look to the remaining patches; here they are again. I\npropose some changes:\n\n- to 0005 I change your pqTraceOutputEncryptionRequestResponse()\n function name to pqTraceOutputCharResponse and instead of attaching\n the \"Response\" literal in the outpuer to the name given in the\n function call, just pass the whole string as argument to the function.\n\n- to 0006 I change function name pqFinishParsingMessage() to\n pqParseDone() and reworded the commentary; also moved it to fe-misc.c.\n Looks good otherwise.\n\n- 0008 to fix NegotiateProtocolVersion looks correct per [1], but I\n don't know how to test it. 
Suggestions?\n\nI didn't look at 0007.\n\n[1] https://www.postgresql.org/docs/16/protocol-message-formats.html#PROTOCOL-MESSAGE-FORMATS-NEGOTIATEPROTOCOLVERSION\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No hay hombre que no aspire a la plenitud, es decir,\nla suma de experiencias de que un hombre es capaz\"", "msg_date": "Wed, 14 Aug 2024 13:37:30 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "On Wed, 14 Aug 2024 at 19:37, Alvaro Herrera <[email protected]> wrote:\n> - to 0005 I change your pqTraceOutputEncryptionRequestResponse()\n> function name to pqTraceOutputCharResponse and instead of attaching\n> the \"Response\" literal in the outpuer to the name given in the\n> function call, just pass the whole string as argument to the function.\n\nFine by me\n\n> - to 0006 I change function name pqFinishParsingMessage() to\n> pqParseDone() and reworded the commentary; also moved it to fe-misc.c.\n> Looks good otherwise.\n\nThe following removed comments seems useful to keep (I realize I\nalready removed them in a previous version of the patch, but I don't\nthink I did that on purpose)\n\n- /* Drop the processed message and loop around for another */\n\n- /* consume the message and exit */\n\n\n- /* Completed this message, keep going */\n- /* trust the specified message length as what to skip */\n\n\n> - 0008 to fix NegotiateProtocolVersion looks correct per [1], but I\n> don't know how to test it. Suggestions?\n\nTwo options:\n1. Manually change code to make sure SendNegotiateProtocolVersion is\ncalled in src/backend/tcop/backend_startup.c\n2. Apply my patches from this thread[2] and use\nmax_protocol_version=latest in the connection string while connecting\nto an older postgres server.\n\n[2]: https://www.postgresql.org/message-id/flat/CAGECzQTyXDNtMXdq2L-Wp%3DOvOCPa07r6%2BU_MGb%3D%3Dh90MrfT%2BfQ%40mail.gmail.com#1b8cda3523555aafae89cc04293b8613\n\n\n", "msg_date": "Wed, 14 Aug 2024 20:18:36 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" }, { "msg_contents": "Hello,\n\nOn 2024-Aug-14, Jelte Fennema-Nio wrote:\n\n> The following removed comments seems useful to keep (I realize I\n> already removed them in a previous version of the patch, but I don't\n> think I did that on purpose)\n> [...]\n\nAh, yeah, I agree. I put them back, and pushed 0005, 6 and 7 as a\nsingle commit. It didn't seem worth pushing each separately, really. I\nadded two lines for the CopyData message as well, since otherwise the\noutput shows the \"mismatched length\" error when getting COPY data.\n\nI'm leaving 0008 to whoever is doing the NegotiateProtocolVersion stuff;\nmaybe post that one in that thread you mentioned. I'll mark this CF\nentry committed.\n\nMany thanks!\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 15 Aug 2024 20:05:13 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq: Fix lots of discrepancies in PQtrace" } ]
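For readers following the current_auth_response discussion in the thread above, the short program below is a minimal, self-contained illustration of the pattern the thread settled on: the caller tags the connection just before sending an otherwise ambiguous 'p' message, and the trace routine itself consumes and clears the tag. It is emphatically not the libpq source; struct conn, AuthResp, trace_message() and send_packet() are invented stand-ins for pg_conn, AuthResponseType, pqTraceOutputMessage() and pqPacketSend(), and the "trace" here is just a printf.

    /*
     * Sketch only: tag-before-send, tracer-consumes-the-tag.
     * Names below are made up for illustration.
     */
    #include <stdio.h>

    typedef enum
    {
        AUTH_RESP_NONE = 0,
        AUTH_RESP_PASSWORD,
        AUTH_RESP_SASL_INITIAL
    } AuthResp;

    struct conn
    {
        AuthResp current_auth_response; /* what the next 'p' message really is */
    };

    /* 'p' messages cannot be told apart from their contents, so use the tag. */
    static void
    trace_message(struct conn *c, char msgtype)
    {
        if (msgtype == 'p')
        {
            switch (c->current_auth_response)
            {
                case AUTH_RESP_PASSWORD:
                    printf("F\tPasswordMessage\n");
                    break;
                case AUTH_RESP_SASL_INITIAL:
                    printf("F\tSASLInitialResponse\n");
                    break;
                default:
                    printf("F\tUnknownAuthResponse\n");
                    break;
            }
            /* The tracer resets the tag, so callers cannot forget to clear it. */
            c->current_auth_response = AUTH_RESP_NONE;
        }
        else
            printf("F\tmessage '%c'\n", msgtype);
    }

    static void
    send_packet(struct conn *c, char msgtype)
    {
        trace_message(c, msgtype);  /* the real code would also write to the socket */
    }

    int
    main(void)
    {
        struct conn c = {AUTH_RESP_NONE};

        c.current_auth_response = AUTH_RESP_PASSWORD;   /* set just before sending */
        send_packet(&c, 'p');                           /* traced as PasswordMessage */
        send_packet(&c, 'Q');                           /* ordinary message, no tag needed */
        return 0;
    }

Letting the tracer clear the flag is what addresses the concern raised earlier in the thread that future callers of pqPacketSend()/pqPutMsgEnd() could forget to reset it: each call site only has to make a single assignment before sending.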
[ { "msg_contents": "Dear Hackers,\n\nThis is a follow-up thread for pg_createsubscriber [1]. I started a new thread\nsince there is no activity around here.\n\n## Problem\n\nAssuming that there is a cascading replication like below:\n\nnode A --(logical replication)--> node B --(streaming replication)--> node C\n\nIn this case, subscriptions exist even on node C, but it does not try to connect\nto node A because the logical replication launcher/worker won't be launched.\nAfter the conversion, node C becomes a subscriber for node B, and the subscription\ntoward node A remains. Therefore, another worker that tries to connect to node A\nwill be launched, raising an ERROR [2]. This failure may occur even during the\nconversion.\n\n## Solution\n\nThe easiest solution is to drop pre-existing subscriptions from the converted node.\nTo avoid establishing connections during the conversion, slot_name is set to NONE\non the primary first, then drop on the standby. The setting will be restored on the\nprimary node.\nAttached patch implements the idea. Test script is also included, but not sure it should\nbe on the HEAD\n\nBTW, I found that LogicalRepInfo.oid won't be used. If needed, I can create\nanother patch to remove the attribute.\n\nHow do you think?\n\n[1]: https://www.postgresql.org/message-id/CAA4eK1J22UEfrqx222h5j9DQ7nxGrTbAa_BC%2B%3DmQXdXs-RCsew%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CANhcyEWvimA1-f6hSrA%3D9qkfR5SonFb56b36M%2B%2BvT%3DLiFj%3D76g%40mail.gmail.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/", "msg_date": "Fri, 21 Jun 2024 11:21:22 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_createsubscriber: drop pre-existing subscriptions from the\n converted node" }, { "msg_contents": "On Fri, 21 Jun 2024 at 16:51, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Hackers,\n>\n> This is a follow-up thread for pg_createsubscriber [1]. I started a new thread\n> since there is no activity around here.\n>\n> ## Problem\n>\n> Assuming that there is a cascading replication like below:\n>\n> node A --(logical replication)--> node B --(streaming replication)--> node C\n>\n> In this case, subscriptions exist even on node C, but it does not try to connect\n> to node A because the logical replication launcher/worker won't be launched.\n> After the conversion, node C becomes a subscriber for node B, and the subscription\n> toward node A remains. Therefore, another worker that tries to connect to node A\n> will be launched, raising an ERROR [2]. This failure may occur even during the\n> conversion.\n>\n> ## Solution\n>\n> The easiest solution is to drop pre-existing subscriptions from the converted node.\n> To avoid establishing connections during the conversion, slot_name is set to NONE\n> on the primary first, then drop on the standby. The setting will be restored on the\n> primary node.\n> Attached patch implements the idea. 
Test script is also included, but not sure it should\n> be on the HEAD\n\nFew comments:\n1) Should we do this only for the enabled subscription, otherwise the\ndisabled subscriptions will be enabled after running\npg_createsubscriber:\n+obtain_and_disable_pre_existing_subscriptions(struct LogicalRepInfo *dbinfo)\n+{\n+ PQExpBuffer query = createPQExpBuffer();\n+\n+ for (int i = 0; i < num_dbs; i++)\n+ {\n+ PGconn *conn;\n+ PGresult *res;\n+ int ntups;\n+\n+ /* Connect to publisher */\n+ conn = connect_database(dbinfo[i].pubconninfo, true);\n+\n+ appendPQExpBuffer(query,\n+ \"SELECT s.subname,\ns.subslotname FROM pg_catalog.pg_subscription s \"\n+ \"INNER JOIN\npg_catalog.pg_database d ON (s.subdbid = d.oid) \"\n+ \"WHERE d.datname = '%s'\",\n+ dbinfo[i].dbname);\n+\n\n2) disconnect_database not called here, should the connection be disconnected:\n+drop_pre_existing_subscriptions(struct LogicalRepInfo *dbinfo)\n+{\n+ PQExpBuffer query = createPQExpBuffer();\n+\n+ for (int i = 0; i < num_dbs; i++)\n+ {\n+ PGconn *conn;\n+ struct LogicalRepInfo info = dbinfo[i];\n+\n+ /* Connect to subscriber */\n+ conn = connect_database(info.subconninfo, false);\n+\n+ for (int j = 0; j < info.num_subscriptions; j++)\n+ {\n+ appendPQExpBuffer(query,\n+ \"DROP\nSUBSCRIPTION %s;\", info.pre_subnames[j]);\n+ PQexec(conn, query->data);\n+ resetPQExpBuffer(query);\n+ }\n+ }\n\n3) Similarly here too:\n+static void\n+enable_subscirptions_on_publisher(struct LogicalRepInfo *dbinfo)\n+{\n+ PQExpBuffer query = createPQExpBuffer();\n+\n+ for (int i = 0; i < num_dbs; i++)\n+ {\n+ PGconn *conn;\n+ struct LogicalRepInfo info = dbinfo[i];\n+\n+ /* Connect to publisher */\n+ conn = connect_database(info.pubconninfo, false);\n\n4) them should be then here:\n+ /* ...and them enable the subscription */\n+ appendPQExpBuffer(query,\n+ \"ALTER\nSUBSCRIPTION %s ENABLE\",\n+ info.pre_subnames[j]);\n+ PQclear(PQexec(conn, query->data));\n+ resetPQExpBuffer(query);\n\n\n> BTW, I found that LogicalRepInfo.oid won't be used. If needed, I can create\n> another patch to remove the attribute.\n\nI was able to compile without this, I think this can be removed.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 26 Jun 2024 14:12:05 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_createsubscriber: drop pre-existing subscriptions from the\n converted node" }, { "msg_contents": "On Fri, Jun 21, 2024 at 4:51 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> This is a follow-up thread for pg_createsubscriber [1]. I started a new thread\n> since there is no activity around here.\n>\n> ## Problem\n>\n> Assuming that there is a cascading replication like below:\n>\n> node A --(logical replication)--> node B --(streaming replication)--> node C\n>\n> In this case, subscriptions exist even on node C, but it does not try to connect\n> to node A because the logical replication launcher/worker won't be launched.\n> After the conversion, node C becomes a subscriber for node B, and the subscription\n> toward node A remains. Therefore, another worker that tries to connect to node A\n> will be launched, raising an ERROR [2]. This failure may occur even during the\n> conversion.\n>\n> ## Solution\n>\n> The easiest solution is to drop pre-existing subscriptions from the converted node.\n> To avoid establishing connections during the conversion, slot_name is set to NONE\n> on the primary first, then drop on the standby. 
The setting will be restored on the\n> primary node.\n>\n\nIt seems disabling subscriptions on the primary can make the primary\nstop functioning for some duration of time. I feel we need some\nsolution where after converting to subscriber, we disable and drop\npre-existing subscriptions. One idea could be that we use the list of\nnew subscriptions created by the tool such that any subscription not\nexisting in that list will be dropped.\n\nShouldn't this be an open item for PG17?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 26 Jun 2024 16:32:57 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_createsubscriber: drop pre-existing subscriptions from the\n converted node" }, { "msg_contents": "Dear Amit, Vingesh,\r\n\r\nThanks for giving comments!\r\n\r\n> It seems disabling subscriptions on the primary can make the primary\r\n> stop functioning for some duration of time. I feel we need some\r\n> solution where after converting to subscriber, we disable and drop\r\n> pre-existing subscriptions. One idea could be that we use the list of\r\n> new subscriptions created by the tool such that any subscription not\r\n> existing in that list will be dropped.\r\n\r\nPreviously I avoided coding like yours, because there is a room that converted\r\nnode can connect to another publisher. But per off-list discussion, we can skip\r\nit by setting max_logical_replication_workers = 0. I refactored with the approach.\r\nNote that the GUC is checked at verification phase, so an attribute is added to\r\nstart_standby_server() to select the workload.\r\n\r\nMost of comments by Vignesh were invalidated due to the code change, but I hoped\r\nI checked your comments were not reproduced. Also, 0001 was created to remove an\r\nunused attribute.\r\n\r\n> Shouldn't this be an open item for PG17?\r\n\r\nAdded this thread to wikipage.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/", "msg_date": "Thu, 27 Jun 2024 06:17:18 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pg_createsubscriber: drop pre-existing subscriptions from the\n converted node" }, { "msg_contents": "On Thu, Jun 27, 2024 at 11:47 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > It seems disabling subscriptions on the primary can make the primary\n> > stop functioning for some duration of time. I feel we need some\n> > solution where after converting to subscriber, we disable and drop\n> > pre-existing subscriptions. One idea could be that we use the list of\n> > new subscriptions created by the tool such that any subscription not\n> > existing in that list will be dropped.\n>\n> Previously I avoided coding like yours, because there is a room that converted\n> node can connect to another publisher. But per off-list discussion, we can skip\n> it by setting max_logical_replication_workers = 0. I refactored with the approach.\n> Note that the GUC is checked at verification phase, so an attribute is added to\n> start_standby_server() to select the workload.\n>\n\nThanks, this is a better approach. I have changed a few comments and\nmade some other cosmetic changes. See attached.\n\nEuler, Peter E., and others, do you have any comments/suggestions?\n\nBTW, why have you created a separate test file for this test? I think\nwe should add a new test to one of the existing tests in\n040_pg_createsubscriber. 
You can create a dummy subscription on node_p\nand do a test similar to what we are doing in \"# Create failover slot\nto test its removal\".\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 28 Jun 2024 16:37:05 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_createsubscriber: drop pre-existing subscriptions from the\n converted node" }, { "msg_contents": "Dear Amit,\r\n\r\nThanks for giving comments! PSA new version.\r\n\r\n> Thanks, this is a better approach. I have changed a few comments and\r\n> made some other cosmetic changes. See attached.\r\n\r\nI checked your attached and LGTM. Based on that, I added some changes\r\nlike below:\r\n\r\n- Made dbname be escaped while listing up pre-existing subscriptions\r\n Previous version could not pass tests by recent commits.\r\n- Skipped dropping subscriptions in dry_run mode\r\n I found the issue while poring the test to 040_pg_createsubscriber.pl.\r\n- Added info-level output to follow other drop_XXX functions\r\n\r\n> BTW, why have you created a separate test file for this test? I think\r\n> we should add a new test to one of the existing tests in\r\n> 040_pg_createsubscriber.\r\n\r\nI was separated a test file for just confirmation purpose, I've planned to merge.\r\nNew patch set did that.\r\n\r\n> You can create a dummy subscription on node_p\r\n> and do a test similar to what we are doing in \"# Create failover slot\r\n> to test its removal\".\r\n\r\nYour approach looks better than mine. I followed the approach.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/", "msg_date": "Mon, 1 Jul 2024 06:14:30 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pg_createsubscriber: drop pre-existing subscriptions from the\n converted node" }, { "msg_contents": "On Mon, Jul 1, 2024 at 11:44 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > You can create a dummy subscription on node_p\n> > and do a test similar to what we are doing in \"# Create failover slot\n> > to test its removal\".\n>\n> Your approach looks better than mine. I followed the approach.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 1 Jul 2024 17:36:36 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_createsubscriber: drop pre-existing subscriptions from the\n converted node" }, { "msg_contents": "On Mon, 1 Jul 2024 at 11:44, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Amit,\n>\n> Thanks for giving comments! PSA new version.\n>\n> > Thanks, this is a better approach. I have changed a few comments and\n> > made some other cosmetic changes. See attached.\n>\n> I checked your attached and LGTM. Based on that, I added some changes\n> like below:\n>\n> - Made dbname be escaped while listing up pre-existing subscriptions\n> Previous version could not pass tests by recent commits.\n> - Skipped dropping subscriptions in dry_run mode\n> I found the issue while poring the test to 040_pg_createsubscriber.pl.\n> - Added info-level output to follow other drop_XXX functions\n>\n> > BTW, why have you created a separate test file for this test? 
I think\n> > we should add a new test to one of the existing tests in\n> > 040_pg_createsubscriber.\n>\n> I was separated a test file for just confirmation purpose, I've planned to merge.\n> New patch set did that.\n>\n> > You can create a dummy subscription on node_p\n> > and do a test similar to what we are doing in \"# Create failover slot\n> > to test its removal\".\n>\n> Your approach looks better than mine. I followed the approach.\n\nHi Kuroda-san,\n\nI tested the patches on linux and windows and I confirm that it\nsuccessfully fixes the issue [1].\n\n[1]: https://www.postgresql.org/message-id/CANhcyEWvimA1-f6hSrA%3D9qkfR5SonFb56b36M%2B%2BvT%3DLiFj%3D76g%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Tue, 2 Jul 2024 09:57:04 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_createsubscriber: drop pre-existing subscriptions from the\n converted node" }, { "msg_contents": "On Tue, Jul 2, 2024 at 9:57 AM Shlok Kyal <[email protected]> wrote:\n>\n> On Mon, 1 Jul 2024 at 11:44, Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Dear Amit,\n> >\n> > Thanks for giving comments! PSA new version.\n> >\n> > > Thanks, this is a better approach. I have changed a few comments and\n> > > made some other cosmetic changes. See attached.\n> >\n> > I checked your attached and LGTM. Based on that, I added some changes\n> > like below:\n> >\n> > - Made dbname be escaped while listing up pre-existing subscriptions\n> > Previous version could not pass tests by recent commits.\n> > - Skipped dropping subscriptions in dry_run mode\n> > I found the issue while poring the test to 040_pg_createsubscriber.pl.\n> > - Added info-level output to follow other drop_XXX functions\n> >\n> > > BTW, why have you created a separate test file for this test? I think\n> > > we should add a new test to one of the existing tests in\n> > > 040_pg_createsubscriber.\n> >\n> > I was separated a test file for just confirmation purpose, I've planned to merge.\n> > New patch set did that.\n> >\n> > > You can create a dummy subscription on node_p\n> > > and do a test similar to what we are doing in \"# Create failover slot\n> > > to test its removal\".\n> >\n> > Your approach looks better than mine. I followed the approach.\n>\n> Hi Kuroda-san,\n>\n> I tested the patches on linux and windows and I confirm that it\n> successfully fixes the issue [1].\n>\n\nThanks for the verification. I have pushed the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 2 Jul 2024 15:33:45 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_createsubscriber: drop pre-existing subscriptions from the\n converted node" } ]
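As a companion to the thread above, the standalone libpq program below sketches the basic "drop pre-existing subscriptions" step that was discussed: list the subscriptions already present in the target database, disable each one, dissociate its replication slot, and drop it. This is only an illustration under simplifying assumptions (placeholder connection string, minimal error handling); it is not the code that went into pg_createsubscriber, which additionally keeps the subscriptions it created itself, skips the work in dry-run mode, and starts the standby with max_logical_replication_workers = 0 so no apply worker can connect to the old publisher during the conversion.

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");  /* placeholder conninfo */
        PGresult   *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* Subscriptions that already exist in the current database */
        res = PQexec(conn,
                     "SELECT s.subname FROM pg_catalog.pg_subscription s "
                     "WHERE s.subdbid = (SELECT oid FROM pg_catalog.pg_database "
                     "WHERE datname = current_database())");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
            PQclear(res);
            PQfinish(conn);
            return 1;
        }

        for (int i = 0; i < PQntuples(res); i++)
        {
            const char *name = PQgetvalue(res, i, 0);
            char       *qname = PQescapeIdentifier(conn, name, strlen(name));
            char        sql[3][512];

            if (qname == NULL)
                continue;

            /* Disable first so no apply worker can be launched for it. */
            snprintf(sql[0], sizeof(sql[0]), "ALTER SUBSCRIPTION %s DISABLE", qname);
            /* Dissociate the remote slot so DROP does not try to connect. */
            snprintf(sql[1], sizeof(sql[1]), "ALTER SUBSCRIPTION %s SET (slot_name = NONE)", qname);
            snprintf(sql[2], sizeof(sql[2]), "DROP SUBSCRIPTION %s", qname);

            for (int j = 0; j < 3; j++)
            {
                PGresult   *dres = PQexec(conn, sql[j]);

                if (PQresultStatus(dres) != PGRES_COMMAND_OK)
                    fprintf(stderr, "\"%s\" failed: %s", sql[j], PQerrorMessage(conn));
                PQclear(dres);
            }

            PQfreemem(qname);
        }

        PQclear(res);
        PQfinish(conn);
        return 0;
    }

Disabling the subscription and setting slot_name to NONE before dropping mirrors the approach described at the start of the thread, and means DROP SUBSCRIPTION never has to reach the old publisher.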
[ { "msg_contents": "I recently got write access to the cfbot repo[1] and machine from\nThomas. And I deployed a few improvements this week. The most\nsignificant one is that it is now much easier to use GitHub as part of\nyour patch review workflow.\n\nOn the cfbot website[2] there's now a \"D\" (diff) link next to each\ncommit fest entry. A good example of such a link would be the one for\nmy most recent commitfest entry[3]. There is a separate commit for\neach patch file and those commits contain the \"git format-patch\"\nmetadata. (this is not done using git am, but using git mailinfo +\npatch + sed, because git am is horrible at resolving conflicts)\n\nThe killer feature (imho) of GitHub diffs over looking at patch files:\nYou can press the \"Expand up\"/\"Expand down\" buttons on the left of the\ndiff to see some extra context that the patch file doesn't contain.\n\nYou can also add the cfbot repo as a remote to your local git\nrepository. That way you don't have to manually download patches and\napply them to your local checkout anymore:\n\n# Add the remote\ngit remote add -f cfbot https://github.com/postgresql-cfbot/postgresql.git\n# make future git pulls much quicker (optional)\ngit maintenance start\n# check out a commitfest entry\ngit checkout cf/5065\n\nP.S. Suggestions for further improvements are definitely appreciated.\nWe're currently already working on better integration between the\ncommitfest app website and the cfbot website.\n\nP.P.S The \"D\" links don't work for patches that need to be rebased\nsince before I deployed this change, but that problem should fix\nitself with time.\n\n[1]: https://github.com/macdice/cfbot\n[2]: http://cfbot.cputube.org/\n[3]: https://github.com/postgresql-cfbot/postgresql/compare/cf/5065~1...cf/5065\n\n\n", "msg_date": "Fri, 21 Jun 2024 16:36:13 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "cfbot update: Using GitHub for patch review" }, { "msg_contents": "On Fri, Jun 21, 2024 at 04:36:13PM +0200, Jelte Fennema-Nio wrote:\n> I recently got write access to the cfbot repo[1] and machine from\n> Thomas. And I deployed a few improvements this week. The most\n> significant one is that it is now much easier to use GitHub as part of\n> your patch review workflow.\n\nNice! Thank you.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 21 Jun 2024 10:54:50 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cfbot update: Using GitHub for patch review" }, { "msg_contents": "pá 21. 6. 2024 v 16:36 odesílatel Jelte Fennema-Nio <[email protected]> napsal:\n>\n> I recently got write access to the cfbot repo[1] and machine from\n> Thomas. And I deployed a few improvements this week. The most\n> significant one is that it is now much easier to use GitHub as part of\n> your patch review workflow.\n>\n> On the cfbot website[2] there's now a \"D\" (diff) link next to each\n> commit fest entry. A good example of such a link would be the one for\n> my most recent commitfest entry[3]. There is a separate commit for\n> each patch file and those commits contain the \"git format-patch\"\n> metadata. 
(this is not done using git am, but using git mailinfo +\n> patch + sed, because git am is horrible at resolving conflicts)\n\nThis is brilliant!\n\n> The killer feature (imho) of GitHub diffs over looking at patch files:\n> You can press the \"Expand up\"/\"Expand down\" buttons on the left of the\n> diff to see some extra context that the patch file doesn't contain.\n>\n> You can also add the cfbot repo as a remote to your local git\n> repository. That way you don't have to manually download patches and\n> apply them to your local checkout anymore:\n>\n> # Add the remote\n> git remote add -f cfbot https://github.com/postgresql-cfbot/postgresql.git\n> # make future git pulls much quicker (optional)\n> git maintenance start\n> # check out a commitfest entry\n> git checkout cf/5065\n>\n> P.S. Suggestions for further improvements are definitely appreciated.\n> We're currently already working on better integration between the\n> commitfest app website and the cfbot website.\n>\n> P.P.S The \"D\" links don't work for patches that need to be rebased\n> since before I deployed this change, but that problem should fix\n> itself with time.\n>\n> [1]: https://github.com/macdice/cfbot\n> [2]: http://cfbot.cputube.org/\n> [3]: https://github.com/postgresql-cfbot/postgresql/compare/cf/5065~1...cf/5065\n>\n>\n\n\n", "msg_date": "Fri, 21 Jun 2024 17:56:06 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cfbot update: Using GitHub for patch review" }, { "msg_contents": "pá 21. 6. 2024 v 17:55 odesílatel Nathan Bossart <[email protected]>\nnapsal:\n\n> On Fri, Jun 21, 2024 at 04:36:13PM +0200, Jelte Fennema-Nio wrote:\n> > I recently got write access to the cfbot repo[1] and machine from\n> > Thomas. And I deployed a few improvements this week. The most\n> > significant one is that it is now much easier to use GitHub as part of\n> > your patch review workflow.\n>\n> Nice! Thank you.\n\n\n+1\n\ngood work\n\nPavel\n\n>\n>\n> --\n> nathan\n>\n>\n>\n\npá 21. 6. 2024 v 17:55 odesílatel Nathan Bossart <[email protected]> napsal:On Fri, Jun 21, 2024 at 04:36:13PM +0200, Jelte Fennema-Nio wrote:\n> I recently got write access to the cfbot repo[1] and machine from\n> Thomas. And I deployed a few improvements this week. The most\n> significant one is that it is now much easier to use GitHub as part of\n> your patch review workflow.\n\nNice!  Thank you.+1good workPavel\n\n-- \nnathan", "msg_date": "Fri, 21 Jun 2024 18:09:42 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cfbot update: Using GitHub for patch review" }, { "msg_contents": "On Fri, Jun 21, 2024 at 8:06 PM Jelte Fennema-Nio <[email protected]>\nwrote:\n\n> I recently got write access to the cfbot repo[1] and machine from\n> Thomas. And I deployed a few improvements this week. The most\n> significant one is that it is now much easier to use GitHub as part of\n> your patch review workflow.\n>\n> On the cfbot website[2] there's now a \"D\" (diff) link next to each\n> commit fest entry. A good example of such a link would be the one for\n> my most recent commitfest entry[3]. There is a separate commit for\n> each patch file and those commits contain the \"git format-patch\"\n> metadata. 
(this is not done using git am, but using git mailinfo +\n> patch + sed, because git am is horrible at resolving conflicts)\n>\n> The killer feature (imho) of GitHub diffs over looking at patch files:\n> You can press the \"Expand up\"/\"Expand down\" buttons on the left of the\n> diff to see some extra context that the patch file doesn't contain.\n>\n> You can also add the cfbot repo as a remote to your local git\n> repository. That way you don't have to manually download patches and\n> apply them to your local checkout anymore:\n>\n> # Add the remote\n> git remote add -f cfbot https://github.com/postgresql-cfbot/postgresql.git\n> # make future git pulls much quicker (optional)\n> git maintenance start\n> # check out a commitfest entry\n> git checkout cf/5065\n>\n> P.S. Suggestions for further improvements are definitely appreciated.\n> We're currently already working on better integration between the\n> commitfest app website and the cfbot website.\n>\n> P.P.S The \"D\" links don't work for patches that need to be rebased\n> since before I deployed this change, but that problem should fix\n> itself with time.\n>\n\nThanks. Very helpful.\n\nWill it be possible to make it send an email containing the review\ncomments? Better even if a reply to that email adds comments/responses back\nto PR.\n\nI need to sign in to github to add my review comments. So those who do not\nhave a github account can not use it for review. But I don't think that can\nbe fixed. We need a way to know who left review comments.\n\nThere was some discussion at pgconf.dev about using gitlab instead of\ngithub. How easy is it to use gitlab if we decide to go that way?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, Jun 21, 2024 at 8:06 PM Jelte Fennema-Nio <[email protected]> wrote:I recently got write access to the cfbot repo[1] and machine from\nThomas. And I deployed a few improvements this week. The most\nsignificant one is that it is now much easier to use GitHub as part of\nyour patch review workflow.\n\nOn the cfbot website[2] there's now a \"D\" (diff) link next to each\ncommit fest entry.  A good example of such a link would be the one for\nmy most recent commitfest entry[3]. There is a separate commit for\neach patch file and those commits contain the \"git format-patch\"\nmetadata. (this is not done using git am, but using git mailinfo +\npatch + sed, because git am is horrible at resolving conflicts)\n\nThe killer feature (imho) of GitHub diffs over looking at patch files:\nYou can press the \"Expand up\"/\"Expand down\" buttons on the left of the\ndiff to see some extra context that the patch file doesn't contain.\n\nYou can also add the cfbot repo as a remote to your local git\nrepository. That way you don't have to manually download patches and\napply them to your local checkout anymore:\n\n# Add the remote\ngit remote add -f cfbot https://github.com/postgresql-cfbot/postgresql.git\n# make future git pulls much quicker (optional)\ngit maintenance start\n# check out a commitfest entry\ngit checkout cf/5065\n\nP.S. Suggestions for further improvements are definitely appreciated.\nWe're currently already working on better integration between the\ncommitfest app website and the cfbot website.\n\nP.P.S The \"D\" links don't work for patches that need to be rebased\nsince before I deployed this change, but that problem should fix\nitself with time.Thanks. Very helpful.Will it be possible to make it send an email containing the review comments? 
Better even if a reply to that email adds comments/responses back to PR.I need to sign in to github to add my review comments. So those who do not have a github account can not use it for review. But I don't think that can be fixed. We need a way to know who left review comments.There was some discussion at pgconf.dev about using gitlab instead of github. How easy is it to use gitlab if we decide to go that way?-- Best Wishes,Ashutosh Bapat", "msg_date": "Fri, 28 Jun 2024 18:40:04 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cfbot update: Using GitHub for patch review" }, { "msg_contents": "On Sat, Jun 29, 2024 at 1:10 AM Ashutosh Bapat\n<[email protected]> wrote:\n> I need to sign in to github to add my review comments. So those who do not have a github account can not use it for review. But I don't think that can be fixed. We need a way to know who left review comments.\n\nI don't think Jelte was talking about moving review discussion to\nGithub, just providing a link to *view* the patches there. Now I'm\nwondering if there is a way to disable comments on commits in the\npostgresql-cfbot GH account. I guess they'd be lost after 48 hours\nanyway when the branch gets force-pushed and commit hash changes? I\ndon't want people to start posting comments there that no one is\nlooking at.\n\n> There was some discussion at pgconf.dev about using gitlab instead of github. How easy is it to use gitlab if we decide to go that way?\n\ncfbot could certainly be configured to push (ie mirror) the same\nbranches to gitlab too (I don't have much experience with Gitlab, but\nif it's just a matter of registering an account + ssh key, adding it\nas a remote and pushing...). Then there could be [View on Github]\n[View on Gitlab] buttons, if people think that's useful (note \"View\",\nnot \"Review\"!). The Cirrus CI system is currently only capable of\ntesting stuff pushed to Github, though, so cfbot would continue to\npush stuff there.\n\nIf memory servers, Cirrus used to say that they were planning to add\nsupport for testing code in public Gitlab next, but now their FAQ says\ntheir next public git host will be Bit Bucket:\nhttps://cirrus-ci.org/faq/#only-github-support\n\nGiven that cfbot is currently only using Github because we have to to\nreach Cirrus CI, not because we actually want Github features like\nissue tracking or pull requests with review discussion, it hardly\nmatters if it's Github, Gitlab or any other public git host. And if\nwe eventually decide to move our whole workflow to one of those\nsystems and shut down the CF app, then cfbot will be retired, and\nyou'll just create PRs on that system. But so far, we continue to\nprefer the CF app + email.\n\nThe reason we liked Cirrus so much despite the existence of many other\nCI systems including the ones build into GH, GL, etc and many 3rd\nparty ones, was because it was the only provider that allowed enough\ncompute minutes for our needs, supported lots of operating systems,\nand had public links to log files suitable for sharing on out mailing\nlist or cfbot's web interface (if you click to see the log, it doesn't\nsay \"Rol up roll up, welcome to Foo Corporation, get your tickets\nhere!\"). I still don't know of any other CI system that would be as\ngood for us, other than building our own. I would say it's been a\nvery good choice so far. The original cfbot goal was \"feed the\nmailing list to a CI system\", with Github just a necessary part of the\nplumbing. 
It is a nice way to view patches though.\n\n\n", "msg_date": "Sat, 29 Jun 2024 11:12:56 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cfbot update: Using GitHub for patch review" }, { "msg_contents": "On Sat, 29 Jun 2024 at 01:13, Thomas Munro <[email protected]> wrote:\n>\n> On Sat, Jun 29, 2024 at 1:10 AM Ashutosh Bapat\n> <[email protected]> wrote:\n> > I need to sign in to github to add my review comments. So those who do not have a github account can not use it for review. But I don't think that can be fixed. We need a way to know who left review comments.\n>\n> I don't think Jelte was talking about moving review discussion to\n> Github, just providing a link to *view* the patches there.\n\nTotally correct. And I realize now I should have called that out\nexplicitly in the initial email.\n\nWhile I personally would love to be able to read & write comments on a\nGithub PR, integrating that with the mailing list in a way that the\ncommunity is happy with as a whole is no small task (both technically\nand politically).\n\nSo (for now) I took the easy way out and sidestepped all those\ndifficulties, by making the github branches of the cfbot (which we\nalready had) a bit more user friendly as a way to access patches in a\nread-only way.\n\n> Now I'm\n> wondering if there is a way to disable comments on commits in the\n> postgresql-cfbot GH account. I guess they'd be lost after 48 hours\n> anyway when the branch gets force-pushed and commit hash changes? I\n> don't want people to start posting comments there that no one is\n> looking at.\n\nIt seems you can disable them for 6 months at a time here:\nhttps://github.com/postgresql-cfbot/postgresql/settings/interaction_limits\n\n\n", "msg_date": "Sat, 29 Jun 2024 10:42:23 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cfbot update: Using GitHub for patch review" }, { "msg_contents": "On Sat, Jun 29, 2024 at 2:12 PM Jelte Fennema-Nio <[email protected]>\nwrote:\n\n> On Sat, 29 Jun 2024 at 01:13, Thomas Munro <[email protected]> wrote:\n> >\n> > On Sat, Jun 29, 2024 at 1:10 AM Ashutosh Bapat\n> > <[email protected]> wrote:\n> > > I need to sign in to github to add my review comments. So those who do\n> not have a github account can not use it for review. But I don't think that\n> can be fixed. We need a way to know who left review comments.\n> >\n> > I don't think Jelte was talking about moving review discussion to\n> > Github, just providing a link to *view* the patches there.\n>\n> Totally correct. And I realize now I should have called that out\n> explicitly in the initial email.\n>\n\nWhile I personally would like to see that one day, getting a consensus and\nchanging the process for whole community is a lot of effort. I didn't think\n(or mean) that we would move our review process to Github with this change.\nSorry if it sounded like that.\n\n\n>\n> While I personally would love to be able to read & write comments on a\n> Github PR, integrating that with the mailing list in a way that the\n> community is happy with as a whole is no small task (both technically\n> and politically).\n>\n\nIt is not a small amount of work, I agree. But it may be a way forward.\nThose who want to use PR for review can review them as long as the reviews\nare visible on the mailing list. Many of us already draft our review emails\nsimilar to how it would look like in a PR. 
If the PR system can send that\nemail on reviewer's behalf (as if it's sent by the reviewer) it will\nintegrate well with the current process. People will learn, get used to it\nand move eventually to PR based reviews.\n\n\n>\n> So (for now) I took the easy way out and sidestepped all those\n> difficulties, by making the github branches of the cfbot (which we\n> already had) a bit more user friendly as a way to access patches in a\n> read-only way.\n>\n\n+1.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Sat, Jun 29, 2024 at 2:12 PM Jelte Fennema-Nio <[email protected]> wrote:On Sat, 29 Jun 2024 at 01:13, Thomas Munro <[email protected]> wrote:\n>\n> On Sat, Jun 29, 2024 at 1:10 AM Ashutosh Bapat\n> <[email protected]> wrote:\n> > I need to sign in to github to add my review comments. So those who do not have a github account can not use it for review. But I don't think that can be fixed. We need a way to know who left review comments.\n>\n> I don't think Jelte was talking about moving review discussion to\n> Github, just providing a link to *view* the patches there.\n\nTotally correct. And I realize now I should have called that out\nexplicitly in the initial email.While I personally would like to see that one day, getting a consensus and changing the process for whole community is a lot of effort. I didn't think (or mean) that we would move our review process to Github with this change. Sorry if it sounded like that. \n\nWhile I personally would love to be able to read & write comments on a\nGithub PR, integrating that with the mailing list in a way that the\ncommunity is happy with as a whole is no small task (both technically\nand politically).It is not a small amount of work, I agree. But it may be a way forward. Those who want to use PR for review can review them as long as the reviews are visible on the mailing list. Many of us already draft our review emails similar to how it would look like in a PR. If the PR system can send that email on reviewer's behalf (as if it's sent by the reviewer) it will integrate well with the current process. People will learn, get used to it and move eventually to PR based reviews. \n\nSo (for now) I took the easy way out and sidestepped all those\ndifficulties, by making the github branches of the cfbot (which we\nalready had) a bit more user friendly as a way to access patches in a\nread-only way.+1.-- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 1 Jul 2024 15:31:38 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cfbot update: Using GitHub for patch review" } ]
[ { "msg_contents": "The release notes have this item:\n\n\tAllow specification of physical standbys that must be synchronized\n\tbefore they are visible to subscribers (Hou Zhijie, Shveta Malik)\n\n\tThe new server variable is standby_slot_names. \n\nIs standby_slot_names an accurate name for this GUC? It seems too\ngeneric.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 21 Jun 2024 11:37:54 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Fri, Jun 21, 2024 at 11:37:54AM -0400, Bruce Momjian wrote:\n> The release notes have this item:\n> \n> \tAllow specification of physical standbys that must be synchronized\n> \tbefore they are visible to subscribers (Hou Zhijie, Shveta Malik)\n> \n> \tThe new server variable is standby_slot_names. \n> \n> Is standby_slot_names an accurate name for this GUC? It seems too\n> generic.\n\n+1, I was considering bringing this up, too. I'm still thinking of\nalternate names to propose, though.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 21 Jun 2024 10:46:56 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "Hi,\n\nA humble input, as on primary we have #primary_slot_name = '' then should\nnot it be okay to have standby_slot_names or standby_slot_name ? It seems\nconsistent with the Guc on primary.\n\nAnother suggestion is *standby_replication_slots*.\n\nRegards,\nMuhammad Ikram\nBitnine Global.\n\nOn Fri, Jun 21, 2024 at 8:47 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Fri, Jun 21, 2024 at 11:37:54AM -0400, Bruce Momjian wrote:\n> > The release notes have this item:\n> >\n> > Allow specification of physical standbys that must be synchronized\n> > before they are visible to subscribers (Hou Zhijie, Shveta Malik)\n> >\n> > The new server variable is standby_slot_names.\n> >\n> > Is standby_slot_names an accurate name for this GUC? It seems too\n> > generic.\n>\n> +1, I was considering bringing this up, too. I'm still thinking of\n> alternate names to propose, though.\n>\n> --\n> nathan\n>\n>\n>\n\n-- \nMuhammad Ikram\n\nHi,A humble input, as on primary we have #primary_slot_name = ''  then should not it be okay to have standby_slot_names or standby_slot_name ? It seems  consistent with the Guc on primary.Another suggestion is standby_replication_slots.Regards,Muhammad IkramBitnine Global.On Fri, Jun 21, 2024 at 8:47 PM Nathan Bossart <[email protected]> wrote:On Fri, Jun 21, 2024 at 11:37:54AM -0400, Bruce Momjian wrote:\n> The release notes have this item:\n> \n>       Allow specification of physical standbys that must be synchronized\n>       before they are visible to subscribers (Hou Zhijie, Shveta Malik)\n> \n>       The new server variable is standby_slot_names. \n> \n> Is standby_slot_names an accurate name for this GUC?  It seems too\n> generic.\n\n+1, I was considering bringing this up, too.  
I'm still thinking of\nalternate names to propose, though.\n\n-- \nnathan\n\n\n-- Muhammad Ikram", "msg_date": "Sat, 22 Jun 2024 00:13:08 +0500", "msg_from": "Muhammad Ikram <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "Muhammad Ikram <[email protected]> writes:\n> A humble input, as on primary we have #primary_slot_name = '' then should\n> not it be okay to have standby_slot_names or standby_slot_name ? It seems\n> consistent with the Guc on primary.\n> Another suggestion is *standby_replication_slots*.\n\nIIUC, Bruce's complaint is that the name is too generic (which I agree\nwith). Given the stated functionality:\n\n>>>> Allow specification of physical standbys that must be synchronized\n>>>> before they are visible to subscribers (Hou Zhijie, Shveta Malik)\n\nit seems like the name ought to have some connection to\nsynchronization. Perhaps something like \"synchronized_standby_slots\"?\n\nI haven't read the patch, so I don't know if this name is especially\non-point. But \"standby_slot_names\" seems completely unhelpful, as\na server could well have slots that are for standbys but are not to\nbe included in this list.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jun 2024 15:50:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "Thanks Tom Lane. You are more insightful.\n\nRegards,\nIkram\n\nOn Sat, Jun 22, 2024 at 12:50 AM Tom Lane <[email protected]> wrote:\n\n> Muhammad Ikram <[email protected]> writes:\n> > A humble input, as on primary we have #primary_slot_name = '' then\n> should\n> > not it be okay to have standby_slot_names or standby_slot_name ? It seems\n> > consistent with the Guc on primary.\n> > Another suggestion is *standby_replication_slots*.\n>\n> IIUC, Bruce's complaint is that the name is too generic (which I agree\n> with). Given the stated functionality:\n>\n> >>>> Allow specification of physical standbys that must be synchronized\n> >>>> before they are visible to subscribers (Hou Zhijie, Shveta Malik)\n>\n> it seems like the name ought to have some connection to\n> synchronization. Perhaps something like \"synchronized_standby_slots\"?\n>\n> I haven't read the patch, so I don't know if this name is especially\n> on-point. But \"standby_slot_names\" seems completely unhelpful, as\n> a server could well have slots that are for standbys but are not to\n> be included in this list.\n>\n> regards, tom lane\n>\n\n\n-- \nMuhammad Ikram\n\nThanks Tom Lane. You are more insightful.Regards,IkramOn Sat, Jun 22, 2024 at 12:50 AM Tom Lane <[email protected]> wrote:Muhammad Ikram <[email protected]> writes:\n> A humble input, as on primary we have #primary_slot_name = ''  then should\n> not it be okay to have standby_slot_names or standby_slot_name ? It seems\n> consistent with the Guc on primary.\n> Another suggestion is *standby_replication_slots*.\n\nIIUC, Bruce's complaint is that the name is too generic (which I agree\nwith).  Given the stated functionality:\n\n>>>> Allow specification of physical standbys that must be synchronized\n>>>> before they are visible to subscribers (Hou Zhijie, Shveta Malik)\n\nit seems like the name ought to have some connection to\nsynchronization.  Perhaps something like \"synchronized_standby_slots\"?\n\nI haven't read the patch, so I don't know if this name is especially\non-point.  
But \"standby_slot_names\" seems completely unhelpful, as\na server could well have slots that are for standbys but are not to\nbe included in this list.\n\n                        regards, tom lane\n-- Muhammad Ikram", "msg_date": "Sat, 22 Jun 2024 01:03:09 +0500", "msg_from": "Muhammad Ikram <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Fri, Jun 21, 2024 at 03:50:00PM -0400, Tom Lane wrote:\n>>>>> Allow specification of physical standbys that must be synchronized\n>>>>> before they are visible to subscribers (Hou Zhijie, Shveta Malik)\n> \n> it seems like the name ought to have some connection to\n> synchronization. Perhaps something like \"synchronized_standby_slots\"?\n\nIMHO that might be a bit too close to synchronous_standby_names. But the\nname might not be the only issue, as there is a separate proposal [0] to\nadd _another_ GUC to tie standby_slot_names to synchronous replication. I\nwonder if this could just be a Boolean parameter or if folks really have\nuse-cases for both a list of synchronous standbys and a separate list of\nsynchronous standbys for failover slots.\n\n[0] https://postgr.es/m/CA%2B-JvFtq6f7%2BwAwSdud-x0yMTeMejUhpkyid1Xa_VNpRd_-oPw%40mail.gmail.com\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 21 Jun 2024 15:19:45 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Sat, Jun 22, 2024 at 1:49 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Fri, Jun 21, 2024 at 03:50:00PM -0400, Tom Lane wrote:\n> >>>>> Allow specification of physical standbys that must be synchronized\n> >>>>> before they are visible to subscribers (Hou Zhijie, Shveta Malik)\n> >\n> > it seems like the name ought to have some connection to\n> > synchronization. Perhaps something like \"synchronized_standby_slots\"?\n>\n> IMHO that might be a bit too close to synchronous_standby_names. But the\n> name might not be the only issue, as there is a separate proposal [0] to\n> add _another_ GUC to tie standby_slot_names to synchronous replication. I\n> wonder if this could just be a Boolean parameter or if folks really have\n> use-cases for both a list of synchronous standbys and a separate list of\n> synchronous standbys for failover slots.\n>\n\nBoth have separate functionalities. We need to wait for the standby's\nin synchronous_standby_names to be synced at the commit time whereas\nthe standby's in the standby_slot_names doesn't have such a\nrequirement. The standby's in the standby_slot_names are used by\nlogical WAL senders such that they will send decoded changes to\nplugins only after the specified replication slots confirm receiving\nWAL. So, combining them doesn't sound advisable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 22 Jun 2024 15:08:19 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Sat, Jun 22, 2024 at 1:49 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Fri, Jun 21, 2024 at 03:50:00PM -0400, Tom Lane wrote:\n> >>>>> Allow specification of physical standbys that must be synchronized\n> >>>>> before they are visible to subscribers (Hou Zhijie, Shveta Malik)\n> >\n> > it seems like the name ought to have some connection to\n> > synchronization. 
Perhaps something like \"synchronized_standby_slots\"?\n>\n> IMHO that might be a bit too close to synchronous_standby_names.\n>\n\nRight, but better than the current one. The other possibility could be\nwait_for_standby_slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 22 Jun 2024 15:17:03 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Sat, Jun 22, 2024 at 03:17:03PM +0530, Amit Kapila wrote:\n> On Sat, Jun 22, 2024 at 1:49 AM Nathan Bossart <[email protected]> wrote:\n> >\n> > On Fri, Jun 21, 2024 at 03:50:00PM -0400, Tom Lane wrote:\n> > >>>>> Allow specification of physical standbys that must be synchronized\n> > >>>>> before they are visible to subscribers (Hou Zhijie, Shveta Malik)\n> > >\n> > > it seems like the name ought to have some connection to\n> > > synchronization. Perhaps something like \"synchronized_standby_slots\"?\n> >\n> > IMHO that might be a bit too close to synchronous_standby_names.\n> >\n> \n> Right, but better than the current one. The other possibility could be\n> wait_for_standby_slots.\n\nFYI, changing this GUC name could force an initdb because\npostgresql.conf would have the old name and removing the comment to\nchange it would cause an error. Therefore, we should change it ASAP.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 22 Jun 2024 08:53:17 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> FYI, changing this GUC name could force an initdb because\n> postgresql.conf would have the old name and removing the comment to\n> change it would cause an error. Therefore, we should change it ASAP.\n\nThat's not reason for a forced initdb IMO. It's easily fixed by\nhand.\n\nAt this point we're into the release freeze for beta2, so even\nif we had consensus on a new name it should wait till after.\nSo I see no particular urgency to make a decision.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 22 Jun 2024 10:43:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Saturday, June 22, 2024 5:47 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Sat, Jun 22, 2024 at 1:49 AM Nathan Bossart\r\n> <[email protected]> wrote:\r\n> >\r\n> > On Fri, Jun 21, 2024 at 03:50:00PM -0400, Tom Lane wrote:\r\n> > >>>>> Allow specification of physical standbys that must be\r\n> > >>>>> synchronized before they are visible to subscribers (Hou Zhijie,\r\n> > >>>>> Shveta Malik)\r\n> > >\r\n> > > it seems like the name ought to have some connection to\r\n> > > synchronization. Perhaps something like \"synchronized_standby_slots\"?\r\n> >\r\n> > IMHO that might be a bit too close to synchronous_standby_names.\r\n> >\r\n> \r\n> Right, but better than the current one. 
The other possibility could be\r\n> wait_for_standby_slots.\r\n\r\nI agree the current name seems too generic and the suggested ' synchronized_standby_slots '\r\nis better than the current one.\r\n\r\nSome other ideas could be:\r\n\r\nsynchronize_slots_on_standbys: it indicates that the standbys that enabled\r\nslot sync should be listed in this GUC.\r\n\r\nlogical_replication_wait_slots: it means the logical replication(logical\r\nWalsender process) will wait for these slots to advance the confirm flush\r\nlsn before proceeding.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Tue, 25 Jun 2024 02:21:25 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Tue, Jun 25, 2024 at 11:21 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Saturday, June 22, 2024 5:47 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Sat, Jun 22, 2024 at 1:49 AM Nathan Bossart\n> > <[email protected]> wrote:\n> > >\n> > > On Fri, Jun 21, 2024 at 03:50:00PM -0400, Tom Lane wrote:\n> > > >>>>> Allow specification of physical standbys that must be\n> > > >>>>> synchronized before they are visible to subscribers (Hou Zhijie,\n> > > >>>>> Shveta Malik)\n> > > >\n> > > > it seems like the name ought to have some connection to\n> > > > synchronization. Perhaps something like \"synchronized_standby_slots\"?\n> > >\n> > > IMHO that might be a bit too close to synchronous_standby_names.\n> > >\n> >\n> > Right, but better than the current one. The other possibility could be\n> > wait_for_standby_slots.\n>\n> I agree the current name seems too generic and the suggested ' synchronized_standby_slots '\n> is better than the current one.\n>\n> Some other ideas could be:\n>\n> synchronize_slots_on_standbys: it indicates that the standbys that enabled\n> slot sync should be listed in this GUC.\n>\n> logical_replication_wait_slots: it means the logical replication(logical\n> Walsender process) will wait for these slots to advance the confirm flush\n> lsn before proceeding.\n\nI feel that the name that has some connection to \"logical replication\"\nalso sounds good. Let me add some ideas:\n\n- logical_replication_synchronous_standby_slots (might be too long)\n- logical_replication_synchronous_slots\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 11:50:14 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Tue, Jun 25, 2024 at 8:20 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Jun 25, 2024 at 11:21 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > I agree the current name seems too generic and the suggested ' synchronized_standby_slots '\n> > is better than the current one.\n> >\n> > Some other ideas could be:\n> >\n> > synchronize_slots_on_standbys: it indicates that the standbys that enabled\n> > slot sync should be listed in this GUC.\n> >\n> > logical_replication_wait_slots: it means the logical replication(logical\n> > Walsender process) will wait for these slots to advance the confirm flush\n> > lsn before proceeding.\n>\n> I feel that the name that has some connection to \"logical replication\"\n> also sounds good. 
Let me add some ideas:\n>\n> - logical_replication_synchronous_standby_slots (might be too long)\n> - logical_replication_synchronous_slots\n>\n\nI see your point about keeping logical_replication in the name but\nthat could also lead one to think that this list can contain logical\nslots. OTOH, there is some value in keeping '_standby_' in the name as\nthat is more closely associated with physical standby's and this list\ncontains physical slots corresponding to physical standby's. So, my\npreference is in order as follows: synchronized_standby_slots,\nwait_for_standby_slots, logical_replication_wait_slots,\nlogical_replication_synchronous_slots, and\nlogical_replication_synchronous_standby_slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 25 Jun 2024 10:24:41 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 25, 2024 at 10:24:41AM +0530, Amit Kapila wrote:\n> On Tue, Jun 25, 2024 at 8:20 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Jun 25, 2024 at 11:21 AM Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > I agree the current name seems too generic and the suggested ' synchronized_standby_slots '\n> > > is better than the current one.\n> > >\n> > > Some other ideas could be:\n> > >\n> > > synchronize_slots_on_standbys: it indicates that the standbys that enabled\n> > > slot sync should be listed in this GUC.\n> > >\n> > > logical_replication_wait_slots: it means the logical replication(logical\n> > > Walsender process) will wait for these slots to advance the confirm flush\n> > > lsn before proceeding.\n> >\n> > I feel that the name that has some connection to \"logical replication\"\n> > also sounds good. Let me add some ideas:\n> >\n> > - logical_replication_synchronous_standby_slots (might be too long)\n> > - logical_replication_synchronous_slots\n> >\n> \n> I see your point about keeping logical_replication in the name but\n> that could also lead one to think that this list can contain logical\n> slots.\n\nAgree, and we may add the same functionality for physical replication slots\nin the future too (it has been discussed in the thread [1]). So I don't think\n\"logical\" should be part of the name.\n\n> OTOH, there is some value in keeping '_standby_' in the name as\n> that is more closely associated with physical standby's and this list\n> contains physical slots corresponding to physical standby's. So, my\n> preference is in order as follows: synchronized_standby_slots,\n> wait_for_standby_slots, logical_replication_wait_slots,\n> logical_replication_synchronous_slots, and\n> logical_replication_synchronous_standby_slots.\n> \n\nI like the idea of having \"synchronize[d]\" in the name as it makes think of \nthe feature it is linked to [2]. 
The slots mentioned in this parameter are\nlinked to the \"primary_slot_name\" parameter on the standby, so what about?\n\nsynchronized_primary_slot_names \n\nIt makes clear it is somehow linked to \"primary_slot_name\" and that we want them\nto be in sync.\n\nSo I'd vote for (in that order);\n\nsynchronized_primary_slot_names, synchronized_standby_slots\n\n[1]: https://www.postgresql.org/message-id/bb437218-73bc-34c3-b8fb-8c1be4ddaec9%40enterprisedb.com\n[2]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=93db6cbda037f1be9544932bd9a785dabf3ff712\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 06:35:56 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Tue, Jun 25, 2024 at 1:54 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jun 25, 2024 at 8:20 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Jun 25, 2024 at 11:21 AM Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > I agree the current name seems too generic and the suggested ' synchronized_standby_slots '\n> > > is better than the current one.\n> > >\n> > > Some other ideas could be:\n> > >\n> > > synchronize_slots_on_standbys: it indicates that the standbys that enabled\n> > > slot sync should be listed in this GUC.\n> > >\n> > > logical_replication_wait_slots: it means the logical replication(logical\n> > > Walsender process) will wait for these slots to advance the confirm flush\n> > > lsn before proceeding.\n> >\n> > I feel that the name that has some connection to \"logical replication\"\n> > also sounds good. Let me add some ideas:\n> >\n> > - logical_replication_synchronous_standby_slots (might be too long)\n> > - logical_replication_synchronous_slots\n> >\n>\n> I see your point about keeping logical_replication in the name but\n> that could also lead one to think that this list can contain logical\n> slots.\n\nRight.\n\n> OTOH, there is some value in keeping '_standby_' in the name as\n> that is more closely associated with physical standby's and this list\n> contains physical slots corresponding to physical standby's.\n\nAgreed.\n\n> So, my\n> preference is in order as follows: synchronized_standby_slots,\n> wait_for_standby_slots, logical_replication_wait_slots,\n> logical_replication_synchronous_slots, and\n> logical_replication_synchronous_standby_slots.\n\nI also prefer synchronized_standby_slots.\n\n From a different angle just for discussion, is it worth considering\nthe term 'failover' since the purpose of this feature is to ensure a\nstandby to be ready for failover in terms of logical replication? 
For\nexample, failover_standby_slot_names?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 15:59:41 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Tue, Jun 25, 2024 at 12:30 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Jun 25, 2024 at 1:54 PM Amit Kapila <[email protected]> wrote:\n> >\n>\n> > So, my\n> > preference is in order as follows: synchronized_standby_slots,\n> > wait_for_standby_slots, logical_replication_wait_slots,\n> > logical_replication_synchronous_slots, and\n> > logical_replication_synchronous_standby_slots.\n>\n> I also prefer synchronized_standby_slots.\n>\n> From a different angle just for discussion, is it worth considering\n> the term 'failover' since the purpose of this feature is to ensure a\n> standby to be ready for failover in terms of logical replication? For\n> example, failover_standby_slot_names?\n>\n\nI feel synchronized better indicates the purpose because we ensure\nsuch slots are synchronized before we process changes for logical\nfailover slots. We already have a 'failover' option for logical slots\nwhich could make things confusing if we add 'failover' where physical\nslots need to be specified.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 25 Jun 2024 14:02:09 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Tue, Jun 25, 2024 at 02:02:09PM +0530, Amit Kapila wrote:\n> On Tue, Jun 25, 2024 at 12:30 PM Masahiko Sawada <[email protected]> wrote:\n>> On Tue, Jun 25, 2024 at 1:54 PM Amit Kapila <[email protected]> wrote:\n>> > So, my\n>> > preference is in order as follows: synchronized_standby_slots,\n>> > wait_for_standby_slots, logical_replication_wait_slots,\n>> > logical_replication_synchronous_slots, and\n>> > logical_replication_synchronous_standby_slots.\n>>\n>> I also prefer synchronized_standby_slots.\n>>\n>> From a different angle just for discussion, is it worth considering\n>> the term 'failover' since the purpose of this feature is to ensure a\n>> standby to be ready for failover in terms of logical replication? For\n>> example, failover_standby_slot_names?\n> \n> I feel synchronized better indicates the purpose because we ensure\n> such slots are synchronized before we process changes for logical\n> failover slots. 
We already have a 'failover' option for logical slots\n> which could make things confusing if we add 'failover' where physical\n> slots need to be specified.\n\nI'm fine with synchronized_standby_slots.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 25 Jun 2024 12:35:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Tue, Jun 25, 2024 at 5:32 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jun 25, 2024 at 12:30 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Jun 25, 2024 at 1:54 PM Amit Kapila <[email protected]> wrote:\n> > >\n> >\n> > > So, my\n> > > preference is in order as follows: synchronized_standby_slots,\n> > > wait_for_standby_slots, logical_replication_wait_slots,\n> > > logical_replication_synchronous_slots, and\n> > > logical_replication_synchronous_standby_slots.\n> >\n> > I also prefer synchronized_standby_slots.\n> >\n> > From a different angle just for discussion, is it worth considering\n> > the term 'failover' since the purpose of this feature is to ensure a\n> > standby to be ready for failover in terms of logical replication? For\n> > example, failover_standby_slot_names?\n> >\n>\n> I feel synchronized better indicates the purpose because we ensure\n> such slots are synchronized before we process changes for logical\n> failover slots. We already have a 'failover' option for logical slots\n> which could make things confusing if we add 'failover' where physical\n> slots need to be specified.\n\nAgreed. So +1 for synchronized_stnadby_slots.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 26 Jun 2024 10:39:47 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Wednesday, June 26, 2024 9:40 AM Masahiko Sawada <[email protected]> wrote:\r\n> \r\n> On Tue, Jun 25, 2024 at 5:32 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Tue, Jun 25, 2024 at 12:30 PM Masahiko Sawada\r\n> <[email protected]> wrote:\r\n> > >\r\n> > > On Tue, Jun 25, 2024 at 1:54 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > > >\r\n> > >\r\n> > > > So, my\r\n> > > > preference is in order as follows: synchronized_standby_slots,\r\n> > > > wait_for_standby_slots, logical_replication_wait_slots,\r\n> > > > logical_replication_synchronous_slots, and\r\n> > > > logical_replication_synchronous_standby_slots.\r\n> > >\r\n> > > I also prefer synchronized_standby_slots.\r\n> > >\r\n> > > From a different angle just for discussion, is it worth considering\r\n> > > the term 'failover' since the purpose of this feature is to ensure a\r\n> > > standby to be ready for failover in terms of logical replication?\r\n> > > For example, failover_standby_slot_names?\r\n> > >\r\n> >\r\n> > I feel synchronized better indicates the purpose because we ensure\r\n> > such slots are synchronized before we process changes for logical\r\n> > failover slots. We already have a 'failover' option for logical slots\r\n> > which could make things confusing if we add 'failover' where physical\r\n> > slots need to be specified.\r\n> \r\n> Agreed. So +1 for synchronized_stnadby_slots.\r\n\r\n+1.\r\n\r\nSince there is a consensus on this name, I am attaching the patch to rename\r\nthe GUC to synchronized_stnadby_slots. 
I have confirmed that the regression\r\ntests and pgindent passed for the patch.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 26 Jun 2024 04:17:45 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: New standby_slot_names GUC in PG 17" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 26, 2024 at 04:17:45AM +0000, Zhijie Hou (Fujitsu) wrote:\n> On Wednesday, June 26, 2024 9:40 AM Masahiko Sawada <[email protected]> wrote:\n> > \n> > On Tue, Jun 25, 2024 at 5:32 PM Amit Kapila <[email protected]>\n> > wrote:\n> > >\n> > > I feel synchronized better indicates the purpose because we ensure\n> > > such slots are synchronized before we process changes for logical\n> > > failover slots. We already have a 'failover' option for logical slots\n> > > which could make things confusing if we add 'failover' where physical\n> > > slots need to be specified.\n> > \n> > Agreed. So +1 for synchronized_stnadby_slots.\n> \n> +1.\n> \n> Since there is a consensus on this name, I am attaching the patch to rename\n> the GUC to synchronized_stnadby_slots. I have confirmed that the regression\n> tests and pgindent passed for the patch.\n> \n\nThanks for the patch!\n\nA few comments:\n\n1 ====\n\nIn the commit message:\n\n\"\nThe standby_slot_names GUC is intended to allow specification of physical\n standby slots that must be synchronized before they are visible to\n subscribers\n\"\n\nNot sure that wording is correct, if we feel the need to explain the GUC,\nmaybe repeat some wording from bf279ddd1c?\n\n2 ====\n\nShould we rename StandbySlotNamesConfigData too?\n\n3 ====\n\nShould we rename SlotExistsInStandbySlotNames too?\n\n4 ====\n\nShould we rename validate_standby_slots() too?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 26 Jun 2024 04:49:16 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Wed, Jun 26, 2024 at 10:19 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n>\n> 2 ====\n>\n> Should we rename StandbySlotNamesConfigData too?\n>\n\nHow about SyncStandbySlotsConfigData?\n\n> 3 ====\n>\n> Should we rename SlotExistsInStandbySlotNames too?\n>\n\nSimilarly SlotExistsInSyncStandbySlots?\n\n> 4 ====\n>\n> Should we rename validate_standby_slots() too?\n>\n\nAnd validate_sync_standby_slots()?\n\n--- a/doc/src/sgml/release-17.sgml\n+++ b/doc/src/sgml/release-17.sgml\n@@ -1325,7 +1325,7 @@ Author: Michael Paquier <[email protected]>\n\n <!--\n Author: Amit Kapila <[email protected]>\n-2024-03-08 [bf279ddd1] Introduce a new GUC 'standby_slot_names'.\n+2024-03-08 [bf279ddd1] Introduce a new GUC 'synchronized_standby_slots'.\n\nI am not sure if it is a good idea to change release notes in the same\ncommit as the code change. 
I would prefer to do it in a separate\ncommit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 26 Jun 2024 11:39:45 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Wed, Jun 26, 2024 at 11:39:45AM +0530, Amit Kapila wrote:\n> --- a/doc/src/sgml/release-17.sgml\n> +++ b/doc/src/sgml/release-17.sgml\n> @@ -1325,7 +1325,7 @@ Author: Michael Paquier <[email protected]>\n> \n> <!--\n> Author: Amit Kapila <[email protected]>\n> -2024-03-08 [bf279ddd1] Introduce a new GUC 'standby_slot_names'.\n> +2024-03-08 [bf279ddd1] Introduce a new GUC 'synchronized_standby_slots'.\n> \n> I am not sure if it is a good idea to change release notes in the same\n> commit as the code change. I would prefer to do it in a separate\n> commit.\n\nThe existing commits referenced cannot change, but it's surely OK to\nadd a reference to the commit doing the rename for this item in the\nrelease notes, and update the release notes to reflect the new GUC\nname. Using two separate commits ensures that the correct reference\nabout the rename is added to the release notes, so that's the correct\nthing to do, IMHO.\n--\nMichael", "msg_date": "Wed, 26 Jun 2024 15:24:00 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 26, 2024 at 11:39:45AM +0530, Amit Kapila wrote:\n> On Wed, Jun 26, 2024 at 10:19 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> >\n> > 2 ====\n> >\n> > Should we rename StandbySlotNamesConfigData too?\n> >\n> \n> How about SyncStandbySlotsConfigData?\n> \n> > 3 ====\n> >\n> > Should we rename SlotExistsInStandbySlotNames too?\n> >\n> \n> Similarly SlotExistsInSyncStandbySlots?\n> \n> > 4 ====\n> >\n> > Should we rename validate_standby_slots() too?\n> >\n> \n> And validate_sync_standby_slots()?\n> \n\nThanks!\n\nAll of the above proposal sound good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 26 Jun 2024 06:42:30 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Wednesday, June 26, 2024 12:49 PM Bertrand Drouvot <[email protected]> wrote:\r\n> \r\n> Hi,\r\n> \r\n> On Wed, Jun 26, 2024 at 04:17:45AM +0000, Zhijie Hou (Fujitsu) wrote:\r\n> > On Wednesday, June 26, 2024 9:40 AM Masahiko Sawada\r\n> <[email protected]> wrote:\r\n> > >\r\n> > > On Tue, Jun 25, 2024 at 5:32 PM Amit Kapila\r\n> > > <[email protected]>\r\n> > > wrote:\r\n> > > >\r\n> > > > I feel synchronized better indicates the purpose because we ensure\r\n> > > > such slots are synchronized before we process changes for logical\r\n> > > > failover slots. We already have a 'failover' option for logical\r\n> > > > slots which could make things confusing if we add 'failover' where\r\n> > > > physical slots need to be specified.\r\n> > >\r\n> > > Agreed. So +1 for synchronized_stnadby_slots.\r\n> >\r\n> > +1.\r\n> >\r\n> > Since there is a consensus on this name, I am attaching the patch to\r\n> > rename the GUC to synchronized_stnadby_slots. 
I have confirmed that\r\n> > the regression tests and pgindent passed for the patch.\r\n> A few comments:\r\n\r\nThanks for the comments!\r\n\r\n> 1 ====\r\n> \r\n> In the commit message:\r\n> \r\n> \"\r\n> The standby_slot_names GUC is intended to allow specification of physical\r\n> standby slots that must be synchronized before they are visible to\r\n> subscribers\r\n> \"\r\n> \r\n> Not sure that wording is correct, if we feel the need to explain the GUC, maybe\r\n> repeat some wording from bf279ddd1c?\r\n\r\nI intentionally copied some words from release note of this GUC which was\r\nalso part of the content in the initial email of this thread. I think it\r\nwould be easy to understand than the original commit msg. But others may\r\nhave different opinion, so I would leave the decision to the committer. (I adjusted\r\na bit the word in this version).\r\n\r\n> \r\n> 2 ====\r\n> \r\n> Should we rename StandbySlotNamesConfigData too?\r\n> \r\n> 3 ====\r\n> \r\n> Should we rename SlotExistsInStandbySlotNames too?\r\n> \r\n> 4 ====\r\n> \r\n> Should we rename validate_standby_slots() too?\r\n> \r\n\r\nRenamed these to the names suggested by Amit.\r\n\r\nAttach the v2 patch set which addressed above and removed\r\nthe changes in release-17.sgml according to the comment from Amit.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Wed, 26 Jun 2024 09:15:48 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: New standby_slot_names GUC in PG 17" }, { "msg_contents": "Hi,\n\nOn Wed, Jun 26, 2024 at 09:15:48AM +0000, Zhijie Hou (Fujitsu) wrote:\n> Renamed these to the names suggested by Amit.\n> \n> Attach the v2 patch set which addressed above and removed\n> the changes in release-17.sgml according to the comment from Amit.\n> \n\nThanks! LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 26 Jun 2024 12:30:49 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Wed, Jun 26, 2024 at 6:00 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Wed, Jun 26, 2024 at 09:15:48AM +0000, Zhijie Hou (Fujitsu) wrote:\n> > Renamed these to the names suggested by Amit.\n> >\n> > Attach the v2 patch set which addressed above and removed\n> > the changes in release-17.sgml according to the comment from Amit.\n> >\n>\n> Thanks! LGTM.\n>\n\nAs per my reading of this thread, we have an agreement on changing the\nGUC name standby_slot_names to synchronized_standby_slots. I'll wait\nfor a day and push the change unless someone thinks otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 26 Jun 2024 18:21:32 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On 21.06.24 17:37, Bruce Momjian wrote:\n> The release notes have this item:\n> \n> \tAllow specification of physical standbys that must be synchronized\n> \tbefore they are visible to subscribers (Hou Zhijie, Shveta Malik)\n> \n> \tThe new server variable is standby_slot_names.\n> \n> Is standby_slot_names an accurate name for this GUC? It seems too\n> generic.\n\nThis was possibly inspired by pg_failover_slots.standby_slot_names \n(which in turn came from pglogical.standby_slot_names). 
In those cases, \nyou have some more context from the extension prefix.\n\nThe new suggested names sound good to me.\n\n\n\n", "msg_date": "Wed, 26 Jun 2024 21:32:27 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Wed, Jun 26, 2024 at 6:15 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Wednesday, June 26, 2024 12:49 PM Bertrand Drouvot <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Wed, Jun 26, 2024 at 04:17:45AM +0000, Zhijie Hou (Fujitsu) wrote:\n> > > On Wednesday, June 26, 2024 9:40 AM Masahiko Sawada\n> > <[email protected]> wrote:\n> > > >\n> > > > On Tue, Jun 25, 2024 at 5:32 PM Amit Kapila\n> > > > <[email protected]>\n> > > > wrote:\n> > > > >\n> > > > > I feel synchronized better indicates the purpose because we ensure\n> > > > > such slots are synchronized before we process changes for logical\n> > > > > failover slots. We already have a 'failover' option for logical\n> > > > > slots which could make things confusing if we add 'failover' where\n> > > > > physical slots need to be specified.\n> > > >\n> > > > Agreed. So +1 for synchronized_stnadby_slots.\n> > >\n> > > +1.\n> > >\n> > > Since there is a consensus on this name, I am attaching the patch to\n> > > rename the GUC to synchronized_stnadby_slots. I have confirmed that\n> > > the regression tests and pgindent passed for the patch.\n> > A few comments:\n>\n> Thanks for the comments!\n>\n> > 1 ====\n> >\n> > In the commit message:\n> >\n> > \"\n> > The standby_slot_names GUC is intended to allow specification of physical\n> > standby slots that must be synchronized before they are visible to\n> > subscribers\n> > \"\n> >\n> > Not sure that wording is correct, if we feel the need to explain the GUC, maybe\n> > repeat some wording from bf279ddd1c?\n>\n> I intentionally copied some words from release note of this GUC which was\n> also part of the content in the initial email of this thread. I think it\n> would be easy to understand than the original commit msg. But others may\n> have different opinion, so I would leave the decision to the committer. (I adjusted\n> a bit the word in this version).\n>\n> >\n> > 2 ====\n> >\n> > Should we rename StandbySlotNamesConfigData too?\n> >\n> > 3 ====\n> >\n> > Should we rename SlotExistsInStandbySlotNames too?\n> >\n> > 4 ====\n> >\n> > Should we rename validate_standby_slots() too?\n> >\n>\n> Renamed these to the names suggested by Amit.\n>\n> Attach the v2 patch set which addressed above and removed\n> the changes in release-17.sgml according to the comment from Amit.\n>\n\nThank you for updating the patch. The v2 patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 27 Jun 2024 10:44:21 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Thu, Jun 27, 2024 at 7:14 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jun 26, 2024 at 6:15 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n>\n> Thank you for updating the patch. 
The v2 patch looks good to me.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 1 Jul 2024 16:14:56 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Monday, July 1, 2024 6:45 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Thu, Jun 27, 2024 at 7:14 AM Masahiko Sawada\r\n> <[email protected]> wrote:\r\n> >\r\n> > On Wed, Jun 26, 2024 at 6:15 PM Zhijie Hou (Fujitsu)\r\n> > <[email protected]> wrote:\r\n> >\r\n> > Thank you for updating the patch. The v2 patch looks good to me.\r\n> >\r\n> \r\n> Pushed.\r\n\r\nThanks! I am attaching another patch to modify the release note as discussed.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Mon, 1 Jul 2024 12:31:49 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: New standby_slot_names GUC in PG 17" }, { "msg_contents": "On Mon, Jul 1, 2024 at 6:01 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Thanks! I am attaching another patch to modify the release note as discussed.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 2 Jul 2024 11:55:29 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New standby_slot_names GUC in PG 17" } ]
[ { "msg_contents": "While investigating a bug over on [1], I found that\nvacuum_set_xid_limits() is calculating freezeLimit in an unsafe way on\nat least Postgres 14 and 15.\n\n limit = *oldestXmin - freezemin;\n safeLimit = ReadNextTransactionId() - autovacuum_freeze_max_age;\n if (TransactionIdPrecedes(limit, safeLimit))\n limit = *oldestXmin;\n *freezeLimit = limit;\n\nAll of these are unsigned, so it doesn't work very nicely when\nfreezemin (based on autovacuum_freeze_min_age) is larger than\noldestXmin and autovacuum_freeze_max_age is bigger than the next\ntransaction ID -- which is pretty common right after initdb, for\nexample.\n\nI noticed the effect of this because FreezeLimit is saved in the\nLVRelState and passed to heap_prepare_freeze_tuple() as cutoff_xid,\nwhich is used to guard against freezing tuples that shouldn't be\nfrozen.\n\nI didn't propose a fix because I just want to make sure I'm not\nmissing something first.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_Y_NJzF4-8gzTTeaOuUL3CcGoXPjXcAHbTTygT8AyVqag%40mail.gmail.com\n\n\n", "msg_date": "Fri, 21 Jun 2024 16:22:44 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "FreezeLimit underflows in pg14 and 15 causing incorrect behavior in\n heap_prepare_freeze_tuple" }, { "msg_contents": "On Fri, Jun 21, 2024 at 4:22 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> While investigating a bug over on [1], I found that\n> vacuum_set_xid_limits() is calculating freezeLimit in an unsafe way on\n> at least Postgres 14 and 15.\n>\n> limit = *oldestXmin - freezemin;\n> safeLimit = ReadNextTransactionId() - autovacuum_freeze_max_age;\n> if (TransactionIdPrecedes(limit, safeLimit))\n> limit = *oldestXmin;\n> *freezeLimit = limit;\n>\n> All of these are unsigned, so it doesn't work very nicely when\n> freezemin (based on autovacuum_freeze_min_age) is larger than\n> oldestXmin and autovacuum_freeze_max_age is bigger than the next\n> transaction ID -- which is pretty common right after initdb, for\n> example.\n>\n> I noticed the effect of this because FreezeLimit is saved in the\n> LVRelState and passed to heap_prepare_freeze_tuple() as cutoff_xid,\n> which is used to guard against freezing tuples that shouldn't be\n> frozen.\n>\n> I didn't propose a fix because I just want to make sure I'm not\n> missing something first.\n\nHmm. So perhaps this subtraction results in the desired behavior for\nfreeze limit -- but by using FreezeLimit as the cutoff_xid for\nheap_prepare_freeze_tuple(), you can still end up considering freezing\ntuples with xmax older than OldestXmin.\n\nThis results in erroring out with \"cannot freeze committed xmax\" on 16\nand master but not erroring out like this in 14 and 15 for the same\ntuple and cutoff values.\n\n- Melanie\n\n\n", "msg_date": "Sat, 22 Jun 2024 10:42:50 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FreezeLimit underflows in pg14 and 15 causing incorrect behavior\n in heap_prepare_freeze_tuple" }, { "msg_contents": "On Sat, Jun 22, 2024 at 10:43 AM Melanie Plageman\n<[email protected]> wrote:\n> Hmm. So perhaps this subtraction results in the desired behavior for\n> freeze limit -- but by using FreezeLimit as the cutoff_xid for\n> heap_prepare_freeze_tuple(), you can still end up considering freezing\n> tuples with xmax older than OldestXmin.\n\nUsing a FreezeLimit > OldestXmin is just wrong. 
I don't think that\nthat even needs to be discussed.\n\n> This results in erroring out with \"cannot freeze committed xmax\" on 16\n> and master but not erroring out like this in 14 and 15 for the same\n> tuple and cutoff values.\n\nI don't follow. I thought that 16 and master don't have this\nparticular problem? Later versions don't use safeLimit as FreezeLimit\nlike this.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 22 Jun 2024 10:53:31 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreezeLimit underflows in pg14 and 15 causing incorrect behavior\n in heap_prepare_freeze_tuple" }, { "msg_contents": "On Sat, Jun 22, 2024 at 10:53 AM Peter Geoghegan <[email protected]> wrote:\n>\n> On Sat, Jun 22, 2024 at 10:43 AM Melanie Plageman\n> <[email protected]> wrote:\n> > Hmm. So perhaps this subtraction results in the desired behavior for\n> > freeze limit -- but by using FreezeLimit as the cutoff_xid for\n> > heap_prepare_freeze_tuple(), you can still end up considering freezing\n> > tuples with xmax older than OldestXmin.\n>\n> Using a FreezeLimit > OldestXmin is just wrong. I don't think that\n> that even needs to be discussed.\n\nBecause it is an unsigned int that wraps around, FreezeLimit can\nprecede OldestXmin, but we can have a tuple whose xmax precedes\nOldestXmin but does not precede FreezeLimit. So, the question is if it\nis okay to freeze tuples whose xmax precedes OldestXmin but follows\nFreezeLimit.\n\n> > This results in erroring out with \"cannot freeze committed xmax\" on 16\n> > and master but not erroring out like this in 14 and 15 for the same\n> > tuple and cutoff values.\n>\n> I don't follow. I thought that 16 and master don't have this\n> particular problem? Later versions don't use safeLimit as FreezeLimit\n> like this.\n\nYes, 16 and master don't consider freezing a tuple with an xmax older\nthan OldestXmin because they use OldestXmin for cutoff_xid and that\nerrors out in heap_prepare_freeze_tuple(). 14 and 15 (and maybe\nearlier, but I didn't check) use FreezeLimit so they do consider\nfreezing tuple with xmax older than OldestXmin.\n\n- Melanie\n\n\n", "msg_date": "Sat, 22 Jun 2024 11:53:47 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FreezeLimit underflows in pg14 and 15 causing incorrect behavior\n in heap_prepare_freeze_tuple" } ]
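
The unsigned arithmetic described in this thread is easy to reproduce outside the server. The sketch below is not PostgreSQL code: it copies only the normal-XID branch of TransactionIdPrecedes (a signed modulo-2^32 comparison) and plugs in illustrative values resembling a cluster shortly after initdb with the default vacuum_freeze_min_age (50 million) and autovacuum_freeze_max_age (200 million). It shows the wrapped limit slipping past the safeLimit guard, and an xmax that precedes OldestXmin without preceding the wrapped FreezeLimit -- the relationship described in the last message above.

    /*
     * Standalone illustration, not backend code.  The input values are
     * assumptions chosen to look like a cluster right after initdb.
     */
    #include <stdio.h>
    #include <stdbool.h>
    #include <inttypes.h>

    typedef uint32_t TransactionId;

    /* Mirrors the normal-XID path of TransactionIdPrecedes(). */
    static bool
    xid_precedes(TransactionId id1, TransactionId id2)
    {
        int32_t diff = (int32_t) (id1 - id2);

        return diff < 0;
    }

    int
    main(void)
    {
        TransactionId oldestXmin = 1000;            /* assumed post-initdb horizon */
        TransactionId nextXid = 1000;               /* assumed next XID */
        TransactionId freezemin = 50000000;         /* vacuum_freeze_min_age default */
        TransactionId freeze_max_age = 200000000;   /* autovacuum_freeze_max_age default */

        /* The same unsigned subtractions as vacuum_set_xid_limits() in 14/15. */
        TransactionId limit = oldestXmin - freezemin;       /* wraps to 4244968296 */
        TransactionId safeLimit = nextXid - freeze_max_age; /* wraps to 4094968296 */

        printf("limit = %" PRIu32 ", safeLimit = %" PRIu32 "\n", limit, safeLimit);

        /* The guard does not fire: the wrapped limit does not precede safeLimit. */
        printf("limit precedes safeLimit?  %d\n", xid_precedes(limit, safeLimit));

        /* Circularly, the wrapped limit sits 50M XIDs behind oldestXmin... */
        printf("limit precedes oldestXmin? %d\n", xid_precedes(limit, oldestXmin));

        /*
         * ...yet an xmax of 800 precedes oldestXmin while not preceding the
         * wrapped limit, so a cutoff based on this limit treats such a tuple
         * differently than a cutoff based on OldestXmin would.
         */
        printf("800 precedes oldestXmin?   %d\n", xid_precedes(800, oldestXmin));
        printf("800 precedes limit?        %d\n", xid_precedes(800, limit));
        return 0;
    }
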
[ { "msg_contents": "Hello hackers,\n\nThis week dodo started failing on the 008_fsm_truncation test sporadically\ndue to pg_ctl timeout. For example, [1] (REL_14_STABLE):\n### Starting node \"standby\"\n# Running: pg_ctl -D \n/media/pi/250gb/proj/bf2/v17/buildroot/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/t_008_fsm_truncation_standby_data/pgdata \n-l \n/media/pi/250gb/proj/bf2/v17/buildroot/REL_14_STABLE/pgsql.build/src/test/recovery/tmp_check/log/008_fsm_truncation_standby.log \n-o --cluster-name=standby start\nwaiting for server to \nstart........................................................................................................................... \nstopped waiting\npg_ctl: server did not start in time\n# pg_ctl start failed; logfile:\n2024-06-19 21:27:30.843 ACST [13244:1] LOG:  starting PostgreSQL 14.12 on armv7l-unknown-linux-gnueabihf, compiled by \ngcc (GCC) 15.0.0 20240617 (experimental), 32-bit\n2024-06-19 21:27:31.768 ACST [13244:2] LOG:  listening on Unix socket \n\"/media/pi/250gb/proj/bf2/v17/buildroot/tmp/vLgcHgvc7O/.s.PGSQL.50013\"\n2024-06-19 21:27:35.055 ACST [13246:1] LOG:  database system was interrupted; last known up at 2024-06-19 21:26:55 ACST\n\nA successful run between two failures [2]:\n2024-06-21 05:17:43.102 ACST [18033:1] LOG:  database system was interrupted; last known up at 2024-06-21 05:17:31 ACST\n2024-06-21 05:18:06.718 ACST [18033:2] LOG:  entering standby mode\n(That is, that start took around 20 seconds.)\n\nWe can also find longer \"successful\" duration at [3]:\n008_fsm_truncation_standby.log:\n2024-06-19 23:11:13.854 ACST [26202:1] LOG:  database system was interrupted; last known up at 2024-06-19 23:11:02 ACST\n2024-06-19 23:12:07.115 ACST [26202:2] LOG:  entering standby mode\n(That start took almost a minute.)\n\nSo it doesn't seem impossible for this operation to last for more than two\nminutes.\n\nThe facts that SyncDataDirectory() is executed between these two messages\nlogged, 008_fsm_truncation is the only test which turns fsync on, and we\nsee no such failures in newer branches (because of a7f417107), make me\nsuspect that dodo is slow on fsync.\n\nI managed to reproduce similar fsync degradation (and reached 40 seconds\nduration of this start operation) on a slow armv7 device with a SD card,\nwhich getting significantly slower after many test runs without executing\nfstrim periodically.\n\nSo maybe fstrim could help dodo too...\n\nOn the other hand, backporting a7f417107 could fix the issue too, but I'm\nafraid we'll still see other tests (027_stream_regress) failing like [4].\nWhen similar failures were observed on Andres Freund's animals, Andres\ncame to conclusion that they were caused by fsync too ([5]).\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-19%2010%3A15%3A08\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-20%2018%3A30%3A53\n[3] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=dodo&dt=2024-06-19%2012%3A30%3A51&stg=recovery-check\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-21%2018%3A31%3A11\n[5] https://www.postgresql.org/message-id/20240321063953.oyfojyq3wbc77xxb%40awork3.anarazel.de\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 22 Jun 2024 12:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "recoveryCheck/008_fsm_truncation is failing on dodo in v14- (due to\n slow fsync?)" }, { "msg_contents": "On Sat, 22 Jun 2024 at 18:30, Alexander 
Lakhin <[email protected]> wrote:\n\n> So it doesn't seem impossible for this operation to last for more than two\n> minutes.\n>\n> The facts that SyncDataDirectory() is executed between these two messages\n> logged, 008_fsm_truncation is the only test which turns fsync on, and we\n> see no such failures in newer branches (because of a7f417107), make me\n> suspect that dodo is slow on fsync.\n>\n\n\nNot sure if it helps but I can confirm that dodo is used for multiple tasks\nand that\nit is using a (slow) external USB3 disk. Also, while using dodo last week\n(for\nsomething unrelated), I noticed iotop at ~30MB/s usage & 1-min CPU around\n~7.\n\nRight now (while dodo's idle), via dd I see ~30MB/s is pretty much the max:\n\npi@pi4:/media/pi/250gb $ dd if=/dev/zero of=./test count=1024 oflag=direct\nbs=128k\n1024+0 records in\n1024+0 records out\n134217728 bytes (134 MB, 128 MiB) copied, 4.51225 s, 29.7 MB/s\n\npi@pi4:/media/pi/250gb $ dd if=/dev/zero of=./test count=1024 oflag=dsync\nbs=128k\n1024+0 records in\n1024+0 records out\n134217728 bytes (134 MB, 128 MiB) copied, 24.4916 s, 5.5 MB/s\n\n-\nrobins\n\nOn Sat, 22 Jun 2024 at 18:30, Alexander Lakhin <[email protected]> wrote:So it doesn't seem impossible for this operation to last for more than two\nminutes.\n\nThe facts that SyncDataDirectory() is executed between these two messages\nlogged, 008_fsm_truncation is the only test which turns fsync on, and we\nsee no such failures in newer branches (because of a7f417107), make me\nsuspect that dodo is slow on fsync.Not sure if it helps but I can confirm that dodo is used for multiple tasks and thatit is using a (slow) external USB3 disk. Also, while using dodo last week (forsomething unrelated), I noticed iotop at ~30MB/s usage & 1-min CPU around ~7.Right now (while dodo's idle), via dd I see ~30MB/s is pretty much the max:pi@pi4:/media/pi/250gb $ dd if=/dev/zero of=./test count=1024 oflag=direct bs=128k1024+0 records in1024+0 records out134217728 bytes (134 MB, 128 MiB) copied, 4.51225 s, 29.7 MB/spi@pi4:/media/pi/250gb $ dd if=/dev/zero of=./test count=1024 oflag=dsync bs=128k1024+0 records in1024+0 records out134217728 bytes (134 MB, 128 MiB) copied, 24.4916 s, 5.5 MB/s-robins", "msg_date": "Sat, 22 Jun 2024 21:22:51 +0930", "msg_from": "Robins Tharakan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: recoveryCheck/008_fsm_truncation is failing on dodo in v14- (due\n to slow fsync?)" }, { "msg_contents": "22.06.2024 12:00, Alexander Lakhin wrote:\n> On the other hand, backporting a7f417107 could fix the issue too, but I'm\n> afraid we'll still see other tests (027_stream_regress) failing like [4].\n> When similar failures were observed on Andres Freund's animals, Andres\n> came to conclusion that they were caused by fsync too ([5]).\n>\n\nIt seems to me that another dodo failure [1] has the same cause:\nt/001_emergency_vacuum.pl .. ok\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 2.\nt/002_limits.pl ............\nDubious, test returned 29 (wstat 7424, 0x1d00)\nAll 2 subtests passed\nt/003_wraparounds.pl ....... 
ok\n\nTest Summary Report\n-------------------\nt/002_limits.pl          (Wstat: 7424 Tests: 2 Failed: 0)\n   Non-zero exit status: 29\n   Parse errors: No plan found in TAP output\nFiles=3, Tests=10, 4235 wallclock secs ( 0.10 usr  0.13 sys + 18.05 cusr 12.76 csys = 31.04 CPU)\nResult: FAIL\n\nUnfortunately, the buildfarm log doesn't contain regress_log_002_limits,\nbut I managed to reproduce the failure on that my device, when it's storage\nas slow as:\n$ dd if=/dev/zero of=./test count=1024 oflag=dsync bs=128k\n1024+0 records in\n1024+0 records out\n134217728 bytes (134 MB, 128 MiB) copied, 33.9446 s, 4.0 MB/s\n\nThe test fails as below:\n[15:36:04.253](729.754s) ok 1 - warn-limit reached\n#### Begin standard error\npsql:<stdin>:1: WARNING:  database \"postgres\" must be vacuumed within 37483631 transactions\nHINT:  To avoid XID assignment failures, execute a database-wide VACUUM in that database.\nYou might also need to commit or roll back old prepared transactions, or drop stale replication slots.\n#### End standard error\n[15:36:45.220](40.968s) ok 2 - stop-limit\n[15:36:45.222](0.002s) # issuing query via background psql: COMMIT\nIPC::Run: timeout on timer #1 at /usr/share/perl5/IPC/Run.pm line 2944.\n\nIt looks like this bump (coined at [2]) is not enough for machines that are\nthat slow:\n# Bump the query timeout to avoid false negatives on slow test systems.\nmy $psql_timeout_secs = 4 * $PostgreSQL::Test::Utils::timeout_default;\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-20%2007%3A18%3A46\n[2] https://www.postgresql.org/message-id/CAD21AoBKBVkXyEwkApSUqN98CuOWw%3DYQdbkeE6gGJ0zH7z-TBw%40mail.gmail.com\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 23 Jun 2024 16:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: recoveryCheck/008_fsm_truncation is failing on dodo in v14- (due\n to slow fsync?)" }, { "msg_contents": "On Sun, 23 Jun 2024 at 22:30, Alexander Lakhin <[email protected]> wrote:\n\n\n> Unfortunately, the buildfarm log doesn't contain regress_log_002_limits,\n> but I managed to reproduce the failure on that my device, when it's storage\n> as slow as:\n> $ dd if=/dev/zero of=./test count=1024 oflag=dsync bs=128k\n> 1024+0 records in\n> 1024+0 records out\n> 134217728 bytes (134 MB, 128 MiB) copied, 33.9446 s, 4.0 MB/s\n>\n>\nThe past ~1 week, I tried to space out all other tasks on the machine, so\nas to ensure\nthat 1-min CPU is mostly <2 (and thus not many things hammering the disk)\nand with\nthat I see 0 failures these past few days. 
This isn't conclusive by any\nmeans, but it\ndoes seem that reducing IO contention has helped remove the errors, like\nwhat\nAlexander suspects / repros here.\n\nJust a note, that I've reverted some of those recent changes now, and so if\nthe theory\nholds true, I wouldn't be surprised if some of these errors restarted on\ndodo.\n\n-\nrobins\n\nOn Sun, 23 Jun 2024 at 22:30, Alexander Lakhin <[email protected]> wrote: Unfortunately, the buildfarm log doesn't contain regress_log_002_limits,\nbut I managed to reproduce the failure on that my device, when it's storage\nas slow as:\n$ dd if=/dev/zero of=./test count=1024 oflag=dsync bs=128k\n1024+0 records in\n1024+0 records out\n134217728 bytes (134 MB, 128 MiB) copied, 33.9446 s, 4.0 MB/s\nThe past ~1 week, I tried to space out all other tasks on the machine, so as to ensurethat 1-min CPU is mostly <2 (and thus not many things hammering the disk) and withthat I see 0 failures these past few days. This isn't conclusive by any means, but itdoes seem that reducing IO contention has helped remove the errors, like what Alexander suspects / repros here.Just a note, that I've reverted some of those recent changes now, and so if the theoryholds true, I wouldn't be surprised if some of these errors restarted on dodo.-robins", "msg_date": "Fri, 28 Jun 2024 19:50:08 +0930", "msg_from": "Robins Tharakan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: recoveryCheck/008_fsm_truncation is failing on dodo in v14- (due\n to slow fsync?)" }, { "msg_contents": "Hello Robins,\n\n28.06.2024 13:20, Robins Tharakan wrote:\n> The past ~1 week, I tried to space out all other tasks on the machine, so as to ensure\n> that 1-min CPU is mostly <2 (and thus not many things hammering the disk) and with\n> that I see 0 failures these past few days. This isn't conclusive by any means, but it\n> does seem that reducing IO contention has helped remove the errors, like what\n> Alexander suspects / repros here.\n>\n> Just a note, that I've reverted some of those recent changes now, and so if the theory\n> holds true, I wouldn't be surprised if some of these errors restarted on dodo.\n\nLooking back at the test failures, I can see errors really reappeared\njust after your revert (at 2024-06-28), so that theory proved true,\nbut I see none of those since 2024-07-02. Does it mean that you changed\nsomething on dodo/fixed that performance issue?\n\nCould you please describe how you resolved this issue, just for the record?\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2017%3A00%3A28\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2017%3A10%3A12\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-01%2012%3A10%3A12\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-01%2013%3A01%3A00\n[5] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-02%2005%3A00%3A36\n[6] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-02%2018%3A00%3A15\n\nBest regards,\nAlexander\n\n\n\n\n\nHello Robins,\n\n 28.06.2024 13:20, Robins Tharakan wrote:\n\n\n\nThe past ~1 week, I tried to space out all other\n tasks on the machine, so as to ensure\n \nthat 1-min CPU is mostly <2 (and thus not many things\n hammering the disk) and with\nthat I see 0 failures these past few days. 
This isn't\n conclusive by any means, but it\ndoes seem that reducing IO contention has helped remove\n the errors, like what \nAlexander suspects / repros here.\n\n\nJust a note, that I've reverted some of those recent\n changes now, and so if the theory\nholds true, I wouldn't be surprised if some of these\n errors restarted on dodo.\n\n\n\n\n Looking back at the test failures, I can see errors really\n reappeared\n just after your revert (at 2024-06-28), so that theory proved true,\n but I see none of those since 2024-07-02. Does it mean that you\n changed\n something on dodo/fixed that performance issue?\n\n Could you please describe how you resolved this issue, just for the\n record?\n\n [1]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2017%3A00%3A28\n [2]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-06-28%2017%3A10%3A12\n [3]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-01%2012%3A10%3A12\n [4]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-01%2013%3A01%3A00\n [5]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-02%2005%3A00%3A36\n [6]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-07-02%2018%3A00%3A15\n\n Best regards,\n Alexander", "msg_date": "Fri, 26 Jul 2024 17:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: recoveryCheck/008_fsm_truncation is failing on dodo in v14- (due\n to slow fsync?)" } ]
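
A more direct way to test the "slow fsync" suspicion on a given volume than bulk dd throughput is to time individual fsync calls, since per-file fsync latency is what a startup data-directory sync pays for; pg_test_fsync, shipped with PostgreSQL, gives a more detailed picture of synchronous-write performance. The rough POSIX sketch below (not PostgreSQL code; the file name, block size and block count are arbitrary choices) writes 8kB blocks with an fsync after each one and reports the average latency. On storage like the one described above, tens of milliseconds or more per fsync would be consistent with SyncDataDirectory() taking minutes over a large data directory.

    /* Rough fsync-latency probe; assumes a POSIX system. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    int
    main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : "fsync_probe.tmp";
        const int   nblocks = 256;
        char        block[8192];
        struct timespec start, end;
        double      elapsed;
        int         fd;

        memset(block, 0, sizeof(block));

        fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (fd < 0)
        {
            perror("open");
            return 1;
        }

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < nblocks; i++)
        {
            if (write(fd, block, sizeof(block)) != (ssize_t) sizeof(block) ||
                fsync(fd) != 0)
            {
                perror("write/fsync");
                close(fd);
                return 1;
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        elapsed = (end.tv_sec - start.tv_sec) +
                  (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%d x 8kB write+fsync in %.2f s (%.1f ms per fsync, %.2f MB/s)\n",
               nblocks, elapsed, 1000.0 * elapsed / nblocks,
               nblocks * 8.0 / 1024.0 / elapsed);

        close(fd);
        unlink(path);
        return 0;
    }
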
[ { "msg_contents": "Hackers,\n\nThe treatment of timestamptz (and timetz) values with offsets that include seconds seems a bit inconsistent. One can create such timestamps through the input function:\n\ndavid=# select '2024-06-22T12:35:00+02:30:15'::timestamptz;\n timestamptz \n------------------------\n 2024-06-22 10:04:45+00\n\nBut the offset seconds are dropped (or rounded away?) by to_timestamp()’s `OF` and `TZ` formats[2]:\n\ndavid=# select to_timestamp('2024-06-03 12:35:00+02:30:15', 'YYYY-MM-DD HH24:MI:SSOF');\n to_timestamp \n------------------------\n 2024-06-03 10:05:00+00\n\ndavid=# select to_timestamp('2024-06-03 12:35:00+02:30:15', 'YYYY-MM-DD HH24:MI:SSTZ');\n to_timestamp \n------------------------\n 2024-06-03 02:05:00-08\n\nThe corresponding jsonpath methods don’t like offsets with seconds *at all*:\n\ndavid=# select jsonb_path_query('\"2024-06-03 12:35:00+02:30:15\"', '$.datetime(\"YYYY-MM-DD HH24:MI:SSOF\")');\nERROR: trailing characters remain in input string after datetime format\n\ndavid=# select jsonb_path_query('\"2024-06-03 12:35:00+02:30:15\"', '$.timestamp_tz()');\nERROR: timestamp_tz format is not recognized: \"2024-06-03 12:35:00+02:30:15\"\n\nI see from the source[1] that offsets between plus or minus 15:59:59 are allowed; should the `OF` and `TZ formats be able to parse them? Or perhaps there should be a `TZS` format to complement `TZH` and `TZM`?\n\nBest,\n\nDavid\n\n[1] https://github.com/postgres/postgres/blob/70a845c/src/include/datatype/timestamp.h#L136-L142\n[2]: https://www.postgresql.org/docs/16/functions-formatting.html\n\n\n\n", "msg_date": "Sat, 22 Jun 2024 12:25:29 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Inconsistent Parsing of Offsets with Seconds" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> The treatment of timestamptz (and timetz) values with offsets that include seconds seems a bit inconsistent.\n\nIt's hard to get excited about this. Per the IANA TZ data,\nnowhere in the world has used fractional-minute UT offsets\nsince 1972:\n\n# In 1972 Liberia was the last country to switch from a UT offset\n# that was not a multiple of 15 or 20 minutes.\n\nand they were twenty years later than the next-to-last place (although\nIANA will steadfastly deny reliability for their TZ data before 1970).\nSo timestamps like this simply don't exist in the wild.\n\n> The corresponding jsonpath methods don’t like offsets with seconds *at all*:\n\nPerhaps that should be fixed, but it's pretty low-priority IMO.\nI doubt there is any standard saying that JSON timestamps need\nto be able to include that.\n\n> I see from the source[1] that offsets between plus or minus 15:59:59\n> are allowed; should the `OF` and `TZ formats be able to parse them?\n\nI'd vote no. to_date/to_char already have enough trouble with format\nstrings being squishier than one might expect.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 22 Jun 2024 13:15:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistent Parsing of Offsets with Seconds" }, { "msg_contents": "On Jun 22, 2024, at 13:15, Tom Lane <[email protected]> wrote:\n\n> It's hard to get excited about this.\n\nI freely admit I’m getting into the weeds here. 
:-)\n\n>> The corresponding jsonpath methods don’t like offsets with seconds *at all*:\n> \n> Perhaps that should be fixed, but it's pretty low-priority IMO.\n> I doubt there is any standard saying that JSON timestamps need\n> to be able to include that.\n> \n>> I see from the source[1] that offsets between plus or minus 15:59:59\n>> are allowed; should the `OF` and `TZ formats be able to parse them?\n> \n> I'd vote no. to_date/to_char already have enough trouble with format\n> strings being squishier than one might expect.\n\nI believe the former issue is caused by the latter: The jsonpath implementation uses the formatting strings to parse the timestamps[1], and since there is no formatting to support offsets with seconds, it doesn’t work at all in JSON timestamp parsing.\n\n[1]: https://github.com/postgres/postgres/blob/70a845c/src/backend/utils/adt/jsonpath_exec.c#L2420-L2442\n\nSo if we were to fix the parsing of offsets in jsonpath, we’d either have to change the parsing code there or augment the to_timestamp() formats and use them.\n\nTotally agree not a priority; happy to just pretend offsets with seconds don’t exist in any practical sense.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Sat, 22 Jun 2024 14:10:00 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inconsistent Parsing of Offsets with Seconds" }, { "msg_contents": "On Jun 22, 2024, at 14:10, David E. Wheeler <[email protected]> wrote:\n\n> I believe the former issue is caused by the latter: The jsonpath implementation uses the formatting strings to parse the timestamps[1], and since there is no formatting to support offsets with seconds, it doesn’t work at all in JSON timestamp parsing.\n> \n> [1]: https://github.com/postgres/postgres/blob/70a845c/src/backend/utils/adt/jsonpath_exec.c#L2420-L2442\n\nA side-effect of this implementation of date/time parsing using the to_char templates is that only time zone offsets and abbreviations are supported. I find the behavior a little surprising TBH:\n\ndavid=# select to_timestamp('2024-06-03 12:35:00America/New_York', 'YYYY-MM-DD HH24:MI:SSTZ');\nERROR: invalid value \"America/New_York\" for \"TZ\"\nDETAIL: Time zone abbreviation is not recognized.\n\nUnless the SQL standard only supports offsets and abbreviations, I wonder if we’d be better off updating the above parsing code to also try the various date/time input functions, as well as the custom formats that *are* defined by the standard.\n\nBest,\n\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 08:08:14 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inconsistent Parsing of Offsets with Seconds" } ]
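
As a side note on what accepting seconds in an offset would actually take, the sketch below is a deliberately generic illustration in plain C -- it is not the backend's datetime-parsing code, and the plus-or-minus 15:59:59 bound is simply the range quoted from timestamp.h at the top of this thread. It parses an offset of the form +HH, +HH:MM or +HH:MM:SS (or with a leading minus) into signed seconds, which is the sort of handling a hypothetical TZS format field, or an extended OF, would need to do.

    /* Generic illustration only; not PostgreSQL source. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <ctype.h>

    static bool
    parse_utc_offset(const char *str, int *offset_secs)
    {
        int     sign;
        int     fields[3] = {0, 0, 0};  /* hours, minutes, seconds */

        if (*str == '+')
            sign = 1;
        else if (*str == '-')
            sign = -1;
        else
            return false;
        str++;

        for (int i = 0; i < 3; i++)
        {
            if (!isdigit((unsigned char) str[0]) ||
                !isdigit((unsigned char) str[1]))
                return false;
            fields[i] = (str[0] - '0') * 10 + (str[1] - '0');
            str += 2;
            if (*str == '\0')
                break;
            if (*str != ':' || i == 2)
                return false;
            str++;
        }

        if (fields[0] > 15 || fields[1] > 59 || fields[2] > 59)
            return false;

        *offset_secs = sign * (fields[0] * 3600 + fields[1] * 60 + fields[2]);
        return true;
    }

    int
    main(void)
    {
        const char *tests[] = {"+02:30:15", "-08", "+05:45", "+16:00", "+02:30:"};

        for (int i = 0; i < 5; i++)
        {
            int     secs = 0;

            if (parse_utc_offset(tests[i], &secs))
                printf("%-10s -> %d seconds\n", tests[i], secs);
            else
                printf("%-10s -> rejected\n", tests[i]);
        }
        return 0;
    }
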
[ { "msg_contents": "I was unable to parse a comment in src/backend/parser/gram.y around line 11364:\n\n/*\n * As func_expr but does not accept WINDOW functions directly (they\n * can still be contained in arguments for functions etc.)\n * Use this when window expressions are not allowed, so to disambiguate\n * the grammar. (e.g. in CREATE INDEX)\n */\n\nMaybe \"but\" is unnecessary in the first sentence in the comment?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sun, 23 Jun 2024 13:01:54 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Unable parse a comment in gram.y" }, { "msg_contents": "On Sat, Jun 22, 2024 at 9:02 PM Tatsuo Ishii <[email protected]> wrote:\n\n> I was unable to parse a comment in src/backend/parser/gram.y around line\n> 11364:\n>\n> /*\n> * As func_expr but does not accept WINDOW functions directly (they\n> * can still be contained in arguments for functions etc.)\n> * Use this when window expressions are not allowed, so to disambiguate\n> * the grammar. (e.g. in CREATE INDEX)\n> */\n>\n> Maybe \"but\" is unnecessary in the first sentence in the comment?\n>\n>\nThe \"but\" is required, add a comma before it. It could also be written a\nbit more verbosely:\n\nfunc_expr_windowless is the same as func_expr aside from not accepting\nwindow functions directly ...\n\nDavid J.\n\nOn Sat, Jun 22, 2024 at 9:02 PM Tatsuo Ishii <[email protected]> wrote:I was unable to parse a comment in src/backend/parser/gram.y around line 11364:\n\n/*\n * As func_expr but does not accept WINDOW functions directly (they\n * can still be contained in arguments for functions etc.)\n * Use this when window expressions are not allowed, so to disambiguate\n * the grammar. (e.g. in CREATE INDEX)\n */\n\nMaybe \"but\" is unnecessary in the first sentence in the comment?The \"but\" is required, add a comma before it.  It could also be written a bit more verbosely:func_expr_windowless is the same as func_expr aside from not accepting window functions directly ...David J.", "msg_date": "Sat, 22 Jun 2024 21:13:37 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unable parse a comment in gram.y" }, { "msg_contents": "> On Sat, Jun 22, 2024 at 9:02 PM Tatsuo Ishii <[email protected]> wrote:\r\n> \r\n>> I was unable to parse a comment in src/backend/parser/gram.y around line\r\n>> 11364:\r\n>>\r\n>> /*\r\n>> * As func_expr but does not accept WINDOW functions directly (they\r\n>> * can still be contained in arguments for functions etc.)\r\n>> * Use this when window expressions are not allowed, so to disambiguate\r\n>> * the grammar. (e.g. in CREATE INDEX)\r\n>> */\r\n>>\r\n>> Maybe \"but\" is unnecessary in the first sentence in the comment?\r\n>>\r\n>>\r\n> The \"but\" is required, add a comma before it. It could also be written a\r\n> bit more verbosely:\r\n> \r\n> func_expr_windowless is the same as func_expr aside from not accepting\r\n> window functions directly ...\r\n\r\nOh, I see.\r\n--\r\nTatsuo Ishii\r\nSRA OSS LLC\r\nEnglish: http://www.sraoss.co.jp/index_en/\r\nJapanese:http://www.sraoss.co.jp\r\n", "msg_date": "Sun, 23 Jun 2024 13:22:10 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unable parse a comment in gram.y" }, { "msg_contents": "\"David G. 
Johnston\" <[email protected]> writes:\n> On Sat, Jun 22, 2024 at 9:02 PM Tatsuo Ishii <[email protected]> wrote:\n>> I was unable to parse a comment in src/backend/parser/gram.y around line\n>> 11364:\n>> \n>> * As func_expr but does not accept WINDOW functions directly (they\n>> * can still be contained in arguments for functions etc.)\n\n> The \"but\" is required, add a comma before it. It could also be written a\n> bit more verbosely:\n\nPerhaps s/As func_expr/Like func_expr/ would be less confusing?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Jun 2024 00:41:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unable parse a comment in gram.y" }, { "msg_contents": ">>> * As func_expr but does not accept WINDOW functions directly (they\n>>> * can still be contained in arguments for functions etc.)\n> \n>> The \"but\" is required, add a comma before it. It could also be written a\n>> bit more verbosely:\n> \n> Perhaps s/As func_expr/Like func_expr/ would be less confusing?\n\n+1. It's easier to understand at least for me.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sun, 23 Jun 2024 14:11:35 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unable parse a comment in gram.y" }, { "msg_contents": "On Saturday, June 22, 2024, Tatsuo Ishii <[email protected]> wrote:\n\n> >>> * As func_expr but does not accept WINDOW functions directly (they\n> >>> * can still be contained in arguments for functions etc.)\n> >\n> >> The \"but\" is required, add a comma before it. It could also be written\n> a\n> >> bit more verbosely:\n> >\n> > Perhaps s/As func_expr/Like func_expr/ would be less confusing?\n>\n> +1. It's easier to understand at least for me.\n>\n>\n+1\n\nDavid J.\n\nOn Saturday, June 22, 2024, Tatsuo Ishii <[email protected]> wrote:>>> * As func_expr but does not accept WINDOW functions directly (they\n>>> * can still be contained in arguments for functions etc.)\n> \n>> The \"but\" is required, add a comma before it.  It could also be written a\n>> bit more verbosely:\n> \n> Perhaps s/As func_expr/Like func_expr/ would be less confusing?\n\n+1. It's easier to understand at least for me.\n+1David J.", "msg_date": "Sat, 22 Jun 2024 23:19:52 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unable parse a comment in gram.y" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Saturday, June 22, 2024, Tatsuo Ishii <[email protected]> wrote:\n>>> Perhaps s/As func_expr/Like func_expr/ would be less confusing?\n\n>> +1. It's easier to understand at least for me.\n\n> +1\n\nOK. I looked through the rest of src/backend/parser/ and couldn't\nfind any other similar wording. There's plenty of \"As with ...\"\nand \"As in ...\", but at least to me those don't seem confusing.\nI'll plan to push the attached after the release freeze lifts.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 23 Jun 2024 12:59:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unable parse a comment in gram.y" }, { "msg_contents": ">>> +1. It's easier to understand at least for me.\n> \n>> +1\n> \n> OK. I looked through the rest of src/backend/parser/ and couldn't\n> find any other similar wording. 
There's plenty of \"As with ...\"\n> and \"As in ...\", but at least to me those don't seem confusing.\n> I'll plan to push the attached after the release freeze lifts.\n\nExcellent!\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 24 Jun 2024 09:23:37 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unable parse a comment in gram.y" } ]
[ { "msg_contents": "Hi.\n\nIn src/include/access/xlogbackup.h, the field *name*\nhas one byte extra to store null-termination.\n\nBut, in the function *do_pg_backup_start*,\nI think that is a mistake in the line (8736):\n\nmemcpy(state->name, backupidstr, strlen(backupidstr));\n\nmemcpy with strlen does not copy the whole string.\nstrlen returns the exact length of the string, without\nthe null-termination.\n\nSo, I think this can result in errors,\nlike in the function *build_backup_content*\n(src/backend/access/transam/xlogbackup.c)\nWhere *appendStringInfo* expects a string with null-termination.\n\nappendStringInfo(result, \"LABEL: %s\\n\", state->name);\n\nTo fix, copy strlen size plus one byte, to include the null-termination.\n\nTrivial patch attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Sun, 23 Jun 2024 20:51:26 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "On Sun, 23 Jun 2024 at 20:51 Ranier Vilela <[email protected]> wrote:\n\n> Hi.\n>\n> In src/include/access/xlogbackup.h, the field *name*\n> has one byte extra to store null-termination.\n>\n> But, in the function *do_pg_backup_start*,\n> I think that is a mistake in the line (8736):\n>\n> memcpy(state->name, backupidstr, strlen(backupidstr));\n>\n> memcpy with strlen does not copy the whole string.\n> strlen returns the exact length of the string, without\n> the null-termination.\n>\n> So, I think this can result in errors,\n> like in the function *build_backup_content*\n> (src/backend/access/transam/xlogbackup.c)\n> Where *appendStringInfo* expects a string with null-termination.\n>\n> appendStringInfo(result, \"LABEL: %s\\n\", state->name);\n>\n> To fix, copy strlen size plus one byte, to include the null-termination.\n>\n\n>\nDoesn’t “sizeof” solve the problem? It take in account the null-termination\ncharacter.\nFabrízio de Royes Mello\n\nOn Sun, 23 Jun 2024 at 20:51 Ranier Vilela <[email protected]> wrote:Hi.In src/include/access/xlogbackup.h, the field *name*has one byte extra to store null-termination.But, in the function *do_pg_backup_start*,I think that is a mistake in the line (8736):memcpy(state->name, backupidstr, strlen(backupidstr));memcpy with strlen does not copy the whole string.strlen returns the exact length of the string, withoutthe null-termination.So, I think this can result in errors,like in the function *build_backup_content* (src/backend/access/transam/xlogbackup.c)Where *appendStringInfo* expects a string with null-termination.\tappendStringInfo(result, \"LABEL: %s\\n\", state->name);To fix, copy strlen size plus one byte, to include the null-termination.\nDoesn’t “sizeof” solve the problem? It take in account the null-termination character.Fabrízio de Royes Mello", "msg_date": "Sun, 23 Jun 2024 21:08:47 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "On Sun, Jun 23, 2024 at 09:08:47PM -0300, Fabrízio de Royes Mello wrote:\n> Doesn’t “sizeof” solve the problem? 
It take in account the null-termination\n> character.\n\nThe size of BackupState->name is fixed with MAXPGPATH + 1, so it would\nbe a better practice to use strlcpy() with sizeof(name) anyway?\n--\nMichael", "msg_date": "Mon, 24 Jun 2024 09:23:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em dom., 23 de jun. de 2024 às 21:08, Fabrízio de Royes Mello <\[email protected]> escreveu:\n\n>\n> On Sun, 23 Jun 2024 at 20:51 Ranier Vilela <[email protected]> wrote:\n>\n>> Hi.\n>>\n>> In src/include/access/xlogbackup.h, the field *name*\n>> has one byte extra to store null-termination.\n>>\n>> But, in the function *do_pg_backup_start*,\n>> I think that is a mistake in the line (8736):\n>>\n>> memcpy(state->name, backupidstr, strlen(backupidstr));\n>>\n>> memcpy with strlen does not copy the whole string.\n>> strlen returns the exact length of the string, without\n>> the null-termination.\n>>\n>> So, I think this can result in errors,\n>> like in the function *build_backup_content*\n>> (src/backend/access/transam/xlogbackup.c)\n>> Where *appendStringInfo* expects a string with null-termination.\n>>\n>> appendStringInfo(result, \"LABEL: %s\\n\", state->name);\n>>\n>> To fix, copy strlen size plus one byte, to include the null-termination.\n>>\n>\n>>\n> Doesn’t “sizeof” solve the problem? It take in account the\n> null-termination character.\n\nsizeof is is preferable when dealing with constants such as:\nmemcpy(name, \"string test1\", sizeof( \"string test1\");\n\nUsing sizeof in this case will always copy MAXPGPATH + 1.\nModern compilers will optimize strlen,\ncopying fewer bytes.\n\nbest regards,\nRanier Vilela\n\nEm dom., 23 de jun. de 2024 às 21:08, Fabrízio de Royes Mello <[email protected]> escreveu:On Sun, 23 Jun 2024 at 20:51 Ranier Vilela <[email protected]> wrote:Hi.In src/include/access/xlogbackup.h, the field *name*has one byte extra to store null-termination.But, in the function *do_pg_backup_start*,I think that is a mistake in the line (8736):memcpy(state->name, backupidstr, strlen(backupidstr));memcpy with strlen does not copy the whole string.strlen returns the exact length of the string, withoutthe null-termination.So, I think this can result in errors,like in the function *build_backup_content* (src/backend/access/transam/xlogbackup.c)Where *appendStringInfo* expects a string with null-termination.\tappendStringInfo(result, \"LABEL: %s\\n\", state->name);To fix, copy strlen size plus one byte, to include the null-termination.\nDoesn’t “sizeof” solve the problem? It take in account the null-termination character.sizeof is is preferable when dealing with constants such as:memcpy(name, \"string test1\", sizeof(\n\"string test1\");Using sizeof in this case will always copy MAXPGPATH + 1.Modern compilers will optimize strlen,copying fewer bytes.best regards,Ranier Vilela", "msg_date": "Sun, 23 Jun 2024 21:31:51 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em dom., 23 de jun. de 2024 às 21:24, Michael Paquier <[email protected]>\nescreveu:\n\n> On Sun, Jun 23, 2024 at 09:08:47PM -0300, Fabrízio de Royes Mello wrote:\n> > Doesn’t “sizeof” solve the problem? 
It take in account the\n> null-termination\n> > character.\n>\n> The size of BackupState->name is fixed with MAXPGPATH + 1, so it would\n> be a better practice to use strlcpy() with sizeof(name) anyway?\n>\nIt's not critical code, so I think it's ok to use strlen, even because the\nresult of strlen will already be available using modern compilers.\n\nSo, I think it's ok to use memcpy with strlen + 1.\n\nbest regards,\nRanier Vilela\n\nEm dom., 23 de jun. de 2024 às 21:24, Michael Paquier <[email protected]> escreveu:On Sun, Jun 23, 2024 at 09:08:47PM -0300, Fabrízio de Royes Mello wrote:\n> Doesn’t “sizeof” solve the problem? It take in account the null-termination\n> character.\n\nThe size of BackupState->name is fixed with MAXPGPATH + 1, so it would\nbe a better practice to use strlcpy() with sizeof(name) anyway?It's not critical code, so I think it's ok to use strlen, even because the result of strlen will already be available using modern compilers.So, I think it's ok to use memcpy with strlen + 1. best regards,Ranier Vilela", "msg_date": "Sun, 23 Jun 2024 21:34:45 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "On Sun, Jun 23, 2024 at 09:34:45PM -0300, Ranier Vilela wrote:\n> It's not critical code, so I think it's ok to use strlen, even because the\n> result of strlen will already be available using modern compilers.\n> \n> So, I think it's ok to use memcpy with strlen + 1.\n\nIt seems to me that there is a pretty good argument to just use\nstrlcpy() for the same reason as the one you cite: this is not a\nperformance-critical code, and that's just safer.\n--\nMichael", "msg_date": "Mon, 24 Jun 2024 09:53:52 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em dom., 23 de jun. de 2024 às 21:54, Michael Paquier <[email protected]>\nescreveu:\n\n> On Sun, Jun 23, 2024 at 09:34:45PM -0300, Ranier Vilela wrote:\n> > It's not critical code, so I think it's ok to use strlen, even because\n> the\n> > result of strlen will already be available using modern compilers.\n> >\n> > So, I think it's ok to use memcpy with strlen + 1.\n>\n> It seems to me that there is a pretty good argument to just use\n> strlcpy() for the same reason as the one you cite: this is not a\n> performance-critical code, and that's just safer.\n>\nYeah, I'm fine with strlcpy. I'm not against it.\n\nNew version, attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Sun, 23 Jun 2024 22:05:42 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em dom., 23 de jun. de 2024 às 22:05, Ranier Vilela <[email protected]>\nescreveu:\n\n> Em dom., 23 de jun. 
de 2024 às 21:54, Michael Paquier <[email protected]>\n> escreveu:\n>\n>> On Sun, Jun 23, 2024 at 09:34:45PM -0300, Ranier Vilela wrote:\n>> > It's not critical code, so I think it's ok to use strlen, even because\n>> the\n>> > result of strlen will already be available using modern compilers.\n>> >\n>> > So, I think it's ok to use memcpy with strlen + 1.\n>>\n>> It seems to me that there is a pretty good argument to just use\n>> strlcpy() for the same reason as the one you cite: this is not a\n>> performance-critical code, and that's just safer.\n>>\n> Yeah, I'm fine with strlcpy. I'm not against it.\n>\nPerhaps, like the v2?\n\nEither v1 or v2, to me, looks good.\n\nbest regards,\nRanier Vilela\n\n>", "msg_date": "Sun, 23 Jun 2024 22:14:01 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em dom., 23 de jun. de 2024 às 22:14, Ranier Vilela <[email protected]>\nescreveu:\n\n> Em dom., 23 de jun. de 2024 às 22:05, Ranier Vilela <[email protected]>\n> escreveu:\n>\n>> Em dom., 23 de jun. de 2024 às 21:54, Michael Paquier <\n>> [email protected]> escreveu:\n>>\n>>> On Sun, Jun 23, 2024 at 09:34:45PM -0300, Ranier Vilela wrote:\n>>> > It's not critical code, so I think it's ok to use strlen, even because\n>>> the\n>>> > result of strlen will already be available using modern compilers.\n>>> >\n>>> > So, I think it's ok to use memcpy with strlen + 1.\n>>>\n>>> It seems to me that there is a pretty good argument to just use\n>>> strlcpy() for the same reason as the one you cite: this is not a\n>>> performance-critical code, and that's just safer.\n>>>\n>> Yeah, I'm fine with strlcpy. I'm not against it.\n>>\n> Perhaps, like the v2?\n>\n> Either v1 or v2, to me, looks good.\n>\nThinking about, does not make sense the field size MAXPGPATH + 1.\nall other similar fields are just MAXPGPATH.\n\nIf we copy MAXPGPATH + 1, it will also be wrong.\nSo it is necessary to adjust logbackup.h as well.\n\nSo, I think that v3 is ok to fix.\n\nbest regards,\nRanier Vilela\n\n>\n> best regards,\n> Ranier Vilela\n>\n>>", "msg_date": "Sun, 23 Jun 2024 22:34:03 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "On Mon, Jun 24, 2024 at 7:51 AM Ranier Vilela <[email protected]> wrote:\n> In src/include/access/xlogbackup.h, the field *name*\n> has one byte extra to store null-termination.\n>\n> But, in the function *do_pg_backup_start*,\n> I think that is a mistake in the line (8736):\n>\n> memcpy(state->name, backupidstr, strlen(backupidstr));\n>\n> memcpy with strlen does not copy the whole string.\n> strlen returns the exact length of the string, without\n> the null-termination.\n\nI noticed that the two callers of do_pg_backup_start both allocate\nBackupState with palloc0. Can we rely on this to ensure that the\nBackupState.name is initialized with null-termination?\n\nThanks\nRichard\n\n\n", "msg_date": "Mon, 24 Jun 2024 10:56:20 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "On Sun, 23 Jun 2024 22:34:03 -0300\nRanier Vilela <[email protected]> wrote:\n\n> Em dom., 23 de jun. de 2024 às 22:14, Ranier Vilela <[email protected]>\n> escreveu:\n> \n> > Em dom., 23 de jun. 
de 2024 às 22:05, Ranier Vilela <[email protected]>\n> > escreveu:\n> >\n> >> Em dom., 23 de jun. de 2024 às 21:54, Michael Paquier <\n> >> [email protected]> escreveu:\n> >>\n> >>> On Sun, Jun 23, 2024 at 09:34:45PM -0300, Ranier Vilela wrote:\n> >>> > It's not critical code, so I think it's ok to use strlen, even because\n> >>> the\n> >>> > result of strlen will already be available using modern compilers.\n> >>> >\n> >>> > So, I think it's ok to use memcpy with strlen + 1.\n> >>>\n> >>> It seems to me that there is a pretty good argument to just use\n> >>> strlcpy() for the same reason as the one you cite: this is not a\n> >>> performance-critical code, and that's just safer.\n> >>>\n> >> Yeah, I'm fine with strlcpy. I'm not against it.\n> >>\n> > Perhaps, like the v2?\n> >\n> > Either v1 or v2, to me, looks good.\n> >\n> Thinking about, does not make sense the field size MAXPGPATH + 1.\n> all other similar fields are just MAXPGPATH.\n> \n> If we copy MAXPGPATH + 1, it will also be wrong.\n> So it is necessary to adjust logbackup.h as well.\n\nI am not sure whether we need to change the size of the field,\nbut if change it, I wonder it is better to modify the following\nmessage from MAXPGPATH to MAXPGPATH -1.\n\n \t\t\t\t errmsg(\"backup label too long (max %d bytes)\",\n \t\t\t\t\t\tMAXPGPATH)));\n\nRegards,\nYugo Nagata\n\n> \n> So, I think that v3 is ok to fix.\n> \n> best regards,\n> Ranier Vilela\n> \n> >\n> > best regards,\n> > Ranier Vilela\n> >\n> >>\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Mon, 24 Jun 2024 12:27:28 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string\n (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em dom., 23 de jun. de 2024 às 23:56, Richard Guo <[email protected]>\nescreveu:\n\n> On Mon, Jun 24, 2024 at 7:51 AM Ranier Vilela <[email protected]> wrote:\n> > In src/include/access/xlogbackup.h, the field *name*\n> > has one byte extra to store null-termination.\n> >\n> > But, in the function *do_pg_backup_start*,\n> > I think that is a mistake in the line (8736):\n> >\n> > memcpy(state->name, backupidstr, strlen(backupidstr));\n> >\n> > memcpy with strlen does not copy the whole string.\n> > strlen returns the exact length of the string, without\n> > the null-termination.\n>\n> I noticed that the two callers of do_pg_backup_start both allocate\n> BackupState with palloc0. Can we rely on this to ensure that the\n> BackupState.name is initialized with null-termination?\n>\nI do not think so.\nIt seems to me the best solution is to use Michael's suggestion, strlcpy +\nsizeof.\n\nCurrently we have this:\nmemcpy(state->name, \"longlongpathexample1\",\nstrlen(\"longlongpathexample1\"));\nprintf(\"%s\\n\", state->name);\nlonglongpathexample1\n\nNext random call:\nmemcpy(state->name, \"longpathexample2\", strlen(\"longpathexample2\"));\nprintf(\"%s\\n\", state->name);\nlongpathexample2ple1\n\nIt's not a good idea to use memcpy with strlen.\n\nbest regards,\nRanier Vilela\n\nEm dom., 23 de jun. 
de 2024 às 23:56, Richard Guo <[email protected]> escreveu:On Mon, Jun 24, 2024 at 7:51 AM Ranier Vilela <[email protected]> wrote:\n> In src/include/access/xlogbackup.h, the field *name*\n> has one byte extra to store null-termination.\n>\n> But, in the function *do_pg_backup_start*,\n> I think that is a mistake in the line (8736):\n>\n> memcpy(state->name, backupidstr, strlen(backupidstr));\n>\n> memcpy with strlen does not copy the whole string.\n> strlen returns the exact length of the string, without\n> the null-termination.\n\nI noticed that the two callers of do_pg_backup_start both allocate\nBackupState with palloc0.  Can we rely on this to ensure that the\nBackupState.name is initialized with null-termination?I do not think so.It seems to me the best solution is to use Michael's suggestion, strlcpy + sizeof.Currently we have this:memcpy(state->name, \"longlongpathexample1\", strlen(\"longlongpathexample1\")); printf(\"%s\\n\", state->name);longlongpathexample1Next random call:\nmemcpy(state->name, \"longpathexample2\", strlen(\"longpathexample2\")); \nprintf(\"%s\\n\", state->name);longpathexample2ple1\n\n\nIt's not a good idea to use memcpy with strlen.best regards,Ranier Vilela", "msg_date": "Mon, 24 Jun 2024 08:25:36 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em seg., 24 de jun. de 2024 às 00:27, Yugo NAGATA <[email protected]>\nescreveu:\n\n> On Sun, 23 Jun 2024 22:34:03 -0300\n> Ranier Vilela <[email protected]> wrote:\n>\n> > Em dom., 23 de jun. de 2024 às 22:14, Ranier Vilela <[email protected]\n> >\n> > escreveu:\n> >\n> > > Em dom., 23 de jun. de 2024 às 22:05, Ranier Vilela <\n> [email protected]>\n> > > escreveu:\n> > >\n> > >> Em dom., 23 de jun. de 2024 às 21:54, Michael Paquier <\n> > >> [email protected]> escreveu:\n> > >>\n> > >>> On Sun, Jun 23, 2024 at 09:34:45PM -0300, Ranier Vilela wrote:\n> > >>> > It's not critical code, so I think it's ok to use strlen, even\n> because\n> > >>> the\n> > >>> > result of strlen will already be available using modern compilers.\n> > >>> >\n> > >>> > So, I think it's ok to use memcpy with strlen + 1.\n> > >>>\n> > >>> It seems to me that there is a pretty good argument to just use\n> > >>> strlcpy() for the same reason as the one you cite: this is not a\n> > >>> performance-critical code, and that's just safer.\n> > >>>\n> > >> Yeah, I'm fine with strlcpy. I'm not against it.\n> > >>\n> > > Perhaps, like the v2?\n> > >\n> > > Either v1 or v2, to me, looks good.\n> > >\n> > Thinking about, does not make sense the field size MAXPGPATH + 1.\n> > all other similar fields are just MAXPGPATH.\n> >\n> > If we copy MAXPGPATH + 1, it will also be wrong.\n> > So it is necessary to adjust logbackup.h as well.\n>\n> I am not sure whether we need to change the size of the field,\n> but if change it, I wonder it is better to modify the following\n> message from MAXPGPATH to MAXPGPATH -1.\n>\n> errmsg(\"backup label too long (max %d\n> bytes)\",\n> MAXPGPATH)));\n>\nOr perhaps, is it better to show the too long label?\n\nerrmsg(\"backup label too long (%s)\",\n backupidstr)));\n\nbest regards,\nRanier Vilela\n\n>\n> >\n> > So, I think that v3 is ok to fix.\n> >\n> > best regards,\n> > Ranier Vilela\n> >\n> > >\n> > > best regards,\n> > > Ranier Vilela\n> > >\n> > >>\n>\n>\n> --\n> Yugo NAGATA <[email protected]>\n>\n\nEm seg., 24 de jun. 
de 2024 às 00:27, Yugo NAGATA <[email protected]> escreveu:On Sun, 23 Jun 2024 22:34:03 -0300\nRanier Vilela <[email protected]> wrote:\n\n> Em dom., 23 de jun. de 2024 às 22:14, Ranier Vilela <[email protected]>\n> escreveu:\n> \n> > Em dom., 23 de jun. de 2024 às 22:05, Ranier Vilela <[email protected]>\n> > escreveu:\n> >\n> >> Em dom., 23 de jun. de 2024 às 21:54, Michael Paquier <\n> >> [email protected]> escreveu:\n> >>\n> >>> On Sun, Jun 23, 2024 at 09:34:45PM -0300, Ranier Vilela wrote:\n> >>> > It's not critical code, so I think it's ok to use strlen, even because\n> >>> the\n> >>> > result of strlen will already be available using modern compilers.\n> >>> >\n> >>> > So, I think it's ok to use memcpy with strlen + 1.\n> >>>\n> >>> It seems to me that there is a pretty good argument to just use\n> >>> strlcpy() for the same reason as the one you cite: this is not a\n> >>> performance-critical code, and that's just safer.\n> >>>\n> >> Yeah, I'm fine with strlcpy. I'm not against it.\n> >>\n> > Perhaps, like the v2?\n> >\n> > Either v1 or v2, to me, looks good.\n> >\n> Thinking about, does not make sense the field size MAXPGPATH + 1.\n> all other similar fields are just MAXPGPATH.\n> \n> If we copy MAXPGPATH + 1, it will also be wrong.\n> So it is necessary to adjust logbackup.h as well.\n\nI am not sure whether we need to change the size of the field,\nbut if change it, I wonder it is better to modify the following\nmessage from MAXPGPATH to MAXPGPATH -1.\n\n                                 errmsg(\"backup label too long (max %d bytes)\",\n                                                MAXPGPATH)));Or perhaps, is it better to show the too long label?\nerrmsg(\"backup label too long (%s)\",\n                                                backupidstr))); best regards,Ranier Vilela\n\n> \n> So, I think that v3 is ok to fix.\n> \n> best regards,\n> Ranier Vilela\n> \n> >\n> > best regards,\n> > Ranier Vilela\n> >\n> >>\n\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Mon, 24 Jun 2024 08:37:26 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "On Mon, 24 Jun 2024 08:37:26 -0300\nRanier Vilela <[email protected]> wrote:\n\n> Em seg., 24 de jun. de 2024 às 00:27, Yugo NAGATA <[email protected]>\n> escreveu:\n> \n> > On Sun, 23 Jun 2024 22:34:03 -0300\n> > Ranier Vilela <[email protected]> wrote:\n> >\n> > > Em dom., 23 de jun. de 2024 às 22:14, Ranier Vilela <[email protected]\n> > >\n> > > escreveu:\n> > >\n> > > > Em dom., 23 de jun. de 2024 às 22:05, Ranier Vilela <\n> > [email protected]>\n> > > > escreveu:\n> > > >\n> > > >> Em dom., 23 de jun. de 2024 às 21:54, Michael Paquier <\n> > > >> [email protected]> escreveu:\n> > > >>\n> > > >>> On Sun, Jun 23, 2024 at 09:34:45PM -0300, Ranier Vilela wrote:\n> > > >>> > It's not critical code, so I think it's ok to use strlen, even\n> > because\n> > > >>> the\n> > > >>> > result of strlen will already be available using modern compilers.\n> > > >>> >\n> > > >>> > So, I think it's ok to use memcpy with strlen + 1.\n> > > >>>\n> > > >>> It seems to me that there is a pretty good argument to just use\n> > > >>> strlcpy() for the same reason as the one you cite: this is not a\n> > > >>> performance-critical code, and that's just safer.\n> > > >>>\n> > > >> Yeah, I'm fine with strlcpy. 
I'm not against it.\n> > > >>\n> > > > Perhaps, like the v2?\n> > > >\n> > > > Either v1 or v2, to me, looks good.\n> > > >\n> > > Thinking about, does not make sense the field size MAXPGPATH + 1.\n> > > all other similar fields are just MAXPGPATH.\n> > >\n> > > If we copy MAXPGPATH + 1, it will also be wrong.\n> > > So it is necessary to adjust logbackup.h as well.\n> >\n> > I am not sure whether we need to change the size of the field,\n> > but if change it, I wonder it is better to modify the following\n> > message from MAXPGPATH to MAXPGPATH -1.\n> >\n> > errmsg(\"backup label too long (max %d\n> > bytes)\",\n> > MAXPGPATH)));\n> >\n> Or perhaps, is it better to show the too long label?\n> \n> errmsg(\"backup label too long (%s)\",\n> backupidstr)));\n\nI don't think it is better to show a string longer than MAXPGPATH (=1024)\nin the error message.\n\nRegards,\nYugo Nagata\n\n> \n> best regards,\n> Ranier Vilela\n> \n> >\n> > >\n> > > So, I think that v3 is ok to fix.\n> > >\n> > > best regards,\n> > > Ranier Vilela\n> > >\n> > > >\n> > > > best regards,\n> > > > Ranier Vilela\n> > > >\n> > > >>\n> >\n> >\n> > --\n> > Yugo NAGATA <[email protected]>\n> >\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Thu, 27 Jun 2024 12:17:04 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string\n (src/backend/access/transam/xlog.c)" }, { "msg_contents": "On Mon, 24 Jun 2024 08:25:36 -0300\nRanier Vilela <[email protected]> wrote:\n\n> Em dom., 23 de jun. de 2024 às 23:56, Richard Guo <[email protected]>\n> escreveu:\n> \n> > On Mon, Jun 24, 2024 at 7:51 AM Ranier Vilela <[email protected]> wrote:\n> > > In src/include/access/xlogbackup.h, the field *name*\n> > > has one byte extra to store null-termination.\n> > >\n> > > But, in the function *do_pg_backup_start*,\n> > > I think that is a mistake in the line (8736):\n> > >\n> > > memcpy(state->name, backupidstr, strlen(backupidstr));\n> > >\n> > > memcpy with strlen does not copy the whole string.\n> > > strlen returns the exact length of the string, without\n> > > the null-termination.\n> >\n> > I noticed that the two callers of do_pg_backup_start both allocate\n> > BackupState with palloc0. Can we rely on this to ensure that the\n> > BackupState.name is initialized with null-termination?\n> >\n> I do not think so.\n> It seems to me the best solution is to use Michael's suggestion, strlcpy +\n> sizeof.\n> \n> Currently we have this:\n> memcpy(state->name, \"longlongpathexample1\",\n> strlen(\"longlongpathexample1\"));\n> printf(\"%s\\n\", state->name);\n> longlongpathexample1\n> \n> Next random call:\n> memcpy(state->name, \"longpathexample2\", strlen(\"longpathexample2\"));\n> printf(\"%s\\n\", state->name);\n> longpathexample2ple1\n\nIn the current uses, BackupState is freed (by pfree or MemoryContextDelete)\nafter each use of BackupState, so the memory space is not reused as your\nscenario above, and there would not harms even if the null-termination is omitted. 
\n\nHowever, I wonder it is better to use strlcpy without assuming such the good\nmanner of callers.\n\nRegards,\nYugo Nagata\n\n> \n> It's not a good idea to use memcpy with strlen.\n> \n> best regards,\n> Ranier Vilela\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Thu, 27 Jun 2024 13:01:08 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string\n (src/backend/access/transam/xlog.c)" }, { "msg_contents": "On Thu, Jun 27, 2024 at 12:17:04PM +0900, Yugo NAGATA wrote:\n> I don't think it is better to show a string longer than MAXPGPATH (=1024)\n> in the error message.\n\n+1. I am not convinced that this is useful in this context.\n--\nMichael", "msg_date": "Thu, 27 Jun 2024 13:04:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em qui., 27 de jun. de 2024 às 01:01, Yugo NAGATA <[email protected]>\nescreveu:\n\n> On Mon, 24 Jun 2024 08:25:36 -0300\n> Ranier Vilela <[email protected]> wrote:\n>\n> > Em dom., 23 de jun. de 2024 às 23:56, Richard Guo <\n> [email protected]>\n> > escreveu:\n> >\n> > > On Mon, Jun 24, 2024 at 7:51 AM Ranier Vilela <[email protected]>\n> wrote:\n> > > > In src/include/access/xlogbackup.h, the field *name*\n> > > > has one byte extra to store null-termination.\n> > > >\n> > > > But, in the function *do_pg_backup_start*,\n> > > > I think that is a mistake in the line (8736):\n> > > >\n> > > > memcpy(state->name, backupidstr, strlen(backupidstr));\n> > > >\n> > > > memcpy with strlen does not copy the whole string.\n> > > > strlen returns the exact length of the string, without\n> > > > the null-termination.\n> > >\n> > > I noticed that the two callers of do_pg_backup_start both allocate\n> > > BackupState with palloc0. Can we rely on this to ensure that the\n> > > BackupState.name is initialized with null-termination?\n> > >\n> > I do not think so.\n> > It seems to me the best solution is to use Michael's suggestion, strlcpy\n> +\n> > sizeof.\n> >\n> > Currently we have this:\n> > memcpy(state->name, \"longlongpathexample1\",\n> > strlen(\"longlongpathexample1\"));\n> > printf(\"%s\\n\", state->name);\n> > longlongpathexample1\n> >\n> > Next random call:\n> > memcpy(state->name, \"longpathexample2\", strlen(\"longpathexample2\"));\n> > printf(\"%s\\n\", state->name);\n> > longpathexample2ple1\n>\n> In the current uses, BackupState is freed (by pfree or MemoryContextDelete)\n> after each use of BackupState, so the memory space is not reused as your\n> scenario above, and there would not harms even if the null-termination is\n> omitted.\n>\n> However, I wonder it is better to use strlcpy without assuming such the\n> good\n> manner of callers.\n>\nThanks for your inputs.\n\nstrlcpy is used across all the sources, so this style is better and safe.\n\nHere v4, attached, with MAXPGPATH -1, according to your suggestion.\n\n From the linux man page:\nhttps://linux.die.net/man/3/strlcpy\n\n\" The *strlcpy*() function copies up to *size* - 1 characters from the\nNUL-terminated string *src* to *dst*, NUL-terminating the result. \"\n\nbest regards,\nRanier Vilela\n\nEm qui., 27 de jun. de 2024 às 01:01, Yugo NAGATA <[email protected]> escreveu:On Mon, 24 Jun 2024 08:25:36 -0300\nRanier Vilela <[email protected]> wrote:\n\n> Em dom., 23 de jun. 
de 2024 às 23:56, Richard Guo <[email protected]>\n> escreveu:\n> \n> > On Mon, Jun 24, 2024 at 7:51 AM Ranier Vilela <[email protected]> wrote:\n> > > In src/include/access/xlogbackup.h, the field *name*\n> > > has one byte extra to store null-termination.\n> > >\n> > > But, in the function *do_pg_backup_start*,\n> > > I think that is a mistake in the line (8736):\n> > >\n> > > memcpy(state->name, backupidstr, strlen(backupidstr));\n> > >\n> > > memcpy with strlen does not copy the whole string.\n> > > strlen returns the exact length of the string, without\n> > > the null-termination.\n> >\n> > I noticed that the two callers of do_pg_backup_start both allocate\n> > BackupState with palloc0.  Can we rely on this to ensure that the\n> > BackupState.name is initialized with null-termination?\n> >\n> I do not think so.\n> It seems to me the best solution is to use Michael's suggestion, strlcpy +\n> sizeof.\n> \n> Currently we have this:\n> memcpy(state->name, \"longlongpathexample1\",\n> strlen(\"longlongpathexample1\"));\n> printf(\"%s\\n\", state->name);\n> longlongpathexample1\n> \n> Next random call:\n> memcpy(state->name, \"longpathexample2\", strlen(\"longpathexample2\"));\n> printf(\"%s\\n\", state->name);\n> longpathexample2ple1\n\nIn the current uses, BackupState is freed (by pfree or MemoryContextDelete)\nafter each use of BackupState, so the memory space is not reused as your\nscenario above, and there would not harms even if the null-termination is omitted. \n\nHowever, I wonder it is better to use strlcpy without assuming such the good\nmanner of callers.Thanks for your inputs.strlcpy is used across all the sources, so this style is better and safe.Here v4, attached, with MAXPGPATH -1, according to your suggestion.From the linux man page:https://linux.die.net/man/3/strlcpy\"\nThe strlcpy() function copies up to size - 1 characters from the NUL-terminated string src to dst, NUL-terminating the result.\n\n\"best regards,Ranier Vilela", "msg_date": "Thu, 27 Jun 2024 08:48:47 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em qui., 27 de jun. de 2024 às 08:48, Ranier Vilela <[email protected]>\nescreveu:\n\n> Em qui., 27 de jun. de 2024 às 01:01, Yugo NAGATA <[email protected]>\n> escreveu:\n>\n>> On Mon, 24 Jun 2024 08:25:36 -0300\n>> Ranier Vilela <[email protected]> wrote:\n>>\n>> > Em dom., 23 de jun. de 2024 às 23:56, Richard Guo <\n>> [email protected]>\n>> > escreveu:\n>> >\n>> > > On Mon, Jun 24, 2024 at 7:51 AM Ranier Vilela <[email protected]>\n>> wrote:\n>> > > > In src/include/access/xlogbackup.h, the field *name*\n>> > > > has one byte extra to store null-termination.\n>> > > >\n>> > > > But, in the function *do_pg_backup_start*,\n>> > > > I think that is a mistake in the line (8736):\n>> > > >\n>> > > > memcpy(state->name, backupidstr, strlen(backupidstr));\n>> > > >\n>> > > > memcpy with strlen does not copy the whole string.\n>> > > > strlen returns the exact length of the string, without\n>> > > > the null-termination.\n>> > >\n>> > > I noticed that the two callers of do_pg_backup_start both allocate\n>> > > BackupState with palloc0. 
Can we rely on this to ensure that the\n>> > > BackupState.name is initialized with null-termination?\n>> > >\n>> > I do not think so.\n>> > It seems to me the best solution is to use Michael's suggestion,\n>> strlcpy +\n>> > sizeof.\n>> >\n>> > Currently we have this:\n>> > memcpy(state->name, \"longlongpathexample1\",\n>> > strlen(\"longlongpathexample1\"));\n>> > printf(\"%s\\n\", state->name);\n>> > longlongpathexample1\n>> >\n>> > Next random call:\n>> > memcpy(state->name, \"longpathexample2\", strlen(\"longpathexample2\"));\n>> > printf(\"%s\\n\", state->name);\n>> > longpathexample2ple1\n>>\n>> In the current uses, BackupState is freed (by pfree or\n>> MemoryContextDelete)\n>> after each use of BackupState, so the memory space is not reused as your\n>> scenario above, and there would not harms even if the null-termination is\n>> omitted.\n>>\n>> However, I wonder it is better to use strlcpy without assuming such the\n>> good\n>> manner of callers.\n>>\n> Thanks for your inputs.\n>\n> strlcpy is used across all the sources, so this style is better and safe.\n>\n> Here v4, attached, with MAXPGPATH -1, according to your suggestion.\n>\nNow with file patch really attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Thu, 27 Jun 2024 08:50:03 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "> On 27 Jun 2024, at 06:01, Yugo NAGATA <[email protected]> wrote:\n>> Em dom., 23 de jun. de 2024 às 23:56, Richard Guo <[email protected]>\n>> escreveu:\n>>> On Mon, Jun 24, 2024 at 7:51 AM Ranier Vilela <[email protected]> wrote:\n\n>>>> memcpy with strlen does not copy the whole string.\n>>>> strlen returns the exact length of the string, without\n>>>> the null-termination.\n>>> \n>>> I noticed that the two callers of do_pg_backup_start both allocate\n>>> BackupState with palloc0. Can we rely on this to ensure that the\n>>> BackupState.name is initialized with null-termination?\n>>> \n>> I do not think so.\n\nIn this case we can, we do that today..\n\n> However, I wonder it is better to use strlcpy without assuming such the good\n> manner of callers.\n\n..that being said I agree that it seems safer to use strlcpy() here.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 10:52:22 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "> On 27 Jun 2024, at 13:50, Ranier Vilela <[email protected]> wrote:\n\n> Now with file patch really attached.\n\n-\tif (strlen(backupidstr) > MAXPGPATH)\n+\tif (strlcpy(state->name, backupidstr, sizeof(state->name)) >= sizeof(state->name))\n \t\tereport(ERROR,\n\nStylistic nit perhaps, I would keep the strlen check here and just replace the\nmemcpy with strlcpy. Using strlen in the error message check makes the code\nmore readable.\n\n\n-\tchar\t\tname[MAXPGPATH + 1];\n+\tchar\t\tname[MAXPGPATH];/* backup label name */\n\nWith the introduced use of strlcpy, why do we need to change this field?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 11:20:19 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em seg., 1 de jul. 
de 2024 às 06:20, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 27 Jun 2024, at 13:50, Ranier Vilela <[email protected]> wrote:\n>\n> > Now with file patch really attached.\n>\n> - if (strlen(backupidstr) > MAXPGPATH)\n> + if (strlcpy(state->name, backupidstr, sizeof(state->name)) >=\n> sizeof(state->name))\n> ereport(ERROR,\n>\n> Stylistic nit perhaps, I would keep the strlen check here and just replace\n> the\n> memcpy with strlcpy. Using strlen in the error message check makes the\n> code\n> more readable.\n>\nThis is not performance-critical code, so I see no problem using strlen,\nfor the sake of readability.\n\n\n>\n> - char name[MAXPGPATH + 1];\n> + char name[MAXPGPATH];/* backup label name */\n>\n> With the introduced use of strlcpy, why do we need to change this field?\n>\nThe part about being the only reference in the entire code that uses\nMAXPGPATH + 1.\nMAXPGPATH is defined as 1024, so MAXPGPATH +1 is 1025.\nI think this hurts the calculation of the array index,\npreventing power two optimization.\n\nAnother argument is that all other paths have a 1023 size limit,\nI don't see why the backup label would have to be different.\n\nNew version patch attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Mon, 1 Jul 2024 14:35:49 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em seg., 1 de jul. de 2024 às 14:35, Ranier Vilela <[email protected]>\nescreveu:\n\n> Em seg., 1 de jul. de 2024 às 06:20, Daniel Gustafsson <[email protected]>\n> escreveu:\n>\n>> > On 27 Jun 2024, at 13:50, Ranier Vilela <[email protected]> wrote:\n>>\n>> > Now with file patch really attached.\n>>\n>> - if (strlen(backupidstr) > MAXPGPATH)\n>> + if (strlcpy(state->name, backupidstr, sizeof(state->name)) >=\n>> sizeof(state->name))\n>> ereport(ERROR,\n>>\n>> Stylistic nit perhaps, I would keep the strlen check here and just\n>> replace the\n>> memcpy with strlcpy. 
Using strlen in the error message check makes the\n>> code\n>> more readable.\n>>\n> This is not performance-critical code, so I see no problem using strlen,\n> for the sake of readability.\n>\n>\n>>\n>> - char name[MAXPGPATH + 1];\n>> + char name[MAXPGPATH];/* backup label name */\n>>\n>> With the introduced use of strlcpy, why do we need to change this field?\n>>\n> The part about being the only reference in the entire code that uses\n> MAXPGPATH + 1.\n> MAXPGPATH is defined as 1024, so MAXPGPATH +1 is 1025.\n> I think this hurts the calculation of the array index,\n> preventing power two optimization.\n>\n> Another argument is that all other paths have a 1023 size limit,\n> I don't see why the backup label would have to be different.\n>\n> New version patch attached.\n>\nSorry for v5, I forgot to update the patch and it was an error.\n\nbest regards,\nRanier Vilela", "msg_date": "Mon, 1 Jul 2024 14:38:20 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "On 2024-Jul-01, Ranier Vilela wrote:\n\n> > - char name[MAXPGPATH + 1];\n> > + char name[MAXPGPATH];/* backup label name */\n> >\n> > With the introduced use of strlcpy, why do we need to change this field?\n> >\n> The part about being the only reference in the entire code that uses\n> MAXPGPATH + 1.\n\nThe bit I don't understand about this discussion is what will happen\nwith users that currently have exactly 1024 chars in backup names today.\nWith this change, we'll be truncating their names to 1023 chars instead.\nWhy would they feel that such change is welcome?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 1 Jul 2024 21:15:48 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "> On 1 Jul 2024, at 21:15, Alvaro Herrera <[email protected]> wrote:\n> On 2024-Jul-01, Ranier Vilela wrote:\n\n>>> - char name[MAXPGPATH + 1];\n>>> + char name[MAXPGPATH];/* backup label name */\n>>> \n>>> With the introduced use of strlcpy, why do we need to change this field?\n>>> \n>> The part about being the only reference in the entire code that uses\n>> MAXPGPATH + 1.\n> \n> The bit I don't understand about this discussion is what will happen\n> with users that currently have exactly 1024 chars in backup names today.\n> With this change, we'll be truncating their names to 1023 chars instead.\n> Why would they feel that such change is welcome?\n\nThat's precisely what I was getting at. Maybe it makes sense to change, maybe\nnot, but that's not for this patch to decide as that's a different discussion\nfrom using safe string copying API's.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 21:19:59 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em seg., 1 de jul. 
de 2024 às 16:15, Alvaro Herrera <[email protected]>\nescreveu:\n\n> On 2024-Jul-01, Ranier Vilela wrote:\n>\n> > > - char name[MAXPGPATH + 1];\n> > > + char name[MAXPGPATH];/* backup label name */\n> > >\n> > > With the introduced use of strlcpy, why do we need to change this\n> field?\n> > >\n> > The part about being the only reference in the entire code that uses\n> > MAXPGPATH + 1.\n>\n> The bit I don't understand about this discussion is what will happen\n> with users that currently have exactly 1024 chars in backup names today.\n> With this change, we'll be truncating their names to 1023 chars instead.\n> Why would they feel that such change is welcome?\n>\nYes of course, I understand.\nThis can be a problem for users.\n\nbest regards,\nRanier Vilela\n\nEm seg., 1 de jul. de 2024 às 16:15, Alvaro Herrera <[email protected]> escreveu:On 2024-Jul-01, Ranier Vilela wrote:\n\n> > -       char            name[MAXPGPATH + 1];\n> > +       char            name[MAXPGPATH];/* backup label name */\n> >\n> > With the introduced use of strlcpy, why do we need to change this field?\n> >\n> The part about being the only reference in the entire code that uses\n> MAXPGPATH + 1.\n\nThe bit I don't understand about this discussion is what will happen\nwith users that currently have exactly 1024 chars in backup names today.\nWith this change, we'll be truncating their names to 1023 chars instead.\nWhy would they feel that such change is welcome?Yes of course, I understand.This can be a problem for users.best regards,Ranier Vilela", "msg_date": "Mon, 1 Jul 2024 16:47:43 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em seg., 1 de jul. de 2024 às 16:20, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 1 Jul 2024, at 21:15, Alvaro Herrera <[email protected]> wrote:\n> > On 2024-Jul-01, Ranier Vilela wrote:\n>\n> >>> - char name[MAXPGPATH + 1];\n> >>> + char name[MAXPGPATH];/* backup label name */\n> >>>\n> >>> With the introduced use of strlcpy, why do we need to change this\n> field?\n> >>>\n> >> The part about being the only reference in the entire code that uses\n> >> MAXPGPATH + 1.\n> >\n> > The bit I don't understand about this discussion is what will happen\n> > with users that currently have exactly 1024 chars in backup names today.\n> > With this change, we'll be truncating their names to 1023 chars instead.\n> > Why would they feel that such change is welcome?\n>\n> That's precisely what I was getting at. 
Maybe it makes sense to change,\n> maybe\n> not, but that's not for this patch to decide as that's a different\n> discussion\n> from using safe string copying API's.\n>\nOk, this is not material for the proposal and can be considered unrelated.\n\nWe only have to replace it with strlcpy.\n\nv7 attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Mon, 1 Jul 2024 16:58:06 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "> On 1 Jul 2024, at 21:58, Ranier Vilela <[email protected]> wrote:\n\n> We only have to replace it with strlcpy.\n\nThanks, I'll have a look at applying this in the tomorrow morning.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 22:08:33 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "On Mon, Jul 01, 2024 at 09:19:59PM +0200, Daniel Gustafsson wrote:\n>> The bit I don't understand about this discussion is what will happen\n>> with users that currently have exactly 1024 chars in backup names today.\n>> With this change, we'll be truncating their names to 1023 chars instead.\n>> Why would they feel that such change is welcome?\n> \n> That's precisely what I was getting at. Maybe it makes sense to change, maybe\n> not, but that's not for this patch to decide as that's a different discussion\n> from using safe string copying API's.\n\nYep. Agreed to keep backward-compatibility here, even if I suspect\nthere is close to nobody relying on backup label names of this size.\n--\nMichael", "msg_date": "Tue, 2 Jul 2024 09:33:03 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "> On 2 Jul 2024, at 02:33, Michael Paquier <[email protected]> wrote:\n> \n> On Mon, Jul 01, 2024 at 09:19:59PM +0200, Daniel Gustafsson wrote:\n>>> The bit I don't understand about this discussion is what will happen\n>>> with users that currently have exactly 1024 chars in backup names today.\n>>> With this change, we'll be truncating their names to 1023 chars instead.\n>>> Why would they feel that such change is welcome?\n>> \n>> That's precisely what I was getting at. Maybe it makes sense to change, maybe\n>> not, but that's not for this patch to decide as that's a different discussion\n>> from using safe string copying API's.\n> \n> Yep. Agreed to keep backward-compatibility here, even if I suspect\n> there is close to nobody relying on backup label names of this size.\n\nI suspect so too, and it might be a good project for someone to go over such\nbuffers to see if there is reason grow or contract. Either way, pushed the\nstrlcpy part.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 11:44:07 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" }, { "msg_contents": "Em ter., 2 de jul. 
de 2024 às 06:44, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 2 Jul 2024, at 02:33, Michael Paquier <[email protected]> wrote:\n> >\n> > On Mon, Jul 01, 2024 at 09:19:59PM +0200, Daniel Gustafsson wrote:\n> >>> The bit I don't understand about this discussion is what will happen\n> >>> with users that currently have exactly 1024 chars in backup names\n> today.\n> >>> With this change, we'll be truncating their names to 1023 chars\n> instead.\n> >>> Why would they feel that such change is welcome?\n> >>\n> >> That's precisely what I was getting at. Maybe it makes sense to\n> change, maybe\n> >> not, but that's not for this patch to decide as that's a different\n> discussion\n> >> from using safe string copying API's.\n> >\n> > Yep. Agreed to keep backward-compatibility here, even if I suspect\n> > there is close to nobody relying on backup label names of this size.\n>\n> I suspect so too, and it might be a good project for someone to go over\n> such\n> buffers to see if there is reason grow or contract. Either way, pushed the\n> strlcpy part.\n>\nThanks Daniel.\n\nbest regards,\nRanier Vilela\n\nEm ter., 2 de jul. de 2024 às 06:44, Daniel Gustafsson <[email protected]> escreveu:> On 2 Jul 2024, at 02:33, Michael Paquier <[email protected]> wrote:\n> \n> On Mon, Jul 01, 2024 at 09:19:59PM +0200, Daniel Gustafsson wrote:\n>>> The bit I don't understand about this discussion is what will happen\n>>> with users that currently have exactly 1024 chars in backup names today.\n>>> With this change, we'll be truncating their names to 1023 chars instead.\n>>> Why would they feel that such change is welcome?\n>> \n>> That's precisely what I was getting at.  Maybe it makes sense to change, maybe\n>> not, but that's not for this patch to decide as that's a different discussion\n>> from using safe string copying API's.\n> \n> Yep.  Agreed to keep backward-compatibility here, even if I suspect\n> there is close to nobody relying on backup label names of this size.\n\nI suspect so too, and it might be a good project for someone to go over such\nbuffers to see if there is reason grow or contract.  Either way, pushed the\nstrlcpy part.Thanks Daniel.best regards,Ranier Vilela", "msg_date": "Tue, 2 Jul 2024 08:07:42 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid incomplete copy string (src/backend/access/transam/xlog.c)" } ]
[ { "msg_contents": "Is there a desire to have a division operator / that takes dividend\nand divisor of types interval, and results in a quotient of type\ndouble precision.\n\nThis would be helpful in calculating how many times the divisor\ninterval can fit into the dividend interval.\n\nTo complement this division operator, it would be desirable to also\nhave a remainder operator %.\n\nFor example,\n\n('365 days'::interval / '5 days'::interval) => 73\n('365 days'::interval % '5 days'::interval) => 0\n\n('365 days'::interval / '3 days'::interval) => 121\n('365 days'::interval % '3 days'::interval) => 2\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Sun, 23 Jun 2024 17:57:00 -0700", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal: Division operator for (interval / interval => double\n precision)" }, { "msg_contents": "Hi\n\nIts always a good idea to extend the functionality of PG.\n\nThanks\nKashif Zeeshan\n\nOn Mon, Jun 24, 2024 at 5:57 AM Gurjeet Singh <[email protected]> wrote:\n\n> Is there a desire to have a division operator / that takes dividend\n> and divisor of types interval, and results in a quotient of type\n> double precision.\n>\n> This would be helpful in calculating how many times the divisor\n> interval can fit into the dividend interval.\n>\n> To complement this division operator, it would be desirable to also\n> have a remainder operator %.\n>\n> For example,\n>\n> ('365 days'::interval / '5 days'::interval) => 73\n> ('365 days'::interval % '5 days'::interval) => 0\n>\n> ('365 days'::interval / '3 days'::interval) => 121\n> ('365 days'::interval % '3 days'::interval) => 2\n>\n> Best regards,\n> Gurjeet\n> http://Gurje.et\n>\n>\n>\n\nHiIts always a good idea to extend the functionality of PG.ThanksKashif ZeeshanOn Mon, Jun 24, 2024 at 5:57 AM Gurjeet Singh <[email protected]> wrote:Is there a desire to have a division operator / that takes dividend\nand divisor of types interval, and results in a quotient of type\ndouble precision.\n\nThis would be helpful in calculating how many times the divisor\ninterval can fit into the dividend interval.\n\nTo complement this division operator, it would be desirable to also\nhave a remainder operator %.\n\nFor example,\n\n('365 days'::interval / '5 days'::interval) => 73\n('365 days'::interval % '5 days'::interval) => 0\n\n('365 days'::interval / '3 days'::interval) => 121\n('365 days'::interval % '3 days'::interval) => 2\n\nBest regards,\nGurjeet\nhttp://Gurje.et", "msg_date": "Mon, 24 Jun 2024 08:54:27 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Division operator for (interval / interval => double\n precision)" }, { "msg_contents": "On Sun, Jun 23, 2024 at 5:57 PM Gurjeet Singh <[email protected]> wrote:\n\n> Is there a desire to have a division operator / that takes dividend\n> and divisor of types interval, and results in a quotient of type\n> double precision.\n\n[...]\n> ('365 days'::interval / '3 days'::interval) => 121\n> ('365 days'::interval % '3 days'::interval) => 2\n>\n>\nIs it double or biginteger that your operation is producing?\n\nHow about making the % operator output an interval? What is the answer to:\n\n'1 day 12 hours 59 min 10 sec' / '3 hours 22 min 6 sec'?\n\nThough I'd rather add functions to produce numbers from intervals then let\nthe existing math operations be used on those. 
These seem independently\nuseful though like this feature I've not really seen demand for them from\nothers.\n\nin_years(interval) -> numeric\nin_days(interval) -> numeric\nin_hours(interval) -> numeric\nin_microseconds(interval) -> numeric\netc...\n\nThat said, implementing the inverse of the existing\ninterval/double->interval operator has a nice symmetry. Though the 4\nexamples are trivial, single unit, single scale, divisions, so exactly how\nthat translates into support for a possibly messy example like above I'm\nuncertain.\n\nThere is no precedence, but why not add a new composite type, (whole\nbigint, remainder bigint) that, for you example #2, would be\n(121,2*24*60*60*1000*1000), the second field being 2 days in microseconds?\nPossibly under a different operator so those who just want integer division\ncan have it more cheaply and easily.\n\nDavid J.\n\nOn Sun, Jun 23, 2024 at 5:57 PM Gurjeet Singh <[email protected]> wrote:Is there a desire to have a division operator / that takes dividend\nand divisor of types interval, and results in a quotient of type\ndouble precision.[...]\n('365 days'::interval / '3 days'::interval) => 121\n('365 days'::interval % '3 days'::interval) => 2Is it double or biginteger that your operation is producing?How about making the % operator output an interval?  What is the answer to:'1 day 12 hours 59 min 10 sec' / '3 hours 22 min 6 sec'?Though I'd rather add functions to produce numbers from intervals then let the existing math operations be used on those.  These seem independently useful though like this feature I've not really seen demand for them from others.in_years(interval) -> numericin_days(interval) -> numericin_hours(interval) -> numericin_microseconds(interval) -> numericetc...That said, implementing the inverse of the existing interval/double->interval operator has a nice symmetry.  Though the 4 examples are trivial, single unit, single scale, divisions, so exactly how that translates into support for a possibly messy example like above I'm uncertain.There is no precedence, but why not add a new composite type, (whole bigint, remainder bigint) that, for you example #2, would be (121,2*24*60*60*1000*1000), the second field being 2 days in microseconds?  Possibly under a different operator so those who just want integer division can have it more cheaply and easily.David J.", "msg_date": "Sun, 23 Jun 2024 21:43:57 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Division operator for (interval / interval => double\n precision)" }, { "msg_contents": "On Sun, 2024-06-23 at 17:57 -0700, Gurjeet Singh wrote:\n> Is there a desire to have a division operator / that takes dividend\n> and divisor of types interval, and results in a quotient of type\n> double precision.\n> \n> This would be helpful in calculating how many times the divisor\n> interval can fit into the dividend interval.\n> \n> To complement this division operator, it would be desirable to also\n> have a remainder operator %.\n> \n> For example,\n> \n> ('365 days'::interval / '5 days'::interval) => 73\n> ('365 days'::interval % '5 days'::interval) => 0\n> \n> ('365 days'::interval / '3 days'::interval) => 121\n> ('365 days'::interval % '3 days'::interval) => 2\n\nI think that is a good idea in principle, but I have one complaint,\nand one thing should be discussed.\n\nThe complaint is that the result should be double precision or numeric.\nI'd want the result of '1 minute' / '8 seconds' to be 7.5.\nThat would match how the multiplication operator works.\n\nWhat should be settled is how to handle divisions that are not well defined.\nFor example, what is '1 year' / '1 day'?\n- 365.24217, because that is the number of solar days in a solar year?\n- 365, because we don't consider leap years?\n- 360, because we use the usual conversion of 1 month -> 30 days?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 24 Jun 2024 10:34:00 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Division operator for (interval / interval => double\n precision)" }, { "msg_contents": "On Mon, Jun 24, 2024 at 2:04 PM Laurenz Albe <[email protected]>\nwrote:\n\n> On Sun, 2024-06-23 at 17:57 -0700, Gurjeet Singh wrote:\n> > Is there a desire to have a division operator / that takes dividend\n> > and divisor of types interval, and results in a quotient of type\n> > double precision.\n> >\n> > This would be helpful in calculating how many times the divisor\n> > interval can fit into the dividend interval.\n> >\n> > To complement this division operator, it would be desirable to also\n> > have a remainder operator %.\n> >\n> > For example,\n> >\n> > ('365 days'::interval / '5 days'::interval) => 73\n> > ('365 days'::interval % '5 days'::interval) => 0\n> >\n> > ('365 days'::interval / '3 days'::interval) => 121\n> > ('365 days'::interval % '3 days'::interval) => 2\n>\n> I think that is a good idea in principle, but I have one complaint,\n> and one thing should be discussed.\n>\n> The complaint is that the result should be double precision or numeric.\n> I'd want the result of '1 minute' / '8 seconds' to be 7.5.\n> That would match how the multiplication operator works.\n>\n> What should be settled is how to handle divisions that are not well\n> defined.\n> For example, what is '1 year' / '1 day'?\n> - 365.24217, because that is the number of solar days in a solar year?\n> - 365, because we don't consider leap years?\n> - 360, because we use the usual conversion of 1 month -> 30 days?\n>\n\nWe will need to go back to first principles, I guess. Result of division is\nquotient, which is how many times a divisor can be subtracted from\ndividend, and remainder, which is the what remains after so many\nsubtractions. Since day to hours and month to days conversions are not\nconstants, interval/interval will result in an integer quotient and\ninterval remainder. 
That looks painful.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 24 Jun 2024 14:17:06 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Division operator for (interval / interval => double\n precision)" } ]
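A minimal SQL sketch of the semantics discussed above (illustrative only, not taken from the thread): the same quotient/remainder split can be approximated today by reducing both intervals to seconds with extract(epoch FROM ...), which bakes in the usual 30-day-month and 24-hour-day assumptions that a native operator would also have to settle on.

SELECT floor(extract(epoch FROM interval '365 days')
           / extract(epoch FROM interval '3 days'))   AS quotient,   -- 121
       mod(extract(epoch FROM interval '365 days'),
           extract(epoch FROM interval '3 days'))
         * interval '1 second'                         AS remainder;  -- 2 days, shown as 48:00:00

The workaround flattens the months/days/time fields that the interval type keeps separate, which is exactly the ambiguity Laurenz and Ashutosh raise for a built-in operator.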
[ { "msg_contents": "Hello hackers,\n\nAs recent caiman failures ([1], [2], ...) show, plperl.sql is incompatible\nwith Perl 5.40. (The last successful test runs took place when cayman\nhad Perl 5.38.2 installed: [3].)\n\nFWIW, I've found an already-existing fix for the issue [4] and a note\ndescribing the change for Perl 5.39.10 [5].\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2001%3A34%3A23\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2000%3A15%3A16\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-05-02%2021%3A57%3A17\n[4] \nhttps://git.alpinelinux.org/aports/tree/community/postgresql14/fix-test-plperl-5.8-pragma.patch?id=28aeb872811f59a7f646aa29ed7c9dc30e698e65\n[5] https://metacpan.org/release/PEVANS/perl-5.39.10/changes#Selected-Bug-Fixes\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 24 Jun 2024 07:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Buildfarm animal caiman showing a plperl test issue with newer Perl\n versions" }, { "msg_contents": "\nOn 2024-06-24 Mo 12:00 AM, Alexander Lakhin wrote:\n> Hello hackers,\n>\n> As recent caiman failures ([1], [2], ...) show, plperl.sql is \n> incompatible\n> with Perl 5.40. (The last successful test runs took place when cayman\n> had Perl 5.38.2 installed: [3].)\n>\n> FWIW, I've found an already-existing fix for the issue [4] and a note\n> describing the change for Perl 5.39.10 [5].\n>\n> [1] \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2001%3A34%3A23\n> [2] \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2000%3A15%3A16\n> [3] \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-05-02%2021%3A57%3A17\n> [4] \n> https://git.alpinelinux.org/aports/tree/community/postgresql14/fix-test-plperl-5.8-pragma.patch?id=28aeb872811f59a7f646aa29ed7c9dc30e698e65\n> [5] \n> https://metacpan.org/release/PEVANS/perl-5.39.10/changes#Selected-Bug-Fixes\n>\n>\n\nIt's a very odd bug. I guess we should just backpatch the removal of \nthat redundant version check in plc_perlboot.pl, probably all the way \ndown to 9.2 since godwit builds and tests with plperl that far back, and \nsome day in the not too distant future it will upgrade to perl 5.40.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 06:46:57 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Buildfarm animal caiman showing a plperl test issue with newer\n Perl versions" }, { "msg_contents": "\nOn 2024-06-24 Mo 6:46 AM, Andrew Dunstan wrote:\n>\n> On 2024-06-24 Mo 12:00 AM, Alexander Lakhin wrote:\n>> Hello hackers,\n>>\n>> As recent caiman failures ([1], [2], ...) show, plperl.sql is \n>> incompatible\n>> with Perl 5.40. 
(The last successful test runs took place when cayman\n>> had Perl 5.38.2 installed: [3].)\n>>\n>> FWIW, I've found an already-existing fix for the issue [4] and a note\n>> describing the change for Perl 5.39.10 [5].\n>>\n>> [1] \n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2001%3A34%3A23\n>> [2] \n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-06-24%2000%3A15%3A16\n>> [3] \n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2024-05-02%2021%3A57%3A17\n>> [4] \n>> https://git.alpinelinux.org/aports/tree/community/postgresql14/fix-test-plperl-5.8-pragma.patch?id=28aeb872811f59a7f646aa29ed7c9dc30e698e65\n>> [5] \n>> https://metacpan.org/release/PEVANS/perl-5.39.10/changes#Selected-Bug-Fixes\n>>\n>>\n>\n> It's a very odd bug. I guess we should just backpatch the removal of \n> that redundant version check in plc_perlboot.pl, probably all the way \n> down to 9.2 since godwit builds and tests with plperl that far back, \n> and some day in the not too distant future it will upgrade to perl 5.40.\n>\n>\n>\n\nDone.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 26 Jun 2024 07:35:24 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Buildfarm animal caiman showing a plperl test issue with newer\n Perl versions" } ]
[ { "msg_contents": "hi.\nthe following two queries should return the same result?\n\nSELECT * FROM JSON_query (jsonb 'null', '$' returning jsonb);\nSELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);\n\nI've tried a patch to implement it.\n(i raised the issue at\nhttps://www.postgresql.org/message-id/CACJufxFWiCnG3Q7f0m_GdrytPbv29A5OWngCDwKVjcftwzHbTA%40mail.gmail.com\ni think a new thread would be more appropriate).\n\n\n\ncurrent json_value doc:\n\"Note that scalar strings returned by json_value always have their\nquotes removed, equivalent to specifying OMIT QUOTES in json_query.\"\n\ni think there are two exceptions: when the returning data types are\njsonb or json.\n\n\n", "msg_date": "Mon, 24 Jun 2024 18:04:47 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "sql/json miscellaneous issue" }, { "msg_contents": "On Mon, Jun 24, 2024 at 5:05 PM jian he <[email protected]> wrote:\n\n> hi.\n> the following two queries should return the same result?\n>\n> SELECT * FROM JSON_query (jsonb 'null', '$' returning jsonb);\n> SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);\n>\n> I've tried a patch to implement it.\n> (i raised the issue at\n>\n> https://www.postgresql.org/message-id/CACJufxFWiCnG3Q7f0m_GdrytPbv29A5OWngCDwKVjcftwzHbTA%40mail.gmail.com\n> i think a new thread would be more appropriate).\n>\n>\n>\n> current json_value doc:\n> \"Note that scalar strings returned by json_value always have their\n> quotes removed, equivalent to specifying OMIT QUOTES in json_query.\"\n>\n> i think there are two exceptions: when the returning data types are\n> jsonb or json.\n>\n>\n>\nHi!\n\nI also noticed a very strange difference in behavior in these two queries,\nit seems to me that although it returns a string by default, for the boolean\noperator it is necessary to return true or false\nSELECT * FROM JSON_value (jsonb '1', '$ == \"1\"' returning jsonb);\n json_value\n------------\n\n(1 row)\n\n SELECT * FROM JSON_value (jsonb 'null', '$ == \"1\"' returning jsonb);\n json_value\n------------\n false\n(1 row)\n\n\n\nBest regards, Stepan Neretin.\n\nOn Mon, Jun 24, 2024 at 5:05 PM jian he <[email protected]> wrote:hi.\nthe following two queries should return the same result?\n\nSELECT * FROM JSON_query (jsonb 'null', '$' returning jsonb);\nSELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);\n\nI've tried a patch to implement it.\n(i raised the issue at\nhttps://www.postgresql.org/message-id/CACJufxFWiCnG3Q7f0m_GdrytPbv29A5OWngCDwKVjcftwzHbTA%40mail.gmail.com\ni think a new thread would be more appropriate).\n\n\n\ncurrent json_value  doc:\n\"Note that scalar strings returned by json_value always have their\nquotes removed, equivalent to specifying OMIT QUOTES in json_query.\"\n\ni think there are two exceptions: when the returning data types are\njsonb or json.\n\nHi!I also noticed a very strange difference in behavior in these two queries, it seems to me that although it returns a string by default, for the boolean operator it is necessary to return true or falseSELECT * FROM JSON_value (jsonb '1', '$ == \"1\"' returning jsonb); json_value ------------ (1 row) SELECT * FROM JSON_value (jsonb 'null', '$ == \"1\"' returning jsonb); json_value ------------ false(1 row)Best regards, Stepan Neretin.", "msg_date": "Mon, 24 Jun 2024 18:02:46 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql/json miscellaneous issue" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 24, 2024 at 8:02 PM Stepan 
Neretin <[email protected]> wrote:\n> Hi!\n>\n> I also noticed a very strange difference in behavior in these two queries, it seems to me that although it returns a string by default, for the boolean operator it is necessary to return true or false\n> SELECT * FROM JSON_value (jsonb '1', '$ == \"1\"' returning jsonb);\n> json_value\n> ------------\n>\n> (1 row)\n>\n> SELECT * FROM JSON_value (jsonb 'null', '$ == \"1\"' returning jsonb);\n> json_value\n> ------------\n> false\n> (1 row)\n\nHmm, that looks sane to me when comparing the above two queries with\ntheir jsonb_path_query() equivalents:\n\nselect jsonb_path_query(jsonb '1', '$ == \"1\"');\n jsonb_path_query\n------------------\n null\n(1 row)\n\nselect jsonb_path_query(jsonb 'null', '$ == \"1\"');\n jsonb_path_query\n------------------\n false\n(1 row)\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 24 Jun 2024 20:25:04 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql/json miscellaneous issue" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 24, 2024 at 7:04 PM jian he <[email protected]> wrote:\n>\n> hi.\n> the following two queries should return the same result?\n>\n> SELECT * FROM JSON_query (jsonb 'null', '$' returning jsonb);\n> SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);\n\nI get this with HEAD:\n\nSELECT * FROM JSON_query (jsonb 'null', '$' returning jsonb);\n json_query\n------------\n null\n(1 row)\n\nTime: 734.587 ms\nSELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);\n json_value\n------------\n\n(1 row)\n\nMuch like:\n\nSELECT JSON_QUERY('{\"key\": null}', '$.key');\n json_query\n------------\n null\n(1 row)\n\nTime: 2.975 ms\nSELECT JSON_VALUE('{\"key\": null}', '$.key');\n json_value\n------------\n\n(1 row)\n\nWhich makes sense to me, because JSON_QUERY() is supposed to return a\nJSON null in both cases and JSON_VALUE() is supposed to return a SQL\nNULL for a JSON null.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 24 Jun 2024 20:46:36 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql/json miscellaneous issue" }, { "msg_contents": "On Mon, Jun 24, 2024 at 7:46 PM Amit Langote <[email protected]> wrote:\n>\n> Hi,\n>\n> On Mon, Jun 24, 2024 at 7:04 PM jian he <[email protected]> wrote:\n> >\n> > hi.\n> > the following two queries should return the same result?\n> >\n> > SELECT * FROM JSON_query (jsonb 'null', '$' returning jsonb);\n> > SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);\n>\n> I get this with HEAD:\n>\n> SELECT * FROM JSON_query (jsonb 'null', '$' returning jsonb);\n> json_query\n> ------------\n> null\n> (1 row)\n>\n> Time: 734.587 ms\n> SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);\n> json_value\n> ------------\n>\n> (1 row)\n>\n> Much like:\n>\n> SELECT JSON_QUERY('{\"key\": null}', '$.key');\n> json_query\n> ------------\n> null\n> (1 row)\n>\n> Time: 2.975 ms\n> SELECT JSON_VALUE('{\"key\": null}', '$.key');\n> json_value\n> ------------\n>\n> (1 row)\n>\n> Which makes sense to me, because JSON_QUERY() is supposed to return a\n> JSON null in both cases and JSON_VALUE() is supposed to return a SQL\n> NULL for a JSON null.\n>\n> --\n> Thanks, Amit Langote\n\nhi amit, sorry to bother you again.\n\nMy thoughts for the above cases are:\n* json_value, json_query main description is the same:\n{{Returns the result of applying the SQL/JSON path_expression to the\ncontext_item using the PASSING values.}}\nsame context_item, same 
path_expression, for the above cases, the\nresult should be the same?\n\n* in json_value, description\n{{The extracted value must be a single SQL/JSON scalar item; an error\nis thrown if that's not the case. If you expect that extracted value\nmight be an object or an array, use the json_query function instead.}}\nquery: `SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);`\nthe returned jsonb 'null' (applying the path expression) is a single\nSQL/JSON scalar item.\njson_value return jsonb null should be fine\n\n\nHowever, other database implementations return SQL null,\nso I guess returning SQL null is fine)\n(based on the doc explanation, returning jsonb null more make sense, imho)\n\n\n", "msg_date": "Tue, 25 Jun 2024 11:18:16 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sql/json miscellaneous issue" }, { "msg_contents": "On Tue, Jun 25, 2024 at 12:18 PM jian he <[email protected]> wrote:\n> On Mon, Jun 24, 2024 at 7:46 PM Amit Langote <[email protected]> wrote:\n> > On Mon, Jun 24, 2024 at 7:04 PM jian he <[email protected]> wrote:\n> > >\n> > > hi.\n> > > the following two queries should return the same result?\n> > >\n> > > SELECT * FROM JSON_query (jsonb 'null', '$' returning jsonb);\n> > > SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);\n> >\n> > I get this with HEAD:\n> >\n> > SELECT * FROM JSON_query (jsonb 'null', '$' returning jsonb);\n> > json_query\n> > ------------\n> > null\n> > (1 row)\n> >\n> > Time: 734.587 ms\n> > SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);\n> > json_value\n> > ------------\n> >\n> > (1 row)\n> >\n> > Much like:\n> >\n> > SELECT JSON_QUERY('{\"key\": null}', '$.key');\n> > json_query\n> > ------------\n> > null\n> > (1 row)\n> >\n> > Time: 2.975 ms\n> > SELECT JSON_VALUE('{\"key\": null}', '$.key');\n> > json_value\n> > ------------\n> >\n> > (1 row)\n> >\n> > Which makes sense to me, because JSON_QUERY() is supposed to return a\n> > JSON null in both cases and JSON_VALUE() is supposed to return a SQL\n> > NULL for a JSON null.\n> >\n> > --\n> > Thanks, Amit Langote\n>\n> hi amit, sorry to bother you again.\n\nNo worries.\n\n> My thoughts for the above cases are:\n> * json_value, json_query main description is the same:\n> {{Returns the result of applying the SQL/JSON path_expression to the\n> context_item using the PASSING values.}}\n> same context_item, same path_expression, for the above cases, the\n> result should be the same?\n>\n> * in json_value, description\n> {{The extracted value must be a single SQL/JSON scalar item; an error\n> is thrown if that's not the case. If you expect that extracted value\n> might be an object or an array, use the json_query function instead.}}\n> query: `SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);`\n> the returned jsonb 'null' (applying the path expression) is a single\n> SQL/JSON scalar item.\n> json_value return jsonb null should be fine\n>\n>\n> However, other database implementations return SQL null,\n> so I guess returning SQL null is fine)\n> (based on the doc explanation, returning jsonb null more make sense, imho)\n\nIf the difference in behavior is not clear from the docs, I guess that\nmeans that we need to improve the docs. 
Would you like to give a shot\nat writing the patch?\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Tue, 25 Jun 2024 12:23:36 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql/json miscellaneous issue" }, { "msg_contents": "On Tue, Jun 25, 2024 at 11:23 AM Amit Langote <[email protected]> wrote:\n>\n> On Tue, Jun 25, 2024 at 12:18 PM jian he <[email protected]> wrote:\n> > On Mon, Jun 24, 2024 at 7:46 PM Amit Langote <[email protected]> wrote:\n> > > On Mon, Jun 24, 2024 at 7:04 PM jian he <[email protected]> wrote:\n> > > >\n> > > > hi.\n> > > > the following two queries should return the same result?\n> > > >\n> > > > SELECT * FROM JSON_query (jsonb 'null', '$' returning jsonb);\n> > > > SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);\n> > >\n> > > I get this with HEAD:\n> > >\n> > > SELECT * FROM JSON_query (jsonb 'null', '$' returning jsonb);\n> > > json_query\n> > > ------------\n> > > null\n> > > (1 row)\n> > >\n> > > Time: 734.587 ms\n> > > SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);\n> > > json_value\n> > > ------------\n> > >\n> > > (1 row)\n> > >\n> > > Much like:\n> > >\n> > > SELECT JSON_QUERY('{\"key\": null}', '$.key');\n> > > json_query\n> > > ------------\n> > > null\n> > > (1 row)\n> > >\n> > > Time: 2.975 ms\n> > > SELECT JSON_VALUE('{\"key\": null}', '$.key');\n> > > json_value\n> > > ------------\n> > >\n> > > (1 row)\n> > >\n> > > Which makes sense to me, because JSON_QUERY() is supposed to return a\n> > > JSON null in both cases and JSON_VALUE() is supposed to return a SQL\n> > > NULL for a JSON null.\n> > >\n> > > --\n> > > Thanks, Amit Langote\n> >\n> > hi amit, sorry to bother you again.\n>\n> No worries.\n>\n> > My thoughts for the above cases are:\n> > * json_value, json_query main description is the same:\n> > {{Returns the result of applying the SQL/JSON path_expression to the\n> > context_item using the PASSING values.}}\n> > same context_item, same path_expression, for the above cases, the\n> > result should be the same?\n> >\n> > * in json_value, description\n> > {{The extracted value must be a single SQL/JSON scalar item; an error\n> > is thrown if that's not the case. If you expect that extracted value\n> > might be an object or an array, use the json_query function instead.}}\n> > query: `SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);`\n> > the returned jsonb 'null' (applying the path expression) is a single\n> > SQL/JSON scalar item.\n> > json_value return jsonb null should be fine\n> >\n> >\n> > However, other database implementations return SQL null,\n> > so I guess returning SQL null is fine)\n> > (based on the doc explanation, returning jsonb null more make sense, imho)\n>\n> If the difference in behavior is not clear from the docs, I guess that\n> means that we need to improve the docs. Would you like to give a shot\n> at writing the patch?\n>\n\nother databases did mention how json_value deals with json null. eg.\n[0] mysql description:\nWhen the data at the specified path consists of or resolves to a JSON\nnull literal, the function returns SQL NULL.\n[1] oracle description:\nSQL/JSON function json_value applied to JSON value null returns SQL\nNULL, not the SQL string 'null'. 
This means, in particular, that you\ncannot use json_value to distinguish the JSON value null from the\nabsence of a value; SQL NULL indicates both cases.\n\n\nimitate above, i come up with following:\n\"The extracted value must be a single SQL/JSON scalar item; an error\nis thrown if that's not the case. ...\"\nto\n\"The extracted value must be a single SQL/JSON scalar item; an error\nis thrown if that's not the case.\nIf the extracted value is a JSON null, an SQL NULL value will return.\nThis means that you cannot use json_value to distinguish the JSON\nvalue null from evaluating path_expression yields no value at all; SQL\nNULL indicates both cases, to distinguish these two cases, use\njson_query instead.\n\"\n\n\nI also changed from\nON EMPTY is not specified is to return a null value.\nON ERROR is not specified is to return a null value.\nto\nThe default when ON EMPTY is not specified is to return an SQL NULL value.\nThe default when ON ERROR is not specified is to return an SQL NULL value.\n\n[0] https://dev.mysql.com/doc/refman/8.4/en/json-search-functions.html#function_json-value\n[1]https://docs.oracle.com/en/database/oracle/oracle-database/19/adjsn/function-JSON_VALUE.html#GUID-622170D8-7BAD-4F5F-86BF-C328451FC3BE", "msg_date": "Tue, 25 Jun 2024 12:53:37 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sql/json miscellaneous issue" }, { "msg_contents": "Hi,\n\nOn Tue, Jun 25, 2024 at 1:53 PM jian he <[email protected]> wrote:\n> On Tue, Jun 25, 2024 at 11:23 AM Amit Langote <[email protected]> wrote:\n> > > My thoughts for the above cases are:\n> > > * json_value, json_query main description is the same:\n> > > {{Returns the result of applying the SQL/JSON path_expression to the\n> > > context_item using the PASSING values.}}\n> > > same context_item, same path_expression, for the above cases, the\n> > > result should be the same?\n> > >\n> > > * in json_value, description\n> > > {{The extracted value must be a single SQL/JSON scalar item; an error\n> > > is thrown if that's not the case. If you expect that extracted value\n> > > might be an object or an array, use the json_query function instead.}}\n> > > query: `SELECT * FROM JSON_value (jsonb 'null', '$' returning jsonb);`\n> > > the returned jsonb 'null' (applying the path expression) is a single\n> > > SQL/JSON scalar item.\n> > > json_value return jsonb null should be fine\n> > >\n> > >\n> > > However, other database implementations return SQL null,\n> > > so I guess returning SQL null is fine)\n> > > (based on the doc explanation, returning jsonb null more make sense, imho)\n> >\n> > If the difference in behavior is not clear from the docs, I guess that\n> > means that we need to improve the docs. Would you like to give a shot\n> > at writing the patch?\n> >\n>\n> other databases did mention how json_value deals with json null. eg.\n> [0] mysql description:\n> When the data at the specified path consists of or resolves to a JSON\n> null literal, the function returns SQL NULL.\n> [1] oracle description:\n> SQL/JSON function json_value applied to JSON value null returns SQL\n> NULL, not the SQL string 'null'. This means, in particular, that you\n> cannot use json_value to distinguish the JSON value null from the\n> absence of a value; SQL NULL indicates both cases.\n>\n>\n> imitate above, i come up with following:\n> \"The extracted value must be a single SQL/JSON scalar item; an error\n> is thrown if that's not the case. 
...\"\n> to\n> \"The extracted value must be a single SQL/JSON scalar item; an error\n> is thrown if that's not the case.\n> If the extracted value is a JSON null, an SQL NULL value will return.\n> This means that you cannot use json_value to distinguish the JSON\n> value null from evaluating path_expression yields no value at all; SQL\n> NULL indicates both cases, to distinguish these two cases, use\n> json_query instead.\n> \"\n>\n> I also changed from\n> ON EMPTY is not specified is to return a null value.\n> ON ERROR is not specified is to return a null value.\n> to\n> The default when ON EMPTY is not specified is to return an SQL NULL value.\n> The default when ON ERROR is not specified is to return an SQL NULL value.\n>\n> [0] https://dev.mysql.com/doc/refman/8.4/en/json-search-functions.html#function_json-value\n> [1]https://docs.oracle.com/en/database/oracle/oracle-database/19/adjsn/function-JSON_VALUE.html#GUID-622170D8-7BAD-4F5F-86BF-C328451FC3BE\n\nThanks, though the patch at [1], which is a much larger attempt to\nrewrite SQL/JSON query function docs, takes care of mentioning this.\nCould you please give that one a read?\n\n--\nThanks, Amit Langote\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqH_vwkNqL3Y0tpnugEaR5-7vU43XSxAC06oZJ6U%3D3LVdw%40mail.gmail.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 10:02:58 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql/json miscellaneous issue" } ]
[ { "msg_contents": "InjectionPointRun() acquires InjectionPointLock, looks up the hash \nentry, and releases the lock:\n\n> \tLWLockAcquire(InjectionPointLock, LW_SHARED);\n> \tentry_by_name = (InjectionPointEntry *)\n> \t\thash_search(InjectionPointHash, name,\n> \t\t\t\t\tHASH_FIND, &found);\n> \tLWLockRelease(InjectionPointLock);\n\nLater, it reads fields from the entry it looked up:\n\n> \t\t/* not found in local cache, so load and register */\n> \t\tsnprintf(path, MAXPGPATH, \"%s/%s%s\", pkglib_path,\n> \t\t\t\t entry_by_name->library, DLSUFFIX);\n\nIsn't that a straightforward race condition, if the injection point is \ndetached in between?\n\n\nAnother thing:\n\nI wanted use injection points to inject an error early at backend \nstartup, to write a test case for the scenarios that Jacob point out at \nhttps://www.postgresql.org/message-id/CAOYmi%2Bnwvu21mJ4DYKUa98HdfM_KZJi7B1MhyXtnsyOO-PB6Ww%40mail.gmail.com. \nBut I can't do that, because InjectionPointRun() requires a PGPROC \nentry, because it uses an LWLock. That also makes it impossible to use \ninjection points in the postmaster. Any chance we could allow injection \npoints to be triggered without a PGPROC entry? Could we use a simple \nspinlock instead? With a fast path for the case that no injection points \nare attached or something?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 24 Jun 2024 13:29:38 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Injection point locking" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> ... I can't do that, because InjectionPointRun() requires a PGPROC \n> entry, because it uses an LWLock. That also makes it impossible to use \n> injection points in the postmaster. Any chance we could allow injection \n> points to be triggered without a PGPROC entry? Could we use a simple \n> spinlock instead? With a fast path for the case that no injection points \n> are attached or something?\n\nEven taking a spinlock in the postmaster is contrary to project\npolicy. Maybe we could look the other way for debug-only code,\nbut it seems like a pretty horrible precedent.\n\nGiven your point that the existing locking is just a fig leaf\nanyway, maybe we could simply not have any? Then it's on the\ntest author to be sure that the injection point won't be\ngetting invoked when it's about to be removed. 
Or just rip\nout the removal capability, and say that once installed an\ninjection point is there until postmaster shutdown (or till\nshared memory reinitialization, anyway).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2024 11:03:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On Mon, Jun 24, 2024 at 01:29:38PM +0300, Heikki Linnakangas wrote:\n> InjectionPointRun() acquires InjectionPointLock, looks up the hash entry,\n> and releases the lock:\n> \n> > \tLWLockAcquire(InjectionPointLock, LW_SHARED);\n> > \tentry_by_name = (InjectionPointEntry *)\n> > \t\thash_search(InjectionPointHash, name,\n> > \t\t\t\t\tHASH_FIND, &found);\n> > \tLWLockRelease(InjectionPointLock);\n> \n> Later, it reads fields from the entry it looked up:\n> \n> > \t\t/* not found in local cache, so load and register */\n> > \t\tsnprintf(path, MAXPGPATH, \"%s/%s%s\", pkglib_path,\n> > \t\t\t\t entry_by_name->library, DLSUFFIX);\n> \n> Isn't that a straightforward race condition, if the injection point is\n> detached in between?\n\nThis is a feature, not a bug :)\n\nJokes apart, this is a behavior that Noah was looking for so as it is\npossible to detach a point to emulate what a debugger would do with a\nbreakpoint for some of his tests with concurrent DDL bugs, so not\ntaking a lock while running a point is important. It's true, though,\nthat we could always delay the LWLock release once the local cache is\nloaded, but would it really matter?\n--\nMichael", "msg_date": "Tue, 25 Jun 2024 11:14:57 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On Mon, Jun 24, 2024 at 11:03:09AM -0400, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n> > ... I can't do that, because InjectionPointRun() requires a PGPROC \n> > entry, because it uses an LWLock. That also makes it impossible to use \n> > injection points in the postmaster. Any chance we could allow injection \n> > points to be triggered without a PGPROC entry? Could we use a simple \n> > spinlock instead?\n\nThat sounds fine to me. If calling hash_search() with a spinlock feels too\nawful, a list to linear-search could work.\n\n> > With a fast path for the case that no injection points \n> > are attached or something?\n> \n> Even taking a spinlock in the postmaster is contrary to project\n> policy. Maybe we could look the other way for debug-only code,\n> but it seems like a pretty horrible precedent.\n\nIf you're actually using an injection point in the postmaster, that would be\nthe least of concerns. It is something of a concern for running an injection\npoint build while not attaching any injection point. One solution could be a\nGUC to control whether the postmaster participates in injection points.\nAnother could be to make the data structure readable with atomics only.\n\n> Given your point that the existing locking is just a fig leaf\n> anyway, maybe we could simply not have any? Then it's on the\n> test author to be sure that the injection point won't be\n> getting invoked when it's about to be removed.\n\nThat's tricky with injection_points_set_local() plus an injection point at a\nfrequently-reached location. It's one thing to control when the\ninjection_points_set_local() process reaches the injection point. 
It's too\nhard to control all the other processes that just reach the injection point\nand conclude it's not for their PID.\n\n> Or just rip\n> out the removal capability, and say that once installed an\n> injection point is there until postmaster shutdown (or till\n> shared memory reinitialization, anyway).\n\nThat could work. Tests do need a way to soft-disable, but it's okay with me\nif nothing can reclaim the resources.\n\n\n", "msg_date": "Mon, 24 Jun 2024 19:25:37 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On Mon, Jun 24, 2024 at 11:03:09AM -0400, Tom Lane wrote:\n> Given your point that the existing locking is just a fig leaf\n> anyway, maybe we could simply not have any? Then it's on the\n> test author to be sure that the injection point won't be\n> getting invoked when it's about to be removed.\n\nThat would work for me to put the responsibility to the test author,\nripping out the LWLock. I was wondering when somebody would come up\nwith a case where they'd want to point to the postmaster to do\nsomething, without really coming down to a case, so there was that\nfrom my side originally.\n\nLooking at all the points currently in the tree, nothing cares about\nthe concurrent locking when attaching or detaching a point, so perhaps\nit is a good thing based on these experiences to just let this LWLock\ngo. This should not impact the availability of the tests, either.\n\n> Or just rip\n> out the removal capability, and say that once installed an\n> injection point is there until postmaster shutdown (or till\n> shared memory reinitialization, anyway).\n\nBut not that. Being able to remove points on the fly can be important\nin some cases, for example where you'd still want to issue an ERROR\n(partial write path is one case) in a SQL test, then remove it in a\nfollow-up SQL query to not trigger the same ERROR.\n--\nMichael", "msg_date": "Tue, 25 Jun 2024 11:28:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On Tue, Jun 25, 2024 at 11:14:57AM +0900, Michael Paquier wrote:\n> On Mon, Jun 24, 2024 at 01:29:38PM +0300, Heikki Linnakangas wrote:\n> > InjectionPointRun() acquires InjectionPointLock, looks up the hash entry,\n> > and releases the lock:\n> > \n> > > \tLWLockAcquire(InjectionPointLock, LW_SHARED);\n> > > \tentry_by_name = (InjectionPointEntry *)\n> > > \t\thash_search(InjectionPointHash, name,\n> > > \t\t\t\t\tHASH_FIND, &found);\n> > > \tLWLockRelease(InjectionPointLock);\n> > \n> > Later, it reads fields from the entry it looked up:\n> > \n> > > \t\t/* not found in local cache, so load and register */\n> > > \t\tsnprintf(path, MAXPGPATH, \"%s/%s%s\", pkglib_path,\n> > > \t\t\t\t entry_by_name->library, DLSUFFIX);\n> > \n> > Isn't that a straightforward race condition, if the injection point is\n> > detached in between?\n> \n> This is a feature, not a bug :)\n> \n> Jokes apart, this is a behavior that Noah was looking for so as it is\n> possible to detach a point to emulate what a debugger would do with a\n> breakpoint for some of his tests with concurrent DDL bugs, so not\n> taking a lock while running a point is important. It's true, though,\n> that we could always delay the LWLock release once the local cache is\n> loaded, but would it really matter?\n\nI think your last sentence is what Heikki is saying should happen, and I\nagree. Yes, it matters. 
As written, InjectionPointRun() could cache an\nentry_by_name->function belonging to a different injection point.\n\n\n", "msg_date": "Tue, 25 Jun 2024 09:10:06 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On Tue, Jun 25, 2024 at 09:10:06AM -0700, Noah Misch wrote:\n> I think your last sentence is what Heikki is saying should happen, and I\n> agree. Yes, it matters. As written, InjectionPointRun() could cache an\n> entry_by_name->function belonging to a different injection point.\n\nThat's true, we could delay the release of the lock to happen just\nbefore a callback is run.\n\nNow, how much do people wish to see for the postmaster bits mentioned\nupthread? Taking a spinlock for so long is not going to work, so we\ncould just remove it and let developers deal with that and feed on the\nflexibility with the lock removal to allow this stuff in more areas.\nAll the existing tests are OK with that, and I think that also the\ncase of what you have proposed for the concurrency issues with\nin-place updates of catalogs. Or we could live with a no-lock path\nwhen going through that with the postmaster, but that's a bit weird.\n\nNote that with the current callbacks in the module, assuming that a\npoint is added within BackendStartup() in the postmaster like the\nattached, an ERROR is promoted to a FATAL, taking down the cluster. A\nNOTICE of course works find. Waits with conditional variables are not\nreally OK. How much are you looking for here?\n\nThe shmem state being initialized in the DSM registry is not something\nthat's going to work in the context of the postmaster, but we could\ntweak the module so as it can be loaded, initializing the shared state\nwith the shmem hooks and falling back to a DSM registry when the\nlibrary is not loaded with shared_preload_libraries. For example, see\nthe POC attached, where I've played with injection points in\nBackendStartup(), which is the area I'm guessing Heikki was looking\nat.\n--\nMichael", "msg_date": "Wed, 26 Jun 2024 10:56:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On Wed, Jun 26, 2024 at 10:56:12AM +0900, Michael Paquier wrote:\n> That's true, we could delay the release of the lock to happen just\n> before a callback is run.\n\nI am not sure what else we can do for the postmaster case for now, so\nI've moved ahead with the concern regarding the existing locking\nrelease delay when running a point, and pushed a patch for it.\n--\nMichael", "msg_date": "Fri, 28 Jun 2024 12:40:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On 25/06/2024 05:25, Noah Misch wrote:\n> On Mon, Jun 24, 2024 at 11:03:09AM -0400, Tom Lane wrote:\n>> Heikki Linnakangas <[email protected]> writes:\n>>> ... I can't do that, because InjectionPointRun() requires a PGPROC\n>>> entry, because it uses an LWLock. That also makes it impossible to use\n>>> injection points in the postmaster. Any chance we could allow injection\n>>> points to be triggered without a PGPROC entry? Could we use a simple\n>>> spinlock instead?\n> \n> That sounds fine to me. 
If calling hash_search() with a spinlock feels too\n> awful, a list to linear-search could work.\n\n>>> With a fast path for the case that no injection points\n>>> are attached or something?\n>>\n>> Even taking a spinlock in the postmaster is contrary to project\n>> policy. Maybe we could look the other way for debug-only code,\n>> but it seems like a pretty horrible precedent.\n> \n> If you're actually using an injection point in the postmaster, that would be\n> the least of concerns. It is something of a concern for running an injection\n> point build while not attaching any injection point. One solution could be a\n> GUC to control whether the postmaster participates in injection points.\n> Another could be to make the data structure readable with atomics only.\n\nI came up with the attached. It replaces the shmem hash table with an \narray that's scanned linearly. On each slot in the array, there's a \ngeneration number that indicates whether the slot is in use, and allows \ndetecting concurrent modifications without locks. The attach/detach \noperations still hold the LWLock, but InjectionPointRun() is now \nlock-free, so it can be used without a PGPROC entry.\n\nIt's now usable from postmaster too. However, it's theoretically \npossible that if shared memory is overwritten with garbage, the garbage \nlooks like a valid injection point with a name that matches one of the \ninjection points that postmaster looks at. That seems unlikely enough \nthat I think we can accept the risk. To close that gap 100% I think a \nGUC is the only solution.\n\nNote that until we actually add an injection point to a function that \nruns in the postmaster, there's no risk. If we're uneasy about that, we \ncould add an assertion to InjectionPointRun() to prevent it from running \nin the postmaster, so that we don't cross that line inadvertently.\n\nThoughts?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Mon, 8 Jul 2024 16:21:37 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Note that until we actually add an injection point to a function that \n> runs in the postmaster, there's no risk. If we're uneasy about that, we \n> could add an assertion to InjectionPointRun() to prevent it from running \n> in the postmaster, so that we don't cross that line inadvertently.\n\nAs long as we consider injection points to be a debug/test feature\nonly, I think it's a net positive that one can be set in the\npostmaster. I'd be considerably more uncomfortable if somebody\nwanted to do that in production, but maybe it'd be fine even then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2024 10:17:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On Mon, Jul 08, 2024 at 10:17:49AM -0400, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> Note that until we actually add an injection point to a function that \n>> runs in the postmaster, there's no risk. 
If we're uneasy about that, we \n>> could add an assertion to InjectionPointRun() to prevent it from running \n>> in the postmaster, so that we don't cross that line inadvertently.\n\nAFAIU, you want to be able to do that to enforce some protocol checks.\nThat's a fine goal.\n\n> As long as we consider injection points to be a debug/test feature\n> only, I think it's a net positive that one can be set in the\n> postmaster. I'd be considerably more uncomfortable if somebody\n> wanted to do that in production, but maybe it'd be fine even then.\n\nThis is documented as a developer feature for tests, the docs are\nclear about that.\n--\nMichael", "msg_date": "Tue, 9 Jul 2024 13:14:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On Mon, Jul 08, 2024 at 04:21:37PM +0300, Heikki Linnakangas wrote:\n> I came up with the attached. It replaces the shmem hash table with an array\n> that's scanned linearly. On each slot in the array, there's a generation\n> number that indicates whether the slot is in use, and allows detecting\n> concurrent modifications without locks. The attach/detach operations still\n> hold the LWLock, but InjectionPointRun() is now lock-free, so it can be used\n> without a PGPROC entry.\n\nOkay, noted.\n\n> It's now usable from postmaster too. However, it's theoretically possible\n> that if shared memory is overwritten with garbage, the garbage looks like a\n> valid injection point with a name that matches one of the injection points\n> that postmaster looks at. That seems unlikely enough that I think we can\n> accept the risk. To close that gap 100% I think a GUC is the only solution.\n\nThis does not worry me much, FWIW.\n\n+ * optimization to.avoid scanning through the whole entry, in the common case\n\ns/to.avoid/to avoid/\n\n+ * generation counter on each entry to to allow safe, lock-free reading.\ns/to to/to/\n\n+ * we're looking for is concurrently added or remoed, we might or might\ns/remoed/removed/\n\n+ if (max_inuse == 0)\n+ {\n+ if (InjectionPointCache)\n+ {\n+ hash_destroy(InjectionPointCache);\n+ InjectionPointCache = NULL;\n+ }\n+ return false;\n\nIn InjectionPointCacheRefresh(), this points to nothing in the cache,\nso it should return NULL not false, even both are 0.\n\n typedef struct InjectionPointCacheEntry\n {\n char name[INJ_NAME_MAXLEN];\n+ int slot_idx;\n+ uint64 generation;\n char private_data[INJ_PRIVATE_MAXLEN];\n InjectionPointCallback callback;\n } InjectionPointCacheEntry;\n\nMay be worth mentioning that generation is a copy of\nInjectionPointEntry's generation cross-checked at runtime with the\nshmem entry to see if we have a cache consistent with shmem under the\nsame point name.\n\n+ generation = pg_atomic_read_u64(&entry->generation);\n+ if (generation % 2 == 0)\n+ continue;\nIn the loops of InjectionPointCacheRefresh() and\nInjectionPointDetach(), perhaps this should say that the slot is not\nused hence skipped when generation is even.\n\nInjectionPointDetach() has this code block at its end:\n if (!found)\n return false;\n return true;\n\nNot the fault of this patch, but this can just return \"found\".\n\nThe tricks with max_inuse to make the shmem lookups cheaper are\ninteresting.\n\n+ pg_read_barrier();\n+ if (memcmp(entry->name, name, namelen + 1) != 0)\n+ continue;\nWhy this barrier when checking the name of a shmem entry before\nreloading it in the local cache? 
Perhaps the reason should be\ncommented?\n\n+ pg_read_barrier();\n+ if (pg_atomic_read_u64(&entry->generation) != generation)\n+ continue; /* was detached concurrently */\n+\n+ return injection_point_cache_load(&local_copy, idx, generation);\n\nSo, in InjectionPointCacheRefresh(), when a point is loaded into the\nlocal cache for the first time, the read of \"generation\" is the\ntipping point: it is possible to take a breakpoint at the beginning of\ninjection_point_cache_load(), detach then attach the point. What\nmatters is that we are going to use the data in local_copy, even if\nshmem may have something entirely different. Hmm. Okay. It is a bit\nannoying that the entry is just discarded and ignored if the local\ncopy and shmem generations don't match? Could it be more\nuser-friendly to go back to the beginning of ActiveInjectionPoints and\nre-check the whole rather than return a NULL callback?\n\n- if (private_data != NULL)\n- memcpy(entry->private_data, private_data, INJ_PRIVATE_MAXLEN);\n+ memcpy(entry->private_data, private_data, INJ_PRIVATE_MAXLEN);\n\nprivate_data could be NULL, hence why the memcpy()?\n--\nMichael", "msg_date": "Tue, 9 Jul 2024 14:16:13 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On 09/07/2024 08:16, Michael Paquier wrote:\n> typedef struct InjectionPointCacheEntry\n> {\n> char name[INJ_NAME_MAXLEN];\n> + int slot_idx;\n> + uint64 generation;\n> char private_data[INJ_PRIVATE_MAXLEN];\n> InjectionPointCallback callback;\n> } InjectionPointCacheEntry;\n> \n> May be worth mentioning that generation is a copy of\n> InjectionPointEntry's generation cross-checked at runtime with the\n> shmem entry to see if we have a cache consistent with shmem under the\n> same point name.\n\nAdded a comment.\n\n> + generation = pg_atomic_read_u64(&entry->generation);\n> + if (generation % 2 == 0)\n> + continue;\n> In the loops of InjectionPointCacheRefresh() and\n> InjectionPointDetach(), perhaps this should say that the slot is not\n> used hence skipped when generation is even.\n\nAdded a brief \"/* empty slot */\" comment\n\n> InjectionPointDetach() has this code block at its end:\n> if (!found)\n> return false;\n> return true;\n> \n> Not the fault of this patch, but this can just return \"found\".\n\nDone.\n\n> The tricks with max_inuse to make the shmem lookups cheaper are\n> interesting.\n> \n> + pg_read_barrier();\n> + if (memcmp(entry->name, name, namelen + 1) != 0)\n> + continue;\n> Why this barrier when checking the name of a shmem entry before\n> reloading it in the local cache? Perhaps the reason should be\n> commented?\n\nAdded a comment.\n\n> + pg_read_barrier();\n> + if (pg_atomic_read_u64(&entry->generation) != generation)\n> + continue; /* was detached concurrently */\n> +\n> + return injection_point_cache_load(&local_copy, idx, generation);\n> \n> So, in InjectionPointCacheRefresh(), when a point is loaded into the\n> local cache for the first time, the read of \"generation\" is the\n> tipping point: it is possible to take a breakpoint at the beginning of\n> injection_point_cache_load(), detach then attach the point. What\n> matters is that we are going to use the data in local_copy, even if\n> shmem may have something entirely different. Hmm. Okay. It is a bit\n> annoying that the entry is just discarded and ignored if the local\n> copy and shmem generations don't match? 
Could it be more\n> user-friendly to go back to the beginning of ActiveInjectionPoints and\n> re-check the whole rather than return a NULL callback?\n\nI thought about it, but no. If the generation number doesn't match, \nthere are a few possibilities:\n\n1. The entry was what we were looking for, but it was concurrently \ndetached. Return NULL is correct in that case.\n\n2. The entry was what we were looking for, but it was concurrently \ndetached, and was then immediately reattached. NULL is a fine return \nvalue in that case too. When Run runs concurrently with Detach+Attach, \nyou don't get any guarantee whether the actual apparent order is \n\"Detach, Attach, Run\", \"Detach, Run, Attach\", or \"Run, Detach, Attach\". \nNULL result corresponds to the \"Detach, Run, Attach\" ordering.\n\n3. The entry was not actually what we were looking for. The name \ncomparison falsely matched just because the slot was concurrently \ndetached and recycled for a different injection point. We must continue \nthe search in that case.\n\nI added a comment to the top of the loop to explain scenario 2. And a \ncomment to the \"continue\" to explain scnario 3, because that's a bit subtle.\n\n> - if (private_data != NULL)\n> - memcpy(entry->private_data, private_data, INJ_PRIVATE_MAXLEN);\n> + memcpy(entry->private_data, private_data, INJ_PRIVATE_MAXLEN);\n> \n> private_data could be NULL, hence why the memcpy()?\n\nIt can not be NULL. You can pass NULL or a shorter length, to \nInjectionPointAttach(), but we don't store the length in shared memory.\n\nAttached is a new version. No other changes except for fixes for the \nthings you pointed out and comments.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Tue, 9 Jul 2024 12:12:04 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On Tue, Jul 09, 2024 at 12:12:04PM +0300, Heikki Linnakangas wrote:\n> I thought about it, but no. If the generation number doesn't match, there\n> are a few possibilities:\n> \n> 1. The entry was what we were looking for, but it was concurrently detached.\n> Return NULL is correct in that case.\n> \n> 2. The entry was what we were looking for, but it was concurrently detached,\n> and was then immediately reattached. NULL is a fine return value in that\n> case too. When Run runs concurrently with Detach+Attach, you don't get any\n> guarantee whether the actual apparent order is \"Detach, Attach, Run\",\n> \"Detach, Run, Attach\", or \"Run, Detach, Attach\". NULL result corresponds to\n> the \"Detach, Run, Attach\" ordering.\n>\n> 3. The entry was not actually what we were looking for. The name comparison\n> falsely matched just because the slot was concurrently detached and recycled\n> for a different injection point. We must continue the search in that case.\n> \n> I added a comment to the top of the loop to explain scenario 2. And a\n> comment to the \"continue\" to explain scnario 3, because that's a bit subtle.\n\nOkay. I am fine with your arguments here. There is still an argument\nimo about looping back at the beginning of ActiveInjectionPoints\nentries if we find an entry with a matching name but the generation\ndoes not match with the local copy for the detach-attach concurrent\ncase, but just moving on with the follow-up entries is also OK by me,\nas well.\n\nThe new comments in InjectionPointCacheRefresh() are nice\nimprovements. 
Thanks for that.\n--\nMichael", "msg_date": "Wed, 10 Jul 2024 12:44:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On 10/07/2024 06:44, Michael Paquier wrote:\n> On Tue, Jul 09, 2024 at 12:12:04PM +0300, Heikki Linnakangas wrote:\n>> I thought about it, but no. If the generation number doesn't match, there\n>> are a few possibilities:\n>>\n>> 1. The entry was what we were looking for, but it was concurrently detached.\n>> Return NULL is correct in that case.\n>>\n>> 2. The entry was what we were looking for, but it was concurrently detached,\n>> and was then immediately reattached. NULL is a fine return value in that\n>> case too. When Run runs concurrently with Detach+Attach, you don't get any\n>> guarantee whether the actual apparent order is \"Detach, Attach, Run\",\n>> \"Detach, Run, Attach\", or \"Run, Detach, Attach\". NULL result corresponds to\n>> the \"Detach, Run, Attach\" ordering.\n>>\n>> 3. The entry was not actually what we were looking for. The name comparison\n>> falsely matched just because the slot was concurrently detached and recycled\n>> for a different injection point. We must continue the search in that case.\n>>\n>> I added a comment to the top of the loop to explain scenario 2. And a\n>> comment to the \"continue\" to explain scnario 3, because that's a bit subtle.\n> \n> Okay. I am fine with your arguments here. There is still an argument\n> imo about looping back at the beginning of ActiveInjectionPoints\n> entries if we find an entry with a matching name but the generation\n> does not match with the local copy for the detach-attach concurrent\n> case, but just moving on with the follow-up entries is also OK by me,\n> as well.\n> \n> The new comments in InjectionPointCacheRefresh() are nice\n> improvements. Thanks for that.\n\nOk, committed this.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 15 Jul 2024 10:55:26 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection point locking" }, { "msg_contents": "On Mon, Jul 15, 2024 at 10:55:26AM +0300, Heikki Linnakangas wrote:\n> Ok, committed this.\n\nOkidoki. Thanks!\n--\nMichael", "msg_date": "Tue, 16 Jul 2024 09:51:47 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection point locking" } ]
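The committed design is easiest to read as a reader-side protocol. The sketch below paraphrases the scheme described in the thread, with an even generation marking a free slot and an odd one an attached point, and with the generation re-read after copying to detect a concurrent detach; the type and constant names are illustrative placeholders, not the actual injection_point.c code.

#include "postgres.h"
#include "port/atomics.h"

#define SKETCH_NAME_MAXLEN 64          /* placeholder sizes, not the real INJ_* constants */
#define SKETCH_LIB_MAXLEN  128

typedef struct SketchEntry
{
    pg_atomic_uint64 generation;       /* even = free, odd = attached */
    char        name[SKETCH_NAME_MAXLEN];
    char        library[SKETCH_LIB_MAXLEN];
} SketchEntry;

/*
 * Copy one shared slot without holding a lock.  Returns false if the slot
 * is free, names a different point, or was detached while we were copying.
 */
static bool
sketch_copy_entry(SketchEntry *entry, const char *name, SketchEntry *copy)
{
    uint64      generation = pg_atomic_read_u64(&entry->generation);

    if (generation % 2 == 0)
        return false;                  /* empty slot */

    pg_read_barrier();                 /* read the fields only after the generation */
    if (strncmp(entry->name, name, SKETCH_NAME_MAXLEN) != 0)
        return false;                  /* some other injection point */

    memcpy(copy->name, entry->name, SKETCH_NAME_MAXLEN);
    memcpy(copy->library, entry->library, SKETCH_LIB_MAXLEN);

    pg_read_barrier();                 /* detached, and possibly reused, in the meantime? */
    if (pg_atomic_read_u64(&entry->generation) != generation)
        return false;

    return true;
}

On the writer side, attach and detach still run under the LWLock, so the generation counter only has to protect readers; that is the split Heikki describes above when he notes that attach/detach keep the lock while InjectionPointRun() becomes lock-free.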
[ { "msg_contents": "Hello hackers,\n\nThis patch is based on a suggestion from a separate thread [1]:\n\nOn Mon, Jun 24, 2024, at 01:46, Michael Paquier wrote:\n> Rather unrelated to this patch, still this patch makes the situation\n> more complicated in the docs, but wouldn't it be better to add ACL as\n> a term in acronyms.sql, and reuse it here? It would be a doc-only\n> patch that applies on top of the rest (could be on a new thread of its\n> own), with some <acronym> markups added where needed.\n\n[1] https://postgr.es/m/Zniz1n7qa3_i4iac%40paquier.xyz\n\n/Joel", "msg_date": "Mon, 24 Jun 2024 14:32:27 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Mon, Jun 24, 2024 at 02:32:27PM +0200, Joel Jacobson wrote:\n> This patch is based on a suggestion from a separate thread [1]:\n> \n> On Mon, Jun 24, 2024, at 01:46, Michael Paquier wrote:\n>> Rather unrelated to this patch, still this patch makes the situation\n>> more complicated in the docs, but wouldn't it be better to add ACL as\n>> a term in acronyms.sql, and reuse it here? It would be a doc-only\n>> patch that applies on top of the rest (could be on a new thread of its\n>> own), with some <acronym> markups added where needed.\n\nSounds reasonable to me.\n\n+ <ulink url=\"https://en.wikipedia.org/wiki/Access_Control_List\">Access Control List, i.e. privileges list</ulink>\n\nI think we could omit \"i.e. privileges list.\"\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 24 Jun 2024 10:44:48 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Mon, Jun 24, 2024 at 8:44 AM Nathan Bossart <[email protected]>\nwrote:\n\n> On Mon, Jun 24, 2024 at 02:32:27PM +0200, Joel Jacobson wrote:\n> > This patch is based on a suggestion from a separate thread [1]:\n> >\n> > On Mon, Jun 24, 2024, at 01:46, Michael Paquier wrote:\n> >> Rather unrelated to this patch, still this patch makes the situation\n> >> more complicated in the docs, but wouldn't it be better to add ACL as\n> >> a term in acronyms.sql, and reuse it here? It would be a doc-only\n> >> patch that applies on top of the rest (could be on a new thread of its\n> >> own), with some <acronym> markups added where needed.\n>\n> Sounds reasonable to me.\n>\n\n+1\n\n\n> + <ulink url=\"https://en.wikipedia.org/wiki/Access_Control_List\">Access\n> Control List, i.e. privileges list</ulink>\n>\n> I think we could omit \"i.e. privileges list.\"\n>\n>\nAgreed. Between the docs and code we say \"privileges list\" once and that\nrefers to the dumputIls description of the arguments to grant. As the\nacronym page now defines the term using fundamentals, introducing another\nterm not used elsewhere seems undesirable.\n\nObservations:\nWe are referencing a disambiguation page. We never actually spell out ACL\nanywhere so we might as well just reference what Wikipedia believes is the\nexpected spelling.\n\nThe page we link to uses \"permissions\" while we consistently use\n\"privileges\" to describe the contents of the list. 
This seems like an\nobvious synonym, but as the point of these is to formally define things,\npointing this equivalence is worth considering.\n\nDavid J.\n\nOn Mon, Jun 24, 2024 at 8:44 AM Nathan Bossart <[email protected]> wrote:On Mon, Jun 24, 2024 at 02:32:27PM +0200, Joel Jacobson wrote:\n> This patch is based on a suggestion from a separate thread [1]:\n> \n> On Mon, Jun 24, 2024, at 01:46, Michael Paquier wrote:\n>> Rather unrelated to this patch, still this patch makes the situation\n>> more complicated in the docs, but wouldn't it be better to add ACL as\n>> a term in acronyms.sql, and reuse it here?  It would be a doc-only\n>> patch that applies on top of the rest (could be on a new thread of its\n>> own), with some <acronym> markups added where needed.\n\nSounds reasonable to me.+1\n\n+      <ulink url=\"https://en.wikipedia.org/wiki/Access_Control_List\">Access Control List, i.e. privileges list</ulink>\n\nI think we could omit \"i.e. privileges list.\"Agreed.  Between the docs and code we say \"privileges list\" once and that refers to the dumputIls description of the arguments to grant.  As the acronym page now defines the term using fundamentals, introducing another term not used elsewhere seems undesirable.Observations:We are referencing a disambiguation page.  We never actually spell out ACL anywhere so we might as well just reference what Wikipedia believes is the expected spelling.The page we link to uses \"permissions\" while we consistently use \"privileges\" to describe the contents of the list.  This seems like an obvious synonym, but as the point of these is to formally define things, pointing this equivalence is worth considering.David J.", "msg_date": "Mon, 24 Jun 2024 09:02:46 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Mon, Jun 24, 2024, at 18:02, David G. Johnston wrote:\n> On Mon, Jun 24, 2024 at 8:44 AM Nathan Bossart <[email protected]> wrote:\n>> I think we could omit \"i.e. privileges list.\"\n>> \n>\n> Agreed. Between the docs and code we say \"privileges list\" once and \n> that refers to the dumputIls description of the arguments to grant. As \n> the acronym page now defines the term using fundamentals, introducing \n> another term not used elsewhere seems undesirable.\n\nNew version attached.\n\n> Observations:\n> We are referencing a disambiguation page. We never actually spell out \n> ACL anywhere so we might as well just reference what Wikipedia believes \n> is the expected spelling.\n>\n> The page we link to uses \"permissions\" while we consistently use \n> \"privileges\" to describe the contents of the list. This seems like an \n> obvious synonym, but as the point of these is to formally define \n> things, pointing this equivalence is worth considering.\n\nI like this idea. How could this be implemented in the docs? Maybe a <note>...</note> for ACL in acronyms.sgml?\n\n/Joel", "msg_date": "Mon, 24 Jun 2024 21:46:36 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Mon, Jun 24, 2024 at 12:46 PM Joel Jacobson <[email protected]> wrote:\n\n> On Mon, Jun 24, 2024, at 18:02, David G. Johnston wrote:\n>\n> > The page we link to uses \"permissions\" while we consistently use\n> > \"privileges\" to describe the contents of the list. 
This seems like an\n> > obvious synonym, but as the point of these is to formally define\n> > things, pointing this equivalence is worth considering.\n>\n> I like this idea. How could this be implemented in the docs? Maybe a\n> <note>...</note> for ACL in acronyms.sgml?\n>\n>\nAdd a second <para> under the one holding the link?\n\nDavid J.\n\nOn Mon, Jun 24, 2024 at 12:46 PM Joel Jacobson <[email protected]> wrote:On Mon, Jun 24, 2024, at 18:02, David G. Johnston wrote:\n> The page we link to uses \"permissions\" while we consistently use \n> \"privileges\" to describe the contents of the list.  This seems like an \n> obvious synonym, but as the point of these is to formally define \n> things, pointing this equivalence is worth considering.\n\nI like this idea. How could this be implemented in the docs? Maybe a <note>...</note> for ACL in acronyms.sgml?Add a second <para> under the one holding the link?David J.", "msg_date": "Mon, 24 Jun 2024 12:51:37 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Mon, Jun 24, 2024, at 21:51, David G. Johnston wrote:\n> On Mon, Jun 24, 2024 at 12:46 PM Joel Jacobson <[email protected]> wrote:\n>> On Mon, Jun 24, 2024, at 18:02, David G. Johnston wrote:\n>> \n>> > The page we link to uses \"permissions\" while we consistently use \n>> > \"privileges\" to describe the contents of the list. This seems like an \n>> > obvious synonym, but as the point of these is to formally define \n>> > things, pointing this equivalence is worth considering.\n>> \n>> I like this idea. How could this be implemented in the docs? Maybe a <note>...</note> for ACL in acronyms.sgml?\n>> \n>\n> Add a second <para> under the one holding the link?\n\nHow about?\n\n+ <para>\n+ The linked page uses \"permissions\" while we consistently use the synonym\n+ \"privileges\", to describe the contents of the list. For avoidance of\n+ doubt and clarity, these two terms are equivalent in the\n+ <productname>PostgreSQL</productname> documentation.\n+ </para>\n\n/Joel", "msg_date": "Mon, 24 Jun 2024 22:49:11 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Mon, Jun 24, 2024 at 1:49 PM Joel Jacobson <[email protected]> wrote:\n\n> How about?\n>\n> + <para>\n> + The linked page uses \"permissions\" while we consistently use the\n> synonym\n> + \"privileges\", to describe the contents of the list. For avoidance of\n> + doubt and clarity, these two terms are equivalent in the\n> + <productname>PostgreSQL</productname> documentation.\n> + </para>\n>\n> /Joel\n\n\nI really dislike \"For avoidance of doubt and clarity\" - and in terms of\nbeing equivalent the following seems like a more accurate description of\nreality.\n\nThe PostgreSQL documentation, and code, refers to the specifications within\nthe ACL as \"privileges\". This has the same meaning as \"permissions\" on the\nlinked page. Generally if we say \"permissions\" we are referring to\nsomething that is not covered by the ACL. In routine communication the two\nwords are often used interchangeably.\n\nDavid J.\n\nOn Mon, Jun 24, 2024 at 1:49 PM Joel Jacobson <[email protected]> wrote:How about?\n\n+     <para>\n+      The linked page uses \"permissions\" while we consistently use the synonym\n+      \"privileges\", to describe the contents of the list. 
For avoidance of\n+      doubt and clarity, these two terms are equivalent in the\n+      <productname>PostgreSQL</productname> documentation.\n+     </para>\n\n/JoelI really dislike \"For avoidance of doubt and clarity\" - and in terms of being equivalent the following seems like a more accurate description of reality.The PostgreSQL documentation, and code, refers to the specifications within the ACL as \"privileges\".  This has the same meaning as \"permissions\" on the linked page.  Generally if we say \"permissions\" we are referring to something that is not covered by the ACL.  In routine communication the two words are often used interchangeably.David J.", "msg_date": "Mon, 24 Jun 2024 14:15:33 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Mon, Jun 24, 2024, at 23:15, David G. Johnston wrote:\n> I really dislike \"For avoidance of doubt and clarity\" - and in terms of \n> being equivalent the following seems like a more accurate description \n> of reality.\n>\n> The PostgreSQL documentation, and code, refers to the specifications \n> within the ACL as \"privileges\". This has the same meaning as \n> \"permissions\" on the linked page. Generally if we say \"permissions\" we \n> are referring to something that is not covered by the ACL. In routine \n> communication the two words are often used interchangeably.\n\nThanks, much better. New version attached.\n\n/Joel", "msg_date": "Tue, 25 Jun 2024 00:20:20 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Tue, Jun 25, 2024 at 12:20:20AM +0200, Joel Jacobson wrote:\n> Thanks, much better. New version attached.\n\n+ The <productname>PostgreSQL</productname> documentation, and code, refers\n+ to the specifications within the ACL as \"privileges\". This has the same\n+ meaning as \"permissions\" on the linked page. Generally if we say \n\nHmm? A privilege is a property that is part of an ACL, which is\nitself a set made of object types, roles and privileges. This entire\nparagraph is unnecessary IMO, let's keep it simple with only a\nreference link to the wiki page.\n\nv1 is fine without the \"privileges list\" part mentioned by Nathan in\nthe first reply.\n--\nMichael", "msg_date": "Tue, 25 Jun 2024 14:11:11 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Tue, Jun 25, 2024, at 07:11, Michael Paquier wrote:\n> On Tue, Jun 25, 2024 at 12:20:20AM +0200, Joel Jacobson wrote:\n>> Thanks, much better. New version attached.\n>\n> + The <productname>PostgreSQL</productname> documentation, and code, refers\n> + to the specifications within the ACL as \"privileges\". This has the same\n> + meaning as \"permissions\" on the linked page. Generally if we say \n>\n> Hmm? A privilege is a property that is part of an ACL, which is\n> itself a set made of object types, roles and privileges. 
This entire\n> paragraph is unnecessary IMO, let's keep it simple with only a\n> reference link to the wiki page.\n>\n> v1 is fine without the \"privileges list\" part mentioned by Nathan in\n> the first reply.\n\nv2 is exactly that, but renamed and attached, so we have an entry this was the last version.\n\n/Joel", "msg_date": "Tue, 25 Jun 2024 08:10:24 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Tue, Jun 25, 2024 at 08:10:24AM +0200, Joel Jacobson wrote:\n> On Tue, Jun 25, 2024, at 07:11, Michael Paquier wrote:\n>> v1 is fine without the \"privileges list\" part mentioned by Nathan in\n>> the first reply.\n> \n> v2 is exactly that, but renamed and attached, so we have an entry this\n> was the last version.\n\nLGTM\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 25 Jun 2024 11:55:03 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Tue, Jun 25, 2024 at 11:55:03AM -0500, Nathan Bossart wrote:\n> On Tue, Jun 25, 2024 at 08:10:24AM +0200, Joel Jacobson wrote:\n> > On Tue, Jun 25, 2024, at 07:11, Michael Paquier wrote:\n> >> v1 is fine without the \"privileges list\" part mentioned by Nathan in\n> >> the first reply.\n> > \n> > v2 is exactly that, but renamed and attached, so we have an entry this\n> > was the last version.\n> \n> LGTM\n\nFine by me as well. I guess I'll just apply that once v18 opens up.\n--\nMichael", "msg_date": "Wed, 26 Jun 2024 09:30:27 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Tue, Jun 25, 2024 at 5:30 PM Michael Paquier <[email protected]> wrote:\n\n> On Tue, Jun 25, 2024 at 11:55:03AM -0500, Nathan Bossart wrote:\n> > On Tue, Jun 25, 2024 at 08:10:24AM +0200, Joel Jacobson wrote:\n> > > On Tue, Jun 25, 2024, at 07:11, Michael Paquier wrote:\n> > >> v1 is fine without the \"privileges list\" part mentioned by Nathan in\n> > >> the first reply.\n> > >\n> > > v2 is exactly that, but renamed and attached, so we have an entry this\n> > > was the last version.\n> >\n> > LGTM\n>\n> Fine by me as well. I guess I'll just apply that once v18 opens up.\n>\n> Fine by me. We aren't consistent enough about all this to try and be\nauthoritative.\n\nThough there was no comment on the fact we should be linking to:\n\nhttps://en.wikipedia.org/wiki/Access-control_list\n\nnot:\n\nhttps://en.wikipedia.org/wiki/Access_Control_List\n\nto avoid the dis-ambiguation redirect.\n\nIf we are making wikipedia our authority we might as well use their\nstandard for naming.\n\nDavid J.\n\nOn Tue, Jun 25, 2024 at 5:30 PM Michael Paquier <[email protected]> wrote:On Tue, Jun 25, 2024 at 11:55:03AM -0500, Nathan Bossart wrote:\n> On Tue, Jun 25, 2024 at 08:10:24AM +0200, Joel Jacobson wrote:\n> > On Tue, Jun 25, 2024, at 07:11, Michael Paquier wrote:\n> >> v1 is fine without the \"privileges list\" part mentioned by Nathan in\n> >> the first reply.\n> > \n> > v2 is exactly that, but renamed and attached, so we have an entry this\n> > was the last version.\n> \n> LGTM\n\nFine by me as well.  I guess I'll just apply that once v18 opens up.\nFine by me.  
We aren't consistent enough about all this to try and be authoritative.Though there was no comment on the fact we should be linking to:https://en.wikipedia.org/wiki/Access-control_listnot:https://en.wikipedia.org/wiki/Access_Control_Listto avoid the dis-ambiguation redirect.If we are making wikipedia our authority we might as well use their standard for naming.David J.", "msg_date": "Tue, 25 Jun 2024 17:59:01 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Mon, Jun 24, 2024 at 10:11 PM Michael Paquier <[email protected]>\nwrote:\n\n> On Tue, Jun 25, 2024 at 12:20:20AM +0200, Joel Jacobson wrote:\n> > Thanks, much better. New version attached.\n>\n> + The <productname>PostgreSQL</productname> documentation, and code,\n> refers\n> + to the specifications within the ACL as \"privileges\". This has the\n> same\n> + meaning as \"permissions\" on the linked page. Generally if we say\n>\n> Hmm? A privilege is a property that is part of an ACL, which is\n> itself a set made of object types, roles and privileges.\n>\n\nSo, an ACL is a collection of composite typed things (grantor, grantee,\nprivileges) and the type name for that composite type is \"permission\".\nThat does clear things up, even if we tend to use privilege in cases where\npermission is meant.\n\nDavid J.\n\nOn Mon, Jun 24, 2024 at 10:11 PM Michael Paquier <[email protected]> wrote:On Tue, Jun 25, 2024 at 12:20:20AM +0200, Joel Jacobson wrote:\n> Thanks, much better. New version attached.\n\n+      The <productname>PostgreSQL</productname> documentation, and code, refers\n+      to the specifications within the ACL as \"privileges\".  This has the same\n+      meaning as \"permissions\" on the linked page.  Generally if we say \n\nHmm?  A privilege is a property that is part of an ACL, which is\nitself a set made of object types, roles and privileges.So, an ACL is a collection of composite typed things (grantor, grantee, privileges) and the type name for that composite type is \"permission\".  That does clear things up, even if we tend to use privilege in cases where permission is meant.David J.", "msg_date": "Tue, 25 Jun 2024 18:16:55 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Tue, Jun 25, 2024 at 05:59:01PM -0700, David G. Johnston wrote:\n> Though there was no comment on the fact we should be linking to:\n> \n> https://en.wikipedia.org/wiki/Access-control_list\n> \n> not:\n> \n> https://en.wikipedia.org/wiki/Access_Control_List\n> \n> to avoid the dis-ambiguation redirect.\n\n+1\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 26 Jun 2024 09:23:18 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Wed, Jun 26, 2024, at 02:59, David G. 
Johnston wrote:\n> Though there was no comment on the fact we should be linking to:\n>\n> https://en.wikipedia.org/wiki/Access-control_list\n>\n> not:\n>\n> https://en.wikipedia.org/wiki/Access_Control_List\n>\n> to avoid the dis-ambiguation redirect.\n>\n> If we are making wikipedia our authority we might as well use their \n> standard for naming.\n\nGood point.\n\nWant me to fix that or will the committer handle that?\n\nI found some more similar cases in acronyms.sgml.\n\n-https://en.wikipedia.org/wiki/Pluggable_Authentication_Modules\n+https://en.wikipedia.org/wiki/Pluggable_authentication_module\n-https://en.wikipedia.org/wiki/Data_Manipulation_Language\n+https://en.wikipedia.org/wiki/Data_manipulation_language\n-https://en.wikipedia.org/wiki/OLTP\n+https://en.wikipedia.org/wiki/Online_transaction_processing\n-https://en.wikipedia.org/wiki/Data_Definition_Language\n+https://en.wikipedia.org/wiki/Data_definition_language\n-https://en.wikipedia.org/wiki/ORDBMS\n+https://en.wikipedia.org/wiki/Object%E2%80%93relational_database\n-https://en.wikipedia.org/wiki/GMT\n+https://en.wikipedia.org/wiki/Greenwich_Mean_Time\n-https://en.wikipedia.org/wiki/Relational_database_management_system\n+https://en.wikipedia.org/wiki/Relational_database#RDBMS\n-https://en.wikipedia.org/wiki/Olap\n-https://en.wikipedia.org/wiki/Issn\n+https://en.wikipedia.org/wiki/Online_analytical_processing\n+https://en.wikipedia.org/wiki/ISSN\n-https://en.wikipedia.org/wiki/System_V\n+https://en.wikipedia.org/wiki/UNIX_System_V\n-https://en.wikipedia.org/wiki/Visual_C++\n+https://en.wikipedia.org/wiki/Microsoft_Visual_C%2B%2B\n-https://en.wikipedia.org/wiki/SGML\n+https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language\n-https://en.wikipedia.org/wiki/Ascii\n+https://en.wikipedia.org/wiki/ASCII\n-https://en.wikipedia.org/wiki/Dbms\n+https://en.wikipedia.org/wiki/Database#Database_management_system\n-https://en.wikipedia.org/wiki/Git_(software)\n+https://en.wikipedia.org/wiki/Git\n-https://en.wikipedia.org/wiki/Utf8\n+https://en.wikipedia.org/wiki/UTF-8\n-https://en.wikipedia.org/wiki/Secure_Sockets_Layer\n+https://en.wikipedia.org/wiki/Transport_Layer_Security#SSL_1.0,_2.0,_and_3.0\n\nBelow is the script I used to find them,\nwhich also reports some additional false positives:\n\n```\n#!/bin/bash\n\nexport LC_ALL=C\nwget -q -O acronyms.html https://www.postgresql.org/docs/current/acronyms.html\nurls=$(grep -o 'https://[^\"]*' acronyms.html)\noutput_file=\"canonical_urls.txt\"\n> $output_file\n\nextract_canonical() {\n local url=$1\n canonical=$(curl -s $url | sed -n 's/.*<link rel=\"canonical\" href=\"\\([^\"]*\\)\".*/\\1/p')\n if [[ -n \"$canonical\" && \"$canonical\" != \"$url\" ]]; then\n echo \"-$url\" >> $output_file\n echo \"+$canonical\" >> $output_file\n fi\n}\n\nfor url in $urls; do\n extract_canonical $url &\ndone\n\nwait\n\ncat $output_file\n```\n\n/Joel\n\n\n", "msg_date": "Wed, 26 Jun 2024 16:52:19 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Wed, Jun 26, 2024 at 7:52 AM Joel Jacobson <[email protected]> wrote:\n\n> On Wed, Jun 26, 2024, at 02:59, David G. 
Johnston wrote:\n> > Though there was no comment on the fact we should be linking to:\n> >\n> > https://en.wikipedia.org/wiki/Access-control_list\n> >\n> > not:\n> >\n> > https://en.wikipedia.org/wiki/Access_Control_List\n> >\n> > to avoid the dis-ambiguation redirect.\n> >\n> > If we are making wikipedia our authority we might as well use their\n> > standard for naming.\n>\n> Good point.\n>\n> Want me to fix that or will the committer handle that?\n>\n> I found some more similar cases in acronyms.sgml.\n>\n> -https://en.wikipedia.org/wiki/Pluggable_Authentication_Modules\n> +https://en.wikipedia.org/wiki/Pluggable_authentication_module\n> -https://en.wikipedia.org/wiki/Data_Manipulation_Language\n> +https://en.wikipedia.org/wiki/Data_manipulation_language\n> -https://en.wikipedia.org/wiki/OLTP\n> +https://en.wikipedia.org/wiki/Online_transaction_processing\n> -https://en.wikipedia.org/wiki/Data_Definition_Language\n> +https://en.wikipedia.org/wiki/Data_definition_language\n> -https://en.wikipedia.org/wiki/ORDBMS\n> +https://en.wikipedia.org/wiki/Object%E2%80%93relational_database\n> -https://en.wikipedia.org/wiki/GMT\n> <https://en.wikipedia.org/wiki/Object%E2%80%93relational_database-https://en.wikipedia.org/wiki/GMT>\n> +https://en.wikipedia.org/wiki/Greenwich_Mean_Time\n> -https://en.wikipedia.org/wiki/Relational_database_management_system\n> +https://en.wikipedia.org/wiki/Relational_database#RDBMS\n> -https://en.wikipedia.org/wiki/Olap\n> -https://en.wikipedia.org/wiki/Issn\n> +https://en.wikipedia.org/wiki/Online_analytical_processing\n> +https://en.wikipedia.org/wiki/ISSN\n> -https://en.wikipedia.org/wiki/System_V\n> +https://en.wikipedia.org/wiki/UNIX_System_V\n> -https://en.wikipedia.org/wiki/Visual_C++\n> +https://en.wikipedia.org/wiki/Microsoft_Visual_C%2B%2B\n> -https://en.wikipedia.org/wiki/SGML\n> +https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language\n> -https://en.wikipedia.org/wiki/Ascii\n> <https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language-https://en.wikipedia.org/wiki/Ascii>\n> +https://en.wikipedia.org/wiki/ASCII\n> -https://en.wikipedia.org/wiki/Dbms\n> +https://en.wikipedia.org/wiki/Database#Database_management_system\n> -https://en.wikipedia.org/wiki/Git_(software)\n> <https://en.wikipedia.org/wiki/Database#Database_management_system-https://en.wikipedia.org/wiki/Git_(software)>\n> +https://en.wikipedia.org/wiki/Git\n> -https://en.wikipedia.org/wiki/Utf8\n> +https://en.wikipedia.org/wiki/UTF-8\n> -https://en.wikipedia.org/wiki/Secure_Sockets_Layer\n> +\n> https://en.wikipedia.org/wiki/Transport_Layer_Security#SSL_1.0,_2.0,_and_3.0\n>\n> Below is the script I used to find them,\n> which also reports some additional false positives:\n>\n>\nGiven this I'd be OK with committing as-is in the name of matching existing\nproject style. Then bringing up this inconsistency as a separate concern\nto be bulk fixed as part of implementing a new policy on what to check for\nand conform to when establishing acronyms in our documentation.\n\nOtherwise the author (you) should make the change here - the committer\nwouldn't be expected to know to do that from the discussion.\n\nDavid J.\n\nOn Wed, Jun 26, 2024 at 7:52 AM Joel Jacobson <[email protected]> wrote:On Wed, Jun 26, 2024, at 02:59, David G. 
Johnston wrote:\n> Though there was no comment on the fact we should be linking to:\n>\n> https://en.wikipedia.org/wiki/Access-control_list\n>\n> not:\n>\n> https://en.wikipedia.org/wiki/Access_Control_List\n>\n> to avoid the dis-ambiguation redirect.\n>\n> If we are making wikipedia our authority we might as well use their \n> standard for naming.\n\nGood point.\n\nWant me to fix that or will the committer handle that?\n\nI found some more similar cases in acronyms.sgml.\n\n-https://en.wikipedia.org/wiki/Pluggable_Authentication_Modules\n+https://en.wikipedia.org/wiki/Pluggable_authentication_module\n-https://en.wikipedia.org/wiki/Data_Manipulation_Language\n+https://en.wikipedia.org/wiki/Data_manipulation_language\n-https://en.wikipedia.org/wiki/OLTP\n+https://en.wikipedia.org/wiki/Online_transaction_processing\n-https://en.wikipedia.org/wiki/Data_Definition_Language\n+https://en.wikipedia.org/wiki/Data_definition_language\n-https://en.wikipedia.org/wiki/ORDBMS\n+https://en.wikipedia.org/wiki/Object%E2%80%93relational_database\n-https://en.wikipedia.org/wiki/GMT\n+https://en.wikipedia.org/wiki/Greenwich_Mean_Time\n-https://en.wikipedia.org/wiki/Relational_database_management_system\n+https://en.wikipedia.org/wiki/Relational_database#RDBMS\n-https://en.wikipedia.org/wiki/Olap\n-https://en.wikipedia.org/wiki/Issn\n+https://en.wikipedia.org/wiki/Online_analytical_processing\n+https://en.wikipedia.org/wiki/ISSN\n-https://en.wikipedia.org/wiki/System_V\n+https://en.wikipedia.org/wiki/UNIX_System_V\n-https://en.wikipedia.org/wiki/Visual_C++\n+https://en.wikipedia.org/wiki/Microsoft_Visual_C%2B%2B\n-https://en.wikipedia.org/wiki/SGML\n+https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language\n-https://en.wikipedia.org/wiki/Ascii\n+https://en.wikipedia.org/wiki/ASCII\n-https://en.wikipedia.org/wiki/Dbms\n+https://en.wikipedia.org/wiki/Database#Database_management_system\n-https://en.wikipedia.org/wiki/Git_(software)\n+https://en.wikipedia.org/wiki/Git\n-https://en.wikipedia.org/wiki/Utf8\n+https://en.wikipedia.org/wiki/UTF-8\n-https://en.wikipedia.org/wiki/Secure_Sockets_Layer\n+https://en.wikipedia.org/wiki/Transport_Layer_Security#SSL_1.0,_2.0,_and_3.0\n\nBelow is the script I used to find them,\nwhich also reports some additional false positives:Given this I'd be OK with committing as-is in the name of matching existing project style.  Then bringing up this inconsistency as a separate concern to be bulk fixed as part of implementing a new policy on what to check for and conform to when establishing acronyms in our documentation.Otherwise the author (you) should make the change here - the committer wouldn't be expected to know to do that from the discussion.David J.", "msg_date": "Wed, 26 Jun 2024 07:58:55 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Wed, Jun 26, 2024 at 07:58:55AM -0700, David G. Johnston wrote:\n> On Wed, Jun 26, 2024 at 7:52 AM Joel Jacobson <[email protected]> wrote:\n>> Want me to fix that or will the committer handle that?\n>>\n>> I found some more similar cases in acronyms.sgml.\n>\n> Given this I'd be OK with committing as-is in the name of matching existing\n> project style. 
Then bringing up this inconsistency as a separate concern\n> to be bulk fixed as part of implementing a new policy on what to check for\n> and conform to when establishing acronyms in our documentation.\n> \n> Otherwise the author (you) should make the change here - the committer\n> wouldn't be expected to know to do that from the discussion.\n\nIf I was writing these patches, I'd create a separate 0001 patch to fix the\nexisting problems, then 0002 would be just the new stuff (without the\ninconsistency). But that's just what I'd do; there's no problem with doing\nit the other way around.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 26 Jun 2024 10:47:09 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Wed, Jun 26, 2024 at 8:47 AM Nathan Bossart <[email protected]>\nwrote:\n\n> On Wed, Jun 26, 2024 at 07:58:55AM -0700, David G. Johnston wrote:\n> > On Wed, Jun 26, 2024 at 7:52 AM Joel Jacobson <[email protected]> wrote:\n> >> Want me to fix that or will the committer handle that?\n> >>\n> >> I found some more similar cases in acronyms.sgml.\n> >\n> > Given this I'd be OK with committing as-is in the name of matching\n> existing\n> > project style. Then bringing up this inconsistency as a separate concern\n> > to be bulk fixed as part of implementing a new policy on what to check\n> for\n> > and conform to when establishing acronyms in our documentation.\n> >\n> > Otherwise the author (you) should make the change here - the committer\n> > wouldn't be expected to know to do that from the discussion.\n>\n> If I was writing these patches, I'd create a separate 0001 patch to fix the\n> existing problems, then 0002 would be just the new stuff (without the\n> inconsistency). But that's just what I'd do; there's no problem with doing\n> it the other way around.\n>\n>\nAgreed, if Joel wants to write both. But as the broader fix shouldn't\nblock adding a new acronym, it doesn't make sense to insist on this\napproach. Consistency makes sense though doing it the expected way would\nbe OK as well. Either way, assuming the future patch materializes and gets\ncommitted the end state is the same, and the path to it doesn't really\nmatter.\n\nDavid J.\n\nOn Wed, Jun 26, 2024 at 8:47 AM Nathan Bossart <[email protected]> wrote:On Wed, Jun 26, 2024 at 07:58:55AM -0700, David G. Johnston wrote:\n> On Wed, Jun 26, 2024 at 7:52 AM Joel Jacobson <[email protected]> wrote:\n>> Want me to fix that or will the committer handle that?\n>>\n>> I found some more similar cases in acronyms.sgml.\n>\n> Given this I'd be OK with committing as-is in the name of matching existing\n> project style.  Then bringing up this inconsistency as a separate concern\n> to be bulk fixed as part of implementing a new policy on what to check for\n> and conform to when establishing acronyms in our documentation.\n> \n> Otherwise the author (you) should make the change here - the committer\n> wouldn't be expected to know to do that from the discussion.\n\nIf I was writing these patches, I'd create a separate 0001 patch to fix the\nexisting problems, then 0002 would be just the new stuff (without the\ninconsistency).  But that's just what I'd do; there's no problem with doing\nit the other way around.Agreed, if Joel wants to write both.  But as the broader fix shouldn't block adding a new acronym, it doesn't make sense to insist on this approach.  Consistency makes sense though doing it the expected way would be OK as well.  
Either way, assuming the future patch materializes and gets committed the end state is the same, and the path to it doesn't really matter.David J.", "msg_date": "Wed, 26 Jun 2024 09:54:54 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Wed, Jun 26, 2024, at 18:54, David G. Johnston wrote:\n> On Wed, Jun 26, 2024 at 8:47 AM Nathan Bossart <[email protected]> wrote:\n>> On Wed, Jun 26, 2024 at 07:58:55AM -0700, David G. Johnston wrote:\n>> > On Wed, Jun 26, 2024 at 7:52 AM Joel Jacobson <[email protected]> wrote:\n>> >> Want me to fix that or will the committer handle that?\n>> >>\n>> >> I found some more similar cases in acronyms.sgml.\n>> >\n>> > Given this I'd be OK with committing as-is in the name of matching existing\n>> > project style. Then bringing up this inconsistency as a separate concern\n>> > to be bulk fixed as part of implementing a new policy on what to check for\n>> > and conform to when establishing acronyms in our documentation.\n>> > \n>> > Otherwise the author (you) should make the change here - the committer\n>> > wouldn't be expected to know to do that from the discussion.\n\nOK, I've made the change, new patch attached.\n\n>> If I was writing these patches, I'd create a separate 0001 patch to fix the\n>> existing problems, then 0002 would be just the new stuff (without the\n>> inconsistency). But that's just what I'd do; there's no problem with doing\n>> it the other way around.\n>> \n>\n> Agreed, if Joel wants to write both. But as the broader fix shouldn't \n> block adding a new acronym, it doesn't make sense to insist on this \n> approach. Consistency makes sense though doing it the expected way \n> would be OK as well. Either way, assuming the future patch \n> materializes and gets committed the end state is the same, and the path \n> to it doesn't really matter.\n\nI'll start a new separate thread about fixing the other non-canonical URLs.\n\n/Joel", "msg_date": "Thu, 27 Jun 2024 10:42:09 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Thu, Jun 27, 2024, at 10:42, Joel Jacobson wrote:\n> I'll start a new separate thread about fixing the other non-canonical URLs.\n\nHere is the separate thread to fix the docs to use canonical links:\nhttps://postgr.es/m/[email protected]\n\n\n", "msg_date": "Sun, 30 Jun 2024 12:27:45 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" }, { "msg_contents": "On Wed, Jun 26, 2024 at 09:30:27AM +0900, Michael Paquier wrote:\n> Fine by me as well. I guess I'll just apply that once v18 opens up.\n\nAnd done with 00d819d46a6f.\n--\nMichael", "msg_date": "Mon, 1 Jul 2024 10:00:17 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add ACL (Access Control List) acronym" } ]
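As a concrete illustration of the acronym documented in the thread above, here is a small SQL sketch showing where an Access Control List actually surfaces in PostgreSQL: the aclitem[] catalog columns such as pg_class.relacl. The table name below is made up for the example; only the catalog column and the grantee=privileges/grantor display format are taken as given.

```sql
-- Hypothetical table; relacl stays NULL until the first GRANT or REVOKE.
CREATE TABLE acl_demo (id int);
GRANT SELECT, INSERT ON acl_demo TO PUBLIC;

-- relacl is an aclitem[]; each element prints as grantee=privileges/grantor,
-- with an empty grantee meaning PUBLIC.
SELECT relname, relacl
FROM pg_class
WHERE relname = 'acl_demo';
```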
[ { "msg_contents": "Hi,\n\nWhile doing some additional testing of (incremental) backups, I ran into\na couple regular failures. After pulling my hair for a couple days, I\nrealized the issue seems to affect regular backups, and incremental\nbackups (which I've been trying to test) are likely innocent.\n\nI'm using a simple (and admittedly not very pretty) bash scripts that\ntakes and verified backups, concurrently with this workload:\n\n\n1) initialize a cluster\n\n2) initialize pgbench in database 'db'\n\n3) run short pgbench on 'db'\n\n4) maybe do vacuum [full] on 'db'\n\n5) drop a database 'db_copy' if it exists\n\n6) create a database 'db_copy' by copying 'db' using one of the\n available strategies (file_copy, wal_log)\n\n7) run short pgbench on 'db_copy'\n\n8) maybe do vacuum [full] on 'db_copy'\n\n\nAnd concurrently with this, it takes a basebackup, starts a cluster on\nit (on a different port, ofc), and does various checks on that:\n\n\na) verify checksums using pg_checksums (cluster has them enabled)\n\nb) run amcheck on tables/indexes on both databases\n\nc) SQL check (we expect all tables to be 'consistent' as if we did a\nPITR - in particular sum(balance) is expected to be the same value on\nall pgbench tables) on both databases\n\n\nI believe those are reasonable expectations - that we get a database\nwith valid checksums, with non-broken tables/indexes, and that the\ndatabase looks as a snapshot taken at a single instant.\n\nUnfortunately it doesn't take long for the tests to start failing with\nvarious strange symptoms on the db_copy database (I'm yet to see an\nissue on the 'db' database):\n\ni) amcheck fails with 'heap tuple lacks matching index tuple'\n\n ERROR: heap tuple (116195,22) from table \"pgbench_accounts\" lacks\n matching index tuple within index \"pgbench_accounts_pkey\"\n HINT: Retrying verification using the function\n bt_index_parent_check() might provide a more specific error.\n\n I've seen this with other tables/indexes too, e.g. system catalogs\n pg_statitics or toast tables, but 'accounts' is most common.\n\nii) amcheck fails with 'could not open file'\n\n ERROR: could not open file \"base/18121/18137\": No such file or\n directory\n LINE 9: lateral verify_heapam(relation => c.oid, on_error_stop =>\n f...\n ^\n ERROR: could not open file \"base/18121/18137\": No such file or\n directory\n\niii) failures in the SQL check, with different tables have different\nbalance sums\n\n SQL check fails (db_copy) (account 156142 branches 136132 tellers\n 136132 history -42826)\n\n Sometimes this is preceded by amcheck issue, but not always.\n\nI guess this is not the behavior we expect :-(\n\nI've reproduced all of this on PG16 - I haven't tried with older\nreleases, but I have no reason to assume pre-16 releases are not affected.\n\nWith incremental backups I've observed a couple more symptoms, but those\nare most likely just fallout of this - not realizing the initial state\nis a bit wrong, and making it worse by applying the increments.\n\nThe important observation is that this only happens if a database is\ncreated while the backup is running, and that it only happens with the\nFILE_COPY strategy - I've never seen this with WAL_LOG (which is the\ndefault since PG15).\n\nI don't recall any reports of similar issues from pre-15 releases, where\nFILE_COPY was the only available option - I'm not sure why is that.\nEither it didn't have this issue back then, or maybe people happen to\nnot create databases concurrently with a backup very often. 
It's a race\ncondition / timing issue, essentially.\n\nI have no ambition to investigate this part of the code much deeper, or\ninvent a fix myself, at least not in foreseeable future. But it seems\nlike something we probably should fix - subtly broken backups are not a\ngreat thing.\n\nI see there have been a couple threads proposing various improvements to\nFILE_COPY, that might make it more efficient/faster, namely using the\nfilesystem cloning [1] or switching pg_upgrade to use it [2]. But having\nsomething that's (maybe) faster but not quite correct does not seem like\na winning strategy to me ...\n\nAlternatively, if we don't have clear desire to fix it, maybe the right\nsolution would be get rid of it?\n\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/CA+hUKGLM+t+SwBU-cHeMUXJCOgBxSHLGZutV5zCwY4qrCcE02w@mail.gmail.com\n\n[2] https://www.postgresql.org/message-id/Zl9ta3FtgdjizkJ5%40nathan\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 24 Jun 2024 16:12:38 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "basebackups seem to have serious issues with FILE_COPY in CREATE\n DATABASE" }, { "msg_contents": "On Mon, Jun 24, 2024 at 04:12:38PM +0200, Tomas Vondra wrote:\n> The important observation is that this only happens if a database is\n> created while the backup is running, and that it only happens with the\n> FILE_COPY strategy - I've never seen this with WAL_LOG (which is the\n> default since PG15).\n\nMy first thought is that this sounds related to the large comment in\nCreateDatabaseUsingFileCopy():\n\n\t/*\n\t * We force a checkpoint before committing. This effectively means that\n\t * committed XLOG_DBASE_CREATE_FILE_COPY operations will never need to be\n\t * replayed (at least not in ordinary crash recovery; we still have to\n\t * make the XLOG entry for the benefit of PITR operations). This avoids\n\t * two nasty scenarios:\n\t *\n\t * #1: When PITR is off, we don't XLOG the contents of newly created\n\t * indexes; therefore the drop-and-recreate-whole-directory behavior of\n\t * DBASE_CREATE replay would lose such indexes.\n\t *\n\t * #2: Since we have to recopy the source database during DBASE_CREATE\n\t * replay, we run the risk of copying changes in it that were committed\n\t * after the original CREATE DATABASE command but before the system crash\n\t * that led to the replay. This is at least unexpected and at worst could\n\t * lead to inconsistencies, eg duplicate table names.\n\t *\n\t * (Both of these were real bugs in releases 8.0 through 8.0.3.)\n\t *\n\t * In PITR replay, the first of these isn't an issue, and the second is\n\t * only a risk if the CREATE DATABASE and subsequent template database\n\t * change both occur while a base backup is being taken. There doesn't\n\t * seem to be much we can do about that except document it as a\n\t * limitation.\n\t *\n\t * See CreateDatabaseUsingWalLog() for a less cheesy CREATE DATABASE\n\t * strategy that avoids these problems.\n\t */\n\n> I don't recall any reports of similar issues from pre-15 releases, where\n> FILE_COPY was the only available option - I'm not sure why is that.\n> Either it didn't have this issue back then, or maybe people happen to\n> not create databases concurrently with a backup very often. 
It's a race\n> condition / timing issue, essentially.\n\nIf it requires concurrent activity on the template database, I wouldn't be\nsurprised at all that this is rare.\n\n> I see there have been a couple threads proposing various improvements to\n> FILE_COPY, that might make it more efficient/faster, namely using the\n> filesystem cloning [1] or switching pg_upgrade to use it [2]. But having\n> something that's (maybe) faster but not quite correct does not seem like\n> a winning strategy to me ...\n> \n> Alternatively, if we don't have clear desire to fix it, maybe the right\n> solution would be get rid of it?\n\nIt would be unfortunate if we couldn't use this for pg_upgrade, especially\nif it is unaffected by these problems.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 24 Jun 2024 10:14:01 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: basebackups seem to have serious issues with FILE_COPY in CREATE\n DATABASE" }, { "msg_contents": "On 6/24/24 17:14, Nathan Bossart wrote:\n> On Mon, Jun 24, 2024 at 04:12:38PM +0200, Tomas Vondra wrote:\n>> The important observation is that this only happens if a database is\n>> created while the backup is running, and that it only happens with the\n>> FILE_COPY strategy - I've never seen this with WAL_LOG (which is the\n>> default since PG15).\n> \n> My first thought is that this sounds related to the large comment in\n> CreateDatabaseUsingFileCopy():\n> \n> \t/*\n> \t * We force a checkpoint before committing. This effectively means that\n> \t * committed XLOG_DBASE_CREATE_FILE_COPY operations will never need to be\n> \t * replayed (at least not in ordinary crash recovery; we still have to\n> \t * make the XLOG entry for the benefit of PITR operations). This avoids\n> \t * two nasty scenarios:\n> \t *\n> \t * #1: When PITR is off, we don't XLOG the contents of newly created\n> \t * indexes; therefore the drop-and-recreate-whole-directory behavior of\n> \t * DBASE_CREATE replay would lose such indexes.\n> \t *\n> \t * #2: Since we have to recopy the source database during DBASE_CREATE\n> \t * replay, we run the risk of copying changes in it that were committed\n> \t * after the original CREATE DATABASE command but before the system crash\n> \t * that led to the replay. This is at least unexpected and at worst could\n> \t * lead to inconsistencies, eg duplicate table names.\n> \t *\n> \t * (Both of these were real bugs in releases 8.0 through 8.0.3.)\n> \t *\n> \t * In PITR replay, the first of these isn't an issue, and the second is\n> \t * only a risk if the CREATE DATABASE and subsequent template database\n> \t * change both occur while a base backup is being taken. There doesn't\n> \t * seem to be much we can do about that except document it as a\n> \t * limitation.\n> \t *\n> \t * See CreateDatabaseUsingWalLog() for a less cheesy CREATE DATABASE\n> \t * strategy that avoids these problems.\n> \t */\n> \n\nPerhaps, the mentioned risks certainly seem like it might be related to\nthe issues I'm observing.\n\n>> I don't recall any reports of similar issues from pre-15 releases, where\n>> FILE_COPY was the only available option - I'm not sure why is that.\n>> Either it didn't have this issue back then, or maybe people happen to\n>> not create databases concurrently with a backup very often. It's a race\n>> condition / timing issue, essentially.\n> \n> If it requires concurrent activity on the template database, I wouldn't be\n> surprised at all that this is rare.\n> \n\nRight. 
Although, \"concurrent\" here means a somewhat different thing.\nAFAIK there can't be a any changes concurrent with the CREATE DATABASE\ndirectly, because we make sure there are no connections:\n\n createdb: error: database creation failed: ERROR: source database\n \"test\" is being accessed by other users\n DETAIL: There is 1 other session using the database.\n\nBut per the comment, it'd be a problem if there is activity after the\ndatabase gets copied, but before the backup completes (which is where\nthe replay will happen).\n\n>> I see there have been a couple threads proposing various improvements to\n>> FILE_COPY, that might make it more efficient/faster, namely using the\n>> filesystem cloning [1] or switching pg_upgrade to use it [2]. But having\n>> something that's (maybe) faster but not quite correct does not seem like\n>> a winning strategy to me ...\n>>\n>> Alternatively, if we don't have clear desire to fix it, maybe the right\n>> solution would be get rid of it?\n> \n> It would be unfortunate if we couldn't use this for pg_upgrade, especially\n> if it is unaffected by these problems.\n> \n\nYeah. I wouldn't mind using FILE_COPY in contexts where we know it's\nsafe, like pg_upgrade. I just don't want to let users to unknowingly\nstep on this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Jun 2024 17:29:42 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: basebackups seem to have serious issues with FILE_COPY in CREATE\n DATABASE" } ]
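For reference alongside the report above, a minimal SQL sketch of the two CREATE DATABASE strategies being compared; the STRATEGY option is available since PostgreSQL 15, and the database/template names are placeholders matching the test workload described earlier.

```sql
DROP DATABASE IF EXISTS db_copy;

-- Default since PostgreSQL 15: copied blocks are WAL-logged individually.
CREATE DATABASE db_copy TEMPLATE db STRATEGY = WAL_LOG;

-- Pre-15 behaviour and the variant this report is about: the directory is
-- copied at the file level and a checkpoint is forced instead of WAL-logging
-- every copied block.
-- CREATE DATABASE db_copy TEMPLATE db STRATEGY = FILE_COPY;
```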
[ { "msg_contents": "Hello,\n\nHope you are doing well.\n\nI've been playing a bit with the incremental backup feature which might\ncome as\npart of the 17 release, and I think I hit a possible bug in the WAL\nsummarizer\nprocess.\n\nThe issue that I face refers to the summarizer process getting into a hung\nstate.\nWhen the issue is triggered, it keeps in an infinite loop trying to process\na WAL\nfile that no longer exists. It apparently comes up only when I perform\nchanges to\n`wal_summarize` GUC and reload Postgres, while there is some load in\nPostgres\nwhich makes it recycle WAL files.\n\nI'm running Postgres 17 in a Rockylinux 9 VM. In order to have less WAL\nfiles\navailable in `pg_wal` and make it easier to reproduce the issue, I'm using\na low\nvalue for `max_wal_size` ('100MB'). You can find below the steps that I\ntook to\nreproduce this problem, assuming this small `max_wal_size`, and\n`summarize_wal`\ninitially enabled:\n\n```bash\n# Assume we initially have max_wal_size = '100MB' and summarize_wal = on\n\n# Create a table of ~ 100MB\npsql -c \"CREATE TABLE test AS SELECT generate_series(1, 3000000)\"\n\n# Take a full backup\npg_basebackup -X none -c fast -P -D full_backup_1\n\n# Recreate a table of ~ 100MB\npsql -c \"DROP TABLE test\"\npsql -c \"CREATE TABLE test AS SELECT generate_series(1, 3000000)\"\n\n# Take an incremental backup\npg_basebackup -X none -c fast -P -D incremental_backup_1 --incremental\nfull_backup_1/backup_manifest\n\n# Disable summarize_wal\npsql -c \"ALTER SYSTEM SET summarize_wal TO off\"\npsql -c \"SELECT pg_reload_conf()\"\n\n# Recreate a table of ~ 100MB\npsql -c \"DROP TABLE test\"\npsql -c \"CREATE TABLE test AS SELECT generate_series(1, 3000000)\"\n\n# Re-enable sumarize_wal\npsql -c \"ALTER SYSTEM SET summarize_wal TO on\"\npsql -c \"SELECT pg_reload_conf()\"\n\n# Take a full backup\npg_basebackup -X none -c fast -P -D full_backup_2\n\n# Take an incremental backup\npg_basebackup -X none -c fast -P -D incremental_backup_2 --incremental\nfull_backup_2/backup_manifest\n```\n\nI'm able to reproduce the issue most of the time when running these steps\nmanually. 
It's harder to reproduce if I attempt to run those commands as a\nbash script.\n\nThis is the sample output of a run of those commands:\n\n```console\n\n(barman) [postgres@barmandevhost ~]$ psql -c \"CREATE TABLE test AS\nSELECT generate_series(1, 3000000)\"SELECT 3000000(barman)\n[postgres@barmandevhost ~]$ pg_basebackup -X none -c fast -P -D\nfull_backup_1NOTICE: WAL archiving is not enabled; you must ensure\nthat all required WAL segments are copied through other means to\ncomplete the backup331785/331785 kB (100%), 1/1 tablespace(barman)\n[postgres@barmandevhost ~]$ psql -c \"DROP TABLE test\"DROP\nTABLE(barman) [postgres@barmandevhost ~]$ psql -c \"CREATE TABLE test\nAS SELECT generate_series(1, 3000000)\"SELECT 3000000(barman)\n[postgres@barmandevhost ~]$ pg_basebackup -X none -c fast -P -D\nincremental_backup_1 --incremental\nfull_backup_1/backup_manifestNOTICE: WAL archiving is not enabled;\nyou must ensure that all required WAL segments are copied through\nother means to complete the backup111263/331720 kB (33%), 1/1\ntablespace(barman) [postgres@barmandevhost ~]$ psql -c \"ALTER SYSTEM\nSET summarize_wal TO off\"ALTER SYSTEM(barman) [postgres@barmandevhost\n~]$ psql -c \"SELECT pg_reload_conf()\" pg_reload_conf----------------\nt(1 row)\n(barman) [postgres@barmandevhost ~]$ psql -c \"DROP TABLE test\"DROP\nTABLE(barman) [postgres@barmandevhost ~]$ psql -c \"CREATE TABLE test\nAS SELECT generate_series(1, 3000000)\"SELECT 3000000(barman)\n[postgres@barmandevhost ~]$ psql -c \"ALTER SYSTEM SET summarize_wal TO\non\"ALTER SYSTEM(barman) [postgres@barmandevhost ~]$ psql -c \"SELECT\npg_reload_conf()\" pg_reload_conf---------------- t(1 row)\n(barman) [postgres@barmandevhost ~]$ pg_basebackup -X none -c fast -P\n-D full_backup_2NOTICE: WAL archiving is not enabled; you must ensure\nthat all required WAL segments are copied through other means to\ncomplete the backup331734/331734 kB (100%), 1/1 tablespace(barman)\n[postgres@barmandevhost ~]$ pg_basebackup -X none -c fast -P -D\nincremental_backup_2 --incremental\nfull_backup_2/backup_manifestWARNING: still waiting for WAL\nsummarization through 2/C1000028 after 10 secondsDETAIL:\nSummarization has reached 2/B30000D8 on disk and 2/B30000D8 in\nmemory.WARNING: still waiting for WAL summarization through\n2/C1000028 after 20 secondsDETAIL: Summarization has reached\n2/B30000D8 on disk and 2/B30000D8 in memory.WARNING: still waiting\nfor WAL summarization through 2/C1000028 after 30 secondsDETAIL:\nSummarization has reached 2/B30000D8 on disk and 2/B30000D8 in\nmemory.WARNING: still waiting for WAL summarization through\n2/C1000028 after 40 secondsDETAIL: Summarization has reached\n2/B30000D8 on disk and 2/B30000D8 in memory.WARNING: still waiting\nfor WAL summarization through 2/C1000028 after 50 secondsDETAIL:\nSummarization has reached 2/B30000D8 on disk and 2/B30000D8 in\nmemory.WARNING: still waiting for WAL summarization through\n2/C1000028 after 60 secondsDETAIL: Summarization has reached\n2/B30000D8 on disk and 2/B30000D8 in memory.WARNING: aborting backup\ndue to backend exiting before pg_backup_stop was calledpg_basebackup:\nerror: could not initiate base backup: ERROR: WAL summarization is\nnot progressingDETAIL: Summarization is needed through 2/C1000028,\nbut is stuck at 2/B30000D8 on disk and 2/B30000D8 in\nmemory.pg_basebackup: removing data directory \"incremental_backup_2\"\n\n```\n\nI took an `ls` output from `pg_wal` as well as `strace` and `gdb` from the\nWAL\nsummarizer process. 
I'm attaching that to this email hoping that can help\nsomehow.\n\nFWIW, once I restart Postgres the WAL summarizer process gets back to normal\nfunctioning. It seems to me there is some race condition between when a WAL\nfile\nis removed and when `summarize_wal` is re-enabled, causing the process to\nkeep\nlooking for a WAL file that is the past.\n\nBest regards,\nIsrael.\n\nHello,Hope you are doing well.I've been playing a bit with the incremental backup feature which might come aspart of the 17 release, and I think I hit a possible bug in the WAL summarizerprocess.The issue that I face refers to the summarizer process getting into a hung state.When the issue is triggered, it keeps in an infinite loop trying to process a WALfile that no longer exists.  It apparently comes up only when I perform changes to`wal_summarize` GUC and reload Postgres, while there is some load in Postgreswhich makes it recycle WAL files.I'm running Postgres 17 in a Rockylinux 9 VM. In order to have less WAL filesavailable in `pg_wal` and make it easier to reproduce the issue, I'm using a lowvalue for `max_wal_size` ('100MB'). You can find below the steps that I took toreproduce this problem, assuming this small `max_wal_size`, and `summarize_wal`initially enabled:```bash# Assume we initially have max_wal_size = '100MB' and summarize_wal = on# Create a table of ~ 100MBpsql -c \"CREATE TABLE test AS SELECT generate_series(1, 3000000)\"# Take a full backuppg_basebackup -X none -c fast -P -D full_backup_1# Recreate a table of ~ 100MBpsql -c \"DROP TABLE test\"psql -c \"CREATE TABLE test AS SELECT generate_series(1, 3000000)\"# Take an incremental backuppg_basebackup -X none -c fast -P -D incremental_backup_1 --incremental full_backup_1/backup_manifest# Disable summarize_walpsql -c \"ALTER SYSTEM SET summarize_wal TO off\"psql -c \"SELECT pg_reload_conf()\"# Recreate a table of ~ 100MBpsql -c \"DROP TABLE test\"psql -c \"CREATE TABLE test AS SELECT generate_series(1, 3000000)\"# Re-enable sumarize_walpsql -c \"ALTER SYSTEM SET summarize_wal TO on\"psql -c \"SELECT pg_reload_conf()\"# Take a full backuppg_basebackup -X none -c fast -P -D full_backup_2# Take an incremental backuppg_basebackup -X none -c fast -P -D incremental_backup_2 --incremental full_backup_2/backup_manifest```I'm able to reproduce the issue most of the time when running these stepsmanually. 
It's harder to reproduce if I attempt to run those commands as abash script.This is the sample output of a run of those commands:```console(barman) [postgres@barmandevhost ~]$ psql -c \"CREATE TABLE test AS SELECT generate_series(1, 3000000)\"\nSELECT 3000000\n(barman) [postgres@barmandevhost ~]$ pg_basebackup -X none -c fast -P -D full_backup_1\nNOTICE: WAL archiving is not enabled; you must ensure that all required WAL segments are copied through other means to complete the backup\n331785/331785 kB (100%), 1/1 tablespace\n(barman) [postgres@barmandevhost ~]$ psql -c \"DROP TABLE test\"\nDROP TABLE\n(barman) [postgres@barmandevhost ~]$ psql -c \"CREATE TABLE test AS SELECT generate_series(1, 3000000)\"\nSELECT 3000000\n(barman) [postgres@barmandevhost ~]$ pg_basebackup -X none -c fast -P -D incremental_backup_1 --incremental full_backup_1/backup_manifest\nNOTICE: WAL archiving is not enabled; you must ensure that all required WAL segments are copied through other means to complete the backup\n111263/331720 kB (33%), 1/1 tablespace\n(barman) [postgres@barmandevhost ~]$ psql -c \"ALTER SYSTEM SET summarize_wal TO off\"\nALTER SYSTEM\n(barman) [postgres@barmandevhost ~]$ psql -c \"SELECT pg_reload_conf()\"\n pg_reload_conf\n----------------\n t\n(1 row)\n\n(barman) [postgres@barmandevhost ~]$ psql -c \"DROP TABLE test\"\nDROP TABLE\n(barman) [postgres@barmandevhost ~]$ psql -c \"CREATE TABLE test AS SELECT generate_series(1, 3000000)\"\nSELECT 3000000\n(barman) [postgres@barmandevhost ~]$ psql -c \"ALTER SYSTEM SET summarize_wal TO on\"\nALTER SYSTEM\n(barman) [postgres@barmandevhost ~]$ psql -c \"SELECT pg_reload_conf()\"\n pg_reload_conf\n----------------\n t\n(1 row)\n\n(barman) [postgres@barmandevhost ~]$ pg_basebackup -X none -c fast -P -D full_backup_2\nNOTICE: WAL archiving is not enabled; you must ensure that all required WAL segments are copied through other means to complete the backup\n331734/331734 kB (100%), 1/1 tablespace\n(barman) [postgres@barmandevhost ~]$ pg_basebackup -X none -c fast -P -D incremental_backup_2 --incremental full_backup_2/backup_manifest\nWARNING: still waiting for WAL summarization through 2/C1000028 after 10 seconds\nDETAIL: Summarization has reached 2/B30000D8 on disk and 2/B30000D8 in memory.\nWARNING: still waiting for WAL summarization through 2/C1000028 after 20 seconds\nDETAIL: Summarization has reached 2/B30000D8 on disk and 2/B30000D8 in memory.\nWARNING: still waiting for WAL summarization through 2/C1000028 after 30 seconds\nDETAIL: Summarization has reached 2/B30000D8 on disk and 2/B30000D8 in memory.\nWARNING: still waiting for WAL summarization through 2/C1000028 after 40 seconds\nDETAIL: Summarization has reached 2/B30000D8 on disk and 2/B30000D8 in memory.\nWARNING: still waiting for WAL summarization through 2/C1000028 after 50 seconds\nDETAIL: Summarization has reached 2/B30000D8 on disk and 2/B30000D8 in memory.\nWARNING: still waiting for WAL summarization through 2/C1000028 after 60 seconds\nDETAIL: Summarization has reached 2/B30000D8 on disk and 2/B30000D8 in memory.\nWARNING: aborting backup due to backend exiting before pg_backup_stop was called\npg_basebackup: error: could not initiate base backup: ERROR: WAL summarization is not progressing\nDETAIL: Summarization is needed through 2/C1000028, but is stuck at 2/B30000D8 on disk and 2/B30000D8 in memory.\npg_basebackup: removing data directory \"incremental_backup_2\"```I took an `ls` output from `pg_wal` as well as `strace` and `gdb` from the WALsummarizer process. 
I'm attaching that to this email hoping that can helpsomehow.FWIW, once I restart Postgres the WAL summarizer process gets back to normalfunctioning. It seems to me there is some race condition between when a WAL fileis removed and when `summarize_wal` is re-enabled, causing the process to keeplooking for a WAL file that is the past.Best regards,Israel.", "msg_date": "Mon, 24 Jun 2024 14:56:00 -0300", "msg_from": "Israel Barth Rubio <[email protected]>", "msg_from_op": true, "msg_subject": "Apparent bug in WAL summarizer process (hung state)" }, { "msg_contents": "I'm attaching the files which I missed in the original email.\n\n>", "msg_date": "Mon, 24 Jun 2024 14:59:18 -0300", "msg_from": "Israel Barth Rubio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Apparent bug in WAL summarizer process (hung state)" }, { "msg_contents": "On Mon, Jun 24, 2024 at 02:56:00PM -0300, Israel Barth Rubio wrote:\n> I've been playing a bit with the incremental backup feature which might\n> come as\n> part of the 17 release, and I think I hit a possible bug in the WAL\n> summarizer\n> process.\n\nThanks for testing new features and for this report!\n\n> FWIW, once I restart Postgres the WAL summarizer process gets back to normal\n> functioning. It seems to me there is some race condition between when a WAL\n> file\n> is removed and when `summarize_wal` is re-enabled, causing the process to\n> keep\n> looking for a WAL file that is the past.\n\nI am adding an open item to track this issue, to make sure that this\nis looked at.\n--\nMichael", "msg_date": "Tue, 25 Jun 2024 08:01:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apparent bug in WAL summarizer process (hung state)" }, { "msg_contents": "On Mon, Jun 24, 2024 at 1:56 PM Israel Barth Rubio\n<[email protected]> wrote:\n> I've been playing a bit with the incremental backup feature which might come as\n> part of the 17 release, and I think I hit a possible bug in the WAL summarizer\n> process.\n>\n> The issue that I face refers to the summarizer process getting into a hung state.\n> When the issue is triggered, it keeps in an infinite loop trying to process a WAL\n> file that no longer exists. It apparently comes up only when I perform changes to\n> `wal_summarize` GUC and reload Postgres, while there is some load in Postgres\n> which makes it recycle WAL files.\n\nYeah, this is a bug. It seems that the WAL summarizer process, when\nrestarted, wants to restart from wherever it was previously\nsummarizing WAL, which is correct if that WAL is still around, but if\nsummarize_wal has been turned off in the meanwhile, it might not be\ncorrect. Here's a patch to fix that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 25 Jun 2024 15:48:07 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apparent bug in WAL summarizer process (hung state)" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Yeah, this is a bug. It seems that the WAL summarizer process, when\n> restarted, wants to restart from wherever it was previously\n> summarizing WAL, which is correct if that WAL is still around, but if\n> summarize_wal has been turned off in the meanwhile, it might not be\n> correct. 
Here's a patch to fix that.\n\nThis comment seems to be truncated:\n\n+ /*\n+ * If we're the WAL summarizer, we always want to store the values we\n+ * just computed into shared memory, because those are the values we're\n+ * going to use to drive our operation, and so they are the authoritative\n+ * values. Otherwise, we only store values into shared memory if they are\n+ */\n+ LWLockAcquire(WALSummarizerLock, LW_EXCLUSIVE);\n+ if (am_wal_summarizer|| !WalSummarizerCtl->initialized)\n+ {\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2024 15:51:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apparent bug in WAL summarizer process (hung state)" }, { "msg_contents": "On Tue, Jun 25, 2024 at 3:51 PM Tom Lane <[email protected]> wrote:\n> This comment seems to be truncated:\n\nThanks. New version attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 25 Jun 2024 16:07:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apparent bug in WAL summarizer process (hung state)" }, { "msg_contents": "> Yeah, this is a bug. It seems that the WAL summarizer process, when\n> restarted, wants to restart from wherever it was previously\n> summarizing WAL, which is correct if that WAL is still around, but if\n> summarize_wal has been turned off in the meanwhile, it might not be\n> correct. Here's a patch to fix that.\n\nThanks for checking this!\n\n> Thanks. New version attached.\n\nAnd besides that, thanks for the patch, of course!\n\nI compiled Postgres locally with your patch. I attempted to break it several\ntimes, both manually and through a shell script.\n\nNo success on that -- which in this case is actually success :)\nThe WAL summarizer seems able to always resume from a valid point,\nso `pg_basebackup` isn't failing anymore.\n\n", "msg_date": "Thu, 27 Jun 2024 16:31:57 -0300", "msg_from": "Israel Barth Rubio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Apparent bug in WAL summarizer process (hung state)" } ]
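For convenience, the reproduction steps scattered through the session log above boil down to the sequence below. This is only the quoted commands collected into one script; as the reporter notes, the hang is harder to trigger when the commands are run back-to-back from a script, because it depends on WAL segments actually being recycled while summarize_wal is off.

```bash
#!/usr/bin/env bash
# Sequence taken from the session log above (PostgreSQL 17 with incremental
# backup / summarize_wal).  Timing matters: the summarizer only gets stuck
# if WAL has been recycled while summarize_wal was disabled, so pausing
# between steps (or generating extra WAL) makes the hang more likely.
set -e

psql -c "CREATE TABLE test AS SELECT generate_series(1, 3000000)"
pg_basebackup -X none -c fast -P -D full_backup_1

psql -c "DROP TABLE test"
psql -c "CREATE TABLE test AS SELECT generate_series(1, 3000000)"
pg_basebackup -X none -c fast -P -D incremental_backup_1 \
    --incremental full_backup_1/backup_manifest

# Turn summarization off while more WAL is generated (and recycled)...
psql -c "ALTER SYSTEM SET summarize_wal TO off"
psql -c "SELECT pg_reload_conf()"
psql -c "DROP TABLE test"
psql -c "CREATE TABLE test AS SELECT generate_series(1, 3000000)"

# ...then turn it back on and attempt another incremental backup.
psql -c "ALTER SYSTEM SET summarize_wal TO on"
psql -c "SELECT pg_reload_conf()"
pg_basebackup -X none -c fast -P -D full_backup_2
pg_basebackup -X none -c fast -P -D incremental_backup_2 \
    --incremental full_backup_2/backup_manifest   # hangs on affected builds
```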
[ { "msg_contents": "Xcode 16 beta was released on 2024-06-10 and ships with macOS SDK 15.0\n[1]. It appears PostgreSQL does not compile due to typedef redefiniton\nerrors in the regex library:\n\n/tmp/Xcode-beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang\n-Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla\n-Werror=unguarded-availability-new -Wendif-labels\n-Wmissing-format-attribute -Wcast-function-type -Wformat-security\n-fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-unused-command-line-argument -Wno-compound-token-split-by-macro\n-Wno-cast-function-type-strict -O2 -I../../../src/include -isysroot\n/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk\n -c -o regcomp.o regcomp.c\nIn file included from regcomp.c:2647:\nIn file included from ./regc_pg_locale.c:21:\nIn file included from ../../../src/include/utils/pg_locale.h:16:\nIn file included from\n/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/usr/include/xlocale.h:45:\nIn file included from\n/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/usr/include/xlocale/_regex.h:27:\n/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/usr/include/_regex.h:107:24:\nerror: typedef redefinition with different types ('__darwin_off_t'\n(aka 'long long') vs 'long')\n 107 | typedef __darwin_off_t regoff_t;\n | ^\n../../../src/include/regex/regex.h:48:14: note: previous definition is here\n 48 | typedef long regoff_t;\n | ^\nIn file included from regcomp.c:2647:\nIn file included from ./regc_pg_locale.c:21:\nIn file included from ../../../src/include/utils/pg_locale.h:16:\nIn file included from\n/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/usr/include/xlocale.h:45:\nIn file included from\n/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/usr/include/xlocale/_regex.h:27:\n/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/usr/include/_regex.h:114:3:\nerror: typedef redefinition with different types ('struct regex_t' vs\n'struct regex_t')\n 114 | } regex_t;\n | ^\n../../../src/include/regex/regex.h:82:3: note: previous definition is here\n 82 | } regex_t;\n | ^\nIn file included from regcomp.c:2647:\nIn file included from ./regc_pg_locale.c:21:\nIn file included from ../../../src/include/utils/pg_locale.h:16:\nIn file included from\n/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/usr/include/xlocale.h:45:\nIn file included from\n/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/usr/include/xlocale/_regex.h:27:\n/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/usr/include/_regex.h:119:3:\nerror: typedef redefinition with different types ('struct regmatch_t'\nvs 'struct regmatch_t')\n 119 | } regmatch_t;\n | ^\n../../../src/include/regex/regex.h:89:3: note: previous definition is here\n 89 | } regmatch_t;\n | ^\n3 errors generated.\nmake[3]: *** [regcomp.o] Error 1\nmake[2]: *** [regex-recursive] Error 2\nmake[1]: *** [all-backend-recurse] Error 2\nmake: *** [all-src-recurse] Error 2\n\nI've reproduced this issue by:\n\n1. 
Download the XCode 16 beta 2 ZIP file:\nhttps://developer.apple.com/services-account/download?path=/Developer_Tools/Xcode_16_beta/Xcode_16_beta.xip\n2. Extract this to `/tmp`.\n3. Then I ran:\n\nexport PATH=/tmp/Xcode-beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin:$PATH\nexport SDKROOT=/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk\nexport XCODE_DIR=/tmp/Xcode-beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain\nexport CC=\"$XCODE_DIR/usr/bin/clang\" export CXX=\"$XCODE_DIR/usr/bin/clang++\"\n\n./configure CC=\"$CC\" CXX=\"$CXX\"\nmake\n\nThe compilation goes through if I comment out the \"#include\n<xlocale/_regex.h>\" from\n/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk/usr/include/xlocale.h.\nHowever, even on macOS SDK 14.5 I see that include statement. I'm\nstill trying to figure out what changed here.\n\n[1] - https://developer.apple.com/macos/\n\n\n", "msg_date": "Mon, 24 Jun 2024 11:21:12 -0700", "msg_from": "Stan Hu <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "It appears in macOS SDK 14.5, there were include guards in\n$SDK_ROOT/usr/include/xlocale/_regex.h:\n\n#ifndef _XLOCALE__REGEX_H_\n#define _XLOCALE__REGEX_H_\n\n#ifndef _REGEX_H_\n#include <_regex.h>\n#endif // _REGEX_H_\n#include <_xlocale.h>\n\nIn macOS SDK 15.5, these include guards are gone:\n\n#ifndef _XLOCALE__REGEX_H_\n#define _XLOCALE__REGEX_H_\n\n#include <_regex.h>\n#include <__xlocale.h>\n\nSince _REGEX_H_ was defined locally in PostgreSQL's version of\nsrc/include/regex/regex.h, these include guards prevented duplicate\ndefinitions from /usr/include/_regex.h (not to be confused with\n/usr/include/xlocale/_regex.h).\n\nIf I hack the PostgreSQL src/include/regex/regex.h to include the double\nunderscore include guard of __REGEX_H_, the build succeeds:\n\n```\ndiff --git a/src/include/regex/regex.h b/src/include/regex/regex.h\nindex d08113724f..734172167a 100644\n--- a/src/include/regex/regex.h\n+++ b/src/include/regex/regex.h\n@@ -1,3 +1,6 @@\n+#ifndef __REGEX_H_\n+#define __REGEX_H_ /* never again */\n+\n #ifndef _REGEX_H_\n #define _REGEX_H_ /* never again */\n /*\n@@ -187,3 +190,5 @@ extern bool RE_compile_and_execute(text *text_re, char\n*dat, int dat_len,\n int nmatch, regmatch_t *pmatch);\n\n #endif /* _REGEX_H_ */\n+\n+#endif /* __REGEX_H_ */\n```\n\nAny better ideas here?\n\nIt appears in macOS SDK 14.5, there were include guards in $SDK_ROOT/usr/include/xlocale/_regex.h:#ifndef _XLOCALE__REGEX_H_#define _XLOCALE__REGEX_H_#ifndef _REGEX_H_#include <_regex.h>#endif // _REGEX_H_#include <_xlocale.h>In macOS SDK 15.5, these include guards are gone:#ifndef _XLOCALE__REGEX_H_#define _XLOCALE__REGEX_H_#include <_regex.h>#include <__xlocale.h>Since _REGEX_H_ was defined locally in PostgreSQL's version of src/include/regex/regex.h, these include guards prevented duplicate definitions from /usr/include/_regex.h (not to be confused with /usr/include/xlocale/_regex.h).If I hack the PostgreSQL src/include/regex/regex.h to include the double underscore include guard of __REGEX_H_, the build succeeds:```diff --git a/src/include/regex/regex.h b/src/include/regex/regex.hindex d08113724f..734172167a 100644--- a/src/include/regex/regex.h+++ b/src/include/regex/regex.h@@ -1,3 +1,6 @@+#ifndef __REGEX_H_+#define __REGEX_H_\t\t\t\t/* never again */+ #ifndef _REGEX_H_ #define _REGEX_H_\t\t\t\t/* never again */ /*@@ 
-187,3 +190,5 @@ extern bool RE_compile_and_execute(text *text_re, char *dat, int dat_len, \t\t\t\t\t\t\t\t   int nmatch, regmatch_t *pmatch);  #endif\t\t\t\t\t\t\t/* _REGEX_H_ */++#endif\t\t\t\t\t\t\t/* __REGEX_H_ */```Any better ideas here?", "msg_date": "Mon, 24 Jun 2024 12:25:05 -0700", "msg_from": "Stan Hu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "Hi,\n\n> I've reproduced this issue by:\n>\n> 1. Download the XCode 16 beta 2 ZIP file:\n> https://developer.apple.com/services-account/download?path=/Developer_Tools/Xcode_16_beta/Xcode_16_beta.xip\n> 2. Extract this to `/tmp`.\n> 3. Then I ran:\n>\n> export PATH=/tmp/Xcode-beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin:$PATH\n> export SDKROOT=/tmp/Xcode-beta.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.0.sdk\n> export XCODE_DIR=/tmp/Xcode-beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain\n> export CC=\"$XCODE_DIR/usr/bin/clang\" export CXX=\"$XCODE_DIR/usr/bin/clang++\"\n>\n> ./configure CC=\"$CC\" CXX=\"$CXX\"\n> make\n\nDoes it work if you do the same with XCode 15?\n\nPerhaps I'm missing something but to me it doesn't look like the\nright/supported way of compiling PostgreSQL on this platform [1]. I\ntried to figure out what version of Xcode I'm using right now, but it\nseems to be none:\n\n$ /usr/bin/xcodebuild -version\nxcode-select: error: tool 'xcodebuild' requires Xcode, but active\ndeveloper directory '/Library/Developer/CommandLineTools' is a command\nline tools instance\n\nClang I'm using doesn't seem to be part of XCode distribution either:\n\n$ clang --version\nHomebrew clang version 18.1.6\nTarget: x86_64-apple-darwin23.5.0\nThread model: posix\nInstalledDir: /usr/local/opt/llvm/bin\n\nIt's been a while since I installed all the dependencies on my laptop,\nbut I'm pretty confident I followed the documentation back then.\n\nIMO the right way to test PostgreSQL against the recent beta version\nof MacOS SDK would be replacing (via a symlink perhaps) the SDK\nprovided by the \"Command Line Tools for Xcode\" package\n(/Library/Developer/CommandLineTools/SDKs/). Or alternatively finding\nthe official way of installing the beta version of this package.\n\n[1]: https://www.postgresql.org/docs/current/installation-platform-notes.html#INSTALLATION-NOTES-MACOS\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 25 Jun 2024 15:19:13 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "Hi,\n\n> IMO the right way to test PostgreSQL against the recent beta version\n> of MacOS SDK would be replacing (via a symlink perhaps) the SDK\n> provided by the \"Command Line Tools for Xcode\" package\n> (/Library/Developer/CommandLineTools/SDKs/). Or alternatively finding\n> the official way of installing the beta version of this package.\n\nAs it turned out there is Command_Line_Tools_for_Xcode_16_beta.dmg\npackage available. It can be downloaded from\nhttps://developer.apple.com/ after logging it. I installed it and also\ndid:\n\n```\ncd /Library/Developer/CommandLineTools/SDKs\nsudo mkdir __ignore__\nsudo mv MacOSX14.* __ignore__\n```\n\n... 
to make sure Postgres will not find the older version of SDK (it\ndid until I made this step).\n\nNow I get the following output from `meson --setup ...`:\n\n```\nThe Meson build system\nVersion: 0.61.2\nSource dir: /Users/eax/projects/c/postgresql\nBuild dir: /Users/eax/projects/c/postgresql/build\nBuild type: native build\nProject name: postgresql\nProject version: 17beta2\nC compiler for the host machine: cc (clang 16.0.0 \"Apple clang version\n16.0.0 (clang-1600.0.20.10)\")\nC linker for the host machine: cc ld64 1115.5.3\nHost machine cpu family: x86_64\nHost machine cpu: x86_64\nRun-time dependency threads found: YES\nMessage: darwin sysroot: /Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk\n...\n```\n\n... and get the error reported by Stan. Also I can confirm that the\nproposed workaround fixes it. Attached is the result of `git\nformat-patch` for convenience.\n\nPersonally I'm not extremely happy with this workaround though. An\nalternative solution would be adding the \"pg_\" prefix to our type\ndeclarations.\n\nAnother question is whether we should fix this while the SDK is in\nbeta or only after it is released.\n\nThoughts?\n\nI added the patch to the nearest commitfest so that it wouldn't be lost [1].\n\n[1]: https://commitfest.postgresql.org/48/5073/\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 25 Jun 2024 16:49:32 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "Hi,\n\nOn 2024-06-25 16:49:32 +0300, Aleksander Alekseev wrote:\n> ... to make sure Postgres will not find the older version of SDK (it\n> did until I made this step).\n\nYou should be able to influence that by specifying -Ddarwin_sysroot=...\n\n\n> ... and get the error reported by Stan. Also I can confirm that the\n> proposed workaround fixes it. Attached is the result of `git\n> format-patch` for convenience.\n> \n> Personally I'm not extremely happy with this workaround though.\n\nYea, it seems decidedly not great.\n\n\n> An alternative solution would be adding the \"pg_\" prefix to our type\n> declarations.\n\nA third approach would be to make sure we don't include xlocale.h from\npg_locale.h. IMO pg_locale currently exposes too many implementation details,\nneither xlocale.h nor ucol.h should be included in it, that should be in a C\nfile.\n\n\n> Another question is whether we should fix this while the SDK is in\n> beta or only after it is released.\n\nYea.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jun 2024 07:31:17 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-06-25 16:49:32 +0300, Aleksander Alekseev wrote:\n>> Another question is whether we should fix this while the SDK is in\n>> beta or only after it is released.\n\n> Yea.\n\nStan has started multiple threads about this, which is not doing\nanyone any favors, but that issue was already brought up in\n\nhttps://www.postgresql.org/message-id/flat/4edd2d3c30429c4445cc805ae9a788c489856eb7.1719265762.git.stanhu%40gmail.com\n\nI think the immediate action item should be to push back on the\nchange and see if we can get Apple to undo it. If we have to\nfix it on our side, it is likely to involve API-breaking changes\nthat will cause trouble for extensions. 
The more so because\nwe'll have to change stable branches too.\n\nI tend to agree with the idea that not including <xlocale.h>\nso widely might be the least-bad fix; but that still risks\nbreaking code that was dependent on that inclusion.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2024 10:39:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "Thanks, everyone. Sorry to create multiple threads on this.\n\nAs I mentioned in the other thread, I've already submitted a bug\nreport to Apple (FB14047412). My colleagues know a key macOS engineer,\nand they have reached out to him to review that thread and bug report.\nI'll update if we hear anything.\n\nOn Tue, Jun 25, 2024 at 7:39 AM Tom Lane <[email protected]> wrote:\n>\n> Andres Freund <[email protected]> writes:\n> > On 2024-06-25 16:49:32 +0300, Aleksander Alekseev wrote:\n> >> Another question is whether we should fix this while the SDK is in\n> >> beta or only after it is released.\n>\n> > Yea.\n>\n> Stan has started multiple threads about this, which is not doing\n> anyone any favors, but that issue was already brought up in\n>\n> https://www.postgresql.org/message-id/flat/4edd2d3c30429c4445cc805ae9a788c489856eb7.1719265762.git.stanhu%40gmail.com\n>\n> I think the immediate action item should be to push back on the\n> change and see if we can get Apple to undo it. If we have to\n> fix it on our side, it is likely to involve API-breaking changes\n> that will cause trouble for extensions. The more so because\n> we'll have to change stable branches too.\n>\n> I tend to agree with the idea that not including <xlocale.h>\n> so widely might be the least-bad fix; but that still risks\n> breaking code that was dependent on that inclusion.\n>\n> regards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2024 10:36:13 -0700", "msg_from": "Stan Hu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "On Wed, Jun 26, 2024 at 2:39 AM Tom Lane <[email protected]> wrote:\n> I think the immediate action item should be to push back on the\n> change and see if we can get Apple to undo it. If we have to\n> fix it on our side, it is likely to involve API-breaking changes\n> that will cause trouble for extensions. The more so because\n> we'll have to change stable branches too.\n\nI am struggling to understand why they would consider such a request.\nPOSIX reserves *_t, and then regex_t et al explicitly, and I don't see\nwhy Apple isn't allowed to have arbitrary transitive inclusions across\nsystem headers... AFIACT nothing in POSIX or C restricts that (?).\nAny system could have the same problem. It's only a coincidence that\nwe got away with it before because apparently other OSes don't pull in\nthe system regex-related definitions from anything that we include,\nexcept macOS which previously happened to use the same include guards\nscheme. I guess you could write PostgreSQL extensions (or TCL\nprograms) that could crash due to using bad struct definitions on\nearlier SDK versions depending on whether you included <regex.h> or\nPostgreSQL headers first?\n\nAmusingly, those matching include guards probably came from the same\nkeyboard (Apple's regex code is an earlier strain of Henry Spencer's\nregex code, from 4.4BSD). FreeBSD et al have it too. 
FreeBSD also\nhas an xlocale.h \"extended locale\" header (which came back from\nApple), though on FreeBSD it is automatically included by <locale.h>,\nbecause that \"extended\" locale stuff became standard issue basic\nlocale support in POSIX 2008, it's just that Apple hasn't got around\nto tidying that up yet so they still force us to include <xlocale.h>\nexplicitly (now *that* is material for a bug report)...\n\nIf you look at the header[1], you can see the mechanism for pulling in\na ton of other stuff: <xlocale.h> wants to activate all the _l\nfunctions, so it runs around including xlocale/_EVERYTHING.h. For\nexample xlocale/_string.h adds strcoll_l(..., locale_t), and\nxlocale/_regex.h adds regcomp_l(..., locale_t), etc etc. Which all\nseems rather backwards from our vantage point where locale_t is\nstandard and those should ideally have been declared in the \"primary\"\nheader when people actually wanted them and explicitly said so by\nincluding eg <string.h>. So why doesn't FreeBSD have the same\nproblem? Just because it doesn't actually have reg*_l() functions...\nyet. But it will, there is talk of adding the complete set of every\nimaginable _l function to POSIX. So FreeBSD might eventually add\nxlocale/_regex.h to that header explosion (unless someone does the\ncompletely merge/tidy-up I imagined above, which I might suggest).\nPerhaps in the fullness of time Apple will also do a similar clean-up,\nso that xlocale.h goes away, but I wouldn't hold my breath.\n\nI don't have any great ideas about what to do about this.\nCybersquatting system facilities is a messy business, so maybe the\nproposed grotty solution is actually appropriate! We did bring this\nduelling Henry Spencers problem upon ourselves. Longer term,\npg_regex_t seems to make a lot of sense, except IIUC we want to keep\nthis code in sync with TCL so perhaps a configurable prefix could be\ndone with macrology?\n\n[1] https://github.com/apple-oss-distributions/Libc/blob/main/include/xlocale.h\n\n\n", "msg_date": "Mon, 1 Jul 2024 12:40:14 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> I don't have any great ideas about what to do about this.\n> Cybersquatting system facilities is a messy business, so maybe the\n> proposed grotty solution is actually appropriate! We did bring this\n> duelling Henry Spencers problem upon ourselves. Longer term,\n> pg_regex_t seems to make a lot of sense, except IIUC we want to keep\n> this code in sync with TCL so perhaps a configurable prefix could be\n> done with macrology?\n\nYeah. I'd do pg_regex_t in a minute except that it'd break existing\nextensions using our facilities. However, your mention of macrology\nstirred an idea: could we have our regex/regex.h intentionally\n#include the system regex.h and then do\n\t#define regex_t pg_regex_t\n? If that works, our struct is really pg_regex_t, but we don't have\nto change any existing calling code. 
It might get a bit messy\nundef'ing and redef'ing all the other macros in regex/regex.h, but\nI think we could make it fly without any changes in other files.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 30 Jun 2024 22:06:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "On Mon, Jul 1, 2024 at 2:06 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > I don't have any great ideas about what to do about this.\n> > Cybersquatting system facilities is a messy business, so maybe the\n> > proposed grotty solution is actually appropriate! We did bring this\n> > duelling Henry Spencers problem upon ourselves. Longer term,\n> > pg_regex_t seems to make a lot of sense, except IIUC we want to keep\n> > this code in sync with TCL so perhaps a configurable prefix could be\n> > done with macrology?\n>\n> Yeah. I'd do pg_regex_t in a minute except that it'd break existing\n> extensions using our facilities. However, your mention of macrology\n> stirred an idea: could we have our regex/regex.h intentionally\n> #include the system regex.h and then do\n> #define regex_t pg_regex_t\n> ? If that works, our struct is really pg_regex_t, but we don't have\n> to change any existing calling code. It might get a bit messy\n> undef'ing and redef'ing all the other macros in regex/regex.h, but\n> I think we could make it fly without any changes in other files.\n\nGood idea. Here's an attempt at that.\n\nI don't have a Mac with beta SDK 15 yet, but I think this should work?", "msg_date": "Thu, 4 Jul 2024 18:08:56 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "Hi,\n\n> Good idea. Here's an attempt at that.\n>\n> I don't have a Mac with beta SDK 15 yet, but I think this should work?\n\nI checked against SDK 15 and 14. I also checked that it doesn't break\nsomething on Linux.\n\nThe patch seems to work. I don't have a Windows machine unfortunately.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 4 Jul 2024 12:12:11 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Mon, Jul 1, 2024 at 2:06 PM Tom Lane <[email protected]> wrote:\n>> Yeah. I'd do pg_regex_t in a minute except that it'd break existing\n>> extensions using our facilities. However, your mention of macrology\n>> stirred an idea: could we have our regex/regex.h intentionally\n>> #include the system regex.h and then do\n>> #define regex_t pg_regex_t\n>> ?\n\n> Good idea. Here's an attempt at that.\n\nI think it might be cleaner to put the new #include and macro hacking\ninto regcustom.h, to show that it's our own hack and not part of the\n\"official\" Spencer code. OTOH, we do have to touch regex.h anyway\nto change the #include guards, and it's not like there are not any\nother PG-isms in there. 
So I'm not 100% sold that that way would\nbe better --- what do you think?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2024 14:37:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "I wrote:\n> I think it might be cleaner to put the new #include and macro hacking\n> into regcustom.h, to show that it's our own hack and not part of the\n> \"official\" Spencer code.\n\nOh, scratch that. I was thinking regex.h included regcustom.h,\nbut it doesn't, so there's no way that can work. Never mind...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2024 14:44:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "On Thu, Jul 4, 2024 at 9:12 PM Aleksander Alekseev\n<[email protected]> wrote:\n> I checked against SDK 15 and 14. I also checked that it doesn't break\n> something on Linux.\n\nThanks for testing!\n\n> The patch seems to work. I don't have a Windows machine unfortunately.\n\nYeah, Windows doesn't have <regex.h> (it has <regex> as part of the\nC++ standard library, but nothing for C because that's from POSIX, not\nthe C standard library). So I just skip the #include on Windows, and\nI see that it's passing on all CI.\n\nIt seems like there is no reason not to go ahead and push this,\nincluding back-patching, then.\n\nI had been thinking that I should try harder to make the pg_ prefix\ncompile-time configurable (imagine some kind of string-pasting macros\nconstructing the names), so that TCL and PG could have fewer diffs.\nBut we're already not doing that for the function names, so unless Tom\nwants me to try to do that...?\n\nIt's a funny position to finish up in: we have pg_ functions, pg_\ntypes but still standard REG_XXX macros. In the future someone might\nwant to rename them all to PG_REG_XXX, so that we completely move out\nof the way of the system regex stuff. But not today, and certainly\nnot in back-branches.\n\n\n", "msg_date": "Fri, 5 Jul 2024 11:54:29 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> I had been thinking that I should try harder to make the pg_ prefix\n> compile-time configurable (imagine some kind of string-pasting macros\n> constructing the names), so that TCL and PG could have fewer diffs.\n> But we're already not doing that for the function names, so unless Tom\n> wants me to try to do that...?\n\nNah, I don't see much point in that.\n\n> It's a funny position to finish up in: we have pg_ functions, pg_\n> types but still standard REG_XXX macros. In the future someone might\n> want to rename them all to PG_REG_XXX, so that we completely move out\n> of the way of the system regex stuff. 
But not today, and certainly\n> not in back-branches.\n\nThat would be an API break for any extensions using our regex code,\nso I'm not especially in favor of it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2024 20:04:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" }, { "msg_contents": "And pushed.\n\n\n", "msg_date": "Sat, 6 Jul 2024 11:57:21 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not compile on macOS SDK 15.0" } ]
[ { "msg_contents": "Greetings,\n\nWhile testing pgjdbc I noticed the following\n\npgdb-1 | Will execute command on database postgres:\npgdb-1 | SELECT pg_drop_replication_slot(slot_name) FROM\npg_replication_slots WHERE slot_name = 'replica_one';\npgdb-1 | DROP USER IF EXISTS replica_one;\npgdb-1 | CREATE USER replica_one WITH REPLICATION PASSWORD 'test';\npgdb-1 | SELECT * FROM\npg_create_physical_replication_slot('replica_one');\npgdb-1 |\npgdb-1 | NOTICE: role \"replica_one\" does not exist, skipping\npgdb-1 | pg_drop_replication_slot\npgdb-1 | --------------------------\npgdb-1 | (0 rows)\npgdb-1 |\npgdb-1 | DROP ROLE\npgdb-1 | CREATE ROLE\npgdb-1 | slot_name | lsn\npgdb-1 | -------------+-----\npgdb-1 | replica_one |\npgdb-1 | (1 row)\npgdb-1 |\npgdb-1 | waiting for checkpoint\npgdb-1 | 2024-06-24 19:07:18.569 UTC [66] LOG: checkpoint starting: force\nwait\npgdb-1 | 2024-06-24 19:11:48.008 UTC [66] LOG: checkpoint complete: wrote\n6431 buffers (39.3%); 0 WAL file(s) added, 0 removed, 3 recycled;\nwrite=269.438 s, sync=0.001 s, total=269.439 s; sync files=0, longest=0.000\ns, average=0.000 s; distance=44140 kB, estimate=44140 kB; lsn=0/40000B8,\nredo lsn=0/4000028\n\n\nNote that it takes 4 minutes 48 seconds to do the checkpoint. This seems\nridiculously long ?\n\nIf I add a checkpoint before doing anything there is no delay\n\n Will execute command on database postgres:\npgdb-1 | checkpoint;\npgdb-1 | SELECT pg_drop_replication_slot(slot_name) FROM\npg_replication_slots WHERE slot_name = 'replica_one';\npgdb-1 | DROP USER IF EXISTS replica_one;\npgdb-1 | CREATE USER replica_one WITH REPLICATION PASSWORD 'test';\npgdb-1 | SELECT * FROM\npg_create_physical_replication_slot('replica_one');\npgdb-1 |\npgdb-1 | 2024-06-24 19:19:57.498 UTC [66] LOG: checkpoint starting:\nimmediate force wait\npgdb-1 | 2024-06-24 19:19:57.558 UTC [66] LOG: checkpoint complete: wrote\n6431 buffers (39.3%); 0 WAL file(s) added, 0 removed, 2 recycled;\nwrite=0.060 s, sync=0.001 s, total=0.061 s; sync files=0, longest=0.000 s,\naverage=0.000 s; distance=29947 kB, estimate=29947 kB; lsn=0/3223BA0, redo\nlsn=0/3223B48\n===> pgdb-1 | CHECKPOINT\npgdb-1 | pg_drop_replication_slot\npgdb-1 | --------------------------\npgdb-1 | (0 rows)\npgdb-1 |\npgdb-1 | DROP ROLE\npgdb-1 | NOTICE: role \"replica_one\" does not exist, skipping\npgdb-1 | CREATE ROLE\npgdb-1 | slot_name | lsn\npgdb-1 | -------------+-----\npgdb-1 | replica_one |\npgdb-1 | (1 row)\npgdb-1 |\npgdb-1 | waiting for checkpoint\npgdb-1 | 2024-06-24 19:19:57.614 UTC [66] LOG: checkpoint starting: force\nwait\npgdb-1 | 2024-06-24 19:19:57.915 UTC [66] LOG: checkpoint complete: wrote\n4 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.301\ns, sync=0.001 s, total=0.302 s; sync files=0, longest=0.000 s,\naverage=0.000 s; distance=14193 kB, estimate=28372 kB; lsn=0/4000080, redo\nlsn=0/4000028\n\nThis starts in version 16, versions up to and including 15 do not impose\nthe wait.\n\n\nDave Cramer\n\nGreetings,While testing pgjdbc I noticed the following pgdb-1  | Will execute command on database postgres:pgdb-1  |         SELECT pg_drop_replication_slot(slot_name) FROM pg_replication_slots WHERE slot_name = 'replica_one';pgdb-1  |         DROP USER IF EXISTS replica_one;pgdb-1  |         CREATE USER replica_one WITH REPLICATION PASSWORD 'test';pgdb-1  |         SELECT * FROM pg_create_physical_replication_slot('replica_one');pgdb-1  |pgdb-1  | NOTICE:  role \"replica_one\" does not exist, skippingpgdb-1  |  pg_drop_replication_slotpgdb-1  | 
--------------------------pgdb-1  | (0 rows)pgdb-1  |pgdb-1  | DROP ROLEpgdb-1  | CREATE ROLEpgdb-1  |   slot_name  | lsnpgdb-1  | -------------+-----pgdb-1  |  replica_one |pgdb-1  | (1 row)pgdb-1  |pgdb-1  | waiting for checkpointpgdb-1  | 2024-06-24 19:07:18.569 UTC [66] LOG:  checkpoint starting: force waitpgdb-1  | 2024-06-24 19:11:48.008 UTC [66] LOG:  checkpoint complete: wrote 6431 buffers (39.3%); 0 WAL file(s) added, 0 removed, 3 recycled; write=269.438 s, sync=0.001 s, total=269.439 s; sync files=0, longest=0.000 s, average=0.000 s; distance=44140 kB, estimate=44140 kB; lsn=0/40000B8, redo lsn=0/4000028Note that it takes 4 minutes 48 seconds to do the checkpoint. This seems ridiculously long ?If I add a checkpoint before doing anything there is no delay Will execute command on database postgres:pgdb-1  |         checkpoint;pgdb-1  |         SELECT pg_drop_replication_slot(slot_name) FROM pg_replication_slots WHERE slot_name = 'replica_one';pgdb-1  |         DROP USER IF EXISTS replica_one;pgdb-1  |         CREATE USER replica_one WITH REPLICATION PASSWORD 'test';pgdb-1  |         SELECT * FROM pg_create_physical_replication_slot('replica_one');pgdb-1  |pgdb-1  | 2024-06-24 19:19:57.498 UTC [66] LOG:  checkpoint starting: immediate force waitpgdb-1  | 2024-06-24 19:19:57.558 UTC [66] LOG:  checkpoint complete: wrote 6431 buffers (39.3%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.060 s, sync=0.001 s, total=0.061 s; sync files=0, longest=0.000 s, average=0.000 s; distance=29947 kB, estimate=29947 kB; lsn=0/3223BA0, redo lsn=0/3223B48===> pgdb-1  | CHECKPOINTpgdb-1  |  pg_drop_replication_slotpgdb-1  | --------------------------pgdb-1  | (0 rows)pgdb-1  |pgdb-1  | DROP ROLEpgdb-1  | NOTICE:  role \"replica_one\" does not exist, skippingpgdb-1  | CREATE ROLEpgdb-1  |   slot_name  | lsnpgdb-1  | -------------+-----pgdb-1  |  replica_one |pgdb-1  | (1 row)pgdb-1  |pgdb-1  | waiting for checkpointpgdb-1  | 2024-06-24 19:19:57.614 UTC [66] LOG:  checkpoint starting: force waitpgdb-1  | 2024-06-24 19:19:57.915 UTC [66] LOG:  checkpoint complete: wrote 4 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.301 s, sync=0.001 s, total=0.302 s; sync files=0, longest=0.000 s, average=0.000 s; distance=14193 kB, estimate=28372 kB; lsn=0/4000080, redo lsn=0/4000028 This starts in version 16, versions up to and including 15 do not impose the wait.Dave Cramer", "msg_date": "Mon, 24 Jun 2024 15:44:25 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Unusually long checkpoint time on version 16, and 17beta1 running in\n a docker container" } ]
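One detail worth noting in the log above: the slow run reports `checkpoint starting: force wait`, i.e. a spread (non-immediate) checkpoint, and its 269-second write phase is almost exactly the default `checkpoint_timeout` (5 min) multiplied by the default `checkpoint_completion_target` (0.9), which suggests the checkpoint is being deliberately throttled rather than stuck. The fast run shows `immediate force wait` because an explicit `CHECKPOINT` statement always requests an immediate checkpoint, after which the follow-up spread checkpoint has almost nothing left to write. If the "waiting for checkpoint" line comes from a base backup issued by the test harness right after the quoted SQL — an assumption, since that step is not shown — asking that backup for a fast checkpoint avoids the delay without the extra statement:

```bash
# Request an immediate (non-spread) checkpoint for the base backup itself;
# "replica_data" is a placeholder target directory.
pg_basebackup -D replica_data --checkpoint=fast

# Or, when driving the backup through SQL:
psql -c "SELECT pg_backup_start('replica_one', fast => true)"
```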
[ { "msg_contents": "Prior to macOS SDK 15, there were include guards in\n$SDK_ROOT/usr/include/xlocale/_regex.h:\n\n #ifndef _REGEX_H_\n #include <_regex.h>\n #endif // _REGEX_H_\n #include <_xlocale.h>\n\nIn macOS SDK 15.5, these include guards are gone:\n\n #include <_regex.h>\n #include <_xlocale.h>\n\nBecause _REGEX_H_ was defined locally in PostgreSQL's version of\nsrc/include/regex/regex.h, these include guards prevented duplicate\ndefinitions from $SDK_ROOT/usr/include/_regex.h (not to be confused\nwith $SDK_ROOT/usr/include/xlocale/_regex.h).\n\nTo fix this build issue, define __REGEX_H_ to prevent macOS from\nincluding the header that contain redefinitions of the local regex\nstructures.\n\nDiscussion: https://www.postgresql.org/message-id/CAMBWrQ%3DF9SSPfsFtCv%3DJT51WGK2VcgLA%2BiiJJOmjN0zbbufOEA%40mail.gmail.com\n---\n src/include/regex/regex.h | 14 ++++++++++++++\n 1 file changed, 14 insertions(+)\n\ndiff --git a/src/include/regex/regex.h b/src/include/regex/regex.h\nindex d08113724f..045ac626cc 100644\n--- a/src/include/regex/regex.h\n+++ b/src/include/regex/regex.h\n@@ -32,6 +32,20 @@\n * src/include/regex/regex.h\n */\n \n+#if defined(__darwin__)\n+/*\n+ * mmacOS SDK 15.0 removed the _REGEX_H_ include guards in\n+ * $SDK_ROOT/usr/include/xlocale/_regex.h, so now\n+ * $SDK_ROOT/usr/include/_regex.h is always included. That file defines\n+ * the same types as below. To guard against type redefinition errors,\n+ * define __REGEX_H_.\n+ */\n+#ifndef __REGEX_H_\n+#define __REGEX_H_\n+\n+#endif\t\t\t\t\t\t\t/* __REGEX_H_ */\n+#endif\n+\n /*\n * Add your own defines, if needed, here.\n */\n-- \n2.45.0\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 14:51:07 -0700", "msg_from": "Stan Hu <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Fix type redefinition build errors with macOS SDK 15.0" }, { "msg_contents": "Prior to macOS SDK 15, there were include guards in\n$SDK_ROOT/usr/include/xlocale/_regex.h:\n\n #ifndef _REGEX_H_\n #include <_regex.h>\n #endif // _REGEX_H_\n #include <_xlocale.h>\n\nIn macOS SDK 15, these include guards are gone:\n\n #include <_regex.h>\n #include <_xlocale.h>\n\nBecause _REGEX_H_ was defined locally in PostgreSQL's version of\nsrc/include/regex/regex.h, these include guards prevented duplicate\ndefinitions from $SDK_ROOT/usr/include/_regex.h (not to be confused\nwith $SDK_ROOT/usr/include/xlocale/_regex.h).\n\nAs a result, attempting to compile PostgreSQL with macOS SDK 15 fails\nwith \"previous definition is here\" errors for regoff_t, regex_t, and\nregmatch_t structures.\n\nTo fix this build issue, define __REGEX_H_ to prevent macOS from\nincluding the header that contain redefinitions of the local regex\nstructures.\n\nDiscussion: https://www.postgresql.org/message-id/CAMBWrQ%3DF9SSPfsFtCv%3DJT51WGK2VcgLA%2BiiJJOmjN0zbbufOEA%40mail.gmail.com\n---\n src/include/regex/regex.h | 14 ++++++++++++++\n 1 file changed, 14 insertions(+)\n\ndiff --git a/src/include/regex/regex.h b/src/include/regex/regex.h\nindex d08113724f..045ac626cc 100644\n--- a/src/include/regex/regex.h\n+++ b/src/include/regex/regex.h\n@@ -32,6 +32,20 @@\n * src/include/regex/regex.h\n */\n \n+#if defined(__darwin__)\n+/*\n+ * mmacOS SDK 15.0 removed the _REGEX_H_ include guards in\n+ * $SDK_ROOT/usr/include/xlocale/_regex.h, so now\n+ * $SDK_ROOT/usr/include/_regex.h is always included. That file defines\n+ * the same types as below. 
To guard against type redefinition errors,\n+ * define __REGEX_H_.\n+ */\n+#ifndef __REGEX_H_\n+#define __REGEX_H_\n+\n+#endif\t\t\t\t\t\t\t/* __REGEX_H_ */\n+#endif\n+\n /*\n * Add your own defines, if needed, here.\n */\n-- \n2.45.0\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 14:58:47 -0700", "msg_from": "Stan Hu <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Fix type redefinition build errors with macOS SDK 15.0" }, { "msg_contents": "Prior to macOS SDK 15, there were include guards in\n$SDK_ROOT/usr/include/xlocale/_regex.h:\n\n #ifndef _REGEX_H_\n #include <_regex.h>\n #endif // _REGEX_H_\n #include <_xlocale.h>\n\nIn macOS SDK 15, these include guards are gone:\n\n #include <_regex.h>\n #include <_xlocale.h>\n\nBecause _REGEX_H_ was defined locally in PostgreSQL's version of\nsrc/include/regex/regex.h, these include guards prevented duplicate\ndefinitions from $SDK_ROOT/usr/include/_regex.h (not to be confused\nwith $SDK_ROOT/usr/include/xlocale/_regex.h).\n\nAs a result, attempting to compile PostgreSQL with macOS SDK 15 fails\nwith \"previous definition is here\" errors for regoff_t, regex_t, and\nregmatch_t structures.\n\nTo fix this build issue, define __REGEX_H_ to prevent macOS from\nincluding the header that contain redefinitions of the local regex\nstructures.\n\nDiscussion: https://www.postgresql.org/message-id/CAMBWrQ%3DF9SSPfsFtCv%3DJT51WGK2VcgLA%2BiiJJOmjN0zbbufOEA%40mail.gmail.com\n---\n src/include/regex/regex.h | 14 ++++++++++++++\n 1 file changed, 14 insertions(+)\n\ndiff --git a/src/include/regex/regex.h b/src/include/regex/regex.h\nindex d08113724f..f7aa7cf3a3 100644\n--- a/src/include/regex/regex.h\n+++ b/src/include/regex/regex.h\n@@ -32,6 +32,20 @@\n * src/include/regex/regex.h\n */\n \n+#if defined(__darwin__)\n+/*\n+ * macOS SDK 15.0 removed the _REGEX_H_ include guards in\n+ * $SDK_ROOT/usr/include/xlocale/_regex.h, so now\n+ * $SDK_ROOT/usr/include/_regex.h is always included. That file defines\n+ * the same types as below. To guard against type redefinition errors,\n+ * define __REGEX_H_.\n+ */\n+#ifndef __REGEX_H_\n+#define __REGEX_H_\n+\n+#endif\t\t\t\t\t\t\t/* __REGEX_H_ */\n+#endif\n+\n /*\n * Add your own defines, if needed, here.\n */\n-- \n2.45.0\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 15:20:25 -0700", "msg_from": "Stan Hu <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH v3] Fix type redefinition build errors with macOS SDK 15.0" }, { "msg_contents": "On Mon, Jun 24, 2024 at 02:58:47PM -0700, Stan Hu wrote:\n> Prior to macOS SDK 15, there were include guards in\n> $SDK_ROOT/usr/include/xlocale/_regex.h:\n> \n> #ifndef _REGEX_H_\n> #include <_regex.h>\n> #endif // _REGEX_H_\n> #include <_xlocale.h>\n\nUgh. Which means that you are testing macOS Sequoia still in beta\nphase? Thanks for the report.\n\nPerhaps we should wait for the actual release before seeing if this is\nstill an issue and see if this is still a problem? Tom is a heavy\nmacOS user, I'm still under 14 myself for some time.\n--\nMichael", "msg_date": "Tue, 25 Jun 2024 11:03:26 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix type redefinition build errors with macOS SDK 15.0" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> Ugh. Which means that you are testing macOS Sequoia still in beta\n> phase? Thanks for the report.\n\n> Perhaps we should wait for the actual release before seeing if this is\n> still an issue and see if this is still a problem? 
Tom is a heavy\n> macOS user, I'm still under 14 myself for some time.\n\nYeah, I'm not in a huge hurry to act on this. The problem may\ngo away by the time SDK 15 gets out of beta --- in fact, I think\nit'd be a good idea to file a bug with Apple complaining that this\npointless-looking change breaks third-party code. If it doesn't\ngo away, we're going to have to back-patch all supported branches\n(and, really, even out-of-support ones back to 9.2); which puts a\nlarge premium on getting the patch right. So we have both time to\nthink about it and good reason to be careful.\n\n(I've not yet read any of Stan's proposed patches.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2024 22:15:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix type redefinition build errors with macOS SDK 15.0" }, { "msg_contents": "Thanks, Tom and Michael. I've submitted a bug report via Apple's\nFeedback Assistant. It's filed under FB14047412.\n\nIf anyone happens to know the right person at Apple to look at this,\nplease direct them there.\n\nOn Mon, Jun 24, 2024 at 7:15 PM Tom Lane <[email protected]> wrote:\n>\n> Michael Paquier <[email protected]> writes:\n> > Ugh. Which means that you are testing macOS Sequoia still in beta\n> > phase? Thanks for the report.\n>\n> > Perhaps we should wait for the actual release before seeing if this is\n> > still an issue and see if this is still a problem? Tom is a heavy\n> > macOS user, I'm still under 14 myself for some time.\n>\n> Yeah, I'm not in a huge hurry to act on this. The problem may\n> go away by the time SDK 15 gets out of beta --- in fact, I think\n> it'd be a good idea to file a bug with Apple complaining that this\n> pointless-looking change breaks third-party code. If it doesn't\n> go away, we're going to have to back-patch all supported branches\n> (and, really, even out-of-support ones back to 9.2); which puts a\n> large premium on getting the patch right. So we have both time to\n> think about it and good reason to be careful.\n>\n> (I've not yet read any of Stan's proposed patches.)\n>\n> regards, tom lane\n\n\n", "msg_date": "Mon, 24 Jun 2024 21:50:02 -0700", "msg_from": "Stan Hu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix type redefinition build errors with macOS SDK 15.0" } ]
[ { "msg_contents": "Hey!\n\nLots of SQL/JSON threads going about. This one is less about technical\ncorrectness and more about usability of the documentation. Though in\nwriting this I am finding some things that aren't quite clear. I'm going\nto come back with those on a follow-on post once I get a chance to make my\nsecond pass on this. But for the moment just opening it up to a content\nand structure review.\n\nPlease focus on the text changes. It passes \"check-docs\" but I still need\nto work on layout and stuff in html (markup, some more links).\n\nThanks!\n\nDavid J.\n\np.s. v1 exists here (is just the idea of using basically variable names in\nthe function signature and minimizing direct syntax in the table);\n\nhttps://www.postgresql.org/message-id/CAKFQuwbYBvUZasGj_ZnfXhC2kk4AT%3DepwGkNd2%3DRMMVXkfTNMQ%40mail.gmail.com", "msg_date": "Mon, 24 Jun 2024 23:46:40 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "Hi David,\n\nOn Tue, Jun 25, 2024 at 3:47 PM David G. Johnston\n<[email protected]> wrote:\n>\n> Hey!\n>\n> Lots of SQL/JSON threads going about. This one is less about technical correctness and more about usability of the documentation. Though in writing this I am finding some things that aren't quite clear. I'm going to come back with those on a follow-on post once I get a chance to make my second pass on this. But for the moment just opening it up to a content and structure review.\n>\n> Please focus on the text changes. It passes \"check-docs\" but I still need to work on layout and stuff in html (markup, some more links).\n>\n> Thanks!\n>\n> David J.\n>\n> p.s. v1 exists here (is just the idea of using basically variable names in the function signature and minimizing direct syntax in the table);\n>\n> https://www.postgresql.org/message-id/CAKFQuwbYBvUZasGj_ZnfXhC2kk4AT%3DepwGkNd2%3DRMMVXkfTNMQ%40mail.gmail.com\n\nThanks for writing the patch. I'll take a look at this next Monday.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Fri, 28 Jun 2024 14:56:39 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "On Fri, Jun 28, 2024 at 2:56 PM Amit Langote <[email protected]> wrote:\n> On Tue, Jun 25, 2024 at 3:47 PM David G. Johnston\n> <[email protected]> wrote:\n> >\n> > Hey!\n> >\n> > Lots of SQL/JSON threads going about. This one is less about technical correctness and more about usability of the documentation. Though in writing this I am finding some things that aren't quite clear. I'm going to come back with those on a follow-on post once I get a chance to make my second pass on this. But for the moment just opening it up to a content and structure review.\n> >\n> > Please focus on the text changes. It passes \"check-docs\" but I still need to work on layout and stuff in html (markup, some more links).\n> >\n> > Thanks!\n> >\n> > David J.\n> >\n> > p.s. v1 exists here (is just the idea of using basically variable names in the function signature and minimizing direct syntax in the table);\n> >\n> > https://www.postgresql.org/message-id/CAKFQuwbYBvUZasGj_ZnfXhC2kk4AT%3DepwGkNd2%3DRMMVXkfTNMQ%40mail.gmail.com\n>\n> Thanks for writing the patch. 
I'll take a look at this next Monday.\n\nI've attached a delta (0002) against your patch, wherein I've kept\nmost of the structuring changes you've proposed, but made changes such\nas:\n\n* use tags consistently\n* use language matching the rest of func.sgml, IMO\n* avoid repetition (eg. context_item described both above and below the table)\n* correcting some factual discrepancies (eg. json_value never returns json null)\n* avoid forward references\n* capitalize function names, SQL keywords in examples as requested in\na previous review [1]\n\nMaybe we could still polish this some more.\n\n--\nThanks, Amit Langote\n\n[1] https://www.postgresql.org/message-id/CAA-aLv7Dfy9BMrhUZ1skcg%3DOdqysWKzObS7XiDXdotJNF0E44Q%40mail.gmail.com", "msg_date": "Tue, 2 Jul 2024 21:38:41 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "hi.\nthe following review is based on v2-0001, v2-0002.\n\n\"context_item can be a JSON document passed as a value of type json,\njsonb document, a character or an UTF8- endoded bytea string.\"\nis wrong?\ne.g. SELECT JSON_EXISTS( NULL::bytea, 'lax $.a[5]' ERROR ON ERROR)\n\ncheck following query:\nselect oid, typtype , typname from pg_type where typcategory = 'S';\n\nI think a more accurate description would be:\n\"context_item must be a JSON document passed as a value of type json,\njsonb document, a character string type(text, name, bpchar, varchar)\"\ndo we need to mention domain over these types?\n-------------------------------------\nJSON_EXISTS\nReturns true if the SQL/JSON path_expression possibly referencing the\nvariables in variable_definitions applied to the context_item yields\nany items.\nI am not native English speaker, so I found it hard to comprehend.\nI can understand it like:\n\"Returns true if the SQL/JSON path_expression (possibly referencing\nthe variables in variable_definitions) applied to the context_item\nyields any items.\"\n\nmaybe we can write it into two sentences, or\n\"Returns true if the SQL/JSON path_expression applied to the\ncontext_item yields any items.\"\nbecause you already mentioned \"path_expression can also contain\nvariables whose values are specified using the variable_definitions\nclause described below.\" in the top level.\n-------------------------------------\nThe JSON_QUERY and JSON_VALUE functions are polymorphic in their\noutput type with the returning_clause clause dictating what that type\nis.\nhow about\nThe JSON_QUERY and JSON_VALUE functions output type can be vary, using\nreturning_clause specify the desired data type.\n\n-------------------------------------\nyour doc: JSON_VALUE \"If path_expression points to a JSON null,\nJSON_VALUE returns a SQL NULL.\"\n`SELECT JSON_VALUE(jsonb 'null', '$');` here, the path_expression\npoints to '$' which is not json null?\nso i like to change it to\n\"If the extracted value is a JSON null, an SQL NULL value will return.\"\n-------------------------------------\ninconsistency:\nJSON_QUERY: <returnvalue></returnvalue> { <type>jsonb</type> |\n<replaceable>return_data_type</replaceable> }\nJSON_VALUE: <returnvalue></returnvalue> { <type>text</type> |\n<varname>return_data_type</varname> }\n-------------------------------------\n{{For JSON_EXISTS (... on_error_boolean), alternative can be: ERROR,\nUNKNOWN, TRUE, FALSE.\nFor JSON_QUERY (... 
on_error_set on_empty_set), alternative can be:\nERROR, NULL, EMPTY ARRAY, EMPTY OBJECT, or DEFAULT followed by an\nexpression.\nFor JSON_VALUE (... on_error_set on_empty_set), alternative can be:\nERROR, NULL, or DEFAULT followed by an expression.\n}}\ni am not sure what does there dot means here, in the synopsis section,\nthree dots is significant.\nAlso if I understand it correctly, JSON_EXISTS can only have on_error,\nthen I am more confused with ``JSON_EXISTS (... on_error_boolean)``\n\n\n\nOverall, I found this approach makes the synopsis scattered, it's not\neasy to see the full picture.\nfor example:\n```\nJSON_VALUE ( context_item, path_expression [variable_definitions]\n[return_type] [on_empty_value] [on_error_value]) → { text |\nreturn_data_type }\n ```\nthis way it is not easy to find out that RETURNING is a keyword.\nCurrently in master, we can quickly see RETURNING is the keyword, the\nmaster is kind of condense, though.\nbut if you are insistent with your approach, then that is fine for me.\n\n\n", "msg_date": "Wed, 3 Jul 2024 10:15:05 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "still based on v2-0001, v2-0002.\npicture attached.\n\nas you can see from the curly braces,\n```\n{ KEEP | OMIT } QUOTES [ ON SCALAR STRING ]\n```\nwe must choose one, \"KEEP\" or \"OMIT\".\n\nbut the wrapping_clause:\n```\n WITHOUT [ARRAY] WRAPPER\n WITH [UNCONDITIONAL] [ARRAY] WRAPPER\n WITH CONDITIONAL [ARRAY] WRAPPER\n```\nthis way, didn't say we must choose between one in these three.\n-----------\non_error_boolean\non_error_set\non_error_value\non_empty_set\non_empty_value\n\n alternative ON { ERROR | EMPTY }\n\n````\ndidn't explain on_error_value, on_empty_value.\nwhy not just on_error_clause, on_empty_clause?\n\n-------\n<<<quoted paragraph\nWhen JSON_QUERY function produces multiple JSON values, they are\nreturned as a JSON array. By default, the result values are\nunconditionally wrapped even if the array contains only one element.\nYou can specify the WITH CONDITIONAL variant to say that the wrapper\nbe added only when there are multiple values in the resulting array.\nOr specify the WITHOUT variant to say that the wrapper be removed when\nthere is only one element, but it is ignored if there are multiple\nvalues.\n<<<quoted paragraph\n\nThe above paragraph didn't explicitly mention that UNCONDITIONAL is the default.\nBTW, by comparing patch with master, I found out:\n\n\"\"\"\nIf the wrapper is UNCONDITIONAL, an array wrapper will always be\napplied, even if the returned value is already a single JSON object or\nan array. If it is CONDITIONAL, it will not be applied to a single\nJSON object or an array. 
UNCONDITIONAL is the default.\n\"\"\"\nthis description seems not right.\nif \"UNCONDITIONAL is the default\", then\nselect json_query(jsonb '{\"a\": [1]}', 'lax $.a' with unconditional\narray wrapper);\nshould be same as\nselect json_query(jsonb '{\"a\": [1]}', 'lax $.a' );\n\nanother two examples with SQL/JSON scalar item:\n\nselect json_query(jsonb '{\"a\": 1}', 'lax $.a' );\nselect json_query(jsonb '{\"a\": 1}', 'lax $.a' with unconditional wrapper);\n\nAm I interpreting \"UNCONDITIONAL is the default\" the wrong way?", "msg_date": "Thu, 4 Jul 2024 17:16:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "Hi Jian,\n\nThanks for the reviews.\n\nOn Wed, Jul 3, 2024 at 11:15 AM jian he <[email protected]> wrote:\n> Overall, I found this approach makes the synopsis scattered, it's not\n> easy to see the full picture.\n> for example:\n> ```\n> JSON_VALUE ( context_item, path_expression [variable_definitions]\n> [return_type] [on_empty_value] [on_error_value]) → { text |\n> return_data_type }\n> ```\n> this way it is not easy to find out that RETURNING is a keyword.\n> Currently in master, we can quickly see RETURNING is the keyword, the\n> master is kind of condense, though.\n> but if you are insistent with your approach, then that is fine for me.\n\nActually, on second thought, I too am finding the new structure\nwhereby descriptions of various clauses are moved below the table a\nbit hard to follow. Descriptions in the table have to use forward\nreferences to bits outside the table and vice versa. Like you, I also\ndon't like the style used in the new structure for describing various\nON ERROR values that differ based on the function. It seems better to\njust list the syntax in the table and then each function's syntax\nsynopsis tells what values are appropriate to return for that given\nfunction ON ERROR.\n\nSo I've decided to resurrect the *other* documentation rewrite patch\nthat you reviewed back in May.\n\n> \"context_item can be a JSON document passed as a value of type json,\n> jsonb document, a character or an UTF8- endoded bytea string.\"\n> is wrong?\n> e.g. 
SELECT JSON_EXISTS( NULL::bytea, 'lax $.a[5]' ERROR ON ERROR)\n\nOops, I was thinking of what can be used in the RETURNING clause.\n\n> check following query:\n> select oid, typtype , typname from pg_type where typcategory = 'S';\n>\n> I think a more accurate description would be:\n> \"context_item must be a JSON document passed as a value of type json,\n> jsonb document, a character string type(text, name, bpchar, varchar)\"\n> do we need to mention domain over these types?\n\nI don't feel the need to mention every possible type, so I went with:\n\n+ <replaceable>context_item</replaceable> can be any character string that\n+ can be succesfully cast to <type>jsonb</type>.\n\n> -------------------------------------\n> JSON_EXISTS\n> Returns true if the SQL/JSON path_expression possibly referencing the\n> variables in variable_definitions applied to the context_item yields\n> any items.\n> I am not native English speaker, so I found it hard to comprehend.\n> I can understand it like:\n> \"Returns true if the SQL/JSON path_expression (possibly referencing\n> the variables in variable_definitions) applied to the context_item\n> yields any items.\"\n>\n> maybe we can write it into two sentences, or\n> \"Returns true if the SQL/JSON path_expression applied to the\n> context_item yields any items.\"\n> because you already mentioned \"path_expression can also contain\n> variables whose values are specified using the variable_definitions\n> clause described below.\" in the top level.\n\nPlease check the attached patch which contains different text.\n\n> -------------------------------------\n> The JSON_QUERY and JSON_VALUE functions are polymorphic in their\n> output type with the returning_clause clause dictating what that type\n> is.\n> how about\n> The JSON_QUERY and JSON_VALUE functions output type can be vary, using\n> returning_clause specify the desired data type.\n\nDitto.\n\n> -------------------------------------\n> your doc: JSON_VALUE \"If path_expression points to a JSON null,\n> JSON_VALUE returns a SQL NULL.\"\n> `SELECT JSON_VALUE(jsonb 'null', '$');` here, the path_expression\n> points to '$' which is not json null?\n> so i like to change it to\n> \"If the extracted value is a JSON null, an SQL NULL value will return.\"\n\nI've added a <note> at the bottom:\n\n+ <note>\n+ <para>\n+ <function>JSON_VALUE()</function> returns SQL NULL if\n+ <replaceable>path_expression</replaceable> returns a JSON\n+ <literal>null</literal>, whereas <function>JSON_QUERY()</function> returns\n+ the JSON <literal>null</literal> as is.\n+ </para>\n+ </note>\n\n> -------------------------------------\n> inconsistency:\n> JSON_QUERY: <returnvalue></returnvalue> { <type>jsonb</type> |\n> <replaceable>return_data_type</replaceable> }\n> JSON_VALUE: <returnvalue></returnvalue> { <type>text</type> |\n> <varname>return_data_type</varname> }\n\nNo longer in the patch.\n\n> -------------------------------------\n> {{For JSON_EXISTS (... on_error_boolean), alternative can be: ERROR,\n> UNKNOWN, TRUE, FALSE.\n> For JSON_QUERY (... on_error_set on_empty_set), alternative can be:\n> ERROR, NULL, EMPTY ARRAY, EMPTY OBJECT, or DEFAULT followed by an\n> expression.\n> For JSON_VALUE (... on_error_set on_empty_set), alternative can be:\n> ERROR, NULL, or DEFAULT followed by an expression.\n> }}\n> i am not sure what does there dot means here, in the synopsis section,\n> three dots is significant.\n> Also if I understand it correctly, JSON_EXISTS can only have on_error,\n> then I am more confused with ``JSON_EXISTS (... 
on_error_boolean)``\n\nDitto.\n\n> still based on v2-0001, v2-0002.\n> picture attached.\n>\n> as you can see from the curly braces,\n> ```\n> { KEEP | OMIT } QUOTES [ ON SCALAR STRING ]\n> ```\n> we must choose one, \"KEEP\" or \"OMIT\".\n>\n> but the wrapping_clause:\n> ```\n> WITHOUT [ARRAY] WRAPPER\n> WITH [UNCONDITIONAL] [ARRAY] WRAPPER\n> WITH CONDITIONAL [ARRAY] WRAPPER\n> ```\n> this way, didn't say we must choose between one in these three.\n\nDitto.\n\n> -----------\n> on_error_boolean\n> on_error_set\n> on_error_value\n> on_empty_set\n> on_empty_value\n>\n> alternative ON { ERROR | EMPTY }\n>\n> ````\n> didn't explain on_error_value, on_empty_value.\n> why not just on_error_clause, on_empty_clause?\n\nDitto.\n\n> -------\n> <<<quoted paragraph\n> When JSON_QUERY function produces multiple JSON values, they are\n> returned as a JSON array. By default, the result values are\n> unconditionally wrapped even if the array contains only one element.\n> You can specify the WITH CONDITIONAL variant to say that the wrapper\n> be added only when there are multiple values in the resulting array.\n> Or specify the WITHOUT variant to say that the wrapper be removed when\n> there is only one element, but it is ignored if there are multiple\n> values.\n> <<<quoted paragraph\n>\n> The above paragraph didn't explicitly mention that UNCONDITIONAL is the default.\n> BTW, by comparing patch with master, I found out:\n>\n> \"\"\"\n> If the wrapper is UNCONDITIONAL, an array wrapper will always be\n> applied, even if the returned value is already a single JSON object or\n> an array. If it is CONDITIONAL, it will not be applied to a single\n> JSON object or an array. UNCONDITIONAL is the default.\n> \"\"\"\n> this description seems not right.\n> if \"UNCONDITIONAL is the default\", then\n> select json_query(jsonb '{\"a\": [1]}', 'lax $.a' with unconditional\n> array wrapper);\n> should be same as\n> select json_query(jsonb '{\"a\": [1]}', 'lax $.a' );\n>\n> another two examples with SQL/JSON scalar item:\n>\n> select json_query(jsonb '{\"a\": 1}', 'lax $.a' );\n> select json_query(jsonb '{\"a\": 1}', 'lax $.a' with unconditional wrapper);\n>\n> Am I interpreting \"UNCONDITIONAL is the default\" the wrong way?\n\nCurrent text is confusing, so I've rewritten the paragraph as:\n\n+ If the path expression may return multiple values, it might\nbe necessary\n+ to wrap those values using the <literal>WITH\nWRAPPER</literal> clause to\n+ make it a valid JSON string, because the default behavior is\nto not wrap\n+ them, as if <literal>WITHOUT WRAPPER</literal> were specified. The\n+ <literal>WITH WRAPPER</literal> clause is by default taken to mean\n+ <literal>WITH UNCONDITIONAL WRAPPER</literal>, which means that even a\n+ single result value will be wrapped. To apply the wrapper only when\n+ multiple values are present, specify <literal>WITH\nCONDITIONAL WRAPPER</literal>.\n+ Note that an error will be thrown if multiple values are returned and\n+ <literal>WITHOUT WRAPPER</literal> is specified.\n\nSo, UNCONDITIONAL is the default as in WITH [UNCONDITIONAL] WRAPPER.\n(The default when no wrapping clause is present is WITHOUT WRAPPER as\nseen in your example).\n\nPlease check the attached. I've also added <itemizedlist> lists as I\nremember you had proposed before to make the functions' descriptions a\nbit more readable -- I'm persuaded. 
:-)\n\n-- \nThanks, Amit Langote", "msg_date": "Fri, 5 Jul 2024 21:35:32 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "Op 7/5/24 om 14:35 schreef Amit Langote:\n> Hi Jian,\n> \n> Thanks for the reviews.\n> \n > [v3-0001-SQL-JSON-Various-improvements-to-SQL-JSON-query-f.patch]\n i.e., from the patch for doc/src/sgml/func.sgml\n\n\nSmall changes:\n\n4x:\n'a SQL' should be\n'an SQL'\n('a SQL' does never occur in the docs; it's always 'an SQL'; apperently \nthe 'sequel' pronunciation is out)\n\n'some other type to which can be successfully coerced'\n'some other type to which it can be successfully coerced'\n\n\n'specifies the behavior behavior'\n'specifies the behavior'\n\n\nIn the following sentence:\n\n\"By default, the result is returned as a value of type <type>jsonb</type>,\nthough the <literal>RETURNING</literal> clause can be used to return\nthe original <type>jsonb</type> value as some other type to which it\ncan be successfully coerced.\"\n\nit seems to me that this phrase is better removed:\n \"the original <type>jsonb</type> value as\"\n\n\nthanks,\n\nErik\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 5 Jul 2024 15:16:12 +0200", "msg_from": "Erik Rijkers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "On Fri, Jul 5, 2024 at 8:35 PM Amit Langote <[email protected]> wrote:\n> Please check the attached. I've also added <itemizedlist> lists as I\n> remember you had proposed before to make the functions' descriptions a\n> bit more readable -- I'm persuaded. :-)\n>\n\n\njson_exists\n\"Returns true if the SQL/JSON path_expression applied to the\ncontext_item using the PASSING values yields any items.\"\nnow you changed to\n<<\nReturns true if the SQL/JSON path_expression applied to the\ncontext_item. path_expression can reference variables named in the\nPASSING clause.\n<<\n Is it wrong?\n\n\n For both <literal>ON EMPTY</literal>and <literal>ON ERROR</literal>,\n specifying <literal>ERROR</literal> will cause an error to be\nthrown with\n the appropriate message. Other options include returning a SQL NULL, an\nneed one more whitespace. should be:\n For both <literal>ON EMPTY</literal> and <literal>ON ERROR</literal>,\n\n\n Note that an error will be thrown if multiple values are returned and\n <literal>WITHOUT WRAPPER</literal> is specified.\nsince not specify error on error, then no error will be thrown. maybe\nrephrase to\n It will be evaulated as error if multiple values are returned and\n <literal>WITHOUT WRAPPER</literal> is specified.\n\n\n <para>\n For both <literal>ON EMPTY</literal> and <literal>ON ERROR</literal>,\n specifying <literal>ERROR</literal> will cause an error to be\nthrown with\n the appropriate message. 
Other options include returning a SQL NULL, an\n empty array or object (array by default), or a user-specified expression\n that can be coerced to jsonb or the type specified in\n<literal>RETURNING</literal>.\n The default when <literal>ON EMPTY</literal> or <literal>ON\nERROR</literal>\n is not specified is to return a SQL NULL value when the respective\n situation occurs.\n </para>\nin here, \"empty array or object (array by default)\",\nI don't think people can understand the meaning of \"(array by default)\" .\n\n\"or a user-specified expression\"\nmaybe we can change to\n\"or a user-specified\n <literal>DEFAULT</literal> <replaceable>expression</replaceable>\"\nI think \"user-specified expression\" didn't have much link with\n\"<literal>DEFAULT</literal> <replaceable>expression</replaceable>\"\n\n\n<replaceable>path_expression</replaceable> can reference variables named\n in the <literal>PASSING</literal> clause.\ndo we need \"The <replaceable>path_expression</replaceable>\"?\nalso maybe we can add\n+ In <literal>PASSING</literal> clause, <replaceable>varname</replaceable> is\n+ the variables name, <replaceable>value</replaceable> is the\n+ variables' value.\nwe can add a PASSING clause example from sqljson_queryfuncs.sql ,\nsince all three functions can use it.\n\n\n\nJSON_VALUE:\n<<an error is thrown if that's not the case (though see the discussion\nof ON ERROR below).\nthen\n<< The ON ERROR and ON EMPTY clauses have similar semantics as\nmentioned in the description of JSON_QUERY, except the set of values\nreturned in lieu of throwing an error is different.\n\nyou first refer \"below\", then director to JSON_QUERY on error, on\nempty description.\nis the correct usage of \"below\"?\n\"(though see the discussion of ON ERROR below).\"\ni am not sure the meaning of \"though\" even watched this\nhttps://www.youtube.com/watch?v=r-LphuCKQ0Q\n\n\n", "msg_date": "Sat, 6 Jul 2024 10:55:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "+ <replaceable>context_item</replaceable> can be any character string that\n+ can be succesfully cast to <type>jsonb</type>.\n\ntypo: \"succesfully\", should be \"successfully\"\n\nmaybe rephrase it to:\n+ <replaceable>context_item</replaceable> can be jsonb type or any\ncharacter string that\n+ can be successfully cast to <type>jsonb</type>.\n\n\n+ <literal>ON EMPTY</literal> expression (that is caused by empty result\n+ of <replaceable>path_expression</replaceable>evaluation).\nneed extra white space, should be\n+ of <replaceable>path_expression</replaceable> evaluation).\n\n\n\n+ The default when <literal>ON EMPTY</literal> or <literal>ON\nERROR</literal>\n+ is not specified is to return a SQL NULL value when the respective\n+ situation occurs.\nCorrect me if I'm wrong.\nwe can just say:\n+ The default when <literal>ON EMPTY</literal> or <literal>ON\nERROR</literal>\n+ is not specified is to return an SQL NULL value.\nAnyway, this is a minor issue.\n\n\n", "msg_date": "Mon, 8 Jul 2024 11:18:23 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "another tiny issue.\n\n- <literal>select json_query(jsonb '{\"a\": \"[1, 2]\"}', 'lax $.a'\nOMIT QUOTES);</literal>\n+ <literal>JSON_QUERY(jsonb '{\"a\": \"[1, 2]\"}', 'lax $.a' OMIT\nQUOTES);</literal>\n <returnvalue>[1, 2]</returnvalue>\n </para>\n <para>\n- <literal>select 
json_query(jsonb '{\"a\": \"[1, 2]\"}', 'lax $.a'\nRETURNING int[] OMIT QUOTES ERROR ON ERROR);</literal>\n+ <literal>JSON_QUERY(jsonb '{\"a\": \"[1, 2]\"}', 'lax $.a'\nRETURNING int[] OMIT QUOTES ERROR ON ERROR);</literal>\n\nThese two example queries don't need semicolons at the end?\n\n\n", "msg_date": "Mon, 8 Jul 2024 15:41:03 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "On Fri, Jul 5, 2024 at 10:16 PM Erik Rijkers <[email protected]> wrote:\n> Op 7/5/24 om 14:35 schreef Amit Langote:\n> > Hi Jian,\n> >\n> > Thanks for the reviews.\n> >\n> > [v3-0001-SQL-JSON-Various-improvements-to-SQL-JSON-query-f.patch]\n> i.e., from the patch for doc/src/sgml/func.sgml\n>\n>\n> Small changes:\n>\n> 4x:\n> 'a SQL' should be\n> 'an SQL'\n> ('a SQL' does never occur in the docs; it's always 'an SQL'; apperently\n> the 'sequel' pronunciation is out)\n>\n> 'some other type to which can be successfully coerced'\n> 'some other type to which it can be successfully coerced'\n>\n>\n> 'specifies the behavior behavior'\n> 'specifies the behavior'\n>\n>\n> In the following sentence:\n>\n> \"By default, the result is returned as a value of type <type>jsonb</type>,\n> though the <literal>RETURNING</literal> clause can be used to return\n> the original <type>jsonb</type> value as some other type to which it\n> can be successfully coerced.\"\n>\n> it seems to me that this phrase is better removed:\n> \"the original <type>jsonb</type> value as\"\n\nThanks, I've addressed all these in the next patch I'll send.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 8 Jul 2024 20:48:32 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "Thanks for the readthrough.\n\nOn Sat, Jul 6, 2024 at 11:56 AM jian he <[email protected]> wrote:\n> json_exists\n> \"Returns true if the SQL/JSON path_expression applied to the\n> context_item using the PASSING values yields any items.\"\n> now you changed to\n> <<\n> Returns true if the SQL/JSON path_expression applied to the\n> context_item. path_expression can reference variables named in the\n> PASSING clause.\n> <<\n> Is it wrong?\n\nYes, I mistakenly dropped \"doesn't yield any items.\"\n\n> For both <literal>ON EMPTY</literal>and <literal>ON ERROR</literal>,\n> specifying <literal>ERROR</literal> will cause an error to be\n> thrown with\n> the appropriate message. Other options include returning a SQL NULL, an\n> need one more whitespace. should be:\n> For both <literal>ON EMPTY</literal> and <literal>ON ERROR</literal>,\n\nFixed.\n\n> Note that an error will be thrown if multiple values are returned and\n> <literal>WITHOUT WRAPPER</literal> is specified.\n> since not specify error on error, then no error will be thrown. maybe\n> rephrase to\n> It will be evaulated as error if multiple values are returned and\n> <literal>WITHOUT WRAPPER</literal> is specified.\n\nDone.\n\n> <para>\n> For both <literal>ON EMPTY</literal> and <literal>ON ERROR</literal>,\n> specifying <literal>ERROR</literal> will cause an error to be\n> thrown with\n> the appropriate message. 
Other options include returning a SQL NULL, an\n> empty array or object (array by default), or a user-specified expression\n> that can be coerced to jsonb or the type specified in\n> <literal>RETURNING</literal>.\n> The default when <literal>ON EMPTY</literal> or <literal>ON\n> ERROR</literal>\n> is not specified is to return a SQL NULL value when the respective\n> situation occurs.\n> </para>\n> in here, \"empty array or object (array by default)\",\n> I don't think people can understand the meaning of \"(array by default)\" .\n>\n> \"or a user-specified expression\"\n> maybe we can change to\n> \"or a user-specified\n> <literal>DEFAULT</literal> <replaceable>expression</replaceable>\"\n> I think \"user-specified expression\" didn't have much link with\n> \"<literal>DEFAULT</literal> <replaceable>expression</replaceable>\"\n\nYes, specifying each option's syntax in parenthesis makes sense.\n\n> <replaceable>path_expression</replaceable> can reference variables named\n> in the <literal>PASSING</literal> clause.\n> do we need \"The <replaceable>path_expression</replaceable>\"?\n\nFixed.\n\n> also maybe we can add\n> + In <literal>PASSING</literal> clause, <replaceable>varname</replaceable> is\n> + the variables name, <replaceable>value</replaceable> is the\n> + variables' value.\n\nInstead of expanding the description of the PASSING clause in each\nfunction's description, I've moved its description to the top\nparagraph with slightly different text.\n\n> we can add a PASSING clause example from sqljson_queryfuncs.sql ,\n> since all three functions can use it.\n\nDone.\n\n> JSON_VALUE:\n> <<an error is thrown if that's not the case (though see the discussion\n> of ON ERROR below).\n> then\n> << The ON ERROR and ON EMPTY clauses have similar semantics as\n> mentioned in the description of JSON_QUERY, except the set of values\n> returned in lieu of throwing an error is different.\n>\n> you first refer \"below\", then director to JSON_QUERY on error, on\n> empty description.\n> is the correct usage of \"below\"?\n> \"(though see the discussion of ON ERROR below).\"\n> i am not sure the meaning of \"though\" even watched this\n> https://www.youtube.com/watch?v=r-LphuCKQ0Q\n\nI've replaced the sentence with \"(though see the discussion of ON\nERROR below)\" with this:\n\n\"getting multiple values will be treated as an error.\"\n\nNo need to reference the ON ERROR clause with that wording.\n\n> + <replaceable>context_item</replaceable> can be any character string that\n> + can be succesfully cast to <type>jsonb</type>.\n>\n> typo: \"succesfully\", should be \"successfully\"\n\nFixed.\n\n> maybe rephrase it to:\n> + <replaceable>context_item</replaceable> can be jsonb type or any\n> character string that\n> + can be successfully cast to <type>jsonb</type>.\n\nDone.\n\n> + <literal>ON EMPTY</literal> expression (that is caused by empty result\n> + of <replaceable>path_expression</replaceable>evaluation).\n> need extra white space, should be\n> + of <replaceable>path_expression</replaceable> evaluation).\n\nFixed.\n\n> + The default when <literal>ON EMPTY</literal> or <literal>ON\n> ERROR</literal>\n> + is not specified is to return a SQL NULL value when the respective\n> + situation occurs.\n> Correct me if I'm wrong.\n> we can just say:\n> + The default when <literal>ON EMPTY</literal> or <literal>ON\n> ERROR</literal>\n> + is not specified is to return an SQL NULL value.\n\nAgreed.\n\n> another tiny issue.\n>\n> - <literal>select json_query(jsonb '{\"a\": \"[1, 2]\"}', 'lax $.a'\n> OMIT 
QUOTES);</literal>\n> + <literal>JSON_QUERY(jsonb '{\"a\": \"[1, 2]\"}', 'lax $.a' OMIT\n> QUOTES);</literal>\n> <returnvalue>[1, 2]</returnvalue>\n> </para>\n> <para>\n> - <literal>select json_query(jsonb '{\"a\": \"[1, 2]\"}', 'lax $.a'\n> RETURNING int[] OMIT QUOTES ERROR ON ERROR);</literal>\n> + <literal>JSON_QUERY(jsonb '{\"a\": \"[1, 2]\"}', 'lax $.a'\n> RETURNING int[] OMIT QUOTES ERROR ON ERROR);</literal>\n>\n> These two example queries don't need semicolons at the end?\n\nFixed.\n\nAlso, I've removed the following sentence in the description of\nJSON_EXISTS, because a) it seems out of place\n\n- <para>\n- Note that if the <replaceable>path_expression</replaceable> is\n- <literal>strict</literal> and <literal>ON ERROR</literal> behavior is\n- <literal>ERROR</literal>, an error is generated if it yields no items.\n- </para>\n\nand b) does not seem correct:\n\nSELECT JSON_EXISTS(jsonb '{\"key1\": [1,2,3]}', 'strict $.key1[*] ? (@ >\n3)' ERROR ON ERROR);\n json_exists\n-------------\n f\n(1 row)\n\npath_expression being strict or lax only matters inside\njsonpath_exec.c, not in ExecEvalJsonPathExpr().\n\nUpdated patch attached.\n\n-- \nThanks, Amit Langote", "msg_date": "Mon, 8 Jul 2024 21:57:00 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "On Mon, Jul 8, 2024 at 8:57 PM Amit Langote <[email protected]> wrote:\n>\n> Updated patch attached.\n>\n\n Returns true if the SQL/JSON <replaceable>path_expression</replaceable>\n- applied to the <replaceable>context_item</replaceable> using the\n- <literal>PASSING</literal> <replaceable>value</replaceable>s yields any\n- items.\n+ applied to the <replaceable>context_item</replaceable> doesn't yield\n+ any items.\nshould \"doesn't\" be removed?\nshould it be \"yields\"?\n\n\n+ set. The <literal>ON ERROR</literal> clause specifies the behavior\n+ if an error occurs when evaluating\n<replaceable>path_expression</replaceable>,\n+ when coercing the result value to the\n<literal>RETURNING</literal> type,\n+ or when evaluating the <literal>ON EMPTY</literal> expression if the\n+ <replaceable>path_expression</replaceable> evaluation results in an\n+ empty set.\nlast sentence, \"in an empty set.\" should be \"is an empty set\"\n\n\nOther than that, it looks good to me.\n\n\n", "msg_date": "Tue, 9 Jul 2024 09:38:53 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "On Tue, Jul 9, 2024 at 10:39 AM jian he <[email protected]> wrote:\n> On Mon, Jul 8, 2024 at 8:57 PM Amit Langote <[email protected]> wrote:\n> >\n> > Updated patch attached.\n> >\n>\n> Returns true if the SQL/JSON <replaceable>path_expression</replaceable>\n> - applied to the <replaceable>context_item</replaceable> using the\n> - <literal>PASSING</literal> <replaceable>value</replaceable>s yields any\n> - items.\n> + applied to the <replaceable>context_item</replaceable> doesn't yield\n> + any items.\n> should \"doesn't\" be removed?\n> should it be \"yields\"?\n\nOops, fixed.\n\n> + set. 
The <literal>ON ERROR</literal> clause specifies the behavior\n> + if an error occurs when evaluating\n> <replaceable>path_expression</replaceable>,\n> + when coercing the result value to the\n> <literal>RETURNING</literal> type,\n> + or when evaluating the <literal>ON EMPTY</literal> expression if the\n> + <replaceable>path_expression</replaceable> evaluation results in an\n> + empty set.\n> last sentence, \"in an empty set.\" should be \"is an empty set\"\n\n\"results in an empty set\" here means \"the result of the evaluation is\nan empty set\", similar to:\n\n$ git grep \"results in an\" doc\ndoc/src/sgml/charset.sgml: results in an error, because even though\nthe <literal>||</literal> operator\ndoc/src/sgml/plpgsql.sgml: an omitted <literal>ELSE</literal>\nclause results in an error rather\ndoc/src/sgml/plpython.sgml: If the second <literal>UPDATE</literal>\nstatement results in an\ndoc/src/sgml/pltcl.sgml: If the second <command>UPDATE</command>\nstatement results in an\n\nMaybe I could just replace that by \"returns an empty set\".\n\nWill push shortly after making those changes.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Tue, 9 Jul 2024 12:30:35 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" }, { "msg_contents": "On Tue, Jul 9, 2024 at 12:30 PM Amit Langote <[email protected]> wrote:\n> On Tue, Jul 9, 2024 at 10:39 AM jian he <[email protected]> wrote:\n> > On Mon, Jul 8, 2024 at 8:57 PM Amit Langote <[email protected]> wrote:\n> > >\n> > > Updated patch attached.\n> > >\n> >\n> > Returns true if the SQL/JSON <replaceable>path_expression</replaceable>\n> > - applied to the <replaceable>context_item</replaceable> using the\n> > - <literal>PASSING</literal> <replaceable>value</replaceable>s yields any\n> > - items.\n> > + applied to the <replaceable>context_item</replaceable> doesn't yield\n> > + any items.\n> > should \"doesn't\" be removed?\n> > should it be \"yields\"?\n>\n> Oops, fixed.\n>\n> > + set. The <literal>ON ERROR</literal> clause specifies the behavior\n> > + if an error occurs when evaluating\n> > <replaceable>path_expression</replaceable>,\n> > + when coercing the result value to the\n> > <literal>RETURNING</literal> type,\n> > + or when evaluating the <literal>ON EMPTY</literal> expression if the\n> > + <replaceable>path_expression</replaceable> evaluation results in an\n> > + empty set.\n> > last sentence, \"in an empty set.\" should be \"is an empty set\"\n>\n> \"results in an empty set\" here means \"the result of the evaluation is\n> an empty set\", similar to:\n>\n> $ git grep \"results in an\" doc\n> doc/src/sgml/charset.sgml: results in an error, because even though\n> the <literal>||</literal> operator\n> doc/src/sgml/plpgsql.sgml: an omitted <literal>ELSE</literal>\n> clause results in an error rather\n> doc/src/sgml/plpython.sgml: If the second <literal>UPDATE</literal>\n> statement results in an\n> doc/src/sgml/pltcl.sgml: If the second <command>UPDATE</command>\n> statement results in an\n>\n> Maybe I could just replace that by \"returns an empty set\".\n>\n> Will push shortly after making those changes.\n\nAnd...pushed. Thanks for the reviews.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Tue, 9 Jul 2024 16:19:35 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc Rework: Section 9.16.13 SQL/JSON Query Functions" } ]
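
[Editorial note, not part of the original thread: a few self-contained examples of the JSON_QUERY/JSON_EXISTS behaviour discussed above. The statements are assembled from the clauses mentioned in the thread; the results shown in comments are what the semantics described above imply and are meant as an illustrative sketch, not as authoritative output.]

```
-- Wrapper behaviour: with no wrapper clause the default acts as WITHOUT
-- WRAPPER, and WITH WRAPPER is taken to mean WITH UNCONDITIONAL WRAPPER.
SELECT JSON_QUERY(jsonb '{"a": [1]}', 'lax $.a');                           -- [1]
SELECT JSON_QUERY(jsonb '{"a": [1]}', 'lax $.a' WITH WRAPPER);              -- [[1]]
SELECT JSON_QUERY(jsonb '{"a": [1]}', 'lax $.a' WITH CONDITIONAL WRAPPER);  -- [1]

-- Multiple values without a wrapper are treated as an error; the default
-- NULL ON ERROR turns that into an SQL NULL, while ERROR ON ERROR throws.
SELECT JSON_QUERY(jsonb '{"a": [1, 2]}', 'lax $.a[*]');                     -- NULL
SELECT JSON_QUERY(jsonb '{"a": [1, 2]}', 'lax $.a[*]' WITH WRAPPER);        -- [1, 2]

-- OMIT QUOTES strips the surrounding quotes from a scalar string result.
SELECT JSON_QUERY(jsonb '{"a": "[1, 2]"}', 'lax $.a' OMIT QUOTES);          -- [1, 2]

-- PASSING makes values available to the path expression as named variables.
SELECT JSON_EXISTS(jsonb '{"key1": [1, 2, 3]}',
                   'strict $.key1[*] ? (@ > $x)' PASSING 2 AS x);           -- t
```

Nothing here is new behaviour; it is only a compact illustration of the semantics the reworked documentation text describes.
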
[ { "msg_contents": "for json format, add a new option to let the explain json output also\ninclude the actual query string.\n\nit can make json usage more convenient.\nNow you only need to grab the json output, no need to\ncollect another explain statement and extract the actual query from\nthe explain statement.\n\nincluding_query name is so far what i can come up with, if people have\nbetter ideas, then we can change.\n\nexample:\nexplain (analyze,including_query on, format json) select 1;\n QUERY PLAN\n-------------------------------------\n [ +\n {\"Query\": \"select 1\"}, +\n { +\n \"Plan\": { +\n \"Node Type\": \"Result\", +\n \"Parallel Aware\": false, +\n \"Async Capable\": false, +\n \"Startup Cost\": 0.00, +\n \"Total Cost\": 0.01, +\n \"Plan Rows\": 1, +\n \"Plan Width\": 4, +\n \"Actual Startup Time\": 0.001,+\n \"Actual Total Time\": 0.001, +\n \"Actual Rows\": 1, +\n \"Actual Loops\": 1 +\n }, +\n \"Planning Time\": 0.119, +\n \"Triggers\": [ +\n ], +\n \"Execution Time\": 0.033 +\n } +\n ]\n(1 row)\n\n\n", "msg_date": "Tue, 25 Jun 2024 16:54:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "add a new explain option including_query for include query string\n inside the json plan output" }, { "msg_contents": "On Tue, 25 Jun 2024 at 10:55, jian he <[email protected]> wrote:\n>\n> for json format, add a new option to let the explain json output also\n> include the actual query string.\n\nHow would this cooperate with e.g. EXPLAIN (...) EXECUTE\nmy_prepared_statement? Would this query be the prepared statement's\nquery, or the top-level EXECUTE statement?\n\n> it can make json usage more convenient.\n> Now you only need to grab the json output, no need to\n> collect another explain statement and extract the actual query from\n> the explain statement.\n\nWouldn't the user be able to keep track of the query they wanted\nexplained by themselves? If not, why?\n\n> example:\n> explain (analyze,including_query on, format json) select 1;\n> QUERY PLAN\n> -------------------------------------\n> [ +\n> {\"Query\": \"select 1\"}, +\n> { +\n> \"Plan\": { +\n\nIf we were to add the query to the explain output, I think it should\nbe a top-level key in the same JSON object that holds the \"Plan\",\nTriggers, and \"Execution Time\" keys.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 25 Jun 2024 12:30:27 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add a new explain option including_query for include query string\n inside the json plan output" }, { "msg_contents": "Matthias van de Meent <[email protected]> writes:\n> On Tue, 25 Jun 2024 at 10:55, jian he <[email protected]> wrote:\n>> for json format, add a new option to let the explain json output also\n>> include the actual query string.\n\n> Wouldn't the user be able to keep track of the query they wanted\n> explained by themselves? If not, why?\n\nIndeed. I do not think this is a good idea at all, even if the\nquestion of \"where did you get the query string from\" could be\nresolved satisfactorily.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2024 10:23:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add a new explain option including_query for include query string\n inside the json plan output" } ]
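
[Editorial note, not part of the original thread: a sketch of how the query text and the JSON plan can already be paired without a new EXPLAIN option, following the point that callers can keep track of the query themselves. The helper name and shape below are invented for illustration; EXECUTE ... INTO and jsonb_build_object are existing PostgreSQL features.]

```
-- Hypothetical helper (name made up): run EXPLAIN (FORMAT JSON) for a query
-- string and return it together with the query text as one JSON object,
-- with the query as a top-level key next to the plan.
CREATE FUNCTION explain_with_query(query text) RETURNS jsonb
LANGUAGE plpgsql AS $$
DECLARE
    plan text;
BEGIN
    -- With FORMAT JSON, EXPLAIN returns the whole plan as a single text value.
    EXECUTE 'EXPLAIN (ANALYZE, FORMAT JSON) ' || query INTO plan;
    RETURN jsonb_build_object('Query', query, 'Explain', plan::jsonb);
END
$$;

SELECT explain_with_query('select 1');
```

With something along these lines the query string ends up as a top-level key alongside the plan, without changing EXPLAIN's own output format.
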
[ { "msg_contents": "I’m a complete novice, although I’ve dipped my toes in Admin waters a\ncouple of times in my many years of using Linux.\n\nCan anyone recommend a good book on installing Postgres on multiple,\nconnected multiuser systems, tuning it, managing users, backups, updated,\netc.\n\nA cookbook/checklist approach would be great. I’ve bought several books\nover the years but a more current one is desirable.\n\nThanks for any help.\n\nBest regards,\n\n-Tom\n\nI’m a complete novice, although I’ve dipped my toes in Admin waters a couple of times in my many years of using Linux.Can anyone recommend a good book on installing Postgres on multiple, connected multiuser systems, tuning it, managing users, backups, updated, etc.A cookbook/checklist approach would be great. I’ve bought several books over the years but a more current one is desirable.Thanks for any help.Best regards,-Tom", "msg_date": "Tue, 25 Jun 2024 09:04:13 -0500", "msg_from": "Tom Browder <[email protected]>", "msg_from_op": true, "msg_subject": "Recommended books for admin" }, { "msg_contents": "Hi Tom\n\nThere is alot of stuff available online, you just need to find it, also the\nOfficial PG documentation is extensive too..\n\nRegards\nKashif Zeeshan\n\nOn Tue, Jun 25, 2024 at 7:04 PM Tom Browder <[email protected]> wrote:\n\n> I’m a complete novice, although I’ve dipped my toes in Admin waters a\n> couple of times in my many years of using Linux.\n>\n> Can anyone recommend a good book on installing Postgres on multiple,\n> connected multiuser systems, tuning it, managing users, backups, updated,\n> etc.\n>\n> A cookbook/checklist approach would be great. I’ve bought several books\n> over the years but a more current one is desirable.\n>\n> Thanks for any help.\n>\n> Best regards,\n>\n> -Tom\n>\n\nHi TomThere is alot of stuff available online, you just need to find it, also the Official PG documentation is extensive too..RegardsKashif ZeeshanOn Tue, Jun 25, 2024 at 7:04 PM Tom Browder <[email protected]> wrote:I’m a complete novice, although I’ve dipped my toes in Admin waters a couple of times in my many years of using Linux.Can anyone recommend a good book on installing Postgres on multiple, connected multiuser systems, tuning it, managing users, backups, updated, etc.A cookbook/checklist approach would be great. I’ve bought several books over the years but a more current one is desirable.Thanks for any help.Best regards,-Tom", "msg_date": "Tue, 25 Jun 2024 19:08:09 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended books for admin" }, { "msg_contents": "Check out the left side bar\n\nhttps://www.postgresql.org/docs/\n\n> On Jun 25, 2024, at 10:04 AM, Tom Browder <[email protected]> wrote:\n> \n> I’m a complete novice, although I’ve dipped my toes in Admin waters a couple of times in my many years of using Linux.\n> \n> Can anyone recommend a good book on installing Postgres on multiple, connected multiuser systems, tuning it, managing users, backups, updated, etc.\n> \n> A cookbook/checklist approach would be great. 
I’ve bought several books over the years but a more current one is desirable.\n> \n> Thanks for any help.\n> \n> Best regards,\n> \n> -Tom", "msg_date": "Tue, 25 Jun 2024 10:09:05 -0400", "msg_from": "Bill Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended books for admin" }, { "msg_contents": "Hi ,\n\nHere is a lately published book\n\nhttps://www.amazon.com/PostgreSQL-Administration-Cookbook-real-world-challenges-ebook/dp/B0CP5PPSTQ\n\n\nMuhammad Ikram\n\nBitnine Global\n\nOn Tue, 25 Jun 2024 at 19:09, Bill Smith <[email protected]>\nwrote:\n\n> Check out the left side bar\n>\n> Documentation <https://www.postgresql.org/docs/>\n> postgresql.org <https://www.postgresql.org/docs/>\n> [image: favicon.ico] <https://www.postgresql.org/docs/>\n> <https://www.postgresql.org/docs/>\n>\n>\n> On Jun 25, 2024, at 10:04 AM, Tom Browder <[email protected]> wrote:\n>\n> I’m a complete novice, although I’ve dipped my toes in Admin waters a\n> couple of times in my many years of using Linux.\n>\n> Can anyone recommend a good book on installing Postgres on multiple,\n> connected multiuser systems, tuning it, managing users, backups, updated,\n> etc.\n>\n> A cookbook/checklist approach would be great. I’ve bought several books\n> over the years but a more current one is desirable.\n>\n> Thanks for any help.\n>\n> Best regards,\n>\n> -Tom\n>\n>\n>", "msg_date": "Tue, 25 Jun 2024 19:15:39 +0500", "msg_from": "Muhammad Ikram <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended books for admin" }, { "msg_contents": "On Tue, Jun 25, 2024 at 9:15 AM Muhammad Ikram <[email protected]> wrote:\n> Hi ,\n> Here is a lately published book\n> https://www.amazon.com/PostgreSQL-Administration-Cookbook-real-world-challenges-ebook/dp/B0CP5PPSTQ\n\nThanks, Muhammed, I just bought it.\n\nAnd thanks to all who answered!\n\nBest regards.\n\n-Tom\n\n\n", "msg_date": "Wed, 26 Jun 2024 15:39:24 -0500", "msg_from": "Tom Browder <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommended books for admin" }, { "msg_contents": "On 6/26/24 22:39, Tom Browder wrote:\n> On Tue, Jun 25, 2024 at 9:15 AM Muhammad Ikram <[email protected]> wrote:\n>> Hi ,\n>> Here is a lately published book\n>> https://www.amazon.com/PostgreSQL-Administration-Cookbook-real-world-challenges-ebook/dp/B0CP5PPSTQ\n> \n> Thanks, Muhammed, I just bought it.\n> \n> And thanks to all who answered!\n> \n\nFWIW there's actually a page with a list of more books:\n\n https://www.postgresql.org/docs/books/\n\nI'd say most of the books are pretty good, from experienced authors.\nIt's more about the angle of each book - sometimes it's for DBAs,\nsometimes for developers, etc.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Jul 2024 20:18:06 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended books for admin" }, { "msg_contents": "Hi,\n\n> Can anyone recommend a good book on installing Postgres on multiple, connected multiuser systems, tuning it, managing users, backups, updated, etc.\n>\n> A cookbook/checklist approach would be great. 
I’ve bought several books over the years but a more current one is desirable.\n>\n> Thanks for any help.\n\nThere are many books written but not all of them are equally well\nwritten to be honest.\n\nHere are several good ones in the recommended read order:\n\n- PostgreSQL Configuration by Baji Shaik (*)\n- PostgreSQL Query Optimization by Henrietta Dombrovskaya et al (**)\n- The Art of PostgreSQL by Dimitri Fontaine\n- PostgreSQL Server Programming by Hannu Krosing et al\n- PostgreSQL 14 Internals by Egor Rogov (***)\n\n(*) replication is better described in the official documentation [1]\n(**) doesn't cover such features as full-text search or PostGIS\n(***) despite the title it's written from the DBAs perspective\n\n[1]: https://www.postgresql.org/docs/current/runtime-config-replication.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 4 Jul 2024 13:04:27 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended books for admin" } ]
[ { "msg_contents": "Hi,\n\nDuring backend initialisation, pgStat DSA is attached using\ndsa_attach_in_place with a NULL segment. The NULL segment means that\nthere's no callback to release the DSA when the process exits.\npgstat_detach_shmem only calls dsa_detach which, as mentioned in the\nfunction's comment, doesn't include releasing and doesn't decrement the\nreference count of pgStat DSA.\n\nThus, every time a backend is created, pgStat DSA's refcnt is incremented\nbut never decremented when the backend shutdown. It will eventually\noverflow and reach 0, triggering the \"could not attach to dynamic shared\narea\" error on all newly created backends. When this state is reached, the\nonly way to recover is to restart the db to reset the counter.\n\nThe issue can be visible by calling dsa_dump in pgstat_detach_shmem and\nchecking that refcnt's value is continuously increasing as new backends are\ncreated. It is also possible to reach the state where all connections are\nrefused by editing the refcnt manually with lldb/gdb (The alternative,\ncreating enough backends to reach 0 exists but can take some time). Setting\nit to -10 and then opening 10 connections will eventually generate the\n\"could not attach\" error.\n\nThis patch fixes this issue by releasing pgStat DSA with\ndsa_release_in_place during pgStat shutdown to correctly decrement the\nrefcnt.\n\nRegards,\nAnthonin", "msg_date": "Tue, 25 Jun 2024 17:01:55 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Fix possible overflow of pg_stat DSA's refcnt" }, { "msg_contents": "On Tue, Jun 25, 2024 at 05:01:55PM +0200, Anthonin Bonnefoy wrote:\n> During backend initialisation, pgStat DSA is attached using\n> dsa_attach_in_place with a NULL segment. The NULL segment means that\n> there's no callback to release the DSA when the process exits.\n> pgstat_detach_shmem only calls dsa_detach which, as mentioned in the\n> function's comment, doesn't include releasing and doesn't decrement the\n> reference count of pgStat DSA.\n> \n> Thus, every time a backend is created, pgStat DSA's refcnt is incremented\n> but never decremented when the backend shutdown. It will eventually\n> overflow and reach 0, triggering the \"could not attach to dynamic shared\n> area\" error on all newly created backends. When this state is reached, the\n> only way to recover is to restart the db to reset the counter.\n\nVery good catch! It looks like you have seen that in the field, then.\nSad face.\n\n> This patch fixes this issue by releasing pgStat DSA with\n> dsa_release_in_place during pgStat shutdown to correctly decrement the\n> refcnt.\n\nSounds logic to me to do that in the pgstat shutdown callback, ordered\nwith the dsa_detach calls in a single location rather than registering\na different callback to do the same job. Will fix and backpatch,\nthanks for the report!\n--\nMichael", "msg_date": "Wed, 26 Jun 2024 14:39:57 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix possible overflow of pg_stat DSA's refcnt" }, { "msg_contents": "On Wed, Jun 26, 2024 at 7:40 AM Michael Paquier <[email protected]> wrote:\n>\n> Very good catch! It looks like you have seen that in the field, then.\n> Sad face.\n\nYeah, this happened last week on one of our replicas (version 15.5)\nlast week that had 134 days uptime. 
We are doing a lot of parallel\nqueries on this cluster so the combination of high uptime plus\nparallel workers creation eventually triggered the issue.\n\n> Will fix and backpatch, thanks for the report!\n\nThanks for handling this and for the quick answer!\n\nRegards,\nAnthonin\n\n\n", "msg_date": "Wed, 26 Jun 2024 08:48:06 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix possible overflow of pg_stat DSA's refcnt" }, { "msg_contents": "On Wed, Jun 26, 2024 at 08:48:06AM +0200, Anthonin Bonnefoy wrote:\n> Yeah, this happened last week on one of our replicas (version 15.5)\n> last week that had 134 days uptime. We are doing a lot of parallel\n> queries on this cluster so the combination of high uptime plus\n> parallel workers creation eventually triggered the issue.\n\nIt is not surprising that it would take this much amount of time\nbefore detecting it. I've applied the patch down to 15. Thanks a lot\nfor the analysis and the patch!\n--\nMichael", "msg_date": "Thu, 27 Jun 2024 09:48:26 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix possible overflow of pg_stat DSA's refcnt" } ]
[ { "msg_contents": "Hello,\n\nIt's possible I'm the only one who's been in this situation, but I've\nmultiple times found myself explaining to a user how column DEFAULT\nexpressions work: namely how the quoting on an expression following\nthe keyword DEFAULT controls whether or not the expression is\nevaluated at the time of the DDL statement or at the time of an\ninsertion.\n\nIn my experience this is non-obvious to users, and the quoting makes a\nbig difference.\n\nIs this something that we should document explicitly? I don't see it\ncalled out in the CREATE TABLE reference page, but it's possible I'm\nmissing something.\n\nThanks,\nJames Coleman\n\n\n", "msg_date": "Tue, 25 Jun 2024 16:51:05 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Should we document how column DEFAULT expressions work?" }, { "msg_contents": "James Coleman <[email protected]> writes:\n> It's possible I'm the only one who's been in this situation, but I've\n> multiple times found myself explaining to a user how column DEFAULT\n> expressions work: namely how the quoting on an expression following\n> the keyword DEFAULT controls whether or not the expression is\n> evaluated at the time of the DDL statement or at the time of an\n> insertion.\n\nUh ... what? I recall something about that with respect to certain\nfeatures such as nextval(), but you're making it sound like there\nis something generic going on with DEFAULT.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2024 16:59:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Tue, Jun 25, 2024 at 4:59 PM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > It's possible I'm the only one who's been in this situation, but I've\n> > multiple times found myself explaining to a user how column DEFAULT\n> > expressions work: namely how the quoting on an expression following\n> > the keyword DEFAULT controls whether or not the expression is\n> > evaluated at the time of the DDL statement or at the time of an\n> > insertion.\n>\n> Uh ... what? I recall something about that with respect to certain\n> features such as nextval(), but you're making it sound like there\n> is something generic going on with DEFAULT.\n\nHmm, I guess I'd never considered anything besides cases like\nnextval() and now(), but I see now that now() must also be special\ncased (when quoted) since 'date_trunc(day, now())'::timestamp doesn't\nwork but 'now()'::timestamp does.\n\nSo I guess what I'm asking about would be limited to those cases (I\nassume there are a few others...but I haven't gone digging through the\nsource yet).\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Tue, 25 Jun 2024 19:05:04 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "James Coleman <[email protected]> writes:\n> On Tue, Jun 25, 2024 at 4:59 PM Tom Lane <[email protected]> wrote:\n>> Uh ... what? 
I recall something about that with respect to certain\n>> features such as nextval(), but you're making it sound like there\n>> is something generic going on with DEFAULT.\n\n> Hmm, I guess I'd never considered anything besides cases like\n> nextval() and now(), but I see now that now() must also be special\n> cased (when quoted) since 'date_trunc(day, now())'::timestamp doesn't\n> work but 'now()'::timestamp does.\n\nHmm, both of those behaviors are documented, but not in the same place\nand possibly not anywhere near where you looked for info about\nDEFAULT. For instance, the Tip at the bottom of section 9.9.5\n\nhttps://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT\n\nexplains about how 'now'::timestamp isn't what to use in DEFAULT.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2024 19:11:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Tue, Jun 25, 2024 at 4:11 PM Tom Lane <[email protected]> wrote:\n\n> James Coleman <[email protected]> writes:\n> > On Tue, Jun 25, 2024 at 4:59 PM Tom Lane <[email protected]> wrote:\n> >> Uh ... what? I recall something about that with respect to certain\n> >> features such as nextval(), but you're making it sound like there\n> >> is something generic going on with DEFAULT.\n>\n> > Hmm, I guess I'd never considered anything besides cases like\n> > nextval() and now(), but I see now that now() must also be special\n> > cased (when quoted) since 'date_trunc(day, now())'::timestamp doesn't\n> > work but 'now()'::timestamp does.\n>\n> Hmm, both of those behaviors are documented, but not in the same place\n> and possibly not anywhere near where you looked for info about\n> DEFAULT. For instance, the Tip at the bottom of section 9.9.5\n>\n>\n> https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT\n>\n> explains about how 'now'::timestamp isn't what to use in DEFAULT.\n>\n>\nI'd suggest adding to:\n\nDEFAULT default_expr\nThe DEFAULT clause assigns a default data value for the column whose column\ndefinition it appears within. The value is any variable-free expression (in\nparticular, cross-references to other columns in the current table are not\nallowed). Subqueries are not allowed either. The data type of the default\nexpression must match the data type of the column.\n\nThe default expression will be used in any insert operation that does not\nspecify a value for the column. If there is no default for a column, then\nthe default is null.\n\n+ Be aware that the [special timestamp values 1] are resolved immediately,\nnot upon insert. Use the [date/time constructor functions 2] to produce a\ntime relative to the future insertion.\n\n[1]\nhttps://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-SPECIAL-VALUES\n[2]\nhttps://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT\n\nDavid J.\n\nOn Tue, Jun 25, 2024 at 4:11 PM Tom Lane <[email protected]> wrote:James Coleman <[email protected]> writes:\n> On Tue, Jun 25, 2024 at 4:59 PM Tom Lane <[email protected]> wrote:\n>> Uh ... what?  
I recall something about that with respect to certain\n>> features such as nextval(), but you're making it sound like there\n>> is something generic going on with DEFAULT.\n\n> Hmm, I guess I'd never considered anything besides cases like\n> nextval() and now(), but I see now that now() must also be special\n> cased (when quoted) since 'date_trunc(day, now())'::timestamp doesn't\n> work but 'now()'::timestamp does.\n\nHmm, both of those behaviors are documented, but not in the same place\nand possibly not anywhere near where you looked for info about\nDEFAULT.  For instance, the Tip at the bottom of section 9.9.5\n\nhttps://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT\n\nexplains about how 'now'::timestamp isn't what to use in DEFAULT.I'd suggest adding to:DEFAULT default_expr The DEFAULT clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (in particular, cross-references to other columns in the current table are not allowed). Subqueries are not allowed either. The data type of the default expression must match the data type of the column.The default expression will be used in any insert operation that does not specify a value for the column. If there is no default for a column, then the default is null.+ Be aware that the [special timestamp values 1] are resolved immediately, not upon insert.  Use the [date/time constructor functions 2] to produce a time relative to the future insertion.[1] https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-SPECIAL-VALUES[2] https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-CURRENTDavid J.", "msg_date": "Tue, 25 Jun 2024 18:30:42 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Wed, 26 Jun 2024 at 13:31, David G. Johnston\n<[email protected]> wrote:\n> I'd suggest adding to:\n>\n> DEFAULT default_expr\n> The DEFAULT clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (in particular, cross-references to other columns in the current table are not allowed). Subqueries are not allowed either. The data type of the default expression must match the data type of the column.\n>\n> The default expression will be used in any insert operation that does not specify a value for the column. If there is no default for a column, then the default is null.\n>\n> + Be aware that the [special timestamp values 1] are resolved immediately, not upon insert. Use the [date/time constructor functions 2] to produce a time relative to the future insertion.\n\nFWIW, I disagree that we need to write anything about that in this\npart of the documentation. I think any argument for doing this could\nequally be applied to something like re-iterating what the operator\nprecedence rules for arithmetic are, and I don't think that should be\nmentioned. Also, what about all the other places where someone could\nuse one of the special timestamp input values? Should CREATE VIEW get\na memo too? How about PREPARE?\n\nIf people don't properly understand these special timestamp input\nvalues, then maybe the documentation in [1] needs to be improved. At\nthe moment the details are within parentheses. 
Namely \"(In particular,\nnow and related strings are converted to a specific time value as soon\nas they are read.)\". Maybe it would be better to be more explicit\nthere and mention that these are special values that the input\nfunction understands which are translated to actual timestamp values\nwhen the type's input function is called. That could maybe be tied\ninto the DEFAULT clause documentation to mention that the input\nfunction for constant values is called at DML time rather than DDL\ntime. That way, we're not adding these (unsustainable) special cases\nto the documentation.\n\nDavid\n\n[1] https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-SPECIAL-VALUES\n\n\n", "msg_date": "Wed, 26 Jun 2024 16:50:00 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> If people don't properly understand these special timestamp input\n> values, then maybe the documentation in [1] needs to be improved. At\n> the moment the details are within parentheses. Namely \"(In particular,\n> now and related strings are converted to a specific time value as soon\n> as they are read.)\". Maybe it would be better to be more explicit\n> there and mention that these are special values that the input\n> function understands which are translated to actual timestamp values\n> when the type's input function is called. That could maybe be tied\n> into the DEFAULT clause documentation to mention that the input\n> function for constant values is called at DML time rather than DDL\n> time. That way, we're not adding these (unsustainable) special cases\n> to the documentation.\n\nThis sounds like a reasonable approach to me for the\nmagic-input-values issue. Do we want to do anything about\nnextval()? I guess if you hold your head at the correct\nangle, that's also a magic-input-value issue, in the sense\nthat the question is when does regclass input get resolved.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2024 01:12:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Tue, Jun 25, 2024 at 9:50 PM David Rowley <[email protected]> wrote:\n\n> On Wed, 26 Jun 2024 at 13:31, David G. Johnston\n> <[email protected]> wrote:\n> > I'd suggest adding to:\n> >\n> > DEFAULT default_expr\n> > The DEFAULT clause assigns a default data value for the column whose\n> column definition it appears within. The value is any variable-free\n> expression (in particular, cross-references to other columns in the current\n> table are not allowed). Subqueries are not allowed either. The data type of\n> the default expression must match the data type of the column.\n> >\n> > The default expression will be used in any insert operation that does\n> not specify a value for the column. If there is no default for a column,\n> then the default is null.\n> >\n> > + Be aware that the [special timestamp values 1] are resolved\n> immediately, not upon insert. 
Use the [date/time constructor functions 2]\n> to produce a time relative to the future insertion.\n>\n\nAnnoyingly even this advice isn't correct:\n\npostgres=# create table tdts2 (ts timestamptz default 'now()');\nCREATE TABLE\npostgres=# \\d tdts2\n Table \"public.tdts2\"\n Column | Type | Collation | Nullable |\n Default\n\n--------+--------------------------+-----------+----------+-------------------------------------------\n----------------\n ts | timestamp with time zone | | | '2024-06-25\n18:05:33.055377-07'::timestamp\n with time zone\n\nI expected writing what looked like the function now() to be delayed\nevaluated but since I put it into quotes, the OPs complaint, it got read as\nthe literal with ignored extra bits.\n\n\n> FWIW, I disagree that we need to write anything about that in this\n> part of the documentation. I think any argument for doing this could\n> equally be applied to something like re-iterating what the operator\n> precedence rules for arithmetic are, and I don't think that should be\n> mentioned.\n\n\nI disagree on this equivalence. The time literals are clear deviations\nfrom expected behavior. Knowing operator precedence rules, they apply\neverywhere equally. And we should document the deviations directly where\nthey happen. Even if it's just a short link back to the source that\ndescribes the deviation. I'm fine with something less verbose pointing\nonly to the data types page, but not with nothing.\n\nAlso, what about all the other places where someone could\n> use one of the special timestamp input values? Should CREATE VIEW get\n> a memo too? How about PREPARE?\n>\n\nYes.\n\n\n> If people don't properly understand these special timestamp input\n> values, then maybe the documentation in [1] needs to be improved.\n\n\nRecall, and awareness, is the greater issue, not comprehension. This\nintends to increase the former. I don't believe the latter is an issue,\nthough I haven't deep dived into it.\n\nAnd the whole type casting happening right away just seems misleading.\n\npostgres=# create table testboold2 (expr boolean default boolean 'false');\nCREATE TABLE\npostgres=# \\d testboold2\n Table \"public.testboold2\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n expr | boolean | | | false\n\nI would expect 'f' in the default column if the boolean casting of the\nliteral happened sooner. Or I'd expect to see \"boolean 'false'\" as the\ndefault expression if it is captured as-is.\n\nSo yes, saving an expression into the default column has nuances that\nshould be documented where default is defined.\n\nMaybe the wording needs to be:\n\n\"If the default expression contains any constants [1] they are converted\ninto their typed value during create table execution. Thus time constants\n[1] save into the default expression the time the command was executed.\"\n\n[1]\nhttps://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS\n[2]\nhttps://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-SPECIAL-VALUES\n\nI'd be happy to be pointed to other constants that resolve to an\nexecution-time specific environment in a similar manner. If there is\nanother one I'll rethink the wisdom of trying to document all of them in\neach place. But reminding people that time is special and we have these\nspecial values seems to provide meaningful reader benefit for the cost of a\ncouple of sentences repeated in a few places. 
That were valid a decade ago\nno more or less than they are valid now.\n\nDavid J.\n\nOn Tue, Jun 25, 2024 at 9:50 PM David Rowley <[email protected]> wrote:On Wed, 26 Jun 2024 at 13:31, David G. Johnston\n<[email protected]> wrote:\n> I'd suggest adding to:\n>\n> DEFAULT default_expr\n> The DEFAULT clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (in particular, cross-references to other columns in the current table are not allowed). Subqueries are not allowed either. The data type of the default expression must match the data type of the column.\n>\n> The default expression will be used in any insert operation that does not specify a value for the column. If there is no default for a column, then the default is null.\n>\n> + Be aware that the [special timestamp values 1] are resolved immediately, not upon insert.  Use the [date/time constructor functions 2] to produce a time relative to the future insertion.Annoyingly even this advice isn't correct:postgres=# create table tdts2 (ts timestamptz default 'now()');CREATE TABLEpostgres=# \\d tdts2                                                 Table \"public.tdts2\" Column |           Type           | Collation | Nullable |                          Default                          --------+--------------------------+-----------+----------+----------------------------------------------------------- ts     | timestamp with time zone |           |          | '2024-06-25 18:05:33.055377-07'::timestamp with time zoneI expected writing what looked like the function now() to be delayed evaluated but since I put it into quotes, the OPs complaint, it got read as the literal with ignored extra bits.\n\nFWIW, I disagree that we need to write anything about that in this\npart of the documentation.  I think any argument for doing this could\nequally be applied to something like re-iterating what the operator\nprecedence rules for arithmetic are, and I don't think that should be\nmentioned.I disagree on this equivalence.  The time literals are clear deviations from expected behavior.  Knowing operator precedence rules, they apply everywhere equally.  And we should document the deviations directly where they happen.  Even if it's just a short link back to the source that describes the deviation.  I'm fine with something less verbose pointing only to the data types page, but not with nothing. Also, what about all the other places where someone could\nuse one of the special timestamp input values? Should CREATE VIEW get\na memo too?  How about PREPARE?Yes.\n\nIf people don't properly understand these special timestamp input\nvalues, then maybe the documentation in [1] needs to be improved.Recall, and awareness, is the greater issue, not comprehension.  This intends to increase the former.  I don't believe the latter is an issue, though I haven't deep dived into it.And the whole type casting happening right away just seems misleading.postgres=# create table testboold2 (expr boolean default boolean 'false');CREATE TABLEpostgres=# \\d testboold2             Table \"public.testboold2\" Column |  Type   | Collation | Nullable | Default --------+---------+-----------+----------+--------- expr   | boolean |           |          | falseI would expect 'f' in the default column if the boolean casting of the literal happened sooner.  
Or I'd expect to see \"boolean 'false'\" as the default expression if it is captured as-is.So yes, saving an expression into the default column has nuances that should be documented where default is defined.Maybe the wording needs to be:\"If the default expression contains any constants [1] they are converted into their typed value during create table execution.  Thus time constants [1] save into the default expression the time the command was executed.\"[1] https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS[2] https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-SPECIAL-VALUESI'd be happy to be pointed to other constants that resolve to an execution-time specific environment in a similar manner.  If there is another one I'll rethink the wisdom of trying to document all of them in each place.  But reminding people that time is special and we have these special values seems to provide meaningful reader benefit for the cost of a couple of sentences repeated in a few places.  That were valid a decade ago no more or less than they are valid now.David J.", "msg_date": "Tue, 25 Jun 2024 22:35:29 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Tue, Jun 25, 2024 at 10:12 PM Tom Lane <[email protected]> wrote:\n\n> David Rowley <[email protected]> writes:\n> > If people don't properly understand these special timestamp input\n> > values, then maybe the documentation in [1] needs to be improved. At\n> > the moment the details are within parentheses. Namely \"(In particular,\n> > now and related strings are converted to a specific time value as soon\n> > as they are read.)\". Maybe it would be better to be more explicit\n> > there and mention that these are special values that the input\n> > function understands which are translated to actual timestamp values\n> > when the type's input function is called. That could maybe be tied\n> > into the DEFAULT clause documentation to mention that the input\n> > function for constant values is called at DML time rather than DDL\n> > time. That way, we're not adding these (unsustainable) special cases\n> > to the documentation.\n>\n> This sounds like a reasonable approach to me for the\n> magic-input-values issue. Do we want to do anything about\n> nextval()? I guess if you hold your head at the correct\n> angle, that's also a magic-input-value issue, in the sense\n> that the question is when does regclass input get resolved.\n>\n>\n From observations we transform constants into the: \" 'value'::type \" syntax\nwhich then makes it an operator resolved at execution time. For every type\nexcept time types the transformation leaves the constant as-is. The\nspecial time values are the exception whereby they get evaluated to a\nspecific time during the transformation.\n\npostgres=# create table tser3 (id integer not null default nextval(regclass\n'tser2_id_seq'));\nCREATE TABLE\npostgres=# \\d tser3\n Table \"public.tser3\"\n Column | Type | Collation | Nullable | Default\n\n--------+---------+-----------+----------+-----------------------------------\n id | integer | | not null | nextval('tser2_id_seq'::regclass)\n\nI cannot figure out how to get \"early binding\" into the default. 
I.e.,\nnextval(9000)\n\nSince early binding is similar to the special timestamp behavior I'd say\nnextval is behaving just as expected - literal transform, no evaluation.\nWe need only document the transforms that also evaluate.\n\nDavid J.\n\nOn Tue, Jun 25, 2024 at 10:12 PM Tom Lane <[email protected]> wrote:David Rowley <[email protected]> writes:\n> If people don't properly understand these special timestamp input\n> values, then maybe the documentation in [1] needs to be improved.  At\n> the moment the details are within parentheses. Namely \"(In particular,\n> now and related strings are converted to a specific time value as soon\n> as they are read.)\".  Maybe it would be better to be more explicit\n> there and mention that these are special values that the input\n> function understands which are translated to actual timestamp values\n> when the type's input function is called.  That could maybe be tied\n> into the DEFAULT clause documentation to mention that the input\n> function for constant values is called at DML time rather than DDL\n> time.  That way, we're not adding these (unsustainable) special cases\n> to the documentation.\n\nThis sounds like a reasonable approach to me for the\nmagic-input-values issue.  Do we want to do anything about\nnextval()?  I guess if you hold your head at the correct\nangle, that's also a magic-input-value issue, in the sense\nthat the question is when does regclass input get resolved.\nFrom observations we transform constants into the: \" 'value'::type \" syntax which then makes it an operator resolved at execution time.  For every type except time types the transformation leaves the constant as-is.  The special time values are the exception whereby they get evaluated to a specific time during the transformation.postgres=# create table tser3 (id integer not null default nextval(regclass 'tser2_id_seq'));CREATE TABLEpostgres=# \\d tser3                            Table \"public.tser3\" Column |  Type   | Collation | Nullable |              Default              --------+---------+-----------+----------+----------------------------------- id     | integer |           | not null | nextval('tser2_id_seq'::regclass)I cannot figure out how to get \"early binding\" into the default. I.e., nextval(9000)Since early binding is similar to the special timestamp behavior I'd say nextval is behaving just as expected - literal transform, no evaluation.  We need only document the transforms that also evaluate.David J.", "msg_date": "Tue, 25 Jun 2024 23:00:11 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On 2024-Jun-25, David G. Johnston wrote:\n\n> On Tue, Jun 25, 2024 at 9:50 PM David Rowley <[email protected]> wrote:\n\n> > FWIW, I disagree that we need to write anything about that in this\n> > part of the documentation. I think any argument for doing this could\n> > equally be applied to something like re-iterating what the operator\n> > precedence rules for arithmetic are, and I don't think that should be\n> > mentioned.\n> \n> I disagree on this equivalence. The time literals are clear deviations\n> from expected behavior. Knowing operator precedence rules, they apply\n> everywhere equally. And we should document the deviations directly where\n> they happen. Even if it's just a short link back to the source that\n> describes the deviation. 
I'm fine with something less verbose pointing\n> only to the data types page, but not with nothing.\n\nI agree that it'd be good to have _something_ -- the other stance seems\nsuper unhelpful. \"We're not going to spend two lines to explain some\nfunny rules that determine surprising behavior here, because we assume\nyou have read all of our other 3000 pages of almost impenetrably dense\ndocumentation\" is not great from a user's point of view. The behavior\nof 'now' in DEFAULT clauses is something that has been asked about for\ndecades.\n\nArithmetic precedence is a terrible straw man argument. Let's put that\naside.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"If it is not right, do not do it.\nIf it is not true, do not say it.\" (Marcus Aurelius, Meditations)\n\n\n", "msg_date": "Wed, 26 Jun 2024 15:36:10 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Wed, 26 Jun 2024 at 11:05, James Coleman <[email protected]> wrote:\n> Hmm, I guess I'd never considered anything besides cases like\n> nextval() and now(), but I see now that now() must also be special\n> cased (when quoted) since 'date_trunc(day, now())'::timestamp doesn't\n> work but 'now()'::timestamp does.\n\n'now()'::timestamp only works because we ignore trailing punctuation\nin ParseDateTime() during timestamp_in(). 'now!!'::timestamp works\nequally as well.\n\nDavid\n\n\n", "msg_date": "Thu, 27 Jun 2024 10:58:35 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Tuesday, June 25, 2024, James Coleman <[email protected]> wrote:\n\n> Hello,\n>\n> It's possible I'm the only one who's been in this situation, but I've\n> multiple times found myself explaining to a user how column DEFAULT\n> expressions work: namely how the quoting on an expression following\n> the keyword DEFAULT controls whether or not the expression is\n> evaluated at the time of the DDL statement or at the time of an\n> insertion.\n>\n\nI don’t know if it’s worth documenting but the following sentence is\nimplied by the syntax:\n\n“Do not single quote the expression as a whole. Write the expression as\nyou would in a select query.”\n\nDavid J.\n\nOn Tuesday, June 25, 2024, James Coleman <[email protected]> wrote:Hello,\n\nIt's possible I'm the only one who's been in this situation, but I've\nmultiple times found myself explaining to a user how column DEFAULT\nexpressions work: namely how the quoting on an expression following\nthe keyword DEFAULT controls whether or not the expression is\nevaluated at the time of the DDL statement or at the time of an\ninsertion.\nI don’t know if it’s worth documenting but the following sentence is implied by the syntax:“Do not single quote the expression as a whole.  Write the expression as you would in a select query.”David J.", "msg_date": "Wed, 26 Jun 2024 16:14:57 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Wed, 26 Jun 2024 at 17:12, Tom Lane <[email protected]> wrote:\n> Do we want to do anything about\n> nextval()? 
I guess if you hold your head at the correct\n> angle, that's also a magic-input-value issue, in the sense\n> that the question is when does regclass input get resolved.\n\nI think I'm not understanding what's special about that. Aren't\n'now'::timestamp and 'seq_name'::regclass are just casts that are\nevaluated during parse time in transformExpr()?\n\nDavid\n\n\n", "msg_date": "Thu, 27 Jun 2024 11:19:38 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Wed, 26 Jun 2024 at 17:12, Tom Lane <[email protected]> wrote:\n>> Do we want to do anything about\n>> nextval()? I guess if you hold your head at the correct\n>> angle, that's also a magic-input-value issue, in the sense\n>> that the question is when does regclass input get resolved.\n\n> I think I'm not understanding what's special about that. Aren't\n> 'now'::timestamp and 'seq_name'::regclass are just casts that are\n> evaluated during parse time in transformExpr()?\n\nRight. But there is an example in the manual explaining how\nthese two things act differently:\n\n\t'seq_name'::regclass\n\t'seq_name'::text::regclass\n\nThe latter produces a constant of type text with a run-time\ncast to regclass (and hence a run-time pg_class lookup).\nIIRC, we document that mainly because the latter provides a way\nto duplicate nextval()'s old behavior of run-time lookup.\n\nNow that I think about it, there's a very parallel difference in\nthe behavior of\n\n\t'now'::timestamp\n\t'now'::text::timestamp\n\nbut I doubt that that example is shown anywhere.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2024 19:38:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Wed, 26 Jun 2024 at 17:36, David G. Johnston\n<[email protected]> wrote:\n>\n> On Tue, Jun 25, 2024 at 9:50 PM David Rowley <[email protected]> wrote:\n>> FWIW, I disagree that we need to write anything about that in this\n>> part of the documentation. I think any argument for doing this could\n>> equally be applied to something like re-iterating what the operator\n>> precedence rules for arithmetic are, and I don't think that should be\n>> mentioned.\n>\n>\n> I disagree on this equivalence. The time literals are clear deviations from expected behavior. Knowing operator precedence rules, they apply everywhere equally. And we should document the deviations directly where they happen. Even if it's just a short link back to the source that describes the deviation. I'm fine with something less verbose pointing only to the data types page, but not with nothing.\n\nAre you able to share what the special behaviour is with DEFAULT\nconstraints and time literals that does not apply everywhere equally?\n\nMaybe I'm slow on the uptake, but I've yet to see anything here where\ntime literals act in a special way DEFAULT constraints. This is why I\ncouldn't understand why we should be adding documentation about this\nunder CREATE TABLE.\n\nI'd be happy to reconsider or retract my argument if you can show me\nwhat I'm missing.\n\nDavid\n\n\n", "msg_date": "Thu, 27 Jun 2024 11:54:01 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" 
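As a concrete sketch of the 'now'::timestamp versus 'now'::text::timestamp
distinction raised above (the table and column names are invented for
illustration, and the comments describe my understanding of the behaviour
rather than documented wording):

CREATE TABLE t_demo (
    created_early timestamptz DEFAULT 'now',
        -- untyped literal: coerced to a timestamptz constant while the
        -- CREATE TABLE itself is parsed
    created_late  timestamptz DEFAULT 'now'::text::timestamptz
        -- text constant with a run-time cast: re-evaluated for each INSERT
);

\d t_demo should then show a frozen timestamp for created_early and a cast
expression for created_late, mirroring the regclass example in the manual.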
}, { "msg_contents": "David Rowley <[email protected]> writes:\n> Maybe I'm slow on the uptake, but I've yet to see anything here where\n> time literals act in a special way DEFAULT constraints. This is why I\n> couldn't understand why we should be adding documentation about this\n> under CREATE TABLE.\n\nIt's not that the parsing rules are any different: it's that in\nordinary DML queries, it seldom matters very much whether a\nsubexpression is evaluated at parse time versus run time.\nIn CREATE TABLE that difference is very in-your-face, so people\nwho haven't understood the rules clearly can get burnt.\n\nHowever, there are certainly other places where it matters,\nsuch as queries in plpgsql functions. So I understand your\nreluctance to go on about it in CREATE TABLE. At the same\ntime, I see where David J. is coming from.\n\nMaybe we could have a discussion of this in some single spot,\nand link to it from CREATE TABLE and other relevant places?\nISTR there is something about it in the plpgsql doco already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2024 20:11:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Thu, 27 Jun 2024 at 12:11, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > Maybe I'm slow on the uptake, but I've yet to see anything here where\n> > time literals act in a special way DEFAULT constraints. This is why I\n> > couldn't understand why we should be adding documentation about this\n> > under CREATE TABLE.\n>\n> It's not that the parsing rules are any different: it's that in\n> ordinary DML queries, it seldom matters very much whether a\n> subexpression is evaluated at parse time versus run time.\n> In CREATE TABLE that difference is very in-your-face, so people\n> who haven't understood the rules clearly can get burnt.\n\nAha, now I understand. Thanks. So, seems like CREATE TABLE is being\ntargeted or maybe victimised here as it's probably the most common\nplace people learn about their misuse of the timestamp special input\nvalues.\n\n> However, there are certainly other places where it matters,\n> such as queries in plpgsql functions. So I understand your\n> reluctance to go on about it in CREATE TABLE. At the same\n> time, I see where David J. is coming from.\n>\n> Maybe we could have a discussion of this in some single spot,\n> and link to it from CREATE TABLE and other relevant places?\n> ISTR there is something about it in the plpgsql doco already.\n\nFor the special timestamp stuff, that place is probably the special\ntimestamp table in [1]. It looks like the large caution you added in\n540849814 might not be enough or perhaps wasn't done soon enough to\ncatch the people who read that part of the manual before the caution\nwas added. Hard to fix if it's the latter without a time machine. :-(\n\nI'm open to having some section that fleshes this stuff out a bit more\nwith a few examples with CREATE TABLE and maybe CREATE VIEW that we\ncan link to. Linking seems like a much more sustainable practice than\nadding special case documentation for non-special case behaviour.\n\nDavid\n\n[1] https://www.postgresql.org/docs/devel/datatype-datetime.html\n\n\n", "msg_date": "Thu, 27 Jun 2024 12:34:37 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" 
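Since plpgsql was mentioned just above, here is a minimal sketch of the same
trap there (function and table names are made up, and the caching behaviour
is described from memory of the plan-caching notes, so treat it as
illustrative rather than authoritative):

CREATE TABLE event_log (happened_at timestamptz, msg text);

CREATE FUNCTION log_event(p_msg text) RETURNS void LANGUAGE plpgsql AS $$
DECLARE
    ts timestamptz;
BEGIN
    ts := 'now';  -- the literal is resolved when this expression is first
                  -- planned, so later calls in the same session may keep
                  -- reusing that old value
    INSERT INTO event_log VALUES (ts, p_msg);
END;
$$;

Using ts := now() (transaction start time) or ts := clock_timestamp()
(wall-clock time at the call) avoids the surprise.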
}, { "msg_contents": "On 27.06.24 02:34, David Rowley wrote:\n> For the special timestamp stuff, that place is probably the special\n> timestamp table in [1]. It looks like the large caution you added in\n> 540849814 might not be enough or perhaps wasn't done soon enough to\n> catch the people who read that part of the manual before the caution\n> was added. Hard to fix if it's the latter without a time machine. :-(\n\nMaybe we should really be thinking about deprecating these special \nvalues and steering users more urgently toward more robust alternatives.\n\nImagine if 'random' were a valid input value for numeric types.\n\n\n\n", "msg_date": "Thu, 27 Jun 2024 13:57:37 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Thu, 27 Jun 2024 at 23:57, Peter Eisentraut <[email protected]> wrote:\n> Maybe we should really be thinking about deprecating these special\n> values and steering users more urgently toward more robust alternatives.\n>\n> Imagine if 'random' were a valid input value for numeric types.\n\nI think there are valid reasons to use the special timestamp input\nvalues. One that I can think of is for use with partition pruning. If\nyou have a time-range partitioned table and want the planner to prune\nthe partitions rather than the executor, you could use\n'now'::timestamp in your queries to allow the planner to prune. That\nworks providing that you never use that in combination with PREPARE\nand never put the query with the WHERE clause inside a VIEW. I don't\nhave any other good examples, but I suppose that if someone needed to\ncapture the time some statement was executed and record that\nsomewhere, sort of like the __DATE__ and __TIME__ macros in C. Perhaps\nthat's useful to record the last time some DDL script was executed.\n\nI'd like to know what led someone down the path of doing something\nlike DEFAULT 'now()'::timestamp in a CREATE TABLE. Could it be a\nfaulty migration tool that created these and people copy them thinking\nit's a legitimate syntax?\n\nDavid\n\n\n", "msg_date": "Mon, 1 Jul 2024 11:54:50 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Sun, Jun 30, 2024 at 4:55 PM David Rowley <[email protected]> wrote:\n\n>\n> I'd like to know what led someone down the path of doing something\n> like DEFAULT 'now()'::timestamp in a CREATE TABLE. Could it be a\n> faulty migration tool that created these and people copy them thinking\n> it's a legitimate syntax?\n>\n>\nMy thought process on this used to be: Provide a text string of the\nexpression that is then stored within the catalog and eval'd during\nruntime. If the only thing you are providing is a single literal and not\nsome compound expression it isn't that obvious that you are supposed to\nprovide an unquoted expression - which feels like it should be immediately\nevaluated - versus something that is a constant. Kinda like dynamic SQL.\n\nDavid J.\n\nOn Sun, Jun 30, 2024 at 4:55 PM David Rowley <[email protected]> wrote:\nI'd like to know what led someone down the path of doing something\nlike DEFAULT 'now()'::timestamp in a CREATE TABLE. 
Could it be a\nfaulty migration tool that created these and people copy them thinking\nit's a legitimate syntax?My thought process on this used to be:  Provide a text string of the expression that is then stored within the catalog and eval'd during runtime.  If the only thing you are providing is a single literal and not some compound expression it isn't that obvious that you are supposed to provide an unquoted expression - which feels like it should be immediately evaluated - versus something that is a constant.  Kinda like dynamic SQL.David J.", "msg_date": "Sun, 30 Jun 2024 17:15:43 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Mon, 1 Jul 2024 at 12:16, David G. Johnston\n<[email protected]> wrote:\n>\n> On Sun, Jun 30, 2024 at 4:55 PM David Rowley <[email protected]> wrote:\n>>\n>>\n>> I'd like to know what led someone down the path of doing something\n>> like DEFAULT 'now()'::timestamp in a CREATE TABLE. Could it be a\n>> faulty migration tool that created these and people copy them thinking\n>> it's a legitimate syntax?\n>>\n>\n> My thought process on this used to be: Provide a text string of the expression that is then stored within the catalog and eval'd during runtime. If the only thing you are providing is a single literal and not some compound expression it isn't that obvious that you are supposed to provide an unquoted expression - which feels like it should be immediately evaluated - versus something that is a constant. Kinda like dynamic SQL.\n\nThanks for sharing that. Any idea where that thinking came from?\n\nMaybe it was born from the fact that nothing complains when you do:\n'now()'::timestamp? A quick test evaluation of that with a SELECT\nstatement might trick someone into thinking it'll work.\n\nI wonder if there's anything else like this that might help fool\npeople into thinking this is some valid way of getting delayed\nevaluation.\n\nDavid\n\n\n", "msg_date": "Mon, 1 Jul 2024 12:47:31 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Sun, Jun 30, 2024 at 5:47 PM David Rowley <[email protected]> wrote:\n\n> On Mon, 1 Jul 2024 at 12:16, David G. Johnston\n> <[email protected]> wrote:\n> >\n> > On Sun, Jun 30, 2024 at 4:55 PM David Rowley <[email protected]>\n> wrote:\n> >>\n> >>\n> >> I'd like to know what led someone down the path of doing something\n> >> like DEFAULT 'now()'::timestamp in a CREATE TABLE. Could it be a\n> >> faulty migration tool that created these and people copy them thinking\n> >> it's a legitimate syntax?\n> >>\n> >\n> > My thought process on this used to be: Provide a text string of the\n> expression that is then stored within the catalog and eval'd during\n> runtime. If the only thing you are providing is a single literal and not\n> some compound expression it isn't that obvious that you are supposed to\n> provide an unquoted expression - which feels like it should be immediately\n> evaluated - versus something that is a constant. Kinda like dynamic SQL.\n>\n> Thanks for sharing that. Any idea where that thinking came from?\n>\n> Maybe it was born from the fact that nothing complains when you do:\n> 'now()'::timestamp? 
A quick test evaluation of that with a SELECT\n> statement might trick someone into thinking it'll work.\n\n\n> I wonder if there's anything else like this that might help fool\n> people into thinking this is some valid way of getting delayed\n> evaluation.\n>\n>\nI presume the relatively new atomic SQL functions pose a similar hazard.\n\nIt probably boils down, for me, that I learned about, though never used,\neval functions from javascript, and figured this is probably implemented\nsomething like that and I should thus supply a string. Internalizing that\nDDL can treat the unquoted content of expression in \"DEFAULT expression\" as\nbasically text hadn't happened; nor that the actual difference between just\ntreating it as text and the parsing to a standard form that really happens,\nis quite important. Namely that, in reverse of expectations, quoted\nthings, which are literals, are transformed to their typed values during\nparse while functions, which are not quoted, don't have a meaningfully\ndifferent parsed form and are indeed executed at runtime.\n\nThe fact that 'now()'::timestamp fails to fail doesn't help...\n\nConsider this phrasing for default:\n\nThe DEFAULT clause assigns a default data value for the column whose column\ndefinition it appears within. The expression is parsed according to\nSection X.X.X, with the limitation that it may neither include references\nto other columns nor subqueries, and then stored for later evaluation of\nany functions it contains. The data type of the default expression must\nmatch the data type of the column.\n\nThen in Section X.X.X we note, in part:\nDuring parsing, all constants are immediately converted to their internal\nrepresentation. In particular, the time-related literals noted in Section\n8.5.1.4 get set to their date/time values.\n\nThen, in 8.5.1.4 we should call out:\nCaution:\n'now' is a special time value, evaluated during parsing.\nnow() is a function, evaluated during execution.\n'now()' is a special time value due to the quoting, PostgreSQL ignored the\nparentheses.\n\n\nThe above doesn't make the special constants particularly special in how\nthey behave within parse-bind-execute while still noting that what they do\nduring parsing is a bit unique since a timestamp has not representation of\n'tomorrow' that is can hold but instead is a short-hand for writing the\nconstant representing \"whatever tomorrow is\" at that moment.\n\nI hope the reason for the additional caution in this framing is intuitive\nfor everyone.\n\nThere is probably a good paragraph or two that could be added under the new\nSection X.X.X to centralize this for views, atomic sql, defaults, etc... to\nrefer to and give the reader the needed framing.\n\nDavid J.\n\nOn Sun, Jun 30, 2024 at 5:47 PM David Rowley <[email protected]> wrote:On Mon, 1 Jul 2024 at 12:16, David G. Johnston\n<[email protected]> wrote:\n>\n> On Sun, Jun 30, 2024 at 4:55 PM David Rowley <[email protected]> wrote:\n>>\n>>\n>> I'd like to know what led someone down the path of doing something\n>> like DEFAULT 'now()'::timestamp in a CREATE TABLE. Could it be a\n>> faulty migration tool that created these and people copy them thinking\n>> it's a legitimate syntax?\n>>\n>\n> My thought process on this used to be:  Provide a text string of the expression that is then stored within the catalog and eval'd during runtime.  
If the only thing you are providing is a single literal and not some compound expression it isn't that obvious that you are supposed to provide an unquoted expression - which feels like it should be immediately evaluated - versus something that is a constant.  Kinda like dynamic SQL.\n\nThanks for sharing that.  Any idea where that thinking came from?\n\nMaybe it was born from the fact that nothing complains when you do:\n'now()'::timestamp? A quick test evaluation of that with a SELECT\nstatement might trick someone into thinking it'll work.\n\nI wonder if there's anything else like this that might help fool\npeople into thinking this is some valid way of getting delayed\nevaluation.I presume the relatively new atomic SQL functions pose a similar hazard.It probably boils down, for me, that I learned about, though never used, eval functions from javascript, and figured this is probably implemented something like that and I should thus supply a string.  Internalizing that DDL can treat the unquoted content of expression in \"DEFAULT expression\" as basically text hadn't happened; nor that the actual difference between just treating it as text and the parsing to a standard form that really happens, is quite important.  Namely that, in reverse of expectations, quoted things, which are literals, are transformed to their typed values during parse while functions, which are not quoted, don't have a meaningfully different parsed form and are indeed executed at runtime.The fact that 'now()'::timestamp fails to fail doesn't help...Consider this phrasing for default:The DEFAULT clause assigns a default data value for the column whose column definition it appears within.  The expression is parsed according to Section X.X.X, with the limitation that it may neither include references to other columns nor subqueries, and then stored for later evaluation of any functions it contains.  The data type of the default expression must match the data type of the column.Then in Section X.X.X we note, in part:During parsing, all constants are immediately converted to their internal representation.  In particular, the time-related literals noted in Section 8.5.1.4 get set to their date/time values.Then, in 8.5.1.4 we should call out:Caution:'now' is a special time value, evaluated during parsing.now() is a function, evaluated during execution.'now()' is a special time value due to the quoting, PostgreSQL ignored the parentheses.The above doesn't make the special constants particularly special in how they behave within parse-bind-execute while still noting that what they do during parsing is a bit unique since a timestamp has not representation of 'tomorrow' that is can hold but instead is a short-hand for writing the constant representing \"whatever tomorrow is\" at that moment.I hope the reason for the additional caution in this framing is intuitive for everyone.There is probably a good paragraph or two that could be added under the new Section X.X.X to centralize this for views, atomic sql, defaults, etc... to refer to and give the reader the needed framing.David J.", "msg_date": "Sun, 30 Jun 2024 18:41:17 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Mon, 1 Jul 2024 at 13:41, David G. 
Johnston\n<[email protected]> wrote:\n> I presume the relatively new atomic SQL functions pose a similar hazard.\n\nDo you have an example of this?\n\n> The fact that 'now()'::timestamp fails to fail doesn't help...\n\nIf that's the case, maybe a tiny step towards what Peter proposed is\njust to make trailing punctuation fail for timestamp special values in\nv18.\n\nDavid\n\n\n", "msg_date": "Mon, 1 Jul 2024 14:52:42 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Sun, Jun 30, 2024 at 7:52 PM David Rowley <[email protected]> wrote:\n\n> On Mon, 1 Jul 2024 at 13:41, David G. Johnston\n> <[email protected]> wrote:\n> > I presume the relatively new atomic SQL functions pose a similar hazard.\n>\n> Do you have an example of this?\n>\n>\ncreate function testnow() returns timestamptz language sql\nreturn 'now'::timestamptz;\n\nselect testnow();\nselect pg_sleep(5);\nselect testnow(); -- same time as the first call\n\nWhich conforms with the documentation and expression parsing rules for\nliterals:\n\n\"This form is parsed at function definition time, the string constant form\nis parsed at execution time;...\"\n\nDavid J.\n\nOn Sun, Jun 30, 2024 at 7:52 PM David Rowley <[email protected]> wrote:On Mon, 1 Jul 2024 at 13:41, David G. Johnston\n<[email protected]> wrote:\n> I presume the relatively new atomic SQL functions pose a similar hazard.\n\nDo you have an example of this?create function testnow() returns timestamptz language sqlreturn 'now'::timestamptz;select testnow();select pg_sleep(5);select testnow(); -- same time as the first callWhich conforms with the documentation and expression parsing rules for literals:\"This form is parsed at function definition time, the string constant form is parsed at execution time;...\"David J.", "msg_date": "Sun, 30 Jun 2024 20:08:33 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Sun, Jun 30, 2024 at 7:52 PM David Rowley <[email protected]> wrote:\n\n> If that's the case, maybe a tiny step towards what Peter proposed is\n> just to make trailing punctuation fail for timestamp special values in\n> v18.\n>\n>\nI'm game. If anyone is using the ambiguous spelling it is probably to their\nbenefit to have it break and realize they wanted a function expression, not\na constant expression.\n\nDavid J.\n\nOn Sun, Jun 30, 2024 at 7:52 PM David Rowley <[email protected]> wrote:If that's the case, maybe a tiny step towards what Peter proposed is\njust to make trailing punctuation fail for timestamp special values in\nv18.I'm game. If anyone is using the ambiguous spelling it is probably to their benefit to have it break and realize they wanted a function expression, not a constant expression.David J.", "msg_date": "Sun, 30 Jun 2024 20:14:58 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Sun, Jun 30, 2024 at 8:16 PM David G. Johnston\n<[email protected]> wrote:\n>\n> On Sun, Jun 30, 2024 at 4:55 PM David Rowley <[email protected]> wrote:\n>>\n>>\n>> I'd like to know what led someone down the path of doing something\n>> like DEFAULT 'now()'::timestamp in a CREATE TABLE. 
Could it be a\n>> faulty migration tool that created these and people copy them thinking\n>> it's a legitimate syntax?\n>>\n>\n> My thought process on this used to be: Provide a text string of the expression that is then stored within the catalog and eval'd during runtime. If the only thing you are providing is a single literal and not some compound expression it isn't that obvious that you are supposed to provide an unquoted expression - which feels like it should be immediately evaluated - versus something that is a constant. Kinda like dynamic SQL.\n\nI have a similar story to tell: I've honestly never thought about it\ndeeply until I started this thread, but just through experimentation a\nfew things were obvious:\n\n- now() as a function call gives you the current timestamp in a query\n- now() as a function call in a DDL DEFAULT clause sets that as a\ndefault function call\n- Quoting that function call (using the function call syntax is the\nnatural thing to try, I think, if you've already done the first two)\n-- because some examples online show quoting it -- gives you DDL time\nevaluation.\n\nSo I suspect -- though I've been doing this for so long I couldn't\ntell you for certain -- that I largely intuitive the behavior by\nobservation.\n\nAnd similarly to David J. I'd then assumed -- but never had a need to\ntest it -- that this was generalized.\n\nI think DDL is also different conceptually from SQL/DML here in a kind\nof insidious way: the \"bare\" function call in DEFAULT is *not*\nexecuted as part of the query for DDL like it is with other queries.\n\nHope this helps explain things.\n\nJames Coleman\n\n\n", "msg_date": "Mon, 1 Jul 2024 09:37:17 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On 01.07.24 01:54, David Rowley wrote:\n> On Thu, 27 Jun 2024 at 23:57, Peter Eisentraut <[email protected]> wrote:\n>> Maybe we should really be thinking about deprecating these special\n>> values and steering users more urgently toward more robust alternatives.\n>>\n>> Imagine if 'random' were a valid input value for numeric types.\n> \n> I think there are valid reasons to use the special timestamp input\n> values. One that I can think of is for use with partition pruning. If\n> you have a time-range partitioned table and want the planner to prune\n> the partitions rather than the executor, you could use\n> 'now'::timestamp in your queries to allow the planner to prune.\n\nYeah, but is that a good user interface? Or is that just something that \nhappens to work now with the pieces that happened to be there, rather \nthan a really designed interface?\n\nHypothetically, what would need to be done to make this work with now() \nor current_timestamp or something similar? Do we need a new stability \nlevel that somehow encompasses this behavior, so that the function call \ncan be evaluated at planning time?\n\n> That\n> works providing that you never use that in combination with PREPARE\n> and never put the query with the WHERE clause inside a VIEW.\n\nAnd this kind of thing obviously makes this interface even worse.\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 16:26:22 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" 
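To make the planner-versus-executor pruning point concrete, a sketch along
the lines David described earlier (the partition layout is invented; the
pruning behaviour is my reading of the partitioning docs):

CREATE TABLE events (ts timestamptz NOT NULL) PARTITION BY RANGE (ts);
CREATE TABLE events_2023 PARTITION OF events
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');
CREATE TABLE events_2024 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

EXPLAIN SELECT * FROM events WHERE ts >= 'now';
    -- constant at plan time, so non-matching partitions never appear in the plan
EXPLAIN SELECT * FROM events WHERE ts >= now();
    -- stable function, so all partitions are planned and pruning is deferred
    -- to executor startup (reported as Subplans Removed)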
}, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 01.07.24 01:54, David Rowley wrote:\n>> I think there are valid reasons to use the special timestamp input\n>> values. One that I can think of is for use with partition pruning. If\n>> you have a time-range partitioned table and want the planner to prune\n>> the partitions rather than the executor, you could use\n>> 'now'::timestamp in your queries to allow the planner to prune.\n\n> Yeah, but is that a good user interface? Or is that just something that \n> happens to work now with the pieces that happened to be there, rather \n> than a really designed interface?\n\nThat's not a very useful argument to make. What percentage of the\nSQL language as a whole is legacy cruft that we'd do differently if\nwe could? I think the answer is depressingly high. Adding more\nspecial-purpose features to the ones already there doesn't move\nthat needle in a desirable direction.\n\nI'd be more excited about this discussion if I didn't think that\nthe chances of removing 'now'::timestamp are exactly zero. You\ncan't just delete useful decades-old features, whether there's\na better way or not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jul 2024 10:43:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Tue, 2 Jul 2024 at 02:43, Tom Lane <[email protected]> wrote:\n> I'd be more excited about this discussion if I didn't think that\n> the chances of removing 'now'::timestamp are exactly zero. You\n> can't just delete useful decades-old features, whether there's\n> a better way or not.\n\nDo you have any thoughts on rejecting trailing punctuation with the\ntimestamp special values?\n\nFor me, I've mixed feelings about it. I think it would be good to\nbreak things for people who are doing this and getting the wrong\nbehaviour who haven't noticed yet, however, there could be a series of\npeople doing this and have these embedded in statements that are\nparsed directly before execution, and they just happen to get the\nright behaviour. It might be better not to upset the latter set of\npeople needlessly. Perhaps the former set of people don't exist since\nthe behaviour is quite different and it seems quite obviously wrong.\n\nDavid\n\n\n", "msg_date": "Tue, 2 Jul 2024 13:48:28 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Mon, Jul 1, 2024 at 02:52:42PM +1200, David Rowley wrote:\n> On Mon, 1 Jul 2024 at 13:41, David G. Johnston\n> <[email protected]> wrote:\n> > I presume the relatively new atomic SQL functions pose a similar hazard.\n> \n> Do you have an example of this?\n> \n> > The fact that 'now()'::timestamp fails to fail doesn't help...\n> \n> If that's the case, maybe a tiny step towards what Peter proposed is\n> just to make trailing punctuation fail for timestamp special values in\n> v18.\n\nI dug into this and I have a suggestion at the end. 
First, the special\nvalues like 'now' are the only values that can be optionally quoted:\n\n\tSELECT current_timestamp::timestamptz;\n\t current_timestamp\n\t-------------------------------\n\t 2024-07-05 15:15:22.692072-04\n\t\n\tSELECT 'current_timestamp'::timestamptz;\n\tERROR: invalid input syntax for type timestamp with time zone: \"current_timestamp\"\n\nAlso interestingly, \"now\" without quotes requires parentheses to make it\na function call:\n\n\tSELECT 'now'::timestamptz;\n\t timestamptz\n\t-------------------------------\n\t 2024-07-05 15:17:11.394182-04\n\t\n\tSELECT 'now()'::timestamptz;\n\t timestamptz\n\t-------------------------------\n\t 2024-07-05 15:17:15.201621-04\n\t\n\tSELECT now()::timestamptz;\n\t now\n\t-------------------------------\n\t 2024-07-05 15:17:21.925611-04\n\n\tSELECT now::timestamptz;\n\tERROR: column \"now\" does not exist\n\tLINE 1: SELECT now::timestamptz;\n\t ^\nAnd the quoting shows \"now\" evaluation at function creation time:\n\n\tCREATE OR REPLACE FUNCTION testnow() RETURNS timestamptz LANGUAGE SQL\n\tRETURN 'now'::timestamptz;\n\t\n\tSELECT testnow();\n\tSELECT pg_sleep(5);\n\tSELECT testnow();\n\t testnow\n\t-------------------------------\n\t 2024-07-05 15:19:38.915255-04\n\t\n\t testnow\n\t-------------------------------\n\t 2024-07-05 15:19:38.915255-04 -- same\n\t\n---------------------------------------------------------------------------\t\n\t\n\tCREATE OR REPLACE FUNCTION testnow() RETURNS timestamptz LANGUAGE SQL\n\tRETURN 'now()'::timestamptz;\n\n\tSELECT testnow();\n\tSELECT pg_sleep(5);\n\tSELECT testnow();\n\t testnow\n\t-------------------------------\n\t 2024-07-05 15:20:41.475997-04\n\t\n\t testnow\n\t-------------------------------\n\t 2024-07-05 15:20:41.475997-04 -- same\n\t\n---------------------------------------------------------------------------\t\n\t\n\tCREATE OR REPLACE FUNCTION testnow() RETURNS timestamptz LANGUAGE SQL\n\tRETURN now()::timestamptz;\n\n\tSELECT testnow();\n\tSELECT pg_sleep(5);\n\tSELECT testnow();\n\t testnow\n\t-------------------------------\n\t 2024-07-05 15:21:18.204574-04\n\t\n\t testnow\n\t-------------------------------\n\t 2024-07-05 15:21:23.210442-04 -- different\n\nI don't think we can bounce people around to different sections to\nexplain this --- I think we need text in the CREATE TABLE ... DEFAULT\nsection. I think the now() case is unusual since there are few cases\nwhere function calls can be put inside of single quotes.\n\nI have written the attached patch to clarify the behavior.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Fri, 5 Jul 2024 16:31:13 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Also interestingly, \"now\" without quotes requires parentheses to make it\n> a function call:\n\nI'm not sure why you find that surprising, or why you think that\n'now()'::timestamptz is a function call. (Well, it is a call of\ntimestamptz_in, but not of the SQL function now().) Documentation\nthat is equally confused won't help any.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2024 16:50:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" 
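One way to see what Tom is describing is to look at what actually got stored
for each default; the table name below is made up, but pg_attrdef and
pg_get_expr() are the standard catalog route:

CREATE TABLE tdef (a timestamptz DEFAULT 'now', b timestamptz DEFAULT now());

SELECT att.attname, pg_get_expr(def.adbin, def.adrelid) AS stored_default
FROM pg_attrdef def
JOIN pg_attribute att ON att.attrelid = def.adrelid
                     AND att.attnum = def.adnum
WHERE def.adrelid = 'tdef'::regclass;

Column a comes back as a fixed timestamptz constant (the moment the CREATE
TABLE was parsed, i.e. the output of timestamptz_in), while column b comes
back as now() and is evaluated per INSERT.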
}, { "msg_contents": "On Fri, Jul 5, 2024 at 04:50:32PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Also interestingly, \"now\" without quotes requires parentheses to make it\n> > a function call:\n> \n> I'm not sure why you find that surprising, or why you think that\n> 'now()'::timestamptz is a function call. (Well, it is a call of\n> timestamptz_in, but not of the SQL function now().) Documentation\n> that is equally confused won't help any.\n\nWell, 'now()' certainly _looks_ like a function call, though it isn't. \nThe fact that 'now()'::timestamptz and 'now'::timestamptz generate\nvolatile results via a function call was my point.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 5 Jul 2024 16:55:42 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Well, 'now()' certainly _looks_ like a function call, though it isn't. \n> The fact that 'now()'::timestamptz and 'now'::timestamptz generate\n> volatile results via a function call was my point.\n\nThe only reason 'now()'::timestamptz works is that timestamptz_in\nignores irrelevant punctuation (or what it thinks is irrelevant,\nanyway). I do not think we should include examples that look like\nthat, because it will further confuse readers who don't already\nhave a solid grasp of how this works.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2024 17:03:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Fri, Jul 5, 2024 at 1:55 PM Bruce Momjian <[email protected]> wrote:\n\n> On Fri, Jul 5, 2024 at 04:50:32PM -0400, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > Also interestingly, \"now\" without quotes requires parentheses to make\n> it\n> > > a function call:\n> >\n> > I'm not sure why you find that surprising, or why you think that\n> > 'now()'::timestamptz is a function call.\n\n\nI suspect mostly because SQL has a habit of adding functions that don't\nrequire parentheses and it isn't obvious that \"now\" is not one of them.\n\nselect current_timestamp;\n current_timestamp\n-------------------------------\n 2024-07-05 13:55:12.521334-07\n(1 row)\n\n\n> (Well, it is a call of\n> > timestamptz_in, but not of the SQL function now().) Documentation\n> > that is equally confused won't help any.\n>\n> Well, 'now()' certainly _looks_ like a function call, though it isn't.\n> The fact that 'now()'::timestamptz and 'now'::timestamptz generate\n> volatile results via a function call was my point.\n>\n>\nThey generate volatile results during typed value construction. That such\nthings are implemented via functions are best left unreferenced here,\nreserving mention of function calls to those things users explicitly add to\ntheir query that are, and only are, function calls.\n\nWhether we change going forward or not I'd be content to simply add a\nwarning that writing 'now()' in a default expression is invalid syntax that\nfails-to-fails on backward compatibility grounds. 
If you want the function\ndon't quote it, if you want the literal, remove the parentheses.\n\nDavid J.\n\nOn Fri, Jul 5, 2024 at 1:55 PM Bruce Momjian <[email protected]> wrote:On Fri, Jul  5, 2024 at 04:50:32PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Also interestingly, \"now\" without quotes requires parentheses to make it\n> > a function call:\n> \n> I'm not sure why you find that surprising, or why you think that\n> 'now()'::timestamptz is a function call.I suspect mostly because SQL has a habit of adding functions that don't require parentheses and it isn't obvious that \"now\" is not one of them.select current_timestamp;       current_timestamp       ------------------------------- 2024-07-05 13:55:12.521334-07(1 row)   (Well, it is a call of\n> timestamptz_in, but not of the SQL function now().)  Documentation\n> that is equally confused won't help any.\n\nWell, 'now()' certainly _looks_ like a function call, though it isn't. \nThe fact that 'now()'::timestamptz and 'now'::timestamptz generate\nvolatile results via a function call was my point.They generate volatile results during typed value construction.  That such things are implemented via functions are best left unreferenced here, reserving mention of function calls to those things users explicitly add to their query that are, and only are, function calls.Whether we change going forward or not I'd be content to simply add a warning that writing 'now()' in a default expression is invalid syntax that fails-to-fails on backward compatibility grounds.  If you want the function don't quote it, if you want the literal, remove the parentheses.David J.", "msg_date": "Fri, 5 Jul 2024 14:04:38 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Fri, Jul 5, 2024 at 05:03:35PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Well, 'now()' certainly _looks_ like a function call, though it isn't. \n> > The fact that 'now()'::timestamptz and 'now'::timestamptz generate\n> > volatile results via a function call was my point.\n> \n> The only reason 'now()'::timestamptz works is that timestamptz_in\n> ignores irrelevant punctuation (or what it thinks is irrelevant,\n> anyway). I do not think we should include examples that look like\n> that, because it will further confuse readers who don't already\n> have a solid grasp of how this works.\n\nWow, I see that now:\n\n\ttest=> SELECT 'now('::timestamptz;\n\t timestamptz\n\t-------------------------------\n\t 2024-07-05 17:04:33.457915-04\n\nIf I remove the 'now()' mention in the docs, patch attached, I am\nconcerned people will be confused whether it is the removal of the\nsingle quotes or the use of \"()\" which causes insert-time evaluation,\nand they might try 'now()'.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Fri, 5 Jul 2024 17:11:22 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" 
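For anyone who has already been bitten by a quoted 'now' or 'now()' default,
the repair is a one-liner; the table and column names here are made up for
the sketch:

-- \d tablename (or information_schema.columns.column_default) shows whether
-- the stored default is a frozen timestamp constant rather than now()
ALTER TABLE orders ALTER COLUMN created_at SET DEFAULT now();
-- existing rows keep the values they were inserted with; only future
-- inserts pick up the corrected default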
}, { "msg_contents": "On Fri, Jul 5, 2024 at 2:11 PM Bruce Momjian <[email protected]> wrote:\n\n>\n> If I remove the 'now()' mention in the docs, patch attached, I am\n> concerned people will be confused whether it is the removal of the\n> single quotes or the use of \"()\" which causes insert-time evaluation,\n> and they might try 'now()'.\n>\n>\nLiterals are DDL-time because of parsing, functions are insert-time because\nof execution. IMO this is presently confusing because we are focused on\ncharacters, not concepts.\n\ndiff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml\nindex c55fa607e8..ac661958fd 100644\n--- a/doc/src/sgml/datatype.sgml\n+++ b/doc/src/sgml/datatype.sgml\n@@ -2391,6 +2391,17 @@ TIMESTAMP WITH TIME ZONE '2004-10-19 10:23:54+02'\n </para>\n </caution>\n\n+ <caution>\n+ <para>\n+ The input parser for timestamp values is forgiving: it ignores\n+ trailing invalid characters. This poses a hazard in\n+ the case of the <literal>'now'</literal> special date/time input.\n+ The constant <literal>'now()'</literal> is the same special\ndate/time input;\n+ not the <function>now()</function> function, which like all function\n+ call expressions, is not single-quoted. Writing\n<literal>'now()'</literal>\n+ is considered deprecated and may become an error in future versions.\n+ </para>\n+ </caution>\n+\n </sect3>\n </sect2>\n\ndiff --git a/doc/src/sgml/ref/create_table.sgml\nb/doc/src/sgml/ref/create_table.sgml\nindex f19306e776..4cecab011a 100644\n--- a/doc/src/sgml/ref/create_table.sgml\n+++ b/doc/src/sgml/ref/create_table.sgml\n@@ -889,9 +889,10 @@ WITH ( MODULUS <replaceable\nclass=\"parameter\">numeric_literal</replaceable>, REM\n </para>\n\n <para>\n- The default expression will be used in any insert operation that\n- does not specify a value for the column. If there is no default\n- for a column, then the default is null.\n+ The default expression is immediately parsed, which causes\nevaluation of any literals, notably\n+ <link linkend=\"datatype-datetime-special-table\">special date/time\ninputs</link>.\n+ Execution happens during insert for any row that does not specify a\nvalue for the column.\n+ If there is no explicit default constraint for a column, the default\nis a null value.\n </para>\n </listitem>\n </varlistentry>\n\nDavid J.\n\nOn Fri, Jul 5, 2024 at 2:11 PM Bruce Momjian <[email protected]> wrote:\nIf I remove the 'now()' mention in the docs, patch attached, I am\nconcerned people will be confused whether it is the removal of the\nsingle quotes or the use of \"()\" which causes insert-time evaluation,\nand they might try 'now()'.Literals are DDL-time because of parsing, functions are insert-time because of execution.  IMO this is presently confusing because we are focused on characters, not concepts.diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgmlindex c55fa607e8..ac661958fd 100644--- a/doc/src/sgml/datatype.sgml+++ b/doc/src/sgml/datatype.sgml@@ -2391,6 +2391,17 @@ TIMESTAMP WITH TIME ZONE '2004-10-19 10:23:54+02'       </para>      </caution> +     <caution>+      <para>+       The input parser for timestamp values is forgiving: it ignores+       trailing invalid characters.  This poses a hazard in+       the case of the <literal>'now'</literal> special date/time input.+       The constant <literal>'now()'</literal> is the same special date/time input;+       not the <function>now()</function> function, which like all function+       call expressions, is not single-quoted.  
Writing <literal>'now()'</literal>+       is considered deprecated and may become an error in future versions.+      </para>+     </caution>+     </sect3>    </sect2> diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgmlindex f19306e776..4cecab011a 100644--- a/doc/src/sgml/ref/create_table.sgml+++ b/doc/src/sgml/ref/create_table.sgml@@ -889,9 +889,10 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM      </para>       <para>-      The default expression will be used in any insert operation that-      does not specify a value for the column.  If there is no default-      for a column, then the default is null.+      The default expression is immediately parsed, which causes evaluation of any literals, notably+      <link linkend=\"datatype-datetime-special-table\">special date/time inputs</link>.+      Execution happens during insert for any row that does not specify a value for the column.+      If there is no explicit default constraint for a column, the default is a null value.      </para>     </listitem>    </varlistentry> David J.", "msg_date": "Fri, 5 Jul 2024 15:00:07 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Tue, 2 Jul 2024 at 13:48, David Rowley <[email protected]> wrote:\n>\n> On Tue, 2 Jul 2024 at 02:43, Tom Lane <[email protected]> wrote:\n> > I'd be more excited about this discussion if I didn't think that\n> > the chances of removing 'now'::timestamp are exactly zero. You\n> > can't just delete useful decades-old features, whether there's\n> > a better way or not.\n>\n> Do you have any thoughts on rejecting trailing punctuation with the\n> timestamp special values?\n\nCancel that idea. I'd thought that these special values must be\nstandalone, but I didn't realise until a few minutes ago that it's\nperfectly valid to mix them:\n\nselect 'yesterday 13:00:00'::timestamp, 'yesterday allballs'::timestamp;\n\nDavid\n\n\n", "msg_date": "Sat, 6 Jul 2024 10:43:44 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" }, { "msg_contents": "On Thu, Jun 27, 2024 at 1:11 AM Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > Maybe I'm slow on the uptake, but I've yet to see anything here where\n> > time literals act in a special way DEFAULT constraints. This is why I\n> > couldn't understand why we should be adding documentation about this\n> > under CREATE TABLE.\n>\n> It's not that the parsing rules are any different: it's that in\n> ordinary DML queries, it seldom matters very much whether a\n> subexpression is evaluated at parse time versus run time.\n> In CREATE TABLE that difference is very in-your-face, so people\n> who haven't understood the rules clearly can get burnt.\n>\n> However, there are certainly other places where it matters,\n> such as queries in plpgsql functions. So I understand your\n> reluctance to go on about it in CREATE TABLE. At the same\n> time, I see where David J. 
is coming from.\n>\n> Maybe we could have a discussion of this in some single spot,\n> and link to it from CREATE TABLE and other relevant places?\n> ISTR there is something about it in the plpgsql doco already.\n>\n\n+1 to this idea.\n\n\n", "msg_date": "Sat, 6 Jul 2024 10:51:20 +0100", "msg_from": "Pantelis Theodosiou <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we document how column DEFAULT expressions work?" } ]
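If such a central section gets written, one compact way to show the parse-time versus run-time split is to look at what actually got stored for each default; a minimal sketch against a hypothetical table (default_demo, created with one quoted and one unquoted timestamptz default):

  # pg_get_expr() renders the stored default expression: a frozen timestamp
  # constant for the quoted spelling, the call to now() for the unquoted one.
  psql -c "SELECT a.attname, pg_get_expr(d.adbin, d.adrelid) AS default_expr
             FROM pg_attrdef d
             JOIN pg_attribute a ON a.attrelid = d.adrelid AND a.attnum = d.adnum
            WHERE d.adrelid = 'default_demo'::regclass;"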
[ { "msg_contents": "Hi guys,\n\nThis is inspired by this TODO list\nhttps://wiki.postgresql.org/wiki/Todo#CLUSTER and by pg_repack and\npg_freeze projects.\nMy final goal is to create an extension that does direct data-file to\ndata-file transfer (no intermediate tables, no triggers) with no blocking\nat all in order to simulate a zero-downtime FULL VACUUM and work on\ncontinuously maintained clustering (which I'm a big fan of). This of course\nshould involve fixing indexes, etc... There are multiple steps to have a\nfinal product but the first thing I would do is have two pointers: one\niterates from the beginning till it finds enough space, and the other\niterates from the end and adds this row to the space pointed by the first\npointer.\nI would like to first know whether this is useful (in my previous companies\nthis was a complete game changer), and whether there are any alternative\nalgorithms that you would suggest.\nOf course, there are many essential features that would follow later like\nrunning automatically when system is under light load, adding a threshold\nfor when to do this \"online FULL-VACUUM\", and utilizing index CLUSTERing\nand/or a set of predefined columns.\n\nAny and all comments are welcome,\nAhmed\n\n", "msg_date": "Tue, 25 Jun 2024 19:12:44 -0300", "msg_from": "Ahmed Yarub Hani Al Nuaimi <[email protected]>", "msg_from_op": true, "msg_subject": "Zero -downtime FULL VACUUM/clustering/defragmentation with\n zero-downtime and now extra disk space" } ]
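For the threshold idea mentioned near the end of the proposal, the amount of reclaimable space can already be estimated today with the contrib pgstattuple extension; a rough sketch, where the table name is only a placeholder:

  psql -c 'CREATE EXTENSION IF NOT EXISTS pgstattuple;'
  # dead_tuple_percent and approx_free_percent give a cheap signal for whether
  # an online compaction pass would actually win back space.
  psql -c "SELECT dead_tuple_percent, approx_free_percent FROM pgstattuple_approx('some_bloated_table');"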
[ { "msg_contents": "Hi,\n\nProblem #1: we're still using Ventura, but Cirrus has started doing this:\n\nOnly ghcr.io/cirruslabs/macos-runner:sonoma is allowed. Automatically upgraded.\n\nIt doesn't do it to cfbot, which runs macOS stuff on PGDG-hosted Mac\nMinis, but it does it to regular users who use free compute minutes\ntagged \"instance:OSXCommunityInstance\". This causes them to fail,\nbecause:\n\n[11:17:42.711] Error: Current platform \"darwin 23\" does not match\nexpected platform \"darwin 22\"\n\nSure enough, the sysinfo task shows \"... Darwin Kernel Version\n23.5.0...\", but for cfbot it's still 22.y.z. So probably it's time to\nchange to macOS 14 AKA sonoma AKA darwin 23.\n\nProblem #2:\n\nOnce you do that with a simple s/ventura/sonoma/, it still \"upgrades\"\nto macos-runner:sonoma, which is not the same as\nmacos-sonoma-base:latest. It has more versions of xcode installed?\nNot sure what else will break with that because I haven't successfully\nrun it yet due to the next problem, but blind patch attached.\n\nProblem #3:\n\nIf you have a macports installation cached (eg for CI in your github\naccount), then the pre-existing macports installation will be for the\nwrong darwin version (error shown above). So I think we need to teach\nsrc/tools/ci/ci_macports_packages.sh to detect that condition and do a\nclean install. I can look into that, but does anyone already know how\nto do it?\n\nI know how to find out which darwin version is running: uname -r | sed\n's/\\..*//'. What I don't know is how to find the darwin version for a\nmacports installation. I have found a terrible way to deduce it:\n\nsqlite3 /opt/local/var/macports/registry/registry.db \"select\nmax(os_major) from ports where os_major != 'any'\"\n\nBut that's stupid. There must be a way to ask it what version it was\ninstalled for ... I think it's the variable macports::os_major[2]\n(which is written in TCL, a language I can't follow too well), but I\ncan't figure out where it's reading it from.... I hope there is a\ntext file under /opt/local or at worst a SQLite database, or a way to\nask the port command to spit that number out or ask it if it thinks\nmigration is necessary...\n\n[1] https://github.com/cirruslabs/macos-image-templates/pkgs/container/macos-ventura-xcode\n[2] https://github.com/macports/macports-base/blob/bf27e0c98c7443877e081d5f6b6", "msg_date": "Wed, 26 Jun 2024 11:54:09 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "CI, macports, darwin version problems" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> I know how to find out which darwin version is running: uname -r | sed\n> 's/\\..*//'. What I don't know is how to find the darwin version for a\n> macports installation.\n\n\"port platform\"?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2024 20:00:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Wed, Jun 26, 2024 at 12:00 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > I know how to find out which darwin version is running: uname -r | sed\n> > 's/\\..*//'. 
What I don't know is how to find the darwin version for a\n> > macports installation.\n>\n> \"port platform\"?\n\nThanks, that's exactly what I was looking for.\n\nBut I thought of an easier way: instead of trying to do my own cache\ninvalidation with shell script and duct tape, I can include the\ncurrent OS major version in the cache key used to carry the\nmacports directory between CI runs. Hopefully Cirrus's cache machinery\nis smart enough to age out the old stuff eventually.\n\nThis seems to have the desired effect. I've registered this thread to\nsee how cfbot likes this, and see if anyone sees a problem with\nswitching to the \"macos-runner:sonoma\" image, or the cache key scheme.\n\nhttps://commitfest.postgresql.org/48/5076/\n\nFTR there is a newer macOS release that recently came out, Sequoia aka\nmacOS 15, but the image available to us for CI is marked beta so I\nfigured we can wait a bit longer for that.", "msg_date": "Wed, 26 Jun 2024 15:58:18 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> But I thought of an easier way: instead of trying to do my own cache\n> invalidation with shell script and duct tape, I can include the\n> current OS major version in the cache key used to carry the\n> macports directory between CI runs. Hopefully Cirrus's cache machinery\n> is smart enough to age out the old stuff eventually.\n\nSounds reasonable.\n\n> FTR there is a newer macOS release that recently came out, Sequoia aka\n> macOS 15, but the image available to us for CI is marked beta so I\n> figured we can wait a bit longer for that.\n\nIndeed not; that's only beta and will be so till September-ish.\nWe don't really want to touch it yet because of this issue:\n\nhttps://www.postgresql.org/message-id/flat/CAMBWrQnEwEJtgOv7EUNsXmFw2Ub4p5P%2B5QTBEgYwiyjy7rAsEQ%40mail.gmail.com\n\nI'm not sure what the resolution of that will be, but we surely\ndon't want to gate CI improvement on that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2024 00:04:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Wed, Jun 26, 2024 at 4:04 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > But I thought of an easier way: instead of trying to do my own cache\n> > invalidation with shell script and duct tape, I can include the\n> > current OS major version in the cache key used to carry the\n> > macports directory between CI runs. Hopefully Cirrus's cache machinery\n> > is smart enough to age out the old stuff eventually.\n>\n> Sounds reasonable.\n\ncfbot didn't like v2. It seems that github accounts using\n\"instance:OSXCommunityInstance\" are forced to use\nghcr.io/cirruslabs/macos-runner:sonoma no matter what you ask for\n(example: [1]), while accounts configured to use user-supplied runners\nlike the Mac Minis that cfbot is using *can't* use\nghcr.io/cirruslabs/macos-runner:sonoma, and fail (example: [2]). I\ndon't know why.\n\nSo I think we should request\nghcr.io/cirruslabs/macos-sonoma-base:latest. Personal github accounts\nwill use macos-runner:sonoma instead, but at least it's the same OS\nrelease. 
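Going back to problem #3 for a moment: the detect-and-reinstall variant that the cache-key idea replaces would have looked roughly like the sketch below. The grep pattern assumes "port platform" prints the darwin major number somewhere in its output, which is worth double-checking:

  running=$(uname -r | sed 's/\..*//')
  installed=$(/opt/local/bin/port platform | grep -Eo 'darwin [0-9]+' | grep -Eo '[0-9]+')
  if [ "$running" != "$installed" ]; then
      # MacPorts tree was built for a different Darwin release: wipe it and reinstall
      sudo rm -rf /opt/local
  fi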
Here's a new version like that, to see if cfbot likes it.\n\nGiven that the OS release affects the macports_url we have to specify,\nI think this either means that we'll have to stay in sync with\nwhatever macOS version is being forced for\n\"instance:OSXCommunityInstance\" users, or construct the macports_url\nautomatically. Here is an attempt at the latter, as a second patch.\nSeems to work OK. For example, the setup_additional_packages step\ncurrently prints out:\n\n[06:23:08.584] macOS major version: 14\n[06:23:09.672] MacPorts package URL:\nhttps://github.com/macports/macports-base/releases/download/v2.9.3/MacPorts-2.9.3-14-Sonoma.pkg\n\nAs for the difference between the two types of image, they're\ndescribed at [3]. The -runner images seem to be part of a project for\nfaster starting VMs[4], which sounds like a pretty good reason to want\nto standardise on images to make pre-started instances fungible but\nthere is perhaps also potential for selecting different xcode\nversions.\n\n> > FTR there is a newer macOS release that recently came out, Sequoia aka\n> > macOS 15, but the image available to us for CI is marked beta so I\n> > figured we can wait a bit longer for that.\n>\n> Indeed not; that's only beta and will be so till September-ish.\n> We don't really want to touch it yet because of this issue:\n>\n> https://www.postgresql.org/message-id/flat/CAMBWrQnEwEJtgOv7EUNsXmFw2Ub4p5P%2B5QTBEgYwiyjy7rAsEQ%40mail.gmail.com\n>\n> I'm not sure what the resolution of that will be, but we surely\n> don't want to gate CI improvement on that.\n\nUrgh.\n\nAlso we have to wait for MacPorts to make a release for Sequoia, which\nmight involve lots of maintainers hunting stuff like that. (If Cirrus\nstarts forcing people to use Sequoia before then, that'd be a\nproblem.)\n\n[1] https://cirrus-ci.com/task/4747151899623424\n[2] https://cirrus-ci.com/task/6601239016767488\n[3] https://github.com/cirruslabs/macos-image-templates\n[4] https://cirrus-runners.app/blog/2024/04/11/optimizing-startup-time-of-cirrus-runners/", "msg_date": "Thu, 27 Jun 2024 18:32:52 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Thu, Jun 27, 2024 at 6:32 PM Thomas Munro <[email protected]> wrote:\n> So I think we should request\n> ghcr.io/cirruslabs/macos-sonoma-base:latest. Personal github accounts\n> will use macos-runner:sonoma instead, but at least it's the same OS\n> release. Here's a new version like that, to see if cfbot likes it.\n\nThe first cfbot run of v3 was successful, but a couple of days later\nwhen retested it failed with the dreaded \"Error:\nShouldBeAtLeastOneLayer\". (It also failed on Windows, just because\nmaster was temporarily broken, unrelated to any of this. Note also\nthat the commit message created by cfbot now includes the patch\nversion, making the test history easier to grok, thanks Jelte!)\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/cf/5076\n\nOne difference that jumps out is that the successful v3 run has label\nworker:jc-m2-1 (Mac hosted by Joe), and the failure has\nworker:pgx-m2-1 (Mac hosted by Christophe P). Is this a software\nversion issue, ie need newer Tart to use that image, or could be a\ndifficulty fetching the image? 
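(Returning to the macports_url point above: the construction that prints the URL shown in that setup_additional_packages output amounts to something like the following; the MacPorts version and the major-version-to-codename mapping are hard-coded here purely as an illustration.)

  MACPORTS_VERSION=2.9.3
  MACOS_MAJOR=$(sw_vers -productVersion | sed 's/\..*//')
  case "$MACOS_MAJOR" in
      14) MACOS_NAME=Sonoma ;;
      13) MACOS_NAME=Ventura ;;
       *) echo "unhandled macOS major version: $MACOS_MAJOR" >&2; exit 1 ;;
  esac
  echo "MacPorts package URL: https://github.com/macports/macports-base/releases/download/v${MACPORTS_VERSION}/MacPorts-${MACPORTS_VERSION}-${MACOS_MAJOR}-${MACOS_NAME}.pkg"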
CCing our Mac Mini pool attendants.\n\nTemporary options include disabling pgx-m2-1 from the pool, or\nteaching .cirrus.task.yml to use Ventura for cfbot but Sonoma for\nanyone else's github account, but ideally we'd figure out why it's not\nworking...\n\nThis new information also invalidates my previous hypothesis, that the\nnew \"macos-runner:sonoma\" image can't work on self-hosted Macs,\nbecause that was also on pgx-m2-1.\n\n\n", "msg_date": "Wed, 3 Jul 2024 09:39:06 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/2/24 17:39, Thomas Munro wrote:\n> One difference that jumps out is that the successful v3 run has label\n> worker:jc-m2-1 (Mac hosted by Joe), and the failure has\n> worker:pgx-m2-1 (Mac hosted by Christophe P). Is this a software\n> version issue, ie need newer Tart to use that image, or could be a\n> difficulty fetching the image? CCing our Mac Mini pool attendants.\n\nHow can I help? Do you need to know versions of some of the stuff on my \nmac mini?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 3 Jul 2024 09:17:29 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "Hi,\n\nOn July 3, 2024 3:17:29 PM GMT+02:00, Joe Conway <[email protected]> wrote:\n>On 7/2/24 17:39, Thomas Munro wrote:\n>> One difference that jumps out is that the successful v3 run has label\n>> worker:jc-m2-1 (Mac hosted by Joe), and the failure has\n>> worker:pgx-m2-1 (Mac hosted by Christophe P). Is this a software\n>> version issue, ie need newer Tart to use that image, or could be a\n>> difficulty fetching the image? CCing our Mac Mini pool attendants.\n>\n>How can I help? Do you need to know versions of some of the stuff on my mac mini?\n\nFwiw, I seem to recall that macos vms didn't work on hosts that are older than the guest. So I think it might be worth upgrading Christophe's Mac mini.\n\nGreetings, \n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Wed, 03 Jul 2024 15:25:25 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "Hi,\n\nOn 2024-07-03 09:39:06 +1200, Thomas Munro wrote:\n> On Thu, Jun 27, 2024 at 6:32 PM Thomas Munro <[email protected]> wrote:\n> > So I think we should request\n> > ghcr.io/cirruslabs/macos-sonoma-base:latest. Personal github accounts\n> > will use macos-runner:sonoma instead, but at least it's the same OS\n> > release. Here's a new version like that, to see if cfbot likes it.\n> \n> The first cfbot run of v3 was successful, but a couple of days later\n> when retested it failed with the dreaded \"Error:\n> ShouldBeAtLeastOneLayer\". (It also failed on Windows, just because\n> master was temporarily broken, unrelated to any of this. Note also\n> that the commit message created by cfbot now includes the patch\n> version, making the test history easier to grok, thanks Jelte!)\n> \n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/cf/5076\n> \n> One difference that jumps out is that the successful v3 run has label\n> worker:jc-m2-1 (Mac hosted by Joe), and the failure has\n> worker:pgx-m2-1 (Mac hosted by Christophe P). 
Is this a software\n> version issue, ie need newer Tart to use that image, or could be a\n> difficulty fetching the image? CCing our Mac Mini pool attendants.\n> \n> Temporary options include disabling pgx-m2-1 from the pool, or\n> teaching .cirrus.task.yml to use Ventura for cfbot but Sonoma for\n> anyone else's github account, but ideally we'd figure out why it's not\n> working...\n\nYep, I think we'll have to do that, unless it has been fixed by now.\n\n\n> This new information also invalidates my previous hypothesis, that the\n> new \"macos-runner:sonoma\" image can't work on self-hosted Macs,\n> because that was also on pgx-m2-1.\n\nBesides the base-os-version issue, another theory is that the newer image is\njust very large (141GB) and that we've seen some other issues related to\nChristophe's internet connection not being the fastest.\n\nWRT your patches:\n- I think we ought to switch to the -runner image, otherwise we'll just\n continue to get that \"upgraded\" warning\n\n- With a fingerprint_script specified, we need to add\n reupload_on_changes: true\n otherwise it'll not be updated.\n\n- I think the fingerprint_script should use sw_vers, just as the script\n does. I see no reason to differ?\n\n- We could just sw_vers -productVersion | sed 's/\\..*//g' instead of the more\n complicated version you used, I doubt that they're going to go away from\n numerical major versions...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Jul 2024 15:48:37 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Tue, Jul 16, 2024 at 10:48 AM Andres Freund <[email protected]> wrote:\n> WRT your patches:\n> - I think we ought to switch to the -runner image, otherwise we'll just\n> continue to get that \"upgraded\" warning\n\nRight, let's try it.\n\n> - With a fingerprint_script specified, we need to add\n> reupload_on_changes: true\n> otherwise it'll not be updated.\n\nAhh, I see.\n\n> - I think the fingerprint_script should use sw_vers, just as the script\n> does. I see no reason to differ?\n\nYeah might as well. I started with Darwin versions because that is\nwhat MacPorts complains about, but they move in lockstep.\n\n> - We could just sw_vers -productVersion | sed 's/\\..*//g' instead of the more\n> complicated version you used, I doubt that they're going to go away from\n> numerical major versions...\n\nYep.\n\nI've attached a new version like that. Let's see which runner machine\ngets it and how it turns out...", "msg_date": "Tue, 16 Jul 2024 15:19:06 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Tue, Jul 16, 2024 at 3:19 PM Thomas Munro <[email protected]> wrote:\n> I've attached a new version like that. Let's see which runner machine\n> gets it and how it turns out...\n\nIt failed[1] on pgx-m2-1: \"Error: ShouldBeAtLeastOneLayer\". So I\ntemporarily disabled that machine from the pool and click the re-run\nbutton, and it failed[2] on jc-m2-1: \"Error: The operation couldn’t be\ncompleted. No space left on device\" after a long period during which\nit was presumably trying to download that image. 
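(As an aside on the fingerprint agreed above: it boils down to a one-liner. The snippet below is a sketch of the idea, not the literal .cirrus.yml contents.)

  # The cached MacPorts installation is reused only while this output stays the
  # same, i.e. it is rebuilt whenever the macOS major version changes.
  sw_vers -productVersion | sed 's/\..*//g'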
I could try this\nexperiment again if Joe could see a way to free up some disk space.\nI've reenabled pgx-m2-1 for now.\n\n[1] https://cirrus-ci.com/task/5127256689868800\n[2] https://cirrus-ci.com/task/6446688024395776\n\n\n", "msg_date": "Tue, 16 Jul 2024 16:34:52 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/16/24 00:34, Thomas Munro wrote:\n> temporarily disabled that machine from the pool and click the re-run\n> button, and it failed[2] on jc-m2-1: \"Error: The operation couldn’t be\n> completed. No space left on device\" after a long period during which\n> it was presumably trying to download that image. I could try this\n> experiment again if Joe could see a way to free up some disk space.\n\nHmmm, sorry, will take a look now\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 16 Jul 2024 08:28:09 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/16/24 08:28, Joe Conway wrote:\n> On 7/16/24 00:34, Thomas Munro wrote:\n>> temporarily disabled that machine from the pool and click the re-run\n>> button, and it failed[2] on jc-m2-1: \"Error: The operation couldn’t be\n>> completed. No space left on device\" after a long period during which\n>> it was presumably trying to download that image. I could try this\n>> experiment again if Joe could see a way to free up some disk space.\n> \n> Hmmm, sorry, will take a look now\n\nI am not super strong on Macs in general, but cannot see anything full:\n\ndf -h\nFilesystem Size Used Avail Capacity iused ifree %iused \nMounted on\n/dev/disk3s1s1 228Gi 8.7Gi 111Gi 8% 356839 1165143240 0% /\ndevfs 199Ki 199Ki 0Bi 100% 690 0 100% /dev\n/dev/disk3s6 228Gi 20Ki 111Gi 1% 0 1165143240 0% \n/System/Volumes/VM\n/dev/disk3s2 228Gi 5.0Gi 111Gi 5% 1257 1165143240 0% \n/System/Volumes/Preboot\n/dev/disk3s4 228Gi 28Mi 111Gi 1% 47 1165143240 0% \n/System/Volumes/Update\n/dev/disk1s2 500Mi 6.0Mi 483Mi 2% 1 4941480 0% \n/System/Volumes/xarts\n/dev/disk1s1 500Mi 6.2Mi 483Mi 2% 29 4941480 0% \n/System/Volumes/iSCPreboot\n/dev/disk1s3 500Mi 492Ki 483Mi 1% 55 4941480 0% \n/System/Volumes/Hardware\n/dev/disk3s5 228Gi 102Gi 111Gi 48% 365768 1165143240 0% \n/System/Volumes/Data\nmap auto_home 0Bi 0Bi 0Bi 100% 0 0 100% \n/System/Volumes/Data/home\n\nAs far as I can tell, the 100% usage for /dev and \n/System/Volumes/Data/home are irrelevant.\n\n¯\\_(ツ)_/¯\n\nI ran an update to the latest Ventura and rebooted as part of that. Can \nyou check again?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 16 Jul 2024 09:38:21 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "hi,\n\nOn 2024-07-16 09:38:21 -0400, Joe Conway wrote:\n> On 7/16/24 08:28, Joe Conway wrote:\n> > On 7/16/24 00:34, Thomas Munro wrote:\n> > > temporarily disabled that machine from the pool and click the re-run\n> > > button, and it failed[2] on jc-m2-1: \"Error: The operation couldn’t be\n> > > completed. No space left on device\" after a long period during which\n> > > it was presumably trying to download that image. 
I could try this\n> > > experiment again if Joe could see a way to free up some disk space.\n> > \n> > Hmmm, sorry, will take a look now\n> \n> I am not super strong on Macs in general, but cannot see anything full:\n\n> /dev/disk3s5 228Gi 102Gi 111Gi 48% 365768 1165143240 0%\n> /System/Volumes/Data\n\nUnfortunately the 'base disk' for sonoma is 144GB large...\n\nIt might be worth trying to pull it separately from a CI job, under your\ncontrol. As the CI user (it'll be downloaded redundantly if you do it as your\nuser!), you can do:\ntart pull ghcr.io/cirruslabs/macos-runner:sonoma\n\nIt's possible you have some old images stored as your user, check\n\"tart list\" for both users.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 16 Jul 2024 08:44:33 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/16/24 11:44, Andres Freund wrote:\n> hi,\n> \n> On 2024-07-16 09:38:21 -0400, Joe Conway wrote:\n>> On 7/16/24 08:28, Joe Conway wrote:\n>> > On 7/16/24 00:34, Thomas Munro wrote:\n>> > > temporarily disabled that machine from the pool and click the re-run\n>> > > button, and it failed[2] on jc-m2-1: \"Error: The operation couldn’t be\n>> > > completed. No space left on device\" after a long period during which\n>> > > it was presumably trying to download that image. I could try this\n>> > > experiment again if Joe could see a way to free up some disk space.\n>> > \n>> > Hmmm, sorry, will take a look now\n>> \n>> I am not super strong on Macs in general, but cannot see anything full:\n> \n>> /dev/disk3s5 228Gi 102Gi 111Gi 48% 365768 1165143240 0%\n>> /System/Volumes/Data\n> \n> Unfortunately the 'base disk' for sonoma is 144GB large...\n> \n> It might be worth trying to pull it separately from a CI job, under your\n> control. As the CI user (it'll be downloaded redundantly if you do it as your\n> user!), you can do:\n> tart pull ghcr.io/cirruslabs/macos-runner:sonoma\n> \n> It's possible you have some old images stored as your user, check\n> \"tart list\" for both users.\n\nHmm, this is not the easiest ever to parse for me...\n\nmacmini:~ ci-run$ tart list\nSource Name \n Disk Size State\nlocal ventura-base-test \n 50 20 stopped\noci ghcr.io/cirruslabs/macos-ventura-base:latest \n 50 21 stopped\noci \nghcr.io/cirruslabs/macos-ventura-base@sha256:bddfa1e2b6f6ec41b5db844b06a6784a2bffe0b071965470efebd95ea3355b4f \n50 21 stopped\n\nmacmini:~ jconway$ tart list\nSource Name \n Disk Size State\nlocal ventura-test \n 50 20 stopped\noci ghcr.io/cirruslabs/macos-ventura-base:latest \n 50 50 stopped\noci \nghcr.io/cirruslabs/macos-ventura-base@sha256:a4d4861123427a23ad3dc53a6a1d4d20d6bc1a0df82bd1495cc53217075c0a8c \n50 50 stopped\n\nSo does that mean I have 6 copies of the ventura image? How do I get rid \nof them?\n\nOr maybe simpler -- how do people typically add storage to a mac mini? 
I \ndon't mind buying an external disk or whatever.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 16 Jul 2024 12:12:37 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "Hi,\n\nOn 2024-07-16 12:12:37 -0400, Joe Conway wrote:\n> > It's possible you have some old images stored as your user, check\n> > \"tart list\" for both users.\n> \n> Hmm, this is not the easiest ever to parse for me...\n\nUnfortunately due to the wrapping it's not easy to read here either...\n\nI don't think it quite indicates 6 - the ones with :latest are just aliases\nfor the one with the hash, I believe.\n\n\n> macmini:~ ci-run$ tart list\n> Source Name Disk Size State\n> local ventura-base-test 50 20 stopped\n> oci ghcr.io/cirruslabs/macos-ventura-base:latest 50 21 stopped\n> oci ghcr.io/cirruslabs/macos-ventura-base@sha256:bddfa1e2b6f6ec41b5db844b06a6784a2bffe0b071965470efebd95ea3355b4f 50 21 stopped\n> \n> macmini:~ jconway$ tart list\n\nI'd delete all of the ones stored for jconway - that's just redundant.\n\ntart delete ghcr.io/cirruslabs/macos-ventura-base:latest\n\n\n> Or maybe simpler -- how do people typically add storage to a mac mini? I\n> don't mind buying an external disk or whatever.\n\nThat I do not know, not a mac person at all...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Jul 2024 10:01:42 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/17/24 13:01, Andres Freund wrote:\n> On 2024-07-16 12:12:37 -0400, Joe Conway wrote:\n>> > It's possible you have some old images stored as your user, check\n>> > \"tart list\" for both users.\n>> \n>> Hmm, this is not the easiest ever to parse for me...\n> \n> Unfortunately due to the wrapping it's not easy to read here either...\n> \n> I don't think it quite indicates 6 - the ones with :latest are just aliases\n> for the one with the hash, I believe.\n\nmakes sense\n\n>> macmini:~ ci-run$ tart list\n>> Source Name Disk Size State\n>> local ventura-base-test 50 20 stopped\n>> oci ghcr.io/cirruslabs/macos-ventura-base:latest 50 21 stopped\n>> oci ghcr.io/cirruslabs/macos-ventura-base@sha256:bddfa1e2b6f6ec41b5db844b06a6784a2bffe0b071965470efebd95ea3355b4f 50 21 stopped\n>> \n>> macmini:~ jconway$ tart list\n> \n> I'd delete all of the ones stored for jconway - that's just redundant.\n\ndone\n\n> tart delete ghcr.io/cirruslabs/macos-ventura-base:latest\n\nand done\n\ntart list for both users shows nothing now.\n\n>> Or maybe simpler -- how do people typically add storage to a mac mini? I\n>> don't mind buying an external disk or whatever.\n> \n> That I do not know, not a mac person at all...\n\nWell maybe unneeded?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 17 Jul 2024 13:20:06 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-07-16 12:12:37 -0400, Joe Conway wrote:\n>> Or maybe simpler -- how do people typically add storage to a mac mini? 
I\n>> don't mind buying an external disk or whatever.\n\n> That I do not know, not a mac person at all...\n\nI think USB SSD is the way at present. MacRumors has some\nreviews/testing, eg this one:\n\nhttps://www.macrumors.com/review/hyper-usb-hubs-ssd-enclosure/\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Jul 2024 13:25:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "Hi,\n\nOn 2024-07-17 13:20:06 -0400, Joe Conway wrote:\n> > > Or maybe simpler -- how do people typically add storage to a mac mini? I\n> > > don't mind buying an external disk or whatever.\n> > \n> > That I do not know, not a mac person at all...\n> \n> Well maybe unneeded?\n\nDoes \"tart pull ghcr.io/cirruslabs/macos-runner:sonoma\" as the CI user\nsucceed?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Jul 2024 13:41:45 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/17/24 16:41, Andres Freund wrote:\n> Hi,\n> \n> On 2024-07-17 13:20:06 -0400, Joe Conway wrote:\n>> > > Or maybe simpler -- how do people typically add storage to a mac mini? I\n>> > > don't mind buying an external disk or whatever.\n>> > \n>> > That I do not know, not a mac person at all...\n>> \n>> Well maybe unneeded?\n> \n> Does \"tart pull ghcr.io/cirruslabs/macos-runner:sonoma\" as the CI user\n> succeed?\n\nYes, with about 25 GB to spare.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 17 Jul 2024 17:58:20 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Thu, Jul 18, 2024 at 9:58 AM Joe Conway <[email protected]> wrote:\n> On 7/17/24 16:41, Andres Freund wrote:\n> > Does \"tart pull ghcr.io/cirruslabs/macos-runner:sonoma\" as the CI user\n> > succeed?\n>\n> Yes, with about 25 GB to spare.\n\nThanks. Now it works! But for some reason it spends several minutes\nin the \"scheduling\" stage before it starts. Are there any logs that\nmight give a clue what it was doing, for example for this run?\n\nhttps://cirrus-ci.com/task/5963784852865024\n\n\n", "msg_date": "Thu, 18 Jul 2024 16:40:07 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "Hi,\n\nOn Thu, 18 Jul 2024 at 07:40, Thomas Munro <[email protected]> wrote:\n>\n> On Thu, Jul 18, 2024 at 9:58 AM Joe Conway <[email protected]> wrote:\n> > On 7/17/24 16:41, Andres Freund wrote:\n> > > Does \"tart pull ghcr.io/cirruslabs/macos-runner:sonoma\" as the CI user\n> > > succeed?\n> >\n> > Yes, with about 25 GB to spare.\n>\n> Thanks. Now it works! But for some reason it spends several minutes\n> in the \"scheduling\" stage before it starts. Are there any logs that\n> might give a clue what it was doing, for example for this run?\n>\n> https://cirrus-ci.com/task/5963784852865024\n\nCould it be pulling the ''macos-runner:sonoma' image on every run? I\ncross-compared and for every new version of the\n'macos-sonoma-base:latest' image [1]; scheduling takes ~4 minutes [2].\nThen, it takes a couple of seconds [2] for the consecutive runs until\na new version of the image is released. 
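A direct way to check that from the host between two runs is to look at what the CI user's Tart cache still contains; the OCIs path below matches the listing that appears later in the thread:

  # Run as the ci-run user: is the big runner image still cached, or was it
  # pruned or re-downloaded since the previous task?
  tart list
  ls -lh ~/.tart/cache/OCIs/ghcr.io/cirruslabs/macos-runner/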
Also, from their manifest; the\nuncompressed size of the runner image is 5x of the sonoma-base image\n[3]. This is very close to scheduling time differences between\n'macos-runner:sonoma' and 'newly pulled macos-sonoma-base:latest'\n(22mins / 4 mins).\n\n[1] https://github.com/cirruslabs/macos-image-templates/pkgs/container/macos-sonoma-base/versions\n\n[2]\nhttps://cirrus-ci.com/task/5299490515582976 -> 4 minutes, first pull\nhttps://cirrus-ci.com/task/6081946936147968 -> 20 seconds\nhttps://cirrus-ci.com/task/6078712070799360 -> 4 minutes, new version\nof the image was released on the same day (6th of July)\nhttps://cirrus-ci.com/task/6539977129984000 -> 40 seconds\nhttps://cirrus-ci.com/task/5839361126694912 -> 40 seconds\nhttps://cirrus-ci.com/task/6708845278396416 -> 4 minutes, new version\nof the image was released a day ago\n\n[3]\nhttps://github.com/cirruslabs/macos-image-templates/pkgs/container/macos-sonoma-base/245087497?tag=latest\nhttps://github.com/orgs/cirruslabs/packages/container/macos-runner/242649219?tag=sonoma\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 18 Jul 2024 11:12:35 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/18/24 04:12, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> On Thu, 18 Jul 2024 at 07:40, Thomas Munro <[email protected]> wrote:\n>>\n>> On Thu, Jul 18, 2024 at 9:58 AM Joe Conway <[email protected]> wrote:\n>> > On 7/17/24 16:41, Andres Freund wrote:\n>> > > Does \"tart pull ghcr.io/cirruslabs/macos-runner:sonoma\" as the CI user\n>> > > succeed?\n>> >\n>> > Yes, with about 25 GB to spare.\n>>\n>> Thanks. Now it works! But for some reason it spends several minutes\n>> in the \"scheduling\" stage before it starts. Are there any logs that\n>> might give a clue what it was doing, for example for this run?\n>> https://cirrus-ci.com/task/5963784852865024\n\nI only see this in the log:\ntime=\"2024-07-17T23:13:56-04:00\" level=info msg=\"started task \n5963784852865024\"\ntime=\"2024-07-17T23:42:24-04:00\" level=info msg=\"task 5963784852865024 \ncompleted\"\n\n> Could it be pulling the ''macos-runner:sonoma' image on every run?\n\nOr perhaps since this was the first run it simply needed to pull the \nimage for the first time?\n\nThe scheduling timing (21:24) looks a lot like what I observed when I \ndid the test for the time to download. Unfortunately I did not time the \ntest though.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 18 Jul 2024 07:55:28 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/18/24 07:55, Joe Conway wrote:\n> On 7/18/24 04:12, Nazir Bilal Yavuz wrote:\n>> Could it be pulling the ''macos-runner:sonoma' image on every run?\n> \n> Or perhaps since this was the first run it simply needed to pull the\n> image for the first time?\n> \n> The scheduling timing (21:24) looks a lot like what I observed when I\n> did the test for the time to download. 
Unfortunately I did not time the\n> test though.\n\nActually it does look like the image is gone now based on the free space \non the volume, so maybe it is pulling every run and cleaning up rather \nthan caching for some reason?\n\nFilesystem Size Used Avail Capacity\n/dev/disk3s5 228Gi 39Gi 161Gi 20%\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 18 Jul 2024 08:00:56 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "Hi,\n\nOn Thu, 18 Jul 2024 at 15:00, Joe Conway <[email protected]> wrote:\n>\n> On 7/18/24 07:55, Joe Conway wrote:\n> > On 7/18/24 04:12, Nazir Bilal Yavuz wrote:\n> >> Could it be pulling the ''macos-runner:sonoma' image on every run?\n> >\n> > Or perhaps since this was the first run it simply needed to pull the\n> > image for the first time?\n\nIt was not the first run, Thomas rerun it a couple of times but all of\nthem were in the same build. So, I thought that CI may set some\nsettings to pull the image while starting the build, so it\nre-downloads the image for all the tasks in the same build. But that\nlooks wrong because of what you said below.\n\n> >\n> > The scheduling timing (21:24) looks a lot like what I observed when I\n> > did the test for the time to download. Unfortunately I did not time the\n> > test though.\n>\n> Actually it does look like the image is gone now based on the free space\n> on the volume, so maybe it is pulling every run and cleaning up rather\n> than caching for some reason?\n>\n> Filesystem Size Used Avail Capacity\n> /dev/disk3s5 228Gi 39Gi 161Gi 20%\n\nThat is interesting. Only one thing comes to my mind. It seems that\nthe 'tart prune' command runs automatically to reclaim space when\nthere is no space left and thinks it can reclaim the space by removing\nsome things [1]. So, it could be that somehow 'tart prune' ran\nautomatically and deleted the sonoma image. I think you can check if\nthis is the case. You can check these locations [2] from ci-user to\nsee when ventura images are created. If they have been created less\nthan 1 day ago, I think the current space is not enough to pull both\nventura and sonoma images.\n\n[1] https://github.com/cirruslabs/tart/issues/33#issuecomment-1134789129\n[2] https://tart.run/faq/#vm-location-on-disk\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 18 Jul 2024 15:55:23 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/18/24 08:55, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> On Thu, 18 Jul 2024 at 15:00, Joe Conway <[email protected]> wrote:\n>>\n>> On 7/18/24 07:55, Joe Conway wrote:\n>> > On 7/18/24 04:12, Nazir Bilal Yavuz wrote:\n>> >> Could it be pulling the ''macos-runner:sonoma' image on every run?\n>> >\n>> > Or perhaps since this was the first run it simply needed to pull the\n>> > image for the first time?\n> \n> It was not the first run, Thomas rerun it a couple of times but all of\n> them were in the same build. So, I thought that CI may set some\n> settings to pull the image while starting the build, so it\n> re-downloads the image for all the tasks in the same build. 
But that\n> looks wrong because of what you said below.\n> \n>> >\n>> > The scheduling timing (21:24) looks a lot like what I observed when I\n>> > did the test for the time to download. Unfortunately I did not time the\n>> > test though.\n>>\n>> Actually it does look like the image is gone now based on the free space\n>> on the volume, so maybe it is pulling every run and cleaning up rather\n>> than caching for some reason?\n>>\n>> Filesystem Size Used Avail Capacity\n>> /dev/disk3s5 228Gi 39Gi 161Gi 20%\n> \n> That is interesting. Only one thing comes to my mind. It seems that\n> the 'tart prune' command runs automatically to reclaim space when\n> there is no space left and thinks it can reclaim the space by removing\n> some things [1]. So, it could be that somehow 'tart prune' ran\n> automatically and deleted the sonoma image. I think you can check if\n> this is the case. You can check these locations [2] from ci-user to\n> see when ventura images are created. If they have been created less\n> than 1 day ago, I think the current space is not enough to pull both\n> ventura and sonoma images.\n\nI think you nailed it (this will wrap badly):\n8<-----------------\nmacmini:~ ci-run$ ll ~/.tart/cache/OCIs/ghcr.io/cirruslabs/*\n/Users/ci-run/.tart/cache/OCIs/ghcr.io/cirruslabs/macos-runner:\ntotal 0\ndrwxr-xr-x 2 ci-run staff 64 Jul 17 23:53 .\ndrwxr-xr-x 5 ci-run staff 160 Jul 17 17:16 ..\n\n/Users/ci-run/.tart/cache/OCIs/ghcr.io/cirruslabs/macos-sonoma-base:\ntotal 0\ndrwxr-xr-x 2 ci-run staff 64 Jul 17 13:18 .\ndrwxr-xr-x 5 ci-run staff 160 Jul 17 17:16 ..\n\n/Users/ci-run/.tart/cache/OCIs/ghcr.io/cirruslabs/macos-ventura-base:\ntotal 0\ndrwxr-xr-x 4 ci-run staff 128 Jul 17 23:53 .\ndrwxr-xr-x 5 ci-run staff 160 Jul 17 17:16 ..\nlrwxr-xr-x 1 ci-run staff 140 Jul 17 23:53 latest -> \n/Users/ci-run/.tart/cache/OCIs/ghcr.io/cirruslabs/macos-ventura-base/sha256:bddfa1e2b6f6ec41b5db844b06a6784a2bffe0b071965470efebd95ea3355b4f\ndrwxr-xr-x 5 ci-run staff 160 Jul 17 23:53 \nsha256:bddfa1e2b6f6ec41b5db844b06a6784a2bffe0b071965470efebd95ea3355b4f\n8<-----------------\n\nSo perhaps I am back to needing more storage...\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 18 Jul 2024 10:01:44 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "Hi,\n\nOn Thu, 18 Jul 2024 at 17:01, Joe Conway <[email protected]> wrote:\n>\n> On 7/18/24 08:55, Nazir Bilal Yavuz wrote:\n> > Hi,\n> >\n> > On Thu, 18 Jul 2024 at 15:00, Joe Conway <[email protected]> wrote:\n> >>\n> >> On 7/18/24 07:55, Joe Conway wrote:\n> >> > On 7/18/24 04:12, Nazir Bilal Yavuz wrote:\n> >> >> Could it be pulling the ''macos-runner:sonoma' image on every run?\n> >> >\n> >> > Or perhaps since this was the first run it simply needed to pull the\n> >> > image for the first time?\n> >\n> > It was not the first run, Thomas rerun it a couple of times but all of\n> > them were in the same build. So, I thought that CI may set some\n> > settings to pull the image while starting the build, so it\n> > re-downloads the image for all the tasks in the same build. But that\n> > looks wrong because of what you said below.\n> >\n> >> >\n> >> > The scheduling timing (21:24) looks a lot like what I observed when I\n> >> > did the test for the time to download. 
Unfortunately I did not time the\n> >> > test though.\n> >>\n> >> Actually it does look like the image is gone now based on the free space\n> >> on the volume, so maybe it is pulling every run and cleaning up rather\n> >> than caching for some reason?\n> >>\n> >> Filesystem Size Used Avail Capacity\n> >> /dev/disk3s5 228Gi 39Gi 161Gi 20%\n> >\n> > That is interesting. Only one thing comes to my mind. It seems that\n> > the 'tart prune' command runs automatically to reclaim space when\n> > there is no space left and thinks it can reclaim the space by removing\n> > some things [1]. So, it could be that somehow 'tart prune' ran\n> > automatically and deleted the sonoma image. I think you can check if\n> > this is the case. You can check these locations [2] from ci-user to\n> > see when ventura images are created. If they have been created less\n> > than 1 day ago, I think the current space is not enough to pull both\n> > ventura and sonoma images.\n>\n> I think you nailed it (this will wrap badly):\n> 8<-----------------\n> macmini:~ ci-run$ ll ~/.tart/cache/OCIs/ghcr.io/cirruslabs/*\n> /Users/ci-run/.tart/cache/OCIs/ghcr.io/cirruslabs/macos-runner:\n> total 0\n> drwxr-xr-x 2 ci-run staff 64 Jul 17 23:53 .\n> drwxr-xr-x 5 ci-run staff 160 Jul 17 17:16 ..\n>\n> /Users/ci-run/.tart/cache/OCIs/ghcr.io/cirruslabs/macos-sonoma-base:\n> total 0\n> drwxr-xr-x 2 ci-run staff 64 Jul 17 13:18 .\n> drwxr-xr-x 5 ci-run staff 160 Jul 17 17:16 ..\n>\n> /Users/ci-run/.tart/cache/OCIs/ghcr.io/cirruslabs/macos-ventura-base:\n> total 0\n> drwxr-xr-x 4 ci-run staff 128 Jul 17 23:53 .\n> drwxr-xr-x 5 ci-run staff 160 Jul 17 17:16 ..\n> lrwxr-xr-x 1 ci-run staff 140 Jul 17 23:53 latest ->\n> /Users/ci-run/.tart/cache/OCIs/ghcr.io/cirruslabs/macos-ventura-base/sha256:bddfa1e2b6f6ec41b5db844b06a6784a2bffe0b071965470efebd95ea3355b4f\n> drwxr-xr-x 5 ci-run staff 160 Jul 17 23:53\n> sha256:bddfa1e2b6f6ec41b5db844b06a6784a2bffe0b071965470efebd95ea3355b4f\n> 8<-----------------\n>\n> So perhaps I am back to needing more storage...\n\nYou might not need more storage. Thomas knows better, but AFAIU, CFBot\nwill pull only sonoma images after the patch in this thread gets\nmerged. And your storage seems enough for storing it.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 18 Jul 2024 17:23:01 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/18/24 10:23, Nazir Bilal Yavuz wrote:\n> On Thu, 18 Jul 2024 at 17:01, Joe Conway <[email protected]> wrote:\n>> So perhaps I am back to needing more storage...\n> \n> You might not need more storage. Thomas knows better, but AFAIU, CFBot\n> will pull only sonoma images after the patch in this thread gets\n> merged. And your storage seems enough for storing it.\n\nI figured I would go ahead and buy it. Basically $250 total for a 2TB \nWD_BLACK NVMe plus a mac mini expansion enclosure. 
Should be delivered \nSunday.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 18 Jul 2024 10:33:03 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/18/24 10:33, Joe Conway wrote:\n> On 7/18/24 10:23, Nazir Bilal Yavuz wrote:\n>> On Thu, 18 Jul 2024 at 17:01, Joe Conway <[email protected]> wrote:\n>>> So perhaps I am back to needing more storage...\n>> \n>> You might not need more storage. Thomas knows better, but AFAIU, CFBot\n>> will pull only sonoma images after the patch in this thread gets\n>> merged. And your storage seems enough for storing it.\n> \n> I figured I would go ahead and buy it. Basically $250 total for a 2TB\n> WD_BLACK NVMe plus a mac mini expansion enclosure. Should be delivered\n> Sunday.\n\nI installed and mounted the new volume, moved \"~/.tart\" to \n/Volumes/extnvme and created a symlink, and the restarted the ci \nprocess, but now I am getting continuous errors streaming to the log:\n\n8<------------------\nmacmini:~ ci-run$ ll /Users/ci-run/.tart\nlrwxr-xr-x 1 ci-run staff 29 Jul 21 15:53 /Users/ci-run/.tart -> \n/Volumes/extnvme/ci-run/.tart\n\nmacmini:~ ci-run$ df -h /Volumes/extnvme\nFilesystem Size Used Avail Capacity iused ifree %iused \nMounted on\n/dev/disk5s2 1.8Ti 76Gi 1.7Ti 5% 105 18734532280 0% \n/Volumes/extnvme\n\nmacmini:~ ci-run$ tail -n1 log/cirrus-worker.log\ntime=\"2024-07-21T16:09:29-04:00\" level=error msg=\"failed to poll \nupstream https://grpc.cirrus-ci.com:443: rpc error: code = NotFound desc \n= Can't find worker by session token!\"\n8<------------------\n\nAny ideas?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 21 Jul 2024 16:15:02 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/21/24 16:15, Joe Conway wrote:\n> On 7/18/24 10:33, Joe Conway wrote:\n>> On 7/18/24 10:23, Nazir Bilal Yavuz wrote:\n>>> On Thu, 18 Jul 2024 at 17:01, Joe Conway <[email protected]> wrote:\n>>>> So perhaps I am back to needing more storage...\n>>> \n>>> You might not need more storage. Thomas knows better, but AFAIU, CFBot\n>>> will pull only sonoma images after the patch in this thread gets\n>>> merged. And your storage seems enough for storing it.\n>> \n>> I figured I would go ahead and buy it. Basically $250 total for a 2TB\n>> WD_BLACK NVMe plus a mac mini expansion enclosure. Should be delivered\n>> Sunday.\n> \n> I installed and mounted the new volume, moved \"~/.tart\" to\n> /Volumes/extnvme and created a symlink, and the restarted the ci\n> process, but now I am getting continuous errors streaming to the log:\n> \n> 8<------------------\n> macmini:~ ci-run$ tail -n1 log/cirrus-worker.log\n> time=\"2024-07-21T16:09:29-04:00\" level=error msg=\"failed to poll\n> upstream https://grpc.cirrus-ci.com:443: rpc error: code = NotFound desc\n> = Can't find worker by session token!\"\n> 8<------------------\n> \n> Any ideas?\n\nHmmm, maybe nevermind? I rebooted the mac mini and now it seems to be \nworking. Maybe someone can confirm. 
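For reference, the relocation described a little earlier comes down to the following, run as the ci-run user; the paths are the ones visible in the ll output:

  mkdir -p /Volumes/extnvme/ci-run
  mv ~/.tart /Volumes/extnvme/ci-run/.tart
  ln -s /Volumes/extnvme/ci-run/.tart ~/.tart
  readlink ~/.tart    # should print /Volumes/extnvme/ci-run/.tart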
There ought to be plenty of space \navailable for sonoma and ventura at the same time now.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 21 Jul 2024 16:34:01 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Mon, Jul 22, 2024 at 8:34 AM Joe Conway <[email protected]> wrote:\n> Hmmm, maybe nevermind? I rebooted the mac mini and now it seems to be\n> working. Maybe someone can confirm. There ought to be plenty of space\n> available for sonoma and ventura at the same time now.\n\nThanks for doing that. Initial results are that it's running the\ntests much more slowly. Example:\n\nhttps://cirrus-ci.com/task/5607066713194496\n\nI wonder if there is a way to use the external drive for caching\nimages and so on, but the faster (?) internal drive for work...\n\n\n", "msg_date": "Mon, 22 Jul 2024 09:26:32 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/21/24 17:26, Thomas Munro wrote:\n> On Mon, Jul 22, 2024 at 8:34 AM Joe Conway <[email protected]> wrote:\n>> Hmmm, maybe nevermind? I rebooted the mac mini and now it seems to be\n>> working. Maybe someone can confirm. There ought to be plenty of space\n>> available for sonoma and ventura at the same time now.\n> \n> Thanks for doing that. Initial results are that it's running the\n> tests much more slowly. Example:\n> \n> https://cirrus-ci.com/task/5607066713194496\n> \n> I wonder if there is a way to use the external drive for caching\n> images and so on, but the faster (?) internal drive for work...\n\nMaybe -- I moved the symlink to include only the \"cache\" part of the \ntree under ~/.tart.\n\n8<--------------\nmacmini:.tart ci-run$ cd ~\nmacmini:~ ci-run$ ll .tart\ntotal 0\ndrwxr-xr-x 5 ci-run staff 160 Jul 22 08:42 .\ndrwxr-x---+ 25 ci-run staff 800 Jul 22 08:41 ..\nlrwxr-xr-x 1 ci-run staff 35 Jul 22 08:42 cache -> \n/Volumes/extnvme/ci-run/.tart/cache\ndrwxr-xr-x 3 ci-run staff 96 Jul 22 08:43 tmp\ndrwxr-xr-x 2 ci-run staff 64 Jul 22 08:39 vms\n8<--------------\n\nPreviously I had the entire \"~/.tart\" directory tree on the external drive.\n\nPlease check again.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Mon, 22 Jul 2024 08:46:03 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "Hi,\n\nOn 2024-07-22 08:46:03 -0400, Joe Conway wrote:\n> On 7/21/24 17:26, Thomas Munro wrote:\n> > On Mon, Jul 22, 2024 at 8:34 AM Joe Conway <[email protected]> wrote:\n> > > Hmmm, maybe nevermind? I rebooted the mac mini and now it seems to be\n> > > working. Maybe someone can confirm. There ought to be plenty of space\n> > > available for sonoma and ventura at the same time now.\n> > \n> > Thanks for doing that. Initial results are that it's running the\n> > tests much more slowly. Example:\n> > \n> > https://cirrus-ci.com/task/5607066713194496\n> > \n> > I wonder if there is a way to use the external drive for caching\n> > images and so on, but the faster (?) 
internal drive for work...\n> \n> Maybe -- I moved the symlink to include only the \"cache\" part of the tree\n> under ~/.tart.\n> [...]\n> Previously I had the entire \"~/.tart\" directory tree on the external drive.\n> \n> Please check again.\n\nThat looks like it did the trick! E.g. [1] has good timings.\n\nI triggered a run with Sonoma that did end up scheduled on your machine [2], let's\nsee how that goes. Looks like it's perhaps downloading the image again :/.\n\nGreetings,\n\nAndres Freund\n\n[1] https://cirrus-ci.com/task/6187836754362368\n[2] https://cirrus-ci.com/task/5190473306865664\n\n\n", "msg_date": "Mon, 22 Jul 2024 12:37:30 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Tue, Jul 23, 2024 at 7:37 AM Andres Freund <[email protected]> wrote:\n> [2] https://cirrus-ci.com/task/5190473306865664\n\n\"Error: “disk.img” couldn’t be copied to\n“3FA983DD-3078-4B28-A969-BCF86F8C9585” because there isn’t enough\nspace.\"\n\nCould it be copying the whole image every time, in some way that would\nget copy-on-write on the same file system, but having to copy\nphysically here? That is, instead of using some kind of chain of\noverlay disk image files as seen elsewhere, is this Tart thing relying\non file system COW for that? Perhaps that is happening here[1] but I\ndon't immediately know how to find out where that Swift standard\nlibrary call turns into system calls...\n\n[1] https://github.com/cirruslabs/tart/blob/main/Sources/tart/VMDirectory.swift#L119\n\n\n", "msg_date": "Tue, 23 Jul 2024 22:31:07 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/23/24 06:31, Thomas Munro wrote:\n> On Tue, Jul 23, 2024 at 7:37 AM Andres Freund <[email protected]> wrote:\n>> [2] https://cirrus-ci.com/task/5190473306865664\n> \n> \"Error: “disk.img” couldn’t be copied to\n> “3FA983DD-3078-4B28-A969-BCF86F8C9585” because there isn’t enough\n> space.\"\n> \n> Could it be copying the whole image every time, in some way that would\n> get copy-on-write on the same file system, but having to copy\n> physically here? That is, instead of using some kind of chain of\n> overlay disk image files as seen elsewhere, is this Tart thing relying\n> on file system COW for that? Perhaps that is happening here[1] but I\n> don't immediately know how to find out where that Swift standard\n> library call turns into system calls...\n> \n> [1] https://github.com/cirruslabs/tart/blob/main/Sources/tart/VMDirectory.swift#L119\n\nI tried moving ~/.tart/tmp to the external drive as well, but that \nfailed -- I *think* because tart is trying to do some kind of hardlink \nbetween the files in ~/.tart/tmp and ~/.tart/vms. So I move that back \nand at least the ventura runs are working again.\n\n</facepalm>I also noticed that when I set up the external drive, I \nsomehow automatically configured time machine to run (it was not done \nintentionally), and it seemed that the backups were consuming space on \nthe primary drive </facepalm>. Did I mention I really hate messing with \nmacos ;-). Any idea how to disable time machine entirely? 
The settings \napp provides next to zero configuration of the thing.\n\nAnyway, maybe with the time machine stuff removed the there is enough space?\n\nI guess if all else fails I will have to get the mac mini with more \nbuilt in storage in order to accommodate sonoma.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 23 Jul 2024 10:44:42 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/23/24 10:44, Joe Conway wrote:\n> I guess if all else fails I will have to get the mac mini with more\n> built in storage in order to accommodate sonoma.\n\nI *think* I finally have it in a good place. I replaced the nvme \nenclosure that I bought the other day (which had a 10G interface speed) \nwith a new one (which has 40G rated speed). The entire ~/.tart directory \nis a symlink to /Volumes/extnvme. The last two runs completed \nsuccessfully and at about the same speed as the PGX macmini does.\n\nLet me know if you see any issues.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 24 Jul 2024 15:25:37 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Thu, Jul 25, 2024 at 7:25 AM Joe Conway <[email protected]> wrote:\n> I *think* I finally have it in a good place. I replaced the nvme\n> enclosure that I bought the other day (which had a 10G interface speed)\n> with a new one (which has 40G rated speed). The entire ~/.tart directory\n> is a symlink to /Volumes/extnvme. The last two runs completed\n> successfully and at about the same speed as the PGX macmini does.\n\nLooking good! Thanks. I have now pushed the patch to switch CI to\nSonoma, back-patched as far as 15. Let's see how that goes. I have\nalso paused the pgx machine for now, until Christophe is available to\nhelp us fix it.\n\n\n", "msg_date": "Thu, 25 Jul 2024 11:35:46 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Thu, Jul 25, 2024 at 11:35 AM Thomas Munro <[email protected]> wrote:\n> Looking good! Thanks. I have now pushed the patch to switch CI to\n> Sonoma, back-patched as far as 15. Let's see how that goes. I have\n> also paused the pgx machine for now, until Christophe is available to\n> help us fix it.\n\nCfbot builds are working nicely on Sonoma.\n\nBut... unfortunately the github.com/postgres/postgres CI was working\nfor REL_15_STABLE only, and not 16, 17 or master. After scratching my\nhead for a moment, I realised that the new logic for finding the most\nrecent version of MacPorts is now picking up a new beta version of\nMacPorts that has just been published, and apparently it doesn't work\nquite right and couldn't install meson. D'oh!\n\nI have pushed a fix for that: I went back to requesting 2.9.3 until we\nare ready to change it.\n\nLong version: I had imagined we might one day need to nail down the\nversion in a commented out line already, I just failed to think about\nnon-release versions appearing in the list. An alternative might be\nto use a pattern that matches stable releases eg [0-9][0-9.]* to\nexclude stuff like 2.10-beta1 but for now I have cold feet about that\nlack of control. 
While thinking about that, it occurred to me that it\nmight also be better if it also re-installs from scratch whenever our\nscript that installs MacPorts changes, so I included its md5 in the\ncache key. Otherwise it's a bit hard to test, since the cached\ninstallation survives across rebuilds by design (that's why cfbot\nisn't affected, it installed 2.9.3 before they unleashed 2.10-beta1\nand cached the result). This way, if someone changes 2.9.3 to\n2.whatever in that script, it'll do a fresh installation of MacPorts\non the first build in a given github account, and then later builds\nwill work from the cached copy. Seems like desirable behaviour for\nfuture maintenance work.\n\n\n", "msg_date": "Thu, 25 Jul 2024 14:53:53 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "\n\n> On Jul 24, 2024, at 16:35, Thomas Munro <[email protected]> wrote:\n> I have\n> also paused the pgx machine for now, until Christophe is available to\n> help us fix it.\n\nPresent. I haven't fully digested the thread, but is there a fix that doesn't involve adding more storage to the machine (I'm happy to do that, just wanted to confirm)?\n\n--\nChristophe Pettus / [email protected]\nChief Executive Officer / PGX Inc. / 24x7 Support, Consulting, Development / pgexperts.com\nSee us at PgConf.nyc and PgConf.eu!\n\n\n\n", "msg_date": "Wed, 24 Jul 2024 21:54:32 -0700", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Thu, Jul 25, 2024 at 4:55 PM Christophe Pettus\n<[email protected]> wrote:\n> Present. I haven't fully digested the thread, but is there a fix that doesn't involve adding more storage to the machine (I'm happy to do that, just wanted to confirm)?\n\nHow much disk space is free after deleting existing cached images?\n\n\n", "msg_date": "Thu, 25 Jul 2024 17:09:58 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 7/25/24 01:09, Thomas Munro wrote:\n> On Thu, Jul 25, 2024 at 4:55 PM Christophe Pettus \n> <[email protected]> wrote:\n>> Present. I haven't fully digested the thread, but is there a fix\n>> that doesn't involve adding more storage to the machine (I'm happy\n>> to do that, just wanted to confirm)?\n> \n> How much disk space is free after deleting existing cached images?\n\n\nFWIW, here is the TL;DR:\n* Bought to expand storage:\n-----------\nhttps://www.amazon.com/dp/B0B7CMZ3QH?ref=ppx_yo2ov_dt_b_product_details\nhttps://www.amazon.com/dp/B0BYPVNBTQ?psc=1&ref=ppx_yo2ov_dt_b_product_details\n-----------\n* Moved \"~/.tart\"\n-----------\nmacmini:~ ci-run$ ll ~/.tart\n... 
/Users/ci-run/.tart -> /Volumes/extnvme/ci-run/.tart\n-----------\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 25 Jul 2024 08:48:33 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "\n\n> On Jul 24, 2024, at 22:09, Thomas Munro <[email protected]> wrote:\n> How much disk space is free after deleting existing cached images?\n\nAt the moment, the system (and only real) volume is only 20% used (without deleting anything):\n\ncfbot@Cockerel ~ % df ~\nFilesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on\n/dev/disk3s5 965595304 180477256 725211416 20% 637368 3626057080 0% /System/Volumes/Data\n\n\n--\nChristophe Pettus / [email protected]\nChief Executive Officer / PGX Inc. / 24x7 Support, Consulting, Development / pgexperts.com\nSee us at PgConf.nyc and PgConf.eu!\n\n\n\n", "msg_date": "Wed, 31 Jul 2024 10:55:33 -0700", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "\n\n> On Jul 31, 2024, at 10:55, Christophe Pettus <[email protected]> wrote:\n> \n> \n> \n>> On Jul 24, 2024, at 22:09, Thomas Munro <[email protected]> wrote:\n>> How much disk space is free after deleting existing cached images?\n> \n> At the moment, the system (and only real) volume is only 20% used (without deleting anything):\n> \n> cfbot@Cockerel ~ % df ~\n> Filesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on\n> /dev/disk3s5 965595304 180477256 725211416 20% 637368 3626057080 0% /System/Volumes/Data\n\nA quick search shows that the issue is most likely an old version of `tart`. I've upgraded both to the current cirrus/cli version. Can you let me know if things look resolved?\n\n--\nChristophe Pettus / [email protected]\nChief Executive Officer / PGX Inc. / 24x7 Support, Consulting, Development / pgexperts.com\nSee us at PgConf.nyc and PgConf.eu!\n\n\n\n", "msg_date": "Wed, 31 Jul 2024 11:07:39 -0700", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Thu, Aug 1, 2024 at 6:08 AM Christophe Pettus\n<[email protected]> wrote:\n> A quick search shows that the issue is most likely an old version of `tart`. I've upgraded both to the current cirrus/cli version. Can you let me know if things look resolved?\n\nI re-enabled it in the pool that cfbot uses for a couple of hours, and\nit said[1]:\n\nPersistent worker failed to start the task: tart command returned\nnon-zero exit code: \"root privileges are required to run and\npasswordless sudo was not available\"\n\nI recall Joe and Andres dealing with something like that at some point\non their Macs, but I don't have the details...\n\n[1] https://cirrus-ci.com/task/5597845632319488\n\n\n", "msg_date": "Fri, 2 Aug 2024 13:42:11 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 8/1/24 21:42, Thomas Munro wrote:\n> On Thu, Aug 1, 2024 at 6:08 AM Christophe Pettus\n> <[email protected]> wrote:\n>> A quick search shows that the issue is most likely an old version of `tart`. I've upgraded both to the current cirrus/cli version. 
Can you let me know if things look resolved?\n> \n> I re-enabled it in the pool that cfbot uses for a couple of hours, and\n> it said[1]:\n> \n> Persistent worker failed to start the task: tart command returned\n> non-zero exit code: \"root privileges are required to run and\n> passwordless sudo was not available\"\n> \n> I recall Joe and Andres dealing with something like that at some point\n> on their Macs, but I don't have the details...\n> \n> [1] https://cirrus-ci.com/task/5597845632319488\n\n\nI think the solution was that the ci runner had to be executed directly \nas the ci user.\n\n8<-----------------\nmacmini:~ ci-run$ cat /Users/ci-run/bin/ci1.sh\n#!/bin/bash\n\nWORKER_NAME=jc-m2-1\n\nTOKEN=/Users/ci-run/cirrus-token.txt\nWORKER_YML=/Users/ci-run/cirrus-worker-macos.yml\nBREW_BIN=/opt/homebrew/bin\nCIRRUS=${BREW_BIN}/cirrus\nCAT=/bin/cat\n\nexport PATH=${BREW_BIN}:${PATH}\n${CIRRUS} worker run \\\n -f \"${WORKER_YML}\" \\\n --name \"${WORKER_NAME}\" \\\n --token \"$(${CAT} ${TOKEN})\"\n\nmacmini:~ ci-run$ /Users/ci-run/bin/ci1.sh &\n8<-----------------\n\nI tried making this run like a service using launchctl, but that was \ngiving the permissions errors. I finally gave up trying to figure it out \nand just accepted that I need to manually start the script whenever I \nreboot the mac.\n\nBTW, if there are any MacOS launchctl wizards around, I am all ears :-)\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 2 Aug 2024 08:07:40 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Sat, Aug 3, 2024 at 12:07 AM Joe Conway <[email protected]> wrote:\n> I tried making this run like a service using launchctl, but that was\n> giving the permissions errors. I finally gave up trying to figure it out\n> and just accepted that I need to manually start the script whenever I\n> reboot the mac.\n\nIt seems to be unhappy recently:\n\nPersistent worker failed to start the task: tart isolation failed:\nfailed to create VM cloned from\n\"ghcr.io/cirruslabs/macos-runner:sonoma\": tart command returned\nnon-zero exit code: \"tart/VMStorageOCI.swift:5: Fatal error: 'try!'\nexpression unexpectedly raised an error: Error\nDomain=NSCocoaErrorDomain Code=512 \\\"The file “ci-run” couldn’t be\nsaved in the folder “Users”.\\\" UserInfo={NSFilePath=/Users/ci-run,\nNSUnderlyingError=0x6000019f0720 {Error Domain=NSPOSIXErrorDomain\nCode=20 \\\"Not a directory\\\"}}\"\n\n\n", "msg_date": "Mon, 9 Sep 2024 08:55:31 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 9/8/24 16:55, Thomas Munro wrote:\n> On Sat, Aug 3, 2024 at 12:07 AM Joe Conway <[email protected]> wrote:\n>> I tried making this run like a service using launchctl, but that was\n>> giving the permissions errors. 
I finally gave up trying to figure it out\n>> and just accepted that I need to manually start the script whenever I\n>> reboot the mac.\n> \n> It seems to be unhappy recently:\n> \n> Persistent worker failed to start the task: tart isolation failed:\n> failed to create VM cloned from\n> \"ghcr.io/cirruslabs/macos-runner:sonoma\": tart command returned\n> non-zero exit code: \"tart/VMStorageOCI.swift:5: Fatal error: 'try!'\n> expression unexpectedly raised an error: Error\n> Domain=NSCocoaErrorDomain Code=512 \\\"The file “ci-run” couldn’t be\n> saved in the folder “Users”.\\\" UserInfo={NSFilePath=/Users/ci-run,\n> NSUnderlyingError=0x6000019f0720 {Error Domain=NSPOSIXErrorDomain\n> Code=20 \\\"Not a directory\\\"}}\"\n\n\nSeems the mounted drive got unmounted somehow ¯\\_(ツ)_/¯\n\nPlease check it out and let me know if it is working properly now.\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 8 Sep 2024 21:37:55 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Mon, Sep 9, 2024 at 1:37 PM Joe Conway <[email protected]> wrote:\n> Seems the mounted drive got unmounted somehow ¯\\_(ツ)_/¯\n>\n> Please check it out and let me know if it is working properly now.\n\nLooks good, thanks!\n\n\n", "msg_date": "Tue, 10 Sep 2024 15:04:32 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On Tue, Sep 10, 2024 at 3:04 PM Thomas Munro <[email protected]> wrote:\n> On Mon, Sep 9, 2024 at 1:37 PM Joe Conway <[email protected]> wrote:\n> > Seems the mounted drive got unmounted somehow ¯\\_(ツ)_/¯\n> >\n> > Please check it out and let me know if it is working properly now.\n>\n> Looks good, thanks!\n\n... but it's broken again.\n\n\n", "msg_date": "Thu, 12 Sep 2024 13:05:59 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI, macports, darwin version problems" }, { "msg_contents": "On 9/12/24 02:05, Thomas Munro wrote:\n> On Tue, Sep 10, 2024 at 3:04 PM Thomas Munro <[email protected]> wrote:\n>> On Mon, Sep 9, 2024 at 1:37 PM Joe Conway <[email protected]> wrote:\n>>> Seems the mounted drive got unmounted somehow ¯\\_(ツ)_/¯\n>>>\n>>> Please check it out and let me know if it is working properly now.\n>>\n>> Looks good, thanks!\n> \n> ... but it's broken again.\n\n\nThe external volume somehow got unmounted again :-(\nI have rebooted it and restarted the process now.\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 12 Sep 2024 11:48:56 +0100", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI, macports, darwin version problems" } ]
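The setup described above (tart storage on an external NVMe volume behind a ~/.tart symlink, plus a manually started cirrus worker) failed twice for the same reason: the external volume silently unmounted. As a rough illustration only, here is a minimal Python health-check sketch that could run from cron or a LaunchAgent before the worker picks up tasks. The paths, the 60 GB threshold, and the pgrep pattern are assumptions for the example, not the actual machines' configuration.

```python
import os
import shutil
import subprocess
import sys

# Assumed layout: ~/.tart is a symlink onto the external volume.
TART_DIR = os.path.expanduser("~/.tart")
MIN_FREE_GB = 60  # rough headroom for keeping two macOS images cached

target = os.path.realpath(TART_DIR)
if not os.path.isdir(target):
    sys.exit("~/.tart does not resolve to a directory -- is the external volume mounted?")

free_gb = shutil.disk_usage(target).free / 2**30
if free_gb < MIN_FREE_GB:
    sys.exit(f"only {free_gb:.0f} GB free under {target}; prune cached tart images")

# Check that the persistent cirrus worker is still running.
worker = subprocess.run(["pgrep", "-f", "cirrus worker run"], capture_output=True)
if worker.returncode != 0:
    sys.exit("no 'cirrus worker run' process found; restart ~/bin/ci1.sh")

print("tart storage and cirrus worker look healthy")
```

This does not answer the launchctl question, but it would at least surface the "volume got unmounted again" failure mode before CI tasks start erroring out.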
[ { "msg_contents": "psql (PostgreSQL) 17beta2 (Debian 17~beta2-1.pgdg+~20240625.1534.g23c5a0e)\n\nFailed to retrieve data from the server..\n\n\nretrieve information from database:\n\ncolumn \"daticulocale\" does not exist\nLINE 5: datconnlimit, daticulocale, daticurules, datcollversion,\n^\nHINT: Perhaps you meant to reference the column \"db.datlocale\".\n--\n\n\nretrieve information from tables:\n\n'>' not supported between instances of 'NoneType' and 'int'\n--\n\nretrieve information from schema's has no issues.\n\n\n\nINFO PGADMIN:\n\nversion 8.8\nApplication Mode: Desktop\nNW.js Version: 0.77.0\nBrowser: Chromium 114.0.5735.91\nOperating System: Kubuntu 24.04 - Linux-6.8.0-35-generic-x86_64-with \nglibc2.39\n\n\n", "msg_date": "Wed, 26 Jun 2024 02:35:59 +0200", "msg_from": "=?UTF-8?Q?Andr=C3=A9_Verwijs?= <[email protected]>", "msg_from_op": true, "msg_subject": "psql (PostgreSQL) 17beta2 (Debian\n 17~beta2-1.pgdg+~20240625.1534.g23c5a0e) Failed to retrieve data from the\n server.." }, { "msg_contents": "=?UTF-8?Q?Andr=C3=A9_Verwijs?= <[email protected]> writes:\n> retrieve information from database:\n> column \"daticulocale\" does not exist\n> LINE 5: datconnlimit, daticulocale, daticurules, datcollversion,\n> ^\n> HINT: Perhaps you meant to reference the column \"db.datlocale\".\n\nI'm guessing you need to complain to the pgadmin folks about this?\nAFAICS there is no client included in core postgres that would\ngenerate a query spelled exactly like this, and certainly none\nthat would try to do so against a v17 server.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Jun 2024 20:52:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql (PostgreSQL) 17beta2 (Debian\n 17~beta2-1.pgdg+~20240625.1534.g23c5a0e) Failed to retrieve data from the\n server.." }, { "msg_contents": "On Tue, Jun 25, 2024 at 5:36 PM André Verwijs <[email protected]> wrote:\n\n> psql (PostgreSQL) 17beta2 (Debian 17~beta2-1.pgdg+~20240625.1534.g23c5a0e)\n\n\n> column \"daticulocale\" does not exist\n> LINE 5: datconnlimit, daticulocale, daticurules, datcollversion,\n> ^\n> HINT: Perhaps you meant to reference the column \"db.datlocale\".\n> --\n>\n>\n> INFO PGADMIN:\n>\n>\n We seem to be missing a release note item for the catalog breakage done in\n f696c0c\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f696c0cd5f\n\nDavid J.\n\nOn Tue, Jun 25, 2024 at 5:36 PM André Verwijs <[email protected]> wrote:psql (PostgreSQL) 17beta2 (Debian 17~beta2-1.pgdg+~20240625.1534.g23c5a0e)\ncolumn \"daticulocale\" does not exist\nLINE 5: datconnlimit, daticulocale, daticurules, datcollversion,\n^\nHINT: Perhaps you meant to reference the column \"db.datlocale\".\n--\n\nINFO PGADMIN: We seem to be missing a release note item for the catalog breakage done in f696c0chttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f696c0cd5fDavid J.", "msg_date": "Tue, 25 Jun 2024 17:52:47 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql (PostgreSQL) 17beta2 (Debian\n 17~beta2-1.pgdg+~20240625.1534.g23c5a0e)\n Failed to retrieve data from the server.." }, { "msg_contents": "On Tue, Jun 25, 2024 at 05:52:47PM -0700, David G. 
Johnston wrote:\n> On Tue, Jun 25, 2024 at 5:36 PM André Verwijs <[email protected]> wrote:\n> \n> psql (PostgreSQL) 17beta2 (Debian 17~beta2-1.pgdg+~20240625.1534.g23c5a0e)\n> \n> \n> column \"daticulocale\" does not exist\n> LINE 5: datconnlimit, daticulocale, daticurules, datcollversion,\n> ^\n> HINT: Perhaps you meant to reference the column \"db.datlocale\".\n> --\n> \n> \n> INFO PGADMIN:\n> \n> \n> \n>  We seem to be missing a release note item for the catalog breakage done in\n>  f696c0c\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f696c0cd5f\n\nIt seemed too internal to mention in the release notes --- more of an\ninfrastructure change, but I can add it if I was wrong about this.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 25 Jun 2024 21:54:37 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql (PostgreSQL) 17beta2 (Debian\n 17~beta2-1.pgdg+~20240625.1534.g23c5a0e) Failed to retrieve data from the\n server.." }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Tue, Jun 25, 2024 at 05:52:47PM -0700, David G. Johnston wrote:\n>> We seem to be missing a release note item for the catalog breakage done in\n>> f696c0c\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f696c0cd5f\n\n> It seemed too internal to mention in the release notes --- more of an\n> infrastructure change, but I can add it if I was wrong about this.\n\nAs this breakage demonstrates, that change is quite\napplication-visible. It needs an entry under incompatibilities.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2024 00:06:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql (PostgreSQL) 17beta2 (Debian\n 17~beta2-1.pgdg+~20240625.1534.g23c5a0e) Failed to retrieve data from the\n server.." }, { "msg_contents": "On Wed, Jun 26, 2024 at 12:06:13AM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Tue, Jun 25, 2024 at 05:52:47PM -0700, David G. Johnston wrote:\n> >> We seem to be missing a release note item for the catalog breakage done in\n> >> f696c0c\n> >> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f696c0cd5f\n> \n> > It seemed too internal to mention in the release notes --- more of an\n> > infrastructure change, but I can add it if I was wrong about this.\n> \n> As this breakage demonstrates, that change is quite\n> application-visible. It needs an entry under incompatibilities.\n\nOkay, will do today.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 26 Jun 2024 09:59:35 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql (PostgreSQL) 17beta2 (Debian\n 17~beta2-1.pgdg+~20240625.1534.g23c5a0e) Failed to retrieve data from the\n server.." }, { "msg_contents": "On Wed, Jun 26, 2024 at 09:59:35AM -0400, Bruce Momjian wrote:\n> On Wed, Jun 26, 2024 at 12:06:13AM -0400, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > On Tue, Jun 25, 2024 at 05:52:47PM -0700, David G. 
Johnston wrote:\n> > >> We seem to be missing a release note item for the catalog breakage done in\n> > >> f696c0c\n> > >> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f696c0cd5f\n> > \n> > > It seemed too internal to mention in the release notes --- more of an\n> > > infrastructure change, but I can add it if I was wrong about this.\n> > \n> > As this breakage demonstrates, that change is quite\n> > application-visible. It needs an entry under incompatibilities.\n> \n> Okay, will do today.\n\nDone.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 26 Jun 2024 13:14:08 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql (PostgreSQL) 17beta2 (Debian\n 17~beta2-1.pgdg+~20240625.1534.g23c5a0e) Failed to retrieve data from the\n server.." } ]
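The incompatibility behind the pgAdmin error above is that PostgreSQL 17 renamed pg_database.daticulocale to datlocale (commit f696c0cd5f). A client tool can pick the column name based on the server version, along these lines. This is a simplified sketch using psycopg2; it assumes a server of version 15 or later (daticulocale does not exist before 15), the connection string is a placeholder, and it is not pgAdmin's actual fix.

```python
import psycopg2

# Placeholder connection parameters.
conn = psycopg2.connect("dbname=postgres")
cur = conn.cursor()

# conn.server_version is an integer such as 160003 or 170000.
locale_col = "datlocale" if conn.server_version >= 170000 else "daticulocale"

cur.execute(
    f"SELECT datname, datcollate, datctype, {locale_col} AS datlocale "
    "FROM pg_database ORDER BY datname"
)
for datname, collate, ctype, locale in cur.fetchall():
    print(datname, collate, ctype, locale)

cur.close()
conn.close()
```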
[ { "msg_contents": "Hi Team,\n\nCurrently we are using SHA-256 default for password_encryption in our postgresql deployments. Is there any active work being done for adding additional hashing options like PBKDF2, HKDF, SCRYPT or Argon2 password hashing functions, either of which is only accepted as a algorithms that should be used for encrypting or hashing the password at storage as per the Organization's Cryptography Standard.\n\nIf it is not in current plan, is there a plan to include that in subsequent versions?\n\nThanks and Regards,\nAnbazhagan M\n\n\n\n\n\n\n\n\n\nHi Team,\n \nCurrently we are using SHA-256 default for password_encryption in our postgresql deployments. Is there any active work being done for adding additional hashing options like PBKDF2, HKDF, SCRYPT or Argon2 password hashing\n functions, either of which is only accepted as a algorithms that should be used for encrypting or hashing the password at storage as per the Organization's Cryptography Standard.\n\n \nIf it is not in current plan, is there a plan to include that in subsequent versions?\n \nThanks and Regards,\nAnbazhagan M", "msg_date": "Wed, 26 Jun 2024 05:00:23 +0000", "msg_from": "\"M, Anbazhagan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Reg: Alternate way of hashing database role passwords" }, { "msg_contents": "\"M, Anbazhagan\" <[email protected]> writes:\n> Currently we are using SHA-256 default for password_encryption in our postgresql deployments. Is there any active work being done for adding additional hashing options like PBKDF2, HKDF, SCRYPT or Argon2 password hashing functions, either of which is only accepted as a algorithms that should be used for encrypting or hashing the password at storage as per the Organization's Cryptography Standard.\n\n> If it is not in current plan, is there a plan to include that in subsequent versions?\n\nIt is not, and I doubt we have any interest in dramatically expanding\nthe set of allowed password hashes. Adding SCRAM was enough work and\ncreated a lot of client-v-server and cross-version incompatibility\nalready; nobody is in a hurry to repeat that. Moreover, I know of\nno reason to think that SHA-256 isn't perfectly adequate.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2024 12:11:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reg: Alternate way of hashing database role passwords" }, { "msg_contents": "On Wed, Jun 26, 2024 at 12:11 PM Tom Lane <[email protected]> wrote:\n> It is not, and I doubt we have any interest in dramatically expanding\n> the set of allowed password hashes. Adding SCRAM was enough work and\n> created a lot of client-v-server and cross-version incompatibility\n> already; nobody is in a hurry to repeat that. Moreover, I know of\n> no reason to think that SHA-256 isn't perfectly adequate.\n\nIf history is any guide, every algorithm will eventually look too\nweak. It seems inevitable that we're going to have to keep changing\nalgorithms as time passes. However, it seems like SCRAM is designed so\nthat different hash functions can be substituted into it, so what I'm\nhoping is that we can keep SCRAM and just replace SCRAM-SHA-256 with\nSCRAM-WHATEVER when SHA-256 starts to look too weak.\n\nWhat I find a bit surprising about Anbazhagan's question is that he\nasks about PBKDF2, which seems to be part of SCRAM already.[1] In\nfact, I think all the things he lists are key derivation functions,\nnot hash functions. 
I'm far from a cryptography expert, but it seems\nsurprising to me that somebody would be concerned about the KDF rather\nthan the hash function. We know that people get concerned about code\nthat still uses MD5, for example, or SHA-1, but this is the first time\nI can remember someone expressing a concern about a KDF.\n\nAnbazhagan, or anyone, is there some reason to think that the PBKDF2\napproach used by SCRAM is a problem?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n[1] https://en.wikipedia.org/wiki/Salted_Challenge_Response_Authentication_Mechanism#Password-based_derived_key,_or_salted_password\n\n\n", "msg_date": "Wed, 26 Jun 2024 12:59:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reg: Alternate way of hashing database role passwords" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Jun 26, 2024 at 12:11 PM Tom Lane <[email protected]> wrote:\n>> It is not, and I doubt we have any interest in dramatically expanding\n>> the set of allowed password hashes. Adding SCRAM was enough work and\n>> created a lot of client-v-server and cross-version incompatibility\n>> already; nobody is in a hurry to repeat that. Moreover, I know of\n>> no reason to think that SHA-256 isn't perfectly adequate.\n\n> If history is any guide, every algorithm will eventually look too\n> weak. It seems inevitable that we're going to have to keep changing\n> algorithms as time passes. However, it seems like SCRAM is designed so\n> that different hash functions can be substituted into it, so what I'm\n> hoping is that we can keep SCRAM and just replace SCRAM-SHA-256 with\n> SCRAM-WHATEVER when SHA-256 starts to look too weak.\n\nTotally agreed, that day will come. What I'm pushing back on is the\nsuggestion that we should implement a ton of variant password hash\nfunctionality on the basis of somebody's whim. The costs are large\nand they are not all paid by us, so the bar to replacing any part\nof that has to be very high.\n\n> What I find a bit surprising about Anbazhagan's question is that he\n> asks about PBKDF2, which seems to be part of SCRAM already.[1] In\n> fact, I think all the things he lists are key derivation functions,\n> not hash functions.\n\nThis I don't have any info about.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2024 13:39:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reg: Alternate way of hashing database role passwords" }, { "msg_contents": "> On 26 Jun 2024, at 18:59, Robert Haas <[email protected]> wrote:\n\n> However, it seems like SCRAM is designed so\n> that different hash functions can be substituted into it, so what I'm\n> hoping is that we can keep SCRAM and just replace SCRAM-SHA-256 with\n> SCRAM-WHATEVER when SHA-256 starts to look too weak.\n\nCorrect, SCRAM is an authentication method which can use different hashing\nalgorithms. There are current drafts for SCRAM-SHA-512 and SHA3-512 but they\nare some way away from being standardized. 
If they become standards at some\npoint reasonable to extend our support, but until then there is no evidence\nthat what we have is insecure AFAIK.\n\nhttps://datatracker.ietf.org/doc/html/draft-melnikov-scram-sha-512\nhttps://datatracker.ietf.org/doc/html/draft-melnikov-scram-sha3-512\n\n> What I find a bit surprising about Anbazhagan's question is that he\n> asks about PBKDF2, which seems to be part of SCRAM already.\n\nIn scram_SaltedPassword() we perform PBKDF2 with HMAC as the pseudorandom\nfunction.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 15:25:38 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reg: Alternate way of hashing database role passwords" } ]
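To make the relationship between SCRAM and PBKDF2 mentioned above concrete, here is a small illustrative sketch of deriving a SCRAM-SHA-256 verifier of the form PostgreSQL keeps in pg_authid.rolpassword. It is a simplification: the server also applies SASLprep normalization to the password, and the 4096-iteration count and 16-byte salt are just the usual defaults assumed here.

```python
import base64
import hashlib
import hmac
import os

def scram_sha256_verifier(password, iterations=4096, salt=None):
    """Build a SCRAM-SHA-256 verifier (RFC 5802/7677 style).

    Simplification: the real server runs SASLprep over the password first.
    """
    if salt is None:
        salt = os.urandom(16)
    # SaltedPassword := PBKDF2(HMAC-SHA-256, password, salt, iterations)
    salted = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()

    b64 = lambda raw: base64.b64encode(raw).decode("ascii")
    # Same layout as the verifier stored in pg_authid.rolpassword.
    return "SCRAM-SHA-256$%d:%s$%s:%s" % (
        iterations, b64(salt), b64(stored_key), b64(server_key))

print(scram_sha256_verifier("correct horse battery staple"))
```

Swapping SHA-256 for another digest, as the SCRAM-SHA-512 and SHA3-512 drafts do, only changes the hash plugged into the PBKDF2 and HMAC steps; the surrounding mechanism stays the same.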
[ { "msg_contents": "Hi All,\n\nIf pgindent encounters an error while applying pg_bsd_indent on a file, it\nreports an error to stderr but does not exit with non-zero status.\n\n$ src/tools/pgindent/pgindent .\nFailure in ./src/backend/optimizer/util/relnode.c: Error@2412: Stuff\nmissing from end of file\n\n$ echo $?\n0\n\nA zero status usually indicates success [1]. In that sense pgindent\nshouldn't be returning 0 in this case. It has not been able to process\nfile/s successfully. Not returning non-zero exit status in such cases means\nwe can not rely on `set -e` or `git rebase` s automatic detection of\ncommand failures. I propose to add non-zero exit status in the above case.\n\nIn the attached patch I have used exit code 3 for file processing errors.\nThe program exits only after reporting all such errors instead of exiting\non the first instance. That way we get to know all the errors upfront. But\nI am ok if we want to exit on the first instance. That might simplify its\ninteraction with other exit codes.\n\nWith this change, if I run pgident in `git rebase` it stops after those\nerrors automatically like below\n```\nExecuting: src/tools/pgindent/pgindent .\nFailure in ./src/backend/optimizer/util/relnode.c: Error@2424: Stuff\nmissing from end of file\n\nFailure in ./src/backend/optimizer/util/appendinfo.c: Error@1028: Stuff\nmissing from end of file\n\nwarning: execution failed: src/tools/pgindent/pgindent .\nYou can fix the problem, and then run\n\n git rebase --continue\n```\n\nI looked at pgindent README but it doesn't mention anything about exit\ncodes. So I believe this change is acceptable as per documentation.\n\nWith --check option pgindent reports a non-zero exit code instead of making\nchanges. So I feel the above change should happen at least if --check is\nprovided. But certainly better if we do it even without --check.\n\nIn case --check is specified and both the following happen a. pg_bsd_indent\nexits with non-zero status while processing some file and b. changes are\nproduced while processing some other file, the program will exit with\nstatus 2. It may be argued that instead it should exit with code 3. I am\nopen to both.\n\n[1] https://en.wikipedia.org/wiki/Exit_status\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Wed, 26 Jun 2024 16:07:31 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "pgindent exit status if a file encounters an error" }, { "msg_contents": "On 2024-06-26 We 6:37 AM, Ashutosh Bapat wrote:\n> Hi All,\n>\n> If pgindent encounters an error while applying pg_bsd_indent on a \n> file, it reports an error to stderr but does not exit with non-zero \n> status.\n>\n> $ src/tools/pgindent/pgindent .\n> Failure in ./src/backend/optimizer/util/relnode.c: Error@2412: Stuff \n> missing from end of file\n>\n> $ echo $?\n> 0\n>\n> A zero status usually indicates success [1]. In that sense pgindent \n> shouldn't be returning 0 in this case. It has not been able to process \n> file/s successfully. Not returning non-zero exit status in such cases \n> means we can not rely on `set -e` or `git rebase` s automatic \n> detection of command failures. I propose to add non-zero exit status \n> in the above case.\n>\n> In the attached patch I have used exit code 3 for file processing \n> errors. The program exits only after reporting all such errors instead \n> of exiting on the first instance. That way we get to know all the \n> errors upfront. But I am ok if we want to exit on the first instance. 
\n> That might simplify its interaction with other exit codes.\n>\n> With this change, if I run pgident in `git rebase` it stops after \n> those errors automatically like below\n> ```\n> Executing: src/tools/pgindent/pgindent .\n> Failure in ./src/backend/optimizer/util/relnode.c: Error@2424: Stuff \n> missing from end of file\n>\n> Failure in ./src/backend/optimizer/util/appendinfo.c: Error@1028: \n> Stuff missing from end of file\n>\n> warning: execution failed: src/tools/pgindent/pgindent .\n> You can fix the problem, and then run\n>\n>   git rebase --continue\n> ```\n>\n> I looked at pgindent README but it doesn't mention anything about exit \n> codes. So I believe this change is acceptable as per documentation.\n>\n> With --check option pgindent reports a non-zero exit code instead of \n> making changes. So I feel the above change should happen at least if \n> --check is provided. But certainly better if we do it even without \n> --check.\n>\n> In case --check is specified and both the following happen a. \n> pg_bsd_indent exits with non-zero status while processing some file \n> and b. changes are produced while processing some other file, the \n> program will exit with status 2. It may be argued that instead it \n> should exit with code 3. I am open to both.\n\n\nYeah, I think this is reasonable but we should adjust the status setting \na few lines lower to\n\n\n    $status ||= 2;\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-06-26 We 6:37 AM, Ashutosh\n Bapat wrote:\n\n\n\nHi All,\n \n\nIf pgindent encounters an error while applying\n pg_bsd_indent on a file, it reports an error to stderr but\n does not exit with non-zero status.\n\n$ src/tools/pgindent/pgindent .\n Failure in ./src/backend/optimizer/util/relnode.c:\n Error@2412: Stuff missing from end of file\n\n $ echo $?\n 0\n\n\nA zero status usually indicates success [1]. In that\n sense pgindent shouldn't be returning 0 in this case. It\n has not been able to process file/s successfully. Not\n returning non-zero exit status in such cases means we can\n not rely on `set -e` or `git rebase` s automatic detection\n of command failures. I propose to add non-zero exit status\n in the above case.\n\n\nIn the attached patch I have used exit code 3 for file\n processing errors. The program exits only after reporting\n all such errors instead of exiting on the first instance.\n That way we get to know all the errors upfront. But I am\n ok if we want to exit on the first instance. That might\n simplify its interaction with other exit codes.\n\n\nWith this change, if I run pgident in `git rebase` it\n stops after those errors automatically like below\n```\nExecuting: src/tools/pgindent/pgindent .\n Failure in ./src/backend/optimizer/util/relnode.c:\n Error@2424: Stuff missing from end of file\n\n Failure in ./src/backend/optimizer/util/appendinfo.c:\n Error@1028: Stuff missing from end of file\n\n warning: execution failed: src/tools/pgindent/pgindent .\n You can fix the problem, and then run\n\n   git rebase --continue\n\n```\n\n\n I looked at pgindent README but it doesn't mention\n anything about exit codes. So I believe this change is\n acceptable as per documentation.\n\n\nWith --check option pgindent reports a non-zero exit\n code instead of making changes. So I feel the above\n change should happen at least if --check is provided.\n But certainly better if we do it even without --check.\n\n\n\nIn case --check is specified and both the following\n happen a. 
pg_bsd_indent exits with non-zero status while\n processing some file and b. changes are produced while\n processing some other file, the program will exit with\n status 2. It may be argued that instead it should exit\n with code 3. I am open to both.\n\n\n\n\n\n\nYeah, I think this is reasonable but we should adjust the status\n setting a few lines lower to\n\n\n   $status ||= 2;\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 26 Jun 2024 07:23:39 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgindent exit status if a file encounters an error" }, { "msg_contents": "Hi Andrew,\nThanks for the quick review.\n\nOn Wed, Jun 26, 2024 at 4:53 PM Andrew Dunstan <[email protected]> wrote:\n\n>\n> With --check option pgindent reports a non-zero exit code instead of\n> making changes. So I feel the above change should happen at least if\n> --check is provided. But certainly better if we do it even without --check.\n>\n> In case --check is specified and both the following happen a.\n> pg_bsd_indent exits with non-zero status while processing some file and b.\n> changes are produced while processing some other file, the program will\n> exit with status 2. It may be argued that instead it should exit with code\n> 3. I am open to both.\n>\n>\n> Yeah, I think this is reasonable but we should adjust the status setting a\n> few lines lower to\n>\n>\n> $status ||= 2;\n>\n\nSo you are suggesting that status 3 is preferred over status 2 when both\nare applicable. I am fine with that.\n\nHere's what the behaviour looks like: (notice echo $? after running\npgindent)\n\n1. Running without --check, if pgindent encounters file processing errors,\nexit code is 3.\n2. Running with --check, exit code depends upon whether pgindent encounters\na file with processing error first or a file that undergoes a change.\n a. If it encounters a file that would undergo a change first, exit\nstatus is 2\n b. If it encounters a file with processing error first, exit status is 3\n3. If --check is specified and no file undergoes a change, but there are\nfile processing errors, it will exit with code 3.\n\nThe variation in the second case based on the order of files processed\nlooks fine to me. What do you say?\n\nThe usage help mentions exit code 2 specifically while describing --check\noption but it doesn't mention exit code 1. Neither does the README. So I\ndon't think we need to document exit code 3 anywhere. Please let me know if\nyou think otherwise.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Wed, 26 Jun 2024 18:58:12 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgindent exit status if a file encounters an error" }, { "msg_contents": "Ashutosh Bapat <[email protected]> writes:\n> The usage help mentions exit code 2 specifically while describing --check\n> option but it doesn't mention exit code 1. Neither does the README. So I\n> don't think we need to document exit code 3 anywhere. 
Please let me know if\n> you think otherwise.\n\nI think we should have at least a code comment summarizing the\npossible exit codes, along the lines of\n\n# Exit codes:\n# 0 -- all OK\n# 1 -- could not invoke pgindent, nothing done\n# 2 -- --check mode and at least one file requires changes\n# 3 -- pgindent failed on at least one file\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2024 11:24:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgindent exit status if a file encounters an error" }, { "msg_contents": "On Wed, Jun 26, 2024 at 8:54 PM Tom Lane <[email protected]> wrote:\n\n> Ashutosh Bapat <[email protected]> writes:\n> > The usage help mentions exit code 2 specifically while describing --check\n> > option but it doesn't mention exit code 1. Neither does the README. So I\n> > don't think we need to document exit code 3 anywhere. Please let me know\n> if\n> > you think otherwise.\n>\n> I think we should have at least a code comment summarizing the\n> possible exit codes, along the lines of\n>\n> # Exit codes:\n> # 0 -- all OK\n> # 1 -- could not invoke pgindent, nothing done\n> # 2 -- --check mode and at least one file requires changes\n> # 3 -- pgindent failed on at least one file\n>\n\nThanks. Here's a patch with these lines.\n\nIn an offline chat Andrew mentioned that he expects more changes in the\npatch and he would take care of those. Will review and test his patch.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Fri, 28 Jun 2024 18:05:35 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgindent exit status if a file encounters an error" } ]
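With the exit codes summarized in the comment above (0 all OK, 1 pgindent could not be invoked, 2 --check mode found files needing changes, 3 pg_bsd_indent failed on at least one file), a CI or pre-commit hook can tell "needs re-indenting" apart from "indent choked on a file". The following wrapper is a hypothetical sketch based on those proposed codes; the path and messages are illustrative assumptions.

```python
import subprocess
import sys

# Exit codes assumed from the proposal in this thread:
#   0 all OK, 1 pgindent could not run, 2 --check found files to change,
#   3 pg_bsd_indent failed on at least one file.
result = subprocess.run(["src/tools/pgindent/pgindent", "--check", "."])

if result.returncode == 0:
    print("indentation is clean")
elif result.returncode == 2:
    print("some files would be re-indented; run pgindent and amend the commit")
    sys.exit(1)
elif result.returncode == 3:
    print("pg_bsd_indent failed on at least one file; fix the source first")
    sys.exit(1)
else:
    print("pgindent could not be invoked at all")
    sys.exit(1)
```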
[ { "msg_contents": "Hi,\n\nThe function 'libpqrcv_check_conninfo()' returns 'void', but the comment \nabove says that the function returns true or false.\nI've attached a patch to modify the comment.\n\nRegard,\nRintaro Ikeda", "msg_date": "Wed, 26 Jun 2024 21:52:55 +0900", "msg_from": "ikedarintarof <[email protected]>", "msg_from_op": true, "msg_subject": "doc: modify the comment in function libpqrcv_check_conninfo()" }, { "msg_contents": "On Wed, 26 Jun 2024 at 14:53, ikedarintarof\n<[email protected]> wrote:\n> The function 'libpqrcv_check_conninfo()' returns 'void', but the comment\n> above says that the function returns true or false.\n> I've attached a patch to modify the comment.\n\nAgreed that the current comment is wrong, but the new comment should\nmention the must_use_password argument. Because only if\nmust_use_password is true, will it throw an error.\n\n\n", "msg_date": "Wed, 26 Jun 2024 16:36:49 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc: modify the comment in function libpqrcv_check_conninfo()" }, { "msg_contents": "Thank you for your comment!\n\nI've added the must_use_password argument. A new patch is attached.\n\n\nOn 2024-06-26 23:36, Jelte Fennema-Nio wrote:\n> On Wed, 26 Jun 2024 at 14:53, ikedarintarof\n> <[email protected]> wrote:\n>> The function 'libpqrcv_check_conninfo()' returns 'void', but the \n>> comment\n>> above says that the function returns true or false.\n>> I've attached a patch to modify the comment.\n> \n> Agreed that the current comment is wrong, but the new comment should\n> mention the must_use_password argument. Because only if\n> must_use_password is true, will it throw an error.", "msg_date": "Thu, 27 Jun 2024 16:09:37 +0900", "msg_from": "ikedarintarof <[email protected]>", "msg_from_op": true, "msg_subject": "Re: doc: modify the comment in function libpqrcv_check_conninfo()" }, { "msg_contents": "On Thu, 27 Jun 2024 at 09:09, ikedarintarof\n<[email protected]> wrote:\n>\n> Thank you for your comment!\n>\n> I've added the must_use_password argument. A new patch is attached.\n\ns/whther/whether\n\nnit: \"it will do nothing\" sounds a bit strange to me (but I'm not\nnative english). Something like this reads more natural to me:\n\nan error. If must_use_password is true, the function raises an error\nif no password is provided in the connection string. In any other case\nit successfully completes.\n\n\n", "msg_date": "Thu, 27 Jun 2024 10:18:39 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc: modify the comment in function libpqrcv_check_conninfo()" }, { "msg_contents": "Thanks for your suggestion. I used ChatGPT to choose the wording, but \nit's still difficult for me.\n\nThe new patch includes your suggestion.\n\nOn 2024-06-27 17:18, Jelte Fennema-Nio wrote:\n> On Thu, 27 Jun 2024 at 09:09, ikedarintarof\n> <[email protected]> wrote:\n>> \n>> Thank you for your comment!\n>> \n>> I've added the must_use_password argument. A new patch is attached.\n> \n> s/whther/whether\n> \n> nit: \"it will do nothing\" sounds a bit strange to me (but I'm not\n> native english). Something like this reads more natural to me:\n> \n> an error. If must_use_password is true, the function raises an error\n> if no password is provided in the connection string. 
In any other case\n> it successfully completes.", "msg_date": "Thu, 27 Jun 2024 19:27:33 +0900", "msg_from": "ikedarintarof <[email protected]>", "msg_from_op": true, "msg_subject": "Re: doc: modify the comment in function libpqrcv_check_conninfo()" }, { "msg_contents": "On Thu, 27 Jun 2024 at 12:27, ikedarintarof\n<[email protected]> wrote:\n> Thanks for your suggestion. I used ChatGPT to choose the wording, but\n> it's still difficult for me.\n\nLooks good to me now (but obviously biased since you took my wording).\nAdding Robert, since he authored the commit that introduced this\ncomment.\n\n\n", "msg_date": "Mon, 1 Jul 2024 11:15:14 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc: modify the comment in function libpqrcv_check_conninfo()" }, { "msg_contents": "\n\nOn 2024/07/01 18:15, Jelte Fennema-Nio wrote:\n> On Thu, 27 Jun 2024 at 12:27, ikedarintarof\n> <[email protected]> wrote:\n>> Thanks for your suggestion. I used ChatGPT to choose the wording, but\n>> it's still difficult for me.\n> \n> Looks good to me now (but obviously biased since you took my wording).\n\nLGTM, too.\n\n\n * Validate connection info string, and determine whether it might cause\n * local filesystem access to be attempted.\n\nThe later part of the above comment for libpqrcv_check_conninfo()\nseems unclear. My understanding is that if must_use_password is true\nand this function completes without error, the password must be\nin the connection string, so there's no need to read the .pgpass file\n(i.e., no local filesystem access). That part seems to be trying to\nexplaing something like that. Is this correct?\n\nAnyway, wouldn't it be better to either remove this unclear part or\nrephrase it for clarity?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 8 Jul 2024 15:28:04 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc: modify the comment in function libpqrcv_check_conninfo()" }, { "msg_contents": "On 2024-07-08 15:28, Fujii Masao wrote:\n> On 2024/07/01 18:15, Jelte Fennema-Nio wrote:\n>> On Thu, 27 Jun 2024 at 12:27, ikedarintarof\n>> <[email protected]> wrote:\n>>> Thanks for your suggestion. I used ChatGPT to choose the wording, but\n>>> it's still difficult for me.\n>> \n>> Looks good to me now (but obviously biased since you took my wording).\n> \n> LGTM, too.\n> \n> \n> * Validate connection info string, and determine whether it might \n> cause\n> * local filesystem access to be attempted.\n> \n> The later part of the above comment for libpqrcv_check_conninfo()\n> seems unclear. My understanding is that if must_use_password is true\n> and this function completes without error, the password must be\n> in the connection string, so there's no need to read the .pgpass file\n> (i.e., no local filesystem access). That part seems to be trying to\n> explaing something like that. Is this correct?\n> \n> Anyway, wouldn't it be better to either remove this unclear part or\n> rephrase it for clarity?\n> \n> Regards,\n\nThank you for your comment.\n\nI agree \"local filesystem access\" is vague. I think the reference to \n.pgpass\n(local file system) is not necessary in the comment for \nlibpqrcv_check_conninfo()\nbecause it is explained in libpqrcv_connect() as shown below.\n\n> /*\n> * Re-validate connection string. 
The validation already happened at DDL\n> * time, but the subscription owner may have changed. If we don't \n> recheck\n> * with the correct must_use_password, it's possible that the connection\n> * will obtain the password from a different source, such as PGPASSFILE \n> or\n> * PGPASSWORD.\n> */\n> libpqrcv_check_conninfo(conninfo, must_use_password);\n\nI remove the unclear part from the previous patch and add some \nexplanation for\n later part of the function.\n\n\n\nOr, I think it is also good to make the comment more simple:\n> * The function checks that\n> * 1. connection info string is properly parsed and\n> * 2. a password is provided if must_use_password is true.\n> * If the check is failed, the it will raise an error.\n> */\n> static void\n> libpqrcv_check_conninfo(const char *conninfo, bool must_use_password)\n\nRegards,\n\nRintaro Ikeda", "msg_date": "Mon, 08 Jul 2024 20:44:56 +0900", "msg_from": "ikedarintarof <[email protected]>", "msg_from_op": true, "msg_subject": "Re: doc: modify the comment in function libpqrcv_check_conninfo()" }, { "msg_contents": "\n\nOn 2024/07/08 20:44, ikedarintarof wrote:\n> I remove the unclear part from the previous patch and add some explanation for\n>  later part of the function.\n\n- * Validate connection info string, and determine whether it might cause\n- * local filesystem access to be attempted.\n+ * The function\n+ * 1. validates connection info string and\n+ * 2. checks a password is provided if must_use_password is true.\n\nIMO, \"Validate connection info string\" is sufficient. The mention of\nmust_use_password is redundant since it's explained in the latter part of\nthe comment. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 9 Jul 2024 09:24:48 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc: modify the comment in function libpqrcv_check_conninfo()" }, { "msg_contents": "> - * Validate connection info string, and determine whether it might \n> cause\n> - * local filesystem access to be attempted.\n> + * The function\n> + * 1. validates connection info string and\n> + * 2. checks a password is provided if must_use_password is true.\n> \n> IMO, \"Validate connection info string\" is sufficient. The mention of\n> must_use_password is redundant since it's explained in the latter part \n> of\n> the comment.\n\nThat is reasonable for me. I've removed the second one.\nThe modified patch is attached.\n\nRegards,\n\nRintaro Ikeda", "msg_date": "Tue, 09 Jul 2024 15:56:34 +0900", "msg_from": "ikedarintarof <[email protected]>", "msg_from_op": true, "msg_subject": "Re: doc: modify the comment in function libpqrcv_check_conninfo()" }, { "msg_contents": "\n\nOn 2024/07/09 15:56, ikedarintarof wrote:\n>> - * Validate connection info string, and determine whether it might cause\n>> - * local filesystem access to be attempted.\n>> + * The function\n>> + * 1. validates connection info string and\n>> + * 2. checks a password is provided if must_use_password is true.\n>>\n>> IMO, \"Validate connection info string\" is sufficient. The mention of\n>> must_use_password is redundant since it's explained in the latter part of\n>> the comment.\n> \n> That is reasonable for me. I've removed the second one.\n> The modified patch is attached.\n\nThanks for updating the patch! 
Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 9 Jul 2024 21:33:33 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc: modify the comment in function libpqrcv_check_conninfo()" } ]
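For readers following the thread, the behaviour the final comment wording describes, namely parse the connection string and, when must_use_password is true, insist that a non-empty password appears in the string itself rather than being picked up later from PGPASSFILE or PGPASSWORD, can be sketched outside the server roughly like this. The use of psycopg2's DSN parser is purely an assumption for illustration; the real function goes through libpq's PQconninfoParse.

```python
from psycopg2.extensions import parse_dsn

def check_conninfo(conninfo, must_use_password):
    """Rough analogue of the checks libpqrcv_check_conninfo() performs."""
    # Raises an error if the connection string cannot be parsed.
    options = parse_dsn(conninfo)
    # With must_use_password, require a non-empty password in the string
    # itself, so it cannot come from PGPASSFILE or PGPASSWORD later.
    if must_use_password and not options.get("password"):
        raise ValueError("password is required in the connection string")

check_conninfo("host=primary dbname=postgres user=repl password=secret",
               must_use_password=True)
```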
[ { "msg_contents": "Hi,\n\nWhile designing an improvement for the cost sort model, I discovered \nthat the query plan can vary if we slightly change the query text \nwithout pushing semantic differences. See the example below:\n\nCREATE TABLE test(x integer, y integer,z text);\nINSERT INTO test (x,y) SELECT x, 1 FROM generate_series(1,1000000) AS x;\nCREATE INDEX ON test(x);\nCREATE INDEX ON test(y);\nVACUUM ANALYZE;\nSET max_parallel_workers_per_gather = 0;\n\nFirst query:\n\nEXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM test t1, test t2\nWHERE t1.x=t2.y AND t1.y=t2.x GROUP BY t1.x,t1.y;\n\nAnd the second one - just reverse the left and right sides of \nexpressions in the WHERE condition:\n\nEXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM test t1, test t2\nWHERE t2.y=t1.x AND t2.x=t1.y GROUP BY t1.x,t1.y;\n\nYou can see two different plans here:\n\nGroupAggregate (cost=37824.89..37824.96 rows=1 width=16)\n Group Key: t1.y, t1.x\n -> Incremental Sort (cost=37824.89..37824.94 rows=2 width=8)\n Sort Key: t1.y, t1.x\n Presorted Key: t1.y\n -> Merge Join (cost=0.85..37824.88 rows=1 width=8)\n Merge Cond: (t1.y = t2.x)\n Join Filter: (t2.y = t1.x)\n -> Index Scan using test_y_idx on test t1\n -> Index Scan using test_x_idx on test t2\n\nGroupAggregate (cost=37824.89..37824.92 rows=1 width=16)\n Group Key: t1.x, t1.y\n -> Sort (cost=37824.89..37824.90 rows=1 width=8)\n Sort Key: t1.x, t1.y\n Sort Method: quicksort Memory: 25kB\n -> Merge Join (cost=0.85..37824.88 rows=1 width=8)\n Merge Cond: (t1.y = t2.x)\n Join Filter: (t2.y = t1.x)\n -> Index Scan using test_y_idx on test t1\n -> Index Scan using test_x_idx on test t2\n\nDon't mind for now that the best plan is to do IncrementalSort with \npresorted key t1.x. Just pay attention to the fact that the plan has \naltered without any valuable reason.\nThe cost_incremental_sort() routine causes such behaviour: it chooses \nthe expression to estimate the number of groups from the first \nEquivalenceClass member that relies on the syntax.\nI tried to invent a simple solution to fight this minor case. But the \nmost clear and straightforward way here is to save a reference to the \nexpression that triggered the PathKey creation inside the PathKey itself.\nSee the sketch of the patch in the attachment.\nI'm not sure this instability is worth fixing this way, but the \ndependence of the optimisation outcome on the query text looks buggy.\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Wed, 26 Jun 2024 22:00:27 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": true, "msg_subject": "Incremental Sort Cost Estimation Instability" }, { "msg_contents": "On Thu, 27 Jun 2024 at 03:00, Andrei Lepikhov <[email protected]> wrote:\n> I tried to invent a simple solution to fight this minor case. But the\n> most clear and straightforward way here is to save a reference to the\n> expression that triggered the PathKey creation inside the PathKey itself.\n> See the sketch of the patch in the attachment.\n> I'm not sure this instability is worth fixing this way, but the\n> dependence of the optimisation outcome on the query text looks buggy.\n\nI don't think that's going to work as that'll mean it'll just choose\nwhichever expression was used when the PathKey was first created. For\nyour example query, both PathKey's are first created for the GROUP BY\nclause in standard_qp_callback(). 
I only have to change the GROUP BY\nin your query to use the equivalent column in the other table to get\nit to revert back to the plan you complained about.\n\npostgres=# EXPLAIN (costs off) SELECT count(*) FROM test t1, test t2\nWHERE t1.x=t2.y AND t1.y=t2.x GROUP BY t2.y,t2.x;\n QUERY PLAN\n----------------------------------------------------------\n GroupAggregate\n Group Key: t2.y, t2.x\n -> Sort\n Sort Key: t2.y, t2.x\n -> Merge Join\n Merge Cond: (t1.y = t2.x)\n Join Filter: (t2.y = t1.x)\n -> Index Scan using test_y_idx on test t1\n -> Index Scan using test_x_idx on test t2\n(9 rows)\n\nMaybe doing something with estimate_num_groups() to find the\nEquivalenceClass member with the least distinct values would be\nbetter. I just can't think how that could be done in a performant way.\n\nDavid\n\n\n", "msg_date": "Thu, 12 Sep 2024 13:05:25 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental Sort Cost Estimation Instability" }, { "msg_contents": "On 12/9/2024 03:05, David Rowley wrote:\n> On Thu, 27 Jun 2024 at 03:00, Andrei Lepikhov <[email protected]> wrote:\n>> I tried to invent a simple solution to fight this minor case. But the\n>> most clear and straightforward way here is to save a reference to the\n>> expression that triggered the PathKey creation inside the PathKey itself.\n>> See the sketch of the patch in the attachment.\n>> I'm not sure this instability is worth fixing this way, but the\n>> dependence of the optimisation outcome on the query text looks buggy.\n> \n> I don't think that's going to work as that'll mean it'll just choose\n> whichever expression was used when the PathKey was first created. For\n> your example query, both PathKey's are first created for the GROUP BY\n> clause in standard_qp_callback(). I only have to change the GROUP BY\n> in your query to use the equivalent column in the other table to get\n> it to revert back to the plan you complained about.\nYes, it is true. It is not ideal solution so far - looking for better ideas.\n\n> Maybe doing something with estimate_num_groups() to find the\n> EquivalenceClass member with the least distinct values would be\n> better. I just can't think how that could be done in a performant way.\nInitial problem causes wrong cost_sort estimation. Right now I think \nabout providing cost_sort() the sort clauses instead of (or in addition \nto) the pathkeys.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Thu, 12 Sep 2024 11:51:19 +0200", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental Sort Cost Estimation Instability" }, { "msg_contents": "On Thu, 12 Sept 2024 at 21:51, Andrei Lepikhov <[email protected]> wrote:\n> Initial problem causes wrong cost_sort estimation. Right now I think\n> about providing cost_sort() the sort clauses instead of (or in addition\n> to) the pathkeys.\n\nI'm not quite sure why the sort clauses matter any more than the\nEquivalenceClass. If the EquivalanceClass defines that all members\nwill have the same value for any given row, then, if we had to choose\nany single member to drive the n_distinct estimate from, isn't the\nmost accurate distinct estimate from the member with the smallest\nn_distinct estimate? 
(That assumes the less distinct member has every\nvalue the more distinct member has, which might not be true)\n\nDavid\n\n\n", "msg_date": "Thu, 12 Sep 2024 22:12:03 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental Sort Cost Estimation Instability" }, { "msg_contents": "On 9/12/24 12:12, David Rowley wrote:\n> On Thu, 12 Sept 2024 at 21:51, Andrei Lepikhov <[email protected]> wrote:\n>> Initial problem causes wrong cost_sort estimation. Right now I think\n>> about providing cost_sort() the sort clauses instead of (or in addition\n>> to) the pathkeys.\n> \n> I'm not quite sure why the sort clauses matter any more than the\n> EquivalenceClass. If the EquivalanceClass defines that all members\n> will have the same value for any given row, then, if we had to choose\n> any single member to drive the n_distinct estimate from, isn't the\n> most accurate distinct estimate from the member with the smallest\n> n_distinct estimate? (That assumes the less distinct member has every\n> value the more distinct member has, which might not be true)\n> \n> David\n> \n\nHow large can the cost difference get? My assumption was it's correlated\nwith how different the ndistincts are for the two sides, so I tried\n\n CREATE TABLE t1(x integer, y integer,z text);\n CREATE TABLE t2(x integer, y integer,z text);\n\n INSERT INTO t1 (x,y) SELECT x, 1\n FROM generate_series(1,1000000) AS x;\n INSERT INTO t2 (x,y) SELECT mod(x,1000), 1\n FROM generate_series(1,1000000) AS x;\n\n CREATE INDEX ON t1(x);\n CREATE INDEX ON t2(x);\n CREATE INDEX ON t1(y);\n CREATE INDEX ON t2(y);\n\n VACUUM ANALYZE;\n\nWhich changes the ndistinct for t2.x from 1M to 1k. I've expected the\ncost difference to get much larger, but in it does not really change:\n\nGroupAggregate (cost=38.99..37886.88 rows=992 width=16) (actual rows=1\nloops=1)\n\nGroupAggregate (cost=37874.26..37904.04 rows=992 width=16) (actual\nrows=1 loops=1)\n\nThat is pretty significant - the total cost difference is tiny, I'd even\nsay it does not matter in practice (seems well within possible impact of\ncollecting a different random sample).\n\nBut the startup cost changes in rather absurd way, while the rest of the\nplan is exactly the same. We even know this:\n\n -> Incremental Sort (cost=38.99..37869.52 rows=992 width=8)\n Sort Key: t1.x, t1.y\n Presorted Key: t1.x\n\nin both cases. There's literally no other difference between these plans\nvisible in the explain ...\n\n\nI'm not sure how to fix this, but it seems estimate_num_groups() needs\nto do things differently. And I agree looking for the minimum ndistinct\nseems like the right approach. but doesn't estimate_num_groups()\nsupposed to already do that? The comment says:\n\n * 3. 
If the list contains Vars of different relations that are known equal\n * due to equivalence classes, then drop all but one of the Vars from each\n * known-equal set, keeping the one with smallest estimated # of values\n * (since the extra values of the others can't appear in joined rows).\n * Note the reason we only consider Vars of different relations is that\n * if we considered ones of the same rel, we'd be double-counting the\n * restriction selectivity of the equality in the next step.\n\nI haven't debugged this, but how come this doesn't do the trick?\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Thu, 12 Sep 2024 16:57:18 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental Sort Cost Estimation Instability" }, { "msg_contents": "On 12/9/2024 12:12, David Rowley wrote:\n> On Thu, 12 Sept 2024 at 21:51, Andrei Lepikhov <[email protected]> wrote:\n>> Initial problem causes wrong cost_sort estimation. Right now I think\n>> about providing cost_sort() the sort clauses instead of (or in addition\n>> to) the pathkeys.\n> \n> I'm not quite sure why the sort clauses matter any more than the\n> EquivalenceClass. If the EquivalanceClass defines that all members\n> will have the same value for any given row, then, if we had to choose\n> any single member to drive the n_distinct estimate from, isn't the\n> most accurate distinct estimate from the member with the smallest\n> n_distinct estimate? (That assumes the less distinct member has every\n> value the more distinct member has, which might not be true)\nThanks for your efforts! Your idea looks more stable and applicable than \nmy patch.\nBTW, it could still provide wrong ndistinct estimations if we choose a \nsorting operator under clauses mentioned in the EquivalenceClass.\nHowever, this thread's primary intention is to stabilize query plans, so \nI'll try to implement your idea.\n\nThe second reason was to distinguish sortings by cost (see proposal [1]) \nbecause sometimes it could help to save CPU cycles on comparisons. \nHaving a lot of sort/grouping queries with only sporadic joins, I see \nhow profitable it could sometimes be - text or numeric grouping over \nmostly Cartesian join may be painful without fine tuned sorting.\n\n[1] \nhttps://www.postgresql.org/message-id/[email protected]\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Thu, 19 Sep 2024 10:44:25 +0200", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental Sort Cost Estimation Instability" }, { "msg_contents": "On 12/9/2024 12:12, David Rowley wrote:\n> On Thu, 12 Sept 2024 at 21:51, Andrei Lepikhov <[email protected]> wrote:\n>> Initial problem causes wrong cost_sort estimation. Right now I think\n>> about providing cost_sort() the sort clauses instead of (or in addition\n>> to) the pathkeys.\n> \n> I'm not quite sure why the sort clauses matter any more than the\n> EquivalenceClass. If the EquivalanceClass defines that all members\n> will have the same value for any given row, then, if we had to choose\n> any single member to drive the n_distinct estimate from, isn't the\n> most accurate distinct estimate from the member with the smallest\n> n_distinct estimate? 
(That assumes the less distinct member has every\n> value the more distinct member has, which might not be true)\nFinally, I implemented this approach in code (see attachment).\nThe effectiveness may be debatable, but general approach looks even \nbetter than previous one.\nChange status to 'Need review'.\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Mon, 23 Sep 2024 15:02:57 +0200", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental Sort Cost Estimation Instability" }, { "msg_contents": "On 12/9/2024 16:57, Tomas Vondra wrote:\n> On 9/12/24 12:12, David Rowley wrote:\n>> On Thu, 12 Sept 2024 at 21:51, Andrei Lepikhov <[email protected]> wrote:\n> I'm not sure how to fix this, but it seems estimate_num_groups() needs\n> to do things differently. And I agree looking for the minimum ndistinct\n> seems like the right approach. but doesn't estimate_num_groups()\n> supposed to already do that? The comment says:\nI've rewritten the code in the previous email. It looks like we can try \nto rewrite estimate_num_groups to do it more effectively, but I haven't \ndone it yet.\nRegarding the tiny change in the cost, my initial reason was to teach \ncost_sort to differ sort orderings: begin by considering the number of \ncolumns in the cost estimation and then consider the distinct estimation \nof the first column.\nBTW, it was triggered by user reports, where a slight change in the \nbalance between MergeAppend/GatherMerge/Sort/IncrementalSort (or columns \norder) could give significant profit. Especially when grouping millions \nof rows in 2-4 bytea columns.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Mon, 23 Sep 2024 15:21:16 +0200", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental Sort Cost Estimation Instability" }, { "msg_contents": "On 12/9/2024 16:57, Tomas Vondra wrote:\n> On 9/12/24 12:12, David Rowley wrote:\n>> On Thu, 12 Sept 2024 at 21:51, Andrei Lepikhov <[email protected]> wrote:\n> but doesn't estimate_num_groups()\n> supposed to already do that? The comment says:\n> \n> * 3. If the list contains Vars of different relations that are known equal\n> * due to equivalence classes, then drop all but one of the Vars from each\n> * known-equal set, keeping the one with smallest estimated # of values\n> * (since the extra values of the others can't appear in joined rows).\n> * Note the reason we only consider Vars of different relations is that\n> * if we considered ones of the same rel, we'd be double-counting the\n> * restriction selectivity of the equality in the next step.\n> \n> I haven't debugged this, but how come this doesn't do the trick?\nI've got your point now.\nUnfortunately, this comment says that estimate_num_groups removes \nduplicates from the list of grouping expressions (see \nexprs_known_equal). But it doesn't discover em_members to find the \nmost-fitted clause for each grouping position.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Mon, 23 Sep 2024 16:45:25 +0200", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental Sort Cost Estimation Instability" } ]
[ { "msg_contents": "I initially ran into this while trying to reproduce the recent\nreports of trouble with LLVM 14 on ARM. However, it also reproduces\nwith LLVM 17 on x86_64, and I see no reason to think it's at all\narch-specific. I also reproduced it in back branches (only tried\nv14, but it's definitely not new in HEAD).\n\nTo reproduce:\n\n1. Build with --with-llvm\n\n2. Create a config file containing\n\n$ cat $HOME/tmp/temp_config\n# enable jit at max\njit_above_cost = 1\njit_inline_above_cost = 1\njit_optimize_above_cost = 1\n\nand do\nexport TEMP_CONFIG=$HOME/tmp/temp_config\n\n3. cd to .../src/pl/plpgsql/src/, and do \"make check\".\n\nIt gets a SIGSEGV in plpgsql_transaction.sql's\ncursor_fail_during_commit test. The stack trace looks like\n\n(gdb) bt\n#0 __strlen_evex () at ../sysdeps/x86_64/multiarch/strlen-evex.S:77\n#1 0x0000000000735c58 in pq_sendstring (buf=0x7ffd80f8eeb0, \n str=0x7f77cffdf000 <error: Cannot access memory at address 0x7f77cffdf000>)\n at pqformat.c:197\n#2 0x00000000009ca09c in err_sendstring (buf=0x7ffd80f8eeb0, \n str=0x7f77cffdf000 <error: Cannot access memory at address 0x7f77cffdf000>)\n at elog.c:3449\n#3 0x00000000009ca4ba in send_message_to_frontend (edata=0xf786a0 <errordata>)\n at elog.c:3568\n#4 0x00000000009c73a3 in EmitErrorReport () at elog.c:1715\n#5 0x00000000008987e7 in PostgresMain (dbname=<optimized out>, \n username=0x29fdb00 \"postgres\") at postgres.c:4378\n#6 0x0000000000893c5d in BackendMain (startup_data=<optimized out>, \n startup_data_len=<optimized out>) at backend_startup.c:105\n\nThe errordata structure it's trying to print out contains\n\n(gdb) p *edata\n$1 = {elevel = 21, output_to_server = true, output_to_client = true, \n hide_stmt = false, hide_ctx = false, \n filename = 0x7f77cffdf000 <error: Cannot access memory at address 0x7f77cffdf000>, lineno = 843, \n funcname = 0x7f77cffdf033 <error: Cannot access memory at address 0x7f77cffdf033>, domain = 0xbd3baa \"postgres-17\", \n context_domain = 0x7f77c3343320 \"plpgsql-17\", sqlerrcode = 33816706, \n message = 0x29fdc20 \"division by zero\", detail = 0x0, detail_log = 0x0, \n hint = 0x0, \n context = 0x29fdc50 \"PL/pgSQL function cursor_fail_during_commit() line 6 at COMMIT\", backtrace = 0x0, \n message_id = 0x7f77cffdf022 <error: Cannot access memory at address 0x7f77cffdf022>, schema_name = 0x0, table_name = 0x0, column_name = 0x0, \n datatype_name = 0x0, constraint_name = 0x0, cursorpos = 0, internalpos = 0, \n internalquery = 0x0, saved_errno = 2, assoc_context = 0x29fdb20}\n\nlineno = 843 matches the expected error location in int4_div().\nThe three string fields containing obviously-garbage pointers\nare ones that elog.c expects to point at compile-time constants,\nso it just records the caller's pointers without strdup'ing them.\n\nPerhaps somebody else will know better, but what I think is happening\nhere is\n\nA. Thanks to the low jit cost settings, we choose to jit-compile\nthe \"1/(x-1000)\" expression inside cursor_fail_during_commit().\n\nB. When x reaches 1000, the division-by-zero error that the test\nintends to provoke is thrown from jit-compiled code.\n\nC. Somewhere between there and EmitErrorReport(), something decided\nit could unmap the jit-compiled code.\n\nD. Now filename/funcname are pointing into the void, and \nsend_message_to_frontend() dumps core while trying to send them.\n\nOne way to fix this could be to pstrdup those strings even\nthough they should be constants. 
I don't especially like\nthe amount of overhead that'd add though.\n\nWhat I think is the right solution is to fix things so that\nseemingly-no-longer-used jit compilations are not thrown away\nuntil transaction cleanup. I don't know the JIT code nearly\nwell enough to take point on fixing it like that, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2024 14:09:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "JIT causes core dump during error recovery" }, { "msg_contents": "Em qua., 26 de jun. de 2024 às 15:09, Tom Lane <[email protected]> escreveu:\n\n> I initially ran into this while trying to reproduce the recent\n> reports of trouble with LLVM 14 on ARM. However, it also reproduces\n> with LLVM 17 on x86_64, and I see no reason to think it's at all\n> arch-specific. I also reproduced it in back branches (only tried\n> v14, but it's definitely not new in HEAD).\n>\n> To reproduce:\n>\n> 1. Build with --with-llvm\n>\n> 2. Create a config file containing\n>\n> $ cat $HOME/tmp/temp_config\n> # enable jit at max\n> jit_above_cost = 1\n> jit_inline_above_cost = 1\n> jit_optimize_above_cost = 1\n>\n> and do\n> export TEMP_CONFIG=$HOME/tmp/temp_config\n>\n> 3. cd to .../src/pl/plpgsql/src/, and do \"make check\".\n>\n> It gets a SIGSEGV in plpgsql_transaction.sql's\n> cursor_fail_during_commit test. The stack trace looks like\n>\n> (gdb) bt\n> #0 __strlen_evex () at ../sysdeps/x86_64/multiarch/strlen-evex.S:77\n> #1 0x0000000000735c58 in pq_sendstring (buf=0x7ffd80f8eeb0,\n> str=0x7f77cffdf000 <error: Cannot access memory at address\n> 0x7f77cffdf000>)\n> at pqformat.c:197\n> #2 0x00000000009ca09c in err_sendstring (buf=0x7ffd80f8eeb0,\n> str=0x7f77cffdf000 <error: Cannot access memory at address\n> 0x7f77cffdf000>)\n> at elog.c:3449\n> #3 0x00000000009ca4ba in send_message_to_frontend (edata=0xf786a0\n> <errordata>)\n> at elog.c:3568\n> #4 0x00000000009c73a3 in EmitErrorReport () at elog.c:1715\n> #5 0x00000000008987e7 in PostgresMain (dbname=<optimized out>,\n> username=0x29fdb00 \"postgres\") at postgres.c:4378\n> #6 0x0000000000893c5d in BackendMain (startup_data=<optimized out>,\n> startup_data_len=<optimized out>) at backend_startup.c:105\n>\n> The errordata structure it's trying to print out contains\n>\n> (gdb) p *edata\n> $1 = {elevel = 21, output_to_server = true, output_to_client = true,\n> hide_stmt = false, hide_ctx = false,\n> filename = 0x7f77cffdf000 <error: Cannot access memory at address\n> 0x7f77cffdf000>, lineno = 843,\n> funcname = 0x7f77cffdf033 <error: Cannot access memory at address\n> 0x7f77cffdf033>, domain = 0xbd3baa \"postgres-17\",\n> context_domain = 0x7f77c3343320 \"plpgsql-17\", sqlerrcode = 33816706,\n> message = 0x29fdc20 \"division by zero\", detail = 0x0, detail_log = 0x0,\n> hint = 0x0,\n> context = 0x29fdc50 \"PL/pgSQL function cursor_fail_during_commit() line\n> 6 at COMMIT\", backtrace = 0x0,\n> message_id = 0x7f77cffdf022 <error: Cannot access memory at address\n> 0x7f77cffdf022>, schema_name = 0x0, table_name = 0x0, column_name = 0x0,\n> datatype_name = 0x0, constraint_name = 0x0, cursorpos = 0, internalpos =\n> 0,\n> internalquery = 0x0, saved_errno = 2, assoc_context = 0x29fdb20}\n>\n> lineno = 843 matches the expected error location in int4_div().\n>\nDid you mean *int4div*, right?\nSince there is no reference to int4_div in *.c or *.h\n\nbest regards,\nRanier Vilela\n\nEm qua., 26 de jun. 
de 2024 às 15:09, Tom Lane <[email protected]> escreveu:I initially ran into this while trying to reproduce the recent\nreports of trouble with LLVM 14 on ARM.  However, it also reproduces\nwith LLVM 17 on x86_64, and I see no reason to think it's at all\narch-specific.  I also reproduced it in back branches (only tried\nv14, but it's definitely not new in HEAD).\n\nTo reproduce:\n\n1. Build with --with-llvm\n\n2. Create a config file containing\n\n$ cat $HOME/tmp/temp_config\n# enable jit at max\njit_above_cost = 1\njit_inline_above_cost = 1\njit_optimize_above_cost = 1\n\nand do\nexport TEMP_CONFIG=$HOME/tmp/temp_config\n\n3. cd to .../src/pl/plpgsql/src/, and do \"make check\".\n\nIt gets a SIGSEGV in plpgsql_transaction.sql's\ncursor_fail_during_commit test.  The stack trace looks like\n\n(gdb) bt\n#0  __strlen_evex () at ../sysdeps/x86_64/multiarch/strlen-evex.S:77\n#1  0x0000000000735c58 in pq_sendstring (buf=0x7ffd80f8eeb0, \n    str=0x7f77cffdf000 <error: Cannot access memory at address 0x7f77cffdf000>)\n    at pqformat.c:197\n#2  0x00000000009ca09c in err_sendstring (buf=0x7ffd80f8eeb0, \n    str=0x7f77cffdf000 <error: Cannot access memory at address 0x7f77cffdf000>)\n    at elog.c:3449\n#3  0x00000000009ca4ba in send_message_to_frontend (edata=0xf786a0 <errordata>)\n    at elog.c:3568\n#4  0x00000000009c73a3 in EmitErrorReport () at elog.c:1715\n#5  0x00000000008987e7 in PostgresMain (dbname=<optimized out>, \n    username=0x29fdb00 \"postgres\") at postgres.c:4378\n#6  0x0000000000893c5d in BackendMain (startup_data=<optimized out>, \n    startup_data_len=<optimized out>) at backend_startup.c:105\n\nThe errordata structure it's trying to print out contains\n\n(gdb) p *edata\n$1 = {elevel = 21, output_to_server = true, output_to_client = true, \n  hide_stmt = false, hide_ctx = false, \n  filename = 0x7f77cffdf000 <error: Cannot access memory at address 0x7f77cffdf000>, lineno = 843, \n  funcname = 0x7f77cffdf033 <error: Cannot access memory at address 0x7f77cffdf033>, domain = 0xbd3baa \"postgres-17\", \n  context_domain = 0x7f77c3343320 \"plpgsql-17\", sqlerrcode = 33816706, \n  message = 0x29fdc20 \"division by zero\", detail = 0x0, detail_log = 0x0, \n  hint = 0x0, \n  context = 0x29fdc50 \"PL/pgSQL function cursor_fail_during_commit() line 6 at COMMIT\", backtrace = 0x0, \n  message_id = 0x7f77cffdf022 <error: Cannot access memory at address 0x7f77cffdf022>, schema_name = 0x0, table_name = 0x0, column_name = 0x0, \n  datatype_name = 0x0, constraint_name = 0x0, cursorpos = 0, internalpos = 0, \n  internalquery = 0x0, saved_errno = 2, assoc_context = 0x29fdb20}\n\nlineno = 843 matches the expected error location in int4_div().Did you mean *int4div*, right?Since there is no reference to int4_div in *.c or *.hbest regards,Ranier Vilela", "msg_date": "Wed, 26 Jun 2024 15:28:33 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT causes core dump during error recovery" }, { "msg_contents": "Ranier Vilela <[email protected]> writes:\n> Em qua., 26 de jun. 
de 2024 às 15:09, Tom Lane <[email protected]> escreveu:\n>> lineno = 843 matches the expected error location in int4_div().\n\n> Did you mean *int4div*, right?\n\nRight, typo.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2024 14:42:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: JIT causes core dump during error recovery" }, { "msg_contents": "I wrote:\n> It gets a SIGSEGV in plpgsql_transaction.sql's\n> cursor_fail_during_commit test.\n\nHere's a simpler way to reproduce: just run the attached script\nin a --with-llvm build. (This is merely extracting the troublesome\nregression case for convenience.)\n\nInteresting, if you take out any one of the three \"set\" commands,\nit doesn't crash. This probably explains why, for example,\nbuildfarm member urutu hasn't shown this --- it's only reducing\none of the three costs to zero.\n\nI don't have any idea what to make of that result, except that\nit suggests the problem might be at least partly LLVM's fault.\nSurely, if we are prematurely unmapping a compiled code segment,\nthat behavior wouldn't depend on whether we had asked for\ninlining?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 26 Jun 2024 15:13:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: JIT causes core dump during error recovery" }, { "msg_contents": "I wrote:\n> What I think is the right solution is to fix things so that\n> seemingly-no-longer-used jit compilations are not thrown away\n> until transaction cleanup. I don't know the JIT code nearly\n> well enough to take point on fixing it like that, though.\n\nOr maybe not. I found by bisecting that it doesn't fail before\n2e517818f (Fix SPI's handling of errors during transaction commit).\nA salient part of that commit message:\n\n Having made that API redefinition, we can fix this mess by having\n SPI_commit[_and_chain] trap errors and start a new, clean transaction\n before re-throwing the error. Likewise for SPI_rollback[_and_chain].\n\nSo delaying removal of the jit-created code segment until transaction\ncleanup wouldn't be enough to prevent this crash, if I'm reading\nthings right. The extra-pstrdup solution may be the only viable one.\n\nI could use confirmation from someone who knows the JIT code about\nwhen jit-created code is unloaded. It also remains very unclear\nwhy there is no crash if we don't force both jit_optimize_above_cost\nand jit_inline_above_cost to small values.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jun 2024 16:01:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: JIT causes core dump during error recovery" }, { "msg_contents": "I wrote:\n> So delaying removal of the jit-created code segment until transaction\n> cleanup wouldn't be enough to prevent this crash, if I'm reading\n> things right. The extra-pstrdup solution may be the only viable one.\n\n> I could use confirmation from someone who knows the JIT code about\n> when jit-created code is unloaded. It also remains very unclear\n> why there is no crash if we don't force both jit_optimize_above_cost\n> and jit_inline_above_cost to small values.\n\nI found where the unload happens: ResOwnerReleaseJitContext, which\nis executed during the resource owner BEFORE_LOCKS phase. 
(Which\nseems like a pretty dubious choice from here; why can't we leave\nit till the less time-critical phase after we've let go of locks?)\nBut anyway, we definitely do drop this stuff during xact cleanup.\n\nAlso, it seems that the reason that both jit_optimize_above_cost\nand jit_inline_above_cost must be small is that otherwise int4div\nis simply called from the JIT-generated code, not inlined into it.\nThis gives me very considerable fear about how well that behavior\nhas been tested: if throwing an error from inlined code doesn't\nwork, and we hadn't noticed that, how much can it really have been\nexercised? I have also got an itchy feeling that we have code that\nwill be broken by this behavior of \"if it happens to get inlined\nthen string constants aren't so constant\".\n\nIn any case, I found that adding some copying logic to CopyErrorData()\nis enough to solve this problem, since the SPI infrastructure applies\nthat before executing xact cleanup. I had feared that we'd have to\nadd copying to every single elog/ereport sequence, which would have\nbeen an annoying amount of overhead; but the attached seems\nacceptable. We do get through check-world with this patch and the\nJIT parameters all set to small values.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 27 Jun 2024 12:18:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: JIT causes core dump during error recovery" }, { "msg_contents": "Em qui., 27 de jun. de 2024 às 13:18, Tom Lane <[email protected]> escreveu:\n\n> I wrote:\n> > So delaying removal of the jit-created code segment until transaction\n> > cleanup wouldn't be enough to prevent this crash, if I'm reading\n> > things right. The extra-pstrdup solution may be the only viable one.\n>\n> > I could use confirmation from someone who knows the JIT code about\n> > when jit-created code is unloaded. It also remains very unclear\n> > why there is no crash if we don't force both jit_optimize_above_cost\n> > and jit_inline_above_cost to small values.\n>\n> I found where the unload happens: ResOwnerReleaseJitContext, which\n> is executed during the resource owner BEFORE_LOCKS phase. (Which\n> seems like a pretty dubious choice from here; why can't we leave\n> it till the less time-critical phase after we've let go of locks?)\n> But anyway, we definitely do drop this stuff during xact cleanup.\n>\n> Also, it seems that the reason that both jit_optimize_above_cost\n> and jit_inline_above_cost must be small is that otherwise int4div\n> is simply called from the JIT-generated code, not inlined into it.\n> This gives me very considerable fear about how well that behavior\n> has been tested: if throwing an error from inlined code doesn't\n> work, and we hadn't noticed that, how much can it really have been\n> exercised? I have also got an itchy feeling that we have code that\n> will be broken by this behavior of \"if it happens to get inlined\n> then string constants aren't so constant\".\n>\n> In any case, I found that adding some copying logic to CopyErrorData()\n> is enough to solve this problem, since the SPI infrastructure applies\n> that before executing xact cleanup.\n\nIn this case, I think that these fields, in struct definition struct\nErrorData (src/include/utils/elog.h)\nshould be changed too?\nfrom const char * to char*\n\nbest regards,\nRanier Vilela\n\nEm qui., 27 de jun. 
de 2024 às 13:18, Tom Lane <[email protected]> escreveu:I wrote:\n> So delaying removal of the jit-created code segment until transaction\n> cleanup wouldn't be enough to prevent this crash, if I'm reading\n> things right.  The extra-pstrdup solution may be the only viable one.\n\n> I could use confirmation from someone who knows the JIT code about\n> when jit-created code is unloaded.  It also remains very unclear\n> why there is no crash if we don't force both jit_optimize_above_cost\n> and jit_inline_above_cost to small values.\n\nI found where the unload happens: ResOwnerReleaseJitContext, which\nis executed during the resource owner BEFORE_LOCKS phase.  (Which\nseems like a pretty dubious choice from here; why can't we leave\nit till the less time-critical phase after we've let go of locks?)\nBut anyway, we definitely do drop this stuff during xact cleanup.\n\nAlso, it seems that the reason that both jit_optimize_above_cost\nand jit_inline_above_cost must be small is that otherwise int4div\nis simply called from the JIT-generated code, not inlined into it.\nThis gives me very considerable fear about how well that behavior\nhas been tested: if throwing an error from inlined code doesn't\nwork, and we hadn't noticed that, how much can it really have been\nexercised?  I have also got an itchy feeling that we have code that\nwill be broken by this behavior of \"if it happens to get inlined\nthen string constants aren't so constant\".\n\nIn any case, I found that adding some copying logic to CopyErrorData()\nis enough to solve this problem, since the SPI infrastructure applies\nthat before executing xact cleanup.In this case, I think that these fields, in struct definition struct ErrorData (src/include/utils/elog.h)should be changed too?from const char * to char*best regards,Ranier Vilela", "msg_date": "Thu, 27 Jun 2024 13:31:11 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT causes core dump during error recovery" }, { "msg_contents": "Ranier Vilela <[email protected]> writes:\n> Em qui., 27 de jun. de 2024 às 13:18, Tom Lane <[email protected]> escreveu:\n>> In any case, I found that adding some copying logic to CopyErrorData()\n>> is enough to solve this problem, since the SPI infrastructure applies\n>> that before executing xact cleanup.\n\n> In this case, I think that these fields, in struct definition struct\n> ErrorData (src/include/utils/elog.h)\n> should be changed too?\n> from const char * to char*\n\nNo, that would imply casting away const in errstart() etc. We're\nstill mostly expecting those things to be pointers to constant\nstrings.\n\nI'm about half tempted to file this as an LLVM bug. When it inlines\na function, it should still reference the same string constants that\nthe original code did, otherwise it's failing to be a transparent\nconversion. But they'll probably cite some standards-ese that claims\nthis is undefined behavior:\n\n\tconst char * foo(void) { return \"foo\"; }\n\n\tvoid bar(void) { Assert( foo() == foo() ); }\n\non which I call BS, but it's probably in there somewhere.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2024 12:59:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: JIT causes core dump during error recovery" } ]
[ { "msg_contents": "Here is a patch for using gmtime_r() and localtime_r() instead of \ngmtime() and localtime(), for thread-safety.\n\nThere are a few affected calls in libpq and ecpg's libpgtypes, which are \nprobably effectively bugs, because those libraries already claim to be \nthread-safe.\n\nThere is one affected call in the backend. Most of the backend \notherwise uses the custom functions pg_gmtime() and pg_localtime(), \nwhich are implemented differently.\n\nSome portability fun: gmtime_r() and localtime_r() are in POSIX but are \nnot available on Windows. Windows has functions gmtime_s() and \nlocaltime_s() that can fulfill the same purpose, so we can add some \nsmall wrappers around them. (Note that these *_s() functions are also\ndifferent from the *_s() functions in the bounds-checking extension of\nC11. We are not using those here.)\n\nMinGW exposes neither *_r() nor *_s() by default. You can get at the\nPOSIX-style *_r() functions by defining _POSIX_C_SOURCE appropriately\nbefore including <time.h>. (There is apparently probably also a way to \nget at the Windows-style *_s() functions by supplying some additional \noptions or defines. But we might as well just use the POSIX ones.)", "msg_date": "Wed, 26 Jun 2024 20:42:23 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "thread-safety: gmtime_r(), localtime_r()" }, { "msg_contents": "On Thu, Jun 27, 2024 at 1:42 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> Here is a patch for using gmtime_r() and localtime_r() instead of\n> gmtime() and localtime(), for thread-safety.\n>\n> There are a few affected calls in libpq and ecpg's libpgtypes, which are\n> probably effectively bugs, because those libraries already claim to be\n> thread-safe.\n>\n> There is one affected call in the backend. Most of the backend\n> otherwise uses the custom functions pg_gmtime() and pg_localtime(),\n> which are implemented differently.\n>\n> Some portability fun: gmtime_r() and localtime_r() are in POSIX but are\n> not available on Windows. Windows has functions gmtime_s() and\n> localtime_s() that can fulfill the same purpose, so we can add some\n> small wrappers around them. (Note that these *_s() functions are also\n> different from the *_s() functions in the bounds-checking extension of\n> C11. We are not using those here.)\n>\n> MinGW exposes neither *_r() nor *_s() by default. You can get at the\n> POSIX-style *_r() functions by defining _POSIX_C_SOURCE appropriately\n> before including <time.h>. (There is apparently probably also a way to\n> get at the Windows-style *_s() functions by supplying some additional\n> options or defines. But we might as well just use the POSIX ones.)\n>\n>\nHi! Looks good to me.\nBut why you don`t change localtime function at all places?\nFor example:\nsrc/bin/pg_controldata/pg_controldata.c\nsrc/bin/pg_dump/pg_backup_archiver.c\nsrc/bin/initdb/findtimezone.c\nBest regards, Stepan Neretin.\n\nOn Thu, Jun 27, 2024 at 1:42 AM Peter Eisentraut <[email protected]> wrote:Here is a patch for using gmtime_r() and localtime_r() instead of \ngmtime() and localtime(), for thread-safety.\n\nThere are a few affected calls in libpq and ecpg's libpgtypes, which are \nprobably effectively bugs, because those libraries already claim to be \nthread-safe.\n\nThere is one affected call in the backend.  
Most of the backend \notherwise uses the custom functions pg_gmtime() and pg_localtime(), \nwhich are implemented differently.\n\nSome portability fun: gmtime_r() and localtime_r() are in POSIX but are \nnot available on Windows.  Windows has functions gmtime_s() and \nlocaltime_s() that can fulfill the same purpose, so we can add some \nsmall wrappers around them.  (Note that these *_s() functions are also\ndifferent from the *_s() functions in the bounds-checking extension of\nC11.  We are not using those here.)\n\nMinGW exposes neither *_r() nor *_s() by default.  You can get at the\nPOSIX-style *_r() functions by defining _POSIX_C_SOURCE appropriately\nbefore including <time.h>.  (There is apparently probably also a way to \nget at the Windows-style *_s() functions by supplying some additional \noptions or defines.  But we might as well just use the POSIX ones.)Hi! Looks good to me.But why you don`t change localtime function at all places? For example:src/bin/pg_controldata/pg_controldata.csrc/bin/pg_dump/pg_backup_archiver.csrc/bin/initdb/findtimezone.c Best regards, Stepan Neretin.", "msg_date": "Thu, 27 Jun 2024 11:47:17 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: thread-safety: gmtime_r(), localtime_r()" }, { "msg_contents": "On 27.06.24 06:47, Stepan Neretin wrote:\n> Hi! Looks good to me.\n> But why you don`t change localtime function at all places?\n> For example:\n> src/bin/pg_controldata/pg_controldata.c\n> src/bin/pg_dump/pg_backup_archiver.c\n> src/bin/initdb/findtimezone.c\n\nAt the moment, I am focusing on the components that are already meant to \nbe thread-safe (libpq, ecpg libs) and the ones we are actively looking \nat maybe converting (backend). I don't intend at this point to convert \nall other code to use only thread-safe APIs.\n\n\n\n", "msg_date": "Thu, 27 Jun 2024 08:09:22 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: thread-safety: gmtime_r(), localtime_r()" }, { "msg_contents": "On 26/06/2024 21:42, Peter Eisentraut wrote:\n> Here is a patch for using gmtime_r() and localtime_r() instead of\n> gmtime() and localtime(), for thread-safety.\n> \n> There are a few affected calls in libpq and ecpg's libpgtypes, which are\n> probably effectively bugs, because those libraries already claim to be\n> thread-safe.\n\n+1\n\nThe Linux man page for localtime_r() says:\n\n> According to POSIX.1-2001, localtime() is required to behave as\n> though tzset(3) was called, while localtime_r() does not have this\n> requirement. For portable code, tzset(3) should be called before\n> localtime_r().\n\nIt's not clear to me what happens if tzset() has not been called and the \nlocaltime_r() implementation does not call it either. I guess some \nimplementation default timezone is used.\n\nIn the libpq traces, I don't think we care much. In ecpg, I'm not sure \nwhat the impact is if the application has not previously called tzset(). \nI'd suggest that we just document that an ecpg application should call \ntzset() before calling the functions that are sensitive to local \ntimezone setting.\n\n> There is one affected call in the backend. Most of the backend\n> otherwise uses the custom functions pg_gmtime() and pg_localtime(),\n> which are implemented differently.\n\nDo we need to call tzset(3) somewhere in backend startup? 
Or could we \nreplace that localtime() call in the backend with pg_localtime()?\n\npg_gmtime() isn't thread-safe, because of the static 'gmtptr' in \ngmtsub(). But we can handle that in a separate patch.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 4 Jul 2024 19:36:05 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: thread-safety: gmtime_r(), localtime_r()" }, { "msg_contents": "On 04.07.24 18:36, Heikki Linnakangas wrote:\n> The Linux man page for localtime_r() says:\n> \n>> According to POSIX.1-2001, localtime() is required to behave as\n>> though tzset(3) was called, while localtime_r() does not have  this\n>> requirement.   For  portable  code,  tzset(3) should be called before\n>> localtime_r().\n> \n> It's not clear to me what happens if tzset() has not been called and the \n> localtime_r() implementation does not call it either. I guess some \n> implementation default timezone is used.\n> \n> In the libpq traces, I don't think we care much. In ecpg, I'm not sure \n> what the impact is if the application has not previously called tzset(). \n> I'd suggest that we just document that an ecpg application should call \n> tzset() before calling the functions that are sensitive to local \n> timezone setting.\n\nI have been studying this question. It appears that various libc \nimplementers have been equally puzzled by this; there are various \ncomments like \"it's unclear what POSIX wants here\" in the sources. (I \nhave checked glibc, FreeBSD, and Solaris.)\n\nConsider if a program calls localtime() or localtime_r() twice:\n\n localtime(...);\n ...\n localtime(...);\n\nor\n\n localtime_r(...);\n ...\n localtime_r(...);\n\nThe question here is, how many times does this effectively (internally) \ncall tzset(). There are three possible answers: 0, 1, or 2.\n\nFor localtime(), the answer is clear. localtime() is required to call \ntzset() every time, so the answer is 2.\n\nFor localtime_r(), it's unclear. What you are wondering, and I have \nbeen wondering, is whether the answer is 0 or non-zero (and possibly, if \nit's 0, will these calls misbehave badly). What the libc implementers \nare wondering is whether the answer is 1 or 2. The implementations \nwhose source code I have checked think it should be 1. They never \nconsider that it could be 0 and it's okay to misbehave.\n\nWhere this difference appears it practice would be something like\n\n setenv(\"TZ\", \"foo\");\n localtime(...); // uses TZ foo\n setenv(\"TZ\", \"bar\");\n localtime(...); // uses TZ bar\n\nversus\n\n setenv(\"TZ\", \"foo\");\n localtime_r(...); // uses TZ foo if first call in program\n setenv(\"TZ\", \"bar\");\n localtime_r(...); // probably does not use new TZ\n\nIf you want the second case to pick up the changed TZ setting, you must \nexplicitly call tzset() to be sure.\n\nI think, considering this, the proposed patch should be okay. At least, \nthe libraries are not going to misbehave if tzset() hasn't been called \nexplicitly. It will be called internally the first time it's needed. I \ndon't think we need to cater to cases like my example where the \napplication changes the TZ environment variable but neglects to call \ntzset() itself.\n\n\n >> There is one affected call in the backend. Most of the backend\n >> otherwise uses the custom functions pg_gmtime() and pg_localtime(),\n >> which are implemented differently.\n >\n > Do we need to call tzset(3) somewhere in backend startup? 
Or could we\n > replace that localtime() call in the backend with pg_localtime()?\n\nLet's look at what this code actually does. It just takes the current \ntime and then loops through all possible weekdays and months to collect \nand cache the localized names. The current time or time zone doesn't \nactually matter for this, we just need to fill in the struct tm a bit \nfor strftime() to be happy. We could probably replace this with \ngmtime() to avoid the question about time zone state. (We probably \ndon't even need to call time() beforehand, we could just use time zero. \nBut the comment says \"We use times close to current time as data for \nstrftime().\", which is probably prudent.) (Since we are caching the \nresults for the session, we're already not dealing with the hilarious \nhypothetical situation where the weekday and month names depend on the \nactual time, in case that is a concern.)\n\n\n\n", "msg_date": "Tue, 23 Jul 2024 12:51:56 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: thread-safety: gmtime_r(), localtime_r()" }, { "msg_contents": "On Tue, Jul 23, 2024 at 10:52 PM Peter Eisentraut <[email protected]> wrote:\n> Let's look at what this code actually does. It just takes the current\n> time and then loops through all possible weekdays and months to collect\n> and cache the localized names. The current time or time zone doesn't\n> actually matter for this, we just need to fill in the struct tm a bit\n> for strftime() to be happy. We could probably replace this with\n> gmtime() to avoid the question about time zone state. (We probably\n> don't even need to call time() beforehand, we could just use time zero.\n> But the comment says \"We use times close to current time as data for\n> strftime().\", which is probably prudent.) (Since we are caching the\n> results for the session, we're already not dealing with the hilarious\n> hypothetical situation where the weekday and month names depend on the\n> actual time, in case that is a concern.)\n\nI think you could even just use a struct tm filled in with an example\ndate? Hmm, but it's annoying to choose one, and annoying that POSIX\nsays there may be other members of the struct, so yeah, I think\ngmtime_r(0, tm) makes sense. It can't be that important, because we\naren't even using dates consistent with tm_wday, so we're assuming\nthat strftime(\"%a\") only looks at tm_wday.\n\nThis change complements CF #5170's change strftime()->strftime_l(), to\nmake the function fully thread-safe.\n\nSomeone could also rewrite it to call\nnl_langinfo_l({ABDAY,ABMON,DAY,MON}_1 + n, locale) directly, or\nGetLocaleInfoEx(locale_name,\nLOCALE_S{ABBREVDAY,ABBREVMONTH,DAY,MONTH}NAME1 + n, ...) on Window,\nbut that'd be more code churn.\n\n\n", "msg_date": "Sat, 17 Aug 2024 01:09:07 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: thread-safety: gmtime_r(), localtime_r()" }, { "msg_contents": "On Sat, Aug 17, 2024 at 1:09 AM Thomas Munro <[email protected]> wrote:\n> This change complements CF #5170's change strftime()->strftime_l(), to\n> make the function fully thread-safe.\n\n(Erm, I meant its standard library... 
of course it has its own global\nvariables to worry about still.)\n\n\n", "msg_date": "Sat, 17 Aug 2024 01:12:43 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: thread-safety: gmtime_r(), localtime_r()" }, { "msg_contents": "Here is an updated patch version.\n\nI have changed the backend call from localtime() to gmtime() but then \nalso to gmtime_r().\n\nI moved the _POSIX_C_SOURCE definition for MinGW from the header file to \na command-line option (-D_POSIX_C_SOURCE). This matches the treatment \nof _GNU_SOURCE and similar.\n\nI think this is about as good as it's going to get, and we need it to \nbe, so I propose to commit this version if there are no further concerns.", "msg_date": "Fri, 16 Aug 2024 17:43:00 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: thread-safety: gmtime_r(), localtime_r()" }, { "msg_contents": "On Sat, Aug 17, 2024 at 3:43 AM Peter Eisentraut <[email protected]> wrote:\n> I moved the _POSIX_C_SOURCE definition for MinGW from the header file to\n> a command-line option (-D_POSIX_C_SOURCE). This matches the treatment\n> of _GNU_SOURCE and similar.\n\nI was trying to figure out what else -D_POSIX_C_SOURCE does to MinGW.\nEnables __USE_MINGW_ANSI_STDIO, apparently, but I don't know if we\nwere using that already, or if it matters. I suppose if it ever shows\nup as a problem, we can explicitly disable it.\n\n. o O ( MinGW is a strange beast. Do we want to try to keep the code\nit runs as close as possible to what is used by MSVC? I thought so,\nbut we can't always do that due to missing interfaces (though I\nsuspect that many #ifdef _MSC_VER tests are based on ancient versions\nand now bogus). But it also offers ways to be more POSIX-y if we\nwant, and then we have to decide whether to take them, and make it\nmore like a separate platform with different quirks... )\n\n> I think this is about as good as it's going to get, and we need it to\n> be, so I propose to commit this version if there are no further concerns.\n\nLGTM.\n\n\n", "msg_date": "Sat, 17 Aug 2024 09:01:43 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: thread-safety: gmtime_r(), localtime_r()" }, { "msg_contents": "On 16.08.24 23:01, Thomas Munro wrote:\n> On Sat, Aug 17, 2024 at 3:43 AM Peter Eisentraut<[email protected]> wrote:\n>> I moved the _POSIX_C_SOURCE definition for MinGW from the header file to\n>> a command-line option (-D_POSIX_C_SOURCE). This matches the treatment\n>> of _GNU_SOURCE and similar.\n> I was trying to figure out what else -D_POSIX_C_SOURCE does to MinGW.\n> Enables __USE_MINGW_ANSI_STDIO, apparently, but I don't know if we\n> were using that already, or if it matters. I suppose if it ever shows\n> up as a problem, we can explicitly disable it.\n> \n> . o O ( MinGW is a strange beast. Do we want to try to keep the code\n> it runs as close as possible to what is used by MSVC? I thought so,\n> but we can't always do that due to missing interfaces (though I\n> suspect that many #ifdef _MSC_VER tests are based on ancient versions\n> and now bogus). But it also offers ways to be more POSIX-y if we\n> want, and then we have to decide whether to take them, and make it\n> more like a separate platform with different quirks... )\n\nYeah, ideally we'd keep it aligned with MSVC. 
But a problem here is \nthat if _POSIX_C_SOURCE (or _GNU_SOURCE or something like that) gets \ndefined for other reasons, then there would be conflicts between the \nsystem headers and our workaround #define's. At least plpython triggers \nsuch a conflict in my testing. This is the usual problem that we also \nhave with _GNU_SOURCE in other contexts.\n\n\n\n", "msg_date": "Mon, 19 Aug 2024 11:43:06 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: thread-safety: gmtime_r(), localtime_r()" }, { "msg_contents": "On 19.08.24 11:43, Peter Eisentraut wrote:\n> On 16.08.24 23:01, Thomas Munro wrote:\n>> On Sat, Aug 17, 2024 at 3:43 AM Peter \n>> Eisentraut<[email protected]>  wrote:\n>>> I moved the _POSIX_C_SOURCE definition for MinGW from the header file to\n>>> a command-line option (-D_POSIX_C_SOURCE).  This matches the treatment\n>>> of _GNU_SOURCE and similar.\n>> I was trying to figure out what else -D_POSIX_C_SOURCE does to MinGW.\n>> Enables __USE_MINGW_ANSI_STDIO, apparently, but I don't know if we\n>> were using that already, or if it matters.  I suppose if it ever shows\n>> up as a problem, we can explicitly disable it.\n>>\n>> . o O ( MinGW is a strange beast.  Do we want to try to keep the code\n>> it runs as close as possible to what is used by MSVC?  I thought so,\n>> but we can't always do that due to missing interfaces (though I\n>> suspect that many #ifdef _MSC_VER tests are based on ancient versions\n>> and now bogus).  But it also offers ways to be more POSIX-y if we\n>> want, and then we have to decide whether to take them, and make it\n>> more like a separate platform with different quirks... )\n> \n> Yeah, ideally we'd keep it aligned with MSVC.  But a problem here is \n> that if _POSIX_C_SOURCE (or _GNU_SOURCE or something like that) gets \n> defined for other reasons, then there would be conflicts between the \n> system headers and our workaround #define's.  At least plpython triggers \n> such a conflict in my testing.  This is the usual problem that we also \n> have with _GNU_SOURCE in other contexts.\n\nI have committed this, with this amended explanation.\n\n\n\n", "msg_date": "Fri, 23 Aug 2024 08:00:55 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: thread-safety: gmtime_r(), localtime_r()" } ]
[ { "msg_contents": "Attached is a POC patch that adds skip scan to nbtree. The patch\nteaches nbtree index scans to efficiently use a composite index on\n'(a, b)' for queries with a predicate such as \"WHERE b = 5\". This is\nfeasible in cases where the total number of distinct values in the\ncolumn 'a' is reasonably small (think tens or hundreds, perhaps even\nthousands for very large composite indexes).\n\nIn effect, a skip scan treats this composite index on '(a, b)' as if\nit was a series of subindexes -- one subindex per distinct value in\n'a'. We can exhaustively \"search every subindex\" using an index qual\nthat behaves just like \"WHERE a = ANY(<every possible 'a' value>) AND\nb = 5\" would behave.\n\nThis approach might be much less efficient than an index scan that can\nuse an index on 'b' alone, but skip scanning can still be orders of\nmagnitude faster than a sequential scan. The user may very well not\nhave a dedicated index on 'b' alone, for whatever reason.\n\nNote that the patch doesn't just target these simpler \"skip leading\nindex column omitted from the predicate\" cases. It's far more general\nthan that -- skipping attributes (or what the patch refers to as skip\narrays) can be freely mixed with SAOPs/conventional arrays, in any\norder you can think of. They can also be combined with inequalities to\nform range skip arrays.\n\nThis patch is a direct follow-up to the Postgres 17 work that became\ncommit 5bf748b8. Making everything work well together is an important\ndesign goal here. I'll talk more about that further down, and will\nshow a benchmark example query that'll give a good general sense of\nthe value of the patch with these more complicated cases.\n\nA note on terminology\n=====================\n\nThe terminology in this area has certain baggage. Many of us will\nrecall the patch that implemented loose index scan. That patch also\ndubbed itself \"skip scan\", but that doesn't seem right to me (it's at\nodds with how other RDBMSs describe features in this area). I would\nlike to address the issues with the terminology in this area now, to\navoid creating further confusion.\n\nWhen I use the term \"skip scan\", I'm referring to a feature that's\ncomparable to the skip scan features from Oracle and MySQL 8.0+. This\n*isn't* at all comparable to the feature that MySQL calls \"loose index\nscan\" -- don't confuse the two features.\n\nLoose index scan is a far more specialized technique than skip scan.\nIt only applies within special scans that feed into a DISTINCT group\naggregate. Whereas my skip scan patch isn't like that at all -- it's\nmuch more general. With my patch, nbtree has exactly the same contract\nwith the executor/core code as before. There are no new index paths\ngenerated by the optimizer to make skip scan work, even. Skip scan\nisn't particularly aimed at improving group aggregates (though the\nbenchmark I'll show happens to involve a group aggregate, simply\nbecause the technique works best with large and expensive index\nscans).\n\nMy patch is an additive thing, that speeds up what we'd currently\nrefer to as full index scans (as well as range index scans that\ncurrently do a \"full scan\" of a range/subset of an index). These index\npaths/index scans can no longer really be called \"full index scans\",\nof course, but they're still logically the same index paths as before.\n\nMDAM and skip scan\n==================\n\nAs I touched on already, the patch actually implements a couple of\nrelated optimizations. 
\"Skip scan\" might be considered one out of the\nseveral optimizations from the 1995 paper \"Efficient Search of\nMultidimensional B-Trees\" [1] -- the paper describes skip scan under\nits \"Missing Key Predicate\" subsection. I collectively refer to the\noptimizations from the paper as the \"MDAM techniques\".\n\nAlternatively, you could define these MDAM techniques as each\nimplementing some particular flavor of skip scan, since they all do\nrather similar things under the hood. In fact, that's how I've chosen\nto describe things in my patch: it talks about skip scan, and about\nrange skip scan, which is considered a minor variant of skip scan.\n(Note that the term \"skip scan\" is never used in the MDAM paper.)\n\nMDAM is short for \"multidimensional access method\". In the context of\nthe paper, \"dimension\" refers to dimensions in a decision support\nsystem. These dimensions are represented by low cardinality columns,\neach of which appear in a large composite B-Tree index. The emphasis\nin the paper (and for my patch) is DSS and data warehousing; OLTP apps\ntypically won't benefit as much.\n\nNote: Loose index scan *isn't* described by the paper at all. I also\nwouldn't classify loose index scan as one of the MDAM techniques. I\nthink of it as being in a totally different category, due to the way\nthat it applies semantic information. No MDAM technique will ever\napply high-level semantic information about what is truly required by\nthe plan tree, one level up. And so my patch simply teaches nbtree to\nfind the most efficient way of navigating through an index, based\nsolely on information that is readily available to the scan. The same\nprinciples apply to all of the other MDAM techniques; they're\nbasically all just another flavor of skip scan (that do some kind of\nclever preprocessing/transformation that enables reducing the scan to\na series of disjunctive accesses, and that could be implemented using\nthe new abstraction I'm calling skip arrays).\n\nThe paper more or less just applies one core idea, again and again.\nIt's surprising how far that one idea can take you. But it is still\njust one core idea (don't overlook that point).\n\nRange skip scan\n---------------\n\nTo me, the most interesting MDAM technique is probably one that I\nrefer to as \"range skip scan\" in the patch. This is the technique that\nthe paper introduces first, in its \"Intervening Range Predicates\"\nsubsection. The best way of explaining it is through an example (you\ncould also just read the paper, which has an example of its own).\n\nImagine a table with just one index: a composite index on \"(pdate,\ncustomer_id)\". 
Further suppose we have a query such as:\n\nSELECT * FROM payments WHERE pdate BETWEEN '2024-01-01' AND\n'2024-01-30' AND customer_id = 5; -- both index columns (pdate and\ncustomer_id) appear in predicate\n\nThe patch effectively makes the nbtree code execute the index scan as\nif the query had been written like this instead:\n\nSELECT * FROM payments WHERE pdate = ANY ('2024-01-01', '2024-01-02',\n..., '2024-01-30') AND customer_id = 5;\n\nThe use of a \"range skip array\" within nbtree allows the scan to skip\nwhen that makes sense, locating the next date with customer_id = 5\neach time (we might skip over many irrelevant leaf pages each time).\nThe scan must also *avoid* skipping when it *doesn't* make sense.\n\nAs always (since commit 5bf748b8 went in), whether and to what extent\nwe skip using array keys depends in large part on the physical\ncharacteristics of the index at runtime. If the tuples that we need to\nreturn are all clustered closely together, across only a handful of\nleaf pages, then we shouldn't be skipping at all. When skipping makes\nsense, we should skip constantly.\n\nI'll discuss the trade-offs in this area a little more below, under \"Design\".\n\nUsing multiple MDAM techniques within the same index scan (includes benchmark)\n------------------------------------------------------------------------------\n\nI recreated the data in the MDAM paper's \"sales\" table by making\ninferences from the paper. It's very roughly the same data set as the\npaper (close enough to get the general idea across). The table size is\nabout 51GB, and the index is about 25GB (most of the attributes from\nthe table are used as index columns). There is nothing special about\nthis data set -- I just thought it would be cool to \"recreate\" the\nqueries from the paper, as best I could. Thought that this approach\nmight make my points about the design easier to follow.\n\nThe index we'll be using for this can be created via: \"create index\nmdam_idx on sales_mdam_paper(dept, sdate, item_class, store)\". Again,\nthis is per the paper. It's also the order that the columns appear in\nevery WHERE clause in every query from the paper.\n\n(That said, the particular column order from the index definition\nmostly doesn't matter. Every index column is a low cardinality column,\nso unless the order used completely obviates the need to skip a column\nthat would otherwise need to be skipped, such as \"dept\", the effect on\nquery execution time from varying column order is in the noise.\nObviously that's very much not how users are used to thinking about\ncomposite indexes.)\n\nThe MDAM paper has numerous example queries, each of which builds on\nthe last, adding one more complication each time -- each of which is\naddressed by another MDAM technique. The query I'll focus on here is\nan example query that's towards the end of the paper, and so combines\nmultiple techniques together -- it's the query that appears in the \"IN\nLists\" subsection:\n\nselect\n dept,\n sdate,\n item_class,\n store,\n sum(total_sales)\nfrom\n sales_mdam_paper\nwhere\n -- omitted: leading \"dept\" column from composite index\n sdate between '1995-06-01' and '1995-06-30'\n and item_class in (20, 35, 50)\n and store in (200, 250)\ngroup by dept, sdate, item_class, store\norder by dept, sdate, item_class, store;\n\nOn HEAD, when we run this query we either get a sequential scan (which\nis very slow) or a full index scan (which is almost as slow). 
Whereas\nwith the patch, nbtree will execute the query as a succession of a few\nthousand very selective primitive index scans (which usually only scan\none leaf page, though some may scan two neighboring leaf pages).\n\nResults: The full index scan on HEAD takes about 32 seconds. With the\npatch, the query takes just under 52ms to execute. That works out to\nbe about 630x faster with the patch.\n\nSee the attached SQL file for full details. It provides all you'll\nneed to recreate this test result with the patch.\n\nNobody would put up with such an inefficient full index scan in the\nfirst place, so the behavior on HEAD is not really a sensible baseline\n-- 630x isn't very meaningful. I could have come up with a case that\nshowed an even larger improvement if I'd felt like it, but that\nwouldn't have proven anything.\n\nThe important point is that the patch makes a huge composite index\nlike the one I've built for this actually make sense, for the first\ntime. So we're not so much making something faster as enabling a whole\nnew approach to indexing -- particularly for data warehousing use\ncases. The way that Postgres DBAs choose which indexes they'll need to\ncreate is likely to be significantly changed by this optimization.\n\nI'll break down how this is possible. This query makes use of 3\nseparate MDAM techniques:\n\n1. A \"simple\" skip scan (on \"dept\").\n\n2. A \"range\" skip scan (on \"sdate\").\n\n3. The pair of IN() lists/SAOPs on item_class and on store. (Nothing\nnew here, except that nbtree needs these regular SAOP arrays to roll\nover the higher-order skip arrays to trigger moving on to the next\ndept/date.)\n\nInternally, we're just doing a series of several thousand distinct\nnon-overlapping accesses, in index key space order (so as to preserve\nthe appearance of one continuous index scan). These accesses start\nout like this:\n\ndept=INT_MIN, date='1995-06-01', item_class=20, store=200\n (Here _bt_advance_array_keys discovers that the actual lowest dept\nis 1, not INT_MIN)\ndept=1, date='1995-06-01', item_class=20, store=200\ndept=1, date='1995-06-01', item_class=20, store=250\ndept=1, date='1995-06-01', item_class=35, store=200\ndept=1, date='1995-06-01', item_class=35, store=250\n...\n\n(Side note: as I mentioned, each of the two \"store\" values usually\nappear together on the same leaf page in practice. Arguably I should\nhave shown 2 lines/accesses here (for \"dept=1\"), rather than showing\n4. The 4 \"dept=1\" lines shown required only 2 primitive index\nscans/index descents/leaf page reads. Disjunctive accesses don't\nnecessarily map 1:1 with primitive/physical index scans.)\n\nAbout another ten thousand similar accesses occur (omitted for\nbrevity). Execution of the scan within nbtree finally ends with these\nprimitive index scans/accesses:\n...\ndept=100, date='1995-06-30', item_class=50, store=250\ndept=101, date='1995-06-01', item_class=20, store=200\nSTOP\n\nThere is no \"dept=101\" entry in the index (the highest department in\nthe index happens to be 100). The index scan therefore terminates at\nthis point, having run out of leaf pages to scan (we've reached the\nrightmost point of the rightmost leaf page, as the scan attempts to\nlocate non-existent dept=101 tuples).\n\nDesign\n======\n\nSince index scans with skip arrays work just like index scans with\nregular arrays (as of Postgres 17), naturally, there are no special\nrestrictions. 
Associated optimizer index paths have path keys, and so\ncould (just for example) appear in a merge join, or feed into a group\naggregate, while avoiding a sort node. Index scans that skip could\nalso feed into a relocatable cursor.\n\nAs I mentioned already, the patch adds a skipping mechanism that is\npurely an additive thing. I think that this will turn out to be an\nimportant enabler of using the optimizations, even when there's much\nuncertainty about how much they'll actually help at runtime.\n\nOptimizer\n---------\n\nWe make a broad assumption that skipping is always to our advantage\nduring nbtree preprocessing -- preprocessing generates as many skip\narrays as could possibly make sense based on static rules (rules that\ndon't apply any kind of information about data distribution). Of\ncourse, skipping isn't necessarily the best approach in all cases, but\nthat's okay. We only actually skip when physical index characteristics\nshow that it makes sense. The real decisions about skipping are all\nmade dynamically.\n\nThat approach seems far more practicable than preempting the problem\nduring planning or during nbtree preprocessing. It seems like it'd be\nvery hard to model the costs statistically. We need revisions to\nbtcostestimate, of course, but the less we can rely on btcostestimate\nthe better. As I said, there are no new index paths generated by the\noptimizer for any of this.\n\nWhat do you call an index scan where 90% of all index tuples are 1 of\nonly 3 distinct values, while the remaining 10% of index tuples are\nall perfectly unique in respect of a leading column? Clearly the best\nstrategy when skipping using the leading column is to \"use skip scan for\n90% of the index, and use a conventional range scan for the remaining\n10%\". Skipping generally makes sense, but we legitimately need to vary\nour strategy *during the same index scan*. It makes no sense to think\nof skip scan as a discrete sort of index scan.\n\nI have yet to prove that always having the option of skipping (even\nwhen it's very unlikely to help) really does \"come for free\" -- for\nnow I'm just asserting that that's possible. I'll need proof. I expect\nto hear some principled skepticism on this point. It's probably not\nquite there in this v1 of the patch -- there'll be some regressions (I\nhaven't looked very carefully just yet). However, we seem to already\nbe quite close to avoiding regressions from excessive/useless\nskipping.\n\nExtensible infrastructure/support functions\n-------------------------------------------\n\nCurrently, the patch only supports skip scan for a subset of all\nopclasses -- those that have the required support function #6, or\n\"skip support\" function. This provides the opclass with (among other\nthings) a way to increment the current skip array value (or to\ndecrement it, in the case of backward scans). In practice we only have\nthis for a handful of discrete integer (and integer-ish) types. Note\nthat the patch currently cannot skip for an index column that happens\nto be text. Note that even this v1 supports skip scans that use\nunsupported types, provided that the input opclass of the specific\ncolumns we'll need to skip has support.\n\nThe patch should be able to support every type/opclass as a target for\nskipping, regardless of whether an opclass support function happened\nto be available. 
That could work by teaching the nbtree code to have\nexplicit probes for the next skip array value in the index, only then\ncombining that new value with the qual from the input scan keys/query.\nI've put that off for now because it seems less important -- it\ndoesn't really affect anything I've said about the core design, which\nis what I'm focussing on for now.\n\nIt makes sense to use increment/decrement whenever feasible, even\nthough it isn't strictly necessary (or won't be, once the patch has\nthe required explicit probe support). The only reason to not apply\nincrement/decrement opclass skip support (that I can see) is because\nit just isn't practical (this is generally the case for continuous\ntypes). While it's slightly onerous to have to invent all this new\nopclass infrastructure, it definitely makes sense.\n\nThere is a performance advantage to having skip arrays that can\nincrement through each distinct possible indexable value (this\nincrement/decrement stuff comes from the MDAM paper). The MDAM\ntechniques inherently work best when \"skipping\" columns of discrete\ntypes like integer and date, which is why the paper has examples that\nall look like that. If you look at my example query and its individual\naccesses, you'll realize why this is so.\n\nThoughts?\n\n[1] https://vldb.org/conf/1995/P710.PDF\n--\nPeter Geoghegan", "msg_date": "Wed, 26 Jun 2024 15:16:07 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "Hi Peter,\n\n> Attached is a POC patch that adds skip scan to nbtree. The patch\n> teaches nbtree index scans to efficiently use a composite index on\n> '(a, b)' for queries with a predicate such as \"WHERE b = 5\". This is\n> feasible in cases where the total number of distinct values in the\n> column 'a' is reasonably small (think tens or hundreds, perhaps even\n> thousands for very large composite indexes).\n>\n> [...]\n>\n> Thoughts?\n\nMany thanks for working on this. I believe it is an important feature\nand it would be great to deliver it during the PG18 cycle.\n\nI experimented with the patch and here are the results I got so far.\n\nFirstly, it was compiled on Intel MacOS and ARM Linux. 
All the tests\npass just fine.\n\nSecondly, I tested the patch manually using a release build on my\nRaspberry Pi 5 and the GUCs that can be seen in [1].\n\nTest 1 - simple one.\n\n```\nCREATE TABLE test1(c char, n bigint);\nCREATE INDEX test1_idx ON test1 USING btree(c,n);\n\nINSERT INTO test1\n SELECT chr(ascii('a') + random(0,2)) AS c,\n random(0, 1_000_000_000) AS n\n FROM generate_series(0, 1_000_000);\n\nEXPLAIN [ANALYZE] SELECT COUNT(*) FROM test1 WHERE n > 900_000_000;\n```\n\nTest 2 - a more complicated one.\n\n```\nCREATE TABLE test2(c1 char, c2 char, n bigint);\nCREATE INDEX test2_idx ON test2 USING btree(c1,c2,n);\n\nINSERT INTO test2\n SELECT chr(ascii('a') + random(0,2)) AS c1,\n chr(ascii('a') + random(0,2)) AS c2,\n random(0, 1_000_000_000) AS n\n FROM generate_series(0, 1_000_000);\n\nEXPLAIN [ANALYZE] SELECT COUNT(*) FROM test2 WHERE n > 900_000_000;\n```\n\nTest 3 - to see how it works with covering indexes.\n\n```\nCREATE TABLE test3(c char, n bigint, s text DEFAULT 'text_value' || n);\nCREATE INDEX test3_idx ON test3 USING btree(c,n) INCLUDE(s);\n\nINSERT INTO test3\n SELECT chr(ascii('a') + random(0,2)) AS c,\n random(0, 1_000_000_000) AS n,\n 'text_value_' || random(0, 1_000_000_000) AS s\n FROM generate_series(0, 1_000_000);\n\nEXPLAIN [ANALYZE] SELECT s FROM test3 WHERE n < 1000;\n```\n\nIn all the cases the patch worked as expected.\n\nI noticed that with the patch we choose Index Only Scans for Test 1\nand without the patch - Parallel Seq Scan. However the Parallel Seq\nScan is 2.4 times faster. Before the patch the query takes 53 ms,\nafter the patch - 127 ms. I realize this could be just something\nspecific to my hardware and/or amount of data.\n\nDo you think this is something that was expected or something worth\ninvestigating further?\n\nI haven't looked at the code yet.\n\n[1]: https://github.com/afiskon/pgscripts/blob/master/single-install-meson.sh\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 2 Jul 2024 15:52:58 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Tue, Jul 2, 2024 at 8:53 AM Aleksander Alekseev\n<[email protected]> wrote:\n> CREATE TABLE test1(c char, n bigint);\n> CREATE INDEX test1_idx ON test1 USING btree(c,n);\n\nThe type \"char\" (note the quotes) is different from char(1). It just\nso happens that v1 has support for skipping attributes that use the\ndefault opclass for \"char\", without support for char(1).\n\nIf you change your table definition to CREATE TABLE test1(c \"char\", n\nbigint), then your example queries can use the optimization. This\nmakes a huge difference.\n\n> EXPLAIN [ANALYZE] SELECT COUNT(*) FROM test1 WHERE n > 900_000_000;\n\nFor example, this first test query goes from needing a full index scan\nthat has 5056 buffer hits to a skip scan that requires only 12 buffer\nhits.\n\n> I noticed that with the patch we choose Index Only Scans for Test 1\n> and without the patch - Parallel Seq Scan. However the Parallel Seq\n> Scan is 2.4 times faster. 
Before the patch the query takes 53 ms,\n> after the patch - 127 ms.\n\nI'm guessing that it's actually much faster once you change the\nleading column to the \"char\" type/default opclass.\n\n> I realize this could be just something\n> specific to my hardware and/or amount of data.\n\nThe selfuncs.c costing currently has a number of problems.\n\nOne problem is that it doesn't know that some opclasses/types don't\nsupport skipping at all. That particular problem should be fixed on\nthe nbtree side; nbtree should support skipping regardless of the\nopclass that the skipped attribute uses (while still retaining the new\nopclass support functions for a subset of types where we expect it to\nmake skip scans somewhat faster).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 2 Jul 2024 09:30:28 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Tue, Jul 2, 2024 at 9:30 AM Peter Geoghegan <[email protected]> wrote:\n> > EXPLAIN [ANALYZE] SELECT COUNT(*) FROM test1 WHERE n > 900_000_000;\n>\n> For example, this first test query goes from needing a full index scan\n> that has 5056 buffer hits to a skip scan that requires only 12 buffer\n> hits.\n\nActually, looks like that's an invalid result. The \"char\" opclass\nsupport function appears to have bugs.\n\nMy testing totally focussed on types like integer, date, and UUID. The\n\"char\" opclass was somewhat of an afterthought. Will fix \"char\" skip\nsupport for v2.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 2 Jul 2024 09:40:24 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Tue, Jul 2, 2024 at 9:40 AM Peter Geoghegan <[email protected]> wrote:\n>\n> On Tue, Jul 2, 2024 at 9:30 AM Peter Geoghegan <[email protected]> wrote:\n> > > EXPLAIN [ANALYZE] SELECT COUNT(*) FROM test1 WHERE n > 900_000_000;\n> >\n> > For example, this first test query goes from needing a full index scan\n> > that has 5056 buffer hits to a skip scan that requires only 12 buffer\n> > hits.\n>\n> Actually, looks like that's an invalid result. The \"char\" opclass\n> support function appears to have bugs.\n\nAttached v2 fixes this bug. The problem was that the skip support\nfunction used by the \"char\" opclass assumed signed char comparisons,\neven though the authoritative B-Tree comparator (support function 1)\nuses unsigned comparisons (via uint8 casting). A simple oversight. Your\ntest cases will work with this v2, provided you use \"char\" (instead of\nunadorned char) in the create table statements.\n\nAnother small change in v2: I added a DEBUG2 message to nbtree\npreprocessing, indicating the number of attributes that we're going to\nskip. This provides an intuitive way to see whether the optimizations\nare being applied in the first place. That should help to avoid\nfurther confusion like this as the patch continues to evolve.\n\nSupport for char(1) doesn't seem feasible within the confines of a\nskip support routine. Just like with text (which I touched on in the\nintroductory email), this will require teaching nbtree to perform\nexplicit next-key probes. An approach based on explicit probes is\nsomewhat less efficient in some cases, but it should always work. It's\nimpractical to write opclass support that (say) increments a char\nvalue 'a' to 'b'. 
Making that approach work would require extensive\ncooperation from the collation provider, and some knowledge of\nencoding, which just doesn't make sense (if it's possible at all). I\ndon't have the problem with \"char\" because it isn't a collatable type\n(it is essentially the same thing as an uint8 integer type, except\nthat it outputs printable ascii characters).\n\nFWIW, your test cases don't seem like particularly good showcases for\nthe patch. The queries you came up with require a relatively large\namount of random I/O when accessing the heap, which skip scan will\nnever help with -- so skip scan is a small win (at least relative to\nan unoptimized full index scan). Obviously, no skip scan can ever\navoid any required heap accesses compared to a naive full index scan\n(loose index scan *does* have that capability, which is possible only\nbecause it applies semantic information in a way that's very\ndifferent).\n\nFWIW, a more sympathetic version of your test queries would have\ninvolved something like \"WHERE n = 900_500_000\". That would allow the\nimplementation to perform a series of *selective* primitive index\nscans (one primitive index scan per \"c\" column/char grouping). That\nchange has the effect of allowing the scan to skip over many\nirrelevant leaf pages, which is of course the whole point of skip\nscan. It also makes the scan will require far fewer heap accesses, so\nheap related costs no longer drown out the nbtree improvements.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 2 Jul 2024 12:25:51 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Tue, Jul 2, 2024 at 12:25 PM Peter Geoghegan <[email protected]> wrote:\n> Attached v2 fixes this bug. The problem was that the skip support\n> function used by the \"char\" opclass assumed signed char comparisons,\n> even though the authoritative B-Tree comparator (support function 1)\n> uses signed comparisons (via uint8 casting). A simple oversight.\n\nAlthough v2 gives correct answers to the queries, the scan itself\nperforms an excessive amount of leaf page accesses. In short, it\nbehaves just like a full index scan would, even though we should\nexpect it to skip over significant runs of the index. So that's\nanother bug.\n\nIt looks like the queries you posted have a kind of adversarial\nquality to them, as if they were designed to confuse the\nimplementation. Was it intentional? 
Did you take them from an existing\ntest suite somewhere?\n\nThe custom instrumentation I use to debug these issues shows:\n\n_bt_readpage: 🍀 1981 with 175 offsets/tuples (leftsib 4032, rightsib 3991) ➡️\n _bt_readpage first: (c, n)=(b, 998982285), TID='(1236,173)',\n0x7f1464fe9fc0, from non-pivot offnum 2 started page\n _bt_readpage final: , (nil), continuescan high key check did not set\nso->currPos.moreRight=false ➡️ 🟢\n _bt_readpage stats: currPos.firstItem: 0, currPos.lastItem: 173,\nnmatching: 174 ✅\n_bt_readpage: 🍀 3991 with 175 offsets/tuples (leftsib 1981, rightsib 9) ➡️\n _bt_readpage first: (c, n)=(b, 999474517), TID='(4210,9)',\n0x7f1464febfc8, from non-pivot offnum 2 started page\n _bt_readpage final: , (nil), continuescan high key check did not set\nso->currPos.moreRight=false ➡️ 🟢\n _bt_readpage stats: currPos.firstItem: 0, currPos.lastItem: 173,\nnmatching: 174 ✅\n_bt_readpage: 🍀 9 with 229 offsets/tuples (leftsib 3991, rightsib 3104) ➡️\n _bt_readpage first: (c, n)=(c, 1606), TID='(882,68)', 0x7f1464fedfc0,\nfrom non-pivot offnum 2 started page\n _bt_readpage final: , (nil), continuescan high key check did not set\nso->currPos.moreRight=false ➡️ 🟢\n _bt_readpage stats: currPos.firstItem: 0, currPos.lastItem: -1, nmatching: 0 ❌\n_bt_readpage: 🍀 3104 with 258 offsets/tuples (leftsib 9, rightsib 1685) ➡️\n _bt_readpage first: (c, n)=(c, 706836), TID='(3213,4)',\n0x7f1464feffc0, from non-pivot offnum 2 started page\n _bt_readpage final: , (nil), continuescan high key check did not set\nso->currPos.moreRight=false ➡️ 🟢\n _bt_readpage stats: currPos.firstItem: 0, currPos.lastItem: -1, nmatching: 0 ❌\n*** SNIP, many more \"nmatching: 0\" pages appear after these two ***\n\nThe final _bt_advance_array_keys call for leaf page 3991 should be\nscheduling a new primitive index scan (i.e. skipping), but that never\nhappens. Not entirely sure why that is, but it probably has something\nto do with _bt_advance_array_keys failing to hit the\n\"has_required_opposite_direction_only\" path for determining if another\nprimitive scan is required. You're using an inequality required in the\nopposite-to-scan-direction here, so that path is likely to be\nrelevant.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 2 Jul 2024 12:55:59 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Tue, Jul 2, 2024 at 12:55 PM Peter Geoghegan <[email protected]> wrote:\n> Although v2 gives correct answers to the queries, the scan itself\n> performs an excessive amount of leaf page accesses. In short, it\n> behaves just like a full index scan would, even though we should\n> expect it to skip over significant runs of the index. So that's\n> another bug.\n\nHit \"send\" too soon. I simply forgot to run \"alter table test1 alter\ncolumn c type \"char\";\" before running the query. So, I was mistaken\nabout there still being a bug in v2. The issue here is that we don't\nhave support for the underlying type, char(1) -- nothing more.\n\nv2 of the patch with your query 1 (when changed to use the \"char\"\ntype/opclass instead of the currently unsupported char(1)\ntype/opclass) performs 395 index related buffer hits, and 5406 heap\nblock accesses. Whereas it's 3833 index buffer hits with master\n(naturally, the same 5406 heap accesses are required with master). In\nshort, this query isn't particularly sympathetic to the patch. 
Nor is\nit unsympathetic.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 2 Jul 2024 13:09:27 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "Hi Peter,\n\n> It looks like the queries you posted have a kind of adversarial\n> quality to them, as if they were designed to confuse the\n> implementation. Was it intentional?\n\nTo some extent. I merely wrote several queries that I would expect\nshould benefit from skip scans. Since I didn't look at the queries you\nused there was a chance that I would hit something interesting.\n\n> Attached v2 fixes this bug. The problem was that the skip support\n> function used by the \"char\" opclass assumed signed char comparisons,\n> even though the authoritative B-Tree comparator (support function 1)\n> uses unsigned comparisons (via uint8 casting). A simple oversight. Your\n> test cases will work with this v2, provided you use \"char\" (instead of\n> unadorned char) in the create table statements.\n\nThanks for v2.\n\n> If you change your table definition to CREATE TABLE test1(c \"char\", n\n> bigint), then your example queries can use the optimization. This\n> makes a huge difference.\n\nYou are right, it does.\n\nTest1 takes 33.7 ms now (53 ms before the patch, x1.57)\n\nTest3 I showed before contained an error in the table definition\n(Postgres can't do `n bigint, s text DEFAULT 'text_value' || n`). Here\nis the corrected test:\n\n```\nCREATE TABLE test3(c \"char\", n bigint, s text);\nCREATE INDEX test3_idx ON test3 USING btree(c,n) INCLUDE(s);\n\nINSERT INTO test3\n  SELECT chr(ascii('a') + random(0,2)) AS c,\n         random(0, 1_000_000_000) AS n,\n         'text_value_' || random(0, 1_000_000_000) AS s\n  FROM generate_series(0, 1_000_000);\n\nEXPLAIN ANALYZE SELECT s FROM test3 WHERE n < 10_000;\n```\n\nIt runs fast (< 1 ms) and uses the index, as expected.\n\nTest2 with \"char\" doesn't seem to benefit from the patch anymore\n(pretty sure it did in v1). It always chooses Parallel Seq Scans even\nif I change the condition to `WHERE n > 999_995_000` or `WHERE n =\n999_997_362`. Is it an expected behavior?\n\nI also tried Test4 and Test5.\n\nIn Test4 I was curious if skip scans work properly with functional indexes:\n\n```\nCREATE TABLE test4(d date, n bigint);\nCREATE INDEX test4_idx ON test4 USING btree(extract(year from d),n);\n\nINSERT INTO test4\n  SELECT ('2024-' || random(1,12) || '-' || random(1,28)) :: date AS d,\n         random(0, 1_000_000_000) AS n\n  FROM generate_series(0, 1_000_000);\n\nEXPLAIN ANALYZE SELECT COUNT(*) FROM test4 WHERE n > 900_000_000;\n```\n\nThe query uses Index Scan, however the performance is worse than with\nSeq Scan chosen before the patch. It doesn't matter if I choose '>' or\n'=' condition.\n\nTest5 checks how skip scans work with partial indexes:\n\n```\nCREATE TABLE test5(c \"char\", n bigint);\nCREATE INDEX test5_idx ON test5 USING btree(c, n) WHERE n > 900_000_000;\n\nINSERT INTO test5\n  SELECT chr(ascii('a') + random(0,2)) AS c,\n         random(0, 1_000_000_000) AS n\n  FROM generate_series(0, 1_000_000);\n\nEXPLAIN ANALYZE SELECT COUNT(*) FROM test5 WHERE n > 950_000_000;\n```\n\nIt runs fast and chooses Index Only Scan. But then I discovered that\nwithout the patch Postgres also uses Index Only Scan for this query. I\ndidn't know it could do this - what is the name of this technique? The\nquery takes 17.6 ms with the patch, 21 ms without the patch. 
Not a\nhuge win but still.\n\nThat's all I have for now.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 5 Jul 2024 14:03:51 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Fri, Jul 5, 2024 at 7:04 AM Aleksander Alekseev\n<[email protected]> wrote:\n> Test2 with \"char\" doesn't seem to benefit from the patch anymore\n> (pretty sure it did in v1). It always chooses Parallel Seq Scans even\n> if I change the condition to `WHERE n > 999_995_000` or `WHERE n =\n> 999_997_362`. Is it an expected behavior?\n\nThe \"char\" opclass's skip support routine was totally broken in v1, so\nits performance isn't really relevant. In any case v2 didn't make any\nchanges to the costing, so I'd expect it to use exactly the same query\nplan as v1.\n\n> The query uses Index Scan, however the performance is worse than with\n> Seq Scan chosen before the patch. It doesn't matter if I choose '>' or\n> '=' condition.\n\nThat's because the index has a leading/skipped column of type\n\"numeric\", which isn't a supported type just yet (a supported B-Tree\nopclass, actually).\n\nThe optimization is effective if you create the expression index with\na cast to integer:\n\nCREATE INDEX test4_idx ON test4 USING btree(((extract(year from d))::int4),n);\n\nThis performs much better. Now I see \"DEBUG: skipping 1 index\nattributes\" when I run the query \"EXPLAIN (ANALYZE, BUFFERS) SELECT\nCOUNT(*) FROM test4 WHERE n > 900_000_000\", which indicates that the\noptimization has in fact been used as expected. There are far fewer\nbuffers hit with this version of your test4, which also indicates that\nthe optimization has been effective.\n\nNote that the original numeric expression index test4 showed \"DEBUG:\nskipping 0 index attributes\" when the test query ran, which indicated\nthat the optimization couldn't be used. I suggest that you look out\nfor that, by running \"set client_min_messages to debug2;\" from psql\nwhen testing the patch.\n\n> It runs fast and choses Index Only Scan. But then I discovered that\n> without the patch Postgres also uses Index Only Scan for this query. I\n> didn't know it could do this - what is the name of this technique?\n\nIt is a full index scan. These have been possible for many years now\n(possibly well over 20 years).\n\nArguably, the numeric case that didn't use the optimization (your\ntest4) should have been costed as a full index scan, but it wasn't --\nthat's why you didn't get a faster sequential scan, which would have\nmade a little bit more sense. In general, the costing changes in the\npatch are very rough.\n\nThat said, this particular problem (the test4 numeric issue) should be\nfixed by inventing a way for nbtree to use skip scan with types that\nlack skip support. It's not primarily a problem with the costing. At\nleast not in my mind.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 5 Jul 2024 20:44:50 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Fri, Jul 5, 2024 at 8:44 PM Peter Geoghegan <[email protected]> wrote:\n> CREATE INDEX test4_idx ON test4 USING btree(((extract(year from d))::int4),n);\n>\n> This performs much better. 
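(If you want to reproduce that check yourself, something like the following should do it -- it just combines your test4 definition, with the expression cast to int4, and the temporary DEBUG2 message that v2 added to nbtree preprocessing:)\n\nset client_min_messages to debug2;\nEXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM test4 WHERE n > 900_000_000;\n\n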
Now I see \"DEBUG: skipping 1 index\n> attributes\" when I run the query \"EXPLAIN (ANALYZE, BUFFERS) SELECT\n> COUNT(*) FROM test4 WHERE n > 900_000_000\", which indicates that the\n> optimization has in fact been used as expected. There are far fewer\n> buffers hit with this version of your test4, which also indicates that\n> the optimization has been effective.\n\nActually, with an index-only scan it is 281 buffer hits (including\nsome small number of VM buffer hits) with the patch, versus 2736\nbuffer hits on master. So a big change to the number of index page\naccesses only.\n\nIf you use a plain index scan for this, then the cost of random heap\naccesses totally dominates, so skip scan cannot possibly give much\nbenefit. Even a similar bitmap scan requires 4425 distinct heap page accesses,\nwhich is significantly more than the total number of index pages in\nthe index. 4425 heap pages is almost the entire table; the table\nconsists of 4480 mainfork blocks.\n\nThis is a very nonselective query. It's not at all surprising that\nthis query (and others like it) hardly benefit at all, except when we\ncan use an index-only scan (so that the cost of heap accesses doesn't\ntotally dominate).\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 5 Jul 2024 20:57:58 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "Hi,\n\nSince I'd like to understand the skip scan to improve the EXPLAIN output\nfor multicolumn B-Tree Index[1], I began to try the skip scan with some\nqueries and look into the source code.\n\nI have some feedback and comments.\n\n(1)\n\nAt first, I was surprised to look at your benchmark result because the skip scan\nindex can improve much performance. I agree that there are many users to be\nhappy with the feature for especially OLAP use-case. I expected to use v18.\n\n\n(2)\n\nI found the cost is estimated to much higher if the number of skipped attributes\nis more than two. Is it expected behavior?\n\n# Test result. 
The attached file is the detail of tests.\n\n-- Index Scan\n-- The actual time is low since the skip scan works well\n-- But the cost is higher than one of seqscan\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE) SELECT * FROM test WHERE id3 = 101;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_id1_id2_id3 on public.test (cost=0.42..26562.77 rows=984 width=20) (actual time=0.051..15.533 rows=991 loops=1)\n Output: id1, id2, id3, value\n Index Cond: (test.id3 = 101)\n Buffers: shared hit=4402\n Planning:\n Buffers: shared hit=7\n Planning Time: 0.234 ms\n Execution Time: 15.711 ms\n(8 rows)\n\n-- Seq Scan\n-- actual time is high, but the cost is lower than one of the above Index Scan.\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE) SELECT * FROM test WHERE id3 = 101;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=1000.00..12676.73 rows=984 width=20) (actual time=0.856..113.861 rows=991 loops=1)\n Output: id1, id2, id3, value\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=6370\n -> Parallel Seq Scan on public.test (cost=0.00..11578.33 rows=410 width=20) (actual time=0.061..102.016 rows=330 loops=3)\n Output: id1, id2, id3, value\n Filter: (test.id3 = 101)\n Rows Removed by Filter: 333003\n Buffers: shared hit=6370\n Worker 0: actual time=0.099..98.014 rows=315 loops=1\n Buffers: shared hit=2066\n Worker 1: actual time=0.054..97.162 rows=299 loops=1\n Buffers: shared hit=1858\n Planning:\n Buffers: shared hit=19\n Planning Time: 0.194 ms\n Execution Time: 114.129 ms\n(18 rows)\n\n\nI look at btcostestimate() to find the reason and found the bound quals\nand cost.num_sa_scans are different from my expectation.\n\nMy assumption is\n* bound quals is id3=XXX (and id1 and id2 are skipped attributes)\n* cost.num_sa_scans = 100 (=10*10 because assuming 10 primitive index scans\n per skipped attribute)\n\nBut it's wrong. The above index scan result is\n* bound quals is NULL\n* cost.num_sa_scans = 1\n\n\nAs I know you said the below, but I'd like to know the above is expected or not.\n\n> That approach seems far more practicable than preempting the problem\n> during planning or during nbtree preprocessing. It seems like it'd be\n> very hard to model the costs statistically. We need revisions to\n> btcostestimate, of course, but the less we can rely on btcostestimate\n> the better. As I said, there are no new index paths generated by the\n> optimizer for any of this.\n\nI couldn't understand why there is the below logic well.\n\n btcostestimate()\n (...omit...)\n \t\t\tif (indexcol != iclause->indexcol)\n \t\t\t{\n \t\t\t\t/* no quals at all for indexcol */\n \t\t\t\tfound_skip = true;\n \t\t\t\tif (index->pages < 100)\n \t\t\t\t\tbreak;\n \t\t\t\tnum_sa_scans += 10 * (indexcol - iclause->indexcol); // why add minus value?\n \t\t\t\tcontinue; // why skip to add bound quals?\n \t\t\t}\n\n\n(3)\n\nCurrently, there is an assumption that \"there will be 10 primitive index scans\nper skipped attribute\". 
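(For reference, the per-column statistics I have in mind can be checked with a query along these lines -- just a sketch against my test table from above:)\n\nSELECT attname, n_distinct\nFROM pg_stats\nWHERE schemaname = 'public' AND tablename = 'test'\n  AND attname IN ('id1', 'id2', 'id3');\n\n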
Is any chance to use pg_stats.n_distinct?\n\n[1] Improve EXPLAIN output for multicolumn B-Tree Index\nhttps://www.postgresql.org/message-id/flat/TYWPR01MB1098260B694D27758FE2BA46FB1C92%40TYWPR01MB10982.jpnprd01.prod.outlook.com\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Fri, 12 Jul 2024 05:18:56 +0000", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "RE: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Fri, Jul 12, 2024 at 1:19 AM <[email protected]> wrote:\n> Since I'd like to understand the skip scan to improve the EXPLAIN output\n> for multicolumn B-Tree Index[1], I began to try the skip scan with some\n> queries and look into the source code.\n\nThanks for the review!\n\nAttached is v3, which generalizes skip scan, allowing it to work with\nopclasses/types that lack a skip support routine. In other words, v3\nmakes skip scan work for all types, including continuous types, where\nit's impractical or infeasible to add skip support. So now important\ntypes like text and numeric also get the skip scan optimization (it's\nnot just discrete types like integer and date, as in previous\nversions).\n\nI feel very strongly that everything should be implemented as part of\nthe new skip array abstraction; the patch should only add the concept\nof skip arrays, which should work just like SAOP arrays. We should\navoid introducing any special cases. In short, _bt_advance_array_keys\nshould work in exactly the same way as it does as of Postgres 17\n(except for a few representational differences for skip arrays). This\nseems essential because _bt_advance_array_keys inherently need to be\nable to trigger moving on to the next skip array value when it reaches\nthe end of a SAOP array (and vice-versa). And so it just makes sense\nto abstract-away the differences, hiding the difference in lower level\ncode.\n\nI have described the new _bt_first behavior that is now available in\nthis new v3 of the patch as \"adding explicit next key probes\". While\nv3 does make new changes to _bt_first, it's not really a special kind\nof index probe. v3 invents new sentinel values instead.\n\nThe use of sentinels avoids inventing true special cases: the values\n-inf, +inf, as well as variants of = that use a real datum value, but\nmatch on the next key in the index. These new = variants can be\nthought of as \"+infinitesimal\" values. So when _bt_advance_array_keys\nhas to \"increment\" the numeric value 5.0, it sets the scan key to the\nvalue \"5.0 +infinitesimal\". There can never be any matching tuples in\nthe index (just like with -inf sentinel values), but that doesn't\nmatter. So the changes v3 makes to _bt_first doesn't change the basic\nconceptual model. The added complexity is kept to a manageable level,\nparticularly within _bt_advance_array_keys, which is already very\ncomplicated.\n\nTo help with testing and review, I've added another temporary testing\nGUC to v3: skipscan_skipsupport_enabled. This can be set to \"false\" to\navoid using skip support, even where available. The GUC makes it easy\nto measure how skip support routines can help performance (with\ndiscrete types like integer and date).\n\n> I found the cost is estimated to much higher if the number of skipped attributes\n> is more than two. Is it expected behavior?\n\nYes and no.\n\nHonestly, the current costing is just placeholder code. It is totally\ninadequate. I'm not surprised that you found problems with it. 
I just\ndidn't put much work into it, because I didn't really know what to do.\n\n> # Test result. The attached file is the detail of tests.\n>\n> -- Index Scan\n> -- The actual time is low since the skip scan works well\n> -- But the cost is higher than one of seqscan\n> EXPLAIN (ANALYZE, BUFFERS, VERBOSE) SELECT * FROM test WHERE id3 = 101;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using idx_id1_id2_id3 on public.test (cost=0.42..26562.77 rows=984 width=20) (actual time=0.051..15.533 rows=991 loops=1)\n> Output: id1, id2, id3, value\n> Index Cond: (test.id3 = 101)\n> Buffers: shared hit=4402\n> Planning:\n> Buffers: shared hit=7\n> Planning Time: 0.234 ms\n> Execution Time: 15.711 ms\n> (8 rows)\n\nThis is a useful example, because it shows the difficulty with the\ncosting. I ran this query using my own custom instrumentation of the\nscan. I saw that we only ever manage to skip ahead by perhaps 3 leaf\npages at a time, but we still come out ahead. As you pointed out, it's\n~7.5x faster than the sequential scan, but not very different to the\nequivalent full index scan. At least not very different in terms of\nleaf page accesses. Why should we win by this much, for what seems\nlike a marginal case for skip scan?\n\nEven cases where \"skipping\" doesn't manage to skip any leaf pages can\nstill benefit from skipping *index tuples* -- there is more than one\nkind of skipping to consider. That is, the patch helps a lot with some\n(though not all) cases where I didn't really expect that to happen:\nthe Postgres 17 SAOP tuple skipping code (the code in\n_bt_checkkeys_look_ahead, and the related code in _bt_readpage) helps\nquite a bit in \"marginal\" skip scan cases, even though it wasn't\nreally designed for that purpose (it was added to avoid regressions in\nSAOP array scans for the Postgres 17 work).\n\nI find that some queries using my original example test case are about\ntwice as fast as an equivalent full index scan, even when only the\nfourth and final index column is used in the query predicate. The scan\ncan't even skip a single leaf page at a time, and yet we still win by\na nice amount. We win, though it is almost by mistake!\n\nThis is mostly a good thing. Both for the obvious reason (fast is\nbetter than slow), and because it justifies being so aggressive in\nassuming that skip scan might work out during planning (being wrong\nwithout really losing is nice). But there is also a downside: it makes\nit even harder to model costs at runtime, from within the optimizer.\n\nIf I measure the actual runtime costs other than runtime (e.g.,\nbuffers accesses), I'm not sure that the optimizer is wrong to think\nthat the parallel sequential scan is faster. It looks approximately\ncorrect. It is only when we look at runtime that the optimizer's\nchoice looks wrong. Which is...awkward.\n\nIn general, I have very little idea about how to improve the costing\nwithin btcostestimate. I am hoping that somebody has better ideas\nabout it. btcostestimate is definitely the area where the patch is\nweakest right now.\n\n> I look at btcostestimate() to find the reason and found the bound quals\n> and cost.num_sa_scans are different from my expectation.\n>\n> My assumption is\n> * bound quals is id3=XXX (and id1 and id2 are skipped attributes)\n> * cost.num_sa_scans = 100 (=10*10 because assuming 10 primitive index scans\n> per skipped attribute)\n>\n> But it's wrong. 
The above index scan result is\n> * bound quals is NULL\n> * cost.num_sa_scans = 1\n\nThe logic with cost.num_sa_scans was definitely not what I intended.\nThat's fixed in v3, at least. But the code in btcostestimate is still\nessentially the same as in earlier versions -- it needs to be\ncompletely redesigned (or, uh, designed for the first time).\n\n> As I know you said the below, but I'd like to know the above is expected or not.\n\n> Currently, there is an assumption that \"there will be 10 primitive index scans\n> per skipped attribute\". Is any chance to use pg_stats.n_distinct?\n\nIt probably makes sense to use pg_stats.n_distinct here. But how?\n\nIf the problem is that we're too pessimistic, then I think that this\nwill usually (though not always) make us more pessimistic. Isn't that\nthe wrong direction to go in? (We're probably also too optimistic in\nsome cases, but being too pessimistic is a bigger problem in\npractice.)\n\nFor example, your test case involved 11 distinct values in each\ncolumn. The current approach of hard-coding 10 (which is just a\ntemporary hack) should actually make the scan look a bit cheaper than\nit would if we used the true ndistinct.\n\nAnother underlying problem is that the existing SAOP costing really\nisn't very accurate, without skip scan -- that's a big source of the\npessimism with arrays/skipping. Why should we be able to get the true\nnumber of primitive index scans just by multiplying together each\nomitted prefix column's ndistinct? That approach is good for getting\nthe worst case, which is probably relevant -- but it's probably not a\nvery good assumption for the average case. (Though at least we can cap\nthe total number of primitive index scans to 1/3 of the total number\nof pages in the index in btcostestimate, since we have guarantees\nabout the worst case as of Postgres 17.)\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 15 Jul 2024 14:34:38 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Mon, Jul 15, 2024 at 2:34 PM Peter Geoghegan <[email protected]> wrote:\n> Attached is v3, which generalizes skip scan, allowing it to work with\n> opclasses/types that lack a skip support routine. In other words, v3\n> makes skip scan work for all types, including continuous types, where\n> it's impractical or infeasible to add skip support.\n\nAttached is v4, which:\n\n* Fixes a previous FIXME item affecting range skip scans/skip arrays\nused in cross-type scenarios.\n\n* Refactors and simplifies the handling of range inequalities\nassociated with skip arrays more generally. We now always use\ninequality scan keys during array advancement (and when descending the\ntree within _bt_first), rather than trying to use a datum taken from\nthe range inequality as an array element directly.\n\nThis gives us cleaner separation between scan keys/data types in\ncross-type scenarios: skip arrays will now only ever contain\n\"elements\" of opclass input type. Sentinel values such as -inf are\nexpanded to represent \"the lowest possible value that comes after the\narray's low_compare lower bound, if any\". Opclasses that don't offer\nskip support took roughly this same approach within v3, but in v4 all\nopclasses do it the same way (so opclasses with skip support use the\nSK_BT_NEG_INF sentinel marking in their scan keys, though never the\nSK_BT_NEXTKEY sentinel marking).\n\nThis is really just a refactoring revision. 
Nothing particularly\nexciting here compared to v3.\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 24 Jul 2024 17:14:54 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "> On Wed, Jun 26, 2024 at 03:16:07PM GMT, Peter Geoghegan wrote:\n>\n> Loose index scan is a far more specialized technique than skip scan.\n> It only applies within special scans that feed into a DISTINCT group\n> aggregate. Whereas my skip scan patch isn't like that at all -- it's\n> much more general. With my patch, nbtree has exactly the same contract\n> with the executor/core code as before. There are no new index paths\n> generated by the optimizer to make skip scan work, even. Skip scan\n> isn't particularly aimed at improving group aggregates (though the\n> benchmark I'll show happens to involve a group aggregate, simply\n> because the technique works best with large and expensive index\n> scans).\n\nI see that the patch is not supposed to deal with aggregates in any special\nway. But from what I understand after a quick review, skip scan is not getting\napplied to them if there are no quals in the query (in that case\n_bt_preprocess_keys returns before calling _bt_preprocess_array_keys). Yet such\nqueries could benefit from skipping, I assume they still could be handled by\nthe machinery introduced in this patch?\n\n> > Currently, there is an assumption that \"there will be 10 primitive index scans\n> > per skipped attribute\". Is any chance to use pg_stats.n_distinct?\n>\n> It probably makes sense to use pg_stats.n_distinct here. But how?\n>\n> If the problem is that we're too pessimistic, then I think that this\n> will usually (though not always) make us more pessimistic. Isn't that\n> the wrong direction to go in? (We're probably also too optimistic in\n> some cases, but being too pessimistic is a bigger problem in\n> practice.)\n>\n> For example, your test case involved 11 distinct values in each\n> column. The current approach of hard-coding 10 (which is just a\n> temporary hack) should actually make the scan look a bit cheaper than\n> it would if we used the true ndistinct.\n>\n> Another underlying problem is that the existing SAOP costing really\n> isn't very accurate, without skip scan -- that's a big source of the\n> pessimism with arrays/skipping. Why should we be able to get the true\n> number of primitive index scans just by multiplying together each\n> omitted prefix column's ndistinct? That approach is good for getting\n> the worst case, which is probably relevant -- but it's probably not a\n> very good assumption for the average case. (Though at least we can cap\n> the total number of primitive index scans to 1/3 of the total number\n> of pages in the index in btcostestimate, since we have guarantees\n> about the worst case as of Postgres 17.)\n\nDo I understand correctly, that the only way how multiplying ndistincts could\nproduce too pessimistic results is when there is a correlation between distinct\nvalues? Can one benefit from the extended statistics here?\n\nAnd while we're at it, I think it would be great if the implementation will\nallow some level of visibility about the skip scan. From what I see, currently\nit's by design impossible for users to tell whether something was skipped or\nnot. 
But when it comes to planning and estimates, maybe it's not a bad idea to\nlet explain analyze show something like \"expected number of primitive scans /\nactual number of primitive scans\".\n\n\n", "msg_date": "Sat, 3 Aug 2024 21:34:54 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Sat, Aug 3, 2024 at 3:34 PM Dmitry Dolgov <[email protected]> wrote:\n> I see that the patch is not supposed to deal with aggregates in any special\n> way.\n\nRight.\n\n> But from what I understand after a quick review, skip scan is not getting\n> applied to them if there are no quals in the query (in that case\n> _bt_preprocess_keys returns before calling _bt_preprocess_array_keys).\n\nRight.\n\n> Yet such queries could benefit from skipping, I assume they still could be handled by\n> the machinery introduced in this patch?\n\nI'm not sure.\n\nThere are no real changes required inside _bt_advance_array_keys with\nthis patch -- skip arrays are dealt with in essentially the same way\nas conventional arrays (as of Postgres 17). I suspect that loose index\nscan would be best implemented using _bt_advance_array_keys. It could\nalso \"plug in\" to the existing _bt_advance_array_keys design, I\nsuppose.\n\nAs I touched on already, your loose index scan patch applies\nhigh-level semantic information in a way that is very different to my\nskip scan patch. This means that it makes revisions to the index AM\nAPI (if memory serves it adds a callback called amskip to that API).\nIt also means that loose index scan can actually avoid heap accesses;\nloose scans wholly avoid accessing logical rows (in both the index and\nthe heap) by reasoning that it just isn't necessary to do so at all.\nSkipping happens in both data structures. Right?\n\nObviously, my new skip scan patch cannot possibly reduce the number of\nheap page accesses required by a given index scan. Precisely the same\nlogical rows must be accessed as before. There is no two-way\nconversation between the index AM and the table AM about which\nrows/row groupings have at least one visible tuple. We're just\nnavigating through the index more efficiently, without changing any\ncontract outside of nbtree itself.\n\nThe \"skip scan\" name collision is regrettable. But the fact is that\nOracle, MySQL, and now SQLite all call this feature skip scan. That\nfeels like the right precedent to follow.\n\n> Do I understand correctly, that the only way how multiplying ndistincts could\n> produce too pessimistic results is when there is a correlation between distinct\n> values?\n\nYes, that's one problem with the costing. Not the only one, though.\n\nThe true number of primitive index scans depends on the cardinality of\nthe data. For example, a skip scan might be the cheapest plan by far\nif (say) 90% of the index has the same leading column value and the\nremaining 10% has totally unique values. We'd still do a bad job of\ncosting this query with an accurate ndistinct for the leading column.\nWe really only need to do one or two primitive index scans for \"the\nfirst 90% of the index\", and one more primitive index scan for \"the\nremaining 10% of the index\". 
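(Purely for experimentation, a data set with that kind of shape is easy to construct -- the table and column names here are made up, not taken from anything in the patch:)\n\ncreate table skewed (a int4, b int4);\ninsert into skewed select 1, i from generate_series(1, 900000) i;\ninsert into skewed select 1000000 + i, i from generate_series(1, 100000) i;\ncreate index skewed_idx on skewed (a, b);\nvacuum analyze skewed;\n\n-- only \"b\" is constrained, so the leading column \"a\" has to be skipped\nselect count(*) from skewed where b = 42;\n\n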
For a query such as this, we \"require a\nfull index scan for the remaining 10% of the index\", which is\nsuboptimal, but doesn't fundamentally change anything (I guess that a\nskip scan is always suboptimal, in the sense that you could always do\nbetter by having more indexes).\n\n> Can one benefit from the extended statistics here?\n\nI really don't know. Certainly seems possible in cases with more than\none skipped leading column.\n\nThe main problem with the costing right now is that it's just not very\nwell thought through, in general. The performance at runtime depends\non the layout of values in the index itself, so the underlying way\nthat you'd model the costs doesn't have any great precedent in\ncostsize.c. We do have some idea of the number of leaf pages we'll\naccess in btcostestimate(), but that works in a way that isn't really\nfit for purpose. It kind of works with one primitive index scan, but\nworks much less well with multiple primitive scans.\n\n> And while we're at it, I think it would be great if the implementation will\n> allow some level of visibility about the skip scan. From what I see, currently\n> it's by design impossible for users to tell whether something was skipped or\n> not. But when it comes to planning and estimates, maybe it's not a bad idea to\n> let explain analyze show something like \"expected number of primitive scans /\n> actual number of primitive scans\".\n\nI agree. I think that that's pretty much mandatory for this patch. At\nleast the actual number of primitive scans should be exposed. Not\nquite as sure about showing the estimated number, since that might be\nembarrassingly wrong quite regularly, without it necessarily mattering\nthat much (I'd worry that it'd be distracting).\n\nDisplaying the number of primitive scans would already be useful for\nindex scans with SAOPs, even without this patch. The same general\nconcepts (estimated vs. actual primitive index scans) already exist,\nas of Postgres 17. That's really nothing new.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 3 Aug 2024 18:14:18 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Sat, Aug 3, 2024 at 6:14 PM Peter Geoghegan <[email protected]> wrote:\n> Displaying the number of primitive scans would already be useful for\n> index scans with SAOPs, even without this patch. The same general\n> concepts (estimated vs. actual primitive index scans) already exist,\n> as of Postgres 17. That's really nothing new.\n\nWe actually expose this via instrumentation, in a certain sense. This\nis documented by a \"Note\":\n\nhttps://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-ALL-INDEXES-VIEW\n\nThat is, we already say \"Each internal primitive index scan increments\npg_stat_all_indexes.idx_scan, so it's possible for the count of index\nscans to significantly exceed the total number of index scan executor\nnode executions\". 
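(So a crude way to count primitive index scans today is to sample that counter around the query you care about -- roughly like this, bearing in mind that the counters aren't updated instantaneously:)\n\nselect idx_scan from pg_stat_all_indexes where indexrelname = 'mdam_idx';\n-- run the query of interest, then check idx_scan again; the delta is the\n-- number of primitive index scans it performed\n\n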
So, as I said in the last email, advertising the\ndifference between # of primitive index scans and # of index scan\nexecutor node executions in EXPLAIN ANALYZE is already a good idea.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 3 Aug 2024 18:21:43 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Wed, Jul 24, 2024 at 5:14 PM Peter Geoghegan <[email protected]> wrote:\n> Attached is v4\n\nAttached is v5, which splits the code from the v4 patch into 2 pieces --\nit becomes 0002-* and 0003-*. Certain refactoring work now appears\nunder its own separate patch/commit -- see 0002-* (nothing new here,\nexcept the commit message/patch structure). The patch that actually\nadds skip scan (0003-* in this new version) has been further polished,\nthough not in a way that I think is interesting enough to go into\nhere.\n\nThe interesting and notable change for v5 is the addition of the code\nin 0001-*. The new 0001-* patch is concerned with certain aspects of\nhow _bt_advance_array_keys decides whether to start another primitive\nindex scan (or to stick with the ongoing one for one more leaf page\ninstead). This is a behavioral change, albeit a subtle one. It's also\nkinda independent of skip scan (more on why that is at the end).\n\nIt's easiest to explain why 0001-* matters by way of an example. My\nexample will show significantly more internal/root page accesses than\nseen on master, though only when 0002-* and 0003-* are applied, and\n0001-* is omitted. When all 3 v5 patches are applied together, the\ntotal number of index pages accessed by the test query will match the\nmaster branch. It's important that skip scan never loses by much to\nthe master branch, of course. Even when the details of the index/scan\nare inconvenient to the implementation, in whatever way.\n\nSetup:\n\ncreate table demo (a int4, b numeric);\ncreate index demo_idx on demo (a, b);\ninsert into demo select a, random() from generate_series(1, 10000) a,\ngenerate_series(1,5) five_rows_per_a_val;\nvacuum demo;\n\nWe now have a btree index \"demo_idx\", which has two levels (a root\npage plus a leaf level). The root page contains several hundred pivot\ntuples, all of which have their \"b\" value truncated away (or have the\nvalue -inf, if you prefer), with just one prefix \"a\" column left in\nplace. Naturally, every leaf page has a high key with its own\nseparator key that matches one particular tuple that appears in the\nroot page (except for the rightmost leaf page). So our leaf level scan\nwill see lots of truncated leaf page high keys (all matching a\ncorresponding root page tuple).\n\nTest query:\n\nselect a from demo where b > 0.99;\n\nThis is a query that really shouldn't be doing any skipping at all. We\nnevertheless still see a huge amount of skipping with this query, once\n0001-* is omitted. Prior to 0001-*, a new primitive index scan is\nstarted whenever the scan reaches a \"boundary\" between adjoining leaf\npages. That is, whenever _bt_advance_array_keys stopped on a high key\npstate.finaltup. So without the new 0001-* work, the number of page\naccesses almost doubles (because we access the root page once per leaf\npage accessed, instead of just accessing it once for the whole scan).\n\nWhat skip scan should have been doing all along (and will do now) is\nto step forward to the next right sibling leaf page whenever it\nreaches a boundary between leaf pages. 
This should happen again and\nagain, without our ever choosing to start a new primitive index scan\ninstead (it shouldn't happen even once with this query). In other\nwords, we ought to behave just like a full index scan would behave\nwith this query -- which is exactly what we get on master.\n\nThe scan will still nominally \"use skip scan\" even with this fix in\nplace, but in practice, for this particular query/index, the scan\nwon't ever actually decide to skip. So it at least \"looks like\" an\nindex scan from the point of view of EXPLAIN (ANALYZE, BUFFERS). There\nis a separate question of how many CPU cycles we use to do all this,\nbut for now my focus is on total pages accessed by the patch versus on\nmaster, especially for adversarial cases such as this.\n\nIt should be noted that the skip scan patch never had any problems\nwith this very similar query (same table as before):\n\nselect a from demo where b < 0.01;\n\nThe fact that we did the wrong thing for the first query, but the\nright thing for this second similar query, was solely due to certain\naccidental implementation details -- it had nothing to do with the\nfundamentals of the problem. You might even say that 0001-* makes the\noriginal \"b > 0.99\" case behave in the same manner as this similar \"b\n< 0.01\" case, which is justifiable on consistency grounds. Why\nwouldn't these two cases behave similarly? It's only logical.\n\nThe underlying problem arguably has little to do with skip scan;\nwhether we use a real SAOP array on \"a\" or a consed up skip array is\nincidental to the problem that my example highlights. As always, the\nunderlying \"array type\" (skip vs SOAP) only matters to the lowest\nlevel code. And so technically, this is an existing issue on\nHEAD/master. You can see that for yourself by making the problematic\nquery's qual \"where a = any ('every possible a value') and b > 0.99\"\n-- same problem on Postgres 17, without involving skip scan.\n\nTo be sure, the underlying problem does become more practically\nrelevant with the invention of skip arrays for skip scan, but 0001-*\ncan still be treated as independent work. It can be committed well\nahead of the other stuff IMV. The same is likely also true of the\nrefactoring now done in 0002-* -- it does refactoring that makes\nsense, even without skip scan. And so I don't expect it to take all\nthat long for it to be committable.\n\n--\nPeter Geoghegan", "msg_date": "Fri, 9 Aug 2024 17:13:28 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Mon, Jul 15, 2024 at 2:34 PM Peter Geoghegan <[email protected]> wrote:\n> On Fri, Jul 12, 2024 at 1:19 AM <[email protected]> wrote:\n> > I found the cost is estimated to much higher if the number of skipped attributes\n> > is more than two. Is it expected behavior?\n>\n> Yes and no.\n>\n> Honestly, the current costing is just placeholder code. It is totally\n> inadequate. I'm not surprised that you found problems with it. I just\n> didn't put much work into it, because I didn't really know what to do.\n\nAttached is v6, which finally does something sensible in btcostestimate.\n\nv6 is also the first version that supports parallel index scans that\ncan skip. This works by extending the approach taken by scans with\nregular SAOP arrays to work with skip arrays. 
We need to serialize and\ndeserialize the current array keys in shared memory, as datums -- we\ncannot just use simple BTArrayKeyInfo.cur_elem offsets with skip\narrays.\n\nv6 also includes the patch that shows \"Index Searches\" in EXPLAIN\nANALYZE output, just because it's convenient when testing the patch.\nThis has been independently submitted as\nhttps://commitfest.postgresql.org/49/5183/, so probably doesn't need\nreview here.\n\nv6 is the first version of the patch that is basically feature\ncomplete. I only have one big open item left: I must still fix certain\nregressions seen with queries that are very unfavorable for skip scan,\nwhere the CPU cost (but not I/O cost) of maintaining skip arrays slows\nthings down. Overall, I'm making fast progress here.\n\nBack to the topic of the btcostestimate/planner changes. The rest of\nthe email is a discussion of the cost model.\n\nThe planner changes probably still have some problems, but all of the\nobvious problems have been fixed by v6. I found it useful to focus on\nmaking the cost model not have any obvious problems instead of trying\nto make it match a purely theoretical ideal. For example, your\n(Ikeda-san's) complaint about the \"Index Scan using idx_id1_id2_id3 on\npublic.test\" test case having too high a cost (higher than the cost of\na slower sequential scan) has been fixed. It's now about 3x cheaper\nthan the sequential scan, since we're actually paying attention to\nndistinct in v6.\n\nJust like when we cost SAOP arrays on HEAD, skip arrays are costed by\npessimistically multiplying together the estimated number of array\nelements for all the scan's arrays, without trying to account for\ncorrelation between index columns. Being pessimistic about\ncorrelations like this is often wrong, but that still seems like the\nbest bias we could have, all things considered. Plus it's nothing new.\n\nRange style skip arrays require a slightly more complicated approach\nto estimating the number of array elements: costing applies a\nselectivity estimate, taken from the associated index column's\ninequality keys, and applies that estimate to ndistinct itself. That\nway the cost of a range skip array is lower than an\notherwise-equivalent simple skip array case (we prorate ndistinct with\nskip arrays). More importantly, the cost of more selectivity ranges is\nlower than the cost of less selective ranges. There is also a bias\nhere: we don't account for skew in ndistinct. That's probably OK,\nbecause at least it's a bias *against* skip scan.\n\nThe new cost model does not specifically try to account for how scans\nwill behave when no skipping should be expected at all -- cases where\na so-called \"skip scan\" degenerates into a full index scan. In theory,\nwe should be costing these scans the same as before, since there has\nbeen no change in runtime behavior. Overall, the cost of full index\nscans with very many distinct prefix column values goes down by quite\na bit -- the cost is something like 1/3 lower in typical cases.\n\nThe problem with preserving the cost model from HEAD for these\nunfavorable cases for skip scan is that I don't feel that I understand\nthe existing behavior. In practice the revised costing seems to be a\nsomewhat more accurate predictor of the actual runtime of queries.\nAnother problem is that I can't see a good way to make the behavior\ncontinuous when ndistinct starts small and grows so large that we\nshould expect a true full index scan. 
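\n\n(To put made-up numbers on the range skip array proration I described a few paragraphs back: for an index on (a, b) with a qual like \"where a between 1 and 50 and b = 7\", \"a\" gets a range skip array; if ndistinct for \"a\" is 1,000 and the between-qual's selectivity is 0.05, we charge for roughly 1,000 * 0.05 = 50 array elements, rather than the full 1,000 that a simple skip array on \"a\" would be charged for.)\n\n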
(As I mentioned at the start of\nthis email, there are unfixed regressions for these unfavorable cases,\nso I'm basing this analysis on the \"set skipscan_prefix_cols = 0\"\nbehavior rather than the current default patch behavior to correct for\nthat. This behavior matches HEAD with a full index scan, and should\nmatch the default behavior in a future version of the skip scan\npatch.)\n\n--\nPeter Geoghegan", "msg_date": "Wed, 4 Sep 2024 12:52:57 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "Hi,\n\nI started looking at this patch today. The first thing I usually do for\nnew patches is a stress test, so I did a simple script that generates\nrandom table and runs a random query with IN() clause with various\nconfigs (parallel query, index-only scans, ...). And it got stuck on a\nparallel query pretty quick.\n\nI've seen a bunch of those cases, so it's not a particularly unlikely\nissue. The backtraces look pretty much the same in all cases - the\nprocesses are stuck either waiting on the conditional variable in\n_bt_parallel_seize, or trying to send data in shm_mq_send_bytes.\n\nAttached is the script I use for stress testing (pretty dumb, just a\nbunch of loops generating tables + queries), and backtraces for two\nlockups (one is EXPLAIN ANALYZE, but otherwise exactly the same).\n\nI haven't investigated why this is happening, but I wonder if this might\nbe similar to the parallel hashjoin issues, with trying to send data,\nbut the receiver being unable to proceed and effectively working on the\nsender. But that's just a wild guess.\n\nregards\n\n-- \nTomas Vondra", "msg_date": "Sat, 7 Sep 2024 17:27:05 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Sat, Sep 7, 2024 at 11:27 AM Tomas Vondra <[email protected]> wrote:\n> I started looking at this patch today.\n\nThanks for taking a look!\n\n> The first thing I usually do for\n> new patches is a stress test, so I did a simple script that generates\n> random table and runs a random query with IN() clause with various\n> configs (parallel query, index-only scans, ...). And it got stuck on a\n> parallel query pretty quick.\n\nI can reproduce this locally, without too much difficulty.\nUnfortunately, this is a bug on master/Postgres 17. Some kind of issue\nin my commit 5bf748b8.\n\nThe timing of this is slightly unfortunate. There's only a few weeks\nuntil the release of 17, plus I have to travel for work over the next\nweek. I won't be back until the 16th, and will have limited\navailability between then and now. I think that I'll have ample time\nto debug and fix the issue ahead of the release of 17, though.\n\nLooks like the problem is a parallel index scan with SAOP array keys\ncan find itself in a state where every parallel worker waits for the\nleader to finish off a scheduled primitive index scan, while the\nleader itself waits for the scan's tuple queue to return more tuples.\nObviously, the query will effectively go to sleep indefinitely when\nthat happens (unless and until the DBA cancels the query). This is\nonly possible with just the right/wrong combination of array keys and\nindex cardinality.\n\nI cannot recreate the problem with parallel_leader_participation=off,\nwhich strongly suggests that leader participation is a factor. 
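\n\n(Concretely, in a session running your test queries, something like\n\nset parallel_leader_participation = off;\n\nmakes the hang stop appearing for me, while with the default of \"on\" it shows up before long.)\n\n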
I'll\nfind time to study this in detail as soon as I can.\n\nFurther background: I was always aware of the leader's tendency to go\naway forever shortly after the scan begins. That was supposed to be\nsafe, since we account for it by serializing the scan's current array\nkeys in shared memory, at the point a primitive index scan is\nscheduled -- any backend should be able to pick up where any other\nbackend left off, no matter how primitive scans are scheduled. That\nnow doesn't seem to be completely robust, likely due to restrictions\non when and how other backends can pick up the scheduled work from\nwithin _bt_first, at the point that it calls _bt_parallel_seize.\n\nIn short, one or two details of how backends call _bt_parallel_seize\nto pick up BTPARALLEL_NEED_PRIMSCAN work likely need to be rethought.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 9 Sep 2024 16:54:57 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Mon, 9 Sept 2024 at 21:55, Peter Geoghegan <[email protected]> wrote:\n>\n> On Sat, Sep 7, 2024 at 11:27 AM Tomas Vondra <[email protected]> wrote:\n> > I started looking at this patch today.\n>\n> Thanks for taking a look!\n>\n> > The first thing I usually do for\n> > new patches is a stress test, so I did a simple script that generates\n> > random table and runs a random query with IN() clause with various\n> > configs (parallel query, index-only scans, ...). And it got stuck on a\n> > parallel query pretty quick.\n>\n> I can reproduce this locally, without too much difficulty.\n> Unfortunately, this is a bug on master/Postgres 17. Some kind of issue\n> in my commit 5bf748b8.\n[...]\n> In short, one or two details of how backends call _bt_parallel_seize\n> to pick up BTPARALLEL_NEED_PRIMSCAN work likely need to be rethought.\n\nThanks to Peter for the description, that helped me debug the issue. I\nthink I found a fix for the issue: regression tests for 811af978\nconsistently got stuck on my macbook before the attached patch 0001,\nafter applying that this patch they completed just fine.\n\nThe issue to me seems to be the following:\n\nOnly _bt_first can start a new primitive scan, so _bt_parallel_seize\nonly assigns a new primscan if the process is indeed in _bt_first (as\nprovided with _b_p_s(first=true)). 
All other backends that hit a\nNEED_PRIMSCAN state will currently pause until a backend in _bt_first\ndoes the next primitive scan.\n\nA backend that hasn't requested the next primitive scan will likely\nhit _bt_parallel_seize from code other than _bt_first, thus pausing.\nIf this is the leader process, it'll stop consuming tuples from\nfollower processes.\n\nIf the follower process finds a new primitive scan is required after\nfinishing reading results from a page, it will first request a new\nprimitive scan, and only then start producing the tuples.\n\nAs such, we can have a follower process that just finished reading a\npage, had issued a new primitive scan, and now tries to send tuples to\nits primary process before getting back to _bt_first, but its\nprimary process won't acknowledge any tuples because it's waiting for\nthat process to start the next primitive scan - now we're deadlocked.\n\n---\n\nThe fix in 0001 is relatively simple: we stop backends from waiting\nfor a concurrent backend to resolve the NEED_PRIMSCAN condition, and\ninstead move our local state machine so that we'll hit _bt_first\nourselves, so that we may be able to start the next primitive scan.\nAlso attached is 0002, which adds tracking of responsible backends to\nparallel btree scans, thus allowing us to assert we're never waiting\nfor our own process to move the state forward. I found this patch\nhelpful while working on solving this issue, even if it wouldn't have\nfound the bug as reported.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Thu, 12 Sep 2024 15:49:24 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On 9/12/24 16:49, Matthias van de Meent wrote:\n> On Mon, 9 Sept 2024 at 21:55, Peter Geoghegan <[email protected]> wrote:\n>>\n> ...\n> \n> The fix in 0001 is relatively simple: we stop backends from waiting\n> for a concurrent backend to resolve the NEED_PRIMSCAN condition, and\n> instead move our local state machine so that we'll hit _bt_first\n> ourselves, so that we may be able to start the next primitive scan.\n> Also attached is 0002, which adds tracking of responsible backends to\n> parallel btree scans, thus allowing us to assert we're never waiting\n> for our own process to move the state forward. I found this patch\n> helpful while working on solving this issue, even if it wouldn't have\n> found the bug as reported.\n> \n\nNo opinion on the analysis / coding, but per my testing the fix indeed\naddresses the issue. The script reliably got stuck within a minute, now\nit's running for ~1h just fine. It also checks results, and that seems\nfine too.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Sat, 14 Sep 2024 14:23:50 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Thu, Sep 12, 2024 at 10:49 AM Matthias van de Meent\n<[email protected]> wrote:\n> Thanks to Peter for the description, that helped me debug the issue. 
I\n> think I found a fix for the issue: regression tests for 811af978\n> consistently got stuck on my macbook before the attached patch 0001,\n> after applying that this patch they completed just fine.\n\nThanks for taking a look at it.\n\n> The fix in 0001 is relatively simple: we stop backends from waiting\n> for a concurrent backend to resolve the NEED_PRIMSCAN condition, and\n> instead move our local state machine so that we'll hit _bt_first\n> ourselves, so that we may be able to start the next primitive scan.\n\nI agree with your approach, but I'm concerned about it causing\nconfusion inside _bt_parallel_done. And so I attach a v2 revision of\nyour bug fix. v2 adds a check that nails that down, too. I'm not 100%\nsure if the change to _bt_parallel_done becomes strictly necessary, to\nmake the basic fix robust, but it's a good idea either way. In fact, it\nseemed like a good idea even before this bug came to light: it was\nalready clear that this was strictly necessary for the skip scan\npatch. And for reasons that really have nothing to do with the\nrequirements for skip scan (it's related to how we call\n_bt_parallel_done without much care in code paths from the original\nparallel index scan commit).\n\nMore details on changes in v2 that didn't appear in Matthias' v1:\n\nv2 makes _bt_parallel_done do nothing at all when the backend-local\nso->needPrimScan flag is set (regardless of whether it has been set by\n_bt_parallel_seize or by _bt_advance_array_keys). This is a bit like\nthe approach taken before the Postgres 17 work went in:\n_bt_parallel_done used to only permit the shared btps_pageStatus state\nto become BTPARALLEL_DONE when it found that \"so->arrayKeyCount >=\nbtscan->btps_arrayKeyCount\" (else the call was a no-op). With this\nextra hardening, _bt_parallel_done will only permit setting BTPARALLEL_DONE when\n\"!so->needPrimScan\". Same idea, more or less.\n\nv2 also changes comments in _bt_parallel_seize. The comment tweaks\nsuggest that the new \"if (!first && status ==\nBTPARALLEL_NEED_PRIMSCAN) return false\" path is similar to the\nexisting master branch \"if (!first && so->needPrimScan) return false\"\nprecheck logic on master (the precheck that takes place before\nexamining any state in shared memory). The new path can be thought of\nas dealing with cases where the backend-local so->needPrimScan flag\nmust have been stale back when it was prechecked -- it's essentially the same\nlogic, though unlike the precheck it works against the authoritative\nshared memory state.\n\nMy current plan is to commit something like this in the next day or two.\n\n--\nPeter Geoghegan", "msg_date": "Mon, 16 Sep 2024 15:13:47 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "Hi,\n\nI've been looking at this patch over the couple last days, mostly doing\nsome stress testing / benchmarking (hence the earlier report) and basic\nreview. I do have some initial review comments, and the testing produced\nsome interesting regressions (not sure if those are the cases where\nskipscan can't really help, that Peter mentioned he needs to look into).\n\n\nreview\n------\n\nFirst, the review comments - nothing particularly serious, mostly just\ncosmetic stuff:\n\n1) v6-0001-Show-index-search-count-in-EXPLAIN-ANALYZE.patch\n\n- I find the places that increment \"nsearches\" a bit random. 
Each AM\ndoes it in entirely different place (at least it seems like that to me).\nIs there a way make this a bit more consistent?\n\n- I find this comment rather unhelpful:\n\n uint64 btps_nsearches; /* instrumentation */\n\nInstrumentation what? What's the counter for?\n\n- I see _bt_first moved the pgstat_count_index_scan, but doesn't that\nmean we skip it if the earlier code does \"goto readcomplete\"? Shouldn't\nthat still count as an index scan?\n\n- show_indexscan_nsearches does this:\n\n if (scanDesc && scanDesc->nsearches > 0)\n ExplainPropertyUInteger(\"Index Searches\", NULL,\n scanDesc->nsearches, es);\n\nBut shouldn't it divide the count by nloops, similar to (for example)\nshow_instrumentation_count?\n\n\n2) v6-0002-Normalize-nbtree-truncated-high-key-array-behavio.patch\n\n- Admittedly very subjective, but I find the \"oppoDirCheck\" abbreviation\nrather weird, I'd just call it \"oppositeDirCheck\".\n\n\n3) v6-0003-Refactor-handling-of-nbtree-array-redundancies.patch\n\n- nothing\n\n\n4) v6-0004-Add-skip-scan-to-nbtree.patch\n\n- indices.sgml seems to hahve typo \"Intevening\" -> \"Intervening\"\n\n- It doesn't seem like a good idea to remove the paragraph about\nmulticolumn indexes and replace it with just:\n\n Multicolumn indexes should be used judiciously.\n\nI mean, what does judiciously even mean? what should the user consider\nto be judicious? Seems rather unclear to me. Admittedly, the old text\nwas not much helpful, but at least it gave some advice.\n\nBut maybe more importantly, doesn't skipscan apply only to a rather\nlimited subset of data types (that support increment/decrement)? Doesn't\nthe new wording mostly ignore that, implying skipscan applies to all\nbtree indexes? I don't think it mentions datatypes anywhere, but there\nare many indexes on data types like text, UUID and so on.\n\n- Very subjective nitpicking, but I find it a bit strange when a comment\nabout a block is nested in the block, like in _bt_first() for the\narray->null_elem check.\n\n- assignProcTypes() claims providing skipscan for cross-type scenarios\ndoesn't make sense. Why is that? I'm not saying the claim is wrong, but\nit's not clear to me why would that be the case.\n\n\ncosting\n-------\n\nPeter asked me to look at the costing, and I think it looks generally\nsensible. We don't really have a lot of information to base the costing\non in the first place - the whole point of skipscan is about multicolumn\nindexes, but none of the existing extended statistic seems very useful.\nWe'd need some cross-column correlation info, or something like that.\n\nIt's an interesting question - if we could collect some new statistics\nfor multicolumn indexes (say, by having a way to collect AM-specific\nstats), what would we collect for skipscan?\n\nThere's one thing that I don't quite understand, and that's how\nbtcost_correlation() adjusts correlation for multicolumn indexes:\n\n if (index->nkeycolumns > 1)\n indexCorrelation = varCorrelation * 0.75;\n\nThat seems fine for a two-column index, I guess. But shouldn't it\ncompound for indexes with more keys? I mean, 0.75 * 0.75 for third\ncolumn, etc? I don't think btcostestimate() does that, it just remembers\nwhatever btcost_correlation() returns.\n\nAnyway, the overall costing approach seems sensible, I think. 
It assumes\nthings we assume in general (columns/keys are considered independent),\nwhich may be problematic, but this is the best we can do.\n\nThe only alternative approach I can think of is not to adjust the\ncosting for the index scan at all, and only use this to enable (or not\nenable) the skipscan internally. That would mean the overall plan\nremains the same, and maybe sometimes we would think an index scan would\nbe too expensive and use something else. Not great, but it doesn't have\nthe risk of regressions - IIUC we can disable the skipscan at runtime,\nif we realize it's not really helpful.\n\nIf we're concerned about regressions, I think this would be the way to\ndeal with them. Or at least it's the best idea I have.\n\n\ntesting\n-------\n\nAs usual, I wrote a bash script to do a bit of stress testing. It\ngenerates tables with random data, and then runs random queries with\nrandom predicates on them, while mutating a couple parameters (like\nnumber of workers) to trigger different plans. It does that on 16,\nmaster and with the skipscan patch (with the fix for parallel scans).\n\nI've uploaded the script and results from the last run here:\n\n https://github.com/tvondra/pg-skip-scan-tests\n\nThere's the \"run-mdam.sh\" script that generates tables/queries, runs\nthem, collects all kinds of info about the query, and produces files\nwith explain plans, CSV with timings, etc.\n\nNot all of the queries end up using index scans - depending on the\npredicates, etc. it might have to use seqscan. Or maybe it only uses\nindex scan because it's forced to by the enable_* options, etc.\n\nAnyway, I ran a couple thousand such queries, and I haven't found any\nincorrect results (the script compares that between versions too). So\nthat's good ;-)\n\nBut my main goal was to see how this affects performance. The tables\nwere pretty small (just 1M rows, maybe ~80MB), but with restarts and\ndropping caches, large enough to test this.\n\nAnd generally the results seem good. You can either inspect the CSV with\nraw results (look at the script to undestand what the fields are), or\ncheck the attached PDF with a pivot table summarizing them.\n\nAs usual, there's a heatmap on the right side, comparing the results for\ndifferent versions (first \"master/16\" and then \"skipscan/master\"). Green\nmeans \"speedup/good\" and red meand \"regression/bad\".\n\nMost of the places are \"white\" (no change) or not very far from it, or\nperhaps \"green\". But there's also a bunch of red results, which means\nregression (FWIW the PDF is filtered only to queries that would actually\nuse the executed plans without the GUCs).\n\nSome of the red placees are for very short queries - just a couple ms,\nwhich means it can easily be random noise, or something like that. But a\ncouple queries are much longer, and might deserve some investigation.\nThe easiest way is to look at the \"QID\" column in the row, which\nidentifies the query in the \"query\" CSV. Then look into the results CSV\nfor IDs of the runs (in the first \"SEQ\" column), and find the details in\nthe \"analyze\" log, which has all the plans etc.\n\nAlternatively, use the .ods in the git repository, which allows drill\ndown to results (from the pivot tables).\n\nFor example, one of the slowed down queries is query 702 (top of page 8\nin the PDF). 
The query is pretty simple:\n\n explain (analyze, timing off, buffers off)\n select id1,id2 from t_1000000_1000_1_2\n where NOT (id1 in (:list)) AND (id2 = :value);\n\nand it was executed on a table with random data in two columns, each\nwith 1000 distinct values. This is perfectly random data, so a great\nmatch for the assumptions in costing etc.\n\nBut with uncached data, this runs in ~50 ms on master, but takes almost\n200 ms with skipscan (these timings are from my laptop, but similar to\nthe results).\n\n-- master\n Index Only Scan using t_1000000_1000_1_2_id1_id2_idx on\nt_1000000_1000_1_2 (cost=0.96..20003.96 rows=1719 width=16)\n (actual rows=811 loops=1)\n Index Cond: (id2 = 997)\n Filter: (id1 <> ALL ('{983,...,640}'::bigint[]))\n Rows Removed by Filter: 163\n Heap Fetches: 0\n Planning Time: 7.596 ms\n Execution Time: 28.851 ms\n(7 rows)\n\n\n-- with skipscan\n Index Only Scan using t_1000000_1000_1_2_id1_id2_idx on\nt_1000000_1000_1_2 (cost=0.96..983.26 rows=1719 width=16)\n (actual rows=811 loops=1)\n Index Cond: (id2 = 997)\n Index Searches: 1007\n Filter: (id1 <> ALL ('{983,...,640}'::bigint[]))\n Rows Removed by Filter: 163\n Heap Fetches: 0\n Planning Time: 3.730 ms\n Execution Time: 238.554 ms\n(8 rows)\n\nI haven't looked into why this is happening, but this seems like a\npretty good match for skipscan (on the first column). And for the\ncosting too - it's perfectly random data, no correllation, etc.\n\n\n\nregards\n\n-- \nTomas Vondra", "msg_date": "Tue, 17 Sep 2024 00:05:47 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Mon, Sep 16, 2024 at 3:13 PM Peter Geoghegan <[email protected]> wrote:\n> I agree with your approach, but I'm concerned about it causing\n> confusion inside _bt_parallel_done. And so I attach a v2 revision of\n> your bug fix. v2 adds a check that nails that down, too.\n\nPushed this just now.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 17 Sep 2024 11:10:58 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Wed, Sep 4, 2024 at 12:52 PM Peter Geoghegan <[email protected]> wrote:\n> Attached is v6, which finally does something sensible in btcostestimate.\n\nAttached is v7, which is just to fix bitrot, and keep CFBot happy. No\nreal changes here.\n\nI will work through Tomas' recent feedback in the next few days.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 17 Sep 2024 11:49:11 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Mon, Sep 16, 2024 at 6:05 PM Tomas Vondra <[email protected]> wrote:\n> I've been looking at this patch over the couple last days, mostly doing\n> some stress testing / benchmarking (hence the earlier report) and basic\n> review.\n\nThanks for taking a look! Very helpful.\n\n> I do have some initial review comments, and the testing produced\n> some interesting regressions (not sure if those are the cases where\n> skipscan can't really help, that Peter mentioned he needs to look into).\n\nThe one type of query that's clearly regressed in a way that's just\nnot acceptable are queries where we waste CPU cycles during scans\nwhere it's truly hopeless. 
For example, I see a big regression on one\nof the best cases for the Postgres 17 work, described here:\n\nhttps://pganalyze.com/blog/5mins-postgres-17-faster-btree-index-scans#a-practical-example-3x-performance-improvement\n\nNotably, these cases access exactly the same buffers/pages as before,\nso this really isn't a matter of \"doing too much skipping\". The number\nof buffers hit exactly matches what you'll see on Postgres 17. It's\njust that we waste too many CPU cycles in code such as\n_bt_advance_array_keys, to uselessly maintain skip arrays.\n\nI'm not suggesting that there won't be any gray area with these\nregressions -- nothing like this will ever be that simple. But it\nseems to me like I should go fix these obviously-not-okay cases next,\nand then see where that leaves everything else, regressions-wise. That\nseems likely to be the most efficient way of dealing with the\nregressions. So I'll start there.\n\nThat said, I *would* be surprised if you found a regression in any\nquery that simply didn't receive any new scan key transformations in\nnew preprocessing code in places like _bt_decide_skipatts and\n_bt_skip_preproc_shrink. I see that many of the queries that you're\nusing for your stress-tests \"aren't really testing skip scan\", in this\nsense. But I'm hardly about to tell you that you shouldn't spend time\non such queries -- that approach just discovered a bug affecting\nPostgres 17 (that was also surprising, but it still happened!). My\npoint is that it's worth being aware of which test queries actually\nuse skip arrays in the first place -- it might help you with your\ntesting. There are essentially no changes to _bt_advance_array_keys\nthat'll affect traditional SAOP arrays (with the sole exception of\nchanges made by\nv6-0003-Refactor-handling-of-nbtree-array-redundancies.patch, which\naffect every kind of array in the same way).\n\n> 1) v6-0001-Show-index-search-count-in-EXPLAIN-ANALYZE.patch\n>\n> - I find the places that increment \"nsearches\" a bit random. Each AM\n> does it in entirely different place (at least it seems like that to me).\n> Is there a way make this a bit more consistent?\n\n From a mechanical perspective there is nothing at all random about it:\nwe do this at precisely the same point that we currently call\npgstat_count_index_scan, which in each index AM maps to one descent of\nthe index. It is at least consistent. Whenever a B-Tree index scan\nshows \"Index Scans: N\", you'll see precisely the same number by\nswapping it with an equivalent contrib/btree_gist-based GiST index and\nrunning the same query again (assuming that index tuples that match\nthe array keys are spread apart in both the B-Tree and GiST indexes).\n\n(Though I see problems with the precise place that nbtree calls\npgstat_count_index_scan right now, at least in certain edge-cases,\nwhich I discuss below in response to your questions about that.)\n\n> uint64 btps_nsearches; /* instrumentation */\n>\n> Instrumentation what? What's the counter for?\n\nWill fix.\n\nIn case you missed it, there is another thread + CF Entry dedicated to\ndiscussing this instrumentation patch:\n\nhttps://commitfest.postgresql.org/49/5183/\nhttps://www.postgresql.org/message-id/flat/CAH2-WzkRqvaqR2CTNqTZP0z6FuL4-3ED6eQB0yx38XBNj1v-4Q@mail.gmail.com\n\n> - I see _bt_first moved the pgstat_count_index_scan, but doesn't that\n> mean we skip it if the earlier code does \"goto readcomplete\"? 
Shouldn't\n> that still count as an index scan?\n\nIn my opinion, no, it should not.\n\nWe're counting the number of times we'll have descended the tree using\n_bt_search (or using _bt_endpoint, perhaps), which is a precisely\ndefined physical cost. A little like counting the number of buffers\naccessed. I actually think that this aspect of how we call\npgstat_count_index_scan is a bug that should be fixed, with the fix\nbackpatched to Postgres 17. Right now, we see completely different\ncounts for a parallel index scan, compared to an equivalent serial\nindex scan -- differences that cannot be explained as minor\ndifferences caused by parallel scan implementation details. I think\nthat it's just wrong right now, on master, since we're simply not\ncounting the thing that we're supposed to be counting (not reliably,\nnot if it's a parallel index scan).\n\n> - show_indexscan_nsearches does this:\n>\n> if (scanDesc && scanDesc->nsearches > 0)\n> ExplainPropertyUInteger(\"Index Searches\", NULL,\n> scanDesc->nsearches, es);\n>\n> But shouldn't it divide the count by nloops, similar to (for example)\n> show_instrumentation_count?\n\nI can see arguments for and against doing it that way. It's\nambiguous/subjective, but on balance I favor not dividing by nloops.\nYou can make a similar argument for doing this with \"Buffers: \", and\nyet we don't divide by nloops there, either.\n\nHonestly, I just want to find a way to do this that everybody can live\nwith. Better documentation could help here.\n\n> 2) v6-0002-Normalize-nbtree-truncated-high-key-array-behavio.patch\n>\n> - Admittedly very subjective, but I find the \"oppoDirCheck\" abbreviation\n> rather weird, I'd just call it \"oppositeDirCheck\".\n\nWill fix.\n\n> 3) v6-0003-Refactor-handling-of-nbtree-array-redundancies.patch\n>\n> - nothing\n\nGreat. I think that I should be able to commit this one soon, since\nit's independently useful work.\n\n> 4) v6-0004-Add-skip-scan-to-nbtree.patch\n>\n> - indices.sgml seems to hahve typo \"Intevening\" -> \"Intervening\"\n>\n> - It doesn't seem like a good idea to remove the paragraph about\n> multicolumn indexes and replace it with just:\n>\n> Multicolumn indexes should be used judiciously.\n>\n> I mean, what does judiciously even mean? what should the user consider\n> to be judicious? Seems rather unclear to me. Admittedly, the old text\n> was not much helpful, but at least it gave some advice.\n\nYeah, this definitely needs more work.\n\n> But maybe more importantly, doesn't skipscan apply only to a rather\n> limited subset of data types (that support increment/decrement)? Doesn't\n> the new wording mostly ignore that, implying skipscan applies to all\n> btree indexes? I don't think it mentions datatypes anywhere, but there\n> are many indexes on data types like text, UUID and so on.\n\nActually, no, skip scan works in almost the same way with all data\ntypes. Earlier versions of the patch didn't support every data type\n(perhaps I should have waited for that before posting my v1), but the\nversion of the patch you looked at has no restrictions on any data\ntype.\n\nYou must be thinking of whether or not an opclass has skip support.\nThat's just an extra optimization, which can be used for a small\nhandful of discrete data types such as integer and date (hard to\nimagine how skip support could ever be implemented for types like\nnumeric and text). 
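\n\n(Concretely: for an index on (a date, b int4) and a query that only has a qual on \"b\", skip support means that once the scan is done with a = '2024-05-01' it can construct the next array element, '2024-05-02', directly, because the opclass knows how to increment a date. If \"a\" were numeric or text there'd be no successor to compute, so the scan instead has to go find whatever the next distinct \"a\" value happens to be, paying for the extra \"next key probes\" I mention below.)\n\n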
There is a temporary testing GUC that will allow\nyou to get a sense of how much skip support can help: try \"set\nskipscan_skipsupport_enabled=off\" with (say) my original MDAM test\nquery to get a sense of that. You'll see more buffer hits needed for\n\"next key probes\", though not dramatically more.\n\nIt's worth having skip support (the idea comes from the MDAM paper),\nbut it's not essential. Whether or not an opclass has skip support\nisn't accounted for by the cost model, but I doubt that it's worth\naddressing (the cost model is already pessimistic).\n\n> - Very subjective nitpicking, but I find it a bit strange when a comment\n> about a block is nested in the block, like in _bt_first() for the\n> array->null_elem check.\n\nWill fix.\n\n> - assignProcTypes() claims providing skipscan for cross-type scenarios\n> doesn't make sense. Why is that? I'm not saying the claim is wrong, but\n> it's not clear to me why would that be the case.\n\nIt is just talking about the support function that skip scan can\noptionally use, where it makes sense (skip support functions). The\nrelevant \"else if (member->number == BTSKIPSUPPORT_PROC)\" stanza is\nlargely copied from the existing nearby \"else if (member->number ==\nBTEQUALIMAGE_PROC)\" stanza that was added for B-Tree deduplication. In\nboth stanzas we're talking about a capability that maps to a\nparticular \"input opclass\", which means the opclass that maps to the\ndatums that are stored on disk, in index tuples.\n\nThere are no restrictions on the use of skip scan with queries that\nhappen to involve the use of cross-type operators. It doesn't even\nmatter if we happen to be using an incomplete opfamily, since range\nskip arrays never need to *directly* take the current array element\nfrom a lower/upper bound inequality scan key's argument. It all\nhappens indirectly: code in places like _bt_first and _bt_checkkeys\ncan use inequalities (which are stored in BTArrayKeyInfo.low_compare\nand BTArrayKeyInfo.high_compare) to locate the next matching on-disk\nindex tuple that satisfies the inequality in question. Obviously, the\nlocated datum must be the same type as the one used by the array and\nits scan key (it has to be the input opclass type if it's taken from\nan index tuple).\n\nI think that it's a bit silly that nbtree generally bends over\nbackwards to find a way to execute a scan, given an incomplete\nopfamily; in a green field situation it would make sense to just throw\nan error instead. Even still, skip scan works in a way that is\nmaximally forgiving when incomplete opfamilies are used. Admittedly,\nit is just about possible to come up with a scenario where we'll now\nthrow an error for a query that would have worked on Postgres 17. But\nthat's no different to what would happen if the query had an explicit\n\"= any( )\" non-cross-type array instead of an implicit non-cross-type\nskip array. The real problem in these scenarios is the lack of a\nsuitable cross-type ORDER proc (for a cross-type-operator query)\nwithin _bt_first -- not the lack of cross-type operators. 
This issue\nwith missing ORDER procs just doesn't seem worth worrying about,\nsince, as I said, even slightly different queries (that don't use skip\nscan) are bound to throw the same errors either way.\n\n> Peter asked me to look at the costing, and I think it looks generally\n> sensible.\n\nI'm glad that you think that I basically have the right idea here.\nHard to know how to approach something like this, which doesn't have\nany kind of precedent to draw on.\n\n> We don't really have a lot of information to base the costing\n> on in the first place - the whole point of skipscan is about multicolumn\n> indexes, but none of the existing extended statistic seems very useful.\n> We'd need some cross-column correlation info, or something like that.\n\nMaybe, but that would just mean that we'd sometimes be more optimistic\nabout skip scan helping than we are with the current approach of\npessimistically assuming that there is no correlation at all. Not\nclear that being pessimistic in this sense isn't the right thing to\ndo, despite the fact that it's clearly less accurate on average.\n\n> There's one thing that I don't quite understand, and that's how\n> btcost_correlation() adjusts correlation for multicolumn indexes:\n>\n> if (index->nkeycolumns > 1)\n> indexCorrelation = varCorrelation * 0.75;\n>\n> That seems fine for a two-column index, I guess. But shouldn't it\n> compound for indexes with more keys? I mean, 0.75 * 0.75 for third\n> column, etc? I don't think btcostestimate() does that, it just remembers\n> whatever btcost_correlation() returns.\n\nI don't know either. In general I'm out of my comfort zone here.\n\n> The only alternative approach I can think of is not to adjust the\n> costing for the index scan at all, and only use this to enable (or not\n> enable) the skipscan internally. That would mean the overall plan\n> remains the same, and maybe sometimes we would think an index scan would\n> be too expensive and use something else. Not great, but it doesn't have\n> the risk of regressions - IIUC we can disable the skipscan at runtime,\n> if we realize it's not really helpful.\n\nIn general I would greatly prefer to not have a distinct kind of index\npath for scans that use skip scan. I'm quite keen on a design that\nallows the scan to adapt to unpredictable conditions at runtime.\n\nOf course, that doesn't preclude passing the index scan a hint about\nwhat's likely to work at runtime, based on information figured out\nwhen costing the scan. Perhaps that will prove necessary to avoid\nregressing index scans that are naturally quite cheap already -- scans\nwhere we really need to have the right general idea from the start to\navoid any regressions. I'm not opposed to that, provided the index\nscan has the ability to change its mind when (for whatever reason) the\nguidance from the optimizer turns out to be wrong.\n\n> As usual, I wrote a bash script to do a bit of stress testing. It\n> generates tables with random data, and then runs random queries with\n> random predicates on them, while mutating a couple parameters (like\n> number of workers) to trigger different plans. It does that on 16,\n> master and with the skipscan patch (with the fix for parallel scans).\n\nI wonder if some of the regressions you see can be tied to the use of\nan LWLock in place of the existing use of a spin lock. I did that\nbecause I sometimes need to allocate memory to deserialize the array\nkeys, with the exclusive lock held. 
It might be the case that a lot of\nthese regressions are tied to that, or something else that is far from\nobvious...have to investigate.\n\nIn general, I haven't done much on parallel index scans here (I only\nadded support for them very recently), whereas your testing places a\nlot of emphasis on parallel scans. Nothing wrong with that emphasis\n(it caught that 17 bug), but just want to put it in context.\n\n> I've uploaded the script and results from the last run here:\n>\n> https://github.com/tvondra/pg-skip-scan-tests\n>\n> There's the \"run-mdam.sh\" script that generates tables/queries, runs\n> them, collects all kinds of info about the query, and produces files\n> with explain plans, CSV with timings, etc.\n\nIt'll take me a while to investigate all this data.\n\n> Anyway, I ran a couple thousand such queries, and I haven't found any\n> incorrect results (the script compares that between versions too). So\n> that's good ;-)\n\nThat's good!\n\n> But my main goal was to see how this affects performance. The tables\n> were pretty small (just 1M rows, maybe ~80MB), but with restarts and\n> dropping caches, large enough to test this.\n\nThe really compelling cases all tend to involve fairly selective index\nscans. Obviously, skip scan can only save work by navigating the index\nstructure more efficiently (unlike loose index scan). So if the main\ncost is inherently bound to be the cost of heap accesses, then we\nshouldn't expect a big speed up.\n\n> For example, one of the slowed down queries is query 702 (top of page 8\n> in the PDF). The query is pretty simple:\n>\n> explain (analyze, timing off, buffers off)\n> select id1,id2 from t_1000000_1000_1_2\n> where NOT (id1 in (:list)) AND (id2 = :value);\n>\n> and it was executed on a table with random data in two columns, each\n> with 1000 distinct values. This is perfectly random data, so a great\n> match for the assumptions in costing etc.\n>\n> But with uncached data, this runs in ~50 ms on master, but takes almost\n> 200 ms with skipscan (these timings are from my laptop, but similar to\n> the results).\n\nI'll need to investigate this specifically. That does seem odd.\n\nFWIW, it's a pity that the patch doesn't know how to push down the NOT\nIN () here. The MDAM paper contemplates such a scheme. We see the use\nof filter quals here, when in principle this could work by using a\nskip array that doesn't generate elements that appear in the NOT IN()\nlist (it'd generate every possible indexable value *except* the given\nlist/array values). The only reason that I haven't implemented this\nyet is because I'm not at all sure how to make it work on the\noptimizer side. The nbtree side of the implementation will probably be\nquite straightforward, since it's really just a slight variant of a\nskip array, that excludes certain values.\n\n> -- with skipscan\n> Index Only Scan using t_1000000_1000_1_2_id1_id2_idx on\n> t_1000000_1000_1_2 (cost=0.96..983.26 rows=1719 width=16)\n> (actual rows=811 loops=1)\n> Index Cond: (id2 = 997)\n> Index Searches: 1007\n> Filter: (id1 <> ALL ('{983,...,640}'::bigint[]))\n> Rows Removed by Filter: 163\n> Heap Fetches: 0\n> Planning Time: 3.730 ms\n> Execution Time: 238.554 ms\n> (8 rows)\n>\n> I haven't looked into why this is happening, but this seems like a\n> pretty good match for skipscan (on the first column). And for the\n> costing too - it's perfectly random data, no correllation, etc.\n\nI wonder what \"Buffers: N\" shows? 
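\n\n(Your runs used \"buffers off\" -- repeating the same query as\n\nexplain (analyze, buffers, timing off)\nselect id1,id2 from t_1000000_1000_1_2\nwhere NOT (id1 in (:list)) AND (id2 = :value);\n\non both master and the patched build would show whether the extra time comes with extra page accesses, or is pure CPU overhead.)\n\n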
That's usually the first thing I\nlook at (that and \"Index Searches\", which looks like what you said it\nshould look like here). But, yeah, let me get back to you on this.\n\nThanks again!\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 17 Sep 2024 18:14:46 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On 9/18/24 00:14, Peter Geoghegan wrote:\n> On Mon, Sep 16, 2024 at 6:05 PM Tomas Vondra <[email protected]> wrote:\n>> I've been looking at this patch over the couple last days, mostly doing\n>> some stress testing / benchmarking (hence the earlier report) and basic\n>> review.\n> \n> Thanks for taking a look! Very helpful.\n> \n>> I do have some initial review comments, and the testing produced\n>> some interesting regressions (not sure if those are the cases where\n>> skipscan can't really help, that Peter mentioned he needs to look into).\n> \n> The one type of query that's clearly regressed in a way that's just\n> not acceptable are queries where we waste CPU cycles during scans\n> where it's truly hopeless. For example, I see a big regression on one\n> of the best cases for the Postgres 17 work, described here:\n> \n> https://pganalyze.com/blog/5mins-postgres-17-faster-btree-index-scans#a-practical-example-3x-performance-improvement\n> \n> Notably, these cases access exactly the same buffers/pages as before,\n> so this really isn't a matter of \"doing too much skipping\". The number\n> of buffers hit exactly matches what you'll see on Postgres 17. It's\n> just that we waste too many CPU cycles in code such as\n> _bt_advance_array_keys, to uselessly maintain skip arrays.\n> \n> I'm not suggesting that there won't be any gray area with these\n> regressions -- nothing like this will ever be that simple. But it\n> seems to me like I should go fix these obviously-not-okay cases next,\n> and then see where that leaves everything else, regressions-wise. That\n> seems likely to be the most efficient way of dealing with the\n> regressions. So I'll start there.\n> \n> That said, I *would* be surprised if you found a regression in any\n> query that simply didn't receive any new scan key transformations in\n> new preprocessing code in places like _bt_decide_skipatts and\n> _bt_skip_preproc_shrink. I see that many of the queries that you're\n> using for your stress-tests \"aren't really testing skip scan\", in this\n> sense. But I'm hardly about to tell you that you shouldn't spend time\n> on such queries -- that approach just discovered a bug affecting\n> Postgres 17 (that was also surprising, but it still happened!). My\n> point is that it's worth being aware of which test queries actually\n> use skip arrays in the first place -- it might help you with your\n> testing. There are essentially no changes to _bt_advance_array_keys\n> that'll affect traditional SAOP arrays (with the sole exception of\n> changes made by\n> v6-0003-Refactor-handling-of-nbtree-array-redundancies.patch, which\n> affect every kind of array in the same way).\n> \n\nMakes sense. I started with the testing before before even looking at\nthe code, so it's mostly a \"black box\" approach. 
I did read the 1995\npaper before that, and the script generates queries with clauses\ninspired by that paper, in particular:\n\n- col = $value\n- col IN ($values)\n- col BETWEEN $value AND $value\n- NOT (clause)\n- clause [AND|OR] clause\n\nThere certainly may be gaps and interesting cases the script does not\ncover. Something to improve.\n\n>> 1) v6-0001-Show-index-search-count-in-EXPLAIN-ANALYZE.patch\n>>\n>> - I find the places that increment \"nsearches\" a bit random. Each AM\n>> does it in entirely different place (at least it seems like that to me).\n>> Is there a way make this a bit more consistent?\n> \n> From a mechanical perspective there is nothing at all random about it:\n> we do this at precisely the same point that we currently call\n> pgstat_count_index_scan, which in each index AM maps to one descent of\n> the index. It is at least consistent. Whenever a B-Tree index scan\n> shows \"Index Scans: N\", you'll see precisely the same number by\n> swapping it with an equivalent contrib/btree_gist-based GiST index and\n> running the same query again (assuming that index tuples that match\n> the array keys are spread apart in both the B-Tree and GiST indexes).\n> \n> (Though I see problems with the precise place that nbtree calls\n> pgstat_count_index_scan right now, at least in certain edge-cases,\n> which I discuss below in response to your questions about that.)\n> \n\nOK, understood. FWIW I'm not saying these places are \"wrong\", just that\nit feels each AM does that in a very different place.\n\n>> uint64 btps_nsearches; /* instrumentation */\n>>\n>> Instrumentation what? What's the counter for?\n> \n> Will fix.\n> \n> In case you missed it, there is another thread + CF Entry dedicated to\n> discussing this instrumentation patch:\n> \n> https://commitfest.postgresql.org/49/5183/\n> https://www.postgresql.org/message-id/flat/CAH2-WzkRqvaqR2CTNqTZP0z6FuL4-3ED6eQB0yx38XBNj1v-4Q@mail.gmail.com\n> \n\nThanks, I wasn't aware of that.\n\n>> - I see _bt_first moved the pgstat_count_index_scan, but doesn't that\n>> mean we skip it if the earlier code does \"goto readcomplete\"? Shouldn't\n>> that still count as an index scan?\n> \n> In my opinion, no, it should not.\n> \n> We're counting the number of times we'll have descended the tree using\n> _bt_search (or using _bt_endpoint, perhaps), which is a precisely\n> defined physical cost. A little like counting the number of buffers\n> accessed. I actually think that this aspect of how we call\n> pgstat_count_index_scan is a bug that should be fixed, with the fix\n> backpatched to Postgres 17. Right now, we see completely different\n> counts for a parallel index scan, compared to an equivalent serial\n> index scan -- differences that cannot be explained as minor\n> differences caused by parallel scan implementation details. I think\n> that it's just wrong right now, on master, since we're simply not\n> counting the thing that we're supposed to be counting (not reliably,\n> not if it's a parallel index scan).\n> \n\nOK, understood. If it's essentially an independent issue (perhaps even\ncounts as a bug?) what about correcting it on master first? 
Doesn't\nsound like something we'd backpatch, I guess.\n\n>> - show_indexscan_nsearches does this:\n>>\n>> if (scanDesc && scanDesc->nsearches > 0)\n>> ExplainPropertyUInteger(\"Index Searches\", NULL,\n>> scanDesc->nsearches, es);\n>>\n>> But shouldn't it divide the count by nloops, similar to (for example)\n>> show_instrumentation_count?\n> \n> I can see arguments for and against doing it that way. It's\n> ambiguous/subjective, but on balance I favor not dividing by nloops.\n> You can make a similar argument for doing this with \"Buffers: \", and\n> yet we don't divide by nloops there, either.\n> \n> Honestly, I just want to find a way to do this that everybody can live\n> with. Better documentation could help here.\n> \n\nSeems like a bit of a mess. IMHO we should either divide everything by\nnloops (so that everything is \"per loop\", or not divide anything. My\nvote would be to divide, but that's mostly my \"learned assumption\" from\nthe other fields. But having a 50:50 split is confusing for everyone.\n\n>> 2) v6-0002-Normalize-nbtree-truncated-high-key-array-behavio.patch\n>>\n>> - Admittedly very subjective, but I find the \"oppoDirCheck\" abbreviation\n>> rather weird, I'd just call it \"oppositeDirCheck\".\n> \n> Will fix.\n> \n>> 3) v6-0003-Refactor-handling-of-nbtree-array-redundancies.patch\n>>\n>> - nothing\n> \n> Great. I think that I should be able to commit this one soon, since\n> it's independently useful work.\n> \n\n+1\n\n>> 4) v6-0004-Add-skip-scan-to-nbtree.patch\n>>\n>> - indices.sgml seems to hahve typo \"Intevening\" -> \"Intervening\"\n>>\n>> - It doesn't seem like a good idea to remove the paragraph about\n>> multicolumn indexes and replace it with just:\n>>\n>> Multicolumn indexes should be used judiciously.\n>>\n>> I mean, what does judiciously even mean? what should the user consider\n>> to be judicious? Seems rather unclear to me. Admittedly, the old text\n>> was not much helpful, but at least it gave some advice.\n> \n> Yeah, this definitely needs more work.\n> \n>> But maybe more importantly, doesn't skipscan apply only to a rather\n>> limited subset of data types (that support increment/decrement)? Doesn't\n>> the new wording mostly ignore that, implying skipscan applies to all\n>> btree indexes? I don't think it mentions datatypes anywhere, but there\n>> are many indexes on data types like text, UUID and so on.\n> \n> Actually, no, skip scan works in almost the same way with all data\n> types. Earlier versions of the patch didn't support every data type\n> (perhaps I should have waited for that before posting my v1), but the\n> version of the patch you looked at has no restrictions on any data\n> type.\n> \n> You must be thinking of whether or not an opclass has skip support.\n> That's just an extra optimization, which can be used for a small\n> handful of discrete data types such as integer and date (hard to\n> imagine how skip support could ever be implemented for types like\n> numeric and text). There is a temporary testing GUC that will allow\n> you to get a sense of how much skip support can help: try \"set\n> skipscan_skipsupport_enabled=off\" with (say) my original MDAM test\n> query to get a sense of that. You'll see more buffer hits needed for\n> \"next key probes\", though not dramatically more.\n> \n> It's worth having skip support (the idea comes from the MDAM paper),\n> but it's not essential. 
Whether or not an opclass has skip support\n> isn't accounted for by the cost model, but I doubt that it's worth\n> addressing (the cost model is already pessimistic).\n> \n\nI admit I'm a bit confused. I probably need to reread the paper, but my\nimpression was that the increment/decrement is required for skipscan to\nwork. If we can't do that, how would it generate the intermediate values\nto search for? I imagine it would be possible to \"step through\" the\nindex, but I thought the point of skip scan is to not do that.\n\nAnyway, probably a good idea for extending the stress testing script.\nRight now it tests with \"bigint\" columns only.\n\n>> - Very subjective nitpicking, but I find it a bit strange when a comment\n>> about a block is nested in the block, like in _bt_first() for the\n>> array->null_elem check.\n> \n> Will fix.\n> \n>> - assignProcTypes() claims providing skipscan for cross-type scenarios\n>> doesn't make sense. Why is that? I'm not saying the claim is wrong, but\n>> it's not clear to me why would that be the case.\n> \n> It is just talking about the support function that skip scan can\n> optionally use, where it makes sense (skip support functions). The\n> relevant \"else if (member->number == BTSKIPSUPPORT_PROC)\" stanza is\n> largely copied from the existing nearby \"else if (member->number ==\n> BTEQUALIMAGE_PROC)\" stanza that was added for B-Tree deduplication. In\n> both stanzas we're talking about a capability that maps to a\n> particular \"input opclass\", which means the opclass that maps to the\n> datums that are stored on disk, in index tuples.\n> \n> There are no restrictions on the use of skip scan with queries that\n> happen to involve the use of cross-type operators. It doesn't even\n> matter if we happen to be using an incomplete opfamily, since range\n> skip arrays never need to *directly* take the current array element\n> from a lower/upper bound inequality scan key's argument. It all\n> happens indirectly: code in places like _bt_first and _bt_checkkeys\n> can use inequalities (which are stored in BTArrayKeyInfo.low_compare\n> and BTArrayKeyInfo.high_compare) to locate the next matching on-disk\n> index tuple that satisfies the inequality in question. Obviously, the\n> located datum must be the same type as the one used by the array and\n> its scan key (it has to be the input opclass type if it's taken from\n> an index tuple).\n> \n> I think that it's a bit silly that nbtree generally bends over\n> backwards to find a way to execute a scan, given an incomplete\n> opfamily; in a green field situation it would make sense to just throw\n> an error instead. Even still, skip scan works in a way that is\n> maximally forgiving when incomplete opfamilies are used. Admittedly,\n> it is just about possible to come up with a scenario where we'll now\n> throw an error for a query that would have worked on Postgres 17. But\n> that's no different to what would happen if the query had an explicit\n> \"= any( )\" non-cross-type array instead of an implicit non-cross-type\n> skip array. The real problem in these scenarios is the lack of a\n> suitable cross-type ORDER proc (for a cross-type-operator query)\n> within _bt_first -- not the lack of cross-type operators. This issue\n> with missing ORDER procs just doesn't seem worth worrying about,\n> since, as I said, even slightly different queries (that don't use skip\n> scan) are bound to throw the same errors either way.\n> \n\nOK. Thanks for the explanation. 
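\n\n(A cross-type clause in the script's terms would presumably be something like\n\n   select id1, id2 from t_1000000_1000_1_2\n   where id1 = any (array[1, 2, 3]::smallint[]) and id2 = 42;\n\ni.e. smallint values compared against the bigint columns, so the cross-type integer operators get exercised.)\n\n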
I'll think about maybe testing such\nqueries too (with cross-type clauses).\n\n>> Peter asked me to look at the costing, and I think it looks generally\n>> sensible.\n> \n> I'm glad that you think that I basically have the right idea here.\n> Hard to know how to approach something like this, which doesn't have\n> any kind of precedent to draw on.\n> \n>> We don't really have a lot of information to base the costing\n>> on in the first place - the whole point of skipscan is about multicolumn\n>> indexes, but none of the existing extended statistic seems very useful.\n>> We'd need some cross-column correlation info, or something like that.\n> \n> Maybe, but that would just mean that we'd sometimes be more optimistic\n> about skip scan helping than we are with the current approach of\n> pessimistically assuming that there is no correlation at all. Not\n> clear that being pessimistic in this sense isn't the right thing to\n> do, despite the fact that it's clearly less accurate on average.\n> \n\nHmmm, yeah. I think it'd be useful to explain this reasoning (assuming\nno correlation means pessimistic skipscan costing) in a comment before\nbtcostestimate, or somewhere close.\n\n>> There's one thing that I don't quite understand, and that's how\n>> btcost_correlation() adjusts correlation for multicolumn indexes:\n>>\n>> if (index->nkeycolumns > 1)\n>> indexCorrelation = varCorrelation * 0.75;\n>>\n>> That seems fine for a two-column index, I guess. But shouldn't it\n>> compound for indexes with more keys? I mean, 0.75 * 0.75 for third\n>> column, etc? I don't think btcostestimate() does that, it just remembers\n>> whatever btcost_correlation() returns.\n> \n> I don't know either. In general I'm out of my comfort zone here.\n> \n\nDon't we do something similar elsewhere? For example, IIRC we do some\nadjustments when estimating grouping in estimate_num_groups(), and\nincremental sort had to deal with something similar too. Maybe we could\nlearn something from those places ... (both from the good and bad\nexperiences).\n\n>> The only alternative approach I can think of is not to adjust the\n>> costing for the index scan at all, and only use this to enable (or not\n>> enable) the skipscan internally. That would mean the overall plan\n>> remains the same, and maybe sometimes we would think an index scan would\n>> be too expensive and use something else. Not great, but it doesn't have\n>> the risk of regressions - IIUC we can disable the skipscan at runtime,\n>> if we realize it's not really helpful.\n> \n> In general I would greatly prefer to not have a distinct kind of index\n> path for scans that use skip scan. I'm quite keen on a design that\n> allows the scan to adapt to unpredictable conditions at runtime.\n> \n\nRight. I don't think I've been suggesting having a separate path, I 100%\nagree it's better to have this as an option for index scan paths.\n\n> Of course, that doesn't preclude passing the index scan a hint about\n> what's likely to work at runtime, based on information figured out\n> when costing the scan. Perhaps that will prove necessary to avoid\n> regressing index scans that are naturally quite cheap already -- scans\n> where we really need to have the right general idea from the start to\n> avoid any regressions. 
I'm not opposed to that, provided the index\n> scan has the ability to change its mind when (for whatever reason) the\n> guidance from the optimizer turns out to be wrong.\n> \n\n+1 (assuming it's feasible, given the amount of available information)\n\n>> As usual, I wrote a bash script to do a bit of stress testing. It\n>> generates tables with random data, and then runs random queries with\n>> random predicates on them, while mutating a couple parameters (like\n>> number of workers) to trigger different plans. It does that on 16,\n>> master and with the skipscan patch (with the fix for parallel scans).\n> \n> I wonder if some of the regressions you see can be tied to the use of\n> an LWLock in place of the existing use of a spin lock. I did that\n> because I sometimes need to allocate memory to deserialize the array\n> keys, with the exclusive lock held. It might be the case that a lot of\n> these regressions are tied to that, or something else that is far from\n> obvious...have to investigate.\n> \n> In general, I haven't done much on parallel index scans here (I only\n> added support for them very recently), whereas your testing places a\n> lot of emphasis on parallel scans. Nothing wrong with that emphasis\n> (it caught that 17 bug), but just want to put it in context.\n> \n\nSure. With this kind of testing I don't know what I'm looking for, so I\ntry to cover very wide range of cases. Inevitably, some of the cases\nwill not test the exact subject of the patch. I think it's fine.\n\n>> I've uploaded the script and results from the last run here:\n>>\n>> https://github.com/tvondra/pg-skip-scan-tests\n>>\n>> There's the \"run-mdam.sh\" script that generates tables/queries, runs\n>> them, collects all kinds of info about the query, and produces files\n>> with explain plans, CSV with timings, etc.\n> \n> It'll take me a while to investigate all this data.\n> \n\nI think it'd help if I go through the results and try to prepare some\nreproducers, to make it easier for you. After all, it's my script and\nyou'd have to reverse engineer some of it.\n\n>> Anyway, I ran a couple thousand such queries, and I haven't found any\n>> incorrect results (the script compares that between versions too). So\n>> that's good ;-)\n> \n> That's good!\n> \n>> But my main goal was to see how this affects performance. The tables\n>> were pretty small (just 1M rows, maybe ~80MB), but with restarts and\n>> dropping caches, large enough to test this.\n> \n> The really compelling cases all tend to involve fairly selective index\n> scans. Obviously, skip scan can only save work by navigating the index\n> structure more efficiently (unlike loose index scan). So if the main\n> cost is inherently bound to be the cost of heap accesses, then we\n> shouldn't expect a big speed up.\n> \n>> For example, one of the slowed down queries is query 702 (top of page 8\n>> in the PDF). The query is pretty simple:\n>>\n>> explain (analyze, timing off, buffers off)\n>> select id1,id2 from t_1000000_1000_1_2\n>> where NOT (id1 in (:list)) AND (id2 = :value);\n>>\n>> and it was executed on a table with random data in two columns, each\n>> with 1000 distinct values. This is perfectly random data, so a great\n>> match for the assumptions in costing etc.\n>>\n>> But with uncached data, this runs in ~50 ms on master, but takes almost\n>> 200 ms with skipscan (these timings are from my laptop, but similar to\n>> the results).\n> \n> I'll need to investigate this specifically. 
That does seem odd.\n> \n> FWIW, it's a pity that the patch doesn't know how to push down the NOT\n> IN () here. The MDAM paper contemplates such a scheme. We see the use\n> of filter quals here, when in principle this could work by using a\n> skip array that doesn't generate elements that appear in the NOT IN()\n> list (it'd generate every possible indexable value *except* the given\n> list/array values). The only reason that I haven't implemented this\n> yet is because I'm not at all sure how to make it work on the\n> optimizer side. The nbtree side of the implementation will probably be\n> quite straightforward, since it's really just a slight variant of a\n> skip array, that excludes certain values.\n> \n>> -- with skipscan\n>> Index Only Scan using t_1000000_1000_1_2_id1_id2_idx on\n>> t_1000000_1000_1_2 (cost=0.96..983.26 rows=1719 width=16)\n>> (actual rows=811 loops=1)\n>> Index Cond: (id2 = 997)\n>> Index Searches: 1007\n>> Filter: (id1 <> ALL ('{983,...,640}'::bigint[]))\n>> Rows Removed by Filter: 163\n>> Heap Fetches: 0\n>> Planning Time: 3.730 ms\n>> Execution Time: 238.554 ms\n>> (8 rows)\n>>\n>> I haven't looked into why this is happening, but this seems like a\n>> pretty good match for skipscan (on the first column). And for the\n>> costing too - it's perfectly random data, no correllation, etc.\n> \n> I wonder what \"Buffers: N\" shows? That's usually the first thing I\n> look at (that and \"Index Searches\", which looks like what you said it\n> should look like here). But, yeah, let me get back to you on this.\n> \n\nYeah, I forgot to get that from my reproducer. But the logs in the\ngithub repo with results has BUFFERS - for master (SEQ 12621), the plan\nlooks like this:\n\n Index Only Scan using t_1000000_1000_1_2_id1_id2_idx\n on t_1000000_1000_1_2\n (cost=0.96..12179.41 rows=785 width=16)\n (actual rows=785 loops=1)\n Index Cond: (id2 = 997)\n Filter: (id1 <> ALL ('{983, ..., 640}'::bigint[]))\n Rows Removed by Filter: 181\n Heap Fetches: 0\n Buffers: shared read=3094\n Planning:\n Buffers: shared hit=93 read=27\n Planning Time: 9.962 ms\n Execution Time: 38.007 ms\n(10 rows)\n\nand with the patch (SEQ 12623) it's this:\n\n Index Only Scan using t_1000000_1000_1_2_id1_id2_idx\n on t_1000000_1000_1_2\n (cost=0.96..1745.27 rows=784 width=16)\n (actual rows=785 loops=1)\n Index Cond: (id2 = 997)\n Index Searches: 1002\n Filter: (id1 <> ALL ('{983, ..., 640}'::bigint[]))\n Rows Removed by Filter: 181\n Heap Fetches: 0\n Buffers: shared hit=1993 read=1029\n Planning:\n Buffers: shared hit=93 read=27\n Planning Time: 9.506 ms\n Execution Time: 179.048 ms\n(11 rows)\n\nThis is on exactly the same data, after dropping caches and restarting\nthe instance. So there should be no caching effects. Yet, there's a\npretty clear difference - the total number of buffers is the same, but\nthe patched version has many more hits. Yet it's slower. Weird, right?\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Wed, 18 Sep 2024 13:36:33 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Wed, Sep 18, 2024 at 7:36 AM Tomas Vondra <[email protected]> wrote:\n> Makes sense. I started with the testing before before even looking at\n> the code, so it's mostly a \"black box\" approach. 
I did read the 1995\n> paper before that, and the script generates queries with clauses\n> inspired by that paper, in particular:\n\nI think that this approach with black box testing is helpful, but also\nsomething to refine over time. Gray box testing might work best.\n\n> OK, understood. If it's essentially an independent issue (perhaps even\n> counts as a bug?) what about correcting it on master first? Doesn't\n> sound like something we'd backpatch, I guess.\n\nWhat about backpatching it to 17?\n\nAs things stand, you can get quite contradictory counts of the number\nof index scans due to irrelevant implementation details from parallel\nindex scan. It just looks wrong, particularly on 17, where it is\nreasonable to expect near exact consistency between parallel and\nserial scans of the same index.\n\n> Seems like a bit of a mess. IMHO we should either divide everything by\n> nloops (so that everything is \"per loop\", or not divide anything. My\n> vote would be to divide, but that's mostly my \"learned assumption\" from\n> the other fields. But having a 50:50 split is confusing for everyone.\n\nMy idea was that it made most sense to follow the example of\n\"Buffers:\", since both describe physical costs.\n\nHonestly, I'm more than ready to take whatever the path of least\nresistance is. If dividing by nloops is what people want, I have no\nobjections.\n\n> > It's worth having skip support (the idea comes from the MDAM paper),\n> > but it's not essential. Whether or not an opclass has skip support\n> > isn't accounted for by the cost model, but I doubt that it's worth\n> > addressing (the cost model is already pessimistic).\n> >\n>\n> I admit I'm a bit confused. I probably need to reread the paper, but my\n> impression was that the increment/decrement is required for skipscan to\n> work. If we can't do that, how would it generate the intermediate values\n> to search for? I imagine it would be possible to \"step through\" the\n> index, but I thought the point of skip scan is to not do that.\n\nI think that you're probably still a bit confused because the\nterminology in this area is a little confusing. There are two ways of\nexplaining the situation with types like text and numeric (types that\nlack skip support). The two explanations might seem to be\ncontradictory, but they're really not, if you think about it.\n\nThe first way of explaining it, which focuses on how the scan moves\nthrough the index:\n\nFor a text index column \"a\", and an int index column \"b\", skip scan\nwill work like this for a query with a qual \"WHERE b = 55\":\n\n1. Find the first/lowest sorting \"a\" value in the index. Let's say\nthat it's \"Aardvark\".\n\n2. Look for matches \"WHERE a = 'Aardvark' and b = 55\", possibly\nreturning some matches.\n\n3. Find the next value after \"Aardvark\" in the index using a probe\nlike the one we'd use for a qual \"WHERE a > 'Aardvark'\". Let's say\nthat it turns out to be \"Abacus\".\n\n4. Look for matches \"WHERE a = 'Abacus' and b = 55\"...\n\n... (repeat these steps until we've exhaustively processed every\nexisting \"a\" value in the index)...\n\nThe second way of explaining it, which focuses on how the skip arrays\nadvance. Same query (and really the same behavior) as in the first\nexplanation:\n\n1. 
Skip array's initial value is the sentinel -inf, which cannot\npossibly match any real index tuple, but can still guide the search.\nSo we search for tuples \"WHERE a = -inf AND b = 55\" (actually we don't\ninclude the \"b = 55\" part, since it is unnecessary, but conceptually\nit's a part of what we search for within _bt_first).\n\n2. Find that the index has no \"a\" values matching -inf (it inevitably\ncannot have any matches for -inf), but we do locate the next highest\nmatch. The closest matching value is \"Aardvark\". The skip array on \"a\"\ntherefore advances from -inf to \"Aardvark\".\n\n3. Look for matches \"WHERE a = 'Aardvark' and b = 55\", possibly\nreturning some matches.\n\n4. Reach tuples after the last match for \"WHERE a = 'Aardvark' and b =\n55\", which will cause us to advance the array on \"a\" incrementally\ninside _bt_advance_array_keys (just like it would if there was a\nstandard SAOP array on \"a\" instead). The skip array on \"a\" therefore\nadvances from \"Aardvark\" to \"Aardvark\" +infinitesimal (we need to use\nsentinel values for this text column, which lacks skip support).\n\n5. Look for matches \"WHERE a = 'Aardvark'+infinitesimal and b = 55\",\nwhich cannot possibly find matches, but, again, can reposition the\nscan as needed. We can't find an exact match, of course, but we do\nlocate the next closest match -- which is \"Abacus\", again. So the skip\narray now advances from \"Aardvark\" +infinitesimal to \"Abacus\". The\nsentinel values are made up values, but that doesn't change anything.\n(And, again, we don't include the \"b = 55\" part here, for the same\nreason as before.)\n\n6. Look for matches \"WHERE a = 'Abacus' and b = 55\"...\n\n...(repeat these steps as many times as required)...\n\nIn summary:\n\nEven index columns that lack skip support get to \"increment\" (or\n\"decrement\") their arrays by using sentinel values that represent -inf\n(or +inf for backwards scans), as well as sentinels that represent\nconcepts such as \"Aardvark\" +infinitesimal (or \"Zebra\" -infinitesimal\nfor backwards scans, say). This scheme sounds contradictory, because\nin one sense it allows every skip array to be incremented, but in\nanother sense it makes it okay that we don't have a type-specific way\nto increment values for many individual types/opclasses.\n\nInventing these sentinel values allows _bt_advance_array_keys to reason about\narrays without really having to care about which kinds of arrays are\ninvolved, their order relative to each other, etc. In a certain sense,\nwe don't really need explicit \"next key\" probes of the kind that the\nMDAM paper contemplates, though we do still require the same index\naccesses as a design with explicit accesses.\n\nDoes that make sense?\n\nObviously, if we did add skip support for text, it would be very\nunlikely to help performance. Sure, one can imagine incrementing from\n\"Aardvark\" to \"Aardvarl\" using dedicated opclass infrastructure, but\nthat isn't very helpful. You're almost certain to end up accessing the\nsame pages with such a scheme, anyway. What are the chances of an\nindex with a leading text column actually containing tuples matching\n(say) \"WHERE a = 'Aardvarl' and b = 55\"? The chances are practically\nzero. 
Whereas if the column \"a\" happens to use a discrete type such as\ninteger or date, then skip support is likely to help: there's a decent\nchance that a value generated by incrementing the last value\n(and I mean incrementing it for real) will find a real match when\ncombined with the user-supplied \"b\" predicate.\n\nIt might be possible to add skip support for text, but there wouldn't\nbe much point.\n\n> Anyway, probably a good idea for extending the stress testing script.\n> Right now it tests with \"bigint\" columns only.\n\nGood idea.\n\n> Hmmm, yeah. I think it'd be useful to explain this reasoning (assuming\n> no correlation means pessimistic skipscan costing) in a comment before\n> btcostestimate, or somewhere close.\n\nWill do.\n\n> Don't we do something similar elsewhere? For example, IIRC we do some\n> adjustments when estimating grouping in estimate_num_groups(), and\n> incremental sort had to deal with something similar too. Maybe we could\n> learn something from those places ... (both from the good and bad\n> experiences).\n\nI'll make a note of that. Gonna focus on regressions for now.\n\n> Right. I don't think I've been suggesting having a separate path, I 100%\n> agree it's better to have this as an option for index scan paths.\n\nCool.\n\n> Sure. With this kind of testing I don't know what I'm looking for, so I\n> try to cover very wide range of cases. Inevitably, some of the cases\n> will not test the exact subject of the patch. I think it's fine.\n\nI agree. Just wanted to make sure that we were on the same page.\n\n> I think it'd help if I go through the results and try to prepare some\n> reproducers, to make it easier for you. After all, it's my script and\n> you'd have to reverse engineer some of it.\n\nYes, that would be helpful.\n\nI'll probably memorialize the problem by writing my own minimal test\ncase for it. I'm using the same TDD approach for this project as was\nused for the related Postgres 17 project.\n\n> This is on exactly the same data, after dropping caches and restarting\n> the instance. So there should be no caching effects. Yet, there's a\n> pretty clear difference - the total number of buffers is the same, but\n> the patched version has many more hits. Yet it's slower. Weird, right?\n\nYes, it's weird. It seems likely that you've found an unambiguous bug,\nnot just a \"regular\" performance regression. The regressions that I\nalready know about aren't nearly this bad. So it seems like you have\nthe right general idea about what to expect, and it seems like your\napproach to testing the patch is effective.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 18 Sep 2024 14:52:35 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Mon, Sep 16, 2024 at 6:05 PM Tomas Vondra <[email protected]> wrote:\n> For example, one of the slowed down queries is query 702 (top of page 8\n> in the PDF). 
The query is pretty simple:\n>\n> explain (analyze, timing off, buffers off)\n> select id1,id2 from t_1000000_1000_1_2\n> where NOT (id1 in (:list)) AND (id2 = :value);\n>\n> and it was executed on a table with random data in two columns, each\n> with 1000 distinct values.\n\nI cannot recreate this problem using the q702.sql repro you provided.\nFeels like I'm missing a step, because I find that skip scan wins\nnicely here.\n\n> This is perfectly random data, so a great\n> match for the assumptions in costing etc.\n\nFWIW, I wouldn't say that this is a particularly sympathetic case for\nskip scan. It's definitely still a win, but less than other cases I\ncan imagine. This is due to the relatively large number of rows\nreturned by the scan. Plus 1000 distinct leading values for a skip\narray isn't all that low, so we end up scanning over 1/3 of all of the\nleaf pages in the index.\n\nBTW, be careful to distinguish between leaf pages and internal pages\nwhen interpreting \"Buffers:\" output with the patch. Generally\nspeaking, the patch repeats many internal page accesses, which needs\nto be taken into account when compare \"Buffers:\" counts against\nmaster. It's not uncommon for 3/4 or even 4/5 of all index page hits\nto be for internal pages with the patch. Whereas on master the number\nof internal page hits is usually tiny. This is one reason why the\nadditional context provided by \"Index Searches:\" can be helpful.\n\n> But with uncached data, this runs in ~50 ms on master, but takes almost\n> 200 ms with skipscan (these timings are from my laptop, but similar to\n> the results).\n\nEven 50ms seems really slow for your test case -- with or without my\npatch applied.\n\nAre you sure that this wasn't an assert-enabled build? There's lots of\nextra assertions for the code paths used by skip scan for this, which\ncould explain the apparent regression.\n\nI find that this same query takes only ~2.056 ms with the patch. When\nI disabled skip scan locally via \"set skipscan_prefix_cols = 0\" (which\nshould give me behavior that's pretty well representative of master),\nit takes ~12.039 ms. That's exactly what I'd expect for this query: a\nsolid improvement, though not the really enormous ones that you'll see\nwhen skip scan is able to avoid reading many of the index pages that\nmaster reads.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 19 Sep 2024 15:22:46 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On 9/19/24 21:22, Peter Geoghegan wrote:\n> On Mon, Sep 16, 2024 at 6:05 PM Tomas Vondra <[email protected]> wrote:\n>> For example, one of the slowed down queries is query 702 (top of page 8\n>> in the PDF). The query is pretty simple:\n>>\n>> explain (analyze, timing off, buffers off)\n>> select id1,id2 from t_1000000_1000_1_2\n>> where NOT (id1 in (:list)) AND (id2 = :value);\n>>\n>> and it was executed on a table with random data in two columns, each\n>> with 1000 distinct values.\n> \n> I cannot recreate this problem using the q702.sql repro you provided.\n> Feels like I'm missing a step, because I find that skip scan wins\n> nicely here.\n> \n\nI don't know, I can reproduce it just fine. 
I just tried with v7.\n\nWhat I do is this:\n\n1) build master and patched versions:\n\n ./configure --enable-depend --prefix=/mnt/data/builds/$(build}/\n make -s clean\n make -s -j4 install\n\n2) create a new cluster (default config), create DB, generate the data\n\n3) restart cluster, drop caches\n\n4) run the query from the SQL script\n\nI suspect you don't do (3). I didn't mention this explicitly, my message\nonly said \"with uncached data\", so maybe that's the problem?\n\n\n>> This is perfectly random data, so a great\n>> match for the assumptions in costing etc.\n> \n> FWIW, I wouldn't say that this is a particularly sympathetic case for\n> skip scan. It's definitely still a win, but less than other cases I\n> can imagine. This is due to the relatively large number of rows\n> returned by the scan. Plus 1000 distinct leading values for a skip\n> array isn't all that low, so we end up scanning over 1/3 of all of the\n> leaf pages in the index.\n> \n\nI wasn't suggesting it's a sympathetic case for skipscan. My point is\nthat it perfectly matches the costing assumptions, i.e. columns are\nindependent etc. But if it's not sympathetic, maybe the cost shouldn't\nbe 1/5 of cost from master?\n\n> BTW, be careful to distinguish between leaf pages and internal pages\n> when interpreting \"Buffers:\" output with the patch. Generally\n> speaking, the patch repeats many internal page accesses, which needs\n> to be taken into account when compare \"Buffers:\" counts against\n> master. It's not uncommon for 3/4 or even 4/5 of all index page hits\n> to be for internal pages with the patch. Whereas on master the number\n> of internal page hits is usually tiny. This is one reason why the\n> additional context provided by \"Index Searches:\" can be helpful.\n> \n\nYeah, I recall there's an issue with that.\n\n>> But with uncached data, this runs in ~50 ms on master, but takes almost\n>> 200 ms with skipscan (these timings are from my laptop, but similar to\n>> the results).\n> \n> Even 50ms seems really slow for your test case -- with or without my\n> patch applied.\n> \n> Are you sure that this wasn't an assert-enabled build? There's lots of\n> extra assertions for the code paths used by skip scan for this, which\n> could explain the apparent regression.\n> \n> I find that this same query takes only ~2.056 ms with the patch. When\n> I disabled skip scan locally via \"set skipscan_prefix_cols = 0\" (which\n> should give me behavior that's pretty well representative of master),\n> it takes ~12.039 ms. That's exactly what I'd expect for this query: a\n> solid improvement, though not the really enormous ones that you'll see\n> when skip scan is able to avoid reading many of the index pages that\n> master reads.\n> \n\nI'm pretty sure you're doing this on cached data, because 2ms is exactly\nthe timing I see in that case.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Fri, 20 Sep 2024 15:45:25 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On 9/18/24 20:52, Peter Geoghegan wrote:\n> On Wed, Sep 18, 2024 at 7:36 AM Tomas Vondra <[email protected]> wrote:\n>> Makes sense. I started with the testing before before even looking at\n>> the code, so it's mostly a \"black box\" approach. 
I did read the 1995\n>> paper before that, and the script generates queries with clauses\n>> inspired by that paper, in particular:\n> \n> I think that this approach with black box testing is helpful, but also\n> something to refine over time. Gray box testing might work best.\n> \n>> OK, understood. If it's essentially an independent issue (perhaps even\n>> counts as a bug?) what about correcting it on master first? Doesn't\n>> sound like something we'd backpatch, I guess.\n> \n> What about backpatching it to 17?\n> \n> As things stand, you can get quite contradictory counts of the number\n> of index scans due to irrelevant implementation details from parallel\n> index scan. It just looks wrong, particularly on 17, where it is\n> reasonable to expect near exact consistency between parallel and\n> serial scans of the same index.\n> \n\nYes, I think backpatching to 17 would be fine. I'd be worried about\nmaybe disrupting some monitoring in production systems, but for 17 that\nshouldn't be a problem yet. So fine with me.\n\nFWIW I wonder how likely is it that someone has some sort of alerting\ntied to this counter. I'd bet few people do. It's probably more about a\ncouple people looking at explain plans, but they'll be confused even if\nwe change that only starting with 17.\n\n>> Seems like a bit of a mess. IMHO we should either divide everything by\n>> nloops (so that everything is \"per loop\", or not divide anything. My\n>> vote would be to divide, but that's mostly my \"learned assumption\" from\n>> the other fields. But having a 50:50 split is confusing for everyone.\n> \n> My idea was that it made most sense to follow the example of\n> \"Buffers:\", since both describe physical costs.\n> \n> Honestly, I'm more than ready to take whatever the path of least\n> resistance is. If dividing by nloops is what people want, I have no\n> objections.\n> \n\nI don't have a strong opinion on this. I just know I'd be confused by\nhalf the counters being total and half /loop, but chances are other\npeople would disagree.\n\n>>> It's worth having skip support (the idea comes from the MDAM paper),\n>>> but it's not essential. Whether or not an opclass has skip support\n>>> isn't accounted for by the cost model, but I doubt that it's worth\n>>> addressing (the cost model is already pessimistic).\n>>>\n>>\n>> I admit I'm a bit confused. I probably need to reread the paper, but my\n>> impression was that the increment/decrement is required for skipscan to\n>> work. If we can't do that, how would it generate the intermediate values\n>> to search for? I imagine it would be possible to \"step through\" the\n>> index, but I thought the point of skip scan is to not do that.\n> \n> I think that you're probably still a bit confused because the\n> terminology in this area is a little confusing. There are two ways of\n> explaining the situation with types like text and numeric (types that\n> lack skip support). The two explanations might seem to be\n> contradictory, but they're really not, if you think about it.\n> \n> The first way of explaining it, which focuses on how the scan moves\n> through the index:\n> \n> For a text index column \"a\", and an int index column \"b\", skip scan\n> will work like this for a query with a qual \"WHERE b = 55\":\n> \n> 1. Find the first/lowest sorting \"a\" value in the index. Let's say\n> that it's \"Aardvark\".\n> \n> 2. Look for matches \"WHERE a = 'Aardvark' and b = 55\", possibly\n> returning some matches.\n> \n> 3. 
Find the next value after \"Aardvark\" in the index using a probe\n> like the one we'd use for a qual \"WHERE a > 'Aardvark'\". Let's say\n> that it turns out to be \"Abacus\".\n> \n> 4. Look for matches \"WHERE a = 'Abacus' and b = 55\"...\n> \n> ... (repeat these steps until we've exhaustively processed every\n> existing \"a\" value in the index)...\n\nAh, OK. So we do probe the index like this. I was under the impression\nwe don't do that. But yeah, this makes sense.\n\n> \n> The second way of explaining it, which focuses on how the skip arrays\n> advance. Same query (and really the same behavior) as in the first\n> explanation:\n> \n> 1. Skip array's initial value is the sentinel -inf, which cannot\n> possibly match any real index tuple, but can still guide the search.\n> So we search for tuples \"WHERE a = -inf AND b = 55\" (actually we don't\n> include the \"b = 55\" part, since it is unnecessary, but conceptually\n> it's a part of what we search for within _bt_first).\n> \n> 2. Find that the index has no \"a\" values matching -inf (it inevitably\n> cannot have any matches for -inf), but we do locate the next highest\n> match. The closest matching value is \"Aardvark\". The skip array on \"a\"\n> therefore advances from -inf to \"Aardvark\".\n> \n> 3. Look for matches \"WHERE a = 'Aardvark' and b = 55\", possibly\n> returning some matches.\n> \n> 4. Reach tuples after the last match for \"WHERE a = 'Aardvark' and b =\n> 55\", which will cause us to advance the array on \"a\" incrementally\n> inside _bt_advance_array_keys (just like it would if there was a\n> standard SAOP array on \"a\" instead). The skip array on \"a\" therefore\n> advances from \"Aardvark\" to \"Aardvark\" +infinitesimal (we need to use\n> sentinel values for this text column, which lacks skip support).\n> \n> 5. Look for matches \"WHERE a = 'Aardvark'+infinitesimal and b = 55\",\n> which cannot possibly find matches, but, again, can reposition the\n> scan as needed. We can't find an exact match, of course, but we do\n> locate the next closest match -- which is \"Abacus\", again. So the skip\n> array now advances from \"Aardvark\" +infinitesimal to \"Abacus\". The\n> sentinel values are made up values, but that doesn't change anything.\n> (And, again, we don't include the \"b = 55\" part here, for the same\n> reason as before.)\n> \n> 6. Look for matches \"WHERE a = 'Abacus' and b = 55\"...\n> \n> ...(repeat these steps as many times as required)...\n> \n\nYeah, this makes more sense. Thanks.\n\n> In summary:\n> \n> Even index columns that lack skip support get to \"increment\" (or\n> \"decrement\") their arrays by using sentinel values that represent -inf\n> (or +inf for backwards scans), as well as sentinels that represent\n> concepts such as \"Aardvark\" +infinitesimal (or \"Zebra\" -infinitesimal\n> for backwards scans, say). This scheme sounds contradictory, because\n> in one sense it allows every skip array to be incremented, but in\n> another sense it makes it okay that we don't have a type-specific way\n> to increment values for many individual types/opclasses.\n> \n> Inventing these sentinel values allows _bt_advance_array_keys to reason about\n> arrays without really having to care about which kinds of arrays are\n> involved, their order relative to each other, etc. 
In a certain sense,\n> we don't really need explicit \"next key\" probes of the kind that the\n> MDAM paper contemplates, though we do still require the same index\n> accesses as a design with explicit accesses.\n> \n> Does that make sense?\n> \n\nYes, it does. Most of my confusion was caused by my belief that we can't\nprobe the index for the next value without \"incrementing\" the current\nvalue, but that was a silly idea.\n\n> Obviously, if we did add skip support for text, it would be very\n> unlikely to help performance. Sure, one can imagine incrementing from\n> \"Aardvark\" to \"Aardvarl\" using dedicated opclass infrastructure, but\n> that isn't very helpful. You're almost certain to end up accessing the\n> same pages with such a scheme, anyway. What are the chances of an\n> index with a leading text column actually containing tuples matching\n> (say) \"WHERE a = 'Aardvarl' and b = 55\"? The chances are practically\n> zero. Whereas if the column \"a\" happens to use a discrete type such as\n> integer or date, then skip support is likely to help: there's a decent\n> chance that a value generated by incrementing the last value\n> (and I mean incrementing it for real) will find a real match when\n> combined with the user-supplied \"b\" predicate.\n> \n> It might be possible to add skip support for text, but there wouldn't\n> be much point.\n> \n\nStupid question - so why does it make sense for types like int? There\ncan also be a lot of values between the current and the next value, so\nwhy would that be very different from \"incrementing\" a text value?\n\n> \n>> I think it'd help if I go through the results and try to prepare some\n>> reproducers, to make it easier for you. After all, it's my script and\n>> you'd have to reverse engineer some of it.\n> \n> Yes, that would be helpful.\n> \n> I'll probably memorialize the problem by writing my own minimal test\n> case for it. I'm using the same TDD approach for this project as was\n> used for the related Postgres 17 project.\n> \n\nSure. Still, giving you a reproducer should make it easier .\n\n>> This is on exactly the same data, after dropping caches and restarting\n>> the instance. So there should be no caching effects. Yet, there's a\n>> pretty clear difference - the total number of buffers is the same, but\n>> the patched version has many more hits. Yet it's slower. Weird, right?\n> \n> Yes, it's weird. It seems likely that you've found an unambiguous bug,\n> not just a \"regular\" performance regression. The regressions that I\n> already know about aren't nearly this bad. So it seems like you have\n> the right general idea about what to expect, and it seems like your\n> approach to testing the patch is effective.\n> \n\nYeah, it's funny. It's not the first time I start stress testing a patch\nonly to stumble over some pre-existing issues ... ;-)\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Fri, 20 Sep 2024 16:07:41 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Fri, Sep 20, 2024 at 9:45 AM Tomas Vondra <[email protected]> wrote:\n> 3) restart cluster, drop caches\n>\n> 4) run the query from the SQL script\n>\n> I suspect you don't do (3). I didn't mention this explicitly, my message\n> only said \"with uncached data\", so maybe that's the problem?\n\nYou're right that I didn't do step 3 here. 
I'm generally in the habit\nof using fully cached data when testing this kind of work.\n\nThe only explanation I can think of is that (at least on your\nhardware) OS readahead helps the master branch more than skipping\nhelps the patch. That's surprising, but I guess it's possible here\nbecause skip scan only needs to access about every third page. And\nbecause this particular index was generated by CREATE INDEX, and so\nhappens to have a strong correlation between key space order and\nphysical block order. And probably because this is an index-only scan.\n\n> I wasn't suggesting it's a sympathetic case for skipscan. My point is\n> that it perfectly matches the costing assumptions, i.e. columns are\n> independent etc. But if it's not sympathetic, maybe the cost shouldn't\n> be 1/5 of cost from master?\n\nThe costing is pretty accurate if we assume cached data, though --\nwhich is what the planner will actually assume. In any case, is that\nreally the only problem you see here? That the costing might be\ninaccurate because it fails to account for some underlying effect,\nsuch as the influence of OS readhead?\n\nLet's assume for a moment that the regression is indeed due to\nreadahead effects, and that we deem it to be unacceptable. What can be\ndone about it? I have a really hard time thinking of a fix, since by\nmost conventional measures skip scan is indeed much faster here.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 20 Sep 2024 10:21:29 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On 9/20/24 16:21, Peter Geoghegan wrote:\n> On Fri, Sep 20, 2024 at 9:45 AM Tomas Vondra <[email protected]> wrote:\n>> 3) restart cluster, drop caches\n>>\n>> 4) run the query from the SQL script\n>>\n>> I suspect you don't do (3). I didn't mention this explicitly, my message\n>> only said \"with uncached data\", so maybe that's the problem?\n> \n> You're right that I didn't do step 3 here. I'm generally in the habit\n> of using fully cached data when testing this kind of work.\n> \n> The only explanation I can think of is that (at least on your\n> hardware) OS readahead helps the master branch more than skipping\n> helps the patch. That's surprising, but I guess it's possible here\n> because skip scan only needs to access about every third page. And\n> because this particular index was generated by CREATE INDEX, and so\n> happens to have a strong correlation between key space order and\n> physical block order. And probably because this is an index-only scan.\n> \n\nGood idea. Yes, it does seem to be due to readahead - if I disable that,\nthe query takes ~320ms on master and ~280ms with the patch.\n\n>> I wasn't suggesting it's a sympathetic case for skipscan. My point is\n>> that it perfectly matches the costing assumptions, i.e. columns are\n>> independent etc. But if it's not sympathetic, maybe the cost shouldn't\n>> be 1/5 of cost from master?\n> \n> The costing is pretty accurate if we assume cached data, though --\n> which is what the planner will actually assume. In any case, is that\n> really the only problem you see here? That the costing might be\n> inaccurate because it fails to account for some underlying effect,\n> such as the influence of OS readhead?\n> \n> Let's assume for a moment that the regression is indeed due to\n> readahead effects, and that we deem it to be unacceptable. What can be\n> done about it? 
I have a really hard time thinking of a fix, since by\n> most conventional measures skip scan is indeed much faster here.\n> \n\nIt does seem to be due to readahead, and the costing not accounting for\nthese effects. And I don't think it's unacceptable - I don't think we\nconsider readahead elsewhere, and it certainly is not something I'd\nexpect this patch to fix. So I think it's fine.\n\nUltimately, I think this should be \"fixed\" by explicitly prefetching\npages. My index prefetching patch won't really help, because AFAIK this\nis about index pages. And I don't know how feasible it is.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Fri, 20 Sep 2024 16:42:37 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Fri, Sep 20, 2024 at 10:07 AM Tomas Vondra <[email protected]> wrote:\n> Yes, I think backpatching to 17 would be fine. I'd be worried about\n> maybe disrupting some monitoring in production systems, but for 17 that\n> shouldn't be a problem yet. So fine with me.\n\nI'll commit minimal changes to _bt_first that at least make the\ncounters consistent, then. I'll do so soon.\n\n> FWIW I wonder how likely is it that someone has some sort of alerting\n> tied to this counter. I'd bet few people do. It's probably more about a\n> couple people looking at explain plans, but they'll be confused even if\n> we change that only starting with 17.\n\nOn 17 the behavior in this area is totally different, either way.\n\n> Ah, OK. So we do probe the index like this. I was under the impression\n> we don't do that. But yeah, this makes sense.\n\nWell, we don't have *explicit* next-key probes. If you think of values\nlike \"Aardvark\" + infinitesimal as just another array value (albeit\none that requires a little special handling in _bt_first), then there\nare no explicit probes. There are no true special cases required.\n\nMaybe this sounds like a very academic point. I don't think that it\nis, though. Bear in mind that even when _bt_first searches for index\ntuples matching a value like \"Aardvark\" + infinitesimal, there's some\nchance that _bt_search will return a leaf page with tuples that the\nindex scan ultimately returns. And so there really is no \"separate\nexplicit probe\" of the kind the MDAM paper contemplates.\n\nWhen this happens, we won't get any exact matches for the sentinel\nsearch value, but there could still be matches for (say) \"WHERE a =\n'Abacus' AND b = 55\" on that same leaf page. In general, repositioning\nthe scan to later \"within\" the 'Abacus' index tuples might not be\nrequired -- our initial position (based on the sentinel search key)\ncould be \"close enough\". This outcome is more likely to happen if the\nquery happened to be written \"WHERE b = 1\", rather than \"WHERE b =\n55\".\n\n> Yes, it does. Most of my confusion was caused by my belief that we can't\n> probe the index for the next value without \"incrementing\" the current\n> value, but that was a silly idea.\n\nIt's not a silly idea, I think. 
Technically that understanding is\nfairly accurate -- we often *do* have to \"increment\" to get to the\nnext value (though reading the next value from an index tuple and then\nrepositioning using it with later/less significant scan keys is the\nother possibility).\n\nIncrementing is always possible, even without skip support, because we\ncan always fall back on +infinitesimal style sentinel values (AKA\nSK_BT_NEXTPRIOR values). That's the definitional sleight-of-hand that\nallows _bt_advance_array_keys to not have to think about skip arrays\nas a special case, regardless of whether or not they happen to have\nskip support.\n\n> > It might be possible to add skip support for text, but there wouldn't\n> > be much point.\n> >\n>\n> Stupid question - so why does it make sense for types like int? There\n> can also be a lot of values between the current and the next value, so\n> why would that be very different from \"incrementing\" a text value?\n\nNot a stupid question at all. You're right; it'd be the same.\n\nObviously, there are at least some workloads (probably most) where any\nint columns will contain values that are more or less fully\ncontiguous. I also expect there to be some workloads where int columns\nappear in B-Tree indexes that contain values with large gaps between\nneighboring values (e.g., because the integers are hash values). We'll\nalways use skip support for any omitted prefix int column (same with\nany opclass that offers skip support), but we can only expect to see a\nbenefit in the former \"dense\" cases -- never in the latter \"sparse\"\ncases.\n\nThe MDAM paper talks about an adaptive strategy for dense columns and\nsparse columns. I don't see any point in that, and assume that it's\ndown to some kind of implementation deficiencies in NonStop SQL back\nin the 1990s. I can just always use skip support in the hope that\ninteger column data will turn out to be \"sparse\" because there's no\ndownside to being optimistic about it. The access patterns are exactly\nthe same as they'd be with skip support disabled.\n\nMy \"academic point\" about not having *explicit* next-key probes might\nmake more sense now. This is the thing that makes it okay to always be\noptimistic about types with skip support containing \"dense\" data.\n\nFWIW I actually have skip support for the UUID opclass. I implemented\nit to have test coverage for pass-by-reference types in certain code\npaths, but it's otherwise I don't expect it to be useful -- in\npractice all UUID columns contain \"sparse\" data. There's still no real\ndownside to it, though. (I wouldn't try to do it with text because\nit'd be much harder to implement skip support correctly, especially\nwith collated text.)\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 20 Sep 2024 10:59:41 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Fri, Sep 20, 2024 at 10:59 AM Peter Geoghegan <[email protected]> wrote:\n> On Fri, Sep 20, 2024 at 10:07 AM Tomas Vondra <[email protected]> wrote:\n> > Yes, I think backpatching to 17 would be fine. I'd be worried about\n> > maybe disrupting some monitoring in production systems, but for 17 that\n> > shouldn't be a problem yet. So fine with me.\n>\n> I'll commit minimal changes to _bt_first that at least make the\n> counters consistent, then. 
I'll do so soon.\n\nPushed, thanks\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 20 Sep 2024 14:06:41 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Wed, Sep 18, 2024 at 7:36 AM Tomas Vondra <[email protected]> wrote:\n> >> 3) v6-0003-Refactor-handling-of-nbtree-array-redundancies.patch\n> >>\n> >> - nothing\n> >\n> > Great. I think that I should be able to commit this one soon, since\n> > it's independently useful work.\n> >\n>\n> +1\n\nI pushed this just now. There was one small change: I decided that it\nmade more sense to repalloc() in the case where skip scan must enlarge\nthe so.keyData[] space, rather than doing the so.keyData[] allocation\nlazily in all cases. I was concerned that my original approach might\nregress nested loop joins with very fast inner index scans.\n\nAttached is v8. I still haven't worked through any of your feedback,\nTomas. Again, this revision is just to keep CFBot happy by fixing the\nbit rot on the master branch created by my recent commits.\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 21 Sep 2024 13:44:53 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" }, { "msg_contents": "On Sat, Sep 21, 2024 at 1:44 PM Peter Geoghegan <[email protected]> wrote:\n> Attached is v8. I still haven't worked through any of your feedback,\n> Tomas. Again, this revision is just to keep CFBot happy by fixing the\n> bit rot on the master branch created by my recent commits.\n\nAttached is v9.\n\nI think that v9-0002-Normalize-nbtree-truncated-high-key-array-behavio.patch\nis close to committable. It's basically independent work, which would\nbe nice to get out of the way soon.\n\nHighlights for v9:\n\n* Fixed a bug affecting scans that use scrollable cursors: v9 splits\nthe previous SK_BT_NEXTPRIOR sentinel scan key flag into separate\nSK_BT_NEXT and SK_BT_PRIOR flags, so it's no longer possible to\nconfuse \"foo\"+infinitesimal with \"foo\"-infinitesimal when the scan's\ndirection changes at exactly the wrong time.\n\n* Worked through all of Tomas' feedback.\n\nIn more detail:\n\n- v9-0001-Show-index-search-count-in-EXPLAIN-ANALYZE.patch has been\ntaught to divide the total number of index searches by nloop as\nrequired (e.g., for nested loop joins), per Tomas. This doesn't make\nmuch difference, either way, so if that's what people want I'm happy\nto oblige.\n\n- Separately, the same EXPLAIN ANALYZE patch now shows \"Index\nSearches: 0\" in cases where the scan node is never executed. (This was\nalready possible in cases where the scan node was executed, only for\n_bt_preprocess_keys to determine that the scan's qual was\ncontradictory.)\n\n- Various small updates to comments/symbol names, again based on\nfeedback from Tomas.\n\n- Various small updates to the user sgml docs, again based on feedback\nfrom Tomas.\n\n- I've polished the commit messages for all 3 patches, particularly\nthe big one (v9-0003-Add-skip-scan-to-nbtree.patch).\n\n- I haven't done anything about fixing any of the known regressions in\nv9. I'm not aware that Tomas expects me to fix any regressions\nhighlighted by his recent testing (only regressions that I've been\naware of before Tomas became involved). 
Tomas should correct me if I\nhave that wrong, though.\n\nObviously, the #1 open item right now remains fixing the known\nregressions in cases where skip scan should be attempted, but cannot\npossibly help.\n\nThanks\n-- \nPeter Geoghegan", "msg_date": "Wed, 25 Sep 2024 15:08:21 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding skip scan (including MDAM style range skip scan) to nbtree" } ]
[ { "msg_contents": "Hi,\n\nI have just tested PG17 beta1 with the E-Maj solution I maintain. The \nonly issue I found is the removal of the adminpack contrib.\n\nIn the emaj extension, which is the heart of the solution, and which is \nwritten in plpgsql, the code just uses the pg_file_unlink() function to \nautomatically remove files produced by COPY TO statements when no rows \nhave been written. In some specific use cases, it avoids the user to get \na few interesting files among numerous empty files in a directory. I \nhave found a workaround. That's a little bit ugly, but it works. So this \nis not blocking for me.\n\nFYI, the project's repo is on github (https://github.com/dalibo/emaj), \nwhich was supposed to be scanned to detect potential adminpack usages.\n\nFinally, I wouldn't be surprise if some other user projects or \napplications use adminpack as this is a simple way to get sql functions \nthat write, rename or remove files.\n\nRegards.\n\n\n\n", "msg_date": "Thu, 27 Jun 2024 07:34:32 +0200", "msg_from": "Philippe BEAUDOIN <[email protected]>", "msg_from_op": true, "msg_subject": "Adminpack removal" }, { "msg_contents": "I agree that removing adminpack was a bit of a surprise for me as\nwell. At first I assumed that it was just moved into the core to\naccompany the file and directory *reading* functions, until I found\nthe release notes mentioning that now one of the users of adminpack\ndoes not need it and so it is dropped.\n\nThe easy and currently supported-in-core way to do file manipulation\nis using pl/pythonu or pl/perlu but I agree that it is an overkill if\nall you need is a little bit of file manipulation.\n\nBest Regards\nHannu\n\nOn Thu, Jun 27, 2024 at 7:34 AM Philippe BEAUDOIN <[email protected]> wrote:\n>\n> Hi,\n>\n> I have just tested PG17 beta1 with the E-Maj solution I maintain. The\n> only issue I found is the removal of the adminpack contrib.\n>\n> In the emaj extension, which is the heart of the solution, and which is\n> written in plpgsql, the code just uses the pg_file_unlink() function to\n> automatically remove files produced by COPY TO statements when no rows\n> have been written. In some specific use cases, it avoids the user to get\n> a few interesting files among numerous empty files in a directory. I\n> have found a workaround. That's a little bit ugly, but it works. So this\n> is not blocking for me.\n>\n> FYI, the project's repo is on github (https://github.com/dalibo/emaj),\n> which was supposed to be scanned to detect potential adminpack usages.\n>\n> Finally, I wouldn't be surprise if some other user projects or\n> applications use adminpack as this is a simple way to get sql functions\n> that write, rename or remove files.\n>\n> Regards.\n>\n>\n>\n\n\n", "msg_date": "Thu, 27 Jun 2024 08:12:20 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adminpack removal" }, { "msg_contents": "On Thu, 27 Jun 2024, 07:34 Philippe BEAUDOIN, <[email protected]> wrote:\n>\n> Hi,\n>\n> I have just tested PG17 beta1 with the E-Maj solution I maintain. The\n> only issue I found is the removal of the adminpack contrib.\n>\n> In the emaj extension, which is the heart of the solution, and which is\n> written in plpgsql, the code just uses the pg_file_unlink() function to\n> automatically remove files produced by COPY TO statements when no rows\n> have been written. In some specific use cases, it avoids the user to get\n> a few interesting files among numerous empty files in a directory. 
I\n> have found a workaround. That's a little bit ugly, but it works. So this\n> is not blocking for me.\n>\n> FYI, the project's repo is on github (https://github.com/dalibo/emaj),\n> which was supposed to be scanned to detect potential adminpack usages.\n\nThe extension at first glance doesn't currently seem to depend on\nadminpack: it is not included in the control file as dependency, and\nhas not been included as a dependency since the creation of that file.\n\nWhere else would you expect us to search for dependencies?\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 27 Jun 2024 10:38:49 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adminpack removal" }, { "msg_contents": "Le 27/06/2024 à 10:38, Matthias van de Meent a écrit :\n> On Thu, 27 Jun 2024, 07:34 Philippe BEAUDOIN, <[email protected]> wrote:\n>> Hi,\n>>\n>> I have just tested PG17 beta1 with the E-Maj solution I maintain. The\n>> only issue I found is the removal of the adminpack contrib.\n>>\n>> In the emaj extension, which is the heart of the solution, and which is\n>> written in plpgsql, the code just uses the pg_file_unlink() function to\n>> automatically remove files produced by COPY TO statements when no rows\n>> have been written. In some specific use cases, it avoids the user to get\n>> a few interesting files among numerous empty files in a directory. I\n>> have found a workaround. That's a little bit ugly, but it works. So this\n>> is not blocking for me.\n>>\n>> FYI, the project's repo is on github (https://github.com/dalibo/emaj),\n>> which was supposed to be scanned to detect potential adminpack usages.\n> The extension at first glance doesn't currently seem to depend on\n> adminpack: it is not included in the control file as dependency, and\n> has not been included as a dependency since the creation of that file.\n\nYou are right. Even before the adminpack usage removal, the extension \nwas not listed as prerequisite in the control file. In fact, I \nintroduced a new E-Maj feature in the version of last automn, that used \nthe adminpack extension in one specific case. But the user may not \ninstall adminpack. In such a case, the feature was limited and a warning \nmessage told the user why it reached the limitation. I was waiting for \nsome feedbacks before possibly adding adminpack as a real prerequisite.\n\n>\n> Where else would you expect us to search for dependencies?\n\nThe word \"adminpack\" can be found in the sql source file \n(sql/emaj--4.4.0.sql), and in 2 documentation source files (in \ndocs/en/*.rst).\n\nThe pg_file_unlink() function name can be found in the same sql source file.\n\nBut, I understand that looking for simple strings in all types of files \nin a lot of repo is costly and may report a lot of noise.\n\n\nMore broadly, my feeling is that just looking at public repositories is \nnot enough. The Postgres features usage can be found in:\n\n- public tools, visible in repo (in github, gitlab and some other \nplatforms) ;\n\n- softwares from commercial vendors, so in close source ;\n\n- and a huge number of applications developed in all organizations, and \nthat are not public.\n\nSo just looking in public repo covers probably less than 1% of the code. 
\nHowever, this may give a first idea, especially if a feature use is \nalready detected.\n\nIn this \"adminpack\" case, it may be interesting to distinguish the \npg_logdir_ls() function, which covers a very specific administration \nfeature, from the other functions, which are of general interest. It \nwouldn't be surprising if pg_logdir_ls() were really obsolete now that \nit is not used by pgAdmin anymore, and it could thus be removed if nobody \ncomplains about that. Maybe the other functions could be directly \nintegrated into the core (or left in adminpack, with the pgAdmin \nreference removed from the documentation).\n\nKind Regards.\n\nPhilippe.\n\n\n\n\n", "msg_date": "Fri, 28 Jun 2024 09:06:40 +0200", "msg_from": "Philippe BEAUDOIN <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adminpack removal" }, { "msg_contents": "> On 28 Jun 2024, at 09:06, Philippe BEAUDOIN <[email protected]> wrote:\n\n> So just looking in public repos covers probably less than 1% of the code. However, this may give a first idea, especially if a feature use is already detected.\n\nSearching for anything on Github is essentially a dead end since it reports so\nmany duplicates in forks etc. That being said, I did a lot of searching and\nbrowsing to find users [0], but came up empty (apart from forks which already\nmaintain their own copy). A more targeted search is the Debian Code search\nwhich at the time of removal (and well before then) showed zero occurrences of\nadminpack functions in any packaged software, and no extensions which had\nadminpack as a dependency. While not an exhaustive search by any means, it\ndoes provide a good hint.\n\nSince you list no other extensions using adminpack to support keeping it, I\nassume you also didn't find any when searching?\n\n--\nDaniel Gustafsson\n\n[0] https://www.postgresql.org/message-id/B07CC211-DE35-4AC5-BD4E-0C6466700B06%40yesql.se\n\n", "msg_date": "Mon, 1 Jul 2024 10:07:05 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adminpack removal" }, { "msg_contents": "On 01/07/2024 at 10:07, Daniel Gustafsson wrote:\n>> On 28 Jun 2024, at 09:06, Philippe BEAUDOIN <[email protected]> wrote:\n>> So just looking in public repos covers probably less than 1% of the code. However, this may give a first idea, especially if a feature use is already detected.\n> Searching for anything on Github is essentially a dead end since it reports so\n> many duplicates in forks etc. That being said, I did a lot of searching and\n> browsing to find users [0], but came up empty (apart from forks which already\n> maintain their own copy). A more targeted search is the Debian Code search\n> which at the time of removal (and well before then) showed zero occurrences of\n> adminpack functions in any packaged software, and no extensions which had\n> adminpack as a dependency. While not an exhaustive search by any means, it\n> does provide a good hint.\n>\n> Since you list no other extensions using adminpack to support keeping it, I\n> assume you also didn't find any when searching?\nI just said that there is much, much more code in private repos (so not \nanalyzable) than in public ones.\n>\n> --\n> Daniel Gustafsson\n>\n> [0] https://www.postgresql.org/message-id/B07CC211-DE35-4AC5-BD4E-0C6466700B06%40yesql.se\n\n\n\n\n", "msg_date": "Mon, 8 Jul 2024 11:13:01 +0200", "msg_from": "Philippe BEAUDOIN <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adminpack removal" } ]
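As a concrete illustration of the pl/perlu route Hannu mentions above, a minimal sketch of a pg_file_unlink() replacement might look like the following. This is only an assumption-laden example, not part of adminpack or E-Maj: the function name is invented, it requires the plperlu extension (creating plperlu functions needs superuser rights), and execution should be restricted, since such a function can remove any file the server process can reach.

    -- Hypothetical stand-in for adminpack's pg_file_unlink(), written in
    -- the untrusted PL/Perl language (plperlu). Name and usage are made up
    -- for illustration only.
    CREATE EXTENSION IF NOT EXISTS plperlu;

    CREATE FUNCTION my_file_unlink(path text) RETURNS boolean AS $$
        my ($path) = @_;
        # Perl's unlink() returns the number of files removed (0 or 1 here)
        return unlink($path) ? 1 : 0;
    $$ LANGUAGE plperlu;

    -- Keep it away from ordinary roles.
    REVOKE ALL ON FUNCTION my_file_unlink(text) FROM PUBLIC;

    -- Example use: drop an empty file produced by a COPY TO statement.
    -- SELECT my_file_unlink('/path/to/export/empty_file.copy');
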
[ { "msg_contents": "I have a question regarding postgresql waiting for the client. I queried\nthe pg_stat_activity because I noticed a connection that had not been\nreleased for days!!! I saw that the wait_event was ClientRead and the query\nwas ROLLBACK. What the server is waiting for from the client? It is a\nsimple ROLLBACK. I'd expect postgresql to abort the transaction, that's it!\nThis was the client thread stack trace:\n\nat sun.nio.ch.Net.poll(Native Method)\nat sun.nio.ch.NioSocketImpl.park()\nat sun.nio.ch.NioSocketImpl.park()\nat sun.nio.ch.NioSocketImpl.implRead()\nat sun.nio.ch.NioSocketImpl.read()\nat sun.nio.ch.NioSocketImpl$1.read()\nat java.net.Socket$SocketInputStream.read()\nat sun.security.ssl.SSLSocketInputRecord.read()\nat sun.security.ssl.SSLSocketInputRecord.readFully()\nat sun.security.ssl.SSLSocketInputRecord.decodeInputRecord()\nat sun.security.ssl.SSLSocketInputRecord.decode()\nat sun.security.ssl.SSLTransport.decode()\nat sun.security.ssl.SSLSocketImpl.decode()\nat sun.security.ssl.SSLSocketImpl.readApplicationRecord()\nat sun.security.ssl.SSLSocketImpl$AppInputStream.read()\nat\norg.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:161)\nat\norg.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:128)\nat\norg.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:113)\nat\norg.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)\nat org.postgresql.core.PGStream.receiveChar(PGStream.java:453)\nat\norg.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2120)\nat\norg.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356)\nat\norg.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:316)\nat\norg.postgresql.jdbc.PgConnection.executeTransactionCommand(PgConnection.java:879)\nat org.postgresql.jdbc.PgConnection.rollback(PgConnection.java:922)\nat com.zaxxer.hikari.pool.ProxyConnection.rollback(ProxyConnection.java:396)\nat\ncom.zaxxer.hikari.pool.HikariProxyConnection.rollback(HikariProxyConnection.java)\nat\norg.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor.rollback(AbstractLogicalConnectionImplementor.java:121)\nat\norg.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.rollback(JdbcResourceLocalTransactionCoordinatorImpl.java:304)\nat\norg.hibernate.engine.transaction.internal.TransactionImpl.rollback(TransactionImpl.java:142)\nat\norg.springframework.orm.jpa.JpaTransactionManager.doRollback(JpaTransactionManager.java:589)\n...\n\nSo I ended up in a situation where both client and server were reading from\nthe socket :(. I'm not sure why. Something went wrong between client and\nserver, network problems? The connection was held for 5 days until it was\nmanually terminated. But why the server was waiting in the first place?\n\n-- \nSimone Giusso\n\nI have a question regarding postgresql waiting for the client. I queried the pg_stat_activity because I noticed a connection that had not been released for days!!! I saw that the wait_event was ClientRead and the query was ROLLBACK. What the server is waiting for from the client? It is a simple ROLLBACK. 
I'd expect postgresql to abort the transaction, that's it!This was the client thread stack trace:at sun.nio.ch.Net.poll(Native Method)at sun.nio.ch.NioSocketImpl.park()at sun.nio.ch.NioSocketImpl.park()at sun.nio.ch.NioSocketImpl.implRead()at sun.nio.ch.NioSocketImpl.read()at sun.nio.ch.NioSocketImpl$1.read()at java.net.Socket$SocketInputStream.read()at sun.security.ssl.SSLSocketInputRecord.read()at sun.security.ssl.SSLSocketInputRecord.readFully()at sun.security.ssl.SSLSocketInputRecord.decodeInputRecord()at sun.security.ssl.SSLSocketInputRecord.decode()at sun.security.ssl.SSLTransport.decode()at sun.security.ssl.SSLSocketImpl.decode()at sun.security.ssl.SSLSocketImpl.readApplicationRecord()at sun.security.ssl.SSLSocketImpl$AppInputStream.read()at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:161)at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:128)at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:113)at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)at org.postgresql.core.PGStream.receiveChar(PGStream.java:453)at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2120)at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356)at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:316)at org.postgresql.jdbc.PgConnection.executeTransactionCommand(PgConnection.java:879)at org.postgresql.jdbc.PgConnection.rollback(PgConnection.java:922)at com.zaxxer.hikari.pool.ProxyConnection.rollback(ProxyConnection.java:396)at com.zaxxer.hikari.pool.HikariProxyConnection.rollback(HikariProxyConnection.java)at org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor.rollback(AbstractLogicalConnectionImplementor.java:121)at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.rollback(JdbcResourceLocalTransactionCoordinatorImpl.java:304)at org.hibernate.engine.transaction.internal.TransactionImpl.rollback(TransactionImpl.java:142)at org.springframework.orm.jpa.JpaTransactionManager.doRollback(JpaTransactionManager.java:589)...So I ended up in a situation where both client and server were reading from the socket :(. I'm not sure why. Something went wrong between client and server, network problems? The connection was held for 5 days until it was manually terminated. But why the server was waiting in the first place?-- Simone Giusso", "msg_date": "Thu, 27 Jun 2024 09:11:00 +0200", "msg_from": "Simone Giusso <[email protected]>", "msg_from_op": true, "msg_subject": "ClientRead on ROLLABACK" }, { "msg_contents": "Simone Giusso <[email protected]> writes:\n> I have a question regarding postgresql waiting for the client. I queried\n> the pg_stat_activity because I noticed a connection that had not been\n> released for days!!! I saw that the wait_event was ClientRead and the query\n> was ROLLBACK. What the server is waiting for from the client?\n\nYou are misunderstanding that display. If the wait state is ClientRead\nthen the server has nothing to do and is awaiting a fresh SQL command\nfrom the client. The query that's shown is the last-executed query.\n(We used to show \"<IDLE>\" in the query column in this state, but that\nwas deemed less helpful than the current behavior.)\n\n> So I ended up in a situation where both client and server were reading from\n> the socket :(. 
I'm not sure why. Something went wrong between client and\n> server, network problems?\n\nYeah, a dropped packet could explain this perhaps.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2024 11:21:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ClientRead on ROLLABACK" }, { "msg_contents": "On Thu, 27 Jun 2024 at 17:21, Tom Lane <[email protected]> wrote:\n> (We used to show \"<IDLE>\" in the query column in this state, but that\n> was deemed less helpful than the current behavior.)\n\nI think this is a super common confusion among users. Maybe we should\nconsider making it clearer that no query is currently being executed.\nSomething like\n\nIDLE: last query: SELECT * FROM mytable;\n\n\n", "msg_date": "Thu, 27 Jun 2024 17:26:02 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ClientRead on ROLLABACK" }, { "msg_contents": "Oh, I see. So the ROLLBACK command was executed! So I suppose the client was waiting just for the ACK and the connection was left open.\n\n> I think this is a super common confusion among users. Maybe we should\n> consider making it clearer that no query is currently being executed.\n> Something like\n> \n> IDLE: last query: SELECT * FROM mytable;\n\nI think the clearest option would be to leave the query column empty and add a new column last_query. But this suggestion may still do its job in clarifying that the query is not running. \n\n", "msg_date": "Thu, 27 Jun 2024 17:34:50 +0200", "msg_from": "\"Simone G.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ClientRead on ROLLABACK" } ]
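Following Tom's explanation, such a session shows up in pg_stat_activity with state 'idle', wait_event ClientRead, and the last-executed statement in the query column. A hedged sketch of a monitoring query to spot sessions stuck like the one described in this thread follows; the one-hour threshold and the decision to terminate a backend are illustrative choices, not recommendations from the thread. On versions that offer them, settings such as idle_session_timeout or the tcp_keepalives_* parameters can also let the server notice a dead client on its own.

    -- Sessions the server considers idle: 'query' is only the last
    -- statement executed, not something that is still running.
    SELECT pid,
           usename,
           state,
           wait_event_type,
           wait_event,
           now() - state_change AS idle_for,
           query AS last_query
    FROM pg_stat_activity
    WHERE state IN ('idle', 'idle in transaction')
      AND now() - state_change > interval '1 hour'
    ORDER BY idle_for DESC;

    -- A session stuck for days, as above, can be closed manually:
    -- SELECT pg_terminate_backend(12345);  -- pid taken from the query above
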